Azure Blob Storage - Retail Revenue
Prerequisites
- An active Microsoft Azure account
- Permission to create storage accounts and containers in your Azure subscription
Step 2: Create a Blob Container
- In your storage account, navigate to Data storage > Containers in the left sidebar.
- Click + Container.
- Enter a Name for the container (e.g., prescient-data).
- Set Public access level to Private (no anonymous access).
- Click Create.
Take note of the container name -- you will provide this to Prescient.
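If you prefer to script this step instead of using the portal, a minimal sketch with the azure-storage-blob Python SDK might look like the following; the account URL, account key, and container name are placeholders.

```python
# Minimal sketch: create the container with the azure-storage-blob SDK.
# Substitute your own account URL, account key, and container name.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://youraccountname.blob.core.windows.net",
    credential="<your-account-key>",
)

# Containers are private (no anonymous access) by default, which matches
# the access level recommended above.
service.create_container("prescient-data")
```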
Step 3: Organize Your Files
Inside your container, create a folder structure to hold your CSV files. For example:
```
prescient-data/
└── my-data/
    ├── report_2024_01.csv
    ├── report_2024_02.csv
    └── report_2024_03.csv
```
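Note that blob storage folders are virtual: a file sits inside my-data/ simply because my-data/ is a prefix of the blob name. A minimal upload sketch with the azure-storage-blob Python SDK, using placeholder credentials and file names, might look like this:

```python
# Minimal sketch: upload a report CSV into the my-data/ folder.
# The account URL, account key, and file name are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://youraccountname.blob.core.windows.net",
    credential="<your-account-key>",
)
container = service.get_container_client("prescient-data")

with open("report_2024_01.csv", "rb") as data:
    # The "my-data/" prefix places the blob inside the my-data folder.
    container.upload_blob(name="my-data/report_2024_01.csv", data=data, overwrite=True)
```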
Step 4: Generate a SAS Token
A Shared Access Signature (SAS) token grants Prescient secure, scoped access to read files from your container without sharing your full account keys.
- In your storage account, navigate to Security + networking > Shared access signature in the left sidebar.
- Configure the following settings:
- Allowed services: Select Blob only.
- Allowed resource types: Select Container and Object.
- Allowed permissions: Select Read and List only.
- Start and expiry date/time: Set the start to today and the expiry to at least 1 year in the future. We recommend setting a reminder to rotate the token before it expires.
- Allowed protocols: Select HTTPS only.
- Click Generate SAS and connection string.
- Copy and save the following values:
- SAS token -- the token string (starts with sv=...).
- Blob service SAS URL -- the full URL (e.g., https://youraccountname.blob.core.windows.net/?sv=...). The account URL portion is everything before the ? (e.g., https://youraccountname.blob.core.windows.net).
Important: Store these values securely. The SAS token will only be shown once. If you lose it, you will need to generate a new one.
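If you would rather generate the token programmatically than through the portal, a minimal sketch using the azure-storage-blob Python SDK is below. It assumes you have the account key; the account name and key are placeholders, and it mirrors the Blob-only, Container + Object, Read + List settings above.

```python
# Minimal sketch: generate an account-level SAS token equivalent to the
# portal settings above (Blob service, Container + Object, Read + List).
# The account name and key are placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    ResourceTypes,
    generate_account_sas,
)

sas_token = generate_account_sas(
    account_name="youraccountname",
    account_key="<your-account-key>",
    resource_types=ResourceTypes(container=True, object=True),
    permission=AccountSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365),  # at least 1 year out
)

# Combine the token with the account URL to form the Blob service SAS URL.
print(sas_token)
print(f"https://youraccountname.blob.core.windows.net/?{sas_token}")
```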
The path to your folder (e.g., my-data/) is your base path. All CSV files placed in this folder will be picked up by Prescient.
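Before sharing the token, you can confirm that it grants read and list access to your base path. A minimal sketch, assuming the placeholder account URL, container name, base path, and token below are replaced with the values you noted:

```python
# Minimal sketch: verify the SAS token can list CSVs under the base path.
# The account URL, container name, base path, and token are placeholders.
from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://youraccountname.blob.core.windows.net",
    container_name="prescient-data",
    credential="sv=...",  # the SAS token string you saved
)

# These are the files Prescient will pick up from the base path.
for blob in container.list_blobs(name_starts_with="my-data/"):
    if blob.name.endswith(".csv"):
        print(blob.name, blob.last_modified)
```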
File Format and Schema Requirements
All files must be in CSV format (.csv extension) with a header row. Files that do not end in .csv will be ignored.
The required columns depend on the type of data you are sending. Required columns must be present in every file. Optional columns can be omitted -- they will default to NULL (or a noted default) if not included.
Expected Table Schema
Our integration expects CSV files that aggregate your key daily or weekly retail metrics. These files should follow the schema below:
| Column Name | Type | Required | Description |
|---|---|---|---|
| date | DATE | Yes | Reported date in YYYY/MM/DD format. Dates must be either daily or fall on a consistent day of the week when aggregating to weekly values. |
| revenue_amount | FLOAT | Yes | Total revenue that will be modeled against. |
| revenue_ccy | STRING | Yes | Currency the revenue is in. |
| date_aggregation | STRING | Yes | The aggregation level: D (daily) or W (weekly). A weekly aggregation is assumed to end on the date given in the date column. |
| total_units_sold | INTEGER | Yes | The total number of units sold in the time period. |
| num_of_stores | INTEGER | Yes | The total number of actively selling stores in the time period, i.e., the count of all stores with sales greater than 0 in that period. |
Note: We recommend aggregating the data at the daily level, with one row per date. If that is not possible, a weekly rollup to the start of the week can be supported.
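As an illustration, the sketch below uses only the Python standard library to write a daily-aggregated file matching this schema; the file name and all values are hypothetical.

```python
# Minimal sketch: write a sample CSV that matches the schema above.
# The file name and every value are hypothetical.
import csv

header = [
    "date", "revenue_amount", "revenue_ccy",
    "date_aggregation", "total_units_sold", "num_of_stores",
]
rows = [
    ["2024/01/01", 12500.50, "USD", "D", 320, 42],
    ["2024/01/02", 11890.00, "USD", "D", 301, 41],
]

with open("report_2024_01.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```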
How Ingestion Works
Understanding how Prescient processes your files helps you structure your uploads correctly.
Automatic New File Detection
Prescient automatically scans your blob container on a regular schedule. Each time it runs, it identifies only new files that have not been previously ingested. Files that have already been processed are skipped -- you never need to remove old files from your container.
Latest File Takes Precedence
When multiple files contain records with the same key (e.g., the same date and campaign), the records from the most recently modified file always take precedence. This means:
- To correct data: Simply upload a new CSV file containing the corrected records. The new values will automatically overwrite the previous ones.
- No need to delete or modify old files: Earlier files remain in the container, but their overlapping records are superseded by the newer file.
Example
Suppose you upload two advertising files:
- report_jan_v1.csv (uploaded Jan 15) -- contains spend data for Jan 1-14
- report_jan_v2.csv (uploaded Jan 20) -- contains corrected spend data for Jan 10-14
Prescient will use the data from report_jan_v1.csv for Jan 1-9 and the corrected data from report_jan_v2.csv for Jan 10-14, since it is the more recently modified file.
