FAQs

Interpreting Data & Model Insights

Q: How should I interpret data during high-volatility periods like Black Friday?
A: During high-impact events (e.g., Black Friday), performance can be unpredictable. Our model factors in seasonality and market shifts, but sudden spikes in spend may cause short-term discrepancies. Focus on overall trends rather than daily fluctuations.

Q: What are the tradeoffs between high and low confidence in model results?
A:

  • High Confidence: Typically backed by robust historical data, offering more stable insights.
  • Low Confidence: May reflect limited data or recent shifts in spend. While less stable, these periods still provide directional insights.

Q: Why is there a big difference between Prescient’s MMM ROAS for a new channel and that channel’s own ROAS? Will it adjust over time?
A:

  • Halo Effects: Our model captures indirect or cross-channel influences that a single platform’s dashboard might not.
  • Short Data Windows: In the early days of spend, data can fluctuate widely. Once a channel has ~7+ days of spend and conversions, the model typically stabilizes and aligns more closely with “actual” performance.

Marketing Measurement: MMM vs. MTA

Q: What is the difference between MMM and MTA?
A:

  • Multi-Touch Attribution (MTA): Tracks user touchpoints, often giving more credit to lower-funnel actions (e.g., branded search).
  • Marketing Mix Modeling (MMM): Analyzes how changes in marketing spend impact overall revenue, providing a broader view of all channels.

Q: Why does MMM differ from an MTA like Northbeam on brand search?
A: MMM focuses on correlations between spend and revenue, without overemphasizing specific touchpoints. MTA tools often credit brand search heavily because it’s the last action before conversion. This leads to higher reported CAC or ROAS for brand campaigns in MTA compared to MMM.

Q: How does MMM compare to incrementality testing?
A:

  • MMM: Provides ongoing, scalable, and cost-effective insights across all channels.
  • Incrementality Testing: Uses controlled experiments to isolate the effect of specific campaigns. Though more precise on a per-test basis, it can be more time-consuming and costly to run continuously.

Q: If someone first sees a YouTube ad, then returns via a Facebook link to purchase, how does the model handle attribution?
A: We don’t use first-touch or last-click rules. Instead, the model looks at overall patterns and can allocate partial credit to both channels (e.g., YouTube for awareness, Facebook for conversion). This is reflected in aggregated “halo” effects rather than forced single-channel credit.


Budgeting & Spend Considerations

Q: How does flat spend impact MMM accuracy?
A: When spend is flat, it’s harder for the model to detect spend-to-revenue relationships. Some variability in spend is ideal for pinpointing incremental impact.

Q: What happens if there’s no spend for a period and it’s reintroduced later?
A: The model can adjust for gaps. However, sudden fluctuations may cause short-term inaccuracies until the model recalibrates. Maintaining consistent spend minimizes these short-term inaccuracies.

Q: Does higher spend always lead to diminishing returns?
A: Not always. While increasing spend can lead to saturation, some campaigns become more efficient at scale. Finding the right budget allocation is key to optimizing ROAS.
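The saturation behavior described above is commonly modeled with an S-shaped or Hill-type response curve. The sketch below is illustrative only: the function name, parameters, and values are hypothetical and not Prescient's actual model, but they show why marginal ROAS shrinks as spend approaches saturation.

```python
def hill_response(spend, half_sat, slope=1.0, max_revenue=1.0):
    """Illustrative Hill-type saturation curve for revenue response to spend.

    half_sat is the (hypothetical) spend level at which the response reaches
    half of max_revenue. All parameter choices here are for demonstration.
    """
    if spend <= 0:
        return 0.0
    return max_revenue * spend**slope / (half_sat**slope + spend**slope)

# Each additional $5,000 buys less incremental response than the last:
first_5k = hill_response(5_000, half_sat=5_000) - hill_response(0, half_sat=5_000)
second_5k = hill_response(10_000, half_sat=5_000) - hill_response(5_000, half_sat=5_000)
```

In this toy curve the first $5,000 captures half of the maximum response, while the next $5,000 adds only about a third as much, which is the diminishing-returns pattern the answer describes.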


Data Accuracy & Anomaly Detection

Q: How does the model handle outliers or influencer-driven sales?
A: Prescient detects and adjusts for anomalies statistically. If an influencer triggers a major spike, the model accounts for it so that one-off events don’t distort long-term insights.

Q: Can we detect anomalies like promotions or industry-wide shifts?
A: Yes. Deviations from expected patterns are flagged, and the model incorporates them (e.g., big promotional events, market changes) to maintain accurate insights.

Q: How do we adjust the model for outliers without distorting overall performance?
A: We use smoothing techniques that scale outliers relative to historical expectations. This prevents short-lived extremes from skewing overall results.
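One common smoothing technique of this kind is winsorization: clipping extreme values to historical percentile bounds rather than discarding them. The snippet below is a simplified stand-in for the scaling described above; the percentile thresholds are illustrative, not production settings.

```python
def winsorize(values, lower_pct=10, upper_pct=90):
    """Clip values to the given percentiles of the series itself.

    A simplified illustration of outlier scaling; the bounds and
    nearest-rank percentile method here are assumptions, not Prescient's
    actual implementation.
    """
    s = sorted(values)
    def pct(p):
        # Nearest-rank percentile on the sorted series.
        idx = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
        return s[idx]
    lo, hi = pct(lower_pct), pct(upper_pct)
    return [min(max(v, lo), hi) for v in values]

daily_sales = [100, 110, 95, 105, 5000, 98]  # influencer spike on day 5
smoothed = winsorize(daily_sales)
```

The spike is pulled down toward the historical range, so a one-off event no longer dominates averages or trend estimates.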


Understanding Model Results & Updates

Q: Why do modeled results change in historical periods?
A: Prescient uses a rolling 30-day calibration window. As new data is added, attribution for past periods may be updated to reflect newly observed patterns, spend shifts, or delayed conversions.

Q: How does Prescient handle model resets?
A: Resets occur when major changes (e.g., new integrations or spend swings) require a deeper recalibration. While this can slightly alter historical results, it ultimately enhances accuracy.

Q: Does spend accumulate across days?
A: Yes. We consider cumulative spend over time, factoring in “ad stock” effects where conversions may be delayed.

Q: How quickly does Prescient update data, and what happens when a new vendor is added?
A:

  • Regular Updates: Prescient ingests new performance metrics and refreshes insights within ~36 hours.
  • New Vendor: When you add a new vendor or channel, we perform a 24–48 hour historical data backfill. You’ll see the channel in the UI once the backfill completes, and slight shifts in existing channels can occur once the new data is fully integrated.

Media & Campaign Performance

Q: How does Prescient define top-of-funnel (TOF) vs. bottom-of-funnel (BOF) campaigns?
A:

  • TOF: Awareness-focused (e.g., display ads, video, influencers).
  • BOF: Conversion-focused (e.g., branded search, retargeting, direct-response ads).

Q: Can we measure the long-term impact of awareness campaigns?
A: Yes. MMM tracks delayed (halo) effects, showing how TOF efforts ultimately influence revenue over time.

Q: Do certain channels receive weighting in MMM?
A: No. All channels are evaluated based on historical impact on revenue. The model “weights” them only to the extent that historical data shows measurable results.

Q: How does the model measure ROAS for awareness channels?
A: Because awareness campaigns often have delayed or indirect effects, ROAS can rise over time as customers progress down the funnel. These halo effects are captured in MMM.

Q: How do we determine if a campaign is eligible for modeling?
A: A campaign needs at least 7 days of nonzero spend in the last 365 days and must average at least $50 per nonzero spend day in that timeframe.
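The eligibility rule above can be codified directly. This is a literal reading of the stated thresholds; the function name and input format (a list of daily spend values) are illustrative.

```python
def is_model_eligible(daily_spend):
    """Check campaign eligibility per the stated rule:
    at least 7 nonzero-spend days in the last 365 days, averaging
    at least $50 per nonzero-spend day over that window.

    daily_spend: list of daily spend values, most recent last.
    """
    nonzero = [s for s in daily_spend[-365:] if s > 0]
    if len(nonzero) < 7:
        return False
    return sum(nonzero) / len(nonzero) >= 50
```

For example, seven days at $60/day qualifies, while thirty days at $10/day does not, because the per-day average falls below the $50 threshold.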

Q: What is the Carry Over (ad stock) Effect, and how does it work here?
A: Ad spend on a given day continues to influence subsequent days, decaying over time. If you spend $1,000 today, you’ll see the largest impact immediately, but some benefit lingers in the days/weeks after.

Q: How does your forecasting model account for recent changes without over-relying on historical data?
A: We use an ad stock approach that gives more weight to recent performance while still referencing historical patterns for seasonality. If older, higher ROAS data seems to be influencing forecasts too heavily, check newer metrics (e.g., Saturation Curves in the Performance Tab). Upcoming Omen v2 will include advanced time-weighting features to emphasize recent data further.

Q: We refresh creative assets frequently. Does that confuse the model or reduce insight quality?
A: No. The model tracks daily spend and conversions. If performance changes with a new creative, that trend is reflected in the data.

Q: If we only change the creative (same campaign name, same audience), does the model treat it as a new campaign?
A: No. All spend and conversions stay tied to the existing campaign, preserving historical continuity.

Q: Does the model need every single marketing channel—like influencers, pop-ups, or direct mail—to be accurate?
A: Not necessarily. Channels under 10% of total spend often have minimal impact on overall results. However, if a smaller channel is growing quickly or is strategically significant, add it so the model can measure its contribution.

Q: Will missing smaller channels (like niche influencers or local pop-ups) hurt the model’s accuracy?
A: Generally not. Their share of total spend or conversions is usually small. But if a “small” channel is critical to your strategy, consider integrating it.

Q: Should I wait until every channel is perfectly tracked before starting MMM?
A: No. It’s best to begin with your primary channels and add smaller ones later. The model updates continuously, so you’ll gain insights even with partial channel coverage.

Q: Do you treat every impression the same, or does the model account for differences (e.g., linear TV vs. YouTube vs. Instagram)?
A: The model inherently distinguishes channels by correlating each channel’s spend/impressions to revenue or new customer growth. Traditional TV is “exposure-only,” while online platforms may include clicks—both are captured distinctly.

Q: How does the model handle delayed conversions for channels like podcasts or TV ads?
A: The carryover/ad stock approach infers lag between spend and revenue. Even without a precise timestamp for each impression, the model correlates spend on a given day to uplift over the following days or weeks.

Q: How do you handle channels without a major ad platform (e.g., direct mail, podcasts, tradeshows)?
A: We accept manually provided data (daily or weekly) in a consistent format (date, spend, impressions, etc.). API connections are ideal, but structured manual uploads are sufficient.
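A manual upload of this kind can be validated before sending. The column names below follow the example in the answer (date, spend, impressions), but confirm the exact required schema with your Customer Success Manager; the parsing logic is a sketch, not Prescient's ingestion code.

```python
import csv
import io

REQUIRED = ("date", "spend", "impressions")

def parse_manual_upload(text):
    """Parse and sanity-check a manually provided channel file.

    Column names are taken from the FAQ's example format and may not
    match the exact schema your account requires.
    """
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        missing = [c for c in REQUIRED if not row.get(c)]
        if missing:
            raise ValueError(f"row missing columns: {missing}")
        row["spend"] = float(row["spend"])
        row["impressions"] = int(row["impressions"])
    return rows

sample = (
    "date,spend,impressions\n"
    "2024-03-01,250.00,12000\n"
    "2024-03-02,300.00,15000\n"
)
rows = parse_manual_upload(sample)
```

Checking rows locally like this catches missing columns or malformed numbers before the file ever reaches the ingestion pipeline.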

Q: Is there a minimum spend or data requirement for the model to be accurate?
A: More spend and a longer history improve overall confidence. However, the Bayesian framework can handle smaller channels by referencing prior knowledge of similar patterns. Accuracy increases as data accumulates.


Platform-Specific Questions

Q: How does Prescient handle programmatic ad spend?
A: We treat it like any other channel. If programmatic accounts for more than 70% of your total spend, additional data integrations may be necessary for optimal accuracy.

Q: How does Prescient detect trends and ensure accurate predictions?
A: The model applies trend adjustments to isolate seasonal and long-term patterns. It also factors in macroeconomic shifts and specific campaign effects to produce accurate forecasts.


Data Integration Questions

Q: I am migrating my eCommerce platform. What should I do to minimize modeling downtime?
A: Alert your Customer Success Manager to this change and they will guide you through the process. We will disable modeling as you connect the new eCommerce source and delete the old source. Once the new data is ingested, we will validate it with you and make sure all old data is purged before restarting your modeling.