Your guide to evaluating advertising effectiveness benchmarks

DISQO
Published on 11/15/22 11:33 AM
Updated on 2/20/25 12:49 PM

As outlined in our more extensive report on advertising effectiveness normative benchmarks and our deep dive on the optimal ways to leverage benchmarks, norms are essential to understanding advertising performance. Benchmarks inform researchers about baseline ad performance, and high-quality benchmarks improve confidence in assertions about campaign impact. But norms based on stale data, outdated platforms, or unvalidated assumptions can lead to inaccurate conclusions and missed opportunities to drive real consumer action. This article is your checklist for evaluating advertising effectiveness benchmarks.

Checklist for robust, high-quality ad effectiveness benchmark data

Do your benchmarks provide what you need to drive organizational value?

  • A full-funnel view across attitudinal and behavioral metrics
    I understand where ads are or aren’t working through the entire purchase path.
  • Using consistent methodology across all campaigns, sites, and channels
    My benchmarks are not unduly influenced by specific publishers or data partners.
  • Enabling cross-channel coverage to span otherwise gated domains
    I am actively minimizing blind spots in my measurement practices.
  • Referencing highly recent and relevant data
    My benchmarks are generalizable for current and future digital-first consumers.
  • Providing all the qualities needed for confident attribution
    I can accurately link ad exposure to consumer sentiment and actions across platforms without reliance on outdated tracking methods.

What does “high quality” in advertising effectiveness benchmarks look like?

The adage “garbage in, garbage out” is well established across reputable research, and it is even more apt when leveraging normative data. Because norms aggregate large, diverse datasets, they amplify any violations of data quality. Advertisers and their research partners should establish core quality assumptions around their benchmarks and address any issues before using them. If those data-quality assumptions are not met, pressing ahead with mediocre data for benchmarking is wishful thinking.

But how do you determine “quality”? Marketers should focus on three key factors:

  1. Breadth of attitudinal and behavioral metrics: is this data wide enough?
  2. Coverage across platforms and channels: can this data capture my audiences?
  3. Temporal recency of data: is this data relevant today?

Below, we unpack each of these factors and explain why each is critical to enabling a high-quality benchmark dataset and creating effective marketing campaigns.

 

Breadth of attitudinal and behavioral metrics

In most cases, lack of breadth constrains advertising benchmarks. They may be able to tell you—with some accuracy—what a “good” or “bad” campaign looks like on a singular metric (e.g., brand awareness, brand favorability, purchase intent, etc.). Still, they rarely tell you this across a multitude of other metrics. If you’re interested in how your campaigns impact multiple metrics (and you should be), you’re left relying on different benchmark sources for Metric A vs. Metric B.

This breadth problem is troublesome because successful advertising measurement should assess both attitudinal and behavioral metrics. Almost all legacy research vendors rely solely on top-of-funnel attitudinal metrics to evaluate impact. Meanwhile, bottom-of-funnel information on conversions and sales is only available from disparate vendors. For example, you can search for site conversion rate norms from various sources, but they all depend on outdated cookie tracking. Some ecommerce sites have norms, but only if you advertise on their sites. Unfortunately, these one-off normative metrics from a hodgepodge of sources offer only a partial view of the complete path-to-purchase funnel. Attitudinal lift is critical, but a lack of unified visibility into behavioral impacts (e.g., search, site visitation, add-to-cart) limits confidence in overall ROI.

So even if a provider can give you solid benchmarks on a few brand metrics—like unaided awareness and brand favorability—they may miss key conclusions about how the same ads influence digital behavior like search and site visitation. For that digital lens, you’ll need to find another provider, run another set of studies, and leverage different benchmarks. Does this sound like what your limited-bandwidth team would enjoy? Probably not.

Let’s say your newest campaign drove a 5-point lift in purchase intent. That’s a fantastic outcome that your organization presumes will impact down-funnel behaviors like search and sales. But if you require two research providers to assess lift—one for attitudes and another for digital behavior—it’s difficult to determine whether that change in purchase intent truly drives downstream consumer actions. That’s why it’s critical to work with partners who assess both attitudinal and behavioral lift in parallel. They can enable analysis of whether specific brand metrics drive particular outcome metrics, both at the campaign level (for your ads) and at the benchmark level (for your industry).
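The parallel-lift idea above can be sketched in a few lines. This is a minimal, illustrative example only; the metric names and rates are hypothetical, not DISQO data or methodology:

```python
# Illustrative sketch: computing attitudinal and behavioral lift in parallel
# for one campaign. Lift compares an exposed group's rate with a matched,
# unexposed control group's rate. All figures below are hypothetical.

def lift(exposed_rate, control_rate):
    """Lift in percentage points: exposed rate minus control rate."""
    return round((exposed_rate - control_rate) * 100, 1)

# Attitudinal metric: share of respondents expressing purchase intent
attitudinal_lift = lift(exposed_rate=0.42, control_rate=0.37)

# Behavioral metric: share of panelists visiting the brand's site
behavioral_lift = lift(exposed_rate=0.18, control_rate=0.15)

print(attitudinal_lift, behavioral_lift)  # 5.0 3.0
```

Measuring both lifts from the same study design (same audience, same methodology) is what makes it possible to ask whether a given attitudinal lift actually corresponds to a behavioral one.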

[Figures: funnel views showing Brand metrics alone, Outcome metrics alone, and the combined Brand & Outcomes funnel]

Marketers should identify providers who can cover most of the funnel through one program. Standardized research studies that assess brand and behavioral lift in the same overarching methodology help:

  1. Cover all essential KPIs in one fell swoop, 
  2. Identify campaigns that differentially move distinct KPIs, and 
  3. Compare every internal campaign you run against its own internal-facing competitive set. 

Additionally, with a single data source, you can compare your performance with that of others in your industry to learn which KPIs you’re having more or less success with.

Source consistency ensures each study is conducted on a similar audience, eliminating the risk of comparing apples in some studies to oranges in others. Not only does this empower stronger research and conclusions for marketers, but it also helps them communicate findings to stakeholders and advocate for marketing effectiveness. Explaining the nuances of two, three, or even four vendor datasets is seemingly impossible, while describing a single solution’s elegance makes for a highly compelling internal narrative.

 

Coverage across platforms and channels

Most advertising effectiveness benchmarks are hamstrung by an inability to see across all channels and platforms in the digital landscape. The slow death of third-party cookies, limited IP address accuracy, and siloed platforms prevent marketers from effectively tracking user behavior. Furthermore, media fragmentation is leading to even more ad exposure blind spots. Together, these issues leave huge gaps in measurement validity, as exposed groups are lost in an immense digital ocean.

When working with research partners on normative benchmarks, you need a keen understanding of where your consumer coverage starts and ends. In other words, you need to know where providers can measure exposure and subsequent digital behavior. Start by asking yourself—or your provider—some of the following questions:

Exposure visibility

Do these benchmarks assess ad exposure impacts on all platforms where you run campaigns? Or are they limited to a few platforms where a small set of your campaigns are focused?

If a provider can only see ad exposure on specific platforms, its benchmarks will be heavily biased toward the channels it can access. Most brands run cross-channel campaigns precisely because that coverage connects them to many distinct audiences. A provider that cannot detect exposure across those same channels produces benchmarks with low generalizability and creates significant blind spots in your understanding of typical campaign performance in your preferred channels.

Longitudinal identifiers

Are the benchmarks based on traced consumer behavior patterns across critical platforms and channels? Or are you relying on outdated, risk-heavy technology?

Keeping tabs on exposed and unexposed consumers in a highly fragmented digital environment is a considerable challenge. Vendors often rely on nearly deprecated methodologies that face substantial hurdles in a privacy-focused present and future. While cookies are still in use, Apple, Google, and other tech titans are scaling them back. Without these tags, research providers scramble to identify reliable, consistent ways to view longitudinal consumer behavior across the digital ecosystem.

If you’re working with vendors who rely on these legacy technologies, benchmarking advertising effectiveness is fraught with inaccurate assumptions. What’s the value of a norm based on technology that will soon no longer be viable? If digital behavior is a critical KPI for your organization’s ad effectiveness efforts, find a partner who can reliably assess and benchmark digital behavior without depending on rapidly deprecating cookies and their proxies, such as mobile IDs and IP addresses.

Temporal recency of data

Many benchmarks and normative datasets are rapidly losing value because they rely on distant historical campaigns for scale. Campaigns tested 10 years ago may be included in a typical database, even though they launched on antiquated platforms and at a time when consumer behavior differed notably from today. Generational changes, socio-economic impacts, and geopolitics are transforming advertising and corresponding consumer behavior in ways that make recency a critical component of reliable benchmarks.

When working with research partners, probe the recency and relevance of campaigns included in normative benchmarks. Do you want to evaluate your campaign performance against a set of heavily dated ads run a digital generation ago? If not, partner with a vendor who can leverage a high volume of recent studies and one you trust will continue growing as the advertising industry evolves. Being stuck with a given provider due to historical circumstances is challenging, but moving to a more relevant provider in today’s environment can help future-proof your research efforts and normative comparisons.

Where to go from here

Missing something from the checklist? Not to worry! DISQO publishes semi-annual ad effectiveness benchmarks that can replace your suboptimal legacy solutions. With full-funnel measurement, cross-channel visibility, and near-term recency, these benchmarks help marketers discover what good looks like in a modern ad campaign and test themselves against that standard. We also publish regular reporting on our benchmark data, helping clients think through new and unique applications of our industry-leading product.

Instead of relying on antiquated solutions, learn more about how DISQO's advertising benchmarks are setting a new standard for campaign evaluation. Contact us for a walkthrough today, and download our most recent report to see what you might be missing.
