Your Brand Tracking Data May Be a Mess

Posted on May 27, 2021

At Material, we take over a lot of trackers from other research partners who weren’t meeting their clients’ needs, and I mean a lot! Clients come to us for at least one of three reasons: better service, more strategic analysis and insights, or higher quality data.

Not all of these new clients knew they had been getting bad data; often we only discover it when we dig into their old datasets.

Do you have a brand tracking data quality problem?

There are a few telltale signs that your brand tracking data quality might be subpar:

  • Results move too little
  • Results fluctuate too much
  • Results don’t align with in-market performance
  • Client brand wins on everything
  • Scores are too flat across attributes
  • Results make no sense

Tracking results move too little

You know your market is shifting, yet your tracking results are frustratingly stable. Why are you not picking up on the changes? You might have set your study qualifications too broadly (screening in past-year shoppers, for instance), which means recent in-store changes haven't impacted most of your respondents.

Or maybe you aspire to have your emergent niche product category adopted widely, so you want to set yourself up for the future by interviewing everyone rather than just current category users. But that means most people have only vague, general impressions of the category and the brands within it, yielding flat results over time, across brands, and across attributes.

Tracking results fluctuate too much

You know your category is pretty stable, so you don’t expect results to shift much wave-to-wave, but you have the tracker for peace of mind to ensure you don’t get blindsided by any changing market dynamics. Instead of peace, it brings you stress, as results bounce inexplicably from wave to wave.

Why on earth is that happening? The most common problem is insufficient control of sample composition. Likely culprits are fluctuating demographics or shifts in the type of device respondents are taking the survey on. But did you also know that results from different sample sources tend to differ from each other (usually within some reasonable bounds)?

If your tracker is not controlling the mix of sample sources from wave to wave, in addition to demographics and device type, you may be seeing fluctuations that are nothing more than noise.
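To make that concrete, here is a minimal sketch (not our production process) of the kind of wave-over-wave composition check we mean, assuming a respondent-level table with hypothetical columns for wave, sample source, device, and age group:

import pandas as pd

# Hypothetical respondent-level data: one row per completed interview.
df = pd.read_csv("tracker_completes.csv")  # assumed columns: wave, sample_source, device, age_group

# Share of each wave coming from each sample source, device type, and age group.
# Large swings in these mixes from wave to wave are a red flag that apparent
# "trend" movement may just be composition noise rather than real market change.
for dimension in ["sample_source", "device", "age_group"]:
    mix = pd.crosstab(df["wave"], df[dimension], normalize="index").round(3)
    print(f"\n{dimension} mix by wave:\n{mix}")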

Tracking results don’t align with in-market performance

This was a common problem during the pandemic. Brand equities for most brands in most categories remained stable, yet sales for some brands skyrocketed as category penetration and frequency surged. In some cases, sales went up while equities declined. Try explaining that to management! Careful analysis of the data uncovered that sales increases for those brands lagged the increases of healthier brands, making it particularly important to examine the competitive weaknesses identified in the tracking.

This was also a time when pre/post, test/control tracking was crucial to understanding the impact of store redesigns, adjusted service models, or heavy-up ad campaigns. Category-wide shifts could amplify or drown out the impact of your own actions if no control group was available for comparison.
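As a hedged illustration of that test/control logic (not a full analysis plan), the core comparison is a difference-in-differences: the change in the test group minus the change in the control group, shown here with made-up equity scores:

# Hypothetical pre/post equity scores (e.g., top-2-box %) for test and control markets.
test_pre, test_post = 41.0, 47.0        # markets that got the redesign / heavy-up campaign
control_pre, control_post = 40.0, 44.0  # comparable markets that did not

# Difference-in-differences: the lift attributable to your action,
# net of whatever the category was doing on its own.
lift = (test_post - test_pre) - (control_post - control_pre)
print(f"Net lift from the initiative: {lift:.1f} points")  # 2.0 points in this example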

Client brand wins on everything

You might wonder why I'm listing this as a problem. But as a category leader, you can fall into complacency and lose sight of challenger brands that are nipping at your heels. If you ask people to rate brands they've heard of, but don't require them to be familiar with those brands, you'll exaggerate the brand halo that makes your brand dominate lesser-known alternatives. The proper place to capture the familiarity benefit is in your brand funnel, not in the perceptions battery.

Scores are too flat across attributes

This is related to the last problem. If people are rating brands they don’t know well, they’ll tend to straight-line the attribute battery, giving the same rating to all attributes. This tends to inflate the scores of category leaders, deflate the scores of smaller brands, and undermine the ability to statistically infer which attributes are most important in driving key business outcomes.

One way to mitigate this effect is to remove these respondents as poor-quality data. But be careful with that solution, since it introduces a systematic bias against people who are familiar with fewer brands. Giving the same rating to every attribute of a brand you've merely heard of is not necessarily evidence of inattentive responding.
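A minimal sketch of the more careful flagging we have in mind, assuming respondent-by-brand attribute ratings alongside a stated-familiarity question (all column names hypothetical): treat zero-variance ratings as suspect only for brands the respondent claims to know well.

import pandas as pd

# Hypothetical long-format ratings: one row per respondent x brand x attribute.
ratings = pd.read_csv("attribute_ratings.csv")  # columns: resp_id, brand, attribute, rating, familiarity

# Straight-lining = an identical rating on every attribute for a given brand.
per_brand = ratings.groupby(["resp_id", "brand"]).agg(
    rating_sd=("rating", "std"),
    familiarity=("familiarity", "first"),  # e.g., 1-5 stated familiarity with the brand
)

# Only flag straight-lining when the respondent claims to know the brand well;
# flat ratings for a brand they've merely heard of are expected, not evidence of fraud.
suspect = per_brand[(per_brand["rating_sd"] == 0) & (per_brand["familiarity"] >= 4)]
print(f"{suspect.index.get_level_values('resp_id').nunique()} respondents flagged for human review")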

Tracking results make no sense

You’re right to be deploying cleaning steps in your tracker. Survey research fraud is less lucrative than advertising fraud, but it’s still commonplace. Survey bots, click farms, and professional respondents all may be polluting your results if you don’t identify and screen them out. Fraudsters are a particularly large problem in low-incidence studies and high-incentive ones. You can deploy a variety of tactics to catch fraud, both real-time during field and post-hoc once data collection ends. While AI methods are improving the accuracy and efficiency of quality control efforts, there’s still no substitute for a careful human review of open-end responses and general response patterns.
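A hedged sketch of the post-hoc side of that cleaning, using hypothetical fields for interview duration and a key open-end; a real quality-control process layers many more checks, real-time defenses, and human review on top of this:

import pandas as pd

df = pd.read_csv("tracker_completes.csv")  # assumed columns: resp_id, duration_sec, open_end

checks = pd.DataFrame(index=df.index)

# Speeders: completed in well under the median interview length.
checks["speeder"] = df["duration_sec"] < 0.4 * df["duration_sec"].median()

# Duplicate or copy-pasted open-ends across supposedly different respondents.
normalized = df["open_end"].fillna("").str.strip().str.lower()
checks["dupe_open_end"] = normalized.duplicated(keep=False) & (normalized != "")

# Gibberish or low-effort open-ends (very short, or no vowels at all).
checks["junk_open_end"] = (normalized.str.len() < 5) | ~normalized.str.contains(r"[aeiou]")

# Flag for review rather than auto-deleting; a human still makes the final call.
df["flag_count"] = checks.sum(axis=1)
review_queue = df[df["flag_count"] > 0]
print(f"{len(review_queue)} of {len(df)} completes flagged for review")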

Other common solutions to bad brand tracking data

Tracking studies are unique in the level of pressure they place on research execution and data quality. Unlike a one-time study, every wave needs to be perfectly consistent on a host of design and execution decisions that could reasonably go either way if trends weren't the focus. This is why we keep a study-specific tracker manual that documents every decision (what, when, and why it was made) so we can ensure consistency over time, even if the project lead or client contact moves on to bigger and better things. Usually, it's our client who moves on, and our tracker service teams become the long-term experts on the client's brand and tracker.

You probably already knew that any shift in tracking survey content or processes can undermine trendability. What you may not realize is that doggedly sticking to past processes can also hurt you.

You must introduce mobile surveys

The majority of trackers today were launched before mobile-optimized survey formats were available. For fear of upsetting the trendability of the data, many of these trackers have never upgraded to a mobile-optimized format. As a result, few (or sometimes no!) respondents on mobile devices are being included, even though most surveys today are taken on mobile. The measurement has stayed consistent, but the population surveyed has shifted, as mobile-preferring people now opt out of the study.

In other words, these trackers are no longer getting a representative view of their markets.

You must control for survey length

Partly due to the shift to mobile survey-taking, but also due to shrinking attention spans and proliferating entertainment options, respondents have become increasingly unwilling to take surveys over 15 minutes in length, and sample providers are loath to ask their respondents to take them. This is happening at the same time that long-standing trackers have become bloated through the addition of new questions over time.

Long surveys have lower cooperation rates, lower completion rates, and lower-quality data from those who participate. Make the hard decisions about what you really must track and trim the rest.

Data quality is of utmost importance in tracking studies. If you suspect any of these culprits is to blame for your bad tracking data, it may be time to rethink your tracking study design or your tracking partner.
