Why Response Metrics Don't Equal Brand Growth

Every performance dashboard tells a story about motion. Clicks accumulating, engagement rates climbing, shares radiating outward across feeds. The numbers move and, because they move, teams take it as evidence that the brand is moving with them. This is the illusion. High engagement does not confirm that a brand is growing stronger in the minds of buyers. It confirms that a piece of content generated a reaction. Those two things occupy different universes, and the distance between them is where most measurement strategies quietly collapse.
Digital marketing was built on a promise that traditional media could never keep: proof. Television bought reach on faith. Print counted circulation, not comprehension. The early social platforms offered something radically different. A user clicked, a user shared, a user commented. The interaction was visible, timestamped, and attributable. For a generation of marketers who had inherited vague metrics and slow feedback loops, this felt like a revolution.
The operational appeal compounded. Engagement updates in real time, it benchmarks neatly across creatives and channels, and it fills weekly reports with numbers that give stakeholders a feeling of certainty. Platforms reinforce this by making engagement the currency of distribution: content that generates interaction gets amplified, amplification produces more interaction, and the loop entrenches engagement as the default measure of success.
There was also a structural pressure at work. Marketing functions have always struggled to translate their contribution into language that finance recognises. Engagement offered a version of accountability that felt tangible and modern. Likes count. Shares spread. These are legible actions, easy to put in a deck, reassuring in a boardroom. What they are less good at is telling anyone whether the brand is becoming more valuable in the minds of the people who will eventually decide whether to buy it.
The confusion was understandable. The consequences, accumulated over a decade of normalised practice, are significant.
This distinction carries the weight of the entire argument, so it deserves precision.
Response measures what a person did with a specific piece of content, on a specific platform, at a specific moment: behaviour under particular conditions. It indicates whether the creative stopped the scroll, whether the hook prompted a click, whether the format suited the context. These signals reflect execution quality, but they reveal little about whether anything in that person's mind has changed.
Brand effect is a different kind of question entirely. It asks: Is this brand now easier to recall in a buying situation? Are the associations people hold about it becoming clearer, stronger, or more distinct from competitors? Is the brand's presence in memory growing in ways that will influence future choice, even when the buyer is not actively thinking about it? These questions cannot be answered by watching an engagement counter.
Byron Sharp's research at the Ehrenberg-Bass Institute offers the most rigorous framework for understanding why. Mental availability, the probability that a brand comes to mind in a relevant purchase situation, is the property most predictive of long-term commercial performance. It is built through distinctive assets, consistent associations, and broad repeated exposure across time. A single viral moment does not build mental availability. Consistent, coherent brand presence, reaching the right category contexts repeatedly, does.
The timeline problem widens the gap. Research from the IPA's effectiveness databank, summarised by Les Binet and Peter Field, shows that brand-building effects typically take six to eighteen months to show up in buyer behaviour. The most effective long-term campaigns tend to be emotionally driven and broadly distributed, and often generate only modest engagement; they work on a different schedule entirely. Weekly engagement reports therefore systematically undervalue the work that matters most over the long run, and overvalue campaigns that create immediate noise without lasting market impact.
Ehrenberg-Bass research adds a further complication. At any given moment, roughly 95 per cent of a brand's potential buyers are not in an active purchase cycle. Engagement metrics, by their nature, skew heavily toward the actively interested: existing followers, recent visitors, people in the consideration window. The much larger population whose future choices will determine long-term growth is largely absent from these signals. Building mental availability among that passive majority is the central task of brand building. It is also the task that engagement data is least equipped to track.
Picasso broke the conventions of representation not to be difficult but to be more honest about the subject. A face seen from one angle does not reveal everything about that face. Brand measurement has the opposite problem: the angle most teams are looking from shows the most activity while hiding the most important information.
Content that drives engagement often generates a reaction to itself rather than to the brand behind it. A funny piece shared 50,000 times encodes the joke in memory; whether the brand gets encoded alongside it depends on how well the brand's distinctive assets are integrated into the creative. When they are not, the brand is reduced to a logo attached to a trend, and the audience remembers the content while forgetting the source. Advertising research calls this the vampire effect: compelling creative that drains attention away from the brand it was meant to build.
Controversy has a similar structural problem. Posts designed to provoke disagreement generate high comment counts because conflict is an engagement engine. But the dominant association formed in those moments is rarely aligned with strategic positioning. A brand that earns attention through provocation is conditioning its audience to associate it with conflict rather than with whatever it actually does well. The engagement metric looks healthy. The brand equity is being spent, not saved.
There is also the question of attention quality. Platform interactions happen across a wide spectrum of cognitive engagement. A user might tap through a carousel while waiting for coffee, completing the sequence technically while forming no durable memory of what they saw. View-through rates measure completion. They do not measure processing. A campaign can achieve 80 per cent completion rates on video content and still leave no trace in the memory structures that influence future choice.
The practical consequence is the pattern that performance teams recognise but rarely say out loud: the dashboard is green, the pipeline is flat, and nobody can quite explain the gap.
None of this means engagement data should be ignored. It means it should be kept in its lane, and its lane is narrower than most reporting practices suggest.
Engagement is a valuable diagnostic for creative and channel quality. Click-through rates, completion rates, and comment sentiment show which variants are resonating and where a format is mismatched to its context. Used this way, the metrics carry real operational value.
Engagement also serves a practical function in platform economics. Higher engagement reduces distribution costs on most major platforms: better-performing content earns cheaper reach, which matters for efficiency in short-term activation campaigns. This is a utility function, not a brand health indicator. It tells you the content is working for the algorithm. It does not tell you the brand is working in the market.
Early warning functions are legitimate. A sustained decline in engagement from a previously responsive audience can signal creative fatigue, audience drift, or a fundamental mismatch between what the brand communicates and what the audience wants. Engagement acts as a monitor, not a success measure.
The discipline is in keeping the question matched to the metric. Engagement answers: Is this content generating traction in this context? Brand building asks a different question, and it requires different instruments.
How Brand Building Works
Lorca's duende is not mere emotion; it is the quality that makes a work of art feel necessary and unforgettable. Brands operate on a similar principle: lasting equity is built not through frequency alone but through the strength and distinctiveness of the associations each exposure leaves behind.
Memory structures underpin brand value. They are built through repeated exposure to distinctive assets, colour, shape, sound, style, tone of voice, across contexts and over time. The process is largely subconscious: each exposure strengthens associations so that the brand surfaces when a consumer encounters the relevant category. It does not demand attention in the moment; it shapes future behaviour, such as searching for the brand by name months later.
The mere exposure effect shows that familiarity alone breeds positive associations, no click required. A user who sees a branded ad without interacting still registers an exposure that contributes to memory, and across many such exposures the cumulative effect is significant. The dashboard records each non-click as zero; the brand still gains value.
Binet and Field's research highlights a further mechanism: brand activity improves activation efficiency over time, not just awareness. Strong mental availability lifts conversion rates, lowers acquisition costs, and reduces price sensitivity. These benefits of brand investment typically appear only after twelve months or more, well beyond the horizon of most dashboards.
The answer is not to replace engagement metrics with something else. It is to build a measurement architecture with enough depth to hold both short-term response signals and longer-term brand indicators at the same time, without allowing the faster-moving numbers to crowd out the slower ones.
Share of search is one of the most accessible tools available to performance teams building this kind of architecture. Developed by Les Binet, the methodology tracks a brand's proportion of total branded search volume within a category relative to competitors. It requires no proprietary data and no long-horizon tracking study, and it responds to brand investment within weeks rather than months. Crucially, IPA analysis has found that share of search correlates strongly with share of market, with correlations of roughly 0.83, and tends to lead market share changes rather than lag them. A brand whose share of search is rising is building momentum in the market, regardless of what its engagement rate is doing.
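The arithmetic behind share of search is simple enough to sketch. The snippet below is a minimal illustration: the brand names and monthly search volumes are entirely hypothetical, and the calculation is just each brand's proportion of the category's total branded search volume.

```python
# Minimal sketch: computing share of search from branded search volumes.
# All brand names and volumes below are hypothetical illustrations.

def share_of_search(volumes):
    """Return each brand's share of the category's total branded search volume."""
    total = sum(volumes.values())
    if total == 0:
        return {brand: 0.0 for brand in volumes}
    return {brand: v / total for brand, v in volumes.items()}

# Hypothetical monthly branded-search volumes for a three-brand category.
january = {"BrandA": 12000, "BrandB": 9000, "BrandC": 4000}
june = {"BrandA": 15000, "BrandB": 8500, "BrandC": 4500}

for month, vols in [("January", january), ("June", june)]:
    shares = share_of_search(vols)
    print(month, {b: round(s, 3) for b, s in shares.items()})
```

Tracked month over month against competitors, a rising share is the signal of interest, not the absolute volume, which moves with seasonality across the whole category.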
Branded search volume and direct traffic quality offer complementary signals. When buyers seek out a brand by name, that is mental availability expressing itself as behaviour. It is a downstream confirmation that the brand has built enough presence in memory to generate unprompted intent. These numbers belong in the same measurement framework as CTR and ROAS, not as a separate brand audit conducted quarterly and disconnected from campaign reporting.
Brand lift studies, run through platform tools or independently commissioned, provide direct evidence of recall and association shifts in defined audiences. They ask the right questions: Did exposure increase unaided awareness? Did it shift the brand's association with a specific benefit? Did it move consideration among people who did not previously consider the brand? These are the metrics that map to the actual mechanisms of brand building, and they should sit alongside performance KPIs in every reporting cycle where long-term growth is part of the brief.
Marketing mix modelling is regaining importance as cookie-based attribution declines. It assesses media effects across channels without last-click bias, and it distinguishes long-term brand impact from short-term activation, something most platform analytics cannot do. For agencies focused on growth, it is a vital part of the measurement toolkit.
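To make that distinction concrete, here is a toy sketch of the core mechanism behind marketing mix modelling, not a production model: regress sales on media inputs, applying an adstock (carry-over) transformation to the brand channel so its effect decays slowly across weeks rather than landing all at once. Every figure is synthetic, and the decay rate is assumed known here, whereas a real model would estimate it.

```python
# Toy illustration of the MMM idea: separate a slow-building brand
# effect from an immediate activation effect. All data is synthetic.
import numpy as np

def adstock(spend, decay):
    """Geometric carry-over: each week retains `decay` of the accumulated effect."""
    out = np.zeros(len(spend))
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

rng = np.random.default_rng(0)
weeks = 104
brand_spend = rng.uniform(0, 10, weeks)       # slow-building channel
activation_spend = rng.uniform(0, 10, weeks)  # immediate-response channel

# Synthetic "true" market: brand effects carry over strongly (decay 0.8),
# activation effects do not carry over at all.
sales = (50
         + 2.0 * adstock(brand_spend, 0.8)
         + 1.5 * activation_spend
         + rng.normal(0, 2, weeks))

# Fit a linear model on the transformed inputs.
X = np.column_stack([np.ones(weeks),
                     adstock(brand_spend, 0.8),
                     activation_spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
base, brand_effect, activation_effect = coef
print(f"baseline={base:.1f} brand={brand_effect:.2f} activation={activation_effect:.2f}")
```

The point of the sketch is structural: once carry-over is modelled explicitly, the brand channel's contribution becomes visible even though no single week's spend lines up with a sales spike, which is exactly the effect last-click attribution throws away.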
The practical challenge is political as much as technical. Reporting cycles built around weekly platform analytics create institutional pressure to prioritise what updates frequently. Building in the slower signals requires deliberate commitment from both agency and client, and a willingness to hold complexity in the same frame as immediacy. Agencies that can make that case compellingly are offering something structurally more valuable than those who deliver dashboards that look reassuring and explain nothing about the brand's actual trajectory.
Goodhart's Law has a particular sharpness in marketing contexts: when a measure becomes a target, it ceases to be a good measure. Teams optimising for engagement learn quickly how to generate engagement. Giveaways, controversy, trend-chasing, clickbait formatting, provocative headlines that promise more than the brand can deliver. All of these reliably move the metric. None of them reliably builds the brand. In fact, some of them actively damage it, because they train audience expectations in directions that conflict with strategic positioning and undermine the consistency on which mental availability depends.
The bigger risk is cultural. Organisations that normalise engagement as the primary evidence of marketing effectiveness gradually lose the language and institutional permission to do the slower work that brand building requires. When every campaign must justify itself through interaction rates, the campaigns most important to long-term growth become the hardest to defend. The measurement culture consumes the strategy, and the decay is invisible in the very dashboards being used to prove that things are going well.
Strong agencies accept engagement metrics without letting them dominate measurement. They draw the line between response and brand effect clearly, explain it to clients, structure reporting around it, and make decisions that serve the brand's long-term health rather than the dashboard's short-term appearance of success.
The brands that endure are not the ones generating the most reactions but the ones building a strong, consistent presence in memory and a clear claim on the category by the time buyers arrive. Achieving that requires measuring what predicts growth, not merely what is easy to count. The gap between those two kinds of metric is where the real brand work, and the decisions that drive growth, takes place.