Getting the measure

Avoiding these five fundamental measurement mistakes will bring you closer to the C-suite

If there is one thing that helps get communicators a seat at the management table, it is speaking the language of management: charts, statistics, and metrics. Yet measurement remains one of the biggest challenges for communicators, both in-house and in agencies.

How can communicators avoid the most common measurement mistakes, and open a management dialogue through measurement?

1. Putting measurement last instead of first

Measurement cannot wait.

Indeed, in a technical sense a measurement mechanism is often only possible when it is incorporated into the design of the activity from the start. To take a basic example: gathering feedback from a trade show may be as simple as taking an on-the-spot survey. Even so, the objectives to be measured have to be discussed in advance, the survey content developed and agreed before it is printed or put into electronic format, and the mechanics of data collection (print, app, or online) settled long before the show begins. When I neglect this, I end up counting imperfect metrics like “number of people attending the show” – when I should instead be finding out something like “percentage improvement in engagement” or, at a minimum, “number of customer leads produced”.

More broadly, it is only possible to measure a communications approach when objectives are set up front. At BASF, an innovation-based company, I would be very surprised to hear any of our chemist-managers neglect the measurement of basic scientific characteristics like heat or strength. Why should the corporate affairs department hold itself to a lower standard? Yet sometimes I still catch myself talking about how we will run a campaign without first clarifying why we are running it – are we trying to increase awareness, build trust, both, or achieve something else?

Discipline is therefore needed to ensure measurement makes it into the first draft of every communications plan. Implementing a standard template, with standard headings ([Measurable] Objectives, Approach, Implementation, Timeline), is the necessary first step to incorporating measurement routinely into communications activities.

"Discipline is therefore needed to ensure measurement makes it into the first draft of every communications plan."

Rather than a “project” or a “programme”, I like to think of measurement as a good habit to get into, something to be done automatically – and we know a good habit is truly ingrained when its absence is noticed: just as we feel uncomfortable going to bed without brushing our teeth, we should feel uncomfortable writing even the most basic communications plan without a measurement mechanism.

2. Not establishing basic assumptions with management

Is positive media coverage desirable? Should our company be attempting to raise awareness or not?

The answer should be obvious. Yet not all management team members outside the communications profession will automatically agree. Some actively fear prominence for the company, imagining that it will overwhelm the salespeople with unfillable orders. Others see it as nice to have, but not worth the effort or the risk. Still others err in the opposite direction, having adopted the confused idea that “all publicity is good publicity”. The specific views depend on each executive’s career experience, and cannot be assumed to be universal in today’s diverse management teams.

For this reason, if I forget to agree on the ground rules up front, I will certainly end up wasting precious management time later on, going over the basics, or falling into existential arguments about the value of communication.

Here are some basics that I attempt to establish if I want to have any kind of fruitful discussion when it comes time to present measurement results:

  • Do we agree that awareness and/or trust in the brand is desirable? (And: is one more important than the other?) For management team members who feel unsure whether a strong brand is worthwhile at all, a useful example is the financial valuation of brands in acquisitions – it clearly shows that (intangible) brands are worth (real) money. This can be demonstrated generically or with data from peer companies, and does not need to be re-established in every single results presentation. Comparisons of the price difference between name-brand and generic products are another useful tool in this discussion. Having established this as a ground rule, I do not then need to try to assign a spurious sales-increase value to each news clipping.
  • Who are our target audiences? Because our company sells to such an eclectic group of customers – from rice farmers in Indonesia who need crop protection solutions, to automotive CEOs in Japan who are considering alternative power trains – any discussion of “corporate” communication measurement must start from a common understanding and definition of the target audience. In BASF’s case, we have established a description known as our “Relevant Public”, defined demographically and psychographically, and gained management agreement that this group is a worthwhile target, regardless of whether any given member of the group is currently a customer.
  • How fast do the results have to be visible? Building trust takes time, and the business results of activities like our science education programme BASF Kids’ Lab (target: ages 6-12) might literally not be seen for decades. If management instead has an intense focus on near-term results, then this reality must be acknowledged and addressed.
  • What is our risk tolerance? Some of the most outrageously successful communications campaigns, before they were successful, were simply the most outrageous. I tell managers that just like in financial services, if you are risk-averse, then you cannot expect more than modest returns.

3. Measuring the wrong thing (and then jumping to conclusions)

Just because it’s possible to measure something doesn’t mean it’s helpful. Here are some common metrics, and what they do and do not indicate:

  • How many people showed up? The number of event attendees is a product of the robustness of the database, the relevance of the topic, the seniority of the speaker, and the attractiveness of the invitation. It is also a basic indicator of how many people were exposed to the brand. However, to truly measure the success of an event, it is necessary to set an objective for engagement or message delivery (example: “After this event, do stakeholders agree more or less with the following statements?”) and to elicit a response from the attendees, either by observing their actions after the event or through a direct request for feedback via a survey.
  • How many clippings? Clip count measures notoriety among a particular readership – good or bad. In short, a brand or individual who is already famous will get coverage, no matter what. However, a large number of clippings says very little about how many people have read or absorbed the information, whether they have accepted the company’s key messages, or whether they intend to act on them. Instead, a longitudinal comparison of key message delivery and share of voice against competitors can give an indication of overall brand health and serve as a strategic tool for adjusting messaging.
  • How many clippings for this announcement? Pure clip count also does not necessarily measure the excellence of the media relations team or PR agency. Instead, comparing the quality and quantity of coverage of similar announcements by similar companies can help show the strength of the team’s media relationships.
  • How many followers? We can find out with a single click how many fans a Facebook page has or how many followers a WeChat account has – but these figures say very little about whether those followers are engaged with the content, are willing to recommend the brand, or have any intention to purchase. Instead, track engagement per post (comments or reactions) as a strategic tool to guide topic selection and to show progress over time – a simple sketch of such tracking follows this list.
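
To make these alternative metrics concrete, here is a minimal sketch in Python using entirely hypothetical figures: the peer-set clip counts, the follower numbers, and the helper functions share_of_voice and engagement_rate are illustrative assumptions, not BASF data or a standard industry formula.

```python
# Minimal sketch (hypothetical data) of two of the alternative metrics suggested above.
# "Share of voice" here is simply one company's clip count divided by the total clips
# for the peer set in the same period; "engagement rate" is reactions plus comments
# per post, normalised by follower count.

def share_of_voice(clips_by_company: dict[str, int], company: str) -> float:
    """Fraction of the peer set's total coverage attributable to one company."""
    total = sum(clips_by_company.values())
    return clips_by_company[company] / total if total else 0.0

def engagement_rate(reactions: int, comments: int, followers: int) -> float:
    """Interactions per follower for a single post."""
    return (reactions + comments) / followers if followers else 0.0

# Hypothetical quarter: clip counts for an illustrative peer set.
clips = {"BASF": 120, "Dow": 95, "DuPont": 80}
print(f"Share of voice: {share_of_voice(clips, 'BASF'):.1%}")      # ~40.7%

# Hypothetical post: 450 reactions, 38 comments, 60,000 followers.
print(f"Engagement rate: {engagement_rate(450, 38, 60_000):.2%}")  # ~0.81%
```

Tracked quarter by quarter against a consistent peer set, these two simple ratios say more about brand health and content resonance than raw clip or follower counts do.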

Should communications directors set media coverage targets?

An occasional complaint from media relations units is that target-setting based on coverage is unfair: media coverage can be influenced by any number of factors (bad company news, poor product design, availability of top executives) that are out of the control of a public relations executive. While this is true, in this case I am somewhat unforgiving: salespeople do not complain about having sales targets, even though many of the same factors (bad company news, poor product design, availability of top executives) are equally out of their control. As long as the targets are set through a fair dialogue, the assumptions are agreed in advance, and the metrics are as controllable as possible (key message delivery rather than pure clip count), media coverage targets can be a useful management tool.


4. Zooming in (or out) too far

In my agency days, eager to meet my client’s high expectations, my team and I spent many hours poring over a complex, clunky Excel template I had designed to give an exhaustively complete estimate of how many eyeballs actually fell upon each article. While this was certainly interesting information, I am not convinced the number was ever treated as more than an arbitrary “points” system. The result was that we sometimes spent a greater share of our manpower on measurement than on communication. Was that level of granularity really necessary?

On the other hand, it is also possible to be too vague. I have also heard company leaders declare that they are “not interested in the performance of PR versus marketing” and that they only care about the results. This might be true for the CEO – but measuring the success of the various aspects of communications is a vital question for the person in charge of defining the mix. So, to the degree that it is not overly expensive, I prefer to measure what I can.

"Measuring the success of various aspects of communications is a vital question for the person in charge of defining the mix."

Finally, measuring consolidated results on a regional basis is not always useful. For example: BASF is at present the world’s largest chemical company and has the strongest overall media position in Asia Pacific on a consolidated basis. However, our brand recognition and media coverage in any given country in Asia Pacific still tend to be lower than those of that country’s own national chemical company. Being #1 in the region does not automatically translate into being #1 in every (or any) country in the region!

5. Confusing measurement with presentation

Presenting results to management is never totally objective – rather, it is a delicate combination of celebrating successes and highlighting gaps or deficits, so as to provide information that helps management make decisions. I never use the same results presentation for senior management that I do for communicators: each group has different needs, is charged with making different kinds of decisions, and as such needs different kinds of information to make them.

The most useful tools for management are indicators that help them do the following:

  • Compare performance to peers: by contextualising communications metrics in a proper peer-to-peer comparison, management can understand the key points and ignore the “noise”. For example, in China the state-owned company Sinopec has a more prominent brand name than BASF, but this is mainly due to its consumer visibility through thousands of gas stations – not an indicator of a lack of brand strength in the chemical industry. It is therefore more useful to compare BASF with multinational peers such as Dow or DuPont.
  • Visualise successes or failures: the combination of data and a visual illustration of a major communications event (such as a sample article to illustrate a peak in media coverage, or a lobbying success story to illustrate an advocacy strategy) can help management understand what kind of programmes to fund (or not) in the future.
  • Show progress over time: a number without a reference point is useless. Is the figure – whether or not it is on a scale that bears any relation to reality – bigger or smaller than it was last year? Last quarter? Did the campaign have a lasting effect, or was its impact limited to a short time? Is media coverage or online engagement in one particular country declining or rising faster than in other countries?
  • Identify drivers of change: If there are changes over time, is there a mechanism to identify the cause? In our quarterly media analysis reports, the most interesting part of the slide is always the small boxes that show which pieces of news caused the peaks.

Finally: company leadership team members are human beings first of all, and respond best to ordinary human conversation, with simple explanations and clear examples. Adopting the perspective of the management team can be the best way to avoid measurement mistakes – and find out what really counts.


Genevieve Hilton

Genevieve Hilton has been head of external communications Asia Pacific for BASF, the world's leading chemical company, since 2008. In this role she is responsible for crisis communications readiness, stakeholder relations, media relations, and external communications strategy for the region. She was previously a member of senior management at multinational and local public relations firms, most recently at Ketchum Greater China.