The department of made-up numbers
Posted on 16:15, February 27th, 2009 by Lew
This excellent article, about the methodological rigour (or lack thereof) of ratings, did the rounds in my department at work today. It's a topic I've been meaning to write a post about for a while.
I refer almost daily to such demographic information – ratings, audience/circulation, readership and particularly advertising value equivalents – as 'the department of made-up numbers' because, basically, that's what they are. At best, they are a set of figures which, while deeply flawed, are horizontally and vertically consistent and well enough understood that their failings can be accounted for (which approximates a definition of any useful long-term demographic data). At worst, they are a patina of officious statistical rigour over a set of numbers tuned to tell people whatever the media outlet, its owners, or its PR company want people to know – which is to say, they're designed to fool. Most often, a given dataset lies somewhere in between, in a murky liminal zone where it's impossible to tell which of the two it is without access to the raw data and its provenance. That access is nearly always impossible to get, and even if you could get it, making sense of it would entail phenomenal amounts of very specialised, expensive, time-consuming work.
Despite these dire problems, ratings, audience/circulation figures and advertising value equivalents are the mainstay of the media and communications industry's performance measurement infrastructure, for two simple reasons: first, they give you nice clear figures to prove your department is doing its job; and second, nothing else does, because media demographics is the art of measuring the unmeasurable. So people who are otherwise cautious and crafty and suspicious accept the numbers at face value and trust them implicitly, because the alternative is no data at all, and, with apologies to the Bard, nothing will come of nothing.
This reliance on demographic figures is highly detrimental to the health of the media industry, because the data can't be verified and there is an imperative to inflate it. I dislike comparisons to communism as a rule, but this sort of reporting in the media/PR/comms industry as it presently stands has a parallel in the problems of productivity reporting seen in the USSR in the 1920s and in China in the 1950s. When both producers and their supposedly independent auditors are ranked according to the quantity – not the quality – of the figures they produce, a tendency to inflate those figures inevitably emerges.
In the USSR and China, wheat and rice yields were inflated in exactly this way: the producers would be punished if their yields fell, and the municipal authorities didn't look too closely at the production figures because they would be punished if their municipality's yields fell. Central government assumed the figures were correct and based budgets, food allocations and projections upon them, planning more than could realistically be achieved because there was in fact less food in the granaries than they thought.
If we substitute 'ratings' for 'food', I think the parallel is pretty clear: the media rely on bad data first to demonstrate that their product has value to advertisers, second to claim journalistic merit, and third to boost the egos of their stable of opinion leaders; internal communications departments use it to measure the effectiveness of their campaigns and initiatives; external PR firms use it to prove their worth to client companies; and boards of directors rely on it to decide which publicity campaigns to fund, which products to launch, and whom to promote. All of this is good money thrown after bad – frankly, it's a miracle it hasn't all come tumbling down sooner.