A matter of definition.

Recent reports have surfaced that hospital officials in some US localities are inflating the CV-19 death count by classifying anyone who dies in their care who is not the victim of an accident or other obvious non-viral cause as a CV-19 victim. Apparently this is because the US public health scheme, Medicaid, pays hospitals US$5,000 per non-CV-19 death versus US$13,000 for CV-19-related deaths. Most hospitals in the US are private, for-profit entities, so the hospital administrators (not doctors) who submit the paperwork to the federal government for Medicaid death reimbursements have a financial incentive to falsify the real causes of death.

There is no independent body above hospital administrations that regularly oversees how cause of death in hospitals is classified unless some gross error comes to the attention of local and state authorities, and there is no way for the federal government to unilaterally challenge the legitimacy of CV-19 death claims. Moreover, since local coroners are swamped by the influx of CV-19 dead and Medicaid is stretched to breaking point by the upsurge in legitimate CV-19 claims, there is little way to hold dishonest hospital administrators to account unless a whistleblower from within a hospital provides concrete proof of institutional malfeasance.

In contrast, official Russian statistics show that there are over 263,000 cases in the country, with nearly 2,500 deaths and new cases exceeding 10,000 per day. That death count has raised eyebrows outside of Russia, as it is remarkably low when compared to other countries given the number of cases and the rate of infection.

Russian officials counter the skeptics by claiming that their definition of a CV-19 death covers only those deaths that can be directly attributed to the pathogen. They deliberately exclude other causes that are exacerbated by CV-19 contagion, such as heart failure, smoking-related pulmonary embolisms, liver failure and the like. Because of this, the official Russian CV-19 mortality rate is not only very low but also does not disproportionately include the elderly, whose deaths are most often attributed to the underlying condition rather than to CV-19.

These differences in reporting remind me of an incident that happened to me while conducting research in Brazil in 1987. I had an interest in national health administration because I had worked on that subject during my Ph.D. dissertation research in Argentina earlier in the decade. I was living in Rio at the time and had experienced Carnaval in February, when thousands of sex tourists of every persuasion descended on the city in the middle of what was clearly an AIDS epidemic (in a cultural context where men refused to use condoms because that was considered “unmanly” and in which many otherwise straight men used Carnaval as an excuse to enjoy gay sex). Around that time my then-wife picked up a water-borne blood infection while cleaning vegetables and needed a transfusion; because we were told that most of the blood supplies in Rio were infected with both AIDS and syphilis, I had to donate the blood for her myself. I was therefore acutely interested in how health authorities dealt with the convergence of viral calamities.

I managed to arrange an interview with a senior official in the Health Ministry in Brasilia, one who just happened to be involved in infectious disease mitigation. As part of our conversation I asked him how many AIDS cases there were in Brazil. He said “100.” I laughed and said “no, seriously, how many cases are there, because I just came from Rio during Carnaval and it was a 24/7 bacchanal of unprotected sex, drug use, drinking, dancing and other assorted debauchery, plus I am told that the blood banks are unreliable because the supplies are infected with AIDS and syphilis.”

He smiled and leaned back in his chair for a moment, and then said “you see, that is where my country and your country are different. In this country a person gets the AIDS virus, loses immune system efficiency, and eventually succumbs to an infectious tropical disease such as malaria or dengue fever. We put the cause of death as the tropical disease, not AIDS. In your country a person gets AIDS and eventually dies of a degenerative disease such as a rare thyroid or other soft tissue cancer. Since they otherwise would not have likely had that cancer, your health authorities list the cause of death as AIDS. For us, the methodology for defining cause of death is not only a means of keeping the official AIDS count low. It also keeps the foreign tourist numbers up because visitors are not fearful of contracting AIDS and have much less fear of malaria or dengue because those are preventable.” I asked him what he thought about those tourists who did contract AIDS while in Brazil on holiday. He replied “that is a problem for their home authorities and how those authorities define their cause of death.”

I recount this story because it seems that we have entered a phase in the CV-19 pandemic where the definition of what is and what is not a CV-19 death has become a bit of a hair-splitting exercise with increasing levels of political spin attached to it. It opens a Pandora’s box of questions: Is the lockdown approach overkill? Is the re-opening too soon? Are the overall US CV-19 death figures inflated because of the structural imperatives layered into the US health system? Are the Russian figures underestimated because of politics or because of accounting methods? Has the PRC lied all along about the extent of the disease before and after it left its borders (in part by assigning causes of death other than CV-19)? At what point do honest medical professionals assign primary cause of death to CV-19 rather than to an underlying condition?

There is one thing that I am fairly certain about. In Bolsonaro’s Brazil, I have little doubt that the rationale I heard in 1987 is still the rationale being used today, except that now it is CV-19 rather than AIDS that is the scourge that cannot be named.

The department of made-up numbers

This excellent article, about the methodological rigour (or lack thereof) behind ratings, did the rounds in my department at work today; it covers a subject I’ve been meaning to write a post about for a while.

I refer almost daily to such demographic information – ratings, audience/circulation, readership and particularly advertising value equivalents – as `the department of made-up numbers’ because, basically, that’s what they are. At best, they are a set of figures which, while deeply flawed, are horizontally and vertically consistent, and well-enough understood that their failings can be accounted for (this approximates a definition of any useful long-term demographic data). At worst, they are the patina of officious statistical rigour over a set of numbers tuned to tell people whatever the media outlet, its owners, or its PR company want people to know – and that means they’re designed to fool. Most often, a given dataset lies somewhere in between, in the murky liminal zone where it’s impossible to tell whether it’s the former or the latter or something else entirely without access to the raw data and its provenance. That access is nearly always impossible to get, and even if you could get it, making sense of it would entail phenomenal amounts of very specialised, expensive, time-consuming work.

Despite these dire problems, demographic data – ratings, audience/circulation and advertising value equivalent figures – are the mainstay of the media and communications industry’s performance measurement infrastructure, for two simple reasons: first, they give you nice clear figures to prove your department is doing its job; and second, nothing else does, because media demographics is the art of measuring the unmeasurable. So people who are otherwise cautious and crafty and suspicious just accept the numbers at face value and trust them implicitly, because the alternative is no data at all and, with apologies to the Bard, nothing will come of nothing.

This reliance on demographic figures is highly detrimental to the health of the media industry, because the data can’t be verified and there is a built-in imperative to inflate them. I dislike comparisons to communism as a rule, but there is a parallel between this sort of reporting in the media/PR/comms industry as it presently operates and the problems of productivity reporting seen in the ’20s in the USSR and the ’50s in China. When both producers and their supposedly independent auditors are ranked according to the quantity – not the quality – of the figures they produce, a tendency to inflate those figures inevitably emerges.

In the USSR and China, wheat and rice yields were inflated in this way: the producers would be punished if their yields fell, and the municipal authorities didn’t look too closely at the production figures because they would be punished if their municipality’s yields fell. Central government assumed these figures were correct and based budgets, food allocations and projections upon them, planning more than could realistically be achieved because there was in fact less food in the granaries than they thought.

If we substitute `ratings’ for `food’, I think the parallel is pretty clear: the media rely on bad data to demonstrate that their product has value to advertisers first, journalistic merit second, and the power to boost the egos of their stable of opinion leaders third; internal communications departments use it to measure the effectiveness of their campaigns and initiatives; external PR firms use it to prove their worth to client companies; boards of directors rely on it to make decisions about which publicity campaigns to fund, which products to launch, and whom to promote. All this is good money thrown after bad – frankly, it’s a miracle it hasn’t all come tumbling down sooner.

L