The Cyclospora blamed on California strawberries in 1996 turned out to be Guatemalan raspberries.
The Salmonella blamed on tomatoes in 2008 turned out to be jalapeño peppers.
The E. coli O104 blamed on Spanish cucumbers in 2011 turned out to be organic sprout seeds from Egypt.
Ron Doering, a past president of the Canadian Food Inspection Agency who practices food law in the Ottawa offices of Gowling Lafleur Henderson, LLP, writes in his monthly Food in Canada column that food safety regulators face “diabolical complexity when they carry out investigations characterized by deep factual and scientific uncertainty.”
“In the latter two cases, investigators were dealing with rare strains of pathogens, and traceability was complicated by the fact that the source was unpackaged vegetables — without barcodes or lot numbers — that were quickly consumed, often with other produce. Microbiological testing proved quite unhelpful so investigators had to rely primarily on epidemiology. Pressed for “results,” both cases had regulators initially jumping to the wrong conclusions, destroying in their wake the livelihood of many innocent people and seriously undermining the credibility of government food safety regulators. Both cases prove the “Iron Law of Food Safety Outbreak Investigations” — after the fact academics and the media will criticize government regulators either for overreacting or under-reacting.
“Perhaps government regulators have themselves to blame for the Iron Law because they continue to buy into the academic theory and language that they are engaged in risk management. They should be so lucky. The classical model of risk analysis falls far short in describing what regulators actually do and in providing much useful guidance on how they should do it. In both cases, regulators were not dealing with risk — a concept that surely involves at least some aspect of measuring probabilities — they were dealing with uncertainty and crisis management.
“The language of risk disguises the degree of ambiguity inherent in large-scale food safety investigations. “Risk” creates the illusion of precision, of assessing hazards in quantitative terms, of measuring the probability of harm. Science-based quantitative expert risk assessments often disguise the underlying subjective framework of assumptions and understate the high degree of uncertainty. Food safety risk assessors do not do double-blind laboratory studies over a long period; they generally just review the conclusions of other scientists.
“In fact, in spite of their name, they typically do not even assess cases of risk, as probabilities are usually impossible to calculate, especially in the context of an urgent food safety crisis.
“The most that “risk assessors” can do is assess situations of uncertainty and then engage in a complex iterative process with decision-makers to try to find ways to manage an immediate issue fraught with multiple perspectives where the science, however uncertain, is important but rarely determinative.
“Understanding what is going on is complicated too by everyone pretending the decision is mostly science-based, unadulterated by policy considerations, and that they are managing the actual science-health risk, not the perception of risk.
“We need to abandon the language of risk and recognize that most food safety investigations are about issue management. We need to develop a new theoretical model and language that would borrow heavily from the emerging literature on adaptive management: in the face of such uncertainty, making policy choices and implementing regulatory decisions should be recognized as necessarily experimental; decisions are made that expect the unexpected; policies and regulatory responses are adapted as lessons are learned.
“The new model would also have to more fully recognize that while food safety must be paramount, trade-offs and weighing benefits are always a necessary part of the process. And this model would have to grapple with communicating this uncertainty to a generally scientifically illiterate consumer who simply expects retailers to only sell safe food and expects the regulatory system to guarantee it.”
Doering has some valid points. I don’t care what model is used as long as there are fewer sick people. Epidemiology, like humans, is flawed. But it’s better than astrology.
The more that public health folks can articulate when to go public and why, the more confidence people will have in the system. Past risk communication research has demonstrated that if people have confidence in the decision-making process, they will have more confidence in the decision. People may not agree about when to go public, but if the assumptions are laid on the table, and value judgments are acknowledged, then maybe the focus can be on fewer sick people.
On June 12, 1996, Ontario, Canada’s chief medical officer, Dr. Richard Schabas, issued a public health advisory on the presumed link between consumption of California strawberries and an outbreak of diarrheal illness among some 40 people in the Metro Toronto area. The announcement followed a similar statement from the Department of Health and Human Services in Houston, Texas, which was investigating a cluster of 18 cases of Cyclospora illness among oil executives.
Turns out it was Guatemalan raspberries, and no one was happy.
Once epidemiology identifies a probable link between a food and some dangerous bug, health officials have to decide whether it makes sense to warn the public. In retrospect, the decision may seem straightforward, but several possibilities must be weighed at the time.
Back in 1996, when the Ontario Ministry of Health decided to warn people that eating imported strawberries might be connected to cyclospora infection, two outcomes were possible: if it turned out that strawberries were implicated, the ministry made a smart decision, warning people against something that could hurt them; if strawberries were not implicated, then the ministry made a bad decision with the result that strawberry growers and sellers lost money and people stopped eating something that was good for them.
If the ministry decided not to warn people, another two outcomes were possible: if strawberries were implicated, then the ministry made a bad decision and people could have acquired a parasitic infection they could have avoided had they been given the information (lawsuits usually follow); if strawberries were not implicated, then nothing happens, the industry does not suffer and the ministry does not get in trouble for not telling people.
These scenarios apply to any decision to go public.
It’s not that a new model is required – any model will do – as long as someone in some regulatory agency will put in writing the decisions involved in when to go public, with all assumptions laid bare. Then it can enter public discourse and be improved.