AI and dairy

Why did I write the other day about an artificial intelligence dude whom I knew 25 years ago, and whose primary application at the time was ensuring elevators in skyscrapers were efficiently dispatched to floors that needed them – oh, and vision?

Because he made the N.Y. Times with a hyperbolic headline about making Toronto a high-tech hotbed (he didn’t write the headline) and because his AI basics are making their way into food safety.

Caroline Diana of Inquisitr writes that IBM and Cornell University, which has a long history of dairy research, will make use of artificial intelligence (AI) to make dairy safe(r) for consumption.

By sequencing and analyzing the DNA and RNA of food microbiomes, researchers plan to create new tools that can help monitor raw milk to detect anomalies that represent food safety hazards and possible fraud.

While many food producers already have rigorous processes in place to ensure food safety hazards are managed appropriately, this pioneering application of genomics will be designed to enable a deeper understanding and characterization of microorganisms on a much larger scale than has previously been possible.

Only a PR thingy could have written this paragraph: “This work could eventually be extended to the larger context of the food supply chain — from farm to fork — and, using artificial intelligence and machine learning, may lead to new insights into how microorganisms interact within a particular environment. A carefully designed informatics infrastructure developed in the IBM Accelerated Discovery Lab, a data and analytics hub for IBM researchers and their clients and partners, will help the team parse and aggregate terabytes of genomic data.”

Better than a poorly designed informatics infrastructure.

Maybe IBM is good at some stuff: Grocery scanner data to speed investigations during early foodborne illness outbreaks

Foodborne illnesses, like salmonella, E. coli and norovirus infections, are a major public health concern affecting more than one out of six Americans each year, according to the Centers for Disease Control and Prevention (CDC). During a foodborne illness outbreak, rapidly identifying the contaminated food source is vital to minimizing illness, loss and impact on society.

Today, IBM Research – Almaden announced its scientists have discovered that analyzing retail-scanner data from grocery stores against maps of confirmed cases of foodborne illness can speed early investigations. In the study, researchers demonstrated that with as few as 10 medical-examination reports of foodborne illness they can narrow down the investigation to 12 suspected food products in just a few hours.

In the study, researchers created a data-analytics methodology to review spatio-temporal data, including geographic location and possible time of consumption, for hundreds of grocery product categories. Researchers also analyzed each product for its shelf life, geographic location of consumption and likelihood of harboring a particular pathogen – then mapped the information to the known location of illness outbreaks. The system then ranked all grocery products by likelihood of contamination in a list from which public health officials could test the top 12 suspected foods for contamination and alert the public accordingly.
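The ranking step described above can be sketched in a few lines. This is an illustrative toy, not IBM's actual methodology: the article does not disclose the real scoring model, so the product attributes and the multiplicative scoring rule here are invented assumptions.

```python
# Toy sketch of ranking grocery products by likelihood of contamination.
# The attributes and weighting are invented; IBM's real model is not public.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    shelf_life_days: int        # how long the product stays in circulation
    sale_regions: set           # regions where scanner data shows sales
    pathogen_likelihood: float  # prior chance of harboring the pathogen (0-1)

def rank_suspects(products, case_regions, days_since_first_case, top_n=12):
    """Score each product by spatial overlap with illness cases, shelf-life
    compatibility with the outbreak window, and a pathogen prior, then
    return the top_n candidates for lab testing."""
    scored = []
    for p in products:
        overlap = len(p.sale_regions & case_regions) / max(len(case_regions), 1)
        # A product that perishes before the outbreak window is a weak match.
        shelf_ok = 1.0 if p.shelf_life_days >= days_since_first_case else 0.2
        scored.append((overlap * shelf_ok * p.pathogen_likelihood, p.name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_n]]
```

In the study, `top_n` would be 12: public health officials test only the short list instead of thousands of candidate products.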

A traditional investigation can take from weeks to months and the timing can significantly influence the economic and health impact of a disease outbreak. The typical process employs interviews and questionnaires to trace the contamination source. In 2011, an outbreak of E. coli in Europe took more than 60 days to identify the source, imported fenugreek seeds. By the time the investigation was completed, all the sprouts produced from the seeds had been consumed. Nearly 4,000 people became ill in 16 countries and more than 50 people died before public health officials could pinpoint the source, according to the European Food Safety Authority.

“When there’s an outbreak of foodborne illness, the biggest challenge facing public health officials is the speed at which they can identify the contaminated food source and alert the public,” said Kun Hu, public health research scientist, IBM Research – Almaden in San Jose, Calif. “While traditional methods like interviews and surveys are still necessary, analyzing big data from retail grocery scanners can significantly narrow down the list of contaminants in hours for further lab testing. Our study shows that Big Data and analytics can profoundly reduce investigation time and human error and have a huge impact on public health.”

Already, the method in this study has been applied to an actual E. coli illness outbreak in Norway. With just 17 confirmed cases of infection, public health officials were able to use this methodology to analyze grocery-scanner data related to more than 2,600 possible food products and create a short-list of 10 possible contaminants. Further lab analysis pinpointed the source of contamination down to the batch and lot numbers of the specific product – sausage.


How sequencing foods’ DNA could help us prevent diseases

Davey Alba asks in Wired, what’s almost as important to life as food? Food safety.

Last year, in the US, according to the CDC, one in six people were affected by food-borne diseases, resulting in 128,000 hospitalizations, 3,000 deaths, and an economic burden totaling $80 billion.

Scientists from IBM Research and Mars Incorporated have announced the Sequencing the Food Supply Chain Consortium, a collaborative food safety organization that aims to leverage advances in genomics and analytics to further our understanding of what makes food safe.

The researchers will conduct the largest-ever metagenomics study of our foods, sequencing the DNA and RNA of popular foods in an effort to identify what traits keep food safe and how these can be affected by outside microorganisms and other factors. Eventually, the researchers will extend the project “from farm to fork,” examining materials across the length and breadth of the supply chain.

In this way, IBM Research and Mars are joining many others, including the San Francisco-based startup Hampton Creek, who hope to supercharge food R&D using data analysis. After reinventing Google and Facebook and so many other online operations, the big data movement is now moving into other industries, ranging from medicine and healthcare to the development of new industrial materials.

“We want to get a baseline for safe food ingredients, all the way up and down the food supply chain, including what makes healthy biochemistry,” says James Kaufman, public health manager for IBM Research. “If you can understand what a normal, healthy microbiome looks like, you can figure some things out about how that microbiome will respond to the unknown.”

Essentially, the scientists are hoping to uncover what combination of microbes makes food ingredients safe, and what factors affect the structure of these microbial communities, including exposure to new pathogenic organisms and other impurities that have not yet emerged. It is these unknowns, Kaufman explains, that can eventually make food unsafe – whether through the evolution of new organisms, a misguided attempt at innovating food, or even an intentional act of terrorism.
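One simple way to operationalize "compare a sample to a normal, healthy microbiome" is a distance between relative-abundance profiles. This is an illustrative sketch, not the consortium's actual method: the distance measure (Jensen–Shannon), the organism names, and the 0.3 threshold are all assumptions for demonstration.

```python
# Illustrative sketch (not IBM/Mars code): flag a sample whose microbial
# composition departs too far from a learned "healthy" baseline profile.
import math

def js_distance(p, q):
    """Jensen-Shannon distance between two relative-abundance profiles,
    given as dicts mapping organism -> frequency (frequencies sum to 1)."""
    orgs = set(p) | set(q)
    m = {o: 0.5 * (p.get(o, 0.0) + q.get(o, 0.0)) for o in orgs}
    def kl(a):
        # Kullback-Leibler divergence of profile a from the mixture m.
        return sum(a[o] * math.log2(a[o] / m[o]) for o in a if a[o] > 0)
    return math.sqrt(0.5 * kl(p) + 0.5 * kl(q))

def is_anomalous(sample, baseline, threshold=0.3):
    """Arbitrary threshold; a real system would calibrate it on data."""
    return js_distance(sample, baseline) > threshold
```

A sample identical to the baseline scores zero, while one dominated by an unexpected organism (say, Listeria) scores high and would be flagged for closer inspection.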

And for no particular reason, here are the Beatles, who today in 1969 made their last public performance on the roof of the Apple Corps building in London.

U2 sucks.


IBM, others to help public health officials improve food safety

I normally don’t run company press releases because they are long on possibilities and short on actualities.

But this one may have some public health benefit. And was published in a journal.

Using novel algorithms, visualization, and statistical techniques, a new tool developed by IBM can use information on the date and location of billions of supermarket food items sold each week to quickly identify with high probability a set of potentially “guilty” products with as few as 10 outbreak case reports. This research was published today in the peer-reviewed journal PLOS Computational Biology together with collaborators from Johns Hopkins University, Purdue University and the German Federal Institute for Risk Assessment (BfR).

Foodborne disease outbreaks of recent years demonstrate that, due to increasingly interconnected supply chains, food-related crisis situations have the potential to affect thousands of people, leading to significant healthcare costs, loss of revenue for food companies, and – in the worst cases – death. In the United States alone, one in six people are affected by food-borne diseases each year, resulting in 128,000 hospitalizations, 3,000 deaths, and a nearly $80B economic burden.

When a foodborne disease outbreak is detected, identifying the contaminated food quickly is vital to minimize the spread of illness and limit economic losses. However, the time required to detect it may range from days to weeks, creating extensive strain on the public health system.

Perhaps surprisingly, the petabytes of retail sales data have never before been used to accelerate the identification of contaminated food. In fact, this data already exists as part of the inventory systems used by retailers and distributors today, which manage up to 30,000 food items at any given time with nearly 3,000 of them being perishable.

Recognizing this issue, IBM scientists built a system that automatically identifies, contextualizes and displays data from multiple sources to help reduce, by days or weeks, the time to identify the most likely contaminated sources. It integrates pre-computed retail data with geocoded public health data to allow investigators to see the distribution of suspect foods and, selecting an area of the map, view public health case reports and lab reports from clinical encounters. The algorithm effectively learns from every new report and re-calculates the probability of each food that might be causing the illness.
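"Learns from every new report and re-calculates the probability of each food" reads like a Bayesian update. A minimal sketch under that assumption, with made-up candidate foods and likelihood numbers (the release does not describe the actual model):

```python
# Minimal Bayesian-update sketch of re-ranking candidate foods as case
# reports arrive. The foods and likelihood values below are invented.
def update_posteriors(priors, likelihoods):
    """One update step. Both arguments map food -> probability;
    likelihoods[f] is P(new case's location/time | f is the source)."""
    unnorm = {f: priors[f] * likelihoods[f] for f in priors}
    total = sum(unnorm.values())
    return {f: v / total for f, v in unnorm.items()}

# Start with a uniform belief over candidates, then fold in each report.
posteriors = {"sausage": 1/3, "lettuce": 1/3, "cheese": 1/3}
for report_likelihood in [
    {"sausage": 0.8, "lettuce": 0.3, "cheese": 0.1},  # case near sausage sales
    {"sausage": 0.7, "lettuce": 0.2, "cheese": 0.2},
]:
    posteriors = update_posteriors(posteriors, report_likelihood)
```

After the two simulated reports, sausage dominates the ranking, mirroring how a short list of suspects sharpens with each new case.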

“Predictive analytics based on location, content, and context are driving our ability to quickly discover hidden patterns and relationships from diverse public health and retail data,” said James Kaufman, Manager of Public Health Research for IBM Research. “We are working with our public health clients and with retailers in the U.S. to scale this research prototype and begin focusing on the 1.7B supermarket items sold each week in the United States.”

To demonstrate the system’s effectiveness, IBM scientists worked with the Department of Biological Safety of the German Federal Institute for Risk Assessment. In this demonstration, the scientists simulated 60,000 outbreaks of foodborne disease across 600 products using real-world food sales data from Germany.

Unfortunately, in real life, cases of foodborne disease do not show up all at once; outbreaks are reported over a period of time. Depending on the circumstances, it can take public health officials weeks or months to identify the real cause.

“The success of an outbreak investigation often depends on the willingness of private sector stakeholders to collaborate pro-actively with public health officials. This research illustrates an approach to create significant improvements without the need for any regulatory changes. This can be achieved by combining innovative software technology with already existing data and the willingness to share this information in crisis situations between private and public sector organizations,” said Dr. Bernd Appel, Head of the Department Biological Safety, BfR.