AI and dairy

Why did I write the other day about an artificial intelligence dude who I knew 25 years ago, and whose primary application at the time was ensuring elevators in skyscrapers were efficiently dispatched to floors that needed them – oh, and vision?

Because he made the N.Y. Times with a hyperbolic headline about making Toronto a high-tech hotbed (he didn’t write the headline) and because his AI basics are making their way into food safety.

Caroline Diana of Inquisitr writes that IBM and Cornell University, a leader in dairy research, will use artificial intelligence (AI) to make dairy safe(r) for consumption.

By sequencing and analyzing the DNA and RNA of food microbiomes, researchers plan to create new tools that can help monitor raw milk to detect anomalies that represent food safety hazards and possible fraud.
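The idea of flagging anomalies in a monitored microbiome can be sketched simply: compare a sample’s microbial profile against a historical baseline and flag samples that drift too far. This is a hypothetical illustration only – the taxa, abundances and threshold below are made up, and real genomic pipelines use far richer models than a simple distance check.

```python
# Hypothetical sketch: flag raw-milk samples whose microbial profile
# drifts far from a historical baseline. Taxa and numbers are invented.

BASELINE = {"Lactococcus": 0.55, "Pseudomonas": 0.20,
            "Acinetobacter": 0.15, "Other": 0.10}

def l1_distance(profile, baseline):
    """Sum of absolute differences in relative abundance across all taxa."""
    taxa = set(profile) | set(baseline)
    return sum(abs(profile.get(t, 0.0) - baseline.get(t, 0.0)) for t in taxa)

def is_anomalous(profile, threshold=0.5):
    """Flag a sample whose profile is far from the baseline."""
    return l1_distance(profile, BASELINE) > threshold

# a normal-looking sample and one dominated by an unexpected organism
normal = {"Lactococcus": 0.50, "Pseudomonas": 0.25,
          "Acinetobacter": 0.15, "Other": 0.10}
suspect = {"Listeria": 0.40, "Lactococcus": 0.30, "Other": 0.30}
```

The point is only the shape of the monitoring loop: establish what “normal” looks like for a supply, then measure each new sample against it.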

While many food producers already have rigorous processes in place to ensure food safety hazards are managed appropriately, this pioneering application of genomics will be designed to enable a deeper understanding and characterization of microorganisms on a much larger scale than has previously been possible.

Only a PR thingy could have written this paragraph: “This work could eventually be extended to the larger context of the food supply chain — from farm to fork — and, using artificial intelligence and machine learning, may lead to new insights into how microorganisms interact within a particular environment. A carefully designed informatics infrastructure developed in the IBM Accelerated Discovery Lab, a data and analytics hub for IBM researchers and their clients and partners, will help the team parse and aggregate terabytes of genomic data.”

Better than a poorly designed informatics infrastructure.

Ideas, not geography or institutes, make for public advances

When you haven’t seen a prof dude for 25 years, and then he’s being featured in the N.Y. Times as “The man who helped turn Toronto into a high-tech hotbed,” it’s time for a reality check.

The webs we spin over time.

I was a lousy grad student.

Not the PhD one but the eventually aborted MS one.

I spent hours staring through a microscope – sometimes the electron kind – at tomato cells artificially infected with a fungus called Verticillium.

I spent months trying to extract and sequence DNA from this slimy fungus.

After 2.5 years, I quit.

I became newspaper dude – that’s right kids, in my day, newspapers existed, and we even started our own paper using a Mac SE and a program called PageMaker.

That was 1988.

It was all because of a girl.

Now, I’ve been to Kansas and Brisbane.

All because of another girl.

But after working for a year at a computer trade magazine in Toronto, I landed a job at the University of Waterloo in Jan. 1990, with an Ontario Centre of Excellence.

I had ideas to try out with my science, computing and journalism experience, and the powers that be said sure, play along.

Within a couple of years, I got tired of writing about other people’s science, and wanted to write about my own science, which led to my starting a PhD at the University of Guelph in the fall of 1992.

But there was this prof at the University of Toronto who I helped promote – specifically his artificial intelligence course, which I sat through a couple of times because it was fascinating – and at one point he said to me: all this targeted research money, and all these oversight committees with their expenses, just get rid of them all and give profs some basic funding and see what happens.

I sorta agreed.

I knew my job was BS, the kind that could be exterminated when the next provincial government came around, and when I chatted with Dr. Hinton, he made a lot of sense.

So I soon quit, went and got a PhD, and got to write about what I wanted.

And then Dr. Hinton shows up in the N.Y. Times.

Craig S. Smith writes that as an undergraduate at Cambridge University, Geoffrey Everest Hinton thought a lot about the brain. He wanted to better understand how it worked but was frustrated that no field of study — from physiology and psychology to physics and chemistry — offered real answers.

So he set about building his own computer models to mimic the brain’s process.

“People just thought I was crazy,” said Dr. Hinton, now 69, a Google fellow who is also a professor emeritus of computer science at the University of Toronto.

He wasn’t. He became one of the world’s foremost authorities on artificial intelligence, designing software that imitates how the brain is believed to work. At the same time, Dr. Hinton, who left academia in the United States in part as a personal protest against military funding of research, has helped make Canada a high-tech hotbed.

Dictate a text on your smartphone, search for a photo on Google or, in the not too distant future, ride in a self-driving car, and you will be using technology based partly on Dr. Hinton’s ideas.

His impact on artificial intelligence research has been so deep that some people in the field talk about the “six degrees of Geoffrey Hinton” the way college students once referred to Kevin Bacon’s uncanny connections to so many Hollywood movies.

Dr. Hinton’s students and associates are now leading lights of artificial intelligence research at Apple, Facebook, Google and Uber, and run artificial intelligence programs at the University of Montreal and OpenAI, a nonprofit research company.

“Geoff, at a time when A.I. was in the wilderness, toiled away at building the field and because of his personality, attracted people who then dispersed,” said Ilse Treurnicht, chief executive of Toronto’s MaRS Discovery District, an innovation center that will soon house the Vector Institute, Toronto’s new public-private artificial intelligence research institute, where Dr. Hinton will be chief scientific adviser.

Dr. Hinton also recently set up a Toronto branch of Google Brain, the company’s artificial intelligence research project. His tiny office there is not the grand space filled with gadgets and awards that one might expect for a man at the leading edge of the most transformative field of science today. There isn’t even a chair. Because of damaged vertebrae, he stands up to work and lies down to ride in a car, stretched out on the back seat.

“I sat down in 2005,” said Dr. Hinton, a tall man, with uncombed silvering hair and hooded eyes the color of the North Sea.

Dr. Hinton started out under a constellation of brilliant scientific stars. He was born in Britain and grew up in Bristol, where his father was a professor of entomology and an authority on beetles. He is the great-great-grandson of George Boole, the father of Boolean logic.

His middle name comes from another illustrious relative, George Everest, who surveyed India and made it possible to calculate the height of the world’s tallest mountain that now bears his name.

Dr. Hinton followed the family tradition by going to Cambridge in the late 1960s. But by the time he finished his undergraduate degree, he realized that no one had a clue how people think.

“I got fed up with academia and decided I would rather be a carpenter,” he recalled with evident delight, standing at a high table in Google’s white-on-white cafe here. He was 22 and lasted a year in the trade, although carpentry remains his hobby today.

When artificial intelligence coalesced into a field of study from the fog of information science after World War II, scientists first thought that they could simulate a brain by building neural networks assembled from vast arrays of switches, which would mimic synapses.

But the approach fell out of favor because computers were not powerful enough then to produce meaningful results. Artificial intelligence research turned instead to using logic to solve problems.

As he was having second thoughts about his carpentry skills, Dr. Hinton heard about an artificial intelligence program at the University of Edinburgh and moved there in 1972 to pursue a Ph.D. His adviser favored the logic-based approach, but Dr. Hinton focused on artificial neural networks, which he thought were a better model to simulate human thought.

His study didn’t make him very employable in Britain, though. So, Ph.D. in hand, he turned to the United States to work as a postdoctoral researcher in San Diego with a group of cognitive psychologists who were also interested in neural networks.

They were soon making significant headway.

They began working with a formula called the back propagation algorithm, originally described in a 1974 Harvard Ph.D. thesis by Paul J. Werbos. That algorithm allowed neural networks to learn over time and has since become the workhorse of deep learning, the term now used to describe artificial intelligence based on those networks.
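The mechanics of back propagation can be shown on a toy scale: run an input forward through the network, measure the error at the output, then propagate that error backward through the layers to adjust every weight. Below is a minimal sketch, a two-input, one-hidden-layer network learning XOR in pure Python – nothing like the scale of modern deep learning, but the same workhorse algorithm.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: the classic task a network with no hidden layer cannot learn
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

random.seed(42)
# weights: 2 inputs -> 2 hidden units -> 1 output (plus biases)
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # backward pass: error at the output, then propagated to the hidden layer
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # gradient-descent weight updates
        for j in range(2):
            w_o[j] -= lr * d_out * h[j]
            w_h[j][0] -= lr * d_hid[j] * x[0]
            w_h[j][1] -= lr * d_hid[j] * x[1]
            b_h[j] -= lr * d_hid[j]
        b_o -= lr * d_out
after = total_loss()
```

“Learning over time” here is literal: each pass nudges the weights so the network’s error shrinks, which is exactly what made the algorithm the workhorse of deep learning once computers got fast enough.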

Dr. Hinton moved in 1982 to Carnegie Mellon University in Pittsburgh as a professor, where his work with the algorithm and neural networks allowed computers to produce some “interesting internal representations,” as he put it.

Here’s an example of how the brain produces an internal representation. When you look at a cat — for some reason cats are a favorite subject of artificial intelligence research — light waves bouncing off it hit your retina, which converts the light into electrical impulses that travel along the optic nerve to the brain. Those impulses, of course, look nothing like a cat. The brain, however, reconstitutes those impulses into an internal representation of the cat, and if you close your eyes, you can see it in your mind.

By 2012, computers had become fast enough to allow him and his researchers to create those internal representations as well as reproduce speech patterns that are part of the translation applications we all use today.

He formed a company specializing in speech and photo recognition with two of his students at the University of Toronto. Google bought the business, so Dr. Hinton joined Google half time and continues to work there on creating artificial neural networks.

The deal made Dr. Hinton a wealthy man.

Now he is turning his attention to health care, thinking that artificial intelligence technology could be harnessed to scan lesions for cancer. The combination of the Vector Institute, a surrounding cluster of hospitals and government support, he added, makes Toronto “one of the best places in the world to do it.”

Toronto is not Silicon Valley north.

You got where you are because of your ideas, not geography.


Improving food safety odds in Vegas: AI-based restaurant inspections

Computer science researchers from the University of Rochester have developed an app for health departments that uses natural language processing and artificial intelligence to identify food poisoning-related tweets, connect them to restaurants using geotagging and identify likely hot spots. The team presented the results of its research at the 30th Association for the Advancement of Artificial Intelligence (AAAI) conference in Phoenix, Arizona, in February. The project was supported by grants from the National Science Foundation, the National Institutes of Health and the Intel Science and Technology Center for Pervasive Computing.

Location-based epidemiology is nothing new. John Snow, credited as the world’s first epidemiologist, used maps of London in 1854 to identify the source of the cholera epidemic that was ravaging the city (a neighborhood well) and in the process discovered the connection between the disease and water sources.

However, as the researchers showed, it’s now possible to deduce the source of outbreaks using publicly available social media content and deep learning algorithms trained to recognize the linguistic traits associated with a disease – “I feel nauseous,” for instance.
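The task of recognizing those linguistic traits can be illustrated with a toy scorer. To be clear, nEmesis used a trained language classifier, not a keyword list; the cues and weights below are invented purely to show the shape of the problem – assigning each tweet a sickness score from its language.

```python
# Illustrative only: real systems learn these cues from labeled data.
# These keywords and weights are made up for demonstration.
SICK_CUES = {"nauseous": 2.0, "vomiting": 3.0, "threw up": 3.0,
             "stomach": 1.0, "food poisoning": 4.0}

def sickness_score(tweet):
    """Sum the weights of every sickness cue found in the tweet."""
    text = tweet.lower()
    return sum(weight for cue, weight in SICK_CUES.items() if cue in text)

def looks_sick(tweet, threshold=2.0):
    """Flag a tweet whose sickness score crosses the threshold."""
    return sickness_score(tweet) >= threshold
```

A deep learning model does the same job far more robustly, handling misspellings, slang and context that a keyword list would miss.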

“We don’t need to go door to door like John Snow did,” says Adam Sadilek, a researcher who worked on the project at the University of Rochester and who is now at Google Research. “We can use all this data and mine it automatically.”

The work presented at AAAI described a recent collaboration with the Las Vegas health department, where officials used the app they developed, called nEmesis, to improve the city’s inspection protocols.

Typically, cities (including Las Vegas) use a random system to decide which restaurants to inspect on any given day. The research team convinced Las Vegas officials to replace their random system with a list of possible sites of infection derived using their smart algorithms.

In a controlled experiment, half of the inspections were performed using the random approach and half were done using nEmesis, without the inspectors knowing that any change had occurred in the system. “Each morning we gave the city a list of places where we knew that something was wrong so they could do an inspection of those restaurants,” Sadilek said.

For three months, the system automatically scanned an average of 16,000 tweets from 3,600 users each day. Roughly 1,000 of those tweets could be matched to a specific restaurant and of those, approximately 12 contained content that likely signified food poisoning. They used these tweets to generate a list of highest-priority locations for inspections.
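The last step of that funnel – turning a handful of flagged, geotagged tweets into a daily priority list – amounts to counting flags per venue and ranking. A minimal sketch, with entirely made-up restaurant names and flags standing in for the upstream geotagging and classification:

```python
from collections import Counter

# Hypothetical upstream output: (restaurant, flagged_as_food_poisoning) pairs.
# In the real system these come from geotagging plus the language classifier.
flagged_tweets = [
    ("Burger Barn", True), ("Burger Barn", True), ("Noodle Hut", True),
    ("Taco Stop", False), ("Burger Barn", True), ("Noodle Hut", True),
]

def inspection_priority(tweets):
    """Rank venues by the number of tweets flagged as likely food poisoning."""
    counts = Counter(venue for venue, flagged in tweets if flagged)
    return [venue for venue, _ in counts.most_common()]
```

Each morning’s list handed to the city would simply be the top of this ranking, replacing a random draw of restaurants with venues where the data already suggests something is wrong.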

Analyzing the results of the experiment, they found the tweet-based system led to citations for health violations in 15 percent of inspections, compared to 9 percent using the random system. Some of the inspections led to warnings; others resulted in closures.

The researchers estimate that these improvements to the efficacy of the inspections led to 9,000 fewer food poisoning incidents and 557 fewer hospitalizations in Las Vegas during the course of the study.