Ideas, not geography or institutes, make for public advances

When you haven’t seen a prof dude for 25 years, and then he’s being featured in the N.Y. Times as “The man who helped turn Toronto into a high-tech hotbed,” it’s time for a reality check.

The webs we spin over time.

I was a lousy grad student.

Not the PhD one but the eventually aborted MS one.

I spent hours staring through a microscope – sometimes the electronic kind – at tomato cells artificially infected with a fungus called Verticillium.

I spent months trying to extract and sequence DNA from this slimy fungus.

After 2.5 years, I quit.

I became newspaper dude – that’s right kids, in my day, newspapers existed, and we even started our own paper using a Mac SE and a program called PageMaker.

That was 1988.

It was all because of a girl.

Now, I’ve been to Kansas and Brisbane.

All because of another girl.

But after working for a year at a computer trade magazine in Toronto, I landed a job at the University of Waterloo in Jan. 1990, with an Ontario Centre of Excellence.

I had ideas to try out with my science, computing and journalism experience, and the powers that be said sure, play along.

Within a couple of years, I got tired of writing about other people’s science and wanted to write about my own, which led to me starting a PhD at the University of Guelph in the fall of 1992.

But there was this prof at the University of Toronto who I helped promote – specifically his artificial intelligence course, which I sat through a couple of times because it was fascinating – and at one point he said to me: all this targeted research money, and all these oversight committees with their expenses, just get rid of them all and give profs some basic funding and see what happens.

I sorta agreed.

I knew my job was BS – one that could be exterminated when the next provincial government came around – and when I chatted with Dr. Hinton, he made a lot of sense.

So I soon quit, went and got a PhD, and got to write about what I wanted.

And then Dr. Hinton shows up in the N.Y. Times.

Craig S. Smith writes that as an undergraduate at Cambridge University, Geoffrey Everest Hinton thought a lot about the brain. He wanted to better understand how it worked but was frustrated that no field of study — from physiology and psychology to physics and chemistry — offered real answers.

So he set about building his own computer models to mimic the brain’s process.

“People just thought I was crazy,” said Dr. Hinton, now 69, a Google fellow who is also a professor emeritus of computer science at the University of Toronto.

He wasn’t. He became one of the world’s foremost authorities on artificial intelligence, designing software that imitates how the brain is believed to work. At the same time, Dr. Hinton, who left academia in the United States in part as a personal protest against military funding of research, has helped make Canada a high-tech hotbed.

Dictate a text on your smartphone, search for a photo on Google or, in the not too distant future, ride in a self-driving car, and you will be using technology based partly on Dr. Hinton’s ideas.

His impact on artificial intelligence research has been so deep that some people in the field talk about the “six degrees of Geoffrey Hinton” the way college students once referred to Kevin Bacon’s uncanny connections to so many Hollywood movies.

Dr. Hinton’s students and associates are now leading lights of artificial intelligence research at Apple, Facebook, Google and Uber, and run artificial intelligence programs at the University of Montreal and OpenAI, a nonprofit research company.

“Geoff, at a time when A.I. was in the wilderness, toiled away at building the field and because of his personality, attracted people who then dispersed,” said Ilse Treurnicht, chief executive of Toronto’s MaRS Discovery District, an innovation center that will soon house the Vector Institute, Toronto’s new public-private artificial intelligence research institute, where Dr. Hinton will be chief scientific adviser.

Dr. Hinton also recently set up a Toronto branch of Google Brain, the company’s artificial intelligence research project. His tiny office there is not the grand space filled with gadgets and awards that one might expect for a man at the leading edge of the most transformative field of science today. There isn’t even a chair. Because of damaged vertebrae, he stands up to work and lies down to ride in a car, stretched out on the back seat.

“I sat down in 2005,” said Dr. Hinton, a tall man, with uncombed silvering hair and hooded eyes the color of the North Sea.

Dr. Hinton started out under a constellation of brilliant scientific stars. He was born in Britain and grew up in Bristol, where his father was a professor of entomology and an authority on beetles. He is the great-great-grandson of George Boole, the father of Boolean logic.

His middle name comes from another illustrious relative, George Everest, who surveyed India and made it possible to calculate the height of the world’s tallest mountain that now bears his name.

Dr. Hinton followed the family tradition by going to Cambridge in the late 1960s. But by the time he finished his undergraduate degree, he realized that no one had a clue how people think.

“I got fed up with academia and decided I would rather be a carpenter,” he recalled with evident delight, standing at a high table in Google’s white-on-white cafe here. He was 22 and lasted a year in the trade, although carpentry remains his hobby today.

When artificial intelligence coalesced into a field of study from the fog of information science after World War II, scientists first thought that they could simulate a brain by building neural networks assembled from vast arrays of switches, which would mimic synapses.

But the approach fell out of favor because computers were not powerful enough then to produce meaningful results. Artificial intelligence research turned instead to using logic to solve problems.

As he was having second thoughts about his carpentry skills, Dr. Hinton heard about an artificial intelligence program at the University of Edinburgh and moved there in 1972 to pursue a Ph.D. His adviser favored the logic-based approach, but Dr. Hinton focused on artificial neural networks, which he thought were a better model to simulate human thought.

His study didn’t make him very employable in Britain, though. So, Ph.D. in hand, he turned to the United States to work as a postdoctoral researcher in San Diego with a group of cognitive psychologists who were also interested in neural networks.

They were soon making significant headway.

They began working with a formula called the back propagation algorithm, originally described in a 1974 Harvard Ph.D. thesis by Paul J. Werbos. That algorithm allowed neural networks to learn over time and has since become the workhorse of deep learning, the term now used to describe artificial intelligence based on those networks.
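The core idea of back propagation can be sketched in a few lines: run the network forward, measure the error at the output, then push that error backward through the layers via the chain rule to nudge every weight. The toy network below is purely illustrative — the XOR task, layer sizes, and learning rate are arbitrary demo choices, not Werbos’s or Hinton’s actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic small problem early neural-net demos used
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def loss():
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    return float(np.mean((out - y) ** 2))

before = loss()
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backward pass: propagate the output error, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= h.T @ d_out
    W1 -= X.T @ d_h
after = loss()

print(after < before)  # the network has learned: error went down
```

Nothing here is specific to XOR — the same forward/backward loop is what lets a network build the “interesting internal representations” described below.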

Dr. Hinton moved in 1982 to Carnegie Mellon University in Pittsburgh as a professor, where his work with the algorithm and neural networks allowed computers to produce some “interesting internal representations,” as he put it.

Here’s an example of how the brain produces an internal representation. When you look at a cat — for some reason cats are a favorite subject of artificial intelligence research — light waves bouncing off it hit your retina, which converts the light into electrical impulses that travel along the optic nerve to the brain. Those impulses, of course, look nothing like a cat. The brain, however, reconstitutes those impulses into an internal representation of the cat, and if you close your eyes, you can see it in your mind.

By 2012, computers had become fast enough to allow him and his researchers to create those internal representations as well as reproduce speech patterns that are part of the translation applications we all use today.

He formed a company specializing in speech and photo recognition with two of his students at the University of Toronto. Google bought the business, so Dr. Hinton joined Google half time and continues to work there on creating artificial neural networks.

The deal made Dr. Hinton a wealthy man.

Now he is turning his attention to health care, thinking that artificial intelligence technology could be harnessed to scan lesions for cancer. The combination of the Vector Institute, a surrounding cluster of hospitals and government support, he added, makes Toronto “one of the best places in the world to do it.”

Toronto is not Silicon Valley north.

You got where you are because of your ideas, not geography.

 

Trump’s expected pick for USDA’s top scientist is not a scientist

Catherine Woteki served as the U.S. Department of Agriculture’s undersecretary for research, education and economics in the Obama administration.

She recently told ProPublica: “This position is the chief scientist of the Department of Agriculture. It should be a person who evaluates the scientific body of evidence and moves appropriately from there.”

Sam Clovis — who, according to sources with knowledge of the appointment and members of the agriculture trade press, is President Trump’s pick to oversee the section — appears to have no such credentials.

Clovis has never taken a graduate course in science and is openly skeptical of climate change. While he has a doctorate in public administration and was a tenured professor of business and public policy at Morningside College for 10 years, he has published almost no academic work.

Morningside College sounds like painting with Dali on SCTV’s Sunrise Semester.

Clovis advised Trump on agricultural issues during his presidential campaign and is currently the senior White House advisor within the USDA, a position described by The Washington Post as “Trump’s eyes and ears” at the agency.

Clovis was also responsible for recruiting Carter Page, whose ties to Russia have become the subject of intense speculation and scrutiny, as a Trump foreign policy advisor.

Neither Clovis, nor the USDA, nor the White House responded to questions about Clovis’ nomination to be the USDA’s undersecretary for research, education and economics.

Clovis has a B.S. in political science from the U.S. Air Force Academy, an MBA from Golden Gate University and a doctorate in public administration from the University of Alabama. The University of Alabama canceled the program the year after Clovis graduated, but an old course catalogue provided by the university does not indicate the program required any science courses.

Clovis’ published works do not appear to include any scientific papers. His 2006 dissertation concerned federalism and homeland security preparation, and a search for academic research published by Clovis turned up a handful of journal articles, all related to national security and terrorism.

I can’t make this shit up.

Science PR? I just think about sex

I’ve been a scientist, journalist, writer

Sure, I dabbled in college, didn’t everyone?

But when I got the scientist gig, I quickly realized the PR hacks at whatever university I was at were just that – J-school grad hacks.

Can’t blame them – they went for the stable income and routine.

But it annoyed me to shit when they wouldn’t do their job, for whatever bureaucratic reason.

I soon learned to just write my own press releases whenever the news was relevant or a new paper came out.

If only I could get paid for it.

But that would lead to excruciatingly endless and mind-numbing meetings where I would daydream about sex.

Good science PR has its role, and Nick Stockton of Wired writes that a website called EurekAlert gives journalists access to the latest studies before publication, before those studies are revealed to the general public. Launched 20 years ago this week, EurekAlert has tracked, and in some ways shaped, the way places like Wired cover science in the digital era.

Yes, of course the Internet was going to change science journalism—the same way it was destined to change all journalism. But things could have been very different. EurekAlert gathered much of the latest breaking scientific research in one easily accessible place.

You probably know the basic process of science: Researcher asks a question, forms a hypothesis, tests the hypothesis (again and again and again and again), gets results, submits to journal—where peers review—and if the data is complete and the premise is sound, the journal agrees to publish.

Science happens at universities, government institutions, and private labs. All of those places have some interest in publicizing the cool stuff they do. So those places hire people—public information officers—to alert the public of each new, notable finding. (OK, maybe some aren’t so notable, but whatever.) And the route for that notification is often via journalists.

And, much like the way journalists compete with one another for scoops, journals compete with one another for the attention of journalists to publicize their research. After all, Science, Nature, JAMA, and so on are interested in promoting their brands so they can attract more smart, impactful research. As someone smart once said, science is a contact sport.

So how did EurekAlert become the one clearinghouse to rule them all?

In 1995, an employee for the American Association for the Advancement of Science had an idea for this newfangled Internet thing. Nan Broadbent was the organization’s director of communications, and she imagined a web platform where reporters could access embargoed journal articles. Not just from the AAAS publication Science, but all new research from every journal. Which might seem trite on today’s hyper-aggregated web. But remember, this was an era when anime nerds on Geocities were still struggling to organize their competing Ranma 1/2 fansites into webrings.

Sorta the way Food Safety Network started in 1993. I hosted the Canadian Science Writers Association annual meeting in 1991 while working at the University of Waterloo, where ideas about access were vigorously discussed.

But while EurekAlert democratized journalists’ access to papers, and PIOs’ access to journalists, those who had the resources to develop their own connections—like Grabmeier’s boss, or reporters at big, national outlets—suddenly found themselves competing with, well, everyone. “EurekAlert is kind of like a giant virtual press conference, in that it pulls everybody to the same spot,” says Cristine Russell, a freelance science writer since the 1970s.

That centralization, coupled with the embargo system (which existed well before EurekAlert), has contributed to a longstanding tension within science journalism about what gets covered — and what does not. Embargoes prohibit scientists and journalists from publicizing any new research until a given date has passed, specified by whatever journal is publishing the work.

EurekAlert opened up science in a way that it had never been open before. The site has 12,000 registered reporters from 90 different countries (Getting embargoed access is a minor rite of passage for new science writers). It receives around 200 submissions a day, from 10,000 PIOs representing 6,000 different institutions all around the world. Once an embargo lifts, anyone is free to read the same press releases as the journalists (access to the original papers is a trickier ordeal). EurekAlert gets about 775,000 unique visitors every month. Its articles are translated into French, German, Spanish, Portuguese, Japanese, and Chinese.

Sure, the site is not perfect. It is arguably no longer even necessary—modern journalists are web native-or-die self-aggregators. But that’s the thing. EurekAlert was never trying to be much more than a convenience. Which turned out to be its greatest gift: Making science easy to access.

 

Lifehacker covers the science of Thanksgiving

Lots of folks like to say that food safety in the home is simple. It isn’t. There are a lot of variables and messages have historically been distilled down to a sanitized sound bite. Saying that managing food safety risks is simple isn’t good communication; isn’t true; and, does a disservice to the nerds who want to know more. The nerds that are increasingly populating the Internet as they ask bigger, deeper questions.

Friend of barfblog, and Food Safety Talk podcast co-host extraordinaire, Don Schaffner provides a microbiological catch-phrase that gets used on almost every episode of our show to combat the food-safety-is-simple mantra; when asked about whether something is safe, Don often answers with, ‘it depends’ and ‘it’s complicated’. And then engages around the uncertainties.

Beth Skwarecki of Lifehacker’s Vitals blog called last week to talk about Thanksgiving dinner, turkey preparation and food safety, and provided the platform to get into the ‘it depends’ and ‘it’s complicated’ discussion. Right down to time/temperature combinations equivalent to 165F when it comes to Salmonella destruction.

Here are some excerpts.

How Do You Tell When the Turkey Is Done?

With a thermometer, of course. The color of the meat or juices tells you nothing about doneness, as this guide explains: juices may run pink or clear depending on how stressed the animal was at the time of slaughter (which changes the pH of the meat). The color of the bone depends on the age of the bird at slaughter. And pink meat can depend on roasting conditions or, again, the age of the bird. It’s possible to have pink juices, meat, or bones even when the bird is cooked, or clear juices even when it’s not done yet.

So you’ve got your thermometer. What temperature are you targeting? Old advice was to cook the turkey to 180 degrees Fahrenheit, but that was a recommendation based partly on what texture people liked in their meat, Chapman says. The guidelines were later revised to recommend a minimum safe temperature, regardless of what the meat tastes like, and that temperature is 165. You can cook it hotter, if you like, but that won’t make it any safer.

There’s a way to bend this rule, though. The magic 165 is the temperature that kills Salmonella and friends instantly, but you can also kill the same bacteria by holding the meat at a lower temperature, for a longer time. For example, you can cook your turkey to just 150 degrees, as long as you ensure that it stays at 150 (or higher) for five minutes, something you can verify with a high-tech thermometer like an iGrill. This high-tech thermometer stays in your turkey while it cooks, and sends data to your smartphone. Compare its readings to these time-temperature charts for poultry to make sure your turkey is safe.
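The hold-at-a-lower-temperature rule above is easy to check against a leave-in thermometer’s log. Here is a minimal sketch, assuming per-minute (minutes, °F) readings and using the article’s 150F-for-five-minutes figure — real time-temperature lethality tables (e.g., USDA FSIS) vary with product and fat content, so treat the defaults as illustrative.

```python
def held_long_enough(readings, target_f=150.0, hold_minutes=5.0):
    """readings: list of (minutes_elapsed, temp_f) tuples from a
    leave-in thermometer. Returns True if the meat stayed at or above
    target_f for a continuous stretch of at least hold_minutes."""
    run_start = None
    for minutes, temp in readings:
        if temp >= target_f:
            if run_start is None:
                run_start = minutes  # the hold clock starts here
            if minutes - run_start >= hold_minutes:
                return True
        else:
            run_start = None  # dipped below target; restart the clock
    return False

# Example log: one reading per minute near the end of the cook
log = [(0, 144), (1, 147), (2, 150), (3, 151), (4, 152),
       (5, 152), (6, 151), (7, 151)]
print(held_long_enough(log))  # True: at/above 150F from minute 2 to 7
```

Note the reset when the temperature dips below target: the hold must be continuous, which is exactly why a logging thermometer beats a single spot check.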

The whole piece can be found here.

What is science?

Neil deGrasse Tyson writes, if you cherry-pick scientific truths to serve cultural, economic, religious or political objectives, you undermine the foundations of an informed democracy.

Science distinguishes itself from all other branches of human pursuit by its power to probe and understand the behavior of nature on a level that allows us to predict with accuracy, if not control, the outcomes of events in the natural world. Science especially enhances our health, wealth and security, which is greater today for more people on Earth than at any other time in human history.

The scientific method, which underpins these achievements, can be summarized in one sentence, which is all about objectivity:

Do whatever it takes to avoid fooling yourself into thinking something is true that is not, or that something is not true that is.

This approach to knowing did not take root until early in the 17th century, shortly after the inventions of both the microscope and the telescope. The astronomer Galileo and philosopher Sir Francis Bacon agreed: conduct experiments to test your hypothesis and allocate your confidence in proportion to the strength of your evidence. Since then, we would further learn not to claim knowledge of a newly discovered truth until multiple researchers, and ultimately the majority of researchers, obtain results consistent with one another.

This code of conduct carries remarkable consequences. There’s no law against publishing wrong or biased results. But the cost to you for doing so is high. If your research is re-checked by colleagues, and nobody can duplicate your findings, the integrity of your future research will be held suspect. If you commit outright fraud, such as knowingly faking data, and subsequent researchers on the subject uncover this, the revelation will end your career.

It’s that simple.

This internal, self-regulating system within science may be unique among professions, and it does not require the public or the press or politicians to make it work. But watching the machinery operate may nonetheless fascinate you. Just observe the flow of research papers that grace the pages of peer reviewed scientific journals. This breeding ground of discovery is also, on occasion, a battlefield where scientific controversy is laid bare.

Science discovers objective truths. These are not established by any seated authority, nor by any single research paper. The press, in an effort to break a story, may mislead the public’s awareness of how science works by headlining a just-published scientific paper as “the truth,” perhaps also touting the academic pedigree of the authors. In fact, when drawn from the moving frontier, the truth has not yet been established, so research can land all over the place until experiments converge in one direction or another — or in no direction, itself usually indicating no phenomenon at all.

Once an objective truth is established by these methods, it is not later found to be false. We will not be revisiting the question of whether Earth is round; whether the sun is hot; whether humans and chimps share more than 98 percent identical DNA; or whether the air we breathe is 78 percent nitrogen.

The era of “modern physics,” born with the quantum revolution of the early 20th century and the relativity revolution of around the same time, did not discard Newton’s laws of motion and gravity. What it did was describe deeper realities of nature, made visible by ever-greater methods and tools of inquiry. Modern physics enclosed classical physics as a special case of these larger truths. So the only time science cannot assure objective truths is on the pre-consensus frontier of research, and the only time it could not was before the 17th century, when our senses — inadequate and biased — were the only tools at our disposal to inform us of what was and was not true in our world.

Objective truths exist outside of your perception of reality, such as the value of pi; E = mc²; Earth’s rate of rotation; and that carbon dioxide and methane are greenhouse gases. These statements can be verified by anybody, at any time, and at any place. And they are true, whether or not you believe in them.

Meanwhile, personal truths are what you may hold dear, but have no real way of convincing others who disagree, except by heated argument, coercion or by force. These are the foundations of most people’s opinions. Is Jesus your savior? Is Mohammad God’s last prophet on Earth? Should the government support poor people? Is Beyoncé a cultural queen? Kirk or Picard? Differences in opinion define the cultural diversity of a nation, and should be cherished in any free society. You don’t have to like gay marriage. Nobody will ever force you to gay-marry. But to create a law preventing fellow citizens from doing so is to force your personal truths on others. Political attempts to require that others share your personal truths are, in their limit, dictatorships.

Note further that in science, conformity is anathema to success. The persistent accusation that we are all trying to agree with one another is laughable to scientists attempting to advance their careers. The best way to get famous in your own lifetime is to pose an idea that is counter to prevailing research and which ultimately earns a consistency of observations and experiment. This ensures healthy disagreement at all times while working on the bleeding edge of discovery.

In 1863, a year when he clearly had more pressing matters to attend to, Abraham Lincoln — the first Republican president — signed into existence the National Academy of Sciences, based on an Act of Congress. This august body would provide independent, objective advice to the nation on matters relating to science and technology.

Today, other government agencies with scientific missions serve similar purpose, including NASA, which explores space and aeronautics; NIST, which explores standards of scientific measurement, on which all other measurements are based; DOE, which explores energy in all usable forms; and NOAA, which explores Earth’s weather and climate.

These centers of research, as well as other trusted sources of published science, can empower politicians in ways that lead to enlightened and informed governance. But this won’t happen until the people in charge, and the people who vote for them, come to understand how and why science works.

Neil deGrasse Tyson, author of Space Chronicles: Facing the Ultimate Frontier, is an astrophysicist with the American Museum of Natural History. His radio show StarTalk became the first ever science-based talk show on television, now in its second season with National Geographic Channel.

Really? Rapid test for E. coli improves food safety

Scientists are always talking about new rapid tests for pathogens, but I don’t see them in grocery stores – that’s a place where people buy food.

But, here goes the PR from Western (in Canada).

Dr. Michael Rieder and his team have created a new rapid-test system to detect E. coli O157 – a foodborne bacteria most commonly found in ground meat. The test would allow manufacturers to identify contaminated food quickly before it leaves the processing plant and enters the grocery store. The system was developed as a result of collaborations between Dr. Rieder, associate scientist at Robarts, and London entrepreneurs, Michael Brock and Craig Coombe.

Current conventional testing can take from three to 21 days for definitive results and relies on bacterial culture. By the time the bacteria are identified, the food has been shipped to grocery stores and may have already caused illness. With this current system, two weeks of food may need to be recalled to ensure against cross-contamination.

Dr. Rieder’s rapid-test system would allow food to be sampled at the end of one day, and the results would be available before the food is shipped the next morning. “This means that one day’s production is lost, not five days’ production,” he said. “This has the potential to save companies considerable money, and more importantly could save a lot of people from being exposed to food-borne disease.”

The rapid-test relies on targeting proteins identified by Dr. Rieder’s lab that are only present in the organisms that cause people to become ill. By collaborating with Toronto-based company International Point of Care, the team was able to use flow-through technology to mark the protein with colloidal gold so that it is visible to the naked eye. The process is similar to that used in pregnancy tests – one line for negative, two lines for positive.

Much of the work has been funded through a grant from Mitacs, a provincial program that encourages academic and industrial collaboration. Dr. Rieder credits the success of the project to these collaborations with industry, as well as colleagues at Robarts and Western’s Schulich School of Medicine & Dentistry. Sadly, Michael Brock, a key member of the project, died suddenly just as it was entering its final stages.

The rapid-test system has completed testing at Robarts and the Health Canada-certified Agriculture and Food Laboratory at the University of Guelph. The final application has been submitted to Health Canada for approval.

Dietary pseudoscience: ‘How I fooled millions into thinking chocolate helps weight loss’

When I first met the father of my ex-wife, I asked him if he liked hockey.

He said, nah, that’s all acting.

I watch wrestling.

Who knows what’s genuine anymore.

Science has become an adventure in chasing money rather than chasing evidence.

The following is from http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800 where author John Bohannon explains how he tricked the scientific process.

And it was too easy.

“Slim by Chocolate!” the headlines blared. A team of German researchers had found that people on a low-carb diet lost weight 10 percent faster if they ate a chocolate bar every day. It made the front page of Bild, Europe’s largest daily newspaper, just beneath their update about the Germanwings crash. From there, it ricocheted around the internet and beyond, making news in more than 20 countries and half a dozen languages. It was discussed on television news shows. It appeared in glossy print, most recently in the June issue of Shape magazine (“Why You Must Eat Chocolate Daily”, page 128).

Not only does chocolate accelerate weight loss, the study found, but it leads to healthier cholesterol levels and overall increased well-being. The Bild story quotes the study’s lead author, Johannes Bohannon, Ph.D., research director of the Institute of Diet and Health: “The best part is you can buy chocolate everywhere.”

I am Johannes Bohannon, Ph.D. Well, actually my name is John, and I’m a journalist. I do have a Ph.D., but it’s in the molecular biology of bacteria, not humans. The Institute of Diet and Health? That’s nothing more than a website.

Other than those fibs, the study was 100 percent authentic. My colleagues and I recruited actual human subjects in Germany. We ran an actual clinical trial, with subjects randomly assigned to different diet regimes. And the statistically significant benefits of chocolate that we reported are based on the actual data. It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded.
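Why was it so easy? The full io9 piece explains the trick was measuring many outcomes on tiny groups: Bohannon reports tracking 18 different measurements per subject (a detail not in the excerpt above). Test each at the usual p < 0.05 threshold and, under a simplifying assumption of independent outcomes, the chance of at least one spurious “significant” finding is high:

```python
# Probability of at least one false positive when fishing through
# many outcomes, assuming (for illustration) independent tests at
# the conventional significance threshold.
alpha = 0.05       # per-test significance level
outcomes = 18      # measurements per subject, per the io9 piece

p_false_positive = 1 - (1 - alpha) ** outcomes
print(round(p_false_positive, 2))  # ≈ 0.6
```

Roughly a 60 percent chance of finding something “significant” in pure noise — which is why the chocolate result was, as the author says, meaningless.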

The story is long but thorough.

The complaints are unfounded.

 

Science! MIT experiencing gastroenteritis outbreak

The boffins at MIT Medical need a refresher course in handwashing following an outbreak of acute gastroenteritis on campus.

According to associate medical director Howard Heller, MIT Medical saw two patients with nausea, vomiting, and diarrhea at the beginning of the week and 16 during the day on Wednesday. MIT-EMS responded to a few more cases overnight, and as of noon on Thursday, a small number of additional patients with similar symptoms had come into Urgent Care. Heller notes that cases do not appear to be linked to any specific dorm or dining hall.

“This may or may not be norovirus,” Heller says. Norovirus, which causes a severe and acute form of gastroenteritis, can spread quickly, especially in dense, semi-closed communities. “But whether it’s norovirus or not,” Heller continues, “our response should be the same — paying extra attention to practicing good hygiene. Frequent and consistent hand-washing is the best way to prevent the spread of this type of virus.” 

Go evidence or go home: some online journals will publish fake science, for a fee

A long time ago in a galaxy far, far away – Canada – we ran the national food safety info line.

You can imagine rotary phones, but it was a tad more sophisticated.

The question we grappled with was, whose evidence is right?

We came up with specific guidelines for how to answer questions based on the preponderance of scientific evidence, and were completely transparent about the limitations, using a sound risk analysis framework.

When answers in the scientific literature seemed, uh, weird or missing, we’d go do our own original research and fill in the gaps.

We questioned everything and still do. It’s good for science, but can be hard on relationships.

Any time some hack said, here’s the science to prove something, we would question it.

Apparently with good reason.

As reported by NPR, an elaborate sting carried out by Science found that many online journals are ready to publish bad research in exchange for a credit card number.

The business model of these “predatory publishers” is a scientific version of those phishes from Nigerians who want help transferring a few million dollars into your bank account.

To find out just how common predatory publishing is, Science contributor John Bohannon sent a deliberately faked research article 305 times to online journals. More than half the journals that supposedly reviewed the fake paper accepted it.

“This sting operation,” Bohannon writes, reveals “the contours of an emerging Wild West in academic publishing.”

Online scientific journals are springing up at a great rate. There are thousands out there. Many, such as PLOS, are totally respectable. This “open access” model is making good science more accessible than ever before, without making users pay the hefty subscription fees of traditional print journals.

(It should be noted that Science is among these legacy print journals, charging subscription fees and putting much of its online content behind a pay wall.)

But the Internet has also opened the door to clever imitators who collect fees from scientists eager to get published. “It’s the equivalent of paying someone to publish your work on their blog,” Bohannon tells Shots.

Bohannon says his experiment shows many of these online journals didn’t notice fatal flaws in a paper that should be spotted by “anyone with more than high-school knowledge of chemistry.” And in some cases, even when one of their reviewers pointed out mistakes, the journal accepted the paper anyway — and then asked for hundreds or thousands of dollars in publication fees from the author.

A journalist with an Oxford University PhD in molecular biology, Bohannon fabricated a paper purporting to discover a chemical extracted from lichen that kills cancer cells. Its authors were fake too — nonexistent researchers with African-sounding names based at the fictitious Wassee Institute of Medicine in Asmara, a city in Eritrea.

With help from collaborators at Harvard, Bohannon made the paper look as science-y as possible – but larded it with fundamental errors in method, data and conclusions.

The highest density of acceptances was from journals based in India, where academics are under intense pressure to publish in order to get promotions and bonuses.

“Peer review is in a worse state than anyone guessed,” he says.

The Internet and open access are great tools, but like any technology, hucksters will be there to exploit the tool for personal (PhD) gain.

Maybe the peer-review system needs to open up, and the Internet can help with that.

Clarence Birdseye, science, freezing, and the joys of frozen food

Frozen fruits, veggies and meat are a fabulous invention.

Clarence Birdseye is the man credited with inventing frozen food.

NPR reports that Mark Kurlansky, known for his histories on eclectic topics such as Salt and Cod, has written a new biography about Clarence Birdseye. He joins Weekend Edition host Rachel Martin to talk about the book, called Birdseye: The Adventures of a Curious Man.

On Clarence Birdseye’s outsized curiosity:
"He was somebody who just wanted to know about everything. He wanted to know why people did things the way they did, and couldn’t they be done better. He was very interested in processes. He was very curious about nature. He had a nickname for a while — other kids called him ‘Bugs.’ He was interested in all these slimy little things."

On his time living in Newfoundland (that’s in Canada), the setting for his great inspiration:
"This was just really the wilds. There wasn’t fresh food, and so he became concerned about his wife and baby. He noticed that the Inuits would catch fish and they would freeze as soon as they were out of the water. And what he had discovered was that if you freeze very quickly, you don’t destroy the texture of food. It’s something salt makers knew for centuries — that in crystallization, the faster the crystals are formed, the smaller they are. And the problem with frozen food is that they were frozen barely at the freezing point, and they took days to freeze, and they get huge crystals and they just became mush."

On Birdseye’s entrepreneurial mindset:
"He set up a company in Gloucester, Mass., but he wasn’t so much interested in having a seafood company; he understood perfectly well that there wasn’t much of a market for it. What this company was to do was to develop machinery and ideas and patent them, and sell the patents to people with big money. … The decade before he was born, Bell invented the telephone and Edison the phonograph and the light bulb was invented, and [Birdseye] very much had that idea in his head, that that’s what you did — you came up with an idea and you started a company based on it."

On the source of Birdseye’s passion:
"One thing that was very clear about him was that, in his way, he was a real foodie. … He would go out to farms and talk to farmers about how they could make their processes and their product better suited for industry. Just the reverse of what food lovers think about today … [in] the locavore movement. He was trying to correct the locavore movement."

On Birdseye’s personality:
"He was a very garrulous, likable person and an absolutely brilliant salesman. When he was trying to get investors, he would send entire dinners of frozen food to their Manhattan apartments. He really had a confidence in this product that if people just tried it, they would love it."