Going public(er): Alzheimer’s edition

Alzheimer’s has affected me, indirectly, in ways I still can’t understand, but am trying.

My grandfather died of Alzheimer’s in the 1980s, when I was a 20-something.

It wasn’t pretty. It was so stark that my grandmother took her own life rather than spend winter days going to a hospital where the man she had been with for all those years increasingly didn’t recognize her.

Glen Campbell’s death yesterday from Alzheimer’s, and Gene Wilder’s before that, rekindled lots of conflicting emotions.

In 1995, I was a cocky PhD student and about to be a father for the fourth time, when I was summoned to a meeting with Ken Murray.

I rode my bike to a local golf club, met the former long-time president of Schneiders Meats, and established a lifelong friendship.

When Ken told me about a project he had established at the University of Waterloo in 1993, the Murray Alzheimer Research and Education Program (MAREP), after his wife’s death from the disease, I told him: I can’t understand the hell of being the primary caregiver for so long, but I know something of the side-effects.

Ken had heard I might know something of science-and-society stuff, and he actually funded my faculty position at the University of Guelph for the first two years.

Sure, other weasels at Guelph tried to appropriate the money, but Ken would have none of it.

For over 20 years now, I’ve tried to promote Ken’s vision, of making the best technology available to enhance the safety of the food supply.

I’ve got lots of demons, and what I’ve learned is that it’s best to be public about them. It removes the stigma. It makes one recognize they are not alone. It’s humbling (and that is good).

In addition to being an unbelievably gifted songwriter, session player, and hit maker, Glen Campbell was – directly or not – an outstanding advocate of awareness about Alzheimer’s.

 

Michael Pollack writes in The New York Times obituary that Glen Campbell, the sweet-voiced, guitar-picking son of a sharecropper who became a recording, television and movie star in the 1960s and ’70s, waged a publicized battle with alcohol and drugs and gave his last performances while in the early stages of Alzheimer’s disease, died on Tuesday in Nashville. He was 81.

Tim Plumley, his publicist, said the cause was Alzheimer’s.

Mr. Campbell revealed that he had the disease in June 2011, saying it had been diagnosed six months earlier. He also announced that he was going ahead with a farewell tour later that year in support of his new album, “Ghost on the Canvas.” He and his wife, Kimberly Campbell, told People magazine that they wanted his fans to be aware of his condition if he appeared disoriented onstage.

What was envisioned as a five-week tour turned into 151 shows over 15 months. Mr. Campbell’s last performance was in Napa, Calif., on Nov. 30, 2012, and by the spring of 2014 he had moved into a long-term care and treatment center near Nashville.

Mr. Campbell released his final studio album, “Adiós,” in June. The album, which included guest appearances by Willie Nelson, Vince Gill and three of Mr. Campbell’s children, was recorded after his farewell tour.

That tour and the way he and his family dealt with the sometimes painful progress of his disease were chronicled in a 2014 documentary, “Glen Campbell: I’ll Be Me,” directed by the actor James Keach. Former President Bill Clinton, a fellow Arkansas native, appears in the film and praises Mr. Campbell for having the courage to become a public face of Alzheimer’s.

At the height of his career, Mr. Campbell was one of the biggest names in show business, his appeal based not just on his music but also on his easygoing manner and his apple-cheeked, all-American good looks. From 1969 to 1972 he had his own weekly television show, “The Glen Campbell Goodtime Hour.” He sold an estimated 45 million records and had numerous hits on both the pop and country charts. He was inducted into the Country Music Hall of Fame in 2005.

Decades after Mr. Campbell recorded his biggest hits — including “Wichita Lineman,” “By the Time I Get to Phoenix” and “Galveston” (all written by Jimmy Webb, his frequent collaborator for nearly 40 years) and “Southern Nights” (1977), written by Allen Toussaint, which went to No. 1 on pop as well as country charts — a resurgence of interest in older country stars brought him back onto radio stations.

Like Bobbie Gentry, with whom he recorded two Top 40 duets, and his friend Roger Miller, Mr. Campbell was a hybrid stylist, a crossover artist at home in both country and pop music.

Although he never learned to read music, Mr. Campbell was at ease not just on guitar but also on banjo, mandolin and bass. He wrote in his autobiography, “Rhinestone Cowboy” (1994) — the title was taken from one of his biggest hits — that in 1963 alone his playing and singing were heard on 586 recorded songs.

He could be a cut-up in recording sessions. “With his humor and energetic talents, he kept many a record date in stitches as well as fun to do,” the electric bassist Carol Kaye, who often played alongside Mr. Campbell, said in an interview in 2011. “Even on some of the most boring, he’d stand up and sing some off-color country song — we’d almost have a baby trying not to bust a gut laughing.”

After playing on many Beach Boys sessions, Mr. Campbell became a touring member of the band in late 1964, when its leader, Brian Wilson, decided to leave the road to concentrate on writing and recording. He remained a Beach Boy into the first few months of 1965.

Mr. Campbell had his most famous movie role in 1969, in the original version of “True Grit.” He had the non-singing part of a Texas Ranger who joins forces with John Wayne and Kim Darby to hunt down the killer of Ms. Darby’s father. (Matt Damon had the role in a 2010 remake.) The next year, Mr. Campbell and the New York Jets quarterback Joe Namath played ex-Marines in “Norwood,” based on a novel by Charles Portis, the author of “True Grit.”

Mr. Campbell made his Las Vegas debut in 1970 and, a year later, performed at the White House for President Richard M. Nixon and for Queen Elizabeth II in London.

But his life in those years had a dark side. “Frankly, it is very hard to remember things from the 1970s,” he wrote in his autobiography. Though his recording and touring career was booming, he began drinking heavily and later started using cocaine. He would annoy his friends by quoting from the Bible while high. “The public had no idea how I was living,” he recalled.

In 1980, after his third divorce, he said: “Perhaps I’ve found the secret for an unhappy private life. Every three years I go and marry a girl who doesn’t love me, and then she proceeds to take all my money.” That year, he had a short, tempestuous and very public affair with the singer Tanya Tucker, who was about half his age.

He credited his fourth wife, the former Kimberly Woollen, with keeping him alive and straightening him out — although he would continue to have occasional relapses for many years. He was arrested in November 2003 in Phoenix and charged with extreme drunken driving and leaving the scene of an accident. He pleaded guilty and served 10 nights in jail in 2004.

I cried with many emotions when I first watched his documentary, “I’ll Be Me.”

And I’ll watch it again today with humility, respect and gratitude, to people like Glen and Ken.

Science is my faith; the arena is my church: Hucksters, buskers and Kato Kaelin

It was a visit to the Wizard of Oz museum in Kansas that solidified my belief that hucksters and buskers are ruining the dream of America.

I was willingly living in Kansas with a girl I fell in love with – still am — and two of my Canadian daughters were visiting in 2007, so we decided to travel down the road from Manhattan, Kansas, to Wamego, KS, home of the Wizard of Oz museum.

If there’s genius in David Lynch, it’s predicting things before they happen – the elevated hairdo in Eraserhead made popular by Lyle Lovett, the Dr. Amp personified by radio-talk shill Alex Jones.

Even John Oliver has had a go at Dr. Group, the unfortunately-named chiropractor and Kato-Kaelin–lookalike who shills science with the veracity of a Kardashian.

Which reminds me, I gotta tell Chapman to shorten his bio.

In academia, when starting out as an assistant professor, most feel a need to put everything they farted out that passed peer review into their bio, including boy scout leader and hockey coach.

As time goes on, the bio becomes shorter, because you’ve earned that full prof title, and even you don’t give a shit about repeating everything you’ve toiled over for the past 40 years – you also correctly reason that no one else gives a shit either, and if they do, google it.

Check out the degrees behind Dr. Group.

The struggle to confirm who is legitimate, and why, continues, and is often laid bare in the fanciest of university-type institutions.

Do these people really care about learning, or are they just there to make a paycheck, get their retirement and go through the motions?

I won’t go into the latest details about Gwyneth preaching that livers and kidneys can be detoxed by handstands, why Canada’s Dr. Jen Gunter has taken on debunking her Gwynethness, or why a top uni in Spain scrapped homeopathy — because it’s nonsense.

Instead I give you the wisdom of John Oliver.

These sponges go to 14

Friend of barfblog, and frequent contributor (and modeler extraordinaire), Don Schaffner writes,

I’m always interested in the way microbiology is perceived in the popular culture. When peer reviewed research articles get wide pick up, I’m especially interested. This happened recently with an article on kitchen sponges. Rob Mancini has already blogged about this right here on barfblog, but I’d like to share my thoughts and perspectives.

The fact that kitchen sponges can be massively contaminated with high levels of microorganisms is not news. This has been shown repeatedly in the peer reviewed literature.

What was apparently new in this latest article was the application of “454–pyrosequencing of 16S rRNA genes and fluorescence in situ hybridization coupled with confocal laser scanning microscopy (FISH–CLSM)”. And I get it. Molecular-based methods are all the rage, and the ability to visualize the presence of microorganisms is very important.

But we shouldn’t lose sight of the fact that experimental design and proper experimental controls are important no matter what sort of science you’re doing. When I dug a little deeper into the above article, I was shocked to learn that all of their conclusions were based on a sample of 14 sponges. That’s right, 14 sponges. Furthermore, the authors claim that “sponge sanitation methods appear not sufficient to efectively [sic] reduce the bacterial load in kitchen sponges”. How did they know this? Well, when they were collecting those 14 sponges they asked the sponge owners “to specify whether they regularly apply special measures to clean their sponge. The procedures mentioned were: heating in a microwave and rinsing with hot, soapy water”. Of the 14 sponges collected, five owners reported applying special measures, although the authors do not say which of the five used microwaving and which used rinsing with hot, soapy water.
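
To put a number on how thin n = 14 is, here’s a quick back-of-envelope sketch (mine, not the paper’s) of the uncertainty around those 5-of-14 owners who reported sanitizing, using the standard Wilson score interval for a binomial proportion; the function name is my own:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

# 5 of 14 sponge owners reported applying "special measures"
lo, hi = wilson_interval(5, 14)
print(f"5/14 sanitize: 95% CI roughly {lo:.0%} to {hi:.0%}")
```

The interval spans roughly 16% to 61% — with 14 sponges, you can barely say anything about sanitizing habits, let alone whether sanitizing works.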

What’s my takeaway message from this? By all means, go out there and use the hot new technology. But please don’t forget that sample size is very important, and while surveying people about what they do might be convenient, it’s no substitute for actually investigating. And if I had to predict the effect of washing sponges with hot soapy water? Probably no different than washing in cold soapy water.

And if anybody out there has access to “454–pyrosequencing of 16S rRNA genes and fluorescence in situ hybridization coupled with confocal laser scanning microscopy (FISH–CLSM)” and wants to collaborate, I am available.

Ideas, not geography or institutes, make for public advances

When you haven’t seen a prof dude for 25 years, and then he’s being featured in the N.Y. Times as “The man who helped turn Toronto into a high-tech hotbed,” it’s time for a reality check.

The webs we spin over time.

I was a lousy grad student.

Not the PhD one but the eventually aborted MS one.

I spent hours staring through a microscope – sometimes the electron kind – at tomato cells artificially infected with a fungus called Verticillium.

I spent months trying to extract and sequence DNA from this slimy fungus.

After 2.5 years, I quit.

I became newspaper dude – that’s right kids, in my day, newspapers existed, and we even started our own paper using a Mac SE and a program called PageMaker.

That was 1988.

It was all because of a girl.

Now, I’ve been to Kansas and Brisbane.

All because of another girl.

But after working for a year at a computer trade magazine in Toronto, I landed a job at the University of Waterloo in Jan. 1990, with an Ontario Centre of Excellence.

I had ideas to try out with my science, computing and journalism experience, and the powers that be said sure, play along.

Within a couple of years, I got tired of writing about other people’s science, and wanted to write about my own science, which led to me starting a PhD at the University of Guelph in the fall of 1992.

But there was this prof at the University of Toronto who I helped promote – specifically his artificial intelligence course, which I sat through a couple of times because it was fascinating – and at one point he said to me: all this targeted research money, and all these oversight committees with their expenses, just get rid of them all and give profs some basic funding and see what happens.

I sorta agreed.

I knew my job was BS, that could be exterminated when the next provincial government came around, and when chatting with Dr. Hinton, he made a lot of sense.

So I soon quit, went and got a PhD, and got to write about what I wanted.

And then Dr. Hinton shows up in the N.Y. Times.

Craig S. Smith writes that, as an undergraduate at Cambridge University, Geoffrey Everest Hinton thought a lot about the brain. He wanted to better understand how it worked but was frustrated that no field of study — from physiology and psychology to physics and chemistry — offered real answers.

So he set about building his own computer models to mimic the brain’s process.

“People just thought I was crazy,” said Dr. Hinton, now 69, a Google fellow who is also a professor emeritus of computer science at the University of Toronto.

He wasn’t. He became one of the world’s foremost authorities on artificial intelligence, designing software that imitates how the brain is believed to work. At the same time, Dr. Hinton, who left academia in the United States in part as a personal protest against military funding of research, has helped make Canada a high-tech hotbed.

Dictate a text on your smartphone, search for a photo on Google or, in the not too distant future, ride in a self-driving car, and you will be using technology based partly on Dr. Hinton’s ideas.

His impact on artificial intelligence research has been so deep that some people in the field talk about the “six degrees of Geoffrey Hinton” the way college students once referred to Kevin Bacon’s uncanny connections to so many Hollywood movies.

Dr. Hinton’s students and associates are now leading lights of artificial intelligence research at Apple, Facebook, Google and Uber, and run artificial intelligence programs at the University of Montreal and OpenAI, a nonprofit research company.

“Geoff, at a time when A.I. was in the wilderness, toiled away at building the field and because of his personality, attracted people who then dispersed,” said Ilse Treurnicht, chief executive of Toronto’s MaRS Discovery District, an innovation center that will soon house the Vector Institute, Toronto’s new public-private artificial intelligence research institute, where Dr. Hinton will be chief scientific adviser.

Dr. Hinton also recently set up a Toronto branch of Google Brain, the company’s artificial intelligence research project. His tiny office there is not the grand space filled with gadgets and awards that one might expect for a man at the leading edge of the most transformative field of science today. There isn’t even a chair. Because of damaged vertebrae, he stands up to work and lies down to ride in a car, stretched out on the back seat.

“I sat down in 2005,” said Dr. Hinton, a tall man, with uncombed silvering hair and hooded eyes the color of the North Sea.

Dr. Hinton started out under a constellation of brilliant scientific stars. He was born in Britain and grew up in Bristol, where his father was a professor of entomology and an authority on beetles. He is the great-great-grandson of George Boole, the father of Boolean logic.

His middle name comes from another illustrious relative, George Everest, who surveyed India and made it possible to calculate the height of the world’s tallest mountain that now bears his name.

Dr. Hinton followed the family tradition by going to Cambridge in the late 1960s. But by the time he finished his undergraduate degree, he realized that no one had a clue how people think.

“I got fed up with academia and decided I would rather be a carpenter,” he recalled with evident delight, standing at a high table in Google’s white-on-white cafe here. He was 22 and lasted a year in the trade, although carpentry remains his hobby today.

When artificial intelligence coalesced into a field of study from the fog of information science after World War II, scientists first thought that they could simulate a brain by building neural networks assembled from vast arrays of switches, which would mimic synapses.

But the approach fell out of favor because computers were not powerful enough then to produce meaningful results. Artificial intelligence research turned instead to using logic to solve problems.

As he was having second thoughts about his carpentry skills, Dr. Hinton heard about an artificial intelligence program at the University of Edinburgh and moved there in 1972 to pursue a Ph.D. His adviser favored the logic-based approach, but Dr. Hinton focused on artificial neural networks, which he thought were a better model to simulate human thought.

His study didn’t make him very employable in Britain, though. So, Ph.D. in hand, he turned to the United States to work as a postdoctoral researcher in San Diego with a group of cognitive psychologists who were also interested in neural networks.

They were soon making significant headway.

They began working with a formula called the back propagation algorithm, originally described in a 1974 Harvard Ph.D. thesis by Paul J. Werbos. That algorithm allowed neural networks to learn over time and has since become the workhorse of deep learning, the term now used to describe artificial intelligence based on those networks.
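
The back propagation idea in that paragraph is easy to see in miniature. This is not Dr. Hinton’s code — just a sketch of the chain-rule weight update at the heart of the algorithm, shown for a single sigmoid neuron (the degenerate one-layer case) learning logical OR; the function name, learning rate and epoch count are all illustrative choices:

```python
import math
import random

def train_neuron(data, epochs=2000, lr=0.5):
    """Gradient descent on one sigmoid neuron: the one-layer
    special case of back propagation (squared-error loss)."""
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            out = 1 / (1 + math.exp(-z))             # forward pass
            grad = (out - target) * out * (1 - out)  # dLoss/dz via chain rule
            w[0] -= lr * grad * x[0]                 # backward pass: push the
            w[1] -= lr * grad * x[1]                 # error back through the weights
            b -= lr * grad
    return w, b

OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(OR)
for x, t in OR:
    out = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
    print(x, round(out, 2))
```

In a deep network the same chain-rule step is applied layer by layer, which is what lets the “internal representations” described below emerge.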

Dr. Hinton moved in 1982 to Carnegie Mellon University in Pittsburgh as a professor, where his work with the algorithm and neural networks allowed computers to produce some “interesting internal representations,” as he put it.

Here’s an example of how the brain produces an internal representation. When you look at a cat — for some reason cats are a favorite subject of artificial intelligence research — light waves bouncing off it hit your retina, which converts the light into electrical impulses that travel along the optic nerve to the brain. Those impulses, of course, look nothing like a cat. The brain, however, reconstitutes those impulses into an internal representation of the cat, and if you close your eyes, you can see it in your mind.

By 2012, computers had become fast enough to allow him and his researchers to create those internal representations as well as reproduce speech patterns that are part of the translation applications we all use today.

He formed a company specializing in speech and photo recognition with two of his students at the University of Toronto. Google bought the business, so Dr. Hinton joined Google half time and continues to work there on creating artificial neural networks.

The deal made Dr. Hinton a wealthy man.

Now he is turning his attention to health care, thinking that artificial intelligence technology could be harnessed to scan lesions for cancer. The combination of the Vector Institute, a surrounding cluster of hospitals and government support, he added, makes Toronto “one of the best places in the world to do it.”

Toronto is not Silicon Valley north.

You got where you are because of your ideas, not geography.

 

Trump’s expected pick for USDA’s top scientist is not a scientist

Catherine Woteki served as the U.S. Department of Agriculture’s undersecretary for research, education and economics in the Obama administration.

She recently told Pro Publica “This position is the chief scientist of the Department of Agriculture. It should be a person who evaluates the scientific body of evidence and moves appropriately from there.”

Sam Clovis — who, according to sources with knowledge of the appointment and members of the agriculture trade press, is President Trump’s pick to oversee the section — appears to have no such credentials.

Clovis has never taken a graduate course in science and is openly skeptical of climate change. While he has a doctorate in public administration and was a tenured professor of business and public policy at Morningside College for 10 years, he has published almost no academic work.

Morningside College sounds like painting with Dali (below) on SCTV’s Sunrise Semester.

Clovis advised Trump on agricultural issues during his presidential campaign and is currently the senior White House advisor within the USDA, a position described by The Washington Post as “Trump’s eyes and ears” at the agency.

Clovis was also responsible for recruiting Carter Page, whose ties to Russia have become the subject of intense speculation and scrutiny, as a Trump foreign policy advisor.

Neither Clovis, nor the USDA, nor the White House responded to questions about Clovis’ nomination to be the USDA’s undersecretary for research, education and economics.

Clovis has a B.S. in political science from the U.S. Air Force Academy, an MBA from Golden Gate University and a doctorate in public administration from the University of Alabama. The University of Alabama canceled the program the year after Clovis graduated, but an old course catalogue provided by the university does not indicate the program required any science courses.

Clovis’ published works do not appear to include any scientific papers. His 2006 dissertation concerned federalism and homeland security preparation, and a search for academic research published by Clovis turned up a handful of journal articles, all related to national security and terrorism.

I can’t make this shit up.

Science PR? I just think about sex

I’ve been a scientist, journalist, writer

Sure, I dabbled in college, didn’t everyone?

But when I got the scientist gig, I quickly realized the PR hacks at whatever university I was at were just that – J-school grad hacks.

Can’t blame them, went for the stable income and routine.

But it annoyed me to shit when they wouldn’t do their job, for whatever bureaucratic reason.

I soon learned to just write my own press releases whenever the news was relevant or a new paper came out.

If only I could get paid for it.

But that would lead to excruciatingly endless and mind-numbing meetings where I would daydream about sex.

Good science PR has its role, and Nick Stockton of Wired writes that a website called EurekAlert gives journalists access to the latest studies before publication, before those studies are revealed to the general public. Launched 20 years ago this week, EurekAlert has tracked, and in some ways shaped, the way places like Wired cover science in the digital era.

Yes, of course the Internet was going to change science journalism—the same way it was destined to change all journalism. But things could have been very different. EurekAlert gathered much of the latest breaking scientific research in one easily accessible place.

You probably know the basic process of science: Researcher asks a question, forms a hypothesis, tests the hypothesis (again and again and again and again), gets results, submits to journal—where peers review—and if the data is complete and the premise is sound, the journal agrees to publish.

Science happens at universities, government institutions, and private labs. All of those places have some interest in publicizing the cool stuff they do. So those places hire people—public information officers—to alert the public of each new, notable finding. (OK, maybe some aren’t so notable, but whatever.) And the route for that notification is often via journalists.

And, much like the way journalists compete with one another for scoops, journals compete with one another for the attention of journalists to publicize their research. After all, Science, Nature, JAMA, and so on are interested in promoting their brands so they can attract more smart, impactful research. As someone smart once said, science is a contact sport.

So how did EurekAlert become the one clearinghouse to rule them all?

In 1995, an employee for the American Association for the Advancement of Science had an idea for this newfangled Internet thing. Nan Broadbent was the organization’s director of communications, and she imagined a web platform where reporters could access embargoed journal articles. Not just from the AAAS publication Science, but all new research from every journal. Which might seem trite on today’s hyper-aggregated web. But remember, this was an era when anime nerds on Geocities were still struggling to organize their competing Ranma 1/2 fansites into webrings.

Sorta the way Food Safety Network started in 1993. I hosted the Canadian Science Writers Association annual meeting in 1991 while working at the University of Waterloo, where ideas about access were vigorously discussed.

But while EurekAlert democratized journalists’ access to papers, and PIOs’ access to journalists, those who had the resources to develop their own connections—like Grabmeier’s boss, or reporters at big, national outlets—suddenly found themselves competing with, well, everyone. “EurekAlert is kind of like a giant virtual press conference, in that it pulls everybody to the same spot,” says Cristine Russell, a freelance science writer since the 1970s.

That centralization, coupled with the embargo system (which has existed way before EurekAlert), has contributed to a longstanding tension within science journalism about what gets covered—and what does not. Embargoes prohibit scientists and journalists from publicizing any new research until a given date has passed, specified by whatever journal is publishing the work.

EurekAlert opened up science in a way that it had never been open before. The site has 12,000 registered reporters from 90 different countries (Getting embargoed access is a minor rite of passage for new science writers). It receives around 200 submissions a day, from 10,000 PIOs representing 6,000 different institutions all around the world. Once an embargo lifts, anyone is free to read the same press releases as the journalists (access to the original papers is a trickier ordeal). EurekAlert gets about 775,000 unique visitors every month. Its articles are translated into French, German, Spanish, Portuguese, Japanese, and Chinese.

Sure, the site is not perfect. It is arguably no longer even necessary—modern journalists are web native-or-die self-aggregators. But that’s the thing. EurekAlert was never trying to be much more than a convenience. Which turned out to be its greatest gift: Making science easy to access.

 

Lifehacker covers the science of Thanksgiving

Lots of folks like to say that food safety in the home is simple. It isn’t. There are a lot of variables, and messages have historically been distilled down to a sanitized sound bite. Saying that managing food safety risks is simple isn’t good communication, isn’t true, and does a disservice to the nerds who want to know more. The nerds who are increasingly populating the Internet as they ask bigger, deeper questions.

Friend of barfblog, and Food Safety Talk podcast co-host extraordinaire, Don Schaffner provides a microbiological catch-phrase that gets used on almost every episode of our show to combat the food-safety-is-simple mantra: when asked about whether something is safe, Don often answers with ‘it depends’ and ‘it’s complicated’. And then engages around the uncertainties.

Beth Skwarecki of Lifehacker’s Vitals blog called last week to talk about Thanksgiving dinner, turkey preparation and food safety, and provided the platform to get into the ‘it depends’ and ‘it’s complicated’ discussion. Right down to time/temperature combinations equivalent to 165F when it comes to Salmonella destruction.

Here are some excerpts.

How Do You Tell When the Turkey Is Done?

With a thermometer, of course. The color of the meat or juices tells you nothing about doneness, as this guide explains: juices may run pink or clear depending on how stressed the animal was at the time of slaughter (which changes the pH of the meat). The color of the bone depends on the age of the bird at slaughter. And pink meat can depend on roasting conditions or, again, the age of the bird. It’s possible to have pink juices, meat, or bones even when the bird is cooked, or clear juices even when it’s not done yet.

So you’ve got your thermometer. What temperature are you targeting? Old advice was to cook the turkey to 180 degrees Fahrenheit, but that was a recommendation based partly on what texture people liked in their meat, Chapman says. The guidelines were later revised to recommend a minimum safe temperature, regardless of what the meat tastes like, and that temperature is 165. You can cook it hotter, if you like, but that won’t make it any safer.

There’s a way to bend this rule, though. The magic 165 is the temperature that kills Salmonella and friends instantly, but you can also kill the same bacteria by holding the meat at a lower temperature, for a longer time. For example, you can cook your turkey to just 150 degrees, as long as you ensure that it stays at 150 (or higher) for five minutes, something you can verify with a high-tech thermometer like an iGrill. This high-tech thermometer stays in your turkey while it cooks, and sends data to your smartphone. Compare its readings to these time-temperature charts for poultry to make sure your turkey is safe.
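
The time-temperature trade-off described above follows the classic log-linear (z-value) lethality model: t = t_ref * 10^((T_ref - T) / z). Here is a minimal sketch, assuming an illustrative z-value of 10F and an illustrative 10-second reference hold at 165F; these constants are placeholders, not regulatory values, so consult the USDA FSIS time-temperature tables before cooking by them:

```python
def equivalent_hold_time(temp_f, ref_temp_f=165.0, ref_time_s=10.0, z_value_f=10.0):
    """Hold time (seconds) at temp_f giving the same Salmonella lethality
    as ref_time_s at ref_temp_f, per the log-linear z-value model:
        t = t_ref * 10 ** ((T_ref - T) / z)
    All constants are illustrative assumptions, not USDA FSIS values."""
    return ref_time_s * 10 ** ((ref_temp_f - temp_f) / z_value_f)

# Every 10F drop (one z-value) multiplies the required hold time by 10
for t in (165, 160, 155, 150):
    print(f"{t}F -> hold about {equivalent_hold_time(t) / 60:.1f} min")
```

With these assumed constants, 150F works out to roughly five minutes, which lines up with the hold-at-150 advice in the excerpt.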

The whole piece can be found here.

What is science?

Neil deGrasse Tyson writes, if you cherry-pick scientific truths to serve cultural, economic, religious or political objectives, you undermine the foundations of an informed democracy.

Science distinguishes itself from all other branches of human pursuit by its power to probe and understand the behavior of nature on a level that allows us to predict with accuracy, if not control, the outcomes of events in the natural world. Science especially enhances our health, wealth and security, which is greater today for more people on Earth than at any other time in human history.

The scientific method, which underpins these achievements, can be summarized in one sentence, which is all about objectivity:

Do whatever it takes to avoid fooling yourself into thinking something is true that is not, or that something is not true that is.

This approach to knowing did not take root until early in the 17th century, shortly after the inventions of both the microscope and the telescope. The astronomer Galileo and philosopher Sir Francis Bacon agreed: conduct experiments to test your hypothesis and allocate your confidence in proportion to the strength of your evidence. Since then, we would further learn not to claim knowledge of a newly discovered truth until multiple researchers, and ultimately the majority of researchers, obtain results consistent with one another.

This code of conduct carries remarkable consequences. There’s no law against publishing wrong or biased results. But the cost to you for doing so is high. If your research is re-checked by colleagues, and nobody can duplicate your findings, the integrity of your future research will be held suspect. If you commit outright fraud, such as knowingly faking data, and subsequent researchers on the subject uncover this, the revelation will end your career.

It’s that simple.

This internal, self-regulating system within science may be unique among professions, and it does not require the public or the press or politicians to make it work. But watching the machinery operate may nonetheless fascinate you. Just observe the flow of research papers that grace the pages of peer reviewed scientific journals. This breeding ground of discovery is also, on occasion, a battlefield where scientific controversy is laid bare.

Science discovers objective truths. These are not established by any seated authority, nor by any single research paper. The press, in an effort to break a story, may mislead the public’s awareness of how science works by headlining a just-published scientific paper as “the truth,” perhaps also touting the academic pedigree of the authors. In fact, when drawn from the moving frontier, the truth has not yet been established, so research can land all over the place until experiments converge in one direction or another — or in no direction, itself usually indicating no phenomenon at all.

Once an objective truth is established by these methods, it is not later found to be false. We will not be revisiting the question of whether Earth is round; whether the sun is hot; whether humans and chimps share more than 98 percent identical DNA; or whether the air we breathe is 78 percent nitrogen.

The era of “modern physics,” born with the quantum revolution of the early 20th century and the relativity revolution of around the same time, did not discard Newton’s laws of motion and gravity. What it did was describe deeper realities of nature, made visible by ever-greater methods and tools of inquiry. Modern physics enclosed classical physics as a special case of these larger truths. So the only times science cannot assure objective truths is on the pre-consensus frontier of research, and the only time it couldn’t was before the 17th century, when our senses — inadequate and biased — were the only tools at our disposal to inform us of what was and was not true in our world.

Objective truths exist outside of your perception of reality, such as the value of pi; E = mc²; Earth’s rate of rotation; and that carbon dioxide and methane are greenhouse gases. These statements can be verified by anybody, at any time, and at any place. And they are true, whether or not you believe in them.

Meanwhile, personal truths are what you may hold dear, but have no real way of convincing others who disagree, except by heated argument, coercion or by force. These are the foundations of most people’s opinions. Is Jesus your savior? Is Mohammad God’s last prophet on Earth? Should the government support poor people? Is Beyoncé a cultural queen? Kirk or Picard? Differences in opinion define the cultural diversity of a nation, and should be cherished in any free society. You don’t have to like gay marriage. Nobody will ever force you to gay-marry. But to create a law preventing fellow citizens from doing so is to force your personal truths on others. Political attempts to require that others share your personal truths are, in their limit, dictatorships.

Note further that in science, conformity is anathema to success. The persistent accusations that we are all trying to agree with one another are laughable to scientists attempting to advance their careers. The best way to get famous in your own lifetime is to pose an idea that is counter to prevailing research and which ultimately earns a consistency of observations and experiment. This ensures healthy disagreement at all times while working on the bleeding edge of discovery.

In 1863, a year when he clearly had more pressing matters to attend to, Abraham Lincoln — the first Republican president — signed into existence the National Academy of Sciences, based on an Act of Congress. This august body would provide independent, objective advice to the nation on matters relating to science and technology.

Today, other government agencies with scientific missions serve similar purpose, including NASA, which explores space and aeronautics; NIST, which explores standards of scientific measurement, on which all other measurements are based; DOE, which explores energy in all usable forms; and NOAA, which explores Earth’s weather and climate.

These centers of research, as well as other trusted sources of published science, can empower politicians in ways that lead to enlightened and informed governance. But this won’t happen until the people in charge, and the people who vote for them, come to understand how and why science works.

Neil deGrasse Tyson, author of Space Chronicles: Facing the Ultimate Frontier, is an astrophysicist with the American Museum of Natural History. His radio show StarTalk became the first ever science-based talk show on television, now in its second season with National Geographic Channel.

Really? Rapid test for E. coli improves food safety

Scientists are always talking about new rapid tests for pathogens, but I don’t see them in grocery stores – that’s a place where people buy food.

But, here goes the PR from Western (in Canada).

Dr. Michael Rieder and his team have created a new rapid-test system to detect E. coli O157 – a foodborne bacterium most commonly found in ground meat. The test would allow manufacturers to identify contaminated food quickly, before it leaves the processing plant and enters the grocery store. The system was developed as a result of collaborations between Dr. Rieder, associate scientist at Robarts, and London entrepreneurs Michael Brock and Craig Coombe.

Current conventional testing can take from three to 21 days for definitive results and relies on bacterial culture. By the time the bacteria are identified, the food has been shipped to grocery stores and may have already caused illness. With this current system, two weeks of food may need to be recalled to ensure against cross-contamination.

Dr. Rieder’s rapid-test system would allow food to be sampled at the end of one day, and the results would be available before the food is shipped the next morning. “This means that one day’s production is lost, not five days’ production,” he said. “This has the potential to save companies considerable money, and more importantly could save a lot of people from being exposed to food-borne disease.”

The rapid-test relies on targeting proteins identified by Dr. Rieder’s lab that are only present in the organisms that cause people to become ill. By collaborating with Toronto-based company International Point of Care, the team was able to use flow-through technology to mark the protein with colloidal gold so that it is visible to the naked eye. The process is similar to that used in pregnancy tests – one line for negative, two lines for positive.

Much of the work has been funded through a grant from Mitacs, a provincial program that encourages academic and industrial collaboration. Dr. Rieder credits the success of the project to these collaborations with industry, as well as colleagues at Robarts and Western’s Schulich School of Medicine & Dentistry. Sadly, Michael Brock, a key member of the project, died suddenly just as it was entering its final stages.

The rapid-test system has completed testing at Robarts and the Health Canada-certified Agriculture and Food Laboratory at the University of Guelph. The final application has been submitted to Health Canada for approval.

Dietary pseudoscience: ‘How I fooled millions into thinking chocolate helps weight loss’

When I first met the father of my ex-wife, I asked him if he liked hockey.

He said, nah, that’s all acting.

I watch wrestling.

Who knows what’s genuine anymore.

Science has become an adventure in chasing money rather than chasing evidence.

The following is from http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800 where author John Bohannon explains how he tricked the scientific process.

And it was too easy.

“Slim by Chocolate!” the headlines blared. A team of German researchers had found that people on a low-carb diet lost weight 10 percent faster if they ate a chocolate bar every day. It made the front page of Bild, Europe’s largest daily newspaper, just beneath their update about the Germanwings crash. From there, it ricocheted around the internet and beyond, making news in more than 20 countries and half a dozen languages. It was discussed on television news shows. It appeared in glossy print, most recently in the June issue of Shape magazine (“Why You Must Eat Chocolate Daily”, page 128).

Not only does chocolate accelerate weight loss, the study found, but it leads to healthier cholesterol levels and overall increased well-being. The Bild story quotes the study’s lead author, Johannes Bohannon, Ph.D., research director of the Institute of Diet and Health: “The best part is you can buy chocolate everywhere.”

I am Johannes Bohannon, Ph.D. Well, actually my name is John, and I’m a journalist. I do have a Ph.D., but it’s in the molecular biology of bacteria, not humans. The Institute of Diet and Health? That’s nothing more than a website.

Other than those fibs, the study was 100 percent authentic. My colleagues and I recruited actual human subjects in Germany. We ran an actual clinical trial, with subjects randomly assigned to different diet regimes. And the statistically significant benefits of chocolate that we reported are based on the actual data. It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded.
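Why were statistically significant results so easy to get from a real trial? Measure enough outcomes on small groups and chance alone will hand you a "significant" finding. A quick sketch, using illustrative numbers (18 outcomes and the usual 0.05 significance cutoff are assumptions for the demonstration, not figures from the study itself):

```python
# Sketch: with many measured outcomes and no real effect anywhere,
# the chance that at least one outcome looks "significant" is high.
import random

random.seed(1)

def fraction_with_false_positive(n_outcomes=18, alpha=0.05, trials=10_000):
    """Simulate trials where no outcome has a real effect; count how often
    at least one outcome's p-value falls below alpha by chance alone."""
    hits = 0
    for _ in range(trials):
        # Under the null hypothesis, each p-value is uniform on [0, 1].
        if any(random.random() < alpha for _ in range(n_outcomes)):
            hits += 1
    return hits / trials

# Analytic answer: P(at least one false positive) = 1 - (1 - alpha)^n
analytic = 1 - (1 - 0.05) ** 18  # about 0.60
print(analytic, fraction_with_false_positive())
```

In other words, a trial like this is more likely than not to "discover" something, and the researchers get to pick which something makes the press release.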

The story is long but thorough.

The claims are unfounded.