Thursday, June 28, 2012

Science Regulation


Regulation

At the center of the 2008 financial collapse was the failure of the regulators. Financial regulation was supposed to ensure that bankers were doing an honest job. Who makes sure that scientists are doing theirs properly?

Science Regulation

Should science be regulated? We need to control scientists who engineer deadly viruses or nuclear devices, and to make sure that human and animal subjects are treated humanely. We need regulators who can vouch for the correctness and quality of scientific research and scientific predictions. We need mechanisms to rate the quality of different research institutions, scientists, and works of science.

Who are the science regulators? And how well are they doing their job?

At the government level, much of the regulation is carried out by granting agencies. In the USA these include the National Science Foundation, the National Institutes of Health, the Department of Energy, and the Department of Defense.

At the "community" level, much of the regulation is done via scientific journals. If a publication appears in a respectable journal, it is supposed to be a good piece of science. Publications in the most prestigious journals are supposed to be "groundbreaking". At the university level, committees play a crucial role in the hiring, firing, and promotion of scientists.
   
Is science regulation necessary? 
According to the AAAS, there are about six million science and engineering researchers worldwide. Among six million people of any kind, some will be corrupt, some incompetent; some will deny global warming and some will believe in the powers of homeopathy. Given free rein, such attitudes and views could cause irreparable damage to science and society.

Is the current regulatory system working?

Almost all scientific regulatory work is done on an anonymous, voluntary basis. By and large, the regulators have little or no accountability. Let's see how this plays out at the different levels of scientific regulation.

Granting Agencies

Scientific evaluation of grants is generally carried out by scientists on a voluntary and anonymous basis. Scientists are asked to review proposals and participate in panels. Panels then rank proposals, effectively choosing the ones that will get funded. The process lacks transparency and real incentives to invest hard work. I do not know of a single case where a panel member was criticized after the fact for allocating funds to the wrong research project, or lauded for promoting a successful one.

I have participated in some better and some worse panels. The quality of decision making varies dramatically. It is very difficult to locate and recruit the right reviewers and panel members. This crucial part of the process is usually entrusted to government officials, which gives them an inordinate amount of influence.

In one of the worst panels I sat on, there were numerous proposals in a very wide area. The panel consisted of three scientists who also reviewed the proposals. For most of the proposals our expertise was inadequate. A government scientist who was supposed to review a quarter of the proposals "disappeared" and we were asked to cover for him by working extra hours. There were millions of dollars on the table. We did our best to allocate them fairly, but we had little confidence in our decisions. 

Journals

The review process for journals is, if anything, much worse. It is often hard to find good reviewers for a paper. Reviewers who are too close to a specific paper tend to provide biased reviews, as its publication promotes their area and their own work. Some try to gain or maintain a leadership status in their fields by strongly criticizing their "competitors". Those who are more removed from the area may not bother to learn it before passing judgement. All of them are pressed for time. Even if they read the paper carefully, they cannot afford the luxury of redoing experiments, calculations, or simulations, and rarely have the time to seriously study the work related to a paper in order to make an intelligent judgement.

University Committees

And it can get even worse. When candidates are evaluated for university positions, there is rarely even a single expert on the relevant committee. Much of the evaluation is based on the candidate's claims and her recommendation letters. Some advisors will write for each and every one of their students that they are the best scientist ever. Some sign a blank letter and ask the student to fill it in!

Job applicants sometimes take credit for work that is not theirs. They can safely assume that, more often than not, committees will not be familiar with the facts. They may get caught from time to time, but it is enough to succeed once. Modest candidates lose and over-sellers win. Honest advisors lose and corrupt ones win. Outsiders lose and the well-connected win.

Regulation that is never wrong is also never right

What is common to regulation by granting agencies, journals, and university committees? Once decisions are made, they are never reviewed. Panels are never punished for funding bad research. Refutations of works published in reputable journals have little effect on their popularity, or even on the public perception of their accuracy (see this ESA paper about refutations and a Wired story about retractions). Hiring committees are rarely punished for hiring the wrong people.
  
The Solution - Transparency!

There are many ways to address the problems in the regulatory system, but one approach applies to all levels of regulation: transparency.
All reports of regulatory bodies should be publicly available, including the names of the reviewers. Panel members and reviewers would have to be much more careful in their work if their reviews were openly available. They would think twice before getting into a conflict of interest. Those who do their regulatory work well would be able to publicize their service and be acknowledged for it.
If reports were public, each review would take a significant amount of work, but the overall time spent regulating science would be much lower.

Today, scientists can submit the same paper to many journals, the same flawed proposal to many funding agencies, and the same inflated job application to many departments. As long as one of them goes along with the story, the scientist is bound to reap the benefits. If all of the submissions and reviews were publicly available, there would not be much incentive for such strategies. Regulators would be saved needless work and would have open access to important information regarding their decisions.

It is possible that open reviews would make science more confrontational. But scientists should aspire to openness. Isn't science all about learning in order to make progress? Isn't it better to see who punches you in the face than to be stabbed in the back?

Thanks to Andrej Bogdanov for a very helpful edit of a draft of this post.

Monday, June 4, 2012

Why don’t we believe science? – an empirical point of view.



A quiz:

Let's begin with a quick quiz. For each of the statements below, determine whether it was made by a well-respected scientist or by a politician:

1. "Most research findings are wrong"
2. "The science is bad"
3. "Some proportion of the findings in the literature simply might not replicate"

Disbelief in Science 

We will get to the answers in a bit. But it would be quite shocking if any of the statements above were made by a serious scientist, wouldn't it? Intelligent people should believe in science.
For example, P. Krugman, in a column in the NY Times and the Guardian, was outraged by the fact that the Republican Party "is becoming the 'anti-science party.' This is an enormously important development. And it should terrify us".
A Phys.org report from Feb 2012 summarized the state of climate science by saying: "A stark theme emerged from an annual scientific get-together in Vancouver: the world must be helped to believe in science again or it could be too late to save our planet".

How come so many do not believe in science? The disbelief would be even harder to understand if serious scientists themselves made statements like those in the quiz.

Without science, the industrial and electronic revolutions could not have taken place.
Life expectancy would be much lower, and most of us would spend most of our time fulfilling the most basic needs instead of having so much recreational time.

Still, many of us instinctively do not believe statements by scientists. And two of the statements in the quiz were made by respected scientists. The first statement was made by John P. A. Ioannidis of Stanford in a 2008 paper in PLoS Medicine. The third statement was made by Eric-Jan Wagenmakers of Amsterdam in a recent paper in Nature discussing results published in the Journal of Personality and Social Psychology establishing a psi effect. The second statement was made by the Republican candidate Tim Pawlenty in 2012.

How come both the general public and leading scientists do not believe in science?

Why is it rational to not believe in science?

I want to suggest a very simple explanation which justifies the disbelief on rational and even scientific grounds.
 
The first ingredient of the explanation is that science and scientists often present their work in public as indisputable truth. Although there are some careful scientists around, we have all seen scientists speak with great confidence in the media many times before.

The second ingredient is that during our lifetimes scientific theories keep changing, even dramatically.

Here are a few quick examples from the medical world. Until very recently we were told that fats are bad for us while carbs are not; now we are told the opposite is true (see Gary Taubes's book Good Calories, Bad Calories).
Until very recently we were told that what is important for health is to exercise every day or a few times a week; now we are told the most important thing is not to sit for too long (see NPR's coverage).
Even more dramatic examples include the strongly advocated radical mastectomy and prostate surgery for treating breast and prostate cancer, respectively. We are now told that these aggressive treatments do not help! (For the breast cancer story I recommend The Emperor of All Maladies by Siddhartha Mukherjee.)

In addition to the confidence of scientists in their statements and the fact that scientific theories go in and out of fashion, the third ingredient is just common sense.
Scientists tell us that we should believe well-founded scientific theories without doubt.
Yet we all see that theories that were considered bulletproof a decade or two ago are now considered simply wrong. As we saw above, in some cases respectable scientists describe the majority of the work in their own field this way. So common sense implies we should not trust scientists!

The Popper Falsifiability Principle in action

Interestingly, the logic above follows a basic scientific principle: Popper's falsifiability principle. Roughly speaking, the principle states that theories should make falsifiable predictions and that we should reject a theory if its predictions turn out to be false.
At its extreme, the principle says that a good theory should make many predictions and that if even one prediction is wrong, then the theory is wrong.

In our discussion, the theory is that "we should listen to scientists since they know what they are talking about", and the predictions are scientific statements. History demonstrates that many theories from the past are now considered dead wrong. As we saw above, in some cases respectable scientists say the same about the majority of current work in their own field!
The public then concludes that we should not listen to scientists, since many of their predictions are wrong.

In other words, applying Popper's principle, people quickly conclude that the meta-theory "science that is said to be well founded must be true" is simply wrong. So there is no need to even take new scientific theories seriously.

Note that this logic doesn't contradict the usefulness that the same reasonable people extract from science. They have seen many people fly safely - so they will fly. They have seen and heard of antibiotics successfully fighting infections so they will take antibiotics to fight their own infections. 

What reasonable people do

However, this sound approach to scientific truths is not without problems. While these individuals have seen antibiotics help, they may not believe what their doctors are saying, so they may insist on taking antibiotics for viral infections. Moreover, they have never seen anyone suffer from any of the diseases against which they are required to vaccinate their children, so why would they take the unknown risk of vaccination? And why would they believe climate scientists when meteorologists fail so consistently in predicting the weather for next month?

In other words, while the "reasonable approach" of believing only science that we see working with our own eyes is well founded given the many failures of science, it creates enormous problems if policies are to be informed by science. As the numbers of smokers and lung cancer cases decline, why should people believe that smoking is bad for them? In a world where many of us sit all day, why should people believe that it's bad for them? Why should people believe that the world is warming up and that they should do something about it?

A stronger form of belief in science is needed to believe in immunization or in climate change. However, as we've seen, the empirical observation of the constant changes in what is "scientifically true" leaves the public skeptical.

I think that much of this can be fixed, and that it is up to us scientists to fix it.
In future posts I plan to investigate these and related questions in more detail by looking at the "invisible hand of science" (science evaluation mechanisms); the interaction of science with the public, industry, and the government; and the roles of replication and critique. I also plan to look at specific branches of science and ask whether some are better than others and whether some branches do more harm than good.