I have mixed feelings about the proliferation of the term "Open Notebook Science". I started using the term a year ago to describe our UsefulChem project because it had no hits on Google, and so it offered an opportunity to start with a fresh definition. There are currently over 43,000 hits for the term, and it is nice to see that the first hit is still the post with the original definition.

The first part of the term, "Open Notebook", is meant to be taken literally. It refers to the ultimate information source used by a researcher to record their work. The fundamental philosophy of ONS is that of "no insider information": linking to all raw data associated with experiments and making available all experiments relating to a project under discussion, including failed and aborted experiments. (See the recent conference on ONS.)

I think that if ONS is practiced in this way it is potentially better than our current system of peer-reviewed article publication. Here is why: a key aspect of the scientific revolution a few centuries ago was moving from trust in an authority to mistrust of everything and everybody. Galileo wrote (as reproduced on p. 25 of John Gribbin's "The Fellowship"):
It appears to me that they who in proof of any assertion rely simply on the weight of authority, without adducing any argument [that is, experimental evidence] in support of it, act very absurdly.
In principle, articles reporting on experimental science are supposed to contain enough information for a reasonably competent peer to repeat the work. Speaking from experience in my own field of organic chemistry, experimental sections are often highly condensed. When space was limited in paper journals this may have made some sense, but now that electronic storage is cheap it is no longer an issue (at least in organic chemistry). Of course, journals now usually have a supplementary section available online to address this. But the past few times I have tried to debug a reaction using this resource, I have found it insufficiently detailed.

What I really needed was access to the researcher's lab notebook and all associated files, to follow specific instances of reactions rather than abstracted general procedures. That way I can see what the researcher did and did not do without making so many assumptions. For example: Were the starting materials checked for purity? How exactly was the reaction monitored, and what do those spectra or TLC images look like? Was there any solvent left in the product when it was weighed that might account for those impressive yields?

A major flaw in the current scientific publication system is that there is still too much trust. Readers are expected to trust editors to choose appropriate anonymous peers to review submissions. Reviewers trust primary authors to summarize their research results accurately. Primary authors trust their collaborators, students and postdocs to give them accurate information when writing papers. If we make the laboratory notebook and all associated raw data public, we can significantly reduce the amount of trust required to keep this house of cards standing. The main problem is not so much that people will completely fabricate data, although this does happen. It is more that mistakes get made and corners are cut to get the paper out the door under pressure.
And once these errors are in print, it is very difficult to get people to correct them, if they are ever discovered. As a researcher I don't even trust myself. And I shouldn't. Students who haven't yet mastered the discipline of keeping a good, detailed, timed lab notebook log as they execute and observe will likely be humbled quickly when they realize how poor human memory proves to be for these tasks. Time-stamped video and digital photographs can go a long way toward reducing the burden on the researcher to record the details of their experimental set-up and their observation of the reaction over time. This has proved very useful in the past in my group and I would like to see even more systematic use.

There is currently tremendous skepticism associated with publishing scientific results on the web using social software. If people start citing blog posts that do not link to primary raw data for support, in a manner implying that there is strong support, then it is going to be difficult to do good science with social software. There is already plenty of that sort of thing in the peer-reviewed literature, with one article citing another citing another citing "unpublished results". Now, there is nothing wrong with discussing some interesting aspect of ongoing research without linking to primary raw data, but that is not Open Notebook Science in the sense that I have been using the term. These could be called "teaser posts"; they might be useful for finding collaborators and initiating discussions, but they cannot serve as an alternative to the traditional peer-reviewed literature.

I know that there are many who think that peer review is needed to legitimize scientific blog posts. I think that the ability to comment on posts is useful for continuing discussions with the community. But expecting any kind of review system to completely validate research, published through any vehicle, is not realistic.
The only people truly qualified to judge a piece of research are those who have actually looked at the raw data to see if everything adds up, and that takes time, assuming they have access to it. It is unlikely that anyone will do that without being properly motivated; generally only other researchers trying to reproduce the experiment for their own purposes will have a good reason to invest the time. Comments from these individuals would be valuable, but that applies to a very small proportion of all recently published work.

Going forward, I see Open Notebook Science as a natural way for the scientific process to become more automated, with machines reporting executed protocols on the open web. We have a ways to go to reach that point, but I think this is a more likely scenario than expecting machines to learn how to write human-reviewed articles. The point of science is generating actionable information, at least in a field like synthetic organic chemistry. We have a great opportunity to use these new web tools to do this so much more efficiently than ever before.

(By the way, we still intend to publish our work through conventional channels. Not so much to communicate new information to the community, but for all the other reasons that researchers are under pressure to publish.)