Monday, September 13, 2010

Peer Review 2.0?

Interesting piece over at the NY Times: For Scholars, Web Changes Sacred Rite of Peer Review. The article concerns the partnership between the Center for History and New Media and the Shakespeare Quarterly to experiment with an online, crowd-sourced system of scholarly peer review. The Chronicle of Higher Education also has a good article about the experiment, with some valuable remarks in the comments section.

Peer review, for those of you in the real world, is the process by which scholarship is vetted by experts in advance of publication. Peer-reviewed books and articles are considered the gold standard in academia, particularly for tenure and promotion. To say that a book or article is peer-reviewed is to say that it has been closely examined by another scholar or scholars with expertise in the field, who looked it over for factual accuracy, relevance to other scholarship, originality, clarity of argument, and other factors. The big question is "Does this manuscript add to our understanding of its topic?" Peer review in the humanities is usually single-blind, which is to say that the reviewer knows the identity of the author but the author does not get to know the identity of the reviewer(s). This is supposed to encourage the reviewer to be frank and honest. The reviewers can usually recommend publication, recommend publication after some specific revisions (the dreaded but often useful "revise and resubmit"), or recommend outright rejection.

It all sounds good on paper, and on paper is where traditional peer review takes place. But as the Times notes, "Clubby exclusiveness, sloppy editing and fraud have all marred peer review on occasion." That is putting it politely! I think that everyone who has been through the process has enjoyed thoughtful, constructive criticism and advice from conscientious reviewers. Some of us have also had reviewers who seem to have skimmed our manuscript and sunk our hopes of publication with half-baked objections. Though reviewers are supposed to be anonymous, this very often breaks down in the small worlds of most academic subdisciplines. (Hint: When you put down your work as a peer reviewer on your C.V., don't list the name of the person whom you reviewed, because your university might put your C.V. online and blow your cover. I should add that Jay was the thoughtful and helpful type of reviewer and is an excellent scholar.)

And the process is glacially slow. Reviewers are paid at best a few hundred dollars for closely examining a book manuscript that could run many hundreds of pages. For an article we are paid nothing at all. We do it as a service to the profession, and because we want someone to do it for us when we submit a manuscript, and we do it on top of the many other duties that come with being an academic. So it is not unusual for reviews to take weeks or months to come back. The situation seems to be getting worse as academic publishers are placed under increasing strain. A friend of mine was encouraged by a major university press in his field to submit his book to them; eight months later he has no feedback and the editor is not answering his emails. My friend was hoping to leverage a book contract for a promotion; this will now have to wait another year.

So how did moving the process online and calling for greater participation work out for the Shakespeare Quarterly? I think it proved a great success.

The special issue of the Shakespeare Quarterly to serve as the guinea pig for the experiment focuses on “Shakespeare and New Media.” The peer-review period ran for three months and was widely publicized in the digital humanities world. You can see the papers and all of the review comments here. There was also a project blog.

My first impression was that there were fewer comments than I expected on most of the articles, particularly compared to the many thousands of edits and active talk pages for even moderately popular Wikipedia articles. The first requirement of crowd-sourcing is a crowd, and not all that many Shakespeare scholars came to this party. The first essay, "Networks of Deep Impression: Shakespeare and the History of Information" by Alan Galey, has about thirty comments, including the author's replies, and most of the comments are from 3-4 people. The second essay, Kate Rumbold's "From 'Access' to 'Creativity': Shakespeare Institutions, New Media and the Language of Cultural Value," received nearly 100 comments.

As I began examining the comments, however, my opinion changed. The bulk of the comments are very thoughtful indeed, and the authors usually respond with equal professionalism. The scholarly exchange is similar to what happens at the best conferences--on those rare occasions when 1) no one goes over their time and 2) the conversation period is not dominated by some blowhard making speeches from the floor.

A nice example of the exchange is in this section of Rumbold's essay (click on "5 Comments on paragraph 5") where Rumbold goes back-and-forth with three reviewers about the interpretation in a particular paragraph. This is light years better (and faster!) than the usual exchanges of typewritten comments via the intermediary of the journal editor.

The online peer review was both similar to and different from normal scholarly peer review. There was no anonymity: the reviewers wrote their critiques under their real names. I wonder whether this caused them to pull their punches in their more critical comments. At the same time, the experiment differed from a genuine crowd-sourced effort such as Wikipedia. The Shakespeare Quarterly reviewers were recruited by the journal editors from known scholars and had to be registered into the commenting system--it was not as if anyone with an interest in the topic could create an account and sound off. I think both of these were probably good decisions at this stage in the experiment.

One wonders if the open review model tested with this issue of the Shakespeare Quarterly is sustainable. Can scholars continue to be recruited for future issues? There are some technically adept humanities professors out there, but there are still just as many who use their computer monitors to organize their Post-It notes. Will online peer review continue to require reviewers to use their real names? I would love to see another experiment where the reviewers were screened but anonymous. And would the model work with genuine crowd-sourcing--open registration, the ability of reviewers to rate one another's comments, and so on?

The print edition of this special issue of the Shakespeare Quarterly comes out on September 17. I don't know how the articles stack up in the world of Shakespeare Studies, but I think that the issue marks a milestone in the development of the digital humanities.


KF said...

Larry, thanks so much for the thoughtful comments on the SQ open review process. The question of sustainability is an enormous one that we're wrestling with at MediaCommons. My hunch is that online scholarly communities will need to develop some kind of pay-to-play system (i.e., in order to publish here, you have to participate in the review process) in order to make it work, but what that will look like in point of practice is an open question.

One further note: the editor's blog is still available here; if you found a broken link to it, would you let me know so I can fix it? Many thanks!

Larry Cebula said...

Kathleen, thanks for your comments. I suspect that this model is at least as sustainable as 1.0 peer review--you recruit scholars who do it for a line on their vita. Some will do a good job, others...

Another issue that I wish I had added--these papers seem to have been pretty mature and close to publishable as initially submitted. I wonder how well this would work with papers that need more work, or are just not up to snuff?

Hmmm...the link to the blog is working fine, it must have been some hiccup on my end. I'll revise the post.

If you have an email list of the participants, could you send them a link to this post?

Anne said...

Dr. Cebula, Thanks for posting this informative and interesting article. If I might suggest, the content is relevant in many ways to your graduate students, informing us of the rigorousness of evaluation in our work; a measurement of the labors of intellectual and written work. Perhaps it can be bundled with the opening day syllabus? --Anne