Thursday, July 29, 2010

Crowd-Sourcing Peer Reviews

While procrastinating and staring blankly at my Twitter feed a moment ago, I noticed this post by ACRL (ALA's Association of College and Research Libraries):

@ALA_ACRL: RT @chronicle: Open, crowd-sourced peer review works for one humanities journal:http://bit.ly/9HIDJC

As an OA journal editor, I was intrigued. When you click the link above, you're led to a Chronicle article about Shakespeare Quarterly's new approach to the review process. Basically, a draft of the publication is placed online and opened for comment by a pool of reviewers with varying specialties. The review is not anonymous, so comments made about a submission are attached to the reviewer's name.
I was pleasantly surprised for a number of reasons. First, because SQ is a well-recognized and respected humanities publication, the fact that it is taking the lead on rethinking its methods is a good sign that others might follow. Second, I was happy to see positive responses from the authors: Michael Whitmore mentioned getting citations and helpful feedback from six people who actively involved themselves in reviewing his work (think of how much more help that is than the usual two or three 'experts'!). I was also glad that the reviewers included both junior and senior scholars.

One critique I have is the focus on expertise. I agree that we want reviewers who are knowledgeable in the area they review; otherwise it is much harder to create rigorous scholarly publications. However, if we look for expertise alone, we lose sight of the passion of someone who might be less experienced but who is dedicated to their field and eager to learn and share what they know. I wonder how we can balance these: perhaps by putting out a call for reviewers in addition to inviting those we know we want to work with? For some reason I keep thinking of publicly created metadata in relation to this, and I think the underlying argument is the same: how much control do we want to give 'just folks' in determining what information is considered 'quality' or how that information is classified? Is an article's intrinsic value lessened if someone outside of the literary field (or outside of academia entirely) comments on it?

This definitely gives me some food for thought to mull over with the other folks involved with B Sides. We've just set ourselves up as a student journal (and are about to enter our second semester), and part of our purpose is educational (i.e., to teach students and alumni about the publication process). Since, by and large, most journals still follow the "submit research > peer review > revisions > publication" sequence, I would be really hesitant to let go of peer review for B Sides when we are using it as a tool to help our fellow LIS students!
However, I think that crowd-sourced peer review is an AWESOME idea: it lets us engage with both the author and other reviewers, provides more (and more varied) feedback from a larger number of people, and gives authors the chance to gauge the reaction of those who would actually be reading their work once it is published (i.e., people in the field, but not necessarily people who have been selected as reviewers).
This sort of reviewing is also an excellent opportunity for us graduate students: I can't speak for every grad student, but I definitely think that the more opportunities we have to get involved with the review and publication process, get our voices heard, and gain expertise, the better! It makes me wonder how we could build a similar model for graduate student publication within our department (or even independently, if anyone is interested in working on this with me!), or even just put our work out there for feedback in a less formal setting. Fellow students, educators, researchers, what are your thoughts? I am hoping this model becomes more and more accepted and widely used!
