Thursday, August 14, 2008

Evaluation of genomic island predictors using a comparative genomics approach

Well, after a long hiatus from blogging I thought I would start again by announcing my recently accepted paper, "Evaluation of genomic island predictors using a comparative genomics approach", in BMC Bioinformatics.

Quick Summary
This research compares several previously published tools for predicting genomic islands (large regions of HGT in bacteria). These tools use various measures of abnormal sequence composition, such as GC percent, to predict regions of HGT. The predictions made by these tools were compared to reference datasets of genomic islands (GIs) and non-GIs (highly conserved regions) that were constructed using whole genome alignments. One of the novel and cool (well, I like to think so) things about this comparative genomics method, called IslandPick, is that it automatically selects appropriate genomes for comparison given a query genome. Normally in comparative genomics studies the user/scientist has to pick which genomes are relevant and should be used in the comparison. That works well until you have to do it for ~1000 different genomes. If you want more information on how this works, read the paper!
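To give a flavor of the automatic genome selection idea, here is a minimal sketch (not the IslandPick implementation; the function name, thresholds, and toy distances are all hypothetical). The intuition is that comparison genomes should be neither too closely related (no divergence, so islands can't be distinguished) nor too distantly related (whole genome alignments become unreliable):

```python
# Illustrative sketch only -- not the actual IslandPick code.
# Assumes precomputed pairwise genome distances (e.g. derived from
# whole genome alignments); names and cutoffs are hypothetical.

def select_comparison_genomes(query, distances,
                              min_dist=0.05, max_dist=0.35,
                              max_genomes=6):
    """Pick genomes within a distance window around the query:
    not so close that nothing differs, not so far that
    alignments break down."""
    candidates = [
        (dist, genome)
        for genome, dist in distances[query].items()
        if min_dist <= dist <= max_dist
    ]
    candidates.sort()  # prefer the closest genomes inside the window
    return [genome for _, genome in candidates[:max_genomes]]

# Toy distance matrix for one query genome
distances = {
    "E_coli_K12": {
        "E_coli_O157": 0.02,    # too similar -- excluded
        "E_coli_CFT073": 0.10,  # within window -- selected
        "S_enterica": 0.30,     # within window -- selected
        "P_aeruginosa": 0.70,   # too divergent -- excluded
    }
}

print(select_comparison_genomes("E_coli_K12", distances))
# → ['E_coli_CFT073', 'S_enterica']
```

Automating this windowed selection is what lets the comparison scale to ~1000 genomes without a human curating each genome set.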

This was my first experience with a very tough and stubborn reviewer. The paper would have been published almost six months ago if it weren't for one reviewer who kept insisting that our method was flawed, even after we clearly defended and addressed their concerns. After much correspondence and waiting, a fresh group of reviewers accepted the research after some minor revisions. *Sigh* It makes me wonder how much of publishing is just a crapshoot.

1 comment:

Benjamin Good said...

I hate to say it, but I think quite a bit of the review process is a crapshoot - much like the rest of life! Congrats on a solid scientific paper (I'm jealous).

Regarding the review process, do you think it might be improved if reviews were a) non-anonymous and b) public? It seems that publishing the reviews, for example with the paper (even better, with the corresponding drafts of the paper), might really improve the process. Reviewers would have the opportunity to receive some credit for their efforts via what is essentially a new form of publication, students could see firsthand what the process looks like before they are subjected to it, and poor reviews/reviewers could be spotted by the community. Not to mention that it essentially forces open a dialogue between folks who probably should be talking anyway.