Assessing the value of research is always a tricky endeavor. One method currently in use is to count the number of times an article has been cited as a reference. This metric has always seemed inadequate to me, but I couldn't explain why. Now I can. The purpose of publishing scientific results is so that others can put those results to use, not merely refer to them in the abstract.
So let me propose this: keep counting citations, but count only those where the cited results were actually used, duplicated, or served as a productive, integral part of the current research. This would pretty much mean ignoring the background citations in the introduction, which are "used" only to describe the playing field but weren't important enough to be duplicated. Maybe some of the citations in the discussion section too.
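For concreteness, here's a minimal sketch of what such a filtered count might look like in Python. Everything here is hypothetical: the role labels, the field names, and especially the assumption that each citation has already been classified by role (which is the genuinely hard part, and is left out entirely):

```python
# Hypothetical role labels; in practice these would have to come from
# manual annotation or a citation-intent classifier, neither of which
# is assumed to exist here.
SUBSTANTIVE_ROLES = {"used", "duplicated", "integral"}

def substantive_citation_count(citations):
    """Count only citations whose role indicates the cited results
    were used, duplicated, or integral to the citing work."""
    return sum(1 for c in citations if c["role"] in SUBSTANTIVE_ROLES)

# Toy example: six citations of one paper, classified by role.
citations = [
    {"citing_paper": "A", "role": "background"},  # intro scene-setting
    {"citing_paper": "B", "role": "background"},
    {"citing_paper": "C", "role": "used"},        # method applied
    {"citing_paper": "D", "role": "duplicated"},  # result reproduced
    {"citing_paper": "E", "role": "discussion"},  # mentioned in passing
    {"citing_paper": "F", "role": "integral"},    # built directly on it
]

print("raw count:", len(citations))                                  # 6
print("substantive count:", substantive_citation_count(citations))   # 3
```

The point of the sketch is just that the two counts diverge: the raw count credits all six citations equally, while the filtered count credits only the three where the work was actually put to use.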
My papers from grad school would most likely fall victim to this axe: as far as I am aware, their results have never been duplicated or used by other researchers, despite being cited probably a dozen times. I can live with that.
Good research is good because it advances the field, because other people use it, and because it can be duplicated. So let's align citation counting with those principles.
Thoughts?