Why don't professionals post rebuttals to crank science in the professional literature?
The answer, as always, is more complex than the advocates of pseudo-science acknowledge or recognize.
A lot of science that is viewed as crank or pseudo-science today was, at one time, the best we knew. Medicine is probably the best-known field where former standards of practice are today viewed as anything from ineffective to outright dangerous (Wikipedia).
The physical sciences have a similar history - there are ideas which at one time were viewed as viable hypotheses, with ongoing research to determine whether they were correct. Quantized redshifts in the 1970s and Tony Peratt's galaxy model in the 1980s were both explored when first proposed. However, the quality of observational data eventually increased to the point that these models could not match the data and were ruled out (see Quantized Redshifts XI. My Designer Universe Meets Some Data and What's Next..., Scott Rebuttal. II. The Peratt Galaxy Model vs. the Cosmic Microwave Background).
Today, many of the more prestigious journals do not publish papers on these topics at all, any more than they would publish papers proposing the Earth is flat. These ideas are now so totally ruled out by the data that it is a waste of space to publish papers for or against such theories. Such analyses must be published in other, less formal venues. There are other reasons as well, and I'll explore more of these below.
In the best cases of quality peer-review, reviewers actually examine the paper under review in some detail. Through this process, it is hoped that one can identify papers that have blatant errors and send them back to the author. In most cases, where the authors of the paper are making a real effort to do legitimate science, a reviewer may think of some follow-up questions which it might be good for the author to answer, and maybe some additional work that was missed in the analysis. Probably most reviewed papers fall in this category.
Occasionally a paper can make it through the peer-review process even with some unusual or questionable results, because the reviewer(s) might suspect there is a significant error in the conclusions, but can't put their finger on the problem. In some cases, the reviewer will recommend the paper for publication so the scientific community at large can examine the work.
Sometimes, a blatantly bad paper can make it through peer-review to publication. This is often a sign that the peer-review process itself has been corrupted (Wikipedia).
In scientific circles, even making it successfully through peer-review does not guarantee the correctness of a paper. The writing and publication of a peer-reviewed paper is just the start of the process.
But let's say your paper has successfully made it through the peer-review process to publication. What happens next?
The Scientific Publication “Market”
When exploring how peer-review works, and the incentives and disincentives in the system, it helps to use a market analogy. The product being 'sold' in this market is good science.
The real measure of the value of your paper is the number of times that other researchers find your work of value, by citing your work in their papers.
If other researchers see your work and think it might help solve a problem in their own research, they will try to use your results to further that research. If they have success using your results, they will cite your paper in their reference list. The fact that the product of your research helped solve another problem is a point in favor of your paper possibly being correct. However, if your result was not helpful, other researchers might cite your work only to record the fact that they explored your idea and it was unsuccessful.
So if other researchers don't find your work useful, then after some initial interest they will stop using it, and will stop citing it. Over time, work with more citations is more likely to be correct.
Citation is the 'currency' of scientific publication
Like a market system, researchers whose work is very useful to others will receive many citations of their work. Those researchers whose work is less useful will receive fewer citations. This provides a simple measure of the impact of one's research. This measure is often used in hiring and promotion decisions for academic and similar technical jobs, where publication record is important.
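One widely used citation-based measure of this kind is the h-index: the largest number h such that a researcher has h papers each cited at least h times. The sketch below is purely illustrative, using made-up citation counts:

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that the researcher
    has at least h papers each cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Hypothetical researcher with five papers:
print(h_index([25, 8, 5, 3, 0]))  # → 3
```

Note how the measure rewards sustained usefulness rather than one lucky paper: a single highly cited result cannot raise the h-index above 1 on its own.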
Rebuttal Publications Inadvertently Promote Bad Science
Because the ranking system is based on the number of citations, there is a built-in reason not to publish rebuttals in professional journals. The usual assumption is that if a paper is cited by others, those others found its results useful.
But in the case of a rebuttal publication, the crank paper would be cited, and that citation would probably still be counted as a positive result. In this case, doing bad science that generates a large number of peer-reviewed rebuttals inadvertently helps promote the bad science. This could be considered the scientific analog of the celebrity claim that “there is no such thing as bad press.”
In academic hiring, when published papers are reviewed by the hiring committee, publications in conference proceedings are often excluded, as these publications are often available to anyone who pays the conference fee and are often not peer-reviewed.
There are a growing number of 'vanity journals' in scientific publishing, with questionable peer-review practices. For some of these journals, it is clear that paying the publication charge is all that is required to get the stamp of 'peer-reviewed'. These journals are the scientific-publication equivalent of 'diploma mills' (Wikipedia), and such publications are hotbeds of crank science. Crank publications are usually identifiable by a large number of self-citations, where the researcher repeatedly cites their own work.
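The self-citation marker is easy to make concrete: count what fraction of a paper's references share an author with the citing paper. This is a rough illustrative sketch with made-up names, not a tool any citation database actually uses:

```python
def self_citation_fraction(paper_authors, cited_papers_authors):
    """Fraction of a paper's references that share at least one
    author with the citing paper -- a rough self-citation indicator."""
    authors = set(paper_authors)
    if not cited_papers_authors:
        return 0.0
    self_cites = sum(1 for ref_authors in cited_papers_authors
                     if authors & set(ref_authors))
    return self_cites / len(cited_papers_authors)

# Hypothetical reference list: 3 of the 4 cited papers share an author
refs = [["A. Crank"], ["A. Crank", "B. Other"], ["C. Third"], ["A. Crank"]]
print(self_citation_fraction(["A. Crank"], refs))  # → 0.75
```

A fraction near 1.0 suggests a paper citing mostly its own author's prior work; legitimate papers typically draw on a much broader literature.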
To avoid the side effect of lending apparent credibility to bad science through the professional publication system, most badly done science floats around the community by word of mouth or in the non-peer-reviewed literature.
Frankly, I wish there were a reliable mechanism by which rebuttals could be provided in a way that enabled detailed review and updating.
Attempts by crank science supporters to get their material mentioned in the peer-reviewed science literature should really be thought of as a con, an attempt to fool legitimate researchers into giving undeserved credit to cranks. This is similar to the concern that debating creationists gives the creationist more status than they could otherwise earn on their own (NCSE: Confronting Creationism).
- Publishing a rebuttal in a respected peer-reviewed journal raises the status of the bad science.
- Publishing a rebuttal to a crank claim opens the door to the crank claiming discrimination if they are not allowed to respond in the journal. For this reason, reputable journals rarely support such rebuttals.
- Publishing a rebuttal to a crank gives the crank a citation. As noted above, many academic promotions are based on the number of citations of a researcher's work, so a published rebuttal raises the status of the crank.
- Most peer-reviewed journals have a publication charge. Most professionals would not want to use their limited publication funds to publish a rebuttal paper that will receive very few citations in the professional literature (except perhaps by the crank trying to defend themselves).