In relation to “Retraction Watch” and “Retraction Watch II”, concerning academic fraud in accounting research, particularly as related to unsubstantiated data, Neil Remington Abramson commented as follows:
It’s hard for the peer review process to deal with outright fraud, especially the falsification of data. The reviewer receives the paper, which includes the analysis of data. Usually a stats-based paper would include a correlation table plus tables reporting whatever statistical techniques were employed. Appendices might reveal how summary variables were created.
Suppose the underlying data were falsified? Suppose you generated a batch of fictitious cases to improve your levels of significance? How would I, as a reviewer, even know? Yet your larger sample and higher levels of significance might earn you a slot in a better journal, beating out someone whose paper was less impressive because honesty limited its value.
The last time I tried to do a China study, I lined up some Chinese researchers to administer my questionnaires in China. I told them I wanted them to send me the completed questionnaires and the research funding would pay for the courier. I said I wanted the questionnaires because there are various reports of data falsification in China (and elsewhere). I was told my Chinese counterparts were willing to send me the data spreadsheet but not the questionnaires. If I insisted on the questionnaires, they would withdraw.
I discussed this with my colleagues and advisors here at SFU, and we decided I should withdraw myself. There was no reason we could think of why they couldn’t courier the questionnaires unless they didn’t exist. It’s easy to falsify a data spreadsheet if you are dishonest. It’s harder to do so with hundreds of questionnaires.
It is conceivable, however, that had I accepted the data set without the questionnaires, I could have produced a falsified analysis without knowing it, and reviewers would not have known either. Questions could only be raised through an inability to replicate the findings, and few replications are done. Replications, when successful, are unlikely to appeal to top journals; they merely confirm the results one expects when the initial work was done honestly in the first place.