Should editors get a CLUE? Who should investigate Questionable Research Practices? Is Chinese research seriously sullied by misconduct? How to solve publishing’s wicked challenges? Pro-predatory P&T committees?


  • Liz Wager and others posted the CLUE (Cooperation And Liaison Between Universities And Editors) guidelines on the preprint server bioRxiv, regarding how journals and institutions should work together in alleged research misconduct cases. They will consider comments and suggestions posted on the preprint. Their main recommendations:
    • “National registers of individuals or departments responsible for research integrity at institutions should be created
    • Institutions should develop mechanisms for assessing the validity of research reports that are independent from processes to determine whether individual researchers have committed misconduct
    • Essential research data and peer review records should be retained for at least 10 years
    • While journals should normally raise concerns with authors in the first instance, they also need criteria to determine when to contact the institution before, or at the same time as, alerting the authors in cases of suspected data fabrication or falsification to prevent the destruction of evidence
    • Anonymous or pseudonymous allegations made to journals or institutions should be judged on their merit and not dismissed automatically
    • Institutions should release relevant sections of reports of research trustworthiness or misconduct investigations to all journals that have published research that was the subject of the investigation.”

Editors: One of the proposed CLUE recommendations is: “While journals should normally raise concerns with authors in the first instance, they also need criteria to determine when to contact the institution before, or at the same time as, alerting the authors in cases of suspected data fabrication or falsification to prevent the destruction of evidence.” What criteria do you think would be appropriate?

Preprint: Wager E et al. Cooperation And Liaison Between Universities And Editors (CLUE): Recommendations On Best Practice doi:

Interview: When misconduct occurs, how should journals and institutions work together? (Retraction Watch)

  • Denmark is redefining how it handles research misconduct

As of July 1, research misconduct will be limited to fabrication, falsification, and plagiarism and will be investigated by the Board for the Prevention of Scientific Misconduct. Institutions remain responsible for investigating allegations of Questionable Research Practices (eg, selective reporting of results to support the hypothesis).

Denmark to institute sweeping changes in handling misconduct (Retraction Watch)

  • A large proportion of Chinese research may be affected by misconduct

The survey, published in Science and Engineering Ethics, estimates 40%, with a standard deviation of ±24%. “The forms of misconduct that were most concerning to respondents – ahead of falsification, fabrication, and duplication – were plagiarism (25%) and the ‘inclusion of someone without permission or contribution in the authorship’ (28%)…The survey also shows that scientists strongly feel authorities have done little to address the underlying publish-or-perish environment that breeds misconduct; 72% thought that reforms to current systems of academic assessment was the most important measure, with only 13% prioritizing stronger systems of monitoring for misconduct.”

Four in 10 biomedical papers out of China are tainted by misconduct, says new survey (Retraction Watch)

  • Ginny Barbour concludes her term as COPE Chair and comments on positive changes and wicked challenges in publishing: “The importance of good processes is only underpinned by the fact that the types of problems that editors face are increasing in complexity.”

From the outgoing chair (COPE Digest)

  • Should advisors publish with their PhD students?

Supervisors are morally obliged to publish with their PhD students (Times Higher Education — registration may be required)

  • Quest for Research Excellence Conference
    • Location: The George Washington University, Washington, DC
    • Date: August 7-9, 2017

The 2017 Quest for Research Excellence Conference is co-sponsored by the Office of Research Integrity, The George Washington University (GWU), and Public Responsibility in Medicine and Research. “The goal of the Quest for Research Excellence conference series is to fuel knowledge sharing among all the parties involved in promoting the responsible conduct of research and scientific integrity, from scientists to educators, administrators, government officials, journal editors, science publishers and attorneys.”

Office of Research Integrity 



The predatory/pseudo-journal plot thickens: A university promotion & tenure committee is complicit in its faculty’s publishing in predatory/pseudo-journals. “…I included my initial finding that I was one of a minority of researchers in my department with no publications in predatory journals.” The author suggests that administrators with research backgrounds may be less likely to equate predatory with legitimate journal publications.

When most faculty publish in predatory journals, does the school become “complicit?” (Retraction Watch)



A brief review of citation performance indicators. “A good indicator simplifies the underlying data, is reliable in its reporting, provides transparency to the underlying data, and is difficult to game. Most importantly, a good indicator has a tight theoretical connection to the underlying construct it attempts to measure.” Has a good indicator been created?



A Canadian initiative aims to implement ORCID more broadly; the greatest challenge is still getting researchers to register for an ORCID iD. “Consortium members have access to the Premium Member API, which facilitates integrating ORCID identifiers in key systems and workflows, such as research information systems, manuscript submission systems, grant application processes, and membership databases.” You can get your iD for free at .

ORCID-CA, the ORCID Consortium in Canada, to provide Canadian institutions and organizations the opportunity to obtain premium membership to ORCID (CRKN/RCDR)


Newsletter #9, originally circulated May 23, 2017. Sources of links include Retraction Watch, Scholarly Kitchen, Twitter.   Providing the links does not imply WAME’s endorsement.



Why do researchers commit research misconduct? Should you publish a paper withdrawn (maybe) from a predatory journal? Should an editor also be a researcher? Researcher and reviewer gender gaps


Clinical trial registration and negative results

A study in BMJ tests the hypothesis that clinical trial registration should improve trial reporting and therefore increase the number of trials that do not report positive outcomes. Registered trials were slightly less likely to report positive results, particularly if they were not industry-funded. The authors did not compare the registered trial outcomes with the outcomes that were reported (they studied 1122 trials, so that would have been a major undertaking). A great benefit of trial registration for editors and reviewers is being able to determine whether outcome switching has occurred. If outcomes were switched, that could explain why trial registration was not associated with a larger reduction in positive results.
Another important observation: much of the trial reporting was poor, underscoring the importance of all authors using, and all medical journals requiring and verifying use of, CONSORT reporting guidelines.
Odutayo A, Emdin CA, Hsiao AJ. Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials—ESORT). BMJ 2017;356:j917. doi:


Why do researchers commit research misconduct? 

A case study with a chastening message for investigators (and a sobering message for editors): “He described how and why he started tampering with data. The first time it happened he had analyzed a dataset and the results were just shy of significance. Fox noticed that if he duplicated a couple of cases and deleted a couple of cases, he could shift the p-value to below .05. And so he did. Fox recognized that the system rewarded him, and his collaborators, not for interesting research questions, or sound methodology, but for significant results. When he showed his collaborators the findings they were happy with them – and happy with Fox.” What messages are investigators sending when research doesn’t turn out as hoped? “Hindsight’s a bitch:” Colleagues dissect painful retraction. Retraction Watch (blog). March 7, 2017.

Publishing a paper withdrawn from a predatory journal
What would you do if authors submitted a paper that they had unknowingly sent to a predatory journal and then withdrawn, but the predatory journal would not respond to confirm the withdrawal? COPE has published a case study on such an instance.
Withdrawal of accepted manuscript from predatory journal. Case Number 16-22. COPE.



The importance of research experience when evaluating research (blog):
“So pointing out why a study is not perfect is not enough: good criticism takes into account that research always involves a trade-off between validity and practicality… good research is always a compromise between experimental rigor, practical feasibility, and ethical considerations. To be able to appreciate this as a critic, it really helps to have been actively involved in research projects. I do not mean to say that we should become less critical, but rather that we become better constructive critics if we are able to empathize with the researcher’s goals and constraints.” The value of experience in criticizing research (Rolf Zwaan’s blog)

Relationship between time to reject without review, the review process, and author satisfaction

In an analysis across scientific fields, the authors find: “One-third of journals take more than 2 weeks for an immediate (desk) rejection and one sixth even more than 4 weeks. This suggests that besides the time reviewers take, inefficient editorial processes also play an important role. As might be expected, shorter peer review processes and those of accepted papers are rated more positively by authors. More surprising is that peer review processes in the fields linked to long processes are rated highest and those in the fields linked to short processes lowest. Hence authors’ satisfaction is apparently influenced by their expectations regarding what is common in their field. Qualitative information provided by the authors indicates that editors can enhance author satisfaction by taking an independent position vis-à-vis reviewers and by communicating well with authors.” Huisman J, Smits J. Duration and quality of the peer review process: the author’s perspective. Scientometrics (2017). doi:10.1007/s11192-017-2310-5


Reporting race/ethnicity in research

“An explanation of who classified individuals as to race, ethnicity, or both, the classifications used, and whether the options were defined by the investigator or the participant should be included in the Methods section. The reasons that race/ethnicity was assessed in the study also should be described in the Methods section.” Robinson JK, McMichael AJ, Hernandez C. Transparent Reporting of Demographic Characteristics of Study Participants. JAMA Dermatol. 2017;153(3):263-264. doi:10.1001/jamadermatol.2016.5978

What is happening with the researcher gender gap, in 12 countries?
A report from Elsevier (using Scopus): Gender in the Global Research Landscape. Analysis of research performance through a gender lens across 20 years, 12 geographies, and 27 subject areas. (2017)
and Scholarly Kitchen’s assessment: Alice Meadows. The Global Gender Gap: Research and Researchers. Scholarly Kitchen Blog.

Is there a gender bias in selecting reviewers? 
“Here we present evidence that women of all ages have fewer opportunities to take part in peer review. Using a large data set that includes the genders and ages of authors and reviewers from 2012 to 2015 for the journals of the American Geophysical Union (AGU), we show that women were used less as reviewers than expected…The bias is a result of authors and editors, especially male ones, suggesting women as reviewers less often, and a slightly higher decline rate among women in each age group when asked.
These findings underline the need for efforts to increase female scientists’ engagement in manuscript reviewing to help in the advancement and retention of women in science.” Lerback J, Hanson B. Journals invite too few women to referee. Nature | Comment, January 25, 2017.


BMJ will declare all its industry revenues, in the interest of transparency. Hear, hear! BMJ editor confirms that revenues from industry will be declared. BMJ 2015;351:h3908.


Newsletter #4, originally circulated March 16, 2017. Sources include Retraction Watch, COPE, LinkedIn, and Scholarly Kitchen. Providing the links and information does not imply WAME’s endorsement.

How would you change medical publishing? Authors offer bribes, New issues in informed consent, Why do predatory journals exist?


  • What would you change about medical publishing? Scholarly Kitchen offers some interesting perspectives. Share yours via Comments below.

If you could change one thing about scholarly publishing, what would that be? (Scholarly Kitchen blog)


  • Editor receives offer of cash for publishing manuscripts

Pay to play? Three new ways companies are subverting academic publishing (Retraction Watch blog)

  • Editors step down after their citation cartel was discovered (European Geosciences Union)  (Retraction Watch blog)


  • Commentaries on new developments with informed consent: e-consent and internet-based clinical trials, changes in perceptions of risk, new types of risk

Informed Consent (NEJM [free])


  • Should scientists attempt to replicate their own studies? They have an inherent desire (or conflict of interest) to see consistent results

Why Scientists Shouldn’t Replicate Their Own Work (Discover Magazine)


  • Do predatory journals fill a niche?

Predatory Publishing as a Rational Response to Poorly Governed Academic Incentives (Scholarly Kitchen blog)


  • A neuroscientist posts his peer reviews online, emails the authors, and tweets a link to his review (but only if the manuscript is available as a preprint)

The Rogue Neuroscientist on a Mission to Hack Peer Review (Wired Magazine)

Newsletter #3. Originally circulated March 7, 2017. Sources include Retraction Watch and Scholarly Kitchen. Providing the links and information does not imply WAME’s endorsement.