Should editors get a CLUE? Who should investigate Questionable Research Practices? Is Chinese research seriously sullied by misconduct? How to solve publishing’s wicked challenges? Pro-predatory P&T committees?


  • Liz Wager and others posted the CLUE (Cooperation And Liaison Between Universities And Editors) guidelines on the preprint server bioRxiv, regarding how journals and institutions should work together in alleged research misconduct cases. They will consider comments and suggestions posted on the preprint. Their main recommendations:
    • “National registers of individuals or departments responsible for research integrity at institutions should be created
    • Institutions should develop mechanisms for assessing the validity of research reports that are independent from processes to determine whether individual researchers have committed misconduct
    • Essential research data and peer review records should be retained for at least 10 years
    • While journals should normally raise concerns with authors in the first instance, they also need criteria to determine when to contact the institution before, or at the same time as, alerting the authors in cases of suspected data fabrication or falsification to prevent the destruction of evidence
    • Anonymous or pseudonymous allegations made to journals or institutions should be judged on their merit and not dismissed automatically
    • Institutions should release relevant sections of reports of research trustworthiness or misconduct investigations to all journals that have published research that was the subject of the investigation.”

Editors: One proposed CLUE recommendation is “While journals should normally raise concerns with authors in the first instance, they also need criteria to determine when to contact the institution before, or at the same time as, alerting the authors in cases of suspected data fabrication or falsification to prevent the destruction of evidence.” What criteria do you think would be appropriate?

Preprint: Wager E et al. Cooperation And Liaison Between Universities And Editors (CLUE): Recommendations On Best Practice doi:

Interview: When misconduct occurs, how should journals and institutions work together? (Retraction Watch)

  • Denmark is redefining how it handles research misconduct

As of July 1, research misconduct will be limited to fabrication, falsification, and plagiarism and will be investigated by the Board for the Prevention of Scientific Misconduct. Institutions remain responsible for investigating allegations of Questionable Research Practices (e.g., selective reporting of results to support the hypothesis).

Denmark to institute sweeping changes in handling misconduct (Retraction Watch)

  • A large proportion of Chinese research may be affected by misconduct

The survey, published in Science and Engineering Ethics, estimates 40%, with a standard deviation of ±24%. “The forms of misconduct that were most concerning to respondents (ahead of falsification, fabrication, and duplication) were plagiarism (25%) and the ‘inclusion of someone without permission or contribution in the authorship’ (28%)…The survey also shows that scientists strongly feel authorities have done little to address the underlying publish-or-perish environment that breeds misconduct; 72% thought that reforms to current systems of academic assessment was the most important measure, with only 13% prioritizing stronger systems of monitoring for misconduct.”

Four in 10 biomedical papers out of China are tainted by misconduct, says new survey (Retraction Watch)

  • Ginny Barbour concludes her term as COPE Chair and comments on positive changes and wicked challenges in publishing: “The importance of good processes is only underpinned by the fact that the types of problems that editors face are increasing in complexity.”

From the outgoing chair  (COPE Digest)

  • Should advisors publish with their PhD students?

Supervisors are morally obliged to publish with their PhD students (Times Higher Education — registration may be required)

  • Quest for Research Excellence Conference
    • Location: The George Washington University, Washington, DC
    • Date: August 7-9, 2017

The 2017 Quest for Research Excellence Conference is co-sponsored by the Office of Research Integrity, The George Washington University (GWU), and Public Responsibility in Medicine and Research. “The goal of the Quest for Research Excellence conference series is to fuel knowledge sharing among all the parties involved in promoting the responsible conduct of research and scientific integrity, from scientists to educators, administrators, government officials, journal editors, science publishers and attorneys.”

Office of Research Integrity 



The predatory/pseudo-journal plot thickens: A university promotion & tenure committee may be complicit in its faculty publishing in predatory/pseudo-journals. “…I included my initial finding that I was one of a minority of researchers in my department with no publications in predatory journals.” The author suggests that administrators with research backgrounds may be less likely to equate predatory with legitimate journal publications.

When most faculty publish in predatory journals, does the school become “complicit?” (Retraction Watch)



A brief review of citation performance indicators. “A good indicator simplifies the underlying data, is reliable in its reporting, provides transparency to the underlying data, and is difficult to game. Most importantly, a good indicator has a tight theoretical connection to the underlying construct it attempts to measure.” Has a good indicator been created?



A Canadian initiative to help implement ORCID more broadly, as the greatest challenge is still to get people to register their ORCID iD. “Consortium members have access to the Premium Member API, which facilitates integrating ORCID identifiers in key systems and workflows, such as research information systems, manuscript submission systems, grant application processes, and membership databases.” You can get your iD for free at .

ORCID-CA, the ORCID Consortium in Canada, to provide Canadian institutions and organizations the opportunity to obtain premium membership to ORCID (CRKN/RCDR)


Newsletter #9, originally circulated May 23, 2017. Sources of links include Retraction Watch, Scholarly Kitchen, Twitter.   Providing the links does not imply WAME’s endorsement.



Paraphrasing plagiarism? Who gets the DiRT? Coming to terms with conflicts of interest: CROs, practice guidelines, authors, editors, publishers. Future of peer review, sharing data more easily


  • Free paraphrasing tools make it easier to evade plagiarism detection software, requiring manual review to identify problems. The article provides useful tips for identifying such work. However, how does one determine whether awkward phrasing is due to a paraphrasing tool or to a lack of English writing fluency?

A troubling new way to evade plagiarism detection software. (And how to tell if it’s been used.) (Retraction Watch)

  • Retraction Watch and STAT announce the DiRT (Do the Right Thing) Award and its first recipient: a judge who rejected a defamation lawsuit against a journal over its expressions of concern.

Announcing the DiRT Award, a new “doing the right thing” prize — and its first recipient (Retraction Watch)



  • Challenges to trial integrity may arise when for-profit clinical research organizations (CROs) conduct international RCTs, as they increasingly do, as illustrated by the TOPCAT spironolactone trial.

Serious Questions Raised About Integrity Of International Trials (CardioBrief)

  • A JAMA theme issue on conflicts of interest includes some commentaries [some restricted access]; the following seem especially relevant to editors:

(1) Why There Are No “Potential” Conflicts of Interest, by McCoy and Emanuel, who argue that conflicts of interest are never merely “potential”: conflicts of interest exist, and there are ways to mitigate them

(2) Strategies for Addressing a Broader Definition of Conflicts of Interest by McKinney and Pierce: “[Conflict of interest] disclosure is thus useful as a minimum expectation, but is fundamentally insufficient. It is one tool in a toolbox, but no more.”

(3) Conflict of Interest in Practice Guidelines Panels by Hal Sox, including guidance from the Institute of Medicine, useful to editors who review such guidelines. “To accept a recommendation for practice, the profession and the public require a clear explanation of the reasoning linking the evidence to the recommendations. The balance of harms and benefits is a valuable heuristic for determining the strength of a recommendation, but this determination often involves a degree of subjectivity because harms and benefits seldom have the same units of measure. Because of these subjective elements, guideline development is vulnerable to biased judgments.”

(4) How Should Journals Handle the Conflict of Interest of Their Editors? Who Watches the “Watchers”? by Gottlieb and Bressler, who discuss current recommendations for how editors should handle their conflicts of interest. As is usually the case, the advice does not address small journals with very few decision-making editors; other solutions may be needed in those cases.

(5) Medical Journals, Publishers, and Conflict of Interest by JAMA’s publisher Tom Easley. This article pertains primarily to large journal-publisher relationships, but many journals have a different arrangement and additional guidance is needed.



  • Predatory Indian journals apply to DOAJ in large numbers

“Since March 2014, when the new criteria for DOAJ listing were put out, there have been about 1,600 applications from Open Access journal publishers in India…Of these, only 4% (74) were found to be from genuine publishers and accepted for inclusion in the DOAJ directory. While 18% applications are still being processed, 78% were rejected for various reasons. One of the main reasons for rejection is the predatory or dubious nature of the journals.”

“‘Nearly 20% of the journals have a flashy impact factor and quick publication time, which are quick give-aways….Under contact address, some journal websites do not provide any address but just a provision for comments. In many cases, we have written to people who have been listed as reviewers to know if the journal website is genuine.’”

Predatory journals make desperate bid for authenticity (The Hindu)

  • A journal published by Gavin changes its name from Journal of Arthritis and Rheumatology, in response to the American College of Rheumatology, to a name very similar to that of a different journal




BioMed Central and Digital Science publish a report on “What might peer review look like in 2030?” and recommend:

  1. “Find new ways of matching expertise and reviews by better identifying, verifying and inviting peer reviewers (including using AI)
  2. Increase diversity in the reviewer pool (including early career researchers, researchers from different regions, and women)
  3. Experiment with different and new models of peer review, particularly those that increase transparency
  4. Invest in reviewer training programs
  5. Find cross-publisher solutions to improve efficiency and benefit all stakeholders, such as portable peer review
  6. Improve recognition for review by funders, institutions, and publishers
  7. Use technology to support and enhance the peer review process, including automation”

The Future of Peer Review (Scholarly Kitchen)



Angela Cochran blogs about the apparent failure of online commenting, but she defines success as the percentage of papers with comments. If few letters to the editor are published, do we consider them a waste? Maybe the approach isn’t mature yet. Ultimately, all post-publication peer review (PPPR) comments need to be compiled with the article. If they’re useful to the commenters, some readers, and maybe the authors, that’s sufficient.

Should we stop with the commenting already? (Scholarly Kitchen)



Figshare releases a new platform to help authors share data more easily

Figshare Launches New Tool for Publishers To Support Open Research (PRWeb)


Newsletter #8, first circulated May 8, 2017.  Sources of links include Retraction Watch, Stat News, Scholarly Kitchen. Providing the links does not imply WAME’s endorsement.


Why do researchers mistakenly publish in predatory journals? How not to identify predatory journals and how (maybe) to identify possibly predatory journals. Fake editor, rehabbed retraction, peer reviewer plagiarizing. Writing for a lay audience; proof of a famous problem almost lost to publishing obscurity


  • Why do researchers mistakenly publish in predatory journals? How not to identify predatory journals

“An early-career researcher isn’t necessarily going to have the basic background knowledge to say ‘this journal looks a bit dodgy’ when they have never been taught what publishing best practice actually looks like…We also have to consider the language barrier. It is only fair, since we demand that the rest of the scientific world communicates in academic English. As a lucky native speaker, it takes me a few seconds to spot nonsense and filler text in a journal’s aims and scope, or a conference ‘about’ page, or a spammy ‘call for papers’ email. It also helps that I have experience of the formal conventions and style that are used for these types of communication. Imagine what it is like for a researcher with English as a basic second language, who is looking for a journal in which to publish their first research paper? They probably will not spot grammatical errors (the most obvious ‘red flag’) on a journal website, let alone the more subtle nuances of journal-speak.”

How should you not identify a predatory journal? “I know one good-quality journal which was one of the first in its country to get the ‘Green Tick’ on DOAJ. I’ve met the editor who is a keen open access and CC-BY advocate. However, the first iteration of the journal’s website and new journal cover was a real shock. It had all the things we might expect on a predatory journal website: 1990s-style flashy graphics, too many poorly-resized pictures, and the homepage (and journal cover) plastered with logos of every conceivable indexing service they had an association with…I knew this was a good journal, but the website was simply not credible, so we strongly advised them to clean up the site to avoid the journal being mistaken for predatory…This felt wrong (and somewhat neo-colonial). ‘Professional’ website design as we know it is expensive, and what is wrong with creating a website that appeals to your target audience, in the style they are familiar with? In the country that this journal is from, a splash of colour and flashing lights are used often in daily life, especially when marketing a product. I think we need to bear in mind that users from the Global South can sometimes have quite different experiences and expectations of ‘credibility’ on the internet, both as creators and users of content and, of course, as consumers looking for a service.”

Andy Nobes, INASP.  Critical thinking in a post-Beall vacuum (Research Information)

  • Characteristics of possibly predatory journals (from Beall’s list) vs legitimate open access journals

Research finds 13 characteristics associated with possibly predatory journals (defined as those on Beall’s list, which included some non-predatory journals). See Table 10 — misspellings, distorted or potentially unauthorized images, editors or editorial board members whose affiliation with the journal was unverified, and use of the Index Copernicus Value for impact factor were much more common among potentially predatory journals. These findings may be somewhat circular since the characteristics evaluated overlap with Beall’s criteria and some of those criteria (e.g., distorted images) were identified in the previous article as falsely identifying predatory journals, for reasons of convention rather than quality. However, the results may be useful for editors who are concerned their journal might be misidentified as predatory.

Shamseer L, Moher D, Maduekwe O, et al. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine 2017;15:28. doi:10.1186/s12916-017-0785-9

  • From the Department of Stings: A fake academic is accepted onto editorial boards and in a few cases, as editor

“We conceived a sting operation and submitted a fake application [Anna O. Szust] for an editor position to 360 journals, a mix of legitimate titles and suspected predators. Forty-eight titles accepted. Many revealed themselves to be even more mercenary than we had expected….We coded journals as ‘Accepted’ only if a reply to our e-mail explicitly accepted Szust as editor (in some cases contingent on financial contribution) or if Szust’s name appeared as an editorial board member on the journal’s website. In many cases, we received a positive response within days of application, and often within hours. Four titles immediately appointed Szust editor-in-chief.”

Sorokowski P, Kulczycki E, Sorokowska A, Pisanski K. Predatory journals recruit fake editor. Nature Comment 543, 481–483 (23 March 2017). doi:10.1038/543481a



  • A retracted study is republished in another journal without the second editor being aware of the retraction. How much history is an author obligated to provide? What is a reasonable approach?

“Strange. Very strange:” Retracted nutrition study reappears in new journal (Retraction Watch)

  • A peer reviewer plagiarized text from the manuscript under review. “We received a complaint from an author that his unpublished paper was plagiarized in an article published in the Journal... After investigation, we uncovered evidence that one of the co-authors of … acted as a reviewer on the unpublished paper during the peer review process at another journal. We ran a plagiarism report and found a high percentage of similarity between the unpublished paper and the one published in the Journal... After consulting with the corresponding author, the editors decided to retract the paper.” Publishing timing does not always reveal who has plagiarized whom.

Nightmare scenario: Text stolen from manuscript during review (Retraction Watch)



  • Instructions for writing research summaries for a lay audience. “It is particularly intended to help scientists who are used to writing about biomedical and health research for their peers to reach a wider audience, including the general public, research funders, health-care professionals, patients and other scientists unfamiliar with the research being described…Plain English avoids using jargon, technical terms, acronyms and any other text that is not easy to understand. If technical terms are needed, they should be properly explained. When writing in plain English, you should not change the meaning of what you want to say, but you may need to change the way you say it…A plain-English summary is not a ‘dumbed down’ version of your research findings. You must not treat your audience as stupid or patronise them.”

Access to Understanding (British Library)

  • A retired mathematician proved, and published, the Gaussian correlation inequality, yet the proof remained obscure because it was published in a less well-known journal. “But Royen, not having a career to advance, chose to skip the slow and often demanding peer-review process typical of top journals. He opted instead for quick publication in the Far East Journal of Theoretical Statistics, a periodical based in Allahabad, India, that was largely unknown to experts and which, on its website, rather suspiciously listed Royen as an editor. (He had agreed to join the editorial board the year before.)…With this red flag emblazoned on it, the proof continued to be ignored.”

A Long-Sought Proof, Found and Almost Lost (Quanta Magazine)



How are types of statistics used changing over time? “…the average number of methods used per article was 1.9 in 1978–1979, 2.7 in 1989, 4.2 in 2004–2005, and 6.1 in 2015. In particular, there were increases in the use of power analysis (i.e., calculations of power and sample size) (from 39% to 62%), epidemiologic statistics (from 35% to 50%), and adjustment and standardization (from 1% to 17%) during the past 10 years. In 2015, more than half the articles used power analysis (62%), survival methods (57%), contingency tables (53%), or epidemiologic statistics (50%).” Are more journals now in need of statistical reviewers?

Sato Y, Gosho M, Nagashima K, et al. Statistical Methods in the Journal — An Update. N Engl J Med 2017;376:1086-1087. doi:10.1056/NEJMc1616211




Newsletter #5, circulated April 1, 2017. Sources include Retraction Watch and the Open Science Initiative listserv. Providing the links does not imply WAME’s endorsement.

Publishing research with ethical lapses, P values, Reproducibility, WAME’s predatory journals statement


  • An editorial by Bernard Lo and Rita Redberg discusses ethical issues in recently published research in which abnormal lab values were not conveyed to research participants: “Should a study with an ethical lapse be published?…Many journals will not publish research with grave ethical violations, such as lack of informed consent, lack of institutional review board (IRB) approval, or scientific misconduct. However, if violations are contested or less serious, as in this study, the ethical consensus has been to publish valid findings, together with an editorial to raise awareness of the ethical problems and stimulate discussion of how to prevent or address them.”

Addressing Ethical Lapses in Research (JAMA) [formerly free, now first PDF page visible]

  • What should research misconduct be called? “At the heart of the debate is the history of the term. In the U.S., in particular, lobbying from scientists dating to the 1980s has resulted in the term ‘misconduct’ being codified to only refer to the cardinal sins of falsification, fabrication, and plagiarism. This has left lesser offenses, often categorized as ‘questionable research practices,’ relatively free from scrutiny.” Nicholas Steneck, a research ethicist at the University of Michigan in Ann Arbor, calls the term “artificial.”

Does labeling bad behavior “scientific misconduct” help or hurt research integrity? A debate rages (Retraction Watch Blog)


  • Hilda Bastian provides 5 tips for avoiding P value potholes: commonly encountered problems with how P values are used and interpreted.

5 Tips for Avoiding P-Value Potholes (Absolutely Maybe blog)

  • Videos on research methods related to epidemiology, by Greg Martin, MD, MPH, MBA (University of the Witwatersrand) — basic but useful for anyone wanting a quick, well-done overview on a variety of research topics.

Epidemiology (YouTube)

  • For a bit of humor, The Five Diseases of Academic Publishing.

Got “significosis?” Here are the five diseases of academic publishing (Retraction Watch blog)


  • Acceptance rates for journals applying for membership to OASPA: “Between 2013 and 2015 we accepted fewer than 25% of the total number of applications we received. Some from 2016 are still undergoing review, but we expect the number of accepted applications for last year to fall below 10% once all are concluded.”

Identifying quality in scholarly publishing: Not a black and white issue (OASPA Blog)


  • Overcoming nonreproducibility in basic and preclinical research, by John Ioannidis: “The evidence for nonreproducibility in basic and preclinical biomedical research is compelling. Accumulating data from diverse subdisciplines and types of experimentation suggest numerous problems that can create a fertile ground for nonreproducibility. For example, most raw data and protocols are often not available for in-depth scrutiny and use by other scientists. The current incentive system rewards selective reporting of success stories.”

Acknowledging and Overcoming Nonreproducibility in Basic and Preclinical Research (JAMA) [formerly free, now first PDF page visible]

  • Research reported in newspapers has poor replication validity: “Journalists preferentially cover initial findings although they are often contradicted by meta-analyses and rarely inform the public when they are disconfirmed.”

Poor replication validity of biomedical association studies reported by newspapers (PLOS ONE)


WAME published a new statement on Identifying Predatory or Pseudo-Journals.

Identifying Predatory or Pseudo-Journals (WAME)


WAME Newsletter #2, original version circulated February 23, 2017. Identified (in part) from Retraction Watch, Stat News, and Linked In Global Health. Providing the links and information does not imply WAME’s endorsement.