ICMJE on data sharing / Not so random RCTs? / Positive results bias / What’s next for peer review? / Ethics of predatory publishing / Is the Impact Factor stochastic?


ICMJE statement on data sharing, published June 5, 2017, in the ICMJE journals:

“1. As of July 1, 2018 manuscripts submitted to ICMJE journals that report the results of clinical trials must contain a data sharing statement as described below

2. Clinical trials that begin enrolling participants on or after January 1, 2019 must include a data sharing plan in the trial’s registration…If the data sharing plan changes after registration this should be reflected in the statement submitted and published with the manuscript, and updated in the registry record. Data sharing statements must indicate the following: whether individual deidentified participant data (including data dictionaries) will be shared; what data in particular will be shared; whether additional, related documents will be available (e.g., study protocol, statistical analysis plan, etc.); when the data will become available and for how long; by what access criteria data will be shared (including with whom, for what types of analyses and by what mechanism)…Sharing clinical trial data is one step in the process articulated by the World Health Organization (WHO) and other professional organizations as best practice for clinical trials: universal prospective registration; public disclosure of results from all clinical trials (including through journal publication); and data sharing.”

Taichman DB, Sahni P, Pinborg A, Peiperl L, Laine C, James A, et al. Data Sharing Statements for Clinical Trials: A Requirement of the International Committee of Medical Journal Editors. PLoS Med. 2017;14(6):e1002315. https://doi.org/10.1371/journal.pmed.1002315



  • Not so random?

Randomization in an RCT confers an advantage over other study designs because random sampling means that any differences in variables between comparison groups occur at random rather than through confounding. However, some researchers have identified RCTs that do not appear to have been randomly sampled, a clue that the methodology may have differed from what the authors reported.

Carlisle “analysed the distribution of 72,261 means of 29,789 variables in 5087 randomised, controlled trials published in eight journals between January 2000 and December 2015…Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10¹⁵ (equivalent to one drop of water in 20,000 Olympic-sized swimming pools).”

Carlisle JB. Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals. Anaesthesia. 2017;72:944–952. doi:10.1111/anae.13938
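The intuition behind this kind of screening can be sketched in a few lines: under genuine randomization, p values comparing baseline variables between arms should be roughly uniform on [0, 1], so a strong departure from uniformity is a red flag. The sketch below is my own illustration, not Carlisle's actual code or statistics; it substitutes a simple z-test for his Monte Carlo procedure and uses a Kolmogorov–Smirnov-style distance from the uniform CDF.

```python
# Illustrative sketch: genuinely randomized baseline data yield
# roughly uniform p values; fabricated, "too similar" groups pile
# p values up near 1, which a distance-from-uniform check can flag.
import math
import random

def z_test_p(a, b):
    """Two-sample z-test p value for a difference in means
    (normal approximation; adequate for illustration)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def ks_vs_uniform(pvals):
    """Max distance between the empirical CDF of the p values and
    the uniform CDF; large values suggest non-random baseline data."""
    ps = sorted(pvals)
    n = len(ps)
    return max(max(abs((i + 1) / n - p), abs(i / n - p))
               for i, p in enumerate(ps))

rng = random.Random(42)

# A genuinely randomized trial: each baseline variable is drawn
# independently for the two arms from the same distribution.
clean = [z_test_p([rng.gauss(0, 1) for _ in range(50)],
                  [rng.gauss(0, 1) for _ in range(50)])
         for _ in range(200)]

# A "too good to be true" trial: arm B is a near-copy of arm A,
# so every baseline p value lands close to 1.
fabricated = []
for _ in range(200):
    a = [rng.gauss(0, 1) for _ in range(50)]
    b = [x + rng.gauss(0, 0.01) for x in a]
    fabricated.append(z_test_p(a, b))

print(ks_vs_uniform(clean))       # small: consistent with randomization
print(ks_vs_uniform(fabricated))  # large: groups improbably similar
```

Note that, as Carlisle's paper discusses, extreme departures in either direction are suspicious: baseline groups can be too different (sloppy or absent randomization) or too similar (data possibly constructed by hand).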

  • In another study, Carlisle et al applied the same approach and concluded that “The Monte Carlo analysis may be an appropriate screening tool to check for non-random (i.e. unreliable) data in randomised controlled trials submitted to journals.”

Carlisle JB, Dexter F, Pandit JJ, Shafer SL, Yentis SM. Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials. Anaesthesia. 2015;70:848–858. doi:10.1111/anae.13126

  • Bolland et al used Carlisle’s method to analyze RCTs published by a group of investigators “about which concerns have been raised” and found:

“Treatment groups were improbably similar. The distribution of p values for differences in baseline characteristics differed markedly from the expected uniform distribution (p = 5.2 × 10⁻⁸²). The distribution of standardized sample means for baseline continuous variables and the differences between participant numbers in randomized groups also differed markedly from the expected distributions (p = 4.3 × 10⁻⁴ and p = 1.5 × 10⁻⁵, respectively).”

Bolland MJ, Avenell A, Gamble GD, Grey A. Systematic review and statistical analysis of the integrity of 33 randomized controlled trials. Neurology. 2016. doi:10.1212/WNL.0000000000003387

  • Is this approach yet another type of manuscript review for busy editors to apply, assuming the calculations are not too daunting? In Retraction Watch, Oransky comments, “So should all journals use the method — which is freely available online — to screen papers? In their editorial accompanying Carlisle’s paper, Loadsman and McCulloch note that if that were to become the case, ‘…dishonest authors could employ techniques to produce data that would avoid detection. We believe this would be quite easy to achieve although, for obvious reasons, we prefer not to describe the likely methodology here.’ Which begs the question: what should institutions’ responsibilities be in all this?”

From: Two in 100 clinical trials in eight major journals likely contain inaccurate data: Study (Retraction Watch)

  • In other news, PubPeer announces PubPeer 2.0. From Retraction Watch: “RW: Will the identity changes you’ve installed make it more difficult for scientists to unmask (and thereby seek recourse from) anonymous commenters? BS: Yes, that is one of the main motivations for that change. Once the transition to the new site is complete our goal is to not be able to reveal any user information if we receive another subpoena or if the site is hacked.”

Meet PubPeer 2.0: New version of post-publication peer review site launches today (Retraction Watch)



Addressing bias toward positive results

  • “The good news is that the scientific community seems increasingly focused on solutions…But true success will require a change in the culture of science. As long as the academic environment has incentives for scientists to work in silos and hoard their data, transparency will be impossible. As long as the public demands a constant stream of significant results, researchers will consciously or subconsciously push their experiments to achieve those findings, valid or not. As long as the media hypes new findings instead of approaching them with the proper skepticism, placing them in context with what has come before, everyone will be nudged toward results that are not reproducible…For years, financial conflicts of interest have been properly identified as biasing research in improper ways. Other conflicts of interest exist, though, and they are just as powerful — if not more so — in influencing the work of scientists across the country and around the globe. We are making progress in making science better, but we’ve still got a long way to go.”

Carroll AE.  Science Needs a Solution for the Temptation of Positive Results (NY Times)

  • But replication leads to a different bias, says Strack: “In contrast, what is informative for replications? Not that the original finding has been replicated, but that it has been ‘overturned.’ Even if the editors’ bias (Gertler, 2016) bias [sic] is controlled by preregistration, overturned findings are more likely to attract readers’ attention and to get cited…However, there is an important difference between these two biases in that a positive effect can only be obtained by increasing the systematic variance and/or decreasing the error variance. Typically, this requires experience with the subject matter and some effort in controlling unwanted influences, while this may also create some undesired biases. In contrast, to overturn the original result, it is sufficient to decrease the systematic variance and to increase the error. In other words, it is easier to be successful at non-replications while it takes expertise and diligence to generate a new result in a reliable fashion.”

Strack F. From Data to Truth in Psychological Science: A Personal Perspective. Front Psychol. 2017. https://doi.org/10.3389/fpsyg.2017.00702



What’s next for peer review?

From the London School of Economics blog, reproduced from “SpotOn Report: What might peer review look like in 2030?” from BioMed Central and Digital Science:

“To square the [peer reviewer] incentives ledger, we need to look to institutions, world ranking bodies and funders. These parties hold either the purse strings or the decision-making power to influence the actions of researchers. So how can these players more formally recognise review to bring balance back to the system and what tools do they need to do it?

Institutions: Quite simply, institutions could give greater weight to peer review contributions in funding distribution and career advancement decisions. If there was a clear understanding that being an active peer reviewer would help further your research career, then experts would put a greater emphasis on their reviewing habits and research would benefit.

Funders: If funders factored in peer review contributions and performance when determining funding recipients, then institutions and individuals would have greater reason to contribute to the peer review process.

World ranking bodies: Like researchers, institutions also care about their standing and esteem on the world stage. If world ranking bodies such as THE World University Rankings and QS World Rankings gave proportionate weighting to the peer review contributions and performance of institutions, then institutions would have greater reason to reward the individuals tasked with peer reviewing.

More formal weighting for peer review contributions also makes sense, because peer review is actually a great measure of one’s expertise and standing in the field. Being asked to peer review is external validation that academic editors deem a researcher equipped to scrutinise and make recommendations on the latest research findings.

Researchers: Researchers will do what they have to in order to advance their careers and secure funding. If institutions and funders make it clear that peer review is a pathway to progression, tenure and funding, researchers will make reviewing a priority.

Tools: In order for peer review to be formally acknowledged, benchmarks are necessary. There needs to be a clear understanding of the norms of peer review output and quality across the myriad research disciplines in order to assign any relative weighting to an individual’s review record. This is where the research enterprise can utilise the new data tools available to track, verify and report all the different kinds of peer review contributions. These tools already exist and researchers are using them. It’s time the institutions that rely on peer review got on board too.”

Formal recognition for peer review will propel research forward (London School of Economics)


Biochemia Medica published a cluster of papers on predatory journals this month, including research by Stojanovski and Marusic on 44 Croatian open access journals, which concludes: “In order to clearly differentiate themselves from predatory journals, it is not enough for journals from small research communities to operate on non-commercial bases…[they must also have] transparent editorial policies.” The issue also includes a paper on the ethical issues of predatory publishing (of which, by way of disclosure, I am a coauthor) and an essay by Jeffrey Beall.


“…more productive years yield higher-cited papers because they have more chances to draw a large value. This suggests that citation counts, and the rewards that have come to be associated with them, may be more stochastic [randomly determined] than previously appreciated.”

Michalska-Smith MJ, Allesina S. And, not or: Quality, quantity in scientific publishing. PLoS ONE. 2017;12(6):e0178074. https://doi.org/10.1371/journal.pone.0178074
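The mechanism in the quote, that more draws from a heavy-tailed distribution give more chances at a large value, can be shown with a toy simulation. This is my own sketch, not the paper's model; the lognormal distribution and its parameters are arbitrary choices for illustration.

```python
# Toy illustration: if each paper's citation count is an independent
# draw from the same heavy-tailed distribution, a more productive year
# gets more draws, so its single best-cited paper tends to be higher,
# with no change in underlying quality.
import random

rng = random.Random(7)

def best_of(n_papers):
    """Top citation count among n_papers heavy-tailed (lognormal) draws."""
    return max(rng.lognormvariate(2, 1.5) for _ in range(n_papers))

trials = 2000
wins = sum(best_of(50) > best_of(5) for _ in range(trials))
print(wins / trials)  # the 50-paper year usually holds the top paper
```

For continuous i.i.d. draws the 50-paper year should hold the overall maximum with probability 50/55 (about 0.91) purely by symmetry, which is what the simulation recovers: "success" tracks the number of draws, not any difference in the distribution being drawn from.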



  • The American Psychological Association raised the ire of some authors after requesting that links to free copies of APA-published articles (“unauthorized online postings”) from authors’ websites be removed.

Researchers protest publisher’s orders to remove papers from their websites (Retraction Watch)

  • Access challenges in a mobile world 

Bianca Kramer at the University of Utrecht in the Netherlands studied Sci-Hub usage data attributed to her institution and compared it with her library’s holdings data. She found that “75% of Utrecht Sci-Hub downloads would have been available either through our library subscriptions (60%) or as Gold Open Access/free from publisher (15%).” While these data are neither comprehensive nor granular enough for certainty, she concluded that a significant share of Sci-Hub usage was driven by access problems and users’ desire for convenience.

Failure to Deliver: Reaching Users in an Increasingly Mobile World (Scholarly Kitchen)


Newsletter #11: Originally circulated June 18, 2017. Sources of links include Retraction Watch, the Health Information for All listserv, Scholarly Kitchen, and Twitter. Providing the links does not imply WAME’s endorsement.


Paraphrasing plagiarism? Who gets the DiRT? Coming to terms with conflicts of interest: CROs, practice guidelines, authors, editors, publishers. Future of peer review, sharing data more easily


  • Free paraphrasing tools make evading plagiarism detection software easier, requiring manual review to identify problems. The article provides useful tips for identifying such work. However, how does one determine whether the awkward phrasing these tools may create is due to the tool or to limited fluency in written English?

A troubling new way to evade plagiarism detection software. (And how to tell if it’s been used.) (Retraction Watch)

  • Retraction Watch and STAT announce the DiRT (do the right thing) award and the first recipient, apparently a judge who rejected a defamation lawsuit against a journal for expressions of concern.

Announcing the DiRT Award, a new “doing the right thing” prize — and its first recipient (Retraction Watch)



  • Challenges to trial integrity may occur when for-profit clinical research organizations (CROs) conduct international RCTs, as they increasingly do, as illustrated by the TOPCAT spironolactone study.

Serious Questions Raised About Integrity Of International Trials (CardioBrief)

  • A JAMA theme issue on conflicts of interest includes some commentaries [some restricted access]; the following seem especially relevant to editors:

(1) Why There Are No “Potential” Conflicts of Interest, by McCoy and Emanuel, who argue that conflicts of interest are not merely “potential”: a conflict either exists or it does not, and existing conflicts can be mitigated

(2) Strategies for Addressing a Broader Definition of Conflicts of Interest by McKinney and Pierce: “[Conflict of interest] disclosure is thus useful as a minimum expectation, but is fundamentally insufficient. It is one tool in a toolbox, but no more.”

(3) Conflict of Interest in Practice Guidelines Panels by Hal Sox, including guidance from the Institute of Medicine, useful to editors who review such guidelines. “To accept a recommendation for practice, the profession and the public require a clear explanation of the reasoning linking the evidence to the recommendations. The balance of harms and benefits is a valuable heuristic for determining the strength of a recommendation, but this determination often involves a degree of subjectivity because harms and benefits seldom have the same units of measure. Because of these subjective elements, guideline development is vulnerable to biased judgments.”

(4) How Should Journals Handle the Conflict of Interest of Their Editors? Who Watches the “Watchers”? by Gottlieb and Bressler, who discuss current recommendations for how editors should handle their conflicts of interest. As is usually the case, the advice does not address small journals with very few decision-making editors; other solutions may be needed in those cases.

(5) Medical Journals, Publishers, and Conflict of Interest, by JAMA’s publisher Tom Easley. This article pertains primarily to large journal–publisher relationships, but many journals have a different arrangement and additional guidance is needed.



  • Predatory Indian journals apply to DOAJ in large numbers

“Since March 2014, when the new criteria for DOAJ listing were put out, there have been about 1,600 applications from Open Access journal publishers in India…Of these, only 4% (74) were found to be from genuine publishers and accepted for inclusion in the DOAJ directory. While 18% applications are still being processed, 78% were rejected for various reasons. One of the main reasons for rejection is the predatory or dubious nature of the journals.”

” ‘Nearly 20% of the journals have a flashy impact factor and quick publication time, which are quick give-aways….Under contact address, some journal websites do not provide any address but just a provision for comments. In many cases, we have written to people who have been listed as reviewers to know if the journal website is genuine.’ ”

Predatory journals make desperate bid for authenticity (The Hindu)

  • A journal published by Gavin changes its name from Journal of Arthritis and Rheumatology in response to the American College of Rheumatology, to a name very similar to that of a different journal.




BioMed Central and Digital Science publish a report, “What might peer review look like in 2030?”, and recommend:

  1. “Find new ways of matching expertise and reviews by better identifying, verifying and inviting peer reviewers (including using AI)
  2. Increase diversity in the reviewer pool (including early career researchers, researchers from different regions, and women)
  3. Experiment with different and new models of peer review, particularly those that increase transparency
  4. Invest in reviewer training programs
  5. Find cross-publisher solutions to improve efficiency and benefit all stakeholders, such as portable peer review
  6. Improve recognition for review by funders, institutions, and publishers
  7. Use technology to support and enhance the peer review process, including automation”

The Future of Peer Review (Scholarly Kitchen)



Angela Cochran blogs about the apparent failure of online commenting, but she defines success as the percentage of papers with comments. If few letters to the editor are published, do we consider them a waste? Maybe the approach isn’t mature yet. Ultimately, all post-publication peer review (PPPR) comments need to be compiled with the article. If they’re useful to the commenters, some readers, and maybe the authors, that’s sufficient.

Should we stop with the commenting already? (Scholarly Kitchen)



Figshare releases new platform to help authors share data more easily

Figshare Launches New Tool for Publishers To Support Open Research (PRWeb)


Newsletter #8, first circulated May 8, 2017.  Sources of links include Retraction Watch, Stat News, Scholarly Kitchen. Providing the links does not imply WAME’s endorsement.


Clinical trial data sharing — not just for “research parasites” anymore? Use Unpaywall to find free articles, join Initiative for Open Citations. Are women authors different? What will your journal do without you? Can technology improve global health?


Clinical trial data sharing — not just for “research parasites” anymore

“Using the NHLBI data repository, 370 investigators requested data from at least one clinical trial — 51% of them trials on cardiovascular prevention and treatment. Requests were largely for post hoc secondary analysis (72%); a minority of requests were initiated for analytic or statistical approaches to clinical trials (9%) and meta-analyses (7%). More than half of investigators (53%) made their requests in the last 4.4 years of the study period (January 2000 to May 2016), ‘indicating an increasing demand for trial data that has outpaced acquisition,’ wrote Sean A. Coady, MS, MA, of the NHLBI in Bethesda, Md., and colleagues. ‘In contrast, demand for observational data has increased in a pattern more directly proportional to time.’ ”

NHLBI Data Sharing: Fears of ‘Research Parasites’ Melt Away. Experience of NIH institute bolsters value of open trial data (MedPage Today)



  • Unpaywall

Trying to find free articles online? Use http://unpaywall.org, a new widget that identifies free copies of research articles. Unlike the Open Access Button available for libraries and interlibrary loan, it is available to anyone (requires the Firefox or Chrome browser).

Covered in: Putting the OA Into Interlibrary Loan


  • Initiative for Open Citations

“The Initiative for Open Citations (I4OC) is a collaboration between scholarly publishers, researchers, and other interested parties to promote the unrestricted availability of scholarly citation data…The aim of this initiative is to promote the availability of data on citations that are structured, separable, and open. Structured means the data representing each publication and each citation instance are expressed in common, machine-readable formats, and that these data can be accessed programmatically. Separable means the citation instances can be accessed and analyzed without the need to access the source bibliographic products (such as journal articles and books) in which the citations are created. Open means the data are freely accessible and reusable.”
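Those three properties can be illustrated with a minimal machine-readable citation record. The field names below are my own invention for illustration; I4OC does not prescribe this schema (the DOIs are two articles mentioned in this newsletter).

```python
# A hypothetical citation instance, illustrating:
# - Structured: a common machine-readable format (JSON), accessible
#   programmatically.
# - Separable: the citation stands alone, without the full text of the
#   citing article.
# - Open: nothing here requires restricted access to read or reuse.
import json

record = {
    "citing": "10.1371/journal.pone.0178074",  # DOI of the citing work
    "cited": "10.1111/anae.13938",             # DOI of the cited work
}

serialized = json.dumps(record)
parsed = json.loads(serialized)  # programmatic, source-independent access
print(parsed["cited"])  # → 10.1111/anae.13938
```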



  • Fast corrections: Authors use PubMed’s commenting feature PubMed Commons to post corrections before a formal correction is published

Authors alerting readers via PubMed Commons

  • Ghosts who don’t know they’re ghosts: Researcher provides fake contact information for coauthors, who aren’t aware they’re authors

Busted: Researcher used fake contact info for co-authors



A study of economics papers shows that while papers by women spend longer in peer review, their revised manuscripts improve more in readability than men’s. “Research papers with female authors spend six months longer in peer review at the top economics journals…In what appears to be a consequence, papers by women are easier to read and improve more as they are being revised than papers written by men.”

Gender Differences in Peer Review: Economics papers by women are stalled longer at journals – but they end up more readable and more improved (Royal Economic Society)



Succession planning: How to prepare for when you’re no longer around — written more for publishers than editors but maybe useful for some. “With a mature workforce, you need to watch that knowledge and skills do not reside in one person. When that person leaves, for whatever reason, it is entirely possible that you will be stuck and with their departure goes an essential resource that you will be scrambling to replace.”

Succession Planning (Scholarly Kitchen)



Talk with Google: Using Technology to Tackle Global Health’s Biggest Challenges



Newsletter #6, circulated April 11, 2017. Sources include Retraction Watch and the Open Science Initiative listserv. Providing the links does not imply WAME’s endorsement.