ICMJE on data sharing / Not so random RCTs? / Positive results bias / What’s next for peer review? / Ethics of predatory publishing / Is the Impact Factor stochastic?

DATA SHARING

ICMJE statement on data sharing, published June 5, 2017, in the ICMJE journals:

“1. As of July 1, 2018 manuscripts submitted to ICMJE journals that report the results of clinical trials must contain a data sharing statement as described below

2. Clinical trials that begin enrolling participants on or after January 1, 2019 must include a data sharing plan in the trial’s registration…If the data sharing plan changes after registration this should be reflected in the statement submitted and published with the manuscript, and updated in the registry record. Data sharing statements must indicate the following: whether individual deidentified participant data (including data dictionaries) will be shared; what data in particular will be shared; whether additional, related documents will be available (e.g., study protocol, statistical analysis plan, etc.); when the data will become available and for how long; by what access criteria data will be shared (including with whom, for what types of analyses and by what mechanism)…Sharing clinical trial data is one step in the process articulated by the World Health Organization (WHO) and other professional organizations as best practice for clinical trials: universal prospective registration; public disclosure of results from all clinical trials (including through journal publication); and data sharing.”

Taichman DB, Sahni P, Pinborg A, Peiperl L, Laine C, James A, et al. Data Sharing Statements for Clinical Trials: A Requirement of the International Committee of Medical Journal Editors. PLOS Med. 2017;14(6): e1002315. https://doi.org/10.1371/journal.pmed.1002315

 

RESEARCH REPRODUCIBILITY AND MISCONDUCT

  • Not so random?

Randomization in an RCT confers an advantage over other study designs because random allocation means that any differences in baseline variables between comparison groups arise by chance rather than through confounding. However, some researchers have identified RCTs whose baseline data do not appear consistent with random sampling, a clue that the methodology may have differed from what the authors report.

Carlisle “analysed the distribution of 72,261 means of 29,789 variables in 5087 randomised, controlled trials published in eight journals between January 2000 and December 2015…Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10¹⁵ (equivalent to one drop of water in 20,000 Olympic-sized swimming pools).”

Carlisle JB. Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals. Anaesthesia. 2017;72: 944–952. doi:10.1111/anae.13938
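Carlisle’s published method relies on Monte Carlo simulation over reported baseline summary statistics; the sketch below is only a simplified illustration of the underlying idea, not his procedure. The summary statistics are hypothetical, and the uniformity check is a plain Kolmogorov–Smirnov test of whether baseline p values look uniform, as they should under genuine random allocation.

```python
# Simplified illustration of the idea behind Carlisle-style checks -- NOT his exact
# procedure. Under genuine randomisation, p values for baseline differences between
# groups should be approximately Uniform(0, 1); a collection of baseline p values
# that is far too uniform or far too extreme is a flag for further scrutiny, not
# proof of misconduct. All numbers below are hypothetical.
import numpy as np
from scipy import stats

def baseline_p_value(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sided Welch t-test p value computed from published summary statistics."""
    se = np.sqrt(sd1**2 / n1 + sd2**2 / n2)
    t = (mean1 - mean2) / se
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
    return 2 * stats.t.sf(abs(t), df)

# Hypothetical baseline rows: (mean1, sd1, n1, mean2, sd2, n2) for each variable
baseline_rows = [
    (61.2, 9.8, 150, 60.9, 10.1, 148),    # e.g. age
    (27.4, 4.2, 150, 27.6, 4.0, 148),     # e.g. BMI
    (132.0, 15.5, 150, 131.4, 16.0, 148), # e.g. systolic blood pressure
]

p_values = [baseline_p_value(*row) for row in baseline_rows]

# Compare observed baseline p values with the Uniform(0, 1) distribution expected
# under honest random allocation (in practice over many more variables and trials).
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print("baseline p values:", np.round(p_values, 3))
print(f"KS test against Uniform(0,1): statistic={ks_stat:.3f}, p={ks_p:.3f}")
```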

  • In another study, Carlisle et al applied the same approach and concluded that “The Monte Carlo analysis may be an appropriate screening tool to check for non-random (i.e. unreliable) data in randomised controlled trials submitted to journals.”

Carlisle JB, Dexter F, Pandit JJ, Shafer SL, Yentis SM. Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials. Anaesthesia. 2015;70: 848–858. doi:10.1111/anae.13126

  • Bolland et al used Carlisle’s method to analyze RCTs published by a group of investigators “about which concerns have been raised” and found:

“Treatment groups were improbably similar. The distribution of p values for differences in baseline characteristics differed markedly from the expected uniform distribution (p = 5.2 × 10⁻⁸²). The distribution of standardized sample means for baseline continuous variables and the differences between participant numbers in randomized groups also differed markedly from the expected distributions (p = 4.3 × 10⁻⁴ and p = 1.5 × 10⁻⁵, respectively).”

Bolland MJ, Avenell A, Gamble GD, Grey A. Systematic review and statistical analysis of the integrity of 33 randomized controlled trials. Neurology. 2016. doi:10.1212/WNL.0000000000003387

  • Is this approach yet another type of manuscript review for busy editors to apply, assuming the calculations are not too daunting? In Retraction Watch, Oransky comments, “So should all journals use the method — which is freely available online — to screen papers? In their editorial accompanying Carlisle’s paper, Loadsman and McCulloch note that if that were to become the case, ‘…dishonest authors could employ techniques to produce data that would avoid detection. We believe this would be quite easy to achieve although, for obvious reasons, we prefer not to describe the likely methodology here.’” Which raises the question: what should institutions’ responsibilities be in all this?

From: Two in 100 clinical trials in eight major journals likely contain inaccurate data: Study (Retraction Watch)

  • In other news, PubPeer announces PubPeer 2.0. From Retraction Watch: “RW: Will the identity changes you’ve installed make it more difficult for scientists to unmask (and thereby seek recourse from) anonymous commenters? BS: Yes, that is one of the main motivations for that change. Once the transition to the new site is complete our goal is to not be able to reveal any user information if we receive another subpoena or if the site is hacked.”

Meet PubPeer 2.0: New version of post-publication peer review site launches today (Retraction Watch)

 

RESEARCH BIAS

Addressing bias toward positive results

  • “The good news is that the scientific community seems increasingly focused on solutions…But true success will require a change in the culture of science. As long as the academic environment has incentives for scientists to work in silos and hoard their data, transparency will be impossible. As long as the public demands a constant stream of significant results, researchers will consciously or subconsciously push their experiments to achieve those findings, valid or not. As long as the media hypes new findings instead of approaching them with the proper skepticism, placing them in context with what has come before, everyone will be nudged toward results that are not reproducible…For years, financial conflicts of interest have been properly identified as biasing research in improper ways. Other conflicts of interest exist, though, and they are just as powerful — if not more so — in influencing the work of scientists across the country and around the globe. We are making progress in making science better, but we’ve still got a long way to go.”

Carroll AE.  Science Needs a Solution for the Temptation of Positive Results (NY Times)

  • But replication leads to a different bias, says Strack: “In contrast, what is informative for replications? Not that the original finding has been replicated, but that it has been ‘overturned.’ Even if the editors’ bias (Gertler, 2016) bias [sic] is controlled by preregistration, overturned findings are more likely to attract readers’ attention and to get cited…However, there is an important difference between these two biases in that a positive effect can only be obtained by increasing the systematic variance and/or decreasing the error variance. Typically, this requires experience with the subject matter and some effort in controlling unwanted influences, while this may also create some undesired biases. In contrast, to overturn the original result, it is sufficient to decrease the systematic variance and to increase the error. In other words, it is easier to be successful at non-replications while it takes expertise and diligence to generate a new result in a reliable fashion..”

Strack F. From Data to Truth in Psychological Science: A Personal Perspective. Front Psychol. 16 May 2017. https://doi.org/10.3389/fpsyg.2017.00702

 

PEER REVIEW

What’s next for peer review?

From the London School of Economics blog, reproduced from “SpotOn Report: What might peer review look like in 2030?” from BioMed Central and Digital Science:

“To square the [peer reviewer] incentives ledger, we need to look to institutions, world ranking bodies and funders. These parties hold either the purse strings or the decision-making power to influence the actions of researchers. So how can these players more formally recognise review to bring balance back to the system and what tools do they need to do it?

Institutions: Quite simply, institutions could give greater weight to peer review contributions in funding distribution and career advancement decisions. If there was a clear understanding that being an active peer reviewer would help further your research career, then experts would put a greater emphasis on their reviewing habits and research would benefit.

Funders: If funders factored in peer review contributions and performance when determining funding recipients, then institutions and individuals would have greater reason to contribute to the peer review process.

World ranking bodies: Like researchers, institutions also care about their standing and esteem on the world stage. If world ranking bodies such as THE World University Rankings and QS World Rankings gave proportionate weighting to the peer review contributions and performance of institutions, then institutions would have greater reason to reward the individuals tasked with peer reviewing.

More formal weighting for peer review contributions also makes sense, because peer review is actually a great measure of one’s expertise and standing in the field. Being asked to peer review is external validation that academic editors deem a researcher equipped to scrutinise and make recommendations on the latest research findings.

Researchers: Researchers will do what they have to in order to advance their careers and secure funding. If institutions and funders make it clear that peer review is a pathway to progression, tenure and funding, researchers will make reviewing a priority.

Tools: In order for peer review to be formally acknowledged, benchmarks are necessary. There needs to be a clear understanding of the norms of peer review output and quality across the myriad research disciplines in order to assign any relative weighting to an individual’s review record. This is where the research enterprise can utilise the new data tools available to track, verify and report all the different kinds of peer review contributions. These tools already exist and researchers are using them. It’s time the institutions that rely on peer review got on board too.”

Formal recognition for peer review will propel research forward (London School of Economics)

PREDATORY/PSEUDO-JOURNALS

Biochemia Medica published a cluster of papers on predatory journals this month, including research by Stojanovski and Marusic on 44 Croatian open access journals, which concludes: “In order to clearly differentiate themselves from predatory journals, it is not enough for journals from small research communities to operate on non-commercial bases…[they must also have] transparent editorial policies.” The issue also includes a paper on the ethical issues of predatory publishing (of which I am a coauthor, by way of disclosure) and an essay by Jeffrey Beall.

IMPACT FACTOR

“…more productive years yield higher-cited papers because they have more chances to draw a large value. This suggests that citation counts, and the rewards that have come to be associated with them, may be more stochastic [randomly determined] than previously appreciated.”

Michalska-Smith MJ, Allesina S. And, not or: Quality, quantity in scientific publishing. PLOS ONE. 2017;12(6): e0178074. https://doi.org/10.1371/journal.pone.0178074
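The argument is essentially one about order statistics: more papers in a year means more draws from the same citation distribution, so the best-cited paper of a productive year tends to be higher even if nothing about quality changes. The toy simulation below is not the authors’ model; the lognormal distribution and its parameters are arbitrary assumptions used only to illustrate the effect.

```python
# Toy illustration of "more chances to draw a large value" -- not the model used by
# Michalska-Smith and Allesina. Each paper's citation count is drawn from the same
# heavy-tailed distribution (an arbitrary lognormal), so years with more papers get
# a higher expected best-cited paper without any change in underlying quality.
import numpy as np

rng = np.random.default_rng(0)

def average_top_citation(papers_per_year, n_simulated_years=10_000):
    draws = rng.lognormal(mean=2.0, sigma=1.0,
                          size=(n_simulated_years, papers_per_year))
    # average, over simulated years, of the most-cited paper that year
    return draws.max(axis=1).mean()

for k in (1, 5, 20):
    print(f"{k:>2} papers/year -> best-cited paper averages ~ {average_top_citation(k):.1f} citations")
```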

 

ACCESS

  • The American Psychological Association raised the ire of some authors after requesting that they remove links to free copies of APA-published articles (“unauthorized online postings”) from their websites.

Researchers protest publisher’s orders to remove papers from their websites (Retraction Watch)

  • Access challenges in a mobile world 

Bianca Kramer at Utrecht University in the Netherlands studied Sci-Hub usage data attributed to her institution and compared it with her library’s holdings data. She found that “75% of Utrecht Sci-Hub downloads would have been available either through our library subscriptions (60%) or as Gold Open Access/free from publisher (15%).” While the data are neither comprehensive nor granular enough for certainty, she concluded that a significant share of Sci-Hub usage reflects both access problems and users’ desire for convenience.

Failure to Deliver: Reaching Users in an Increasingly Mobile World (Scholarly Kitchen)
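As a minimal sketch (not Kramer’s actual code or data), the overlap calculation she describes amounts to classifying each downloaded DOI against the library’s subscription holdings and a list of gold OA / free-from-publisher articles. The DOIs below are hypothetical.

```python
# Hypothetical sketch of the overlap analysis described above: classify Sci-Hub
# downloads against library subscription holdings and gold OA availability.
# All DOIs are made up; a real analysis would use holdings and OA lookup services.
from collections import Counter

scihub_downloads = ["10.1000/a", "10.1000/b", "10.1000/c", "10.1000/d"]
subscription_dois = {"10.1000/a", "10.1000/b"}
gold_oa_dois = {"10.1000/c"}

def classify(doi):
    if doi in subscription_dois:
        return "library subscription"
    if doi in gold_oa_dois:
        return "gold OA / free from publisher"
    return "not otherwise available"

counts = Counter(classify(doi) for doi in scihub_downloads)
total = len(scihub_downloads)
for category, n in counts.items():
    print(f"{category}: {n}/{total} ({n / total:.0%})")
```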

__

Newsletter #11: Originally circulated June 18, 2017. Sources of links include Retraction Watch, the Health Information for All listserv, Scholarly Kitchen, and Twitter. Providing the links does not imply WAME’s endorsement.

 

Paraphrasing plagiarism? Who gets the DiRT? Coming to terms with conflicts of interest: CROs, practice guidelines, authors, editors, publishers. Future of peer review, sharing data more easily

RESEARCH ETHICS AND MISCONDUCT

  • Free paraphrasing tools make it easier to evade plagiarism detection software, requiring manual review to identify problems. The article provides useful tips for identifying such work. However, how does one determine whether the awkward phrasing that paraphrasing tools may create is due to the tool or to a lack of English writing fluency?

A troubling new way to evade plagiarism detection software. (And how to tell if it’s been used.) (Retraction Watch)

  • Retraction Watch and STAT announce the DiRT (doing the right thing) award and its first recipient, apparently a judge who dismissed a defamation lawsuit brought against a journal over expressions of concern.

Announcing the DiRT Award, a new “doing the right thing” prize — and its first recipient (Retraction Watch)

 

CONFLICTS OF INTEREST

  • Challenges to trial integrity may arise when for-profit clinical research organizations (CROs) conduct international RCTs, which they are doing increasingly often, as illustrated by the TOPCAT spironolactone study.

Serious Questions Raised About Integrity Of International Trials (CardioBrief)

  • A JAMA theme issue on conflicts of interest includes some commentaries [some restricted access]; the following seem especially relevant to editors:

(1) Why There Are No “Potential” Conflicts of Interest, by McCoy and Emanuel, who argue that conflicts of interest are never merely “potential”: there are conflicts of interest, and there are ways to mitigate them.

(2) Strategies for Addressing a Broader Definition of Conflicts of Interest by McKinney and Pierce: “[Conflict of interest] disclosure is thus useful as a minimum expectation, but is fundamentally insufficient. It is one tool in a toolbox, but no more.”

(3) Conflict of Interest in Practice Guidelines Panels by Hal Sox, including guidance from the Institute of Medicine, useful to editors who review such guidelines. “To accept a recommendation for practice, the profession and the public require a clear explanation of the reasoning linking the evidence to the recommendations. The balance of harms and benefits is a valuable heuristic for determining the strength of a recommendation, but this determination often involves a degree of subjectivity because harms and benefits seldom have the same units of measure. Because of these subjective elements, guideline development is vulnerable to biased judgments.”

(4) How Should Journals Handle the Conflict of Interest of Their Editors? Who Watches the “Watchers”? by Gottlieb and Bressler, who discuss current recommendations for how editors should handle their conflicts of interest. As is usually the case, the advice does not address small journals with very few decision-making editors; other solutions may be needed in those cases.

(5) Medical Journals, Publishers, and Conflict of Interest by JAMA‘s publisher Tom Easley. This article pertains primarily to large journal-publisher relationships, but many journals have a different arrangement and additional guidance is needed.

 

PREDATORY/PSEUDO-JOURNALS

  • Predatory Indian journals apply to DOAJ in large numbers

“Since March 2014, when the new criteria for DOAJ listing were put out, there have been about 1,600 applications from Open Access journal publishers in India…Of these, only 4% (74) were found to be from genuine publishers and accepted for inclusion in the DOAJ directory. While 18% applications are still being processed, 78% were rejected for various reasons. One of the main reasons for rejection is the predatory or dubious nature of the journals.”

” ‘Nearly 20% of the journals have a flashy impact factor and quick publication time, which are quick give-aways….Under contact address, some journal websites do not provide any address but just a provision for comments. In many cases, we have written to people who have been listed as reviewers to know if the journal website is genuine.’ ”

Predatory journals make desperate bid for authenticity (The Hindu)

  • A journal published by Gavin changes its name from Journal of Arthritis and Rheumatology in response to the American College of Rheumatology, to a name very similar to that of a different journal.

 

 

PEER REVIEW

BioMed Central and Digital Science publish a report on “What might peer review look like in 2030?” and recommend:

  1. “Find new ways of matching expertise and reviews by better identifying, verifying and inviting peer reviewers (including using AI)
  2. Increase diversity in the reviewer pool (including early career researchers, researchers from different regions, and women)
  3. Experiment with different and new models of peer review, particularly those that increase transparency
  4. Invest in reviewer training programs
  5. Find cross-publisher solutions to improve efficiency and benefit all stakeholders, such as portable peer review
  6. Improve recognition for review by funders, institutions, and publishers
  7. Use technology to support and enhance the peer review process, including automation”

The Future of Peer Review (Scholarly Kitchen)

 

POST-PUBLICATION PEER REVIEW

Angela Cochran blogs about the apparent failure of online commenting, but she defines success as the percentage of papers with comments. If few letters to the editor are published, do we consider them a waste? Maybe the approach isn’t mature yet. Ultimately, all PPPR comments need to be compiled with the article. If they’re useful to the commenters, some readers, and maybe the authors, that’s sufficient.

Should we stop with the commenting already? (Scholarly Kitchen)

 

DATA SHARING

Figshare releases new platform to help authors share data more easily

Figshare Launches New Tool for Publishers To Support Open Research (PRWeb)

___

Newsletter #8, first circulated May 8, 2017.  Sources of links include Retraction Watch, Stat News, Scholarly Kitchen. Providing the links does not imply WAME’s endorsement.

 

How does the NAS suggest journals should foster research integrity? How should one critically evaluate a manuscript (plus more fake peer reviews)? One year of ORCID IDs, a Dear Journal letter from a biostatistician

RESEARCH MISCONDUCT

National Academy of Sciences on how to improve research integrity

A U.S. National Academy of Sciences panel calls for the formation of an independent group to address research misconduct and related issues, and classifies [quoted from Retraction Watch, U.S. panel sounds alarm on “detrimental” research practices, calls for new body to help tackle misconduct] “misleading statistical analysis that falls short of falsification, awarding authorship to researchers who don’t deserve it (and vice versa), not sharing data, and poorly supervising research” as “detrimental” research practices.

“Fostering Integrity in Research”, from the National Academy of Sciences (free PDF download available):

The document includes 11 major recommendations; those most relevant to journal editors are pasted below (emphasis added):

“RECOMMENDATION ONE: To better align the realities of research with its values and ideals, all stakeholders in the research enterprise – researchers, research institutions, research sponsors, journals, and societies – should significantly improve and update their practices and policies to respond to the threats to research integrity identified in this report.

RECOMMENDATION FIVE: Societies and journals should develop clear disciplinary authorship standards. Standards should be based on the principle that those who have made a significant intellectual contribution are authors. Significant intellectual contributions can be made in the design or conceptualization of a study, the conduct of research, the analysis or interpretation of data, or the drafting or revising of a manuscript for intellectual content. Those who engage in these activities should be designated as authors of the reported work, and all authors should approve the final manuscript. In addition to specifying all authors, standards should (1) provide for the identification of one or more authors who assume responsibility for the entire work, (2) require disclosure of all author roles and contributions, and (3) specify that gift or honorary authorship, coercive authorship, ghost authorship, and omitting authors who have met the articulated standards are always unacceptable. Societies and journals should work expeditiously to develop such standards in disciplines that do not already have them.

RECOMMENDATION SIX: Through their policies and through the development of supporting infrastructure, research sponsors and science, engineering, technology, and medical journal and book publishers should ensure that information sufficient for a person knowledgeable about the field and its techniques to reproduce reported results is made available at the time of publication or as soon as possible after publication.

RECOMMENDATION EIGHT: To avoid unproductive duplication of research and to permit effective judgments on the statistical significance of findings, researchers should routinely disclose all statistical tests carried out, including negative findings. Research sponsors, research institutions, and journals should support and encourage this level of transparency.”

 

PEER REVIEW

  • How to critically evaluate a manuscript

How to critically evaluate a manuscript: 12 questions you should always ask yourself (Publons) offers a useful general approach to peer review, but it’s missing some important points (I’m sure you can find more; add your comments below):

-Can the study design answer the hypothesis posed? (e.g., is the hypothesis a question of causality but the study design is observational?)

-Do the conclusions follow from the results or do they exaggerate the importance and implications of the research?

-What are the funding source(s) and potential conflicts of interest of the authors?

  • Fallout from fake peer reviews continues with more than 100 retractions

A new record: Major publisher retracting more than 100 studies from cancer journal over fake peer review (Retraction Watch)

 

JOURNAL STANDARDS

  • Results after one year of journals requiring ORCID IDs 

“Our 2015 community survey indicated that most researchers supported the idea of their organizations requiring the use of ORCID — 72% agreed or strongly agreed that these would benefit the global research community, 21% were neutral, and only 7% disagreed or strongly disagreed. Three quarters said specifically that it would be useful if their publisher mandated ORCID iDs.”

It Takes a Village: One Year of Journals Requiring ORCID IDs (Scholarly Kitchen)

  • Technical Image Editor wanted?

Journal of Biological Chemistry is hiring editors to manually screen images for potential manipulation or duplication before publication.

 

STATISTICS

“Dear Journal”, from a concerned biostatistician

“The safe-conducts given by the editorial system to articles that do not disclose exact sample sizes are shocking. Science must be based on the possibility to repeat comparable designs, which obviously encompasses the use of similar numbers of observations. Sample sizes given as intervals (e.g. “n=3-18”), inequalities (e.g. “n>3”) or absurdly nebulous sentences (e.g. “n=4, data representative of 3 rats from 2 independent experiments”) are evident obstructions to reproducibility.

Similarly, it is perplexing to notice the proportion of publications that do not clearly reveal the statistical tests used. A clear attribution of tests must be given, including the post-hoc tests used after analysis of variance. It should not be sufficient to list all statistical procedures in the method section with no indication of which test was used in which figure or table.”

Dear journals: Clean up your act. Regards, Concerned Biostatistician (Retraction Watch)

_____

 

Newsletter #7, originally circulated on April 24, 2017. Sources include Retraction Watch, the Health Information for All listserv, and the Open Science Initiative listserv. Providing the links does not imply WAME’s endorsement.

 

 

How would you change medical publishing? Authors offer bribes, New issues in informed consent, Why do predatory journals exist?

FUTURE OF MEDICAL PUBLISHING

  • What would you change about medical publishing? Scholarly Kitchen offers some interesting perspectives. Share yours via Comments below.

If you could change one thing about scholarly publishing, what would that be? (Scholarly Kitchen blog)

EDITOR ETHICS

  • Editor receives offer of cash for publishing manuscripts

Pay to play? Three new ways companies are subverting academic publishing (Retraction Watch blog)

  • Editors step down after their citation cartel was discovered (European Geosciences Union)

http://retractionwatch.com/2017/03/03/citation-boosting-episode-leads-editor  (Retraction Watch blog)

RESEARCH ETHICS

  • Commentaries on new developments with informed consent: e-consent and internet-based clinical trials, changes in perceptions of risk, new types of risk

Informed Consent  (NEJM [free])

RESEARCH REPRODUCIBILITY

  • Should scientists attempt to replicate their own studies? They have an inherent desire (or conflict of interest) to see consistent results.

Why Scientists Shouldn’t Replicate Their Own Work (Discover Magazine)

PREDATORY/PSEUDO-JOURNALS

Do predatory journals fill a niche?

Predatory Publishing as a Rational Response to Poorly Governed Academic Incentives (Scholarly Kitchen blog)

PEER REVIEW

  • A neuroscientist posts his peer reviews online, emails the authors, and tweets a link to his review (but only if the manuscript is available as a preprint)

The Rogue Neuroscientist on a Mission to Hack Peer Review (Wired Magazine)

Newsletter #3. Originally circulated March 7, 2017. Sources include Retraction Watch and Scholarly Kitchen. Providing the links and information does not imply WAME’s endorsement.