ICMJE on data sharing / Not so random RCTs? / Positive results bias / What’s next for peer review? / Ethics of predatory publishing / Is the Impact Factor stochastic?

DATA SHARING

ICMJE statement on data sharing, published June 5, 2017, in the ICMJE journals:

“1. As of July 1, 2018 manuscripts submitted to ICMJE journals that report the results of clinical trials must contain a data sharing statement as described below

2. Clinical trials that begin enrolling participants on or after January 1, 2019 must include a data sharing plan in the trial’s registration…If the data sharing plan changes after registration this should be reflected in the statement submitted and published with the manuscript, and updated in the registry record. Data sharing statements must indicate the following: whether individual deidentified participant data (including data dictionaries) will be shared; what data in particular will be shared; whether additional, related documents will be available (e.g., study protocol, statistical analysis plan, etc.); when the data will become available and for how long; by what access criteria data will be shared (including with whom, for what types of analyses and by what mechanism)…Sharing clinical trial data is one step in the process articulated by the World Health Organization (WHO) and other professional organizations as best practice for clinical trials: universal prospective registration; public disclosure of results from all clinical trials (including through journal publication); and data sharing.”

Taichman DB, Sahni P, Pinborg A, Peiperl L, Laine C, James A, et al. Data Sharing Statements for Clinical Trials: A Requirement of the International Committee of Medical Journal Editors. PLOS Med. 2017;14(6):e1002315. https://doi.org/10.1371/journal.pmed.1002315

 

RESEARCH REPRODUCIBILITY AND MISCONDUCT

  • Not so random?

Randomization in an RCT confers an advantage over other study designs because random allocation means that any differences in baseline variables between comparison groups arise by chance rather than from confounding. However, some researchers have identified RCTs whose baseline data do not appear to have been randomly sampled, a clue that the methodology may have differed from what the authors report.

Carlisle “analysed the distribution of 72,261 means of 29,789 variables in 5087 randomised, controlled trials published in eight journals between January 2000 and December 2015…Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10^15 (equivalent to one drop of water in 20,000 Olympic-sized swimming pools).”

Carlisle JB. Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals. Anaesthesia. 2017;72:944–952. doi:10.1111/anae.13938

  • In another study, Carlisle et al applied the same approach and concluded that “The Monte Carlo analysis may be an appropriate screening tool to check for non-random (i.e. unreliable) data in randomised controlled trials submitted to journals.”

Carlisle JB, Dexter F, Pandit JJ, Shafer SL, Yentis SM. Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials. Anaesthesia. 2015;70:848–858. doi:10.1111/anae.13126
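Neither paper’s code is reproduced here, but the core idea can be sketched: for a reported continuous baseline variable (group means, a standard deviation, and group sizes), simulate many random allocations from a single common population and ask how often the simulated groups come out at least as similar as the reported ones. The snippet below is only a minimal illustration of that Monte Carlo logic under assumed, hypothetical inputs; it is not a reimplementation of Carlisle’s published method.

```python
import numpy as np

def baseline_similarity_p(mean_a, mean_b, sd, n_a, n_b, n_sim=100_000, seed=0):
    """Monte Carlo probability that two groups drawn from one common normal
    population would have means at least as close as those reported.
    A very small value flags baselines that look improbably similar."""
    rng = np.random.default_rng(seed)
    observed_gap = abs(mean_a - mean_b)
    # Simulate random allocation: both group means come from the same population.
    sim_a = rng.normal(0.0, sd / np.sqrt(n_a), n_sim)
    sim_b = rng.normal(0.0, sd / np.sqrt(n_b), n_sim)
    return float((np.abs(sim_a - sim_b) <= observed_gap).mean())

# Hypothetical reported baseline: mean age 54.1 vs 54.2 years, SD 8, 50 per arm.
print(baseline_similarity_p(54.1, 54.2, sd=8.0, n_a=50, n_b=50))
```

Carlisle’s actual analysis pools such probabilities across all baseline variables and trials and asks whether their overall distribution is plausible.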

  • Bolland et al used Carlisle’s method to analyze RCTs published by a group of investigators “about which concerns have been raised” and found:

“Treatment groups were improbably similar. The distribution of p values for differences in baseline characteristics differed markedly from the expected uniform distribution (p = 5.2 × 10^-82). The distribution of standardized sample means for baseline continuous variables and the differences between participant numbers in randomized groups also differed markedly from the expected distributions (p = 4.3 × 10^-4, p = 1.5 × 10^-5, respectively).”

Bolland MJ, Avenell A, Gamble GD, Grey A. Systematic review and statistical analysis of the integrity of 33 randomized controlled trials. Neurology. 2016. doi:10.1212/WNL.0000000000003387
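Bolland et al’s headline finding is a departure from the uniform distribution that baseline p values should follow under genuine randomization. As a rough illustration of that kind of check (not their actual analysis), once baseline p values have been extracted from a set of trials, a one-sample Kolmogorov–Smirnov test against Uniform(0, 1) will flag a pile-up near 1.0; the p values below are invented for the example.

```python
from scipy import stats

# Hypothetical baseline p values extracted from a body of trials.
# Under genuine random allocation these should be roughly Uniform(0, 1);
# a cluster near 1.0 means the arms are "too similar" at baseline.
p_values = [0.97, 0.91, 0.88, 0.99, 0.93, 0.85, 0.96, 0.90, 0.94, 0.89]

stat, p = stats.kstest(p_values, "uniform")
print(f"KS statistic = {stat:.3f}, p = {p:.2e}")
```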

  • Is this approach yet another type of manuscript review for busy editors to apply, assuming the calculations are not too daunting? In Retraction Watch, Oransky comments, “So should all journals use the method — which is freely available online — to screen papers? In their editorial accompanying Carlisle’s paper, Loadsman and McCulloch note that if that were to become the case, ‘…dishonest authors could employ techniques to produce data that would avoid detection. We believe this would be quite easy to achieve although, for obvious reasons, we prefer not to describe the likely methodology here.’ Which begs the question: what should institutions’ responsibilities be in all this?”

From: Two in 100 clinical trials in eight major journals likely contain inaccurate data: Study (Retraction Watch)

  • In other news, PubPeer announces PubPeer 2.0. From Retraction Watch: “RW: Will the identity changes you’ve installed make it more difficult for scientists to unmask (and thereby seek recourse from) anonymous commenters? BS: Yes, that is one of the main motivations for that change. Once the transition to the new site is complete our goal is to not be able to reveal any user information if we receive another subpoena or if the site is hacked.”

Meet PubPeer 2.0: New version of post-publication peer review site launches today (Retraction Watch)

 

RESEARCH BIAS

Addressing bias toward positive results

  • “The good news is that the scientific community seems increasingly focused on solutions…But true success will require a change in the culture of science. As long as the academic environment has incentives for scientists to work in silos and hoard their data, transparency will be impossible. As long as the public demands a constant stream of significant results, researchers will consciously or subconsciously push their experiments to achieve those findings, valid or not. As long as the media hypes new findings instead of approaching them with the proper skepticism, placing them in context with what has come before, everyone will be nudged toward results that are not reproducible…For years, financial conflicts of interest have been properly identified as biasing research in improper ways. Other conflicts of interest exist, though, and they are just as powerful — if not more so — in influencing the work of scientists across the country and around the globe. We are making progress in making science better, but we’ve still got a long way to go.”

Carroll AE.  Science Needs a Solution for the Temptation of Positive Results (NY Times)

  • But replication leads to a different bias, says Strack: “In contrast, what is informative for replications? Not that the original finding has been replicated, but that it has been ‘overturned.’ Even if the editors’ bias (Gertler, 2016) bias [sic] is controlled by preregistration, overturned findings are more likely to attract readers’ attention and to get cited…However, there is an important difference between these two biases in that a positive effect can only be obtained by increasing the systematic variance and/or decreasing the error variance. Typically, this requires experience with the subject matter and some effort in controlling unwanted influences, while this may also create some undesired biases. In contrast, to overturn the original result, it is sufficient to decrease the systematic variance and to increase the error. In other words, it is easier to be successful at non-replications while it takes expertise and diligence to generate a new result in a reliable fashion.”

Strack F. From Data to Truth in Psychological Science: A Personal Perspective. Front Psychol. 16 May 2017. https://doi.org/10.3389/fpsyg.2017.00702

 

PEER REVIEW

What’s next for peer review?

From the London School of Economics blog, reproduced from “SpotOn Report: What might peer review look like in 2030?” from BioMed Central and Digital Science:

“To square the [peer reviewer] incentives ledger, we need to look to institutions, world ranking bodies and funders. These parties hold either the purse strings or the decision-making power to influence the actions of researchers. So how can these players more formally recognise review to bring balance back to the system and what tools do they need to do it?

Institutions: Quite simply, institutions could give greater weight to peer review contributions in funding distribution and career advancement decisions. If there was a clear understanding that being an active peer reviewer would help further your research career, then experts would put a greater emphasis on their reviewing habits and research would benefit.

Funders: If funders factored in peer review contributions and performance when determining funding recipients, then institutions and individuals would have greater reason to contribute to the peer review process.

World ranking bodies: Like researchers, institutions also care about their standing and esteem on the world stage. If world ranking bodies such as THE World University Rankings and QS World Rankings gave proportionate weighting to the peer review contributions and performance of institutions, then institutions would have greater reason to reward the individuals tasked with peer reviewing.

More formal weighting for peer review contributions also makes sense, because peer review is actually a great measure of one’s expertise and standing in the field. Being asked to peer review is external validation that academic editors deem a researcher equipped to scrutinise and make recommendations on the latest research findings.

Researchers: Researchers will do what they have to in order to advance their careers and secure funding. If institutions and funders make it clear that peer review is a pathway to progression, tenure and funding, researchers will make reviewing a priority.

Tools: In order for peer review to be formally acknowledged, benchmarks are necessary. There needs to be a clear understanding of the norms of peer review output and quality across the myriad research disciplines in order to assign any relative weighting to an individual’s review record. This is where the research enterprise can utilise the new data tools available to track, verify and report all the different kinds of peer review contributions. These tools already exist and researchers are using them. It’s time the institutions that rely on peer review got on board too.”

Formal recognition for peer review will propel research forward (London School of Economics)

PREDATORY/PSEUDO-JOURNALS

Biochemia Medica published a cluster of papers on predatory journals this month, including research by Stojanovski and Marusic on 44 Croatian open access journals, which concludes: “In order to clearly differentiate themselves from predatory journals, it is not enough for journals from small research communities to operate on non-commercial bases…[they must also have] transparent editorial policies.” The issue also includes a paper on the ethical issues of predatory publishing (of which I am a coauthor, by way of disclosure) and an essay by Jeffrey Beall.

IMPACT FACTOR

“…more productive years yield higher-cited papers because they have more chances to draw a large value. This suggests that citation counts, and the rewards that have come to be associated with them, may be more stochastic [randomly determined] than previously appreciated.”

Michalska-Smith MJ, Allesina S. And, not or: Quality, quantity in scientific publishing. PLOS ONE. 2017;12(6):e0178074. https://doi.org/10.1371/journal.pone.0178074
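The “more chances to draw a large value” claim is an extreme-value effect: if every paper’s citation count is a draw from the same heavy-tailed distribution, the best-cited paper of a productive year tends to be more highly cited simply because more draws were taken. The simulation below is a minimal, hypothetical illustration of that point using an assumed lognormal citation distribution; it is not code or data from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_top_paper_citations(papers_per_year, n_years=10_000):
    """Average citation count of the best-cited paper in a year, assuming every
    paper draws its citations from the same lognormal distribution."""
    draws = rng.lognormal(mean=2.0, sigma=1.2, size=(n_years, papers_per_year))
    return draws.max(axis=1).mean()

for n in (1, 5, 20):
    print(f"{n:>2} papers/year -> top paper averages "
          f"{mean_top_paper_citations(n):.0f} citations")
```

More papers per year yields a higher expected “best paper,” even though every paper was drawn from the same distribution.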

 

ACCESS

  • The American Psychological Association raised the ire of some authors after requesting that they remove links to free copies of APA-published articles (“unauthorized online postings”) from their websites.

Researchers protest publisher’s orders to remove papers from their websites (Retraction Watch)

  • Access challenges in a mobile world 

Bianca Kramer at the University of Utrecht in the Netherlands studied Sci-Hub usage data attributed to her institution and compared it with holdings data at her library. She found that “75% of Utrecht Sci-Hub downloads would have been available either through our library subscriptions (60%) or as Gold Open Access/free from publisher (15%).” Although the data are neither comprehensive nor granular enough for certainty, she concluded that much of Sci-Hub’s use at Utrecht reflects problems of access and users’ desire for convenience.

Failure to Deliver: Reaching Users in an Increasingly Mobile World (Scholarly Kitchen)

__

Newsletter #11: Originally circulated June 18, 2017. Sources of links include Retraction Watch, Health Information for All listserv, Scholarly Kitchen, Twitter. Providing the links does not imply WAME’s endorsement.

 

Is citation manipulation now acceptable? Whither the digital revolution? New predatory journal blacklist? How can research be made more reproducible? Criminal charges for research misconduct

 

IMPACT FACTOR

Far fewer journals were suspended from the Impact Factor analyses for citation manipulation this year than in previous years, and two were added back after earlier suspensions. How much manipulation is acceptable? (And why is a measure so easily manipulated considered so important, at least to some?)

How Much Citation Manipulation Is Acceptable? (Scholarly Kitchen)

OPEN ACCESS

  • From The Guardian, whither the digital revolution?

“…although digital technology and the internet have created a new terrain in which the ideals of open access have begun to germinate, they have yet to produce a cost-effective and reliable harvest of accessible knowledge. The acquisition by private publishing companies of peer review processes that had previously been the preserve of scholarly societies has combined with the increased dependence of individual academics on where, rather than what, they publish to control the digital revolution in scholarly publishing. This has prevented the full realisation of its promise to make publishing faster and cheaper.”

It’s time for academics to take back control of research journals (The Guardian)

  • Are journals with few resources less likely to be found, thanks to Google’s algorithms for displaying search results? Another gap for Global South journals to surmount?

“Solid article promotion practices may explain why 89% of the Top 100 Altmetric articles in 2016 came from journals that generally employ paywalls as well as the trend for those articles to perform better in social media and the tendency for Gold OA articles from for-profit publishers to perform better.”

Detours and Diversions — Do Open Access Publishers Face New Barriers? (Scholarly Kitchen)

PREDATORY/PSEUDO- JOURNALS

Cabell’s International is creating a paywalled blacklist of journals. Cabell’s list will be drawn from all journals, not just open access journals. Its criteria will be published later in the year (as noted below, plagiarized articles are one criterion, suggesting that journals that do not screen for plagiarism risk being listed). However, journals will have to contact Cabell’s to find out whether they are listed. From Nature:

Cabell uses some 65 criteria – which will be reviewed quarterly – to check whether a journal should be on its blacklist, adding points for each suspect finding. Examples include fake editors, plagiarized articles and unclear peer-review policies, says Berryman, although she declined to provide all criteria, saying that the firm would present them later in the year. A team of four employees checks for evidence that journals meet the criteria by searching online or contacting authors and journals for verification.

“It’s pretty much as scientific as we can get at this point,” she says.

“Some of the publishers and journals listed by Beall aren’t on Cabell’s list,” says Berryman. And Cabell’s has added new journals, including some that aren’t open access. The firm declined to provide details of the differences between its list and Beall’s, but says that it will clearly state all the reasons that a journal is on its list. Berryman hopes that will limit libel suits. Publishers or journals will be able to contact Cabell’s to find out whether they are indexed, and will have the opportunity to appeal their status once a year.
Pay-to-view blacklist of predatory journals set to launch (Nature News)
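Cabell’s has not published its 65 criteria or their weights. Purely to illustrate how a point-based screen of this general kind operates, here is a hypothetical scoring sketch; the criterion names, point values, and threshold are invented for the example and should not be read as Cabell’s actual rules.

```python
# Hypothetical point-based journal screen; criteria, weights, and threshold are invented.
CRITERIA_POINTS = {
    "fake_editors": 10,
    "plagiarized_articles": 8,
    "unclear_peer_review_policy": 5,
}
BLACKLIST_THRESHOLD = 12

def screen_journal(confirmed_findings):
    """Sum the points for each confirmed finding and compare to the threshold."""
    score = sum(CRITERIA_POINTS.get(f, 0) for f in confirmed_findings)
    return score, score >= BLACKLIST_THRESHOLD

print(screen_journal({"fake_editors", "unclear_peer_review_policy"}))  # (15, True)
```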

RESEARCH INTEGRITY AND REPRODUCIBILITY

  • A study of Editorial Expressions of Concern: “…We identified 230 EEoCs that affect 300 publications indexed in PubMed, the earliest issued in 1985. Half of the primary EEoCs were issued between 2014 and 2016 (52%). We found evidence of some EEoCs that had been removed by the publisher without leaving a record, and some were not submitted for PubMed or PMC indexing. A minority of publications affected by EEoCs had been retracted by early December 2016 (25%)…The majority of EEoCs were issued because of concerns with validity of data, methods, or interpretation of the publication (68%), and 31% of cases remained open. Issues with images were raised in 40% of affected publications.”

Vaught M, Jordan DC, Bastian H. Concern noted: a descriptive study of editorial expressions of concern in PubMed and PubMed Central. Research Integrity and Peer Review. 2017;2:10. https://doi.org/10.1186/s41073-017-0030-2

  • What scientists accused of misconduct go through:

“…whistleblowers urgently need an internationally accepted code of conduct, including pretty simple rules such as not attacking the scientists in public while the investigation is running, no personal insults, no mass e-mails to multiple recipients in order to ruin the reputation of the scientists, etc.”

It’s not just whistleblowers who deserve protection during misconduct investigations, say researchers (Retraction Watch)

  • Time to expand the Methods section to improve reproducibility?

“Journals can greatly improve the reproducibility of research by requiring methodological transparency. The print paradigm of journal publishing led us to poor practices in an attempt to save space and reduce the number of printed pages. When trying to cut down an article to reach an assigned page/word limit, usually the first thing to go was a detailed methods section. In a digital era where journals are doing away with page limits, why not add back in this vital information? For a journal that still exists in print, why not require detailed methodologies in the supplementary material? If you have a policy requiring public posting of the data behind the experiments, why not a similar policy for the methods?”
Reproducible Research, Just Not Reproducible By You (Scholarly Kitchen)

  • How can research be made more reproducible?

In Nature, William Kaelin Jr argues that when researchers are required to provide too many experiments to make broad assertions, they spread their research thin, rather than first confirming their findings using multiple approaches. It also makes peer review daunting for reviewers (requiring a “mini-sabbatical” to review).

“We must return to more careful examination of research papers for originality, experimental design and data quality, and adopt more humility about predicting impact, which can truly be known only in retrospect …We should also place more emphasis on the quality of a body of work and whether it has enabled subsequent discoveries, and focus less on where individual papers are published…The main question when reviewing a paper should be whether its conclusions are likely to be correct, not whether it would be important if it were true.”
Publish houses of brick, not mansions of straw (Nature World View)

  • A peer reviewer stole and published an author’s data; the work has now been retracted.

Yikes: Peer reviewer stole (and published) author’s data (Retraction Watch)

 

  • BMJ Global Health pulled a published paper on a US-funded trial in Mumbai that had been found to be unethical, after deciding the paper did not pass legal review.

BMJ journal yanks paper on cancer screening in India for fear of legal action (Retraction Watch)

Criminal charges for research misconduct

Oransky and a colleague presented at the 5th World Congress on Research Integrity: “A total of 39 science researchers from 7 countries were identified as having been subject to criminal sanctions for actions related to research misconduct between 1979 and 2015…Overall, 14 researchers were criminally sanctioned for actions directly involving their own research. Three of those 14 had criminal charges solely related to research, while the other 11 also had charges stemming indirectly from their research process, e.g., grant fraud, embezzlement of research funds, or bribery.”

Oransky I, Abritis A. Who Faces Criminal Sanctions for Scientific Misconduct? 5th World Congress on Research Integrity 2017 (Abstract).

AUTHORSHIP

CRediT (Contributor Roles Taxonomy) proposes a new author contribution taxonomy, to be embedded in the byline. It was formerly posted for comment at http://biorxiv.org/content/early/2017/05/20/14022 (no longer available); the project can be viewed at http://docs.casrai.org/CRediT .

____

Newsletter #10: Originally distributed June 1, 2017. Sources of links include Retraction Watch, Scholarly Kitchen, Twitter.   Providing the links does not imply WAME’s endorsement.