ICMJE on data sharing / Not so random RCTs? / Positive results bias / What’s next for peer review? / Ethics of predatory publishing / Is the Impact Factor stochastic?

DATA SHARING

ICMJE statement on data sharing, published June 5, 2017, in the ICMJE journals:

“1. As of July 1, 2018, manuscripts submitted to ICMJE journals that report the results of clinical trials must contain a data sharing statement as described below.

2. Clinical trials that begin enrolling participants on or after January 1, 2019 must include a data sharing plan in the trial’s registration…If the data sharing plan changes after registration this should be reflected in the statement submitted and published with the manuscript, and updated in the registry record. Data sharing statements must indicate the following: whether individual deidentified participant data (including data dictionaries) will be shared; what data in particular will be shared; whether additional, related documents will be available (e.g., study protocol, statistical analysis plan, etc.); when the data will become available and for how long; by what access criteria data will be shared (including with whom, for what types of analyses and by what mechanism)…Sharing clinical trial data is one step in the process articulated by the World Health Organization (WHO) and other professional organizations as best practice for clinical trials: universal prospective registration; public disclosure of results from all clinical trials (including through journal publication); and data sharing.”

Taichman DB, Sahni P, Pinborg A, Peiperl L, Laine C, James A, et al. Data Sharing Statements for Clinical Trials: A Requirement of the International Committee of Medical Journal Editors. PLOS Med. 2017;14(6):e1002315. https://doi.org/10.1371/journal.pmed.1002315

 

RESEARCH REPRODUCIBILITY AND MISCONDUCT

  • Not so random?

Randomization in an RCT confers an advantage over other study designs because random allocation means that any differences in baseline variables between comparison groups arise by chance (rather than from confounding). However, some researchers have identified RCTs whose baseline data do not appear to have been randomly sampled — a clue that the methodology may have differed from what the authors reported.

Carlisle “analysed the distribution of 72,261 means of 29,789 variables in 5087 randomised, controlled trials published in eight journals between January 2000 and December 2015…Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10¹⁵ (equivalent to one drop of water in 20,000 Olympic-sized swimming pools).”

Carlisle JB. Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals. Anaesthesia. 2017;72:944–952. doi:10.1111/anae.13938
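For editors curious about the intuition behind such checks, a toy demonstration is straightforward: under genuine randomization, p values for baseline comparisons pooled across many trials should be approximately uniform between 0 and 1, so a marked departure from uniformity is a warning sign. The sketch below (simulated data; an illustration of the general idea, not Carlisle’s actual published procedure) compares honest and “improbably similar” baseline p values using a Kolmogorov-Smirnov test.

```python
# Toy illustration: baseline p values pooled across honestly randomized
# trials should be ~Uniform(0, 1); a strong departure is a signal worth
# investigating, not proof of misconduct. Simulated data; this is not
# Carlisle's actual procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Baseline p values from 500 honestly randomized trials...
honest = rng.uniform(0, 1, size=500)

# ...and from trials whose groups are "improbably similar" at baseline,
# so that p values pile up near 1.
too_similar = rng.beta(a=5, b=1, size=500)

# Kolmogorov-Smirnov test against the uniform distribution
for label, pvals in [("honest", honest), ("too similar", too_similar)]:
    result = stats.kstest(pvals, "uniform")
    print(f"{label:>12}: KS statistic {result.statistic:.3f}, p {result.pvalue:.1e}")
```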

  • In another study, Carlisle et al applied the same approach and concluded that “The Monte Carlo analysis may be an appropriate screening tool to check for non-random (i.e. unreliable) data in randomised controlled trials submitted to journals.”

Carlisle JB, Dexter F, Pandit JJ, Shafer SL, Yentis SM. Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials. Anaesthesia. 2015;70:848–858. doi:10.1111/anae.13126
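The flavour of that calculation can also be sketched directly: given the group sizes, the reported standard deviation, and the reported difference in means for one continuous baseline variable, simulate many genuine randomizations and ask how often the groups come out as similar as reported. All numbers below are invented for illustration, and the published method differs in its details.

```python
# Monte Carlo sketch: how probable is the reported closeness of two
# baseline means under genuine randomization? Numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

n1, n2 = 50, 50        # hypothetical reported group sizes
sd = 10.0              # hypothetical reported SD of the baseline variable
observed_diff = 0.01   # hypothetical reported |mean1 - mean2|

sims = 100_000
g1 = rng.normal(0.0, sd, size=(sims, n1)).mean(axis=1)
g2 = rng.normal(0.0, sd, size=(sims, n2)).mean(axis=1)
prob = np.mean(np.abs(g1 - g2) <= observed_diff)
print(f"P(groups this similar by chance) ~ {prob:.4f}")
# Aggregating such probabilities over many variables and many trials is
# the kind of screening the paper describes.
```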

  • Bolland et al used Carlisle’s method to analyze RCTs published by a group of investigators “about which concerns have been raised” and found:

“Treatment groups were improbably similar. The distribution of p values for differences in baseline characteristics differed markedly from the expected uniform distribution (p = 5.2 × 10⁻⁸²). The distribution of standardized sample means for baseline continuous variables and the differences between participant numbers in randomized groups also differed markedly from the expected distributions (p = 4.3 × 10⁻⁴ and p = 1.5 × 10⁻⁵, respectively).”

Bolland MJ, Avenell A, Gamble GD, Grey A. Systematic review and statistical analysis of the integrity of 33 randomized controlled trials. Neurology. 2016. doi:10.1212/WNL.0000000000003387

  • Is this approach yet another type of manuscript review for busy editors to apply, assuming the calculations are not too daunting? In Retraction Watch, Oransky comments, “So should all journals use the method — which is freely available online — to screen papers? In their editorial accompanying Carlisle’s paper, Loadsman and McCulloch note that if that were to become the case, ‘…dishonest authors could employ techniques to produce data that would avoid detection. We believe this would be quite easy to achieve although, for obvious reasons, we prefer not to describe the likely methodology here.’ Which begs the question: what should institutions’ responsibilities be in all this?”

From: Two in 100 clinical trials in eight major journals likely contain inaccurate data: Study (Retraction Watch)

  • In other news, PubPeer announces PubPeer 2.0. From Retraction Watch: “RW: Will the identity changes you’ve installed make it more difficult for scientists to unmask (and thereby seek recourse from) anonymous commenters? BS: Yes, that is one of the main motivations for that change. Once the transition to the new site is complete our goal is to not be able to reveal any user information if we receive another subpoena or if the site is hacked.”

Meet PubPeer 2.0: New version of post-publication peer review site launches today (Retraction Watch)

 

RESEARCH BIAS

Addressing bias toward positive results

  • “The good news is that the scientific community seems increasingly focused on solutions…But true success will require a change in the culture of science. As long as the academic environment has incentives for scientists to work in silos and hoard their data, transparency will be impossible. As long as the public demands a constant stream of significant results, researchers will consciously or subconsciously push their experiments to achieve those findings, valid or not. As long as the media hypes new findings instead of approaching them with the proper skepticism, placing them in context with what has come before, everyone will be nudged toward results that are not reproducible…For years, financial conflicts of interest have been properly identified as biasing research in improper ways. Other conflicts of interest exist, though, and they are just as powerful — if not more so — in influencing the work of scientists across the country and around the globe. We are making progress in making science better, but we’ve still got a long way to go.”

Carroll AE. Science Needs a Solution for the Temptation of Positive Results (NY Times)

  • But replication leads to a different bias, says Strack: “In contrast, what is informative for replications? Not that the original finding has been replicated, but that it has been ‘overturned.’ Even if the editors’ bias (Gertler, 2016) bias [sic] is controlled by preregistration, overturned findings are more likely to attract readers’ attention and to get cited…However, there is an important difference between these two biases in that a positive effect can only be obtained by increasing the systematic variance and/or decreasing the error variance. Typically, this requires experience with the subject matter and some effort in controlling unwanted influences, while this may also create some undesired biases. In contrast, to overturn the original result, it is sufficient to decrease the systematic variance and to increase the error. In other words, it is easier to be successful at non-replications while it takes expertise and diligence to generate a new result in a reliable fashion.”

Strack F. From Data to Truth in Psychological Science. A Personal Perspective. Front Psychol. 16 May 2017. https://doi.org/10.3389/fpsyg.2017.00702

 

PEER REVIEW

What’s next for peer review?

From the London School of Economics blog, reproduced from “SpotOn Report: What might peer review look like in 2030?” by BioMed Central and Digital Science:

“To square the [peer reviewer] incentives ledger, we need to look to institutions, world ranking bodies and funders. These parties hold either the purse strings or the decision-making power to influence the actions of researchers. So how can these players more formally recognise review to bring balance back to the system and what tools do they need to do it?

Institutions: Quite simply, institutions could give greater weight to peer review contributions in funding distribution and career advancement decisions. If there was a clear understanding that being an active peer reviewer would help further your research career, then experts would put a greater emphasis on their reviewing habits and research would benefit.

Funders: If funders factored in peer review contributions and performance when determining funding recipients, then institutions and individuals would have greater reason to contribute to the peer review process.

World ranking bodies: Like researchers, institutions also care about their standing and esteem on the world stage. If world ranking bodies such as THE World University Rankings and QS World Rankings gave proportionate weighting to the peer review contributions and performance of institutions, then institutions would have greater reason to reward the individuals tasked with peer reviewing.

More formal weighting for peer review contributions also makes sense, because peer review is actually a great measure of one’s expertise and standing in the field. Being asked to peer review is external validation that academic editors deem a researcher equipped to scrutinise and make recommendations on the latest research findings.

Researchers: Researchers will do what they have to in order to advance their careers and secure funding. If institutions and funders make it clear that peer review is a pathway to progression, tenure and funding, researchers will make reviewing a priority.

Tools: In order for peer review to be formally acknowledged, benchmarks are necessary. There needs to be a clear understanding of the norms of peer review output and quality across the myriad research disciplines in order to assign any relative weighting to an individual’s review record. This is where the research enterprise can utilise the new data tools available to track, verify and report all the different kinds of peer review contributions. These tools already exist and researchers are using them. It’s time the institutions that rely on peer review got on board too.”

Formal recognition for peer review will propel research forward (London School of Economics)

PREDATORY/PSEUDO-JOURNALS

Biochemia Medica published a cluster of papers on predatory journals this month, including research by Stojanovski and Marusic on 44 Croatian open access journals, which concludes: “In order to clearly differentiate themselves from predatory journals, it is not enough for journals from small research communities to operate on non-commercial bases…[they must also have] transparent editorial policies.” The issue also includes a paper on the ethical issues of predatory publishing (of which, by way of disclosure, I am a coauthor) and an essay by Jeffrey Beall.

IMPACT FACTOR

“…more productive years yield higher-cited papers because they have more chances to draw a large value. This suggests that citation counts, and the rewards that have come to be associated with them, may be more stochastic [randomly determined] than previously appreciated.”

Michalska-Smith MJ, Allesina S. And, not or: Quality, quantity in scientific publishing. PLOS ONE. 2017;12(6):e0178074. https://doi.org/10.1371/journal.pone.0178074
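The “more chances to draw a large value” point is an order-statistics effect that a small simulation makes concrete: if each paper’s citation count is a draw from a skewed distribution, the best-cited paper of a productive year will tend to be higher simply because more draws were taken. The sketch below is a hypothetical illustration of that effect, not the model used in the paper.

```python
# Sketch of the "more draws -> larger maximum" effect behind the quote.
# Hypothetical lognormal citation distribution; not the paper's model.
import numpy as np

rng = np.random.default_rng(2)

for n_papers in (1, 5, 20, 100):
    # 10,000 simulated "years", each producing n_papers papers
    draws = rng.lognormal(mean=1.0, sigma=1.5, size=(10_000, n_papers))
    top = draws.max(axis=1)  # best-cited paper in each simulated year
    print(f"{n_papers:>3} papers/year -> mean top citation count {top.mean():7.1f}")
```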

 

ACCESS

  • The American Psychological Association raised the ire of some authors after requesting that links to free copies of APA-published articles (“unauthorized online postings”) from authors’ websites be removed.

Researchers protest publisher’s orders to remove papers from their websites (Retraction Watch)

  • Access challenges in a mobile world 

Bianca Kramer at Utrecht University in the Netherlands studied Sci-Hub usage data attributed to her institution and compared it with holdings data at her library. She found that “75% of Utrecht Sci-Hub downloads would have been available either through our library subscriptions (60%) or as Gold Open Access/free from publisher (15%).” While these data are neither comprehensive nor granular enough for certainty, she concluded that a significant share of Sci-Hub usage reflects both problems of access and users’ desire for convenience.

Failure to Deliver: Reaching Users in an Increasingly Mobile World (Scholarly Kitchen)
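Analyses like Kramer’s reduce to simple set bookkeeping: classify each downloaded DOI against the library’s subscription holdings and a list of gold OA/free-from-publisher articles, then report the fractions. A minimal sketch (with fabricated placeholder DOIs; her actual analysis used institutional usage logs and holdings data):

```python
# Minimal sketch of a Kramer-style coverage analysis: what share of
# Sci-Hub downloads was already available legitimately? Placeholder DOIs.
downloads = ["10.1000/a", "10.1000/b", "10.1000/c", "10.1000/d", "10.1000/e"]
subscribed = {"10.1000/a", "10.1000/b", "10.1000/c"}  # library holdings
gold_oa = {"10.1000/d"}                               # free from publisher

n = len(downloads)
via_subscription = sum(doi in subscribed for doi in downloads)
via_gold_oa = sum(doi in gold_oa and doi not in subscribed for doi in downloads)

print(f"via subscription: {via_subscription / n:.0%}")
print(f"via gold OA:      {via_gold_oa / n:.0%}")
print(f"no legal access:  {(n - via_subscription - via_gold_oa) / n:.0%}")
```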

__

Newsletter #11: Originally circulated June 18, 2017. Sources of links include Retraction Watch, Health Information for All listserve, Scholarly Kitchen, Twitter. Providing the links does not imply WAME’s endorsement. 

 

Why do researchers mistakenly publish in predatory journals? How not to identify predatory journals and how (maybe) to identify possibly predatory journals. Fake editor, Rehabbed retraction, Peer reviewer plagiarizing. Writing for a lay audience; Proof of a famous problem almost lost to publishing obscurity

PREDATORY/PSEUDO-JOURNALS

  • Why do researchers mistakenly publish in predatory journals? How not to identify predatory journals

“An early-career researcher isn’t necessarily going to have the basic background knowledge to say ‘this journal looks a bit dodgy’ when they have never been taught what publishing best practice actually looks like…We also have to consider the language barrier. It is only fair, since we demand that the rest of the scientific world communicates in academic English. As a lucky native speaker, it takes me a few seconds to spot nonsense and filler text in a journal’s aims and scope, or a conference ‘about’ page, or a spammy ‘call for papers’ email. It also helps that I have experience of the formal conventions and style that are used for these types of communication. Imagine what it is like for a researcher with English as a basic second language, who is looking for a journal in which to publish their first research paper? They probably will not spot grammatical errors (the most obvious ‘red flag’) on a journal website, let alone the more subtle nuances of journal-speak.”

How should you not identify a predatory journal? “I know one good-quality journal which was one of the first in its country to get the ‘Green Tick’ on DOAJ. I’ve met the editor who is a keen open access and CC-BY advocate. However, the first iteration of the journal’s website and new journal cover was a real shock. It had all the things we might expect on a predatory journal website: 1990s-style flashy graphics, too many poorly-resized pictures, and the homepage (and journal cover) plastered with logos of every conceivable indexing service they had an association with…I knew this was a good journal, but the website was simply not credible, so we strongly advised them to clean up the site to avoid the journal being mistaken for predatory…This felt wrong (and somewhat neo-colonial). ‘Professional’ website design as we know it is expensive, and what is wrong with creating a website that appeals to your target audience, in the style they are familiar with? In the country that this journal is from, a splash of colour and flashing lights are used often in daily life, especially when marketing a product. I think we need to bear in mind that users from the Global South can sometimes have quite different experiences and expectations of ‘credibility’ on the internet, both as creators and users of content and, of course, as consumers looking for a service.”

Andy Nobes, INASP. Critical thinking in a post-Beall vacuum (Research Information)

  • Characteristics of possibly predatory journals (from Beall’s list) vs legitimate open access journals

Research finds 13 characteristics associated with possibly predatory journals (defined as those on Beall’s list, which included some non-predatory journals). See Table 10 — misspellings, distorted or potentially unauthorized images, editors or editorial board members whose affiliation with the journal was unverified, and use of the Index Copernicus Value for impact factor were much more common among potentially predatory journals. These findings may be somewhat circular since the characteristics evaluated overlap with Beall’s criteria and some of those criteria (e.g., distorted images) were identified in the previous article as falsely identifying predatory journals, for reasons of convention rather than quality. However, the results may be useful for editors who are concerned their journal might be misidentified as predatory.

Shamseer L, Moher D, Maduekwe O, et al. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine. 2017;15:28. doi:10.1186/s12916-017-0785-9

  • From the Department of Stings: A fake academic is accepted onto editorial boards and, in a few cases, as editor

“We conceived a sting operation and submitted a fake application [Anna O. Szust] for an editor position to 360 journals, a mix of legitimate titles and suspected predators. Forty-eight titles accepted. Many revealed themselves to be even more mercenary than we had expected….We coded journals as ‘Accepted’ only if a reply to our e-mail explicitly accepted Szust as editor (in some cases contingent on financial contribution) or if Szust’s name appeared as an editorial board member on the journal’s website. In many cases, we received a positive response within days of application, and often within hours. Four titles immediately appointed Szust editor-in-chief.”

Sorokowski P, Kulczycki E, Sorokowska A, Pisanski K. Predatory journals recruit fake editor. Nature. 2017;543:481–483. doi:10.1038/543481a

 

RESEARCH ETHICS AND MISCONDUCT

  • A retracted study is republished in another journal without the second editor being aware of the retraction. How much history is an author obligated to provide? What is a reasonable approach?

“Strange. Very strange:” Retracted nutrition study reappears in new journal (Retraction Watch)

  • A peer reviewer plagiarized text from the manuscript under review. “We received a complaint from an author that his unpublished paper was plagiarized in an article published in the Journal... After investigation, we uncovered evidence that one of the co-authors of … acted as a reviewer on the unpublished paper during the peer review process at another journal. We ran a plagiarism report and found a high percentage of similarity between the unpublished paper and the one published in the Journal... After consulting with the corresponding author, the editors decided to retract the paper.” Publishing timing does not always reveal who has plagiarized whom.

Nightmare scenario: Text stolen from manuscript during review (Retraction Watch)

 

ACCESS

  • Instructions for writing research summaries for a lay audience. “It is particularly intended to help scientists who are used to writing about biomedical and health research for their peers to reach a wider audience, including the general public, research funders, health-care professionals, patients and other scientists unfamiliar with the research being described…Plain English avoids using jargon, technical terms, acronyms and any other text that is not easy to understand. If technical terms are needed, they should be properly explained. When writing in plain English, you should not change the meaning of what you want to say, but you may need to change the way you say it…A plain-English summary is not a ‘dumbed down’ version of your research findings. You must not treat your audience as stupid or patronise them.”

Access to Understanding (British Library)

  • A retired mathematician proved, and published, the long-sought Gaussian correlation inequality, yet the proof remained obscure because it was published in a little-known journal. “But Royen, not having a career to advance, chose to skip the slow and often demanding peer-review process typical of top journals. He opted instead for quick publication in the Far East Journal of Theoretical Statistics, a periodical based in Allahabad, India, that was largely unknown to experts and which, on its website, rather suspiciously listed Royen as an editor. (He had agreed to join the editorial board the year before.)…With this red flag emblazoned on it, the proof continued to be ignored.”

A Long-Sought Proof, Found and Almost Lost (Quanta Magazine)

 

STATISTICS

How are types of statistics used changing over time? “…the average number of methods used per article was 1.9 in 1978–1979, 2.7 in 1989, 4.2 in 2004–2005, and 6.1 in 2015. In particular, there were increases in the use of power analysis (i.e., calculations of power and sample size) (from 39% to 62%), epidemiologic statistics (from 35% to 50%), and adjustment and standardization (from 1% to 17%) during the past 10 years. In 2015, more than half the articles used power analysis (62%), survival methods (57%), contingency tables (53%), or epidemiologic statistics (50%).” Are more journals now in need of statistical reviewers?

Sato Y, Gosho M, Nagashima K, et al. Statistical Methods in the Journal — An Update. N Engl J Med. 2017;376:1086-1087. doi:10.1056/NEJMc1616211

 

____

Newsletter #5, circulated April 1, 2017. Sources include Retraction Watch and Open Science Initiative listserve. Providing the links does not imply WAME’s endorsement.

Publishing research with ethical lapses, P values, Reproducibility, WAME’s predatory journals statement

RESEARCH ETHICS AND MISCONDUCT

  • An editorial by Bernard Lo and Rita Redberg discusses ethical issues in recently published research in which abnormal lab values were not conveyed to research participants: “Should a study with an ethical lapse be published?…Many journals will not publish research with grave ethical violations, such as lack of informed consent, lack of institutional review board (IRB) approval, or scientific misconduct. However, if violations are contested or less serious, as in this study, the ethical consensus has been to publish valid findings, together with an editorial to raise awareness of the ethical problems and stimulate discussion of how to prevent or address them.”

Addressing Ethical Lapses in Research (JAMA) [formerly free, now first PDF page visible]

  • What should research misconduct be called? “At the heart of the debate is the history of the term. In the U.S., in particular, lobbying from scientists dating to the 1980s has resulted in the term ‘misconduct’ being codified to only refer to the cardinal sins of falsification, fabrication, and plagiarism. This has left lesser offenses, often categorized as ‘questionable research practices,’ relatively free from scrutiny. Nicholas Steneck, a research ethicist at the University of Michigan in Ann Arbor, calls the term ‘artificial.’”

Does labeling bad behavior “scientific misconduct” help or hurt research integrity? A debate rages (Retraction Watch Blog)

RESEARCH REPORTING AND STATISTICS

  • Hilda Bastian provides 5 tips for avoiding P value potholes: commonly encountered problems with how P values are used and interpreted.

5 Tips for Avoiding P-Value Potholes (Absolutely Maybe blog)

  • Videos on research methods related to epidemiology, by Greg Martin, MD, MPH, MBA (University of the Witwatersrand) — basic but useful for anyone wanting a quick, well-done overview of a variety of research topics.

Epidemiology (YouTube)

  • For a bit of humor, The Five Diseases of Academic Publishing.

Got “significosis?” Here are the five diseases of academic publishing (Retraction Watch blog)

ACCESS, INDEXING

  • Acceptance rates for journals applying for membership to OASPA: “Between 2013 and 2015 we accepted fewer than 25% of the total number of applications we received. Some from 2016 are still undergoing review, but we expect the number of accepted applications for last year to fall below 10% once all are concluded.”

Identifying quality in scholarly publishing: Not a black and white issue (OASPA Blog)

RESEARCH REPRODUCIBILITY

  • Overcoming nonreproducibility in basic and preclinical research, by John Ioannidis: “The evidence for nonreproducibility in basic and preclinical biomedical research is compelling. Accumulating data from diverse subdisciplines and types of experimentation suggest numerous problems that can create a fertile ground for nonreproducibility. For example, most raw data and protocols are often not available for in-depth scrutiny and use by other scientists. The current incentive system rewards selective reporting of success stories.”

Acknowledging and Overcoming Nonreproducibility in Basic and Preclinical Research (JAMA) [formerly free, now first PDF page visible]

  • Research reported in newspapers has poor replication validity: “Journalists preferentially cover initial findings although they are often contradicted by meta-analyses and rarely inform the public when they are disconfirmed.”

Poor replication validity of biomedical association studies reported by newspapers (PLOS ONE)

PREDATORY/PSEUDO-JOURNALS

WAME published a new statement on Identifying Predatory or Pseudo-Journals.

Identifying Predatory or Pseudo-Journals (WAME)

____

WAME Newsletter #2, original version circulated February 23, 2017. Identified (in part) from Retraction Watch, Stat News, and LinkedIn Global Health. Providing the links and information does not imply WAME’s endorsement.

Welcome to the WAME Blog: Authors’ view of the manuscript submission process, Science’s English-language bias, Transparent research results (or not), Open access publishing in the Global South

The WAME Blog, featuring the WAME Newsletter, offers news, views, and resources of interest to medical journal editors in general and WAME members — comprising medical journal editors and scholars throughout the world — in particular. The challenges and experiences of editors at small journals and journals in low- and middle-income countries are of particular interest. Articles and activities are free to everyone unless, in rare instances, otherwise indicated.

The first Newsletter was circulated via listserve on February 16, 2017 and is excerpted below. Sources of this Newsletter are Retraction Watch and the Open Science Initiative listserve. Providing the links and information does not imply WAME’s endorsement.

AUTHORS

The delights, discomforts, and downright furies of the manuscript submission process, from Learned Publishing. A discussion of the issues authors face during submission, including the following recommendations (plus a useful appendix of items authors should have on hand when ready to submit a manuscript):

  • “Editors and reviewers should consider manuscripts in any (appropriate) format first – and publishers reset only the accepted papers.
  • There should be three or four standard formats for journals that everyone can copy. Trivial house style requirements should be abolished.
  • The layouts of tables, graphs and references also need to be standardised more. Tables and graphs, and their captions, should be placed where they fit in the text, not at the end of manuscripts.
  • A named person (with an e-mail address at the publisher’s) should be provided by the publisher who can help with the submission process if an author gets stuck.
  • Finally, when the submission process is completed successfully or otherwise, authors should be invited to send any comments/feedback on the system that they have used.”

These authors’ comments, as well as the whole system, should be reviewed, say, every 3-5 years. They also note the importance of allowing authors to review their proofs.

LANGUAGE

The problems and loss of information created by the bias toward science reported in English. “Not only does the larger scientific community miss out on research published in non-English languages. But the dominance of English as science’s lingua franca makes it more difficult for researchers and policy makers speaking non-English languages to take advantage of science that might help them…Amano thinks that journals and scientific academies working to include international voices is one of the best solutions to this language gap. He suggests that all major efforts to compile reviews of research include speakers of a variety of languages so that important work isn’t overlooked. He also suggests that journals and authors should be pushed to translate summaries of their work into several languages so that it’s more easily found by people worldwide.”

How a bias toward English-language science can result in preventable crises, duplicated efforts and lost knowledge (Smithsonian Magazine)

RESEARCH REPORTING AND STATISTICS

  • Paul Glasziou describes how to present research results in a transparent way so that readers can understand the study’s implications — or not. “…presenting the results in a clear, unbiased, and understandable way is of paramount importance. Editors should insist on clear, simple presentations of the main results—preferably in graphical formats. Without that, authors and editors will continue to contribute to the considerable waste in research and the gaps between research and practice.”

How to hide trial results in plain sight (BMJ Blogs)

  • The influence of statistical noise in medical research results: “Statistically speaking, a statistically significant result obtained under highly noisy conditions is more likely to be an overestimate and can even be in the wrong direction. In short: a finding from a low-noise study can be informative, while the finding at the same significance level from a high-noise study is likely to be little more than . . . noise.”

Why traditional statistics are often “counterproductive to research the human sciences” (Retraction Watch blog)
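That point — condition on statistical significance in a noisy study and the surviving estimates exaggerate the true effect and sometimes flip its sign — can be demonstrated in a few lines. The simulation below uses invented numbers (a small true effect measured with a large standard error) purely to illustrate the argument.

```python
# Sketch: filtering noisy estimates for p < 0.05 yields overestimates
# (magnitude errors) and occasional sign errors. Invented numbers.
import numpy as np

rng = np.random.default_rng(3)

true_effect = 0.2   # small real effect
se = 1.0            # large standard error: a "high-noise" study

estimates = rng.normal(true_effect, se, size=100_000)
significant = estimates[np.abs(estimates / se) > 1.96]  # two-sided p < 0.05

print(f"share reaching p < 0.05:          {significant.size / estimates.size:.1%}")
print(f"mean |estimate| when significant: {np.abs(significant).mean():.2f} "
      f"(true effect {true_effect})")
print(f"wrong sign when significant:      {(significant < 0).mean():.1%}")
```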

ACCESS

  • OASPA hosted a Twitter Chat on Open Access Publishing in the Global South. From the OASPA website (and thank you to the OSI listserve for the heads up): “On Wednesday 22nd February 2017, OASPA will host a live Twitter chat about open access publishing in the Global South with Xin Bi (Xi’an Jiaotong-Liverpool University/DOAJ), Ina Smith (Academy of Science of South Africa), Abel Packer (SciELO), and Lars Bjørnshauge (DOAJ).”
  • OASPA has posted a webinar on open access publishing in the Global South; free OASPA webinars are available here: http://oaspa.org/information-resources/oaspa-webinars/