Implementing Reporting Guidelines: Why and How, for Journal Editors

The 4 days of the 2017 Peer Review Congress (3 days of research plus a day of pre-conference sessions) provided more information than I can convey here; luckily there are the conference abstracts (http://www.peerreviewcongress.org/index.html ), Twitter (#prc8), and others’ summaries (eg, Hilda Bastian’s pithy blog at http://blogs.plos.org/absolutely-maybe/2017/09/10/bias-conflicts-spin-the-8th-olympiad-of-research-on-science-publishing-begins , with subsequent days linked from the initial blog) for your perusal.

However, there were some sessions that cannot be found online. One of these, on implementing reporting guidelines, provided enough practical points for journal editors that I recorded the session in detail for WAME members. The following is my record of the session. It does not represent official policies or endorsements of WAME.

EQUATOR GoodReports Campaign Workshop on Implementing Reporting Guidelines: Time for Action

Saturday, September 9, 2017

  • Doug Altman, Director, UK EQUATOR Centre
  • Caroline Struthers, Education and Training Manager, UK EQUATOR Centre
  • Jason Roberts, Executive Editor, Headache

The stated goal of the session was to develop a one-page action plan for implementing reporting guidelines. That may be a work in progress, but the information shared was substantially more than one page.

  • Many journals recommend or require reporting guidelines, but is requiring reporting checklists sufficient to increase the proportion of submissions that include checklists and adhere to the guidelines? The clear answer at this session, in previous research, and in several studies presented at the PRC was a resounding No.
  • How can greater uptake at publication be achieved? See Implementing guidelines at journals and Case Reports below.

Why are reporting guidelines necessary and important?

The medical literature should not mislead and should provide enough information on methods to allow replication (as also stated in the ICMJE Recommendations). Methods and results should be presented with the key characteristics that allow the study to be included in a subsequent systematic review/meta-analysis. Accuracy, completeness, and transparency are essential, as stated in the Declaration of Helsinki. Reporting must also make clear whether outcomes were prespecified.

There is plenty of evidence of poor and misleading reporting. For example:

  • Hoffmann et al [BMJ 2013 https://doi.org/10.1136/bmj.f3755]: only 39% of non-pharmacological interventions were adequately described
  • Vera-Badillo et al [Ann Oncol 2013 https://doi.org/10.1093/annonc/mds636]: “33% showed bias in reporting of the primary endpoint and 67% in the reporting of toxicity”
  • Munter et al [Eur J Anaesthesiol 2015 doi:10.1097/EJA.0000000000000176]: more than 10 years after CONSORT, RCTs reported a median of 60.0% of CONSORT criteria; only 72.1% presented clearly defined primary and secondary outcome parameters, and the number of times a trial was cited was only weakly associated with CONSORT reporting
  • Sivendran et al [J Clin Oncol 2014 doi: 10.1200/JCO.2013.52.2219]: re adverse events in oncology research — “96% reported only adverse events occurring above a threshold rate or severity, 37% did not specify the criteria used to select which adverse events were reported, and 88% grouped together adverse events of varying severity”
  • Wayant et al [PLOS ONE 2017 https://doi.org/10.1371/journal.pone.0178379]: re discrepancies in outcomes between trial registration and reporting in hematology journals: among 118 discrepancies, 25% of primary outcomes were demoted, 40% were omitted, and 25% were added.

These discrepancies tend to lead to cumulative over-optimism, including overstating positive findings and understating harms, all potentially harming patients. Poor reporting also makes extracting information for systematic reviews difficult. As noted by Larson and Cortazal [J Clin Epidemiol 2012 doi: 10.1016/j.jclinepi.2011.07.008], findings that are poorly reported have the potential to cause real harm.

Reporting guidelines and their effects

Reporting guidelines were developed beginning with CONSORT in 1996. There are now 375 guidelines on the EQUATOR website, a daunting number to sort through, although the most commonly used ones are linked from the home page (https://www.equator-network.org) and a search tool is provided (https://www.equator-network.org/reporting-guidelines/). About 20 guidelines address most research; the other 350 or so are specialized, so most editors need not worry about them, although they are helpful for the specific research to which they apply. (EQUATOR scans the peer-reviewed literature for guidelines to include. Guidelines included in EQUATOR were published in a peer-reviewed journal but did not necessarily adhere to a specific development methodology.)

Have the reporting guidelines made a difference? It depends. Sekula et al [PLOS ONE. 2017 doi: 10.1371/journal.pone.0178531] evaluated what happened after publication of the REMARK guidelines for tumor marker prognostic studies. Overall mean reporting scores were 53% (range: 10%-90%) before the REMARK guidelines and, for studies that cited the REMARK guidelines, 58% (range: 30%-100%) after, vs 58% (range: 20%-100%) for studies that did not cite the REMARK guidelines. Some manuscripts cite the guidelines without adhering to them at all.

Authors may have a difficult time identifying the relevant guideline. Hopewell, Boutron, Altman et al [BMC Medicine 2016 https://doi.org/10.1186/s12916-016-0736-x] tested the tool WebCONSORT to help authors identify the correct CONSORT extension. The intent was to randomize RCTs for the study, but approximately a third of manuscripts referred by journals were not RCTs. A quarter of authors did not select the correct extension and the tool did not result in a higher number of CONSORT and CONSORT extension items being reported.

Bruce et al [BMC Medicine 2016 https://doi.org/10.1186/s12916-016-0631-5] conducted a systematic review of randomized interventions to improve the quality of peer review at biomedical journals and found that use of a checklist did not improve the quality of reviews. (The interventions that did have an effect were adding a statistical peer review, which improved the quality of the final manuscript, and open peer review, which improved the quality of the peer review report and decreased the rate of rejection.) Cobo et al [BMJ 2011 https://doi.org/10.1136/bmj.d6783] found modest improvement with implementation of reporting guidelines. Wynne et al [J Pediatr Surg 2011 doi: 10.1016/j.jpedsurg.2010.09.077] found that implementing guidelines at that journal improved mean global reporting scores from 72 to 80. Pandis et al [J Clin Epidemiol doi: 10.1016/j.jclinepi.2014.04.001] found that in dental surgery trials, reporting of CONSORT items increased from 40% to 80% after implementation.

Tools to help with adherence

Caroline Struthers reported on the use of the EQUATOR Wizard to identify which of the nearly 400 guidelines is appropriate for a given study. It asks 4-5 questions about the work (many answers lead to STROBE, of course, and it doesn't include the extensions). However, it depends on the author knowing basics about the research, such as whether it involved randomization; in their research, 33% still didn't identify the correct guideline. They are working toward adding questions that guide authors to the correct guideline rather than asking what their study design is.
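
As a minimal, hypothetical sketch of this kind of question-driven routing, the Python snippet below maps a few yes/no answers to a guideline. The questions, answer names, and mappings are illustrative only; they are not the EQUATOR Wizard's actual decision logic, and the snippet ignores the many guideline extensions.

```python
# Hypothetical sketch of routing a study to a reporting guideline from a few
# yes/no answers. Illustrative only -- not the EQUATOR Wizard's real logic.

def suggest_guideline(answers: dict) -> str:
    if answers.get("systematic_review"):
        return "PRISMA"
    if answers.get("single_patient_report"):
        return "CARE"
    if answers.get("randomized"):
        return "CONSORT"   # extensions (cluster, pilot, etc.) not handled here
    if answers.get("diagnostic_accuracy"):
        return "STARD"
    if answers.get("observational"):
        return "STROBE"    # the most common destination, as noted above
    return "Unclear - consult the EQUATOR search tool"

print(suggest_guideline({"randomized": True}))     # CONSORT
print(suggest_guideline({"observational": True}))  # STROBE
```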

Another tool, Penelope check, will highlight issues in the manuscript for the author. A third is designed for studies that use more than one design (such as an RCT with an economic evaluation) and combines checklists into a single checklist without duplication. A fourth concept places manuscript content into an article template. Building a checklist format into pre-registration templates could help encourage researchers to conduct research appropriately by addressing reporting issues much earlier upstream.

Implementing guidelines at journals

While many journals list guidelines as a requirement or recommendation in their Instructions for Authors, with little apparent effect, some journals have gone further and enforced implementation. Jason Roberts stated that the level of awareness of reporting guidelines varies considerably across biomedical journals. Implementation is especially challenging at journals with fewer resources and where editors and authors have less methodological training.

Furthermore, editors want a solution that will cause minimal resistance from authors, rather than driving authors away to journals that don't require checklists. However, essentially no research has been done on how to implement guidelines.

Some tools have been developed such as StatReviewer, which fills out the CONSORT checklist automatically. It has been built into the Editorial Manager system (available January 2018).

Some challenges of implementation even when the journal has staff to facilitate it:

  • The checklist process may lead to an overly complex submission and review process
  • Entrenched practices are accepted but flawed
  • Subject thought leaders believe their research results trump methods/reporting standards
  • Checklists are subject to misinterpretation (although they also have accompanying documents to explain the checklist)
  • Researchers may be unable to comprehend reporting guidelines, particularly if they have weak methodology skills with little training

Some journals have had relatively successful implementation (checklists are required, and authors comply rather than going elsewhere). They’ve found the following steps to be successful:

  • Audit the problem — understand the problem at the journal. Who will you have to convince?
  • Find champions among thought leaders to raise awareness among editors and researchers
  • Organize a cohort of journals in a given field to implement simultaneously, so that authors can’t simply go to a journal without the requirement
  • Consider initial spotlight editorial (since people don’t read Instructions for Authors)
  • Devise technical and educational implementation strategies — at the publisher level. Provide training packages at meetings or virtually.
  • Think about enforcement — how will your journal check for adherence?
  • Require the checklist at submission (this can be made a technical requirement), but be aware that if the submission is completed by a staff person, the checklist may simply have boxes checked without page numbers or without the information actually being present in the manuscript
  • Have a content expert familiar with the checklist requirements check whether the checklists have been completed correctly — clinicians likely won't know
  • Phased or complete launch? Phasing in doesn’t work–mandate and go for it
  • Consider teaming up with other journals — write editorials about the requirements
  • Follow up

Other practical points re implementation:

  • Recruit reviewers into helping to check requirements
  • Find well-reported papers in the field that follow the reporting guidelines to show as examples
  • Require the checklist be followed before sending for peer review to make peer review more effective — reviewers have the information necessary to evaluate the paper, rather than having to ask for a lot of additional information

Case reports

Three journal editors shared their experiences and take home messages.

  1. Headache. Jason discussed developing and implementing a reporting guidelines policy at the small journal Headache. He suggested the following steps:
  • Identify the needs of your journal
    • what are other journals in your field doing?
    • which guidelines do you want to endorse? what’s relevant to your journal?
  • Select champions — what well respected researchers in your field will support the cause?
  • Identify appropriate checklists
  • Determine enforcement level
    • upload with submission (eg, ScholarOne: select study type and receive checklist)
    • if not done appropriately, request it at revision — many authors make the changes only in the checklist, not in the paper; verifying that the changes are in the paper requires manual effort
    • enforcement has depended on a methods consultant reviewing each study
  • Phased or full launch
  • Write up proposal on implementing improved reporting standards
  • Preparations for launch
  • Launch activities
  • Evaluation and audit
  • Offer uniformly good reporting — not highly variable
  • Some journals point authors to EQUATOR, but the journal can also give them the form directly
  • Often authors do not upload the checklists (he pointed out that the checklist is more challenging for the older generation, who have never used one)
  2. Arch Phys Med Rehab. Alan Heinemann, co-Editor in Chief of Arch Phys Med Rehab, described his experience (in brief, at the journal about half of submissions are rejected; 15% to 20% are accepted).

The driving force was an initial agreement published by 28 rehabilitation journals; 4 more agreed later, after phone and email conversations. The initial holdouts were worried about losing submissions but eventually agreed because of the importance of accurate reporting for patient care and systematic reviews.

They published an editorial announcing it (Chan L, Heinemann AW, Roberts J Arch Phys Med 2014 https://doi.org/10.2522/ptj.2014.94.4.446) and made a checklist mandatory for submission.

Authors were not necessarily pleased — they received complaints, pushback, and authors shopped around to other journals for the first 1 to 1.5 years. They also found that noncompliant authors were more likely to be submitting poorer quality papers.

What difference does it make? They found that 60% of editors reported requiring the reporting guidelines CONSORT, PRISMA, STROBE, STARD and CARE. 59% believe authors complete them accurately. 57% believed that guidelines improve quality a great deal, 38% some, 5% not at all. Even those journals that don’t require it benefitted as authors started to complete the checklist whether or not it was required. They had a Q&A with authors at a society conference to discuss the rationale for the requirements and questions and issues that authors had.

Barriers were authors’ lack of familiarity and the difficulties of outreach to authors — not only do they not read the Instructions for Authors but they won’t read the editorial either.

It places a burden on editorial staff, editors, reviewers — time and cost. However, they have paid statisticians who verify whether information is present, which helped with enforcement.

Conclusions:

  • long lead time — 1 year recommending followed by requiring, but authors were still not fully on board
  • educate authors at conferences
  • provide detailed author instructions
  • apply the lessons of implementation science (e.g., Knowledge Translation into healthcare — Sharon Straus et al)
  • cooperate among journals

It is important for editors from different journals to discuss this at society meetings.

  3. F1000 Research. Sabina Alam presented her experience at F1000 Research, where ~20% of manuscripts are rejected and authors will submit elsewhere if they don't want to adhere. It is a Life Sciences “Platform” (not a journal, since manuscripts are posted before peer review) with many types of articles. Authors expect to be able to post any paper without revising it. F1000 Research tries to convince authors that their paper will fare better in peer review if they adhere to the reporting guidelines.
  • Staff editors all trained in main EQUATOR guidelines — CONSORT, CARE, PRISMA, STROBE, SPIRIT.
  • The editorial team checks every submission, taking into account key reporting guidelines. They liaise with authors until the manuscript can be published (especially important because it is published before peer review).
  • The mandatory guidelines are CONSORT, PRISMA, and CARE.
  • Many authors are not aware of reporting guidelines especially for interview-based studies, case reports, cross-sectional studies.
  • STROBE: they often miss details on requirements, variables, bias — don’t always see the point
  • COREQ: miss details on who conducted the interview, relationship with the participants, conditions under which they consented to be interviewed
  • CARE: routinely miss key details about patient history, main take home message, and timeline
  • ARRIVE: miss details on license, ethics, safe animal handling, adverse events
  • Language barriers can be a challenge but author workshops can help reach authors with such barriers (Sabina did so when she was at BMC).

Q: Has F1000 Research considered unified reporting guidelines, à la Nature, to find the common denominators that should be presented for all research? A: No; that would be an interesting idea, but one would still have to get authors to adhere to it.

Q: Could GoodReports be a quality mark that journals receive if they implement reporting guidelines (and implementation in the studies themselves is verified)? A: Yes, but it would be important to distinguish quality of reporting from quality of research. A poorly conducted study can be well reported (and vice versa, although it is difficult to know whether a study is well done without good reporting).

Beyond journals

Should making researchers use reporting guidelines be only the journals' responsibility? Who else could be involved? It would be vastly better to implement reporting guidelines at the time of study design, to ensure not only that the reporting is complete but that the methodology is appropriate. The following suggestions were made by the panel and audience.

  • Institutions — publication officers. One publication officer at the conference reported good success in getting papers published after applying checklists and improving reporting
  • Funders — a key element — could funders require the reporting checklist? Funders spend a great deal on deciding what to fund but very little on the quality of what is published as a result of that funding. Funders in the audience explained that they don't want to interfere in the work of researchers. However, using guidelines would protect funders by showing that best practices were followed; they could require the checklist as part of the author's report. Adhering to the recommendations reduces the risk of rejection, which also benefits funders.
  • Reporting guideline developers
  • Publishers: they're aware of EQUATOR but there's no financial incentive — could it be encouraged as an ethical issue? It won't cost them anything aside from editor time
  • Editing companies like Editage (for-profit) — 600 people work for them — could they implement some reporting standards?
  • AMWA and EMWA — medical writers must keep up to date with reporting guidelines https://c.ymcdn.com/sites/www.amwa.org/resource/resmgr/about_amwa/JointPositionStatement.Profe.pdf

Without additional incentives, authors need to see what’s in it for them. Perhaps they’re more likely to get published? The checklist helps editors review content and reviewers identify what’s missing. However, it also increases the amount of material to review.

Authors see it as yet another form to submit without seeing the value in it–having a champion behind it is important. For example, trial registration occurred because of champions, evidence, and a concerted effort by journals.

Discussion

It is important to create a sense of urgency and to tie implementation to things that people already feel a sense of crisis about. For example, systematic reviews are needed to inform evidence-based guidelines, and adherence to reporting guidelines is needed to conduct accurate systematic reviews. Machine learning may help.

Reporting guidelines may be a way for journals to help differentiate themselves from predatory journals. Perhaps journals that require guidelines could receive a “good reports” badge.

What is the role of publishers? They need to be convinced it is in their best interest (by increasing efficiency and quality of review); this can be a political issue. For example, a journal in a small shop can have greater control over what they implement in their submission system than a journal in a very large publishing house.

What if most authors go to preprint servers?

Would a reviewer template help the review process?

What do third party editing companies do?

  • Rubriq: offered third-party review, but that activity didn't work out, so they now screen for reporting guidelines, etc.
  • ISMPP CMPP  — high quality, high cost

Use a train-the-trainer model for medical society meetings?

Please note that implementation resources for journals are provided on the EQUATOR website at https://www.equator-network.org/toolkits/using-guidelines-in-journals/.

[Disclosure: I accepted an EQUATOR whistle to “blow the whistle on poor research reporting” (best conference bling)]

Note: This article does not represent official policies or endorsements by WAME.  

Article updated 9/21/17.

 

ICMJE on data sharing/ Not so random RCTs? Positive results bias/ What’s next for peer review? Ethics of predatory publishing/ Is the Impact Factor stochastic?

DATA SHARING

ICMJE statement on data sharing, published June 5, 2017, in the ICMJE journals:

“1. As of July 1, 2018 manuscripts submitted to ICMJE journals that report the results of clinical trials must contain a data sharing statement as described below

2. Clinical trials that begin enrolling participants on or after January 1, 2019 must include a data sharing plan in the trial’s registration…If the data sharing plan changes after registration this should be reflected in the statement submitted and published with the manuscript, and updated in the registry record. Data sharing statements must indicate the following: whether individual deidentified participant data (including data dictionaries) will be shared; what data in particular will be shared; whether additional, related documents will be available (e.g., study protocol, statistical analysis plan, etc.); when the data will become available and for how long; by what access criteria data will be shared (including with whom, for what types of analyses and by what mechanism)…Sharing clinical trial data is one step in the process articulated by the World Health Organization (WHO) and other professional organizations as best practice for clinical trials: universal prospective registration; public disclosure of results from all clinical trials (including through journal publication); and data sharing.”

Taichman DB, Sahni P, Pinborg A, Peiperl L, Laine C, James A, et al. Data Sharing Statements for Clinical Trials: A Requirement of the International Committee of Medical Journal Editors. PLOS Med. 2017;14(6):e1002315. https://doi.org/10.1371/journal.pmed.1002315

 

RESEARCH REPRODUCIBILITY AND MISCONDUCT

  • Not so random?

Randomization in an RCT confers an advantage over other study designs because random allocation means that any differences in variables between comparison groups occur at random (rather than due to confounding). However, some researchers have identified RCTs whose data do not appear to have been randomly sampled, a clue that the methodology may have been different from what the authors reported.

Carlisle “analysed the distribution of 72,261 means of 29,789 variables in 5087 randomised, controlled trials published in eight journals between January 2000 and December 2015…Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10^15 (equivalent to one drop of water in 20,000 Olympic-sized swimming pools).”

Carlisle JB. Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals. Anaesthesia. 2017;72:944–952. doi:10.1111/anae.13938
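
As a rough illustration of the statistical idea behind this kind of screening (comparing reported baseline summary statistics against what genuine randomisation would produce), here is a minimal Python sketch. It is not Carlisle's actual Monte Carlo procedure; the summary data are invented, and a real screen would pool many more variables and trials.

```python
# Minimal sketch of a Carlisle-style baseline check (not his exact method):
# from each trial's reported baseline mean, SD, and group size, compute a
# p-value for the between-group difference, then test whether the pooled
# p-values depart from the Uniform(0, 1) distribution expected under
# genuine randomisation.
import numpy as np
from scipy import stats

def baseline_p_value(mean1, sd1, n1, mean2, sd2, n2):
    """Welch t-test p-value computed from summary statistics only."""
    t, p = stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2,
                                      equal_var=False)
    return p

# Invented summary data: (mean1, sd1, n1, mean2, sd2, n2) per baseline variable.
reported_baselines = [
    (54.2, 8.1, 50, 54.0, 7.9, 52),      # e.g., age
    (27.3, 4.5, 50, 27.5, 4.4, 52),      # e.g., BMI
    (120.1, 11.0, 50, 119.8, 10.7, 52),  # e.g., systolic BP
]

p_values = [baseline_p_value(*row) for row in reported_baselines]

# A marked excess of p-values near 1 (groups "too similar") or near 0 is
# suspicious; the Kolmogorov-Smirnov test compares them to the uniform.
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print(f"baseline p-values: {np.round(p_values, 3)}; KS test vs uniform: p = {ks_p:.3f}")
```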

  • In another study, Carlisle et al applied the same approach and concluded that “The Monte Carlo analysis may be an appropriate screening tool to check for non-random (i.e. unreliable) data in randomised controlled trials submitted to journals.”

Carlisle JB, Dexter F, Pandit JJ, Shafer SL, Yentis SM. Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials. Anaesthesia. 2015;70:848–858. doi:10.1111/anae.13126

  • Bolland et al used Carlisle’s method to analyze RCTs published by a group of investigators “about which concerns have been raised” and found:

“Treatment groups were improbably similar. The distribution of p values for differences in baseline characteristics differed markedly from the expected uniform distribution (p = 5.2 × 10^-82). The distribution of standardized sample means for baseline continuous variables and the differences between participant numbers in randomized groups also differed markedly from the expected distributions (p = 4.3 × 10^-4, p = 1.5 × 10^-5, respectively).”

Bolland MJ, Avenell A, Gamble GD, Grey A. Systematic review and statistical analysis of the integrity of 33 randomized controlled trials. Neurology. 2016. doi:10.1212/WNL.0000000000003387

  • Is this approach yet another type of manuscript review for busy editors to apply, assuming the calculations are not too daunting? In Retraction Watch, Oransky comments, “So should all journals use the method — which is freely available online — to screen papers? In their editorial accompanying Carlisle’s paper, Loadsman and McCulloch note that if that were to become the case, ‘…dishonest authors could employ techniques to produce data that would avoid detection. We believe this would be quite easy to achieve although, for obvious reasons, we prefer not to describe the likely methodology here.’ Which begs the question: what should institutions’ responsibilities be in all this?”

From: Two in 100 clinical trials in eight major journals likely contain inaccurate data: Study (Retraction Watch)

  • In other news, PubPeer announces PubPeer 2.0. From Retraction Watch: “RW: Will the identity changes you’ve installed make it more difficult for scientists to unmask (and thereby seek recourse from) anonymous commenters? BS: Yes, that is one of the main motivations for that change. Once the transition to the new site is complete our goal is to not be able to reveal any user information if we receive another subpoena or if the site is hacked.”

Meet PubPeer 2.0: New version of post-publication peer review site launches today (Retraction Watch)

 

RESEARCH BIAS

Addressing bias toward positive results

  • “The good news is that the scientific community seems increasingly focused on solutions…But true success will require a change in the culture of science. As long as the academic environment has incentives for scientists to work in silos and hoard their data, transparency will be impossible. As long as the public demands a constant stream of significant results, researchers will consciously or subconsciously push their experiments to achieve those findings, valid or not. As long as the media hypes new findings instead of approaching them with the proper skepticism, placing them in context with what has come before, everyone will be nudged toward results that are not reproducible…For years, financial conflicts of interest have been properly identified as biasing research in improper ways. Other conflicts of interest exist, though, and they are just as powerful — if not more so — in influencing the work of scientists across the country and around the globe. We are making progress in making science better, but we’ve still got a long way to go.”

Carroll AE.  Science Needs a Solution for the Temptation of Positive Results (NY Times)

  • But replication leads to a different bias, says Strack: “In contrast, what is informative for replications? Not that the original finding has been replicated, but that it has been ‘overturned.’ Even if the editors’ bias (Gertler, 2016) bias [sic] is controlled by preregistration, overturned findings are more likely to attract readers’ attention and to get cited…However, there is an important difference between these two biases in that a positive effect can only be obtained by increasing the systematic variance and/or decreasing the error variance. Typically, this requires experience with the subject matter and some effort in controlling unwanted influences, while this may also create some undesired biases. In contrast, to overturn the original result, it is sufficient to decrease the systematic variance and to increase the error. In other words, it is easier to be successful at non-replications while it takes expertise and diligence to generate a new result in a reliable fashion..”

Strack F. From Data to Truth in Psychological Science. A Personal Perspective. Front Psychol, 16 May 2017. https://doi.org/10.3389/fpsyg.2017.00702

 

PEER REVIEW

What’s next for peer review?

From the London School of Economics blog, reproduced from “SpotOn Report: What might peer review look like in 2030?” from BioMed Central and Digital Science:

“To square the [peer reviewer] incentives ledger, we need to look to institutions, world ranking bodies and funders. These parties hold either the purse strings or the decision-making power to influence the actions of researchers. So how can these players more formally recognise review to bring balance back to the system and what tools do they need to do it?

Institutions: Quite simply, institutions could give greater weight to peer review contributions in funding distribution and career advancement decisions. If there was a clear understanding that being an active peer reviewer would help further your research career, then experts would put a greater emphasis on their reviewing habits and research would benefit.

Funders: If funders factored in peer review contributions and performance when determining funding recipients, then institutions and individuals would have greater reason to contribute to the peer review process.

World ranking bodies: Like researchers, institutions also care about their standing and esteem on the world stage. If world ranking bodies such as THE World University Rankings and QS World Rankings gave proportionate weighting to the peer review contributions and performance of institutions, then institutions would have greater reason to reward the individuals tasked with peer reviewing.

More formal weighting for peer review contributions also makes sense, because peer review is actually a great measure of one’s expertise and standing in the field. Being asked to peer review is external validation that academic editors deem a researcher equipped to scrutinise and make recommendations on the latest research findings.

Researchers: Researchers will do what they have to in order to advance their careers and secure funding. If institutions and funders make it clear that peer review is a pathway to progression, tenure and funding, researchers will make reviewing a priority.

Tools: In order for peer review to be formally acknowledged, benchmarks are necessary. There needs to be a clear understanding of the norms of peer review output and quality across the myriad research disciplines in order to assign any relative weighting to an individual’s review record. This is where the research enterprise can utilise the new data tools available to track, verify and report all the different kinds of peer review contributions. These tools already exist and researchers are using them. It’s time the institutions that rely on peer review got on board too.”

Formal recognition for peer review will propel research forward (London School of Economics)

PREDATORY/PSEUDO-JOURNALS

Biochemia Medica published a cluster of papers on predatory journals this month, including research by Stojanovski and Ana Marusic on 44 Croatian open access journals, which concludes: “In order to clearly differentiate themselves from predatory journals, it is not enough for journals from small research communities to operate on non-commercial bases…[they must also have] transparent editorial policies.” The issue also includes a paper on the ethical issues of predatory publishing (for which I am a coauthor, by way of disclosure) and an essay by Jeffrey Beall.

IMPACT FACTOR

“…more productive years yield higher-cited papers because they have more chances to draw a large value. This suggests that citation counts, and the rewards that have come to be associated with them, may be more stochastic [randomly determined] than previously appreciated.”

Michalska-Smith MJ, Allesina S. And, not or: Quality, quantity in scientific publishing. PLOS ONE. 2017.12(6): e0178074. https://doi.org/10.1371/journal.pone.0178074

 

ACCESS

  • The American Psychological Association raised the ire of some authors after requesting that links to free copies of APA-published articles (“unauthorized online postings”) from authors’ websites be removed.

Researchers protest publisher’s orders to remove papers from their websites (Retraction Watch)

  • Access challenges in a mobile world 

Bianca Kramer at the University of Utrecht in the Netherlands studied Sci-Hub usage data attributed to her institution and compared it with holdings data at her library. She found that “75% of Utrecht Sci-Hub downloads would have been available either through our library subscriptions (60%) or as Gold Open Access/free from publisher (15%).” While these data are not comprehensive, nor granular enough for certainty, she concluded that a significant component of usage of Sci-Hub was caused by problems of access and the desire for convenience by users.

Failure to Deliver: Reaching Users in an Increasingly Mobile World (Scholarly Kitchen)

__

Newsletter #11: Originally circulated June 18, 2017. Sources of links include Retraction Watch, Health Information for All listserve, Scholarly Kitchen, Twitter. Providing the links does not imply WAME’s endorsement. 

 

Is citation manipulation now acceptable? Whither the digital revolution? New predatory journal blacklist? How can research be made more reproducible? Criminal charges for research misconduct

 

IMPACT FACTOR

Many fewer journals are suspended from Impact Factor analyses for citation manipulation this year than in previous years, and two are added back after previous suspension. How much manipulation is acceptable? (Why is a measure so easily manipulated considered so important, at least to some?)

How Much Citation Manipulation Is Acceptable? (Scholarly Kitchen)

OPEN ACCESS

  • From The Guardian, whither the digital revolution?

“…although digital technology and the internet have created a new terrain in which the ideals of open access have begun to germinate, they have yet to produce a cost-effective and reliable harvest of accessible knowledge. The acquisition by private publishing companies of peer review processes that had previously been the preserve of scholarly societies has combined with the increased dependence of individual academics on where, rather than what, they publish to control the digital revolution in scholarly publishing. This has prevented the full realisation of its promise to make publishing faster and cheaper.”

It’s time for academics to take back control of research journals (The Guardian)

  • Are journals with few resources less likely to be found, thanks to Google’s algorithms for displaying search results? Another gap for Global South journals to surmount?

“Solid article promotion practices may explain why 89% of the Top 100 Altmetric articles in 2016 came from journals that generally employ paywalls as well as the trend for those articles to perform better in social media and the tendency for Gold OA articles from for-profit publishers to perform better.”

Detours and Diversions — Do Open Access Publishers Face New Barriers? (Scholarly Kitchen)

PREDATORY/PSEUDO-JOURNALS

Cabell’s International is forming a paywalled blacklist of journals. Cabell’s list will be drawn from all journals, not just open access journals. Their criteria will be provided at some point in the future (below, plagiarized articles are mentioned as one criterion, suggesting that journals that don't screen for plagiarism will be at risk of being listed). However, journals will have to contact Cabell’s to find out whether they are listed. From Nature:

Cabell uses some 65 criteria – which will be reviewed quarterly – to check whether a journal should be on its blacklist, adding points for each suspect finding. Examples include fake editors, plagiarized articles and unclear peer-review policies, says Berryman, although she declined to provide all criteria, saying that the firm would present them later in the year. A team of four employees checks for evidence that journals meet the criteria by searching online or contacting authors and journals for verification.

“It’s pretty much as scientific as we can get at this point,” she says.

“Some of the publishers and journals listed by Beall aren’t on Cabell’s list,” says Berryman. And Cabell’s has added new journals, including some that aren’t open access. The firm declined to provide details of the differences between its list and Beall’s, but says that it will clearly state all the reasons that a journal is on its list. Berryman hopes that will limit libel suits. Publishers or journals will be able to contact Cabell’s to find out whether they are indexed, and will have the opportunity to appeal their status once a year.
Pay-to-view blacklist of predatory journals set to launch (Nature News)

RESEARCH INTEGRITY AND REPRODUCIBILITY

  • A study of Editorial Expressions of Concern: “…We identified 230 EEoCs that affect 300 publications indexed in PubMed, the earliest issued in 1985. Half of the primary EEoCs were issued between 2014 and 2016 (52%). We found evidence of some EEoCs that had been removed by the publisher without leaving a record, and some were not submitted for PubMed or PMC indexing. A minority of publications affected by EEoCs had been retracted by early December 2016 (25%)…The majority of EEoCs were issued because of concerns with validity of data, methods, or interpretation of the publication (68%), and 31% of cases remained open. Issues with images were raised in 40% of affected publications.”

Vaught M, Jordan DC, Bastian H. Concern noted: a descriptive study of editorial expressions of concern in PubMed and PubMed Central. Research Integrity and Peer Review. 2017;2:10. https://doi.org/10.1186/s41073-017-0030-2

  • What scientists accused of misconduct go through:

“…whistleblowers urgently need an internationally accepted code of conduct, including pretty simple rules such as not attacking the scientists in public while the investigation is running, no personal insults, no mass e-mails to multiple recipients in order to ruin the reputation of the scientists, etc.”

It’s not just whistleblowers who deserve protection during misconduct investigations, say researchers (Retraction Watch)

  • Time to expand the Methods section to improve reproducibility?

“Journals can greatly improve the reproducibility of research by requiring methodological transparency. The print paradigm of journal publishing led us to poor practices in an attempt to save space and reduce the number of printed pages. When trying to cut down an article to reach an assigned page/word limit, usually the first thing to go was a detailed methods section. In a digital era where journals are doing away with page limits, why not add back in this vital information? For a journal that still exists in print, why not require detailed methodologies in the supplementary material? If you have a policy requiring public posting of the data behind the experiments, why not a similar policy for the methods?”
Reproducible Research, Just Not Reproducible By You (Scholarly Kitchen)

  • How can research be made more reproducible?

In Nature, William Kaelin Jr argues that when researchers are required to provide too many experiments to make broad assertions, they spread their research thin, rather than first confirming their findings using multiple approaches. It also makes peer review daunting for reviewers (requiring a “mini-sabbatical” to review).

“We must return to more careful examination of research papers for originality, experimental design and data quality, and adopt more humility about predicting impact, which can truly be known only in retrospect…We should also place more emphasis on the quality of a body of work and whether it has enabled subsequent discoveries, and focus less on where individual papers are published…The main question when reviewing a paper should be whether its conclusions are likely to be correct, not whether it would be important if it were true.”
Publish houses of brick, not mansions of straw (Nature World View)

Peer reviewer stole data and published; now work has been retracted

Yikes: Peer reviewer stole (and published) author’s data (Retraction Watch)

 

  • BMJ Global Health pulled a published paper on a US-funded trial in Mumbai that had been found to be unethical, after deciding it failed legal review.

BMJ journal yanks paper on cancer screening in India for fear of legal action (Retraction Watch)

Criminal charges for research misconduct

Oransky and a colleague presented at the 5th World Congress on Research Integrity: “A total of 39 science researchers from 7 countries were identified as having been subject to criminal sanctions for actions related to research misconduct between 1979 and 2015…Overall, 14 researchers were criminally sanctioned for actions directly involving their own research. Three of those 14 had criminal charges solely related to research, while the other 11 also had charges stemming indirectly from their research process, e.g., grant fraud, embezzlement of research funds, or bribery.”

Oransky I, Abritis A. Who Faces Criminal Sanctions for Scientific Misconduct? 5th World Congress on Research Integrity 2017 (Abstract).

AUTHORSHIP

CRediT (Contributor Roles Taxonomy) proposes a new author contribution taxonomy, to be embedded in the byline. Formerly posted for comment at http://biorxiv.org/content/early/2017/05/20/14022 ; no longer available but project can be viewed at http://docs.casrai.org/CRediT .

____

Newsletter #10: Originally distributed June 1, 2017. Sources of links include Retraction Watch, Scholarly Kitchen, Twitter.   Providing the links does not imply WAME’s endorsement.

 

Should editors get a CLUE? Who should investigate Questionable Research Practices? Is Chinese research seriously sullied by misconduct? How to solve publishing’s wicked challenges? Pro-predatory P&T committees?

RESEARCH ETHICS AND MISCONDUCT

  • Liz Wager and others posted the CLUE (Cooperation And Liaison Between Universities And Editors) guidelines on the preprint server biorxiv, regarding how journals and institutions should work together in alleged research misconduct cases. They will consider comments and suggestions posted on the preprint. Their main recommendations:
    • “National registers of individuals or departments responsible for research integrity at institutions should be created
    • Institutions should develop mechanisms for assessing the validity of research reports that are independent from processes to determine whether individual researchers have committed misconduct
    • Essential research data and peer review records should be retained for at least 10 years
    • While journals should normally raise concerns with authors in the first instance, they also need criteria to determine when to contact the institution before, or at the same time as, alerting the authors in cases of suspected data fabrication or falsification to prevent the destruction of evidence
    • Anonymous or pseudonymous allegations made to journals or institutions should be judged on their merit and not dismissed automatically
    • Institutions should release relevant sections of reports of research trustworthiness or misconduct investigations to all journals that have published research that was the subject of the investigation.”

Editors: One of the proposed CLUE recommendations is “While journals should normally raise concerns with authors in the first instance, they also need criteria to determine when to contact the institution before, or at the same time as, alerting the authors in cases of suspected data fabrication or falsification to prevent the destruction of evidence.” What criteria do you think would be appropriate?

Preprint: Wager E et al. Cooperation And Liaison Between Universities And Editors (CLUE): Recommendations On Best Practice doi: https://doi.org/10.1101/139170

Interview: When misconduct occurs, how should journals and institutions work together? (Retraction Watch)

  • Denmark is redefining how it handles research misconduct

As of July 1, research misconduct will be limited to fabrication, falsification, and plagiarism and will be investigated by the Board for the Prevention of Scientific Misconduct. Institutions remain responsible for investigating allegations of Questionable Research Practices (eg, selective reporting of results to support the hypothesis).

Denmark to institute sweeping changes in handling misconduct (Retraction Watch)

  • A large proportion of Chinese research may be affected by misconduct

The survey, published in Science and Engineering Ethics, estimates 40%, but with a standard deviation of ±24%. “The forms of misconduct that were most concerning to respondents-ahead of falsification, fabrication, and duplication-were plagiarism (25%) and the ‘inclusion of someone without permission or contribution in the authorship’ (28%)…The survey also shows that scientists strongly feel authorities have done little to address the underlying publish-or-perish environment that breeds misconduct; 72% thought that reforms to current systems of academic assessment was the most important measure, with only 13% prioritizing stronger systems of monitoring for misconduct.”

Four in 10 biomedical papers out of China are tainted by misconduct, says new survey (Retraction Watch)

  • Ginny Barbour concludes her term as COPE Chair and comments on positive changes and wicked challenges in publishing: “The importance of good processes is only underpinned by the fact that the types of problems that editors face are increasing in complexity.”

From the outgoing chair  (COPE Digest)

  • Should advisors publish with their PhD students?

Supervisors are morally obliged to publish with their PhD students (Times High Education — registration may be required)

  • Quest for Research Excellence Conference
    • Location: The George Washington University, Washington, DC
    • Date: August 7-9, 2017

The 2017 Quest for Research Excellence Conference is co-sponsored by the Office of Research Integrity, The George Washington University (GWU), and Public Responsibility in Medicine and Research. “The goal of the Quest for Research Excellence conference series is to fuel knowledge sharing among all the parties involved in promoting the responsible conduct of research and scientific integrity, from scientists to educators, administrators, government officials, journal editors, science publishers and attorneys.”

Office of Research Integrity 

 

PREDATORY/PSEUDO-JOURNALS

The predatory/pseudo-journal plot thickens: a university promotion and tenure committee is complicit in its faculty publishing in predatory/pseudo-journals. “…I included my initial finding that I had found that I was one of a minority of researchers in my department with no publications in predatory journals.” The author suggests that administrators with research backgrounds may be less likely to equate predatory with legitimate journal publications.

When most faculty publish in predatory journals, does the school become “complicit?” (Retraction Watch)

 

JOURNAL IMPACT 

A brief review of citation performance indicators. “A good indicator simplifies the underlying data, is reliable in its reporting, provides transparency to the underlying data, and is difficult to game. Most importantly, a good indicator has a tight theoretical connection to the underlying construct it attempts to measure.” Has a good indicator been created?

 

JOURNAL STANDARDS

A Canadian initiative to help implement ORCID more broadly, as the greatest challenge is still to get people to register their ORCID ID. “Consortium members have access to the Premium Member API, which facilitates integrating ORCID identifiers in key systems and workflows, such as research information systems, manuscript submission systems, grant application processes, and membership databases.” You can get your ID for free at https://orcid.org/register .

ORCID-CA, the ORCID Consortium in Canada, to provide Canadian institutions and organizations the opportunity to obtain premium membership to ORCID (CRKN/RCDR)

____

Newsletter #9, originally circulated May 23, 2017. Sources of links include Retraction Watch, Scholarly Kitchen, Twitter.   Providing the links does not imply WAME’s endorsement.

 

 

Paraphrasing plagiarism? Who gets the DiRT? Coming to terms with conflicts of interest: CROs, practice guidelines, authors, editors, publishers. Future of peer review, sharing data more easily

RESEARCH ETHICS AND MISCONDUCT

  • Free paraphrasing tools make evading plagiarism-detection software easier, requiring manual review to identify problems. The article provides useful tips to help identify such work. However, how does one determine whether the awkward phrasing that paraphrasing tools may create is due to the tool or to a lack of English writing fluency?

A troubling new way to evade plagiarism detection software. (And how to tell if it’s been used.) (Retraction Watch)

  • Retraction Watch and STAT announce the DiRT (do the right thing) award and the first recipient, apparently a judge who rejected a defamation lawsuit against a journal for expressions of concern.

Announcing the DiRT Award, a new “doing the right thing” prize — and its first recipient (Retraction Watch)

 

CONFLICTS OF INTEREST

  • Challenges to trial integrity may occur when for-profit clinical research organizations (CROs) conduct international RCTs, as they are doing more and more, as illustrated by the TOPCAT spironolactone study

Serious Questions Raised About Integrity Of International Trials (CardioBrief)

  • A JAMA theme issue on conflicts of interest includes some commentaries [some restricted access]; the following seem especially relevant to editors:

(1) Why There Are No “Potential” Conflicts of Interest By McCoy and Emanuel, who argue that conflicts of interest aren’t potential; there are conflicts of interest and ways to mitigate them

(2) Strategies for Addressing a Broader Definition of Conflicts of Interest by McKinney and Pierce: “[Conflict of interest] disclosure is thus useful as a minimum expectation, but is fundamentally insufficient. It is one tool in a toolbox, but no more.”

(3) Conflict of Interest in Practice Guidelines Panels by Hal Sox, including guidance from the Institute of Medicine, useful to editors who review such guidelines. “To accept a recommendation for practice, the profession and the public require a clear explanation of the reasoning linking the evidence to the recommendations. The balance of harms and benefits is a valuable heuristic for determining the strength of a recommendation, but this determination often involves a degree of subjectivity because harms and benefits seldom have the same units of measure. Because of these subjective elements, guideline development is vulnerable to biased judgments.”

(4) How Should Journals Handle the Conflict of Interest of Their Editors? Who Watches the “Watchers”? by Gottlieb and Bressler, who discuss current recommendations for how editors should handle their conflicts of interest. As is usually the case the advice does not address small journals with very few decision-making editors; other solutions may be needed in those cases.

(5) Medical Journals, Publishers, and Conflict of Interest by JAMA‘s publisher Tom Easley. This article pertains primarily to large journal-publisher relationships, but many journals have a different arrangement and additional guidance is needed.

 

PREDATORY/PSEUDO-JOURNALS

  • Predatory Indian journals apply to DOAJ in large numbers

“Since March 2014, when the new criteria for DOAJ listing were put out, there have been about 1,600 applications from Open Access journal publishers in India…Of these, only 4% (74) were found to be from genuine publishers and accepted for inclusion in the DOAJ directory. While 18% applications are still being processed, 78% were rejected for various reasons. One of the main reasons for rejection is the predatory or dubious nature of the journals.”

” ‘Nearly 20% of the journals have a flashy impact factor and quick publication time, which are quick give-aways….Under contact address, some journal websites do not provide any address but just a provision for comments. In many cases, we have written to people who have been listed as reviewers to know if the journal website is genuine.’ ”

Predatory journals make desperate bid for authenticity (The Hindu)

  • A journal published by Gavin changes its name from Journal of Arthritis and Rheumatology, in response to the American College of Rheumatology, to a name very similar to that of a different journal

 

 

PEER REVIEW

BioMedCentral and Digital Science publish a report on “What might peer review look like in 2030?” and recommend:

  1. “Find new ways of matching expertise and reviews by better identifying, verifying and inviting peer reviewers (including using AI)
  2. Increase diversity in the reviewer pool (including early career researchers, researchers from different regions, and women)
  3. Experiment with different and new models of peer review, particularly those that increase transparency
  4. Invest in reviewer training programs
  5. Find cross-publisher solutions to improve efficiency and benefit all stakeholders, such as portable peer review
  6. Improve recognition for review by funders, institutions, and publishers
  7. Use technology to support and enhance the peer review process, including automation”

The Future of Peer Review (Scholarly Kitchen)

 

POST-PUBLICATION PEER REVIEW

Angela Cochran blogs about the apparent failure of online commenting, but she defines success as the percentage of papers with comments. If few letters to the editor are published, do we consider them a waste? Maybe the approach isn't mature yet. Ultimately, all PPPR comments need to be compiled with the article. If they're useful to the commenters, some readers, and maybe the authors, that's sufficient.

Should we stop with the commenting already? (Scholarly Kitchen)

 

DATA SHARING

Figshare releases new platform to help authors share data more easily

Figshare Launches New Tool for Publishers To Support Open Research (PRWeb)

___

Newsletter #8, first circulated May 8, 2017.  Sources of links include Retraction Watch, Stat News, Scholarly Kitchen. Providing the links does not imply WAME’s endorsement.

 

Why do researchers mistakenly publish in predatory journals? How not to identify predatory journals and how (maybe) to identify possibly predatory journals. Fake editor, Rehabbed retraction, Peer reviewer plagiarizing. Writing for a lay audience; Proof to a famous problem almost lost to publishing obscurity

PREDATORY/PSEUDO-JOURNALS

  • Why do researchers mistakenly publish in predatory journals? How not to identify predatory journals

“An early-career researcher isn’t necessarily going to have the basic background knowledge to say ‘this journal looks a bit dodgy’ when they have never been taught what publishing best practice actually looks like…We also have to consider the language barrier. It is only fair, since we demand that the rest of the scientific world communicates in academic English. As a lucky native speaker, it takes me a few seconds to spot nonsense and filler text in a journal’s aims and scope, or a conference ‘about’ page, or a spammy ‘call for papers’ email. It also helps that I have experience of the formal conventions and style that are used for these types of communication. Imagine what it is like for a researcher with English as a basic second language, who is looking for a journal in which to publish their first research paper? They probably will not spot grammatical errors (the most obvious ‘red flag’) on a journal website, let alone the more subtle nuances of journal-speak.”

How should you not identify a predatory journal? “I know one good-quality journal which was one of the first in its country to get the ‘Green Tick’ on DOAJ. I’ve met the editor who is a keen open access and CC-BY advocate. However, the first iteration of the journal’s website and new journal cover was a real shock. It had all the things we might expect on a predatory journal website: 1990s-style flashy graphics, too many poorly-resized pictures, and the homepage (and journal cover) plastered with logos of every conceivable indexing service they had an association with…I knew this was a good journal, but the website was simply not credible, so we strongly advised them to clean up the site to avoid the journal being mistaken for predatory…This felt wrong (and somewhat neo-colonial). ‘Professional’ website design as we know it is expensive, and what is wrong with creating a website that appeals to your target audience, in the style they are familiar with? In the country that this journal is from, a splash of colour and flashing lights are used often in daily life, especially when marketing a product. I think we need to bear in mind that users from the Global South can sometimes have quite different experiences and expectations of ‘credibility’ on the internet, both as creators and users of content and, of course, as consumers looking for a service.”

Andy Nobes, INASP.  Critical thinking in a post-Beall vacuum (Research Information)

  • Characteristics of possibly predatory journals (from Beall’s list) vs legitimate open access journals

Research finds 13 characteristics associated with possibly predatory journals (defined as those on Beall’s list, which included some non-predatory journals). See Table 10: misspellings, distorted or potentially unauthorized images, editors or editorial board members whose affiliation with the journal could not be verified, and use of the Index Copernicus Value in place of an impact factor were all much more common among potentially predatory journals. These findings may be somewhat circular, since the characteristics evaluated overlap with Beall’s criteria and some of those criteria (e.g., distorted images) were identified in the previous article as wrongly flagging legitimate journals as predatory, for reasons of convention rather than quality. However, the results may be useful for editors concerned that their journal might be misidentified as predatory.

Shamseer L, Moher D, Maduekwe O, et al. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine 2017;15:28. DOI: 10.1186/s12916-017-0785-9

  • From the Department of Stings: A fake academic is accepted onto editorial boards and, in a few cases, as editor

“We conceived a sting operation and submitted a fake application [Anna O. Szust] for an editor position to 360 journals, a mix of legitimate titles and suspected predators. Forty-eight titles accepted. Many revealed themselves to be even more mercenary than we had expected….We coded journals as ‘Accepted’ only if a reply to our e-mail explicitly accepted Szust as editor (in some cases contingent on financial contribution) or if Szust’s name appeared as an editorial board member on the journal’s website. In many cases, we received a positive response within days of application, and often within hours. Four titles immediately appointed Szust editor-in-chief.”

Sorokowski P, Kulczycki E, Sorokowska A, Pisanski K. Predatory journals recruit fake editor. Nature Comment 543, 481–483 (23 March 2017). doi:10.1038/543481a

 

RESEARCH ETHICS AND MISCONDUCT

  • A retracted study is republished in another journal without the second editor being aware of the retraction. How much history is an author obligated to provide? What is a reasonable approach?

“Strange. Very strange:” Retracted nutrition study reappears in new journal (Retraction Watch)

  • A peer reviewer plagiarized text from the manuscript under review. “We received a complaint from an author that his unpublished paper was plagiarized in an article published in the Journal... After investigation, we uncovered evidence that one of the co-authors of … acted as a reviewer on the unpublished paper during the peer review process at another journal. We ran a plagiarism report and found a high percentage of similarity between the unpublished paper and the one published in the Journal... After consulting with the corresponding author, the editors decided to retract the paper.” Publishing timing does not always reveal who has plagiarized whom.

Nightmare scenario: Text stolen from manuscript during review (Retraction Watch)

 

ACCESS

  • Instructions for writing research summaries for a lay audience. “It is particularly intended to help scientists who are used to writing about biomedical and health research for their peers to reach a wider audience, including the general public, research funders, health-care professionals, patients and other scientists unfamiliar with the research being described…Plain English avoids using jargon, technical terms, acronyms and any other text that is not easy to understand. If technical terms are needed, they should be properly explained. When writing in plain English, you should not change the meaning of what you want to say, but you may need to change the way you say it…A plain-English summary is not a ‘dumbed down’ version of your research findings. You must not treat your audience as stupid or patronise them.”

Access to Understanding (British Library)

  • A retired mathematician proved, and published, the Gaussian correlation inequality, yet the proof remained obscure because it appeared in a little-known journal. “But Royen, not having a career to advance, chose to skip the slow and often demanding peer-review process typical of top journals. He opted instead for quick publication in the Far East Journal of Theoretical Statistics, a periodical based in Allahabad, India, that was largely unknown to experts and which, on its website, rather suspiciously listed Royen as an editor. (He had agreed to join the editorial board the year before.)…With this red flag emblazoned on it, the proof continued to be ignored.”

A Long-Sought Proof, Found and Almost Lost (Quanta Magazine)

 

STATISTICS

How are types of statistics used changing over time? “…the average number of methods used per article was 1.9 in 1978–1979, 2.7 in 1989, 4.2 in 2004–2005, and 6.1 in 2015. In particular, there were increases in the use of power analysis (i.e., calculations of power and sample size) (from 39% to 62%), epidemiologic statistics (from 35% to 50%), and adjustment and standardization (from 1% to 17%) during the past 10 years. In 2015, more than half the articles used power analysis (62%), survival methods (57%), contingency tables (53%), or epidemiologic statistics (50%).” Are more journals now in need of statistical reviewers?

Sato Y, Gosho M, Nagashima K, et al. Statistical Methods in the Journal — An Update. N Engl J Med 2017;376:1086-1087. DOI: 10.1056/NEJMc1616211

 

 

____

Newsletter #5, circulated April 1, 2017. Sources include Retraction Watch and Open Science Initiative listserve. Providing the links does not imply WAME’s endorsement.

How does the NAS suggest journals should foster research integrity? How should one critically evaluate a manuscript (plus more fake peer reviews)? One year of ORCID IDs, a Dear Journal letter from a biostatistician

RESEARCH MISCONDUCT

National Academy of Sciences on how to improve research integrity

A U.S. National Academy of Sciences panel calls for formation of an independent group to address research misconduct and related issues, including [quoted from Retraction Watch, U.S. panel sounds alarm on “detrimental” research practices, calls for new body to help tackle misconduct ] “misleading statistical analysis that falls short of falsification, awarding authorship to researchers who don’t deserve it (and vice versa), not sharing data, and poorly supervising research – as ‘detrimental’ research practices.”

“Fostering Integrity in Research”, from the National Academy of Sciences (free PDF download available):

The document includes 11 major recommendations; those most relevant to journal editors are pasted below (emphasis added):

“RECOMMENDATION ONE: To better align the realities of research with its values and ideals, all stakeholders in the research enterprise—researchers, research institutions, research sponsors, journals, and societies—should significantly improve and update their practices and policies to respond to the threats to research integrity identified in this report.

RECOMMENDATION FIVE: Societies and journals should develop clear disciplinary authorship standards. Standards should be based on the principle that those who have made a significant intellectual contribution are authors. Significant intellectual contributions can be made in the design or conceptualization of a study, the conduct of research, the analysis or interpretation of data, or the drafting or revising of a manuscript for intellectual content. Those who engage in these activities should be designated as authors of the reported work, and all authors should approve the final manuscript. In addition to specifying all authors, standards should (1) provide for the identification of one or more authors who assume responsibility for the entire work, (2) require disclosure of all author roles and contributions, and (3) specify that gift or honorary authorship, coercive authorship, ghost authorship, and omitting authors who have met the articulated standards are always unacceptable. Societies and journals should work expeditiously to develop such standards in disciplines that do not already have them.

RECOMMENDATION SIX: Through their policies and through the development of supporting infrastructure, research sponsors and science, engineering, technology, and medical journal and book publishers should ensure that information sufficient for a person knowledgeable about the field and its techniques to reproduce reported results is made available at the time of publication or as soon as possible after publication.

RECOMMENDATION EIGHT: To avoid unproductive duplication of research and to permit effective judgments on the statistical significance of findings, researchers should routinely disclose all statistical tests carried out, including negative findings. Research sponsors, research institutions, and journals should support and encourage this level of transparency.”

 

PEER REVIEW

  • How to critically evaluate a manuscript

How to critically evaluate a manuscript: 12 questions you should always ask yourself (Publons) offers a useful general approach to peer review, but it’s missing some important points (I’m sure you can find more; add your comments below):

-Can the study design answer the hypothesis posed? (e.g., is the hypothesis a question of causality but the study design is observational?)

-Do the conclusions follow from the results or do they exaggerate the importance and implications of the research?

-What are the funding source(s) and potential conflicts of interest of the authors?

  • Fallout from fake peer reviews continues with more than 100 retractions

A new record: Major publisher retracting more than 100 studies from cancer journal over fake peer review (Retraction Watch)

 

JOURNAL STANDARDS

  • Results after one year of journals requiring ORCID IDs 

“Our 2015 community survey indicated that most researchers supported the idea of their organizations requiring the use of ORCID — 72% agreed or strongly agreed that these would benefit the global research community, 21% were neutral, and only 7% disagreed or strongly disagreed. Three quarters said specifically that it would be useful if their publisher mandated ORCID iDs.”

It Takes a Village: One Year of Journals Requiring ORCID IDs (Scholarly Kitchen)

  • Technical Image Editor wanted?

Journal of Biological Chemistry is hiring editors to manually screen images for potential manipulation or duplication, before publication.

 

STATISTICS

“Dear Journal”, from a concerned biostatistician

“The safe-conducts given by the editorial system to articles that do not disclose exact sample sizes are shocking. Science must be based on the possibility to repeat comparable designs, which obviously encompasses the use of similar numbers of observations. Sample sizes given as intervals (e.g. “n=3-18”), inequalities (e.g. “n>3”) or absurdly nebulous sentences (e.g. “n=4, data representative of 3 rats from 2 independent experiments”) are evident obstructions to reproducibility.

Similarly, it is perplexing to notice the proportion of publications that do not clearly reveal the statistical tests used. A clear attribution of tests must be given, including the post-hoc tests used after analysis of variance. It should not be sufficient to list all statistical procedures in the method section with no indication of which test was used in which figure or table.”

Dear journals: Clean up your act. Regards, Concerned Biostatistician (Retraction Watch)

_____

 

Newsletter #7, originally circulated on April 24, 2017. Sources include Retraction Watch, Health Information for All listserve, and Open Science Initiative listserve. Providing the links does not imply WAME’s endorsement.

 

 

Clinical trial data sharing — not just for “research parasites” anymore? Use Unpaywall to find free articles, join Initiative for Open Citations. Are women authors different? What will your journal do without you? Can technology improve global health?

DATA SHARING

Clinical trial data sharing — not just for “research parasites” anymore

“Using the NHLBI data repository, 370 investigators requested data from at least one clinical trial — 51% of them trials on cardiovascular prevention and treatment. Requests were largely for post hoc secondary analysis (72%); a minority of requests were initiated for analytic or statistical approaches to clinical trials (9%) and meta-analyses (7%). More than half of investigators (53%) made their requests in the last 4.4 years of the study period (January 2000 to May 2016), ‘indicating an increasing demand for trial data that has outpaced acquisition,’ wrote Sean A. Coady, MS, MA, of the NHLBI in Bethesda, Md., and colleagues. ‘In contrast, demand for observational data has increased in a pattern more directly proportional to time.’ ”

NHLBI Data Sharing: Fears of ‘Research Parasites’ Melt Away. Experience of NIH institute bolsters value of open trial data (MedPage Today)

 

OPEN ACCESS

  • Unpaywall

Trying to find free articles online? Use http://unpaywall.org, a new browser extension that identifies free copies of research articles. Unlike the Open Access Button available for libraries and interlibrary loan, it is available to anyone (requires the Firefox or Chrome browser).

Covered in: Putting the OA Into Interlibrary Loan
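For editorial offices that want to check more than a handful of articles, Unpaywall also exposes its index through a simple REST API. The following is a minimal sketch, not an official example: it assumes Python with the requests package and the v2 endpoint (which asks callers to identify themselves with an email address; the address below is a placeholder), and the DOI, cited elsewhere in these newsletters, is used purely as an illustration.

    import requests

    doi = "10.1136/bmj.j917"  # example DOI; substitute any DOI of interest
    record = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                          params={"email": "you@example.org"},  # placeholder address
                          timeout=30).json()

    best = record.get("best_oa_location") or {}
    print("open access?", record.get("is_oa"))
    print("free copy:", best.get("url_for_pdf") or best.get("url"))

If the field names differ in the current API version, the shape of the workflow stays the same: one GET request per DOI, with the response indicating whether a free, legal copy exists and where it can be found.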

 

  • Initiative for Open Citations

“The Initiative for Open Citations (I4OC) is a collaboration between scholarly publishers, researchers, and other interested parties to promote the unrestricted availability of scholarly citation data…The aim of this initiative is to promote the availability of data on citations that are structured, separable, and open. Structured means the data representing each publication and each citation instance are expressed in common, machine-readable formats, and that these data can be accessed programmatically. Separable means the citation instances can be accessed and analyzed without the need to access the source bibliographic products (such as journal articles and books) in which the citations are created. Open means the data are freely accessible and reusable.”
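To make “structured, separable, and open” concrete: the citation data opened under I4OC live in Crossref, so an article’s deposited reference list and its incoming citation count can be retrieved programmatically. Here is a minimal sketch, assuming Python with the requests package; the DOI is the Shamseer et al. article cited earlier in this document, used only as an example.

    import requests

    doi = "10.1186/s12916-017-0785-9"  # example DOI (Shamseer et al., BMC Medicine 2017)
    message = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30).json()["message"]

    print("incoming citations (per Crossref):", message.get("is-referenced-by-count"))
    for reference in message.get("reference", [])[:5]:  # first few outgoing references, if deposited
        print(reference.get("DOI") or reference.get("unstructured"))

References appear here as structured metadata only where the publisher has deposited and opened them; closed or undeposited references simply do not show up, which is exactly the gap I4OC aims to close.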

 

RESEARCH INTEGRITY

  • Fast corrections: Authors use PubMed’s commenting feature PubMed Commons to post corrections before a formal correction is published

Authors alerting readers via PubMed Commons

  • Ghosts who don’t know they’re ghosts: Researcher provides fake contact information for coauthors, who aren’t aware they’re authors

Busted: Researcher used fake contact info for co-authors

 

GENDER GAP

A study of economics papers shows that while papers by women spend longer in peer review, their readability improves more during revision than that of papers by men. “Research papers with female authors spend six months longer in peer review at the top economics journals…In what appears to be a consequence, papers by women are easier to read and improve more as they are being revised than papers written by men.”

Gender Differences in Peer Review: Economics papers by women are stalled longer at journals – but they end up more readable and more improved (Royal Economic Society)

 

JOURNAL STANDARDS

Succession planning: How to prepare for when you’re no longer around — written more for publishers than editors but maybe useful for some. “With a mature workforce, you need to watch that knowledge and skills do not reside in one person. When that person leaves, for whatever reason, it is entirely possible that you will be stuck and with their departure goes an essential resource that you will be scrambling to replace.”

Succession Planning (Scholarly Kitchen)

 

GLOBAL HEALTH

Talk with Google: Using Technology to Tackle Global Health’s Biggest Challenges

 

___

Newsletter #6, circulated April 11, 2017. Sources include Retraction Watch and Open Science Initiative  listserve. Providing the links does not imply WAME’s endorsement.

 

Why do researchers commit research misconduct? Should you publish a paper withdrawn (maybe) from a predatory journal? Should an editor also be a researcher? Researcher and reviewer gender gaps

TRIAL REGISTRATION

Clinical trial registration and negative results

A study in BMJ tests the hypothesis that clinical trial registration should improve trial reporting and therefore increase the number of trials that do not report positive outcomes. Registered trials were slightly less likely to report positive results, particularly if they were not industry-funded. The authors did not compare the registered trial outcomes with the outcomes that were reported (they studied 1122 trials, so that would have been a major undertaking). A great benefit of trial registration for editors and reviewers is being able to determine whether outcome switching has occurred. If outcomes were switched, that could explain why trial registration was not associated with a larger reduction in positive results.
Another important observation: much of the trial reporting was poor, underscoring the importance of all authors using, and all medical journals requiring and verifying use of, CONSORT reporting (http://www.consort-statement.org).
Odutayo A, Emdin CA, Hsiao AJ, et al. Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials—ESORT). BMJ 2017;356:j917. doi: https://doi.org/10.1136/bmj.j917

RESEARCH ETHICS

Why do researchers commit research misconduct? 

A case study with a chastening message for investigators (and a sobering message for editors): “He described how and why he started tampering with data. The first time it happened he had analyzed a dataset and the results were just shy of significance. Fox noticed that if he duplicated a couple of cases and deleted a couple of cases, he could shift the p-value to below .05. And so he did. Fox recognized that the system rewarded him, and his collaborators, not for interesting research questions, or sound methodology, but for significant results. When he showed his collaborators the findings they were happy with them-and happy with Fox.” What messages are investigators sending when research doesn’t turn out as hoped? “Hindsight’s a bitch:” Colleagues dissect painful retraction. Retraction Watch (blog). March 7, 2017.
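The mechanics Fox describes are disturbingly simple. Below is a minimal, invented illustration (not from the Retraction Watch post; the data, seed, and numbers are made up) of how duplicating a couple of favorable observations and deleting a couple of unfavorable ones can pull a borderline t-test p-value downward, which is one reason editors should look hard at results hovering just under .05.

    # Illustrative only: simulated data showing how fragile a borderline p-value is.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(0.0, 1.0, 30)   # simulated control group
    treated = rng.normal(0.4, 1.0, 30)   # simulated treated group with a modest true effect

    p_honest = stats.ttest_ind(treated, control).pvalue

    # The manipulation described above: duplicate the two largest treated values
    # and drop the two largest control values, then rerun the identical test.
    treated_tampered = np.concatenate([treated, np.sort(treated)[-2:]])
    control_tampered = np.sort(control)[:-2]
    p_tampered = stats.ttest_ind(treated_tampered, control_tampered).pvalue

    print(f"p before tampering: {p_honest:.3f}")
    print(f"p after tampering:  {p_tampered:.3f}")  # typically noticeably smaller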

Publishing a paper withdrawn from a predatory journal
What would you do if authors submitted a paper that they had unknowingly submitted to a predatory journal, then withdrew, but the predatory journal wouldn’t respond to confirm? COPE has published a case study on such an instance.
Withdrawal of accepted manuscript from predatory journal. Case Number 16-22. COPE.

 

AUTHORS 

The importance of research experience when evaluating research (blog):
“So pointing out why a study is not perfect is not enough: good criticism takes into account that research always involves a trade-off between validity and practicality… good research is always a compromise between experimental rigor, practical feasibility, and ethical considerations. To be able to appreciate this as a critic, it really helps to have been actively involved in research projects. I do not mean to say that we should become less critical, but rather that we become better constructive critics if we are able to empathize with the researcher’s goals and constraints.” The value of experience in criticizing research (Rolf Zwaan blog)

Relationship between time to reject without review, the review process, and author satisfaction

In an analysis across scientific fields, the authors find that “One-third of journals take more than 2 weeks for an immediate (desk) rejection and one sixth even more than 4 weeks. This suggests that besides the time reviewers take, inefficient editorial processes also play an important role. As might be expected, shorter peer review processes and those of accepted papers are rated more positively by authors. More surprising is that peer review processes in the fields linked to long processes are rated highest and those in the fields linked to short processes lowest. Hence authors’ satisfaction is apparently influenced by their expectations regarding what is common in their field. Qualitative information provided by the authors indicates that editors can enhance author satisfaction by taking an independent position vis-à-vis reviewers and by communicating well with authors.” Huisman J, Smits J. Duration and quality of the peer review process: the author’s perspective. Scientometrics (2017). doi:10.1007/s11192-017-2310-5

RESEARCH REPORTING 

Reporting race/ethnicity in research

“An explanation of who classified individuals as to race, ethnicity, or both, the classifications used, and whether the options were defined by the investigator or the participant should be included in the Methods section. The reasons that race/ethnicity was assessed in the study also should be described in the Methods section.” Robinson JK, McMichael AJ, Hernandez C. Transparent Reporting of Demographic Characteristics of Study Participants. JAMA Dermatol. 2017;153(3):263-264. doi:10.1001/jamadermatol.2016.5978

GENDER GAP 
What is happening with the researcher gender gap across 12 countries?
A report from Elsevier (using Scopus): Gender in the Global Research Landscape. Analysis of research performance through a gender lens across 20 years, 12 geographies, and 27 subject areas. (2017)
and Scholarly Kitchen’s assessment: Alice Meadows. The Global Gender Gap: Research and Researchers. Scholarly Kitchen blog.

Is there a gender bias in selecting reviewers? 
“Here we present evidence that women of all ages have fewer opportunities to take part in peer review. Using a large data set that includes the genders and ages of authors and reviewers from 2012 to 2015 for the journals of the American Geophysical Union (AGU), we show that women were used less as reviewers than expected…The bias is a result of authors and editors, especially male ones, suggesting women as reviewers less often, and a slightly higher decline rate among women in each age group when asked.
These findings underline the need for efforts to increase female scientists’ engagement in manuscript reviewing to help in the advancement and retention of women in science.” Lerback J, Hanson B. Journals invite too few women to referee. Nature | Comment, January 25, 2017.

CONFLICTS OF INTEREST 

BMJ will declare all its industry revenues, in the interest of transparency. Hear, hear! BMJ editor confirms that revenues from industry will be declared. BMJ 2015;351:h3908.

__

Newsletter #4, originally circulated March 16, 2017. Sources include Retraction Watch, COPE, LinkedIn, and Scholarly Kitchen. Providing the links and information does not imply WAME’s endorsement.

How would you change medical publishing? Authors offer bribes, New issues in informed consent, Why do predatory journals exist?

FUTURE OF MEDICAL PUBLISHING

  • What would you change about medical publishing? Scholarly Kitchen offers some interesting perspectives. Share yours via Comments below.

If you could change one thing about scholarly publishing, what would that be? (Scholarly Kitchen blog)

EDITOR ETHICS

  • Editor receives offer of cash for publishing manuscripts

Pay to play? Three new ways companies are subverting academic publishing (Retraction Watch blog)

  • Editors step down after their citation cartel was discovered (European Geosciences Union)

http://retractionwatch.com/2017/03/03/citation-boosting-episode-leads-editor  (Retraction Watch blog)

RESEARCH ETHICS

  • Commentaries on new developments with informed consent: e-consent and internet-based clinical trials, changes in perceptions of risk, new types of risk

Informed Consent  (NEJM [free])

RESEARCH REPRODUCIBILITY

  • Should scientists attempt to replicate their own studies? They have an inherent desire (or conflict of interest) to see consistent results

Why Scientists Shouldn’t Replicate Their Own Work (Discover Magazine)

PREDATORY/PSEUDO-JOURNALS

Do predatory journals fill a niche?

Predatory Publishing as a Rational Response to Poorly Governed Academic Incentives (Scholarly Kitchen blog)

PEER REVIEW

  • A neuroscientist posts his peer reviews online, emails the authors, and tweets a link to his review (but only if the manuscript is available as a preprint)

The Rogue Neuroscientist on a Mission to Hack Peer Review (Wired Magazine)

___

Newsletter #3, originally circulated March 7, 2017. Sources include Retraction Watch and Scholarly Kitchen. Providing the links and information does not imply WAME’s endorsement.