Implementing Reporting Guidelines: Why and How, for Journal Editors

The 4 days of the 2017 Peer Review Congress (3 days of research plus a day of pre-conference sessions) provided more information than I can convey here; luckily there are the conference abstracts (http://www.peerreviewcongress.org/index.html), Twitter (#prc8), and others’ summaries (eg, Hilda Bastian’s pithy blog at http://blogs.plos.org/absolutely-maybe/2017/09/10/bias-conflicts-spin-the-8th-olympiad-of-research-on-science-publishing-begins, with subsequent days linked from the initial blog) for your perusal.

However, there were some sessions that cannot be found online. One of these, on implementing reporting guidelines, provided so many practical points for journal editors that I recorded the session in detail for WAME members. The following is my record of the session. It does not represent official policies or endorsements of WAME.

EQUATOR GoodReports Campaign Workshop on Implementing Reporting Guidelines: Time for Action

Saturday, September 9, 2017

  • Doug Altman, Director, UK EQUATOR Centre
  • Caroline Struthers, Education and Training Manager, UK EQUATOR Centre
  • Jason Roberts, Executive Editor, Headache

The stated goal of the session was to develop a one-page action plan to implement reporting guidelines. That may be a work in progress, but the information shared was substantially more than one page.

  • Many journals recommend or require reporting guidelines, but is requiring reporting checklists sufficient to increase the proportion of submissions that include checklists and adhere to guidelines? The clear answer at this session, in previous research, and in several studies presented at the PRC was a resounding no.
  • How can greater uptake at publication be achieved? See Implementing guidelines at journals and Case Reports below.

Why are reporting guidelines necessary and important?

The medical literature should not mislead and should provide enough information on methods to allow replication (as also stated in the ICMJE Recommendations). Methods and results should be presented with key characteristics so that the study can be included in a subsequent systematic review/meta-analysis. Accuracy, completeness, and transparency are essential, as stated in the Declaration of Helsinki. Reporting must also declare which outcomes were prespecified and which were not.

There is plenty of evidence of poor and misleading reporting. For example:

  • Hoffmann et al [BMJ 2013 https://doi.org/10.1136/bmj.f3755]: 39% of non-pharmacological interventions were adequately described
  • Vera-Badillo et al [Ann Oncol 2013 https://doi.org/10.1093/annonc/mds636]: “33% showed bias in reporting of the primary endpoint and 67% in the reporting of toxicity”
  • Munter et al [Eur J Anaesthesiol 2015. doi:10.1097/EJA.0000000000000176]: >10 yrs after CONSORT, RCTs reported a median of 60.0% of CONSORT criteria; only 72.1% presented clearly defined primary and secondary outcome parameters, and the number of times cited was only weakly associated with CONSORT reporting
  • Sivendran et al [J Clin Oncol 2014 doi: 10.1200/JCO.2013.52.2219]: re adverse events in oncology research — “96% reported only adverse events occurring above a threshold rate or severity, 37% did not specify the criteria used to select which adverse events were reported, and 88% grouped together adverse events of varying severity”
  • Wayant et al [PLOS ONE 2017 doi: 10.1371/journal.pone.0178379]: re discrepancies in outcomes between trial registration and reporting in hematology journals: among 118 discrepancies, 25% of primary outcomes were demoted, 40% of primary outcomes were omitted, and 25% of primary outcomes were added.

These discrepancies tend to lead to cumulative over-optimism, including overstating positive findings and understating harms, all of which can potentially harm patients. Poor reporting also makes it difficult to extract information for systematic reviews. As noted by Larson and Cortazal [J Clin Epidemiol 2012 doi: 10.1016/j.jclinepi.2011.07.008], findings that are poorly reported have the potential to cause real harm.

Reporting guidelines and their effects

Reporting guidelines have been developed beginning with CONSORT in 1996. There are now 375 guidelines on the EQUATOR website, a daunting number to sort through, although the most commonly used ones are linked from the home page (https://www.equator-network.org) and a search tool is provided (https://www.equator-network.org/reporting-guidelines/). About 20 guidelines address most research; the other ~350 are specialized and need not concern most editors, but they are helpful for the specific research to which they apply. (EQUATOR scans the peer-reviewed literature for guidelines to include. Guidelines included in EQUATOR were published in a peer-reviewed journal but did not necessarily adhere to a specific methodology.)

Have the reporting guidelines made a difference? It depends. Sekula et al [PLOS ONE. 2017 doi: 10.1371/journal.pone.0178531] evaluated what happened after publication of the REMARK guidelines for tumor marker prognostic studies. Overall mean reporting scores were 53% (range: 10%-90%) before the REMARK guidelines and, for studies that cited the REMARK guidelines, 58% (range: 30%-100%) after, vs 58% (range: 20%-100%) for studies that did not cite the REMARK guidelines. Some manuscripts cite the guidelines without adhering to them at all.

Authors may have a difficult time identifying the relevant guideline. Hopewell, Boutron, Altman et al [BMC Medicine 2016 https://doi.org/10.1186/s12916-016-0736-x] tested WebCONSORT, a tool to help authors identify the correct CONSORT extension. The intent was to randomize RCTs for the study, but approximately a third of manuscripts referred by journals were not RCTs. A quarter of authors did not select the correct extension, and the tool did not result in a higher number of CONSORT and CONSORT extension items being reported.

Bruce et al [BMC Medicine 2016 https://doi.org/10.1186/s12916-016-0631-5] conducted a systematic review of randomized interventions to improve the quality of peer review at biomedical journals and found that use of a checklist did not improve the quality of reviews (the factors that did have an effect were adding statistical peer review, which improved the quality of the final manuscript, and open peer review, which improved the quality of the peer review report and decreased the rate of rejection). Cobo et al [BMJ 2011 https://doi.org/10.1136/bmj.d6783] found modest improvement with implementation of reporting guidelines. Wynne KE et al [J Pediatr Surg 2011 doi: 10.1016/j.jpedsurg.2010.09.077] found that implementing guidelines at J Pediatr Surg improved mean global reporting scores from 72 to 80. Pandis N et al [J Clin Epidemiol doi: 10.1016/j.jclinepi.2014.04.001] found that in dental surgery trials, reporting of CONSORT items increased from 40% to 80% after implementation.

Tools to help with adherence

Caroline Struthers reported on the use of the EQUATOR Wizard to identify which of the nearly 400 guidelines is appropriate for a given study. It asks 4 to 5 questions about the work (many studies end up at STROBE, of course, and the tool does not include the extensions). However, it depends on the author knowing basics about the research, such as whether it involved randomization; in their testing, 33% still did not identify the correct guideline. They are working toward adding questions that guide authors to the correct guideline rather than asking them what their study design is.

Another tool, Penelope check, highlights issues in the manuscript for the author. A third is designed for studies that use more than one design (such as an RCT with an economic evaluation) and combines the relevant checklists into a single checklist without duplication. A fourth concept places manuscript content into an article template. Building a checklist format into pre-registration templates could encourage researchers to conduct research appropriately by addressing reporting issues much earlier upstream.

Implementing guidelines at journals

While many journals have listed guidelines as a requirement or recommendation in their Instructions for Authors, with little apparent effect, some journals have gone further to enforce implementation. Jason Roberts noted that awareness of reporting guidelines varies considerably across biomedical journals. Implementation is especially challenging at journals with fewer resources and where editors and authors have less methodological skill.

Furthermore, editors want a solution that will cause minimal resistance from authors, rather than driving authors away to journals that don’t require guidelines. However, essentially no research has been done on how to implement guidelines.

Some tools have been developed, such as StatReviewer, which fills out the CONSORT checklist automatically. It has been built into the Editorial Manager system (available January 2018).

Some challenges of implementation even when the journal has staff to facilitate it:

  • The checklist process may lead to an overly complex submission and review process
  • Entrenched practices are accepted but flawed
  • Subject thought leaders believe their research results trump methods/reporting standards
  • Checklists are subject to misinterpretation (although they also have accompanying documents to explain the checklist)
  • Researchers may be unable to comprehend reporting guidelines, particularly if they have weak methodology skills with little training

Some journals have had relatively successful implementation (checklists are required, and authors comply rather than going elsewhere). They’ve found the following steps to be successful:

  • Audit the problem — understand the problem at the journal. Who will you have to convince?
  • Find champions among thought leaders to raise awareness among editors and researchers
  • Organize a cohort of journals in a given field to implement simultaneously, so that authors can’t simply go to a journal without the requirement
  • Consider initial spotlight editorial (since people don’t read Instructions for Authors)
  • Devise technical and educational implementation strategies — at the publisher level. Provide training packages at meetings or virtually.
  • Think about enforcement — how will your journal check for adherence?
  • Require submission of the checklist (this can be a technical requirement), but if the submission is completed by a staff person, the checklist may simply have boxes checked off, without page numbers or confirmation that the information is actually present in the manuscript
  • A content expert familiar with the checklist requirements is needed to determine whether the checklists have been completed correctly; clinicians likely won’t know
  • Phased or complete launch? Phasing in doesn’t work; mandate and go for it
  • Consider teaming up with other journals — write editorials about the requirements
  • Follow up

Other practical points re implementation:

  • Recruit reviewers into helping to check requirements
  • Find well reported papers in their field that follow the reporting guidelines to show as an example
  • Require that the checklist be followed before sending the manuscript for peer review, to make peer review more effective: reviewers then have the information necessary to evaluate the paper rather than having to ask for a lot of additional information

Case reports

Three journal editors shared their experiences and take-home messages.

  1. Headache. Jason discussed developing and implementing a reporting guidelines policy at the small journal Headache. He suggested the following steps:
  • Identify the needs of your journal
    • what are other journals in your field doing?
    • which guidelines do you want to endorse? what’s relevant to your journal?
  • Select champions — what well respected researchers in your field will support the cause?
  • Identify appropriate checklists
  • Determine enforcement level
    • upload with submission (eg, ScholarOne: select study type and receive the checklist)
    • if not done appropriately, add it at revision; many authors make the changes only in the checklist and not in the paper, and verifying that the information is in the paper requires manual effort
    • the journal has depended on a methods consultant to review each study
  • Phased or full launch
  • Write up proposal on implementing improved reporting standards
  • Preparations for launch
  • Launch activities
  • Evaluation and audit
  • Offer uniformly good reporting — not highly variable
  • Some journals point authors to EQUATOR, but the journal can provide them the form directly
  • Often authors do not upload the checklists (he pointed out that checklists are more challenging for the older generation, who have never used them)
  2. Arch Phys Med Rehab. Allen Heinemann, Co-Editor-in-Chief of Arch Phys Med Rehab, described his experience (in brief, at the journal about half of submissions are rejected; 15% to 20% are accepted).

The driving force was an initial agreement published by 28 rehabilitation journals; 4 more agreed later after phone and email conversations. The initial holdouts were worried about losing submissions but eventually agreed because of the importance of accurate reporting for patient care and systematic reviews.

They published an editorial announcing it (Chan L, Heinemann AW, Roberts J Arch Phys Med 2014 https://doi.org/10.2522/ptj.2014.94.4.446) and made a checklist mandatory for submission.

Authors were not necessarily pleased — they received complaints, pushback, and authors shopped around to other journals for the first 1 to 1.5 years. They also found that noncompliant authors were more likely to be submitting poorer quality papers.

What difference does it make? They found that 60% of editors reported requiring the reporting guidelines CONSORT, PRISMA, STROBE, STARD, and CARE; 59% believed authors complete them accurately; and 57% believed that guidelines improve quality a great deal, 38% somewhat, and 5% not at all. Even journals that don’t require checklists benefited, as authors started to complete them whether or not they were required. The journal held a Q&A with authors at a society conference to discuss the rationale for the requirements and the questions and issues that authors had.

Barriers were authors’ lack of familiarity and the difficulty of outreach to authors: not only do they not read the Instructions for Authors, they won’t read the editorial either.

It places a burden on editorial staff, editors, and reviewers in both time and cost. However, the journal has paid statisticians who verify whether information is present, which has helped with enforcement.

Conclusions:

  • expect a long lead time: 1 year of recommending followed by requiring, and authors were still not fully on board
  • educate authors at conferences
  • provide detailed instructions for authors
  • apply the lessons of implementation science (eg, Knowledge Translation in Health Care, Sharon Straus et al)
  • cooperate among journals

It is important for editors from different journals to discuss this at society meetings.

  3. F1000 Research. Sabina Alam presented the experience at F1000 Research, where ~20% of manuscripts are rejected and authors will submit elsewhere if they don’t want to adhere. It is a life sciences “platform” (not a journal, since manuscripts are posted before peer review) with many types of articles. Authors expect to be able to post any paper without revising. F1000 Research tried to convince authors that their paper will fare better in peer review if they adhere to the reporting guidelines.
  • Staff editors are all trained in the main EQUATOR guidelines: CONSORT, CARE, PRISMA, STROBE, and SPIRIT.
  • The editorial team checks every submission, taking into account key reporting guidelines, and liaises with authors until the manuscript can be published (especially important because it is published before peer review).
  • The mandatory guidelines are CONSORT, PRISMA, and CARE.
  • Many authors are not aware of reporting guidelines, especially for interview-based studies, case reports, and cross-sectional studies.
  • STROBE: they often miss details on requirements, variables, bias — don’t always see the point
  • COREQ: miss details on who conducted the interview, relationship with the participants, conditions under which they consented to be interviewed
  • CARE: routinely miss key details about patient history, main take home message, and timeline
  • ARRIVE: miss details on license, ethics, safe animal handling, adverse events
  • Language barriers can be a challenge but author workshops can help reach authors with such barriers (Sabina did so when she was at BMC).

Q: Has F1000 Research considered unified reporting guidelines, à la Nature, to find the common denominators that should be presented for all research? A: No; that would be an interesting idea, but one would still have to get authors to adhere to it.

Q: Could GoodReports be a quality mark that journals receive if they implement reporting guidelines (and implementation in the studies themselves is verified)? A: Yes, but it would be important to distinguish quality of reporting from quality of research. A poorly conducted study can be well reported (and vice versa, although it is difficult to know whether a study is well done without good reporting).

Beyond journals

Should making researchers use reporting guidelines be only the journals’ responsibility? Who else could be involved? It would be vastly better to implement reporting guidelines at the time of study design, to ensure not only that the reporting is complete but also that the methodology is appropriate. The following suggestions were made by the panel and audience.

  • Institutions (publication officers): one attendee at the conference reported good success in getting papers published after applying checklists and improving adherence to reporting guidelines
  • Funders (a key element): could funders require the reporting checklist? Funders spend a great deal on deciding what to fund but very little on the quality of what is published as a result of that funding. Funders in the audience explained that they don’t want to interfere in the work of researchers. However, using guidelines would protect funders by showing that best practices are being followed; they could require the checklist as part of the author’s report. Adhering to the recommendations reduces the risk of rejection, which is also a benefit for funders.
  • Reporting guideline developers
  • Publishers: they’re aware of EQUATOR, but there’s no financial incentive. Encourage it as an ethical issue? It won’t cost them anything aside from editor time
  • Editing companies such as Editage (for-profit, with 600 employees): could they implement some reporting standards?
  • AMWA and EMWA — medical writers must keep up to date with reporting guidelines https://c.ymcdn.com/sites/www.amwa.org/resource/resmgr/about_amwa/JointPositionStatement.Profe.pdf

Without additional incentives, authors need to see what’s in it for them. Perhaps they’re more likely to get published? The checklist helps editors review content and reviewers identify what’s missing. However, it also increases the amount of material to review.

Authors see it as yet another form to submit without seeing the value in it; having a champion behind it is important. For example, trial registration occurred because of champions, evidence, and a concerted effort by journals.

Discussion

It is important to create a sense of urgency and to tie it to things that people feel a sense of crisis about, eg, evidence-based guidelines: systematic reviews are needed to inform evidence-based guidelines, and adherence to reporting guidelines is needed to conduct accurate systematic reviews. Machine learning may help.

Reporting guidelines may be a way for journals to help differentiate themselves from predatory journals. Perhaps journals that require guidelines could receive a “GoodReports” badge.

What is the role of publishers? They need to be convinced it is in their best interest (by increasing efficiency and quality of review); this can be a political issue. For example, a journal in a small shop can have greater control over what they implement in their submission system than a journal in a very large publishing house.

What if most authors go to preprint servers?

Would a reviewer template help the review process?

What do third party editing companies do?

  • Rubriq: offered third-party review, but that activity didn’t work out, so they now screen for reporting guidelines, etc.
  • ISMPP CMPP  — high quality, high cost

Use a train-the-trainer model for medical society meetings?

Please note that implementation resources for journals are provided on the EQUATOR website at https://www.equator-network.org/toolkits/using-guidelines-in-journals/.

[Disclosure: I accepted an EQUATOR whistle to “blow the whistle on poor research reporting” (best conference bling)]

Note: This article does not represent official policies or endorsements by WAME.  

Article updated 9/21/17.

 

Why do researchers mistakenly publish in predatory journals? How not to identify predatory journals and how (maybe) to identify possibly predatory journals. Fake editor, Rehabbed retraction, Peer reviewer plagiarizing. Writing for a lay audience; Proof to a famous problem almost lost to publishing obscurity

PREDATORY/PSEUDO-JOURNALS

  • Why do researchers mistakenly publish in predatory journals? How not to identify predatory journals

“An early-career researcher isn’t necessarily going to have the basic background knowledge to say ‘this journal looks a bit dodgy’ when they have never been taught what publishing best practice actually looks like…We also have to consider the language barrier. It is only fair, since we demand that the rest of the scientific world communicates in academic English. As a lucky native speaker, it takes me a few seconds to spot nonsense and filler text in a journal’s aims and scope, or a conference ‘about’ page, or a spammy ‘call for papers’ email. It also helps that I have experience of the formal conventions and style that are used for these types of communication. Imagine what it is like for a researcher with English as a basic second language, who is looking for a journal in which to publish their first research paper? They probably will not spot grammatical errors (the most obvious ‘red flag’) on a journal website, let alone the more subtle nuances of journal-speak.”

How should you not identify a predatory journal? “I know one good-quality journal which was one of the first in its country to get the ‘Green Tick’ on DOAJ. I’ve met the editor who is a keen open access and CC-BY advocate. However, the first iteration of the journal’s website and new journal cover was a real shock. It had all the things we might expect on a predatory journal website: 1990s-style flashy graphics, too many poorly-resized pictures, and the homepage (and journal cover) plastered with logos of every conceivable indexing service they had an association with…I knew this was a good journal, but the website was simply not credible, so we strongly advised them to clean up the site to avoid the journal being mistaken for predatory…This felt wrong (and somewhat neo-colonial). ‘Professional’ website design as we know it is expensive, and what is wrong with creating a website that appeals to your target audience, in the style they are familiar with? In the country that this journal is from, a splash of colour and flashing lights are used often in daily life, especially when marketing a product. I think we need to bear in mind that users from the Global South can sometimes have quite different experiences and expectations of ‘credibility’ on the internet, both as creators and users of content and, of course, as consumers looking for a service.”

Andy Nobes, INASP.  Critical thinking in a post-Beall vacuum (Research Information)

  • Characteristics of possibly predatory journals (from Beall’s list) vs legitimate open access journals

Research finds 13 characteristics associated with possibly predatory journals (defined as those on Beall’s list, which included some non-predatory journals). See Table 10 — misspellings, distorted or potentially unauthorized images, editors or editorial board members whose affiliation with the journal was unverified, and use of the Index Copernicus Value for impact factor were much more common among potentially predatory journals. These findings may be somewhat circular since the characteristics evaluated overlap with Beall’s criteria and some of those criteria (e.g., distorted images) were identified in the previous article as falsely identifying predatory journals, for reasons of convention rather than quality. However, the results may be useful for editors who are concerned their journal might be misidentified as predatory.

Shamseer L, Moher D, Maduekwe O, et al. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine 2017;15:28. DOI: 10.1186/s12916-017-0785-9

  • From the Department of Stings: A fake academic is accepted onto editorial boards and in a few cases, as editor

“We conceived a sting operation and submitted a fake application [Anna O. Szust] for an editor position to 360 journals, a mix of legitimate titles and suspected predators. Forty-eight titles accepted. Many revealed themselves to be even more mercenary than we had expected….We coded journals as ‘Accepted’ only if a reply to our e-mail explicitly accepted Szust as editor (in some cases contingent on financial contribution) or if Szust’s name appeared as an editorial board member on the journal’s website. In many cases, we received a positive response within days of application, and often within hours. Four titles immediately appointed Szust editor-in-chief.”

Sorokowski P, Kulczycki E, Sorokowska A, Pisanski K. Predatory journals recruit fake editor. Nature Comment 543, 481–483 (23 March 2017). doi:10.1038/543481a

 

RESEARCH ETHICS AND MISCONDUCT

  • A retracted study is republished in another journal without the second editor being aware of the retraction. How much history is an author obligated to provide? What is a reasonable approach?

“Strange. Very strange:” Retracted nutrition study reappears in new journal (Retraction Watch)

  • A peer reviewer plagiarized text from the manuscript under review. “We received a complaint from an author that his unpublished paper was plagiarized in an article published in the Journal... After investigation, we uncovered evidence that one of the co-authors of … acted as a reviewer on the unpublished paper during the peer review process at another journal. We ran a plagiarism report and found a high percentage of similarity between the unpublished paper and the one published in the Journal... After consulting with the corresponding author, the editors decided to retract the paper.” Publishing timing does not always reveal who has plagiarized whom.

Nightmare scenario: Text stolen from manuscript during review (Retraction Watch)

 

ACCESS

  • Instructions for writing research summaries for a lay audience. “It is particularly intended to help scientists who are used to writing about biomedical and health research for their peers to reach a wider audience, including the general public, research funders, health-care professionals, patients and other scientists unfamiliar with the research being described…Plain English avoids using jargon, technical terms, acronyms and any other text that is not easy to understand. If technical terms are needed, they should be properly explained. When writing in plain English, you should not change the meaning of what you want to say, but you may need to change the way you say it…A plain-English summary is not a ‘dumbed down’ version of your research findings. You must not treat your audience as stupid or patronise them.”

Access to Understanding (British Library)

  • A retired mathematician proved the Gaussian correlation inequality and published the result, yet the proof remained obscure because it appeared in a less well-known journal. “But Royen, not having a career to advance, chose to skip the slow and often demanding peer-review process typical of top journals. He opted instead for quick publication in the Far East Journal of Theoretical Statistics, a periodical based in Allahabad, India, that was largely unknown to experts and which, on its website, rather suspiciously listed Royen as an editor. (He had agreed to join the editorial board the year before.)…With this red flag emblazoned on it, the proof continued to be ignored.”

A Long-Sought Proof, Found and Almost Lost (Quanta Magazine)

 

STATISTICS

How are types of statistics used changing over time? “…the average number of methods used per article was 1.9 in 1978–1979, 2.7 in 1989, 4.2 in 2004–2005, and 6.1 in 2015. In particular, there were increases in the use of power analysis (i.e., calculations of power and sample size) (from 39% to 62%), epidemiologic statistics (from 35% to 50%), and adjustment and standardization (from 1% to 17%) during the past 10 years. In 2015, more than half the articles used power analysis (62%), survival methods (57%), contingency tables (53%), or epidemiologic statistics (50%).” Are more journals now in need of statistical reviewers?

Sato Y, Gosho M, Nagashima K, et al. Statistical Methods in the Journal — An Update. N Engl J Med 2017;376:1086-1087. DOI: 10.1056/NEJMc1616211

 

 

____

Newsletter #5, circulated April 1, 2017. Sources include Retraction Watch and the Open Science Initiative listserv. Providing the links does not imply WAME’s endorsement.