Clinical trial data sharing — not just for “research parasites” anymore? Use Unpaywall to find free articles, join Initiative for Open Citations. Are women authors different? What will your journal do without you? Can technology improve global health?


Clinical trial data sharing — not just for “research parasites” anymore

“Using the NHLBI data repository, 370 investigators requested data from at least one clinical trial — 51% of them trials on cardiovascular prevention and treatment. Requests were largely for post hoc secondary analysis (72%); a minority of requests were initiated for analytic or statistical approaches to clinical trials (9%) and meta-analyses (7%). More than half of investigators (53%) made their requests in the last 4.4 years of the study period (January 2000 to May 2016), ‘indicating an increasing demand for trial data that has outpaced acquisition,’ wrote Sean A. Coady, MS, MA, of the NHLBI in Bethesda, Md., and colleagues. ‘In contrast, demand for observational data has increased in a pattern more directly proportional to time.’ ”

NHLBI Data Sharing: Fears of ‘Research Parasites’ Melt Away. Experience of NIH institute bolsters value of open trial data (MedPage Today)



  • Unpaywall

Trying to find free articles online? Use Unpaywall, a new browser extension that identifies free copies of research articles. Unlike the Open Access Button available for libraries and interlibrary loan, it is available to anyone (requires the Firefox or Chrome browser).

Covered in: Putting the OA Into Interlibrary Loan


  • Initiative for Open Citations

“The Initiative for Open Citations (I4OC) is a collaboration between scholarly publishers, researchers, and other interested parties to promote the unrestricted availability of scholarly citation data…The aim of this initiative is to promote the availability of data on citations that are structured, separable, and open. Structured means the data representing each publication and each citation instance are expressed in common, machine-readable formats, and that these data can be accessed programmatically. Separable means the citation instances can be accessed and analyzed without the need to access the source bibliographic products (such as journal articles and books) in which the citations are created. Open means the data are freely accessible and reusable.”



  • Fast corrections: Authors use PubMed’s commenting feature, PubMed Commons, to post corrections before a formal correction is published

Authors alerting readers via PubMed Commons

  • Ghosts who don’t know they’re ghosts: Researcher provides fake contact information for coauthors, who aren’t aware they’re authors

Busted: Researcher used fake contact info for co-authors



A study of economics papers shows that while papers by women spend longer in peer review, their revised manuscripts improve more in readability than men’s. “Research papers with female authors spend six months longer in peer review at the top economics journals…In what appears to be a consequence, papers by women are easier to read and improve more as they are being revised than papers written by men.”

Gender Differences in Peer Review: Economics papers by women are stalled longer at journals – but they end up more readable and more improved (Royal Economic Society)



Succession planning: How to prepare for when you’re no longer around — written more for publishers than editors but maybe useful for some. “With a mature workforce, you need to watch that knowledge and skills do not reside in one person. When that person leaves, for whatever reason, it is entirely possible that you will be stuck and with their departure goes an essential resource that you will be scrambling to replace.”

Succession Planning (Scholarly Kitchen)



Talk with Google: Using Technology to Tackle Global Health’s Biggest Challenges



Newsletter #6, circulated April 11, 2017. Sources include Retraction Watch and Open Science Initiative listserve. Providing the links does not imply WAME’s endorsement.


Why do researchers commit research misconduct? Should you publish a paper withdrawn (maybe) from a predatory journal? Should an editor also be a researcher? Researcher and reviewer gender gaps


Clinical trial registration and negative results

A study in BMJ tests the hypothesis that clinical trial registration should improve trial reporting and therefore increase the number of trials that do not report positive outcomes. Registered trials were slightly less likely to report positive results, particularly if they were not industry-funded. The authors did not compare the registered trial outcomes with the outcomes that were reported (they studied 1122 trials, so that would have been a major undertaking). A great benefit of trial registration for editors and reviewers is being able to determine whether outcome switching has occurred. If outcomes were switched, that could explain why trial registration was not associated with a larger reduction in positive results.
Another important observation: much of the trial reporting was poor, pointing out the importance of all authors using, and all medical journals requiring and verifying use of, CONSORT reporting guidelines.
Odutayo A, Emdin CA, Hsiao AJ. Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials—ESORT). BMJ 2017;356:j917.


Why do researchers commit research misconduct? 

A case study with a chastening message for investigators (and a sobering message for editors): “He described how and why he started tampering with data. The first time it happened he had analyzed a dataset and the results were just shy of significance. Fox noticed that if he duplicated a couple of cases and deleted a couple of cases, he could shift the p-value to below .05. And so he did. Fox recognized that the system rewarded him, and his collaborators, not for interesting research questions, or sound methodology, but for significant results. When he showed his collaborators the findings they were happy with them—and happy with Fox.” What messages are investigators sending when research doesn’t turn out as hoped? “Hindsight’s a bitch:” Colleagues dissect painful retraction. Retraction Watch (blog). March 7, 2017.
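To see how little tampering it can take, here is a small simulation with made-up data (hypothetical numbers, not from the case): a modest treatment effect is tested, then the two most favorable treated cases are duplicated and the two least favorable deleted, and the p-value drops.

```python
import math
import random

def p_value_two_sample(a, b):
    """Two-sided p-value for a difference in means, using a normal
    approximation to the t statistic (adequate for illustration at n = 50)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > z) for a standard normal

random.seed(1)
control = [random.gauss(0.0, 1.0) for _ in range(50)]
treated = [random.gauss(0.35, 1.0) for _ in range(50)]  # modest true effect
p_before = p_value_two_sample(treated, control)

# The "tampering": delete the two least favorable treated cases
# and duplicate the two most favorable ones (group size stays 50)
ranked = sorted(treated)
tampered = ranked[2:] + ranked[-2:]
p_after = p_value_two_sample(tampered, control)

print(f"p before tampering: {p_before:.3f}")
print(f"p after tampering:  {p_after:.3f}")
```

The exact numbers depend on the random seed; the direction does not. A handful of "adjusted" cases reliably moves a borderline result toward p < .05, which is why editors cannot detect this from reported summary statistics alone.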

Publishing a paper withdrawn from a predatory journal
What would you do if authors submitted a paper that they had unknowingly submitted to a predatory journal, then withdrew, but the predatory journal would not respond to confirm the withdrawal? COPE has published a case study on such an instance.
Withdrawal of accepted manuscript from predatory journal. Case Number 16-22. COPE.



The importance of research experience when evaluating research (blog):
“So pointing out why a study is not perfect is not enough: good criticism takes into account that research always involves a trade-off between validity and practicality… good research is always a compromise between experimental rigor, practical feasibility, and ethical considerations. To be able to appreciate this as a critic, it really helps to have been actively involved in research projects. I do not mean to say that we should become less critical, but rather that we become better constructive critics if we are able to empathize with the researcher’s goals and constraints.” The value of experience in criticizing research (Rolf Zwaan blog)

Relationship between time to reject without review, the review process, and author satisfaction

In an analysis across scientific fields, the authors find: “One-third of journals take more than 2 weeks for an immediate (desk) rejection and one sixth even more than 4 weeks. This suggests that besides the time reviewers take, inefficient editorial processes also play an important role. As might be expected, shorter peer review processes and those of accepted papers are rated more positively by authors. More surprising is that peer review processes in the fields linked to long processes are rated highest and those in the fields linked to short processes lowest. Hence authors’ satisfaction is apparently influenced by their expectations regarding what is common in their field. Qualitative information provided by the authors indicates that editors can enhance author satisfaction by taking an independent position vis-à-vis reviewers and by communicating well with authors.” Huisman J, Smits J. Duration and quality of the peer review process: the author’s perspective. Scientometrics (2017). doi:10.1007/s11192-017-2310-5


Reporting race/ethnicity in research

“An explanation of who classified individuals as to race, ethnicity, or both, the classifications used, and whether the options were defined by the investigator or the participant should be included in the Methods section. The reasons that race/ethnicity was assessed in the study also should be described in the Methods section.” Robinson JK, McMichael AJ, Hernandez C. Transparent Reporting of Demographic Characteristics of Study Participants. JAMA Dermatol. 2017;153(3):263-264. doi:10.1001/jamadermatol.2016.5978

What is happening with the researcher gender gap, in 12 countries?
A report from Elsevier (using Scopus): Gender in the Global Research Landscape. Analysis of research performance through a gender lens across 20 years, 12 geographies, and 27 subject areas. (2017)
and Scholarly Kitchen’s assessment: Alice Meadows. The Global Gender Gap: Research and Researchers Scholarly Kitchen Blog.

Is there a gender bias in selecting reviewers? 
“Here we present evidence that women of all ages have fewer opportunities to take part in peer review. Using a large data set that includes the genders and ages of authors and reviewers from 2012 to 2015 for the journals of the American Geophysical Union (AGU), we show that women were used less as reviewers than expected…The bias is a result of authors and editors, especially male ones, suggesting women as reviewers less often, and a slightly higher decline rate among women in each age group when asked.
These findings underline the need for efforts to increase female scientists’ engagement in manuscript reviewing to help in the advancement and retention of women in science.” Lerback J, Hanson B. Journals invite too few women to referee. Nature | Comment, January 25, 2017.


BMJ will declare all its industry revenues, in the interest of transparency. Hear, hear! BMJ editor confirms that revenues from industry will be declared. BMJ 2015;351:h3908.


Newsletter #4, originally circulated March 16, 2017. Sources include Retraction Watch, COPE, LinkedIn, and Scholarly Kitchen. Providing the links and information does not imply WAME’s endorsement.

How would you change medical publishing? Authors offer bribes, New issues in informed consent, Why do predatory journals exist?


  • What would you change about medical publishing? Scholarly Kitchen offers some interesting perspectives. Share yours via Comments below.

If you could change one thing about scholarly publishing, what would that be? (Scholarly Kitchen blog)


  • Editor receives offer of cash for publishing manuscripts

Pay to play? Three new ways companies are subverting academic publishing (Retraction Watch blog)

  • Editors step down after their citation cartel was discovered (European Geophysical Union)  (Retraction Watch blog)


  • Commentaries on new developments with informed consent: e-consent and internet-based clinical trials, changes in perceptions of risk, new types of risk

Informed Consent (NEJM [free])


  • Should scientists attempt to replicate their own studies? They have an inherent desire (or conflict of interest) to see consistent results

Why Scientists Shouldn’t Replicate Their Own Work (Discover Magazine)


Do predatory journals fill a niche?

Predatory Publishing as a Rational Response to Poorly Governed Academic Incentives (Scholarly Kitchen blog)


  • A neuroscientist posts his peer reviews online, emails the authors, and tweets a link to his review (but only if the manuscript is available as a preprint)

The Rogue Neuroscientist on a Mission to Hack Peer Review (Wired Magazine)

Newsletter #3. Originally circulated March 7, 2017. Sources include Retraction Watch and Scholarly Kitchen. Providing the links and information does not imply WAME’s endorsement.

Publishing research with ethical lapses, P values, Reproducibility, WAME’s predatory journals statement


  • An editorial by Bernard Lo and Rita Redberg discusses ethical issues in recently published research in which abnormal lab values were not conveyed to research participants: “Should a study with an ethical lapse be published?…Many journals will not publish research with grave ethical violations, such as lack of informed consent, lack of institutional review board (IRB) approval, or scientific misconduct. However, if violations are contested or less serious, as in this study, the ethical consensus has been to publish valid findings, together with an editorial to raise awareness of the ethical problems and stimulate discussion of how to prevent or address them.”

Addressing Ethical Lapses in Research (JAMA) [formerly free, now first PDF page visible]

  • What should research misconduct be called? “At the heart of the debate is the history of the term. In the U.S., in particular, lobbying from scientists dating to the 1980s has resulted in the term “misconduct” being codified to only refer to the cardinal sins of falsification, fabrication, and plagiarism. This has left lesser offenses, often categorized as “questionable research practices,” relatively free from scrutiny. Nicholas Steneck, a research ethicist at the University of Michigan in Ann Arbor, calls the term “artificial.”

Does labeling bad behavior “scientific misconduct” help or hurt research integrity? A debate rages (Retraction Watch Blog)


  • Hilda Bastian provides 5 tips for avoiding P value potholes: commonly encountered problems with how P values are used and interpreted.

5 Tips for Avoiding P-Value Potholes (Absolutely Maybe blog)

  • Videos on research methods related to epidemiology, by Greg Martin, MD, MPH, MBA (University of the Witwatersrand; based in Ireland) — basic but useful for anyone wanting a quick well-done overview on a variety of research topics.

Epidemiology (YouTube)

  • For a bit of humor, The Five Diseases of Academic Publishing.

Got “significosis?” Here are the five diseases of academic publishing (Retraction Watch blog)


  • Acceptance rates for journals applying for membership to OASPA: “Between 2013 and 2015 we accepted fewer than 25% of the total number of applications we received. Some from 2016 are still undergoing review, but we expect the number of accepted applications for last year to fall below 10% once all are concluded.”

Identifying quality in scholarly publishing: Not a black and white issue (OASPA Blog)


  • Overcoming nonreproducibility in basic and preclinical research, by John Ioannidis: “The evidence for nonreproducibility in basic and preclinical biomedical research is compelling. Accumulating data from diverse subdisciplines and types of experimentation suggest numerous problems that can create a fertile ground for nonreproducibility. For example, most raw data and protocols are often not available for in-depth scrutiny and use by other scientists. The current incentive system rewards selective reporting of success stories.”

Acknowledging and Overcoming Nonreproducibility in Basic and Preclinical Research (JAMA) [formerly free, now first PDF page visible]

  • Research reported in newspapers has poor replication validity: “Journalists preferentially cover initial findings although they are often contradicted by meta-analyses and rarely inform the public when they are disconfirmed.”

Poor replication validity of biomedical association studies reported by newspapers (PLOS ONE)


WAME published a new statement on Identifying Predatory or Pseudo-Journals.

Identifying Predatory or Pseudo-Journals (WAME)


WAME Newsletter #2, original version circulated February 23, 2017. Identified (in part) from Retraction Watch, Stat News, and Linked In Global Health. Providing the links and information does not imply WAME’s endorsement.




Welcome to the WAME Blog: Authors’ view of the manuscript submission process, Science’s English-language bias, Transparent research results (or not), Open access publishing in the Global South

The WAME Blog, featuring the WAME Newsletter, offers news, views, and resources of interest to medical journal editors in general and WAME members — comprising medical journal editors and scholars throughout the world — in particular. The challenges and experiences of editors at small journals and journals in low- and middle-income countries are of particular interest. Articles and activities are free to everyone, unless, in rare instances, otherwise indicated.

The first Newsletter was circulated via listserve on February 16, 2017 and is excerpted below. Sources of this Newsletter are Retraction Watch and the Open Science Initiative listserve. Providing the links and information does not imply WAME’s endorsement.


The delights, discomforts, and downright furies of the manuscript submission process, from Learned Publishing. A discussion of issues authors face as part of the manuscript submission process, including the following list of recommendations (plus a useful appendix of items authors should have on hand when ready to submit a manuscript):

  • “Editors and reviewers should consider manuscripts in any (appropriate) format first – and publishers reset only the accepted papers.
  • There should be three or four standard formats for journals that everyone can copy. Trivial house style requirements should be abolished.
  • The layouts of tables, graphs and references also need to be standardised more. Tables and graphs, and their caption, should be placed where they fit in the text, not at the end of manuscripts.
  • A named person (with an e-mail address at the publisher’s) should be provided by the publisher who can help with the submission process if an author gets stuck.
  • Finally, when the submission process is completed successfully or otherwise, authors should be invited to send any comments/feedback on the system that they have used.”

Authors’ feedback, as well as the whole submission system, should be reviewed every 3 to 5 years, they suggest. They also note the importance of allowing authors to review their proofs.


The problems and loss of information created by the bias toward science reported in English. “Not only does the larger scientific community miss out on research published in non-English languages. But the dominance of English as science’s lingua franca makes it more difficult for researchers and policy makers speaking non-English languages to take advantage of science that might help them…Amano thinks that journals and scientific academies working to include international voices is one of the best solutions to this language gap. He suggests that all major efforts to compile reviews of research include speakers of a variety of languages so that important work isn’t overlooked. He also suggests that journals and authors should be pushed to translate summaries of their work into several languages so that it’s more easily found by people worldwide.”

How a bias toward English-language science can result in preventable crises, duplicated efforts and lost knowledge (Smithsonian Magazine)


  • Paul Glasziou describes how to present research results in a transparent way so that readers can understand the study’s implications–or not. “…presenting the results in a clear, unbiased, and understandable way is of paramount importance. Editors should insist on clear, simple presentations of the main results—preferably in graphical formats. Without that, authors and editors will continue to contribute to the considerable waste in research and the gaps between research and practice.”

How to hide trial results in plain sight (BMJ Blogs)

  • The influence of statistical noise in medical research results: “Statistically speaking, a statistically significant result obtained under highly noisy conditions is more likely to be an overestimate and can even be in the wrong direction. In short: a finding from a low-noise study can be informative, while the finding at the same significance level from a high-noise study is likely to be little more than . . . noise.”

Why traditional statistics are often “counterproductive to research the human sciences” (Retraction Watch blog)


  • OASPA hosted a Twitter Chat on Open Access Publishing in the Global South. From the OASPA website (and thank you to the OSI listserve for the heads up): “February 10, 2017 by Leyla Williams On Wednesday 22nd February 2017, OASPA will host a live Twitter chat about open access publishing in the Global South with Xin Bi (Xi’an Jiaotong-Liverpool University/DOAJ), Ina Smith (Academy of Science of South Africa), Abel Packer (SciELO), and Lars Bjørnshauge (DOAJ).”
  • OASPA has posted a webinar on open access publishing in the Global South; free OASPA webinars are offered on the OASPA website.