Australian study debunking homeopathy exposed

20 SEPTEMBER, 2018

Science fact or fiction? NHMRC admits they did not use accepted scientific methods

AUSTRALIA’S peak medical research body, the National Health & Medical Research Council (NHMRC), has admitted under Senate scrutiny that it did not follow recognised scientific guidelines or standards in reviewing the evidence on homeopathy, an approach also applied to its reviews of other natural therapies.
The Homeopathy Review was the first of 17 natural therapy reviews the NHMRC conducted between 2012 and 2015, used to justify removal of the Private Health Insurance rebate for these therapies, which was passed by the Senate on 11 September 2018.
The NHMRC’s response to a Senate Question on Notice posed by Senator Stirling Griff on 30 May 2018 reveals that instead of using accepted scientific methods, it simply made them up along the way.
The integrity of the Homeopathy Review rests on NHMRC’s public assurance that it “used internationally accepted methods” and that it used “a rigorous approach that has been developed by Australian experts in research methods” when evaluating health evidence.
The Senate probe has forced the NHMRC to admit that this was not true.
In its response to Senator Griff’s question the NHMRC has admitted, “At the time this work was underway there was no relevant guidance or standard endorsed by NHMRC, or a relevant international organization, on the development and content of evidence statements” – which formed the basis of the Review’s published conclusion of ‘no reliable evidence’.
The NHMRC has also admitted that the criteria used were, “drafted over a number of months following the completion of the overview search for literature”.
“Here we have an admission under Senate scrutiny that instead of using accepted scientific methods, the NHMRC review team not only invented the methods along the way, they also did this well after the evidence had already been collated and assessed”, said Your Health Your Choice’s Petrina Reichman.
“This removed essential safeguards routinely applied to scientific review processes to ensure they are conducted transparently and ethically”, she said. “This means there was absolutely nothing stopping the review team from manipulating the methodology to get whatever answer they wanted”.
Research protocols are an important safeguard used to reduce or prevent reporting bias in scientific studies. Before a study begins, a protocol is created that outlines in detail all essential aspects of the project, such as the research question being asked, methods of data retrieval, criteria used to determine which studies will be included or excluded from the review, and how the data will be analysed to produce the final results.
Freedom of Information (FOI) documents reveal that the original research protocol was agreed and finalised in December 2012 but was never published. It bears no resemblance to the protocol the NHMRC review committee eventually applied.
FOI documents also reveal that the NHMRC review committee systematically reinvented the research protocol between April and July 2013, after the contracted reviewer had already completed its initial evidence assessment in March 2013. All the criteria used for the evidence statements were retrospectively developed during this period.
“Even worse, FOI documents also reveal that none of these retrospective changes to the Review’s research protocol were disclosed to the public in the NHMRC’s final report, even though they assured the public they conducted a “transparent” review”, said Ms Reichman. “In scientific investigation you ALWAYS have to reveal all changes to the protocol for ethical reasons”.
“This is a serious research scandal of the highest degree, revealing the extent to which the review team secretly manipulated the methods well after the contractor had already collated and assessed the evidence, with none of the changes disclosed in the final report released to the public.”
These manipulations directly resulted in the findings of 171 out of the 176 included studies being retrospectively categorised as “unreliable”, meaning they were dismissed from the Review’s published findings of “no reliable evidence”. The Review’s findings were therefore based on only 5 “reliable” trials – not reported to the public.
If standard, accepted scientific methods were used, the review team would have had to report that around 50% were positive, including studies of high methodological quality. Only around 5% of the 176 studies were negative and the rest inconclusive – strikingly similar to conventional medical research findings.
“It doesn’t get more serious than this. The NHMRC has misled both the Australian public and Government, damaging the NHMRC’s high standing and the public’s trust in science and taxpayer funded institutions”.
These events occurred after the NHMRC terminated a taxpayer funded First Review in August 2012, without publicly disclosing its existence, findings or expenditure – raising further serious questions regarding research integrity and misappropriation of public funds. The Review published in 2015 was the second attempt.
These and other issues concerning the NHMRC Homeopathy Review have been detailed in a Submission of Complaint to the Commonwealth Ombudsman for investigation.
Read more below for further details regarding NHMRC’s response to Senator Stirling Griff’s Question and make up your own mind: was the NHMRC Homeopathy Review science fact or fiction?
Sign the petition calling for a full Senate inquiry into the NHMRC’s conduct.

Question on Notice no. 270 – Impact of midstream changes to the research protocol


Senator Stirling Griff asked the Department of Health on 30 May 2018—
“In the Homeopathy Review:
a) Changes were made to the originally agreed research protocol after the reviewer (Optum) had already retrieved and assessed the evidence in March 2013. For example, the ‘adapted GRADE’ tool was developed in May 2013 and the ‘150’ sample size and 5/5 Jadad quality trial exclusion thresholds were decided in mid July 2013.
What was the quantitative impact of these changes on the Review’s findings?”

NHMRC’s answer:

(emphasis added, addressed below)
“At the time this work was underway there was no relevant guidance or standard endorsed by NHMRC, or a relevant international organization, on the development and content of evidence statements. As such Optum, in consultation with the Homeopathy Working Committee, developed a guidance document for drafting evidence statements (Appendix C of the Overview Report). This document was drafted over a number of months following the completion of the overview search for literature. The criteria reflect the discussions and agreement of the HWC members on the key features of the evidence base that should be captured in each evidence statement.
The only quantitative impact of these changes to the guidance document from May 2013 is that the definition of a small trial was revised from ’51 to 199 participants’ to ‘50 to 149 participants’. This saw an additional 11 studies regarded to be of a medium size.”

Fact Check:

1. “At the time this work was underway there was no relevant guidance or standard endorsed by NHMRC, or a relevant international organization, on the development and content of evidence statements.”

The integrity of the NHMRC Homeopathy Review rests on the NHMRC’s assurance to the Australian public that:
– it used “standardised, internationally accepted methods” (NHMRC Media Release, 11 March 2015; NHMRC Information Paper, pp. 5, 14, 15), and
– “When evaluating health evidence and drafting health advice, NHMRC uses a rigorous approach that has been developed by Australian experts in research methods” (NHMRC Information Paper, pp. 9-10).
NHMRC’s response confirms that this is untrue.
The “standardised, internationally accepted [overview] method” referred to does not specify or prescribe any of the criteria used in the Review’s evidence statements. This was a unique framework that had never been used before (or since) by any other research group or agency in Australia or internationally, including NHMRC. Further, the Review did not use the ‘rigorous approach developed by Australian research experts’ (NHMRC’s standard ‘dimensions of evidence’ framework for assessing health evidence), which was in fact abandoned at an early stage in the Review process.
The reason NHMRC’s standard ‘rigorous approach’ was abandoned was that the NHMRC chose not to retrieve or assess any original research studies – deviating from long-established best practice. In fact, this was the first time NHMRC had ever not assessed primary research studies when assessing evidence. Instead, it decided to rely solely on secondary studies (systematic reviews), which only summarised original studies.
By the NHMRC’s own admission, this approach was adopted purely to save time and money (NHMRC Frequently Asked Questions document, Q9, p.7) – not because it represented rigorous scientific procedure.
But the problem was that the secondary systematic review studies reported essential information about the original studies incompletely and/or inaccurately, leaving significant primary trial information missing and making the originals impossible to assess properly. This is noted on page 20 of the Optum Overview Report (the report prepared for NHMRC by their contractor), which states:
“Studies that were identified during the systematic review of Level I evidence were not assessed according to the NHMRC dimensions of evidence [NHMRC’s ‘rigorous approach developed by Australian experts in research methods’] as planned in the research protocol. These dimensions were originally developed for use in assessing primary studies. It became apparent during the evidence review that they would not be appropriate for overviews, as study-level data was often incompletely reported in the systematic reviews (e.g. primary outcomes were often not specified, effect estimates and confidence intervals were rarely reported).”
(Members of the public would not be expected to read, let alone understand, such technical information, which was not disclosed in the NHMRC Information Paper released on 11 March 2015 intended to report the Review’s methods and findings to the public.)
This only “became apparent” between January and March 2013, while the evidence reviewer (OptumInsight) was collating the evidence and completing its initial assessment using the research protocol agreed between the NHMRC, the Homeopathy Working Committee (HWC) and Optum in December 2012 (confirmed through Freedom of Information documents).
Rather than fix this serious methodological problem by simply retrieving and assessing the original studies, in line with NHMRC’s ‘standard and accepted’ methods, NHMRC instead just noted it as a ‘limitation’. However, admitting such a serious flaw does nothing towards correcting it.
The HWC/NHMRC then set about inventing an entirely new methodology to apply to the data, as explained below.

2. “This document was drafted over a number of months following the completion of the overview search for literature”

This refers to the guidance document developed for drafting evidence statements for each of the 61 conditions assessed (Appendix C of the Overview Report). As explained above, this was developed entirely post hoc and was not specified in the original research protocol.
The NHMRC never revealed that the evidence statement framework – which formed the basis of the Review’s findings of ‘no reliable evidence’ – was created in its entirety between April and June 2013 by a special working committee ‘Sub-Group’, which set about systematically reinventing the Review’s research protocol.
This included “the development and content of evidence statements” referred to in NHMRC’s answer, which included the criteria that defined a ‘reliable’ trial – that is, which trials were ‘reliable’ enough to contribute to the Review’s published findings (see below).
The original research protocol that was agreed and finalised in December 2012 did not specify any of the criteria or elements of the evidence statement framework developed during this time. The original evidence statement framework agreed in December 2012 has been obtained through Freedom of Information (FOI).
Importantly, none of these retrospective changes to the research protocol were disclosed or reported, despite the Optum Overview Report including a ‘Changes from the research protocol’ section (Section 3.8, p.20).
This exposed the Review to serious risk of bias, as it removed all customary safeguards normally put in place to prevent researchers from manipulating the results according to predetermined intentions.
NHMRC had already secretly commissioned and terminated a First Review on homeopathy between April and August 2012, without public disclosure of its existence or findings. This controversial revelation increases legitimate suspicion that a predetermined agenda had been set for the Review.
3. “Optum, in consultation with the Homeopathy Working Committee, developed a guidance document for drafting evidence statements (Appendix C of the Overview Report)”
Documents obtained under Freedom of Information tell a different story.
These show that after Optum completed its initial assessment in March 2013 according to the specifications of the originally agreed research protocol, the Homeopathy Working Committee (HWC) and senior NHMRC officials initiated the process of developing the evidence statement framework (the ‘guidance document’ referred to) – not Optum. The Sub-Group meetings convened between April and June 2013 to create the framework were chaired and guided by senior NHMRC staff and the HWC.
The evidence statement guidance document, including all its component criteria, was developed by senior NHMRC officials in conjunction with the HWC. Optum was tasked with applying the new framework to the data.
The following quote from the minutes of the HWC meeting on 18 March 2013 shows that when developing draft evidence statements, the original research protocol was not followed because it would have found that the evidence for homeopathy in all conditions was “uncertain” (NHMRC FOI 2015/16 007-03):
“Optum noted that the approach taken when developing draft evidence statements did not align with the approach proposed in the research protocol as the evidence statement for all conditions would have stated that the effect was uncertain”.
This shows that Optum was being guided by the HWC/NHMRC to deviate from the original protocol in order to alter the results of the Review and reach more definitive conclusions – a clear example of bias. The original framework allowing for a conclusion of “uncertainty” was not definitively negative: it allowed for positive trends in the research to be reported and for future research to shed more light. This is consistent with how conventional research is usually assessed and reported.
Between March and July 2013, the HWC and Office of NHMRC took charge of the process of developing the guidance document. The evidence statement framework was continuously massaged until one was developed that allowed a definitively negative evidence statement to be made against all 61 conditions assessed, irrespective of positive findings and trends in the research evidence for a number of conditions.
The following entry from the minutes of the 29 April 2013 HWC Sub-Group teleconference meeting shows the extent to which the HWC/NHMRC guided the process, which included the development of criteria that Optum was not even familiar with (NHMRC FOI 2015/16 008-02). For example, the HWC/NHMRC proposed that the evidence statement framework should consist of two Elements (in May extended to three Elements), introducing for the first time a unique, never-before-used “adapted GRADE” tool to rate level of confidence in the data:
“OptumInsight noted that they were unfamiliar with the GRADE tool and that they were therefore reluctant to apply it without a full understanding of how it should be used. [HWC] Members noted that where the body of evidence was accurately described, the application of GRADE to formulate a level of confidence could be readily performed by [HWC] Members or Prof Ghersi [Office of NHMRC].”

4. “The only quantitative impact of these changes to the guidance document from May 2013 is that the definition of a small trial was revised from ’51 to 199 participants’ to ‘50 to 149 participants’.”

This response avoids answering the actual question asked and is highly misleading.
Senator Griff asked what impact the undisclosed midstream changes to the research protocol had on the Review’s findings, which was not answered. Instead, NHMRC responded to an entirely different question – what quantitative changes were made to the evidence statement framework from May 2013. Why was the question not answered?
Independent forensic investigation of NHMRC’s methods has shown that the evidence statement framework developed between April and June 2013 included arbitrary, never-before-used criteria that directly dismissed the results of 171 out of the 176 studies initially included in the overview from contributing to the Review’s findings.
As a result, the Review’s findings were reduced to, and therefore founded on, only 5 ‘reliable’ trials – a fact not reported anywhere in the NHMRC report. Could this be the reason why NHMRC did not provide a direct answer?
The arbitrary criteria applied included that a trial had to have at least 150 participants (impacting 146 trials) AND/OR be given a minimum 100% ‘quality rating’ (impacting a further 25 trials) for its results to be considered ‘reliable’ and hence warrant any consideration in the Review’s findings (see NHMRC Information Paper, pp. 34-36).
NHMRC did not disclose that these arbitrary criteria were not formally adopted until mid-July 2013 – seven months after the original research protocol had been finalised and four months after Optum had already collated and assessed the evidence.
Such unusually stringent exclusion thresholds are unheard of in scientific review processes and cannot be authenticated against any recognised guidelines or standards – as NHMRC’s Senate response admits.
The internationally esteemed BMJ Clinical Evidence Reviews defines a minimum trial size of only 20 participants as acceptable, with the results of smaller trials accepted in fields where little research exists.
NHMRC routinely funds and collaborates on trials with fewer than 150 participants, and it has never applied such an arbitrary cut-off in any other review it has conducted before or since. So why was it applied here, and why wasn’t the impact on the Review’s findings reported?