The major patent-based research pharmaceutical companies also nominally commit themselves to improving health and relieving suffering. For example, Merck promises “to provide innovative, distinctive products that save and improve lives…and to provide investors with a superior rate of return.”[7] Pfizer is dedicated “to applying science and our global resources to improve health and well-being at every stage of life.” Pharmaceutical companies continuously emphasize how deeply society depends on their development of innovative products to improve health. But in fact, these companies mostly develop drugs that are little better than existing products yet have the potential to cause widespread adverse reactions even when appropriately prescribed. This deviation from the principles of health care by institutions allegedly dedicated to health care is institutional corruption. We present evidence that industry has a hidden business model to maximize profits on scores of drugs with clinically minor additional benefits.[8] Physician commitment to better health is compromised as the industry spends billions to create what Lessig calls a “gift economy” of interdependent reciprocation.[9] New research finds that truly innovative new drugs sell themselves in the absence of such gift-economy marketing.[10]
Regulatory agencies such as the FDA and the Environmental Protection Agency are created when unregulated competition is perceived to cause serious harm to society and government regulation is needed to address the problem. The FDA was founded to protect the public’s health from the fraudulent cures peddled in the 19th century.[11] Through a series of legislative enactments, often in response to a drug disaster, the pharmaceutical regulatory side of the FDA has acquired ever wider responsibilities to ensure that new drugs do more good than harm. Institutional corruption consists of distortions of these responsibilities, such as approving drugs that are little better than existing medications, failing to ensure sufficient testing for serious risks, and inadequately guarding the public from harmful side effects. These distortions serve commercial interests well and public health poorly.
For the past 50 years, patent-based research companies have objected to the FDA’s gate-keeping function as being too rigid and too slow. They have claimed that an obsessive concern about safety has undermined patient access to drugs that could save lives or reduce the burdens of ill health.[12] This message is increasingly being accepted by the FDA.
Flooding the Market with Drugs of Little Benefit
In response to the emphasis by pharmaceutical companies, their lobbyists, and their trade association—the Pharmaceutical Research and Manufacturers of America (PhRMA)—on the high risk and cost of research and development (R&D), Congress has authorized billions in taxpayer contributions to support R&D, exemptions from market competition, and special privileges.[13] Patents, of course, can be found in all industries, but lobbyists for the pharmaceutical industry have successfully pressured Congress to provide several forms of market protection beyond patents.
The industry measures “innovation” in terms of new molecular entities (NMEs), but most NMEs provide at best minor clinical advantages over existing drugs and may lawfully be approved by the FDA even if they are inferior to previously approved drugs. The preponderance of drugs without significant therapeutic gain dates back at least 35 years. From the mid-1970s through the mid-1990s, multiple assessments found that only 11 to 15.6 percent of NMEs provide an important therapeutic gain.[14] Millions of patients benefit from the one out of six drugs that are therapeutically significant advances; but most R&D dollars are devoted to developing molecularly different but therapeutically similar drugs, which tends to involve less risk and cost for manufacturers. These drugs are then sold through competition based on brand name, patent status, and newness, rather than on their therapeutic merits. An analysis of data from the National Science Foundation by Donald Light and Joel Lexchin indicates that patent-based pharmaceutical companies – often deemed by Congress, the press, the public, and themselves to be “innovative” – in fact devote only 1.3 percent of revenues, net of taxpayer subsidies, to discovering new molecules.[15] The 25 percent of revenues spent on promotion is about 19 times more than the amount spent on discovering new molecules.[16] In short, the term “R&D” as used by industry primarily means “development” of variations rather than the path-breaking “research” that onlookers might like to imagine.
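As a quick check on the promotion-to-discovery comparison just cited, the short sketch below reproduces the arithmetic from the two percentage figures given in the text; the percentages are the source’s, the rounding is ours.

```python
# Rough reproduction of the promotion-versus-discovery comparison above,
# using the two percentage figures cited in the text.
promotion_share = 0.25     # share of revenues spent on promotion
discovery_share = 0.013    # share of revenues (net of taxpayer subsidies) spent on discovery

ratio = promotion_share / discovery_share
print(f"Promotion spending is roughly {ratio:.0f} times discovery spending")  # ~19
```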
The independent drug bulletin, La revue Prescrire, analyzes the clinical value of every new drug product or new indication approved in France. From 1981 to 2001, it found that about 12 percent offered therapeutic advantages.[17] But in the following decade, 2002-2011, as shown in Figure 1, only 8 percent offered some advantages and nearly twice that many—15.6 percent—were judged to be more harmful than beneficial.[18] A mere 1.6 percent offered substantial advantages. Assessments by the Canadian advisory panel to the Patented Medicine Prices Review Board and by a Dutch general practice drug bulletin have come to similar conclusions.[19] No comparable review has been done in the United States on the 229 NMEs approved by the FDA between 2002 and 2011.
This decrease does not come from the “innovation crisis” of fewer new molecules entering trials or eventually being approved but from fewer new drugs being clinically superior.[20] The number of products put into trials has actually increased as the number of clinically superior drugs has decreased.[21] These facts provide evidence that companies are using patents and other protections from market competition primarily to develop drugs with few if any new therapeutic benefits and to charge inflated prices protected by their strong IP rights. Despite the small number of clinically superior drugs, sales and profits have soared as successful marketing persuades physicians to prescribe the much more costly new products that are at best therapeutically equivalent to established drugs.[22] Both an American and a Canadian study found that 80 percent of the increase in drug expenditures went to paying for these minor-variation new drugs, not for important advances.[23] Companies claim that R&D costs are “unsustainable.” But over the past 15 years, revenues have increased six times faster than has investment in R&D.[24]
Almost a decade ago, Jerry Avorn, a widely respected pharmacoepidemiologist and author of a book on the risks of drugs, described how the big pharmaceutical companies exploited patents and concluded that “[l]aws designed to encourage and protect meaningful innovation had been turned into a system that rewarded trivial pseudo-innovation even more profitably than important discoveries.”[25] He also noted that efforts in Congress to introduce a “reasonable pricing clause” that would reflect large taxpayer contributions to new drugs were defeated by industry lobbyists.
An Epidemic of Harmful Side Effects
Most new drugs approved and promoted since the 1970s lack additional clinical advantages over existing drugs and – as with all drugs – have been accompanied by harmful side effects. A systematic review of the 39 methodologically strongest studies performed in the U.S. between 1964 and 1995 examined patients who were hospitalized due to a serious adverse drug reaction (ADR) or who experienced an ADR while in the hospital.[26] The review found that 4.7 percent of hospital admissions were due to serious reactions from prescription drugs that had been appropriately prescribed and used. In addition, 2.1 percent of in-hospital patients who received correctly prescribed medications experienced a serious ADR, for a total of 6.8 percent of hospital patients having serious ADRs.[27] Applying this 6.8 percent hospital ADR rate to the 40 million annual admissions in U.S. acute care hospitals indicates that up to 2.7 million hospitalized Americans experience a serious adverse reaction each year. Of all hospitalized patients, 0.32 percent died due to ADRs, which means that an estimated 128,000 hospitalized patients died annually, matching stroke as the fourth leading cause of death. Deaths and serious reactions outside of hospitals would significantly increase the totals.
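A back-of-the-envelope restatement of the extrapolation above, using the rounded figures cited in the text (40 million admissions, the combined 6.8 percent serious ADR rate, and the 0.32 percent death rate), is sketched below.

```python
# Back-of-the-envelope reproduction of the extrapolation above,
# using the rounded figures cited in the text.
annual_admissions = 40_000_000   # U.S. acute care hospital admissions per year
serious_adr_rate = 0.068         # 4.7% admissions caused by ADRs + 2.1% in-hospital ADRs
adr_death_rate = 0.0032          # share of hospitalized patients who died of ADRs

serious_adrs = annual_admissions * serious_adr_rate   # ~2.7 million
adr_deaths = annual_admissions * adr_death_rate       # ~128,000

print(f"Estimated serious ADRs among hospitalized patients per year: {serious_adrs:,.0f}")
print(f"Estimated ADR deaths among hospitalized patients per year:   {adr_deaths:,.0f}")
```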
An analysis conducted in 2011, based on a year of ADRs reported to the FDA, came to similar conclusions: Americans experienced “2.1 million serious injuries, including 128,000 patient deaths.”[28] Other studies reveal that one in every five NMEs eventually caused enough serious harm in patients to warrant a severe warning or withdrawal from the market. Of priority drugs that were reviewed in slightly more than half the normal time, at least one in three caused serious harm.[29]
The public health impacts are even greater when milder adverse reactions are taken into account. Given estimates that about 30 ADRs occur for every one that leads to hospitalization, about 81 million side effects are currently experienced every year by the 170 million Americans who use pharmaceuticals.[30] Groups such as pregnant women, elderly patients, and those who are taking multiple medications are especially at risk. Most of these medically minor adverse reactions are never brought to clinical attention, but even minor reactions can impair productivity or functioning, lead to falls, and cause potentially fatal motor vehicle accidents.[31]
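The 81 million figure follows from applying the source’s roughly 30-to-1 ratio of milder to hospital-level reactions to the 2.7 million estimate above, as in this brief sketch.

```python
# Extending the hospital-based estimate above with the source's ratio of
# roughly 30 milder adverse reactions for every hospital-level reaction,
# applied to the ~2.7 million estimate derived earlier.
serious_adrs_per_year = 2_700_000      # from the hospital-based estimate above
milder_per_serious = 30                # ratio cited in the text

milder_adrs_per_year = serious_adrs_per_year * milder_per_serious   # ~81 million
print(f"Estimated milder adverse reactions per year: {milder_adrs_per_year:,}")
```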
Contributors to More Harm and Less Benefit
Are the adverse side effects we have just been describing simply the “price of progress or an unavoidable risk of drug therapy?”[32] In fact, evidence suggests that commercial distortions of the review process and aggressive marketing contribute both to undermining beneficence as health care’s raison d’être and to the epidemic of harm to patients.[33]
Distorting, Limiting, and Circumventing Safety Regulations
From at least the 1890s, the public clamored for Congress to regulate contaminated or adulterated foods and harmful or ineffective medicines (medicines that may delay truly useful treatments).[34] At that time, lobbyists—paid from drug profits—argued that even bills to require accurate listing of secret ingredients would destroy the industry. These lobbyists had managed to have earlier bills sent to die in the Committee on Manufactures until President Roosevelt intervened to secure passage of the 1906 Food and Drug Act, which still only required that statements on labels be true and provided no budget for enforcement.
Work on what would become the 1938 food and drug law began in 1933 with a bill that would prohibit misstatements in advertising and require manufacturers to prove to the FDA that drugs were safe before being allowed to sell them.[35] The companies’ two trade associations launched “well-choreographed screams of protest” and letter-writing campaigns to mislead Congress and to distort its mission to protect its constituents from harm. Employees of drug makers wrote to Congress, arguing that requiring companies to make honest claims about safe drugs would put thousands out of work. The FDA staff wanted the legislation passed but were stopped by threats of prosecution if they campaigned for it. Then a manufacturer added diethylene glycol (antifreeze) to a sulfa drug to make a sweet-tasting elixir, and children started dying. Public response trumped industry lobbyists, and Congress passed the 1938 law, requiring that drugs be safe but leaving it to companies to decide how to define and test for safety.
For the next 25 years, drugs were approved within 180 days unless the FDA objected, based on the companies’ tests and reports of safety. Some companies “tested” their products by sending samples out to providers for feedback, keeping no records of the results, and denying serious harms when reported by doctors.[36] Daniel Carpenter, the author of a book considered to be a definitive work on the politics of the FDA, has detailed how the FDA staff dedicated themselves to enforcing the rules and developing better criteria for safety and efficacy. But as Malcolm Salter of the Harvard Business School emphasizes, companies institutionalize corruption by getting legislative and administrative rules shaped to serve their interests, either directly or by crafting rules in ways they can game.[37]
In his review of new pharmaceutical products in the 1940s and 1950s, Dr. Henry Dowling, an AMA senior officer and expert, found that companies launched 200 to 400 new products a year, but only three on average were clinically useful.[38] Physicians, swamped with far more drugs than they could know much about, relied on sales reps to brief them, entertain them, and leave an ample supply of free samples as gifts that the physicians could then give to their patients—a two-stage economy of reciprocation.[39] In effect, through political pressure and lobbying, companies minimized the role of the FDA as the protector of public health for its first 56 years.
Following the 1962 amendments, propelled to passage by the thalidomide tragedy, the FDA commissioned the National Research Council, a part of the National Academy of Sciences, to review the effectiveness of all 2,820 drugs (available in 4,350 different versions) approved between 1938 and 1962. Companies were required to submit substantial evidence of effectiveness. The review concluded that seven percent of the drugs reviewed were completely ineffective for every claim they made and a further 50 percent were only effective for some of the claims made for them.[40] Although the FDA has acted to remove many of these ineffective drugs from the market, some pre-1962 drugs are—more than 50 years later—still undergoing review and are among the “several thousand drug products” that, according to a 2011 FDA guidance document, are today “marketed illegally without required FDA approval.”[41]
Regulatory capture begins with the dependency corruption of Congress, which passes the regulations and provides the funding for agencies to protect the public. While the 1962 amendments ushered in the modern era of testing for safety and efficacy before a drug can be approved,[42] three key features of the modern drug testing system actually work for industry profits and against the development of safe drugs that improve health.
First, three criteria used by the FDA contribute to the large number of new drugs approved with few therapeutic advantages. New drugs are often tested against placebos rather than against established effective treatments, and trials measure surrogate or substitute end points rather than actual effects on patients’ health.[43] Noninferiority trials, which merely show that the product is not worse than another drug used to treat the same condition by more than a specified margin, are accepted rather than superiority trials being required.[44] Silvio Garattini, founder of the Mario Negri Institute for Pharmacological Research, points out that placebo and noninferiority trials violate international ethical standards and provide no useful information for prescribing.[45]
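To make the “specified margin” idea concrete, here is a minimal sketch of a noninferiority comparison. All response rates, sample sizes, and the margin are hypothetical, and the calculation is a standard normal-approximation confidence bound, not any procedure specified by the FDA or by the trials discussed above.

```python
import math

# Hypothetical illustration of the noninferiority decision rule described above.
# All numbers (response rates, sample sizes, margin) are invented for illustration.
p_new, p_ref = 0.58, 0.60   # observed response rates: new drug vs. comparator
n_new, n_ref = 400, 400     # patients per arm
margin = 0.10               # prespecified noninferiority margin (10 percentage points)

diff = p_new - p_ref
se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
ci_lower = diff - 1.96 * se   # lower bound of the 95% confidence interval for the difference

noninferior = ci_lower > -margin   # "not worse by more than the margin"
superior = ci_lower > 0            # the stronger claim such a trial is not designed to make

print(f"difference = {diff:+.3f}, 95% CI lower bound = {ci_lower:+.3f}")
print(f"noninferior: {noninferior}, superior: {superior}")
```

With these made-up numbers the new drug clears the noninferiority bar even though the same data would not support a claim of superiority, which illustrates why such trials tell clinicians little about which drug to prefer.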
Second, allowing companies to test their own products has led them—as rational economic actors—to design trials in ways that minimize detection and reporting of harms and maximize evidence of benefits.[46] Furthermore, clinical trials for new drugs are designed to test primarily for efficacy and generally are not able to detect less common adverse events. Industry-friendly rules allow companies to exclude those patients most likely to have adverse reactions, while including those most likely to benefit, so that drugs look safer and more effective than they are in practice.[47] Approvals based on scientifically compromised trials underlie the large number of heavily marketed new drugs with few or no new therapeutic benefits to offset their under-tested risks of harm.
Third, companies have created what can be characterized as the trial-journal pipeline because companies treat trials and journals as marketing vehicles. They design trials to produce results that support the marketing profile for a drug and then hire “publication planning” teams of editors, statisticians, and writers to craft journal articles favorable to the sponsor’s drug.[48] Articles that present the conclusions of commercially funded clinical trials are at least 2.5 times more likely to favor the sponsor’s drug than are the conclusions in articles discussing non-commercially funded clinical trials.[49] Yet, journal publication is deemed to certify what constitutes medical knowledge. Published papers legitimate the pharmaceutical products emerging from the R&D pipeline and provide the key marketing materials.
Furthermore, companies are much less likely to publish negative results and they have threatened researchers who break the code of secrecy and confidentiality about those results.[50] Positive results are sometimes published twice—or even more often—under different guises. This further biases meta-analyses—a method of statistically combining the results of multiple studies—and clinical guidelines used for prescribing. The result is “a massive distortion of the clinical evidence.”[51] For decades, the FDA has kept silent about these practices and about the discrepancies between the data submitted to the FDA by companies and the findings published in journal articles, to the detriment of patients but much to the benefit of the companies.
In sum, industry testing and the FDA’s approval criteria provide little or no information to clinicians on how to prescribe new drugs, a vacuum filled by company-shaped “evidence” that misleads physicians into prescribing drugs that are less safe and effective than the evidence the FDA possesses would indicate.
PDUFA: Conflict-of-Interest Payments
In 1992, after years of underfunding and cuts in the 1980s that contributed to drug review times ballooning from 6 to 30 months, Congress passed the Prescription Drug User Fee Act (PDUFA), authorizing the FDA to collect “user fees” from drug companies that would allow it to hire 600 more reviewers and thereby speed up drug review.[52] Supporters claimed that fees would increase incentives for innovation and improve health; but aside from clearing the backlog of NMEs waiting for approval, industry fees have not increased innovation as measured by clinically superior drugs.[53]
In return for paying user fees, companies required the FDA to guarantee that it would review priority applications within six months and standard applications within 12 months of submission. Shortened review times led to substantial increases in serious harms. An in-depth analysis found that each 10-month reduction in review time—which could take up to 30 months—resulted in an 18.1-percent increase in serious adverse reactions, a 10.9-percent increase in hospitalizations, and a 7.2-percent increase in deaths.[54] Now, 20 years later, what Carpenter calls “corrosive capture” has set in—a weakened application of regulatory tools and a cultural capture of rhetoric about saving lives by getting new drugs to patients more quickly.[55] For the FDA, the reduction in review time combined with the fear that missing review deadlines will jeopardize continued PDUFA funding has also led to an increase in “up against the wall” approvals as review deadlines approach. Carpenter and his colleagues found that “the probability of a drug approved in the two months before the deadline receiving a new black-box warning (the most serious safety warning that the FDA can issue) is 3.27 times greater than a drug approved at some other time” and the likelihood of a drug being withdrawn from the market because of serious adverse events is 6.92 times greater.[56]
These detailed studies corroborate what FDA staff told the Office of the Inspector General,[57] namely, that concerns arising near the end of the review period are not adequately addressed, that needed meetings with advisory committees are not held, and that label warnings and contraindications are hastily written. As a result, there are “tens of thousands of additional hospitalizations, adverse drug reactions, and deaths.”[58]
The 1998 withdrawal of five drugs, used by 19.8 million Americans, prompted critical reflection. Three distinguished physicians were struck by how little information had been gathered about the harmful side effects of these drugs before they were withdrawn.[59] They attributed inaction to the FDA’s lack of interest in safety, lack of funds, and to “the lack of a proactive, comprehensive and independent system to evaluate the long-term safety, efficacy, and toxicity of drugs” after FDA approval. To compensate for the FDA’s failures, they called for an independent National Drug Safety Board—akin to the National Transportation Safety Board that investigates each plane crash and holds public meetings—so that the same part of the FDA that approves drugs, the Center for Drug Evaluation and Research (CDER), would not later be asked to decide whether that drug should be restricted or withdrawn. In other words, public health would not depend on FDA officials’ willingness to admit their own mistakes. Such an independent board should establish an active monitoring system and gather comparative data across a given therapeutic class so it could provide objective information and develop better strategies for addressing adverse reactions as a major cause of death.
In 1997, a year before these five withdrawals, Congress had passed PDUFA II and companies had insisted that none of the fees collected be spent on post-market surveillance or on drug safety programs. PDUFA II, III, IV, and V and related legislation provided the FDA with steeply increasing user fees but included lower criteria for approval, mandated that an industry representative be on FDA scientific advisory committees, lowered barriers to promotional efforts by companies, and required FDA officers to consult and negotiate with industry on the agency’s goals and plans.[60]
Offsetting the harms associated with PDUFA I’s shortened approval framework are several tools created in PDUFA III through V for detecting, managing, and raising awareness of risks, such as the Sentinel system and the Risk Evaluation and Mitigation Strategies; but there is no clear evidence that these are reducing the epidemic of harms.[61] These tools are inadequate to counterbalance the increase in risks—let alone to improve safety. The additional $10 million of funding provided by PDUFA III for the Office of Drug Safety and the $7.5 million provided for the FDA’s advertising enforcement arm are tiny in comparison to the more than $690 million in user fees that flow to the FDA each year.[62] In sum, PDUFA allocates user fees overwhelmingly to ensure speedy review of new drug applications while leaving safety and enforcement dependent on grossly inadequate funding, perpetuating a history of underfunding safety.
Granting priority status to more drugs further increases the number of drugs reviewed in the shortest time, and the chance of a major safety issue rises from one drug in five to one in three.[63] Between 1999 and 2008, the FDA gave priority review status to almost 47 percent (114 of 244) of new drug applications, more than four times the proportion of drugs found to have superior clinical effects by independent review groups.[64] Reflecting the cultural and corrosive capture of the FDA, its Commissioner said recently that “an increasing number of treatments are being approved under the agency’s fast-track, priority review … to get critical and innovative medicines to market more rapidly.”[65] Quicker reviews and less evidence of clinical benefit have rewarded the hidden business model of developing still more drugs with minor benefits.
Post-Marketing Surveillance
Large and growing user fees, with a sunset expiration every five years and a threat of nonrenewal that would severely cripple the FDA, have intensified the classic principal-agent conflict.[66] Marcia Angell, former editor-in-chief of the New England Journal of Medicine, observed that, “[i]n effect, the user fee act put the FDA on the payroll of the industry it regulates [and]…has drastically changed the way it operates.”[67] The FDA’s obligation to serve the public is being corroded by pressures to serve the companies it regulates. As for post-market surveillance —“the single most important function…for protecting the public against the dangers of harmful drugs”[68]– it is put largely in the hands of the manufacturers and the FDA Center for Drug Evaluation and Research (CDER), the part of the FDA that companies pay to review their new drug applications.
After approval, aggressive marketing of new drugs to doctors for both approved and unapproved uses before good safety information is available maximizes the number of patients exposed to risks from the roughly 25 to 40 new drugs approved annually.[69] Field studies find that most drug representatives do not discuss adverse side effects.[70] Although the law requires companies to submit some marketing materials for review, Congress and the FDA allocate only a small budget and staff to review about 75,000 submissions a year for false or misleading information.[71] Further, the small stream of letters ordering that inaccuracies be corrected is subject to a review process that delays their reaching the companies.[72]
Marketing for unapproved or “off-label” uses worsens the balance of harm and benefit and undermines the purpose of testing to show that a drug is effective and safe for a specific use.[73] While trying drugs for new uses is clinically important, especially for certain populations such as children and cancer patients, 75 percent of off-label prescribing is neither supported by sound evidence nor accompanied by an organized means for gathering such evidence.[74] Companies retain leading experts to expand use, broaden clinical guidelines, and conduct small, short sham trials that companies get published and hand out to their physician-customers as “evidence.”[75]
A 15-month investigation by the Committee on Government Reform of the U.S. House of Representatives found “a growing laxity in FDA’s surveillance and enforcement procedures, a dangerous decline in regulatory vigilance, and an obvious unwillingness to move forward even on claims from its own field offices.”[76] The resulting 2006 report also documented a 53.7 percent decline in warning letters. Since then, FDA leadership has shifted to talking about being a “partner” with industry to get more drugs to patients more quickly.
For the reasons we explained above, the proportion of new products with clinical advantages seems to have moved from about 1 in 8 down to 1 in 12, while the proportion with serious harms has gone up from 1 in 5 towards 1 in 3 as the number of drugs given priority status increases.
Restoring Institutional Integrity for Safer Drugs
Many concerned experts have suggested ways to reduce conflicts of interest and improve the safety and effectiveness of drugs.
First, while research companies play important roles in discovering and developing superior drugs, they should play no role in testing them.[77] Over the years, expert bodies and prominent scientists have called for an independent institute to test drugs because commercial trials were so poor, biased, and conflicted.[78] Yet this bedrock reform has never been accomplished, as the industry’s lobbying of Congress and its contributions to Congressional campaigns have soared.[79]
Second, the FDA needs new leadership to restore public trust and build a new culture focused on safety through enforcement of its existing rules. Hearings through the 1960s and 1970s documented how frequently the FDA failed to adhere to its own rules and protocols.[80]
Third, user fees must end and the FDA must be funded entirely by taxpayers-as-consumers, so that the agency is clear about whom it serves.
Fourth, while approval criteria should allow for a sufficient number of therapeutically equivalent drugs in a class to give clinicians a range of choices,[81] they should also require patient-relevant evidence of superiority. Limiting the number of drugs in the same therapeutic class worked successfully in Norway but was stopped when Norway harmonized its regulatory requirements with those of the European Union in 1995.[82] Non-inferiority trials should be allowed only if one can ethically justify entering patients into a trial in which there can be no benefit for them.[83]
All adverse events, including those occurring among subjects who drop out, must be reported with follow-up for two years.
Fifth, Congress needs to restore trust by creating a National Drug Safety Board with adequate powers, funds, and mandates to independently investigate and report on drug safety issues. The creation of this board would support the position that all data related to how drugs and vaccines affect people are a public good and that access to this data is a human right. Both the inadequacy of pre-approval safety testing and the lack of systematic post-approval monitoring need urgent attention.[84]
None of this is likely to happen until third-party payers, politicians, and the people decide they want to stop paying so much for so many drugs of little value and then for treating the millions harmed by those drugs. Nor is it likely until the campaign to restore institutional integrity to Congress through funding elections by the 99 percent, rather than by the one percent, is successful.[85]