Release Date: April 20, 2016
BUFFALO, N.Y. – Policymakers in the U.S. have an opportunity, in light of the growing opioid epidemic, to craft effective and comprehensive drug policies that better protect everyone from the harms of addiction, according to a University at Buffalo historian who specializes in the history of prescription drug abuse and addiction.
“To do this we need to acknowledge an enormous middle ground that exists between medical and non-medical use of these substances,” says David Herzberg, an associate professor in UB’s Department of History and lead author of the commentary, “Recurring Epidemics of Pharmaceutical Drug Abuse in America: Time for an All-Drug Strategy,” co-authored with Honoria Guarino, Pedro Mateu-Gelabert and Alex Bennett, which appears in the March issue of the American Journal of Public Health.
Herzberg, an expert in the history of 20th-century pharmaceuticals and the consumer culture, says that category errors often hobble efforts to establish effective drug policy and curb drug abuse.
“Historically, the vast majority of psychoactive drug use, and drug addiction, has involved legally produced pharmaceuticals,” he said. “An obsessive focus on ‘medical’ versus ‘non-medical’ obscures this fact, and makes it difficult to develop drug policies designed to keep drug users safe.”
Herzberg says the divide between those two extremes means that many people are poorly served by policies that rely on these erroneous categories – and it’s not a new problem.
“Today’s opioid epidemic gives us an opportunity to re-think our basic common sense about drugs and addiction,” he says.
“As a historian, I know how rare this is, and I don’t want us to lose this opportunity.”
In the late 19th century, a confluence of technological and commercial developments made pure narcotics like cocaine, morphine and heroin widely accessible and easy to use.
“These drugs had been around for a long time, but now they were available in a purified form that could be sold on a national market,” says Herzberg. “They were made by pharmaceutical companies; they were considered medicines; and they developed popular followings.”
These factors contributed to producing the highest addiction rates in U.S. history, until the arrival of the current opioid epidemic, he says.
After professional and federal reforms reduced use of opiates and cocaine in the early 20th century, a second wave of addictions emerged in the 1920s as doctors started prescribing newly discovered drugs, barbiturate sedatives and, eventually, amphetamine stimulants, for many of the conditions they had previously treated with narcotics.
Large populations developed dependence on pharmaceutical sedatives and stimulants, but because of the category problem, few authorities understood it as addiction.
Authorities defined “addicts,” says Herzberg, as people who used drugs for pleasure, not therapy. Thus middle-class, suburban men and, especially, women who used barbiturates, often provided by physicians, were not seen as addicts even when they became dependent or suffered drug harms.
The problem, Herzberg argues, is that policy was focused on either protecting those seen as innocent victims or punishing those viewed as purposeful deviants.
The Pure Food and Drug Act of 1906, the country’s first consumer protection legislation, which created the Food and Drug Administration, required accurate drug labeling.
This measure approached addiction as something like accidental poisoning, and suggested that knowing a medicine’s contents could help protect consumers from addiction.
“But there was so much morphine, cocaine and heroin moving through various channels that it was impossible to contain,” says Herzberg. “So reformers turned to more restrictive measures.”
The Harrison Anti-Narcotic Act of 1914 was a punitive measure. It worked so well that after 14 years on the books, three out of every 10 federal inmates were serving time for narcotics violations.
“The Anti-Narcotics Act reduced narcotics use so much that people suffered a lot of unnecessary pain,” says Herzberg. “There was so much paranoia about narcotics prescriptions that people in need no longer had access.
“That can’t get lost in the present fear of the opioid epidemic.”
Informed by the shortcomings of the Pure Food and Drug Act, policymakers knew the Harrison Act would require a dedicated crackdown on what was a large and profitable market.
“To overcome resistance to regulating the large narcotics market, reformers sensationalized non-medical drug users as fearsome ‘junkies,’” says Herzberg. “This ushered in an era of divided drug policies that were both too strong (in punishing non-medical users) and too weak (in regulating medical markets).”
In the 1970s, the Controlled Substances Act downplayed distinctions between street drugs and prescription drugs. All psychoactive substances were placed on a “schedule” of controlled substances, categorized by their medical value and their risk of abuse. While the law increased policing of non-medical use, it increased funding for addiction treatment even more.
“This changed the meaning of medicine,” says Herzberg. “The laws incorporated the idea that psychoactive drugs can produce addiction — even when they are medicines.”
The gains were short-lived, however, because they relied on two fragile political commitments.
“First, to see medicines as risky requires a certain skepticism of corporations and physicians, two major American institutions,” Herzberg says. “Second, you have to believe that some addicts are not bad people — something made possible at that time, in part, by the civil rights movement, which had been challenging racial stereotypes in other contexts.”
By the 1980s, the Reagan administration had relaxed regulations to unleash the private sector, including the pharmaceutical industry. It also renewed the war on drugs, with its concomitant perception of addicts as super-predators who should be imprisoned.
As a result, Herzberg says, some of the effective policy tools were lost again.
“The substances on the street and in the medicine cabinet are often the same, but the people who use them are treated very differently,” says Herzberg.
“From a historian’s point of view, the best drug policy may have come in the 1970s, when we at least partially looked beyond those categories, acknowledged that many millions of Americans used sedatives, stimulants and narcotics; recognized that this was risky behavior; and attempted to protect the full range of drug consumers from those risks.
“It wasn’t perfect, but when you look at a century of repeated failures to protect Americans from the harms of drugs, you can’t throw away successes when they come – no matter how partial they were.”