TCS Daily

Drug Testing, Drug Hazards

By Henry I. Miller - September 15, 2006 12:00 AM

A clinical trial that went badly awry at London's Northwick Park Hospital in March became the drug-testing community's worst nightmare. Six healthy volunteers ended up in intensive care after each received the first injection of a new drug called TGN1412, a highly purified antibody intended as a treatment for autoimmune diseases such as rheumatoid arthritis.

What particularly alarmed some observers was that the violent reaction - "cytokine storm," the outpouring of hormone-like chemicals from certain kinds of white blood cells, which gives rise to a widespread toxic response - seems not to have resulted from problems with the manufacture or formulation or from contamination of the drug. British regulators concluded that the immediate cause was probably "an unpredicted biological action" of the drug itself.

In order to understand the implications of these events, a little background is necessary. Clinical trials study potential treatments in human volunteers to determine whether they should be approved for use in the general population. Before human experimentation begins, drugs must be tested in laboratory animals to determine toxicity, estimate dosage, and gain information about pharmacology. Only drugs that have acceptable safety profiles and show promise are then moved into clinical (human) trials.

Clinical trials are carefully designed to answer specific research questions. The trial protocol specifies study procedures and measurements, whether the drug will be tested against standard treatments or a placebo, and how the data will be analyzed. The trials are conducted in graduated phases. Initial trials usually involve a small number of healthy volunteers and are intended to determine dosing, document how a drug is metabolized and excreted, and identify acute side effects. Progressively larger studies - involving as many as tens of thousands of subjects - are then performed to document safety and efficacy in patients who have the disease or condition for which the drug is ultimately intended. When there is sufficient evidence of safety and efficacy, the drug's sponsor submits the data to regulators and requests permission to market the drug.

The gradual progression in the size and complexity of clinical trials is intended to minimize untoward, unexpected effects on test subjects and, ultimately, on patients to whom the drug is prescribed. But clinical trials, by definition, explore uncharted territory, and at any stage of the process the unexpected can occur.

When I was a medical officer at the U.S. Food and Drug Administration in the early 1980s, a completely unexpected side effect occurred in the initial clinical testing of a new formulation of human growth hormone: At the outset of the initial clinical studies in healthy human volunteers (who happened to be executives of the drug company), the drug caused extreme pain at the injection site, fever, and blood chemistry abnormalities that indicated an inflammatory process. The culprit turned out to be a low-level contaminant in the drug preparation that stimulated human white blood cells to release a substance that caused the signs and symptoms. The contaminant had not been detected in the standard, sophisticated screening tests that are supposed to assure a drug's purity and quality, nor was it found in preclinical animal studies, because of the indirect mechanism of its toxicity and its specificity for human cells. Thus, prior to the human trial, the problem was exceedingly difficult to predict and avoid.

More damaging by far was the use of a synthetic estrogen called diethylstilbestrol, or DES, which American physicians prescribed from 1938 to 1971 to between five and ten million pregnant women in order to prevent miscarriages or premature deliveries. In 1971, federal drug regulators advised physicians to stop prescribing DES to pregnant women because it was linked to a rare vaginal cancer in female offspring. In other words, the side effect was not seen until 15 to 20 years or more after the daughters were exposed to the drug in utero. Such a long lag between exposure and the adverse effect makes the association extremely difficult, perhaps impossible, to detect during clinical testing.

Finally, it is axiomatic that very rare drug side effects are detectable only when large numbers of subjects are exposed.

So where does all of this leave us?

Drug testing is a risky business, so we need to do it according to prevailing ethical standards and using the most advanced scientific methods. The unexpected toxicity found in the recent British drug trial serves as a reminder that when a substance about to be administered for the first time to humans acts through a novel mechanism or on a poorly understood biological target, it is prudent to begin at a very low dose with a single subject, and to leave a reasonable interval between exposures of additional persons. But we cannot be deterred from performing clinical trials if we are to have the innovative new drug products needed by an aging population.

Henry I. Miller is a physician and fellow at the Hoover Institution and the Competitive Enterprise Institute. From 1989 to 1993, he was director of the U.S. FDA's Office of Biotechnology. His most recent book, "The Frankenfood Myth...," was selected by Barron's as one of the 25 Best Books of 2004.
