Editor’s note: Today Marcus Chatfield continues his series on Straight Inc., the involuntary treatment program for adolescents suspected of drug use that operated in several states between the 1970s and 1990s. Parts 1 and 2 of the series can be found here and here.
In Help at Any Cost (2006), Maia Szalavitz reveals some of the troubling history of coercive programs. The subtitle of her book is, “How the Troubled Teen Industry Cons Parents and Hurts Kids,” and this is one of the hardest things for a survivor to describe – the deceit that protects abusive programs. Dr. Charles Huffine writes, “I cannot tell you how many youth I’ve been in contact with that do not tell their family about the painful aspects of their experiences for fear of making their family feel bad — though I can say they number in the majority. All too frequently, simply, they did not know they were abused, or worse, that the abuse was justified and necessary for them to ‘get better.’”
Tough-love programs often ritualize emotional testimonies and require testimony about conversion experiences as a prerequisite for release from treatment. Because there is no scientific evidence to validate the safety and efficacy of coercive methods, these anecdotes are the “hook” that this multi-billion dollar industry is built upon. Many victims of thought-reform treatments, like victims of domestic violence, will defend their captors as a self-protective survival response. Similar to abusive dynamics in families, when people are beaten down long enough they may believe it’s normal, deserved, and even good for them. As one former staff member of the program said to me recently, “at the time I graduated I was so duped into believing that I’d been helped, I couldn’t even begin to see the damage caused to me.”
Abusive program dynamics lend themselves to inadvertent sentinel events, but they are also designed to induce trauma responses in order to produce desired psychological changes. Through such trauma responses, captive victims of torture will often begin to identify with those they depend upon for their basic human needs, including the need for human connection. These captors possess the omnipotent power to bestow relief from the deprivations they impose, and in these dehumanizing situations it is natural for victims to attempt to create meaning where there is none. As the captive learns to please the abusers, small increments of freedom and privilege may be awarded and experienced with overwhelming gratitude. This gratitude can be played upon and perpetuated to produce “improvements” that eventually result in moving testimonials. Much like American POWs proclaiming gratitude for their communist re-education after the Korean War, adolescent victims of thought-reform treatment and their families often tell emotional stories of conversion. Most of these testimonies claim that the program “saved their lives” and “gave them back their child.”
In their 1989 article “Outcome of a Unique Youth Drug Abuse Program: A Follow-up Study of Clients of Straight Inc.” in the Journal of Substance Abuse Treatment (JSAT), Alfred S. Friedman, Richard Schwartz, and Arlene Utada ignore this phenomenon and the subject of abuse entirely. In their report, they focus on numbers, omitting the context in which their numbers were produced. They admit that their data was based on faulty methodology, and that their findings are “almost not to be believed,” but repeatedly refer to this flawed data as being “significant.” The authors contradict themselves, claiming significance while simultaneously admitting there is a serious lack of validity to their findings.
For example, they claim that “very significant reductions in average frequency of use … were reported,” and repeat this assertion two pages later, saying a “statistically significant amount of reduction in drug use was, in fact, reported.” But then they state, “these results must be regarded only as suggestive rather than definitive because … an important difference exists between the question asked of each client at intake as compared to the question asked at follow-up.” They explain that upon intake, when clients were asked about frequency of use, “no precise period of time frame was indicated by the examiner,” and that they compared this data with a much shorter timeframe, which was “the month preceding the follow up assessment (the time frame adopted for reporting post treatment frequency of use).” They say that this month-long window of assessment at follow-up “was probably shorter, for most of the clients, than the duration of time, pretreatment, during which they were exposed to drug use.” Not surprisingly, they fail to disclose that many clients in treatment at Straight had no history of drug abuse.
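The scale of the distortion this timeframe mismatch can create is easy to illustrate. The numbers in the sketch below are hypothetical, chosen only to show the mechanism: when intake reports cover a long, unspecified period and follow-up reports cover only one month, even a client whose drug use never changed will appear to have reduced it dramatically.

```python
# Hypothetical illustration of the mismatched-timeframe flaw.
# Assume a client who uses a drug at a perfectly constant rate.
monthly_rate = 8  # uses per month, unchanged throughout

# At intake, "no precise period of time frame was indicated," so
# suppose the client reports total frequency over the preceding year.
intake_report = monthly_rate * 12      # 96 reported uses

# At follow-up, the question covers only "the month preceding the
# follow-up assessment."
followup_report = monthly_rate * 1     # 8 reported uses

# Comparing the raw counts manufactures an apparent reduction even
# though the underlying rate of use never changed.
apparent_reduction = 1 - followup_report / intake_report
print(f"Apparent reduction: {apparent_reduction:.0%}")  # prints "Apparent reduction: 92%"
```

A spurious 92-percent "reduction" appears with no change in behavior at all, which is exactly why the authors' concession that the pre/post comparison is "inequitable" undercuts their repeated claims of significance.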
The authors go on to discount their comparisons while simultaneously asserting (again) that these are in fact significant findings: “our comparing such frequencies [of drug use] would result in an inequitable pre versus post comparison and in an exaggeration of the degree of reduction that occurred in frequency of use. Even if we discount or repudiate the findings of statistically significant reductions in prevalence rates and in frequencies of use …. there remains the evidence that a significant number of the clients reported that they reduced their substance use … This finding could be taken as probable evidence … were it not for the possibility that a certain amount of denial and discounting of increased drug use may have occurred in the subjects’ self-reports at follow-up.” In other words, their follow-up data is not only invalid, it is also likely to be inaccurate. In the abstract, however, they simply claim that “Eighty-five (85) percent of the clients reported that their drug use was less at follow-up than when they started in the program.” Not only do they misrepresent their data, they do not disclose the main reason to suspect its inaccuracy, which was the corrupted nature of the study itself.
Friedman, Schwartz and Utada simply state in a footnote that co-author Richard Schwartz is Straight’s Medical Director, and that he participated in all phases of the study and provided the data. We can only assume that the biases and conflicts of interest raised by these facts would have been described if they had been accounted for. Their omission suggests that the design, the selection of the study sample, the collection of data, and the interpretation of the findings may all have been affected by Schwartz’s professional loyalties to Straight.
The authors also state in this footnote that their report fulfills a contract with the National Institute on Drug Abuse (NIDA). They do not disclose the fact that Straight’s most prestigious and influential paid advisor was Robert DuPont, NIDA’s founding director in the 1970s.
Although DuPont was no longer officially affiliated with NIDA at the time of publication, he was professionally affiliated with Straight for many years. He is quoted on Straight brochures and by Arnold Trebach in the book, The Great Drug War, saying in 1981 that Straight was the single best treatment program he had seen, bar none. In addition, Straight executives state in official correspondence (3-4) with the White House that DuPont was the force behind Straight’s $18 million national expansion plan. In 1984, DuPont testified in defense of Straight in federal court as an expert witness during the Fred Collins case, in which Straight was successfully sued for false imprisonment and abuse. In 1989, DuPont sent a letter (5-7) to President George H.W. Bush, suggesting that the president visit the Springfield Straight facility as part of a widely publicized public awareness campaign. DuPont testified in court that he encouraged Nancy Reagan to learn about Straight during the first few weeks after she became First Lady, and he also stated that he encouraged her to visit the St. Petersburg Straight facility. These visits by Nancy Reagan were heavily televised promotions of the program.
Ethical research standards require that when the benefactor of a study has a potential interest in the positive outcome of research, safeguards should be implemented and described. It would be wrong to assert that these facts “prove” that NIDA had a vested interest in Straight’s research, but it also would be naive to dismiss the close ties that Straight’s executives had with NIDA, the White House and other federal offices that funded Straight.
Straight’s research makes no mention of the need to protect its participants. More importantly, not only did the research put former clients at risk, the “treatment” being studied was itself risky and experimental, making ethically meaningful research on it impossible. As stated by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in the Belmont Report, “there cannot be any meaningful protection of research subjects in the field of mental health research unless there is regulation of innovative, experimental, research-demanding mental health treatments” (15-17).
It is the known efficacy of a behavioral therapy that determines the degree to which it is a therapeutic practice or an experiment. Good intentions alone do not substantiate the efficacy (or safety) of a treatment, as also stated by the Belmont Report: “the boundary between research and practice is the degree to which the knowledge of efficacy exists. That knowledge is a complex, but inevitable function of the extent to which the relevant research has already been done and replicated, not of the intentions of the particular scientist or therapist” (15-15).
Next week: design flaws in detail.