Good for the FDA denying this attempt to treat PTSD with Molly

A lot of science is agenda-driven. Researchers already know the answer, the heralded Truth, but no one’s going to take their word for it, so they have to go through the motions of publishing material in a peer-reviewed journal to garner legitimacy. Cf. Andrew Gelman’s thoughts on the subject; the link is only a sample of what he’s written about it.

Unsurprisingly, I found this God-awful study making the rounds. The long and short of it is that Lykos Therapeutics has packaged Molly/ecstasy/MDMA to treat PTSD, and an organization, MAPS (the Multidisciplinary Association for Psychedelic Studies), has gone All-In on pushing it. They’re not exactly separate organizations, either, since Lykos used to be the MAPS Public Benefit Corporation. The basic purpose of MAPS is to legitimize psychedelic treatments to gain FDA approval and widespread acceptance.

The MAPS authors (and really let’s call them that) published the above Nature article, highlighting very positive results for the treatment compared to placebo. They probably figured FDA approval was inevitable, because their work showed incontrovertible evidentiary value for the treatment. After all, a randomized controlled trial, the Gold Standard of scientific research, demonstrated a large effect size in the treatment’s favor. Done, dusted.

But we’re a bit wiser now as a community of research consumers, and so an FDA advisory panel, apparently full of cranky skeptics, rejected the study’s conclusions–and the FDA proper followed suit and denied the drug.

I was heartened to see this denial. We’re in a replication crisis and have been for quite some time. Daniele Fanelli has pointed out that the crisis is uneven, and we shouldn’t assume all fields are equally bad; that’s fair. But given the insane amount of data manipulation, and even outright fraud, that goes on in what we consider research today, the default position for any scientific finding should be skepticism. The burden of proof on researchers has never been higher.

So, with this study, I naturally want to review the data. This is 2024, after all.

The data that support the findings of this study are available from the sponsor beginning 1 year after completion of the trial. However, restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available. Data are, however, available from the authors upon reasonable request and with the permission of the sponsor. All requests for raw and analyzed data are promptly reviewed to verify if the request is subject to any confidentiality obligations. Participant-related data not included in the paper were generated as part of clinical trials and may be subject to participant confidentiality. Any data that can be shared will be released via a data use agreement. Proposals should be directed to https://maps.org/datause. Source data are provided with this paper.

Let me translate this for you.

The data that support the findings of this study likely will not hold up to independent scrutiny. They may have been outright fabricated. Very likely other trained statistical analysts would come to differing conclusions than we did, based on the same data. We will say that the data are available upon reasonable request, but we will ignore any such requests and not release anything. If you do somehow strong-arm us into giving you access to the data, then you will need to sign a legal agreement saying that you won’t second guess what we’re doing or risk litigation.

This is fucking ridiculous. All the data are anonymized, so there’s absolutely no justification for not releasing them. You clowns want to hook PTSD sufferers–literally one of the most vulnerable populations you can find–on psychedelics, you want government agencies to approve this plan, and you don’t think it appropriate to share the mother fucking data with the very same public you want to treat?

Avon said it best. What. The. Fuck.

When I see that you won’t share the data, I automatically assume you did something wrong. Is that fair? Yes, it is. Nothing stops you from uploading a file to a website. It’s not hard. I can give you a tutorial on it if you want. What does stop you from doing that is fear. Fear that the Data Thugs and Methodological Terrorists might find something untoward in your data, code, or analysis. Fear that someone will find an Excel error that invalidates everything you wrote. Fear that data sleuths will expose you as a fraud.

When the FDA advisory board rejected the study’s findings, one dude dissented. Dr. Walter Dunn acknowledged the many problems in the study, which I’ll get to, but claimed the “effect sizes of the treatment were large enough to indicate it can be effective for PTSD.”

Bro, what?

Do you think misconduct might somehow relate to the obviously inflated effect size you read about? Rather than evidence in favor of the treatment, the big effect size should be evidence against it, because it likely wasn’t obtained legitimately. That’s literally the opposite of the conclusion you were supposed to draw, my man. Big effect sizes are often the product of underpowered studies or of all kinds of questionable research practices, which is a pleasant euphemism for trash science.

When someone presents you with an implausibly large effect size–like a fifteen-minute writing exercise lowering suspension rates among black students by two-thirds (we’ll get to this one)–then they’re doing you a favor, because they’re telling you that the study is total bullshit. You don’t have to waste your time reading any of it; just move on with your life. At the absolute best, the implausibly large effect size resulted from a useless measure or some variation on “teaching to the test”: say, immediately after a workshop on critical reasoning, participants took an attitudinal survey and claimed they felt super confident in critical reasoning. But that’s the absolute best case scenario. More insidious is that the effect was manipulated.
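The underpowered-study mechanism, at least, is easy to demonstrate yourself. Here’s a minimal simulation sketch (the function name and every parameter are my own illustrative choices, not anything from the paper): run thousands of small two-group studies with a tiny true effect, “publish” only the significant ones, and look at the average published effect size.

```python
import math
import random
import statistics

def simulate_winners_curse(true_d=0.2, n=20, trials=5000, t_crit=2.02, seed=1):
    """Run many underpowered two-group studies with a small true effect,
    then average the observed |d| among the 'significant' results only.
    t_crit approximates the two-sided t critical value for df = 2n - 2."""
    rng = random.Random(seed)
    published = []
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treated = [rng.gauss(true_d, 1.0) for _ in range(n)]
        pooled_sd = math.sqrt((statistics.variance(control) +
                               statistics.variance(treated)) / 2)
        d_obs = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd
        t_stat = d_obs * math.sqrt(n / 2)  # two-sample t with equal group sizes
        if abs(t_stat) > t_crit:           # 'significant' -> gets published
            published.append(abs(d_obs))
    return sum(published) / len(published)

print(simulate_winners_curse())  # far larger than the true d of 0.2
```

With a true d of 0.2 and 20 subjects per group, a result can only clear the significance bar if the observed |d| is roughly 0.64 or larger, so the average “published” effect ends up several times the truth. That’s the winner’s curse, before anyone even touches the data dishonestly.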

How suspect is the study’s effect size? I’d say, very suspect, because it’s almost identical to another Lykos-sponsored study conducted by basically the same gang.

Least squares (LS) mean change in CAPS-5 score (95% confidence interval (CI)) was −23.7 (−26.94, −20.44) for MDMA-AT versus −14.8 (−18.28, −11.28) for placebo with therapy (P < 0.001, d = 0.7) (2024)

The mean change in CAPS-5 scores in participants completing treatment was −24.4 (s.d. 11.6) in the MDMA group and −13.9 (s.d. 11.5) in the placebo group.(2023)

Wow, that’s incredible. In 2023 they found reductions of -24.4/-13.9 in treatment/control, and in 2024 they found -23.7/-14.8. They’re functionally equivalent values, generated as if there were almost no variation at all. That’s one super stable effect, basically impervious to the variance endemic to human research.
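You can at least sanity-check the arithmetic from the quoted summary statistics. Here’s a quick sketch computing Cohen’s d with a pooled standard deviation from the 2023 completer figures; note that the group sizes in the call are my assumption for illustration, not numbers taken from the paper.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Mean changes and s.d.s are quoted above from the 2023 paper; the completer
# counts (46 MDMA, 44 placebo) are assumed here for illustration only.
print(round(cohens_d(24.4, 11.6, 46, 13.9, 11.5, 44), 2))  # → 0.91
```

That lands around d ≈ 0.9 for 2023, alongside the d = 0.7 reported for 2024: two “large” effects, eerily close, from two separate trials in a field where effects this size are rare.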

That’s very suspicious to me, but I can’t probe any deeper, because, again, none of the data are available.

But that’s not the most suspicious finding to me. This is.

By study end, 37 of 52 (71.2%) participants in the MDMA-AT group no longer met DSM-5 criteria for PTSD versus 20 of 42 (47.6%) participants in the placebo with therapy group.

lmao

Hold up. PTSD is a very serious, persistent, enduring disorder. It’s the kind of thing that doesn’t just disappear with a sugar pill and talking it out. If it did, then it wouldn’t present in the population as it does. There wouldn’t be millions of people suffering worldwide with it every day, anchored to agonizing traumas.

So, OK, let’s put aside the absurdity in believing that 71% of those in the treatment group were basically cured of their PTSD–because that’s the implication here: they “no longer met DSM-5 criteria for PTSD.” The real smoking gun is that almost half of those in the placebo group did as well.

What on earth is going on when taking a placebo, supplemented by some mysterious talk therapy, can cure almost half of the participants of their PTSD? And, no, not over the course of five years with an intensive, longitudinal intervention. Over 18 weeks, in three sessions. Remember, too, that the vast majority of participants in this study were suffering from moderate or severe PTSD. These weren’t exactly people who kind of maybe sort of qualified as having PTSD under the DSM-5 criteria.
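For what it’s worth, you can put a number on the quoted remission counts with a plain pooled two-proportion z statistic; this is a generic sketch of that test (no continuity correction), not the paper’s analysis.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic, no continuity correction."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Remission counts quoted above: 37/52 for MDMA-AT, 20/42 for placebo with therapy
print(round(two_prop_z(37, 52, 20, 42), 2))  # → 2.32
```

The arm difference is nominally significant (z ≈ 2.3), but that’s beside the point. The eyebrow-raiser is the 47.6% placebo remission rate itself.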

This tells me that the research team has no idea what it’s doing. I don’t know specifically where to locate this ignorance, because the study itself exemplifies Closed Trust Me Bro Science, whether it’s in the measurement used, the way investigators classified PTSD, how data were tabulated, or what.

What should we have seen? Well, think it through. Maybe you are a True Believer in MDMA and so you believe 71% of those treated with it won’t show enough symptoms to be classified as PTSD, according to the DSM-5. Fine. Then what do you think, prior to collecting any data whatsoever, three sessions of talk therapy and a sugar pill would do? How many participants would no longer meet the criteria?

One participant, two, three perhaps, because of human variability, and how maybe some people who were screened with moderate PTSD didn’t actually have it. Maybe there was an error in classifying them. Not almost fucking half, lmao

Can’t stop lmao, sorry

And you know what? If your mysterious talk therapy is THAT GOOD, then why even deal with the drugs at all? I’d say that should be the takeaway here: trailblazing talk therapy so effective it cures almost half of its patients’ PTSD, in only a handful of sessions!

It’s so flagrant that the researchers even acknowledged it in their editorializing section, aka, the Discussion.

The notable effect seen in the placebo with therapy arm could suggest the standalone value of the manualized inner-directed therapy that was developed for use with MDMA. Additional head-to-head studies will need to be conducted to evaluate whether this form of manualized therapy provides greater value in the treatment of PTSD than the current first-line cognitive behavioral therapy and prolonged exposure therapy treatments

Or let’s Occam’s Razor this mawfucker and say you just don’t know what you’re doing. We’re totally blind to what this so-called inner-directed therapy even is, and to how it can magically show such a dramatic effect over and above what alternative treatments like cognitive behavioral therapy have done.

Dr. Paul Holtzheimer called the treatment a “bit of a black box.” And that’s a problem, what I’d call a fatal flaw. If you can’t clearly explain what it is you’re doing, then why should anyone believe what you say? That alone should be grounds to reject anything this study’s claiming. We want to move beyond Closed Science, and that means black boxes immediately invalidate a research finding. That’s it, no further discussion. Put the method up to scrutiny from everyone, from the lay public to experienced psychiatrists to government regulators. If you can’t do that, then don’t waste our time.

Now let’s talk about selection bias and sampling. One main worry about research is that it’s not generalizable. So, you should always pay special attention to where the trial participants come from. The article is vague about recruitment methods:

Participants were recruited from 21 August 2020 to 18 May 2022 (last participant visit on 2 November 2022). Overall, 324 individuals were screened, and 121 were enrolled.

The 2023 article I linked above is a bit more detailed about recruitment:

Participants were recruited through print and internet advertisements, referrals from treatment providers, and by word of mouth.

But this article discloses something else, that 40% of study participants had used MDMA before.

Smh

That’s a very high number. I suppose those suffering from PTSD will have a higher-than-baseline rate of MDMA usage, but this tells me that recruitment was likely done from a population already experienced with and sympathetic to MDMA. “Word of mouth” is doing a LOT of work here. Why shouldn’t I just assume the researchers were being recommended patients they knew might already be sympathetic to MDMA?

You have to trust researchers, who are motivated for profit and fame, to present credible findings. That’s a hard sell. These are people highly motivated to find success in psychedelic usage, and it’s no big shock that they can influence the vulnerable population over which they’re presiding here. So I wasn’t surprised at all, when doing a little digging, that others have had a similar concern.

Those folks pointed out how the study involved functional unblinding.

A blinding survey conducted at study termination showed that 33 of 44 (75.0%) participants in the placebo with therapy group were certain or thought they received placebo, whereas nine of 44 (20.5%) participants inaccurately thought that they received MDMA, and two of 44 (4.5%) participants could not tell. In the MDMA-AT group, 49 of 52 (94.2%) participants were certain or thought that they received MDMA; one of 52 (1.9%) participants inaccurately thought that they received placebo; and two of 52 (3.8%) participants could not tell
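You can quantify that unblinding with Bang’s blinding index, a standard per-arm summary that in its simplest form is (correct guesses − incorrect guesses) / n, where 0 is consistent with random guessing and 1 means everyone correctly identified their arm. A minimal sketch using the counts quoted above:

```python
def bang_blinding_index(correct, incorrect, dont_know):
    """Bang's blinding index for one arm: (correct - incorrect) / n.
    0 is consistent with random guessing; 1 means complete unblinding."""
    n = correct + incorrect + dont_know
    return (correct - incorrect) / n

mdma = bang_blinding_index(49, 1, 2)      # MDMA-AT arm counts from the quote
placebo = bang_blinding_index(33, 9, 2)   # placebo arm counts from the quote
print(round(mdma, 2), round(placebo, 2))  # → 0.92 0.55
```

An index of 0.92 in the MDMA arm is about as unblinded as a “blinded” trial gets.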

This is a problem, because there’s heavy pressure by everyone involved for the treatment to work. The therapists want it to work. The research team does. Lykos does. The participants do, understandably, since they don’t want to suffer from PTSD anymore. So when you have a study that says, we’re testing the effect of a drug on PTSD symptoms, and basically everyone in your treatment group can tell they have the drug, what happens?

1) The participants know they have the treatment and are motivated to report positively what happens. Imagine you’ve suffered from PTSD for a very long time, and it’s fucking debilitating, because it’s fucking PTSD, and you want something to work. You’re told that MDMA has helped other sufferers and it might help you, too; lots of groups exist now to push this idea. Then you take the drug and you know it’s MDMA because, well, you’re tripping, and so you’re motivated to disclose positive responses to it–you want the drug approved.

2) The therapists know who has the treatment and are motivated to report positively what happens. This is therapy we’re talking about, where people disclose their personal feelings. I’ve no doubt that participants told their therapists what arm of the study they believed they were in, which would basically unblind themselves to the therapists, who can then push them in a certain direction. Sarah McNamee, a participant in the study, wrote a letter claiming that this is exactly what happened to her and others she encountered in the study.

3) There’s no third thing, sorry. A couple is two, Phil.

Sarah wrote,

While I was in the study, there were many things my trial therapists did – things I accepted because I thought they were experts and I wanted to heal — and because they said this was a “paradigm shifting treatment” (i.e., a cue to release previously held beliefs about what therapy or medicine “should” look like). Things I would never do as a therapist or as a researcher. The list is too long and too vulnerable for me to fully cover in this format. But, it includes things like encouraging me to view my worsening symptoms as evidence of healing and “spiritual awakening;” seeding mistrust in mainstream psychiatry; talking to me about past life traumas; encouraging and, one time, pressuring me to cuddle with them; repeatedly telling me I was “helping make history” and that I was “part of a movement;” and letting me know how my responses and behaviours during and after the trial could jeopardize legalization

That’s a lot of text, so I know you don’t want to read it, but do it, slowly…and then get to the part that says, pressuring me to cuddle with them. What in the fuck. Cf. the Avon gif above.

Yes, that likely happened, because therapists previously involved with MAPS did exactly that. The therapist in question, Richard Yensen, made no bones about the fact that he was fucking a participant–consensually, he claimed–while she was still enrolled in the study. He was caught on video, with his wife (the licensed therapist of the crew), holding the participant down, spooning her, and trying to help her re-experience her traumatic sexual assaults.

So it’s very believable that the therapists involved with the current studies also engaged in inappropriate sexual behavior, as Sarah alleges, since they hail from the same organization Yensen did.

Like I said before, you have to take a lot of what researchers do on faith. You have to trust that they collected data honestly, that they applied the same standards to treatment and control groups, that they didn’t try to influence their patients’ responses, that they administered measurements in a standardized manner, that they performed the statistical analysis correctly–the list goes on and on.

Do I trust a gang of acid trippers who are spooning and fucking participants in the study? No, no, I don’t. If they don’t see a problem doing that, then I really doubt they’ll see any problem engaging in the questionable research practices so endemic to closed science, like p-hacking. Practices they likely aren’t even aware of as being problematic.

Finally, safety. The big concern with MDMA is that it’s not particularly safe and has abuse potential–both for the participant and for the therapist, who can exploit a vulnerable, drug-susceptible patient. Yet remarkably we have no such safety concerns in this study:

Most participants (102/104, 98.1%) experienced at least one treatment-emergent adverse event (TEAE) during the study (Table 2); seven experienced a severe TEAE (MDMA-AT, n = 5 (9.4%); placebo with therapy, n = 2 (3.9%)). None had a serious TEAE. Two participants (3.9%) in the placebo with therapy group discontinued treatment due to TEAEs. Frequently reported TEAEs (occurring with incidence >10% and at least twice the prevalence in the MDMA-AT group versus the placebo with therapy group) included muscle tightness, nausea, decreased appetite and hyperhidrosis (Table 2). These were mostly transient and of mild or moderate severity. At least one treatment-emergent adverse event of special interest (TEAESI) occurred in six of 53 (11.3%) participants in the MDMA-AT group and three of 51 (5.9%) participants in the placebo with therapy group (Table 2). No TEAESIs of MDMA abuse, misuse, physical dependence or diversion were reported.

Gee, isn’t that tidy. I just don’t believe it. The ICER folks I cited before (and again, because this is the Internet, and I can post as many links as I f’ing want, and share code and data, too, like a magician) spoke to study participants who claimed they had experienced unreported harm. The report authors have to abide by the Academic Game and so can’t come right out and say that. Instead, they write,

“There seems to be some disconnect between the reporting of these harms in the clinical trials and what we heard from patients; however, it is possible that this is due to the timing of evaluation measures rather than deliberate attempts to suppress these reports” (p. 9)

“Seems to be some disconnect,” lmao

Ok, then I’ll say it. There were adverse events, and the research team, for whatever innocuous or malicious reason you’d personally like to invent, didn’t report them. Personally, given all I know about these folks, and the cloak and dagger way in which this entire study was conducted, I’m not giving them the benefit of the doubt, but you do you, dawg.

It doesn’t surprise me that a bunch of people affiliated with the study wrote an aggressive rebuttal to the ICER’s skepticism. It’s a lot of yapping, because the authors interpret science as nothing more than rhetoric and politicization: we’re on this side, and you’re on that side, and we’ll vigorously protest what you wrote with words, words, and more words.

One hundred and nine therapists and principal/co-investigators contributed to the Phase 3 trials of MDMA-AP for PTSD. To our knowledge, none of them were consulted before the preliminary report was issued. However, this group is in the strongest position to describe the studies and address accusations related to inappropriate study design and conduct. In the absence of such input, a number of assertions in the ICER report represent hearsay, and should be weighted accordingly.

Sure, and Diederik Stapel was in the best position to know whether he fabricated data. Anil Potti, too. And Andrew Wakefield, whose bullshit connected autism with vaccines in the public mind forever. You see how this works? Appealing to yourself as the authority on whether you did anything wrong doesn’t carry the weight you believe it might.

Besides, where’s the data?

Then they write this,

The draft report indicates that one or more participants in MDMA-AP trials suffered significant boundary violations at the hands of study therapists, and suggests that such experiences would alter the risk/benefit analysis for this combination treatment. Unfortunately, the report relies heavily on one particular, well-publicized case of ethical misconduct in a Phase II trial, as well as anecdotal comments made by a small number of undisclosed study participants and unnamed “experts” rather than validated research outcomes.

This one’s truly out there, which is what I’d expect from these acid trippers. Yeah, man, the evidence that misconduct happened came from talking to the people alleging it happened–not from a validated research outcome, like the CAPS-5! Because somehow a numerical value’s going to signal unethical conduct? What in the fuck are you even saying?

I don’t care if you want to get high off psychedelics. Idgaf. But everything about this research tells me that this is sham science with zero promise. I’m sure some people report feeling better after taking MDMA. And, so you don’t misinterpret me: I’m sure they did feel better. But that’s not how you build a cumulative knowledge base. You need to do actual, open science, with transparent methods that you can share and that can be criticized by everyone.

Until you do that, you’re just a witch doctor.

Richard Yensen (right), in happier times.

