Rethinking Research Ethics: The Case of Postmarketing Trials

Good news!

Toward the end of the year in which I was working on my thesis, my supervisor had me write up a shorter version of my thesis for an attempt at publication. This was no small feat—imagine trying to compress a 90-page master’s thesis into 2 pages!

After my RA-ship ended, my supervisor, Jonathan Kimmelman, and Alex John London took the paper, made some substantial edits, and submitted it to a couple of journals. The paper was accepted, and this week it was published in Science.

Needless to say, I’m thrilled. :D

Game theory and medical research

I recently learned what exactly a Nash equilibrium is, and like any obnoxious academic with a new idea, I’m really excited about it. Hence, I will apply what I’ve learned in Game Theory so far to the field of medical research ethics.

First, some definitions: A Nash equilibrium is a set of strategies, one for each player in a formalised game, such that each player’s chosen strategy yields her the greatest utility available to her, given the strategies chosen by all the other players.

This could be formalised as follows:

A Nash equilibrium exists when ui (ai, a-i) ≥ ui (ai′, a-i) for all ai′ and all i, where:

  • ui is the utility function for player i, whose domain is the set of strategy profiles (ordered n-tuples of the strategies taken by all the players) and whose range is utility values
  • ai is the chosen strategy of player i
  • a-i is the profile of chosen strategies of all the other players, and
  • ai′ is some alternate strategy that player i might adopt.

What’s interesting about Nash equilibria is that given a particular formalised game, other non-Nash sets of strategies are “unstable”—that is, if a player finds out that given the strategy choices of the other players, she could have made a better decision, she will change her strategy accordingly.

The famous Prisoner’s Dilemma (look it up if you haven’t heard of it) is a great example of a Nash equilibrium where the outcome for each of the players is not optimal, even though they are in equilibrium.
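The definition above is easy to check mechanically. Here is a minimal sketch in Python; the Prisoner’s Dilemma payoff numbers are my own illustrative choices, not from any particular source:

```python
from itertools import product

def is_nash(game, profile):
    """Check the definition above: no player can do strictly better by
    unilaterally switching to an alternate strategy."""
    n = len(profile)
    for i in range(n):
        for alt in game["strategies"][i]:
            deviated = tuple(alt if j == i else profile[j] for j in range(n))
            if game["u"][deviated][i] > game["u"][profile][i]:
                return False
    return True

# A Prisoner's Dilemma with illustrative payoffs (C = cooperate, D = defect).
pd = {
    "strategies": [("C", "D"), ("C", "D")],
    "u": {
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    },
}

equilibria = [p for p in product(*pd["strategies"]) if is_nash(pd, p)]
print(equilibria)  # → [('D', 'D')]: mutual defection, the famously bad equilibrium
```

Mutual cooperation gives both players more utility, but it fails the check: each player could do better by defecting unilaterally, which is exactly what makes the Prisoner’s Dilemma an equilibrium with a suboptimal outcome.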

What’s interesting to me is how this can be applied to medical research, if we make certain simplifying assumptions. Let’s imagine that medical research is like a two-player game. The players are the pharmaceutical industry on the one hand and some other participant in human research on the other.

In the tables below, Big Pharma has two strategies open to it—developing a “seeding” study or developing a “quality” study. The other participant (who could be a research subject or a physician-investigator or a journal that publishes medical research papers) also has two strategies available—participating in the study developed by Big Pharma, or not participating.

If the other stakeholder in the research project doesn’t participate, neither Big Pharma nor the participant receives any benefit. The utility outcomes for Big Pharma and the other stakeholder are 0, 0, respectively.

If the other stakeholder participates and the study is a high-quality study that provides socially valuable medical information, Big Pharma and the other stakeholder receive utilities of 1, 1, respectively.

But, if it turns out that the pharmaceutical company has produced a “seeding” study—one that is designed for narrow ends, namely those of being a marketing tool to get physicians used to prescribing a drug that has already received licensure—the pharmaceutical company receives a utility of 2 and the other stakeholder receives a utility of -1. That is to say, Big Pharma gets a big payout, because hundreds of doctors are now prescribing the drug, but the other stakeholder incurs a net harm in some way. (If she is a study participant, she may feel used or cheated. If she is a doctor, it may be a source of professional embarrassment. If it is a journal that published a “seeding” study, that journal will lose some of its reputation, etc.)

                   Participate   Not
“Seeding” study    2, -1         0, 0 *
“Quality” study    1, 1          0, 0

Table 1. Asterisk (*) indicates Nash equilibrium.

So if we go through each set of strategies that the players in this game can take, we find that the one with the asterisk is the only one that is a Nash equilibrium. This is because if you are Big Pharma in this game, given that the other stakeholder has chosen not to participate, you are indifferent between strategies, and if you are the other stakeholder, given that Big Pharma has chosen to develop a “seeding” study, your best choice is to not participate.
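For what it’s worth, this reasoning can also be verified exhaustively in code. A minimal sketch, with the payoffs copied from Table 1 (tuples are Big Pharma’s utility first, the other stakeholder’s second):

```python
from itertools import product

# Payoffs copied from Table 1: tuples are (Big Pharma, other stakeholder).
table1 = {
    ("seeding", "participate"): (2, -1),
    ("seeding", "not"): (0, 0),
    ("quality", "participate"): (1, 1),
    ("quality", "not"): (0, 0),
}
pharma_moves = ("seeding", "quality")
other_moves = ("participate", "not")

def is_nash(u, profile):
    a0, a1 = profile
    # Big Pharma must not gain by switching study type...
    if any(u[(alt, a1)][0] > u[profile][0] for alt in pharma_moves):
        return False
    # ...and the stakeholder must not gain by switching her participation.
    if any(u[(a0, alt)][1] > u[profile][1] for alt in other_moves):
        return False
    return True

equilibria = [p for p in product(pharma_moves, other_moves) if is_nash(table1, p)]
print(equilibria)  # → [('seeding', 'not')], the asterisked cell in Table 1
```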

It’s interesting to note that this setup is analogous to markets for financial products and other “credence goods,” where the buyer has a really hard time telling high-quality products from low-quality ones.

But what if no one caught on that the study was a “seeding” study? Let’s imagine that Big Pharma got away with running a seeding study and no one ever figured out that that’s what it was. We would end up with a game that can be represented as follows:

                   Participate   Not
“Seeding” study    2, 1 *        0, 0
“Quality” study    1, 1          0, 0

Table 2. Asterisk (*) indicates Nash equilibrium.

Here, the equilibrium has shifted. This explains why pharmaceutical companies try to develop “seeding” studies, and why they try to hide it.

So the question becomes, how can we set up the “rules of the game” of medical research in order to shift the equilibrium such that other stakeholders will participate and the pharmaceutical company will develop quality studies?

Or to put it another way: if we assume that the utility of non-participation is 0 for all players, and that both the pharmaceutical company and the other stakeholder should come away from a quality study having received some utility, what value of x will put the Nash equilibrium where the asterisk is in the table below?

                   Participate   Not
“Seeding” study    x, -1         0, 0
“Quality” study    1, 1 *        0, 0

Table 3. Asterisk (*) indicates Nash equilibrium.

The value of x must be less than 1 in order for the (“quality” study, participate) outcome to be a Nash equilibrium. This is because if x = 1, Big Pharma is indifferent between its strategies given the choice of the other player (the outcome is only weakly stable), and if x > 1, as we saw in Table 1, the equilibrium shifts to where Big Pharma produces a “seeding” study and the other stakeholder declines to participate. (One caveat: the (“seeding” study, not-participate) outcome remains an equilibrium for any value of x, since neither player can improve on it unilaterally. Lowering x makes the quality-study outcome stable; it doesn’t make the bad outcome unstable.)
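That threshold can be checked by brute force. A minimal sketch that recomputes Table 3’s pure-strategy equilibria for a few values of x:

```python
def pure_equilibria(x):
    """Pure-strategy Nash equilibria of Table 3, where x is Big Pharma's
    payoff from a seeding study in which the stakeholder participates."""
    u = {
        ("seeding", "participate"): (x, -1),
        ("seeding", "not"): (0, 0),
        ("quality", "participate"): (1, 1),
        ("quality", "not"): (0, 0),
    }
    studies = ("seeding", "quality")
    choices = ("participate", "not")

    def stable(a0, a1):
        # Neither player can do strictly better by deviating unilaterally.
        return (all(u[(s, a1)][0] <= u[(a0, a1)][0] for s in studies)
                and all(u[(a0, c)][1] <= u[(a0, a1)][1] for c in choices))

    return [(a0, a1) for a0 in studies for a1 in choices if stable(a0, a1)]

for x in (2, 1, 0.5):
    print(x, pure_equilibria(x))
```

In the output, (“quality” study, participate) shows up as an equilibrium exactly when x ≤ 1; at x = 1 it is only weakly so, since Big Pharma is indifferent between its strategies there.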

So in real life, how do we make x less than 1? There has to be some sort of sanction or penalty for pharmaceutical companies that produce seeding studies, one that makes their expected utility lower than that of a quality study. This could be done either by taxing seeding studies or by regulating against them outright.

Free online game theory course

So a few months ago I signed up for a free online course in Game Theory, taught by two professors at Stanford. I like Stanford. Ever since I discovered the Stanford Encyclopaedia of Philosophy as an undergrad (the one website that philosophy profs will allow you to cite in your papers), I have had a profound respect for this institution’s free online offerings.

The course isn’t for credit at all—there are just video lectures, with “quizzes” integrated into them. I guess I’m sort of interested in it because it relates to my thesis subject. Ever since I wrote my thesis, I have found the whole idea of collaborative enterprises fascinating, and I would love to be able to analyse more rigorously what regulations would make a complex system with multiple stakeholders work best.

The course was supposed to start in “late February 2012,” so I waited until today—I was going to send the professors an email, since February 29th is about as late in February as you can get. So I opened up the site for the course to find a contact email address, and found the following message:

Regarding the start-date of the Game Theory Online course: The University is still finalizing policies to cover its new online courses, and so there has been some delay in the launching of the courses. We anticipate being able to launch the course soon, and will keep you informed of any news on the starting date. Matt and Yoav

I’ll let you know if anything interesting comes of this. Let me know if you sign up for the course yourself. :)

I graduated this week

Backward compatibility

I'm getting hit by a tube

I like graduation ceremonies. Don’t get me wrong—hearing the names of a couple hundred students read in order of academic programme isn’t my idea of a wild party, but I’m glad such things exist. There are a couple of things that I like about graduations.

Convocation is the ultimate example of backward compatibility. There’s something positively medieval about them. As the Principal said, the tradition of graduation ceremonies at McGill predates Canadian Confederation. If a person from even ten centuries ago were magically transported to Place-des-Arts on the morning of November 23rd, 2011, that person would probably be able to recognise what was going on, just by seeing all these academics in their robes and the giving of certificates.

When I graduated from Western, the procession of professors, chancellors, etc was preceded by a guy carrying a big gold mace. Maces are symbols of power, and historically speaking, they were there to serve the purpose of keeping everyone in line, in case the meeting got out of hand. And at some point in history, someone thought, “Carrying around an implement for bludgeoning rabble-rousers is something that we have to keep doing forever. Just in case.”

When I got the actual paper with my degree printed on it, I discovered that it was all written in Latin. According to the paper, I have a “Magistrum Artium” now. I’m going to take a picture of my degree and get my little sister (whose Latin is much better than mine) to read it at Christmas break.

At McGill, by tradition, undergrads are tapped on the head with an academic cap as they graduate. Grad students used to have their hands shaken by the Chancellor; however, in the wake of the Swine Flu scare, hand-shaking fell out of fashion. (Not based on any evidence, mind you: Swine Flu is not transmitted by hand-to-hand contact.) Hence, the Chancellor hits graduate students with a tube as they pass him on the stage.

That was the weirdest thing. It was like a knighting (“I dub thee ‘Magistrum Artium’”), except it would have been a whole lot awesomer if they had tapped me on the shoulder with the sword of Gryffindor or something. Actually, I’d settle for the sword of James McGill.

Academic regalia

Hood and robe for MA at McGill

What’s also fun (but expensive) is the academic regalia. This time, they let me keep the hat, at least!

I can wear it whenever I want to look smart and make people pay attention to my ideas.

Every programme/faculty/level of achievement has a different robe/hood/hat that they wear to graduate. For an MA at McGill, you get a black robe with funny sleeves that you can’t actually put your arms through, a mortarboard, and a baby-blue hood that goes around the neck. In the attached photo, I’m trying to show a bit of what the hood looks like. That’s the interesting part.

Not only do the students all wear different things, but because each professor wears the academic regalia of the school where she earned her PhD (not the school she works at), many professors will have different robes/hoods/hats. Some are boring, some are very eye-catching. The profs who did their PhD at McGill all have funny black McGill hats.

Framing my degree

I looked at the fancy “McGill” frames that were for sale just outside the theatre and asked how much they cost. They said they were $200 apiece.

When I stopped laughing, I realised that they were serious, and moved on.

Part of me wants to go out and find a “Dora the Explorer” frame for my degree. Something really tacky to keep it in, at least while I’m looking for a frame that won’t require another student loan for me to buy. The only problem with that is that if I do that as a joke while I’m looking for the “real frame,” it might become the “real frame.”

I will be clean-shaven this Movember

“Movember” is the name of a movement that emphasises men’s health, specifically prostate cancer awareness during the month of November, by encouraging men to grow moustaches. There are two main reasons why I will be clean-shaven this November.

Screening for prostate cancer

When is it rational to be screened for a condition?

The first major problem I have with Movember is the emphasis that is placed on prostate cancer screening for men—even men who are not in a high risk group for this type of cancer.

Not every test is completely reliable. Think about it this way: If you put a toothpick into something you baked and it comes out dry, it’s likely that your baking is done. But it’s also possible that you just poked the wrong part of your banana bread, and the rest of it is all gooey. If that happens, it’s called a “false positive” result for your test, or a “Type I error.”

This isn’t just a problem for bakers. Pretty much every medical test (or any test at all, for that matter) carries a non-zero chance of a false positive (“Type I error”) or a false negative (“Type II error”) result.

For prostate cancer, there are two methods of screening: a digital rectal exam (DRE) or a prostate-specific antigen test (PSA). The DRE is a physical examination of your rectum by palpation and the PSA is a chemical assay performed on a blood draw. Neither of these tests can be relied upon to give perfectly accurate results all the time.

The problem is that if a doctor finds what he takes to be evidence of a tumour growth in the prostate, he may order a biopsy of the prostate. This is an invasive, expensive, painful (and in the case of Type I errors, unnecessary) procedure that brings its own set of medical risks. A biopsy carries the risk of infection, for example.

Please examine the decision tree I have attached to this post. I have tried to make it as general as possible. If you wanted to be really rigorous, you would:

  • assign a dollar value to each of the outcomes,
  • estimate the probability of each branch off a probability node (a circle),
  • multiply each branch’s probability by the dollar value of its outcome, and take the sum over all the branches to get the value of that node, and
  • repeat the process from right to left, until you come to a decision node (a square).

The branch that carries the highest value as calculated by this algorithm is the decision that one has most reason to take.
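This rollback procedure is only a few lines of code. A minimal sketch follows; the tree below is a toy stand-in for the attached figure, and every probability and dollar figure in it is a hypothetical placeholder, not real screening data:

```python
def rollback(node):
    """Evaluate a decision tree right to left, as described above."""
    kind = node[0]
    if kind == "outcome":    # leaf: ("outcome", dollar_value)
        return node[1]
    if kind == "chance":     # circle: ("chance", [(probability, subtree), ...])
        return sum(p * rollback(sub) for p, sub in node[1])
    if kind == "decision":   # square: ("decision", [subtree, ...])
        return max(rollback(sub) for sub in node[1])
    raise ValueError(f"unknown node type: {kind}")

# Hypothetical numbers, for illustration only.
tree = ("decision", [
    ("chance", [                   # branch 1: get screened
        (0.9, ("outcome", -10)),   # negative result: just the cost of the test
        (0.1, ("outcome", -500)),  # positive result: biopsy, with its own risks
    ]),
    ("outcome", 0),                # branch 2: skip screening
])
print(rollback(tree))  # → 0: under these made-up numbers, skipping wins
```

Change the placeholder probabilities and values and the preferred branch changes with them, which is the whole point of the exercise.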

I haven’t done the research to find out what the rates of Type I and II errors are for PSA tests, but they are pretty high, and you can see that if the probability of an inaccurate test result is high enough, and the consequences of a bad test result are dire enough, that might give you reason to go without testing, provided you aren’t in a high-risk group for prostate cancer. Further, a randomised controlled trial showed no significant difference in mortality between a group of men who were screened for prostate cancer and a group who weren’t. That evidence suggests that prostate cancer screening doesn’t help reduce mortality.

If you are in a high-risk group (for instance, if there is a history of prostate cancer in your family and you are in a certain age range), then by all means, you should be tested regularly. But don’t start encouraging young, healthy men who are not at high risk for this sort of cancer to go looking for it. They may find more trouble than is actually there.

Emphasis on men’s health

The second major problem I have with Movember is their condescending and naive position on “men’s health” generally. Let’s consider a quote from the Movember Canada website:

Let’s face it – men are known to be a little more indifferent towards their health … The reasons for the poor state of men’s health in the Canada and around the world are numerous and complex and this is primarily due to a lack of awareness of the health issues men face. This can largely be attributed to the reluctance of men to openly discuss the subject, the old ‘it’ll be alright’ attitude. Men are less likely to schedule doctors’ appointments when they feel ill or to go for an annual physical, thereby denying them the chance of early detection and effective treatment of common diseases.

(From Men’s health—Movember Canada)

Movember Canada is stating here that it is the “reluctance of men,” the “‘it’ll be alright’ attitude,” and a general indifference toward their health that make men less likely to schedule a doctor’s appointment when they feel ill, or to make an appointment for a regular physical exam.

This is not the case. In Canada, men don’t schedule doctor’s appointments largely because they don’t have a doctor that they can call to make an appointment. I have been on my CLSC’s waiting list for a doctor for over a year now, and unless I go to the hospital or a walk-in clinic, I think it unlikely that I will see a doctor any time soon. This is not because I’m indifferent toward my health. This is because I don’t have a doctor.

It is not men being “too macho for doctors” that’s the problem. It’s that we as a country have made decisions regarding health care in Canada based on economics and politics that have brought about a doctor shortage. I hesitate to call it a “doctor shortage,” because the word “shortage” makes it sound like it was something unavoidable or unforeseeable—not something that was engineered and implemented as a matter of public policy.

The reason men aren’t seeing doctors in Canada is because we have chosen to limit our health care spending by decreasing the number of doctors in Canada who will order expensive tests and procedures. So don’t you dare turn around and chide men for failing to see a doctor regularly, when that is exactly what we have decided we want.

Is Movember all bad?

No, probably not, and insofar as it is a fundraiser for prostate cancer research and survivor programmes, I think it is probably a good thing. That said, the message of Movember needs to change before I can support it.

A scary email to receive less than a week before the thesis submission deadline

I bet you thought I was done posting about my thesis. Last Friday (6 days ago), I received this email after I had the pleasure of submitting my thesis electronically.

[Your supervisor] approved your e-thesis on September 23, 2011 at 11:51.

If your thesis has been accepted by all your supervisor(s), it has been sent to GPSO for processing.

If your thesis has been rejected, please make the changes requested by your supervisor(s) to your original document*, and create a new pdf, delete the file on the server, and upload the new file.

You can track the progress of your thesis on Minerva.

Hooray! It was good news to receive this email, and I tweeted about it immediately, of course.

Then, this morning, I received the following email.

Dear Benjamin, … We [at the philosophy department] have been told that you haven’t submitted your thesis electronically, and this is one of the graduation conditions. Can you do this immediately? The conditions have to be met by Tuesday, 4 October. Best wishes.

October 4th is on Tuesday (5 days from now). I’m pretty sure that my thesis has been submitted electronically. Here is my evidence:

  • Minerva lists my thesis as being uploaded and approved
  • I received the aforementioned email from the e-thesis computer

So I really don’t know what this fuss from the philosophy department is all about, but now I’m nervous that something’s messed up.

An alternate ending to Captain America (or “Captain America and the Therapeutic Misconception”)

The therapeutic misconception

In medical practice, the efforts of the medical team are directed toward therapy. That is to say, when a doctor or a nurse or some other medical professional performs some action on a patient, her actions are morally underwritten by the benefit she hopes to provide to the patient.

For example, a blood draw is somewhat uncomfortable. But we allow medical professionals to take blood if it is done for the purposes of diagnosis. Same thing with setting a bone—very painful, but it is allowed because it is aimed at providing some direct medical benefit to the patient.

In human research, this is not the case.

In human medical research, the efforts of the research team are directed toward gaining useful and generalisable knowledge. That is to say, when a doctor or a nurse or some other medical researcher performs some action on a patient, her actions are not morally underwritten by the benefit she hopes to provide to the patient. Rather, her actions are morally underwritten by the benefit she hopes to provide through the use of generalisable knowledge in informing medical practice.

Blood draws are very common in many kinds of medical research as well. They are allowed in human research too, but not because the patient will necessarily receive any benefit. Instead, it is the benefit to others that makes drawing blood from the patient permissible.

To put it simply, medical researchers are not necessarily trying to help their subjects, and by this point, that should be pretty clear.

But what about cases where the patient-subject is receiving some new “experimental” therapy? Perhaps our hypothetical example patient-subject has already been through multiple therapies, none of which worked, and this therapy is the patient-subject’s last best hope.

It’s in cases like these where the line between therapy and research becomes fuzzier.

The therapeutic misconception is what happens when patients regard medical research as medical therapy. Often, patients will have an exaggerated idea of the procedure’s chances of success. In other cases, patients will flat-out not understand that it’s possible they will be randomised to a control group and receive no treatment other than a placebo.

The therapeutic misconception is a major problem in human research ethics, and different ethicists have had different ideas on how to deal with it. Some have suggested that doctors should wear red labcoats when they are working in their capacity as a researcher, in contrast with their normal white ones. Others have suggested that patient-subjects always be compensated financially for their participation in a trial, so that the patient regards the money she receives as the benefit from the trial, rather than the “treatment.”

I saw Captain America on Friday night. While it is a fun movie, it doesn’t help things too much in terms of the therapeutic misconception. I know it wasn’t written with human research ethics in mind, but really, we’ve got a guy who is a subject of a medical experiment, but who receives tremendous medical benefit.

People who are participating in medical research watch films like this, and even though they know that they won’t come out of the research protocol standing a full two feet taller with rippling muscles, never having spent a minute at the gym, they still get the wrong idea: that when you’re recruited to human research, one of the researcher’s goals is direct medical benefit to you.

Alternate ending to Captain America

Most of the movie would be the same, but just as Captain America is about to save the world, we find out that Steve Rogers was actually randomised to the placebo group. Captain America crashes the evil airplane into the ice and everyone says, “Oh no. It was just a placebo all along.” The body is never found.

LaTeX, BibTeX and ibidem

Apparently, having been trained in the philosophical tradition, I’m unused to citing sources. My supervisor says that a typical attitude for a philosopher to take toward sources is that if your bibliography has 6 citations, that’s 5 too many. So, on the advice of my supervisor, I have been trying to include more references to published sources in my thesis. As he puts it, “think less; read more.”

Having done that for the last chapter or so (I’m going back later to add lots and lots of citations to the other chapters), I realised that the citations were taking up way too much space on the paper. So, I put them all in footnotes. They still took up a lot of space, and they were hard to read down there.

So, I decided that I should change my citation style, so that when I have multiple citations from the same source in a row, the second, third, etc. citations after the first would just read “ibid.” (from Latin ibidem, meaning “in the same place”). This would have been a time-consuming and mind-numbing task: going through my entire thesis, picking out all the places where there are two or more citations of the same source in a row, and replacing all but the first one with “ibid.”

Fortunately, I use LaTeX and BibTeX (and OS X front-ends called TeXShop and BibDesk) for writing my thesis and citation management.

I found a great package, called inlinebib, that does just that. It actually took a bit of digging to find a bibliography style package for LaTeX that worked the way I wanted it to, with ibidem and all. But once I found it, all I had to do was put inlinebib.bst and inlinebib.sty in my project folder, then write \usepackage{inlinebib} in my document preamble, and it worked just fine!
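For anyone who wants to try it, the whole setup amounts to roughly the following. This is a sketch from memory rather than from the package documentation, and the citation key and the thesis.bib file name are placeholders:

```latex
% inlinebib.sty and inlinebib.bst sit in the same folder as this file
\documentclass{article}
\usepackage{inlinebib}
\begin{document}

The first citation prints the full reference,\cite{appelbaum1982}
and an immediately repeated one should collapse to ibid.\cite{appelbaum1982}

\bibliographystyle{inlinebib}
\bibliography{thesis} % thesis.bib holds the BibTeX entries

\end{document}
```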

A non-paternalistic justification for human research subject protections

Just this morning I had a great meeting with my prof regarding my thesis. I showed him the outline for my thesis and we put together a schedule for completing it. He even gave me a few references to go on in terms of researching the topic. I’m starting to feel good about it.

I’ve had a number of people asking me what my thesis is about, so here it is in brief:

There are restrictions that institutions place on the sorts of human research that can be done, and the justification for such restrictions is usually given in terms of subject harm or benefit. Unfortunately, such justifications are paternalistic. By that, I mean there is a sense in which, if someone wants to engage in a very risky research protocol as a subject, what right does the institution’s ethics board have to stand in her way?

That said, there is also a sense in which we do not want human research to just be a free-for-all house of horrors, where anything goes. My thesis is that we should rather justify human research subject protections in terms of protecting the integrity of the human research project as a whole.

So, in colloquial terms, I’m suggesting that rather than saying, “We won’t let you do that risky research because we know better than you what ends you should be pursuing,” rather we should say something more like, “We won’t allow such risky research because allowing such research to go on would make the human research enterprise look sketchy.”

An interesting application of this thesis is in the area of phase IV human research studies. A phase IV study is one that occurs after the drug is already approved for use, and it is essentially a marketing study. The drug company wants to see how to best market the drug to doctors and patients. Often it is even the marketing division of the drug company that applies for the phase IV study.

Ethicists have generally tried to criticise phase IV studies on the basis of some sort of risk that they may pose to the research subjects. This position is difficult to hold because, really, the drug has already been approved for use on humans. I will argue that it is much more defensible to say that such studies are unethical because they do violence to the integrity of human research.

Et voilà. My thesis. All I have to do now is write 80 pages on that, and I’m golden.

I specifically asked for the Borg implant

Maybe next time

I had a minor accident a few weeks back, where I suffered a blow to the head. I didn’t think it was too bad, so I didn’t end up going to the hospital for it right away.

I didn’t plan on going to the hospital at all, actually. I had a great black eye, and I just told everyone that I got into a big fight.

Come to think of it, “I didn’t think it was very serious, so I didn’t go to the doctor” is a theme that recurs in my medical history a lot.

It wasn’t until my eye got infected that I went to the hospital. I went in, told the ER doctor my symptoms:

“Itchy eye, red eye colouration, headaches, watery eyes, runny nose, sore throat.”

She took my temperature, blood pressure and heart rate.

“You have a fever, Mr. Carlisle,” she told me, struggling with my last name (French Canadians have a hard time figuring out the silent S), “When you blow your nose, does the phlegm have any colour?”

“Yes, in fact. It’s black.”

“Black?” she asked, surprised.

You know that you have something good when your symptoms shock the ER doctor. I blew my nose and proved it to her.

I sat in the waiting room until another doctor came to see me. She pronounced that I had pink eye and was about to send me on my way when I asked if the pink eye would explain the fever that I had.

“Fever?” she asked. That’s two ER doctors that I shocked.

She started feeling around my skull at that point, seeing where it hurt and didn’t, and decided to send me for a CT scan. I dripped my pink-eye tears all over the CT machine. I’m sure that the next 5 patients to use it will get infected, thanks to me.

When the results came back, she told me that I had broken my right orbital floor, and the tissues surrounding my eye were actually falling down into my sinus. That would explain the fever, sore throat, and the blood in my phlegm. There wasn’t any bone supporting my right eye, so it was literally falling through my face. I would need surgery.

I was sent to see an ophthalmologist, who told me that my right eye had fallen about 3mm from where it should be. On the upside though, he told me that I still have 20/20 vision, and that there’s no nerve damage or damage to my retina. The only problem is the broken bone and the pink eye.

I was sent to see the surgeons who were going to fix my face, and they sent me home for a week and a half to let the infection go away, so that it wouldn’t get inside my skull. On Friday, August 6th, I had my surgery, and despite my specific instructions that they replace my right eye with a Borg-style implant, they only put a metal plate in my skull to fix the bone and put my eye right back where it should be. I will make a full recovery and require no bionic implants at all.

The swelling has gone down almost entirely, and I’m feeling good. I think they must have made the incision into my head somewhere inside my eyelid, so there won’t even be a scar.

There were only two really scary parts about this whole thing:

1. When I am put on morphine, I have hallucinations. Not really bad ones, but I consistently have them. This time, I seriously believed that if I stopped consciously thinking about my breathing, then I would stop breathing, and probably die. I was very afraid to go to sleep.

2. When I mentioned to the doctors that I’m an MA bioethics student at McGill, they had a sort of “we’d better be on our best behaviour now” thing going on, which scared me. What do they think they can normally get away with, that they can’t with a bioethicist watching?