There’s another reason why Pfizer actually did the right thing with Enbrel in Alzheimer’s

The Washington Post recently ran a story about a potential use of Enbrel, an anti-inflammatory drug, to reduce the risk of Alzheimer’s disease. The article reports that in a non-randomized, non-interventional, retrospective, unpublished, non-peer-reviewed internal review of insurance claims, Enbrel use was correlated with a reduced risk for Alzheimer’s disease. This hypothesis was dismissed after internal consideration, on the basis of scientific (and probably business) considerations.

You can probably guess my take on it, given the way I describe it, but for people who are unfamiliar with the hierarchy of medical evidence, this kind of data mining represents pretty much the lowest, most unreliable form of medical evidence there is. If you were, for some reason, looking for even lower-level evidence than this, you would need to go into the case study literature, but even that would at least be peer-reviewed and published for public scrutiny. If you need further convincing, Derek Lowe has already published on his blog a fairly substantial debunking of the idea that this was a missed Alzheimer’s opportunity.

Many people, even after being convinced that Pfizer probably made the right call in not pursuing a clinical trial of Enbrel in Alzheimer’s, still think that the evidence should have been made public.

There have been lots of opinions thrown around on the subject, but there’s one point that people keep missing, and it relates to a paper I wrote in the British Medical Journal (also the final chapter of my doctoral thesis). Low-level evidence that raises suspicion of a drug’s activity, when it is not swiftly followed up with confirmatory testing, can create something that we call “Clinical Agnosticism.”

Not all evidence is sufficient to guide clinical practice. But in the absence of a decisive clinical trial, well-intentioned physicians or patients who have exhausted approved treatment options may turn to off-label prescription of approved drugs in indications that have not received regulatory sanction. This occurs where there is a suggestion of activity from exploratory trials, or in this case, extremely poor quality retrospective correlational data.

Pfizer should not be expected to publish every spurious correlation that can be derived from any data set. In fact, doing so would only create Clinical Agnosticism, and potentially encourage worse care for patients.

On techbros in healthcare and medical research

CW: strong language

My thoughts on the following may change over time, but at least for me, I have found it helpful to think about the threats that techbros pose to healthcare and medical research in terms of the following four major categories. These aren’t completely disjoint of course, and even the examples that I give could, in many cases, fit under more than one. I am also not claiming that these categories exhaust the ways that techbros pose a threat to healthcare.

1. Medical-grade technosolutionism, or “medicine plus magic”

When Elizabeth Holmes founded her medical diagnostics company Theranos, she fit exactly into the archetype that we all carry around in our heads for the successful whiz-kid tech-startup genius. She was not just admitted to Stanford University, but she was too smart for it, and dropped out. She wore Steve Jobs-style black turtlenecks. She even founded her company in the state of California—innovation-land.

She raised millions of dollars for her startup, based on the claim that she had come up with a novel method for doing medical diagnostics—dozens of them—from a single drop of blood. The research supposedly backing up these claims occurred entirely outside the realm of peer review. This practice was derisively dubbed “stealth research,” and generally criticized because of the threat that this mode of innovation might pose to the enterprise of medical research as a whole.

It was, of course, too good to be true. Theranos has been exposed as a complete fraud, and the company has now been shut down. This sort of thing happens on a smaller scale on crowdfunding sites with some regularity. (Remember the “Healbe” Indiegogo?)

I’m not entirely sure what is driving this particular phenomenon.

Maybe it’s that we want to believe in the whiz-kid tech-startup genius myth so much that we collectively just let this happen out of sheer misguided hope that the techbros will somehow save us.

Maybe it’s that there’s a certain kind of techbro who thinks, “I’m a computer-genius. Medicine is just a specialized case that I can just figure out if I put my mind to it.” And there’s also a certain kind of medical professional who thinks, “I’m a doctor. I can figure out how to use a computer, thank-you-very-much.” And when those two groups of people intersect, they don’t call each other out on their bullshit, but rather, they commit synergy.

Or maybe it’s just that the extreme form of capitalism that we’re currently living under has poisoned our minds to the extent that the grown-ups—the people who should know better—are turning a blind eye because they can make a quick buck.

Recommended reading: “Stealth Research: Is Biomedical Innovation Happening Outside the Peer-Reviewed Literature?” by John Ioannidis, JAMA. 2015;313(7):663-664.

2. The hype of the week (these days, it’s mostly “medicine plus blockchain”)

In 2014 I wrote a post on this very blog that I low-key regret. In it, I suggest that the blockchain could be used to prospectively timestamp research protocols. This was reported on in The Economist in 2016 (they incorrectly credit Irving and Holden; long story). Shortly thereafter, there was a massive uptick of interest in applications of the blockchain to healthcare and medical research. I’m not claiming that I was the first person to think about blockchain in healthcare and research, or that my blog post started the trend, but I am a little embarrassed to say that I was a part of it.

Back in 2014, being intrigued by the novelty of the blockchain was defensible. There’s a little bit of crypto-anarchist in all of us, I think. At the time, people were just starting to think about alternate applications for it, and there was still optimism that the remaining problems with the technology might still be solved. By 2016, blockchain was a bit passé—the nagging questions about its practicality that everyone thought would have been solved by that point just, weren’t. Now that it’s 2019 and the blockchain as a concept has been around for a full ten years, I think it’s safe to say that those solutions aren’t coming.

There just aren’t any useful applications for the blockchain in medicine or science. The kinds of problems that medicine and science have are not the kinds of problems that a blockchain can solve. Even my own proposed idea from 2014 is better addressed in most cases by using a central registry of protocols.
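For what it’s worth, the timestamping part of my old idea needs nothing more exotic than a hash function. Here is a minimal sketch (the function name is my own invention, just for illustration):

```python
import hashlib

def protocol_fingerprint(protocol_text: bytes) -> str:
    # A SHA-256 digest uniquely identifies this exact version of the
    # protocol. Publishing the digest anywhere with a trusted date
    # (e.g. a central registry of protocols) proves the protocol
    # existed in this form at that time; a blockchain adds nothing
    # essential to this scheme.
    return hashlib.sha256(protocol_text).hexdigest()

digest = protocol_fingerprint(b"Protocol v1.0: primary outcome = overall survival")
```

Any change to the protocol, however small, produces a completely different digest, which is what makes the registry entry tamper-evident.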

Unfortunately, there continues to be well-funded research on blockchain applications in healthcare and science. It is a tech solution desperately in search of a problem, and millions in research funding have already been spent to this end.

This sort of thing doesn’t just apply to “blockchain in science” stuff, although that is probably the easiest one to spot today. Big new shiny things show up in tech periodically, promising to change everything. And with surprising regularity, there is an attempt to shoehorn them into healthcare or medical research. It wasn’t too long ago that everyone thought that smartphone apps would revolutionize healthcare and research. (They didn’t!)

3. “The algorithm made me do it”

Machine learning and artificial intelligence (ML/AI) techniques have been applied to every area of healthcare and medical research you can imagine. Some of these applications are useful and appropriate. Others are poorly-conceived and potentially harmful. Here I will gesture briefly toward some ways that ML/AI techniques can be applied within medicine or science to abdicate responsibility or bolster claims where the evidence is insufficient to support them.

There are a lot of problems that could go under this banner, but many of them stem from the “black box” nature of ML/AI techniques. The idea behind machine learning is that the algorithm “teaches itself,” in some sense, how to interpret the data and make inferences. This often means that ML/AI techniques don’t easily allow the person using them to audit the way that inputs into the system are turned into outputs. There is work going on in this area, but most ML/AI models still can’t give a meaningful account of their own decisions.

There is an episode of Star Trek, called “The Ultimate Computer,” in which Kirk’s command responsibilities are in danger of being given over to a computer called the “M-5.” As a test of the computer, Kirk is asked who he would assign to a particular task, and his answer differs slightly from the one given by the M-5. For me, my ability to suspend disbelief while watching it was most thoroughly tested when the M-5 is asked to justify why it made the decision it did, and it was able to do so.

I’ve been to the tutorials offered at a couple of different institutions where they teach computer science students (or tech enthusiasts) to use Python machine-learning libraries or other similar software packages. Getting an answer to “Why did the machine learning programme give me this particular answer?” is really, really hard.

Which means that potential misuses or misinterpretations are difficult to address. Once you get past a very small number of inputs, there’s rarely any thought given to trying to figure out why the software gave you the answer it did, and in some cases it becomes practically impossible to do so, even if you wanted to.

The unspoken assumption is that if you just get enough bad data points, machine learning or artificial intelligence will magically transmute them into good data points. Unfortunately, that’s not how it works.
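To see why, here is a toy illustration in pure Python (deliberately not any particular ML library): a 1-nearest-neighbour “learner” trained on pure noise looks perfect on its own training data, and performs at coin-flip level on anything new.

```python
import random

random.seed(0)

def make_noise(n):
    # Labels are assigned completely at random: there is no signal
    # in this data for any learner to find.
    X = [[random.random() for _ in range(5)] for _ in range(n)]
    y = [random.choice([0, 1]) for _ in range(n)]
    return X, y

def nn_predict(X_train, y_train, x):
    # A 1-nearest-neighbour "learner": it simply memorises the
    # training set and parrots the label of the closest point.
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in X_train]
    return y_train[dists.index(min(dists))]

X_train, y_train = make_noise(200)
X_test, y_test = make_noise(100)

train_acc = sum(nn_predict(X_train, y_train, x) == t
                for x, t in zip(X_train, y_train)) / len(y_train)
test_acc = sum(nn_predict(X_train, y_train, x) == t
               for x, t in zip(X_test, y_test)) / len(y_test)
```

The training accuracy here is a perfect 100%, because every point is its own nearest neighbour, while the held-out accuracy hovers around chance. No amount of extra noise fixes this; more bad data just makes the memorisation more impressive-looking.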

This is dangerous because the opaque nature of ML/AI may hide invalid scientific inferences based on analyses of low-quality data, causing well-meaning researchers and clinicians who rely on robust medical evidence to provide poorer care. Decision-making algorithms may also mask the unconscious biases built into them, giving them the air of detached impartiality, but still having all the human biases of their programmers.

Techbros like Bill Gates and Elon Musk are deathly afraid of artificial intelligence because they imagine a superintelligent AI that will someday, somehow take over the world or something. (I will forgo an analysis of the extreme hubris of the kind of person who needs to imagine a superhuman foe for themselves.) A bigger danger, and one that is already ongoing, is the noise and false signals that ML/AI will insert into the medical literature, and the way it obscures the biases of the powerful behind a veneer of impartiality.

Recommended reading: Weapons of Math Destruction by Cathy O’Neil.

4. Hijacking medical research to enable the whims of the wealthy “… as a service”

I was once at a dinner party with a techbro who has no education in medicine or cancer biology. I told him that I was doing my PhD on cancer drug development ethics. He told me with a straight face that he knew what “the problem with breast cancer drug development” is, and could enlighten me. I took another glass of wine as he explained to me that the real problem is that “there aren’t enough disruptors in the innovation space.”

I can’t imagine being brazen enough to tell someone who’s doing their PhD on something that I know better than them about it, but that’s techbros for you.

And beyond the obnoxiousness of this anecdote, this is an idea that is common among techbros—that medicine is being held back by “red tape” or “ethical constraints” or “vested interests” or something, and that all it would take is someone who could “disrupt the industry” to bring about innovation. They seriously believe that if they were just given the reins, they could fix any problem, even ones they are entirely unqualified to address.

For future reference, whenever a techbro talks about “disrupting an industry,” they mean: “replicating an already existing industry, but subsidizing it heavily with venture capital, and externalizing its costs at the expense of the public or potential workers by circumventing consumer-, worker- or public-protection laws in order to hopefully undercut the competition long enough to bring about regulatory capture.”

Take, for example, Peter Thiel. (Ugh, Peter Thiel.)

He famously funded offshore herpes vaccine tests in order to evade US safety regulations. He is also extremely interested in life extension research, including transfusions from healthy young blood donors. He was willing to literally suck the blood from young people in the hopes of extending his own life. And these treatments were gaining popularity, at least until the FDA made a statement warning that they were dangerous and ineffective. He also created a fellowship to enable students to drop out of college to pursue other things such as scientific research outside of the academic context. (No academic institution, no institutional review board, I suppose.)

And this is the work of just one severely misguided techbro, who is able to make all kinds of shady research happen because of the level of wealth that he has been allowed to accumulate. Other techbros are leaving their mark on healthcare and research in other ways. The Gates Foundation, for example, is strongly “pro-life,” which is one of the strongest arguments I can think of for why philanthropists should instead be taxed, and the funds they would have spent on their conception of the public good dispersed through democratic means, rather than allowing the personal opinions of an individual to become de facto healthcare policy.

The moral compass behind this stuff is calibrated to a different North than the one most of us recognize. Maybe one could come up with a way to justify any one of these projects morally. But you can see that the underlying philosophy (“we can do anything if you’d just get your pesky ‘ethics’ out of the way”) and priorities (e.g. slightly longer life for the wealthy at the expense of the poor) are different from what we might want to be guiding medical research.

Why is this happening and what can we do to stop it?

Through a profound and repeated set of regulatory failures, and a sort of half-resigned public acceptance that techbros “deserve” on some level to have levels of wealth comparable to those of nation-states, we have put them in a position where a techbro can pervert the course of entire human research programmes. Because of the massive power that they hold over industry, government and nearly every part of our lives, we have come to uncritically idolize techbros, and this has leached into the way we think about applications of their technology in medicine and science. This is, of course, a terrible mistake.

The small-picture solution is to do all the things we should be doing anyway: ethical review of all human research; peer-review and publication of research (even research done with private funds); demanding high levels of transparency for applications of new technology applied to healthcare and research; etc. A high proportion of the damage they so eagerly want to cause can probably be avoided if all our institutions are working at peak performance and nothing ever slips through the cracks.

The bigger-picture solution is that we need to fix the massive regulatory problems in the tech industry that allowed techbros to become wealthy and powerful in the first place. Certainly, a successful innovation in computer technology should be rewarded. But that reward should not include the political power to direct the course of medicine and science for their own narrow ends.

Star Trek Discovery has a problem with tragic gay representation

Content warning: strong language; description of violence; death; abuse; spoilers for seasons 1 and 2 of Star Trek Discovery

I will start by briefly telling you what tragic gay representation is. I will make a case that Star Trek Discovery has provided nearly exclusively tragic gay representation in seasons 1 and 2. I will conclude by telling you why this is a problem.

What do I mean by “tragic gay representation?”

I have written previously about what I have described as different “levels” of queer representation in media. Here I will focus on tragic gay representation, also known as the “bury your gays” trope.

When I talk about “tragic” representation, I don’t necessarily mean cases in which a queer person dies (although that happens often enough). By “tragic gay representation,” I mean representation in which gay characters are denied a happy ending. While this happens to trans and bi queer people as well, I will mostly be talking about gay representation here, as the specific characters involved in Star Trek Discovery are gay and lesbian, and different (but related) dynamics are present for bi and trans representation.

Tragic gay representation has a very long history. Lesbian representation in media is particularly prone to ensuring that lesbians, when they are depicted at all, are either killed, or converted to being straight. Now that you’ve had it pointed out to you, you’ll start seeing it everywhere too.

Of course, no, not every gay character in every TV show and movie dies, and of course, not every character who dies is gay, but there are fewer gays and lesbians on-screen in general, and unless it’s a film about queer issues, the main character Has To Be Straight, so if someone is going to die to advance the plot, you can guess who it’s going to be.

There is tragic gay representation in Star Trek Discovery and it’s nearly exclusively tragic gay representation

In season 1, the first thing we learn about Stamets is that his research has been co-opted by Starfleet for the war effort. The second thing we learn is that he had a friend that he did research with, and in the same episode we also find out that this friend was tragically deformed and then died in terrible pain. In the time-loop episode, Stamets watches everyone he knows and loves die over and over again.

Stamets’ boyfriend Culber is tragically murdered by one of the straight people. His death serves no purpose in season 1 other than to up the stakes for the straight-people drama. We find out that Culber likes something called Kasseelian opera, and the only thing we learn about this kind of opera is that Kasseelian prima donnas tragically kill themselves after a single performance.

In season 2, we find out that Culber is not dead, but rather he has been trapped in the mushroom dimension, but even after they save him, he and Stamets can’t be happy together. Tragic.

Culber’s change into an angry person does however serve as an object lesson for a pep-talk for the straight people partway through the season. And in case you think that I’m reading too much into that, they double down on it by panning over his face while Burnham is giving a voice-over about how she’s personally changed, so they were definitely intentional about him being an object lesson about personal transformation.

At the end of season 2, Culber gets a bunch of unsolicited relationship advice, and guess what, it comes from an even more powerfully tragic lesbian whose partner died. Culber decides to get back together with Stamets, but tragically, he is only able to tell him after Stamets is tragically, heroically and life-threateningly impaled.

There is almost nothing that happens in Stamets and Culber’s story-arc or that we’re told about their back-story that isn’t specifically calculated to just make us feel bad for them. The writers seem to be fine with burning up gay characters, and the people they love, so that by that light, we can better see the straight-people drama.

Why is tragic gay representation in Star Trek a problem?

So you might be thinking, “These aren’t real people. It’s just a story. No gays were harmed in the making of Star Trek Discovery.” Right?

I mean, sort of. No one’s saying it’s as bad as actually hurting gays in real life, but especially in the context of Star Trek, it’s in poor taste, a faux pas, and it sends a homophobic message, whether or not it was intended that way.

First off, the whole premise of Star Trek is: “What if, somehow, hundreds of years in the future, humanity finally got its shit together?” The whole project of Star Trek is to imagine an optimistic, near-utopian, positive conception of a humanity that finally grew up.

This is a future where, when you call the cops, it’s the good guys who show up, so to speak. In this fictional universe, Rule 1 of exploring the galaxy can be summed up as, “don’t be like, colonial about it.” There’s no poverty, no racism, no sexism, and for the “first time” (we can have the erasure talk another day), Star Trek Discovery was supposed to pose the question, “What would it look like for there to be a future in which there was also no hate for queer people?”

And this is part of why it’s so toxic to get these old, tired and low-key homophobic tropes tossed at us in Star Trek. The writers are saying, Even in a future utopia, the best-case-scenario for humanity’s future, the gays still don’t ever get to be happy.

Historically, the bury-your-gays trope hasn’t always come out of innocent sloppiness on the writers’ part. Certainly, sometimes when a writer makes a gay into a tragic one, it’s just because they only have so many characters in their story, and the straight one is the protagonist, so out of this sort of lazy necessity, there’s a lot of rainbow-coloured blood spilled. But that hasn’t always been the case. In a lot of literature, the gays come to a sad end in order to send the message that this is what they deserve. And whether or not the writers at Discovery realize this or if they meant to send that message, they are walking in step with that sad and homophobic tradition. You can only watch so many shows where the queer gets killed, after all, before you start to wonder if there’s a message there.

I don’t think this is being too rough on the show. In the lead-up to Discovery season 1, everyone associated with the show positively crowed about how they were going to do gay representation in Star Trek, and do it right. If you’re gonna preemptively claim moral kudos for representing us gays, you’re gonna be held to a higher standard. There are lots of other TV shows that include gay characters that are examples of good gay representation. (E.g. Felix from Orphan Black is fantastic.)

So if anyone from CBS is reading this, Don’t let your PR people write cheques that your writers aren’t going to honour. If you promise good gay representation, you better deliver. Also if you need a new writer, I’m a published author, I’m looking for a job, I have a PhD in medical ethics, encyclopedic knowledge of Star Trek, and Opinions.

The moral efficiency of clinical trials in anti-cancer drug development

“Doctor”

Stephen Leacock once described the meaning of a doctoral degree from McGill:

I was soon appointed to a Fellowship in political economy, and by means of this and some temporary employment by McGill University, I survived until I took the degree of Doctor of Philosophy in 1903. The meaning of this degree, is that the recipient of instruction is examined for the last time in his life, and is pronounced completely full. After this, no new ideas can be imparted to him.
Stephen Leacock on the degree of Doctor of Philosophy from McGill

And so, for your sake, I am sorry to inform you that on April 2, 2019, I successfully defended my doctoral thesis, The moral efficiency of clinical trials in anti-cancer drug development.

I will be absolutely insufferable about having those new letters after my name for at least another week.

The nuclear option for blocking Facebook and Google

I took the list of domains from the following pages:

  • https://qz.com/1234502/how-to-block-facebook-all-the-urls-you-need-to-block-to-actually-stop-using-facebook/
  • https://superuser.com/questions/1135339/cant-block-connections-to-google-via-hosts-file

And then I edited my computer’s /etc/hosts file to include the following lines.

This blocks my computer from contacting Google and Facebook, and now a lot of sites load way faster. It still allows YouTube, but you can un-comment those lines too, if you like.

Put it on your computer too!

(To view the big block of text that you need to copy into your /etc/hosts, click the following button. It’s hidden by default because it’s BIG.)
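For illustration, the entries look like this (just a few example domains here; the full lists are in the links above):

```
# Block Facebook and Google by resolving their domains to nowhere
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
0.0.0.0 google.com
0.0.0.0 www.google.com
# Un-comment these to block YouTube as well
# 0.0.0.0 youtube.com
# 0.0.0.0 www.youtube.com
```

Mapping a domain to 0.0.0.0 makes connection attempts fail immediately, which is why pages stuffed with trackers feel faster afterward.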

Introducing “Actually the singular is …”, a Mastobot

Good news for pedantic academics who like to correct others’ use of Latin-derived plurals!

I have automated that task for you with a bot on Mastodon:

Actually, the singular is …

I wrote it using Mastodon.py and BeautifulSoup and it’s a very simple little bot. (Only 42 lines of code!)

It takes all the posts currently visible to the bot on its Public Timeline, strips out the HTML formatting, makes a list of all the words that are alphabetic and end in -i or -a (so “plaza” would make it in, but “@plaza” wouldn’t) and then posts (for example): “plaza?” actually the singular is: plazum (if it had ended with an -i, it would have changed the ending to -us).

Then it adds the original word to a list of words not to post again and a cron job makes it repeat every 10 mins.

I had this idea a while back, but I got stuck on the idea that I’d need a pre-made dictionary of words that end in -i or -a. Then it occurred to me that I could just populate that list on the fly from the bot’s perspective on the Timeline. That also has the added benefit that there’s the possibility that the words it chooses may be topical, and not just random.
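The core word-mangling logic boils down to something like this (a simplified sketch; the real bot also uses Mastodon.py for posting and BeautifulSoup for stripping HTML, which I’ve left out here):

```python
def fake_singular(word):
    # The joke: pretend any word ending in -a or -i is a Latin plural,
    # and "correct" it to an invented singular.
    if word.endswith("a"):
        return word[:-1] + "um"   # "plaza" -> "plazum"
    if word.endswith("i"):
        return word[:-1] + "us"
    return None

def candidate_words(text):
    # Only purely alphabetic words qualify, so "plaza" makes it in
    # but "@plaza" doesn't.
    return [w for w in text.split() if w.isalpha() and w[-1] in "ai"]

def toot_for(word):
    return f'"{word}?" actually the singular is: {fake_singular(word)}'
```

A cron job picks a not-yet-used candidate word from the public timeline every 10 minutes and posts the result.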

Enjoy!

The risks and harms of 3rd party tech platforms in academia

CW: some strong language, description of abuse

Apologies for the rambly nature of this post. I wrote it in airports, partly out of frustration, and I may come back and make it more readable later.

In this post, I’m going to highlight some of the problems that come along with using 3rd party tech companies’ platforms on an institutional level in academia. Tech companies have agendas that are not always compatible with academia, and we have mostly ignored that. Briefly, the core problem with using these technologies, and entrenching them in academic life, is that doing so abdicates certain kinds of responsibility. We are giving up control over many of the structures that are necessary for participation in academic work and life, and the people we’re handing the keys to are often hostile to certain members of the academic community, in ways that are often difficult to see.

I have included a short “too long; didn’t read” at the end of each section, and some potential alternatives.

Using a tech company’s services is risky

There’s an old saying: “There’s no such thing as the cloud; it’s just someone else’s computer.” And it’s true, with all the risks that come associated with using someone else’s computer. The usual response to this is something along the lines of “I don’t care, I have nothing to hide.” But even if that’s true, that isn’t the only reason someone might have for avoiding the use of 3rd party tech companies’ services.

For starters, sometimes tech companies fail on a major scale that could endanger entire projects. Do you remember in 2017 when a bug in Google Docs locked thousands of people out of their own files because they were flagged as a violation of the terms of use?

https://twitter.com/widdowquinn/status/925360317743460352

Or more recently, here’s an example of a guy who got his entire company banned by Google by accident, proving that you can lose everything because of someone else’s actions:

“TIFU by getting Google to ban our entire company while on the toilet” (posted to r/tifu)

And of course, this gets worse for members of certain kinds of minorities. Google and Facebook, for example, both have real-names policies, which are hostile to people who are trans, and to people from our country’s First Nations:

Facebook tells Native Americans that their names aren’t “real”

There are other risks beyond just data loss—for example, if your research involves confidential data, you may be overstepping the consent of your research subjects, and potentially violating the terms under which your institutional review board approved your study, by putting it on a 3rd party server where others can access it. This may also be the case for web apps that include Google Analytics.

tl;dr—If your academic work depends on a 3rd party tech company’s services, you risk: losing your work at a critical time for reasons that have nothing to do with your own conduct; violating research subject consent; and you may be excluding certain kinds of minorities.

Alternatives—In this section, I have mostly focused on data sharing risks. You can avoid using Google Docs and Dropbox by sharing files on a local computer through Syncthing, or by installing an encrypted Nextcloud on a server. If distribution of data sets is your use case, you could use the Dat Project / Beaker Browser.

Tech companies’ agendas are often designed to encourage abuse against certain minorities

I have touched on this already a bit, but it deserves its own section. Tech companies have agendas and biases that do not affect everyone equally. For emphasis: technology is not neutral. It is always a product of the people who built it.

For example, I have been on Twitter since 2011. I have even written Twitter bots. I have been actively tweeting for most of that time, both personally and about my research. And because I am a queer academic, I have been the target of homophobic trolls nearly constantly.

I have received direct messages and public replies to my tweets in which I was told to kill myself, called a “fag,” and in which a user told me he hopes I get AIDS. Twitter also locked my account for a short period of time because someone reported me for using a “slur”—you see, I used the word “queer.” To describe myself. It took some negotiation with Twitter support, and the deletion of some of my tweets, to get back on.

I was off Twitter for a number of months because of this and out of a reluctance to continue to provide free content to a website that’s run by a guy who periodically retweets content that is sympathetic to white supremacists:

Twitter CEO slammed for retweeting man who is pro-racial profiling

And this isn’t something that’s incidental to Twitter / Facebook that could be fixed. It is a part of their core business model, which is about maximising engagement. And the main way they do that is by keeping people angry and yelling at each other. These platforms exist to encourage abuse, and they are run by people who will never have to endure it. That’s their meal-ticket, so to speak. And most of that is directed at women, minorities and queers.

I have been told that if I keep my Twitter account “professional” and avoid disclosing my sexuality that I wouldn’t have problems with abuse. I think the trolls would find me again if I did open a new account, but even if it were the case that I could go back into the closet, at least for professional purposes, there are four reasons why I wouldn’t want to:

  • My experience as a queer academic medical ethicist gives me a perspective that is relevant. I can see things that straight people miss, and I have standing to speak about those issues because of my personal experiences.
  • Younger queers in academia shouldn’t have to wonder if they’re the only one in their discipline.
  • As a good friend of mine recently noted, it’s unfair to make me hide who I am while the straight men all have “professor, father and husband” or the like in their Twitter bios.
  • I shouldn’t have to carefully avoid any mention of my boyfriend or my identity in order to participate in academic discussions, on pain of receiving a barrage of abuse from online trolls.

I’m not saying that everyone who uses Twitter or Facebook is bad. But I am extremely uncomfortable about the institutional use of platforms like Google/Facebook/Twitter for academic communications. When universities, journals, academic departments, etc. use them, they are telling us all that this kind of abuse is the price of entry into academic discussions.

tl;dr—Using 3rd-party tech company platforms for academic communications, etc. excludes certain people or puts them in the way of harm, and this disproportionately affects women, minorities and queers.

Alternatives—In this section, I have mostly focused on academic communications. For micro-blogging, there is Mastodon, for example (there are even instances for science communication and for academics generally). If you are an institution like an academic journal, a working RSS feed (or several, depending on your volume of publications) is better than a lively Twitter account.
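To make the RSS suggestion concrete, here is a minimal sketch of what generating such a feed might look like, using only the Python standard library. The journal name, URLs and article details are invented for illustration, and real feeds would want a few more fields, but the structure is this simple:

```python
import xml.etree.ElementTree as ET

def build_feed(title, link, items):
    """Build a minimal RSS 2.0 feed as an XML string.

    `items` is a list of (title, link, pub_date) tuples,
    one per publication.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    # RSS 2.0 requires <title>, <link> and <description> on the channel.
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = title
    for item_title, item_link, pub_date in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
        ET.SubElement(item, "pubDate").text = pub_date
    return ET.tostring(rss, encoding="unicode")

# Hypothetical journal and article, for illustration only.
feed = build_feed(
    "Example Journal",
    "https://journal.example.org",
    [("New article", "https://journal.example.org/articles/1",
      "Mon, 01 Jul 2019 00:00:00 GMT")],
)
print(feed)
```

A static file generated like this and served at a stable URL is all a feed reader needs to subscribe; no third-party platform, account or algorithm is involved.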

Tech companies are not transparent in their decisions, which often cannot be appealed

Some of the problems with using 3rd party tech company platforms go beyond just the inherent risks in using someone else’s computer, or abuse by other users—in many cases, the use of their services is subject to the whims of their support personnel, who may make poor decisions out of carelessness, a discriminatory policy, or for entirely inscrutable or undisclosed reasons. And because these are private companies, there may be nothing that compels them to explain themselves, and no way to appeal such a decision, leaving anyone caught in a situation like this unable to participate in some aspect of academic life.

For example, in the late 2000s, I tried to make a purchase with Paypal and received an error message. I hadn’t used my account for years, and I thought my credit card just needed to be updated. On visiting the Paypal website, I found that my account had been closed permanently. I assumed this was a mistake that could be resolved, so I contacted Paypal support. They informed me that I had somehow violated their terms of use, and that this decision could not be appealed under any circumstance. The best explanation I could ever get from them was, to paraphrase, “You know what you did.”

This was baffling to me, as I hadn’t used Paypal in years and I had no idea what I could have possibly done. I tried making a new account with a new email address. When I connected my financial details to this account, it was also automatically closed. I’ve tried to make a new account a few times since, but never with success. As far as I can tell, there is no way for me to ever have a Paypal account again.

And that wasn’t a problem for me until a few months ago when I tried to register for some optional sessions at an academic conference that my department nominated me to attend. In order to confirm my place, I needed to pay a deposit, and the organizers only provided Paypal (not cash or credit card) as a payment option.

And this sort of thing is not unique to my situation either. Paypal has a long, terrible and well-documented history of arbitrarily closing accounts (and appropriating any money involved). This is usually in connexion with Paypal’s bizarre and sometimes contradictory policies around charities, but this also affects people involved in sex work (reminder: being a sex worker is perfectly legal in Canada).

Everything worked out for me in my particular situation at this conference, but it took work. After several emails, I was eventually able to convince the organizers to make an exception and allow me to pay by cash on arrival, but I still had to explain to them why I have no Paypal account, why making a new one wouldn’t work, and that I wasn’t just being a technophobe or deliberately difficult. I was tempted to just opt out of the sessions because I didn’t want to go through the embarrassment of explaining my situation.

And my problem with Paypal was a “respectable” one—it’s just some weird mistake that I’ve never been able to resolve with Paypal. Now imagine trying to navigate a barrier to academic participation like that if you were a person whose Paypal account was closed because you got caught using it for sex work. Do you think you’d even try to explain that to a conference organizer? Or would you just sit those sessions out?

tl;dr—When you use services provided by tech companies, you may be putting up barriers to entry for others that you are unaware of.

Alternatives—This section was about money, and there aren’t that many good solutions. Accept cash. And when someone asks for special accommodation, don’t ask them to justify it.

Conclusion

Technology isn’t neutral. It’s built by people, who have their own biases, agendas and blind spots. If we really value academic freedom, and we want to encourage diversity in academic thought, we need to be very critical about the technology that we adopt at the institutional level.

Levels of queer representation in media franchises

Warning: spoilers for Star Trek Discovery season 1.

A few days ago, I wrote a short post about “steps” that media franchises take toward queer representation in media. I got a few requests for clarification, so I have expanded it into a table with examples and explanations.

The impulse behind the post was that resistance to representation of queers in media franchises is very predictable, falling into well-known and discrete categories, and that those categories can be ordered from worst to best. I absolutely do not take credit for being the first observer of these dynamics, as many people much smarter than me have commented on them all before. But this is how I see them, on a sort of a continuum.

I originally phrased it in terms of “steps,” although now I don’t think that’s the best way to think about it, so I call them “levels” here. There’s certainly a gradation from 1-7, but of course, not every media franchise goes one by one. Each successive level is better in some sense than the one previous (although you could probably haggle over the ordering in some cases). Levels 1-3 are refusals to represent queers, and levels 5-7 are (decreasingly begrudging) attempts at queer representation. Level 4 is, or is not, queer representation, depending on your reading of it.

1. Refusal without any attempt at justification

“No queers because writers hate them”

Naked hate for queers, while surprisingly common, is the least interesting case, and nobody needs help identifying it. But most haters are more sophisticated: they know they’ll (rightly) get in trouble if they’re openly hateful toward queers, so they usually dress up their hate by progressing to one of the subsequent levels.

I’ve included this one because you do still see it from time to time, and it’s interesting to note how often it appears, as a sort of proxy measure for how friendly our culture is to queers.

2. Refusal with some ostensibly principled reason given

“No queers, but it’s for your own good or something”

A justification that is often given for refusal to include queer characters in a story, TV show, movie or whatever is that it would be “pandering,” and that this would be bad somehow. Straight people love to project onto queers a very strong desire that we not be included in something if it is just for the sake of inclusion, and that doing so would be much worse than the default, namely, pretending that queers don’t exist.

This is of course nonsense. The only people who don’t like queers being included just for the sake of including queers are straight people, and this is a feeble excuse for bigotry. But there’s nothing conservative people like better than telling someone No, and having it be For Their Own Good.

3. Refusal, but with a vague promise of future representation

“No queers, but just because we haven’t had a story that demands a queer”

Star Wars is at this point right now. They know there have been calls for queer characters, and they know they can’t just say “No we hate queers,” so they’re kicking the can down the road by saying “not now, but we’re not opposed to the idea,” and trusting we’ll take them at their word when they claim that it’s just “artistic reasons” that have kept them from doing so already.

It’s disingenuous and a double-standard, of course. When these writers come up with characters to put into their stories, nobody demands a compelling plot reason for why some character is straight. And if they think they can get away with it, they will continue forever to promise to include queers someday, but no sooner than the story absolutely demands it.

4. Strongly implied but ultimately deniable queer representation

“Okay maybe this secondary character is queer? We didn’t say they’re not!”

Harry Potter is an excellent example of this level. JK Rowling famously made Dumbledore gay retroactively after all the book sales were in the bag, and justified it with the condescending Twitter one-liner that gay people just look like … people! So why write them any differently or provide any explicit representation?

The problem with the JK Rowling-type response is its extremely “everything is already perfect, so stop complaining” vibe. The line that we queers are supposed to believe is that the writer is just so progressive that queer representation is unnecessary.

The Star Trek reboot films are also at this level. Everyone got excited about Sulu being gay in Star Trek Beyond (2016), but what we got was 2.5 seconds of Sulu side-hugging some guy, who we’re supposed to take as his civilian husband or partner or something. No kiss. No dialogue. It’s so casual that a straight viewer could miss it entirely or see it and argue that Sulu is meeting his brother or something. And truthfully, there isn’t enough on-screen to settle the matter conclusively.

Reboot Sulu

It is both an erasure of queers and a not-so-subtle assertion of control that can be read as disapproval of any queer who isn’t indistinguishable from straights. Also, because this sort of thing generally flies under bigots’ radar, it is a cowardly concession to their sensibilities.

Many Disney villains are also in this category. Ursula from The Little Mermaid (1989) was even based on a drag queen. Straight people love to include queer-coded villains, because they love queer aesthetics, but they refuse to actually provide representation, because they hate queers.

5. Tragic queer representation

“Okay there’s a queer but oops we killed it lol oops”

“Bury your gays” is a well-known and troubling mode of queer representation that has a long history, and it can be summed up as the trope that queers are not allowed a happy ending.

Sometimes, this is done to send a very specific “you get what you deserve, you dirty queers” message. Read Goldfinger if you want a really clear example of that. (Click the Project Gutenberg link and control-F for “Poor little bitch” to read about what happened to Tilly Masterton because she was a capital-L lesbian.)

Goldfinger is an extreme case, and most of the time it’s not an attempt to send an “if you’re gay you deserve a violent death” message. Usually it happens just because the main character Has To Be Straight, so if a character is going to die, it will be the queer one, and we end up with a preponderance of media representation in which queers are killed just to raise the stakes for the main characters, because we’re less important.

We saw this with Lt Stamets in the most recent episode of Star Trek Discovery (S01E09). I thought Discovery was at level 6, but actually it’s looking like it’s here. Maybe they’ll save him? Reserving judgement on this one.

6. Queer representation designed for straight comfort

“Okay there’s a queer but you can’t really tell they’re queer—Super respectable! Very comfortable for straights!”

Often queer representation that is designed for the comfort of straight people is done with the justification that it’s an attempt to “move past bad gay stereotypes.” Again, this is a more subtle example of straight people denying us actual representation and doing it For Our Own Good.

The idea is that there are, according to the straights, way too many flamboyant or promiscuous gay characters perpetuating bad stereotypes and what the world really needs is to see queer people being “good” and that will make all those bigots understand that you can be a good person even if you’re gay. What a progressive message, and who could be upset about that? Right?

The problem is that this has built into it the not-so-subtle assertion of a) control of straight people over queer lives and lifestyles, and b) implied inferiority of queers to straights.

When straights insist that queer representation include only the “good kind” of queer, there’s an implication that there’s a bad kind, and that our acceptance and inclusion hinges on which kind we are. It is tacitly agreeing that queers should be ashamed of who they are and doing their best to hide it.

This often goes hand-in-hand with a denial of the very existence of queer culture. For anything that isn’t literally gay sex, a Respectable-Gays-Only advocate can say “well, that’s not gay per se, so it’s non-homophobic for me to continue to hate it.”

7. Actual queer representation

This does actually happen sometimes. Here are a couple of examples:

Pretty much everyone in Sense8 counts. Even the straight dude is not all that straight.

Orphan Black also started at this level. Felix is an excellent example of queer representation: a nuanced, well-developed character who is undeniably gay, and also unambiguously morally good, despite having a number of the traits that Respectable-Gays-Only advocates would probably categorise as gay stereotypes.