Introducing: Clinical trials viewer

The development of a new drug is often depicted as an orderly, linear progression: from small phase 1 trials testing safety, to somewhat larger phase 2 trials that generate efficacy hypotheses, and finally to larger phase 3 pivotal trials. It is often described as a “pipeline,” and depicted as such in pharmacology lectures and textbooks.

However, the reality of clinical trial activity is much more complicated. For example, a clinical trial does not occur on a single date, but rather extends over time, often overlapping with trials in earlier or later phases. Earlier-phase trials sometimes follow later-phase ones, and intermediate phases are sometimes skipped altogether. Trial activity often continues after licensure, and grasping the sheer volume of research, along with all the available meta-data, can be difficult.

To illustrate the totality of registered clinical trial activity reported on clinicaltrials.gov, the STREAM research group at McGill University has been using Clinical trials viewer as an internal research tool since 2012. This software is now available for others to use, install on their own servers, or modify (under the constraint that you make the source code for any modifications available, as per the AGPL).

Methodology

Clinical trials viewer downloads and parses information from clinicaltrials.gov at the time of search to populate the graph of clinical trials. FDA information is updated weekly from the Drugs@FDA dataset and the FDA postmarketing commitment data set.
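To give a sense of what that retrieval step involves, here is a minimal sketch in Python of pulling basic trial records for a search term. It is not the viewer’s actual code (the viewer is a PHP application), and the endpoint and field names below are assumptions based on the public ClinicalTrials.gov v2 API:

```python
# Minimal sketch only: Clinical trials viewer is a PHP application and may use a
# different ClinicalTrials.gov endpoint; the URL and field names here are
# assumptions based on the public v2 API.
import requests

def fetch_trials(search_term, max_records=50):
    """Fetch basic records for registered trials matching a search term."""
    response = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={"query.term": search_term, "pageSize": max_records},
        timeout=30,
    )
    response.raise_for_status()
    trials = []
    for study in response.json().get("studies", []):
        protocol = study.get("protocolSection", {})
        identification = protocol.get("identificationModule", {})
        status = protocol.get("statusModule", {})
        trials.append({
            "nct_id": identification.get("nctId"),
            "title": identification.get("briefTitle"),
            "phases": protocol.get("designModule", {}).get("phases"),
            "start_date": status.get("startDateStruct", {}).get("date"),
            "completion_date": status.get("completionDateStruct", {}).get("date"),
        })
    return trials

if __name__ == "__main__":
    for trial in fetch_trials("imatinib", max_records=10):
        print(trial)
```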

Yellow flags indicating the dates for FDA submissions appear flattened along the top of the grid. Red flags indicating the dates for FDA documents also appear flattened in the FDA information section. Cyan flags indicating the original projected completion date for FDA postmarketing commitments (PMCs) and requirements (PMRs) also appear here. PMCs and PMRs can be clicked to reveal more details. There are buttons to expand or flatten each of these categories.

Below the horizontal rule, there is a graph, aligned to the same horizontal date scale, indicating the opening and closure dates for clinical trials registered with the NLM clinical trial registry. By default, trials are sorted by start date, but they can be sorted according to other meta-data. Each trial can be clicked to reveal more information.

There are two boxes at the left. The top box contains buttons for customizing the display of information. The bottom box contains a table of all the products found in the FDA database that match the search term provided, sorted by NDA (New Drug Application) number. This application number also appears on all the FDA submissions, documents and PMCs/PMRs.

How do I use it?

Visit trials.bgcarlisle.com for a live version of Clinical trials viewer.

Type the name of a drug into the search field and press enter or click the search button. This will bring up a graph of clinical trial data retrieved from clinicaltrials.gov, along with FDA submissions, FDA documents and postmarketing commitments and requirements.

Can I install it on my own server?

Yes, and if you intend to use it a lot, I recommend that you do, so that you don’t crash mine! I have provided the source code, free of charge, and licensed it under the AGPL. This means that anyone can use my code; however, if you build anything on it, or make any modifications to the code, you are obliged to publish your changes.

Acknowledgements

Clinical trials viewer was built for installation on a LAMP stack, using Bootstrap v4.3.1 and jQuery v3.4.1.

Clinical trials viewer draws data to populate its graphs from the following sources: clinicaltrials.gov, Drugs@FDA, FDA PMC Download.

Clinical trials viewer was originally designed for use by the STREAM research group at McGill University in Montreal, Canada, to work on the Signals, Safety and Success CIHR grant.

There’s another reason why Pfizer actually did the right thing with Enbrel in Alzheimer’s

The Washington Post recently ran a story about a potential use of Enbrel, an anti-inflammatory drug, to reduce the risk of Alzheimer’s disease. The article reports that a non-randomized, non-interventional, retrospective, unpublished, non-peer-reviewed internal review of insurance claims found that use of the drug was correlated with a reduced risk for Alzheimer’s disease. The hypothesis was dismissed after internal review, on the basis of scientific (and probably business) considerations.

You can probably guess my take on it, given the way I describe it, but for people who are unfamiliar with the hierarchy of medical evidence, this kind of data mining represents pretty much the lowest, most unreliable form of medical evidence there is. If you were, for some reason, looking for even lower-level evidence than this, you would need to go into the case study literature, but even that would at least be peer-reviewed and published for public scrutiny. If you need further convincing, Derek Lowe has already published a fairly substantial debunking on his blog, explaining why this was not a missed Alzheimer’s opportunity.

Many people, even after being convinced that Pfizer probably made the right call in not pursuing a clinical trial of Enbrel in Alzheimer’s, still think that the evidence should have been made public.

There have been lots of opinions thrown around on the subject, but there’s one point that people keep missing, and it relates to a paper I wrote in the British Medical Journal (also the final chapter of my doctoral thesis). Low-level evidence that raises suspicion of a drug’s activity, when it is not swiftly followed up with confirmatory testing, can create something that we call “Clinical Agnosticism.”

Not all evidence is sufficient to guide clinical practice. But in the absence of a decisive clinical trial, well-intentioned physicians or patients who have exhausted approved treatment options may turn to off-label prescription of approved drugs in indications that have not received regulatory sanction. This occurs where there is a suggestion of activity from exploratory trials, or in this case, extremely poor quality retrospective correlational data.

Pfizer should not be expected to publish every spurious correlation that can be derived from any data set. In fact, doing so would only create Clinical Agnosticism, and potentially encourage worse care for patients.

On techbros in healthcare and medical research

My thoughts on the following may change over time, but at least for me, I have found it helpful to think about the threats that techbros pose to healthcare and medical research in terms of the following four major categories. These aren’t completely disjoint of course, and even the examples that I give could, in many cases, fit under more than one. I am also not claiming that these categories exhaust the ways that techbros pose a threat to healthcare.

1. Medical-grade technosolutionism, or “medicine plus magic”

When Elizabeth Holmes founded her medical diagnostics company Theranos, she fit exactly into the archetype that we all carry around in our heads for the successful whiz-kid tech-startup genius. She was not just admitted to Stanford University, but she was too smart for it, and dropped out. She wore Steve Jobs-style black turtlenecks. She even founded her company in the state of California—innovation-land.

She raised millions of dollars for her startup, based on the claim that she had come up with a novel method for doing medical diagnostics (dozens of them) from a single drop of blood. The research supposedly backing up these claims took place entirely outside the realm of peer review. This practice was derisively called “stealth research,” and was generally criticized because of the threat that this mode of innovation posed to the enterprise of medical research as a whole.

It was, of course, too good to be true. Theranos has since been shut down and exposed as a complete fraud. This sort of thing happens on a smaller scale on crowdfunding sites with some regularity. (Remember the “Healbe” Indiegogo?)

While I’m not entirely sure what is driving this particular phenomenon, I have a few pet theories. For starters, we all want to believe in the whiz-kid tech-startup genius myth so much that we collectively just let this happen out of sheer misguided hope that the techbros will somehow get it right. And on some level, I understand that impulse. Medical research progress is slow, and it would be wonderful if there actually were a class of smart and talented geniuses out there who could solve it by just applying their Apple Genius powers to the matter. Alas, it is not that easy.

And unfortunately, there is a certain kind of techbro who does think like that: “I’m a computer-genius. Medicine is just a specialized case that I can just figure out if I put my mind to it.” And there’s also a certain kind of medical professional who thinks, “I’m a doctor. I can figure out how to use a computer, thank-you-very-much.” And when those two groups of people intersect, sometimes they don’t call each other out on their lack of specialized knowledge, but rather, they commit synergy.

And worse, all this is happening under an extreme form of capitalism that has poisoned our minds to the extent that the grown-ups—the people who should know better—are turning a blind eye because they can make a quick buck.

Recommended reading: “Stealth Research: Is Biomedical Innovation Happening Outside the Peer-Reviewed Literature?” by John Ioannidis, JAMA. 2015;313(7):663-664.

2. The hype of the week (these days, it’s mostly “medicine plus blockchain”)

In 2014 I wrote a post on this very blog that I low-key regret. In it, I suggest that the blockchain could be used to prospectively timestamp research protocols. This was reported on in The Economist in 2016 (they incorrectly credit Irving and Holden; long story). Shortly thereafter, there was a massive uptick of interest in applications of the blockchain to healthcare and medical research. I’m not claiming that I was the first person to think about blockchain in healthcare and research, or that my blog post started the trend, but I am a little embarrassed to say that I was a part of it.

Back in 2014, being intrigued by the novelty of the blockchain was defensible. There’s a little bit of crypto-anarchist in all of us, I think. At the time, people were just starting to think about alternate applications for it, and there was still optimism that the remaining problems with the technology might be solved. By 2016, blockchain was a bit passé: the nagging questions about its practicality that everyone thought would have been solved by that point just weren’t. Now it’s 2019, the blockchain as a concept has been around for a full ten years, and I think it’s safe to say that those solutions aren’t coming.

There just aren’t any useful applications for the blockchain in medicine or science. The kinds of problems that medicine and science have are not the kinds of problems that a blockchain can solve. Even my own proposed idea from 2014 is better addressed in most cases by using a central registry of protocols.

Unfortunately, there continues to be well-funded research on blockchain applications in healthcare and science. It is a tech solution desperately in search of a problem, and millions in research funding have already been spent toward this end.

This sort of hype cycle doesn’t just apply to “blockchain in science” stuff, although that is probably the easiest one to spot today. Big new shiny things show up in tech periodically, promising to change everything. And with surprising regularity, there is an attempt to shoehorn them into healthcare or medical research.

It wasn’t too long ago that everyone thought that smartphone apps would revolutionize healthcare and research. (They didn’t!)

3. “The algorithm made me do it”

Machine learning and artificial intelligence (ML/AI) techniques have been applied to every area of healthcare and medical research you can imagine. Some of these applications are useful and appropriate. Others are poorly-conceived and potentially harmful. Here I will gesture briefly toward some ways that ML/AI techniques can be applied within medicine or science to abdicate responsibility or bolster claims where the evidence is insufficient to support them.

There are a lot of problems that could go under this banner, and I’m not going to claim that this is a comprehensive overview of the problems with ML/AI, but many of the major ones stem from the “black box” nature of ML/AI techniques. That opacity is a hard problem to solve, because it is almost a constitutive part of what a lot of ML/AI techniques are.

The big idea behind machine learning is that the algorithm “teaches itself,” in some sense, how to interpret the data and make inferences. Often, this means that ML/AI techniques don’t easily allow the person using them to audit the way that inputs into the system are turned into outputs. There is work going on in this area, but ML/AI often doesn’t lend itself well to explaining itself.
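To make the point concrete, here is a small, hypothetical sketch in Python (toy data, scikit-learn; everything in it is my own illustration, not any particular medical system) of what this opacity looks like in practice: the model produces answers readily, but its fitted parameters are nothing like a human-readable justification.

```python
# Toy illustration of the "black box" problem; not a real medical model.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Fabricated data standing in for, say, patient measurements and outcomes.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
model.fit(X, y)

case = X[:1]                        # one new "case" with 20 measurements
print(model.predict(case))          # a confident-looking answer...
print(model.predict_proba(case))    # ...with a probability attached,
# ...but the only "explanation" on offer is a stack of learned weight matrices:
print([weights.shape for weights in model.coefs_])
```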

There is an episode of Star Trek, called “The Ultimate Computer,” in which Kirk’s command responsibilities are in danger of being given over to a computer called the “M-5.” As a test of the computer, Kirk is asked who he would assign to a particular task, and his answer differs slightly from the one given by the M-5. My ability to suspend disbelief was most thoroughly tested when the M-5 was asked to justify why it made the decision it did, and was able to do so.

I’ve been to tutorials offered at a couple of different institutions where they teach computer science students (or tech enthusiasts) to use Python machine-learning libraries and other similar software packages. Getting an answer to “Why did the machine learning programme give me this particular answer?” is really, really hard.

This means that potential misuses or misinterpretations are difficult to address. Once you get past a very small number of inputs, there’s rarely any thought given to trying to figure out why the software gave you the answer it did, and in some cases it becomes practically impossible to do so, even if you wanted to.

And with the advent of “Big Data,” there is often an unspoken assumption that if you just get enough bad data points, machine learning or artificial intelligence will magically transmute them into good data points. Unfortunately, that’s not how it works.

This is dangerous because the opaque nature of ML/AI may hide invalid scientific inferences based on analyses of low-quality data, leading well-meaning researchers and clinicians who rely on robust medical evidence to provide poorer care. Decision-making algorithms may also mask the unconscious biases built into them, giving them the air of detached impartiality while retaining all the human biases of their programmers.

And there are many problems with human biases being amplified while at the same time being presented as impartial through the use of ML/AI, but it’s worth mentioning that these problems will of course harm the vulnerable, the poor, and the marginalized the most. Or, put simply: the algorithm is racist.

Techbros like Bill Gates and Elon Musk are deathly afraid of artificial intelligence because they imagine a superintelligent AI that will someday, somehow take over the world or something. (I will forgo an analysis of the extreme hubris of the kind of person who needs to imagine a superhuman foe for themselves.) A bigger danger, and one that is already ongoing, is the noise and false signals that ML/AI will insert into the medical literature, and the way it obscures the biases of the powerful.

Recommended reading: Weapons of Math Destruction by Cathy O’Neil.

4. Hijacking medical research to enable the whims of the wealthy “… as a service”

I was once at a dinner party with a techbro who has absolutely no education at all in medicine or cancer biology. I told him that I was doing my PhD on cancer drug development ethics. He told me with a straight face that he knew what “the problem with breast cancer drug development” is, and could enlighten me. I took another glass of wine as he explained to me that the real problem is that “there aren’t enough disruptors in the innovation space.”

I can’t imagine being brazen enough to tell someone who’s doing their PhD on something that I know better than them about it, but that’s techbros for you.

And beyond the obnoxiousness of this anecdote, this is an idea that is common among techbros—that medicine is being held back by “red tape” or “ethical constraints” or “vested interests” or something, and that all it would take is someone who could “disrupt the industry” to bring about true innovation and change. They seriously believe that if they were just given the reins, they could fix any problem, even ones they are entirely unqualified to address.

For future reference, whenever a techbro talks about “disrupting an industry,” they mean: “replicating an already existing industry, but subsidizing it heavily with venture capital, and externalizing its costs at the expense of the public or potential workers by circumventing consumer-, worker- or public-protection laws in order to hopefully undercut the competition long enough to bring about regulatory capture.”

Take, for example, Peter Thiel. (Ugh, Peter Thiel.)

He famously funded offshore herpes vaccine tests in order to evade US safety regulations. He is also extremely interested in life extension research, including transfusions from healthy young blood donors. He was willing to literally suck the blood from young people in the hopes of extending his own life. And these treatments were gaining popularity, at least until the FDA made a statement warning that they were dangerous and ineffective. He also created a fellowship to enable students to drop out of college to pursue other things such as scientific research outside of the academic context. (No academic institution, no institutional review board, I suppose.)

And this is the work of just one severely misguided techbro who is able to make all kinds of shady research happen because of the level of wealth that he has been allowed to accumulate. Other techbros are leaving their mark on healthcare and research in other ways. The Gates Foundation, for example, is strongly “pro-life,” which is one of the strongest arguments I can think of for why philanthropists should instead be taxed, and the funds they would have spent on their conception of the public good dispersed through democratic means, rather than allowing the personal opinions of an individual to become de facto healthcare policy.

The moral compass behind techbro incursions into medical research is calibrated to a different North than the one most of us recognize. Maybe one could come up with a way to justify any one of these projects morally. But you can see that the underlying philosophy (“we can do anything if you’d just get your pesky ‘ethics’ out of the way”) and priorities (e.g. slightly longer life for the wealthy at the expense of the poor) are different from what we might want to be guiding medical research.

Why is this happening and what can be done to stop it?

Through a profound and repeated set of regulatory failures, and a sort of half-resigned public acceptance that techbros “deserve” on some level to have levels of wealth comparable to those of nation states, we have put ourselves in a position where a single techbro can pervert the course of entire human research programmes. Because of the massive power that they hold over industry, government and nearly every part of our lives, we have come to uncritically idolize techbros, and this has leaked into the way we think about applications of their technology in medicine and science. This was all, of course, a terrible mistake.

The small-picture solution is to do all the things we should be doing anyway: ethical review of all human research; peer review and publication of research (even research done with private funds); demanding high levels of transparency for new technologies applied to healthcare and research; etc. A high proportion of the damage they so eagerly want to cause can probably be avoided if all our institutions are always working at peak performance and nothing ever slips through the cracks.

The bigger-picture solution is that we need to fix the massive regulatory problems in the tech industry that allowed techbros to become wealthy and powerful in the first place. Certainly, a successful innovation in computer technology should be rewarded. But that reward should not include the political power to direct the course of medicine and science for their own narrow ends.

Star Trek Discovery has a problem with tragic gay representation

Content warning: strong language; description of violence; death; abuse; spoilers for seasons 1 and 2 of Star Trek Discovery

I will start by briefly telling you what tragic gay representation is. I will make a case that Star Trek Discovery has provided nearly exclusively tragic gay representation in seasons 1 and 2. I will conclude by telling you why this is a problem.

What do I mean by “tragic gay representation?”

I have written previously about what I have described as different “levels” of queer representation in media. Here I will focus on tragic gay representation, also known as the “bury your gays” trope.

When I talk about “tragic” representation, I don’t necessarily mean cases in which a queer person dies (although that happens often enough). By “tragic gay representation,” I mean representation in which gay characters are denied a happy ending. While this happens to trans and bi queer people as well, I will mostly be talking about gay representation here, as the specific characters involved in Star Trek Discovery are gay and lesbian, and different (but related) dynamics are present for bi and trans representation.

Tragic gay representation has a very long history. Lesbian representation in media is particularly prone to ensuring that lesbians, when they are depicted at all, are either killed, or converted to being straight. Now that you’ve had it pointed out to you, you’ll start seeing it everywhere too.

Of course, no, not every gay character in every TV show and movie dies, and of course, not every character who dies is gay, but there are fewer gays and lesbians on-screen in general, and unless it’s a film about queer issues, the main character Has To Be Straight, so if someone is going to die to advance the plot, you can guess who it’s going to be.

There is tragic gay representation in Star Trek Discovery and it’s nearly exclusively tragic gay representation

In season 1, the first thing we learn about Stamets is that his research has been co-opted by Starfleet for the war effort. The second thing we learn is that he had a friend that he did research with, and in the same episode we also find out that this friend was tragically deformed and then died in terrible pain. In the time-loop episode, Stamets watches everyone he knows and loves die over and over again.

Stamets’ boyfriend Culber is tragically murdered by one of the straight people. His death serves no purpose in season 1 other than to up the stakes for the straight-people drama. We find out that Stamets’ boyfriend likes something called Kasseelian opera, and the only thing we find out about this kind of opera is that Kasseelian prima donnas tragically kill themselves after a single performance.

In season 2, we find out that Culber is not dead, but rather he has been trapped in the mushroom dimension, but even after they save him, he and Stamets can’t be happy together. Tragic.

Culber’s change into an angry person does however serve as an object lesson for a pep-talk for the straight people partway through the season. And in case you think that I’m reading too much into that, they double down on it by panning over his face while Burnham is giving a voice-over about how she’s personally changed, so they were definitely intentional about him being an object lesson about personal transformation.

At the end of season 2, Culber gets a bunch of unsolicited relationship advice, and guess what, it comes from an even more powerfully tragic lesbian whose partner died. Culber decides to get back together with Stamets, but tragically, he is only able to tell him after Stamets is tragically, heroically and life-threateningly impaled.

There is almost nothing that happens in Stamets and Culber’s story-arc or that we’re told about their back-story that isn’t specifically calculated to just make us feel bad for them. The writers seem to be fine with burning up gay characters, and the people they love, so that by that light, we can better see the straight-people drama.

Why is tragic gay representation in Star Trek a problem?

So you might be thinking, “These aren’t real people. It’s just a story. No gays were harmed in the making of Star Trek Discovery.” Right?

I mean, sort of. No one’s saying it’s as bad as actually hurting gays in real life, but especially in the context of Star Trek, it’s in poor taste, a faux pas, and sends a homophobic message in real life, whether or not it was intended that way.

First off, the whole premise of Star Trek is: “What if, somehow, hundreds of years in the future, humanity finally got its shit together?” The whole project of Star Trek is to imagine an optimistic, near-utopian, positive conception of a humanity that finally grew up.

This is a future where, when you call the cops, it’s the good guys who show up, so to speak. In this fictional universe, Rule 1 of exploring the galaxy can be summed up as, “don’t be like, colonial about it.” There’s no poverty, no racism, no sexism, and for the “first time” (we can have the erasure talk another day), Star Trek Discovery was supposed to pose the question, “What would it look like for there to be a future in which there was also no hate for queer people?”

And this is part of why it’s so toxic to get these old, tired and low-key homophobic tropes tossed at us in Star Trek. The writers are saying, Even in a future utopia, the best-case-scenario for humanity’s future, the gays still don’t ever get to be happy.

Historically, the bury-your-gays trope hasn’t always come out of innocent sloppiness on the writers’ part. Certainly, sometimes when a writer makes a gay into a tragic one, it’s just because they only have so many characters in their story, and the straight one is the protagonist, so out of this sort of lazy necessity, there’s a lot of rainbow-coloured blood spilled. But that hasn’t always been the case. In a lot of literature, the gays come to a sad end in order to send the message that this is what they deserve. And whether or not the writers at Discovery realize this or if they meant to send that message, they are walking in step with that sad and homophobic tradition. You can only watch so many shows where the queer gets killed, after all, before you start to wonder if there’s a message there.

I don’t think this is being too rough on the show. In the lead-up to Discovery season 1, everyone associated with the show positively crowed about how they were going to do gay representation in Star Trek, and do it right. If you’re gonna preemptively claim moral kudos for representing us gays, you’re gonna be held to a higher standard. There are lots of other TV shows that include gay characters that are examples of good gay representation. (E.g. Felix from Orphan Black is fantastic.)

So if anyone from CBS is reading this, Don’t let your PR people write cheques that your writers aren’t going to honour. If you promise good gay representation, you better deliver. Also if you need a new writer, I’m a published author, I’m looking for a job, I have a PhD in medical ethics, encyclopedic knowledge of Star Trek, and Opinions.

The moral efficiency of clinical trials in anti-cancer drug development

“Doctor”

According to Stephen Leacock, the meaning of a doctoral degree from McGill is that “the recipient of instruction is examined for the last time in his life, and is pronounced completely full. After this, no new ideas can be imparted to him.”

I was soon appointed to a Fellowship in political economy, and by means of this and some temporary employment by McGill University, I survived until I took the degree of Doctor of Philosophy in 1903. The meaning of this degree is that the recipient of instruction is examined for the last time in his life, and is pronounced completely full. After this, no new ideas can be imparted to him.
Stephen Leacock on the degree of Doctor of Philosophy from McGill

And so, for your sake, I am sorry to inform you that on 2019 April 2, I successfully defended my doctoral thesis, The moral efficiency of clinical trials in anti-cancer drug development.

I will be absolutely insufferable about having those new letters after my name for at least another week.

The nuclear option for blocking Facebook and Google

I took the list of domains from the following pages:

  • https://qz.com/1234502/how-to-block-facebook-all-the-urls-you-need-to-block-to-actually-stop-using-facebook/
  • https://superuser.com/questions/1135339/cant-block-connections-to-google-via-hosts-file

And then I edited my computer’s /etc/hosts file to include the following lines.

This blocks my computer from contacting Google and Facebook, and now a lot of sites load way faster. It still allows YouTube, but you can un-comment those lines too, if you like, to block it as well.
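For illustration only (this is not the full list, which is behind the button below), entries in /etc/hosts take a form like the following; pointing a domain at 0.0.0.0 stops your computer from connecting to it, and a leading # comments a line out:

```
# Illustrative excerpt only; copy the complete list from the button below
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
0.0.0.0 google-analytics.com
# 0.0.0.0 youtube.com   <- un-comment lines like this to block YouTube too
```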

Put it on your computer too!

(To view the big block of text that you need to copy into your /etc/hosts, click the following button. It’s hidden by default because it’s BIG.)

Introducing “Actually the singular is …”, a Mastobot

Good news for pedantic academics who like to correct others’ use of Latin-derived plurals!

I have automated that task for you with a bot on Mastodon:

Actually, the singular is …

I wrote it using Mastodon.py and BeautifulSoup and it’s a very simple little bot. (Only 42 lines of code!)

It takes all the posts currently visible to the bot on its Public Timeline, strips out the HTML formatting, and makes a list of all the words that are purely alphabetic and end in -i or -a (so “plaza” would make it in, but “@plaza” wouldn’t). Then it posts, for example: “plaza?” actually the singular is: plazum. (If the word had ended in -i, it would have changed the ending to -us instead.)

Then it adds the original word to a list of words not to post again and a cron job makes it repeat every 10 mins.

I had this idea a while back, but I got stuck on the idea that I’d need a pre-made dictionary of words that end in -i or -a. Then it occurred to me that I could just populate that list on the fly from the bot’s perspective on the Timeline. That also has the added benefit that the words it chooses may be topical, and not just random.
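In case it helps to see the core of the idea, here is a rough sketch of the word-harvesting and ending-swapping logic in Python. It is not the bot’s actual 42 lines (the Mastodon.py posting step, the do-not-repeat list and the cron job are left out), and the function names are just illustrative:

```python
# Rough sketch of the bot's core logic; not the actual source.
from bs4 import BeautifulSoup

def candidate_words(post_html):
    """Strip HTML from a post and return purely alphabetic words ending in -i or -a."""
    text = BeautifulSoup(post_html, "html.parser").get_text()
    return [word for word in text.split()
            if word.isalpha() and word.endswith(("i", "a"))]

def fake_singular(word):
    """Swap a Latin-looking plural ending for a (bogus) singular one."""
    if word.endswith("a"):
        return word[:-1] + "um"    # e.g. "plaza" -> "plazum"
    return word[:-1] + "us"        # e.g. "ravioli" -> "raviolus"

def build_post(word):
    return f'"{word}?" Actually the singular is: {fake_singular(word)}'

if __name__ == "__main__":
    sample = "<p>Meeting friends at the plaza for some ravioli</p>"
    for word in candidate_words(sample):
        print(build_post(word))
```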

Enjoy!

The risks and harms of 3rd party tech platforms in academia

CW: some strong language, description of abuse

Apologies for the rambly nature of this post. I wrote it in airports, partly out of frustration, and I may come back and make it more readable later.

In this post, I’m going to highlight some of the problems that come along with using 3rd party tech companies’ platforms on an institutional level in academia. Tech companies have agendas that are not always compatible with academia, and we have mostly ignored that. Briefly, the core problem with using these technologies, and entrenching them in academic life, is that doing so abdicates certain kinds of responsibility. We are giving up control over many of the structures that are necessary for participation in academic work and life, and the people we’re handing the keys to are often hostile to certain members of the academic community, in ways that are often difficult to see.

I have included a short “too long; didn’t read” at the end of each section, and some potential alternatives.

Using a tech company’s services is risky

There’s an old saying: “There’s no such thing as the cloud; it’s just someone else’s computer.” And it’s true, with all the risks that come with using someone else’s computer. The usual response to this is something along the lines of “I don’t care, I have nothing to hide.” But even if that’s true, that isn’t the only reason someone might have for avoiding the use of 3rd party tech companies’ services.

For starters, tech companies sometimes fail on a major scale, in ways that can endanger entire projects. Do you remember in 2017 when a bug in Google Docs locked thousands of people out of their own files because they were flagged as a violation of the terms of use?

https://twitter.com/widdowquinn/status/925360317743460352

Or more recently, here’s an example of a guy who got his entire company banned by Google by accident, proving that you can lose everything because of someone else’s actions:

TIFU by getting google to ban our entire company while on the toilet

And of course, this gets worse for members of certain kinds of minorities. Google and Facebook, for example, both have real-names policies, which are hostile to people who are trans and to indigenous North Americans:

https://boingboing.net/2015/02/14/facebook-tells-native-american.html

There are other risks beyond just data loss. For example, if your research involves confidential data, putting it on a 3rd party server where others can access it may overstep the consent of your research subjects, and potentially violate the terms under which your institutional review board granted approval of your study. This may also be the case for web apps that include Google Analytics.

tl;dr—If your academic work depends on a 3rd party tech company’s services, you risk losing your work at a critical time for reasons that have nothing to do with your own conduct; you risk violating research subject consent; and you may be excluding certain kinds of minorities.

Alternatives—In this section, I have mostly focused on data sharing risks. You can avoid using Google Docs and Dropbox by sharing files on a local computer through Syncthing, or by installing an encrypted Nextcloud on a server. If distribution of data sets is your use case, you could use the Dat Project / Beaker Browser.

Tech companies’ agendas are often designed to encourage abuse against certain minorities

I have touched on this already a bit, but it deserves its own section. Tech companies have agendas and biases that do not affect everyone equally. For emphasis: technology is not neutral. It is always a product of the people who built it.

For example, I have been on Twitter since 2011. I have even written Twitter bots. I have been active tweeting for most of that time both personally and about my research. And because I am a queer academic, I have been the target of homophobic trolls nearly constantly.

I have received direct messages and public replies to my tweets in which I was told to kill myself, called a “fag,” and in which a user told me he hopes I get AIDS. Twitter also closed my account because someone reported me for using a “slur.” You see, I used the word “queer.” To describe myself. And for this, I was locked out for a short period of time, and it took some negotiation with Twitter support, and the deletion of some of my tweets, to get back on.

I was off Twitter for a number of months because of this and out of a reluctance to continue to provide free content to a website that’s run by a guy who periodically retweets content that is sympathetic to white supremacists:

Twitter CEO slammed for retweeting man who is pro-racial profiling

And this isn’t something that’s incidental to Twitter / Facebook that could be fixed. It is a part of their core business model, which is about maximising engagement, and the main way they do that is by keeping people angry and yelling at each other. These platforms exist to encourage abuse, and they are run by people who will never have to endure it. That’s their meal-ticket, so to speak. And most of that abuse is directed at women, members of racial minorities and queer people.

I have been told that if I kept my Twitter account “professional” and avoided disclosing my sexuality, I wouldn’t have problems with abuse. I think the trolls would find me again if I did open a new account, but even if it were the case that I could go back into the closet, at least for professional purposes, there are four reasons why I wouldn’t want to:

  • My experience as a queer academic medical ethicist gives me a perspective that is relevant. I can see things that straight people miss, and I have standing to speak about those issues because of my personal experiences.
  • Younger queer people in academia shouldn’t have to wonder if they’re the only one in their discipline.
  • As a good friend of mine recently noted, it’s unfair to make me hide who I am, while the straight men all have “professor, father and husband” or the like in their Twitter bios.
  • I shouldn’t have to carefully avoid any mention of my boyfriend or my identity in order to participate in academic discussions, on pain of receiving a barrage of abuse from online trolls.

I’m not saying that everyone who uses Twitter or Facebook is bad. But I am extremely uncomfortable about the institutional use of platforms like Google/Facebook/Twitter for academic communications. When universities, journals, academic departments, etc. use them, they are telling us all that this kind of abuse is the price of entry into academic discussions.

tl;dr—Using 3rd-party tech company platforms for academic communications, etc. excludes certain people or puts them in the way of harm, and this disproportionately affects women, members of racial minorities and queer people.

Alternatives—In this section, I have mostly focused on academic communications. For micro-blogging, there is Mastodon, for example (there are even instances for science communication and for academics generally). If you are an institution like an academic journal, a working RSS feed (or several, depending on your volume of publications) is better than a lively Twitter account.

Tech companies are not transparent in their decisions, which often cannot be appealed

Some of the problems with using 3rd party tech company platforms go beyond the inherent risks of using someone else’s computer, or abuse by other users. In many cases, the use of their services is subject to the whims of their support personnel, who may make poor decisions out of carelessness, because of a discriminatory policy, or for entirely inscrutable or undisclosed reasons. And because these are private companies, there may be nothing that compels them to explain themselves, and no way to appeal such a decision, leaving anyone caught in a situation like this unable to participate in some aspect of academic life.

For example, in the late 00’s, I tried to make a purchase with Paypal and received an error message. I hadn’t used my account for years, and I thought it was just that my credit card needed to be updated. On visiting the Paypal website, I found that my account had been closed permanently. I assumed this was a mistake that could be resolved, so I contacted Paypal support. They informed me that I had somehow violated their terms of use, and that this decision could not be appealed under any circumstance. The best explanation for this situation that I could ever get from them was, to paraphrase, “You know what you did.”

This was baffling to me, as I hadn’t used Paypal in years and I had no idea what I could have possibly done. I tried making a new account with a new email address. When I connected my financial details to this account, it was also automatically closed. I’ve tried to make a new account a few times since, but never with success. As far as I can tell, there is no way for me to ever have a Paypal account again.

And that wasn’t a problem for me until a few months ago when I tried to register for some optional sessions at an academic conference that my department nominated me to attend. In order to confirm my place, I needed to pay a deposit, and the organizers only provided Paypal (not cash or credit card) as a payment option.

And this sort of thing is not unique to my situation either. Paypal has a long, terrible and well-documented history of arbitrarily closing accounts (and appropriating any money involved). This is usually in connexion with Paypal’s bizarre and sometimes contradictory policies around charities, but this also affects people involved in sex work (reminder: being a sex worker is perfectly legal in Canada).

Everything worked out for me in my particular situation at this conference, but it took work. After several emails, I was eventually able to convince them to make an exception and allow me to pay by cash on arrival, but I still had to go through the process of explaining to them why I have no Paypal account, why making a new one wouldn’t work, and that I wasn’t just being a technophobe or difficult to work with on purpose. I was tempted to just opt out of the sessions because I didn’t want to go through the embarrassment of explaining my situation.

And my problem with Paypal was a “respectable” one—it’s just some weird mistake that I’ve never been able to resolve with Paypal. Now imagine trying to navigate a barrier to academic participation like that if you were a person whose Paypal account was closed because you got caught using it for sex work. Do you think you’d even try to explain that to a conference organizer? Or would you just sit those sessions out?

tl;dr—When you use services provided by tech companies, you may be putting up barriers to entry for others that you are unaware of.

Alternatives—This section was about money, and there aren’t that many good solutions. Accept cash. And when someone asks for special accommodation, don’t ask them to justify it.

Conclusion

Technology isn’t neutral. It’s built by people, who have their own biases, agendas and blind-spots. If we really value academic freedom, and we want to encourage diversity in academic thought, we need to be very critical about the technology that we adopt at the institutional level.