by Peter Grabitz MD and Benjamin Gregory Carlisle PhD
The year 2020 has already been full of surprises—many of which would have been difficult to imagine just a few months ago. Everything seems to be changing quickly, even science, which is not exactly known for being speedy! So it is worth taking a closer look: How has the pandemic changed the research landscape? And more importantly: what can we learn from it?
Science caught Coronavirus
It is the end of May and we are sitting in a cafe. Under normal circumstances, this would not be worth mentioning. However, these are not normal circumstances. Berlin, Germany, and more or less the whole world are under lockdown. An international pandemic has spread, and curfews have been imposed in almost every city, country and continent. What seemed at first like a competition for the strictest measures (South Africa was in the lead) has now been replaced by a competition for the fastest relaxation of those measures (as of 31 May, Thuringia seems to be winning(1)).
And even as the world has changed, so has science itself. Science, one might think, has virtually freaked out: virologists are portrayed in all the daily and weekly newspapers, and other disciplines are now researching on and around the coronavirus as well. If you meet researchers in the digital Zoom corridors of their offices and laboratories, everyone can tell you about a new COVID-19 project. One could almost think that a competition has started in science for the most COVID-19 funding applications from the most distant disciplines (according to the authors, the idea of using homeopathy for COVID-19 prophylaxis wins(2)).
On the research funders' side, a similar competition appears to be taking place: the German Federal Ministry of Education and Research, for example, has launched a 150 million Euro funding package for COVID-19 research at German university medical centers alone.(3) A little later, the EU even raised €7.4 billion for COVID-19 research and development.(4)
The incentives work: in May alone, about 4000 articles per week were published about COVID-19.(5)
Is that good? Well, there’s good and bad.
On the one hand, we are all in the middle of a pandemic of global proportions. It affects every facet of our daily lives. Our basic rights have been restricted, our goals and projects have been interrupted and even our safety is at risk. A vaccine needs to be found as soon as possible. (For those who are critical of vaccinations: they’re good, actually!) Further, research is needed right now on measures to contain the virus (Do school closures help? Masks? Are children less affected? Is aerosol or surface transmission more relevant?) Due to the dangers posed by the current pandemic, the volume of research published in response to COVID-19 is much higher than that produced in response to other pandemics of the 21st century, as Figure 1 shows.
On the other hand, the pressure that researchers are under to be the first with a hot take on the coronavirus pandemic can also backfire. For example, the well-known epidemiologist John Ioannidis’ controversial antibody study(6) has come under intense scrutiny, with allegations of research misconduct leading to an internal investigation by Stanford.(7) A major, influential study of hydroxychloroquine safety and efficacy in COVID-19 patients(8) has also been retracted, after concerns were raised regarding the veracity of the data.
As meta-researchers, we feel obliged to also think about other consequences of this gigantic re-prioritization of the global scientific apparatus.
What happens if everyone is mostly researching one thing? Is research involving bacteria now worth less than research on viruses? Or even worse: science that deals with quantum physics or space travel? Corona is reason enough to plunge the current generation of young researchers into a deep crisis of meaning. What should a PhD in English literature be worth now?1
If you will, all research has been infected with Sars-CoV-2. This can be impressively reconstructed by looking at the American study registry Clinicaltrials.gov. Here, clinical studies have to be registered before they start enrolling patients, and principal investigators are required to regularly update their registry entries with current information. Since March 2020, a large number of clinical trials have been suspended, withdrawn or completely terminated (see Figure 2). The green bars show how many studies were suspended, terminated or withdrawn with reasons that explicitly mention COVID-19. In total, as of May 31st, more than 50,000 patients have already been enrolled in studies that stopped because of the current pandemic. Of course, this is not only due to new priority-setting in clinical research. In many cases, study participants are no longer able to travel to hospitals, and lockdown measures prevent studies from continuing, too. However, the extent to which studies have been halted gives you food for thought.
At this moment it is not yet clear how many trials will “get back on their feet”. In a comparison cohort of trials that were interrupted in the same months of 2018, only about 10% were resumed within a year. So it remains to be seen how and whether clinical research will recover from COVID-19. The “death rate” (i.e. the proportion of studies that were stopped permanently because of COVID-19) is currently 2.5% (33/1336).
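For readers who want to poke at the numbers themselves, the classification above can be sketched in a few lines of Python. The record format (a list of dicts with `overall_status` and `why_stopped` fields) is an illustrative assumption, not the actual ClinicalTrials.gov schema; it mirrors how a registry export might look after flattening.

```python
# Sketch: classifying trials stopped because of COVID-19 and computing
# the "death rate" (permanently stopped / all COVID-stopped trials).
# The field names below are assumptions for illustration.

STOPPED_STATUSES = {"Suspended", "Terminated", "Withdrawn"}

def covid_stopped(trials):
    """Return trials stopped with a reason that explicitly mentions COVID."""
    return [
        t for t in trials
        if t["overall_status"] in STOPPED_STATUSES
        and "covid" in t.get("why_stopped", "").lower()
    ]

def death_rate(trials):
    """Share of COVID-stopped trials that were stopped permanently
    (terminated or withdrawn, rather than merely suspended)."""
    stopped = covid_stopped(trials)
    dead = [t for t in stopped if t["overall_status"] != "Suspended"]
    return len(dead) / len(stopped) if stopped else 0.0

trials = [
    {"overall_status": "Suspended", "why_stopped": "COVID-19 pandemic"},
    {"overall_status": "Terminated", "why_stopped": "Halted due to COVID-19"},
    {"overall_status": "Recruiting", "why_stopped": ""},
]
print(f"{death_rate(trials):.1%}")  # → 50.0%
```

On the real data, the same calculation over the COVID-stopped cohort gives the 2.5% (33/1336) quoted above.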
Can we learn from pandemics to improve research?
In our everyday jobs we often ask ourselves how we could also implement positive changes in research. How can we make science more robust, transparent and useful? In the following we present two ideas:
Idea 1: More pandemics!
Pandemics, it turns out, are highly effective at moving the static research world and incentivizing new ways of thinking.
Instigating a global pandemic to combat every problem in science, however, may not be a realistic strategy. After careful consideration we regret to inform you that this strategy has an unfavourable risk-benefit balance and did not pass ethical review.
Idea 2: Learn from “bandwagon” effects and try to reproduce the right incentives (and avoid potential problems)
So here is a second suggestion:
Imagine, if you can, a world in which researchers stopped competing over being the first to rush out a pandemic-related publication and instead fought to get ahead of the curve in implementing good scientific practices. For example: who does the most reproducible and transparent research? The idea here is to initiate a bandwagon effect similar to the one we are currently observing in COVID-19 research, just not for a scientific topic, but for good research practices. We need to initiate a “fear of missing out” on the Good Science bandwagon among people who are able to make decisions regarding scientific practice!
If successful, we wouldn’t overhear conversations like “My lab published so-many papers about COVID-19” (because that’s what everyone is doing now) but rather “All of my lab’s papers are open access so that everyone can read them” or “How many of your papers include open, interoperable, searchable and reproducible data?”
A short background: the bandwagon effect is very well researched. The popular book “Nudge” by Nobel Prize winner Richard Thaler describes an experiment by the Minnesota Department of Revenue(10), which investigated how tax compliance can be improved. Taxpayers were either
offered help with the paperwork to pay their taxes correctly,
shown the benefits of tax compliance for society,
told that audits would be more frequent,
or informed that 93% of people in Minnesota already pay their taxes correctly.
The surprising result: informing taxpayers about the correct behavior of the vast majority of other people led to better tax compliance!
The marketing industry nudges us regularly using similar methods. Just think of the “3549 people” who “are looking at this accommodation right now” or the “only 3 backpacks left” in the online shop of your choice. Nobody wants to be the last to act, whether it’s about the new iPhone, tax compliance or writing corona publications.
Note of caution to this idea: Of course, we need to be careful that attempts to create a “bandwagon” for Good Science are directed at those who have the ability to make such changes. For example, early career researchers who do not get to choose which journals they submit to should not be shamed for the decisions of their supervisors!
Example: Summary results in clinical research
Let’s take a closer look at an example. Clinical trials are an essential key to medical progress. Clinical trials are used to test whether a new therapy, a new active substance, a new medical product, a new diagnostic procedure or a new preventive measure has the desired benefit for improving care, and whether there is an appropriate balance between risks and benefits.(11)
This is how the German Science Council describes the importance of clinical studies. However, in order to fulfil their role as “essential keys to medical progress”, clinical studies must also publish their results. This may sound obvious at first, but it is not. A study published last year showed that for about a quarter of all completed clinical studies carried out by German university hospitals, there are no published results even after six or more years: neither as peer-reviewed publications, nor as summary results in registries. This corresponds to missing results from at least 171 clinical studies with at least 18,000 patients that were completed between 2010 and 2014.(12)
Why is that? One can make a guess: there may be no one to hold study sponsors accountable. This is supported by the fact that industry now publishes 90-100% of its study results directly on registries; there is more attention on the transparent research practices of industry than of public institutions.
Another explanation could be that the respective principal investigators have left the university, or that the studies were never completed or did not produce results that are easy to publish.
However, none of this matters to the patients who put themselves at risk, or to the patients who depend on trustworthy evidence to guide medical practice. The risks and burdens assumed by patients who participate in clinical trials of experimental therapies are justified by the scientifically and socially valuable information that these trials create. It is difficult to say that there is any value at all in performing a clinical trial if no one knows what its results were. Non-publication also undermines the trust that patients place in the institution of human research.
So how can the situation be changed now?
The report we mentioned and many of the previous publications present their results in the form of a ranking (see Figure 3): this allows universities to compare themselves with each other. This comparison is exactly the language that universities—and the decision-makers at those universities—understand and listen to. No university wants to be at the bottom of the table.
Increased media attention on this issue has also contributed to universities now taking the topic more seriously and, in some cases, achieving considerable and rapid success.
Figure 4 is taken from the TranspariMED blog(14), which shows the successes of different universities in recent months. One could even claim that half of German university medical centers are actively working on fulfilling their duty to publish all study results, jumping on the Good Science bandwagon!
This improvement did not even require a pandemic, by the way. ;)
Pandemics like COVID-19 change science at lightning speed; all researchers and funding agencies want to make a positive contribution. One possible psychological basis is the bandwagon effect. We should learn from this and use more positive reinforcement methods to make science more transparent and robust. The case of university performance in posting summary results of clinical trials suggests that creating bandwagon effects for good science can work. No pandemics necessary.
Where the authors work, there are other, more or less radical ideas to improve scientific practice. Those interested can find a comprehensive description here:
Strech D, Weissgerber T, Dirnagl U, on behalf of QUEST Group (2020) Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative. PLoS Biol 18(2): e3000576. https://doi.org/10.1371/journal.pbio.3000576
1. Connolly K. German state causes alarm with plans to ease lockdown measures. The Guardian [Internet]. 2020 May 25 [cited 2020 Jun 5]; Available from: https://www.theguardian.com/world/2020/may/25/german-state-causes-alarm-with-plans-to-ease-lockdown-measures-thuringia-second-wave-coronavirus
2. Wie Pseudomedizin gegen das neue Corona-Virus beworben wird [Internet]. MedWatch – der Recherche verschrieben. 2020 [cited 2020 May 31]. Available from: https://medwatch.de/2020/03/02/wie-pseudomedizin-gegen-das-neue-corona-virus-beworben-wird/
3. BMBF-Internetredaktion. Karliczek: Wir fördern Nationales Netzwerk der Universitätsmedizin im Kampf gegen Covid-19 – BMBF [Internet]. Bundesministerium für Bildung und Forschung – BMBF. [cited 2020 Jun 5]. Available from: https://www.bmbf.de/de/karliczek-wir-foerdern-nationales-netzwerk-der-universitaetsmedizin-im-kampf-gegen-covid-11230.html
4. Coronavirus Global Response: €7.4 billion raised [Internet]. European Commission – European Commission. [cited 2020 Jun 5]. Available from: https://ec.europa.eu/commission/presscorner/detail/en/ip_20_797
5. The most influential coronavirus research articles [Internet]. [cited 2020 Jun 5]. Available from: https://www.natureindex.com/news-blog/the-top-coronavirus-research-articles-by-metrics
6. Bendavid E, Mulaney B, Sood N, Shah S, Ling E, Bromley-Dulfano R, et al. COVID-19 Antibody Seroprevalence in Santa Clara County, California. medRxiv. 2020 Apr 17;2020.04.14.20062463.
7. JetBlue Founder David Neeleman Helped Fund The Stanford Coronavirus Antibody Study [Internet]. [cited 2020 Jun 7]. Available from: https://www.buzzfeednews.com/article/stephaniemlee/stanford-coronavirus-neeleman-ioannidis-whistleblower
8. Mehra MR, Desai SS, Ruschitzka F, Patel AN. RETRACTED: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. The Lancet [Internet]. 2020 May 22 [cited 2020 Jun 7];0(0). Available from: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)31180-6/abstract
9. Carlisle, Benjamin. Clinical trials that were terminated, suspended or withdrawn due to Covid-19. 2020 [cited 2020 May 31]; Available from: https://osf.io/prafd/
10. Coleman S. The Minnesota Income Tax Compliance Experiment State Tax Results [Internet]. Rochester, NY: Social Science Research Network; 1996 Apr [cited 2020 Jun 5]. Report No.: ID 1585242. Available from: https://papers.ssrn.com/abstract=1585242
11. Deutscher Wissenschaftsrat. Empfehlungen zu Klinischen Studien (Drs. 7301-18) [Internet]. Hannover; 2018 Oct [cited 2019 Nov 26]. Report No.: Drs. 7301-18. Available from: https://www.wissenschaftsrat.de/download/archiv/7301-18.pdf?__blob=publicationFile&v=5
12. Wieschowski S, Riedel N, Wollmann K, Kahrass H, Müller-Ohlraun S, Schürmann C, et al. Result dissemination from clinical trials conducted at German university medical centers was delayed and incomplete. Journal of Clinical Epidemiology. 2019 Nov;115:37–45.
13. German universities: 445 clinical trials missing results [Internet]. transparimed. [cited 2020 Jun 1]. Available from: https://www.transparimed.org/single-post/2019/12/29/Germany-clinical-trial-results-EudraCT
14. German universities report record number of clinical trial results [Internet]. transparimed. [cited 2020 Jun 1]. Available from: https://www.transparimed.org/single-post/2020/05/22/Germany-clinical-trials-research-waste
1 Rhetorical question. The answer is: Of course, the same as before COVID-19!
Don’t try to install wifi drivers by downloading them onto a USB drive and copying them over; you will end up in dependency hell. Plug your phone in to tether the internet connexion, get the wifi drivers, then proceed normally.
To get Bluetooth working: https://www.nielsvandermolen.com/bluetooth-headphones-ubuntu/
Install Gnome-Tweaks, then change Appearance > Themes > Applications to Adwaita; there is no other way to have a non-dark theme
To install R: http://sites.psu.edu/theubunturblog/installing-r-in-ubuntu/
The following is a tool for assessing the risk that a fictional character was written mainly for the comfort of straight people. The version from 2020-04-15 is an early draft and will almost certainly be modified later.
Name of queer character and media in which they appear
Section A total
Section B total
Summary score: 30 + (Section A total) – (Section B total)
“Gay people just look like … people”
– Fucking JK Rowling
Score 0 for “doesn’t apply”; 1, “possibly or probably”; 2, “yes, for certain.”
“They’re not a gay character; they’re a character who happens to be gay”
Frames queer rights mainly or exclusively in terms of “love wins,” “love is love,” etc.
Police officer, military or clergy
Regards gay marriage as the end-goal of the gay rights movement
Has adopted children or children by surrogate
White gay man with no mental health issues
Sweater-vest or other non-threatening clothing
It would be in-character for them to say, “I’m not like other queer people”
If trans, this character or their story uncritically places a high value on “passing”
“They break gay stereotypes”
Married or monogamous
Upper middle class
Encounters and overcomes the kind of discrimination that lets straights say “I would never do that”
“That thing that only gay people do? I hate it for non-homophobic reasons.”
– Old straight-people proverb
Score 0 for “doesn’t apply”; 1, “possibly or probably”; 2, “yes, for certain.” For Section B, score 0 if the statement applies, but only as a cautionary tale, a joke or a character flaw.
Has casual sex
Character highlights an intersection of queerness (e.g. being queer and Black)
Kink or fetish
Is single or has more than 1 partner
Lower level of formal education
Has difficulty with, or is critical of the police or other existing power structures
Engages in some stereotypically gay activity
In the closet, at least in some contexts
Flamboyantly gay or otherwise clearly queer-coded
Politically active in progressive causes
Depiction as a “good” character doesn’t depend on how chaste they are
The range of possible scores for Sections A and B is 0 to 30 each. These statements have been equally weighted; however, there may be cases where some of them are important or even defining for the queer character in question and should be weighted more heavily.
Subtract the score for Section B from the score for Section A and add 30 to get a single summary measure ranging from 0 to 60. The higher the score, the more likely it is that the character was written for straight comfort.
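The scoring rule above can be written down as a tiny function. This is just a sketch of the arithmetic (30 + Section A total − Section B total); the example item scores are made up.

```python
# Sketch of the summary score: 30 + (Section A total) - (Section B total).
# Each section is a list of per-item scores in {0, 1, 2}.

def summary_score(section_a, section_b):
    """Combine the two section totals into a single 0-60 measure.

    Higher scores suggest a character written mainly for straight comfort.
    """
    for score in section_a + section_b:
        if score not in (0, 1, 2):
            raise ValueError("item scores must be 0, 1 or 2")
    return 30 + sum(section_a) - sum(section_b)

# A character scoring high on Section A and low on Section B
# (the individual item scores here are invented for illustration):
print(summary_score([2, 2, 1, 2], [0, 1, 0]))  # → 36
```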
Inspired by talks at the 2019 METAxDATA un-conference, I wrote a little meta-research tool to batch extract ClinicalTrials.gov (“NCT”) numbers from PubMed XML search results and test whether they correspond to legitimate entries.
It’s written for Python 3 on elementary OS 5.1, so I can’t guarantee it will work on anything else. I also wrote a paper based on this tool that I’m currently sending to journals for review. If you’re interested in reading a draft, let me know and I’ll be happy to share it with you.
You can get the code from Codeberg! If you try it out or use it for something, let me know!
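If you’re curious what the core of the extraction step looks like, here is a minimal sketch. The NCT number format (the letters “NCT” followed by exactly eight digits) is standard; the function name and the sample XML snippet are illustrative stand-ins, not the actual tool.

```python
import re

# Sketch: batch-extracting ClinicalTrials.gov ("NCT") numbers from
# PubMed XML search results. NCT IDs are "NCT" followed by 8 digits;
# the sample record below is invented for illustration.

NCT_PATTERN = re.compile(r"\bNCT\d{8}\b")

def extract_nct_numbers(xml_text):
    """Return the unique NCT numbers found in a PubMed XML export,
    in order of first appearance."""
    seen = []
    for match in NCT_PATTERN.findall(xml_text):
        if match not in seen:
            seen.append(match)
    return seen

sample = """<PubmedArticle>
  <AccessionNumber>NCT01234567</AccessionNumber>
  <AbstractText>Registered as NCT01234567; see also NCT76543210.</AbstractText>
</PubmedArticle>"""
print(extract_nct_numbers(sample))  # → ['NCT01234567', 'NCT76543210']
```

Checking that each extracted number corresponds to a legitimate registry entry is then a matter of looking each ID up against the registry, which is what the full tool does.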
On October 7, 2018, I was looking at electric fiddles, as is my habit from time to time. If you’ve never seen an electric violin before, you should check them out. They’re often shaped differently from acoustic violins and they are strange and wonderful. My favourite ones are the ones that look like skeletons of a violin. On a whim, it occurred to me that an electric fiddle that was gold-coloured would be a fun Devil Went Down to Georgia reference. So I started looking around and I couldn’t find one. Maybe I’m bad at online shopping, but as far as I could tell, they were just not for sale.
My current fiddle teacher has an electric fiddle that’s made of plexiglass, so it’s see-through, with multi-colour LEDs, and I suppose, partly inspired by that and by the lack of gold fiddles, I started looking into how hard it would be to make one myself. It turns out that it is very difficult to make a violin of any kind; however, it is much less difficult to make an electric violin than an acoustic one.
I started by drawing out some concept sketches for the fiddle. If I was going to make the fiddle of the Devil himself, it would need some artistic flourishes along those lines.
Then I took some measurements from my acoustic fiddle, did some research, made a few assumptions, and drew a more specific plan.
Making rough cuts
My boyfriend and I went to the hardware store, picked out the wood and went to his father’s garage, to borrow his tools. I did the measuring and the drawing. My boyfriend operated the machines. His brother cut a small piece of metal for us to use to brace the strings at the back of the scroll.
It makes me really happy that this is a project I got to do together with my boyfriend and his family.
We had a couple false starts, but we’re starting to get the hang of it now!
Sanding is one of those things that’s terrible to have to do, but very satisfying to take before and after pictures of.
Staining and gluing the fingerboard on
Burning “Homo fuge” on the back
In Marlowe’s Doctor Faustus, after he cuts his arm to get the blood to sign the contract with Mephistopheles, his arm is miraculously healed, and as a warning from heaven, the words “Homo fuge”—”fly, o man” in Latin—appear there, as a warning for him to get out of that situation. So it seemed appropriate to burn that into the back of the neck of the Stradivarielzebub.
Gluing the pieces together
Applying varnish and tuning pegs
The (almost) final product
On October 7, 2019, we actually strung the fiddle and plugged it in to an amp for the first time.
But does it play?
So there are a couple of things left. I want to gild the tail and scroll with gold leaf, so that it can properly be referred to as a “shiny fiddle made of gold” as per The Devil Went Down to Georgia, and there are a couple of small adjustments that I’d like to make so that it’s a bit more playable. Even today, a day later, I’ve taken the strings off again to fix it up, and there are a few things I want to do to make it better.
And I’ve already got a sketch for what to do for the next violin!
Update: 2019-10-11 (gold leaf; minor adjustments)
I took the fiddle apart for a couple of days, made the adjustments I meant to, and put gold leaf on the Devil’s horns and tail. The Devil, as they say, is in the details.
Putting gold leaf on something is terrible to do. It’s like working with tin foil, but a tin foil that’s so thin, if you breathe on it too hard, it’ll rip.
In 2015, I collected predictions for the outcome of the federal election. It was a semi-scientific, just-for-fun endeavor, but one that I want to try again. When I was done, I analyzed the predictions and wrote up a little report, available on my blog.
This election is looking to be a close call (again), so I’m collecting predictions just for fun. Tell your friends!
I’m offering a beer to whoever (over drinking age) makes the best prediction.
You can make as many predictions as you like (but please give me an email address if you make more than one, so I can combine your predictions in the analysis). The page will stop accepting new predictions when the last poll closes in Newfoundland on 2019 October 21.