“Widely Available AI Could Have Deadly Consequences”

This pitch letter is part of The Open Notebook’s Pitch Database, which contains 290 successful pitches to a wide range of publications. To share your own successful pitch, please fill out this form.

The Story

“Widely Available AI Could Have Deadly Consequences”
by Jess Craig
Wired, May 17, 2022

The Pitch

In spring 2021, Sean Ekins, a British pharmacologist who uses artificial intelligence to develop new drugs and vaccines, received an email from the Swiss Federal Institute for Nuclear, Biological, and Chemical Protection asking him to give a presentation at an upcoming conference on the potential misuse of his AI technology. A few months earlier, Ekins and his team had published a paper describing how their AI accurately predicted the toxicity of various novel chemicals that might be used as medicines. At the time, Ekins and his team admitted, they had never considered how their AI could be misused to, say, create a more potent biochemical weapon.

So they conducted an experiment. The team made slight tweaks to their AI system, called MegaSyn, which generates new molecules by virtually smashing together different atomic elements and then tests those molecules for viability, toxicity, and specificity—together assessing whether a molecule can effectively treat a specific disease while limiting toxic side effects. Normally, MegaSyn penalizes toxicity and rewards specificity. To determine how effectively MegaSyn could generate the most toxic molecules, Ekins and his team simply programmed the platform to reward toxicity the way it rewards specificity, then set a threshold for lethality: the system was instructed to generate molecules that would require only a few salt-sized grains to kill a person.

In just six hours, the system spat out some 40,000 different molecules that met the lethality threshold. Ekins and his team were shocked: MegaSyn could easily be used to help create thousands of new biochemical warfare agents. At the Swiss conference, held in September 2021, and again in a March 2022 Nature Machine Intelligence paper, Ekins and his team sounded an alarm on their own research, calling for more discussion of the security implications of AI in the biological sciences and outlining specific ways to help prevent misuse while still unlocking AI's power to treat and cure human illness. The Nature paper has amassed hundreds of thousands of views; Ekins was even invited to brief the White House on his team's findings. The group's experience speaks to a larger concern: AI is becoming more powerful and more accessible, opening the door to "dual use," in which legitimate research is turned to malicious purposes.

I am writing to pitch a news piece that follows Ekins and his team through this work. Ekins has agreed to speak with me. I will also speak with health security experts and bioethicists.