
Confronting the potential of AI to create new chemical weapons

AI has become one of the hottest buzzwords in biotechnology. Venture capital and public funding have poured into companies using machine learning to accelerate the discovery and development of new drugs.

Now, a team of researchers has sounded the alarm over the potential for this artificial intelligence technology to be misused, most distressingly to discover new and deadlier chemical weapons.

Drug developers use AI to come up with thousands or even millions of molecules that might interact with specific biological targets. But the same software can also be used to identify toxic gases or powders, the researchers found. Suddenly, the biotech field is worried, both about the hidden dangers of AI and about the possibility of overstating them.

The immediate risk is probably minimal. But the new debate over whether biotech AI could be put to nefarious use underscores the need to monitor the rapid evolution of machine learning technologies. It’s not too early for drug developers and government regulators to assess the unintended consequences of future advances.

The issue arose last year, when the Swiss Federal Institute for Nuclear, Biological and Chemical (NBC) Protection asked Collaborations Pharmaceuticals, a small North Carolina biotech company, to present on the potential misuse of AI technologies at a biennial conference.

The company’s researchers investigated the question by repurposing a machine learning model they had built to generate large numbers of molecules that could act on drug targets. The model scores molecules on various properties, downgrading those with characteristics known to cause toxic side effects and favoring those likely to have a therapeutic effect.

When the researchers instead trained their model to seek out molecules resembling the VX nerve agent, it quickly generated tens of thousands of candidates, many of which appeared to be even more toxic than VX itself. Steering the model toward harm rather than good was as simple as changing a 1 to a 0, thereby directing it to design and prioritize molecules with toxic effects, says Sean Ekins, CEO of Collaborations Pharmaceuticals.
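To make that mechanism concrete, here is a minimal, hypothetical sketch of this kind of inversion. It assumes a simple weighted scoring scheme; the function, parameters and numbers are invented for illustration and are not drawn from the company’s actual software, and where the article describes flipping a single value, this sketch uses an analogous sign change on a weight.

```python
# Purely illustrative sketch of the scoring flip described above.
# All names and numbers are invented; this is not Collaborations
# Pharmaceuticals' actual model or code.

def score_candidate(bioactivity: float, toxicity: float,
                    toxicity_weight: float = -1.0) -> float:
    """Combine predicted properties into a single design score.

    In normal drug-discovery mode the toxicity weight is negative,
    steering the generator away from harmful molecules. Flipping that
    one constant makes the same pipeline reward toxicity instead.
    """
    return bioactivity + toxicity_weight * toxicity

# Drug-discovery mode: a highly toxic candidate is ranked low.
print(score_candidate(bioactivity=0.6, toxicity=0.9))    # -0.3
# Inverted mode: the identical candidate is now ranked high.
print(score_candidate(bioactivity=0.6, toxicity=0.9,
                      toxicity_weight=1.0))              # 1.5
```

The point is not the arithmetic but the scale of the change: one constant flips, and the rest of the generative pipeline runs untouched.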

No powerful server was needed for the calculation. The entire project was completed in six hours on an old desktop Mac, using software that draws on publicly available databases of molecules.

What is alarming is how easily the AI model could be changed to cause harm. More reassuring is the low likelihood that AI designed for drug discovery will soon be repurposed in this way.

Consider that, so far, the technology has not been especially effective at generating new drugs. Computers are very good at identifying new uses for existing classes of drugs, but they have not yet proven adept at inventing new ones. Human ingenuity still guides that level of exploration.

This means that, at least for now, computers are unlikely to come up with entirely new types of chemical weapons. The computer only generates ideas; more work is needed to understand the damaging potential of any individual molecule, which must be made and tested (an exercise that would itself be unethical).

And for any would-be bad actors, those next steps probably aren’t worth the trouble, according to Derek Lowe, a medicinal chemist who writes Science magazine’s pharmaceutical blog. “I’m not sure anyone needs to deploy a new compound to wreak havoc – they can save themselves a lot of trouble by just making Sarin or VX, God help us,” he wrote.

And while computers allow scientists to discover seemingly endless new molecules, they cannot yet predict how these will behave inside a human body. It still takes some good old-fashioned trial and error.

Nevertheless, the project highlights other aspects of AI’s use in drug development that deserve particular attention. A more pressing risk, for example, could be the use of AI in chemical weapons manufacturing. A decade ago, researchers at Northwestern University showed that an algorithm could find alternative ways to make sarin and mustard gas from unregulated raw materials.

And while the Collaborations Pharmaceuticals team focused on chemical weapons, models like theirs could also be repurposed to design new illicit drugs that are potent, dangerous and hard to detect. Machine learning could also speed the synthesis of such drugs.

Earlier this year, the RAND Corp. flagged the potential for AI to be used to generate new synthetic opioids and fentanyl analogues.

Academic researchers who create AI tools for drug development want them to be freely available. But as these tools become more refined and efficient, and the barrier to using them falls, there is a growing need to think about the ways they might be misused.

Consider that some AI models are available not just on an “open access” basis but as “open source,” which means anyone can make changes to the code that could increase their potential for harm. It might be wise to limit certain models to open access only, as MIT professor Connor Coley has suggested.

Already, a vast amount of information about machine learning in drug development is publicly available, and it will not be possible to completely close Pandora’s box. There is no need to panic. But the public should be aware of the risks. And biotech companies, academic researchers, and government agencies involved in monitoring chemical and biological weapons now need to have more open discussions about those risks.

More from other Bloomberg Opinion writers:

• Google AI Unit’s Lofty Ideals Are Tinged With Secrecy: Parmy Olson

• Robots Make Us All Buy Overvalued Bonds: John Authers

• The Future of Humanity Will Be Stranger Than We Think: Tyler Cowen

This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.

Lisa Jarvis, former editor of Chemical & Engineering News, writes about biotechnology, drug discovery and the pharmaceutical industry for Bloomberg Opinion.

More stories like this are available at bloomberg.com/opinion