Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools

06/24/2023
by Jonas B. Sandbrink, et al.

As advancements in artificial intelligence propel progress in the life sciences, they may also enable the weaponisation and misuse of biological agents. This article differentiates two classes of AI tools that pose such biosecurity risks: large language models (LLMs) and biological design tools (BDTs). LLMs, such as GPT-4, are already able to provide dual-use information that could have enabled historical biological weapons efforts to succeed. As LLMs are turned into lab assistants and autonomous science tools, their capacity to support research will increase further. Thus, LLMs will in particular lower barriers to biological misuse. In contrast, BDTs will expand the capabilities of sophisticated actors. Concretely, BDTs may enable the creation of pandemic pathogens substantially worse than anything seen to date and could enable more predictable and targeted forms of biological weapons. In combination, LLMs and BDTs could raise the ceiling of harm from biological agents and could make such harm broadly accessible. The differing risk profiles of LLMs and BDTs have important implications for risk mitigation. LLM risks require urgent action and might be effectively mitigated by controlling access to dangerous capabilities. Mandatory pre-release evaluations could be critical to ensure that developers eliminate dangerous capabilities. Science-specific AI tools demand differentiated strategies that allow access for legitimate users while preventing misuse. Meanwhile, risks from BDTs are less defined and require monitoring by developers and policymakers. Key to reducing these risks will be enhanced screening of gene synthesis, interventions to deter biological misuse by sophisticated actors, and exploration of specific controls on BDTs.


