By Jamal El-Masri

The dual-use dilemma: Safeguarding against AI misuse in biosciences


Note: This article is adapted from a piece originally published on Nature.com, authored by Jaspreet Pannu, Sarah Gebauer, Greg McKelvey Jr, Anita Cicero, and Tom Inglesby. It provides a detailed examination of the biosecurity risks associated with AI and the necessary measures to address them effectively.



Artificial intelligence (AI) has revolutionised numerous fields, including biological research, by accelerating innovation and streamlining complex processes. However, as these advancements unfold, the potential misuse of AI, particularly in designing hazardous pathogens, has emerged as a pressing concern. Addressing these biosecurity risks requires a concerted effort among governments, AI developers, and biosafety experts. Such collaboration is essential to ensure that the promise of AI aligns with the principles of sustainable development, climate action, and peace and justice outlined in the UN Global Goals.


The promise and peril of AI in biological research


The advent of large language models (LLMs) like OpenAI's GPT-4o marks a new era in the life sciences. These AI systems have demonstrated remarkable capabilities, from automating cell culture processes to designing antibodies and coding for robotic experiments. For instance, in 2023, researchers at Carnegie Mellon University showcased Coscientist, an AI-driven system capable of independently planning and executing chemical syntheses. This level of automation not only accelerates research but also enhances precision, paving the way for groundbreaking discoveries.


However, the dual-use nature of such technology raises significant biosafety and biosecurity concerns. Advanced AI models can inadvertently or maliciously enable the synthesis of harmful pathogens or toxins. A study by Microsoft highlighted the potential of GPT-4 to design SARS-CoV-2-binding antibodies using existing protein design tools. While this showcases AI's transformative potential, it also underscores the need for stringent safeguards to prevent misuse.


Global efforts to mitigate AI-enabled risks


Recognising these risks, several governments have initiated measures to manage the biosecurity implications of AI. In 2023, the United States secured voluntary commitments from 15 leading AI companies to mitigate risks associated with advanced models. This was followed by an Executive Order mandating AI developers to report the training of models on biological sequence data exceeding specific computational thresholds.


Such actions highlight the importance of integrating AI governance into broader sustainability frameworks. By prioritising environmental protection and economic equality in AI policies, nations can ensure that these advancements align with shared global objectives.


Case studies: Collaborative initiatives


Real-world examples demonstrate the effectiveness of global collaboration in addressing AI-related risks. At Los Alamos National Laboratory, researchers are partnering with OpenAI to explore GPT-4o's applications in biosciences while assessing potential risks. Similarly, initiatives like the Virtual Lab at Stanford University leverage AI for designing SARS-CoV-2 nanobodies with minimal human intervention. These projects underscore the need for partnerships between academia, industry, and governments to harness AI responsibly.


The Global Society plays a pivotal role in fostering collaboration across borders. By uniting stakeholders, it promotes shared responsibility in developing and implementing biosecurity measures. This includes establishing international standards for AI governance, enhancing transparency, and investing in capacity-building for biosafety experts. Such efforts align with the UN Global Goals, particularly peace and justice (SDG 16), by promoting equitable and secure access to technological advancements.



As AI continues to reshape biological research, its potential must be harnessed responsibly to avoid pandemic-scale risks. Global collaboration, informed by decades of experience in public health and security, is critical in achieving this balance. By prioritising sustainability, science-driven solutions, and economic equality, humanity can ensure that AI remains a force for good.


Further reading: Explore UN Global Goals and the latest in AI governance initiatives to stay informed about efforts shaping our future.
