
How Artificial Intelligence Can Increase the Threat of Bioterrorism


In 1990, the Aum Shinrikyo cult attempted a bioterrorism attack in and around Tokyo by spraying a yellow liquid containing botulinum toxin. Fortunately, their attempt failed, but it highlighted the potential dangers posed by ill-intentioned groups. If they had access to contemporary artificial intelligence (AI) tools like ChatGPT, the outcome could have been different.

Advances in AI have transformed many fields, including science and healthcare, and tools like ChatGPT have made specialized knowledge far easier to access. However, when applied to biology, AI can also amplify the risks associated with bioweapons and bioterrorism.

Large language models (LLMs) like ChatGPT can provide access to dual-use knowledge, making biological weapons more accessible to non-scientists. In an exercise at MIT, ChatGPT was able to instruct non-scientist students about potential pandemic pathogens, including how to acquire them and avoid detection.

Aum Shinrikyo's confusion between Clostridium botulinum, the bacterium, and botulinum toxin, the poison it produces, is not uncommon. Past bioweapons programs have repeatedly been hindered by a lack of expertise. Even Saddam Hussein's Iraq, despite possessing the necessary equipment, failed to dry and mill its anthrax into a more lethal powdered form because it lacked the required know-how.

As AI-powered tools become more sophisticated, they inadvertently enable individuals with malicious intent to acquire knowledge that can be used for harm. While AI can supply instructional knowledge, effective bioweapon development still depends on tacit knowledge gained through hands-on experience. Even so, as such information becomes easier to obtain, more individuals may attempt to create bioweapons, and a larger pool of attempts raises the likelihood that one eventually succeeds.

Furthermore, AI tools like ChatGPT are just the beginning. As technology advances, language models and AI systems will further automate scientific processes, reducing the number of scientists required for large-scale projects. This will make it easier to develop biological weapons covertly.

Specialized AI tools, such as protein structure prediction and protein design models, also pose risks. While they promise genuine advances in drug development, the same capabilities could be turned toward the creation of bioweapons.

In conclusion, while AI offers numerous benefits, it also raises serious concerns regarding bioweapons and bioterrorism. The accessibility and automation provided by AI tools increase the risks associated with these threats. It is crucial to address these challenges and implement strict regulations to prevent misuse and ensure the safe application of AI in the field of biology.


