The risk of an artificial pandemic is rising every year as knowledge and technological barriers erode. Unfortunately, producing an extremely potent bioweapon is easier than you might think, and it is getting easier all the time.
Lowering knowledge barriers is usually a fantastic thing that helps the disadvantaged and levels the playing field. That is certainly not the case, however, when the knowledge in question is dangerous to the global population.
Researchers at the Massachusetts Institute of Technology (MIT) conducted an experiment in which undergraduates probed AI chatbots to see whether the models could be convinced to help a layperson engineer a pandemic. The results of the study aren't exactly shocking, but they are rather terrifying.
In the experiment, the students got the AI models to suggest four different potential pandemic pathogens. The chatbots also explained how these pathogens could be synthesized from synthetic DNA using relatively simple biology techniques. Perhaps the scariest result of all: the chatbots told the students which DNA synthesis companies would be least likely to screen orders.
Can Chatbots Actually Help Bad Actors Build Deadly Pathogens?
So far, AI chatbots can't tell you exactly how to build a deadly pathogen, but there is no doubt that they can be a significant help. Combined with the right biology textbooks and publicly available studies, it likely wouldn't take long to assemble the necessary knowledge.
Know-how isn't the only thing you need to create a pandemic, though. Many of the most dangerous pathogens would require at least a small amount of genetic engineering unless the bad actor were able to source the pathogen directly.
This wouldn't be anything new. Humans have used various forms of bioweapons throughout history: the Mongols catapulted plague victims into besieged cities, and European settlers gave Native Americans smallpox-infested blankets.
Now that biology has advanced far enough to allow easy, direct modifications that make pathogens deadlier and more transmissible, the threat of a devastating artificial pandemic is higher than ever.
Genetic engineering requires some relatively expensive technology. There are already 69 Biosafety Level 4 (BSL-4) laboratories across the globe designed to handle the world's most dangerous pathogens, about 10 more than there were in 2022. As this number rises, so does the chance of a lab leak or of a bad actor secretly releasing a pathogen.
Some experts argue that the COVID-19 pandemic was the first example of a calamitous lab leak, but direct proof will likely never surface; if a leak did occur, any evidence of it has almost certainly been destroyed.
These pathogens don't even have to be transmissible to humans to be a massive problem. Farmers in the US agricultural heartland worry about a leak from a new BSL-4 lab being built in their backyard to study extremely communicable animal diseases.
You can check out exactly where every BSL-4 lab is located using GlobalBioLabs.com's interactive map.
According to a recent paper from The Sunshine Project, you wouldn't even need such a complex laboratory. There is a surplus of simpler biotechnology setups, built for medical, pharmaceutical, or other research, that could be commandeered to create these pathogens.
Some of these techniques could likely even be used at home with a few pieces of relatively expensive equipment, the right knowledge, and the right synthetic DNA from a synthesis company. As technology and knowledge in the field advance, home laboratories will become more and more capable of synthesizing world-breaking pathogens.
Artificial Pandemics Aren’t the Only Rising Threat
Bioengineering isn't the only field that raises concerns as the general public becomes more informed. The worst-case scenario would seem to be a population in which everyone knows how to engineer thermonuclear weapons.
Luckily, in this case, knowledge isn't everything. Actually building a nuclear weapon is extremely difficult: beyond the basic knowledge of how to build the device, it requires large quantities of special fissile materials that are exorbitantly hard to generate or mine.
Expansive knowledge of chemistry could also be harmful in the wrong hands. With enough ingenuity, someone with a deep understanding of chemistry could likely build large bombs from completely legal materials. Many of the most obvious explosive precursors, such as potassium nitrate, are relatively carefully controlled, but this may not be enough to stop intelligent bad actors.
Strong chemistry knowledge could also be enough to engineer simple but extremely dangerous chemical weapons, which would likely be much easier to make than powerful explosives. Drugmakers, too, often need to be well-versed in chemistry, or at least in the particular synthetic pathways used to create illegal drugs.
Finally, knowledge of computer science and hacking could be extraordinarily useful to bad actors. Cybercrime is already a massive problem around the world: according to the University of North Georgia, a hacker attack occurs every 39 seconds, and reports estimate that digital threats cost businesses a whopping $400 billion every year. It wouldn't be surprising if AI chatbots could be convinced to design, or even directly write, harmful code.
What Can Be Done to Stop AIs From Helping Terrorists?
Now that we know AI chatbots could be helpful to terrorists looking to harm others, what can realistically be done about it? The steady march of AI technology certainly won't stop just because we ask nicely.
Beefing up regulation of DNA synthesis companies, and of the biotech firms that build pathogen-engineering equipment, to require thorough order screening could help in this specific field, but it isn't a full solution by any means.
The best answer may lie in strong regulations requiring AI tools to implement effective safeguards against disseminating incredibly dangerous information. These chatbots should not be a helping hand in finding DNA synthesis companies with lax screening measures. Perhaps AI itself needs to improve simply to police itself better.