AI chatbots could assist criminals in creating bioweapons, warns Anthropic CEO

During his testimony before the US Senate, Anthropic CEO Dario Amodei cautioned that AI systems present a significant risk of being used to develop bioweapons and other hazardous weapons in the near future.

Since the introduction of ChatGPT in November 2022, the generative AI landscape has advanced rapidly. Despite concerns raised by regulators and experts, numerous new chatbots and tools have appeared with increasingly powerful capabilities. That progress, however, also carries a potential threat to global security and stability.

Dario Amodei, CEO of Anthropic, has sounded the alarm that AI systems might empower criminals to develop bioweapons and other hazardous weapons within the next two to three years. Anthropic, a company co-founded by former OpenAI employees, gained attention recently with the release of its ChatGPT competitor, Claude.

Reports suggest that the startup has collaborated with biosecurity experts to explore the implications of large language models for potential weaponization in the future.

During a hearing before a US Senate technology subcommittee, Amodei emphasized the urgent need for regulation to address the potential malicious use of AI chatbots in various critical fields, including cybersecurity, nuclear technology, chemistry, and biology.

He stressed the need for swift action, suggesting that measures to restrict the capabilities of AI systems be put in place by 2024, 2025, or 2026. Without proper safeguards, Amodei warned, the consequences could be severe, making it urgent to address these risks now.

This marks another instance of an AI company recognizing the potential risks associated with their product and advocating for regulation. Notably, Sam Altman, the leader of OpenAI, which is responsible for ChatGPT, also emphasized the need for international regulations on generative AI during a visit to South Korea in June.

During his testimony before the senators, Amodei noted that while Google searches and textbooks offer only partial information, which requires significant expertise to put to harmful use, his company and its collaborators have found that current AI systems can help bridge some of these knowledge gaps.

Amodei testified that he and his collaborators examined whether current AI systems can assist with the more difficult steps of such production processes. They found that today's AI systems can indeed help with some of these steps, albeit incompletely and unreliably, indicating that the risk is beginning to emerge.

He cautioned that, without appropriate guardrails, AI systems may evolve to fill in these missing pieces entirely. Extrapolating the trajectory of AI progress over the next two to three years, the concern is that such systems could supply all of the missing steps and, absent proper mitigations, substantially expand the number of actors with the technical capability to carry out large-scale biological attacks.

While Amodei’s timeline for the creation of bioweapons using AI may be somewhat aggressive, his concerns are well grounded. The detailed knowledge required to develop weapons of mass destruction, such as nuclear bombs, typically resides in classified documents and in the expertise of specialized professionals. AI has the potential to make this information far more widely available and accessible, raising genuine apprehensions about its misuse.

How the researchers elicited harmful information from AI chatbots remains unclear. Chatbots such as ChatGPT, Google Bard, and Bing Chat are typically designed to refuse queries involving harmful content, such as instructions for making pipe bombs or napalm.

A recent finding by researchers from Carnegie Mellon University and the Center for AI Safety in San Francisco has heightened these concerns. They showed that open-source models can be used to craft jailbreaks for widely used, closed AI systems: by appending specific character sequences to the end of a prompt, the researchers were able to bypass safety measures and get chatbots to produce harmful content, hate speech, or misleading information. The finding underscores that current guardrails are not foolproof.
