The European Union and the US are collaborating to develop a voluntary code of conduct for AI amid growing concerns regarding the potential dangers of this nascent technology.

European Commission Vice President Margrethe Vestager, speaking Wednesday at a meeting of the EU-US Trade and Technology Council, said a draft of the code is expected to be released in the coming weeks.

“We will be very encouraged to take it from here. To produce a draft. To invite global partners to come on board. To cover as many as possible,” Vestager said, adding:

“And we will make this a question of absolute urgency to have such an AI Code of Conduct for a voluntary signup.”

The aim is to bridge the gap while the EU finalizes its groundbreaking AI rules, which will not take effect for up to three years.

Officials will seek feedback from industry players, invite parties to sign up, and promise a “final proposal for the industry to commit to voluntarily,” Vestager said.

US Officials Express Support For AI Regulation

US Secretary of Commerce Gina Raimondo, who also attended the TTC meeting, expressed a willingness to engage in discussions toward shaping a voluntary AI code of conduct.

Raimondo acknowledged issues around data privacy, misuse, and the potential for models to fall into the hands of malign actors.

“Unlike other technology, the rate of the pace of innovation is at a breakneck pace, which is different and a hockey stick that doesn’t exist in other technologies,” she said.

US Secretary of State Antony Blinken also stressed the “fierce urgency” for Western partners to act as AI is growing at a rapid pace.

The Trade and Technology Council between the EU and the United States was set up in 2021 to ease trade friction after the turbulent presidency of Donald Trump but has since set its sights largely on artificial intelligence. It is jointly led by American and European officials.

In a joint statement, leaders from both sides acknowledged that AI is a “transformative technology with great promise for our people,” but it carries risks.

They said that experts from both regions would work on “cooperation on AI standards and tools for trustworthy AI and risk management.”

The voluntary code would be open to all “like-minded countries” and aims to advance trustworthy, responsible AI technologies while mitigating their risks through a risk-based approach.

Vestager also said she hopes to do this “in the broadest possible circle,” bringing Canada, the UK, Japan, and India on board.

Experts Warn AI Risks Should be a Global Priority

Scientists and tech leaders have warned that AI risks should be treated as a global priority, citing fears about the technology’s potential threat to humanity.

Even Sam Altman, CEO of ChatGPT maker OpenAI, has signed a statement saying that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Earlier this year, the Center for Artificial Intelligence and Digital Policy, a leading tech ethics group, filed a complaint with the FTC asking the agency to halt commercial releases of GPT-4, citing privacy and public safety concerns.

In the complaint, the group claimed that GPT-4 is “biased, deceptive, and a risk to privacy and public safety.” It also said the tool has caused distress among some users with its quick, human-like responses to queries.

Prior to that, a group of tech leaders, artificial intelligence experts, and industry executives signed an open letter calling for a six-month pause on developing systems more powerful than GPT-4, citing potential risks to society.
