The Biden administration is pushing for a monumental new rule that would give it sweeping control over the AI industry, as part of the AI executive order published in October 2023. The regulation would force American cloud companies to report whenever foreign entities use their systems to train AI models; from there, the government could restrict cloud providers from doing business with those entities. The administration has also been using the Defense Production Act (DPA) for AI oversight.

These measures are intended to bolster national security, particularly to safeguard against rivals like China, which is locked in a race with the US for AI supremacy. However, they raise important questions and concerns. Is the balance between security and technological advancement at risk? Could these rules lead to less international cooperation in AI development?

This article delves into these specific aspects, exploring the challenges of these proposals.

Will the Rule Bolster National Security or Just Crush the AI Industry?

The US Department of Commerce’s recent proposal, which would require American cloud companies to report when foreign customers use their computing power for AI training, is a major move in tech policy. Such control over global cloud computing, one of the most important resources in the AI revolution (along with talent), is incredibly powerful, but that doesn’t mean it will be a positive change.

This regulation is part of President Biden’s broader plan to make AI technology safer and protect against misuse by foreign countries. Whether it will actually be effective is still to be determined.

Cloud Companies Caught in the Crossfire

This situation puts American cloud companies like Amazon and Google in a very tricky position. They are tasked with a role in national security, but must also manage their global customer relations and business interests. If these rules are only imposed in the US, American companies might lose out to foreign competitors.

This proposal also underlines that the US is serious about keeping its edge in tech, especially AI, in the competition with China. Both countries are racing to be leaders in high-tech areas, and the US is stepping up its game with this proposal.

Strangely, the White House science chief said that the US and China still need to work together to prevent the potential catastrophic consequences of runaway AI development.

Putting this new regulation into action won’t be easy, especially if the government is trying to work with China at the same time. Cloud services are different from regular goods – they’re online and don’t cross borders in the usual way. This means the Commerce Department has to figure out new ways to track and control these cloud services.

Furthermore, this rule could change how countries work together on AI. Countries might start keeping their tech to themselves, leading to less international cooperation in AI development.

Ultimately, while the proposal aims to safeguard national security interests, it raises significant questions about the balance between security, innovation, and international cooperation in the rapidly evolving field of AI. How this rule is put into practice and how other countries react will be key in shaping the future of AI.

Using the Defense Production Act for AI Oversight: Concerns and Questions

The Biden administration is also using the Defense Production Act (DPA) to require technology companies to report their use of significant computing power for training AI models.

Under this requirement, major tech companies like OpenAI, Google, and Amazon will have to share vital information about their AI projects with the US government. This will give the government insight into sensitive projects and the safety testing conducted on AI innovations.

Gina Raimondo, US Secretary of Commerce, explained that the DPA will be used to conduct a survey, obliging companies to disclose when they train a new large language model and provide safety data for government review.

Challenges Surrounding the Use of The DPA for AI Oversight

However, using the DPA in this manner comes with several concerns.

Firstly, the DPA was created for national security circumstances and emergencies, to ensure adequate physical production and material supply in a crisis. Using this law to monitor AI model training is therefore a massive departure from its original purpose, and critics have been quick to call the president out on the disparity.

Furthermore, the DPA is usually invoked only in times of critical need. Using it for routine reporting could create excess bureaucracy and red tape, dramatically impeding companies’ ability to innovate. This raises questions about the extent of the government’s involvement in a field where agility is crucial for survival.

Thirdly, AI development is a global activity, and tech companies often operate internationally, collaborating across borders. Such extensive reporting requirements could disrupt alliances and cooperation by raising concerns about data privacy and national control.

In summary, while the goal of using the DPA for AI oversight is to enhance national security and transparency, it raises a lot of questions about how suitable this law is for this purpose.

There needs to be a balance between security and technological advancement, which does not seem achievable with this particular application of the DPA.

Will the Rules Help or Hurt US AI Progress?

As the US government takes significant steps to enhance AI security, the proposals on foreign cloud computing usage and the application of the DPA raise many questions and concerns. The US is clearly trying to safeguard its national interests, especially against rivals like China, which is locked in an AI race with the US. However, the complexity of implementing these measures cannot be overstated.

The Biden administration’s new rules could be the move the US needs to beat China in the AI race, but they could just as easily become a major stumbling block that leaves the country tangled in red tape and falling behind.

The future of AI hinges on the ability to strike a balance between safety and global cooperation. This is why the tech industry, policy makers, and international partners must work together to find this delicate balance.