President Biden has signed an Executive Order on the safe and secure development of artificial intelligence (AI) in the U.S., a move that could prove a turning point in the corporate AI race.
Key Initiatives Outlined in the Executive Order for AI Governance
Here are the key points from the Executive Order:
Improving AI Safety and Security
- Developers of the most powerful AI systems must share their safety test results with the U.S. government.
- New testing standards will be set to ensure AI systems are safe before they are released to the public.
- These standards will also be applied to critical infrastructure sectors to protect them from AI-enabled threats.
- Strong safeguards will be established to prevent AI from being misused to engineer dangerous biological materials.
- New standards will be set to detect fake AI-generated content and confirm real official content to protect people from fraud.
- A new cybersecurity program will be created to use AI to find and fix software vulnerabilities.
- A National Security Memorandum will be developed to ensure the U.S. military and intelligence community use AI safely and effectively, and to prepare for adversaries, such as China, who might use AI for military purposes.
Protecting People’s Privacy
- The order encourages the development of techniques that preserve privacy and calls for new data privacy laws.
- It pushes for more research on privacy-preserving tools through a Research Coordination Network, with the National Science Foundation helping to spread these technologies across government agencies.
- It also looks at how agencies use and collect data, aiming to strengthen privacy guidelines.
- It also sets out guidelines for evaluating how well privacy-preserving techniques work in AI systems, to better protect Americans' data.
Promoting Fairness and Civil Rights
- The order outlines actions to counter discrimination resulting from AI misuse.
- It also includes plans to ensure AI is used fairly throughout the criminal justice system.
Supporting Consumers, Patients, and Students
- The order highlights the responsible use of AI in healthcare and education.
- It also calls for a safety program to report and address AI-related issues in healthcare.
Supporting Workers
- The order lays out principles to address AI-driven job displacement and uphold labor standards.
- A report on AI's potential impact on the labor market is also planned.
Encouraging Innovation and Competition
- Efforts to boost AI research and support for small developers are mentioned.
- The order also aims to make it easier for skilled immigrants to work in the U.S. AI sector.
Enhancing U.S. Leadership Abroad
- International collaborations on AI are encouraged to ensure safe and responsible AI use globally.
- The goal is to accelerate the development and adoption of AI standards internationally while promoting safe, responsible AI use to tackle global challenges.
Improving Government Use of AI
- Guidelines for how government agencies use AI are to be developed.
- Agencies will be helped to acquire specified AI products and services through faster, more efficient contracting.
- A government-wide hiring drive for AI professionals is also planned to help modernize federal AI systems.
Recent Endeavors in Advancing Responsible AI Governance
This Executive Order is part of a larger plan by the Biden-Harris Administration to promote responsible AI governance, building on previous efforts.
Such efforts include voluntary commitments from tech giants to follow safe AI development practices. Companies including Amazon, Google, and Microsoft pledged to make AI safer, more secure, and trustworthy, complying with existing laws and filling gaps where needed. They are focusing on testing advanced AI models thoroughly, improving cybersecurity, and finding ways to identify AI-generated content.
Such actions reflect growing concern in the AI community that AI technology is advancing faster than society's readiness for it.
Earlier, the nonprofit Center for AI Safety published a statement echoing the concerns of many AI leaders and researchers about the rapid growth of AI. Geoffrey Hinton and executives from companies such as OpenAI, Microsoft, and Google were among the signatories. The statement highlighted the need for preparation and caution to avoid mistakes with serious consequences.
Meanwhile, the United Nations has formed a high-level advisory group on AI, bringing together 38 experts from around the world. They aim to create global guidelines for AI, addressing its risks and promoting its benefits. Co-chaired by Carme Artigas, Secretary of State for Digitalisation and Artificial Intelligence of Spain, and James Manyika, Senior Vice President of Google-Alphabet, this group will provide diverse insights to help shape international AI governance, ensuring it aligns with human rights and global development goals.