As AI permeates every aspect of life, leading tech giants have made voluntary pledges to uphold safety, trust, and security in AI governance.
Navigating the complex dynamics between corporate self-regulation and governmental control, this piece explores a central question: can AI be governed in a fair, transparent, and accountable manner that fosters public trust?
It examines the commitments these tech titans have made and their implications for the emerging landscape of AI governance.
Tech Giants Pledge for Safer AI Governance: Prioritizing Safety, Security, and Trust
Tech giants including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have made voluntary commitments to safety, security, and trust measures in AI governance.
These companies aim to align their commitments with existing legal frameworks and to bridge regulatory gaps until formal regulations are instituted. The pledges primarily target future generative models more powerful than the current industry frontier, as represented by models such as GPT-4, Claude 2, PaLM 2, Titan, and DALL-E 2.
Key focus areas include:
Safety
Firms pledge to undertake comprehensive ‘red-teaming’ of AI models, probing for misuse, societal risks, and national security threats. They aim to advance AI safety research, improve the interpretability of AI decision-making, and harden AI systems against misuse. They also commit to sharing information about safety risks, emerging capabilities, and attempts to circumvent safeguards with one another and with governments.
Security
Companies agree to implement cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights. They also plan to incentivize third-party discovery and reporting of issues and vulnerabilities in their AI systems through bounty programs, contests, or prizes.
Trust
Companies aim to deploy mechanisms, such as watermarking, that enable users to identify AI-generated audio or visual content. They commit to publicly reporting their AI models’ capabilities, limitations, and appropriate usage domains. They also intend to prioritize research on the societal risks posed by AI systems, focusing on avoiding harmful bias and discrimination and on protecting privacy.
The development and deployment of AI systems to address significant societal challenges are also part of this initiative.
In the field of AI governance, these companies are setting the pace.
The Dynamics of AI Governance
AI governance at this stage is shaping up as a compelling power play between the US government and premier tech corporations.
As AI expands its influence across myriad sectors, one of the most striking developments is a peculiar call from the industry’s own trailblazers: a request for regulatory intervention. OpenAI executives, for example, have called for an international AI regulatory body, while Tesla’s Elon Musk discussed AI regulation with Senate Majority Leader Chuck Schumer.
Met with @SenSchumer and many members of Congress about artificial intelligence regulation today.
That which affects safety of the public has, over time, become regulated to ensure that companies do not cut corners.
AI has great power to do good and evil. Better the former.
— Elon Musk (@elonmusk) April 27, 2023
When industry founders request regulation, it’s crucial to understand their motivations. It might indicate a sense of responsibility and foresight about potential AI risks, but it might also be a ploy to secure dominance by creating a government-backed monopoly that stifles competition.
The Biden-Harris administration moved swiftly to seize AI’s promise while managing its risks and protecting Americans’ rights and safety. It secured voluntary commitments from the aforementioned tech giants, a move that both acknowledges these companies’ authority and strengthens their role in shaping AI governance.
The dominance of these AI giants raises questions about impartiality in AI governance.
Amid such concerns, it’s worth noting that President Biden has also met with a diverse assembly of AI experts in San Francisco. Guests included Tristan Harris, Co-founder and Executive Director of the Center for Humane Technology; Jim Steyer, CEO and Founder of Common Sense Media; Joy Buolamwini, Founder of the Algorithmic Justice League; and Sal Khan, Founder and CEO of Khan Academy.
I sat down with experts at the intersection of technology and society who provided a range of perspectives on AI’s enormous promise and risks.
In seizing this moment of technological change, we also need to manage the risks.
My Administration is on it. pic.twitter.com/kLCSeo0Zaz
— President Biden (@POTUS) June 21, 2023
Furthermore, Vice President Harris met with consumer protection, labor, and civil rights leaders to discuss AI.
I convened consumer protection, labor, and civil rights leaders to discuss our work to harness the power of artificial intelligence while protecting Americans from harm and bias. pic.twitter.com/GoseeKkQQE
— Vice President Kamala Harris (@VP) July 13, 2023
Despite this broader outreach, AI governance at this stage still centers on the tech giants. The question remains: can the entities most closely consulted on AI governance create a fair AI landscape when they are the ones building the technology?
Public Perception and Trust in AI Governance
It is against this backdrop of power dynamics and regulatory debate that attention must turn to the general public’s stance. As AI continues its pervasive transformation of multiple sectors, public interest and concern grow in tandem. Public perceptions mix hope with worry, offering a window into societal expectations and fears about AI’s trajectory.
Two recent surveys reveal public views on unchecked AI programs and trust in AI governance bodies.
Public confidence in the institutions expected to regulate AI varies considerably. A survey conducted by KPMG Australia and the University of Queensland found that only about a third of respondents (34%) had high or complete confidence in governments or tech companies to govern AI, while nearly half (47%) expressed high or complete confidence in national universities and security/defense forces to do so.
Source: Statista
Furthermore, according to Ipsos Global Advisor’s 2023 Predictions survey, an average of 27% of respondents worldwide consider it likely that a rogue AI program will cause problems around the world in 2023. The share was highest in India (53%); in the US, a quarter of respondents (25%) consider it likely.
Source: Statista
This data reflects a skeptical public stance towards the ability of AI industry leaders to responsibly oversee AI’s expansion. It underscores the need for a more transparent, inclusive, and accountable AI governance structure that fosters trust.
Related Articles:
- Corporate AI Leaders Meet With Biden on AI Safety But They’re the Ones That Need Policing
- 30+ OpenAI Statistics for 2023 – Data on Growth, Revenue & Users
- 50+ ChatGPT Statistics for July 2023 – Data on Usage & Revenue