Expect These Advancements and Problems in AI in 2023

What can we expect from AI in 2023? Will regulations slow down the advancement and incorporation of AI in everyday life, or will advanced AI systems replace humans in industries once thought safe from automation?

Take the art-generating AI models of 2022 as an example: they created a surge of new business models, powered new apps, and unleashed an outpouring of creativity.

On the other hand, they also prompted artists to protest, claiming the models were trained on their intellectual property without permission, and led repositories like Getty Images to ban AI-generated art.

Expect More Art-Generating Apps in 2023 — and Their Problems

As the popularity of the AI-powered selfie app Lensa from Prisma Labs grows, experts predict a rise in similar apps.

Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, believes that integrating generative AI into consumer technology will amplify both these systems’ positive and negative impacts.

People routinely trick text-generating models into expressing offensive views or producing misleading content, while art-generating models sometimes output altered, over-sexualized depictions of women.

Mike Cook from the Knives and Paintbrushes open research group agrees with Gahntz and believes that 2023 will be the year that generative AI “finally puts its money where its mouth is.”

Artists Will Continue to Defend Their Intellectual Property

DeviantArt, an online art community, faced backlash after releasing an AI art generator trained on artwork from its community. Longtime users accused the platform of a lack of transparency about using their uploaded art to train the system.

OpenAI and Stability AI, the creators of popular AI systems, claim to have implemented measures to reduce harmful content, but social media posts suggest there's still room for improvement.

In response to public pressure, Stability AI announced that it would allow artists to opt out of the data set used to train the next generation of its AI model, Stable Diffusion. Using the website HaveIBeenTrained.com, rights-holders can request opt-outs before training begins.


OpenAI, on the other hand, has partnered with organizations like Shutterstock to license portions of its image gallery, but legal challenges may force it to offer an opt-out mechanism in the future.

In the U.S., OpenAI, GitHub, and Microsoft are facing a class action lawsuit that alleges they violated copyright law by allowing Copilot, GitHub’s code suggestion service, to reproduce licensed code without proper credit.

Possibly anticipating this legal challenge, GitHub has added settings to prevent public code from appearing in Copilot’s suggestions and plans to reference the source of code suggestions.

With the U.K. considering rules that would drop the requirement that systems trained on public data be used strictly non-commercially, expect criticism and debate around AI training data to intensify in the coming year.

Incoming Regulations and Uncertain Investments Trouble AI Companies

Regulations such as the EU’s AI Act and local initiatives like New York City’s AI hiring statute may significantly change how companies approach the development and implementation of AI systems.

Expect the threat of regulation to loom over the industry this year, though there will be much more legal wrangling before anyone is actually fined or sued. In the meantime, companies may try to position themselves in the most favorable categories under these laws, such as the risk tiers outlined in the AI Act.

This classification system sorts AI systems into four risk tiers, from "unacceptable risk" (banned outright) through "high risk" (robot-assisted surgery apps, credit-scoring algorithms) down to "minimal or no risk" (spam filters, AI-enabled video games).

Systems in the highest-risk categories must meet specific legal, ethical, and technical standards before they can be placed on the European market, while those at the lower end face lighter obligations, such as informing users that they're interacting with an AI system.
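The tiered scheme above can be pictured as a simple lookup table. This is only an illustrative sketch: the tier names follow the draft AI Act, but the example systems and the `classify` helper are hypothetical, not part of any real compliance tooling.

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# Tier names follow the draft Act; the example systems and this
# lookup helper are hypothetical, for demonstration only.
RISK_TIERS = {
    "unacceptable": ["government social scoring"],       # banned outright
    "high": ["robot-assisted surgery", "credit scoring"],  # strict standards
    "limited": ["chatbots"],                             # disclosure duties
    "minimal": ["spam filters", "AI-enabled video games"],  # few obligations
}

def classify(system: str) -> str:
    """Return the risk tier for a known example system, else 'unknown'."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unknown"

print(classify("credit scoring"))  # high
print(classify("spam filters"))    # minimal
```

A vendor's incentive, as Keyes notes below, is to argue its product belongs as far down this table as possible, since each step down carries lighter obligations.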

Os Keyes, a Ph.D. candidate at the University of Washington, is concerned that companies will try to classify their AI systems as low-risk to minimize their responsibilities and reduce their visibility to regulators.


While investors seem eager to pour money into generative AI, the top AI fundraisers have been software companies, aside from self-driving firms like WeRide, Wayve, and Cruise and the robotics firm MegaRobo.

Stability AI raised $101 million amid the controversies around Stable Diffusion, and OpenAI is reportedly valued at $20 billion as it enters talks to raise money from Microsoft. These, however, seem to be exceptions to the rule.

In July of 2022, ContentSquare, a company that sells an AI-powered service for providing recommendations for web content, raised $600 million in funding.

In February, Uniphore, a company that sells software for “conversational analytics” and conversational assistants, secured $400 million in funding.

In January, Highspot, a company that offers an AI-powered platform for sales reps and marketers with real-time, data-driven recommendations, raised $248 million.

In 2023, investors may be more inclined to back AI technologies that offer predictable outcomes, such as automating the analysis of customer complaints or generating sales leads.

These are the artificial intelligence and machine learning trends we've identified, but the possibilities are boundless. With tools like ChatGPT pushing the limits of what AI can do, what do you think the next step for AI will be?
