Artificial intelligence (AI) is quickly becoming essential for innovation across industries, and Google and Anthropic are among the companies leading the charge in developing new generative AI technologies. However, their progress is not without its hurdles.

Both face significant challenges, from AI systems that confidently make mistakes to disputes over the use of copyrighted and sensitive information.

This opens up a discussion on how these tech giants are not only advancing AI but also addressing its problems. As more companies adopt these technologies, striking the right balance between harnessing AI’s capabilities and understanding its limits is becoming increasingly important.

Overcoming AI’s Accuracy Challenges

At the WSJ CIO Network Summit in Menlo Park, California, Google and Anthropic acknowledged that their AI systems sometimes confidently present incorrect information, a problem known as “hallucination.” Naturally, a confident error is worse than an answer that admits it may be wrong.

One of the most famous cases of AI hallucination so far occurred in a courtroom, where a lawyer unknowingly submitted a filing full of fake case citations generated by ChatGPT. As you can imagine, this did not go well for him.

Anthropic co-founder Jared Kaplan explained that the company is developing methods to reduce these errors. One approach is to train AI models to admit when they lack the information to answer a question, essentially teaching them to say, “I don’t know.” The underlying knowledge gaps remain, but an honest refusal is far less harmful than a confident error.
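Conceptually, the simplest version of this behavior is a confidence cutoff: the model answers only when its probability for the best candidate clears a threshold, and says “I don’t know” otherwise. The Python sketch below is a toy illustration of that idea, not Anthropic’s actual method; the model, the candidate scores, and the threshold are all made up.

```python
# Toy sketch of the "I don't know" behavior: instead of always returning its
# best guess, the model abstains when its confidence is too low. Everything
# here is hypothetical; real systems learn this behavior during training
# rather than applying a hard-coded cutoff.

ABSTAIN_THRESHOLD = 0.75  # hypothetical cutoff; in practice tuned on held-out data


def answer_or_abstain(candidates: dict[str, float],
                      threshold: float = ABSTAIN_THRESHOLD) -> str:
    """Return the top-scoring answer, or abstain if confidence is too low.

    `candidates` maps each candidate answer to the model's probability for it.
    """
    best_answer, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best_answer if confidence >= threshold else "I don't know."


# Confident case: one answer dominates, so the model responds.
print(answer_or_abstain({"Paris": 0.95, "Lyon": 0.03, "Nice": 0.02}))  # Paris

# Uncertain case: probability is spread thin, so the model abstains rather
# than confidently guessing wrong.
print(answer_or_abstain({"1912": 0.40, "1913": 0.35, "1921": 0.25}))  # I don't know.
```

The threshold makes the tradeoff explicit: raise it and the model makes fewer confident errors but refuses more often.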

However, Kaplan noted a catch: if AI models are trained to be overly cautious to avoid mistakes, they might become too hesitant to be truly useful. A rock, after all, is never wrong, but it provides no value.

Improving Data Use and Training in AI

Google and Anthropic are also tackling other issues, such as making model training more efficient and handling copyrighted or sensitive data in training materials, though these challenges do not yet have straightforward solutions.

Kaplan stated that if an AI company is asked to remove specific content from its model’s training data, there is no straightforward way to do so. Once a model is trained, each source’s influence is diffused across its weights, so truly removing one source generally means retraining the model without it.
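To see why, consider a deliberately tiny “model” whose single parameter is just the average of its training examples. This is a drastic simplification, but it shows how every source blends into shared weights, leaving no per-document piece to delete. The documents and values are made up for illustration.

```python
# Toy illustration of why deleting one source from a trained model is hard.
# The "model" here is a single averaged parameter; a real LLM has billions of
# weights, but the same blending effect applies at scale.

training_data = {"doc_a": 2.0, "doc_b": 4.0, "doc_c": 9.0}  # made-up sources

# "Training": every document's contribution is folded into one shared weight.
weight = sum(training_data.values()) / len(training_data)
print(f"trained weight:   {weight:.2f}")  # 5.00

# There is no "doc_c slot" inside `weight` to erase. The only reliable way to
# remove doc_c's influence is to retrain on everything else from scratch.
remaining = {k: v for k, v in training_data.items() if k != "doc_c"}
retrained = sum(remaining.values()) / len(remaining)
print(f"retrained weight: {retrained:.2f}")  # 3.00
```

Research into “machine unlearning” aims to approximate that retraining more cheaply, but no such method has yet become standard practice.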

Multiple copyright-infringement lawsuits have already been filed over the data used to train AI models. If an AI company loses one of these major cases, it could pose an existential threat to the technology as we know it: assembling training datasets of sufficient size from copyright-free material alone would be incredibly difficult.

The leading case is the lawsuit brought by The New York Times against OpenAI and Microsoft. The Times accuses both companies of copying millions of its articles without permission to train their AI systems.

These systems, including ChatGPT and Copilot, now compete with the Times for readers and advertising revenue. The lawsuit argues that by using the newspaper’s reporting, OpenAI and Microsoft have profited from the Times’ work without paying for it.

Alongside this, a group of authors, including Sarah Silverman, has filed another notable copyright-infringement lawsuit against OpenAI and Meta.

They argue that these companies used the creative works of many authors without permission and without offering anything in return.

The lawsuit includes examples in which ChatGPT produced detailed plot summaries of copyrighted books on request, suggesting it was trained on those texts. Silverman’s complaint adds that her memoir, “The Bedwetter,” and other works were used without consent, sourced from illegal collections of copyrighted materials.

Balancing Trust and Innovation in AI for Businesses

Both Anthropic and Google have committed to addressing these limitations, but businesses remain reluctant to fully trust these AI systems with sensitive corporate data. Corporate leaders must justify their investments in AI and need assurance that the technology is reliable and grounded in reality.

In fact, during the summit, an executive from a financial services company asked how AI technologies can be deployed in highly regulated or sensitive environments.

Indeed, trusting AI with sensitive data is a significant challenge that limits the adoption of advanced AI across many industries. The issue is especially acute in healthcare, where patient privacy and correct medical diagnoses are critical; in finance, where accurate data handling and predictions can sway markets and investment decisions; and in legal services, where protecting case data and complying with privacy laws are essential.

In these areas (and many others), the bar for data security and regulatory compliance is extremely high. Any mistake or misuse of data by AI, such as inaccuracies or “hallucinations,” could cause serious legal trouble, financial losses, and reputational harm.

The risk is compounded by the possibility that AI systems will generate outputs based on incorrect, biased, or sensitive data, raising doubts about whether these technologies are safe and ethical to use for critical decisions.

Moreover, some industries run on proprietary information and intellectual property. The technology sector, for example, depends on innovation and trade secrets to stay ahead, while the entertainment industry must grapple with copyright.

The problem isn’t just preventing unauthorized access to data; it’s ensuring that AI systems are reliable in how they handle and use the sensitive information they work with.

AI Is Still Plagued With Existential Problems

As AI becomes part of more and more sectors, firms like Google and Anthropic are tackling the big issues that come with the technology. They are actively working to minimize errors such as hallucinations and to manage sensitive and copyrighted data responsibly.

However, at this stage, the reality is that businesses cannot yet fully trust these AI models with critical and sensitive data, given the legal, financial, and reputational risks involved.

Finding the right balance between taking advantage of AI’s extensive capabilities and ensuring its reliability, ethical use, and adherence to laws is essential for its effective use in business activities.