The European Consumer Organisation (BEUC), an umbrella group representing 46 independent consumer organizations from 32 countries, yesterday published a letter urging the EU's Consumer Protection Cooperation Network – a.k.a. the CPC Network – to investigate ChatGPT and OpenAI's commercial practices.
The letter, which was signed by Ursula Pachl, the Deputy Director General of BEUC, argued that even though artificial intelligence will bring many benefits to society and the global economy, there are also “big challenges and concerns” related to how consumers may be harmed by it.
One of those dangers is the spread of authoritative-sounding misinformation, as AI models like ChatGPT may at times produce responses that are inaccurate or entirely fabricated – a phenomenon known as AI hallucination.
“Despite their power to manipulate and distort consumer behaviour, these systems are not specifically regulated and are put on the market without an adequate impact assessment by an independent third party and without public scrutiny or specific oversight”, Pachl asserted.
BEUC Urges the CPC Network to Look into Four Specific Possible Dangers
The letter asks the CPC Network to investigate whether systems like ChatGPT and other similar AI-powered tools pose a risk to consumers, and to propose actions that could remedy these dangers.
Four specific concerns were cited in the BEUC's letter, the first being the ability of AI-powered chatbots to mislead consumers by providing inaccurate or false information that can appear factually correct to users because of the system's eloquence.
Meanwhile, the BEUC is also worried that ChatGPT and other similar software are being incorporated into systems and applications in sensitive areas of the economy like finance, insurance, and e-commerce where the information provided by the AI tools can persuade consumers to make decisions based on faulty advice.
For example, offering financial advice is illegal in many jurisdictions where ChatGPT is available. Even though OpenAI's tool explicitly states that it is not permitted to provide that kind of information, companies in the financial services industry may opt to override these safeguards by tapping into the application programming interface (API) offered by the company and making the necessary tweaks.
Another aspect of the AI tool that is quite controversial and has been flagged by other regulators as dangerous is the absence of appropriate filters to protect minors and prevent them from accessing the tool without adult supervision.
In this regard, BEUC commented that children and teenagers are more susceptible to believing that any argument and response from the chatbot is factually correct even though it may not be.
This can distort their concepts about multiple topics and prompt them to make dangerous and potentially harmful decisions.
“Younger consumers are typically exposed to screens and online content many hours per day and are particularly receptive to harmful uses of AI language model technologies because of their credulity. We have already clear evidence and experience with the dangerous impact that algorithms on social media can have on teenagers and children”, the letter reads.
Regulators in Multiple Jurisdictions Are Taking Action on AI Tools
There are already precedents within the European Union of independent, government-backed bodies acting against ChatGPT. One is the Italian data protection authority (DPA), which recently banned the application in the country until it complies with several demands related to data privacy and child protection.
The BEUC’s letter cited a complaint filed by the Center for AI and Digital Policy with the United States Federal Trade Commission (FTC) urging the agency to investigate OpenAI and its software to “ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace”.
Meanwhile, a letter signed by over 1,100 tech experts last month also urged AI labs to stop developing models more powerful than the recently launched GPT-4, arguing that these technologies pose a threat to society that should be carefully studied before being mass-adopted by individuals and organizations.