Lina Khan, chair of the Federal Trade Commission (FTC), warned during a congressional hearing that cutting-edge artificial intelligence (AI) tools like OpenAI’s ChatGPT could be used to “turbocharge” fraud and scams.

AI Is Enabling Deceit And Fraud

On Tuesday, the US Congress held a hearing to address the FTC’s efforts to safeguard American consumers against fraud and other dubious business practices.

When asked about what the FTC is doing to protect US citizens from unethical practices tied to technological advancements, Khan acknowledged that the growth of AI, while presenting a lot of benefits and opportunities, has also come with many risks.

She went on to warn House representatives that AI is inherently capable of facilitating and accelerating fraud, a prospect she described as a serious concern.

“I think we’ve already seen ways in which it could be used to turbocharge fraud and scams. We’ve been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action,” she said.

Additionally, the FTC chair stated that the commission’s technologists were being integrated across all areas of its work, including consumer protection and competition, to help tackle the issue and ensure that any AI-related problems are properly identified and handled.

FTC Will Adapt To AI, Says Slaughter

Addressing the same topic, Commissioner Rebecca Slaughter echoed Khan’s remarks, adding that throughout its 100-year history the agency had successfully adapted to new technology and had the know-how to do so once more to address fraud fueled by artificial intelligence.

Moreover, Slaughter stated that the current buzz surrounding artificial intelligence is significant because, in some respects, it is a breakthrough technology. “But our obligation is to do what we’ve always done — which is apply the tools we have to these changing technologies, make sure that we have the expertise to do that effectively, but to not be scared off by the idea that this is a new revolutionary technology, and dig right in on protecting people,” she added.

In the testimony, presented by Khan and Slaughter alongside Commissioner Alvaro Bedoya, the Commission announced that it would take legal action against companies that use artificial intelligence for misleading practices or to violate anti-discrimination laws.

Bedoya warned that companies using algorithms or artificial intelligence were not permitted to violate civil rights laws or break rules against unfair and deceptive acts. “It’s not okay to say that your algorithm is a black box and you can’t explain it,” he said.

He added: “Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply. There is a law, and companies will need to abide by it.”

In addition to artificial intelligence, the testimony presented before the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce covered a wide range of topics, most of which were tech-related.

The trio also spoke about the FTC’s efforts to combat the growing problem of spam phone calls, its handling of violations of COPPA (the children’s privacy law), and its warning to Opendoor, an online home buyer, over false claims about potential sales prices, among other concerns.

The hearing came a few weeks after the FTC received a request from Marc Rotenberg, president of the Center for AI and Digital Policy (CAIDP) and a longtime consumer protection advocate on technology issues, to investigate OpenAI and GPT-4 over the range of risks associated with generative artificial intelligence.
