The online AI-powered image generator Midjourney is no longer letting users test the software for free after several ‘deepfakes’ went viral this week, including a picture of Donald Trump being arrested and another depicting Pope Francis in stylish clothing.
The company’s Chief Executive Officer, David Holz, revealed in a post published on the company’s Discord channel that some users were abusing the free trial to create these kinds of misleading images.
Before today, Midjourney offered users the chance to create 25 images for free and then charged a $10 subscription if they wanted to keep using the service.
Midjourney Has Tried to Keep Things Under Control – It Has Not Succeeded
Experts have warned repeatedly that AI-powered solutions like this could be exploited by bad actors to spread misinformation and confuse the public by creating seemingly original pictures that are in fact produced by highly capable and advanced computers.
At one point, these concerns were considered exaggerated, as the quality of the images was quite poor and fake pictures could be easily identified. However, the latest advances in artificial intelligence, including sophisticated new models created by OpenAI such as GPT-4, have made it clear that it can now be much harder to tell what is real and what is not.
To prevent users from creating these so-called ‘deepfakes’, Midjourney has banned several keywords from being used when prompting the AI to produce an image. However, some reports indicate that these bans can be easily bypassed by using synonyms or rephrasing the prompts. The outcome is often something equally deceitful.
“We tried turning trials back on again with new safeties for abuse but they didn’t seem to be sufficient so we are turning it back off again to maintain the service for everyone else”, Holz said in comments to journalists from the tech-focused outlet The Verge.
‘Deepfakes’ Are Just a Glimpse of How AI Could Be Used for Bad Purposes
The proliferation of ‘deepfakes’ is just one example of how easily the AI trend could spiral out of control if regulators don’t step in to force companies to properly assess the risks that the technology poses to society.
This week, a group of public figures from the tech industry published an open letter urging AI labs to pause the development of more powerful models for at least six months until the ramifications of further advancements in the field can be adequately studied. A total of 1,100 signatories, including Elon Musk and Steve Wozniak, signed the letter.
In addition, a group of ethics experts who focus on the tech industry recently urged the United States Federal Trade Commission (FTC) to place a temporary ban on OpenAI to prevent the company from releasing new models.
The most recent studies of OpenAI’s GPT-4 model have likened it to human-like intelligence due to its capacity to solve complex exams. The company recently clarified that it is not working on more powerful versions of its generative AI technology.
Earlier this month, OpenAI allowed third parties to connect ChatGPT to the internet via plugins that let the AI-powered chatbot tap into the web and other databases to make suggestions and respond to users’ queries.
OpenAI commented that it has studied the implications of this move and has put in place a set of safeguards that should prevent users from exploiting ChatGPT’s capabilities for malicious purposes.
Even though OpenAI claims to be aware of all the risks that result from this kind of exposure, some tech experts remain skeptical. In any case, these concerns have not prevented companies such as Microsoft (MSFT), Alphabet (GOOG), and multiple others from rapidly launching AI-powered products.