Graphika, a social network analysis firm, has published data on the rise of "AI undressing": the use of generative AI tools to digitally remove clothing from images of people.
Graphika analyzed posts and comments on Reddit and X, and the findings were striking.
Some reports claim that more than 30 websites now offer this service. In 2022, there were only about 1,280 links advertising these tools; this year, the count has already passed 32,100.
AI technology is evolving rapidly, and some people are misusing it. Synthetic NCII services use artificial intelligence to create non-consensual intimate imagery, generating explicit content without the consent of the people depicted.
Graphika has stated plainly that AI has made generating explicit content without anyone's consent significantly easier and cheaper for many users.
Before these tools existed, would-be customers had to create such images themselves, a difficult, time-consuming, and expensive process.
Graphika is sounding the alarm over the growing use of AI undressing tools, warning that the spread of fake explicit content could fuel serious harms, including targeted harassment, sextortion, and the production of child sexual abuse material.
These tools aren't limited to still images. AI is also being used to make fake videos, known as deepfakes. The technology has been used to impersonate public figures such as YouTuber MrBeast and actor Tom Hanks. It has also hit the Bollywood industry, with deepfakes targeting celebrities such as Rashmika Mandanna, as well as Indian Prime Minister Narendra Modi.
In October, the Internet Watch Foundation found 20,254 explicit AI-generated pictures of children on a single dark-web forum in just one month. The group worries that AI could flood the internet with even more of this material.
The IWF also says new technology is making it harder to tell whether a picture is real or fake. In June, the United Nations called AI-generated media a serious problem, especially on social media, and the European Union recently agreed on rules governing how AI can be used.
Predicting when a misuse of technology like AI will stop is complex. It depends on various factors, such as advances in regulation, technological countermeasures, and societal awareness. Efforts to control misuse are ongoing, but complete eradication may prove difficult given the evolving nature of technology and the internet.
Preventing misuse involves a collective effort. It requires stricter regulations, ethical use of technology by developers, heightened awareness among users, and continuous advancements in tools that detect and counteract such misuse.
As technology progresses, so do the means to address these issues. The aim is to create a safer digital environment through a combination of legal frameworks, technological innovations, and education about responsible use. While it may take time, ongoing efforts strive to minimize and ideally eliminate such harmful misuse of AI and technology.