How would you feel if you were shown a video of yourself committing a crime? How much more upset would you be if you knew you didn’t do it, yet others took the footage at face value, regardless of your pleas? AI has the potential to cause exactly this kind of havoc, and even though it cannot yet generate ultra-realistic video, it is already ruining lives. In a recent incident at Crandall High School in Texas, a student was found to run a TikTok account with millions of views (@crandall.kirkinator) where they posted defamatory content about their teachers using deepfake AI technology. The application in question was Viggle AI, a platform that inserts images of people into “Viggle bases”: templates that make the subjects perform whatever actions the creator desires. The student initially posted AI-generated clips of teachers dancing or acting out memes, but the posts quickly devolved into harassment. The videos showed teachers performing TikTok dances under captions like, “Mr. Sherwood, dance if you’re a bum.” Although the captions were meant only to poke fun at the teachers, the situation quickly escalated. Upon finding the account, the creator’s followers doxxed and tormented the teachers with spam calls, emails, and general harassment. After followers sent the account owner DMs showing what they were saying to the teachers, the owner posted a statement insisting they had never intended to harm the teachers in this way and that the videos were meant only as harmless jokes. By then, however, the damage had been done: countless strangers held the teachers’ contact information, and the account had inspired imitators across the country, creating an influx of new “Kirkinators” (and new victims).

This incident highlights the ethical questions surrounding AI image and video generation. The teachers in the TikTok videos never consented to appearing in them, and the rise of tools like Viggle AI has created new legal quandaries. In the U.S., laws like the Take It Down Act have been enacted to combat nonconsensual explicit content. The act criminalizes the unauthorized publication of explicit images, including AI deepfakes, and requires social media platforms to remove them within 48 hours, as the National Association of Attorneys General explains. In the case of the Crandall Kirkinator, however, the AI-generated videos and memes were not sexually explicit, so they fell outside the act’s scope. This raises the question: Is AI image and video generation truly a helpful tool if so many regulations must be placed on it to ensure it is not used for harm?

The Crandall High School incident shows what can happen when generative AI is used for nefarious purposes; the teachers involved were spared further harassment only by the account owner’s goodwill. To justify AI image and video generation, its benefits must be weighty enough to offset such harms. Some argue that generative AI boosts productivity and spares businesses the cost of hiring human artists whenever they need visual content. Others see AI as a way to overcome the skill barrier separating those who can draw from those who cannot, letting them bring their ideas to life. AI image generation also has healthcare applications, notably in enhancing Computed Tomography (CT) scans. CT scans diagnose internal injuries, diseases, and structural abnormalities by producing 3D images of bones, tissues, and organs; they are typically used in emergencies or to pinpoint cancer, and the images they produce are critical for saving lives. Recently, AI image models have been developed to reconstruct CT images from limited data, reducing patients’ radiation exposure. These models help doctors more accurately identify where to make incisions, operate, and save lives.

Of these benefits, only the medical applications, such as the lifesaving CT enhancements above, genuinely serve society. When companies use AI image generation to cut costs, they rob working artists of income and can eventually leave them unable to make a living. Moreover, AI-generated content requires so little human effort that it comes out soulless, which defeats the purpose of using it to lower the skill barrier for art. A person who truly dreams of becoming an artist should work to overcome those difficulties rather than reach for AI as a cheap substitute. Ultimately, generative AI can certainly be invaluable, but because certain people and companies will abuse such tools, access should be limited to companies in fields that use them to genuinely improve their products rather than as shortcuts, and this only becomes more true as AI grows better at fooling those who watch AI-generated content. Regulating this will undoubtedly be challenging: the government would have to decide which companies receive generative AI tools and enforce a law preventing those tools from spreading. However, such efforts would pave the way for a safer future in which fewer people fall victim to malicious uses of AI.