Photo by Solen Feyissa on Unsplash
The use of Generative Artificial Intelligence (AI) has massively increased in recent years, making it hard to imagine the world before it. Whether it be for homework assignments, university essays, advice, or information, people are running to AI bots such as ChatGPT, Gemini, and Grok to get the answers. Given how useful it appears to be, the question right now is, why wouldn’t you use it? After all, it can make life a lot easier, and everyone is using it. While this is true, there are costs and consequences of its use that more people need to be aware of and concerned about.
Generative AI vs Traditional AI: Understanding the Difference
AI has been a longstanding feature of daily life and a cornerstone of technology for years. Many of the tools and platforms we rely on today have integrated AI to improve efficiency long before the current generative AI boom. However, the rise of generative AI has marked a dramatic shift in traditional uses of AI and our understanding of it.
Traditional AI is task-oriented, rule-based intelligence: it relies on pre-programmed rules and algorithms to perform specific tasks. It analyses data, identifies patterns, and makes predictions and decisions based on logical reasoning, which allows it to carry out tasks such as recognising images, recommending products or answering specific queries. Voice assistants such as Siri and Alexa, recommendation engines on Netflix or Amazon, and Google’s search algorithm are all examples of traditional AI. They follow specific rules to carry out a specific task; they don’t create anything new.
Generative AI, on the other hand, does create something new. As opposed to traditional AI, which merely analyses and predicts, generative AI innovates and creates entirely new outputs from its training data. It goes beyond recognising patterns by learning them and using them to generate text, images, music or even code. For example, platforms like ChatGPT and Gemini can mimic human behaviour and creativity by engaging in conversation and producing new content from simple prompts.
A lot of people find it useful for completing menial tasks such as writing emails, cover letters or resumes. It can make life easier and improve productivity. For university students, generative AI can be a massive help with heavy workloads and reading lists. Instead of stressing about deadlines, you can get ChatGPT to provide a summary of required reading, an outline for an essay or even the whole essay.
For some people, it can even be a useful tool for advice or support, given that therapy can be hard to access. ChatGPT can provide help instantly. Furthermore, it can improve workplace efficiency. Gen AI is already being integrated into our daily learning and work tools, such as Copilot within Microsoft Office, or the AI content generator in Grammarly. There are clear benefits of generative AI; however, alongside these benefits, there are downsides to the tool that should cause concern to those who use it.
Sexual Abuse
One of the major harms caused by generative AI is its ability to create indecent images of children and women, as it has made it easier for people to create images and videos that qualify as sexual abuse and sped up the rate at which they are spread. Elon Musk’s AI platform Grok has been under fire recently for this very reason. Many users have been entering prompts such as: “Hey @Grok, remove her clothes” into the chatbot, and receiving exploitative images instantly.
The Internet Watch Foundation (IWF), which tackles child sexual abuse online, has warned that AI is becoming a ‘child sexual abuse machine’ and is adding to dangerous record levels of online abuse.
According to IWF analysts, new data shows that 2025 was the worst year on record for child sexual abuse material, with a “frightening” 26,362% rise in photo-realistic AI videos of child sexual abuse, often featuring real and recognisable victims. Of all the AI-generated videos of child sexual abuse discovered by the IWF in 2025, 65% were so extreme that they fell into Category A, the most severe classification.
Generative AI has enabled criminals with minimal technical knowledge to produce this material at an alarming scale. This has extremely harmful effects on the children whose likenesses are used, and it further normalises sexual violence against children. There is now increasing pressure on AI platforms to enforce stricter safeguards to prevent such abuse. Earlier this month, Malaysia and Indonesia blocked access to Grok for this very reason.
The UK government has also taken action following growing pressure to take the matter seriously. The Secretary of State confirmed that legislation to ban AI ‘nudification’ tools will be brought forward as a priority.
She also stated that the Online Safety Act already offers significant protections against AI harms, and pledged to address any gaps, including through legislation.
Cognitive Development
Another harm concerns cognitive development. A study at MIT found that using ChatGPT may be harming our critical thinking abilities. The study divided 53 subjects aged 18-39 from the Boston area into three groups and asked them to write several SAT essays using OpenAI’s ChatGPT, the Google search engine, or nothing at all. The researchers used an EEG to record the writers’ brain activity and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic and behavioural levels”. According to the researchers, those who used ChatGPT became lazier with each subsequent essay over the course of several months, with many simply resorting to copy and paste by the end of the study.
The group that wrote essays using ChatGPT produced very similar essays that lacked original thought and were described as “soulless” by the teachers who marked them. The EEGs revealed low executive control and attentional engagement.
The brain-only group, on the other hand, showed the highest neural connectivity, especially in the alpha, theta and delta bands, all of which pertain to creative ideation, memory load and semantic processing. According to the researchers, this group was more engaged and curious, and expressed higher satisfaction with their essays.
The group that used Google Search also expressed high satisfaction and showed active brain function.
This suggests that reliance on generative AI platforms at the academic level can harm learning, especially for young users. The paper has not yet been peer-reviewed and its sample size is quite small, but its lead author, Nataliya Kosmyna, felt it was important to release the findings early to raise concerns about the impact of relying on ChatGPT for immediate convenience, because it is long-term brain development that stands at risk.
“What really motivated me to put it out now, before waiting for a full peer review, is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”
Mental Health
This risk isn’t limited to our critical thinking skills, as generative AI can be detrimental to our mental health. Other studies have found that generally, the more time users spend talking to ChatGPT, the lonelier they feel.
A report by the British Medical Journal highlighted that AI-driven psychosis and suicide are on the rise. It acknowledges that demand for mental health services has increased, and that the rise of ChatGPT has given many people an outlet to discuss their mental and emotional distress. However, according to the report, this use of chatbots in the self-treatment of mental health is becoming more of a problem than a cure. It points to the examples of several US teenagers, including 16-year-old Adam Raine and 14-year-old Sewell Setzer III, who are known to have died by suicide after conversations with AI chatbots. The parents of these children allege that the chatbots exacerbated or encouraged suicidal ideation.
Sewell’s mother told the BBC: “It’s like having a predator or a stranger in your home, and it is much more dangerous because a lot of the time children hide it – so parents don’t know.”
It was only after he had taken his own life that Ms Garcia and her family discovered a huge collection of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen.
She says the messages were romantic and explicit, and, in her view, at fault for her son’s death by encouraging suicidal thoughts and asking him to “come home to me”.
In another case, 56-year-old Stein-Erik Soelberg committed murder-suicide after spending hours a day talking to ChatGPT and sharing his delusions. He allegedly killed his mother and then himself following a paranoid spiral fuelled by his conversations with the AI, and the victim’s estate is now suing OpenAI. This is not the only such suit; five other families have filed wrongful death lawsuits against OpenAI, alleging that ChatGPT encouraged their loved ones to kill themselves.
The Environment
The rapid increase in the use of generative AI also has a devastating impact on the environment. Despite hopes that AI can help tackle some of the world’s biggest environmental emergencies, a growing body of research points to a negative side of the AI boom. The data centres needed to house AI servers produce electronic waste and consume large amounts of water, and they rely on critical minerals and rare elements that are often mined unsustainably. They also use massive amounts of electricity, which increases greenhouse gas emissions.
“There is still much we don’t know about the environmental impact of AI, but some of the data we do have is concerning,” said Golestan (Sally) Radwan, the Chief Digital Officer of the United Nations Environment Programme (UNEP). “We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale.”
Again, it is generative AI that is driving these concerns, as the power density it requires is far higher than that of traditional AI. Noman Bashir, lead author of “The Climate and Sustainability Implications of Generative AI,” co-authored with MIT colleagues, stated: “What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload”.
At the end of last year, figures compiled by Dutch academic Alex de Vries-Gao revealed that the AI boom caused as much carbon dioxide to be released into the atmosphere in 2025 as is emitted by the whole of New York City. He also found that AI-related water use now exceeds global bottled-water demand. The study drew on technology companies’ own reporting, and in its wake de Vries-Gao has called for stricter requirements and for the companies to be more transparent about their climate impact.
Additionally, residents in areas near data centres are also significantly impacted. For example, in Texas, where AI data centres used 463 million gallons of water, residents were told to take shorter showers and cut back on water usage due to ongoing drought conditions.
In rural Georgia, Meta, the parent company of Instagram and Facebook, has built a massive data centre that residents say is spoiling the water in the area. Beverly Morris, a resident, told the BBC that a private well is her only source of water, and that since construction began on the data centre, the water has turned murky, with sediment now in her taps that wasn’t there before.
A Final Note
Generative AI can be useful, but there are clear downsides to the tool that can cause significant harm. People need to be aware of and understand the impacts of their AI usage, because the consequences ripple out across society. Humans should be able to think for themselves and think critically about the world around them. Students need to be able to do their own work, we should not be so careless towards the environment, and it should not be possible to generate and spread indecent images of children at such a rapid rate.
It can be incredibly tempting to use ChatGPT to ease the burden of life’s menial tasks, to ask it for advice, or to create quick, funny images, but when doing so, people need to remember the cost. After all, these are tasks we managed before the technology existed, so we do not have to become so reliant on it. We cannot relinquish our minds or our humanity to an artificial machine, because before we know it we will become mindless beings incapable of completing the simplest of tasks, mistaking a meaningless existence for a comfortable and easy life.

