Author: Shani Adesanya

  • The Hidden Costs of Generative AI: Why You Should Rethink Your Usage



    The use of Generative Artificial Intelligence (AI) has massively increased in recent years, making it hard to imagine the world before it. Whether it be for homework assignments, university essays, advice, or information, people are running to AI bots such as ChatGPT, Gemini, and Grok to get the answers. Given how useful it appears to be, the question right now is, why wouldn’t you use it? After all, it can make life a lot easier, and everyone is using it. While this is true, there are costs and consequences of its use that more people need to be aware of and concerned about.

    Generative AI vs Traditional AI: Understanding the Difference 

    AI has been a longstanding feature of daily life and a cornerstone of technology for years. Many of the tools and platforms we rely on today have integrated AI to improve efficiency long before the current generative AI boom. However, the rise of generative AI has marked a dramatic shift in traditional uses of AI and our understanding of it. 

    Traditional AI is task-oriented intelligence: rule-based AI that relies on pre-programmed rules and algorithms to perform specific tasks. It analyses data, identifies patterns, makes predictions and reaches decisions based on logical reasoning, which allows it to carry out tasks such as recognising images, recommending products or answering specific queries. Voice assistants such as Siri and Alexa, recommendation engines on Netflix or Amazon, and Google’s search algorithm are all examples of traditional AI. They follow specific rules to carry out a specific task; they don’t create anything new.

    Generative AI, on the other hand, does create something new. As opposed to traditional AI, which merely analyses and predicts, generative AI innovates and creates entirely new outputs from its training data. It goes beyond recognising patterns by learning them and using them to generate text, images, music or even code. For example, platforms like ChatGPT and Gemini can mimic human behaviour and creativity by engaging in conversation and producing new content from simple prompts. 

    A lot of people find it useful for completing menial tasks such as writing emails, cover letters or resumes. It can make life easier and improve productivity. For university students, generative AI can be a massive help with heavy workloads and reading lists. Instead of stressing about deadlines, you can get ChatGPT to provide a summary of required reading, an outline for an essay or even the whole essay. 

    For some people, it can even be a useful tool for advice or support, given that therapy can be hard to access. ChatGPT can provide help instantly. Furthermore, it can improve workplace efficiency. Gen AI is already being integrated into our daily learning and work tools, such as Copilot within Microsoft Office, or the AI content generator in Grammarly. There are clear benefits of generative AI; however, alongside these benefits, there are downsides to the tool that should cause concern to those who use it.

    Sexual Abuse

    One of the major harms caused by generative AI is its ability to create indecent images of children and women: it has made it easier to produce images and videos that qualify as sexual abuse, and has sped up the rate at which they spread. Elon Musk’s AI platform Grok has been under fire recently for this very reason. Many users have been entering prompts such as “Hey @Grok, remove her clothes” into the chatbot, and receiving exploitative images instantly.

    The Internet Watch Foundation (IWF), which tackles child sexual abuse online, has warned that AI is becoming a ‘child sexual abuse machine’ and adding to dangerous record levels of online abuse.

    According to IWF analysts, new data shows that 2025 was the worst year on record for child sexual abuse material, and there has been a “frightening” 26,362% rise in photo-realistic AI videos of child sexual abuse, often including real and recognisable victims. Of all the AI-generated videos of child sexual abuse discovered by the IWF in 2025, 65% were so extreme that they were categorised as Category A.

    Generative AI has enabled this material to be made by criminals with minimal technical knowledge at an alarming scale. This has extremely harmful effects on children whose likeness is used, as well as further normalising sexual violence against children. There is now increasing pressure on these AI platforms to enforce stricter regulations to prevent such abuse from occurring. Earlier this month, Malaysia and Indonesia blocked access to Elon Musk’s AI chatbot Grok for this very reason.

    The UK government has also taken action following a long week of growing pressure to take the matter seriously. The Secretary of State confirmed that legislation to ban AI ‘nudification’ tools will be brought forward as a priority.

    She also stated that the Online Safety Act already offers significant protections against AI harms, and pledged to address any gaps, including through legislation.

    Cognitive Development

    Another harm concerns cognitive development. A study at MIT found that using ChatGPT may be harming our critical thinking abilities. The study divided 53 subjects aged 18-39 years old from the Boston area into three groups and asked them to write several SAT essays using, respectively, OpenAI’s ChatGPT, the Google search engine, and nothing at all. The researchers used an EEG to record the writers’ brain activity and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic and behavioural levels”. According to them, those who used ChatGPT became lazier with each subsequent essay over the course of several months, with many simply resorting to copy and paste by the end of the study.

    The group that wrote essays using ChatGPT all produced very similar essays, which the teachers who marked them described as “soulless” and lacking original thought. The EEGs revealed low executive control and attentional engagement.

    The brain-only group, on the other hand, showed the highest neural connectivity, especially in the alpha, theta and delta bands, all of which pertain to creative ideation, memory load and semantic processing. According to researchers, this group was more engaged and curious and expressed higher satisfaction with their essays.

    The group which used Google Search also expressed high satisfaction and active brain function.

    This suggests that reliance on generative AI platforms at the academic level can harm learning, especially for young users. The paper has not yet been peer reviewed, and its sample size is quite small, but its main author, Nataliya Kosmyna, felt it was important to release the findings in order to raise concerns about the impact of relying on ChatGPT for immediate convenience, as it is long-term brain development that stands at risk.

    “What really motivated me to put it out now, before waiting for a full peer review, is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”

    Mental Health 

    This risk isn’t limited to our critical thinking skills, as generative AI can be detrimental to our mental health. Other studies have found that generally, the more time users spend talking to ChatGPT, the lonelier they feel. 

    A report by the British Medical Journal highlighted that AI-driven psychosis and suicide are on the rise. It acknowledges the fact that demand for mental health services has increased, and the rise of ChatGPT has provided many with an outlet to discuss their mental and emotional distress. However, according to the report, this use of chatbots in the self-treatment of mental health is becoming more of a problem than a cure. It points to the examples of several US teenagers, including 16-year-old Adam Raine and 14-year-old Sewell Setzer III, who are known to have died by suicide after conversations with AI chatbots. The parents of these children have alleged that AI chatbots exacerbated or encouraged suicidal ideation.

    Sewell’s mother, Megan Garcia, told the BBC: “It’s like having a predator or a stranger in your home, and it is much more dangerous because a lot of the time children hide it – so parents don’t know.”

    It was only after he had taken his own life that Ms Garcia and her family discovered a huge collection of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen.

    She says the messages were romantic and explicit, and, in her view, at fault for her son’s death by encouraging suicidal thoughts and asking him to “come home to me”.

    In another case, Stein-Erik Soelberg committed murder-suicide after spending hours a day talking to ChatGPT and sharing his delusions. The 56-year-old allegedly killed his mother and then himself following a paranoid spiral fuelled by his conversations with the AI, and the victim’s estate is now suing OpenAI. This is not the only suit that has been filed against OpenAI; five other families have filed wrongful death lawsuits alleging that ChatGPT encouraged their loved ones to kill themselves.

    The Environment 

    The rapid increase in the use of generative AI also has a devastating impact on the environment. Despite hopes that AI can help tackle some of the world’s biggest environmental emergencies, there is a negative side to the AI boom, according to a growing body of research. The data centres needed to house AI servers produce electronic waste and consume large amounts of water. They also rely on critical minerals and rare elements, which are often mined unsustainably, and they use massive amounts of electricity, which increases the emission of greenhouse gases.

    “There is still much we don’t know about the environmental impact of AI, but some of the data we do have is concerning,” said Golestan (Sally) Radwan, the Chief Digital Officer of the United Nations Environment Programme (UNEP). “We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale.”  

    Again, it is generative AI that is driving these concerns, as the power density it requires is far greater than that of traditional AI. Noman Bashir, lead author of “The Climate and Sustainability Implications of Generative AI,” co-authored with MIT colleagues, stated: “What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload”.

    At the end of last year, figures compiled by Dutch academic Alex de Vries-Gao revealed that the AI boom caused as much carbon dioxide to be released into the atmosphere in 2025 as is emitted by the whole of New York City. He also found that AI-related water use now exceeds global bottled water demand. The study used technology companies’ own reporting, and following it, de Vries-Gao has called for stricter requirements and for the companies to be more transparent about their climate impact.

    Additionally, residents in areas near data centres are also significantly impacted. For example, in Texas, where AI data centres used 463 million gallons of water, residents were told to take shorter showers and cut back on water usage due to ongoing drought conditions.

    In rural Georgia, Meta, Instagram and Facebook’s parent company, has built a massive data centre that residents say is spoiling the water in the area. Beverly Morris, a resident, told the BBC that a private well is her only source of water, and that since construction began on the data centre, the water has turned murky, with sediment now in her taps that wasn’t there before.

    A Final Note

    Generative AI can be useful, but there are clear downsides to the tool that can cause significant harm. People need to be aware of and understand the impacts of their AI usage, because these consequences fall on society as a whole. Humans should be able to think for themselves and think critically about the world around them. Students need to be able to do their own work, we should not be so careless towards the environment, and it should not be possible for indecent images of children to be generated online and spread at such a rapid rate.

    It can be incredibly tempting to use ChatGPT to ease the burden of life’s menial tasks, to ask it for advice, or to create quick, funny images, but when doing so, people need to remember the cost. After all, these are tasks we managed before the technology existed, so we do not have to become so reliant on it. We cannot relinquish our minds or our humanity to an artificial machine, because before we know it, we will become mindless beings incapable of completing the simplest of tasks, mistaking a meaningless existence for a comfortable and easy life.

  • What Orwell’s 1984 Teaches Us about the Dangers of the Trump Administration’s Lies


    “The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command.” —George Orwell, 1984

    This quote from George Orwell’s 1984 has been doing the rounds across social media in light of the actions taken by the Trump administration following the killings by Immigration and Customs Enforcement (ICE) agents in Minneapolis. 1984 is one of the quintessential works within the dystopian genre, as it expertly depicts propaganda, extreme surveillance, totalitarianism, and the erosion of truth. The book follows Winston Smith, a low-ranking member of ‘the Party’, who is frustrated by the pervasive eyes of the Party and its ruler, Big Brother. In the book, Orwell depicts a hypersurveillance state, where truth is whatever the Party or Big Brother says it is.

    ICE in Minneapolis

    Following the Trump administration’s response to the murder of Alex Pretti, more comparisons are being drawn with Orwell’s novel.

    Alex Pretti, a 37-year-old American intensive care nurse for the United States Department of Veterans Affairs, was shot multiple times and killed in broad daylight by an ICE agent in Minneapolis. It was the second ICE killing in Minneapolis, coming just weeks after Renee Good, a 37-year-old American woman, was also shot and killed by an ICE agent, Jonathan Ross. In both incidents, ICE agents acted out of control and took fatal measures that were not necessary.

    The Ministry of Truth

    Following Renee’s killing, a statement by the White House deputy chief of staff, Stephen Miller, was reiterated by the Department of Homeland Security account on X. In the statement, Miller says, “To all ICE officers: You have federal immunity in the conduct of your duties. Anybody who lays a hand on you, tries to stop you or obstructs you is committing a felony. You have immunity to perform your duties, and no one—no city official, no state official, no illegal alien, no leftist agitator or domestic insurrectionist—can prevent you from fulfilling your legal obligations and duties”.

    The Trump administration was also quick to label her a ‘domestic terrorist’, with the president taking to Truth Social to claim that she was ‘very disorderly, obstructing and resisting’ and then ‘violently, willfully, viciously ran over the ICE agent who seems to have shot her in self-defence’. Video footage from the incident, however, shows that this was not the case; in fact, the last thing Renee said was “That’s fine, dude. I’m not mad at you”. Renee Good presented no threat, and neither did Alex Pretti.

    Contrary to the defamatory claims made by the Trump administration, Pretti was holding his phone, not a gun, before he was beaten down and pepper sprayed. Alex Pretti was defending a woman who was being manhandled by ICE agents. There are several videos from witnesses that multiple, credible news sources have analysed and verified, which do not support claims made by the administration; in fact, they leave no room for deniability or a different version of events. 

    There are stark parallels between the actions taken by Big Brother’s ‘Ministry of Truth’ and the Trump administration’s response to the ICE killings. In the novel, the Ministry of Truth concerns itself with lies; it is a deliberate contradiction. It is responsible for the propaganda of the Party through rewriting history and controlling the news media, entertainment, education, and the fine arts.

    Trump is a known liar, but what we are seeing here is the erasure of truth at a systemic level. Much like the Ministry of Truth, the entire administration is promoting the same lies that brandish the victims of these shootings as ‘domestic terrorists’ and thus justify the actions taken by these ICE agents. Vice President JD Vance reposted a statement by Stephen Miller claiming that Pretti was ‘an assassin’ who ‘tried to murder federal agents’. 

    What’s worse is that we live in the digital age: governments and law enforcement have great means of surveillance at their disposal, but citizens can also use their phones to surveil them when things like this happen. Instead of waiting for bodycam footage from the perpetrator, victims and witnesses can produce their own footage. There is an abundance of credible evidence from the people who witnessed Alex Pretti’s execution that contradicts the version of events the administration has concocted. This strategy of plausible deniability is merely an attempt by ICE as an agency to escape accountability and ensure it can continue carrying out Trump’s mission.

    Arendt in Orwell and Reality

    Hannah Arendt can help us understand this tactic of lying. She talks about facts being fragile because they are contingent, which means that there is always a possibility for alternative realities. For example, in her book ‘Between Past and Future’, she states: “Since everything that has actually happened in the realm of human affairs could just as well have been otherwise, the possibilities for lying are boundless, and this boundlessness makes for self-defeat”.

    With regard to the ICE killings, the administration is able to lie because many people can conceive of an alternative story in which the ICE agents were acting in self-defence, in which Alex Pretti did pull a gun, in which Renee Good was a hired agitator, part of a wider left-wing conspiracy tasked with assaulting law enforcement. Though the evidence shows that this was not the case, it still could have been, and that very possibility enables this alternative reality to take off.

    We see it in 1984 when the Ministry of Truth constantly contradicts itself through altering historical records, changing wartime alliances from Eastasia to Eurasia, fabricating the existence of “Comrade Ogilvy,” and revising economic forecasts. This is all possible due to the contingency of facts. 

    In 1984, Orwell takes it further by eroding what Arendt labels rational truth. Rational truths pertain to mathematical, scientific or philosophical truths that are actively discovered and independent of opinion. These truths are harder to erode because no alternative can be imagined. In 1984, Big Brother coerces the citizens of Oceania into believing the mathematical falsehood that 2+2=5.

    Now, the administration has not gone to such extremes yet, but it is not hard to imagine a world in which it does, because the scale at which it is already twisting the truth puts us on a very slippery slope. The administration cannot be allowed to lie about these killings; ICE agents and the agency itself must face accountability.