the AI problem

i work in tech and every time someone mentions AI, i want to take a shot of vodka or a drag of recreational drugs. i would be long gone if i actually did this. but i work remotely, so i can get up and walk around to feel less frustrated. at least i get some movement in. for the past couple of years, anything and everything is called AI. as a writer, and a technical writer at my day job, i’m annoyed by the lack of specificity. it’s okay to call it a chat bot, or an automation using some Python scripts, or just a feature that summarizes batches of text. that’s straightforward, easy to grasp, and not misleading. this AI mirage created with words irks the part of my brain that wants clarity.

however, my problem is not just with the words associated with AI. it’s everything.

the user problem

i’ve seen a few common ways in which it (typically ChatGPT) is used: as a search engine, creating summaries, drafting, asking for legal and life advice, as a companion to talk with, coding, generating images, videos, and audio, study help, etc. none of these uses are valid.

using it as a search engine leaves out a lot of context on who is providing that information. and that matters. i understand why one may not want to go through 5 different articles to find information on something trivial, but using it for every search creates the risk of falling for false information when we don’t check the sources. none of us are immune to misinformation, as much as we’d like to think we are. chat bots are also known to “hallucinate” and give you outdated information. relying on this is detrimental to you, as a user.

using chat bots to create summaries of lengthier documents/essays/articles and books is again problematic. sure, who hasn’t read a summary of a movie or a book they couldn’t be bothered to finish. but when we rely on summaries as our main source of information, we miss out on everything that particular piece has to offer. we miss the details or misinterpret them, and we deny ourselves the chance to learn something new that could give us a different perspective or inspiration. we miss out on all these exciting things.

and as far as generating images is concerned, why do we need AI to generate fake human pictures and videos? we can well imagine the wrong ways in which it can and will be used. how can this possibly be regulated? i can think of better uses: generating images for scientific uses, simulations in engineering fields. sure, that makes sense. these are actual use cases. but to generate memes or “art” is wasteful and not even fun.

when i think of using chat bots/AI tools to create work email drafts, or to take notes, or to study, at first, such actions seem justifiable. a large part of the education system prioritizes grades and making money over learning, and so employees will inevitably try to make their work easier. why would they use their time and energy on menial tasks that make the days harder? with AI tools and automation, we can do basic and menial tasks faster and more accurately. but these are only the details. the actual problem lies underneath.

the capitalism problem

i don’t think that blaming individuals is going to change anything. the problem is with the capitalist system itself, which alienates a lot of us from our work and from our lives, pushes us into isolation, and breeds unnecessary competition. we could find joy in our work if we worked less. if we didn’t have to worry about our continued sustenance. in fact, it would be more efficient. it’s not a pipe dream; there’s plenty of evidence that our current way of working and living is stressful and harms both us and the environment.

when AI is pushed onto us in every aspect of our lives, we must also ask why this is so. you must have heard the phrase, “if you are not paying for the product, you are the product”. i believe it is the same case here. when you use these tools, you are testing them, training the LLMs, providing them data. one may think that this benefits us too, but the benefits do not justify the cost.

from the medical field to manufacturing to research, AI tech’s computational and analytical capabilities can be hugely beneficial. but the way it is implemented leaves a lot to be desired. it’s only a glorified chat bot with access to media made by humans. and the access it’s given is itself questionable. companies like Meta, Google, and Microsoft can read and use our data to train their LLMs and call it “policy” or “terms of use”.

for example, i recently noticed that instagram translates reels by altering the speaker’s face and mouth movements to match the translated language. do the creators know this, and are they allowing it to happen? what happens to intellectual property and the rights to your own video? it’s not a feature; it’s creepy, ugly, and unethical. this did not come out of nowhere. many teams at instagram would have had to be involved in building and releasing this feature. there would have been meetings. do those employees know how creepy, ugly, and unethical this is? it’s not me versus them; we are on the same side, as employees working in tech. though i wonder what they think while they are working on this.

the feeding problem

AI and the LLMs that run it work with what humans have created. the art, research, language, media, everything is fed to it and it gives us answers according to what it has access to. so, if the quality of the input is bad, the output follows.

what i mean is this: let’s consider the medical field. let’s say we give it all the information we have now. we know that the data is biased against women and people of color. aren’t we propagating the same old issues? if we are going to use AI, we need to feed it better-quality information. this means that in order to implement AI tech here, the prejudices and biases that humans have must be addressed first. the same goes for other fields.

the ethical and sustainability problems

from where i see it, conversations on AI ethics and sustainability cannot really be separated. to illustrate, have you read the Anatomy of an AI System? this 2018 paper maps the journey of Amazon’s Echo device from birth to death. the supply chain required to produce these devices is shown to be terribly complex. we’re aware of the brutal mining processes, the exploited human labor, and the pollution that run our modern tech lifestyle. it is a stark reminder of how far meaningful regulation lags behind the advancement of AI features.

when we picture AI ethics, we usually imagine robot laws, or whether or not a machine can be held responsible, or about the abundant security and privacy concerns surrounding the unimaginable amount of data we generate today.

but it would be good to remember that AI ethics must also include how AI is built, along with how it is used. even before the current AI boom, we already had AI in the form of chat bots and smart devices. manufacturing all our devices comes at a great cost: human life and earth’s resources. training data sets, annotating images, and moderating content are carried out by human workers who are often paid in misery and a dollar. all the processing power requires more servers, which require water to operate at optimal temperatures. already, residents in a few US cities are facing water shortages and poor water quality.

i’m constantly reminded of the short story The Ones Who Walk Away from Omelas by Ursula K. Le Guin. it is the essence of our reality in the form of a short, fictional story. if our convenience depends on someone else’s struggles, what should we do?

these concerns are even more relevant now. every prompt you send, every response you receive, every computation a model runs takes up far more processing power than your average search. when these features are forcibly embedded into our daily digital lives, just how much energy is consumed in a day? a month? if the current trajectory of AI continues, what price must we pay for further advancement? it is true that our collective Internet usage also takes up a lot of power. hosting this site, surfing the net, streaming music, everything takes electricity and water. so why blame only AI? as mentioned previously, most of the tech we have today is built on earth’s resources and cheap human labor. the problem already exists. shoving AI into everything and operating ever more data centers drastically aggravates it. we could be working towards decolonizing tech and making it more sustainable and enjoyable. AI is taking us in the opposite direction.
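to make the scale concrete, here is a minimal back-of-envelope sketch in Python. the per-query figures (~0.3 Wh for a classic web search, ~3 Wh for an LLM prompt) are rough public estimates, and the daily query volume is a hypothetical number chosen for illustration, not measured data:

```python
# back-of-envelope: daily energy for LLM prompts vs. classic search.
# all three figures below are assumptions for illustration only.
SEARCH_WH = 0.3                   # assumed Wh per classic web search
LLM_WH = 3.0                      # assumed Wh per LLM prompt
QUERIES_PER_DAY = 1_000_000_000   # hypothetical daily query volume

def daily_mwh(wh_per_query: float, queries: int) -> float:
    """total daily energy in megawatt-hours (1 MWh = 1,000,000 Wh)."""
    return wh_per_query * queries / 1_000_000

search_total = daily_mwh(SEARCH_WH, QUERIES_PER_DAY)   # 300 MWh/day
llm_total = daily_mwh(LLM_WH, QUERIES_PER_DAY)         # 3,000 MWh/day

print(f"classic search: {search_total:,.0f} MWh/day")
print(f"LLM prompts:    {llm_total:,.0f} MWh/day")
print(f"ratio:          {llm_total / search_total:.0f}x")
```

under these assumed numbers, routing the same volume of queries through an LLM instead of a search engine multiplies the daily energy bill tenfold; swap in your own estimates to see how the gap scales.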

one other point on ethical use is intellectual property. AI is used to generate audio, video, and text in the style of a particular singer, director, or writer. being inspired by another artist and imitating their work is different from stealing it to generate slop. where does it end? alarm bells are already ringing as AI is used to create fake videos of politicians, fake news items, and pornography.

the future problem

let’s take a minute to think about future (and current) generations brought up in such an environment. if we don’t make it clear how AI should and shouldn’t be used, the problems we see today will only get worse. it is high time for comprehensive courses on using the Internet responsibly.

unfortunately, i haven’t found much that inspires optimism, but we can only move up from here. i want a better tech space where we make informed and inclusive decisions. i cannot deny that AI and tech have great practical uses. that’s why i want them to benefit all of us.
