Artificial intelligence. The buzzword of the year. So mainstream, so all-encompassing, and so vast that it almost seems strange not to hear about it in one context or another. And why shouldn’t it be everywhere? AI has been making waves in the tech sector ever since the launch of ChatGPT late last year, and it’s safe to say the world has not looked back since.
As much as this technology opens up immense potential and possibility, it is important to understand that, like any technological revolution, it has its pitfalls. Understanding those pitfalls, so that we can use these new AI models in ways that aren’t detrimental to society, has become crucial. The problem isn’t AI gaining consciousness and turning against humanity, or replacing humans entirely, but rather the social, political, and ethical nuances of such an innovation.
To dive into the issues, let’s first understand how new AI models like ChatGPT and Bard work. Simply put, these large language models (LLMs) are built with machine learning, a branch of AI in which systems learn from data rather than from explicitly programmed rules. The models learn to perform a task by identifying statistical patterns in massive sets of human-written text. Think of it as studying for an exam: you learn from your teachers and practice questions from old exam papers, which then enables you to perform on a brand-new exam without your teacher next to you.
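To make “learning patterns from past examples” concrete, here is a deliberately tiny sketch: a next-word predictor that only counts which word followed which in its training text. Real LLMs use neural networks over billions of documents, not frequency tables, so treat this purely as an illustration of the idea, with a made-up corpus.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus standing in for the massive
# text datasets real LLMs are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- the simplest possible version
# of learning patterns from past examples.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" twice, others once
```

Just as a student can answer a brand-new exam question, this predictor can complete a phrase it has never seen in full, because it has internalized which words tend to follow which.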
However, some fundamental issues arise with this approach. Although many people believe AI and robots to be objective, the data they are trained on is hardly so. Woven into these large datasets, and into the past examples of how we make decisions, are undertones of human bias, because most of the decisions we make as humans are shaped by our values, beliefs, and moral compasses.
So, what happens when we tell our AI to learn patterns from those biased decisions and apply them to newer ones?
Decisions that we now believe are neutral might actually be equally or even more discriminatory and exclusionary, quietly sorting people along the same old social lines.
And as the same cycle of bias and unfair sorting loops over and over, the inequalities simply become more widespread and systemized, embedded into the very frameworks designed to help us.
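The loop described above can be sketched in a few lines. The scenario and numbers below are entirely hypothetical: a naive model “learns” from skewed historical hiring decisions and, by doing nothing more than reproducing past patterns, carries the bias forward into new decisions.

```python
from collections import Counter

# Hypothetical "historical decisions": past outcomes in which group B
# was approved far less often, for reasons unrelated to merit.
history = ([("A", "hire")] * 80 + [("A", "reject")] * 20
         + [("B", "hire")] * 20 + [("B", "reject")] * 80)

# A naive model that "learns patterns" by tallying past outcomes
# per group.
outcomes = {}
for group, decision in history:
    outcomes.setdefault(group, Counter())[decision] += 1

def predict(group):
    """Predict the most common historical outcome for this group."""
    return outcomes[group].most_common(1)[0][0]

# The model faithfully reproduces the historical bias:
print(predict("A"))  # hire
print(predict("B"))  # reject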
And if that weren’t enough, AI hallucinates… sometimes. More accurately, it can mislead or make up information. We must remember that these models are not perfect: they generate answers through predictive modeling over patterns learned from massive datasets, not by consulting a verified store of facts, which can confound their answers, to say the least.
Take, for example, a simple query asking for a summary of a scientific debate. Imagine it gives you the names of people who argue for and against, some of their quotes, and extra context. Now zoom out: it is entirely possible that some of that information was made up to make the text look complete. The model doesn’t do this on purpose. Fragments from unrelated sources in its training data may have been blended into the answer, or aspects of two separate topics may have been mixed together simply because, statistically, those topics tend to appear side by side.
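The blending effect can be shown with the same toy next-word idea. Everything here is invented for illustration: two short “source texts” are learned together, and generating word by word can stitch them into a fluent sentence that appears in neither source.

```python
import random
from collections import defaultdict

random.seed(2)  # seed only for reproducibility of this toy run

# Two separate (made-up) "source texts" learned together.
source_a = "dr smith argues the theory is sound".split()
source_b = "dr jones argues the data is flawed".split()

# Record every word's possible successors across BOTH sources.
table = defaultdict(list)
for text in (source_a, source_b):
    for cur, nxt in zip(text, text[1:]):
        table[cur].append(nxt)

# Generate by repeatedly picking a recorded successor. Because both
# sources pass through shared words ("argues", "the", "is"), the walk
# can cross from one source into the other mid-sentence.
word, out = "dr", ["dr"]
while word in table:
    word = random.choice(table[word])
    out.append(word)
print(" ".join(out))
# Can produce sentences like "dr smith argues the data is sound" --
# fluent, confident, and attributed to no one who ever said it.
```

This is a caricature, but the mechanism is the same in spirit: the model has no notion of which fragments belong together, only of which fragments tend to follow each other.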
But the bigger problem right now is the overdependence that arises when users take such information for granted because they don’t know that AI hallucinates. Not understanding how or where your information comes from not only sabotages your own knowledge but is also deeply problematic when it comes to sourcing and crediting the creators of that data.
Let’s consider Midjourney, a popular generative art platform that takes in requests from users and transforms them into some version of art. The issue? Copyright. Just like ChatGPT, Midjourney scrapes the web to gain insights into various art forms and generates visuals based on them. This has angered artists who published their work online for the public and later noticed their pieces being used as training data without their explicit consent. This not only undermines the credibility of what these systems produce but also directly impacts the livelihoods of millions of artists globally.
Coming to a final stop, let’s talk about the digital divide. The image of an entirely technology-saturated world seems real, but not to everyone. Much of the developing world has yet to encounter even the basics of new technology, such as the recommender systems that suggest content online, let alone gush over the rise of generative AI. A major part of the world still lacks the infrastructure to distribute such technology to its citizens, while others fall behind on the education needed to use it.
According to almost any predictive report you can find, our world is on its way to becoming more digital, more online, and more technologically infused by the minute. Innovation is getting a do-over as we witness a new age of technology. But with the onset of challenges, both new and old, how do we make this revolution an equitable and fair one?
It is difficult to say what the right move is here. Would it be enough to rally together and voice enough dissent that these opaque tech companies start taking more responsibility? Or is it more important to push the political systems in place to hold the technology sector accountable with up-to-date policies? While our lawmakers play catch-up with a field that doesn’t rest, and tech companies rush to outdo their competition at our expense, one thing we can strive to improve is education.
And it’s not about taking classes to become developers ourselves, but about simple awareness: the kind of tech you interact with, the kind of data you give away, the terms you accept when you click ‘allow all cookies.’ It isn’t enough anymore to take the services these companies provide for granted. Learning about your presence online, and where you fall in the cycle of new technology, is one of the only ways we still retain control of data that was never anyone else’s to use.
Ultimately, there is no clear path. It is vital that we embrace the world of possibilities this new AI revolution is about to bring, but just as important, if not more so, that we don’t overlook its shortcomings.
The problems discussed here are not entirely new, but they may well be exacerbated by the wave of technology we are bound to witness. We are at a turning point, with a generation more skilled, more dynamic, and more aware than any before it. One that can spearhead one of society’s most significant technological changes with ethical leadership. One that can make its mark on the future in ways we can only begin to imagine.
Edited by: Patricia Beschea