Ethical Issues in Artificial Intelligence
In Sci-Fi movies, Artificial Intelligence is depicted as a futuristic human creation that’ll go rogue and bring about a dystopian society – even annihilating the human race.
The advancement of artificial intelligence to perform specific human tasks used to sound like a possibility in the distant future, but it’s already here.
Presently, some enterprises operate with little or no human intervention and instead use programmed robots to perform tasks. In fact, forecasts show that 30% of manufacturers will use AI to increase product development efficiency by 2027.
Speculation has it that by 2030, AI will make certain jobs obsolete. Unfortunately, this speculation, and the reality behind it, has raised ethical concerns about how heavily humans should invest in artificial intelligence and what effect it will have on humanity.
So, in this article, I’ll discuss some ethical issues artificial intelligence poses.
The Impact of AI on Jobs: What’s next when Jobs Become Extinct?
Automation has already displaced more than 5 million workers from manufacturing jobs in the US alone, and AI-driven automation could potentially replace many more.
One of the concerns of AI critics is how this automation will increase unemployment and what this means for the economy.
According to Boden in her book Artificial Intelligence: A Very Short Introduction, "the novelty in the perceived threat from AI, which differs from earlier similar fears about ICT in general or other automation technologies, is that the jobs currently under apparent threat are better-paying ones: AI may increasingly imperil the income of middle-class professionals."
This means that AI may further reduce an already dwindling middle class by taking over their jobs.
There’s also the prominent issue of how automation will affect people socially and psychologically: work provides not only an income but also an avenue for social interaction, which has a net positive effect on mental health.
Trust Issues: What Level of Trust Should We Give to Unempathetic Machines?
Humans possess empathy, which influences our decision-making process, a trait that machines lack.
We can agree that humans are prone to mistakes and biases when making decisions, but artificial intelligence also has its errors.
In making a decision, a human is likely to second-guess themselves to ensure the accuracy of their decision. Unfortunately, machines still lack this critical part of human nature. Instead, they act exactly as they have been programmed.
This absence of self-reflection poses an ethical challenge because programmers have their own biases and work only with the information available to them.
Earlier versions of facial recognition software used by law enforcement agencies illustrate this moral issue: the algorithms misidentified and profiled dark-skinned people far more often than white people.
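To see how such bias can creep in without anyone intending it, consider a toy sketch below. All the numbers, "groups", and the one-feature "model" are invented purely for illustration, not drawn from any real system: a simple classifier is fit to maximize overall accuracy on data where one group is heavily over-represented, and as a result it performs well on the majority group while largely failing the minority group.

```python
import random

random.seed(42)

# Hypothetical, synthetic training data: group A is heavily
# over-represented (500 samples) and group B scarce (10 samples).
# Each sample is a single made-up score; the means are arbitrary.
train = [(random.gauss(0.0, 1.0), "A") for _ in range(500)] + \
        [(random.gauss(1.5, 1.0), "B") for _ in range(10)]

def fit_threshold(data):
    """Pick the cutoff that maximizes OVERALL training accuracy.

    Scores above the threshold are labelled "B". Because group A
    dominates the data, errors on A dominate the objective.
    """
    candidates = sorted(x for x, _ in data)
    def acc(t):
        return sum((x > t) == (y == "B") for x, y in data) / len(data)
    return max(candidates, key=acc)

t = fit_threshold(train)

def group_accuracy(samples, label):
    # Fraction of this group's samples the model labels correctly.
    hits = sum((x > t) == (label == "B") for x in samples)
    return hits / len(samples)

# Balanced test sets reveal the disparity the skewed training hid.
test_a = [random.gauss(0.0, 1.0) for _ in range(2000)]
test_b = [random.gauss(1.5, 1.0) for _ in range(2000)]

print(f"group A accuracy: {group_accuracy(test_a, 'A'):.2f}")
print(f"group B accuracy: {group_accuracy(test_b, 'B'):.2f}")
```

The model is never told to treat group B worse; the disparity emerges simply because getting the plentiful group A right contributes far more to the accuracy score than getting group B right, which mirrors how under-representative training data can skew real systems.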
As artificial intelligence becomes more common in the justice system, law enforcement, security checks, and even weapons systems, a critical question lingers: how much error can a machine make before humans stop trusting its decisions?
Can we assume that AI won’t profile a person unjustly or, even worse, wrongly set off a weapons system?
Privacy, Biases, and Safety
Is your phone listening to you? Data management and handling is another ethical concern associated with Artificial Intelligence. The problems range from AI’s data sourcing to how this data is used and for what purposes.
Because most organizations’ data collection and usage policies are opaque, questions about where all this collected data goes and how it is used keep growing.
Of course, these ethical considerations are valid because, with enough data, behavior becomes predictable. Who is to say such information is handled ethically and not exploited for profit?
There’s a need for legislation that justifies the collection of specific data and also protects the user’s privacy and freedom without an algorithm directing or controlling their decision-making process.
Moreover, as self-operating machines and autonomous vehicles become popular, the risk of loss of life may become a deciding factor in public acceptance.
What happens when an autonomous vehicle has an issue with its programming? How safe is it to assume a machine will make the right choice when it is necessary to preserve life?
Power Balance: How Does Artificial Intelligence Affect Power Distribution?
With AI giants like Amazon and Google using artificial intelligence to outmatch their competition, the ethical concern of how AI affects power balance arises.
For instance, nations with the capacity to develop and implement artificial intelligence already possess a head start that puts them ahead in geopolitics.
The same applies to businesses, as I have stated above. So, ensuring that artificial intelligence does not further distort power dynamics in human society and business competition is an essential conversation.
Conclusion: Is AI Bad News for Humanity?
No, it’s not. But how we use and build it is essential to ensure that it remains positive for humanity.
Contrary to the movies, the fear that AI will create a dystopian world for humans is likely a far-fetched narrative, if it is possible at all.
Also, on the positive side, AI has increased our potential to survive in our environment and is improving our standard of living. For example, some of the risks associated with manufacturing are reduced by the use of automation.
But the fear of gloom isn’t going away soon. Prominent people like Elon Musk have expressed their reservations that AI, if not properly managed, could destroy humanity. Moreover, addressing most of the ethical issues raising caution about the spread of AI falls on us as programmers.
Ultimately, the programmers building AI are responsible for ensuring that our creations don’t infringe on people’s rights, freedom, or even survival.
What do you think?