
Opinion

AI & the end of humankind

BREAKTHROUGH - Elfren S. Cruz - The Philippine Star

It was the brilliant scientist Stephen Hawking who, during his lifetime, made a dire prediction about the end of humankind. He said that the development of artificial intelligence (AI) could someday result in the end of mankind as we know it.

The mathematician I.J. Good, who served as a codebreaker during World War II, predicted that the invention of an ultra-intelligent machine would lead to an intelligence explosion. In 1965, he wrote that the first ultra-intelligent machine would be “…the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

It should be remembered that Good was an adviser to Stanley Kubrick when he made the film “2001: A Space Odyssey.” The central character of this futuristic film was an ultra-intelligent machine named HAL 9000, which took control of the spaceship and turned against its human crew.

In more recent times, we have had the series of movies called “The Terminator,” about artificial intelligence taking control of the world from human beings. In those films, the AI believes itself superior to mankind.

Isaac Asimov, perhaps the most famous science fiction writer of the mid-20th century, wrote several novels about robots that had developed the ability to think. In these stories, he always made it a point that one of the instructions governing the robots was never to harm humankind.

Some of those novels became stories of robots disregarding these instructions and becoming dangers to humankind. Today, we see the rapid advancement of artificial intelligence, which has made possible hybrid education and other similar advances.

There have been initial reports of machines that are now capable of thinking. In fact, there is a report of a machine that has even begun to develop a language of its own. The question now is: if machines can think, how will human beings retain control over them? And if human beings can no longer control machines, is it still possible for AI to be taught morality, the difference between good and evil?

For me, this is a nearly impossible task because human beings themselves often cannot agree on the difference between good and evil.

The writer Toby Walsh, author of “Machines Behaving Badly,” says that AI is already being used for an impressive array of purposes, which will continue to expand in the coming decades. Some of the uses of AI that have become normal today are “…detecting malware, checking legal contracts for errors, identifying bird songs, discovering new materials, facial recognition, predicting crime and scheduling police patrols.”

While these are laudable functions, the world needs to think of the unintended consequences of these advancements in technology. Walsh also predicts that, on the immediate horizon, computers will help automate dull and dangerous jobs unsuited for humans. This includes AI being used to “…combat the climate emergency by optimizing the supply and demand of electricity, predicting weather patterns and maximizing the capacity of wind and solar energy.”

Soon we will see the day when AI will be in every aspect of our lives.

But Walsh himself questions whether machines can be made to operate in moral ways. In an experiment by the Media Lab at the Massachusetts Institute of Technology, a digital platform is being used to test whether it is possible for AI to make moral choices.

A description of the experiment is cited by John Thornhill, the Financial Times’ innovation editor, based on interviews with a large number of car users: “How do users react to the moral problem known as the trolley problem, dreamt up by the English philosopher Philippa Foot in 1967? Would you switch the course of a runaway trolley to prevent it from killing five people on one track at the cost of killing one person on an alternative spur? In surveys, some 90 percent of people say they would save the five lives at the cost of the one.”

However, many people, including computer scientists, accept the difficulty of making such moral choices, which only amplifies the difficulty of writing moral choices into a machine’s operating system. It is not uncommon for people to say one thing in public and do another in private. If a person is on a diet, for example, how do we ensure that a machine will provide that person with the right kind of food? A person may declare that he or she will not eat cakes or other high-sugar baked products when, in fact, they want to continue doing so.

Thornhill concludes: “…moral decisions made by machines cannot be the blurred average of what people tend to do. Morality changes. Democratic societies no longer deny women the vote or enslave people as they used to.” Walsh himself wrote: “We cannot today build moral machines, machines that capture our human values and that can be held accountable for their decisions.”

My own question is: if we do come to have machines with morality, who will decide what moral choices they should make? Will it be lawmakers, basing it on their own personal morality, or will it be the makers of the machines? And if machines begin to make other machines, we will see a world in which machines decide what is moral.

This is the dilemma for humankind in the future. Will we stop the development of artificial intelligence, or will we face the risk of what Stephen Hawking said would be the end of humankind?

*      *      *

Young Writers’ Hangout on July 23 with returning author-facilitator Kim Derla, 2-3 pm.

Contact [email protected]. 0945.2273216

Email: [email protected]
