Elon Musk is concerned about the impact of Artificial Intelligence (AI) in the future. Who am I to disagree? After all, he is a tech billionaire with a boatload of academic and entrepreneurial credibility.

I am none of those things. I live in the world of how human behavior, leadership, culture, and the capacity for continuous change affect success. I have leveraged a behavioral and social science education from a little-known state university, more than 30 years of experience, and continuous study into a place firmly among the global experts in my field.

The difference in our perspectives, however, makes it possible for me to see the future quite differently than the apocalyptic version Musk envisions. I absolutely acknowledge the potential for a world of mass job losses, dictators, and world wars. I saw “The Matrix,” and I’m convinced that I would gladly take the red pill.

I just don’t believe that AI is the root cause of the dystopian destiny he envisions.

Change, Fear, and Moral Panics

Humans have always fixated on their worst fears about anything that challenges their current thinking. The Theory of Moral Panics, first attributed to South African sociologist Stanley Cohen, describes the widespread and often irrational fear that something is a threat to the values, safety, or interests of society. Sensationalized allegations about “Welfare Queens,” rainbow parties, and backward messages on rock albums are examples.

Uncertainty about the impact of new technology adds another layer of context to the discussion. Socrates was said to have questioned the value of a relatively new technology, writing, as a tool to help people remember. Ironically, we know that because Plato transcribed and published the conversation.

There were concerns that the human body couldn’t withstand speeds of 30 miles per hour back when the Stockton and Darlington Railway began operation in 1825. The telegraph was derided as “too fast for the truth,” and telephones were once feared as a conduit for lightning strikes.

Moral panics combined with fears about new technology make for very entertaining science fiction. They can also serve as cautionary warnings about the inherent perils of bad choices. They don’t, however, create an absolute picture of the future.

A 2017 study by Pew Research found that over 70 percent of U.S. residents express wariness and concern over the expanding role of machines in day-to-day life. When it comes to jobs, 85 percent reported a desire to limit machines to jobs that are dangerous or unhealthy for humans.

Hawking and Gates Add Perspective

CNBC ran this headline for a story on November 6, 2017: “Stephen Hawking says A.I. could be ‘worst event in the history of our civilization.’”

Hawking’s alignment with Musk appears obvious. Except that isn’t the entire picture.

The story goes on to describe Hawking’s vision that AI could help to transform “every aspect of society.” It could, according to him, be the “biggest event in the history of our civilization, or the worst.”

In contrast to Musk, Bill Gates has gone on record touting AI’s potential benefits. According to him, the world could increase productivity, provide people with more time off, and allow everyone to enjoy a higher quality of life. Even then, there is recognition of the challenges within his optimism.

The problem with AI and its ultimate role in our world isn’t the technology. It is the human element that will influence development, application, adoption, and ultimately regulation. It has always been that way.

There are valid ethical questions: What do we do if there is no more work? How human do we want our machines to be? How do we protect against even greater inequality between the haves and the have-nots?

Ultimately, those are human choices. It is up to us to determine whether we allow Musk’s worldview to become a reality or focus on the potential recognized by Hawking and Gates.

If you listen closely, there is a tinge of hope in Musk’s skepticism. Maybe his real problem is a lack of confidence in our ability to responsibly develop, deploy, and utilize AI technology. If that is the case, let’s tackle that problem. It is more challenging than going to Mars, but the benefits are far greater.


Randy Pennington is an award-winning author, speaker, and leading authority on helping organizations achieve positive results in a world of accelerating change. To bring Randy to your organization or event, visit www.penningtongroup.com, email info@penningtongroup.com, or call 972.980.9857.