Taming the machine before it’s too late
IMD Professor Howard Yu on developments in Artificial Intelligence
8 March 2016
Whether Lee Sedol will win is irrelevant; the mere fact that AlphaGo exists is a testament to the power and future of artificial intelligence. The ancient Chinese board game Go, played on a 19-by-19 grid with black and white stones, was once thought impossible for machines to master. Unlike the Western game of chess, where each move affords around 40 options, Go entails up to 200 choices. The possible outcomes quickly compound to a bewildering 10^170—more than the total number of atoms in the entire observable universe.
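That figure is easy to sanity-check. Each of Go's 361 intersections can be black, white, or empty, giving 3^361 raw configurations—a loose upper bound, since most of those violate the rules; the count of legal positions is the roughly 10^170 cited above. A few lines of Python show how fast the numbers grow:

```python
# Upper bound on Go board configurations: every one of the 361
# intersections is black, white, or empty.
raw_configurations = 3 ** 361

# Number of decimal digits tells us the order of magnitude.
digits = len(str(raw_configurations))
print(f"3^361 has {digits} digits (about 10^{digits - 1})")

# For comparison: the observable universe holds an estimated 10^80 atoms.
print(f"That dwarfs the ~10^80 atoms in the observable universe")
```

Even this crude bound, around 10^172, already exceeds the atom count by some ninety orders of magnitude, which is why exhaustive search was never an option for Go.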
Some may see AlphaGo as an incremental advance on existing technologies, like IBM's Deep Blue, which defeated chess grandmaster Garry Kasparov in 1997. But what stands out this time is AlphaGo's ability to improve its performance autonomously, simulating what cognitive psychologists regard as intuition. Before AlphaGo played against a human, its developer, DeepMind, had built a general-purpose algorithm to play video games—Space Invaders, Breakout, Pong, and others. Without any game-specific programming, the algorithm was able to master each game by trial and error—pressing different buttons randomly at first, then adjusting to maximize rewards. Game after game, the software proved cunningly versatile at figuring out an appropriate strategy, and then applying it without making any mistakes.
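The trial-and-error loop described above—act randomly at first, then favor whatever earned the most reward—is the core of reinforcement learning. DeepMind's actual systems pair deep neural networks with search, but the underlying idea can be sketched in a few lines with a toy "epsilon-greedy" agent choosing among three buttons. All the numbers and names here are illustrative, not DeepMind's code:

```python
def epsilon_greedy(rewards, steps=500, epsilon=0.1, seed=0):
    """Toy trial-and-error learner: mostly press the button with the
    best reward estimate, but explore a random button 10% of the time."""
    import random
    rng = random.Random(seed)
    n = len(rewards)
    counts = [1] * n                  # press each button once to start
    estimates = list(rewards)         # deterministic toy rewards
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(n)      # explore: random button
        else:                         # exploit: best button so far
            a = max(range(n), key=lambda i: estimates[i])
        counts[a] += 1
        # running average of observed reward for this button
        estimates[a] += (rewards[a] - estimates[a]) / counts[a]
    return estimates, counts

# Button 2 secretly pays best; the agent discovers this on its own.
estimates, counts = epsilon_greedy([0.2, 0.5, 0.8])
```

After a few hundred presses, the agent concentrates almost all of its choices on the highest-paying button—exactly the "adjusting to maximize rewards" behavior the article describes, minus the neural networks that let DeepMind's systems do it from raw pixels.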
AlphaGo may seem by far the most advanced, but IBM has also pushed forward in cognitive computing. IBM Watson, hailed as the first computer capable of understanding natural human language, shows how artificial intelligence can go beyond games and trivia. By digesting millions of pages of medical journals and patient data, Watson provides recommendations—from additional blood tests to the latest available clinical trials—to physicians. A cancer doctor, for example, need only describe a patient's symptoms to Watson in plain spoken English, through an iPad application.
Naturally, companies have little choice but to double down on AI. The prospect of anticipating customers' next desires, and of deploying ever-smarter bots—a super Siri, or automated messaging as a cheap alternative to customer service—is enough to tip all Fortune 500 companies toward ever more AI. AlphaGo, in that sense, is an early indicator of how that future may be rich with new possibilities beyond the realm of the human mind.
Such breakneck advances no doubt irk many observers. Elon Musk, chief executive of Tesla Motors, recently posted a stirring comment, saying artificial intelligence could "potentially be more dangerous than nukes." Even Apple co-founder Steve Wozniak expressed grave concerns. "The future is scary and very bad for people," he argued. "Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?" For the first time, business leaders steeped in information technology are projecting an apocalyptic outcome. The risk of producing something less desirable is real.
All this will certainly prompt further negotiation at the societal level. Already, some 1,000 high-profile AI experts have jointly signed an open letter calling for a ban on "offensive autonomous weapons." If history is a guide, international protocols governing what AI systems are, and how they should be built, will soon emerge. The concept of international protocols is not new in IT: even on our free-for-all Internet, worldwide protocols have helped ensure neutral access by all parties and efficient information exchange.
Given the sweeping scope of AI, no one can afford to ignore these developments. As Mary Kay Ash said, there are three types of people in this world: those who make things happen, those who watch things happen, and those who wonder what happened. Let us not become the last.
Howard Yu is a professor of strategic management and innovation at IMD business school.