Scott Gibson: Opening the black box of artificial intelligence
In March of 2016, a computer challenged a grandmaster of the game Go to a match.
Go is an exceptionally difficult game, considered far more complex than chess. Artificially intelligent computers had tried for years to beat humans, but had not yet advanced beyond amateur status.
Elon Musk, a major investor in artificial intelligence, or AI, stated in early 2016 that computers were a decade away from defeating human Go masters.
Then came AlphaGo, a computer program created at Google's DeepMind lab.
AlphaGo not only had access to a database of 30 million moves from 160,000 games, but it had also been designed to learn by practice. By playing against itself, the software learned aspects of the game that set it apart from prior computer programs.
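To give a feel for learning by practice, here is a minimal sketch in Python of self-play on a far simpler game, Nim: 21 stones, players alternately take one to three, and whoever takes the last stone wins. It uses plain tabular value learning, chosen here only for illustration; AlphaGo's actual system paired deep neural networks with far more sophisticated reinforcement learning.

    import random

    values = {}  # estimated chance of winning for the player to move

    def opponent_value(stones_left):
        if stones_left == 0:
            return 0.0  # we took the last stone, so the opponent has lost
        return values.get(stones_left, 0.5)

    def pick_move(stones):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < 0.1:  # occasional experimentation
            return random.choice(moves)
        # Otherwise leave the opponent in the worst position we know of.
        return min(moves, key=lambda m: opponent_value(stones - m))

    for game in range(20000):  # the program plays itself 20,000 times
        stones, history = 21, []
        while stones > 0:
            history.append(stones)
            stones -= pick_move(stones)
        # Walk backward: the side that moved last won. Nudge each
        # position's value toward 1 for the winner, 0 for the loser.
        for i, s in enumerate(reversed(history)):
            target = 1.0 if i % 2 == 0 else 0.0
            old = values.get(s, 0.5)
            values[s] = old + 0.1 * (target - old)

    # The lowest-valued positions should cluster at multiples of 4,
    # the known losing positions, though no one told the program that.
    print(sorted(values, key=values.get)[:5])

The point of the sketch is the absence of expert knowledge: the program discovers the winning pattern, leaving the opponent a multiple of four stones, purely from the outcomes of its own games.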
AlphaGo defeated Grandmaster Lee Sedol in four of five games, an astonishing feat.
But perhaps the most significant aspect was that the software sometimes made moves so unexpected that they were initially considered mistakes. Only as the game progressed did the anomalous moves prove decisive.
Why AlphaGo made these moves remains elusive.
The computer could not explain itself to its creators. It seemed to act by intuition.
This same intuitive approach is common among human players of the top rank, who sometimes play in ways that feel right but cannot be explained.
AI programs are entering a new phase of greater flexibility and vastly expanded analytical power. They can shift from one purpose to another, carrying what they learned previously into new applications.
Programs with this capacity are called “foundation models,” and they open far wider uses of AI, uses that bring it to an industrial scale. But the increasing inscrutability of AI remains a serious hurdle.
If the computers are making decisions they cannot explain to their human masters, we are essentially left to rely on computer hunches.
We may look at the output and judge that the computer has made a mistake, but how can we know whether it is going off the rails or making an ingenious intuitive assessment? We need to be able to see what parameters the computer used, to determine whether it is making a breakthrough or a blunder.
The problem is that the more intuitive the computer's output, the more likely it rests on extremely complex webs of interconnected information. AI programs are designed to work something like the human brain, and the multitude of data points they draw on, along with the pathways behind the sorting process, form a daunting tangle for anyone attempting reverse analysis.
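To make the scale of that web concrete, here is a small Python sketch counting the connections in a toy neural network. The layer sizes are invented for illustration only; real systems carry millions or billions of such learned connections.

    # Count the learned connections (weights) in a small fully
    # connected network. Sizes are illustrative: 361 inputs is one
    # per point of a 19-by-19 Go board.
    layer_sizes = [361, 512, 512, 1]

    weights = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        weights += n_in * n_out  # every input feeds every output

    print(f"Connections in this toy network: {weights:,}")
    # About 447,000, and this is a tiny model. Explaining one output
    # means untangling the joint effect of every one of these numbers.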
In some situations, however, it is vital that humans be able to judge the quality of AI decision-making.
The United States gathers piles of data about North Korea from signals intelligence, satellites and spy planes. Analysts can become overwhelmed by alerts from pattern-recognition software warning them of possible hostile activity.
The ability to trace why the program generated a warning would help analysts assess its importance and recommend software changes if trivial data is being flagged. For this reason, the Defense Advanced Research Projects Agency instituted a program, Explainable Artificial Intelligence, to probe the innards of AI software and back-analyze its processes.
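One simple explanation technique, sketched below in Python, is to mute each input in turn and watch how the alert score moves. The features and scoring function are invented for illustration; this is not DARPA's actual method, just the flavor of the idea.

    # A stand-in for an opaque pattern-recognition model. In a real
    # system these weights would be buried inside a trained network.
    def alert_score(f):
        return (0.6 * f["truck_movements"]
                + 0.3 * f["radio_traffic"]
                + 0.1 * f["cloud_cover"])

    observed = {"truck_movements": 0.9, "radio_traffic": 0.4, "cloud_cover": 0.7}
    baseline = alert_score(observed)

    for name in observed:
        muted = dict(observed, **{name: 0.0})  # silence one input
        print(f"{name} contributes {baseline - alert_score(muted):.2f} to the alert")

An analyst who saw that cloud cover, say, was driving warnings could flag trivial data for a software fix, exactly the recourse described above.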
Another reason we need to build transparency into computer outputs is so that we can learn from them.
Artificial intelligence has the potential to make discoveries of great value. Going forward, computers will help invent new, useful products.
But if their inventive methods remain entangled in a maze of semiconductors, we will not learn how to duplicate the effort.
At that point, we will become dependent upon the computer for further ideas instead of innovating for ourselves from a higher level of understanding. If human learning is to expand from AI achievements, we need to understand the building blocks the computer used.
Fortunately, some reverse engineering is possible without retracing the millions of steps a supercomputer takes to analyze a particular question.
If an AI writing program recommends different wording for a piece of text, the writer can often judge for herself whether the computer is being clever or daft. But I can attest that sometimes the editor, whether human or electronic, needs to provide compelling reasons to nudge an author into accepting their conclusions.
Computers are wedging their way into more and more creative fields that previously were the domain of humans alone.
An AI program collaborated with composers to construct, from Beethoven’s own notes for the piece, what his unfinished 10th Symphony might have sounded like. The result has been hailed as a musical triumph.
Software is now writing music, poems, stories, technical manuals and even jokes, and the results are becoming harder and harder to distinguish from human efforts. Because computers can access vastly more human creative works than any person could, and then use that data bank to create new material, they provide an incomparable resource for creative collaboration.
But to learn why the software made a given suggestion, we must understand its sources. Learning requires explanation, not dictation.
To this end, companies and government agencies alike are designing software that learns to watch AI programs, identify their sources, determine how those sources were integrated and report back to humans.
That analytic task, sadly, is beyond the capacity of people. But we are learning how to make tools to watch over our tools.
Artificial intelligence will soon be driving our cars, making medical diagnoses and authorizing bank loans. These new capabilities have the power to make our world safer and more efficient.
But we humans need to know what steers the decisions these programs make, both to learn from them and, when needed, to bring them to heel.
Guest writer Scott Gibson returned to his childhood home 30 years ago to practice medicine. A board-certified internist, he served on the McMinnville School Board from 2011 to 2017, when he and his wife, Melody, moved to the outskirts of Amity to open the Bella Collina B&B. In addition to medicine and science, he counts history, economics and writing among his interests.