The Four Waves of A.I.

THE TERM “ARTIFICIAL INTELLIGENCE” was coined in 1956, at a historic conference at Dartmouth, but it has been only in the past 10 years, for the most part, that we’ve seen the first truly substantive glimpses of its power and application. A.I., as it’s now universally called, is the pursuit of performing tasks usually reserved for human cognition: recognizing patterns, predicting outcomes clouded by uncertainty, and making complex decisions. A.I. algorithms can perceive and interpret the world around us—and some even say they’ll soon be capable of emotion, compassion, and creativity—though the original dream of matching overall “human intelligence” is still very far away.

What changed everything a decade or so ago was an approach called “deep learning”—an architecture inspired by the human brain, with neurons and connections. As the name suggests, deep-learning networks can be thousands of layers deep and have up to billions of parameters. Unlike the human brain, however, such networks are “trained” on huge amounts of labeled data; then they use what they’ve “learned” to mathematically pick out and recognize incredibly subtle patterns within other mountains of data. A data input to the network can be anything digital—say, an image, or a sound segment, or a credit card purchase. The output, meanwhile, is a decision or prediction related to whatever question might be asked: Whose face is in the image? What words were spoken in the sound segment? Is the purchase fraudulent?
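To make that input-to-output idea concrete, here is a minimal sketch of such a training loop, written in Python with the PyTorch library. The tiny network, the synthetic data, and the fraud-detection framing are all illustrative assumptions rather than a description of any real system:

```python
# A minimal sketch, assuming synthetic data: a small network "trained" on
# labeled examples, then used to score a new input. Real systems use far
# deeper networks and vastly more data.
import torch
import torch.nn as nn

# Toy "labeled data": 1,000 purchases, 8 numeric features each, with a
# 0/1 label (legitimate vs. fraudulent) derived from a made-up rule.
X = torch.randn(1000, 8)
y = (X[:, 0] + X[:, 3] > 1.0).float().unsqueeze(1)

# A small feedforward network; production models stack many more layers.
model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),  # one output: a score for "fraudulent"
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: repeatedly nudge the parameters to better fit the labels.
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Inference: the output is a prediction for a new, unseen input.
new_purchase = torch.randn(1, 8)
p_fraud = torch.sigmoid(model(new_purchase)).item()
print(f"Estimated probability the purchase is fraudulent: {p_fraud:.2f}")
```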

This technological breakthrough was paralleled by an explosion in data—the vast majority of it coming from the Internet—which captured human activities, intentions, and inclinations. While a human brain tends to focus on the most obvious correlations between the input data and the outcomes, a deep-learning algorithm trained on an ocean of information will discover connections between obscure features of the data that are so subtle or complex we humans cannot even describe them logically. When hundreds or thousands of these features are combined, the resulting models naturally outstrip the performance of even the most experienced humans. A.I. algorithms now beat humans in speech recognition, face recognition, the games of chess and Go, reading MRIs for certain cancers, and a growing range of quantitative tasks—whether it’s deciding which loans to approve or detecting credit card fraud.

Such algorithms don’t operate in a vacuum. To perform their analyses, they require huge sets of data to train on and vast computational power to process it all. Today’s A.I. also functions only in clearly defined single domains. It’s not capable of generalized intelligence or common sense—AlphaGo, for example, which beat the world’s masters in the ancient game of Go, does not play chess; algorithms trained to determine loan underwriting, likewise, cannot do asset allocation.

With deep learning and the data explosion as catalysts, A.I. has moved from the era of discovery to the era of implementation. For now, at least, the center of gravity has shifted from elite research laboratories to real-world applications. In essence, deep learning and big data have boosted A.I. onto a new plateau. Companies and governments are now exploring that plateau, looking for ways to apply present artificial intelligence capabilities to their activities, to squeeze every last drop of productivity out of this groundbreaking technology (see our next story). This is why China, with its immense market, data, and tenacious entrepreneurs, has suddenly become an A.I. superpower.

What makes the technology more powerful still is that it can be applied to a nearly infinite number of domains. The closest parallel we’ve seen up until now may well be electricity. The current era of A.I. implementation can be compared with the era in which humans learned to apply electricity to all the tasks in their life: lighting a room, cooking food, powering a train, and so on. Likewise, today we’re seeing the application of A.I. in everything from diagnosing cancer to the autonomous robots scurrying about in corporate warehouses.

FROM WEB-LINKED TO AUTONOMOUS

A.I. APPLICATIONS can be categorized into four waves, which are happening simultaneously but with different starting points and velocities:

The first wave is “Internet A.I.” Powered by the huge amount of data flowing through the web, Internet A.I. leverages the fact that users automatically label data as we browse: buying vs. not buying, clicking vs. not clicking. These cascades of labeled data build a detailed profile of our personalities, habits, demands, and desires: the perfect recipe for more tailored content to keep us on a given platform, or to maximize revenue or profit.
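A toy illustration of that dynamic, sketched in Python with scikit-learn; the features and click data here are invented, and real platforms use far richer models:

```python
# A minimal sketch, assuming invented features: browsing behavior supplies
# the labels (clicked or not) at no extra cost, and a simple model learns
# to rank content by predicted click probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row describes a (user, item) pair: topic-match score, item age,
# and the user's historical click rate. All synthetic.
X = rng.random((5000, 3))

# The "free" label: 1 if the user clicked, 0 otherwise (made-up rule).
clicks = (0.8 * X[:, 0] + 0.2 * X[:, 2]
          + 0.1 * rng.standard_normal(5000) > 0.6).astype(int)

model = LogisticRegression().fit(X, clicks)

# The platform then shows whichever candidate item scores highest.
candidates = rng.random((10, 3))
scores = model.predict_proba(candidates)[:, 1]
print("Show candidate:", int(scores.argmax()))
```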

The second wave is “business A.I.” Here, algorithms can be trained on proprietary data sets ranging from customer purchases to machine maintenance records to complex business processes—and ultimately lead managers to improved decision-making. An algorithm, for example, might study many thousands of bank loans and repayment rates, and learn whether one type of borrower is a hidden default risk or, alternatively, a surprisingly good but overlooked lending prospect. Medical researchers, similarly, can use deep-learning algorithms to digest enormous quantities of data on patient diagnoses, genomic profiles, resultant therapies, and subsequent health outcomes, and perhaps discover a personalized treatment protocol that would otherwise have been missed. By scouting out hidden correlations that escape our linear cause-and-effect logic, business A.I. can outperform even the most veteran of experts.
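A minimal sketch of that loan example, assuming invented features and a synthetic notion of default; an actual underwriting model would be trained on proprietary records and rigorously validated:

```python
# A minimal sketch, assuming synthetic loan data: fit a model on past
# loans and their repayment outcomes, then score a new applicant.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000

# Tabular features per loan: income (thousands), debt-to-income ratio,
# years employed. All invented for illustration.
loans = np.column_stack([
    rng.normal(60, 15, n),
    rng.uniform(0, 1, n),
    rng.integers(0, 30, n).astype(float),
])
# Historical outcome: 1 = defaulted, 0 = repaid (made-up rule plus noise).
defaulted = (((loans[:, 1] > 0.6) & (loans[:, 0] < 55))
             | (rng.random(n) < 0.05)).astype(int)

model = GradientBoostingClassifier().fit(loans, defaulted)

# Score a new applicant the bank is considering.
applicant = np.array([[48.0, 0.7, 2.0]])
print("Predicted default risk:", model.predict_proba(applicant)[0, 1])
```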

The third wave of artificial intelligence—call it “perception A.I.”—upgrades machines with eyes, ears, and myriad other senses, collecting new data that was never before captured and using it to create new applications. As sensors and smart devices proliferate through our homes and cities, we are on the verge of entering a trillion-sensor economy. This includes speech interfaces (from Alexa and Siri to future supersmart assistants that remember everything for you) as well as computer-vision applications—from face recognition to manufacturing quality inspection.
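To give a feel for how accessible off-the-shelf perception has become, the sketch below classifies a photo with a pretrained network from the torchvision library. The image path is a placeholder, and a real quality-inspection system would be fine-tuned on labeled defect photos rather than relying on generic categories:

```python
# A minimal sketch using a pretrained image classifier. The file path is
# a placeholder; a production inspection model would be fine-tuned on
# domain-specific images of good and defective parts.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT      # pretrained on ImageNet
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()              # matching preprocessing

img = Image.open("part_photo.jpg")             # placeholder image path
batch = preprocess(img).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.argmax().item()
print("Predicted category:", weights.meta["categories"][top])
```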

The fourth wave is the most monumental but also the most difficult: “autonomous A.I.” Integrating all previous waves, autonomous A.I. gives machines the ability to sense and respond to the world around them, to move intuitively, and to manipulate objects as easily as a human can. Included in this wave are autonomous vehicles that can “see” the environment around them: recognizing patterns in the camera’s pixels (red octagons, for instance); figuring out what those patterns correspond to (stop signs); and then using that information to make decisions (applying pressure to the brake in order to slowly stop the vehicle). In robotics, such advanced A.I. algorithms will be applied to industrial settings (automated assembly lines and warehouses), commercial tasks (dishwashing and fruit-harvesting robots), and eventually consumer ones too.
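That perceive, interpret, decide sequence can be sketched as a simple control loop. Every function below is a hypothetical stand-in; in a real vehicle these steps are large learned models and safety-critical control systems:

```python
# A minimal sketch of the perceive -> interpret -> decide loop. All of
# these functions are hypothetical stand-ins, not a real driving stack.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "stop_sign", interpreted from pixel patterns
    distance_m: float  # estimated distance to the object, in meters

def perceive(camera_frame) -> list[Detection]:
    """Stand-in for a vision model that spots patterns (red octagons)."""
    return [Detection("stop_sign", 25.0)]  # stubbed output for the sketch

def decide(detections: list[Detection]) -> float:
    """Map interpreted objects to an action: brake pressure from 0 to 1."""
    for d in detections:
        if d.label == "stop_sign" and d.distance_m < 30.0:
            # Brake harder as the sign gets closer, for a gentle stop.
            return min(1.0, (30.0 - d.distance_m) / 30.0 + 0.2)
    return 0.0

def control_loop(camera_frame) -> float:
    detections = perceive(camera_frame)  # sense the world
    return decide(detections)            # choose an action for actuators

print("Brake command:", control_loop(camera_frame=None))
```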

THE CHANGES YET TO COME

BECAUSE A.I. CAN BE PROGRAMMED to maximize profitability or replace human labor, it adds immediate value to the economy. A.I. is fast, accurate, works around the clock, doesn’t complain, and can be applied to many tasks, with substantial economic benefit. How substantial? PwC estimates that the technology will contribute about $16 trillion to worldwide GDP by 2030.

But that gift doesn’t come without challenges to humanity. The first and foremost is job displacement: Since A.I. can perform single tasks with superhuman accuracy—and most human jobs are single-task—it follows that many routine jobs will be replaced by this next-generation tech. That includes both white-collar and blue-collar jobs. A.I. also raises hard questions about security, privacy, data bias, and the maintenance of monopolies. All are significant issues with no known solution, so governments and corporations should start working on them now.

But one concern we don’t have to face quite yet is perhaps the most common one these days, cast in the image of science-fiction movies—that machines will achieve true human-level (or even superhuman-level) intelligence, presumably making them capable of threatening mankind.

We’re nowhere near that. Today’s A.I. isn’t “general artificial intelligence” (the human kind, that is), but rather narrow—limited to a single domain. General A.I. requires advanced capabilities like reasoning, conceptual learning, common sense, planning, cross-domain thinking, creativity, and even self-awareness and emotions—all of which remain beyond our reach. And there is no known engineering path that evolves today’s narrow systems toward those general capabilities.

How far are we from general A.I.? I don’t think we even know enough to estimate. We would need dozens of big breakthroughs to get there, when the field of A.I. has seen only one true breakthrough in 60 years. That said, narrow A.I. will bring about a technology revolution on the scale of the Industrial Revolution or larger—and one that’s happening much faster. It’s incumbent upon us to understand its monumental impact, widespread benefits, and serious challenges.

This essay is adapted from Lee’s new book, AI Superpowers: China, Silicon Valley, and the New World Order (Houghton Mifflin Harcourt). He is the chairman and CEO of Sinovation Ventures and the former president of Google China.

This article originally appeared in the November 1, 2018 issue of Fortune.