The prognostications of Silicon Valley moguls such as Elon Musk, Peter Thiel, and company would have you believe we are standing on the edge of an artificial intelligence apocalypse, with smart machines coming to eat everyone’s lunch and undermine the viability of the economy as it functions today. This anxiety is exacerbated by recent troubling news of the way Big Tech is manipulating things we hold dear, such as local organic dispensers or civil democratic discourse. Combined with a steady diet of dystopian motion picture depictions of intelligent machines such as 2001: A Space Odyssey, Blade Runner, Her, Ex Machina, The Terminator, A.I., Alien: Covenant, and so on, one may come away with the impression that the magnates of Big Tech are correct: the future of humanity is slipping out of the grasp of humans and into that of some nameless, faceless machine learning systems.
I’ve spent the past several years during graduate school and onward becoming intimately familiar with the mathematical underpinnings of learning theory and how it can be used to design autonomous learning systems. What I can tell you is that we are exceptionally far from designing anything resembling intelligence in the popular usage of the word. Today’s learning machines can successfully learn from their sensory feeds to address rudimentary questions such as “am I looking at an apple?” with a high degree of accuracy. This is called supervised learning, or statistical inference, and dates back to Sir R. A. Fisher in 1917. The key difference between then and now is increased data availability and computing infrastructure, but the bones of how the learning occurs are nearly as basic as they were in 1917.
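The mechanics behind this kind of learning can be sketched in a few lines. Below is a toy supervised-learning example, with entirely invented data and features (say, redness and roundness scores), that fits a logistic-regression classifier for “am I looking at an apple?” by gradient descent; the point is how simple the bones really are:

```python
import numpy as np

# Synthetic labeled examples, invented for illustration:
# apples cluster around features (0.8, 0.9), non-apples around (0.2, 0.3).
rng = np.random.default_rng(0)
apples = rng.normal(loc=[0.8, 0.9], scale=0.1, size=(100, 2))
others = rng.normal(loc=[0.2, 0.3], scale=0.1, size=(100, 2))
X = np.vstack([apples, others])
y = np.concatenate([np.ones(100), np.zeros(100)])

# Logistic regression trained by gradient descent on the log-loss.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "apple"
    grad_w = X.T @ (p - y) / len(y)         # gradient of average log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

The model learns nothing but a weighted threshold on the two feature values; everything past this point is more data and more compute, not a different kind of learning.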
Within the next five to ten years, we will be able to have a learning system reliably learn from streaming information not just to perform object recognition (“is that an apple?”), but to autonomously navigate its environment through past trial-and-error experiences. To be specific, we will be able to teach an autonomous agent to ask the right questions that lead it to find the apple. This is called reinforcement learning, and it has been around at least since Richard Bellman in 1954, but the framework has gained salience more recently, again due to increases in computing power and data availability. The folks at Google DeepMind, among others, are hard at work on making this a reality for simple video games, and from there we will be able to extrapolate the techniques to problems faced by robotic systems.
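To make the trial-and-error idea concrete, here is a toy sketch in Bellman’s framework: tabular Q-learning on a small invented grid world, where an agent starting in one corner learns, purely from rewards, to navigate to an “apple” in the opposite corner. The grid size, rewards, and learning parameters are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

SIZE = 4
GOAL = (SIZE - 1, SIZE - 1)                   # where the apple sits
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))      # one value per state-action pair

alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, a):
    """Apply action a; bumping a wall keeps the agent in place."""
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    reward = 1.0 if nxt == GOAL else -0.01    # small cost per move
    return nxt, reward, nxt == GOAL

for _ in range(500):                          # training episodes
    state, done = (0, 0), False
    while not done:
        if rng.random() < epsilon:            # occasionally explore
            a = int(rng.integers(len(ACTIONS)))
        else:                                 # otherwise exploit current values
            a = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        # Bellman update: nudge Q toward reward + discounted future value.
        Q[state][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][a])
        state = nxt

# Greedy rollout: count steps from the start corner to the apple.
state, steps = (0, 0), 0
while state != GOAL and steps < 20:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    steps += 1
print(f"greedy policy reaches the apple in {steps} steps")
```

No one tells the agent where the apple is or how to walk; the table of numbers alone, updated by trial and error, comes to encode the route. That is the whole trick, and it is also its limit: the agent answers “how do I get to the apple?”, never “what is an apple for?”.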
But what no engineered combination of supervised or reinforcement learning systems can tackle is how to mathematically formulate the meaning of higher-level questions or abstract notions. There is an insurmountable barrier between “how should you steer to the apple?” and “what is the meaning or purpose of an apple?” that places the latter beyond the reach of engineered systems. Workarounds based on regurgitating information from Wikipedia or Google ignore the issue of how a machine should actually form a representation of higher-level knowledge based on what it’s trying to do. There really is nothing that both makes mathematical sense and captures the way abstract concepts define the real world and our experience of it.
I would like to go out on a limb at this point and claim that we will never be able to assign a numerical value to intangible concepts like trust, friendship, creativity, and so on. Without a formal theory of how learning systems should process and understand abstract concepts, no engineered systems will do so. We may see exceptions within our lifetime, but these will not go beyond a “special sauce” cobbling together of different features to dupe us into believing we are observing intelligence when we really are not. Siri and Alexa don’t understand you. Not really. They just built up an intricate predictive statistical model of your speech patterns and browsing habits. But there is no higher level understanding.
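To see what such a predictive statistical model amounts to at its simplest, consider a toy bigram model; the invented training snippet below stands in for a user’s speech history. It predicts the next word purely from counts of what followed each word before, with no understanding anywhere in sight:

```python
from collections import Counter, defaultdict

# Invented stand-in for a user's voice-assistant history.
corpus = (
    "set a timer for ten minutes . "
    "set a reminder for my meeting . "
    "set a timer for five minutes ."
).split()

# Count, for each word, which words have followed it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict("set"))   # predicts "a": pure pattern matching
print(predict("a"))     # predicts "timer", seen twice vs. "reminder" once
```

Real assistants use vastly larger models of the same statistical character; the prediction gets sharper, but at no point does comprehension enter the pipeline.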
Equipped with this understanding of the limitations of the artificial intelligence of our imagination, we can shift focus to how technology is currently evolving. Products like the iPhone, HomePod, Echo, Xbox One, and so on promise consumers intelligent companionship that can understand when important events are upcoming in their lives and help them find recipes in real time for things they are cooking. For all sorts of rudimentary tasks, like recounting information from the web or from inside your smartphone apps, they are well equipped. But this impressive functionality breaks down when you replace basic informational questions with ones that require any sort of creativity and synthesis. For higher-level knowledge, there’s nothing more than regurgitating Wikipedia.
These products are a lot more rudimentary than the marketing campaigns for Amazon Echo, HomePod, etc. would have you believe. Big Tech has a vested interest in convincing the public that autonomous learning systems are much more advanced than they are, so that in five to ten years we are more likely to buy into the rush towards autonomous cars. Automobile companies are pouring billions of dollars into the development of AI for autonomous driving, ignoring the sheer number of higher-level questions that are required during our daily driving experience, such as how to provide an intuitive and adaptive response to the behavior of a cyclist in a neighboring lane on the road. This should give us cause for concern.
More pernicious than a few malfunctioning fancy cars are the long-term economic effects that industrial automation has wrought on the working classes of Western countries. Automation has been so effective precisely because it keeps autonomous systems within the realm of low-level questions and repetitive mechanical tasks, for which identifying the apple and navigating to it are enough. As countries have automated factories, warehouses, cashiers, and other components of retail, decent working people have watched their livelihoods evaporate. We cannot ask companies to reduce efficiency and incur greater operating costs just due to our bleeding hearts. The task we are faced with is how to redirect working people to tasks that are well suited to their skill sets, and too challenging for industrial engineers to automate. The AI Apocalypse may not be coming, but the bifurcation of advanced industrialized economies is already here.