Forget Turing: Machines Have Already Passed a More Important Test

By Robert D. Lamb

It’s common to talk about the evolution of machines in comparison to humans: the Turing test, when computers will be as intelligent as humans, whether robots can be conscious like humans, and so on. But there’s a mostly forgotten tradition of research, called cybernetics, that compares humans, machines, and human societies to see what practical lessons they hold for each other. (My PhD adviser, John Steinbruner, was one of the last great lights of the field.)

I work at the Army War College, and people in the security community often wonder: why is the U.S. military, one of the most sophisticated fighting organizations in the history of warfare, losing in Afghanistan, a tribal society? In other contexts I’ve heard people mock Afghan tribal communities for supposedly still fighting each other over things like whose family stole whose goat 200 years ago. Last year, as I was wrapping up years of research on governance and conflict, I came to a startling conclusion: Afghan tribes may well be a more sophisticated organization than the Army, and the joke about centuries-long grudges may have something to do with why! (I delayed publishing the results because I needed to pursue and develop some of the implications of the research.)

Here was my reasoning and research: I traced the evolution of human-made machines through six stages of increasing sophistication in the hardware and mathematics needed to make them work: linear, dynamic, complex, adaptive, intelligent, and conscious. The last three stages are the most relevant here. (4) Adaptive machines use heuristics to link their present state with their desired state. (5) Intelligent machines use metaheuristics (to decide which heuristic would be most effective in a given situation) and training data; that is, not just enough memory to make decisions based on their present circumstances, but deep memory to make decisions based on their experience of decision making in all of their past circumstances: history, as it were. (6) Conscious machines don’t exist, because we don’t yet know the hardware (memory? history? neurology?) or the mathematics (what lies beyond metaheuristics?) required to make a machine self-aware.
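To make the difference between stages 4 and 5 concrete, here is a deliberately toy sketch in Python. Everything in it (the one-dimensional state, the two heuristics, the error-tallying “deep memory”) is my own illustrative assumption, not a real architecture; the point is only that an adaptive machine applies one fixed heuristic, while an intelligent machine uses a metaheuristic over its remembered track record to choose among heuristics.

```python
# Toy sketch of stage-4 (adaptive) vs. stage-5 (intelligent) machines.
# All names and the memory scheme are illustrative assumptions.
from typing import Callable, Dict, List

State = float  # simplified: a "state" is just a position on a number line
Heuristic = Callable[[State, State], State]

def step_toward(state: State, goal: State) -> State:
    """Heuristic A: move one unit toward the goal."""
    return state + (1.0 if goal > state else -1.0 if goal < state else 0.0)

def halve_gap(state: State, goal: State) -> State:
    """Heuristic B: close half the remaining distance to the goal."""
    return state + (goal - state) / 2

class AdaptiveMachine:
    """Stage 4: applies a single fixed heuristic linking its present
    state to its desired state. No memory of past outcomes."""
    def __init__(self, heuristic: Heuristic):
        self.heuristic = heuristic

    def act(self, state: State, goal: State) -> State:
        return self.heuristic(state, goal)

class IntelligentMachine:
    """Stage 5: a metaheuristic plus deep memory. It records how well
    each heuristic has worked across ALL past decisions and picks the
    one with the best cumulative track record."""
    def __init__(self, heuristics: List[Heuristic]):
        self.heuristics = heuristics
        # Deep memory: cumulative error observed per heuristic.
        self.memory: Dict[int, float] = {i: 0.0 for i in range(len(heuristics))}

    def act(self, state: State, goal: State) -> State:
        # Metaheuristic: choose the heuristic with the lowest recorded error.
        best = min(self.memory, key=self.memory.get)
        new_state = self.heuristics[best](state, goal)
        self.memory[best] += abs(goal - new_state)  # remember the outcome
        return new_state

if __name__ == "__main__":
    goal = 10.0
    adaptive = AdaptiveMachine(step_toward)
    intelligent = IntelligentMachine([step_toward, halve_gap])
    a = b = 0.0
    for _ in range(8):
        a = adaptive.act(a, goal)
        b = intelligent.act(b, goal)
    print(f"adaptive ended at {a}, intelligent ended at {b}")
```

Run it and the fixed-heuristic machine plods one unit per step, while the memory-driven one quickly learns to favor the faster heuristic. Crude, but that’s the shape of the distinction.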

Machines have evolved so that, today, they sit at the low end of intelligent, with metaheuristics and deep memory: closer to, but not yet as sophisticated as, individual human beings (who are conscious, a stage beyond intelligent).

But, and here’s where it gets freaky, most human societies are merely adaptive. Collectively we’re pretty good at knowing our present circumstances and what we need to do to reach a collective goal one or two steps ahead, but terrible at learning enough from history (deep memory) to make decisions that account for potential second- and third-order consequences. Some societies and organizations might be as sophisticated as the low end of intelligent: certain tribal societies, say, whom we mock for their deep knowledge of their own histories, and who perhaps know that “waiting them out” has been an effective strategy against foreign invaders for centuries.

My alternative to the Turing test is this: can a machine make better decisions than a society? I think machines will not only pass this test before they pass the Turing test; they probably already have! Some individual machines today already seem to be as sophisticated in their development as the most sophisticated human societies have ever been, even if individual humans remain more sophisticated than individual machines.

That is a bit disconcerting, because human societies are not getting more intelligent. We’re certainly not getting more “conscious,” whatever that would mean! We don’t yet know enough about the hardware and software of human brains to understand what makes us conscious, but that is being actively studied, and once we understand the requirements, someone will inevitably build a conscious (beyond-Turing-capable) machine. So we’ll have conscious machines and conscious humans, but merely adaptive or, at best, intelligent societies. And societies are what make individual human life possible and tolerable.

So the big challenge, THE big challenge, for us humans is this: what does a future look like in which machines are more sophisticated than our own most sophisticated societies and organizations, making better decisions and achieving objectives more effectively?

You could freak out about human-robot wars and the robots winning, but what concerns me more is actually a more promising line of questioning. (It’s also why I needed more time to pursue my research last year: it can drift into some unscholarly woo-woo territory.) What does this imply for how humans will govern ourselves in the future? Can we use technology to improve the way we govern ourselves? I think we can, and we must! Because right now human societies are murdering each other: collectively we’re merely adaptive, barely intelligent, and therefore constantly making stupid decisions.

Can technology help us make better collective decisions? Can it help us with the breadth and distribution of memory (history, information, doctrine, metaheuristics, etc.), which is a core enabling factor of intelligence that we have individually but lack collectively? Can we develop technology-enabled governance systems (perhaps drawing from a deep well of human knowledge) that produce justice, peace, inclusion, and prosperity? More radically, can technology help us create not just a more intelligent society, but a conscious society—help us build a collective consciousness so entire societies can behave as a single organism in which all the parts work together for common purpose?

That last question is the big one for me, because tech-enabled governance systems are already being developed, but they’re being developed either for largely selfish reasons (tax evasion, illicit trafficking, etc.) or without consideration for the ultimate effects they’ll have on justice, peace, prosperity, and the rest. On our current path, we’re just going to keep using technology to scale up the worst that humanity has to offer.

We need to be experimenting today with how best to structure these platforms, so that we can build more intelligent societies and, perhaps in the future, more conscious ones. We’re never going to get there otherwise.