The amped-up efforts by Facebook and Twitter to tone down blatant “misinformation” on the campaign trail merit public support. Personally, I find myself trying, with limited success, to tune out the political noise while sensing that the problem goes beyond that.
The rhetoric of politics overall sounds tired and anachronistic, but then, to my ear, so does much of the dialogue on the popular streamers we binge on. Further, check out the “virtual learning” classes that now pass for “education,” and you run into even lazier forms of communication. We all decided the earth was flat even before the new Netflix documentary, titled The Social Dilemma, pointed up the random anti-truths directed our way.
So while misinformation is being challenged by the social media monoliths, my techno-nerd friends remind me that the demise of honest communication demands a more drastic approach. Their solution? Get ready to groan — remember, they’re nerds.
Their solution is to alert us to the expanding tools of neuro-symbolic AI — artificial intelligence. For most of us, AI conjures an old Steven Spielberg movie in which a robotic Haley Joel Osment keeps flunking the tests of his Cybertronics instructors. Little wonder the poor kid kept saying “I see dead people” (oops, different movie).
But a San Francisco-based software company called OpenAI last month unveiled a system that can write coherent essays, design software applications and even propose recipes for breakfast burritos – that is, if fed the appropriate maze of symbols. It’s called “deep learning,” but it could even lead to “deep communicating.”
Mike Davies, director of Intel Corp.’s neuromorphic computing lab, contends that neuro-symbolic AI could eventually deliver voice assistants tailored to user needs, analyze problems or even, some day, write film scripts — or political speeches.
“These systems are still nascent but you could imagine that as the technology progresses, entirely new fields could emerge in terms of advertising or media,” Francesco Marconi, founder of Applied XL, told the Wall Street Journal. His company generates briefs on health and environmental data. “They will become effective at assisting people because they’ll be able to understand and communicate.”
The ultimate aim is to build support for a sort of Manhattan Project for AI, akin to the effort that produced the atom bomb. Spending on this technology could grow to $3.2 billion by 2023, according to IDC, a research firm that expects future support from health care, banking and retail. Yann LeCun, chief AI scientist at Facebook, insists we are in sight of creating a machine that can “learn how the world works by watching video, listening to audio and reading text.”
Given the critical results of its own self-audits, Facebook is under growing pressure to police hate speech, with AI-based censors potentially mobilized to crack down on targeted content. Thus extremists who argue that conservationists triggered the fires in Oregon could no longer aim their social media propaganda directly at any user who happens to check on fires or conservation.
But advances must come from sources even more esoteric than AI, some scientists insist. In a new book titled Livewired, David Eagleman, a neuroscientist, argues that the increasingly important field of brain science itself will nurture the development of artificial “neural” networks.
As these networks proliferate, they will be augmented by “machines that themselves can learn, and adjust to new surroundings,” such as self-driving cars or power grids distributing electricity.
Argues Eagleman, “The capacity to grow new neural circuits isn’t just a response to trauma – it’s with all of us every day and it forms the basis of all learning.”
So here’s the epiphany: Given the heightened sophistication of our neural circuitry, political candidates may actually have to talk honestly to us. And there is nothing more intimidating to a political candidate than an intelligent audience — even if it’s artificially intelligent.