Artificial Intelligence: Propaganda And Predicaments

In our not-so-brave, somewhat new world, where tech giants are rock stars as well as robber barons, it’s no surprise that, just as they’ve changed our habits, they’re also changing our concepts.

“Artificial Intelligence” (AI) is the most disruptive example of this conceptual shift in our age. Most of us have a fuzzy understanding of AI that gets fuzzier when it comes to just what algorithms, deep learning, machine learning, natural language processing, the cloud, big data, and Bayesian networks are. The idea that AI is computers drawing conclusions and solving problems like humans, without being explicitly programmed by humans to do so, makes us at least a little uncomfortable.

According to recent Gallup polling, while AI’s state of play and future prospects may be vague to the general public, people report generally positive feelings when told they’re already using AI. Still, our comfort with present-day AI sits uneasily alongside how transformational the technology is predicted to be in the future.

This rift suggests much is at stake. Our perceptions of an AI future are being managed in ways that are as important as the technology itself. Consider, for example, IBM’s Watson commercials, especially how they blur the line between science fiction and reality by humanizing a machine that is portrayed as being able to (but cannot yet) “out-think” humans.

AI perception management campaigns like those for Watson, Alexa, and the assistants from Apple and Google distract us from understanding the true state of the technology. There’s a considerable gulf between what AI can do today and what it might be able to do in the future. Though progress in the field has been considerable, much of it builds on breakthroughs that are decades old. That should make us wary of over-hyping AI’s potential.

Five problems are of particular interest. First, the alchemy problem: Google researcher Ali Rahimi drew wide attention for arguing that AI researchers “do not know why some algorithms work and others don’t, nor do they have rigorous criteria for choosing one AI architecture over another.” Closely related is the reproducibility problem, in which researchers cannot reproduce one another’s results, a problem other scientific fields are grappling with as well. Then there’s the black box problem, in which researchers have difficulty explaining how and why an AI reached its conclusions. What I call the garbage problem (as in garbage in, garbage out) is the possibility that a programmer’s prejudices and biases can affect the performance of an AI. Lastly, there’s pseudo-AI: firms using humans to do some of the work their well-hyped AIs are supposed to do. Because these problems are essentially epistemic, Rahimi’s suggestion that AI as a field of research may itself be a black box is convincing.
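To make the garbage problem concrete, here is a deliberately toy sketch. The data, the hiring scenario, and the “model” are all invented for illustration and not drawn from any real system; the point is simply that a system trained on biased historical decisions echoes that bias back.

```python
# Illustrative sketch only: a hypothetical "garbage in, garbage out" demo.
# A naive hiring model trained on biased historical decisions reproduces the bias.
from collections import Counter

# Hypothetical historical data: (years_of_experience, group, was_hired).
# Group "B" candidates were hired less often regardless of experience.
history = [
    (5, "A", True), (6, "A", True), (4, "A", True), (3, "A", False),
    (5, "B", False), (6, "B", False), (4, "B", True), (3, "B", False),
]

# "Train" the simplest possible model: per-group hire rates.
hired = Counter()
total = Counter()
for _, group, outcome in history:
    total[group] += 1
    hired[group] += outcome  # True counts as 1

def predict_hire_probability(group: str) -> float:
    """The model's prediction is just the historical rate, bias included."""
    return hired[group] / total[group]

print(predict_hire_probability("A"))  # 0.75 -- the favored group stays favored
print(predict_hire_probability("B"))  # 0.25 -- the disfavored group stays disfavored
```

A real system would use a far more sophisticated model, but the dynamic is the same: if the training data encodes a prejudice, the predictions will too, and the black box problem makes that distortion harder to spot.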

Contrast these issues with Google CEO Sundar Pichai’s claim that AI is “one of the most important things that humanity is working on. It’s more profound than, I don’t know, electricity or fire.”

There’s a relatively small number of researchers and writers highlighting AI’s predicaments, but legions of technologists and investors pushing its promises. It’s inevitable (though barely acknowledged) that bad actors will take advantage of at least some of the industry’s irrational exuberance; billionaire-making patents, IPOs, stock options, and rock star status are on the line. A potential crisis would be even worse if Rahimi is right that the field itself is a black box, which would set the stage for “fake AI”: false claims, shady characters, sham corporations, and crappy research that weaken our expectations for a better, more enlightened future. Indeed, “fake AI” could produce a few “AI Enrons.” Hopefully, real progress in the field will keep public confidence in AI from collapsing or taking a serious hit. But for that to happen, the extraordinary claims being made now will need to have more than a reasonable chance of being true.

AI may be a black box, but it’s also a moonshot. Advanced post-industrial societies need a great scientific, technological, and economic leap forward, one on par with industrial societies transforming themselves with fossil fuels, the internal combustion engine, electrification, and indoor plumbing. The return on investment these developments created transformed much of the world by making us modern and spreading prosperity in ways humans had never seen before. But that 150-year run can’t last forever. Surely some form of mature, pervasive AI could give our way of life a shot in the arm. Still, I’m crossing my fingers that it’s not a placebo.  

@ProjectDex0

IMAGE SOURCE: artificial-intelligence-3382507_1920 (CC0 Creative Commons)
