Lately there have been accounts of tech moguls arguing about whether artificial intelligence (AI) will be beneficial for mankind. The general tone of these stories suggests that some of the most powerful people in the world are bickering over our future like children:
“Even one of the most respected and adored people in science and technology, Steve Wozniak, can’t seem to avoid making dramatic and sweeping statements on the future of AI, the kind that are guaranteed to garner headlines like “Steve Wozniak explains why he used to agree with Elon Musk, Stephen Hawking on A.I. — but now he doesn’t.”
The problem is that Schmidt, Musk, Woz, and Zuckerberg aren’t cast members on a reality TV show — what they say and do influences the future of STEM. And while great storytelling may revolve around conflict, science and technology should be about solving problems not starting arguments.
Eric Schmidt was smart enough to run Google, he knows damn well that Elon Musk has, at least, a basic understanding of how artificial intelligence works. And Musk knows that Mark Zuckerberg is a brilliant person, saying his understanding of AI is limited is a total manipulation: everyone’s understanding of everything is limited. Plus, Zuckerberg and Wozniak probably shouldn’t be challenging Musk’s assertion that AI could be used for evil. It already is.
None of this helps anyone. In fact, it could be argued that it makes it impossible for the general public to understand what’s really going on in the field.”
“Opinion: It’d be great if AI experts stopped arguing like children” | The Next Web | Tristan Greene | 05/25/2018
The assumption behind even critical commentary like this is that a tech mogul with a household name and billions in the bank is, de facto, a voice that carries a lot of weight in this discussion.
I have a very different take.
To make my case as clear as possible, I define AI (particularly “strong” AI) as “a non-human, electronic artifact that can learn to perform tasks, functions, analyses, decisions, etc. without having been explicitly programmed to do so, much as the human brain does.”
None of the famous people who’ve offered their deep insights on what artificial intelligence will do for humanity have demonstrated that they’re actually AI experts qualified to make such predictions. This is certainly true of Musk, and of Schmidt. It’s probably true of Wozniak as well, along with anyone who got fabulously famous and rich from starting a social media company. Zuckerberg seems to be the exception. I think he has the experience and the coding chops to at least be in the room with people for whom AI research is their sole focus.
Clearly, tech moguls deserve a prominent place in our society because they built Apple, Google, Facebook, Tesla, etc. The levels of technological change they’ve shepherded have changed the world.
Unfortunately, I’d also say that Zuckerberg and the other tech moguls in this debate have a major blind spot when making pronouncements about the future. The pace of scientific and technological change keeps accelerating, which means that over time tech mogul insight will carry diminishing weight as a predictor of future progress.
Plus, we should acknowledge that their very position as moguls should make us check our trust in their motives. For Apple, Google, Facebook, and Tesla, the future looks like Apple, Google, Facebook, and Tesla. I’m pretty sure that the future—particularly an AI future—will look nothing like what the tech moguls have already built. They’d be about as likely to nail the future of AI as John D. Rockefeller would have been speculating on how to land on the Moon.
If this is even partly true, then we should question our expectation that the tech moguls of today know enough to judge whether AI will be beneficial on net or not.
Further, the difference between real, honest-to-goodness AI (which no one has achieved, as far as we know) and systems we casually call AI but that are really just black boxes can be tricky to discern. One (“strong AI”) is the real deal, a game changer unlike anything since the invention of the steam engine, while the other (“weak AI”) can not only look like the real deal, it may be able to fool us into believing it is the real deal. On the flip side, it may sound like the plot of a sci-fi movie, but if a strong AI functions like the human brain, why wouldn’t it pretend to be a weak AI? As with real quantum computing, it’s not clear that we’ll know real AI when we see it. How do we know that one of these moguls hasn’t already been fooled?
I suspect these tech moguls and other players in the field know this. Further, I suspect they (even Musk) are really working toward three things: 1) convincing the rest of us that an AI future is inevitable and unstoppable; 2) convincing us that we have no say in the major social changes already being called “inevitable”; and 3) securing investment opportunities for their companies in AI firms as the real prize, even if what they think is AI is little more than a fancy black box.
IMAGE SOURCE: robot-3010309_1920 (CC0 Creative Commons)