Monday, June 13, 2016

"Artificial Superintelligence"

Just read this article about Nick Bostrom and his warnings about artificial superintelligence:

In his own words:
"It is not that I believe I know how it is going to happen and have to tell the world that information. It is more I feel quite ignorant and very confused about these things but by working for many years on probabilities you can get partial little insights here and there. And if you add those together with insights many other people might have, then maybe it will build up to some better understanding."

That sums it up nicely. He has no understanding of artificial intelligence, or intelligence in general, or what it would take to build it. He plays with numbers and probabilities, but without any way of actually assigning probabilities to the scenarios he's imagining. Some people just don't seem to realize that math is only a logical transformation of whatever assumptions you put into it: if any of the assumptions is wrong, even a little bit, or if any critical piece of information is missing, then the extrapolation of those factors via math cannot possibly be trusted as predictive of reality. That's just the way math works.

So this guy, this Nick Bostrom, he knows this, in his bones. This is why he's so incredibly insecure, as that article makes clear - he is well aware, on an emotional level, that he's not on solid ground. And he wants to be on solid ground, just like any human. So to "test" these ideas of his, he tries to get other people to believe what he believes. It's an instinctive social mechanism for testing ideas that you can't actually test - see how many people you can persuade of them... Except that it's not a very good mechanism, because humans are still very fallible, and he still doesn't feel like he's on solid ground. So he needs more money, more resources, more people to work on his ideas, because this is just too critical - after all, he believes the future of humanity is at stake... And people actually give him money, because his arguments were persuasive enough for some people with too much money on their hands. So he gets more power, more influence, and uses it to do more math and try to persuade more people - the two approaches that can't possibly ground you...

I feel sorry for the guy, I really do. He seems entirely lacking in the one thing that could actually ground him - a bottom-up understanding of what intelligence actually is, and how one might go about building an "agent" that's truly intelligent. If only he turned his attention to the various challenges involved in actually building such a system, he might begin to comprehend how much more challenging it would be to build a system that can itself build an even better system... After all, our world is full of creatures of human-level intelligence, and so far none of them has come even close to creating a system that even vaguely approaches the full spectrum of capabilities they themselves possess. The idea that tinkering with various attempts to build such systems is in any way an existential threat (more than climate change or superbugs!) is so laughable that I just wish the guy spent more time around those who are actually trying to build these systems, to see what that actually looks like, and less time in his office with the blackboard and the lamps... It's only that interaction with reality, the trying and failing and learning from the experience, that has a real chance of actually grounding us. If only we ignored all the social chatter and wishful thinking that surround these endeavors...