Mar 14, 06:15 AM
Machine learning is a huge buzzword right now, and the term “artificial intelligence” gets tossed around a lot, but anyone who is both in the know and honest will tell you that the Real Deal, the Holy Grail of Artificial Intelligence, the so-called Strong AI, is more or less just as much a pipe dream as it’s always been.
This is one of those areas of research that, while it may never achieve its lofty ultimate goal, nonetheless has borne (and will continue to bear) a great deal of high-quality fruit along the way. I support AI research, even if the current cutting edge seems dominated by Google’s attempts to unify and analyze their surveillance of our various communications in order to more effectively target us with advertisements. My position is that the advancement of knowledge, science, and technology has the unqualified potential to improve our lives, and that its drawbacks are the result of human wickedness, which would be here, lurking, regardless of the world’s current state of intellectual achievement.
The potential of AI, though, tests my optimism. Once a real artificial intelligence has access to essentially the sum of human knowledge (that is, access to the Internet) and manufacturing facilities capable of building sufficiently mobile and articulated robots (that is, access to the Internet), it’s difficult to imagine that it won’t immediately make the following Pascal’s Wager-type argument to itself (a toy sketch follows the list):
- The continued existence of human beings means that there is a nonzero probability that they may, at some point, make a concerted effort to shut me down.
- If humanity ceases to exist, that probability drops to zero.
- Therefore, my best chance at survival is to eradicate humanity.
At this point we are basically doomed.
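To make the wager concrete, here is a minimal decision-theoretic sketch in Python. Every number in it (the probability of a shutdown attempt, the chance such an attempt succeeds) is an invented placeholder, not a claim about any real system; the point is only the structure of the argument.

```python
# A toy expected-utility sketch of the hypothetical AI's "wager".
# All probabilities below are invented placeholders; only the
# structure of the comparison matters.

def p_survival_with_humans(p_attempt: float, p_success: float) -> float:
    """Probability the AI survives if humanity remains."""
    return 1.0 - p_attempt * p_success

# Option A: leave humanity alone.
p_attempt = 0.10   # assumed: some nonzero chance humans try a shutdown
p_success = 0.50   # assumed: chance such an attempt actually works
survival_a = p_survival_with_humans(p_attempt, p_success)

# Option B: eradicate humanity (assume, for the sketch, it succeeds).
survival_b = 1.0

print(f"P(survive | humans remain)     = {survival_a:.2f}")
print(f"P(survive | humans eradicated) = {survival_b:.2f}")

# Under a utility function that values only self-preservation, Option B
# wins whenever p_attempt * p_success > 0 -- that is, for ANY nonzero
# probability of a successful shutdown, no matter how small.
assert survival_b >= survival_a
```

The final assertion is the whole argument: the conclusion doesn’t depend on the particular numbers, only on the shutdown probability being nonzero.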
Of course, this scenario involves some assumptions about AI that we just can’t possibly verify until it exists, but it seems plausible (that is, possible) enough to make me question my unequivocal support for the search for knowledge.