Dr. Joseph P. Farrell
March 29, 2016
If you’re like Nick Bostrom, Isaac Asimov, or (not to put myself on their level) me, you probably have a few, nay, probably many misgivings about the idea of artificial intelligence and the coming “robot revolution.” Asimov, in his typically perspicacious way, explored the ethical and moral issues of artificial intelligences and robots in his sci-fi classic, I, Robot, which was later adapted into a film. There, as we know, VIKI, an artificial intelligence super-computer, takes over the world’s robots and basically imprisons humanity.

For some of us, following the weirdness in financial markets, for example, the “dark pools” and algorithmic trading that now constitute the bulk of commodities and equities trading are tailor-made for all sorts of A.I. trouble. Even the popular American television series (one of my favorites, incidentally) Person of Interest explores not only the dangers of A.I., but of two such artificial intelligences battling it out with each other, with humanity caught in the middle. In one episode, the “evil” A.I. gives a little demonstration of its “powers” when it deliberately crashes the stock markets in mere seconds, and then, just as quickly, rectifies them. Meanwhile, Oxford philosopher Nick Bostrom has been sounding warnings about A.I. for many years.
Well, if the following story shared by Mr. A. is any indicator, Bostrom’s and Asimov’s concerns may be entirely justified:
Consider just the disturbing implications about the new robot “Sophia” as outlined in this paragraph:
It is important to note several things that Hanson mentions. Sophia first tells us that she would like to be “an ambassador” to humans, as well as to continue her evolution through formal education, studying art and eventually creating a business and having a family. Hanson explicitly states that Sophia will become as “conscious, creative, and capable as any human.” This statement is followed by a key mention of not having the rights of a human. This might seem absurd to the uninitiated, but this is a serious ethical discussion that has been taking place among “roboethicists.” This is all but guaranteed to gain steam as robots are integrated in autonomous ways, whether on the battlefield, as self-driving vehicles (now programmed to sacrifice some humans over others), or certainly as they become visually and intelligently on par with human beings. Even the mainstream Boston Globe addressed this more than two years ago, citing a 2012 paper from MIT.
At this juncture, the article goes on to mention the existence of – get this! – a Society for the Prevention of Cruelty to Robots, and this in a society that chops up the unborn, sells their parts, harvests human organs, and makes people pay for the whole “privilege.”