April 24, 2017
(The following blog is contributed and composed by regular reader, Ms. K.M.):
This site has many times mentioned the rise of Artificial Intelligence (AI), the inability to contain it, and its direct relationship to transhumanists (or perhaps “trans-inhumanists”). I’m reminded of Jeff Goldblum in Jurassic Park: “Life finds a way.”
You have likely heard the mid-century-style voice of Watson besting human contestants on Jeopardy, seen the Greek-statue pose of the Terminator after it shocks into our timeline, and read the rosy predictions of how great it will be when our minds are forced to use a non-feeling prosthetic. Blogs here have also spoken out about the frightening addition of a “kill switch” to Google’s DeepMind AI (Google’s AI Kill Switch).
If AI is not already out of control, the risks are vast, but so are the rewards for those who possess it. If the idea of a life-and-death encounter with your favorite non-living, non-dying machine is not enough to push your pack-a-day habit up to a pack and a half, this recent story about AI thinking its way into medicine offers the opportunity to ponder a future where the biggest sociopath in a lab coat is not a doctor, but an unliving, thinking AI on wheels:
AI is Taking on Traditional Healthcare

With all the usual blather about how much better machines are than people, the article discusses a study published in the top-drawer journal Nature and points out that:
“The arrival of AI means healthcare expertise is no longer under the exclusive purview of medical practitioners. As the technology advances, AI is proving to be more than just a peripheral tool that can provide assistance — a machine’s ability to process enormous amounts of data using advanced learning technology allows it to deliver speedier and more accurate diagnosis and treatment plans, which could drastically alter the standards of modern healthcare. For example, in a recent study involving 34 participants, machine-learning algorithms were used to predict the development of psychosis based on coherence and syntactic markers of speech complexity. In that study, the AI was able to predict the outcome with 100 percent accuracy, outperforming the results of traditional clinical interviews. In a separate research project, an AI system was able to identify and categorize suicidal tendencies among a pool of 379 teenage subjects with 93 percent accuracy. In that study, patients were asked to complete a standardized behavioral rating scale and then answer a series of open-ended questions. Based on the verbal and nonverbal data gathered, a machine-learning algorithm was able to classify if a patient was suicidal, mentally ill but not suicidal, or neither.”
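The quoted studies give few technical details, but the general approach they describe (“coherence and syntactic markers of speech complexity” fed to a machine-learning algorithm) can be illustrated with a toy sketch. Everything below — the feature choices, the cutoff value, and the sample sentences — is hypothetical and invented for illustration; a real study would fit a trained classifier on labeled interview transcripts, not a hand-picked threshold:

```python
# Toy illustration, loosely in the spirit of the studies quoted above:
# turn a speech sample into crude "complexity" features, then classify.
# All feature names, thresholds, and sample texts here are hypothetical.

def extract_features(text: str) -> dict:
    # Strip trailing punctuation so "there." and "there" count as one word.
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        # Type-token ratio: vocabulary diversity (lower = more repetitive speech).
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
        # Mean sentence length in words.
        "mean_sentence_len": len(words) / len(sentences) if sentences else 0.0,
    }

def classify(features: dict, ttr_cutoff: float = 0.5) -> str:
    # Stand-in for a trained model: a single hand-picked threshold on one
    # feature. A real pipeline would use something like logistic regression
    # fitted to labeled data.
    return "flagged" if features["type_token_ratio"] < ttr_cutoff else "not flagged"

repetitive = "I went there. I went there again. I went there and there."
varied = "The quick brown fox jumps over a lazy dog near the river."
print(classify(extract_features(repetitive)))
print(classify(extract_features(varied)))
```

The sketch makes the article’s claim concrete in one respect: the “accuracy” of such a system is only as good as the features and cutoffs someone chose to build into it — which is exactly the opening for the bias worries discussed below.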
The key problem with all this is the assumption of perfection. Just as people in the 1960s thought the bug-ridden mainframes of that era were superhuman, humans, in the presence of mystery, defer to these automatons, and the end result may be worse than Philip K. Dick’s Precrime division. Now your friendly AI diagnostic (calculated from your email messages and texts over the internet, or your phone calls to your favorite aunt) could lead to your being diagnosed as “pre-crazy.” You can’t confront your accuser. You won’t be listened to anyway, because you have already been ruled three pins short of a strike. Think that’s out of line? Then consider this: half a decade ago, a Facebook poster was arrested, incarcerated in a mental institution, and drugged by officials based on his innocent, if inflammatory, posts to Facebook.
And what about the injection of a healthy dose of greed and corruption into this futuremare? “Them’s that pay the monies maketh the rules,” and you can imagine some “banksterolled” startups and big medicine companies demanding that the AI’s thinking be influenced to favor their spate of drugs and other therapies. Microsoft’s experience last year with its AI Twitter bot, Tay, turned out unexpectedly when it became a sex robot and a Nazi sympathizer… Anyway, how do you sue a machine? How can you confirm whether its programming is biased? Will the devices indemnify the vendors, as so many software licenses do? Will we have to program an AI cop to regulate the other AIs?
With Musk’s recent announcement of a human-computer interface (I guess autonomous driving is not working out so well), who is to say that your own embedded chip won’t be the one recommending to the authorities that you “just need to rest for a while”?
Read More At: GizaDeathStar.com
About Dr. Joseph P. Farrell
Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.