
THE TRANSHUMANIST SCRAPBOOK: (HIDEOUS) METHOD IN THE EU PARLIAMENT'S MADNESS
Source: GizaDeathStar.com
Dr. Joseph P. Farrell
January 27, 2017

This story was another one that seemed to have attracted a lot of people’s attention this past week: an EU parliament committee – a completely powerless “legislative” body – has voted to give robots “rights”, along with a kill switch:

EU Parliament Committee Votes To Give Robots Rights (And A Kill Switch)

I’ve blogged previously about the sneaky jurisprudence implied in such efforts, but this one spells it all out plainly; none of my usual high octane speculation is needed:

Foreseeing a rapidly approaching age of autonomous artificial intelligence, a European Parliament committee has voted to legally bestow electronic personhood to robots. The status includes a detailed list of rights, responsibilities, regulations, and a “kill switch.”

The committee voted by 17 votes to two, with two abstentions, to approve a draft report written by Luxembourg MEP Mady Delvaux, who believes “robots, bots, androids and other manifestations of artificial intelligence” will spawn a new industrial revolution. She wants to establish a European Agency to develop rules for how to govern AI behavior. Specifically, Delvaux writes about how increased levels of autonomy in robot entities will make usual manufacturing liability laws insufficient. It will become necessary, the report states, to be able to hold robots and their manufacturers legally responsible for their acts.

Sounding at times like a governmental whisper of Isaac Asimov’s Three Laws of Robotics, the report states, “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The rules will also affect AI developers, who, according to the report, will have to engineer robots in such a way that they can be controlled. This includes a “kill switch,” a mechanism by which rogue robots can be terminated or shut down remotely. (Emphases in the original)
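Since the report only names the “kill switch” as a requirement and describes no design, here is a minimal, hypothetical Python sketch of what a remotely triggered shutdown might look like at the software level. Every name and structural choice below is my own illustrative assumption and is not drawn from the draft report.

```python
import threading
import time

# Hypothetical sketch only: an autonomous control loop that keeps running
# until a remotely settable shutdown flag is raised. The EU report specifies
# no implementation; names and structure here are invented for illustration.

shutdown_requested = threading.Event()  # would be set by a remote operator

def robot_main_loop():
    """Stand-in for the robot's normal autonomous behaviour."""
    while not shutdown_requested.is_set():
        # ... perform routine autonomous tasks here ...
        time.sleep(0.5)
    print("Kill switch engaged: halting actuators and shutting down.")

def remote_kill_switch():
    """Stand-in for a shutdown command arriving over the network."""
    shutdown_requested.set()

worker = threading.Thread(target=robot_main_loop)
worker.start()
time.sleep(2)          # the robot runs for a while...
remote_kill_switch()   # ...until the operator pulls the plug
worker.join()
```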

Now, if you’re like me, you’re seeing or sensing a huge danger here, and it makes me wonder if the water supply in Europe is being doped with anti-sanity and anti-reason drugs, for observe the implicit and explicit logical argument here:

(1) humans are persons;

(2) persons have special rights, and with them come special responsibilities (one shudders to think what “rights” mean to a Eurocrat, but we’ll assume the best and move on);

(3) human consciousness and “personhood” can be produced by machines, and artificial intelligence should constitute “electronic personhood” just like corporations are “corporate persons”;

(Of course, this is now all getting to be a little fuzzy, and as I’ve said many times, all this corporate personhood stuff is based in a theological confusion of massive proportions. But, hey, relax, because we’re modern, trendy, predominantly secularized Europeans and we needn’t bother with the niceties of mediaeval metaphysics, even if those niceties have issued in a horribly screwed-up notion like “corporations are persons” while “unborn babies are not” but robots are. For my part, the silliness of corporate personhood resides in the old adage “I’ll believe corporations are persons when the State of Texas executes one of them.” Heck, forget about murder, I’d settle for manslaughter and a long prison sentence for a few of them, but I digress.)

(4) But we need to protect humanity from the possibility that robots might go rogue and do something like found a corporation (a corporate electronic person, presumably) whose corporate charter says that its corporate electronic personhood function is to kill other persons (presumably of either the human biological sort, or the robotic electronic sort). Thus, we need a

(5) “kill switch” to “terminate the program/robot/electronic person”.

Well, in today’s wonderful transhumanist “cashless” world, why not a “kill switch” in your friendly implant when you start having “unacceptable thoughts” like using cash, or questioning the latest “narrative from Brussels”? If it’s good enough for “electronic persons,” then one can be quite certain that some insane Eurocrat, somewhere, will propose the same thing for human persons by parity of reasoning…

…a parity of reasoning that will not, of course, extend to corporations.

See you on the flip side…

Read More At: GizaDeathStar.com
____________________________________________________________

About Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

In Case You May Have Missed That Little Announcement About Artificial…

Source: GizaDeathStar.com
Dr. Joseph P. Farrell
January 19, 2017

You may have missed it, but in case you did, Mr. B.B. and many other regular readers here shared this story to make sure you didn’t miss it. And this is such a bombshell that its implications and ramifications are still percolating through my mind. The long and short of it is, Google’s “artificial intelligence” program-search engine no longer requires the quotation marks around it:

The mind-blowing AI announcement from Google that you probably missed

And just in case you read this article and are still so shocked that you’re “missing it,” here it is in all of its frightening-implications glory:

Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.

This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.

All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.

The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.

Google Translate invented its own language to help it translate more effectively.

What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.

Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.

Now, if you read closely, right after the closing remarks in the quotation above, the author of the article, Mr. Gil Fewster, added this parenthetical comment: “I’ve added a correction/retraction of this paragraph in the notes.” The correction/retraction comes in the form of a comment that Mr. Fewster directs the reader to at the end of his article, from a Mr. Chris MacDonald, who stated:

Ok slow down.
The AI didn’t invent its own language nor did it get creativity. Saying that is like saying calculators are smart and one day they’ll take all the math teachers’ jobs.

What Google found was that their framework was working even better than they expected. That’s awesome because when you’re doing R&D you learn to expect things to fail rather than work perfectly.
How it’s working is that, through all the data it’s reading, it’s observing patterns in language. What they found is that if it knew English to Korean, and English to Japanese, it could actually get pretty good results translating Korean to Japanese (through the common ground of English).

The universal language, or the interlingua, is not its own language per se. It’s the commonality found in between many languages. Psychologists have been talking about it for years. As a matter of fact, this work may be even more important to Linguistics and Psychology than it is to computer science.

We’ve already observed that swear words tend to be full of harsh sounds (“p,” “c,” “k,” and “t”) and sibilance (“s” and “f”) in almost any language. If you apply the phonetic sounds to Google’s findings, psychologists could make accurate observations about which sounds tend to universally correlate to which concepts. (Emphasis added)

Now, this puts that business of the computer teaching itself into a little less hysterical category and into a more “Chomskian” place; after all, the famous MIT linguist has been saying for decades that there’s a common universal “grammar” underlying all languages, and not just common phonemes, as Mr. MacDonald points out in the last paragraph of the above quotation.
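To make the “common ground” idea concrete, here is a toy, hypothetical Python sketch of pivot translation: translating between two languages that were never paired directly by routing through a third. The tiny dictionaries are invented for this illustration; Google’s GNMT learns a shared internal representation (the “interlingua”) rather than chaining word lookups, so this sketches only the logic, not their system.

```python
# Toy illustration of "zero-shot" translation by pivoting through a common
# language. The dictionaries below are invented for this example and are not
# from Google; GNMT uses a learned shared representation, not lookup tables.

korean_to_english = {"고양이": "cat", "물": "water"}
english_to_japanese = {"cat": "猫", "water": "水"}

def translate_korean_to_japanese(word: str) -> str:
    """Translate Korean to Japanese with no direct Korean-Japanese data,
    using English as the common ground."""
    english = korean_to_english[word]    # Korean -> English
    return english_to_japanese[english]  # English -> Japanese

print(translate_korean_to_japanese("고양이"))  # prints 猫 ("cat")
print(translate_korean_to_japanese("물"))      # prints 水 ("water")
```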

But the problem still remains: the computer took a set of patterns it had noticed in one context, recognized that those same patterns appeared in another, and then mapped them onto a new context unfamiliar to it. That, precisely, is analogical thinking; it is a topological process that seems almost innate in our every thought, and it is, precisely, the combustion engine of human intelligence (and, in my opinion, of any intelligence).

And that raises some nasty high octane speculations, particularly for those who have been following my “CERN” speculations about hidden “data correlation” experiments, for such data correlations would require massive computing power, and also an ability to perform more or less this pattern recognition and “mapping” function. The hidden implication is that if this is what Google is willing to talk about publicly, imagine what has…

Continue Reading At: GizaDeathStar.com
______________________________________________________________

Is a robot a person? EU lawmakers to roll out rules of AI engagement

Source: RTAmerica
January 17, 2017

Members of the European Parliament will soon vote on whether robots should be considered ‘electronic persons’ and lay out rules governing their interactions with human beings. Issues at stake include protecting humans from very sophisticated or powerful AIs and regulating the role of robots in the workplaces of the future. RT correspondent Peter Oliver reports.