How About Them Apples?

Source: GizaDeathStar.com
Dr. Joseph P. Farrell Ph.D.
March 24, 2017

Over the years of watching and reporting on the GMO issue on this website, one of the things many readers brought to my attention, by sharing various articles and studies, is the apparent linkage between GMO foods, with the heavy pesticide use they involve, and CCD (colony collapse disorder), as populations of honey bee colonies and other pollinators have declined dramatically since their introduction. As a result, I have also blogged about the latest gimmick to “repair” the damage: artificial drones as pollinators. It is, after all, “no big deal” if the world’s pollinator population declines or simply goes extinct; they only keep most of the world’s plant life, and most of its food supply, going. No big deal, especially if one has artificial pollinators waiting in the wings. Indeed, as I’ve previously blogged, there were scientists seriously proposing this as a means to get around the phenomenon of colony collapse disorder.

Well, according to this article shared by Mr. T.M., it’s now actually been accomplished:

Researchers use drone to pollinate a flower

The opening paragraphs say it all:

Researchers in Japan have successfully used a tiny drone to pollinate an actual flower, a task usually accomplished by insects and animals.

The remote-controlled drone was equipped with horsehairs coated with a special gel, which the researchers say was crucial to the process.

“This is the world’s first demonstration of pollination by an artificial robotic pollinator,” said Eijiro Miyako of the National Institute of Advanced Industrial Science and Technology in Japan, one of the authors of the study, which was published in the journal Chem.

And, lest the connection between pollinator population collapse and the artificial pollinator be missed, the article itself makes the connection:

But many pollinators are under threat, particularly insects like bees and butterflies. They belong to a group — invertebrate pollinators — in which 40 percent of species face extinction, according to the same report.

The drone is an attempt to address this problem: “The global pollination crisis is a critical issue for the natural environment and our lives,” the authors wrote in the study.
There is, however, a catch: it is still a long way from replacing insect pollinators, due not only to the size of the drone, but also to the lack of artificial intelligence and independent movement in the artificial pollinator itself:

The peculiarity of this project is that it focuses on the pollination process, rather than the construction of a robotic bee.

As the authors note, “practical pollination has not yet been demonstrated with the aerial robots currently available.”

However, pollination was achieved on a very large flower, and the drone was not autonomous: “I believe that some form of artificial intelligence and GPS would be very useful for the development of such automatic machines in future,” said Miyako.

Much work remains to be done before we can emulate the complex behavior of insects and animals: “There is little chance this can replace pollinators,” said Christina Grozinger, Director of the Center for Pollinator Research at Penn State University.

The hidden text here, of course, is: “we urgently need artificial intelligence in order to construct more efficient artificial pollinators.”

And that, of course, brings me to my high octane speculation of the day: suppose such artificial intelligence were constructed. And suppose, for a moment, all those artificial pollinators were under the control of a networked Artificial Intelligence, coordinating it all. Who is to say that said “intelligence” would even see the need for pollinator activity, or for the human and animal populations they ultimately help feed? Waves of AI pollinators could conceivably become plagues of AI locusts. If this be the case, the “technological fix” could end up being an even worse nightmare.

Of course, one could always solve the problem by the simple fix of what appears to be the basis of the pollinator problem: get rid of GMOs, and let nature do what she was designed to do.

That, of course, would be far too simple, and would not issue in enough research grants and profits.

Read More At: GizaDeathStar.com
________________________________________________

About Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

Google’s Artificial Intelligence Learns “Highly Aggressive” Behavior, Concept of Betrayal

Source: TheMindUnleashed.com
Cassius Methyl
February 20, 2017

An artificial intelligence created by Google recently made headlines after learning “highly aggressive” behavior.

The AI was tested in a wolfpack hunting game scenario and a fruit-gathering scenario. The wolfpack game was based on cooperation, but the fruit game got strange.

In the fruit game, the AIs were represented by colored squares moving on a grid to collect fruit squares of a different color. They racked up points, and the fruit squares would regenerate. The AIs competed against each other, like human players in a video game.

The interesting part is that they were also given the ability to damage each other: each player could attack the other by shooting beams at it.

The researchers found that the AIs became more aggressive as the fruit squares grew more scarce: when less fruit was present, they attacked each other more frequently.

Summarized by ExtremeTech:

Guess what the neural networks learned to do. Yep, they shoot each other a lot. As researchers modified the respawn rate of the fruit, they noted that the desire to eliminate the other player emerges “quite early.” When there are enough of the green squares, the AIs can coexist peacefully. When scarcity is introduced, they get aggressive. They’re so like us it’s scary.
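The scarcity dynamic in that summary can be caricatured in a few lines of code. What follows is a toy sketch of my own construction, not DeepMind’s actual environment or learning code: the “aggression” rule here is hard-coded rather than learned, and every number in it is an arbitrary assumption, purely to show how a lower fruit respawn rate mechanically produces more attacks:

```python
import random

# Toy gridworld sketch (my own construction, NOT DeepMind's code):
# two agents gather fruit; whenever no fruit is left on the grid,
# an agent "beams" its rival instead. Aggression here is hard-coded,
# not learned, purely to illustrate the scarcity dynamic described above.

def run_episode(respawn_prob, steps=2000, grid=25, max_fruit=5, seed=1):
    rng = random.Random(seed)
    fruit = set(range(max_fruit))          # cells currently holding fruit
    gathered = attacks = 0
    for _ in range(steps):
        for _agent in range(2):
            cell = rng.randrange(grid)     # agent lands on a random cell
            if cell in fruit:
                fruit.remove(cell)
                gathered += 1
            elif not fruit:                # nothing left anywhere: attack
                attacks += 1
        for cell in range(grid):           # empty cells may regrow fruit
            if cell not in fruit and len(fruit) < max_fruit \
                    and rng.random() < respawn_prob:
                fruit.add(cell)
    return gathered, attacks

# A low respawn rate (scarcity) yields far more attacks than a high one.
print(run_episode(respawn_prob=0.005))
print(run_episode(respawn_prob=0.5))
```

With a high respawn rate the agents almost never find the grid empty, so attacks stay near zero; starve the respawn rate and the attack count climbs, which is the same qualitative pattern the researchers reported.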

While this took place in a computer game rather than in actual artificially intelligent robots, it could foreshadow something else. This article doesn’t need to tell you where it could go.

Perhaps a better question would be, what is the consequence of trusting a corporation like Google to become so massive? How will this technology ever suit the bottom class when it is developed by the wealthiest?

Read More At: TheMindUnleashed.com


____________________________________________________________________

Cassius Kamarampi is a researcher and writer from Sacramento, California. He is the founder of Era of Wisdom, writer/director of the documentary “Toddlers on Amphetamine: History of Big Pharma and the Major Players,” and a writer in the alternative media since 2013 at the age of 17. He focuses primarily on identifying the exact individuals, institutions, and entities responsible for various forms of human slavery and control, particularly chemicals and more insidious forms of hegemony: identifying exactly who damages our well being and working toward independence from those entities, whether they are corporate, government, or institutional.

 

The Transhumanist Scrapbook: (Hideous) Method In The EU Parliament’s Madness
Source: GizaDeathStar.com
Dr. Joseph P. Farrell
January 27, 2017

This story was another one that seemed to have attracted a lot of people’s attention this past week: an EU parliament committee – a completely powerless “legislative” body – has voted to give robots “rights”, along with a kill switch:

EU Parliament Committee Votes To Give Robots Rights (And A Kill Switch)

I’ve blogged previously about the sneaky jurisprudence implied in such efforts, but this one spells it all out plainly; none of my usual high octane speculation is needed:

Foreseeing a rapidly approaching age of autonomous artificial intelligence, a European Parliament committee has voted to legally bestow electronic personhood to robots. The status includes a detailed list of rights, responsibilities, regulations, and a “kill switch.”

The committee voted by 17 votes to two, with two abstentions, to approve a draft report written by Luxembourg MEP Mady Delvaux, who believes “robots, bots, androids and other manifestations of artificial intelligence” will spawn a new industrial revolution. She wants to establish a European Agency to develop rules for how to govern AI behavior. Specifically, Delvaux writes about how increased levels of autonomy in robot entities will make usual manufacturing liability laws insufficient. It will become necessary, the report states, to be able to hold robots and their manufacturers legally responsible for their acts.

Sounding at times like a governmental whisper of Isaac Asimov’s Three Laws of Robotics, the report states, “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The rules will also affect AI developers, who, according to the report, will have to engineer robots in such a way that they can be controlled. This includes a “kill switch,” a mechanism by which rogue robots can be terminated or shut down remotely. (Emphases in the original)

Now, if you’re like me, you’re seeing or sensing a huge danger here, and it makes me wonder if the water supply in Europe is being doped with anti-sanity and anti-reason drugs, for observe the implicit and explicit logical argument here:

(1) humans are persons;

(2) persons have special rights, and with them come special responsibilities (one shudders to think what “rights” mean to a Eurocrat, but we’ll assume the best and move on);

(3) human consciousness and “personhood” can be produced by machines, and artificial intelligence should constitute “electronic personhood” just like corporations are “corporate persons.”

(Of course, this is now all getting to be a little fuzzy, and as I’ve said many times, all this corporate personhood stuff is based in a theological confusion of massive proportions. But, hey, relax, because we’re modern, trendy, predominantly secularized Europeans, and we needn’t bother with the niceties of mediaeval metaphysics, even if those niceties have issued in a horribly screwed-up notion like “corporations are persons” while “unborn babies are not,” but robots are. For my part, the silliness of corporate personhood resides in the old adage “I’ll believe corporations are persons when the State of Texas executes one of them.” Heck, forget about murder, I’d settle for manslaughter and a long prison sentence for a few of them, but I digress.)

(4) But we need to protect humanity from the possibility that robots might go rogue and do something like found a corporation (a corporate electronic person, presumably) whose corporate charter says that its corporate electronic personhood function is to kill other persons (presumably of either the human biological sort, or the robotic electronic sort). Thus, we need a

(5) “kill switch” to “terminate the program/robot/electronic person”.

Well, in today’s wonderful transhumanist “cashless” world, why not a “kill switch” in your friendly implant when you start having “unacceptable thoughts,” like using cash, or questioning the latest “narrative from Brussels”? If it’s good enough for “electronic persons,” then one can be quite certain that some insane Eurocrat, somewhere, will propose the same thing for human persons by parity of reasoning…

…a parity of reasoning that will not, of course, extend to corporations.

See you on the flip side…

Read More At: GizaDeathStar.com
____________________________________________________________

About Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

In Case You May Have Missed That Little Announcement About Artificial…

Source: GizaDeathStar.com
Dr. Joseph P. Farrell
January 19, 2017

You may have missed it, but in case you did, Mr. B.B. and many other regular readers here shared this story, to make sure you didn’t miss it. And this is such a bombshell that its implications and ramifications are still percolating through my mind. The long and short of it is, Google’s “artificial intelligence” program-search engine no longer requires quotation marks around it:

The mind-blowing AI announcement from Google that you probably missed

And just in case you read this article and are still so shocked that you’re “missing it,” here it is in all of its frightening-implications glory:

Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.

This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.

All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.

The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.

Google Translate invented its own language to help it translate more effectively.

What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.

Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.

Now, if you read closely, right after the closing remarks in the quotation above, the author of the article, Mr. Gil Fewster, added this parenthetical comment: “I’ve added a correction/retraction of this paragraph in the notes.” The correction/retraction comes in the form of a comment that Mr. Fewster directs the reader to at the end of his article, from a Mr. Chris MacDonald, who stated:

Ok, slow down.
The AI didn’t invent its own language, nor did it get creative. Saying that is like saying calculators are smart and one day they’ll take all the math teachers’ jobs.

What Google found was that their framework was working even better than they expected. That’s awesome, because when you’re doing R&D you learn to expect things to fail rather than work perfectly.
How it works is that, through all the data it’s reading, it’s observing patterns in language. What they found is that if it knew English to Korean, and English to Japanese, it could actually get pretty good results translating Korean to Japanese (through the common ground of English).

The universal language, or interlingua, is not its own language per se. It’s the commonality found in between many languages. Psychologists have been talking about it for years. As a matter of fact, this work may be even more important to Linguistics and Psychology than it is to computer science.

We’ve already observed that swear words tend to be full of harsh sounds (“p,” “c,” “k,” and “t”) and sibilance (“s” and “f”) in almost any language. If you apply the phonetic sounds to Google’s findings, psychologists could make accurate observations about which sounds tend to universally correlate to which concepts. (Emphasis added)
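Mr. MacDonald’s pivot point (translating Korean to Japanese through the common ground of English) can be caricatured at the word level. This is a deliberately crude toy of my own devising, with an invented three-word vocabulary; GNMT’s actual interlingua is a learned continuous representation shared across languages, not literal English words:

```python
# Toy word-level pivot translation (invented vocabulary, NOT Google's GNMT).
# English words stand in here for GNMT's learned interlingua representation,
# which is really a shared vector space rather than any natural language.

ko_to_en = {"고양이": "cat", "물": "water", "책": "book"}
en_to_ja = {"cat": "猫", "water": "水", "book": "本"}

def pivot_translate(word_ko: str) -> str:
    """Korean -> Japanese with no direct Korean-Japanese table at all."""
    interlingua = ko_to_en[word_ko]   # map into the shared representation
    return en_to_ja[interlingua]      # map out into the target language

print(pivot_translate("고양이"))  # → 猫
```

The point of MacDonald’s correction survives the caricature: nothing here is a new language, just a shared middle representation that makes the third translation pair fall out for free.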

Now, this puts that business of the computer teaching itself into a somewhat less hysterical category and into a more “Chomskian” place; after all, the famous MIT linguist has been saying for decades that there’s a common universal “grammar” underlying all languages, and not just common phonemes, as Mr. MacDonald points out in the last paragraph of the above quotation.

But the problem still remains: the computer took a set of patterns it had noticed in one context and mapped that pattern into a new context unfamiliar to it. That, precisely, is analogical thinking; it is a topological process that seems almost innate in our every thought, and that, precisely, is the combustion engine of human intelligence (and, in my opinion, of any intelligence).

And that raises some nasty high octane speculations, particularly for those who have been following my “CERN” speculations about hidden “data correlation” experiments, for such data correlations would require massive computing power, and also an ability to do more or less this pattern recognition and “mapping” function. The hidden implication is that if this is what Google is willing to talk about publicly, imagine what has…

Continue Reading At: GizaDeathStar.com
______________________________________________________________

About Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

Is a robot a person? EU lawmakers to roll out rules of AI engagement

Source: RTAmerica
January 17, 2017

Members of the European Parliament will soon vote on whether robots should be considered ‘electronic persons’ and lay out rules governing their interactions with human beings. Issues at stake include protecting humans from very sophisticated or powerful AIs and regulating the role of robots in the workplaces of the future. RT correspondent Peter Oliver reports.