New “Smart Phone” Will Be Quietly Studying Your Behavior And Reacting In Real-Time

Bernie Suarez
April 21, 2017

The march towards an Orwellian future where every form of human behavior is being monitored by AI-driven appliances and electronics is quickly becoming a reality. This was the plan from the start and as we can see the ruling elite have not slowed down one bit in their attempt to create this kind of world.

It is thus no surprise that Samsung is releasing two new smart phones this week, the Galaxy S8 and S8+, which include software called “Bixby” that will study your behavior in real time and react, respond, and “learn” from you accordingly.

The new Samsung S8 smart phone represents one of the first portable devices released to the general public in which the owner will officially be creating a two-way relationship with the machine. The phone carries a feature, which can be activated, that requires biometric data (an iris scan and facial recognition) in order to use the phone, and you can be sure many people will take advantage of it. Passwords and PIN numbers will soon be no more as the masses are conditioned into using biometric data.

The Bixby software in the latest Samsung smart phone will:

“learn your routines to serve up the right apps at the right time”

And it:

“is an intelligent interface that learns from you to help you do more.”

The dark technocratic future of humanity is now coming into full view as the technocratic ruling elite continue to push their technocracy and technology without end.

The questions we need to be asking ourselves are: when will humanity as a whole start pushing back? When will those in the Department of Justice demand an end to unnecessary research? When will people in large numbers call this out and consider certain advanced scientific research unethical? When will the line be crossed where additional research into areas like artificial intelligence (AI) is considered an act of aggression against humanity, or even a punishable crime? I’m referring to the people of the nation states demanding that their right to privacy be observed and holding elected officials accountable. And I’m also referring to consumers becoming aware of what they buy and choosing where and how they spend their money.

I believe all of this is part of the JADE (at the) HELM revelations of 2015-2016: a plan for “mastering the human domain,” made possible by companies like Raytheon, whose BBN unit has been pushing a highly sophisticated AI platform that incorporates super-high-speed real-time learning, adaptation, and awareness features. (See video below for more.)


Stopping the agenda of the technocrats is the solution. People everywhere must step up, begin paying attention and start fighting back. And the very first thing we need to do is shine a bright light on this issue, spread the awareness and hope that enough people see the problem. That means educating the stubborn “Progressive” Liberal Left and anyone else who has a blind faith in technology. This awareness must then turn into action. Here are some solutions.

1- Throw away, disable or stop using your smart phone or any smart device around your home for that matter.

2- Replace your smart phone with a non-smart phone.

3- Change your paradigm and see if you can free yourself from cell phones altogether.

4- Realize that smart phones and smart devices are slowly being pulled into the equation for work and survival. Demand that your boss (if you work for someone) not force you to use a smart device. Let’s challenge the legality of this growing practice now. Pointing to the dangers of the EMF waves emitted by these devices alone may give us the legal grounds to stop employers from requiring their use.

5- If you must have a cell phone (like many of us do) then learn all the safest ways to use it which include among other things keeping it as far away from your body as possible.

6- As best you can, stop texting your friends and family for every little thing and go back to talking to each other and meeting in person.

7- Make a focused effort to raise awareness of this issue and help spread the word.

Finally, realize that Technocracy is just another form of oppression. Despite all the voices of today that glorify technological advancement, realize that humans are made to move around and work while on earth. It’s the large corporations that want to replace all forms of labor with robots. It’s the ruling elite in control of the nation states who want every form of behavior recorded with the aid of their technology, and very little to none of this actually benefits humanity. While many inventions of the 19th century improved the human experience, the technological advancements of today go far beyond what humanity needs to be happy and healthy. Let’s learn to identify this problem quickly so we can implement solutions here and now.


Related video

Bernie is a revolutionary writer with a background in medicine, psychology, and information technology. He is the author of The Art of Overcoming the New World Order and has written numerous articles over the years about freedom, government corruption, conspiracies, and solutions. Bernie is also the creator of the Truth and Art TV project, where he shares articles and videos about issues that raise our consciousness and offer solutions to our current problems. As a musician and artist, his efforts are designed to appeal to intellectuals, the working class, and artists alike, and to encourage others to fearlessly and joyfully stand for truth. His goal is to expose government tactics of propaganda, fear, and deception, and to address the psychology of dealing with the rising new world order. He is also a former U.S. Marine who believes it is our duty to stand for and defend the U.S. Constitution against all enemies, foreign and domestic. He believes information and awareness is the first step toward freedom from the control system that now threatens humanity, and that love conquers all fear. It is up to each and every one of us to manifest the solutions and the change we want to see in this world, because doing so is what will ensure the victory and restoration of the human race and offer hope to future generations.

The Device Will See You Now

April 24, 2017

(The following blog is contributed and composed by regular reader, Ms. K.M.):

This site has many times mentioned the rise of Artificial Intelligence (AI), the inability to contain it, and its direct relationship to transhumanists (or perhaps “trans-inhumanists”). I’m reminded of Jeff Goldblum in Jurassic Park: “Life finds a way.”

You have likely heard the mid-century-style voice of Watson besting human contestants on Jeopardy, seen the Greek-statue pose of the Terminator after it shocks into our timeline, and read the rosy predictions of how great it will be when our minds are forced to use a non-feeling prosthetic. Blogs here have also spoken out about the frightening addition of a “kill switch” to Google’s DeepMind AI (Google’s AI Kill Switch).

If AI is not already out of control, the risks are vast, but so are the rewards for those who possess it. If the idea of a life-and-death encounter with your favorite non-living, non-dying machine is not enough to get your pack-a-day habit up to a pack and a half, this recent story about AI thinking its way into medicine offers us the opportunity to ponder a future where the biggest sociopath in a lab coat is not a doctor, but an unliving, thinking AI on wheels:

AI is Taking on Traditional Healthcare

With all the usual blather about how much better machines are than people, the article discusses a study published in the top-drawer journal Nature and points out that:

“The arrival of AI means healthcare expertise is no longer under the exclusive purview of medical practitioners. As the technology advances, AI is proving to be more than just a peripheral tool that can provide assistance — a machine’s ability to process enormous amounts of data using advanced learning technology allows it to deliver speedier and more accurate diagnosis and treatment plans, which could drastically alter the standards of modern healthcare. For example, in a recent study involving 34 participants, machine-learning algorithms were used to predict the development of psychosis based on coherence and syntactic markers of speech complexity. In that study, the AI was able to predict the outcome with 100 percent accuracy, outperforming the results of traditional clinical interviews. In a separate research project, an AI system was able to identify and categorize suicidal tendencies among a pool of 379 teenage subjects with 93 percent accuracy. In that study, patients were asked to complete a standardized behavioral rating scale and then answer a series of open-ended questions. Based on the verbal and nonverbal data gathered, a machine-learning algorithm was able to classify if a patient was suicidal, mentally ill but not suicidal, or neither.”
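Mechanically, the classifiers described in that quote sort patients into categories by comparing measured features of their speech against patterns seen in earlier patients. A minimal sketch of one such scheme (nearest-centroid classification) follows; the feature names and every number here are invented for illustration, and the actual studies used far richer verbal and nonverbal data and more sophisticated algorithms:

```python
def centroid(rows):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid(sample, centroids):
    """Return the label whose centroid lies closest (Euclidean) to sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Invented training data: [speech_coherence, pause_length, negative_word_rate]
training = {
    "suicidal":     [[0.2, 0.9, 0.8], [0.3, 0.8, 0.9]],
    "mentally_ill": [[0.4, 0.6, 0.6], [0.5, 0.5, 0.5]],
    "neither":      [[0.9, 0.1, 0.1], [0.8, 0.2, 0.2]],
}
centroids = {label: centroid(rows) for label, rows in training.items()}

# A new patient's features are assigned the label of the nearest centroid.
print(nearest_centroid([0.25, 0.85, 0.85], centroids))  # → suicidal
```

The point of the sketch is only that such a system has no clinical judgment at all: it reports whichever stored pattern a patient's numbers happen to fall nearest to.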

The key problem with all this is the assumption of perfection. Just as people in the 1960s thought the bug-ridden mainframes of that era were superhuman, humans, in the presence of mystery, defer to these automatons, and the end result may be worse than Philip K. Dick’s Department of Precrime. Now your friendly AI diagnostic (calculated from your email messages and texts over the internet, or from your phone calls to your favorite aunt) could lead you to be diagnosed as “pre-crazy.” You can’t confront your accuser. You won’t be listened to anyway, because you have already been ruled three pins short of a strike. Think that’s out of line? Then consider this: half a decade ago, a Facebook poster was arrested, incarcerated in a mental institution, and drugged by officials based upon his innocent if inflammatory posts to Facebook.

And what about the injection of a healthy dose of greed and corruption into this futuremare? “Them’s that pay the monies maketh the rules,” and you can imagine “banksterolled” startups and big medicine companies demanding that the AI’s thinking be influenced to favor their spate of drugs and other therapies. Microsoft’s experiment last year with an AI on Twitter turned out in an unexpected way when it became a sex robot and a Hitler and Nazi sympathizer… Anyway, how do you sue a machine? How can you confirm its programming is or is not biased? And will the devices indemnify the vendors, as so many software licenses do? Will we have to program an AI cop to regulate other AIs?

With Musk’s recent announcement of a human-computer interface (I guess autonomous driving is not working out so well), who is to say that your own embedded chip won’t be the one recommending to the authorities that you “just need to rest for a while”?


About Dr. Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

How About Them Apples?

Dr. Joseph P. Farrell Ph.D.
March 24, 2017

Over the years of watching and reporting on the GMO issue on this website, one of the things many readers have brought to my attention, by sharing various articles and studies, is the apparent linkage to CCD (colony collapse disorder), as the populations of honeybee colonies and other pollinators have dramatically declined since the introduction of GMO foods and the heavy pesticides they involve. As a result, I have also blogged about the latest gimmick to “repair” the damage: artificial drones as pollinators. It is, after all, “no big deal” if the world’s pollinator population declines or simply goes extinct; after all, they only keep most of the world’s plant life, and most of its food supply, going. No big deal, especially if one has artificial pollinators waiting in the wings. Indeed, as I’ve previously blogged, there were scientists seriously proposing this as a means to get around the phenomenon of colony collapse disorder.

Well, according to this article shared by Mr. T.M., it’s now actually been accomplished:

Researchers use drone to pollinate a flower

The opening paragraphs say it all:

Researchers in Japan have successfully used a tiny drone to pollinate an actual flower, a task usually accomplished by insects and animals.

The remote-controlled drone was equipped with horsehairs coated with a special gel, which the researchers say was crucial to the process.

“This is the world’s first demonstration of pollination by an artificial robotic pollinator,” said Eijiro Miyako of the National Institute of Advanced Industrial Science and Technology in Japan, one of the authors of the study, which was published in the journal Chem.

And, lest the connection between pollinator population collapse and the artificial pollinator is missed, the article itself makes the connection:

But many pollinators are under threat, particularly insects like bees and butterflies. They belong to a group — invertebrate pollinators — in which 40 percent of species face extinction, according to the same report.

The drone is an attempt to address this problem: “The global pollination crisis is a critical issue for the natural environment and our lives,” the authors wrote in the study.

There is, however, a catch: it is still a long way from replacing insect pollinators, due not only to the size of the drone, but also to the lack of artificial intelligence and independent movement in the artificial pollinator itself:

The peculiarity of this project is that it focuses on the pollination process, rather than the construction of a robotic bee.

As the authors note, “practical pollination has not yet been demonstrated with the aerial robots currently available.”

However, pollination was achieved on a very large flower, and the drone was not autonomous: “I believe that some form of artificial intelligence and GPS would be very useful for the development of such automatic machines in future,” said Miyako.

Much work remains to be done before we can emulate the complex behavior of insects and animals: “There is little chance this can replace pollinators,” said Christina Grozinger, Director of the Center for Pollinator Research at Penn State University.

The hidden text here: “we urgently need artificial intelligence in order to construct more efficient artificial pollinators.”

And that, of course, brings me to my high octane speculation of the day: suppose such artificial intelligence were constructed. And suppose, for a moment, that all those artificial pollinators were under the control of a networked Artificial Intelligence coordinating it all. Who is to say that said “intelligence” would even see the need for pollinator activity, or for the human and animal populations it ultimately helps feed? Waves of AI pollinators could conceivably become plagues of AI locusts. If that is the case, the “technological fix” could end up being an even worse nightmare.

Of course, one could always solve the problem by the simple fix of what appears to be the basis of the pollinator problem: get rid of GMOs, and let nature do what she was designed to do.

That, of course, would be far too simple, and would not generate enough research grants and profits.



Google’s Artificial Intelligence Learns “Highly Aggressive” Behavior, Concept of Betrayal

Cassius Methyl
February 20, 2017

An artificial intelligence created by Google recently made headlines after learning “highly aggressive” behavior.

The AI was tested in two scenarios: a wolfpack hunting game and a fruit-gathering game. The wolfpack game was based on cooperation, but the fruit game got strange.

In the fruit game, the AI agents were represented by colored squares, moving on a grid to collect fruit squares of a different color. They racked up points, and the fruit squares would regenerate. The agents competed on their own, like human players in a video game.

The interesting part is that they were also given the ability to damage each other: an agent could attack the other player by shooting beams at it.

The researchers found that the AI agents became more aggressive as the fruit squares became more scarce: when less fruit was present, they attacked each other more frequently.

As summarized by ExtremeTech:

Guess what the neural networks learned to do. Yep, they shoot each other a lot. As researchers modified the respawn rate of the fruit, they noted that the desire to eliminate the other player emerges “quite early.” When there are enough of the green squares, the AIs can coexist peacefully. When scarcity is introduced, they get aggressive. They’re so like us it’s scary.
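The scarcity-aggression relationship the researchers measured can be illustrated with a crude toy simulation. To be clear, this is nothing like DeepMind's actual experiment: their agents learned their behavior through deep reinforcement learning, whereas the decision rule below is hard-coded purely to show how an emptier grid mechanically produces more beam-firing. All parameters are invented:

```python
import random

def gathering_round(respawn_rate, steps=1000, seed=0):
    """Toy two-agent gathering game: on each step, an agent either collects
    a fruit square (if any are present) or fires a beam at its rival.
    Lower respawn rates mean emptier grids, and thus more beam fire."""
    rng = random.Random(seed)
    fruit = 0   # fruit squares currently on the grid
    beams = 0   # beams fired so far
    for _ in range(steps):
        if rng.random() < respawn_rate:
            fruit += 1          # a fruit square respawns
        if fruit > 0:
            fruit -= 1          # an agent grabs a fruit
        else:
            beams += 1          # nothing to gather: zap the other agent
    return beams

plentiful = gathering_round(respawn_rate=0.9)
scarce = gathering_round(respawn_rate=0.1)
print(plentiful, scarce)  # scarcity produces far more beam fire
```

Even this scripted version reproduces the headline pattern: with plentiful fruit the agents can "coexist peacefully," and aggression only dominates once the resource runs short.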

While this played out in a simple computer game rather than in actual artificially intelligent robots, it could foreshadow something else. This article doesn’t need to tell you where it could go.

Perhaps a better question would be, what is the consequence of trusting a corporation like Google to become so massive? How will this technology ever suit the bottom class when it is developed by the wealthiest?




Cassius Kamarampi is a researcher and writer from Sacramento, California. He is the founder of Era of Wisdom, writer/director of the documentary “Toddlers on Amphetamine: History of Big Pharma and the Major Players,” and a writer in the alternative media since 2013, beginning at the age of 17. He focuses primarily on identifying the exact individuals, institutions, and entities responsible for various forms of human slavery and control, particularly chemicals and more insidious forms of hegemony: identifying exactly who damages our well-being, and working toward independence from those entities, whether they are corporate, government, or institutional.


The Transhumanist Scrapbook: (Hideous) Method In The EU…

Dr. Joseph P. Farrell
January 27, 2017

This story was another one that seemed to have attracted a lot of people’s attention this past week: an EU parliament committee – a completely powerless “legislative” body – has voted to give robots “rights”, along with a kill switch:

EU Parliament Committee Votes To Give Robots Rights (And A Kill Switch)

I’ve blogged previously about the sneaky jurisprudence implied in such efforts, but this one spells it all out plainly; none of my usual high octane speculation is needed:

Foreseeing a rapidly approaching age of autonomous artificial intelligence, a European Parliament committee has voted to legally bestow electronic personhood to robots. The status includes a detailed list of rights, responsibilities, regulations, and a “kill switch.”

The committee voted by 17 votes to two, with two abstentions, to approve a draft report written by Luxembourg MEP Mady Delvaux, who believes “robots, bots, androids and other manifestations of artificial intelligence” will spawn a new industrial revolution. She wants to establish a European Agency to develop rules for how to govern AI behavior. Specifically, Delvaux writes about how increased levels of autonomy in robot entities will make usual manufacturing liability laws insufficient. It will become necessary, the report states, to be able to hold robots and their manufacturers legally responsible for their acts.

Sounding at times like a governmental whisper of Isaac Asimov’s Three Laws of Robotics, the report states, “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The rules will also affect AI developers, who, according to the report, will have to engineer robots in such a way that they can be controlled. This includes a “kill switch,” a mechanism by which rogue robots can be terminated or shut down remotely. (Emphases in the original)

Now, if you’re like me, you’re seeing or sensing a huge danger here, and it makes me wonder if the water supply in Europe is being doped with anti-sanity and anti-reason drugs, for observe the implicit and explicit logical argument here:

(1) humans are persons;

(2) persons have special rights, and with them come special responsibilities (one shudders to think what “rights” mean to a Eurocrat, but we’ll assume the best and move on);

(3) human consciousness and “personhood” can be produced by machines, and artificial intelligence should constitute “electronic personhood,” just as corporations are “corporate persons.”

(Of course, this is now all getting to be a little fuzzy, and as I’ve said many times, all this corporate personhood stuff is based in a theological confusion of massive proportions. But, hey, relax: we’re modern, trendy, predominantly secularized Europeans, and we needn’t bother with the niceties of mediaeval metaphysics, even if those niceties have issued in a horribly screwed-up notion like “corporations are persons” while “unborn babies are not,” but robots are. For my part, the silliness of corporate personhood resides in the old adage, “I’ll believe corporations are persons when the State of Texas executes one of them.” Heck, forget about murder; I’d settle for manslaughter and a long prison sentence for a few of them. But I digress.)

(4) But we need to protect humanity from the possibility that robots might go rogue and do something like found a corporation (a corporate electronic person, presumably) whose corporate charter says that its corporate electronic personhood function is to kill other persons (presumably of either the human biological sort, or the robotic electronic sort). Thus, we need a

(5) “kill switch” to “terminate the program/robot/electronic person”.

Well, in today’s wonderful transhumanist “cashless” world, why not a “kill switch” in your friendly implant for when you start having “unacceptable thoughts,” like using cash or questioning the latest “narrative from Brussels”? If it’s good enough for “electronic persons,” then one can be quite certain that some insane Eurocrat, somewhere, will propose the same thing for human persons by parity of reasoning…

…a parity of reasoning that will not, of course, extend to corporations.

See you on the flip side…



In Case You May Have Missed That Little Announcement About Artificial…

Dr. Joseph P. Farrell
January 19, 2017

You may have missed it, but in case you did, Mr. B.B. and many other regular readers here shared this story to make sure you didn’t miss it. And this is such a bombshell that its implications and ramifications are still percolating through my mind. The long and short of it is, Google’s “artificial intelligence” translation engine no longer requires the quotation marks around it:

The mind-blowing AI announcement from Google that you probably missed

And just in case you read this article and are still so shocked that you’re “missing it,” here it is in all of its frightening-implications glory:

Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.

This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.

All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.

The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.

Google Translate invented its own language to help it translate more effectively.

What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.

Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.

Now, if you read closely, right after the closing remarks in the quotation above, the author of the article, Mr. Gil Fewster, added this parenthetical comment: “I’ve added a correction/retraction of this paragraph in the notes.” The correction/retraction comes in the form of a comment, to which Mr. Fewster directs the reader at the end of his article, from a Mr. Chris MacDonald, who stated:

Ok slow down.
The AI didn’t invent its own language nor did it get creativity. Saying that is like saying calculators are smart and one day they’ll take all the math teachers’ jobs.

What Google found was that their framework was working even better than they expected. That’s awesome, because when you’re doing R&D you learn to expect things to fail rather than work perfectly.

How it works is that, through all the data it’s reading, it’s observing patterns in language. What they found is that if it knew English to Korean, and English to Japanese, it could actually get pretty good results translating Korean to Japanese (through the common ground of English).

The universal language, or the interlingua, is not its own language per se. It’s the commonality found in between many languages. Psychologists have been talking about it for years. As a matter of fact, this work may be even more important to Linguistics and Psychology than it is to computer science.

We’ve already observed that swear words tend to be full of harsh sounds (“p,” “c,” “k,” and “t”) and sibilance (“s” and “f”) in almost any language. If you apply the phonetic sounds to Google’s findings, psychologists could make accurate observations about which sounds tend to universally correlate to which concepts. (Emphasis added)
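MacDonald's pivot explanation can be sketched in miniature. GNMT's real machinery is a neural network operating on learned representations, not lookup tables; the dictionary version below only shows the compositional idea, that knowing Korean→English and English→Japanese already implies a Korean→Japanese path, with English standing in for the shared "interlingua." The word lists are tiny invented stand-ins:

```python
# Toy bilingual "models": each is just a word-level mapping.
ko_to_en = {"고양이": "cat", "물": "water", "책": "book"}
en_to_ja = {"cat": "猫", "water": "水", "book": "本"}

def pivot_translate(word, first_leg, second_leg):
    """Translate by composing two known directions through a shared
    intermediate representation (the pivot)."""
    return second_leg[first_leg[word]]

# Korean → Japanese was never trained directly; it falls out of composition.
print(pivot_translate("고양이", ko_to_en, en_to_ja))  # → 猫
```

The surprise in Google's result was that the network appeared to route such zero-shot translations through its own internal shared representation rather than through literal English text, which is what prompted the "invented its own language" headlines in the first place.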

Now, this puts that business about the computer teaching itself into a slightly less hysterical category and into a more “Chomskyan” place; after all, the famous MIT linguist has been saying for decades that there is a common universal “grammar” underlying all languages, and not just common phonemes, as Mr. MacDonald points out in the last paragraph of the above quotation.

But the problem still remains: the computer took a set of patterns it noticed in one context, recognized that it appeared in another context, and then mapped that pattern into a new context unfamiliar to it. That, precisely, is analogical thinking; it is a topological process that seems almost innate to our every thought, and that, precisely, is the combustion engine of human intelligence (and, in my opinion, of any intelligence).

And that raises some nasty high octane speculations, particularly for those who have been following my “CERN” speculations about hidden “data correlation” experiments, for such data correlations would require massive computing power, and also an ability to do more or less this pattern recognition and “mapping” function. The hidden implication with that is that if this is what Google is willing to talk about publicly, imagine what has…

