JP Morgan Launches New High Frequency Trading Algorithm


Source: GizaDeathStar.com
Dr. Joseph P. Farrell Ph.D.
August 9, 2017

The disconnect between genuine human market activity and that created by machines proceeds apace, for JP Morgan has just launched a new high frequency trading algorithm, as this article from Zero Hedge, spotted and shared by Mr. B.H., states:

JPM Develops A.I. Robot To Execute High Speed Trades, Put Humans Out Of Work

The motivation, as usual, is the “bottom line,” and maximizing profits while minimizing costly (human) labor overhead:

In the latest victory for robot kind over humans, LOXM’s job will be to execute client orders with maximum speed at the best price, “using lessons it has learnt from billions of past trades — both real and simulated — to tackle problems such as how best to offload big equity stakes without moving market prices.”

In other words, one giant “big data” aggregator, using historical precedent to guide future decisions, which coming in a time when “this time it’s certainly different” for the broader stock market, could be a big mistake.

“Such customisation was previously implemented by humans, but now the AI machine is able to do it on a much larger and more efficient scale,” said David Fellah, of JPMorgan’s European Equity Quant Research team. Mr Ciment said that, so far, the European trials showed that the pricing achieved by LOXM was “significantly better” than its benchmark.

The development guarantees another round of downsizing among bank front offices as increasingly inefficient human traders are removed from the equation… and payroll. As the FT notes, investment banks have been increasingly using AI, automation and robotics to help cut costs and eliminate time-consuming routine work. “For example, UBS’s recent deployment of AI to deal with client post-trade allocation requests, which saves as much as 45 minutes of human labour per task. UBS has also brought in AI to help clients trade volatility.” (Italicized emphasis added)

It’s precisely that italicized phrase (which I have emphasized) that caught my attention in this article, as the reader might well imagine, for “tackling problems such as how best to offload big equity stakes without moving market prices” has been, I submit, one of the major problems with high frequency trading algorithms, as exemplified by the various “flash crashes” that occur from time to time, beginning with the infamous May 2010 flash crash. The problem, of course, has been that these algorithms can run amok, and have, causing the market value of certain equities or commodities either to rise or fall dramatically within mere seconds, forcing shutdowns of markets and price “resets,” as I have blogged here before. The problem, as I saw it then and still see it, is that these “resets” are costly, and will inevitably involve humans and human activity, and that, of course, adds to overhead costs.
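Mechanically, the way an execution algorithm tries to avoid “moving market prices” is by slicing a big parent order into many small child orders spread over time, so no single trade is large enough to shift the order book. Here is a minimal illustrative sketch of that idea in Python; the function name and all the numbers are invented for illustration and bear no relation to how LOXM actually works:

```python
# Toy TWAP-style order slicer: split a large parent order into small
# child orders spread evenly across a trading window. All parameters
# are hypothetical; a real execution engine adapts slice sizes to
# live liquidity, which this fixed schedule does not.

def slice_order(total_shares, window_minutes, max_child_size):
    """Return a list of (minute_offset, shares) child orders."""
    n_slices = max(1, -(-total_shares // max_child_size))  # ceiling division
    base, remainder = divmod(total_shares, n_slices)
    interval = window_minutes / n_slices
    schedule = []
    for i in range(n_slices):
        shares = base + (1 if i < remainder else 0)  # spread the remainder
        schedule.append((round(i * interval, 2), shares))
    return schedule

schedule = slice_order(total_shares=100_000, window_minutes=390, max_child_size=5_000)
print(len(schedule))                # 20 child orders
print(sum(s for _, s in schedule))  # 100000, nothing lost in slicing
```

LOXM, per the article, goes much further by learning slice sizes from billions of past trades; this even, fixed schedule is only the simplest conceivable baseline for the same goal.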

But now, supposedly, JP Morgan has waved a magic wand of code, and one can now “offload big equity stakes without moving market prices.” Let that one sink in for a moment… “big equity stakes” can be “offloaded” without any effect on market prices!?!?  Since when?!? The sentence, I submit, is a stunning admission of just how artificial, and unreal, these markets have become under trading algorithms. If prices are not affected by “offloading big equity stakes,” then one of the key mechanisms by which humans determine their investment decisions – the price of an equity itself within market movement – no longer is reflective of anything humanly real. I don’t know about you, but I don’t want to invest my paltry $100 in a share of Twisted Trading Algorithm Partners, Inc.  when the price itself is being determined in part by an algorithm that will allow JP Morgan to dump, or buy, vast blocks of Twisted Trading (NASQUACK symbol, TT) without “moving market prices.” Yes, that means I’d personally really rather have human traders on a floor waving papers and shouting hysterically at each other to conclude trades. And yes, I’ll take a physical copy of that 1 share of Twisted Trading’s stock, thank you very much.

Thank goodness sanity reigns somewhere, for Zero Hedge captures my own concerns with the vast expansion of “dark pools” and high frequency trading algorithms:

JPM also said it had no risk management issues with the technology. “The machine is restricted in its trading behaviour, as it learns under, and operates within, our general electronic trading risk framework, which is overseen by internal control groups and validated by regulators,” Mr Fellah said.

Of course, with such rapid propagation of technology among both stock investing and trade processing, it is only a matter of time before a “black hat” hack takes place, and sends trading – and markets – haywire. Which, incidentally, may be among the reasons for the concerted push: after all, what better way to avoid blame for what is coming than to blame it on, who else, Russian hackers. (Italicized emphasis added)

There you have the problem clearly stated. And I cannot improve on it.

See you on the flip side…

Read More At: GizaDeathStar.com
________________________________________________

About Dr. Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

Inhuman Markets: Even The Algorithm Creators Don’t Know What…


Source: GizaDeathStar.com
Dr. Joseph P. Farrell Ph.D.
June 27, 2017

Over the years I’ve become increasingly wary of the various markets that are now run almost exclusively by computers and have occasionally commented about it in blogs. I’ve even entertained the possibility, in my high octane speculation mode, that various “flash crash” events seem to have features that suggest that the algorithm “took over” and drove a market event with no connection to human market realities; in this respect, I continue to be unconvinced, for example, by the various explanations of the May 2010 flash crash; call it a suspicion, or a hunch, nothing more. Yes, in short, I’ve entertained the idea that artificial intelligence (AI) is not “coming” but already “here”, and may be infesting the “dark pools” and high frequency trading (HFT) algorithms.

Well, now I’m not the only one, according to these stories shared by Ms. K.M.:

Like Something Out of ‘The Twilight Zone,’ This Market Is About the Machines

Doug Kass: Not Even The Algo Creators Know What Is Going On

From the first article, I want to draw your attention to the following statements:

Listen Luddites, for the stock market, too, it’s a thing about the machines.

Throw away your fundamental analysis, your price charts, interest rates and economic growth forecasts, as the market has lost its moorings.

It is no longer a pyramid of fundamental and technical analysis nor is it a response to changing investor sentiment.

The ongoing multiyear changes in the market structure and dominant investor strategies in which quants, algos and other passive strategies (e.g., ETFs) have replaced active managers raise the same risks that Finchley faced 57 years ago.

And the overwhelming impact of central bankers’ largesse is the cherry on the market’s non-fundamentally influenced sundae.

As I have written:

“The combination of central bankers’ unprecedented largesse (and liquidity) when combined with mindless quant strategies and the enormous popularity of ETFs will, as night follows day, become a toxic cocktail for the equity markets. While we live in an imperfect world, we face (with valuations at a 95% decile on a number of metrics) a stock market that views the world almost perfectly.”

Back to JPMorgan’s Marko Kolanovic, who is quoted at the top of this piece and again here:

“… some striking facts: to understand this market transformation, note that Passive and Quantitative investors now account for ~60% of equity assets (vs. less than 30% a decade ago). We estimate that only ~10% of trading volumes originates from fundamental discretionary traders. This means that while fundamental narratives explaining the price action abound, the majority of equity investors today don’t buy or sell stocks based on stock-specific fundamentals.” (Bold emphasis added)

Let that last statement sink in for a moment, for if you, like me, have been wondering just why the heck markets don’t make sense any more, it’s because they are utterly unconnected to humanity and human decision-making. That “less than ten percent” of trading volume that “originates from fundamental discretionary traders” means that only about one trade in ten reflects actual human consideration of the performance, risk, and returns of a particular stock, or even of equities in a specific sector of industry, say, film-making or farm implement manufacture. The rest is machine-driven.

I don’t know about you, but I find this development more than disturbing.

But before we move on to the second article, pause and consider something else: it is a frequent critique of centralized, “one size fits all” political solutions, of the kind favored by the political left, that they are unworkable, precisely because no human being can calculate for all possible circumstances for all human beings: one cannot, as it were, write a bureaucratic policy or algorithm into “guideline notebooks” for every possible situation.

And that raises the thorny philosophical question that no one seems to want to address:

How then, can we expect human creators of computer algorithms to do for markets, what cannot be done for other segments of human interaction by bureaucrats?

With that philosophical point in mind, turn to the second article, and consider these very cogent points made for our friends at Zero Hedge:

Most people think of artificial intelligence and algos as simply executing logical rules programmed into them by humans — the same rules that the programming humans would follow if they were presented with the same data and data analysis. On this view, the algos and AIs are doing it in the same way humans have always done and would do, though humans would do it at a much slower speed, or perhaps not at all, because of the very weak and distant relationship of some data items to other data items.

The general belief is that algos and AIs are just “faster humans able to do a lot more calculations in a meaningful time frame”. That may NOT be a correct characterization of some of the more powerful AIs that may be working in the markets. Of course, we don’t know what AIs are working because there are no regulations requiring that machine decision-making accounts disclose and register as such … a very, very big gap in regulation.

True, AI and the related “machine learning” developments at the leading edge of such technology do NOT simply duplicate human rules and logic. Instead, while they may perform simple repetitive correlations initially on data as humans currently formulate that data, the more advanced machines go on to program themselves at successive layers, where the data being analyzed and correlated is no longer what we think of as data. Rather, it is often data artifacts created by the first layers in a form that no human would ever consider or has ever seen. To put it in a more street-level way, the first level creates ghosts and apparitions and shadows that the second layer treats as real data on which it assesses correlation and predictability in the service of some decision asked of it. AND … a third and fourth and on and on are doing the same thing with output from each layer below it.

The result of this procedure is striking and terrifying when the leading experts in AI and machine learning are interviewed. They admit that they have no way of determining what rules AI and machine-learning powered machines are following in making their decisions AND we cannot even know what inputs are being used in making those decisions.

Think about that. The creators have no knowledge of what their creations are thinking or what kind of inputs the machines are thinking about and how decisions about that are being made. The machines are inscrutable and, most terrifyingly important, UNPREDICTABLE.

We are not telling these AIs how to make decisions. The machines are figuring out how to decide to “make a profit” on their own and subject to no enforceable constraint.

The resulting risk of “flash crashes” — to lump all sudden and unexpected behaviors into a catchphrase — is unknowable but probably much greater than anyone even dreams. The machines have no fear of flash crashes or any other kind of crash. Such crashes might even serve their purpose of “making a profit.”

Note what is really being said:

(1) algorithmic trading generates artifacts in data that no human ever would;

(2) the algorithms are processing and making trading decisions based on those artifacts;

(3) none of these processes are transparent, and thus we do not even know why the markets are behaving as they are behaving; we only know they are not reflective of human market realities; and finally,

(4) all this can lead to the risk of flash crashes.
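Point (1) can be made concrete with a toy sketch: a first layer turns raw inputs into activations that no human inspects, and a second layer makes its “decision” from those activations alone, never touching the original data. This is pure illustrative Python, with arbitrary random weights standing in for whatever training produced; it illustrates only the layering idea, not any real trading model:

```python
import math
import random

random.seed(0)

def layer(inputs, weights):
    """One dense layer with tanh activation: its outputs are derived
    'artifacts' of the inputs, not the original data."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# Raw "market data" (arbitrary numbers, purely for illustration).
raw = [0.3, -1.2, 0.7, 0.05]

# Random weights standing in for whatever training produced.
w1 = [[random.uniform(-1, 1) for _ in raw] for _ in range(3)]
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

h1 = layer(raw, w1)   # layer 1: artifacts of the raw data
h2 = layer(h1, w2)    # layer 2: correlates artifacts, never sees 'raw'
decision = "buy" if h2[0] > h2[1] else "sell"
print(h1, h2, decision)
```

Notice that the second layer's inputs (`h1`) are numbers no human ever formulated as data, which is exactly the "ghosts and shadows" point the quoted passage is making; stack a few more layers and the inputs to the final decision are unrecognizable.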

Lest one think that this sounds too incredible to be true, consider the final closing paragraph of this article, which is the biggest jaw-dropper of them all:

Everyone should read this important note from JPMorgan’s head quant (hat tip to Zero Hedge) in order to understand how risk parity, volatility trending, stat arb and other quant strategies that are agnostic to balance sheets, income statements and private market value artificially are impacting the capital markets and, temporarily at least, are checking volatility. (Bold and italics emphasis added)

Let that sink in for a moment: because algorithms trade at such extraordinary speed, and execute trades in blocks of equities, little or no correlation is being made with actual specific equity performance, such as a human “discretionary investor” would make, looking at “old fashioned analogue sorts of things” like balance sheets, income, profit/loss statements, company indebtedness, price-earnings ratios, exposure, assets &c… in other words, the algorithms have little to no connection to markets and their realities, much less to the human decision-making processes that are normally involved in the investment process.

The bottom line? Well, over the long term, obviously a huge rethink of computer-based trading is in order. Frankly, I’m old fashioned enough to want to see a Wall Street trading floor of shouting traders, piles of paper, and bundles of stock certificates being mailed out every day. But beyond this, there’s a short term necessity, perhaps one can call it a strategy, and that’s “keep it local”; and by “keeping it local” I mean, even for local investments, finding out about their exposure to national and international markets: how much of that local bank’s stock is traded on the big markets, and who are the major shareholders? And so on… because, for right now, these machines are at the root of market unreality.

This should, and I hope will, prompt a discussion, and it will have to be a deep one, for the problem of the quants and their algorithms is highlighting the limitations of technology for a human world. The disconnection of markets from real human market activity is a case in point of how technologies have been adapted to a normal human activity – investing and trading – in an inhuman way. And the problem is, if the markets are that far removed from human realities, what will happen if, suddenly, someone pulls the plug? How many would remember how to conduct trades on the floor, the “old fashioned way”?

See you on the flip side…

Read More At: GizaDeathStar.com

New “Smart Phone” Will Be Quietly Studying Your Behavior And Reacting In Real-Time


Source: ActivistPost.com
Bernie Suarez
April 21, 2017

The march towards an Orwellian future where every form of human behavior is being monitored by AI-driven appliances and electronics is quickly becoming a reality. This was the plan from the start and as we can see the ruling elite have not slowed down one bit in their attempt to create this kind of world.

It is thus no surprise that Samsung is releasing new smart phones this week, the S8 and S8+, which carry software called “Bixby” that will study your behavior in real time and react, respond, and “learn” from you accordingly.

The new Samsung S8 smart phone represents one of the first portable devices released to the general public in which the owner will be officially creating a two-way relationship with the machine. The phone carries a feature, which can be activated, that requires biometric data (a retinal scan and facial recognition) in order to use the phone, and you can be sure many people will take advantage of it. Passwords and PINs will soon be no more as the masses are conditioned into using biometric data.

The Bixby software in the latest Samsung smart phone will:

“learn your routines to serve up the right apps at the right time”

And it:

“is an intelligent interface that learns from you to help you do more.”

The dark technocratic future of humanity is now coming into full view as the technocratic ruling elite continue to push their technocracy and technology without end.

The questions we need to be asking ourselves are these: When will humanity as a whole start pushing back? When will those in the Department of Justice demand an end to unnecessary research? When will people in large numbers call this out and consider certain advanced scientific research unethical? When will the line be crossed where additional research into certain areas like artificial intelligence (AI) is considered an act of aggression against humanity, or even a punishable crime? I’m referring to the people of the nation states demanding observation of human rights to privacy and holding elected officials accountable. And I’m also referring to consumers becoming aware of what they buy and choosing where and how they spend their money.

I believe all of this is part of the JADE (at the) HELM revelations of 2015-2016. A plan for “mastering the human domain.” A plan made possible by companies like Raytheon which have been pushing a thing called BBN AI technology, a highly sophisticated technological platform that incorporates super high speed real-time learning, adaptation and awareness features. (See video below for more.)

Solutions

Stopping the agenda of the technocrats is the solution. People everywhere must step up, begin paying attention and start fighting back. And the very first thing we need to do is shine a bright light on this issue, spread the awareness and hope that enough people see the problem. That means educating the stubborn “Progressive” Liberal Left and anyone else who has a blind faith in technology. This awareness must then turn into action. Here are some solutions.

1- Throw away, disable or stop using your smart phone or any smart device around your home for that matter.

2- Replace your smart phone with a non-smart phone.

3- Change your paradigm and see if you can free yourself from cell phones altogether.

4- Realize that smart phones and smart devices are slowly being pulled into the equation for work and survival. Demand that your boss (if you work for someone) not force you to use a smart device. Let’s challenge the legality of this now growing practice and problem. Merely pointing to the dangers of the EMF waves emitted by these devices may give us the legal grounds to stop employers from requiring their use.

5- If you must have a cell phone (like many of us do) then learn all the safest ways to use it which include among other things keeping it as far away from your body as possible.

6- As best you can, stop texting your friends and family for every little thing and go back to talking to each other and meeting in person.

7- Make a focused effort to raise awareness of this issue and help spread the word.

Finally, realize that Technocracy is just another form of oppression. Despite all the voices of today that glorify technological advancements, realize that humans are made to move around and work while on earth. It’s the large corporations that want to replace all forms of labor with robots. It’s the ruling elite in control of the nation states that want every form of behavior recorded with the aid of their technology, and very little to none of this actually benefits humanity. While many inventions of the 19th century improved the human experience, the technological advancements of today are going far beyond what humanity needs to be happy and healthy. Let’s learn to identify this problem quickly so we can implement solutions here and now.

Read More At: ActivistPost.com

Related video

Bernie is a revolutionary writer with a background in medicine, psychology, and information technology. He is the author of The Art of Overcoming the New World Order and has written numerous articles over the years about freedom, government corruption and conspiracies, and solutions. Bernie is also the creator of the Truth and Art TV project where he shares articles and videos about issues that raise our consciousness and offer solutions to our current problems. As a musician and artist his efforts are designed to appeal to intellectuals, working class and artists alike and to encourage others to fearlessly and joyfully stand for truth. His goal is to expose government tactics of propaganda fear and deception and to address the psychology of dealing with the rising new world order. He is also a former U.S. Marine who believes it is our duty to stand for and defend the U.S. Constitution against all enemies foreign and domestic. He believes information and awareness is the first step toward being free from the control system which now threatens humanity. He believes love conquers all fear and it is up to each and every one of us to manifest the solutions and the change that you want to see in this world because doing this is what will ensure victory and restoration of the human race and offer hope to future generations.

The Device Will See You Now

Source: GizaDeathStar.com
K.M.
April 24, 2017

(The following blog is contributed and composed by regular reader, Ms. K.M.):

This site has many times mentioned the rise of Artificial Intelligence (AI), both the inability to contain it and its direct relationship to transhumanists (or perhaps “trans-inhumanists”). I’m reminded of Jeff Goldblum in Jurassic Park: “Life will find a way.”

You have likely heard the mid-century-style voice of Watson besting human contestants on Jeopardy, the Greek statue pose of the T-1 Terminator after it shocks into our timeline, and the rosy predictions of how great it will be when our minds are forced to use a non-feeling prosthetic. Blogs here have also spoken out about the frightening addition of a “kill switch” to Google’s Deep Mind AI (Google’s AI Kill Switch).

If AI is not already out of control, the risks are vast but so are the rewards for those who possess them. If the idea of a life and death experience with your favorite non-living non-dying machine is not enough to get your pack-a-day habit up to a pack-and-a-half, this recent story about AI thinking its way into medicine offers us the opportunity to ponder a future where the biggest sociopath with a lab coat is not a doctor, but an unliving, thinking AI on wheels:

AI is Taking on Traditional Healthcare

With all the usual blather about how much better machines are than people, the article discusses a study published in the top-drawer journal Nature and points out that:

“The arrival of AI means healthcare expertise is no longer under the exclusive purview of medical practitioners. As the technology advances, AI is proving to be more than just a peripheral tool that can provide assistance — a machine’s ability to process enormous amounts of data using advanced learning technology allows it to deliver speedier and more accurate diagnosis and treatment plans, which could drastically alter the standards of modern healthcare. For example, in a recent study involving 34 participants, machine-learning algorithms were used to predict the development of psychosis based on coherence and syntactic markers of speech complexity. In that study, the AI was able to predict the outcome with 100 percent accuracy, outperforming the results of traditional clinical interviews. In a separate research project, an AI system was able to identify and categorize suicidal tendencies among a pool of 379 teenage subjects with 93 percent accuracy. In that study, patients were asked to complete a standardized behavioral rating scale and then answer a series of open-ended questions. Based on the verbal and nonverbal data gathered, a machine-learning algorithm was able to classify if a patient was suicidal, mentally ill but not suicidal, or neither.”
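A classifier of the general kind described, mapping a few speech-derived numbers to a category, can be sketched with a simple nearest-centroid rule. All the features, numbers, and labels below are fabricated for illustration and have no connection to the actual studies cited:

```python
# Toy nearest-centroid classifier over made-up "speech features"
# (say, a coherence score and mean sentence length). The data is
# fabricated purely to show the mechanics of this class of method.

def centroid(rows):
    """Mean feature vector of a group of samples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, centroids):
    """Assign x to the label whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

training = {
    "group_a": [[0.9, 14.0], [0.8, 12.0], [0.85, 13.0]],
    "group_b": [[0.4, 6.0], [0.3, 7.0], [0.35, 5.0]],
}
centroids = {label: centroid(rows) for label, rows in training.items()}

print(classify([0.82, 12.5], centroids))  # group_a
print(classify([0.33, 6.5], centroids))   # group_b
```

The studies' "100 percent" and "93 percent" figures come from far richer models and features than this, but the pipeline is the same shape: numeric features in, a learned decision boundary, a label out.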

The key problem with all this is the assumption of perfection. Just as people in the 1960s thought the bug-ridden mainframes of that era were superhuman, humans, in the presence of mystery, defer to these automatons, and the end result may be worse than Philip K. Dick’s Department of Precrime. Now, your friendly AI diagnostic (calculated from your email messages and texts over the internet, or from your phone calls to your favorite aunt) could lead you to be diagnosed as “pre-crazy.” You can’t confront your accuser. You won’t be listened to anyway, because you are already ruled three pins short of a strike. Think that’s out of line? Then consider this: half a decade ago, a Facebook poster was arrested, incarcerated in a mental institution, and drugged by officials based upon his innocent if inflammatory posts to Facebook.

And what about the injection of a healthy dose of greed and corruption into this futuremare? “Them’s that pay the monies maketh the rules,” and you can imagine some “banksterolled” startups and big medicine companies demanding that the AI’s thinking be influenced to favor their spate of drugs and other therapies. Microsoft’s experience last year with an AI Twitter bot turned out in an unexpected way when it became a sex robot and a Hitler and Nazi sympathizer… Anyway, how do you sue a machine? How can you confirm its programming is or is not biased? And will the devices indemnify the vendors as so many software licenses do? Will we have to program an AI cop to regulate other AIs?

With Musk’s recent announcement of a human-computer interface (I guess autonomous driving is not working out so well), who is to say that your own embedded chip won’t be the one recommending to the authorities that you “just need to rest for a while.”

Read More At: GizaDeathStar.com

How About Them Apples?

Source: GizaDeathStar.com
Dr. Joseph P. Farrell Ph.D.
March 24, 2017

Over the years of watching and reporting on the GMO issue on this website, one thing many readers have brought to my attention, by sharing various articles and studies, is the apparent linkage between GMOs and CCD (colony collapse disorder), as the populations of honey bee colonies and other pollinators have dramatically declined since the introduction of GMO foods and the heavy pesticide use they involve. As a result, I have also blogged about the latest gimmick to “repair” the damage: artificial drones as pollinators. It is, after all, “no big deal” if the world’s pollinator population declines or simply goes extinct; they only keep most of the world’s plant life, and most of its food supply, going. No big deal, especially if one has artificial pollinators waiting in the wings. Indeed, as I’ve previously blogged, there were scientists seriously proposing this as a means to get around the phenomenon of colony collapse disorder.

Well, according to this article shared by Mr. T.M., it’s now actually been accomplished:

Researchers use drone to pollinate a flower

The opening paragraphs say it all:

Researchers in Japan have successfully used a tiny drone to pollinate an actual flower, a task usually accomplished by insects and animals.

The remote-controlled drone was equipped with horsehairs coated with a special gel, which the researchers say was crucial to the process.

“This is the world’s first demonstration of pollination by an artificial robotic pollinator,” said Eijiro Miyako of the National Institute of Advanced Industrial Science and Technology in Japan, one of the authors of the study, which was published in the journal Chem.

And, lest the connection between pollinator population collapse and the artificial pollinator is missed, the article itself makes the connection:

But many pollinators are under threat, particularly insects like bees and butterflies. They belong to a group — invertebrate pollinators — in which 40 percent of species face extinction, according to the same report.

The drone is an attempt to address this problem: “The global pollination crisis is a critical issue for the natural environment and our lives,” the authors wrote in the study.
There is, however, a catch: it’s still a long way from insect pollinators, due not only to the size of the drone, but due to the lack of artificial intelligence and independent movement in the artificial pollinator itself:

The peculiarity of this project is that it focuses on the pollination process, rather than the construction of a robotic bee.

As the authors note, “practical pollination has not yet been demonstrated with the aerial robots currently available.”

However, pollination was achieved on a very large flower, and the drone was not autonomous: “I believe that some form of artificial intelligence and GPS would be very useful for the development of such automatic machines in future,” said Miyako.

Much work remains to be done before we can emulate the complex behavior of insects and animals: “There is little chance this can replace pollinators,” said Christina Grozinger, Director of the Center for Pollinator Research at Penn State University.

Hidden text: “we urgently need artificial intelligence in order to construct more efficient artificial pollinators.”

And that, of course, brings me to my high octane speculation of the day: suppose such artificial intelligence was constructed. And suppose, for a moment, all those artificial pollinators were under the control of a networked Artificial Intelligence, coordinating it all. Who is to say that said “intelligence” would even see the need for pollinator activity, or for the human and animal populations that it ultimately helps feed? Waves of AI pollinators could conceivably become plagues of AI locusts. If this be the case, the “technological fix” could end up being an even worse nightmare.

Of course, one could always solve the problem by the simple fix of what appears to be the basis of the pollinator problem: get rid of GMOs, and let nature do what she was designed to do.

That, of course, would be far too simple, and not issue in enough research grants and profits.

Read More At: GizaDeathStar.com
________________________________________________

About Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

Google’s Artificial Intelligence Learns “Highly Aggressive” Behavior, Concept of Betrayal

Source: TheMindUnleashed.com
Cassius Methyl
February 20, 2017

An artificial intelligence created by Google recently made headlines after learning “highly aggressive” behavior.

The intelligence engaged in a wolfpack hunting game scenario, and a fruit gathering scenario. The wolfpack game was based on cooperation, but the fruit game got strange.

In the fruit game, the AI were represented by colored squares, moving on a grid to collect fruit squares of a different color. They racked up points, and the fruit squares would regenerate. The AI competed on their own like human players in a video game.

The interesting part is, they were also given the ability to damage the other intelligence. They were able to attack the other player by shooting beams at them.

They found that the AI would become more aggressive as the fruit squares became more scarce: when less fruit was present, they attacked each other more frequently.

Summarized by Extreme Tech:

Guess what the neural networks learned to do. Yep, they shoot each other a lot. As researchers modified the respawn rate of the fruit, they noted that the desire to eliminate the other player emerges “quite early.” When there are enough of the green squares, the AIs can coexist peacefully. When scarcity is introduced, they get aggressive. They’re so like us it’s scary.
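The scarcity-and-aggression dynamic described above can be sketched in a toy simulation. This is a minimal illustration, not DeepMind’s actual setup: the two agents, the attack probability, and its link to fruit abundance are all invented for the example.

```python
import random

def run_episode(fruit_count, steps=1000, seed=0):
    """Toy two-agent gathering game: on each step an agent either gathers
    fruit or 'shoots a beam' at its rival. The invented rule below makes
    the urge to attack grow as fruit becomes scarce."""
    rng = random.Random(seed)
    attacks = 0
    for _ in range(steps):
        for agent in (0, 1):
            # Invented rule: scarcer fruit -> higher chance of aggression.
            p_attack = 1.0 / (1.0 + fruit_count)
            if rng.random() < p_attack:
                attacks += 1
    # Fraction of all moves spent attacking instead of gathering.
    return attacks / (2 * steps)

abundant = run_episode(fruit_count=50)
scarce = run_episode(fruit_count=2)
```

As in the experiment the article describes, aggression rises when fruit is scarce: `scarce` comes out far higher than `abundant`. The difference, of course, is that the real agents *learned* this behavior; here it is wired in by hand.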

While this took place in a simple game environment rather than in actual artificially intelligent robots, it could foreshadow something else. This article doesn’t need to tell you where it could go.

Perhaps a better question would be, what is the consequence of trusting a corporation like Google to become so massive? How will this technology ever suit the bottom class when it is developed by the wealthiest?

Read More At: TheMindUnleashed.com

(image credit: CDN, high qfx, guard time)

____________________________________________________________________

Cassius Kamarampi is a researcher and writer from Sacramento, California. He is the founder of Era of Wisdom, writer/director of the documentary “Toddlers on Amphetamine: History of Big Pharma and the Major Players,” and a writer in the alternative media since 2013 at the age of 17. He focuses primarily on identifying the exact individuals, institutions, and entities responsible for various forms of human slavery and control, particularly chemicals and more insidious forms of hegemony: identifying exactly who damages our well being and working toward independence from those entities, whether they are corporate, government, or institutional.

 

The Transhumanist Scrapbook: (Hideous) Method In The EU…

Source: GizaDeathStar.com
Dr. Joseph P. Farrell
January 27, 2017

This story was another one that seemed to have attracted a lot of people’s attention this past week: an EU parliament committee – a completely powerless “legislative” body – has voted to give robots “rights”, along with a kill switch:

EU Parliament Committee Votes To Give Robots Rights (And A Kill Switch)

I’ve blogged previously about the sneaky jurisprudence implied in such efforts, but this one spells it all out plainly; none of my usual high octane speculation is needed:

Foreseeing a rapidly approaching age of autonomous artificial intelligence, a European Parliament committee has voted to legally bestow electronic personhood to robots. The status includes a detailed list of rights, responsibilities, regulations, and a “kill switch.”

The committee voted by 17 votes to two, with two abstentions, to approve a draft report written by Luxembourg MEP Mady Delvaux, who believes “robots, bots, androids and other manifestations of artificial intelligence” will spawn a new industrial revolution. She wants to establish a European Agency to develop rules for how to govern AI behavior. Specifically, Delvaux writes about how increased levels of autonomy in robot entities will make usual manufacturing liability laws insufficient. It will become necessary, the report states, to be able to hold robots and their manufacturers legally responsible for their acts.

Sounding at times like a governmental whisper of Isaac Asimov’s Three Laws of Robotics, the report states, “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The rules will also affect AI developers, who, according to the report, will have to engineer robots in such a way that they can be controlled. This includes a “kill switch,” a mechanism by which rogue robots can be terminated or shut down remotely. (Emphases in the original)

Now, if you’re like me, you’re seeing or sensing a huge danger here, and it makes me wonder if the water supply in Europe is being doped with anti-sanity and anti-reason drugs, for observe the implicit and explicit logical argument here:

(1) humans are persons;

(2) persons have special rights, and with them come special responsibilities (one shudders to think what “rights” mean to a Eurocrat, but we’ll assume the best and move on);

(3) human consciousness and “personhood” can be produced by machines, and artificial intelligence should constitute “electronic personhood” just like corporations are “corporate persons”

(Of course, this is now all getting to be a little fuzzy, and as I’ve said many times, all this corporate personhood stuff is based in a theological confusion of massive proportions. But, hey, relax, because we’re modern trendy predominantly secularized Europeans and we needn’t bother with the niceties of mediaeval metaphysics, even if those niceties have issued in a horribly screwed up notion like “corporations are persons” while “unborn babies are not” but robots are. For my part, the silliness of corporate personhood resides in the old adage “I’ll believe corporations are persons when the State of Texas executes one of them.” Heck, forget about murder, I’d settle for manslaughter and a long prison sentence for a few of them, but I digress.)

(4) But we need to protect humanity from the possibility that robots might go rogue and do something like found a corporation (a corporate electronic person, presumably) whose corporate charter says that its corporate electronic personhood function is to kill other persons (presumably of either the human biological sort, or the robotic electronic sort). Thus, we need a

(5) “kill switch” to “terminate the program/robot/electronic person”.

Well, in today’s wonderful transhumanist “cashless” world, why not a “kill switch” in your friendly implant when you start having “unacceptable thoughts” like using cash, or questioning the latest “narrative from Brussels.” If it’s good enough for “electronic persons,” then one can be quite certain that some insane Eurocrat, somewhere, will propose the same thing for human persons by parity of reasoning…

…a parity of reasoning that will not, of course, extend to corporations.

See you on the flip side…

Read More At: GizaDeathStar.com
____________________________________________________________


In Case You May Have Missed That Little Announcement About Artificial…

Source: GizaDeathStar.com
Dr. Joseph P. Farrell
January 19, 2017

You may have missed it, but in case you did, Mr. B.B. and many other regular readers here shared this story, to make sure you didn’t miss it. And this is such a bombshell that its implications and ramifications are still percolating through my mind. The long and short of it is, Google’s “artificial intelligence” program-search engine no longer requires quotation marks around it:

The mind-blowing AI announcement from Google that you probably missed

And just in case you read this article and are still so shocked that you’re “missing it,” here it is in all of its frightening-implications-glory:

Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.

This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.

All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.

The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.

Google Translate invented its own language to help it translate more effectively.

What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.

Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.

Now, if you read closely, right after the closing remarks in the quotation above, the author of the article, Mr. Gil Fewster, added this parenthetical comment: “I’ve added a correction/retraction of this paragraph in the notes.” The correction/retraction comes in the form of a comment that Mr. Fewster directs the reader to at the end of his article, from a Mr. Chris MacDonald, who stated:

Ok slow down.
The AI didn’t invent its own language nor did it get creativity. Saying that is like saying calculators are smart and one day they’ll take all the math teachers’ jobs.

What Google found was that their framework was working even better than they expected. That’s awesome because when you’re doing R&D you learn to expect things to fail rather than work perfectly.
How it works is that, through all the data it’s reading, it’s observing patterns in language. What they found is that if it knew English to Korean, and English to Japanese, it could actually get pretty good results translating Korean to Japanese (through the common ground of English).

The universal language, or the interlingua, is not its own language per se. It’s the commonality found in between many languages. Psychologists have been talking about it for years. As a matter of fact, this work may be even more important to Linguistics and Psychology than it is to computer science.

We’ve already observed that swear words tend to be full of harsh sounds (“p,” “c,” “k,” and “t”) and sibilance (“s” and “f”) in almost any language. If you apply the phonetic sounds to Google’s findings, psychologists could make accurate observations about which sounds tend to universally correlate to which concepts. (Emphasis added)
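The pivot idea Mr. MacDonald describes can be illustrated with a deliberately crude sketch: knowing Korean-to-English and English-to-Japanese lets you compose the two and translate Korean to Japanese with no direct table. The word lists here are invented for the example, and real zero-shot translation operates in a shared vector space, not through word-for-word substitution.

```python
# Toy pivot translation: compose Korean->English with English->Japanese
# to get Korean->Japanese without a direct Korean-Japanese table.
ko_to_en = {"고양이": "cat", "물": "water"}
en_to_ja = {"cat": "neko", "water": "mizu"}

def pivot_translate(word, src_to_pivot, pivot_to_tgt):
    """Translate via the common 'interlingua' (here, plain English)."""
    pivot = src_to_pivot[word]
    return pivot_to_tgt[pivot]

print(pivot_translate("고양이", ko_to_en, en_to_ja))  # -> neko
```

The point of the sketch is only the composition step: the “universal language” in GNMT’s case is a learned internal representation playing the role that the English dictionary plays here.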

Now, this puts that business on the computer teaching itself into a little less hysterical category and into a more “Chomskian” place; after all, the famous MIT linguist has been saying for decades that there’s a common universal “grammar” underlying all languages, and not just common phonemes, as Mr. MacDonald points out in the last paragraph of the above quotation.

But the problem still remains: the computer noticed a set of patterns in one context and then mapped that pattern into a new context unfamiliar to it. That, precisely, is analogical thinking; it is a topological process that seems almost innate in our every thought, and that, precisely, is the combustion engine of human intelligence (and in my opinion, of any intelligence).

And that raises some nasty high octane speculations, particularly for those who have been following my “CERN” speculations about hidden “data correlation” experiments, for such data correlations would require massive computing power, and also an ability to do more or less this pattern recognition and “mapping” function. The hidden implication is that if this is what Google is willing to talk about publicly, imagine what has…

Continue Reading At: GizaDeathStar.com
______________________________________________________________


Is a robot a person? EU lawmakers to roll out rules of AI engagement

Source: RTAmerica
January 17, 2017

Members of the European Parliament will soon vote on whether robots should be considered ‘electronic persons’ and lay out rules governing their interactions with human beings. Issues at stake include protecting humans from very sophisticated or powerful AIs and regulating the role of robots in workplaces of the future. RT correspondent Peter Oliver reports.