The 11-Dimensional Multi-Verse Of The Brain

Source: GizaDeathStar.com
Dr. Joseph P. Farrell Ph.D.
June 21, 2017

A couple of days ago I blogged about the discovery of “memory-wiping” enzymes and its implications for the topic of mind control. In that blog, I also made the connection between the mind and the universe, particularly the version of quantum theory called the “multiverse” hypothesis. I’ve long sensed that there is a connection between the mind and matter, and that this connection is not of the tidy Cartesian variety, where the one (take your choice) gives rise to the other. I suspect, and have suspected for some time, that the situation is rather that of a complex set of feedback loops between the two, and that this complexity can only be described by something “not physical” in the ordinary, three dimensional sense of our everyday experience.

Thus, when Mr. M.H. shared this article this week, I took notice:

Brain Architecture: Scientists Discover 11 Dimensional Structures That Could Help Us Understand How the Brain Works

The following paragraphs leapt out at me:

Scientists studying the brain have discovered that the organ operates on up to 11 different dimensions, creating multiverse-like structures that are “a world we had never imagined.”

By using an advanced mathematical system, researchers were able to uncover architectural structures that appear when the brain has to process information, before they disintegrate into nothing.

In the latest study, researchers honed in on the neural network structures within the brain using algebraic topology—a system used to describe networks with constantly changing spaces and structures. This is the first time this branch of math has been applied to neuroscience.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures—the trees in the forest—and see the empty spaces—the clearings—all at the same time,” study author Kathryn Hess said in a statement.

In the study, researchers carried out multiple tests on virtual brain tissue to find brain structures that would never appear just by chance.

“We found a world that we had never imagined. There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

The findings indicate the brain processes stimuli by creating these complex cliques and cavities, so the next step will be to find out whether or not our ability to perform complicated tasks requires the creation of these multi-dimensional structures.

Hess says the findings suggest that when we examine brain activity with low-dimensional representations, we only get a shadow of the real activity taking place. This means we can see some information, but not the full picture. “So, in a sense our discoveries may explain why it has been so hard to understand the relation between brain structure and function,” she explains.

Talk about high octane! Let that sink in for a moment: at every moment you are thinking, multi-dimensional structures arise in your very three dimensional brain, and that’s a fancy way of saying your brain is not closed within or upon itself, but is rather an open system interacting with much higher dimensional realities that cannot be encompassed in the material 3-d world. And this is why a merely three-dimensional model, or, if I may be more blunt, a merely materialistic model of the mind-brain relationship, has failed to grasp the complexity, the hyper-dimensional complexity, of what is actually going on. Indeed, higher order topologies are necessary to describe thought at all: thought does not occur in the three dimensional material stuff of life solely or exclusively, but outside it, as something coupled with it. (Regular readers of my books will recognize this as what I’ve been calling the Topological Metaphor of the Medium, and its analogical basis.) For those who’ve read my books Secrets of the Unified Field or The Third Way, the name of Gabriel Kron should also spring to mind, with his theory that all electrical circuits, no matter how simple they are, are in effect hyper-dimensional machines, transducing something “down here” from “up there”.
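For readers who want to see how such “dimensions” are actually counted, the study’s algebraic topology treats a group of k+1 all-to-all connected neurons as a k-dimensional simplex, so an “eleven-dimensional structure” is a clique of twelve fully interconnected neurons. Below is a minimal Python sketch, offered purely as an illustration and not as the researchers’ actual pipeline (they worked with directed cliques in a simulated cortical microcircuit); it counts the cliques in a toy random graph and reports their simplex dimensions.

```python
# Illustration only: counting "dimensions" the way the algebraic-topology
# reading does, where a clique of k+1 fully connected nodes is a k-simplex.
# The graph here is a toy random graph, not brain data.
import networkx as nx

G = nx.gnp_random_graph(n=60, p=0.35, seed=1)   # hypothetical toy "circuit"

max_dim = 0
counts = {}
for clique in nx.find_cliques(G):               # maximal cliques of the graph
    dim = len(clique) - 1                       # simplex dimension = clique size - 1
    counts[dim] = counts.get(dim, 0) + 1
    max_dim = max(max_dim, dim)

print("maximal cliques by simplex dimension:", dict(sorted(counts.items())))
print("highest-dimensional simplex found:", max_dim)
```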

What is also interesting in this article is the implication for the object or stimulus of brain activity: for consider what that object is, in physics terms. Even at the atomic or, better, sub-atomic quantum level, these “material” entities dissolve – if I may use that term – into packets of information modeled by multi-dimensional mathematical equations. In other words, multi-dimensionality is the bridge of perception because multi-dimensionality is at the root of the objects themselves.

What’s coming down the pike? Well, I’ve speculated at length about this idea in our numerous members’ vidchats (along with some pretty stimulating speculations from members themselves): the next step is to find the exact nature and structure of those “feedback” loops between the “material” world and the “incorporeal” one: think “quantum neurology” and “neuro-cosmology” for a moment, and you get an intuitive approximation of how the old, tidy, Cartesian dualistic lines are breaking down. We are, I rather suspect, looking at something more akin to the old Neoplatonic spectrum of “fine gradations” from the immaterial world of forms to the increasingly “dense” world of matter.

Funny thing, too, to remember that Plato referred to all of this as “the mathematicals”. Funny thing, too, that in membrane theory, space-time is in 11 dimensions.

See you on the flip side…

Read More At: GizaDeathStar.com
________________________________________________

About Dr. Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

40% of Scientists Admit Fraud “Always or Often” Contributes to Irreproducible Research

Source: Nature.com
Monya Baker
May 25, 2016

More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature‘s survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.

The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology1 and cancer biology2, found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence.

The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. “At the current time there is no consensus on what reproducibility is or should be.” But just recognizing that is a step forward, he says. “The next step may be identifying what is the problem and to get a consensus.”

Failing to reproduce results is a rite of passage, says Marcus Munafo, a biological psychologist at the University of Bristol, UK, who has a long-standing interest in scientific reproducibility. When he was a student, he says, “I tried to replicate what looked simple from the literature, and wasn’t able to. Then I had a crisis of confidence, and then I learned that my experience wasn’t uncommon.”

The challenge is not to eliminate problems with reproducibility in published work. Being at the cutting edge of science means that sometimes results will not be robust, says Munafo. “We want to be discovering new things but not generating too many false leads.”

The scale of reproducibility

But sorting discoveries from false leads can be discomfiting. Although the vast majority of researchers in our survey had failed to reproduce an experiment, less than 20% of respondents said that they had ever been contacted by another researcher unable to reproduce their work. Our results are strikingly similar to another online survey of nearly 900 members of the American Society for Cell Biology (see go.nature.com/kbzs2b). That may be because such conversations are difficult. If experimenters reach out to the original researchers for help, they risk appearing incompetent or accusatory, or revealing too much about their own projects.

A minority of respondents reported ever having tried to publish a replication study. When work does not reproduce, researchers often assume there is a perfectly valid (and probably boring) reason. What’s more, incentives to publish positive replications are low and journals can be reluctant to publish negative findings. In fact, several respondents who had published a failed replication said that editors and reviewers demanded that they play down comparisons with the original study.

Nevertheless, 24% said that they had been able to publish a successful replication and 13% had published a failed replication. Acceptance was more common than persistent rejection: only 12% reported being unable to publish successful attempts to reproduce others’ work; 10% reported being unable to publish unsuccessful attempts.

Survey respondent Abraham Al-Ahmad at the Texas Tech University Health Sciences Center in Amarillo expected a “cold and dry rejection” when he submitted a manuscript explaining why a stem-cell technique had stopped working in his hands. He was pleasantly surprised when the paper was accepted3. The reason, he thinks, is because it offered a workaround for the problem.

Others put the ability to publish replication attempts down to a combination of luck, persistence and editors’ inclinations. Survey respondent Michael Adams, a drug-development consultant, says that work showing severe flaws in an animal model of diabetes has been rejected six times, in part because it does not reveal a new drug target. By contrast, he says, work refuting the efficacy of a compound to treat Chagas disease was quickly accepted4.

The corrective measures

One-third of respondents said that their labs had taken concrete steps to improve reproducibility within the past five years. Rates ranged from a high of 41% in medicine to a low of 24% in physics and engineering. Free-text responses suggested that redoing the work or asking someone else within a lab to repeat the work is the most common practice. Also common are efforts to beef up the documentation and standardization of experimental methods.

Any of these can be a major undertaking. A biochemistry graduate student in the United Kingdom, who asked not to be named, says that efforts to reproduce work for her lab’s projects double the time and materials used — in addition to the time taken to troubleshoot when some things invariably don’t work. Although replication does boost confidence in results, she says, the costs mean that she performs checks only for innovative projects or unexpected results.

Consolidating methods is a project unto itself, says Laura Shankman, a postdoc studying smooth muscle cells at the University of Virginia, Charlottesville. After several postdocs and graduate students left her lab within a short time, remaining members had trouble getting consistent results in their experiments. The lab decided to take some time off from new questions to repeat published work, and this revealed that lab protocols had gradually diverged. She thinks that the lab saved money overall by getting synchronized instead of troubleshooting failed experiments piecemeal, but that it was a long-term investment.

Irakli Loladze, a mathematical biologist at Bryan College of Health Sciences in Lincoln, Nebraska, estimates that efforts to ensure reproducibility can increase the time spent on a project by 30%, even for his theoretical work. He checks that all steps from raw data to the final figure can be retraced. But those tasks quickly become just part of the job. “Reproducibility is like brushing your teeth,” he says. “It is good for you, but it takes time and effort. Once you learn it, it becomes a habit.”

One of the best-publicized approaches to boosting reproducibility is pre-registration, where scientists submit hypotheses and plans for data analysis to a third party before performing experiments, to prevent cherry-picking statistically significant results later. Fewer than a dozen people mentioned this strategy. One who did was Hanne Watkins, a graduate student studying moral decision-making at the University of Melbourne in Australia. Going back to her original questions after collecting data, she says, kept her from going down a rabbit hole. And the process, although time consuming, was no more arduous than getting ethical approval or formatting survey questions. “If it’s built in right from the start,” she says, “it’s just part of the routine of doing a study.”

The cause

The survey asked scientists what led to problems in reproducibility. More than 60% of respondents said that each of two factors — pressure to publish and selective reporting — always or often contributed. More than half pointed to insufficient replication in the lab, poor oversight or low statistical power. A smaller proportion pointed to obstacles such as variability in reagents or the use of specialized techniques that are difficult to repeat.

But all these factors are exacerbated by common forces, says Judith Kimble, a developmental biologist at the University of Wisconsin–Madison: competition for grants and positions, and a growing burden of bureaucracy that takes away from time spent doing and designing research. “Everyone is stretched thinner these days,” she says. And the cost extends beyond any particular research project. If graduate students train in labs where senior members have little time for their juniors, they may go on to establish their own labs without having a model of how training and mentoring should work. “They will go off and make it worse,” Kimble says.

What can be done?

Respondents were asked to rate 11 different approaches to improving reproducibility in science, and all got ringing endorsements. Nearly 90% — more than 1,000 people — ticked “More robust experimental design”, “better statistics” and “better mentorship”. Those ranked higher than the option of providing incentives (such as funding or credit towards tenure) for reproducibility-enhancing practices. But even the lowest-ranked item — journal checklists — won a whopping 69% endorsement.

The survey — which was e-mailed to Nature readers and advertised on affiliated websites and social-media outlets as being ‘about reproducibility’ — probably selected for respondents who are more receptive to and aware of concerns about reproducibility. Nevertheless, the results suggest that journals, funders and research institutions that advance policies to address the issue would probably find cooperation, says John Ioannidis, who studies scientific robustness at Stanford University in California. “People would probably welcome such initiatives.” About 80% of respondents thought that funders and publishers should do more to improve reproducibility.

“It’s healthy that people are aware of the issues and open to a range of straightforward ways to improve them,” says Munafo. And given that these ideas are being widely discussed, even in mainstream media, tackling the initiative now may be crucial. “If we don’t act on this, then the moment will pass, and people will get tired of being told that they need to do something.”

Read More At: Nature.com

Memory Erasing 101: Ewen Cameron’s Dream Come True: Memory Wipe Enzyme

Source: GizaDeathStar.com
Dr. Joseph P. Farrell Ph.D.
June 18, 2017

Just in case you read the title of this blog, and don’t know who Ewen Cameron was, a little history is in order: Cameron was a psychiatrist involved in the CIA’s infamous mind-control program, MK-Ultra. Cameron had his “laboratory” in a psychiatric hospital in Canada, where he subjected his victims (I won’t use the word “patients” here, because what Cameron did is in my opinion unspeakable) to a regime of drug cocktails, continual sleep (not for hours, but for days at a time), and endless bombardment by tape recorders playing back recorded messages for hours on end. He called all this “psychic driving,” and his goal was to eliminate “bad personality habits” (or even the personality itself) and to replace it with “something else”, that something else being the endlessly repeated recorded messages. If his “procedures” (and we’ve only very briefly summarized them) sound a little like the work of the Nazi doctors in World War Two, then you understand the measure of the torture he was inflicting.

But imagine a magic drug that could do the same thing, without the endless weeks of sleep, tape-recorded looped messages, and cocktails. Indeed, if one digs a little bit into the history of the CIA’s various mind-control programs – Projects Bluebird, Artichoke, or MK-Ultra – one of the things being investigated was precisely the use of drugs for memory and behavior alteration.

Which brings us to this rather frightening article that was shared by Mr. C.S. this week:

MEMORY HOLE: U.S. scientists have developed a “memory wipe” enzyme that can erase memories forever

Assuming the article to be true, then the implications of the following would fulfill Dr. Cameron’s wildest fantasies of “psychic driving” and memory wipes:

Scientists have long known that creating new memories and storing old ones involve the creation of proteins in the synapse, where two brain cells meet. For this process to be successful, genes must be expressed in the nucleus of the cell, and this is where a key enzyme can turn genes on or off as new memories are formed. It’s also believed that this enzyme, which is known as ACSS2, plays a role in the memory impairment that is seen in neurodegenerative disorders.

In the study, the researchers found that lowering ACSS2 levels in mice reduced the expression of memory genes, thereby stopping the formation of long-term memories. Mice who had reduced enzyme levels showed no interest in a ball they saw the previous day, whereas those with normal levels of the enzyme were interested in the ball.

Now the researchers are hoping to use this knowledge to stop traumatic memories from forming in people with PTSD simply by blocking the brain’s ACSS2. This might sound like a good idea to those of us who are haunted by some sort of trauma, but there’s also the potential for this to be used for more sinister reasons.

As the article goes on to point out, what’s to prevent the “intelligence” agencies of the modern police state from using the capability to erase memories in individuals (or, for that matter, whole populations) they find “inconvenient”, or from planting completely false memories? With this, Cameron’s goal of completely wiping one personality and replacing it with another comes close to reality, without the corresponding torture he inflicted.

Which brings us to the high octane speculation of the day: why investigate such things at all? As the article avers, some beneficial uses could be had, but I strongly suspect that all those assurances we were given decades ago by our intelligence agencies during the Church Committee hearings were just that: assurances, nothing more, and that the covert funding and investigation of mind control techniques and technologies continued. With the CIA’s track record of having given LSD to unsuspecting victims to study their responses – all under the aegis of its mind control programs – one can see where this is going, for in a world where chemicals are sprayed over whole populations without their knowledge (in many cases) and without their consent (in most), it takes little imagination to see that a study of “whole population effects” could be had with the appropriate spraying, or by slipping a little “mind wipe enzyme” into a town’s water supply, and watching and studying the results. Indeed, in 1984 (note the year) American actor Tim Matheson starred opposite Hume Cronyn in the movie Impulse, which was about precisely such a scenario. Add a false news story or two and one has a frightening scenario where whole populations might be induced to “remember” something that didn’t actually happen, or to forget something that did.

I am reminded of the late 1960s and 1970s, when various gurus of the “drug culture” actually viewed psychedelics as a means of accessing “alternate worlds,” and in a universe where one has quantum physicists emphasizing the role of the observer in the creation of reality, and where they are talking about “multiple worlds” hypotheses, such an experiment might even have cosmological implications.

I don’t know about you, but I for one put nothing past them.

See you on the flip side…

Read More At: GizaDeathStar.com
________________________________________________

About Dr. Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

Immortality & Resurrection Inc.


Source: GizaDeathStar.com
Dr. Joseph P. Farrell Ph.D.
June 13, 2017

Just when you thought the aspirations and plans of modern science couldn’t possibly become more diabolical (or, if one prefers, sacrilegious), an article comes along to renew your hope that the world continues on its path of normalcy, and that many scientists are, indeed, just as wild-eyed nuts as you always thought them to be. And this week, apparently many people were relieved and reassured that the mad scientist is not a thing of the past or a species that died out, but a real, living creature deserving of our awe and respect. Ms. M.W. and many others found this, and shared it, doubtless because they were concerned that I was losing hope that there were any more mad scientists:

Could we soon REVERSE death? US company to start trials ‘reawakening the dead’ in Latin America ‘in a few months’ – and this is how they’ll do it

Way back when I first started writing about these strange topics in The Giza Death Star, I made the observation that physical immortality might not be such a good thing without a commensurate and corresponding improvement in human spirituality and morality. In this, I took my cue from an ancient Greek Church Father named St. John Chrysostom, who warned about the same thing, and who stated that it was death, in fact, that formed the crucial condition for the possibility of human repentance and a change of mind, for it cut off further progress in evil. Taking this as my cue, in the final pages of that book, I asked people to imagine, if such immortality, or even a dramatically extended life span, were possible – both of which are now being openly discussed and touted in serious and not-so-serious literature – what it might mean for the resulting civilization. One thing that would result, I pointed out, was a vastly expanded and accelerated scientific and technological development. One individual would, in such a condition, be able to learn and to master several academic disciplines, not just one. The explosion of technology and science would dwarf anything we have seen thus far. But the other consequence would be for moral progress. Imagine, I said back then, an Albert Schweitzer having not a century, but centuries or even millennia to do good things, or, conversely, a Mao Tse-Tung, a Josif Stalin, a Pol Pot or an Adolf Hitler, having that long to “perfect their progress in evil,” and one gets a clear picture of the sharp moral contradictions such a society would be in. And please note: this problem is not, to my knowledge, receiving anything close to the attention it needs in the transhumanism-virtual immortality community. The sole focus is on the science; if we can do it, we should do it.

Now we have this:

Bioquark, a Philadelphia-based company, announced in late 2016 that they believe brain death is not ‘irreversible’.

And now, CEO Ira Pastor has revealed they will soon be testing an unprecedented stem cell method on patients in an unidentified country in Latin America, confirming the details in the next few months.

To be declared officially dead in the majority of countries, you have to experience complete and irreversible loss of brain function, or ‘brain death’.

According to Pastor, Bioquark has developed a series of injections that can reboot the brain – and they plan to try it out on humans this year.

They have no plans to test on animals first.

The first stage, named ‘First In Human Neuro-Regeneration & Neuro-Reanimation’ was slated to be a non-randomized, single group ‘proof of concept’ study.

The team said they planned to examine individuals aged 15-65 declared brain dead from a traumatic brain injury using MRI scans, in order to look for possible signs of brain death reversal.

Specifically, they planned to break it down into three stages.

First, they would harvest stem cells from the patient’s own blood, and inject this back into their body.

Next, the patient would receive a dose of peptides injected into their spinal cord.

Finally, they would undergo a 15-day course of nerve stimulation involving lasers and median nerve stimulation to try and bring about the reversal of brain death, whilst monitoring the patients using MRI scans.

Light, chemistry, and stem cells and DNA. If one didn’t know any better, one would swear one was looking at the broad chronological progression of Genesis 1.

But I digress.

The problem here is, one notices, the almost complete avoidance of the moral question. Let’s assume the technology works and that one can, literally, resurrect the dead scientifically. And let us assume the project reaches the stage of perfection envisioned by the Russian Cosmists, like Nikolai Fedorov. The cosmists, recall, want to extend the resurrection-by-science principle to the entire history of one’s ancestors. But should this occur, then what about resurrecting people like Stalin, Mao, or Hitler? The sad truth is, some people still “revere” those twisted and murderous people as heroes. The sad truth is, some people would attempt to do it, if given the means to do so.

But there’s an even bigger problem. The entire project is predicated on the materialist assumption that “brain function equals the person.” Regular readers here know that I have never subscribed to such a view, nor have I subscribed to the view, conversely, that there is no relationship between a person’s “personhood” and the functions of their soul, which would include, of course, the functions of their will, intellect, emotions, and brain. It is, I suspect, a very complex phenomenon not neatly divided into tidy Cartesian dualisms, with numerous feedback loops between the two. This said, however, the problem then arises that the brain is not the creator of individuality, but rather, its transducer (and, if I may employ a more ancient version of the term, its traducer). Thus, the possibility arises that one might “revive” a brain, and traduce or transduce a different individual than one “recalls” being present prior to brain death. Already some psychologists have written – and published – papers suggesting that certain mental disorders such as bipolarity and schizophrenia might not be disorders in any standard sense, but rather a phenomenon where an individual is inhabiting two very different and parallel universes at the same time. In this they draw upon the many worlds hypotheses of quantum mechanics.

In short, for my money, I have no doubt that ultimately, some sort of “scientific” resurrection technique might be possible. But I suspect it will be a Pandora’s box of spiritual phenomena which, once opened, will be difficult if not impossible to close again, and that before we open it, we should give lengthy and due consideration to all the moral problems it will engender.

See you on the flip side…

Read More At: GizaDeathStar.com
________________________________________________

About Dr. Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and “strange stuff”. His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into “alternative history and science”.

Global Warming A Myth Say 80 Graphs From 58 Peer-Reviewed Papers

Scientists Increasingly Discarding ‘Hockey Stick’ Temperature Graphs


“[W]hen it comes to disentangling natural variability from anthropogenically affected variability the vast majority of the instrumental record may be biased.”  — Büntgen et al., 2017

Source: Notrickszone.com
Kenneth Richard
May 29, 2017


Last year there were at least 60 peer-reviewed papers published in scientific journals demonstrating that Today’s Warming Isn’t Global, Unprecedented, Or Remarkable.

Just within the last 5 months, 58 more papers and 80 new graphs have been published that continue to undermine the popularized conception of a slowly cooling Earth temperature history followed by a dramatic hockey-stick-shaped uptick, or an especially unusual global-scale warming during modern times.

Yes, some regions of the Earth have been warming in recent decades or at some point in the last 100 years. Some regions have been cooling for decades at a time. And many regions have shown no significant net changes or trends in either direction relative to the last few hundred to thousands of years.

Succinctly, then, scientists publishing in peer-reviewed journals have increasingly affirmed that there is nothing historically unprecedented or remarkable about today’s climate when viewed in the context of long-term natural variability.


Büntgen et al., 2017

“Spanning the period 1186-2014 CE, the new reconstruction reveals overall warmer conditions around 1200 and 1400, and again after ~1850. … Little agreement is found with climate model simulations that consistently overestimate recent summer warming and underestimate pre-industrial temperature changes. … [W]hen it comes to disentangling natural variability from anthropogenically affected variability the vast majority of the instrumental record may be biased. …




Abrantes et al., 2017

The transition from warm to colder climatic conditions occurs around 1300 CE associated with the Wolf solar minimum. The coldest SSTs are detected between 1350 and 1850 CE, on Iberia during the well-known Little Ice Age (LIA) (Bradley and Jones, 1993), with the most intense cooling episodes related with other solar minima events, and major volcanic forcing and separated by intervals of relative warmth (e.g. (Crowley and Unterman, 2013; Solanki et al., 2004; Steinhilber et al., 2012; Turner et al., 2016; Usoskin et al., 2011). During the 20th century, the southern records show unusually large decadal scale SST oscillations in the context of the last 2 millennia, in particular after the mid 1970’s, within the Great Solar Maximum (1940 – 2000 (Usoskin et al., 2011)) and the “greater salinity anomaly” event in the northern Atlantic (Dickson et al., 1988), or yet the higher global temperatures of the last 1.4 ky detected by (Ahmed et al., 2013).”


Werner et al., 2017


Deng et al., 2017

The results indicate that the climate of the Medieval Climate Anomaly (MCA, AD 900–1300) was similar to that of the Current Warm Period (CWP, AD 1850–present) … As for the Little Ice Age (LIA, AD 1550–1850), the results from this study, together with previous data from the Makassar Strait, indicate a cold and wet period compared with the CWP and the MCA in the western Pacific. The cold LIA period agrees with the timing of the Maunder sunspot minimum and is therefore associated with low solar activity.”


Chapanov et al., 2017

“A good agreement exists between the decadal cycles of LOD [length of day], MSL [mean sea level], climate and solar indices whose periods are between 12-13, 14-16, 16-18 and 28-33 years.”


Williams et al., 2017

“Reconstructed SSTs significantly warmed 1.1°C … from 1660s to 1800 (rate of change: 0.008°C/year), followed by a significant cooling of 0.8°C …  until 1840 (rate of change: 0.02°C/year), then a significant warming of 0.8°C from 1860 until the end of reconstruction in 2007 (rate of change: 0.005°C/year).” [The amplitude of sea surface temperature warming and cooling was higher and more rapid from the 1660s to 1800 than from 1860-2007.]
‘In fact, the SST reconstruction significantly co-varied with a reconstruction of solar irradiance [Lean, 2000] on the 11-year periodicity only from ~1745 to 1825. In addition, the reconstructed SSTs were cool during the period of lower than usual solar irradiance called the Maunder minimum (1645–1715) but then warmed and cooled during the Dalton minimum (1795–1830), a second period of reduced solar irradiance. … The Dalton solar minimum and increased volcanic activity in the early 1800s could explain the decreasing SSTs from 1800 to 1850.”


Stenni et al., 2017

“A recent effort to characterize Antarctic and sub-Antarctic climate variability during the last 200 years also concluded that most of the trends observed since satellite climate monitoring began in 1979 CE cannot yet be distinguished from natural (unforced) climate variability (Jones et al., 2016), and are of the opposite sign [cooling, not warming] to those produced by most forced climate model simulations over the same post-1979 CE interval. … (1) Temperatures over the Antarctic continent show an overall cooling trend during the period from 0 to 1900CE, which appears strongest in West Antarctica, and (2) no continent-scale warming of Antarctic temperature is evident in the last century.”


Li et al., 2017


Demezhko et al., 2017

“GST [ground surface temperature] and SHF [surface heat flux] histories differ substantially in shape and chronology. Heat flux changes ahead temperature changes by 500–1000 years.”


Luoto and Nevalainen, 2017


Li et al., 2017

“The main driving forces behind the Holocene climatic changes in the LYR [Lower Yangtze Region, East China] area are likely summer solar insolation associated with tropical or subtropical macro-scale climatic circulations such as the Intertropical Convergence Zone (ITCZ), Western Pacific Subtropical High (WPSH), and El Niño/Southern Oscillation (ENSO).”


Mayewski et al., 2017


Rydval et al., 2017

“[T]he recent summer-time warming in Scotland is likely not unique when compared to multi-decadal warm periods observed in the 1300s, 1500s, and 1730s“



Reynolds et al., 2017


Rosenthal et al., 2017

“Here we review proxy records of intermediate water temperatures from sediment cores and corals in the equatorial Pacific and northeastern Atlantic Oceans, spanning 10,000 years beyond the instrumental record. These records suggests that intermediate waters [0-700 m] were 1.5-2°C warmer during the Holocene Thermal Maximum than in the last century. Intermediate water masses cooled by 0.9°C from the Medieval Climate Anomaly to the Little Ice Age. These changes are significantly larger than the temperature anomalies documented in the instrumental record. The implied large perturbations in OHC and Earth’s energy budget are at odds with very small radiative forcing anomalies throughout the Holocene and Common Era. … The records suggest that dynamic processes provide an efficient mechanism to amplify small changes in insolation [surface solar radiation] into relatively large changes in OHC.”


Li et al., 2017

“We suggest that solar activity may play a key role in driving the climatic fluctuations in NC [North China] during the last 22 centuries, with its quasi ∼100, 50, 23, or 22-year periodicity clearly identified in our climatic reconstructions. … It has been widely suggested from both climate modeling and observation data that solar activity plays a key role in driving late Holocene climatic fluctuations by triggering global temperature variability and atmospheric dynamical circulation


Goursaud et al., 2017


Guillet et al., 2017


Wilson et al., 2017


Tegzes et al., 2017

Our sortable-silt time series show prominent multi-decadal to multi-centennial variability, but no clear long-term trend over the past 4200 years. … [O]ur findings indicate that variations in the strength of the main branch of the Atlantic Inflow may not necessarily translate into proportional changes in northward oceanic heat transport in the eastern Nordic Seas.”



Tejedor et al., 2017


Fernández-Fernández et al., 2017


Cai and Liu et al., 2017

“2003– 2009 was the warmest period in the reconstruction. 1970– 2000 was colder than the last stage of the Little Ice Age (LIA).”


Köse et al., 2017

“The reconstruction is punctuated by a temperature increase during the 20th century; yet extreme cold and warm events during the 19th century seem to eclipse conditions during the 20th century. We found significant correlations between our March–April spring temperature reconstruction and existing gridded spring temperature reconstructions for Europe over Turkey and southeastern Europe. … During the last 200 years, our reconstruction suggests that the coldest year was 1898 and the warmest year was 1873. The reconstructed extreme events also coincided with accounts from historical records. …  Further, the warming trends seen in our record agrees with data presented by Turkes and Sumer (2004), of which they attributed [20th century warming] to increased urbanization in Turkey.”


Flannery et al., 2017

The early part of the reconstruction (1733–1850) coincides with the end of the Little Ice Age, and exhibits 3 of the 4 coolest decadal excursions in the record. However, the mean SST estimate from that interval during the LIA is not significantly different from the late 20th Century SST mean. The most prominent cooling event in the 20th Century is a decade centered around 1965. This corresponds to a basin-wide cooling in the North Atlantic and cool phase of the AMO.”


Steiger et al., 2017

“Through several idealized and real proxy experiments we assess the spatial and temporal extent to which isotope records can reconstruct surface temperature, 500 hPa geopotential height, and precipitation. We find local reconstruction skill to be most robust across the reconstructions, particularly for temperature and geopotential height, as well as limited non-local skill in the tropics.  These results are in agreement with long-held views that isotopes in ice cores have clear value as local climate proxies, particularly for temperature and atmospheric circulation.”




Chang et al., 2017

“The chironomid-based record from Heihai Lake shows a summer temperature fluctuation within 2.4°C in the last c. 5000 years from the south-east margin of the QTP [Qinghai–Tibetan Plateau]. … The summer temperature changes in this region respond primarily to the variation in the Asian Summer Monsoon. The variability of solar activity is likely an important driver of summer temperatures, either directly or by modifying the strength and intensity of the Indian Ocean Summer Monsoon. … We observed a relatively long-lasting summer cooling episode (c. 0.8°C lower than the 5000-year average) between c. 270 cal. BP and AD c. 1956. … The record shows cooling episodes occurred at c. 3100, 2600, 2100 and 1600 cal. BP.  This is likely related to the period defined as the Northern Hemisphere Little Ice Age (LIA; c. AD 1350–1850, equivalent to 600–100 cal. BP). These possibly relate to the 500-year quasi-periodic solar cycle. Cooling stages between c. 270 and 100 cal. BP were also recorded and these are possibly linked to the LIA suggesting a hemisphere-wide forcing mechanism for this event.”

 


Krossa et al., 2017


Albot, 2017

Growing paleoclimatic evidence suggests that the climatic signals of Medieval Warm Period and the Little Ice Age events can be detected around the world (Mayewski et al., 2004; Bertler et al., 2011). … [T]he causes for these events are still debated between changes in solar output, increased volcanic activity, shifts in zonal wind distribution, and changes in the meridional overturning circulation (Crowley, 2000; Hunt, 2006).”


Zhang et al., 2017

“[S]ummer temperature variability at the QTP [Qinghai-Tibetan Plateau] responds rapidly to solar irradiance changes in the late Holocene”




Kotthoff et al., 2017


Li et al., 2017

“Overall, the strong linkage between solar variability and summer SSTs is not only of regional significance, but is also consistent over the entire North Atlantic region.”


Jones et al., 2017


Vachula et al., 2017


Fischel et al., 2017


Li et al., 2017


Anderson et al., 2017


Woodson et al., 2017

The last ca. 1000 years recorded the warmest SST averaging 28.5°C. We record, for the first time in this region, a cool interval, ca. 1000 years in duration, centered on 5000 cal years BP concomitant with a wet period recorded in Borneo. The record also reflects a warm interval from ca. 1000 to 500 cal years BP that may represent the Medieval Climate Anomaly. Variations in the East Asian Monsoon (EAM) and solar activity are considered as potential drivers of SST trends. However, hydrology changes related to the El Nino-Southern Oscillation (ENSO) variability, ~ shifts of the Western Pacific Warm Pool and migration of the Intertropical Convergence Zone are more likely to have impacted our SST temporal trend. …  The SA [solar activity] trends (Steinhilber et al., 2012) are in general agreement with the regional cooling of SST (Linsley et al., 2010) and the SA [solar activity] oscillations are roughly coincident with the major excursions in our SST data.”


Koutsodendris et al., 2017

“Representing one of the strongest global climate instabilities during the Holocene, the Little Ice Age (LIA) is marked by a multicentennial-long cooling (14-19th centuries AD) that preceded the recent ‘global warming’ of the 20th century. The cooling has been predominantly attributed to reduced solar activity and was particularly pronounced during the 1645-1715 AD and 1790-1830 AD solar minima, which are known as Maunder and Dalton Minima, respectively.”


Browne et al., 2017


Perșoiu et al., 2017


Kawahata et al., 2017

“The SST [sea surface temperature] shows a broad maximum (~17.3 °C) in the mid-Holocene (5-7 cal kyr BP), which corresponds to the Jomon transgression. … The SST maximum continued for only a century and then the SST [sea surface temperatures] dropped by 3.5 °C [15.1 to 11.6 °C] within two centuries. Several peaks fluctuate by 2°C over a few centuries.”


Saini et al., 2017


Dechnik et al., 2017


Wu et al., 2017


Sun et al., 2017

“Our findings are generally consistent with other records from the ISM [Indian Summer Monsoon]  region, and suggest that the monsoon intensity is primarily controlled by solar irradiance on a centennial time scale. This external forcing may have been amplified by cooling events in the North Atlantic and by ENSO activity in the eastern tropical Pacific, which shifted the ITCZ further southwards.”


Wu et al., 2017

“The existence of depressed MAAT [mean annual temperatures] (1.3°C lower than the 3200-year average) between 1480 CE and 1860 CE (470–90 cal. yr BP) may reflect the manifestation of the ‘Little Ice Age’ (LIA) in southern Costa Rica. Evidence of low-latitude cooling and drought during the ‘LIA’ has been documented at several sites in the circum-Caribbean and from the tropical Andes, where ice cores suggest marked cooling between 1400 CE and 1900 CE.  Lake and marine records recovered from study sites in the southern hemisphere also indicate the occurrence of ‘LIA’ cooling. High atmospheric aerosol concentrations, resulting from several large volcanic eruptions and sea-ice/ocean feedbacks, have been implicated as the drivers responsible for the ‘LIA’.”


Park, 2017

Late Holocene climate change in coastal East Asia was likely driven by ENSO variation.   Our tree pollen index of warmness (TPIW) shows important late Holocene cold events associated with low sunspot periods such as Oort, Wolf, Spörer, and Maunder Minimum. Comparisons among standard Z-scores of filtered TPIW, ΔTSI, and other paleoclimate records from central and northeastern China, off the coast of northern Japan, southern Philippines, and Peru all demonstrate significant relationships [between solar activity and climate]. This suggests that solar activity drove Holocene variations in both East Asian Monsoon (EAM) and El Niño Southern Oscillation (ENSO).”


Markle et al., 2017


Dong et al., 2017


Nazarova et al., 2017

“The application of transfer functions resulted in reconstructed T July fluctuations of approximately 3 °C over the last 2800 years. Low temperatures (11.0-12.0 °C) were reconstructed for the periods between ca 1700 and 1500 cal yr BP (corresponding to the Kofun cold stage) and between ca 1200 and 150 cal yr BP (partly corresponding to the Little Ice Age [LIA]). Warm periods (modern T[emperatures] July or higher) were reconstructed for the periods between ca 2700 and 1800 cal yr BP, 1500 and 1300 cal yr BP and after 150 cal yr BP.”


Samartin et al., 2017


Thienemann et al., 2017

“[P]roxy-inferred annual MATs[annual mean air temperatures] show the lowest value at 11,510 yr BP (7.6°C). Subsequently, temperatures rise to 10.7°C at 9540 yr BP followed by an overall decline of about 2.5°C until present (8.3°C).”


Li et al., 2017

“Contrary to the often-documented warming trend over the past few centuries, but consistent with temperature record from the northern Tibetan Plateau, our data show a gradual decreasing trend of 0.3 °C in mean annual air temperature from 1750 to 1970 CE. This result suggests a gradual cooling trend in some high altitude regions over this interval, which could provide a new explanation for the observed decreasing Asian summer monsoon. In addition, our data indicate an abruptly increased interannual-to decadal-scale temperature variations of 0.8 – 2.2 °C after 1970 CE, in terms of both magnitude and frequency, indicating that the climate system in high altitude regions would become more unstable under current global warming.”

Krawczyk et al., 2017



Pendea et al., 2017  (Russia)

The Holocene Thermal Maximum (HTM) was a relatively warm period that is commonly associated with the orbitally forced Holocene maximum summer insolation (e.g., Berger, 1978; Bartlein et al., 2011). Its timing varies widely from region to region but is generally detected in paleorecords between 11 and 5 cal ka BP (e.g., Kaufman et al., 2004; Bartlein et al., 2011; Renssen et al., 2012).  … In Kamchatka, the timing of the HTM varies. Dirksen et al. (2013) find warmer-than-present conditions between 9000 and 5000 cal yr BP in central Kamchatka and between 7000 and 5800 cal yr BP at coastal sites.”

Stivrins et al., 2017  (Latvia)

“Conclusion: Using a multi-proxy approach, we studied the dynamics of thermokarst characteristics in western Latvia, where thermokarst occurred exceptionally late at the Holocene Thermal Maximum. …  [A] thermokarst active phase … began 8500 cal. yr BP and lasted at least until 7400 cal. yr BP. Given that thermokarst arise when the mean summer air temperature gradually increased ca. 2°C beyond the modern day temperature, we can argue that before that point, the local geomorphological conditions at the study site must have been exceptional to secure ice-block from the surficial landscape transformation and environmental processes.”

Bañuls-Cardona et al., 2017  (Spain)

“During the Middle Holocene we detect important climatic events. From 7000 to 6800 [years before present] (MIR 23 and MIR22), we register climatic characteristics that could be related to the end of the African Humid Period, namely an increase in temperatures and a progressive reduction in arboreal cover as a result of a decrease in precipitation. The temperatures exceeded current levels by 1°C, especially in MIR23, where the most highly represented taxon is a thermo-Mediterranean species, M. (T.) duodecimcostatus.”

Reid, 2017 (Global)

The small increase in global average temperature observed over the last 166 years is the random variation of a centrally biased random walk. It is a red noise fluctuation. It is not significant, it is not a trend and it is not likely to continue.”
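Reid’s claim is statistical rather than physical: a positively autocorrelated (“red noise”) series with weak mean reversion, what the paper calls a centrally biased random walk, can wander enough over a century and a half to resemble a trend. A minimal sketch, assuming a simple AR(1) process with arbitrary parameters as a stand-in (an illustration of the concept only, not Reid’s actual model):

```python
# Illustration only: an AR(1) "red noise" series with mild mean reversion
# (a centrally biased random walk) can show century-scale excursions that
# look like trends. Parameters are arbitrary, not taken from Reid (2017).
import random

random.seed(0)
phi, sigma, years = 0.95, 0.1, 166      # persistence, noise scale, record length
x, series = 0.0, []
for _ in range(years):
    x = phi * x + random.gauss(0.0, sigma)   # mean-reverting red-noise step
    series.append(x)

net_change = series[-1] - series[0]
print(f"net change over {years} simulated years: {net_change:+.2f} (arbitrary units)")
```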

Åkesson et al., 2017 (Norway)

“Reconstructions for southern Norway based on pollen and chironomids suggest that summer temperatures were up to 2 °C higher than present in the period between 8000 and 4000 BP, when solar insolation was higher (Nesje and Dahl, 1991; Bjune et al., 2005; Velle et al., 2005a).”

– See more at: http://notrickszone.com/2017/05/29/80-graphs-from-58-new-2017-papers-invalidate-claims-of-unprecedented-global-scale-modern-warming/

Break Out The CO2 Bubbly; Al Gore Is Crying In His Beer


Source: NoMoreFakeNews.com | JonRappoport.wordpress.com
By: Jon Rappoport
June 2, 2017

“All right, contestants, listen carefully. Here’s the final question. The winner will be awarded three years living in a hut with no electricity or heat and he’ll dig for tubers and roots so he can eat—thus contributing to a decrease in global warming. All right, here is the question: Whose private jet spews more CO2? Al Gore’s or Leo DiCaprio’s?”

With Trump’s historic rejection of the Paris climate treaty, Al Gore is deep in a funk.

But don’t weep for Al. He can still amuse himself counting his money. Yes, Al’s done very well for himself hustling the “settled science” all these years, shilling for an energy-depleted Globalist utopia.

Al knows actual science the way a June bug knows how to pilot a spaceship.

Every movement needs such men.

Consider facts laid out in an uncritical Washington Post story (October 10, 2012, “Al Gore has thrived as a green-tech investor”):

In 2001, Al was worth less than $2 million. By 2012, it was estimated he’d piled up a nice neat $100 million in his lock box.

How did he do it? Well, he invested in 14 green companies, which inhaled—via loans, grants and tax relief—somewhere in the neighborhood of $2.5 billion from the federal government to go greener.

Therefore, Gore’s investments paid off, because the federal government was providing massive cash backup to those companies. It’s nice to have friends in high places.

For example, Gore’s investment firm at one point held 4.2 million shares of an outfit called Iberdrola Renovables, which was building 20 wind farms across the United States.

Iberdrola was blessed with $1.5 billion from the Federal government for the work which, by its own admission, saved its corporate financial bacon. Every little bit helps.

Then there was a company called Johnson Controls. It made batteries, including those for electric cars. Gore’s investment company, Generation Investment Management (GIM), doubled its holdings in Johnson Controls in 2008, when shares cost as little as $9 a share. Gore sold when shares cost $21 to $26—before the market for electric-car batteries fell on its head.

Johnson Controls had been bolstered by $299 million dropped at its doorstep by the administration of President Barack Obama.

On the side, Gore had been giving speeches on the end of life as we know it on Earth, for as much as $175,000 a pop. (Gore was constantly on the move from conference to conference, spewing jet fumes in his wake.) Those lecture fees can add up.

So Gore, as of 2012, had $100 million.

The man worked every angle to parlay fear of global-warming catastrophes into a humdinger of a personal fortune. And he didn’t achieve his new status in the free market. The federal government helped out with major, major bucks.

This wasn’t an entrepreneur relying exclusively on his own smarts and hard work. Far from it.

—How many scientists and other PhDs have been just saying no to the theory of manmade global warming?

2012: A letter to The Wall Street Journal signed by 16 scientists said no. Among the luminaries: William Happer, professor of physics at Princeton University; Richard Lindzen, professor of atmospheric sciences at Massachusetts Institute of Technology; William Kininmonth, former head of climate research at the Australian Bureau of Meteorology.

And then there was the Global Warming Petition Project, or the Oregon Petition, that said no. According to Petitionproject.org, the petition has the signatures of “31,487 American scientists,” of which 9,029 stated they had Ph.Ds.

Global warming is one of the Rockefeller Globalists’ chief issues. Manipulating it entails convincing populations that a massive intervention is necessary to stave off the imminent collapse of life on Earth. Therefore, sovereign nations must be eradicated. Political power and decision-making must flow from above, from “those who are wiser.”

Globalists want all national governments on the planet to commit to lowering energy production by a significant and destructive percentage in the next 15 years—“to save us from a horrible fate.”

Their real agenda is clear: “The only solution to climate change is a global energy-management network. We (the Globalist leaders) are in the best position to manage such a system. We will allocate mandated energy-use levels throughout the world, region by region, nation by nation, and eventually, citizen by citizen.”

This is the long-term goal. This is the Globalists’ Holy Grail.

Slavery imposed through energy.

Al Gore has done admirable work for his bosses. And for himself. As a past politician with large name recognition, he’s promoted fake science, tried to scare the population of Earth, and financially leveraged himself to the hilt in the fear-crevice he helped create.

Ask not for whom the bells toll. They toll with delight. They’re attached to cash registers. And Al has stuck his hands in and removed the cash.

He might be crying in his beer today, after Trump rejected the Paris climate treaty, but Al’s also thinking about how he can play to the Left that’s so outraged at Trump’s decision. More speeches, more “inconvenient truth” films, maybe a summit with Leo DiCaprio and Obama.

Yes, there’s still money in those hills…quite possibly more money than ever.

Read More At: JonRappoport.wordpress.com
_______________________________________________________________

Jon Rappoport

The author of three explosive collections, THE MATRIX REVEALED, EXIT FROM THE MATRIX, and POWER OUTSIDE THE MATRIX, Jon was a candidate for a US Congressional seat in the 29th District of California. He maintains a consulting practice for private clients, the purpose of which is the expansion of personal creative power. Nominated for a Pulitzer Prize, he has worked as an investigative reporter for 30 years, writing articles on politics, medicine, and health for CBS Healthwatch, LA Weekly, Spin Magazine, Stern, and other newspapers and magazines in the US and Europe. Jon has delivered lectures and seminars on global politics, health, logic, and creative power to audiences around the world. You can sign up for his free NoMoreFakeNews emails here or his free OutsideTheRealityMachine emails here.

Monsanto quietly announces they are investing heavily in gene editing

Image: Monsanto quietly announces they are investing heavily in gene editing

Source: NaturalNews.com
Vicki Batts
June 2, 2017

Is anyone surprised that Monsanto is moving on from “conventional” genetically modified organisms to gene editing? It seems that the world’s most evil corporation is convinced that the new gene editing technology that’s been taking the globe by storm will somehow ease consumer concerns about eating GMOs.

Whether or not the difference between the two is substantial enough to address the many fundamental issues that surround GMO seeds, which extend far beyond just concerns about the effects of consumption, remains to be seen. Personally, this writer feels that the alleged differences between “genetically modified” and “gene-edited” are not going to be very persuasive.

Dr. Robert Fraley, Monsanto’s chief technology officer, recently told Fox Business, “I see gene editing very differently [than GMOs] because it’s being used today broadly by pharmaceutical, agricultural companies, universities and hundreds of startup companies — and I think there is broad support for this science and I think that is going to make a big difference.”

Supposedly, the key difference between GMOs and “gene-edited crops” is that while GMOs rely on genes from different species (resulting in transgenic organisms), these gene-edited versions will be “generated through precise editing of an organism’s native genome,” as Business Insider explains.

Monsanto has recently announced that it will be investing heavily in a new gene editing technology known as CRISPR-Cas9, a technique that allows scientists to select, snip and replace certain genetic components. It’s essentially a genetic “find and replace” tool — but there are many questions about its safety.
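The “find and replace” description is only a metaphor, but it can be pictured in a few lines of code. The sketch below is a toy illustration of that metaphor, not of the biology (real editing involves guide RNAs, Cas9 cutting and cellular repair pathways, none of which appear here), and the sequences in it are invented for the example.

```python
# Toy illustration of the "find and replace" metaphor for CRISPR-Cas9.
# This models nothing biological; it simply swaps one short, made-up
# subsequence for another inside a made-up "genome" string.
genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"   # hypothetical sequence
target = "GGGCCGCTGA"                                # hypothetical target site
edit   = "GGGACGCTGA"                                # hypothetical replacement

if target in genome:
    edited = genome.replace(target, edit, 1)         # "snip and replace", string style
    print("before:", genome)
    print("after: ", edited)
else:
    print("target site not found")
```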

This technology purportedly allows scientists to manipulate a plant’s DNA without having to pull foreign DNA from other species, like current GMOs. However, you may recall that this same CRISPR-Cas9 technology was used to create human-pig embryos — which are, obviously, transgenic organisms.

The use of CRISPR-Cas9 technology in crops, therefore, would not implicitly guarantee that any creations derived from it would be free of foreign DNA. The potential for transgenic creations is absolutely still quite real.

Fraley says that the CRISPR technology allows them to precisely edit a gene without having to replace it entirely. However, there will still likely be concerns about where the replacement parts for snipped genes are coming from. According to Fraley, we can expect to see the first gene-edited creations on the market within the next five years.

Megan Westgate, the executive director of Non-GMO Project, explained to Fox Business, “While these new technologies are touted to be more precise than older genetic engineering technologies, it is widely accepted in the scientific community that there can be ‘off target’ effects to the genome when the technologies are utilized. GMOs, including the products of these new technologies, have not been adequately tested—no long-term feeding studies have been conducted—and people are starting to connect these experimental technologies to health concerns.”

Fraley, like other GMO proponents, claims that the skepticism of GMOs is due to the fact that Monsanto failed to educate people about the “science” of GMOs early on. And of course, by education he means “brain-washing.” They didn’t realize that the public would be smart enough to ask pertinent questions not just about the safety of GMOs, but about everything that tends to come along with them: pesticides, herbicides, chemical fertilizers, and monocrop farming techniques — all of which can be harmful to the environment.

Claiming that there are “vast” differences between “genetically modified” and “gene-edited” crops could be seen as an exercise in semantics. The fact of the matter is that many people feel strongly about not eating food that has been modified in a lab, by humans who think they know what they’re doing. This is not likely to change just because a new label has been slapped on it.

Regardless of how you feel about genetically modified organisms, or their new “edited” counterparts, the fact remains that every person should have the right to choose what kind of food they want to consume — and the call to label these new “gene-edited” foods needs to begin before they hit the shelves.

Read More At: NaturalNews.com

Sources:

FoxBusiness.com

BusinessInsider.com

CNN.com

SustainableTable.org