SkyNet (From the Terminator Movies) and Big Brother – Audio and Text


There are those who think that this is science fiction. Guess again…

by Nafeez Ahmed
November 24, 2014
from Motherboard.Vice Website

Nafeez Ahmed, Ph.D. is an investigative journalist and international security scholar.

He is author of A User’s Guide to the Crisis of Civilization and the sci-fi thriller, Zero Point.

 

Pentagon officials are worried that the US military is losing its edge compared to competitors like China, and are willing to explore almost anything to stay on top – including creating watered-down versions of the Terminator.

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years.

By then, they believe, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel (below image) announced the ‘Defense Innovation Initiative‘ – a sweeping plan to identify and develop cutting edge technology breakthroughs,

“over the next three to five years and beyond” to maintain global US “military-technological superiority.”

Defense Secretary Chuck Hagel provides remarks during the Reagan National Defense Forum at The Ronald Reagan Presidential Library in Simi Valley, Calif., Nov. 15, 2014.

Areas to be covered by the DoD program include,

  • robotics

  • autonomous systems

  • miniaturization

  • Big Data

  • advanced manufacturing, including 3D printing

But just how far down the rabbit hole Hagel’s initiative could go – whether driven by desperation, fantasy or hubris – is revealed by an overlooked Pentagon-funded study, published quietly in mid-September by the DoD National Defense University’s (NDU) Center for Technology and National Security Policy in Washington DC.

The Pentagon plans to monopolize imminent “transformational advances” in nanotechnology, robotics, and energy.

The 72-page document (Policy Challenges of Accelerating Technological Change – Security Policy and Strategy Implications of Parallel Scientific Revolutions) throws detailed light on the far-reaching implications of the Pentagon’s plan to monopolize imminent “transformational advances” in,

  • biotechnology

  • robotics and artificial intelligence

  • information technology

  • nanotechnology

  • energy

Hagel’s initiative is being overseen by deputy defense secretary Robert O. Work, lead author of a report released last January by the Center for a New American Security (CNAS), “20YY – Preparing for War in the Robotic Age.”

Work’s report is also cited heavily in the new study published by the NDU, a Pentagon-funded higher education institution that trains US military officials and develops government national security strategy and defense policies.

The NDU study warns that while accelerating technological change will,

“flatten the world economically, socially, politically, and militarily, it could also increase wealth inequality and social stress,” and argues that the Pentagon must take drastic action to avoid the potential decline of US military power: “For DoD to remain the world’s preeminent military force, it must redefine its culture and organizational processes to become more networked, nimble, and knowledge-based.”

The authors of the NDU paper, Dr James Kadtke and Dr Linton Wells, are seasoned long-term Pentagon advisers, both affiliated with the NDU’s technology center which produces research “supporting the Office of the Secretary of Defense, the Services, and Congress.”

Kadtke was previously a senior official at the White House’s National Nanotechnology Coordinating Office, while Wells – who served under Paul Wolfowitz as DoD chief information officer and deputy assistant defense secretary – was until this June NDU’s Force Transformation Chair.

Wells also chairs a little-known group known as the ‘Highlands Forum,’ which is run by former Pentagon staffer Richard O’Neill on behalf of the DoD.

The Forum brings together military and information technology experts to explore the defense policy issues arising from the impact of the internet and globalization.

Explaining the Highlands Forum process in 2006 to Government Executive magazine, Wells described the Forum as a DoD-sponsored “idea engine” that,

“generates ideas in the minds of government people who have the ability to act through other processes… What happens out of Highlands is you get people who come back with an idea and say, ‘Now how can I cause this to happen?'”

Big Data’s Big Brother

A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on,

“Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”,

…in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of online sources, social media, and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through,

“monitoring of individuals and populations using sensors, wearable devices, and IOT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.”

The Pentagon can build capacity for this,

“in partnership with large private sector providers, where the most innovative solutions are currently developing.”

In particular, the Pentagon must improve its capacity to analyze data sets quickly, by investing in,

“automated analysis techniques, text analytics, and user interface techniques to reduce the cycle time and manpower requirements required for analysis of large data sets.”
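To make the kind of automation being described concrete, here is a minimal sketch – not anything taken from the NDU paper itself – of automated text analytics over a batch of documents: software surfaces a ranked summary of recurring terms so that an analyst reviews the summary rather than every file. The stop-words and sample documents are purely illustrative.

```python
# Illustrative sketch only: a crude text-analytics pass that surfaces
# frequently mentioned terms across many documents, so a human analyst
# triages a ranked summary instead of reading every file.
import re
from collections import Counter

STOPWORDS = {"the", "and", "for", "that", "with", "this", "from", "are", "was"}

def top_terms(documents, n=20):
    """Return the n most common non-trivial words across all documents."""
    counts = Counter()
    for text in documents:
        words = re.findall(r"[a-z]{3,}", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

if __name__ == "__main__":
    docs = [
        "Meeting moved to the northern warehouse on Friday.",
        "Shipment delayed; warehouse access requires new credentials.",
    ]
    for term, freq in top_terms(docs, n=5):
        print(f"{term}: {freq}")
```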

Kadtke and Wells want the US military to take advantage of the increasing interconnection of people and devices via the new ‘Internet of Things’, through the use of “embedded systems” in,

  • automobiles

  • factories

  • infrastructure

  • appliances and homes

  • pets

  • potentially, inside human beings

Due to the advent of,

“cloud robotics… the line between conventional robotics and intelligent everyday devices will become increasingly blurred.”

Cloud robotics, a term coined by Google’s new robotics chief, James Kuffner, allows individual robots to augment their capabilities by connecting through the internet to share online resources and collaborate with other machines.
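The general pattern behind cloud robotics can be sketched in a few lines: a robot with limited onboard computing sends a hard perception task to a remote service and acts on the shared result. The endpoint, payload format, and function names below are assumptions made for illustration, not any real robotics API.

```python
# Illustrative sketch of the cloud-robotics pattern: a robot offloads a
# perception task to a shared remote service over the internet. The URL
# and payload format are placeholders, not a real API.
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/recognize"  # hypothetical service

def recognize_object(image_bytes: bytes) -> dict:
    """Offload object recognition to a shared cloud service."""
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.loads(response.read())

# A robot would call recognize_object() on a camera frame and act on the
# returned labels; many robots sharing one service also share what it learns.
```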

By 2030, nearly every aspect of global society could become, in their words,

“instrumented, networked, and potentially available for control via the Internet, in a hierarchy of cyber-physical systems.”

Yet the most direct military application of such technologies, the Pentagon study concludes, will be in,

“Command-Control-Communications, Computers and Intelligence-Surveillance-Reconnaissance (C4ISR)” – a field led by “world-class organizations such as the National Security Agency (NSA).”

Clever Kill Bots in the Cloud

Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality.

Robots could also become embedded in civilian life to perform,

“surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

The main challenge to such robot institutionalization will come from a “political backlash” to robots being able to determine by themselves when to kill.

To counter public objections, they advocate that the Pentagon should be “highly proactive” in ensuring,

“it is not perceived as creating weapons systems without a ‘human in the loop.’ It may be that DoD should publicly self-limit its operational doctrine on the use of such systems to head off public or international backlash to its development of autonomous systems.”

Despite this PR move, they recommend that DoD should still,

“remain ahead of the curve” by developing “operational doctrine for forces made up significantly or even entirely of unmanned or autonomous elements.”

The rationale is to “augment or substitute for human operators” as much as possible, especially for missions that are “hazardous,” “impractical,” or “impossible” for humans (like, perhaps, all wars?).

In just five years, the study reports, Pentagon research to improve robot intelligence will bear “significant advances.”

Skynet by 2020s?

Perhaps the most disturbing dimension among the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI” – or at least a form of “weak AI” that approximates some features of the former.

Strong AI should be able to simulate a wide range of human cognition, and include traits like consciousness, sentience, sapience, or self-awareness.

Many now believe, Kadtke and Wells observe, that,

“strong AI may be achieved sometime in the 2020s.”

They report that a range of technological advances support “this optimism,” especially that,

“computer processors will likely reach the computational power of the human brain sometime in the 2020s”.

Intel aims to reach this milestone by 2018.

Other relevant advances in development include,

“full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson.”

As the costs of robotics manufacturing and cloud computing plummet, the NDU paper says, AI advances could even allow for automation of high-level military functions like,

  • “problem solving”

  • “strategy development”

  • “operational planning”

“In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries,” the paper says (thankfully, no plans to add ‘living tissue’ on the outside are mentioned).

The study thus foresees the Pentagon playing a largely supervisory role over autonomous machines as increasingly central to all dimensions of warfare – from operational planning to identifying threats via surveillance and social media data-mining; from determining enemy targets to actually pursuing and executing them.

There is no soul-searching, though, about the obvious risks of using AI to automate such core elements of military planning and operations, beyond the following oblique sentence:

“One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience.”

But if the reservations of billionaire tech entrepreneur Elon Musk are anything to go by, the Pentagon’s hubris may be deeply misplaced.

Musk, an early investor in the AI company DeepMind now owned by Google, has warned of “something dangerous” happening in five years due to “close to exponential” growth of AI at the firm – and some AI experts agree.

Synthetic Genetically Enhanced Laser-Armed Prosthetic People

As if this wasn’t disturbing enough, Kadtke and Wells go on to chart significant developments across a wide range of other significant technologies.

They point to the development of Directed Energy Weapons (DEW) that project electromagnetic radiation as laser light, and which are already being deployed in test form.

This August, USS Ponce deployed with an operational laser – a matter that was only reported in the last few days.

DEWs, the NDU authors predict,

“will be a very disruptive military technology” due to “unique characteristics, such as near-zero flight time, high accuracy, and an effectively infinite magazine.”

The Pentagon plans to widely deploy DEWs aboard ships within a few years. It also wants to harvest technologies that could ‘upgrade’ human physical, psychological, and cognitive makeup.

The NDU paper catalogues a range of relevant fields, including,

  • personalized (genetic) medicine

  • tissue and organ regeneration via stem cells

  • implants such as computer chips and communication devices

  • robotic prosthetics

  • direct brain-machine interfaces

  • potentially direct brain-brain communications

Another area experiencing breakthrough developments is synthetic biology (SynBio).

Scientists have recently created cells with DNA incorporating non-natural base pairs, opening the door to create entirely new “designer life forms,” the Pentagon report enthuses, and to engineer them with “specialized and exotic properties.”

Kadtke and Wells flag up a recent Pentagon assessment of current SynBio research suggesting,

“great promise for the engineering of synthetic organisms” useful for a range of “defense relevant applications.”

It is already possible to replace organs with artificial electro-mechanical devices for a wide range of body parts.

Citing ongoing US Army research on “cognition and neuro-ergonomics,” Kadtke and Wells forecast that:

“Reliable artificial lungs, ear and eye implants, and muscles will all likely be commercially available within 5 to 10 years.”

Even more radically, they note the emerging possibility of using stem cells to regenerate every human body part.

Meshing such developments with robotics has further radical implications. The authors highlight successful demonstrations of implantation of silicon memory and processors into the brain, as well as “purely thought controlled devices.”

In the long-term, these breakthroughs could make ‘wearable devices’ like Google Glass look like ancient fossils, superseded by,

“distributed human-machine systems employing brain-machine interfaces and analog physiomimetic processors, as well as hybrid cybernetic systems, which could provide seamless and artificially enhanced human data exploration and analysis.”

We’re all terror suspects

Taken together, the “scientific revolutions” catalogued by the NDU report – if militarized – would grant the Department of Defense (DoD) “disruptive new capabilities” of a virtually totalitarian quality.

As I was told by former NSA senior executive Thomas Drake, the whistleblower who inspired Edward Snowden, ongoing Pentagon-funded research on data-mining feeds directly into fine-tuning the algorithms used by the US intelligence community to identify not just ‘terror suspects’, but also targets for the CIA’s drone-strike kill lists.

Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone-strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists” – among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups.

Multiple studies show that a substantive number of drone strike victims are civilians – and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone program authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists‘, who “support political violence” and thus pose a threat to US interests.

It is far from clear that the Pentagon’s Skynet-esque vision of future warfare will actually reach fruition.

That the aspiration is being pursued so fervently in the name of ‘national security,’ in the age of austerity no less, certainly raises questions about whether the most powerful military in the world is not so much losing its edge, as it is losing the plot.



by Bruce Schneier
4 October 2013

from TheGuardian Website

 

 

 

Secret servers and a privileged position on the internet’s backbone used to identify users and attack target computers

Tor is a well-designed and robust anonymity tool, and successfully attacking it is difficult.
Photograph: Magdalena Rehova/Alamy

 

The online anonymity network Tor is a high-priority target for the National Security Agency.

The work of attacking Tor is done by the NSA‘s application vulnerabilities branch, which is part of the systems intelligence directorate, or SID. The majority of NSA employees work in SID, which is tasked with collecting data from communications systems around the world.

According to a top-secret NSA presentation provided by the whistleblower Edward Snowden, one successful technique the NSA has developed involves exploiting the Tor browser bundle, a collection of programs designed to make it easy for people to install and use the software.

The trick identifies Tor users on the internet and then executes an attack against their Firefox web browser.

The NSA refers to these capabilities as CNE, or computer network exploitation.

The first step of this process is finding Tor users. To accomplish this, the NSA relies on its vast capability to monitor large parts of the internet. This is done via the agency’s partnership with US telecoms firms under programs codenamed,

  • Stormbrew

  • Fairview

  • Oakstar

  • Blarney

The NSA creates “fingerprints” that detect http requests from the Tor network to particular servers.

These fingerprints are loaded into NSA database systems like XKeyscore, a bespoke collection and analysis tool which NSA boasts allows its analysts to see “almost everything” a target does on the internet.

Using powerful data analysis tools with codenames such as Turbulence, Turmoil and Tumult, the NSA automatically sifts through the enormous amount of internet traffic that it sees, looking for Tor connections.
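The basic idea of such fingerprinting can be illustrated with entirely public information: the addresses of Tor relays are published in the network consensus, so a passive observer can flag any flow whose destination matches that list. The sketch below is a simplification of that public technique, not the NSA’s actual tooling, and the relay addresses are placeholders.

```python
# Simplified illustration of why Tor connections are easy to spot in bulk
# traffic: public Tor relay addresses are listed in the network consensus,
# so a passive observer can flag flows whose destination matches the list.
# The addresses below are documentation placeholders, not real relays.

KNOWN_TOR_RELAYS = {
    "192.0.2.10",    # placeholder relay address
    "198.51.100.7",  # placeholder relay address
}

def flag_tor_flows(flows):
    """Yield (src, dst) pairs whose destination is a known Tor relay."""
    for src, dst in flows:
        if dst in KNOWN_TOR_RELAYS:
            yield src, dst

observed = [("203.0.113.5", "192.0.2.10"), ("203.0.113.6", "93.184.216.34")]
for src, dst in flag_tor_flows(observed):
    print(f"{src} appears to be connecting to the Tor network via {dst}")
```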

Last month, Brazilian TV news show Fantastico showed screenshots of an NSA tool that had the ability to identify Tor users by monitoring internet traffic.

The very feature that makes Tor a powerful anonymity service, and the fact that all Tor users look alike on the internet, makes it easy to differentiate Tor users from other web users. On the other hand, the anonymity provided by Tor makes it impossible for the NSA to know who the user is, or whether or not the user is in the US.

After identifying an individual Tor user on the internet, the NSA uses its network of secret internet servers to redirect those users to another set of secret internet servers, with the codename FoxAcid, to infect the user’s computer.

FoxAcid is an NSA system designed to act as a matchmaker between potential targets and attacks developed by the NSA, giving the agency opportunity to launch prepared attacks against their systems.

Once the computer is successfully attacked, it secretly calls back to a FoxAcid server, which then performs additional attacks on the target computer to ensure that it remains compromised long-term, and continues to provide eavesdropping information back to the NSA.

Exploiting the Tor browser bundle

Tor is a well-designed and robust anonymity tool, and successfully attacking it is difficult.

The NSA attacks we found individually target Tor users by exploiting vulnerabilities in their Firefox browsers, and not the Tor application directly.

This, too, is difficult. Tor users often turn off vulnerable services like scripts and Flash when using Tor, making it difficult to target those services. Even so, the NSA uses a series of native Firefox vulnerabilities to attack users of the Tor browser bundle.

According to the training presentation provided by Snowden, EgotisticalGiraffe exploits a type confusion vulnerability in E4X, which is an XML extension for Javascript. This vulnerability exists in Firefox 11.0 to 16.0.2, as well as Firefox 10.0 ESR – the Firefox version used until recently in the Tor browser bundle.

According to another document, the vulnerability exploited by EgotisticalGiraffe was inadvertently fixed when Mozilla removed the E4X library with the vulnerability, and when Tor added that Firefox version into the Tor browser bundle, but NSA were confident that they would be able to find a replacement Firefox exploit that worked against version 17.0 ESR.

The Quantum system

To trick targets into visiting a FoxAcid server, the NSA relies on its secret partnerships with US telecoms companies.

As part of the Turmoil system, the NSA places secret servers, codenamed Quantum, at key places on the internet backbone. This placement ensures that they can react faster than other websites can.

By exploiting that speed difference, these servers can impersonate a visited website to the target before the legitimate website can respond, thereby tricking the target’s browser into visiting a FoxAcid server.

In the academic literature, these are called “man-in-the-middle” attacks, and have been known to the commercial and academic security communities. More specifically, they are examples of “man-on-the-side” attacks.

They are hard for any organization other than the NSA to reliably execute, because they require the attacker to have a privileged position on the internet backbone, and exploit a “race condition” between the NSA server and the legitimate website.
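The race condition can be shown with a toy model: whichever response reaches the browser first is the one it acts on, and a server sitting on the backbone has a far shorter round-trip time than the distant legitimate site. The latencies below are invented purely to illustrate why the privileged position matters.

```python
# Toy model of the man-on-the-side race: the client accepts whichever
# response arrives first, and a server on the backbone answers much
# faster than the legitimate website. Latency ranges are made up.
import random

def first_response(legit_latency_ms: float, injected_latency_ms: float) -> str:
    """Return which response the client ends up acting on."""
    return "injected" if injected_latency_ms < legit_latency_ms else "legitimate"

wins = 0
trials = 10_000
for _ in range(trials):
    legit = random.uniform(40, 120)   # normal path to the real website
    injected = random.uniform(5, 30)  # privileged position on the backbone
    if first_response(legit, injected) == "injected":
        wins += 1

print(f"Injected response won the race in {wins / trials:.0%} of trials")
```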

This top-secret NSA diagram (below image), made public last month, shows a Quantum server impersonating Google in this type of attack.


 

The NSA uses these fast Quantum servers to execute a packet injection attack, which surreptitiously redirects the target to the FoxAcid server.

An article in the German magazine Spiegel, based on additional top-secret Snowden documents, mentions an NSA-developed attack technology with the name of QuantumInsert that performs redirection attacks.

Another top-secret Tor presentation provided by Snowden mentions QuantumCookie to force cookies onto target browsers, and another Quantum program to “degrade/deny/disrupt Tor access”.

This same technique is used by the Chinese government to block its citizens from reading censored internet content, and has been hypothesized as a probable NSA attack technique.

The FoxAcid system

According to various top-secret documents provided by Snowden, FoxAcid is the NSA codename for what the NSA calls an “exploit orchestrator,” an internet-enabled system capable of attacking target computers in a variety of different ways.

It is a Windows 2003 computer configured with custom software and a series of Perl scripts. These servers are run by the NSA’s tailored access operations, or TAO, group. TAO is another subgroup of the systems intelligence directorate.

The servers are on the public internet. They have normal-looking domain names, and can be visited by any browser from anywhere; ownership of those domains cannot be traced back to the NSA.

However, if a browser tries to visit a FoxAcid server with a special URL, called a FoxAcid tag, the server attempts to infect that browser, and then the computer, in an effort to take control of it.

The NSA can trick browsers into using that URL using a variety of methods, including the race-condition attack mentioned above and frame injection attacks.

FoxAcid tags are designed to look innocuous, so that anyone who sees them would not be suspicious. An example of one such tag [LINK REMOVED] is given in another top-secret training presentation provided by Snowden.

There is no currently registered domain name by that name; it is just an example for internal NSA training purposes.

The training material states that merely trying to visit the homepage of a real FoxAcid server will not result in any attack, and that a specialized URL is required. This URL would be created by TAO for a specific NSA operation, and unique to that operation and target. This allows the FoxAcid server to know exactly who the target is when his computer contacts it.
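The mechanics of a per-operation, per-target URL can be sketched generically: the link carries an opaque token, and only a private table on the server maps that token back to an operation and a target. The scheme below is entirely hypothetical and is not FoxAcid’s actual design; it simply shows how an innocuous-looking URL can identify whoever visits it.

```python
# Hypothetical illustration of a per-target tag: the URL carries an opaque
# token, and only the server's private table maps it back to an operation
# and a target. This is not FoxAcid's actual scheme.
import secrets

TAG_TABLE = {}  # token -> (operation, target), held server-side only

def mint_tag(operation: str, target: str, base_url: str) -> str:
    """Create an innocuous-looking URL that uniquely identifies one target."""
    token = secrets.token_urlsafe(8)
    TAG_TABLE[token] = (operation, target)
    return f"{base_url}/news?id={token}"

def resolve_tag(token: str):
    """Server-side lookup: learn which operation and target just arrived."""
    return TAG_TABLE.get(token)

url = mint_tag("example-operation", "example-target", "https://example.com")
print(url)
print(resolve_tag(url.rsplit("=", 1)[-1]))
```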

According to Snowden, FoxAcid is a general CNE system, used for many types of attacks other than the Tor attacks described here. It is designed to be modular, with flexibility that allows TAO to swap and replace exploits if they are discovered, and only run certain exploits against certain types of targets.

The most valuable exploits are saved for the most important targets. Low-value exploits are run against technically sophisticated targets where the chance of detection is high.

TAO maintains a library of exploits, each based on a different vulnerability in a system. Different exploits are authorized against different targets, depending on the value of the target, the target’s technical sophistication, the value of the exploit, and other considerations.

In the case of Tor users, FoxAcid might use EgotisticalGiraffe against their Firefox browsers.

According to a top-secret operational management procedures manual provided by Snowden, once a target is successfully exploited it is infected with one of several payloads. Two basic payloads mentioned in the manual are designed to collect configuration and location information from the target computer so an analyst can determine how to further infect the computer.

These decisions are made in part by the technical sophistication of the target and the security software installed on the target computer, called Personal Security Products, or PSP, in the manual.

FoxAcid payloads are updated regularly by TAO. For example, the manual refers to version 8.2.1.1 of one of them. FoxAcid servers also have sophisticated capabilities to avoid detection and to ensure successful infection of its targets.

The operations manual states that a FoxAcid payload with the codename DireScallop can circumvent commercial products that prevent malicious software from making changes to a system that survive a reboot process.

The NSA also uses phishing attacks to induce users to click on FoxAcid tags.

TAO additionally uses FoxAcid to exploit callbacks – which is the general term for a computer infected by some automatic means – calling back to the NSA for more instructions and possibly to upload data from the target computer.

According to a top-secret operational management procedures manual, FoxAcid servers configured to receive callbacks are codenamed FrugalShot.

After a callback, the FoxAcid server may run more exploits to ensure that the target computer remains compromised long term, as well as install “implants” designed to exfiltrate data.

By 2008, the NSA was getting so much FoxAcid callback data that they needed to build a special system to manage it all.



by Frank Furedi

January 09, 2021

from RT Website

 

Frank Furedi is an author and social commentator. He is an emeritus professor of sociology at the University of Kent in Canterbury.

Author of How Fear Works: The Culture of Fear in the 21st Century.

Follow him on Twitter @Furedibyte

 


Logo of Google at an office building in Zurich
© Reuters / Arnd Wiegmann

Big Tech has just taken a gigantic step toward its objective of gaining total control over what can and what cannot be said on the internet.

Apple and Google have commanded Parler, a social network used by conservatives, to police its users.

In effect, the warning issued to Parler means,

‘do as you are told or face digital annihilation!’…

Google suspended Parler from its ‘Play Store’, declaring that it will shut the network until it rigorously polices its app.

Apple was reported to have followed suit giving Parler 24 hours to fall in line,

otherwise it would be removed from Apple’s App Store…

Apple and Google’s declaration of war on Parler has serious implications.

These two giant companies make operating systems that support nearly every smartphone in the world.

That means that if Apple shuts Parler out of its App Store, people would not be able to download the app on their iPhones or iPads.

The timing of the edict issued by the masters of Silicon Valley is not a coincidence.

Parler is one of the fastest growing apps on the internet.

Millions of conservatives fed up with the censorious behavior of Twitter and Facebook have been attracted to this social network.

In the aftermath of President Trump being forced off Facebook and Twitter, it was expected that millions of his supporters would turn to Parler to freely express their convictions.

Big Tech censorship is nothing new.

In recent years, social-media companies – once reluctant to be drawn into becoming official censors and arbiters of truth – have increasingly clamped down on what they deem to be hate speech or misinformation.

Since the beginning of the ‘pandemic’ Big Tech companies have behaved as if they are digital ‘gods’…

These powerful unaccountable billionaires have issued one Papal Bull after another…

Facebook has used the ‘pandemic’ to expand its policing of what can be posted.

Initially it stated that it would continue to remove,

“misinformation that could contribute to imminent physical harm”,

…while deploying its army of fact-checkers to flag certain posts, depress their distribution, and direct sharers of such material to ‘reliable’ information.

A few weeks later, in April 2020, it was reported that Facebook was removing event posts for anti-lockdown gatherings.

Early on in the pandemic, Susan Wojcicki, the CEO of YouTube, declared that she saw YouTube’s role as the arbiter of ‘truth’ on the coronavirus.

She stated that anything that contradicted the ‘recommendations’ of the WHO would be removed from her platform.

That Big Tech sees itself as a veritable global power that stands above elected governments was strikingly illustrated by Facebook CEO Mark Zuckerberg, when he announced that,

Trump’s page would be closed down, at the very least, for the rest of his presidency.

A day later, Twitter followed suit and suspended Trump’s account permanently…

This humiliation of the American president indicates that a handful of billionaire capitalists now get to decide,

who can have a voice in the digital public square…!

Big Tech companies censoring their own platforms is bad enough.

However, when they take it upon themselves to determine how another independent social network must police itself, they have in effect assumed a tyrannical role over the entire internet.

Their declaration of war on Parler indicates that they see themselves not simply as private companies but,

as global institutions that can wield political and policing power over the digital world…

It is likely that Parler will be forced to cave in and accept the terms imposed on it by Apple and Google.

John Matze, Parler’s CEO, has gone on record to state that he believes that,

“we can retain our values and make Apple happy quickly.”

If Parler is forced to fall in line with the edict issued by Big Tech then it will constitute the greatest blow struck against internet freedom so far.

Despite its rhetoric of supporting diversity, Big Tech is distinctly opposed to the diversity of opinion.

As recent events show,

they intend to turn the digital world into an entirely homogeneous system, where the only values that can be freely expressed are those of Silicon Valley and Hollywood…

Restoring the freedom to express whatever view you want to put forward on the internet is one of the most important challenges confronting genuine democrats.

by Dr. Joseph Mercola
August 29, 2020
from Mercola Website

 


Story at-a-glance

  • Documentary filmmaker and BBC journalist Adam Curtis has developed a cult following for his eccentric films that combine BBC archival footage into artistic montages with dark narratives; his latest film, HyperNormalisation, came out in 2016

  • HyperNormalisation tells the story of how politicians, financiers and “technological utopians” constructed a fake world over the last four decades in an attempt to maintain power and control

  • Their fake world is simpler than the real world by design, and as a result people went along with it because the simplicity was reassuring

  • The film takes viewers on a timeline of recent history that appears as though you’re seeing bits and pieces of a scrapbook, but which ultimately support the larger message that the world is being controlled by a powerful few while the rest of us are willing puppets in the play

Documentary filmmaker and BBC journalist Adam Curtis has developed a cult following for his eccentric films, which combine BBC archival footage into artistic montages with dark narratives to create a unique storytelling experience that’s both journalistic and entertaining.

His latest film, “HyperNormalisation,” came out in 2016 and is perhaps even more apropos now, as many have the feeling that they’re waking up to an unprecedented, and unreal, world anew each and every day – and so-called fake news is all around.

The term “HyperNormalisation” was coined by Alexei Yurchak, a Russian historian. 1

In an interview with The Economist, Curtis explained that it’s used to describe the feeling that comes with accepting total fakeness as normal.

Yurchak had used it in relation to living in the Soviet Union during the 1980s, but Curtis used it in response to living in the present-day U.S. and Europe…

He said:

“Everyone in my country and in America and throughout Europe knows that the system that they are living under isn’t working as it is supposed to; that there is a lot of corruption at the top…

There is a sense of everything being slightly unreal:

  • that you fight a war that seems to cost you nothing and it has no consequences at home

  • that money seems to grow on trees

  • that goods come from China and don’t seem to cost you anything

  • that phones make you feel liberated but that maybe they’re manipulating you but you’re not quite sure

It’s all slightly odd and slightly corrupt.

So I was trying to make a film about where that feeling came from… I was just trying to show the same feeling of unreality, and also that those in charge know that we know that they don’t know what’s going on.

That same feeling is pervasive in our society, and that’s what the film is about.” 2

Living in a Fake, Simple World

“HyperNormalisation” tells the story of how politicians, financiers and “technological utopians” constructed a fake world over the last four decades in an attempt to maintain power and control.

Their fake world is simpler than the real world by design, and as a result people went along with it because the simplicity was reassuring.

The transition began in 1975, when, as the film describes, two world-changing moments took place in two cities – New York City and Damascus, Syria – shifting the world away from political control and toward one managed instead by financial services, technology and energy companies.

First, New York ceded its power to bankers.

As noted in The New Yorker:

“New York, embroiled in a debt crisis as its middle-class tax base is evaporated by white flight, starts to cede authority to its lenders.

Fearing for the security of their loans, the banks, via a new committee Curtis contends was dominated by their leadership, the Municipal Assistance Corporation, set out to control the city’s finances, resulting in the first wave of banker-mandated austerity to greet a major American city as thousands of teachers, police officers, and firefighters are sacked.” 3

In Damascus, meanwhile, conflict between Henry Kissinger and Syrian head of state Hafez al-Assad grew, with Kissinger fearing a united Arab world and Assad angered that his attempts at transformation were fading.

“Kissinger’s theory was that instead of having a comprehensive peace for Palestinians, which would cause specific problems, you split the Middle Eastern world and made everyone dissatisfied,” Curtis said. 4

Further,

“In Curtis’ view, the Syrian leader pioneered the use of suicide bombing against Americans,” The New Yorker explained, which then spread throughout the Middle East, accelerating Islamic terrorism in the U.S.

While the roots of modern society can be traced back much further – millennia – Curtis chose to start “HyperNormalisation” in 1975 due to the economic crisis of the time.

“1975 is when a shift in power happened in the Middle East at the same time as the shift in power away from politics toward finance began in the West,” he told Hyperallergic. 5

“It’s arbitrary, but I chose that moment because those two things are at the root of a lot of other things we have today. It’s a dramatic moment.”

The film then takes viewers on a timeline of recent history that appears as though you’re seeing bits and pieces of a scrapbook, but which ultimately support the larger message that the world is being controlled by a powerful few while the rest of us are willing puppets in the play, and we’re essentially living in an unreal world.

Being Managed as Individuals

According to Curtis, mass democracy died out in the early ’90s, only to be replaced by a system that manages people as individuals.

Politics requires that people be in groups in order to control them.

Parties are established and individuals join the groups that are then represented by politicians that the group identifies with.

The advancement of technology has changed this, particularly because computer systems can manage masses of people by understanding the way they act as groups – but the people continue to think they’re acting as individuals.

Speaking to The Economist, Curtis said:

“This is the genius of what happened with computer networks.

Using feedback loops, pattern matching and pattern recognition, those systems can understand us quite simply.

That we are far more similar to each other than we might think, that my desire for an iPhone as a way of expressing my identity is mirrored by millions of other people who feel exactly the same.

We’re not actually that individualistic. We’re very similar to each other and computers know that dirty secret.

But because we feel like we’re in control when we hold the magic screen, it allows us to feel like we’re still individuals.

And that’s a wonderful way of managing the world.” 6

He compares it to a modern ghost story, in which we’re haunted by yesterday’s behaviors.

By predicting what we’ll like based on what we did yesterday, we’re inundated with messages that lock us into a static, unchanging world that’s repetitive and rarely imagines anything new.

“And because it doesn’t allow mass politics to challenge power, it has allowed corruption to carry on without it really being challenged properly,” he says, 7 using the example of extremely wealthy people who don’t pay taxes.

Although most are aware that this occurs, it doesn’t change:

“I think it has something to do with this technocratic world because it doesn’t have the capacity to respond to that kind of thing. It has the capacity to manage us very well.

It’s benign but it doesn’t have the capacity to challenge the rich and the powerful within that system, who use it badly for their own purposes.” 8

A Complex Documentary for an Oversimplified Time

While the crux of “HyperNormalisation” is that people have retreated into a simplified world perception, the documentary itself is complex and borderline alarming.

Its intricacies can be well explored, however, as it was released directly on BBC iPlayer, then passed around on the internet, such that it’s easy to replay it – or sections of it – again and again, something that wasn’t always possible with live television.

Speaking about “HyperNormalisation,” Curtis said:

“The interesting thing about online is that you can do things that are more complex and involving and less patronizing to the audience than traditional documentaries, which tend to simplify so much because they’re panicking that people will only watch them once live.

They tend to just tell you what you already know. I think you can do some more complicated things, and that’s what I’ve been trying.” 9

Watching “HyperNormalisation,” you’ll be confronted with seemingly unrelated snippets ranging from disaster movies to Jane Fonda, which will make you want to rewind and reconsider what you’ve just seen.

And perhaps that’s the point…

The gaps in the story compel viewers to do more research and ask more questions, and those willing to watch all of its nearly three hours of footage may find themselves indeed feeling like they’re climbing through a dark thicket, being led by only a flashlight, as the film’s opening portrays.

Meanwhile, the theme of an overriding power funneling information to the masses in an increasingly dumbed-down format is pervasive, right down to the censorship being fostered by social media.

Curtis narrates in the film:

“…as the intelligence systems online gathered ever more data, new forms of guidance began to emerge; social media created filters – complex algorithms that looked at what individuals liked and then fed more of the same back to them.

In the process, individuals began to move, without noticing, into bubbles that isolated them from enormous amounts of other information.

They only heard and saw what they liked, and the news feeds increasingly excluded anything that might challenge people’s pre-existing beliefs.”
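The mechanism the narration describes – feeding people more of what they already liked – can be reduced to a few lines: rank candidate items by how well they match topics the user previously engaged with, so challenging material sinks with every click. The topics and items below are invented for illustration.

```python
# Minimal sketch of a filter bubble: rank candidate items by similarity to
# topics the user already engaged with, so each interaction narrows what
# the feed shows next. All topics and items are invented.
from collections import Counter

def rank_feed(candidates, liked_topics):
    """Order items so those matching past likes come first."""
    weights = Counter(liked_topics)
    return sorted(candidates, key=lambda item: weights[item["topic"]], reverse=True)

liked = ["politics", "politics", "gadgets"]
candidates = [
    {"title": "Opposing view on policy X", "topic": "debate"},
    {"title": "More outrage about policy X", "topic": "politics"},
    {"title": "New phone released", "topic": "gadgets"},
]

for item in rank_feed(candidates, liked):
    print(item["title"])
# Items matching prior likes float to the top; challenging material sinks.
```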

Giant Corporations Behind the Internet’s Superficial Freedom

“HyperNormalisation” also touches on the irony behind the “freedom” provided by the Internet, which is that giant corporations are largely controlling it.

“…[B]ehind the superficial freedoms of the web were a few giant corporations and opaque systems that controlled what people saw and shaped what they thought.

What was even more mysterious was how they made their decisions about what you should like and what should be hidden from you,” the documentary states.

And as Curtis noted,

“I’m not trying to make a traditional documentary. I’m trying to make a thing that gets why you feel today like you do – uncertain, untrusting of those who tell you what is what.

To make it in a way that emotionally explains that as much as it explains it intellectually.” 10

On the topic of social media, Curtis described social media as a scam, telling Idler Magazine: 11

“The Internet has been captured by four giant corporations who don’t produce anything, contribute nothing to the wealth of the country, and hoard their billions of dollars in order to pounce on anything that appears to be a competitor and buy it out immediately.

They will get you and I to do the work for them – which is putting the data in – then they send out what they con other people into believing are targeted ads.

But actually, the problem with their advertising is that it is – like all geek stuff – literal.

It has no imagination to it whatsoever. It sees that you bought a ticket to Budapest, so you’re going to get more tickets to Budapest.

It’s a scam…”

Technology, largely in the form of social media, feeds into the forces at play that are spreading a state of powerlessness and bewilderment around the world, according to Curtis. 12

This is fueled by anger, which prompts more intense reactions online, hence, more clicks and more money being poured into social media.

It’s Curtis’ goal to create an emotional history of the world, which he plans to assemble using decades’ worth of BBC footage from around the world.

His next project is to explore Russia, then China, Egypt, Vietnam and Africa, telling stories that people want to hear but probably won’t otherwise, due to the altered state of reality we’re living in.

 

Sources and References