Several years ago, I made a prediction on Facebook that the human species had less than 50 years left, and I was dead serious. The more I hear about this stuff, the more pessimistic I get. Some of the biggest names in tech have been saying the same thing.
from TheMarkaz Website
Painting by Ali Banisadr (b. Tehran 1976, lives and works in New York),
oil on linen, 72 x 108 inches (2010).
Courtesy of the artist.
Algorithms, artificial intelligence programs controlled by Big Tech companies including Google, Facebook and Twitter – corporations with no commitment to ethical journalism – are the new gatekeepers…
More and more, proprietary algorithms rather than newsroom editors determine which news stories circulate widely, raising serious concerns about transparency and accountability in determinations of newsworthiness.
The rise of what is best understood as algorithmic censorship makes newly relevant the old concept of “gatekeeping” in ways that directly address previous critiques of how we get our news.
To illustrate the power of algorithms to control the flow of information, consider the example of what happened to the digital record of an academic conference that I attended last year.
YouTube and the Critical Media Literacy Conference of the Americas
In October 2020 I participated in an academic conference focused on media literacy education.
The event brought together the field’s leading figures for two days of scholarly panels and discussions.
Many of the participants, including those in a session I moderated, raised questions about the impact of Big Tech companies such as Google and Facebook on the future of journalism, and criticized how corporate news media, including not only Fox News and MSNBC but also the New York Times and Washington Post, often impose narrow definitions of newsworthiness.
In other words, the conference was like many others I’ve attended, except that due to the pandemic we met virtually via Zoom.
After the conference concluded, its organizers uploaded video recordings of the keynote session and more than twenty additional hours of conference presentations to a YouTube channel created to make those sessions available to a wider public.
Several weeks later, YouTube removed all of the conference videos, without any notification or explanation to the conference organizers.
As MintPress News reported, an academic conference at which many participants raised warnings about,
“the dangers of media censorship” had, ironically, “been censored by YouTube.”
Despite the organizers’ subsequent formal appeals, YouTube refused to restore any of the deleted content; indeed, it would not even acknowledge that the content had ever been posted in the first place.
Through my work with Project Censored, a nonprofit news watchdog with a global reputation for opposing news censorship and championing press freedoms, I was familiar with online content filtering.
Thinking about YouTube’s power to delete the public video record of an academic conference, without explanation, initially reminded me of the “memory holes” in George Orwell‘s Nineteen Eighty-Four.
In Orwell’s dystopian novel, memory holes efficiently whisk away for destruction any evidence that might conflict with or undermine the government’s interests, as determined by the Ministry of Truth.
But I also found myself recalling a theory of news production and distribution that enjoyed popularity in the 1950s but has since fallen from favor.
I’ve come to understand YouTube’s removal of the conference videos as (a new form of) gatekeeping, the concept developed by David Manning White and Walter Gieber in the 1950s to explain how newspaper editors determined what stories to publish as news.
The original gatekeeping model
White studied the decisions of a wire editor at a small midwestern newspaper, examining the reasons that the editor, whom White called “Mr. Gates,” gave for selecting or rejecting specific stories for publication.
Mr. Gates rejected some stories for practical reasons:
“too vague,” “dull writing,” or “too late – no space”…
But in 18 of the 423 decisions that White examined, Mr. Gates rejected stories for political reasons, dismissing them as “pure propaganda” or “too red,” for example.
White concluded his 1950 article by emphasizing,
“how highly subjective, how based on the gatekeeper’s own set of experiences, attitudes and expectations the communication of ‘news’ really is.”
In 1956, Walter Gieber conducted a similar study, this time examining the decisions of 16 different wire editors.
Gieber’s findings refuted White’s conclusion that gatekeeping was subjective. Instead, Gieber found that, independently of one another, the editors made much the same decisions.
Gatekeeping was real, but the editors treated story selection as a rote task, and they were most concerned with what Gieber described as “goals of production” and “bureaucratic routine” – not, in other words, with advancing any particular political agenda.
More recent studies have reinforced and refined Gieber’s conclusion that professional assessments of “newsworthiness,” not political partisanship, guide news workers’ decisions about what stories to cover.
The gatekeeping model fell out of favor as newer theoretical models – including “framing” and “agenda setting” – seemed to explain more of the news production process.
In an influential 1989 article, sociologist Michael Schudson described gatekeeping as,
“a handy, if not altogether appropriate, metaphor.”
The gatekeeping model was problematic, he wrote, because,
“it leaves ‘information’ sociologically untouched, a pristine material that comes to the gate already prepared.”
In that flawed view “news” is preformed, and the gatekeeper,
“simply decides which pieces of prefabricated news will be allowed through the gate.”
Although White and others had noted that “gatekeeping” occurs at multiple stages in the news production process, Schudson’s critique stuck.
With the advent of the Internet, some scholars attempted to revive the gatekeeping model.
New studies showed how audiences increasingly act as gatekeepers, deciding which news items to pass along via their own social media accounts.
But, overall, gatekeeping seemed even more dated:
“The Internet defies the whole notion of a ‘gate’ and challenges the idea that journalists (or anyone else) can or should limit what passes through it,” Jane B. Singer wrote in 2006.
Algorithmic news filtering
Fast forward to the present and Singer’s optimistic assessment appears more dated than gatekeeping theory itself.
Instead, the Internet, and social media in particular, encompass numerous limiting “gates,” fewer and fewer of which are operated by news organizations or journalists themselves.
Incidents such as YouTube’s wholesale removal of the media literacy conference videos are not isolated.
In fact, they are increasingly common as privately owned companies and their media platforms wield ever more power to regulate who speaks online and what types of speech are permissible.
Independent news outlets have documented:
how Twitter, Facebook, and others have suspended Venezuelan, Iranian, and Syrian accounts and censored content that conflicts with U.S. foreign policy;
how the Google News aggregator filters out pro-LGBTQ stories while amplifying homophobic and transphobic voices;
how changes made by Facebook to its news feed have throttled web traffic to progressive news outlets.
Some Big Tech companies’ decisions have made headline news.
After the 2020 presidential election, for example, Google, Facebook, YouTube, Twitter, and Instagram restricted the online communications of Donald Trump and his supporters; after the January 6 assault on the Capitol, Google, Apple, and Amazon suspended Parler, the social media platform favored by many of Trump’s supporters.
But decisions to deplatform Donald Trump and suspend Parler differ in two fundamental ways from most other cases of online content regulation by Big Tech companies.
First, the instances involving Trump and Parler received widespread news coverage; those decisions became public issues and were debated as such.
Second, as that news coverage tacitly conveyed, the decisions to restrict Trump’s online voice and Parler’s networked reach were made by leaders at Google, Facebook, Apple, and Amazon. They were human decisions.
“Thought Police” by Ali Banisadr,
oil on linen, 82 x 120 inches (2019).
Courtesy of the artist.
This last point was not a focus of the resulting news coverage, but it matters a great deal for understanding the stakes in other cases, where the decisions to filter content – in effect, to silence voices and throttle conversations – were made by algorithms rather than humans.
Increasingly, the news we encounter is the product both of the daily routines and professional judgments of journalists, editors, and other news professionals, and of assessments of relevance and appropriateness made by artificial intelligence programs. Those programs are developed and controlled by private, for-profit corporations that do not see themselves as media companies, much less as companies engaged in journalism.
When I search for news about “rabbits gone wild” or the Equality Act on Google News, an algorithm employs a variety of confidential criteria to determine what news stories and news sources appear in response to my query.
Google News does not produce any news stories of its own but, like Facebook and other platforms that function as news aggregators, it plays an enormous – and poorly understood – role in determining what news stories many people see.
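Google News’ actual ranking criteria are confidential, but the general shape of such a system can be sketched as a scoring function over candidate stories. Everything below – the feature names, the weights, the outlets – is a hypothetical illustration, not Google’s algorithm; the point is how a handful of numeric weights quietly encode editorial judgments about which stories surface first.

```python
# Hypothetical sketch of an aggregator's ranking step.
# Features and weights are invented for illustration;
# real systems keep both confidential.

def score(story, weights):
    """Combine a story's features into a single ranking score."""
    return sum(weights[k] * story[k] for k in weights)

# Each invented weight is, in effect, an editorial judgment.
WEIGHTS = {"recency": 0.5, "source_authority": 0.3, "engagement": 0.2}

stories = [
    {"title": "Wire report",   "recency": 0.9, "source_authority": 0.4, "engagement": 0.2},
    {"title": "Legacy outlet", "recency": 0.5, "source_authority": 0.9, "engagement": 0.6},
    {"title": "Indie outlet",  "recency": 0.8, "source_authority": 0.2, "engagement": 0.8},
]

# The "gate": sort by score and surface only the top results.
ranked = sorted(stories, key=lambda s: score(s, WEIGHTS), reverse=True)
for s in ranked:
    print(s["title"], round(score(s, WEIGHTS), 2))
```

Nudge the `source_authority` weight up by a tenth and the ordering changes – exactly the kind of consequential, invisible judgment the gatekeeping critique is about.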
The new algorithmic gatekeeping
Recall that Schudson criticized the gatekeeping model for,
“leaving ‘information’ sociologically untouched.”
Because news was constructed, not prefabricated, the gatekeeping model failed to address the complexity of the news production process, Schudson contended.
That critique, however, no longer applies to the increasingly common circumstances in which corporations such as Google and Facebook, which do not practice journalism themselves, determine what news stories members of the public are most likely to see – and what news topics or news outlets those audiences are unlikely to ever come across, unless they actively seek them out.
In these cases, Google, Facebook, and other social media companies have no hand – or interest – in the production of the stories that their algorithms either promote or bury.
The new gatekeepers operate without regard for the basic principles of ethical journalism recommended by the Society of Professional Journalists:
to seek truth and report it
to minimize harm
to act independently
to be accountable and transparent
They claim content neutrality while promoting news stories that often fail glaringly to fulfill even one of the SPJ’s ethical guidelines.
This problem is compounded by the reality that it is impossible for a contemporary version of David Manning White or Walter Gieber to study gatekeeping processes at Google or Facebook:
The algorithms engaged in the new gatekeeping are protected from public scrutiny as proprietary intellectual property.
As April Anderson and I have previously reported, a class action suit filed against YouTube in August 2019 by LGBT content creators could,
“force Google to make its powerful algorithms available for scrutiny.”
Google/YouTube has sought to dismiss the case on the grounds that its distribution algorithms are “not content-based.”
Algorithms, human agency, and inequalities
“Trust in the Future” by Ali Banisadr,
oil on linen, 82 x 120 inches (2017).
Courtesy of the artist.
To be accountable and transparent is one of the guiding principles for ethical journalism advocated by the Society of Professional Journalists.
News gatekeeping conducted by proprietary algorithms runs directly counter to this guideline, posing grave threats to the integrity of journalism and to the prospects of a well-informed public.
Most often when Google, Facebook, and other Big Tech companies are considered in relation to journalism and the conditions necessary for it to fulfill its fundamental role as the “Fourth Estate” – holding the powerful accountable and informing the public – the focus is on how Big Tech has thoroughly appropriated the advertising revenues on which most legacy media outlets depend to stay in business.
The rise of algorithmic news gatekeeping should be just as great a concern. Technologies driven by artificial intelligence (AI) reduce the role of human agency in decision making.
This is often touted, by advocates of AI, as a selling point:
Algorithms replace human subjectivity and fallibility with “objective” determinations.
Critical studies of algorithmic bias, including,
Safiya Umoja Noble‘s Algorithms of Oppression
Virginia Eubanks‘s Automating Inequality
Cathy O’Neil‘s Weapons of Math Destruction,
…advise us to be wary of how easy it is to build longstanding human prejudices into “viewpoint neutral” algorithms that, in turn, add new layers to deeply sedimented structural inequalities.
With the new algorithmic gatekeeping of news developing more quickly than public understanding of it, journalists and those concerned with the role of journalism in democracy face multiple threats.
We must exert all possible pressure to force corporations such as Google and Facebook to make their algorithms available for third-party scrutiny; at the same time, we must do more to educate the public about this new and subtle wrinkle in the news production process.
Journalists are well positioned to tell this story from first-hand experience, and governmental regulation or pending lawsuits may eventually force Big Tech companies to make their algorithms available for third-party scrutiny.
But the stakes are too high to wait on the sidelines for others to solve the problem.
So what can we do now in response to algorithmic gatekeeping?
I recommend four proactive responses, presented in increasing order of engagement:
Avoid using “Google” as a verb,
…a common habit that tacitly identifies a generic online activity with the brand name of a corporation that has done as much as any to multiply epistemic inequality.
The concept of epistemic inequality was developed by Shoshana Zuboff, author of The Age of Surveillance Capitalism, to describe a form of power based on the difference between what we can know and what can be known about us.
Remember search engines and social media feeds are not neutral information sources.
The algorithms that drive them often serve to reproduce existing inequalities in subtle but powerful ways. Investigate for yourself.
Select a topic of interest to you and compare search results from Google and DuckDuckGo.
Connect directly to news organizations that display firm commitments to ethical journalism,
…rather than relying on your social media feed for news. Go to the outlet’s website, sign up for its email list or RSS feed, subscribe to the outlet’s print version if there is one.
The direct connection removes the social media platform, or search engine, as an unnecessary and potentially biased intermediary.
Call out algorithmic bias when you encounter it.
Call it out directly to the entity responsible for it; call it out publicly by letting others know about it.
Fortunately, our human brains can employ new information in ways that algorithms cannot.
Understanding the influential roles of algorithms on our lives – including how they operate as gatekeepers of the news stories we are most likely to see – allows us to take greater control of our individual online experiences.
Based on greater individual awareness and control, we can begin to organize collectively to expose and oppose algorithmic bias and censorship…
by Jonathan Chadwick
May 18, 2022
from DailyMail Website
Google’s hype around DeepMind exceeds the reality of its progress toward Artificial General Intelligence (AGI).
According to Tristan Greene of ‘TheNextWeb’,
“It’s not a general AI, it’s a bunch of pre-trained, narrow models bundled neatly.”
What is certain is Google’s ability to ‘make it so’ and fool a public that cannot distinguish between ‘magic’ and ‘reality’…
DeepMind expert suggests the hardest tasks to create a human-like AI are solved
The London firm wants to build an ‘AGI‘ that has the same intelligence as humans
This week DeepMind unveiled a program capable of achieving over 600 tasks
‘The Game is Over!’
Google’s DeepMind says
it is close to achieving
‘human-level’ artificial intelligence,
but it still needs to be scaled up…
Nando de Freitas, a research scientist at DeepMind and machine learning professor at Oxford University, has said ‘the game is over’ in regard to solving the hardest challenges in the race to achieve artificial general intelligence (AGI).
AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training.
According to De Freitas, the quest for scientists is now scaling up AI programs, such as with more data and computing power, to create an AGI.
Earlier this week, DeepMind unveiled a new AI ‘agent’ called Gato that can complete 604 different tasks,
‘across a wide range of environments’…
Gato uses a single neural network – a computing system with interconnected nodes that works like nerve cells in the human brain.
It can chat, caption images, stack blocks with a real robot arm and even play 1980s Atari video games, DeepMind claims.
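“A computing system with interconnected nodes” is abstract; a network small enough to write by hand makes it concrete. The sketch below is nothing like Gato’s scale – just two hidden nodes with manually chosen weights – but it computes XOR, a function no single node can compute alone, showing how the behavior lives entirely in the weighted connections.

```python
def relu(x):
    """Activation: a node 'fires' only when its input is above zero."""
    return max(0.0, x)

def tiny_network(x1, x2):
    """Two hidden nodes feeding one output node; weights set by hand."""
    h1 = relu(x1 + x2)          # fires if either input is on
    h2 = relu(x1 + x2 - 1.0)    # fires only if both inputs are on
    return h1 - 2.0 * h2        # output node computes XOR of the inputs

for a in (0, 1):
    for b in (0, 1):
        print(a, b, tiny_network(a, b))
```

In a trained system like Gato, nobody chooses these weights by hand; billions of them are adjusted automatically against training data, which is why a single network can be steered toward many tasks.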
DeepMind, a British company owned by Google,
may be on the verge of achieving
human-level artificial intelligence (file photo).
Its new agent Gato uses a single neural network
– a computing system with interconnected nodes
that works like nerve cells in the human brain –
to complete 604 tasks, according to DeepMind
De Freitas’ comments came in response to an opinion piece published on The Next Web arguing that AGI will not be achieved in the lifetime of anyone alive today.
De Freitas tweeted:
‘It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster…’
However, he admitted that humanity is still far from creating an AI that can pass the Turing test – a test of a machine’s ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human.
After DeepMind’s announcement of Gato, The Next Web article said it demonstrates AGI no more than virtual assistants such as Amazon’s Alexa and Apple’s Siri, which are already on the market and in people’s homes.
‘Gato’s ability to perform multiple tasks is more like a video game console that can store 600 different games, than it’s like a game you can play 600 different ways,’ said The Next Web contributor Tristan Greene.
‘It’s not a general AI, it’s a bunch of pre-trained, narrow models bundled neatly.’
Gato has been built to perform hundreds of different tasks, but this breadth may compromise the quality of its performance on each one, according to other commentators.
In another opinion piece, ZDNet columnist Tiernan Ray wrote that the agent,
‘is actually not so great on several tasks’.
‘On the one hand, the program is able to do better than a dedicated machine learning program at controlling a robotic Sawyer arm that stacks blocks,’ Ray said.
‘On the other hand, it produces captions for images that in many cases are quite poor.
‘Its ability at standard chat dialogue with a human interlocutor is similarly mediocre, sometimes eliciting contradictory and nonsensical utterances.’
For example, when acting as a chatbot, Gato mistakenly said that Marseille is the capital of France.
Also, a caption created by Gato to accompany a photo read
‘man holding up a banana to take a picture of it’, even though the man was holding bread, not a banana.
DeepMind’s researchers have said such an agent will show ‘significant performance improvement’ when it is scaled up.
AGI has been already identified as a future threat that could wipe out humanity either deliberately or by accident.
Pictured: a dialogue with Gato
when prompted to be a chatbot.
A critic called Gato’s ability
to have a chat with a human ‘mediocre’
Earlier this week, British firm
DeepMind revealed Gato,
a program that can chat, caption images,
stack blocks with a real robot arm and even play
1980s Atari video games.
Depicted here are some of the tasks
that Gato has been tested
on in a DeepMind promo
Dr Stuart Armstrong at Oxford University’s Future of Humanity Institute previously said AGI will eventually make humans redundant and wipe us out.
He predicted that machines will work at speeds inconceivable to the human brain and will skip communicating with humans to take control of the economy and financial markets, transport, healthcare and more…
Dr Armstrong said a simple instruction to an AGI to ‘prevent human suffering’ could be interpreted by a super computer as ‘kill all humans’, due to human language being easily misinterpreted.
‘The development of full artificial intelligence could spell the end of the human race.’
During his lifetime,
the famous British astrophysicist
Professor Stephen Hawking (pictured)
said AI ‘could spell the end of the human race’
In a 2016 paper, DeepMind researchers acknowledged the need for a ‘big red button’ to prevent a machine from completing,
‘a harmful sequence of actions’…
DeepMind, which was founded in London in 2010 before being acquired by Google in 2014, is known for creating AlphaGo, an AI program that beat professional Go player Lee Sedol, then the world champion, in a five-game match in 2016.
In 2020, the firm announced it had solved a 50-year-old problem in biology, known as the ‘protein folding problem‘ – predicting how a protein’s amino acid sequence dictates its 3D structure.
DeepMind claimed to have solved the problem with 92 per cent accuracy by training a neural network with 170,000 known protein sequences and their different structures.
The firm is perhaps
best known for its AlphaGo AI program
that beat professional Go player Lee Sedol,
the world champion, in a five-game match.
Pictured, Go world champion Lee Sedol of South Korea
seen ahead of the first game of the
Google DeepMind Challenge Match
against Google’s AlphaGo programme
in March 2016
WHAT IS GOOGLE’S DEEPMIND AI PROJECT?
DeepMind was founded in London in 2010 and was acquired by Google in 2014.
It now has additional research centers in Edmonton and Montreal, Canada, and a DeepMind Applied team in Mountain View, California.
DeepMind is on a mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.
If successful, the firm believes this will be one of the most important and widely beneficial scientific advances ever made.
The company has hit the headlines for a number of its creations, including software that taught itself how to play and win at 49 completely different Atari titles, with just raw pixels as input.
In a world first, its AlphaGo program took on the world’s best player at Go, one of the most complex and intuitive games ever devised, with more positions than there are atoms in the universe – and won.
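The “more positions than atoms” comparison can be checked with simple arithmetic. Each of Go’s 361 points can be black, white, or empty, so 3^361 is an upper bound on board configurations (not all of them legal) – already vastly more than the roughly 10^80 atoms estimated in the observable universe.

```python
# Upper bound on Go board configurations: 3 states per point, 361 points.
positions = 3 ** 361
print(len(str(positions)))  # 173 digits, i.e. about 1.7 x 10^172

# Compare with ~10^80 atoms in the observable universe.
print(positions > 10 ** 80)  # True, by an enormous margin
```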
from WorldPoliticsReview Website
with the Navy-sponsored Shipboard Autonomous Firefighting Robot,
Washington, Feb. 4, 2015
(Department of Defense photo).
“Fifteen years after a drone first fired missiles in combat,” journalist Josh Smith recently wrote from Afghanistan, “the U.S. military’s drone program has expanded far beyond specific strikes to become an everyday part of the war machine.”
Important as this is, it is only a first step in a much bigger process.
As a report co-authored in January 2014 by Robert Work and Shawn Brimley put it,
“a move to an entirely new war-fighting regime in which unmanned and autonomous systems play central roles” has begun.
Where this ultimately will lead is unclear.
Work, who went on to become the deputy secretary of defense in May 2014, and Brimley represent one school of thought about robotic war. Drawing on a body of ideas about military revolutions from the 1990s, they contend that roboticization is inevitable, largely because it will be driven by advances in the private sector.
Hence the United States military must embrace and master it rather than risk having enemies do so and gain an advantage.
On the other side of the issue are activists who want to stop the development of military robots. For instance the United Nations Human Rights Council has called for a moratorium on lethal autonomous systems.
Nongovernmental organizations have created what they call the Campaign to Stop Killer Robots, which is modeled on recent efforts to ban land mines and cluster munitions.
Other groups and organizations share this perspective.
Undoubtedly the political battle between advocates and opponents of military robots will continue. However, regardless of the outcome of that battle, developments in the next decade will already set the trajectory for the future and have cascading effects.
At several points, autonomous systems will cross a metaphorical Rubicon from which there is no turning back.
One such Rubicon is when some nation deploys a robot that can decide to kill a human based on programmed instructions and an algorithm rather than a direct instruction from an operator.
In military parlance, these would be robots without “a human in the loop.”
In a sense, this would not be entirely new:
Booby traps and mines have killed without a human pulling the trigger for millennia.
But the idea that a machine would make something akin to a decision rather than simply killing any human that comes close to it adds greater ethical complexity than a booby trap or mine, where the human who places it has already taken the ethical decision to kill.
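In software terms, “a human in the loop” comes down to one structural choice: whether the step that commits an action requires an operator’s confirmation, or only the evaluation of a programmed rule. The sketch below is a deliberately abstract illustration of that distinction – every name and number is invented, and nothing here models a real system.

```python
def decide(sensor_reading, threshold, human_in_loop):
    """Return the system's response to one sensor reading.

    With human_in_loop=True the machine can only recommend; the
    commit step belongs to an operator. With False, the same
    programmed rule commits the action directly.
    """
    rule_says_act = sensor_reading > threshold  # the "algorithm"
    if human_in_loop:
        return "recommend" if rule_says_act else "hold"
    return "act" if rule_says_act else "hold"

print(decide(0.9, 0.7, human_in_loop=True))   # recommend
print(decide(0.9, 0.7, human_in_loop=False))  # act
```

The entire ethical difference sits in a single boolean and a threshold – both of which are ordinary data, not hardware.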
“Creating autonomous military robots
that can act at least as ethically as human soldiers
appears to be a sensible goal.”
In Isaac Asimov‘s science fiction collection “I, Robot,” which was one of the earliest attempts to grapple with the ethics of autonomous systems, an ironclad rule programmed into all such machines was that,
“a robot may not injure a human being.”
Clearly that is an unrealistic boundary, but as an important 2008 report sponsored by the U.S. Navy argued,
“Creating autonomous military robots that can act at least as ethically as human soldiers appears to be a sensible goal.”
Among the challenges to meeting this goal that the report’s authors identified,
“creating a robot that can properly discriminate among targets is one of the most urgent.”
In other words, the key is not the technology for killing, but the programmed instructions and algorithms.
But that also makes control extraordinarily difficult, since programmed instructions can be changed remotely and in the blink of an eye, instantly transforming a benign robot into a killer.
A second Rubicon will be crossed when non-state entities field military robots.
Since most of the technology for military robots will arise from the private sector, anyone with the money and expertise to operate them will be able to do so.
These could include violent extremist movements, as well as contractors working on their behalf.
Even if efforts to control the use of robots by state militaries in the form of international treaties are successful, there would be little to constrain non-state entities from using them.
Nations constrained by treaties could be at a disadvantage when facing non-state enemies that are not.
A third Rubicon will be crossed when autonomous systems are no longer restricted to being temporary mobile presences that enter a conflict zone, linger for a time, then leave, but are an enduring presence on the ground and in the water, as well as in the air, for the duration of an operation.
Pushing this idea even further, some experts believe that military robots will not be large, complex autonomous systems, but swarms of small, simple machines networked for a common purpose. Like an insect swarm, this type of robot could function even if many of its constituent components were destroyed or broke down.
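The resilience claim is easy to demonstrate in miniature: give each agent only a local rule (“claim the next unclaimed cell”), remove most of the swarm, and the survivors still complete the task. The simulation below is an invented toy, not a real swarm protocol.

```python
def run_swarm(agents, cells):
    """Each surviving agent repeatedly claims the next unclaimed cell."""
    claimed = set()
    while len(claimed) < len(cells) and agents:
        for agent in agents:              # local rule only, no leader
            for cell in cells:
                if cell not in claimed:
                    claimed.add(cell)
                    break
    return claimed

cells = list(range(20))                   # the shared task: cover 20 cells
swarm = [f"agent{i}" for i in range(10)]

survivors = swarm[:3]                     # 70% of the swarm is destroyed
print(run_swarm(survivors, cells) == set(cells))  # True: task still completes
```

Because no agent is special, losing any subset merely slows the swarm down; only losing every agent stops it – the property that makes swarms so hard to defeat by attrition.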
Swarming autonomous networks would represent one of the most profound changes in the history of armed conflict.
In his seminal 2009 book “Wired for War,” Peter Singer wrote,
“Robots may not be poised to revolt, but robotic technologies and the ethical questions they raise are all too real.”
This makes it vital to understand the points of no return.
Even that is only a start:
Knowing that the Rubicon has been crossed does not alone tell what will come next.
When Caesar and his legion crossed the Rubicon River in 49 B.C., everyone knew that some sort of conflict was inevitable.
But no one could predict Caesar’s victory, much less his later assassination and all that it brought. Although the parameters of choice had been bounded, much remained to be determined.
Similarly, Rubicon crossings by military robots are inevitable, but their long-term outcomes will remain unknown.
It is therefore vital for the global strategic community, including governments and militaries as well as scholars, policy experts, ethicists, technologists, nongovernmental organizations and international organizations to undertake a collaborative campaign of learning and public education.
Political leaders must engage the public on this issue without hysteria or hyperbole, identifying all the alternative scenarios for who might use military robots, where they might use them, and what they might use them for.
With such a roadmap, it might be possible for political leaders and military officials to push roboticization in a way that limits the dangers, rather than amplifying them.
November 24, 2011
from ActivistPost Website
Brandon Turbeville is an author out of Mullins, South Carolina. He has a Bachelor’s Degree from Francis Marion University where he earned the Pee Dee Electric Scholar’s Award as an undergraduate.
He has had numerous articles published dealing with a wide variety of subjects including health, economics, and civil liberties.
He is also the author of Codex Alimentarius – The End of Health Freedom, 7 Real Conspiracies and Five Sense Solutions.
World’s first official cyborg
For years, many have mocked the idea of implantable microchips and cyborgs as both conspiracy theories and science fiction.
Anyone who so much as mentioned these possibilities to their neighbor risked being labeled either as a religious fanatic or delusional and paranoid. However, as they have become more and more prevalent in everyday society, it has become increasingly difficult to ridicule these concepts.
For instance, with stories like the recent Singularity Hub article entitled, “Revolutionary New Brain Chip Allows Monkeys To Grasp AND Feel Objects Using Their Thoughts,” these emerging technological possibilities are almost impossible to ignore.
This article discusses how scientists have recently announced the creation of an implantable device that can be placed in the brain and which will allow for the control of computers by thought.
Dr. Miguel Nicolelis and company have already tested these devices in monkeys with stunningly accurate results. In addition to allowing the user to control the computer by thought, it also allows the user to feel the virtual object it is manipulating.
Of course, this device is not the first of its kind. For years, implants have allowed monkeys to control computer cursors and even robotic arms in laboratory settings.
In the most recent experiment (Active Tactile Exploration Using a Brain-Machine-Brain Interface), two macaque monkeys were trained to control a virtual arm represented on the computer screen and use the arm to “grasp” virtual objects. The difference between this latest experiment and those that have preceded it, however, is that these monkeys were able to actually feel the objects they were grasping.
The good news is that this could provide individuals who have lost limbs with more than a mere prosthetic replacement.
Indeed, it would be able to offer them a prosthetic complete with the sense of touch. As the quality of prosthetics continues to improve, this technology could no doubt go a great distance toward replacing lost limbs with something more than simple equipment that merely allows basic mobility.
Yet mobility, for some, is still the main goal. Miguel Nicolelis and his associates who conducted the experiment have expressed a desire to take the technology to the next level.
In conjunction with The Walk Again Project, there is allegedly a concerted effort to,
“restore full mobility to patients suffering from a severe degree of paralysis.”
Nicolelis’ lab at Duke University is already working toward this end.
Because the ultimate goal of a return of full mobility to a person experiencing such paralysis will require support for the body itself, the scientists are also developing what they call a “wearable robot” to encase the person who is being implanted.
Yes, I said a wearable robot. And yes, a wearable robot can also be described as an exoskeleton.
Peter Murray of Singularity Hub writes:
The brain chips – if they work – will be a technological triumph by themselves. Custom designed, the brain chips will be low-power and wireless, transmitting their signals to a processing unit worn on the patient’s belt about the size of a cell phone.
That brain activity will then be translated to digital motor signals which will control the actuators across the joints of the exoskeleton. Force and stretch indicators throughout the exoskeleton will signal back to the patient’s brain the whereabouts of his or her joints and limbs.
The fact that technology is being created which may enable the lame to walk again is obviously good news.
However, for everything good in the world, there is an evil twin. This technology not only provides for the possibility of some darker applications, but, considering those who are currently guiding the destiny of the world, it almost ensures them.
If we allow society to continue to move in the direction it is currently headed, these technologies simply do not bode well for the future of humanity as we know it.
For years, shadowy and nefarious agencies like DARPA have openly discussed creating drones designed to look like insects, snakes, and other animals. Largely, these creations have been robotic: they have the appearance of an insect but are actually nothing more than sophisticated remote-controlled or pre-programmed drones.
Going one giant step further, and in direct relationship to the style of brain tampering spoken about earlier in this article, DARPA has unveiled other projects as well. As far back as 2007, DARPA announced that it was planning not only to build but to grow cyborg moths and other insects for spying purposes, a plan described at the time as creating a,
“tiny lepidopterine infiltration borg by growing a living moth around a ‘micro-mechanical system.’”
Essentially, DARPA is implanting a “metallic core” into the moths, whose living bodies will function as a cloak for the metal center.
This is something that bears repeating. Instead of the machine functioning as an exoskeleton, like the one discussed in Nicolelis’ project, here the cores will actually wear the bodies.
As Lewis Page wrote for The Register,
“If Dr. Lal [DARPA scientist involved in the project] was using vast Austrian bodybuilders rather than moths, we’d be talking Terminator yet again (this happens rather a lot when one starts looking at the US defense establishment).”
Of course, there is little doubt that if DARPA is releasing this much information on HI-MEMS, the project was attempted, tested, and perfected long ago.
It is only new to the general public and the few scientists who have been chosen to release the information to the population for the purposes of prepping them for the coming New World Order.
As Rod Brooks of MIT’s computer science and artificial intelligence lab (CSAIL) was quoted as saying,
“This is going to happen. It’s not science like developing the nuclear bomb, which costs billions of dollars. It can be done relatively cheaply.”
He goes on,
“A bunch of experiments have been done over the past couple of years where simple animals, such as rats and cockroaches, have been operated on and driven by joysticks, but this is the first time where a chip has been injected in the pupa stage and ‘grown’ inside it.”
“First time.” Yeah right.
Nevertheless, Brooks continues to discuss the future of technological implants in humans when he says,
Biological engineering is coming.
There are already more than 100,000 people with cochlear implants, which have a direct neural connection, and chips are being inserted in people’s retinas to combat macular degeneration. By the 2012 Olympics, we’re going to be dealing with systems which can aid the oxygen uptake of athletes.
There’s going to be more and more technology in our bodies… there’s going to be a lot of moral debates.
Moral debates there may be, but there is also no doubt that the world is more and more readily coming to accept biological intrusion and top-down control, a result of constant conditioning by television and video games and of indoctrination by the education system.
One need only take a look at the behavior of anyone under the age of thirty in the presence of any modern interactive technology to see the writing on the wall.
In this regard, The Singularity Movement is a perfect example of the coming scientific dictatorship and the enthusiasm of some to accept it.
Singularity can be defined, very simply, as the moment when human and machine merge. It is a philosophy that is gaining more and more steam with the general public as a result of its gradual introduction and promotion by the media (television, games, etc.) and by prominent individuals like Ray Kurzweil and Rodney Brooks.
TIME magazine’s Lev Grossman discussed Singularity in this way:
Maybe we’ll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities.
Maybe the artificial intelligences will help us treat the effects of old age and prolong our life indefinitely. Maybe we’ll scan our consciousnesses into computers and live inside them as software, forever, virtually.
Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: Singularity.
Singularity is obviously a movement that has been promoted from the top down.
This is easily seen by the fact that the backers of the Singularity movement are the usual suspects, including NASA and Google, as well as individuals like Bill Gates and Ray Kurzweil. Not only that, but major governments have been preparing for the eventuality of a massive merger between man and machine for some time.
For instance, in a 2007 report for The Guardian entitled “Revolution, flashmobs, and brain chips – A grim vision of the future,” Richard Norton-Taylor relays the findings of a 90-page report administered and released by the British Ministry of Defence.
The research team was tasked with describing a future “strategic context” that the British military might encounter in the coming years.
By 2035, an implantable ‘information chip’ could be wired directly to the brain. A growing pervasiveness of information communications technology will enable states, terrorists or criminals, to mobilize ‘flashmobs,’ challenging security forces to match this potential agility coupled with an ability to concentrate forces quickly in a small area.
Singularity is also a movement that has its roots in eugenics and the desire of the ruling elites for complete control over the mind, body, and soul of every human being on the planet.
Oddly enough, while some may dispute this claim, the movement’s roots in eugenics are relatively open.
Consider the comments made by the RAND corporation in its 2001 report, The Global Technology Revolution – Bio/Nano/Materials Trends and Their Synergies with Information Technology by 2015.
The results could be astonishing. Effects may include significant improvements in human quality of life and life span… continued globalization, reshuffling of wealth, cultural amalgamation or invasion with potential for increased tension and conflict, shifts in power from nation states to non-government organizations and individuals… and the possibility of human eugenics and cloning.
With this in mind, the developments presented in the Singularity Hub article, which I discussed earlier, take on a more sinister tone.
As the RAND corporation states in its report, the introduction of Singularity will most likely involve a great improvement in living standards for the handicapped. At least it will at first.
Eventually, the movement will begin to encompass convenience and will come to be seen as trendy and fashionable. Once merging with machines has become commonplace and acceptable (even expected), the real tyranny will begin to set in. Soon after, there will be no opt-outs allowed.
It [technology/internet/etc.] has many purposes but one of them was never to free the people, it would be used as an incredible tool of data-collection and using, like television and repetition of different topics, or the same topics or phrases again, it would be used to condition the public in their opinions, until, really, they’d be addicted to it, they could never do without it.
That’s the intent, because you will go cashless eventually and it will be used as a form of social approval and disapproval, if they cut you off from the net: you won’t be able to do your banking, get money to pay your rent etc.
Bertrand Russell talked about this sort of technique to be used in the future and it’s coming now. ‘Cloud’ will come in and that will take over and be THE one for the planet and everyone will rush into it thinking ‘my God I don’t have to worry about spyware or viruses or upgrades, it’s all done for me, out there somewhere in the big ‘cloud.’
And the Cloud, eventually, will be censoring your emails and actually popping up windows to tell you ‘are you sure you want to use this word, this politically incorrect word in this email?’ Then it might give you a little list of fines or punishments etc. etc.
This is all planned folks, that’s how you do it.
The advancements in the quality of human life as a result of this new technology have never been intended for the average person.
The good that could be done by virtue of its development is only meant as a tool to sell it to the population in the beginning and to control them in the end. Indeed, the control that can and will be exerted through its acceptance is the ultimate goal.