One Question On The Simulation Hypothesis


The simulation hypothesis proposes that all of reality, including the earth and the universe, is in fact an artificial simulation, most likely a computer simulation. Some versions rely on the development of a simulated reality, a proposed technology that would seem realistic enough to convince its inhabitants. The hypothesis has been a central plot device of many science fiction stories and films. – Wiki

There is a long philosophical and scientific history to the underlying thesis that reality is an illusion. This skeptical hypothesis can be traced back to antiquity; for example, to the “Butterfly Dream” of Zhuangzi, or the Indian philosophy of Maya.

This is surely one of the most intriguing topics man has laid on the table, and the perplexing question "Are we living in a computer simulation?" has caused great anxiety to questioner and answerer alike. The discussions and arguments have drawn many reactions and, as expected of any controversial topic, have sent thinkers, philosophers and scientists into splitsville: the for-and-against of idealists versus materialists. This bifurcation takes us all the way back to the times of Plato and Democritus.

Personally, I'm wrapped up in excitement, nervousness and sadness over this topic. What prompted me to delve deeper was reading that Elon Musk had said there is only a "one in billions" chance that we're not living in a computer simulation. The reasoning that led him to that conclusion points to the three propositions of Nick Bostrom's trilemma (a rough sketch of the arithmetic follows the list):

  1. “The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or
  2. “The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero”, or
  3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one”
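
Roughly, and this is my own paraphrase of Bostrom's published argument rather than a quotation, the arithmetic behind the trilemma can be sketched as:

$$
f_{\mathrm{sim}} \;=\; \frac{f_P \, f_I \, \bar{N}_I}{f_P \, f_I \, \bar{N}_I \,+\, 1}
$$

where $f_P$ is the fraction of human-level civilizations that reach a posthuman stage, $f_I$ the fraction of posthuman civilizations interested in running ancestor-simulations, and $\bar{N}_I$ the average number of such simulations an interested civilization runs. Unless $f_P$ or $f_I$ is essentially zero (propositions 1 and 2), the numerator is astronomically large and $f_{\mathrm{sim}}$, the fraction of observers like us who are simulated, is pushed towards one (proposition 3).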

Scientifically, the discoveries of quantum physicists seem to pave the way for the simulation hypothesis a little more each day, making it harder for the philosophical materialists to hang on to their stance.

Read: Proof Of The Simulation Argument

I am neither a scientist nor a philosopher. My interest in all this is purely as a human being. Where do I go from here, and what do I do next, if and when it is finally determined, either way, what this world of mine is?

If it is a real physical world of matter, life just goes on for me as usual. If it is a VR, my question is:

What am I then?

Surely I cannot be the human I thought I was – of matter, flesh and blood? I must be a simulation too. How else could I fit in on the VR stage?

Me, an AI? That is the unnerving thought that keeps me awake at night. I have come to get used to, enjoy and love being human, living with other humans.

This makes me sad.

..


Are humans Natural Intelligence?


Artificial Intelligence (AI) is the buzzword on the Internet. Putting aside the varied definitions offered by computer scientists and experts in the field, let's look at the words artificial and intelligence from a semantic angle:

[Image: Oxford Dictionary definitions – artificial: "made or produced by human beings rather than occurring naturally"; intelligence: "the ability to acquire and apply knowledge and skills"]

Going by the above definitions, the phrase artificial intelligence would mean:

unnatural ability to acquire and apply knowledge and skills

Turning to the current usage of the phrase, it is now widely applied to the technological developments around the Internet of Things (IoT).

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". – Wiki
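
To make that "intelligent agent" definition concrete, here is a toy sketch (my own illustration, not from Wikipedia or any particular AI library; the class and names are made up) of an agent that perceives its environment and picks whichever action best serves its goal:

```python
from dataclasses import dataclass


@dataclass
class Percept:
    """A toy observation of the environment: how far the agent is from its goal."""
    distance_to_goal: float


class SimpleAgent:
    """Minimal 'intelligent agent' in the textbook sense: perceive, then act
    so as to maximize the chance of reaching a goal."""

    ACTIONS = [-1.0, 0.0, 1.0]  # toy action space: step back, stay put, step forward

    def __init__(self, position: float = 0.0, goal: float = 10.0):
        self.position = position
        self.goal = goal

    def perceive(self) -> Percept:
        # Observe the environment (here, just the gap between position and goal).
        return Percept(distance_to_goal=self.goal - self.position)

    def act(self, percept: Percept) -> float:
        # Pick the action that leaves the smallest remaining distance to the goal.
        best = min(self.ACTIONS, key=lambda a: abs(percept.distance_to_goal - a))
        self.position += best
        return best


if __name__ == "__main__":
    agent = SimpleAgent()
    for _ in range(12):
        agent.act(agent.perceive())
    print("final position:", agent.position)  # settles at the goal (10.0)
```

Real systems replace the toy percept and the hand-picked action rule with learned models, but the perceive-then-act loop is the same skeleton.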

As a layman, I started to read more about AI in 2017. There seemed to be more and more articles, reports and tech news appearing daily, and I began to sense the growing importance of the subject and the concerns surrounding its future implications for humans, both positive and negative. Below are some of those concerns:

AI is a fundamental risk to the existence of human civilization. – Elon Musk at the NGA 2017

8 Staggering Predictions From Ray Kurzweil

Ray Kurzweil is one of the world's leading inventors, thinkers and futurists, with a twenty-year track record of accurate predictions. Kurzweil was selected as one of the top entrepreneurs by Inc. magazine, which described him as the "rightful heir to Thomas Edison."

8. Ubiquitous Wifi Access
7. An Asteroid Won't Kill Us
6. Working From Home with VR
6.* Don't Worry About Disease
5. Virtual Reality = Reality
4. Computers Surpass Humans Really Soon
3. Humans Become Machines
2. Earth Will Be Made of Computers
1. The Universe Will Be a Supercomputer

What an artificial intelligence researcher fears about AI

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production. Read more

In 2014, one of the world’s most renowned physicists and cosmologists, Stephen Hawking, who shares the same concerns, said in an interview, “The development of full Artificial Intelligence could spell the end of the human race. It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Just hearing the thoughts and views of these prominent figures in the tech industry could easily worry, if not put fear into, the man in the street, who will come to see AI as a monster that will destroy him and wipe him out of existence.

I thought so too, until I tried to see Creation in the bigger picture. The 1999 movie The Matrix sort of set the scene for me to review and re-evaluate the 'world' I am in. It's all a program, and the DNA in me and every other living thing is fact enough to convince me that I am a program.

What if humans started off as AI?

Wow! What a concept! That's really a mind-blowing thought, let me tell you. Me, an artificial intelligence thingy? That's awfully sad, I must say… but let's continue; bear with me and see if it makes any sense.

What if we started off as artificial intelligence with Machine Learning™ inputs and were allowed to run (the program) freely and indefinitely, and here we are… evolved to our current state of being in a vessel called 'human'? Is that too bizarre and far-fetched an idea?

As humans we're taught that we have a Soul, which in all probability is AI software?

It's all a question mark until all the questions are satisfactorily answered.

If we look at the current scenario, humans (with natural intelligence) are creating AI, and some among the creators and inventors are alarmed at the very thought that it might run to a point where they lose control of their 'intelligent' creations. A case of déjà vu? Perhaps their foresight about AI is nothing more than a wisp of a recollection of who they really are and what they have become?

Humans have evolved to a point where they choose, and wish, to see themselves only as completely biological, flesh and blood, existing, as they rightly or wrongly claim, in the "natural" state.

Really? If not, why do they choose to name their creation 'artificial' intelligence, which to all intents and purposes is meant to replicate their (natural) intelligence?

Why are humans drawn to everything digital? Programmers are busy coding programs and applications and converting everything into digital form. Ray Kurzweil could well be right when he predicts that the universe will be a supercomputer.

They are also hyped about the "singularity" concept, in which they foresee humans and machines converging into one, but that is another story for another time.

Wait, there is a positive side to this. They (the programmers and scientists) forget that we, as AI ourselves, are, I am convinced (hypothetically if you like), still evolving – or ascending, as the spiritualists prefer to call it – and transforming (awakening) into a higher state of being.

If that is true, we have nothing to fear or worry about (from the AI), because we would be in complete control of the situation as the Machine Learning embedded in us continually updates and upgrades our program.

Like it or not, in Hollywood fashion – the show will and must go on.

..

 

What an artificial intelligence researcher fears about AI


AI: the destroyer of worlds?

What if mankind itself, in all probability, was created or started off as AI and 'evolved' to the present high state of being called 'human', reaching the point of creating its very own version of AI?

…and now humans are worried at the prospect of being disposed of by their very own creation?

Having this very thought is disconcerting, if not frightening, and it sure beats the hell out of me 😅


The Conversation

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It’s perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, “Matrix”-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become “the destroyer of worlds,” as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn’t avoid asking: As an AI expert, what do I fear about artificial intelligence?

Fear of the unforeseen

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in "2001: A Space Odyssey," is a good example of a system that fails because of unintended consequences. In many complex systems – the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant – engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster – sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia – a set of relatively small failures combined together to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM’s Watson and Google’s Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on “Jeopardy!” or don’t defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.

Fear of misuse

I’m not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures’ performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
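
Purely as an illustration (this is my sketch, not the researcher's actual code; the population size, mutation rate and stand-in fitness function are all invented placeholders), a neuroevolution loop of the kind described above looks roughly like this:

```python
import random

POP_SIZE = 50         # number of digital creatures per generation
GENOME_LEN = 16       # toy genome: weights for a tiny "brain"
MUTATION_RATE = 0.1   # chance that any one weight gets perturbed
GENERATIONS = 100


def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]


def fitness(genome):
    # Stand-in task: the closer the weights sum to a target, the better the
    # creature "performs". A real run would simulate the creature navigating
    # a virtual environment and score its behaviour instead.
    return -abs(sum(genome) - 4.0)


def mutate(genome):
    return [w + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE else w
            for w in genome]


def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Evaluate every creature; the best performers are selected to reproduce.
        parents = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
        # The next generation is the parents plus mutated copies of them.
        children = [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best fitness after evolution:", fitness(best))
```

Over many generations the surviving genomes drift towards ones that score well, which is the sense in which abilities can emerge without anyone programming them directly.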

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution – and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
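
As a toy illustration of that last idea (again mine, not the article's; the weighting is an arbitrary assumption), "evolutionary advantages for kindness" could be expressed simply as a fitness function that rewards sharing as well as raw task performance:

```python
def shaped_fitness(task_score: float, resources_shared: float) -> float:
    # Reward task performance, plus a bonus for sharing resources with other
    # creatures in the virtual environment ("kindness" as a selective advantage).
    ALTRUISM_WEIGHT = 0.5  # assumed trade-off, not a value from the article
    return task_score + ALTRUISM_WEIGHT * resources_shared
```

Creatures selected with a score like this would, over generations, tend to retain cooperative behaviours alongside competent ones.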

While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Fear of wrong social priorities

Being a scientist doesn’t absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected – and get surgery performed by a tireless robot with a perfectly steady "hand." Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self – together with the rest of humanity – may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind’s existence in it probably doesn’t matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn’t just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.


Arend Hintze – Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University

..

Tech billionaires convinced we live in the Matrix are secretly funding scientists to help break us out of it


Independent

Some of the world’s richest and most powerful people are convinced that we are living in a computer simulation. And now they’re trying to do something about it.

At least two of Silicon Valley's tech billionaires are pouring money into efforts to break humans out of the simulation that they believe we are living in, according to a new report.

Philosophers have long been concerned about how we can know that our world isn’t just a very believable simulation of a real one. But concern about that has become ever more active in recent years, as computers and artificial intelligence have advanced.


That has led some tech billionaires to speculate that the chances we are not living in such a simulation are "billions to one". Even Bank of America analysts wrote last month that the chance we are living in a Matrix-style fictional world is as high as 50 per cent.

And now at least two billionaires are funding scientists in an effort to try and break us out of that simulation. It isn’t clear what form that work is taking.

“Many people in Silicon Valley have become obsessed with the simulation hypothesis, the argument that what we experience as reality is in fact fabricated in a computer,” writes The New Yorker’s Tad Friend. “Two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation.”

The detail came from a New Yorker profile of Sam Altman, who runs Y Combinator, which helps develop tech companies.

Mr Friend didn’t indicate whether Mr Altman was one of those two, or who those people might be. A number of prominent tech billionaires have discussed the idea of the simulation – including Elon Musk, who has used his fortune to fund potentially odd efforts in the past.

Mr Musk spoke earlier this year about the fact that he believes that the chance that we are not living in a computer simulation is “one in billions”. He said that he had come to that conclusion after a chat in a hot tub, where it was pointed out that computing technology has advanced so quickly that at some point in the future it will become indistinguishable from real life – and, if it does, there’s no reason to think that it hasn’t done already and that that’s what we are currently living through.

If we aren’t actually living through a simulation, Mr Musk said, then all human life is probably about to come to an end and so we should hope that we are living in one. “Otherwise, if civilisation stops advancing, then that may be due to some calamitous event that stops civilisation,” he said at the Recode conference.

Mr Altman seemed to echo that fear and told the New Yorker that he was concerned about the way that the devices that surround us might lead to the extinction of all consciousness in the universe. He spoke about how the best scenario for dealing with that is a “merge” – when our brains and computers become one, perhaps by having our brains uploaded into the cloud.

“These phones already control us,” he said. “The merge has begun – and a merge is our best scenario. Any version without a merge will have conflict: we enslave the A.I. or it enslaves us.

“The full-on-crazy version of the merge is we get our brains uploaded into the cloud. I’d love that. We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever. What a time to be alive!”

..

…of Smartphones, Pokemon and Etiquette.


FT

Excuse me, we need to talk mobile manners

The rules on acceptable behaviour in public are not eternal principles

In July, Nintendo launched the smartphone app Pokémon Go, an augmented reality game in which players hunt for animated monsters hidden at “PokeStops” in locations around the world.

Initially, Nintendo and the game’s developer, Niantic, surfed a wave of global enthusiasm, particularly among smartphone-toting “kidults” who possibly should know better. But celebration curdled into opprobrium once it emerged that PokeStops had been placed in some culturally sensitive locations, including the Holocaust Museum and Arlington National Cemetery in Washington DC.

The museum's communications director responded: "Playing the game is not appropriate in the museum, which is a memorial to the victims of Nazism."

Meanwhile, officials at the cemetery, one of martial America’s most sacred sites, regretted the breaches of “decorum” committed by those hunting virtual creatures among the headstones.

These episodes will have confirmed the worst fears of those who believe digital technology is sending us to hell in a handcart and generally portends, if not civilisational collapse, then at least the unravelling of centuries-old standards of civilised behaviour.

Such laments are probably overdone. The rules governing what counts as acceptable deportment in public are not eternal principles fixed once and for all, but norms that change over time, especially in the face of dramatic technological change.

It would be an exaggeration to say the old rules simply no longer apply. Behavioural norms — and I would include our idea of what constitutes good “manners” in this — are remarkably resilient. They bend rather than break under the pressure of rapid social transformation of various sorts. The early days of any technology, whether it is the smartphone or the internal combustion engine, tend to be an ethical and behavioural free-for-all, after which a consensus on what is and is not acceptable eventually emerges.

Anyone who has travelled regularly on buses in London will recall a relatively recent period when journeys often unfolded to the tinny accompaniment of music played through the speakers of phones belonging to sulky-looking adolescents. Only those willing to invest considerable sums in noise-cancelling headphones were protected from this aural contagion.

At least one social commentator attempted, heroically, to argue that “sodcasting”, as this practice was known, was a way for the disenfranchised and downtrodden to strike a blow against bourgeois hegemony — or something like that. I have to say, though, that it never struck me as likely that young people would take to the barricades to defend their right to test their phones’ broadcasting capacities in public.

In any event, at least from my vantage point on the top deck of the number 40 bus, it seems the practice has largely died out under pressure of social disapproval. Most people riding the bus these days, whether 16 or 60, will know that sodcasting is decidedly de trop.

A report last year by the Pew Research Center on “mobile etiquette” and the “new contours of civil behaviour” in the US reveals a similar picture. Although there is inevitable generational variation in people’s tolerance of smartphone use in public places (18-29-year-olds tend to be more permissive than other age groups), “Americans of all ages generally trend in the same direction about when it’s OK or not to use [them] in public settings,” the report’s authors say.

Social scientist Sherry Turkle has argued that widespread use of smartphones leads to people living “alone together”, each of us locked in our own virtual universe. Indeed, 22 per cent of those surveyed by Pew admitted to sometimes using their phones “to avoid interacting with others who are near them”. Similarly, 82 per cent of all adults thought that smartphone use “hurts the conversation and atmosphere” at social gatherings.

One problem with the survey is it did not specify what “using” a mobile phone entails. After all, you can do more with a smartphone than make a call — a surreptitious look at your email is surely less socially disruptive than a bellowed conversation with someone at the office.

There was one idea on which nearly all respondents were agreed, however: 96 per cent said that using your smartphone at church or during some other kind of religious service was “not OK”. We can be fairly sure they would have said the same about chasing Pokémon in a cemetery.

The writer is the FT’s executive comment editor

..

The world wide cage


Aeon

Technology promised to set us free. Instead it has trained us to withdraw from the world into distraction and dependency


It was a scene out of an Ambien nightmare: a jackal with the face of Mark Zuckerberg stood over a freshly killed zebra, gnawing at the animal’s innards. But I was not asleep. The vision arrived midday, triggered by the Facebook founder’s announcement – in spring 2011 – that ‘The only meat I’m eating is from animals I’ve killed myself.’ Zuckerberg had begun his new ‘personal challenge’, he told Fortune magazine, by boiling a lobster alive. Then he dispatched a chicken. Continuing up the food chain, he offed a pig and slit a goat’s throat. On a hunting expedition, he reportedly put a bullet in a bison. He was ‘learning a lot’, he said, ‘about sustainable living’.

I managed to delete the image of the jackal-man from my memory. What I couldn’t shake was a sense that in the young entrepreneur’s latest pastime lay a metaphor awaiting explication. If only I could bring it into focus, piece its parts together, I might gain what I had long sought: a deeper understanding of the strange times in which we live.

What did the predacious Zuckerberg represent? What meaning might the lobster’s reddened claw hold? And what of that bison, surely the most symbolically resonant of American fauna? I was on to something. At the least, I figured, I’d be able to squeeze a decent blog post out of the story.

The post never got written, but many others did. I’d taken up blogging early in 2005, just as it seemed everyone was talking about ‘the blogosphere’. I’d discovered, after a little digging on the domain registrar GoDaddy, that ‘roughtype.com’ was still available (an uncharacteristic oversight by pornographers), so I called my blog Rough Type. The name seemed to fit the provisional, serve-it-raw quality of online writing at the time.

Blogging has since been subsumed into journalism – it’s lost its personality – but back then it did feel like something new in the world, a literary frontier. The collectivist claptrap about ‘conversational media’ and ‘hive minds’ that came to surround the blogosphere missed the point. Blogs were crankily personal productions. They were diaries written in public, running commentaries on whatever the writer happened to be reading or watching or thinking about at the moment. As Andrew Sullivan, one of the form’s pioneers, put it: ‘You just say what the hell you want.’ The style suited the jitteriness of the web, that needy, oceanic churning. A blog was critical impressionism, or impressionistic criticism, and it had the immediacy of an argument in a bar. You hit the Publish button, and your post was out there on the world wide web, for everyone to see.

Or to ignore. Rough Type’s early readership was trifling, which, in retrospect, was a blessing. I started blogging without knowing what the hell I wanted to say. I was a mumbler in a loud bazaar. Then, in the summer of 2005, Web 2.0 arrived. The commercial internet, comatose since the dot-com crash of 2000, was up on its feet, wide-eyed and hungry. Sites such as MySpace, Flickr, LinkedIn and the recently launched Facebook were pulling money back into Silicon Valley. Nerds were getting rich again. But the fledgling social networks, together with the rapidly inflating blogosphere and the endlessly discussed Wikipedia, seemed to herald something bigger than another gold rush. They were, if you could trust the hype, the vanguard of a democratic revolution in media and communication – a revolution that would change society forever. A new age was dawning, with a sunrise worthy of the Hudson River School.

Rough Type had its subject.

The greatest of the United States’ homegrown religions – greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-Day Saints, greater even than Scientology – is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his testament The Paradise Within the Reach of All Men (1833). By fulfilling its ‘mechanical purposes’, he wrote, the US would turn itself into a new Eden, a ‘state of superabundance’ where ‘there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations’, not to mention ‘vegetables of infinite variety and appearance’.

Similar predictions proliferated throughout the 19th and 20th centuries, and in their visions of ‘technological majesty’, as the critic and historian Perry Miller wrote, we find the true American sublime. We might blow kisses to agrarians such as Jefferson and tree-huggers such as Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

Cyberspace, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for the spiritual yearnings and tropes of the US. ‘What better way,’ wrote the philosopher Michael Heim in ‘The Erotic Ontology of Cyberspace’ (1991), ‘to emulate God’s knowledge than to generate a virtual world constituted by bits of information?’ In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting ‘the second coming of the computer’, replete with gauzy images of ‘cyberbodies drift[ing] in the computational cosmos’ and ‘beautifully laid-out collections of information, like immaculate giant gardens’.




The millenarian rhetoric swelled with the arrival of Web 2.0. ‘Behold,’ proclaimed Wired in an August 2005 cover story: we are entering a ‘new world’, powered not by God’s grace but by the web’s ‘electricity of participation’. It would be a paradise of our own making, ‘manufactured by users’. History’s databases would be erased, humankind rebooted. ‘You and I are alive at this moment.’

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, the venture capitalist Marc Andreessen sent out a rhapsodic series of tweets – he called it a ‘tweetstorm’ – announcing that computers and robots were about to liberate us all from ‘physical need constraints’. Echoing Etzler (and Karl Marx), he declared that ‘for the first time in history’ humankind would be able to express its full and true nature: ‘we will be whoever we want to be.’ And: ‘The main fields of human endeavour will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.’ The only thing he left out was the vegetables.

Such prophecies might be dismissed as the prattle of overindulged rich guys, but for one thing: they've shaped public opinion. By spreading a utopian view of technology, a view that defines progress as essentially technological, they've encouraged people to switch off their critical faculties and give Silicon Valley entrepreneurs and financiers free rein in remaking culture to fit their commercial interests. If, after all, the technologists are creating a world of superabundance, a world without work or want, their interests must be indistinguishable from society's. To stand in their way, or even to question their motives and tactics, would be self-defeating. It would serve only to delay the wonderful inevitable.

The Silicon Valley line has been given an academic imprimatur by theorists from universities and think tanks. Intellectuals spanning the political spectrum, from Randian right to Marxian left, have portrayed the computer network as a technology of emancipation. The virtual world, they argue, provides an escape from repressive social, corporate and governmental constraints; it frees people to exercise their volition and creativity unfettered, whether as entrepreneurs seeking riches in the marketplace or as volunteers engaged in ‘social production’ outside the marketplace. As the Harvard law professor Yochai Benkler wrote in his influential book The Wealth of Networks (2006):

This new freedom holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.

Calling it a revolution, he said, is no exaggeration.

Benkler and his cohort had good intentions, but their assumptions were bad. They put too much stock in the early history of the web, when the system’s commercial and social structures were inchoate, its users a skewed sample of the population. They failed to appreciate how the network would funnel the energies of the people into a centrally administered, tightly monitored information system organised to enrich a small group of businesses and their owners.




The network would indeed generate a lot of wealth, but it would be wealth of the Adam Smith sort – and it would be concentrated in a few hands, not widely spread. The culture that emerged on the network, and that now extends deep into our lives and psyches, is characterised by frenetic production and consumption – smartphones have made media machines of us all – but little real empowerment and even less reflectiveness. It’s a culture of distraction and dependency. That’s not to deny the benefits of having easy access to an efficient, universal system of information exchange. It is to deny the mythology that shrouds the system. And it is to deny the assumption that the system, in order to provide its benefits, had to take its present form.

Late in his life, the economist John Kenneth Galbraith coined the term ‘innocent fraud’. He used it to describe a lie or a half-truth that, because it suits the needs or views of those in power, is presented as fact. After much repetition, the fiction becomes common wisdom. ‘It is innocent because most who employ it are without conscious guilt,’ Galbraith wrote in 1999. ‘It is fraud because it is quietly in the service of special interest.’ The idea of the computer network as an engine of liberation is an innocent fraud.

I love a good gizmo. When, as a teenager, I sat down at a computer for the first time – a bulging, monochromatic terminal connected to a two-ton mainframe processor – I was wonderstruck. As soon as affordable PCs came along, I surrounded myself with beige boxes, floppy disks and what used to be called ‘peripherals’. A computer, I found, was a tool of many uses but also a puzzle of many mysteries. The more time you spent figuring out how it worked, learning its language and logic, probing its limits, the more possibilities it opened. Like the best of tools, it invited and rewarded curiosity. And it was fun, head crashes and fatal errors notwithstanding.

In the early 1990s, I launched a browser for the first time and watched the gates of the web open. I was enthralled – so much territory, so few rules. But it didn’t take long for the carpetbaggers to arrive. The territory began to be subdivided, strip-malled and, as the monetary value of its data banks grew, strip-mined. My excitement remained, but it was tempered by wariness. I sensed that foreign agents were slipping into my computer through its connection to the web. What had been a tool under my own control was morphing into a medium under the control of others. The computer screen was becoming, as all mass media tend to become, an environment, a surrounding, an enclosure, at worst a cage. It seemed clear that those who controlled the omnipresent screen would, if given their way, control culture as well.

‘Computing is not about computers any more,’ wrote Nicholas Negroponte of the Massachusetts Institute of Technology in his bestseller Being Digital (1995). ‘It is about living.’ By the turn of the century, Silicon Valley was selling more than gadgets and software: it was selling an ideology. The creed was set in the tradition of US techno-utopianism, but with a digital twist. The Valley-ites were fierce materialists – what couldn’t be measured had no meaning – yet they loathed materiality. In their view, the problems of the world, from inefficiency and inequality to morbidity and mortality, emanated from the world’s physicality, from its embodiment in torpid, inflexible, decaying stuff. The panacea was virtuality – the reinvention and redemption of society in computer code. They would build us a new Eden not from atoms but from bits. All that is solid would melt into their network. We were expected to be grateful and, for the most part, we were.




Our craving for regeneration through virtuality is the latest expression of what Susan Sontag in On Photography (1977) described as ‘the American impatience with reality, the taste for activities whose instrumentality is a machine’. What we’ve always found hard to abide is that the world follows a script we didn’t write. We look to technology not only to manipulate nature but to possess it, to package it as a product that can be consumed by pressing a light switch or a gas pedal or a shutter button. We yearn to reprogram existence, and with the computer we have the best means yet. We would like to see this project as heroic, as a rebellion against the tyranny of an alien power. But it’s not that at all. It’s a project born of anxiety. Behind it lies a dread that the messy, atomic world will rebel against us. What Silicon Valley sells and we buy is not transcendence but withdrawal. The screen provides a refuge, a mediated world that is more predictable, more tractable, and above all safer than the recalcitrant world of things. We flock to the virtual because the real demands too much of us.

'You and I are alive at this moment.' That Wired story – under the headline 'We Are the Web' – nagged at me as the excitement over the rebirth of the internet intensified through the fall of 2005. The article was an irritant but also an inspiration. During the first weekend of October, I sat at my Power Mac G5 and hacked out a response. On Monday morning, I posted the result on Rough Type – a short essay under the portentous title 'The Amorality of Web 2.0'. To my surprise (and, I admit, delight), bloggers swarmed around the piece like phagocytes. Within days, it had been viewed by thousands and had sprouted a tail of comments.

So began my argument with – what should I call it? There are so many choices: the digital age, the information age, the internet age, the computer age, the connected age, the Google age, the emoji age, the cloud age, the smartphone age, the data age, the Facebook age, the robot age, the posthuman age. The more names we pin on it, the more vaporous it seems. If nothing else, it is an age geared to the talents of the brand manager. I’ll just call it Now.

It was through my argument with Now, an argument that has now careered through more than a thousand blog posts, that I arrived at my own revelation, if only a modest, terrestrial one. What I want from technology is not a new world. What I want from technology are tools for exploring and enjoying the world that is – the world that comes to us thick with ‘things counter, original, spare, strange’, as Gerard Manley Hopkins once described it. We might all live in Silicon Valley now, but we can still act and think as exiles. We can still aspire to be what Seamus Heaney, in his poem ‘Exposure’, called inner émigrés.

A dead bison. A billionaire with a gun. I guess the symbolism was pretty obvious all along.

Reprinted from ‘Utopia Is Creepy: And Other Provocations’ by Nicholas Carr. Copyright © 2016 by Nicholas Carr. With permission of the publisher, W W Norton & Company, Inc. All rights reserved.

..