
Album review&download

Full With Noise: Theory and Japanese Noise Music
by Paul Hegarty. "Full with Noise" is about noise music, specifically the version that has come to be called Japanese Noise -- itself composed of many different strands. The first half deals with the question of noise: what is it, whose is it, and how can we think about it? Also, how does noise inflect our thinking, rather than being an object; at what point does noise lose its noiseness and become meaning, music, signification? Or -- is there even a point where noise can subsist? Mostly, the text takes the view that noise is a function of not-noise, itself a function of not being noise. Noise is no more original than music or meaning, and yet its position is to indicate the banished, overcome primordiality, and it cannot lose this 'meaning'. Noise, then, is neither the outside of language nor of music, nor is it simply categorisable, at some point or other, as belonging exclusively to the world of meaning, understanding, truth and knowledge. Read More ...
Dirty HC Punk explosion - Bristol scene Rise up + Disorder 9 free CDs
From The Cortinas to Lunatic Fringe and Disorder, Bristol had a huge Punk scene that has influenced, affected and stimulated a vast range of artists that operate in the city. Many of these artists produce music that wouldn’t necessarily suggest a Punk heritage but scratch beneath the surface of a lot of the major players in the Bristol milieu and you will find a fondness for the times of `spikey barnets’, limited musical ability, a `F*** You’ attitude and disrespect for the music industry and its poseur hierarchy. Read More ...
Dinosaur Jr.
Beyond + 17 albums free download
A straight shot west out of Boston on I-90 will carry you, in two hours or less, to Western Massachusetts, where the country still looks like it did twenty or even forty years ago: college towns, I-91 tracing the same lazy ladder from Springfield up through Holyoke and Northampton, Amherst and Deerfield. Out there it's taken for granted that the houses will be drafty, the winters uniformly long, and that, on any given trip to the local supermarket, one might spot Thurston or Lou or Kim or J, on-and-off locals for more than twenty years. (Audio: Drawerings) Read More ...
Leon Theremin /1896-1993/ - the great forefather of Rock N' Roll /big noise master/
In 1919, in the midst of the Russian Civil War, Theremin invented the musical instrument that bears his name. The theremin (also called the thereminvox) is an electronic device that produces sound when its operator waves his hands near its two antennas; it was the first musical instrument designed to be played without being touched. After a lengthy tour of Europe, during which he demonstrated his invention to packed audiences, Theremin found his way to the United States. He performed the theremin with the New York Philharmonic in 1928, patented his invention in 1929 (U.S. Patent 1,661,058) and subsequently granted commercial production rights to RCA. In 1938 Theremin was kidnapped by the NKVD (forerunner of the KGB) from the New York apartment he shared with his American wife, the black ballet dancer Lavinia Williams. He was transported back to Russia and accused under Stalin of spreading anti-Soviet propaganda. Read More ...
Animal Collective
Album: Fall Be Kind + 9 albums free download
By way of decrying a society that left its citizens unbearably restrained, Edith Wharton describes how in New York in the 1870s, women would order dresses from their Paris dressmakers and then leave them in tissue paper at least two years before wearing them in public; the thought of showing them "in advance of the fashion" was unforgivably vulgar. Social life has changed, but cultural life seems just as restricted now – even Animal Collective are held back by trends that seem a couple of years old (and that they helped to invent). When I think back on 2009, I’ll first remember how our impoverished aesthetic generation repeatedly scraped the resin from the cultural trash barrel. Every second person is wearing neon leggings, and the ones who aren’t rock a ‘70s aesthetic, with high-waisted jeans and moccasins. Christmas sweaters are getting impossible to find at the thrift store. Ska revival. Garage rock revival. It never ends. Read More ...
For just over 10 years, London's Guapo has been working in the world of avant and progressive rock. The band's past is a bit hard to track with its numerous lineup changes and guest musicians. The most recent change in roster was the resignation of Matthew Thompson, the founding member of Guapo, which occurred just before the release of 2005's Black Oni. The departure of Thompson has left Guapo with percussionist David Smith and multi-instrumentalist Daniel O'Sullivan. Though O'Sullivan is by no means a founding member of the band, he was essential in honing the sound on Guapo's last two LPs, Five Suns and Black Oni. These two albums have been pivotal in building Guapo's following of fans, so it's hard not to credit O'Sullivan as an asset to the band. (Audio: The Selenotrope) Read More ...
The Swans - THIS IS NOT A REUNION - Message From Gira + free discography download (20 CDs)
Michael Gira's re-activated Swans will be undertaking their first U.S. performances in 13 years, celebrating the Fall release of the first new Swans album since Soundtracks For The Blind (1997). The album was recorded by Jason LeFarge at Seizure's Palace in Brooklyn and is currently being remixed by Gira with Bryce Goggin (Antony & The Johnsons, Akron/Family) at Trout Recordings. Read More ...
New Zealand Psychedelic Noise scene + 6 free CDs
For a small country New Zealand has long been pumping out some impressive music. Way back in the 1960s it was crazed long-haired punkers messed up on all sorts of stuff - musical (the Pretty Things, Love, the 13th Floor Elevators, the Troggs and who-knows-what-else) and I guess otherwise. Some of the best of these bands (at least, the ones that recorded) can be heard on Wild Things vol 1 and 2, compiled by NZ music historian John Baker, the first of which came out on Flying Nun, the second probably on Baker's own Zero Records, also the home to No. 8 Wire: Psychedelia Without Drugs. Read More ...


Cyberwar Hype Intended to Destroy the Open Internet
The biggest threat to the open internet is not Chinese government hackers or greedy anti-net-neutrality ISPs, it’s Michael McConnell, the former director of national intelligence. McConnell’s not dangerous because he knows anything about SQL injection hacks, but because he knows about social engineering. He’s the nice-seeming guy who’s willing and able to use fear-mongering to manipulate the federal bureaucracy for his own ends, while coming off like a straight shooter to those who are not in the know. When he was head of the country’s national intelligence, he scared President Bush with visions of e-doom, prompting the president to sign a comprehensive secret order that unleashed tens of billions of dollars into the military’s black budget so they could start making firewalls and building malware into military equipment. Read More ...
The Peyote Way Church of God - believe that the Holy Sacrament Peyote can lead an individual toward a more spiritual life
The Peyote Way Church of God is a non-sectarian, multicultural, experiential, Peyotist organization located in southeastern Arizona, in the remote Aravaipa wilderness. It is not affiliated with the Church of Jesus Christ of Latter Day Saints, the Native American Church, or any other religious organizations, though we do accept people from all faiths. Church membership is open to all races. We encourage individuals to create their own rituals as they become acquainted with the great mystery. We believe that the Holy Sacrament Peyote, when taken according to our sacramental procedure and combined with a holistic lifestyle (see Word of Wisdom), can lead an individual toward a more spiritual life. Peyote is currently listed as a controlled substance and its religious use is protected by Federal law only for Native American members of the Native American Church. Read More ...
Secret underground bases around the world built for space travelers
The following material comes from people who know the Dulce (underground) base exists. They are people who worked in the labs; abductees taken to the base; people who assisted in the construction; intelligence personnel (NSA, CIA, FBI, etc.); and UFO / inner-earth researchers. This information is meant for those who are seriously interested in the Dulce base. For your own protection, be advised to "use caution" while investigating this complex. Does a strange world exist beneath our feet? Strange legends have persisted for centuries about the mysterious cavern world and the equally strange beings who inhabit it. Many UFOlogists have considered the possibility that UFOs may be emanating from subterranean bases, and that UFO aliens have constructed these bases to carry out various missions involving Earth or humans. Read More ...
Dreamachine - a stroboscopic flicker device that induces a hypnagogic state - try it right here in your browser
The dreamachine (or dream machine) is a stroboscopic flicker device that produces visual stimuli. Artist Brion Gysin and William Burroughs's "systems adviser" Ian Sommerville created the dreamachine after reading William Grey Walter's book, The Living Brain. In its original form, a dreamachine is made from a cylinder with slits cut in the sides. The cylinder is placed on a record turntable and rotated at 78 or 45 revolutions per minute. A light bulb is suspended in the center of the cylinder, and the rotation speed lets the light shine out through the slits at a constant frequency of between 8 and 13 pulses per second. This frequency range corresponds to alpha waves, electrical oscillations normally present in the human brain while relaxing. Read More ...
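The description above fixes the turntable speeds but leaves the slit count open. A minimal sketch of the arithmetic (the slit counts below are illustrative assumptions, not from the source):

```python
# Each slit that sweeps past the viewer's eye per revolution produces one
# pulse of light, so flash rate = revolutions per second * number of slits.
def flicker_hz(rpm, slits):
    """Pulses per second for a cylinder with `slits` slits at `rpm`."""
    return rpm / 60.0 * slits

# Plausible builds landing in the 8-13 Hz alpha band described above:
print(flicker_hz(78, 8))    # 10.4 Hz at 78 rpm with 8 slits
print(flicker_hz(45, 12))   # 9.0 Hz at 45 rpm with 12 slits
```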
Japan’s Annual Penis Festival – Celebrates Fertility
KOMAKI, Japan — It's springtime in Japan and that means one thing. Actually, two things. Penis festivals and vagina festivals. It may sound like a sophomoric gag. But these are folk rites going back at least 1,500 years, into Japan's agricultural past. They're held to ensure a good harvest and promote baby-making. Maybe they should hold more such festivals. Japan has one of the world's lowest birthrates (1.37 children per woman), which experts blame on stagnant incomes and changing gender relations. Read More ...
Rarest Fishes in the World
Aquatic Lifeforms You Never Caught While Fishing:
Black-lip Rattail: These rattails feed over the muddy seafloor by gliding along head down and tail up, powered by gentle undulations of a long fin under the tail. The triangular head has sensory cells underneath that help detect animals buried in the mud or sand. The common name comes from the black edges around the mouth. Read More ...
Island of Ghosts: Hashima Island - Japan’s rotting metropolis
Hashima, an island located in Nagasaki Bay, is better known as Warship Island (Gunkanshima). The island was uninhabited until the end of the 19th century, when it was discovered that the ground below it held tons of coal. The island soon became the center of a major mining complex owned by Mitsubishi Corporation. As the complex expanded, rock brought out of the shafts was used to artificially expand the island. Seawalls created in this expansion turned Hashima into the monstrous-looking Gunkanshima; its artificial appearance makes it look more like a battleship than an island. Read More ...
Japan Monster mummies - the preserved remains of demons, mermaids, kappa, tengu, raijū, and human monks
These fairly freaky historical remains can be found lurking in dark corners of Buddhist temples and museums across Japan. Known as monster mummies, they are, in fact, the preserved remains of demons, mermaids, kappa, tengu and raijū. Or should I say things that people thought were demons, mermaids, kappa, tengu and raijū. They are not pretty, but they are really fascinating. Read More ...


The World's First Commercial Brain-Computer Interface + history of BCI
A brain–computer interface (BCI), sometimes called a direct neural interface or a brain–machine interface, is a direct communication pathway between a brain and an external device. BCIs are often aimed at assisting, augmenting or repairing human cognitive or sensory-motor functions. Research on BCIs began in the 1970s at the University of California Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature. Read More ...
Meet ALICE - CERN's giant new detector
Work on the giant ALICE detector is already underway at CERN, and researchers are scrambling to add an electromagnetic calorimeter to capture jet-quenching, the newest way to look inside the quark-gluon plasma — the hot, dense state of matter that filled the earliest universe, which the Large Hadron Collider will soon recreate by slamming lead nuclei into one another. CERN's Large Hadron Collider (LHC) is known mainly as the accelerator that will soon begin searching for the Higgs particle, and other new physics, in proton collisions at unprecedented energies — up to 14 TeV (14 trillion electron volts) at the center of mass — and with unprecedented beam intensities. But the same machine will also collide massive nuclei, specifically lead ions, at energies never achieved before in the laboratory. Read More ...
Vadim Chernobrov & secret Russian experiments with time machines
A disturbing story in the March 1, 2005 issue of Pravda suggests that the U.S. Government is working on the discovery of a mysterious point over the South Pole that may be a passageway backward in time. According to the article, some American and British scientists working in Antarctica on January 27, 1995, noticed a spinning gray fog in the sky over the pole. U.S. physicist Mariann McLein said at first they believed it to be some kind of sandstorm. But after a while they noticed that the fog did not change its form and did not move, so they decided to investigate. Read More ...
The Secrets of Coral Castle and pyramids EXPLAINED by Leedskalnin's Magnetic Current theory
Coral Castle doesn't look much like a castle, but that hasn't discouraged generations of tourists from wanting to see it. That's because it was built by one man, Ed Leedskalnin, a Latvian immigrant who single-handedly and mysteriously excavated, carved, and erected over 2.2 million pounds of coral rock to build this place, even though he stood only five feet tall and weighed a mere 100 pounds. Ed was as secretive as he was misguided. He never told anyone how he carved and set into place the walls, gates, monoliths, and moon crescents that make up much of his Castle. Some of these blocks weigh as much as 30 tons. Ed often worked at night, by lantern light, so that no one could see him. He used only tools that he fashioned himself from wrecks in an auto junkyard. Read More ...
Microbial communities in fluid inclusions and long-term survival in halite + The 11th Hour - documentary
Fluid inclusions in modern and ancient buried halite from Death Valley and Saline Valley, California, USA, contain an ecosystem of “salt-loving” (halophilic) prokaryotes and eukaryotes, some of which are alive. Prokaryotes may survive inside fluid inclusions for tens of thousands of years using carbon and other metabolites supplied by the trapped microbial community, most notably the single-celled alga Dunaliella, an important primary producer in hypersaline systems. Deeper understanding of the long-term survival of prokaryotes in fluid inclusions will complement studies that further explore microbial life on Earth and elsewhere in the solar system, where materials that potentially harbor microorganisms are millions and even billions of years old. Read More ...
How Norbert Wiener Invented Cybernetics + his book "God and Golem, Inc."
Norbert Wiener invented the field of cybernetics, inspiring a generation of scientists to think of computer technology as a means to extend human capabilities. Wiener was born on November 26, 1894, and received his Ph.D. in Mathematics from Harvard University at the age of 18 for a thesis on mathematical logic (see below, "The Logic of Boolean Algebra"). After working as a journalist, university teacher, engineer, and writer, Wiener was hired by MIT in 1919, coincidentally the same year as Vannevar Bush. In 1933, Wiener won the Bôcher Prize for his brilliant work on Tauberian theorems and generalized harmonic analysis. Read More ...
The T2K Experiment - From Tokai To Kamioka - Where is the anti-matter?
From the beginning of 2010, the T2K experiment will fire a beam of muon-neutrinos from Tokai on Japan's east coast, 300 km across the country to a detector at Kamioka. It hopes to investigate the phenomenon of "neutrino oscillations" by looking for muon neutrinos oscillating into electron neutrinos. A million-pound detector has been built at the University of Warwick as part of a vital experiment to investigate fundamental particles - neutrinos. Read More ...
Careerism and Psychopathy in the US Military leadership
The internal workings of the US military had little significance to the overall state of the nation, except during wars – until the post-WWII era. With the military dominating our foreign policy and being one of the most trusted institutions, the character of our senior generals may become a major factor shaping our future. Hence the importance of this chapter by GI Wilson from The Pentagon Labyrinth: Ten Short Essays to Help You Through It, edited by Winslow T. Wheeler and published by the Center for Defense Information and the World Security Institute. You can see a summary and download a free copy of this important book at the Project On Government Oversight (POGO). Read More ...


Viktor Schauberger & UFO's of Nazi Germany
Near the end of WWII, scientist Viktor Schauberger worked on a secret project. Johannes Kepler, whose ideas Schauberger followed, had knowledge of the secret teachings of Pythagoras that had been adopted and kept secret. It was the knowledge of Implosion (in this case the utilization of the potential of the inner worlds in the outer world). Hitler knew - as did the Thule and Vril people - that the divine principle was always constructive. A technology, however, that is based on explosion and is therefore destructive runs against the divine principle. Thus they wanted to create a technology based on Implosion. Read More ...
It Takes a Giant Cosmos to Create Life and Mind + Supernovae Discovered to be the 'Creation-Machines' of the Cosmos
Excerpt from 'The Intelligent Universe' by James Gardner. There is a time machine clearly visible right outside your front door. It’s easy to see—in fact, it’s impossible to overlook—although its awesome powers are generally ignored by all but a discerning few. The unearthly beauty, the ineffable grandeur, and the ingenuity of construction of this time machine are humbling to every human being who makes an effort to probe into the enigma of its origin and the mystery of its ultimate destiny. The time machine of which I speak is emphatically not of human origin. Indeed, a few venturesome scientists are beginning to entertain a truly incredible possibility: that this device is an artifact bequeathed to us by a supremely evolved intelligence that existed long, long ago and far, far away. All knowledgeable observers agree that the scope of its stupendous powers and the sheer delicacy of its minuscule moving parts seem nothing short of miraculous. Read More ...
The Size Of Our World or How Insignificant the Earth Really Is in the Universe
Compared to you and me, the Earth is really big. But compared to Jupiter and the Sun, the Earth is pretty tiny. There are many ways we can measure the size of the Earth. Let's look at how big the Earth is, and then compare it to other objects in the Solar System. The diameter of the Earth is 12,742 km. In other words, if you dug a hole down into the Earth, passed through the center of the Earth, and came out the other side, you would have dug a hole 12,742 km deep (on average). That's nearly four times the diameter of the Moon. Read More ...
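The Moon comparison can be checked from the numbers. A quick sketch (the Moon's diameter is not given in the text; the standard value is used here):

```python
# Size comparison using the Earth figure quoted above.
earth_diameter_km = 12_742
moon_diameter_km = 3_474    # standard value, not from the text

ratio = earth_diameter_km / moon_diameter_km
print(f"Earth's diameter is {ratio:.1f} times the Moon's")  # 3.7 times
```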
Strange Images from Space - Photos&videos of the Bizarre in Our Universe
Some weird and unusual objects are floating around in the cosmos. Space is always serving up something new, unusual, and unexpected. Here are images and explanations of objects that have amazed and delighted astronomers. Read More ...
Project Icarus: Gas Mining on Uranus
Project Icarus is a 21st century theoretical study of a mission to another star. Icarus aims to build on the work of the celebrated Daedalus project. Between the period 1973-1978 members of the BIS undertook a theoretical study of a flyby mission to Barnard's star 5.9 light years away. This was Project Daedalus and remains one of the most complete studies of an interstellar probe to date. The 54,000 ton two-stage vehicle was powered by inertial confinement fusion using electron beams to compress the D/He3 fusion capsules to ignition. It would obtain an eventual cruise velocity of 36,000km/s or 12% of light speed from over 700kN of thrust, burning at a specific impulse of 1 million seconds, reaching its destination in approximately 50 years. Read More ...
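The Daedalus figures quoted above are internally consistent, which a few lines of arithmetic confirm:

```python
# Cross-checking the Project Daedalus numbers: 36,000 km/s should be about
# 12% of light speed, giving roughly a 50-year trip to Barnard's Star.
LIGHT_SPEED_KM_S = 299_792.458
distance_ly = 5.9          # Barnard's Star distance, from the text
cruise_km_s = 36_000       # cruise velocity, from the text

cruise_fraction_c = cruise_km_s / LIGHT_SPEED_KM_S   # ~0.12
cruise_years = distance_ly / cruise_fraction_c       # ~49 years at cruise speed
print(f"{cruise_fraction_c:.1%} of c, ~{cruise_years:.0f} years")
```

The acceleration phase is ignored here, which is why the result comes out slightly under the "approximately 50 years" in the text.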
Astronomers have found evidence of something that occurred before the (conventional) Big Bang
Our cosmos was "bruised" in collisions with other universes. Now astronomers have found the first evidence of these impacts in the cosmic microwave background. There's something exciting afoot in the world of cosmology. Last month, Roger Penrose at the University of Oxford and Vahe Gurzadyan at Yerevan State University in Armenia announced that they had found patterns of concentric circles in the cosmic microwave background, the echo of the Big Bang. Read More ...
Mysterious Radio Waves from Unknown Object in M82 Galaxy
Something strange is lurking in the galactic neighborhood. An unknown object in galaxy M82, 12 million light-years away, has started sending out radio waves, and the emission does not look like anything seen anywhere in the universe before, except perhaps by Ford Prefect. M82 is a starburst galaxy five times as bright as the Milky Way and one hundred times as bright as our galaxy's center. "We don't know what it is," says co-discoverer Tom Muxlow of Jodrell Bank Centre for Astrophysics near Macclesfield, UK. But its apparent sideways velocity is four times the speed of light. This "superluminal" motion usually occurs in high-speed jets of material ejected by black holes. Read More ...
Nibiru - great arrival of Planet X + Timeline of the 2012 cataclysm
“A secret document prepared for Prime Minister Putin by Russia’s Ministry of Foreign Affairs is claiming that President Medvedev confirmed in his extended meeting with Pope Benedict XVI in February 2011 that the new planet named Tyche (pronounced ty-kee) by NASA will be appearing in the Earth’s night sky by 2012. Though the existence of this planet had long been known to the ancients, it has only been in the past year that Western scientists have begun informing their citizens about this unprecedented event soon to occur, but who are, also, still failing to tell how catastrophic its appearance will be. Tyche was the name coined for this ancient celestial body by the two astrophysicists proposing it for “planet” status, Daniel Whitmire and John Matese from the University of Louisiana at Lafayette. Read More ...


Are You living in a computer simulation?

By Nick Bostrom. This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.



Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. That is the basic idea. The rest of this paper will spell it out more carefully.
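The counting at the heart of this basic idea can be sketched with a toy model. Assume, purely for illustration, that every unsimulated civilization runs N ancestor-simulations and that each simulation contains as many minds as one real history:

```python
# Toy model of the simulation argument's counting step: if each real
# history is accompanied by N simulated histories of equal population,
# the fraction of all minds that are simulated is N / (N + 1).
def simulated_fraction(n_simulations):
    return n_simulations / (n_simulations + 1)

for n in (1, 100, 10**6):
    print(n, simulated_fraction(n))
# Even a modest number of simulations per civilization drives the
# fraction of simulated minds toward 1.
```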

Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The argument provides a stimulus for formulating some methodological and metaphysical questions, and it suggests naturalistic analogies to certain traditional religious conceptions, which some may find amusing or thought-provoking.

The structure of the paper is as follows. First, we formulate an assumption that we need to import from the philosophy of mind in order to get the argument started. Second, we consider some empirical reasons for thinking that running vastly many simulations of human minds would be within the capability of a future civilization that has developed many of those technologies that can already be shown to be compatible with known physical laws and engineering constraints. This part is not philosophically necessary but it provides an incentive for paying attention to the rest. Then follows the core of the argument, which makes use of some simple probability theory, and a section providing support for a weak indifference principle that the argument employs. Lastly, we discuss some interpretations of the disjunction, mentioned in the abstract, that forms the conclusion of the simulation argument.


The brain in the vat scenario is interesting, but ultimately pointless. There is no escape from the notion that we may be living in a simulated world - indeed we ourselves may be simulated. This however should not prevent us from assuming reality is in fact the real reality. Such axioms are necessary if any progress at all is to be made.

This is the first of a (hopefully) 50 part series on philosophical ideas. The text is from a book written by Ben Dupre called "50 Philosophy ideas you really should know", which I picked up on a business trip a while ago. I thought it would make a great basis for a series of videos, but it's only now I have found the motivation to achieve it.



A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) – just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.
Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).


At our current stage of technological development, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Some authors argue that this stage may be only a few decades away.[1] Yet present purposes require no assumptions about the time-scale. The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.
Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. As we are still lacking a “theory of everything”, we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those constraints[2] that in our current understanding impose theoretical limits on the information processing attainable in a given lump of matter. We can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 instructions per second.[3] Another author gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet.[4] (If we could create quantum computers, or learn to build computers out of nuclear matter or plasma, we could push closer to the theoretical limits. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits.[5] However, it suffices for our purposes to use the more conservative estimate that presupposes only currently known design principles.) The amount of computing power needed to emulate a human mind can likewise be roughly estimated.
One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we already understand and whose functionality has been replicated in silico (contrast enhancement in the retina), yields a figure of ~10^14 operations per second for the entire human brain.[6] An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second.[7] Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its neuronal components. One would therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors.

Memory seems to be no more stringent a constraint than processing power.[8] Moreover, since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind.
If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.
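This “fill in on demand” strategy can be sketched in code. The following toy illustration (all names and numbers are hypothetical) generates microscopic detail lazily, only when an observer first looks, and caches it so that repeat observations stay consistent:

```python
import hashlib

class LazyMicroworld:
    """Toy sketch: microscopic detail is generated only when observed,
    then cached so that repeat observations of the same region agree."""

    def __init__(self, world_seed: int):
        self.world_seed = world_seed
        self.cache = {}           # region -> generated detail
        self.generated_count = 0  # how many regions were ever computed

    def observe(self, region: str) -> int:
        # Fill in detail "ad hoc", but deterministically (seeded hash),
        # so later observations of the same region are consistent.
        if region not in self.cache:
            digest = hashlib.sha256(f"{self.world_seed}:{region}".encode()).hexdigest()
            self.cache[region] = int(digest, 16) % 10**6
            self.generated_count += 1
        return self.cache[region]

world = LazyMicroworld(world_seed=42)
a = world.observe("soil-sample-7")
b = world.observe("soil-sample-7")   # same region, same answer
assert a == b
assert world.generated_count == 1    # detail computed once, never eagerly
```

The point of the sketch is only that unobserved regions cost nothing until someone looks, which is why the simulation’s budget is dominated by the minds, not the scenery.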

Moreover, a posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Therefore, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director could skip back a few seconds and rerun the simulation in a way that avoids the problem.

It thus seems plausible that the main computational cost in creating simulations that are indistinguishable from physical reality for human minds in the simulation resides in simulating organic brains down to the neuronal or sub-neuronal level.[9] While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33 - 10^36 operations as a rough estimate.[10] As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.
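The arithmetic behind these estimates is simple enough to check directly. A sketch using the text’s own round figures:

```python
# Rough cost of one ancestor-simulation, using round figures from the text:
humans = 100e9                  # ~100 billion humans who have ever lived
secs_per_human = 50 * 30e6      # ~50 years/human x ~30 million secs/year
ops_low, ops_high = 1e14, 1e17  # operations per second per human brain

cost_low = humans * secs_per_human * ops_low    # ~1.5e34 operations
cost_high = humans * secs_per_human * ops_high  # ~1.5e37 operations
# Within roughly an order of magnitude of the ~10^33 - 10^36 range quoted above.

# A planetary-mass computer at ~10^42 ops/s covers the ~10^36 figure using
# about one millionth of one second of its output:
fraction_of_a_second = 1e36 / 1e42
```

Even generous error bars on any single factor leave the conclusion intact: one planetary-mass computer dwarfs the cost of an ancestor-simulation.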

  • Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose.

The basic idea of this paper can be expressed roughly as follows: if there is a substantial chance that our civilization will ever reach the posthuman stage and run many ancestor-simulations, then how come you are not living in such a simulation?

We shall develop this idea into a rigorous argument. Let us introduce the following notation:

f_P: Fraction of all human-level technological civilizations that survive to reach a posthuman stage
N: Average number of ancestor-simulations run by a posthuman civilization
H: Average number of individuals that have lived in a civilization before it reaches a posthuman stage

The actual fraction of all observers with human-type experiences that live in simulations is then

f_sim = (f_P × N × H) / ((f_P × N × H) + H)
Writing f_I for the fraction of posthuman civilizations that are interested in running ancestor-simulations (or that contain at least some individuals who are interested in that and have sufficient resources to run a significant number of such simulations), and N_I for the average number of ancestor-simulations run by such interested civilizations, we have

N = f_I × N_I

and thus:

f_sim = (f_P × f_I × N_I) / ((f_P × f_I × N_I) + 1)    (*)
Because of the immense computing power of posthuman civilizations, N_I is extremely large, as we saw in the previous section. By inspecting (*) we can then see that at least one of the following three propositions must be true:

(1) f_P ≈ 0
(2) f_I ≈ 0
(3) f_sim ≈ 1



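The structure of the trichotomy can be checked numerically. In the sketch below the input values are hypothetical; the point is only that, because the number of simulations per interested civilization is huge, the fraction of simulated observers is pushed toward one unless one of the other factors is pushed toward zero:

```python
def f_sim(f_P: float, f_I: float, N_I: float) -> float:
    """Fraction of human-type observers living in simulations, per (*)."""
    x = f_P * f_I * N_I
    return x / (x + 1)

N_I = 1e6  # deliberately modest count of simulations per interested civilization

print(f_sim(0.01, 0.01, N_I))  # ~0.99: even modest fractions make simulation dominant
print(f_sim(1e-9, 0.01, N_I))  # ~0: almost no civilization reaches posthumanity
print(f_sim(0.01, 1e-9, N_I))  # ~0: almost no posthuman civilization is interested
```

With realistic posthuman computing power N_I would be vastly larger than 10^6, which only sharpens the trichotomy.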
We can take a further step and conclude that, conditional on the truth of (3), one’s credence in the hypothesis that one is in a simulation should be close to unity. More generally, if we knew that a fraction x of all observers with human-type experiences live in simulations, and we don’t have any information that indicates that our own particular experiences are any more or less likely than other human-type experiences to have been implemented in vivo rather than in machina, then our credence that we are in a simulation should equal x:

Cr(SIM | f_sim = x) = x    (#)
This step is sanctioned by a very weak indifference principle. Let us distinguish two cases. The first case, which is the easiest, is where all the minds in question are like your own in the sense that they are exactly qualitatively identical to yours: they have exactly the same information and the same experiences that you have. The second case is where the minds are “like” each other only in the loose sense of being the sort of minds that are typical of human creatures, but they are qualitatively distinct from one another and each has a distinct set of experiences. I maintain that even in the latter case, where the minds are qualitatively different, the simulation argument still works, provided that you have no information that bears on the question of which of the various minds are simulated and which are implemented biologically.

A detailed defense of a stronger principle, which implies the above stance for both cases as trivial special instances, has been given in the literature.[11] Space does not permit a recapitulation of that defense here, but we can bring out one of the underlying intuitions by directing our attention to an analogous situation of a more familiar kind. Suppose that x% of the population has a certain genetic sequence S within the part of their DNA commonly designated as “junk DNA”. Suppose, further, that there are no manifestations of S (short of what would turn up in a gene assay) and that there are no known correlations between having S and any observable characteristic. Then, quite clearly, unless you have had your DNA sequenced, it is rational to assign a credence of x% to the hypothesis that you have S. And this is so quite irrespective of the fact that the people who have S have qualitatively different minds and experiences from the people who don’t have S. (They are different simply because all humans have different experiences from one another, not because of any known link between S and what kind of experiences one has.)

The same reasoning holds if S is not the property of having a certain genetic sequence but instead the property of being in a simulation, assuming only that we have no information that enables us to predict any differences between the experiences of simulated minds and those of the original biological minds.

It should be stressed that the bland indifference principle expressed by (#) prescribes indifference only between hypotheses about which observer you are, when you have no information about which of these observers you are. It does not in general prescribe indifference between hypotheses when you lack specific information about which of the hypotheses is true. In contrast to Laplacean and other more ambitious principles of indifference, it is therefore immune to Bertrand’s paradox and similar predicaments that tend to plague indifference principles of unrestricted scope.


Readers familiar with the Doomsday argument[12] may worry that the bland principle of indifference invoked here is the same assumption that is responsible for getting the Doomsday argument off the ground, and that the counterintuitiveness of some of the implications of the latter incriminates or casts doubt on the validity of the former. This is not so. The Doomsday argument rests on a much stronger and more controversial premiss, namely that one should reason as if one were a random sample from the set of all people who will ever have lived (past, present, and future) even though we know that we are living in the early twenty-first century rather than at some point in the distant past or the future. The bland indifference principle, by contrast, applies only to cases where we have no information about which group of people we belong to.

If betting odds provide some guidance to rational belief, it may also be worth pondering that if everybody were to place a bet on whether they are in a simulation or not, then if people use the bland principle of indifference, and consequently place their money on being in a simulation given that they know that that is where almost all people are, then almost everyone will win their bets; if they bet on not being in a simulation, then almost everyone will lose. It seems better that the bland indifference principle be heeded.
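A toy Monte Carlo makes the betting point concrete (the 99.9% figure below is hypothetical; any fraction close to one behaves the same way):

```python
import random

random.seed(0)
population = 1_000_000
frac_simulated = 0.999  # hypothetical: 99.9% of observers are simulated

# Everyone follows the bland indifference principle and bets "I am simulated":
wins_if_bet_sim = sum(random.random() < frac_simulated for _ in range(population))

print(wins_if_bet_sim / population)  # ~0.999: almost everyone wins their bet
```

Betting the other way inverts the outcome: roughly 99.9% of bettors would lose.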

Further, one can consider a sequence of possible situations in which an increasing fraction of all people live in simulations: 98%, 99%, 99.9%, 99.9999%, and so on. As one approaches the limiting case in which everybody is in a simulation (from which one can deductively infer that one is in a simulation oneself), it is plausible to require that the credence one assigns to being in a simulation gradually approach the limiting case of complete certainty in a matching manner.



The possibility represented by proposition (1) is fairly straightforward. If (1) is true, then humankind will almost certainly fail to reach a posthuman level; for virtually no species at our level of development become posthuman, and it is hard to see any justification for thinking that our own species will be especially privileged or protected from future disasters. Conditional on (1), therefore, we must give a high credence to DOOM, the hypothesis that humankind will go extinct before reaching a posthuman level:

Cr(DOOM | f_P ≈ 0) ≈ 1
One can imagine hypothetical situations where we have such evidence as would trump knowledge of f_P. For example, if we discovered that we were about to be hit by a giant meteor, this might suggest that we had been exceptionally unlucky. We could then assign a credence to DOOM larger than our expectation of the fraction of human-level civilizations that fail to reach posthumanity. In the actual case, however, we seem to lack evidence for thinking that we are special in this regard, for better or worse.

Proposition (1) doesn’t by itself imply that we are likely to go extinct soon, only that we are unlikely to reach a posthuman stage. This possibility is compatible with us remaining at, or somewhat above, our current level of technological development for a long time before going extinct. Another way for (1) to be true is if it is likely that technological civilization will collapse. Primitive human societies might then remain on Earth indefinitely.

There are many ways in which humanity could become extinct before reaching posthumanity. Perhaps the most natural interpretation of (1) is that we are likely to go extinct as a result of the development of some powerful but dangerous technology.[13] One candidate is molecular nanotechnology, which in its mature stage would enable the construction of self-replicating nanobots capable of feeding on dirt and organic matter – a kind of mechanical bacteria. Such nanobots, designed for malicious ends, could cause the extinction of all life on our planet.[14]

The second alternative in the simulation argument’s conclusion is that the fraction of posthuman civilizations that are interested in running ancestor-simulations is negligibly small. In order for (2) to be true, there must be a strong convergence among the courses of advanced civilizations. If the number of ancestor-simulations created by the interested civilizations is extremely large, the rarity of such civilizations must be correspondingly extreme. Virtually no posthuman civilizations decide to use their resources to run large numbers of ancestor-simulations. Furthermore, virtually all posthuman civilizations lack individuals who have sufficient resources and interest to run ancestor-simulations; or else they have reliably enforced laws that prevent such individuals from acting on their desires.
What force could bring about such convergence? One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

Another possible convergence point is that almost all individual posthumans in virtually all posthuman civilizations develop in a direction where they lose their desires to run ancestor-simulations. This would require significant changes to the motivations driving their human predecessors, for there are certainly many humans who would like to run ancestor-simulations if they could afford to do so. But perhaps many of our human desires will be regarded as silly by anyone who becomes a posthuman. Maybe the scientific value of ancestor-simulations to a posthuman civilization is negligible (which is not too implausible given its unfathomable intellectual superiority), and maybe posthumans regard recreational activities as merely a very inefficient way of getting pleasure – which can be obtained much more cheaply by direct stimulation of the brain’s reward centers. One conclusion that follows from (2) is that posthuman societies will be very different from human societies: they will not contain relatively wealthy independent agents who have the full gamut of human-like desires and are free to act on them.
The possibility expressed by alternative (3) is the conceptually most intriguing one. If we are living in a simulation, then the cosmos that we are observing is just a tiny piece of the totality of physical existence. The physics in the universe where the computer is situated that is running the simulation may or may not resemble the physics of the world that we observe. While the world we see is in some sense “real”, it is not located at the fundamental level of reality.
It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java applets, for instance, run on a virtual machine (a simulated computer) inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration. If we do go on to create our own ancestor-simulations, this would be strong evidence against (1) and (2), and we would therefore have to conclude that we live in a simulation. Moreover, we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.
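A rough sketch shows why stacked simulations strain the basement level (the parameters below are hypothetical): every operation performed at every level must ultimately be executed by the basement computer, so total cost grows geometrically with depth:

```python
def basement_cost(ops_per_level: float, sims_per_level: int, depth: int) -> float:
    """Total operations the basement-level computer executes to host a stack
    of nested simulations: each level costs ops_per_level itself and spawns
    sims_per_level child simulations (hypothetical parameters throughout)."""
    if depth == 0:
        return ops_per_level
    return ops_per_level + sims_per_level * basement_cost(
        ops_per_level, sims_per_level, depth - 1)

one_level = basement_cost(1e36, sims_per_level=1000, depth=1)
three_levels = basement_cost(1e36, sims_per_level=1000, depth=3)
print(f"{one_level:.1e} vs {three_levels:.1e}")  # cost explodes with each added level
```

This is the intuition behind the remark below that the multi-level hypothesis is constrained by the basement simulators’ computational budget.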

Reality may thus contain many levels. Even if it is necessary for the hierarchy to bottom out at some stage – the metaphysical status of this claim is somewhat obscure – there may be room for a large number of levels of reality, and the number could be increasing over time. (One consideration that counts against the multi-level hypothesis is that the computational cost for the basement-level simulators would be very great. Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman.)

Although all the elements of such a system can be naturalistic, even physical, it is possible to draw some loose analogies with religious conceptions of the world. In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are “omnipotent” in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are “omniscient” in the sense that they can monitor everything that happens. However, all the demigods except those at the fundamental level of reality are subject to sanctions by the more powerful gods living at lower levels.
Further rumination on these themes could climax in a naturalistic theogony that would study the structure of this hierarchy, and the constraints imposed on its inhabitants by the possibility that their actions on their own level may affect the treatment they receive from dwellers of deeper levels. For example, if nobody can be sure that they are at the basement-level, then everybody would have to consider the possibility that their actions will be rewarded or punished, based perhaps on moral criteria, by their simulators. An afterlife would be a real possibility. Because of this fundamental uncertainty, even the basement civilization may have a reason to behave ethically. The fact that it has such a reason for moral behavior would of course add to everybody else’s reason for behaving morally, and so on, in a truly virtuous circle. One might get a kind of universal ethical imperative, which it would be in everybody’s self-interest to obey, as it were “from nowhere”.

In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience. Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations. There would have to be about 100 billion times as many “me-simulations” (simulations of the life of only a single mind) as there are ancestor-simulations in order for most simulated persons to be in me-simulations.
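The 100-billion factor follows directly from head-counting: an ancestor-simulation contains on the order of 100 billion minds, while a me-simulation contains exactly one. A quick check (hypothetical counts):

```python
MINDS_PER_ANCESTOR_SIM = 100e9  # ~100 billion humans per full ancestor-simulation
MINDS_PER_ME_SIM = 1

def fraction_in_me_sims(n_me_sims: float, n_ancestor_sims: float) -> float:
    """Fraction of all simulated minds that live in me-simulations."""
    me_minds = n_me_sims * MINDS_PER_ME_SIM
    total = me_minds + n_ancestor_sims * MINDS_PER_ANCESTOR_SIM
    return me_minds / total

print(fraction_in_me_sims(100e9, 1))  # 0.5: parity needs ~100 billion me-sims per ancestor-sim
print(fraction_in_me_sims(1e6, 1))    # ~1e-5: a million me-sims barely register
```

So unless me-simulations vastly outnumber complete ones, a randomly chosen simulated mind almost certainly lives in an ancestor-simulation.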

There is also the possibility of simulators abridging certain parts of the mental lives of simulated beings and giving them false memories of the sort of experiences that they would typically have had during the omitted interval. If so, one can consider the following (farfetched) solution to the problem of evil: that there is no suffering in the world and all memories of suffering are illusions. Of course, this hypothesis can be seriously entertained only at those times when you are not currently suffering.
Supposing we live in a simulation, what are the implications for us humans? The foregoing remarks notwithstanding, the implications are not all that radical. Our best guide to how our posthuman creators have chosen to set up our world is the standard empirical study of the universe we see. The revisions to most parts of our belief networks would be rather slight and subtle – in proportion to our lack of confidence in our ability to understand the ways of posthumans. Properly understood, therefore, the truth of (3) should have no tendency to make us “go crazy” or to prevent us from going about our business and making plans and predictions for tomorrow. The chief empirical importance of (3) at the current time seems to lie in its role in the tripartite conclusion established above.[15] We may hope that (3) is true since that would decrease the probability of (1), although if computational constraints make it likely that simulators would terminate a simulation before it reaches a posthuman level, then our best hope would be that (2) is true.
If we learn more about posthuman motivations and resource constraints, maybe as a result of developing towards becoming posthumans ourselves, then the hypothesis that we are simulated will come to have a much richer set of empirical implications.


A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).
Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.


I’m grateful to many people for comments, and especially to Amara Angelica, Robert Bradbury, Milan Cirkovic, Robin Hanson, Hal Finney, Robert A. Freitas Jr., John Leslie, Mitch Porter, Keith DeRose, Mike Treder, Mark Walker, Eliezer Yudkowsky, and the anonymous referees.

[1] See e.g. K. E. Drexler, Engines of Creation: The Coming Era of Nanotechnology, London, Fourth Estate, 1985; N. Bostrom, “How Long Before Superintelligence?” International Journal of Futures Studies, vol. 2, (1998); R. Kurzweil, The Age of Spiritual Machines: When computers exceed human intelligence, New York, Viking Press, 1999; H. Moravec, Robot: Mere Machine to Transcendent Mind, Oxford University Press, 1999.
[2] Such as the Bremermann-Bekenstein bound and the black hole limit (H. J. Bremermann, “Minimum energy requirements of information transfer and computing.” International Journal of Theoretical Physics 21: 203-217 (1982); J. D. Bekenstein, “Entropy content and information flow in systems with limited energy.” Physical Review D 30: 1669-1679 (1984); A. Sandberg, “The Physics of Information Processing Superobjects: The Daily Life among the Jupiter Brains.” Journal of Evolution and Technology, vol. 5 (1999)).
[3] K. E. Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, New York, John Wiley & Sons, Inc., 1992.
[4] R. J. Bradbury, “Matrioshka Brains.” Working manuscript (2002),
[5] S. Lloyd, “Ultimate physical limits to computation.” Nature 406 (31 August): 1047-1054 (2000).
[6] H. Moravec, Mind Children, Harvard University Press (1989).
[7] Bostrom (1998), op. cit.
[8] See references in foregoing footnotes.
[9] As we build more and faster computers, the cost of simulating our machines might eventually come to dominate the cost of simulating nervous systems.
[10] 100 billion humans × 50 years/human × 30 million secs/year × [10^14, 10^17] operations in each human brain per second ≈ [10^33, 10^36] operations.
[11] In e.g. N. Bostrom, “The Doomsday argument, Adam & Eve, UN++, and Quantum Joe.” Synthese 127(3): 359-387 (2001); and most fully in my book Anthropic Bias: Observation Selection Effects in Science and Philosophy, Routledge, New York, 2002.
[12] See e.g. J. Leslie, “Is the End of the World Nigh? ” Philosophical Quarterly 40, 158: 65-72 (1990).
[13] See my paper “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology, vol. 9 (2001) for a survey and analysis of the present and anticipated future threats to human survival.
[14] See e.g. Drexler (1985) op cit., and R. A. Freitas Jr., “Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations.” Zyvex preprint April (2000),
[15] For some reflections by another author on the consequences of (3), which were sparked by a privately circulated earlier version of this paper, see R. Hanson, “How to Live in a Simulation.” Journal of Evolution and Technology, vol. 7 (2001).
