
Parallel universes, the Matrix, and superintelligence …


Physicists are converging on a theory of everything, probing the 11th dimension, developing computers for the next generation of robots, and speculating about civilizations millions of years ahead of ours, says Dr. Michio Kaku, author of the best-sellers Hyperspace and Visions and co-founder of String Field Theory, in this interview by KurzweilAI.net Editor Amara D. Angelica.

Published on KurzweilAI.net June 26, 2003.

What are the burning issues for you currently?

Well, several things. Professionally, I work on something called Superstring theory, now also called M-theory, and the goal is to find an equation, perhaps no more than one inch long, which will allow us to “read the mind of God,” as Einstein used to say.

In other words, we want a single theory that gives us an elegant, beautiful representation of the forces that govern the Universe. Now, after two thousand years of investigation into the nature of matter, we physicists believe that there are four fundamental forces that govern the Universe.

Some physicists have speculated about the existence of a fifth force, which may be some kind of paranormal or psychic force, but so far we find no reproducible evidence of a fifth force.

Now, each time a force has been mastered, human history has undergone a significant change. In the 1600s, when Isaac Newton first unraveled the secret of gravity, he also created a mechanics. And from Newton's Laws and his mechanics, the foundation was laid for the steam engine, and eventually the Industrial Revolution.

So, in other words, in some sense, a byproduct of the mastery of the first force, gravity, helped to spur the creation of the Industrial Revolution, which in turn is perhaps one of the greatest revolutions in human history.

The second great force is the electromagnetic force; that is, the force of light, electricity, magnetism, the Internet, computers, transistors, lasers, microwaves, x-rays, etc.

And then in the 1860s, it was James Clerk Maxwell, the Scottish physicist at Cambridge University, who finally wrote down Maxwell's equations, which allow us to summarize the dynamics of light.

That helped to unleash the Electric Age, and the Information Age, which have changed all of human history. Now it's hard to believe, but Newton's equations and Einstein's equations are no more than about half an inch long.

Maxwell's equations are also about half an inch long. For example, Maxwell's equations say that the four-dimensional divergence of an antisymmetric, second-rank tensor equals zero. That's Maxwell's equations, the equations for light. And in fact, at Berkeley, you can buy a T-shirt which says, “In the beginning, God said the four-dimensional divergence of an antisymmetric, second rank tensor equals zero, and there was Light, and it was good.”
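
For readers who want the symbols behind that sentence, a compact way to write the vacuum Maxwell equations in relativistic notation is the following; this rendering is standard textbook form rather than something spelled out in the interview.

```latex
% Vacuum Maxwell equations: the four-dimensional divergence of the
% antisymmetric field-strength tensor F^{mu nu} (and of its dual) vanishes.
\partial_\mu F^{\mu\nu} = 0, \qquad
\partial_\mu \tilde{F}^{\mu\nu} = 0, \qquad
F^{\mu\nu} = \partial^{\mu}A^{\nu} - \partial^{\nu}A^{\mu}
```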

So, the mastery of the first two forces helped to unleash, respectively, the Industrial Revolution and the Information Revolution.

The last two forces are the weak nuclear force and the strong nuclear force, and they in turn have helped us to unlock the secret of the stars, via Einstein's equation E = mc². Many people think that far in the future, the human race may ultimately derive its energy not only from solar power, which is the power of fusion, but also from fusion power on the Earth, in terms of fusion reactors, which operate on seawater and do not create copious quantities of radioactive waste.
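
To give a sense of scale for E = mc², here is a rough worked example; the 0.7 percent mass-to-energy conversion figure for hydrogen fusing into helium is a standard approximation, not a number from the interview.

```latex
E = mc^2:\qquad
E \approx 0.007 \times (1\ \text{kg}) \times (3\times 10^{8}\ \text{m/s})^2
  \approx 6 \times 10^{14}\ \text{J per kilogram of hydrogen fused}
```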

So, in summary, the mastery of each force helped to unleash a new revolution in human history.

Today, we physicists are embarking upon the greatest quest of all, which is to unify all four of these forces into a single comprehensive theory. The first force, gravity, is now represented by Einstein's General Theory of Relativity, which gives us the Big Bang, black holes, and the expanding universe. It's a theory of the very large; it's a theory of smooth space-time manifolds, like bedsheets and trampoline nets.

The second theory, the quantum theory, is the exact opposite. The quantum theory allows us to unify the electromagnetic, weak, and strong forces. However, it is based on discrete, tiny packets of energy called quanta, rather than smooth bedsheets, and it is based on probabilities, rather than the certainty of Einstein's equations. So these two theories summarize the sum total of all physical knowledge of the physical universe.

Any equation describing the physical universe ultimately is derived from one of these two theories. The problem is these two theories are diametrically opposed. They are based on different assumptions, different principles, and different mathematics. Our job as physicists is to unify the two into a single, comprehensive theory. Now, over the last decades, the giants of the twentieth century have tried to do this and have failed.

For example, Niels Bohr, the founder of atomic physics and the quantum theory, was very skeptical about many attempts over the decades to create a Unified Field Theory. One day, Wolfgang Pauli, Nobel laureate, was giving a talk about his version of the Unified Field Theory, and in a very famous story, Bohr stood up in the back of the room and said, “Mr. Pauli, we in the back are convinced that your theory is crazy. What divides us is whether your theory is crazy enough.”

So today, we realize that a true Unified Field Theory must be bizarre, must be fantastic, incredible, mind-boggling, crazy, because all the sane alternatives have been studied and discarded.

Today we have string theory, which is based on the idea that the subatomic particles we see in nature are nothing but notes on a tiny, vibrating string. If you kick the string, then an electron will turn into a neutrino. If you kick it again, the vibrating string will turn from a neutrino into a photon or a graviton. And if you kick it enough times, the vibrating string will then mutate into all the subatomic particles.

Therefore we no longer in some sense have to deal with thousands of subatomic particles coming from our atom smashers, we just have to realize that what makes them, what drives them, is a vibrating string. Now when these strings collide, they form atoms and nuclei, and so in some sense, the melodies that you can write on the string correspond to the laws of chemistry. Physics is then reduced to the laws of harmony that we can write on a string. The Universe is a symphony of strings. And what is the mind of God that Einstein used to write about? According to this picture, the mind of God is music resonating through ten- or eleven-dimensional hyperspace, which of course begs the question, “If the universe is a symphony, then is there a composer to the symphony?” But that's another question.

What do you think of Sir Martin Rees's concerns about the risk of creating black holes on Earth in his book, Our Final Hour?

I haven't read his book, but perhaps Sir Martin Rees is referring to many press reports that claim that the Earth may be swallowed up by a black hole created by our machines. This started with a letter to the editor in Scientific American asking whether the RHIC accelerator in Brookhaven, Long Island, will create a black hole which will swallow up the earth. This was then picked up by the Sunday London Times, which splashed it on the international wire services, and all of a sudden, we physicists were deluged with hundreds of emails and telegrams asking whether or not we are going to destroy the world when we create a black hole in Long Island.

However, you can calculate that in outer space, cosmic rays have more energy than the particles produced in our most powerful atom smashers, and black holes do not form in outer space. Not to mention the fact that to create a black hole, you would have to have the mass of a giant star. In fact, an object ten to fifty times the mass of our star may in fact form a black hole. So the probability of a black hole forming in Long Island is zero.

However, Sir Martin Rees has also written a book talking about the Multiverse. And that is also the subject of my next book, coming out late next year, called Parallel Worlds. We physicists no longer believe in a single Universe. We physicists believe in a Multiverse that resembles the boiling of water. Water boils when tiny particles, or bubbles, form, which then begin to rapidly expand. If our Universe is a bubble in boiling water, then perhaps Big Bangs happen all the time.

Now, the Multiverse idea is consistent with Superstring theory, in the sense that Superstring theory has millions of solutions, each of which seems to correspond to a self-consistent Universe. So in some sense, Superstring theory is drowning in its own riches. Instead of predicting a unique Universe, it seems to allow the possibility of a Multiverse of Universes.

This may also help to answer the question raised by the Anthropic Principle. Our Universe seems to have known that we were coming. The conditions for life are extremely stringent. Life and consciousness can only exist in a very narrow band of physical parameters. For example, if the proton is not stable, then the Universe will collapse into a useless heap of electrons and neutrinos. If the proton were a little bit different in mass, it would decay, and all our DNA molecules would decay along with it.

In fact, there are hundreds, perhaps thousands, of coincidences, happy coincidences, that make life possible. Life, and especially consciousness, is quite fragile. It depends on stable matter, like protons, that exists for billions of years in a stable environment, sufficient to create autocatalytic molecules that can reproduce themselves, and thereby create Life. In physics, it is extremely hard to create this kind of Universe. You have to play with the parameters, you have to juggle the numbers, cook the books, in order to create a Universe which is consistent with Life.

However, the Multiverse idea explains this problem, because it simply means we coexist with dead Universes. In other Universes, the proton is not stable. In other Universes, the Big Bang took place, and then it collapsed rapidly into a Big Crunch, or these Universes had a Big Bang, and immediately went into a Big Freeze, where temperatures were so low, that Life could never get started.

So, in the Multiverse of Universes, many of these Universes are in fact dead, and our Universe in this sense is special, in that Life is possible in this Universe. Now, in religion, we have the Judeo-Christian idea of an instant of time, a genesis, when God said, “Let there be light.” But in Buddhism, we have a contradictory philosophy, which says that the Universe is timeless. It had no beginning and it has no end; it is eternal, it just is.

The Multiverse idea allows us to combine these two pictures into a coherent, pleasing picture. It says that in the beginning, there was nothing, nothing but hyperspace, perhaps ten- or eleven-dimensional hyperspace. But hyperspace was unstable, because of the quantum principle. And because of the quantum principle, there were fluctuations, fluctuations in nothing. This means that bubbles began to form in nothing, and these bubbles began to expand rapidly, giving us the Universe. So, in other words, the Judeo-Christian genesis takes place within the Buddhist nirvana, all the time, and our Multiverse percolates universes.

Now this also raises the possibility of Universes that look just like ours, except there's one quantum difference. Let's say, for example, that a cosmic ray went through Churchill's mother, and Churchill was never born as a consequence. In that Universe, which is only one quantum event away from our Universe, England never had a dynamic leader to lead its forces against Hitler, and Hitler was able to overcome England, and in fact conquer the world.

So, we are one quantum event away from Universes that look quite different from ours, and it's still not clear how we physicists resolve this question. This paradox revolves around the Schrödinger's Cat problem, which is still largely unsolved. In any quantum theory, we have the possibility that atoms can exist in two places at the same time, in two states at the same time. And then Erwin Schrödinger, one of the founders of quantum mechanics, asked the question: let's say we put a cat in a box, and the cat is connected to a jar of poison gas, which is connected to a hammer, which is connected to a Geiger counter, which is connected to uranium. Everyone believes that uranium has to be described by the quantum theory. That's why we have atomic bombs, in fact. No one disputes this.

But if the uranium decays, triggering the Geiger counter, setting off the hammer, destroying the jar of poison gas, then I might kill the cat. And so, is the cat dead or alive? Believe it or not, we physicists have to superimpose, or add together, the wave function of a dead cat with the wave function of a live cat. So the cat is neither dead nor alive.

This is perhaps one of the deepest questions in all the quantum theory, with Nobel laureates arguing with other Nobel laureates about the meaning of reality itself.

Now, in philosophy, solipsists like Bishop Berkeley used to believe that if a tree fell in the forest and there was no one there to listen to the tree fall, then perhaps the tree did not fall at all. However, Newtonians believe that if a tree falls in the forest, you don't have to have a human there to witness the event.

The quantum theory puts a whole new spin on this. The quantum theory says that before you look at the tree, the tree could be in any possible state. It could be burnt, it could be a sapling, it could be firewood, it could be burnt to the ground. It could be in any of an infinite number of possible states. Now, when you look at it, it suddenly springs into existence and becomes a tree.

Einstein never liked this. When people used to come to his house, he used to ask them, “Look at the moon. Does the moon exist because a mouse looks at the moon?” Well, in some sense, yes. According to the Copenhagen school of Niels Bohr, observation determines existence.

Now, there are at least two ways to resolve this. The first is the Wigner school. Eugene Wigner was one of the creators of the atomic bomb and a Nobel laureate. And he believed that observation creates the Universe. An infinite sequence of observations is necessary to create the Universe, and in fact, maybe there's a cosmic observer, a God of some sort, that makes the Universe spring into existence.

There's another theory, however, called decoherence, or many worlds, which holds that the Universe simply splits each time, so that we live in a world where the cat is alive, but there's an equal world where the cat is dead. In that world, they have people, they react normally, they think that their world is the only world, but in that world, the cat is dead. And, in fact, we exist simultaneously with that world.

This means that there's probably a Universe where you were never born, but everything else is the same. Or perhaps your mother had extra brothers and sisters for you, in which case your family is much larger. Now, this can be compared to sitting in a room, listening to radio. When you listen to radio, you hear many frequencies. They exist simultaneously all around you in the room. However, your radio is only tuned to one frequency. In the same way, in your living room, there is the wave function of dinosaurs. There is the wave function of aliens from outer space. There is the wave function of a Roman Empire that never fell 1,500 years ago.

All of this coexists inside your living room. However, just like you can only tune into one radio channel, you can only tune into one reality channel, and that is the channel that you exist in. So, in some sense it is true that we coexist with all possible universes. The catch is, we cannot communicate with them, we cannot enter these universes.

However, I personally believe that at some point in the future, that may be our only salvation. The latest cosmological data indicates that the Universe is accelerating, not slowing down, which means the Universe will eventually hit a Big Freeze, trillions of years from now, when temperatures are so low that it will be impossible to have any intelligent being survive.

When the Universe dies, there's one and only one way to survive in a freezing Universe, and that is to leave the Universe. In evolution, there is a law of biology that says if the environment becomes hostile, either you adapt, you leave, or you die.

When the Universe freezes and temperatures reach near absolute zero, you cannot adapt. The laws of thermodynamics are quite rigid on this question. Either you will die, or you will leave. This means, of course, that we have to create machines that will allow us to enter eleven-dimensional hyperspace. This is still quite speculative, but String theory, in some sense, may be our only salvation. For us, as for advanced civilizations in outer space, either we leave or we die.

That brings up a question. Matrix Reloaded seems to be based on parallel universes. What do you think of the film in terms of its metaphors?

Well, the technology found in the Matrix would correspond to that of an advanced Type I or Type II civilization. We physicists, when we scan outer space, do not look for little green men in flying saucers. We look for the total energy outputs of a civilization in outer space, with a characteristic frequency. Even if intelligent beings tried to hide their existence, by the second law of thermodynamics, they create entropy, which should be visible with our detectors.

So we classify civilizations on the basis of energy outputs. A Type I civilization is planetary. They control all planetary forms of energy. They would control, for example, the weather, volcanoes, earthquakes; they would mine the oceans, any planetary form of energy they would control. Type II would be stellar. They play with solar flares. They can move stars, ignite stars, play with white dwarfs. Type III is galactic, in the sense that they have now conquered whole star systems, and are able to use black holes and star clusters for their energy supplies.

Each civilization is separated from the previous one by a factor of ten billion in energy output. Therefore, you can calculate numerically at what point civilizations may begin to harness certain kinds of technologies. In order to access wormholes and parallel universes, you have to be probably a Type III civilization, because by definition, a Type III civilization has enough energy to play with the Planck energy.
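
As a rough illustration of that factor-of-ten-billion spacing, here is a minimal sketch using Carl Sagan's interpolation formula for the Kardashev scale. The formula and the present-day power figure for Earth are not from the interview; they are assumptions for illustration only.

```python
import math

def kardashev_type(power_watts: float) -> float:
    """Sagan's interpolation formula: K = (log10(P) - 6) / 10.

    Type I  ~ 1e16 W (planetary), Type II ~ 1e26 W (stellar),
    Type III ~ 1e36 W (galactic) -- each type is ten billion (1e10)
    times more energetic than the previous one.
    """
    return (math.log10(power_watts) - 6) / 10

# Humanity today consumes very roughly 2e13 W, i.e. about Type 0.7.
# (The 2e13 W figure is an assumed ballpark, not a number from the interview.)
for label, watts in [("Earth today (assumed)", 2e13),
                     ("Type I", 1e16), ("Type II", 1e26), ("Type III", 1e36)]:
    print(f"{label:22s} {kardashev_type(watts):5.2f}")
```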

The Planck energy, or 10^19 billion electron volts, is the energy at which space-time becomes unstable. If you were to heat up, in your microwave oven, a piece of space-time to that energy, then bubbles would form inside your microwave oven, and each bubble in turn would correspond to a baby Universe.
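
For reference, the Planck energy mentioned here is the natural combination of fundamental constants; the standard value below is not quoted in the interview but matches the 10^19 GeV figure.

```latex
E_{\text{Planck}} = \sqrt{\frac{\hbar c^{5}}{G}}
  \approx 1.22 \times 10^{19}\ \text{GeV}
  \approx 2 \times 10^{9}\ \text{J}
```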

Now, in the Matrix, several metaphors are raised. One metaphor is whether computing machines can create artificial realities. That would require a civilization centuries or millennia ahead of ours, which would place it squarely as a Type I or Type II civilization.

However, we also have to ask a practical question: is it possible to create implants that could access our memory banks to create this artificial reality, and are machines dangerous? My answer is the following. First of all, cyborgs with neural implants: the technology does not exist, and probably won't exist for at least a century, for us to access the central nervous system. At present, we can only do primitive experiments on the brain.

For example, at Emory University in Atlanta, Georgia, it's possible to put a glass implant into the brain of a stroke victim, and the paralyzed stroke victim, by looking at the cursor on a laptop, is eventually able to control its motion. It's very slow and tedious; it's like learning to ride a bicycle for the first time. But the brain grows into the glass bead, which is placed into the brain. The glass bead is connected to a laptop computer, and over many hours, the person is able to, by pure thought, manipulate the cursor on the screen.

So, the central nervous system is basically a black box. Except for some primitive hookups to the visual system of the brain, we scientists have not been able to access most bodily functions, because we simply don't know the code for the spinal cord and for the brain. So, neural implant technology, I believe, is a hundred years, maybe centuries, away from us.

On the other hand, we have to ask yet another metaphor raised by the Matrix, and that is, are machines dangerous? And the answer is, potentially, yes. However, at present, our robots have the intelligence of a cockroach, in the sense that pattern recognition and common sense are the two most difficult, unsolved problems in artificial intelligence theory. Pattern recognition means the ability to see, hear, and to understand what you are seeing and understand what you are hearing. Common sense means your ability to make sense out of the world, which even children can perform.

Those two problems are at the present time largely unsolved. Now, I think, however, that within a few decades, we should be able to create robots as smart as mice, maybe dogs and cats. However, when machines start to become as smart as monkeys, I think we should put a chip in their brain, to shut them off when they start to have murderous thoughts.

By the time you have monkey intelligence, you begin to have self-awareness, and with self-awareness, you begin to have an agenda created by a monkey for its own purposes. And at that point, a mechanical monkey may decide that its agenda is different from our agenda, and at that point they may become dangerous to humans. I think we have several decades before that happens, and Moore's Law will probably collapse in 20 years anyway, so I think there's plenty of time before we come to the point where we have to deal with murderous robots, like in the movie 2001.

So you differ with Ray Kurzweil's concept of using nanobots to reverse-engineer and upload the brain, possibly within the coming decades?

Not necessarily. I'm just laying out a linear course, the trajectory where artificial intelligence theory is going today. And that is, one, trying to build machines which can navigate and roam in our world, and two, robots which can make sense out of the world. However, there's another divergent path one might take, and that's to harness the power of nanotechnology. However, nanotechnology is still very primitive. At the present time, we can barely build arrays of atoms. We cannot yet build the first atomic gear, for example. No one has created an atomic wheel with ball bearings. So simple machines, which even children can play with in their toy sets, don't yet exist at the atomic level. However, on a scale of decades, we may be able to create atomic devices that begin to mimic our own devices.

Molecular transistors can already be made. Nanotubes allow us to create strands of material that are super-strong. However, nanotechnology is still in its infancy and therefore it's still premature to say where nanotechnology will go. However, one place where technology may go is inside our body. Already, it's possible to create a pill the size of an aspirin that carries a television camera and can photograph our insides as it goes down our gullet, which means that one day surgery may become relatively obsolete.

In the future, it's conceivable we may have atomic machines that enter the blood. And these atomic machines will be the size of blood cells and perhaps they would be able to perform useful functions like regulating and sensing our health, and perhaps zapping cancer cells and viruses in the process. However, this is still science fiction, because at the present time, we can't even build simple atomic machines yet.

Is there any possibility, similar to the premise of The Matrix, that we are living in a simulation?

Well, philosophically speaking, it's always possible that the universe is a dream, and it's always possible that our conversation with our friends is a by-product of the pickle that we had last night that upset our stomach. However, science is based upon reproducible evidence. When we go to sleep and we wake up the next day, we usually wind up in the same universe. It is reproducible. No matter how we try to avoid certain unpleasant situations, they come back to us. That is reproducible. So reality, as we commonly believe it to exist, is a reproducible experiment, it's a reproducible sensation. Therefore in principle, you could never rule out the possibility that the world is a dream, but the fact of the matter is, the universe as it exists is a reproducible universe.

Now, in the Matrix, a computer simulation was run so that virtual reality became reproducible. Every time you woke up, you woke up in that same virtual reality. That technology, of course, does not violate the laws of physics. There's nothing in relativity or the quantum theory that says that the Matrix is not possible. However, the amount of computer power necessary to drive the universe and the technology necessary for a neural implant is centuries to millennia beyond anything that we can conceive of, and therefore this is something for an advanced Type I or II civilization.

Why is a Type I required to run this kind of simulation? Is number crunching the problem?

Yes, it's simply a matter of number crunching. At the present time, we scientists simply do not know how to interface with the brain. You see, one of the problems is, the brain, strictly speaking, is not a digital computer at all. The brain is not a Turing machine. A Turing machine is a black box with an input tape and an output tape and a central processing unit. That is the essential element of a Turing machine: information processing is localized in one point. However, our brain is actually a learning machine; it's a neural network.

Many people find this hard to believe, but there's no software, there is no operating system, there is no Windows programming for the brain. The brain is a vast collection of perhaps a hundred billion neurons, each neuron with 10,000 connections, which slowly and painfully interacts with the environment. Some neural pathways are genetically programmed to give us instinct. However, for the most part, our cerebral cortex has to be reprogrammed every time we bump into reality.
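
Taking the interview's own figures at face value, the scale of that network is roughly:

```latex
\underbrace{10^{11}}_{\text{neurons}} \times
\underbrace{10^{4}}_{\text{connections per neuron}}
\approx 10^{15}\ \text{synaptic connections}
```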

As a consequence, we cannot simply put a chip in our brain that augments our memory and enhances our intelligence. Memory and thinking, we now realize, are distributed throughout the entire brain. For example, it's possible to have people with only half a brain. There was a documented case recently where a young girl had half her brain removed and she's still fully functional.

So, the brain can operate with half of its mass removed. However, you remove one transistor in your Pentium computer and the whole computer dies. So, there's a fundamental difference between digital computers, which are easily programmed, are modular, and can accept different kinds of subroutines, and neural networks, where learning is distributed throughout the entire device, making it extremely difficult to reprogram. That is the reason why, even if we could create an advanced PlayStation that would run simulations on a PC screen, that software cannot simply be injected into the human brain, because the brain has no operating system.

Ray Kurzweil's next book, The Singularity Is Near, predicts that possibly within the coming decades, there will be super-intelligence emerging on the planet that will surpass that of humans. What do you think of that idea?

Yes, that sounds interesting. But Moore's Law will have collapsed by then, so we'll have a little breather. In 20 years' time, the quantum theory takes over, so Moore's Law collapses and we'll probably stagnate for a few decades after that. Moore's Law, which states that computer power doubles every 18 months, will not last forever. The quantum theory giveth, the quantum theory taketh away. The quantum theory makes possible transistors, which can be etched by ultraviolet rays onto smaller and smaller chips of silicon. This process will end in about 15 to 20 years. The senior engineers at Intel now admit for the first time that, yes, they are facing the end.

The thinnest layer on a Pentium chip consists of about 20 atoms. When we start to hit five atoms in the thinnest layer of a Pentium chip, the quantum theory takes over, electrons can now tunnel outside the layer, and the Pentium chip short-circuits. Therefore, within a 15 to 20 year time frame, Moores Law could collapse, and Silicon Valley could become a Rust Belt.
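
The arithmetic behind that estimate can be sketched as follows. Only the 20-atom and 5-atom endpoints come from the interview; the halving period for layer thickness is an assumption chosen to illustrate the 15-to-20-year figure.

```python
import math

def years_until_limit(atoms_now: float, atoms_limit: float,
                      years_per_halving: float) -> float:
    """Years until the thinnest chip layer shrinks from atoms_now to atoms_limit,
    assuming the layer thickness halves every `years_per_halving` years.
    The halving period is an illustrative assumption, not a figure from the
    interview; only the 20-atom and 5-atom endpoints are."""
    halvings = math.log2(atoms_now / atoms_limit)
    return halvings * years_per_halving

# 20-atom layers today, tunneling becomes fatal near 5 atoms (two halvings).
# A halving time of roughly 8-10 years reproduces the 15-to-20-year estimate.
for yph in (8, 10):
    print(f"halving every {yph} yr -> limit reached in "
          f"{years_until_limit(20, 5, yph):.0f} years")
```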

This means that we physicists are desperately trying to create the architecture for the post-silicon era. This means using quantum computers, quantum dot computers, optical computers, DNA computers, atomic computers, molecular computers, in order to bridge the gap when Moores Law collapses in 15 to 20 years. The wealth of nations depends upon the technology that will replace the power of silicon.

This also means that you cannot project artificial intelligence exponentially into the future. Some people think that Moore's Law will extend forever, in which case humans will be reduced to zoo animals and our robot creations will throw peanuts at us and make us dance behind bars. Now, that may eventually happen. It is certainly consistent with the laws of physics.

However, the laws of the quantum theory say that we're going to face a massive problem 15 to 20 years from now. Now, some remedial methods have been proposed; for example, building cubical chips, chips that are stacked on chips to create a 3-dimensional array. However, the problem there is heat production. Tremendous quantities of heat are produced by cubical chips, such that you can fry an egg on top of a cubical chip. Therefore, I firmly believe that we may be able to squeeze a few more years out of Moore's Law, perhaps designing clever cubical chips that are super-cooled, perhaps using x-rays to etch our chips instead of ultraviolet rays. However, that only delays the inevitable. Sooner or later, the quantum theory kills you. Sooner or later, when we hit five atoms, we don't know where the electron is anymore, and we have to go to the next generation, which relies on the quantum theory and atoms and molecules.

Therefore, I say that all bets are off in terms of projecting machine intelligence beyond a 20-year time frame. There's nothing in the laws of physics that says that computers cannot exceed human intelligence. All I am saying is that we physicists are desperately trying to patch up Moore's Law, and at the present time we have to admit that we have no successor to silicon, which means that Moore's Law will collapse in 15 to 20 years.

So are you saying that quantum computing and nanocomputing are not likely to be available by then?

No, no, I'm just saying it's very difficult. At the present time we physicists have been able to compute on seven atoms. That is the world record for a quantum computer. And that quantum computer was able to calculate 3 x 5 = 15. Now, being able to calculate 3 x 5 = 15 does not equal the convenience of a laptop computer that can crunch potentially millions of calculations per second. The problem with quantum computers is that any contamination, any atomic disturbance, disturbs the alignment of the atoms and the atoms then collapse into randomness. This is extremely difficult, because any cosmic ray, any air molecule, any disturbance can conceivably destroy the coherence of our atomic computer and make it useless.

Unless you have redundant parallel computing?

Even if you have parallel computing, you still have to have each parallel computer component free of any disturbance. So, no matter how you cut it, the practical problems of building quantum computers, although within the laws of physics, are extremely difficult, because they require that we remove all contact with the environment at the atomic level. In practice, we've only been able to do this with a handful of atoms, meaning that quantum computers are still a gleam in the eye of most physicists.

Now, if a quantum computer can be successfully built, it would, of course, scare the CIA and all the governments of the world, because it would be able to crack any code created by a Turing machine. A quantum computer would be able to perform calculations that are inconceivable by a Turing machine. Calculations that require an infinite amount of time on a Turing machine can be calculated in a few seconds by a quantum computer. For example, if you shine laser beams on a collection of coherent atoms, the laser beam scatters, and in some sense performs a quantum calculation, which exceeds the memory capability of any Turing machine.

However, as I mentioned, the problem is that these atoms have to be in perfect coherence, and the problems of doing this are staggering in the sense that even a random collision with a subatomic particle could in fact destroy the coherence and make the quantum computer impractical.

So, I'm not saying that it's impossible to build a quantum computer; I'm just saying that it's awfully difficult.

When do you think we might expect SETI [Search for Extraterrestrial Intelligence] to be successful?

I personally think that SETI is looking in the wrong direction. If, for example, we're walking down a country road and we see an anthill, do we go down to the ant and say, “I bring you trinkets, I bring you beads, I bring you knowledge, I bring you medicine, I bring you nuclear technology, take me to your leader”? Or, do we simply step on them? Any civilization capable of reaching the planet Earth would be perhaps a Type III civilization. And the difference between you and the ant is comparable to the distance between you and a Type III civilization. Therefore, for the most part, a Type III civilization would operate with a completely different agenda and message than our civilization.

Let's say that a ten-lane superhighway is being built next to the anthill. The question is: would the ants even know what a ten-lane superhighway is, or what it's used for, or how to communicate with the workers who are just feet away? And the answer is no. One question that we sometimes ask is if there is a Type III civilization in our backyard, in the Milky Way galaxy, would we even be aware of its presence? And if you think about it, you realize that there's a good chance that we, like ants in an anthill, would not understand or be able to make sense of a ten-lane superhighway next door.

So there could very well be a Type III civilization in our galaxy; it just means that we're not smart enough to find one. Now, a Type III civilization is not going to make contact by sending Captain Kirk on the Enterprise to meet our leader. A Type III civilization would send self-replicating Von Neumann probes to colonize the galaxy with robots. For example, consider a virus. A virus only consists of thousands of atoms. It's a molecule in some sense. But in about one week, it can colonize an entire human being made of trillions of cells. How is that possible?

Well, a Von Neumann probe would be a self-replicating robot that lands on a moon; a moon, because moons are stable, with no erosion, and they remain so for billions of years. The probe would then make carbon copies of itself by the millions. It would create a factory to build copies of itself. And these probes would then rocket to other nearby star systems, land on moons, and create a million more copies by building a factory on that moon. Eventually, there would be a sphere surrounding the mother planet, expanding at near-light velocity, containing trillions of these Von Neumann probes, and that is perhaps the most efficient way to colonize the galaxy. This means that perhaps, on our moon there is a Von Neumann probe, left over from a visitation that took place millions of years ago, and the probe is simply waiting for us to make the transition from Type 0 to Type I.
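
Here is a minimal sketch of why this exponential strategy is so efficient. The star count for the Milky Way is a rough outside figure, and the copies-per-generation values are assumptions; the interview only says copies are made "by the millions."

```python
import math

def generations_to_colonize(stars: float, copies_per_probe: float) -> int:
    """Generations of self-replication needed for one Von Neumann probe,
    each making `copies_per_probe` copies per generation, to outnumber
    the stars in the galaxy. Both inputs are illustrative assumptions."""
    return math.ceil(math.log(stars) / math.log(copies_per_probe))

# ~1e11 stars in the Milky Way (rough figure, not from the interview).
print(generations_to_colonize(1e11, 1e6))   # -> 2 generations of replication
print(generations_to_colonize(1e11, 1000))  # -> 4, even with far fewer copies
```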

The Sentinel.

Yes. This, of course, is the basis of the movie 2001, because while making the movie, Kubrick interviewed many prominent scientists and asked them the question, “What is the most likely way that an advanced civilization would probe the universe?” And that is, of course, through self-replicating Von Neumann probes, which create moon bases. That is the basis of the movie 2001, where the probe simply waits for us to become interesting. If we're Type 0, we're not very interesting. We have all the savagery and all the suicidal tendencies of fundamentalism, nationalism, sectarianism, that are sufficient to rip apart our world.

By the time we've become Type I, we've become interesting, we've become planetary, we begin to resolve our differences. We have centuries in which to exist on a single planet to create a paradise on Earth, a paradise of knowledge and prosperity.

© 2003 KurzweilAI.net


Evolution – Conservapedia


The theory of evolution is a naturalistic theory of the history of life on earth (this refers to the theory of evolution which employs methodological naturalism and is taught in schools and universities). Merriam-Webster’s dictionary gives the following definition of evolution: “a theory that the various types of animals and plants have their origin in other preexisting types and that the distinguishable differences are due to modifications in successive generations…”[2] Currently, there are several theories of evolution.

Since World War II a majority of the most prominent and vocal defenders of the evolutionary position which employs methodological naturalism have been atheists and agnostics.[3] In 2007, “Discovery Institute’s Center for Science and Culture…announced that over 700 scientists from around the world have now signed a statement expressing their skepticism about the contemporary theory of Darwinian evolution.”[4]

In 2011, the results of a study were published indicating that most United States high school biology teachers are reluctant to endorse the theory of evolution in class.[5] In addition, in 2011, eight anti-evolution bills were introduced into state legislatures within the United States encouraging students to employ critical thinking skills when examining the evolutionary paradigm. In 2009, there were seven states which required that critical analysis skills be employed when examining evolutionary material within schools.[6]

A 2005 poll by the Louis Finkelstein Institute for Social and Religious Research found that 60% of American medical doctors reject Darwinism, stating that they do not believe man evolved through natural processes alone.[7] Thirty-eight percent of the American medical doctors polled agreed with the statement that “Humans evolved naturally with no supernatural involvement.” [8] The study also reported that 1/3 of all medical doctors favor the theory of intelligent design over evolution.[9] In 2010, the Gallup organization reported that 40% of Americans believe in young earth creationism.[10] In January 2006, the BBC reported concerning Britain:

Furthermore, more than 40% of those questioned believe that creationism or intelligent design (ID) should be taught in school science lessons.[11]


Johns Hopkins University Press reported in 2014: “Over the past forty years, creationism has spread swiftly among European Catholics, Protestants, Jews, Hindus, and Muslims, even as anti-creationists sought to smother its flames.”[12] In addition, China has the world’s largest atheist population and the rapid growth of biblical creationism/Evangelical Christianity in China may have a significant impact on the number of individuals in the world who believe in evolution and also on global atheism (see: China and biblical creationism and Asian atheism).

The theory of evolution posits a process of transformation from simple life forms to more complex life forms, which has never been observed or duplicated in a laboratory.[13][14] Although not a creation scientist, Swedish geneticist Dr. Nils Heribert-Nilsson, Professor of Botany at the University of Lund in Sweden and a member of the Royal Swedish Academy of Sciences, stated: “My attempts to demonstrate Evolution by an experiment carried on for more than 40 years have completely failed. At least, I should hardly be accused of having started from a preconceived antievolutionary standpoint.”[15][16]

The fossil record is often used as evidence in the creation versus evolution controversy. The fossil record does not support the theory of evolution and is one of the flaws in the theory of evolution.[17] In 1981, there were at least a hundred million fossils that were catalogued and identified in the world’s museums.[18] Despite the large number of fossils available to scientists in 1981, evolutionist Mark Ridley, who currently serves as a professor of zoology at Oxford University, was forced to confess: “In any case, no real evolutionist, whether gradualist or punctuationist, uses the fossil record as evidence in favour of the theory of evolution as opposed to special creation.”[19]

In addition to the evolutionary position lacking evidential support and being counterevidential, the great intellectuals in history such as Archimedes, Aristotle, St. Augustine, Francis Bacon, Isaac Newton, and Lord Kelvin did not propose an evolutionary process for a species to transform into a more complex version. Even after the theory of evolution was proposed and promoted heavily in England and Germany, most leading scientists were against the theory of evolution.[20]

The theory of evolution was published by naturalist Charles Darwin in his book On The Origin of Species by Means of Natural Selection or The Preservation of Favored Races in the Struggle for Life, in 1859. In a letter to Asa Gray, Darwin confided: “…I am quite conscious that my speculations run quite beyond the bounds of true science.”[21] Prior to publishing the book, Darwin wrote in his private notebooks that he was a materialist, which is a type of atheist.[22] Darwin was a weak atheist/agnostic (see: religious views of Charles Darwin).[23] Charles Darwin's casual mention of a creator in earlier editions of The Origin of Species appears to have been merely a ploy to downplay the implications of his materialistic theory.[24] The amount of credit Darwin actually deserves for the theory is disputed.[25] Darwin's theory attempted to explain the origin of the various kinds of plants and animals via the process of natural selection or “survival of the fittest”.

The basic principle behind natural selection is that in the struggle for life some organisms in a given population will be better suited to their particular environment and thus have a reproductive advantage which increases the representation of their particular traits over time. Many years before Charles Darwin, there were several other individuals who published articles on the topic of natural selection.[26]
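
As a minimal sketch of the reproductive-advantage principle described above, the simulation below tracks how the frequency of a favored trait rises over generations; the population size, starting frequency, and fitness advantage are arbitrary illustrative values, not figures from the article.

```python
import random

def select_one_generation(freq: float, pop_size: int, advantage: float) -> float:
    """One generation of selection: the favored trait is weighted by
    (1 + advantage) relative to the alternative, and the next generation
    is drawn at random with that weighting. All parameters are illustrative."""
    weight = freq * (1 + advantage)
    p = weight / (weight + (1 - freq))
    carriers = sum(random.random() < p for _ in range(pop_size))
    return carriers / pop_size

freq = 0.05               # favored trait starts rare
for gen in range(200):    # 200 generations, 1000 individuals, 5% advantage
    freq = select_one_generation(freq, pop_size=1000, advantage=0.05)
print(f"frequency after 200 generations: {freq:.2f}")  # typically close to 1.0
```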

Darwin did not propose in his book On the Origin of Species that man had descended from non-human ancestors; he incorporated that claim later, in his book The Descent of Man.

As far as the history of the theory of evolution, although Darwin is well known when it comes to the early advocacy of the evolutionary position in the Western world, evolutionary ideas were taught by the ancient Greeks as early as the 7th century B.C.[27] The concept of naturalistic evolution differs from the concept of theistic evolution in that it states God does not guide the posited process of macroevolution.[28]

In 2012, the science news website Livescience.com published a news article entitled “Belief in Evolution Boils Down to a Gut Feeling,” which indicated that research suggests gut feelings trumped facts when it comes to evolutionists believing in evolution.[29] In January 2012, the Journal of Research in Science Teaching published a study indicating that evolutionary belief is significantly based on gut feelings.[30][31] The January 20, 2012 Live Science article wrote of the research: “They found that intuition had a significant impact on what the students accepted, no matter how much they knew and regardless of their religious beliefs.”[32]

In response to evolutionary indoctrination and the uncritical acceptance of evolution by many evolutionists, the scientists at the organization Creation Ministries International created a Question evolution! campaign which poses 15 questions for evolutionists. In addition, leading creationist organizations have created lists of poor arguments that evolutionists should not use.[33] See also: Causes of evolutionary belief

See also: Theories of evolution

Evolutionist Theodosius Dobzhansky wrote concerning the theory of evolution: “The process of mutation is the only known source of the new materials of genetic variability, and hence of evolution.”[34] Concerning various theories of evolution, most evolutionists believe that the processes of mutation, genetic drift and natural selection created every species of life that we see on earth today after life first came about on earth, although there is little consensus on how this process is supposed to have occurred.[35]

Pierre-Paul Grassé, who served as Chair of evolutionary biology at Sorbonne University for thirty years and was ex-president of the French Academy of Sciences, stated the following: “Some contemporary biologists, as soon as they observe a mutation, talk about evolution. They are implicitly supporting the following syllogism: mutations are the only evolutionary variations, all living beings undergo mutations, therefore all living beings evolve….No matter how numerous they may be, mutations do not produce any kind of evolution.” Grassé pointed out that bacteria, which are the subject of study of many geneticists and molecular biologists, are organisms which produce the most mutants.[36] Grassé then pointed out that bacteria are considered to have “stabilized”.[37] Grassé regarded the “unceasing mutations” to be “merely hereditary fluctuations around a median position; a swing to the right, a swing to the left, but no final evolutionary effect.”[38]

In addition, Harvard biologist Ernst Mayr wrote: “It must be admitted, however, that it is a considerable strain on one's credulity to assume that finely balanced systems such as certain sense organs (the eye of vertebrates, or the bird's feather) could be improved by random mutations.”[39]

Creation scientists believe that mutations, natural selection, and genetic drift would not cause macroevolution.[40] Furthermore, creation scientists assert that the life sciences as a whole support the creation model and do not support the theory of evolution.[41]

Homology involves the theory that macroevolutionary relationships can be demonstrated by the similarity in the anatomy and physiology of different organisms.[42] An example of a homology argument is that DNA similarities between humans and other living organisms are evidence for the theory of evolution.[43] Creation scientists provide sound reasons why the homology argument is not a valid argument. Both evolutionary scientists and young earth creation scientists believe that speciation occurs; however, young earth creation scientists state that speciation generally occurs at a much faster rate than evolutionists believe is the case.[44]

Critics of the theory of evolution state that many of today's proponents of the evolutionary position have diluted the meaning of the term “evolution” to the point where it is defined as, or its definition includes, change over time in the gene pool of a population through such processes as mutation, natural selection, and genetic drift.[45] Dr. Jonathan Sarfati of Creation Ministries International declares concerning the diluted definition of the word “evolution”:

See also: Atheism and equivocation

Dr. Jonathan Sarfati wrote:

All (sexually reproducing) organisms contain their genetic information in paired form. Each offspring inherits half its genetic information from its mother, and half from its father. So there are two genes at a given position (locus, plural loci) coding for a particular characteristic. An organism can be heterozygous at a given locus, meaning it carries different forms (alleles) of this gene… So there is no problem for creationists explaining that the original created kinds could each give rise to many different varieties. In fact, the original created kinds would have had much more heterozygosity than their modern, more specialized descendants. No wonder Ayala pointed out that most of the variation in populations arises from reshuffling of previously existing genes, not from mutations. Many varieties can arise simply by two previously hidden recessive alleles coming together. However, Ayala believes the genetic information came ultimately from mutations, not creation. His belief is contrary to information theory, as shown in chapter 9 on Design.[48]

Dr. Don Batten of Creation Ministries International has pointed out that prominent evolutionists, such as PZ Myers and Nick Matzke, have indicated that a naturalistic postulation of the origin of life (often called abiogenesis) is part of the evolutionary model.[49] This poses a very serious problem for the evolutionary position, as the evidence clearly points to life being a product of design and not of naturalistic processes.[50]

The genetic entropy theory by Cornell University Professor Dr. John Sanford on eroding genomes of all living organisms due to mutations inherited from one generation to the next is declared to be one of the major challenges to evolutionary theory. The central part of Sanford's argument is that mutations, represented by spelling mistakes in DNA, are accumulating so quickly in some creatures (and particularly in people) that natural selection cannot stop the functional degradation of the genome, let alone drive an evolutionary process that could lead, for example, from apes to people.[51]

Sanford’s book Genetic Entropy and the Mystery of the Genome explains why human DNA is inexorably deteriorating at an alarming rate, thus cannot be millions of years old.[52]

The evolutionist Michael Lynch wrote in the Proceedings of the National Academy of Sciences of the United States of America in a December 3, 2009 article entitled: Rate, molecular spectrum, and consequences of human mutation (taken from the abstract):

Creation scientists and intelligent design advocates point out that the genetic code (DNA code), genetic programs, and biological information argue for an intelligent cause in regards the origins question and assert it is one of the many problems of the theory of evolution.[55][56]

Dr. Walt Brown states the genetic material that controls the biological processes of life is coded information and that human experience tells us that codes are created only by the result of intelligence and not merely by processes of nature.[55] Dr. Brown also asserts that the “information stored in the genetic material of all life is a complex program. Therefore, it appears that an unfathomable intelligence created these genetic programs.”[55]

To support his view regarding the divine origin of genetic programs Dr. Walt Brown cites the work of David Abel and Professor Jack Trevors who wrote the following:

In the peer reviewed biology journal Proceedings of the Biological Society of Washington Dr. Stephen Meyer argues that no current materialistic theory of evolution can account for the origin of the information necessary to build novel animal forms and proposed an intelligent cause as the best explanation for the origin of biological information and the higher taxa.[58] The editor of the Proceedings of the Biological Society of Washington, Dr. Richard Sternberg, came under intense scrutiny and persecution for the aforementioned article published by Dr. Meyer.

See also: Theory of evolution and little consensus and Theories of evolution

There is little scientific consensus on how macroevolution is said to have happened and the claimed mechanisms of evolutionary change, as can be seen in the following quotes:

Pierre-Paul Grassé, who served as Chair of Evolution at Sorbonne University for thirty years and was ex-president of the French Academy of Sciences, stated the following:

Today, our duty is to destroy the myth of evolution, considered as a simple, understood, and explained phenomenon which keeps rapidly unfolding before us. Biologists must be encouraged to think about the weaknesses of the interpretations and extrapolations that theoreticians put forward or lay down as established truths. The deceit is sometimes unconscious, but not always, since some people, owing to their sectarianism, purposely overlook reality and refuse to acknowledge the inadequacies and the falsity of their beliefs. – Pierre-Paul Grassé, Evolution of Living Organisms (1977), pages 6 and 8[62]

See: Modern evolutionary synthesis and Theories of evolution

A notable case of a scientist using fraudulent material to promote the theory of evolution was the work of German scientist and atheist Ernst Haeckel. Noted evolutionist Stephen Jay Gould, who held an agnostic worldview[63] and promoted the notion of non-overlapping magisteria, wrote the following regarding Ernst Haeckel's work in a March 2000 issue of Natural History:

An irony of history is that the March 9, 1907 edition of the NY Times refers to Ernst Haeckel as the “celebrated Darwinian and founder of the Association for the Propagation of Ethical Atheism.”[65]

Stephen Gould continues by quoting Michael Richardson of the St. George's Hospital Medical School in London, who stated: “I know of at least fifty recent biology texts which use the drawings uncritically”.[64]

See also: Evolution and the fossil record

As alluded to earlier, today there are over one hundred million identified and cataloged fossils in the world’s museums.[66] If the evolutionary position was valid, then there should be “transitional forms” in the fossil record reflecting the intermediate life forms. Another term for these “transitional forms” is “missing links”.

Charles Darwin admitted that his theory required the existence of “transitional forms.” Darwin wrote: “So that the number of intermediate and transitional links, between all living and extinct species, must have been inconceivably great. But assuredly, if this theory be true, such have lived upon the earth.”[68] However, Darwin wrote: “Why then is not every geological formation and every stratum full of such intermediate links? Geology assuredly does not reveal any such finely-graduated organic chain; and this, perhaps, is the most obvious and serious objection which can be urged against my theory.”[69] Darwin thought the lack of transitional links in his time was because “only a small portion of the surface of the earth has been geologically explored and no part with sufficient care…”.[70] As Charles Darwin grew older he became increasingly concerned about the lack of evidence for the theory of evolution in terms of the existence of transitional forms. Darwin wrote, “When we descend to details, we cannot prove that a single species has changed; nor can we prove that the supposed changes are beneficial, which is the groundwork of the theory.”[71]

Scientist Dr. Michael Denton wrote regarding the fossil record:

Creationists assert that evolutionists have had over 140 years to find a transitional fossil and nothing approaching a conclusive transitional form has ever been found and that only a handful of highly doubtful examples of transitional fossils exist.[73] Distinguished anthropologist Sir Edmund R. Leach declared, “Missing links in the sequence of fossil evidence were a worry to Darwin. He felt sure they would eventually turn up, but they are still missing and seem likely to remain so.”[74]

David B. Kitts of the School of Geology and Geophysics at the University of Oklahoma wrote that “Evolution requires intermediate forms between species and paleontology does not provide them”.[75]

David Raup, who was the curator of geology at the museum holding the world’s largest fossil collection, the Field Museum of Natural History in Chicago, observed:

One of the most famous proponents of the theory of evolution was the late Harvard paleontologist Stephen Jay Gould. But Gould admitted:

For more information please see:

Creationists can cite quotations which assert that no solid fossil evidence for the theory of evolution position exists:

For more fossil record quotes please see: Fossil record quotes and Additional fossil record quotes

For more information please see: Paleoanthropology and Human evolution

Paleoanthropology is an interdisciplinary branch of anthropology that concerns itself with the origins of early humans, examining and evaluating items such as fossils and artifacts.[82] Dr. David Pilbeam is a paleoanthropologist who received his Ph.D. at Yale University and is presently Professor of Social Sciences at Harvard University and Curator of Paleontology at the Peabody Museum of Archaeology and Ethnology. In addition, Dr. Pilbeam served as an advisor to the Kenyan government regarding the creation of an international institute for the study of human origins.[83]

Dr. Pilbeam wrote a review of Richard Leakey’s book Origins in the journal American Scientist:

Dr. Pilbeam wrote the following regarding the theory of evolution and paleoanthropology:

Evolutionist and Harvard professor Richard Lewontin wrote in 1995 that “Despite the excited and optimistic claims that have been made by some paleontologists, no fossil hominid species can be established as our direct ancestor….”[85] In the September 2005 issue of National Geographic, Joel Achenbach asserted that human evolution is a “fact” but he also candidly admitted that the field of paleoanthropology “has again become a rather glorious mess.”[86][87] In the same National Geographic article Harvard paleoanthropologist Dan Lieberman states, “We’re not doing a very good job of being honest about what we don’t know…”.[87]

Concerning the pictures of the supposed ancestors of man featured in science journals and the news media, Boyce Rensberger wrote the following in the journal Science regarding their highly speculative nature:

Creation scientists concur with Dr. Pilbeam regarding the speculative nature of the field of paleoanthropology and assert there is no compelling evidence in the field of paleoanthropology for the various theories of human evolution.[90]

In 2011, Dr. Grady S. McMurtry declared:

It is acknowledged that the Laws of Genetics are conservative, they are not creative. Genetics only copies or rearranges the previously existing information and passes it on to the next generation. When copying information, you have only two choices; you can only copy it perfectly or imperfectly, you cannot copy something more perfectly. Mutations do not build one upon another beneficially. Mutations do not create new organs; they only modify existing organs and structures. Mutations overwhelmingly lose information; they do not gain it; therefore, mutations cause changes which are contrary of evolutionary philosophy.

As a follow on, the addition of excess undirected energy will destroy the previously existing system. Indeed, you will never get an increase in the specifications on the DNA to create new organs without the input from a greater intelligence.

Mutations affect and are affected by many genes and other intergenic information acting in combination with one another. The addition of the accidental duplication of previously existing information is detrimental to any organism.

Mutations do produce microevolution, however, this term is far better understood as merely lateral adaptation, which is only variation within a kind, a mathematical shifting of gene frequency within a gene pool. The shifting of gene frequencies and a loss of information cannot produce macroevolution.

As Dr. Roger Lewin commented after the 1980 University of Chicago conference entitled Macroevolution:

The central question of the Chicago conference was whether the mechanisms underlying microevolution can be extrapolated to explain the phenomena of macroevolution. At the risk of doing violence to the positions of some of the people at the meeting, the answer can be given as a clear, No. [Emphasis added]

Dr. Roger Lewin, “Evolution Theory under Fire,” Science, Vol. 210, 21 November 1980, pp. 883-887.[91]

In 1988, the prominent Harvard University biologist Ernst Mayr wrote in his essay Does Microevolution Explain Macroevolution?:

…In this respect, indeed, macroevolution as a field of study is completely decoupled from microevolution.[92]

See also: Creation Ministries International on the second law of thermodynamics and evolution

Creation Ministries International has a great wealth of information on why the second law of thermodynamics is incompatible with the evolutionary paradigm.

Some of their key resources on this matter are:

See also: Theories of evolution

Because the fossil record is characterized by the abrupt appearance of species and by stasis, the theory of punctuated equilibrium was developed; its chief proponents were Stephen Gould, Niles Eldredge, and Steven Stanley. According to the American Museum of Natural History, the theory of punctuated equilibrium “asserts that evolution occurs in dramatic spurts interspersed with long periods of stasis”.[93] Because Stephen Gould was the leading proponent of the theory of punctuated equilibrium, much of the criticism of the theory has been directed towards Gould.[94][95] The development of a new evolutionary school of thought in response to a fossil record that did not support the evolutionary position was not unprecedented. In 1930, Austin H. Clark, an American evolutionary zoologist who wrote 630 articles and books in six languages, proposed an evolutionary hypothesis called zoogenesis, which postulated that each of the major types of life forms evolved separately and independently from all the others.[96] Prior to publishing his work entitled The New Evolution: Zoogenesis, Clark wrote in a journal article published in the Quarterly Review of Biology that “so far as concerns the major groups of animals, the creationists seem to have the better of the argument. There is not the slightest evidence that any one of the major groups arose from any other.”[97]

In 1995, there was an essay in the New York Review of Books by the late John Maynard Smith, a noted evolutionary biologist who was considered the dean of British neo-Darwinists, and Smith wrote the following regarding Gould’s work in respect to the theory of evolution:

Noted journalist and author Robert Wright wrote in 1996 that, among top-flight evolutionary biologists, Gould is considered a pest: not just a lightweight, but an actively muddled man who has warped the public’s understanding of Darwinism.[100][101]

Creation scientist Dr. Jonathan Sarfati wrote the following regarding the implausibility of both the theory of punctuated equilibrium and the idea of gradual evolution:

Individuals who are against the evolutionary position assert that evolutionary scientists employ extremely implausible “just so stories” to support their position and have done this since at least the time of Charles Darwin.[104][105]

A well-known example of a “just so story” is when Darwin, in his Origin of Species, wrote a chapter entitled “Difficulties on Theory” in which he stated:

Even the prominent evolutionist and geneticist Professor Richard Lewontin admitted the following:

Dr. Sarfati wrote the following regarding the theory of evolution:

Opponents of the theory of evolution commonly point to the following features of nature as being implausibly created through evolutionary processes:

Lastly, biochemist Michael Behe wrote the following:

Phillip E. Johnson cites Francis Crick in order to illustrate the fact that the biological world has the strong appearance of being designed:

Stephen C. Meyer offers the following statement regarding the design of the biological world:

The Stanford Encyclopedia of Philosophy states regarding a candid admission of Charles Darwin:

In the course of that conversation I said to Mr. Darwin, with reference to some of his own remarkable works on the Fertilisation of Orchids, and upon The Earthworms, and various other observations he made of the wonderful contrivances for certain purposes in nature, I said it was impossible to look at these without seeing that they were the effect and the expression of Mind. I shall never forget Mr. Darwin’s answer. He looked at me very hard and said, “Well, that often comes over me with overwhelming force; but at other times,” and he shook his head vaguely, adding, “it seems to go away.” (Argyll 1885, 244)[127]

Research and historical data indicate that a significant portion of atheists/agnostics often see their lives and the world as being the product of purposeful design (see: Atheism and purpose).[128]

See: Argument from beauty

Advocates of the theory of evolution have often claimed that those who oppose the theory don’t publish their opposition in the appropriate scientific literature (creationist scientists have peer-reviewed journals which favor the creationist position).[129][130][131] Recently, there have been articles favorable to the intelligent design position in scientific journals which have traditionally favored the theory of evolution.[132]

Karl Popper, a leading philosopher of science and originator of falsifiability as a criterion of demarcation of science from nonscience,[133] stated that Darwinism is “not a testable scientific theory, but a metaphysical research programme.”[134] Leading Darwinist and philosopher of science Michael Ruse declared the following concerning Popper’s statement and the actions he took after making it: “Since making this claim, Popper himself has modified his position somewhat; but, disclaimers aside, I suspect that even now he does not really believe that Darwinism in its modern form is genuinely falsifiable.”[135]

The falsifiability of the evolutionary position is a very important issue, and although their statement offers only a poor cure to the problem that Karl Popper described, committed evolutionists Louis Charles Birch and Paul R. Ehrlich stated in the journal Nature:

The Swedish cytogeneticist Antonio Lima-de-Faria, who has been knighted by the king of Sweden for his scientific achievements, noted that “there has never been a theory of evolution”.[137][138]

See also: Suppression of alternatives to evolution and Atheism and the suppression of science

Many of the leaders of the atheist movement, such as the evolutionist and the new atheist Richard Dawkins, argue for atheism and evolution with a religious fervor (See also: Atheism and evolution).

Daniel Smartt has identified seven dimensions which make up religion: narrative, experiential, social, ethical, doctrinal, ritual and material. It is not necessary in Smartt’s model for every one of these to be present in order for something to be a religion.[139] However, it can be argued that all seven are present in the case of atheism.[140][141] Please see: Atheism: A religion, Atheism, and Atheism is a religion.

See also: Atheism is a religion and Atheism and evolution

Atheism is a religion, and naturalistic notions concerning origins are religious in nature; both have legal implications for the teaching of evolution in public schools.[143][144][145]

John Calvert, a lawyer and intelligent design proponent wrote:

See also:

Continue reading here:

Evolution – Conservapedia

Myths of Individualism | Libertarianism.org

 Libertarianism  Comments Off on Myths of Individualism | Libertarianism.org
Jun 26, 2016
 

Sep 6, 2011

Palmer takes on the misconceptions of individualism common to communitarian critics of liberty.

It has recently been asserted that libertarians, or classical liberals, actually think that individual agents are fully formed and their value preferences are in place prior to and outside of any society. They ignore robust social scientific evidence about the ill effects of isolation, and, yet more shocking, they actively oppose the notion of shared values or the idea of the common good. I am quoting from the 1995 presidential address of Professor Amitai Etzioni to the American Sociological Association (American Sociological Review, February 1996). As a frequent talk show guest and as editor of the journal The Responsive Community, Etzioni has come to some public prominence as a publicist for a political movement known as communitarianism.

Etzioni is hardly alone in making such charges. They come from both left and right. From the left, Washington Post columnist E. J. Dionne Jr. argued in his book Why Americans Hate Politics that the growing popularity of the libertarian cause suggested that many Americans had even given up on the possibility of a common good, and in a recent essay in the Washington Post Magazine, that the libertarian emphasis on the freewheeling individual seems to assume that individuals come into the world as fully formed adults who should be held responsible for their actions from the moment of birth. From the right, the late Russell Kirk, in a vitriolic article titled “Libertarians: The Chirping Sectaries,” claimed that the perennial libertarian, like Satan, can bear no authority, temporal or spiritual, and that the libertarian does not venerate ancient beliefs and customs, or the natural world, or his country, or the immortal spark in his fellow men.

More politely, Sen. Dan Coats (R-Ind.) and David Brooks of the Weekly Standard have excoriated libertarians for allegedly ignoring the value of community. Defending his proposal for more federal programs to rebuild community, Coats wrote that his bill is self-consciously conservative, not purely libertarian. It recognizes, not only individual rights, but the contribution of groups rebuilding the social and moral infrastructure of their neighborhoods. The implication is that individual rights are somehow incompatible with participation in groups or neighborhoods.

Such charges, which are coming with increasing frequency from those opposed to classical liberal ideals, are never substantiated by quotations from classical liberals; nor is any evidence offered that those who favor individual liberty and limited constitutional government actually think as charged by Etzioni and his echoes. Absurd charges often made and not rebutted can come to be accepted as truths, so it is imperative that Etzioni and other communitarian critics of individual liberty be called to account for their distortions.

Let us examine the straw man of atomistic individualism that Etzioni, Dionne, Kirk, and others have set up. The philosophical roots of the charge have been set forth by communitarian critics of classical liberal individualism, such as the philosopher Charles Taylor and the political scientist Michael Sandel. For example, Taylor claims that, because libertarians believe in individual rights and abstract principles of justice, they believe in the self-sufficiency of man alone, or, if you prefer, of the individual. That is an updated version of an old attack on classical liberal individualism, according to which classical liberals posited abstract individuals as the basis for their views about justice.

Those claims are nonsense. No one believes that there are actually abstract individuals, for all individuals are necessarily concrete. Nor are there any truly self-sufficient individuals, as any reader of The Wealth of Nations would realize. Rather, classical liberals and libertarians argue that the system of justice should abstract from the concrete characteristics of individuals. Thus, when an individual comes before a court, her height, color, wealth, social standing, and religion are normally irrelevant to questions of justice. That is what equality before the law means; it does not mean that no one actually has a particular height, skin color, or religious belief. Abstraction is a mental process we use when trying to discern what is essential or relevant to a problem; it does not require a belief in abstract entities.

It is precisely because neither individuals nor small groups can be fully self-sufficient that cooperation is necessary to human survival and flourishing. And because that cooperation takes place among countless individuals unknown to each other, the rules governing that interaction are abstract in nature. Abstract rules, which establish in advance what we may expect of one another, make cooperation possible on a wide scale.

No reasonable person could possibly believe that individuals are fully formed outside societyin isolation, if you will. That would mean that no one could have had any parents, cousins, friends, personal heroes, or even neighbors. Obviously, all of us have been influenced by those around us. What libertarians assert is simply that differences among normal adults do not imply different fundamental rights.

Libertarianism is not at base a metaphysical theory about the primacy of the individual over the abstract, much less an absurd theory about abstract individuals. Nor is it an anomic rejection of traditions, as Kirk and some conservatives have charged. Rather, it is a political theory that emerged in response to the growth of unlimited state power; libertarianism draws its strength from a powerful fusion of a normative theory about the moral and political sources and limits of obligations and a positive theory explaining the sources of order. Each person has the right to be free, and free persons can produce order spontaneously, without a commanding power over them.

What of Dionne’s patently absurd characterization of libertarianism: individuals come into the world as fully formed adults who should be held responsible for their actions from the moment of birth? Libertarians recognize the difference between adults and children, as well as differences between normal adults and adults who are insane or mentally hindered or retarded. Guardians are necessary for children and abnormal adults, because they cannot make responsible choices for themselves. But there is no obvious reason for holding that some normal adults are entitled to make choices for other normal adults, as paternalists of both left and right believe. Libertarians argue that no normal adult has the right to impose choices on other normal adults, except in abnormal circumstances, such as when one person finds another unconscious and administers medical assistance or calls an ambulance.

What distinguishes libertarianism from other views of political morality is principally its theory of enforceable obligations. Some obligations, such as the obligation to write a thank-you note to one’s host after a dinner party, are not normally enforceable by force. Others, such as the obligation not to punch a disagreeable critic in the nose or to pay for a pair of shoes before walking out of the store in them, are. Obligations may be universal or particular. Individuals, whoever and wherever they may be (i.e., in abstraction from particular circumstances), have an enforceable obligation to all other persons: not to harm them in their lives, liberties, health, or possessions. In John Locke’s terms, “Being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions.” All individuals have the right that others not harm them in their enjoyment of those goods. The rights and the obligations are correlative and, being both universal and negative in character, are capable under normal circumstances of being enjoyed by all simultaneously. It is the universality of the human right not to be killed, injured, or robbed that is at the base of the libertarian view, and one need not posit an abstract individual to assert the universality of that right. It is his veneration, not his contempt, for the immortal spark in his fellow men that leads the libertarian to defend individual rights.

Those obligations are universal, but what about particular obligations? As I write this, I am sitting in a coffee house and have just ordered another coffee. I have freely undertaken the particular obligation to pay for the coffee: I have transferred a property right to a certain amount of my money to the owner of the coffee shop, and she has transferred the property right to the cup of coffee to me. Libertarians typically argue that particular obligations, at least under normal circumstances, must be created by consent; they cannot be unilaterally imposed by others. Equality of rights means that some people cannot simply impose obligations on others, for the moral agency and rights of those others would then be violated. Communitarians, on the other hand, argue that we all are born with many particular obligations, such as to give to this body of persons (called a state or, more nebulously, a nation, community, or folk) so much money, so much obedience, or even one’s life. And they argue that those particular obligations can be coercively enforced. In fact, according to communitarians such as Taylor and Sandel, I am actually constituted as a person, not only by the facts of my upbringing and my experiences, but by a set of very particular unchosen obligations.

To repeat, communitarians maintain that we are constituted as persons by our particular obligations, and therefore those obligations cannot be a matter of choice. Yet that is a mere assertion and cannot substitute for an argument that one is obligated to others; it is no justification for coercion. One might well ask, If an individual is born with the obligation to obey, who is born with the right to command? If one wants a coherent theory of obligations, there must be someone, whether an individual or a group, with the right to the fulfillment of the obligation. If I am constituted as a person by my obligation to obey, who is constituted as a person by the right to obedience? Such a theory of obligation may have been coherent in an age of God-kings, but it seems rather out of place in the modern world. To sum up, no reasonable person believes in the existence of abstract individuals, and the true dispute between libertarians and communitarians is not about individualism as such but about the source of particular obligations, whether imposed or freely assumed.

A theory of obligation focusing on individuals does not mean that there is no such thing as society or that we cannot speak meaningfully of groups. The fact that there are trees does not mean that we cannot speak of forests, after all. Society is not merely a collection of individuals, nor is it some bigger or better thing separate from them. Just as a building is not a pile of bricks but the bricks and the relationships among them, society is not a person, with his own rights, but many individuals and the complex set of relationships among them.

A moment’s reflection makes it clear that claims that libertarians reject shared values and the common good are incoherent. If libertarians share the value of liberty (at a minimum), then they cannot actively oppose the notion of shared values, and if libertarians believe that we will all be better off if we enjoy freedom, then they have not given up on the possibility of a common good, for a central part of their efforts is to assert what the common good is! In response to Kirk’s claim that libertarians reject tradition, let me point out that libertarians defend a tradition of liberty that is the fruit of thousands of years of human history. In addition, pure traditionalism is incoherent, for traditions may clash, and then one has no guide to right action. Generally, the statement that libertarians reject tradition is both tasteless and absurd. Libertarians follow religious traditions, family traditions, ethnic traditions, and social traditions such as courtesy and even respect for others, which is evidently not a tradition Kirk thought it necessary to maintain.

The libertarian case for individual liberty, which has been so distorted by communitarian critics, is simple and reasonable. It is obvious that different individuals require different things to live good, healthy, and virtuous lives. Despite their common nature, people are materially and numerically individuated, and we have needs that differ. So, how far does our common good extend?

Karl Marx, an early and especially brilliant and biting communitarian critic of libertarianism, asserted that civil society is based on a decomposition of man such that man’s essence is no longer in community but in difference; under socialism, in contrast, man would realize his nature as a species being. Accordingly, socialists believe that collective provision of everything is appropriate; in a truly socialized state, we would all enjoy the same common good and conflict simply would not occur. Communitarians are typically much more cautious, but despite a lot of talk they rarely tell us much about what our common good might be. The communitarian philosopher Alasdair MacIntyre, for instance, in his influential book After Virtue, insists for 219 pages that there is a good life for man that must be pursued in common and then rather lamely concludes that the good life for man is the life spent in seeking for the good life for man.

A familiar claim is that providing retirement security through the state is an element of the common good, for it brings all of us together. But who is included in all of us? Actuarial data show that African-American males who have paid the same taxes into the Social Security system as have Caucasian males over their working lives stand to get back about half as much. Further, more black than white males will die before they receive a single penny, meaning all of their money has gone to benefit others and none of their investments are available to their families. In other words, they are being robbed for the benefit of nonblack retirees. Are African-American males part of the all of us who are enjoying a common good, or are they victims of the common good of others? (As readers of this magazine should know, all would be better off under a privatized system, which leads libertarians to assert the common good of freedom to choose among retirement systems.) All too often, claims about the common good serve as covers for quite selfish attempts to secure private goods; as the classical liberal Austrian novelist Robert Musil noted in his great work The Man without Qualities, Nowadays only criminals dare to harm others without philosophy.

Libertarians recognize the inevitable pluralism of the modern world and for that reason assert that individual liberty is at least part of the common good. They also understand the absolute necessity of cooperation for the attainment of one’s ends; a solitary individual could never actually be self-sufficient, which is precisely why we must have rules (governing property and contracts, for example) to make peaceful cooperation possible and we institute government to enforce those rules. The common good is a system of justice that allows all to live together in harmony and peace; a common good more extensive than that tends to be, not a common good for all of us, but a common good for some of us at the expense of others of us. (There is another sense, understood by every parent, to the term self-sufficiency. Parents normally desire that their children acquire the virtue of pulling their own weight and not subsisting as scroungers, layabouts, moochers, or parasites. That is a necessary condition of self-respect; Taylor and other critics of libertarianism often confuse the virtue of self-sufficiency with the impossible condition of never relying on or cooperating with others.)

The issue of the common good is related to the beliefs of communitarians regarding the personality or the separate existence of groups. Both are part and parcel of a fundamentally unscientific and irrational view of politics that tends to personalize institutions and groups, such as the state or nation or society. Instead of enriching political science and avoiding the alleged naiveté of libertarian individualism, as communitarians claim, however, the personification thesis obscures matters and prevents us from asking the interesting questions with which scientific inquiry begins. No one ever put the matter quite as well as the classical liberal historian Parker T. Moon of Columbia University in his study of 19th-century European imperialism, Imperialism and World Politics:

Language often obscures truth. More than is ordinarily realized, our eyes are blinded to the facts of international relations by tricks of the tongue. When one uses the simple monosyllable France one thinks of France as a unit, an entity. When to avoid awkward repetition we use a personal pronoun in referring to a country (when, for example, we say France sent her troops to conquer Tunis) we impute not only unity but personality to the country. The very words conceal the facts and make international relations a glamorous drama in which personalized nations are the actors, and all too easily we forget the flesh-and-blood men and women who are the true actors. How different it would be if we had no such word as France, and had to say instead: thirty-eight million men, women and children of very diversified interests and beliefs, inhabiting 218,000 square miles of territory! Then we should more accurately describe the Tunis expedition in some such way as this: A few of these thirty-eight million persons sent thirty thousand others to conquer Tunis. This way of putting the fact immediately suggests a question, or rather a series of questions. Who are the few? Why did they send the thirty thousand to Tunis? And why did these obey?

Group personification obscures, rather than illuminates, important political questions. Those questions, centering mostly around the explanation of complex political phenomena and moral responsibility, simply cannot be addressed within the confines of group personification, which drapes a cloak of mysticism around the actions of policymakers, thus allowing some to use philosophy (and mystical philosophy, at that) to harm others.

Libertarians are separated from communitarians by differences on important issues, notably whether coercion is necessary to maintain community, solidarity, friendship, love, and the other things that make life worth living and that can be enjoyed only in common with others. Those differences cannot be swept away a priori; their resolution is not furthered by shameless distortion, absurd characterizations, or petty name-calling.

“Myths of Individualism” originally appeared in the September/October 1996 issue of Cato Policy Report.

More:

Myths of Individualism | Libertarianism.org


Singularity Q&A | KurzweilAI

 The Singularity  Comments Off on Singularity Q&A | KurzweilAI
Jun 26, 2016
 

Originally published in 2005 with the launch of The Singularity Is Near.

Questions and Answers

So what is the Singularity?

Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), experience beaming (like Being John Malkovich), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.

And that’s the Singularity?

No, that’s just the precursor. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.

When will that occur?

I set the date for the Singularity (representing a profound and disruptive transformation in human capability) as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.

Why is this called the Singularity?

The term Singularity in my book is comparable to the use of this term by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied trillions-fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That’s what I’ve tried to do in this book.

Okay, let’s break this down. It seems a key part of your thesis is that we will be able to capture the intelligence of our brains in a machine.

Indeed.

So how are we going to achieve that?

We can break this down further into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16 cps around the end of this decade. Several supercomputers with 1 quadrillion cps are already on the drawing board, with two Japanese efforts targeting 10 quadrillion cps around the end of the decade. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms.
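
As a rough illustration of the arithmetic in the preceding answer (a sketch, not Kurzweil’s own model), the following Python snippet assumes the roughly annual doubling of price-performance described later in the interview and asks how many doublings separate the 10^14 cps supercomputer figure from the 10^16 cps brain-equivalent estimate:

    # Minimal sketch: how many doublings from 1e14 cps to 1e16 cps,
    # assuming roughly one doubling per year (an assumption of this sketch)?
    import math

    brain_equivalent_cps = 1e16   # functional-equivalence estimate cited in the text
    current_cps = 1e14            # "100 trillion cps" supercomputer figure from the text
    doubling_time_years = 1.0     # assumed annual doubling of capability

    doublings_needed = math.log2(brain_equivalent_cps / current_cps)
    years_needed = doublings_needed * doubling_time_years
    print(f"{doublings_needed:.1f} doublings, roughly {years_needed:.0f} years")
    # -> about 6.6 doublings, i.e. well under a decade under these assumptions,
    #    consistent with the "end of this decade" claim in the interview.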

And how will we recreate the algorithms of human intelligence?

To understand the principles of human intelligence, we need to reverse-engineer the human brain. Here, progress is far greater than most people realize. The spatial and temporal (time) resolution of brain scanning is also progressing at an exponential rate, roughly doubling each year, like most everything else having to do with information. Just recently, scanning tools have become able to see individual interneuronal connections and watch them fire in real time. Already, we have mathematical models and simulations of a couple dozen regions of the brain, including the cerebellum, which comprises more than half the neurons in the brain. IBM is now creating a simulation of about 10,000 cortical neurons, including tens of millions of connections. The first version will simulate the electrical activity, and a future version will also simulate the relevant chemical activity. By the mid-2020s, it’s conservative to conclude that we will have effective models for all of the brain.

So at that point we’ll just copy a human brain into a supercomputer?

I would rather put it this way: At that point, we’ll have a full understanding of the methods of the human brain. One benefit will be a deep understanding of ourselves, but the key implication is that it will expand the toolkit of techniques we can apply to create artificial intelligence. We will then be able to create nonbiological systems that match human intelligence in the ways that humans are now superior, for example, our pattern-recognition abilities. These superintelligent computers will be able to do things we are not able to do, such as share knowledge and skills at electronic speeds.

By 2030, a thousand dollars of computation will be about a thousand times more powerful than a human brain. Keep in mind also that computers will not be organized as discrete objects as they are today. There will be a web of computing deeply integrated into the environment, our bodies and brains.

You mentioned the AI tool kit. Hasn’t AI failed to live up to its expectations?

There was a boom and bust cycle in AI during the 1980s, similar to what we saw recently in e-commerce and telecommunications. Such boom-bust cycles are often harbingers of true revolutions; recall the railroad boom and bust in the 19th century. But just as the Internet bust was not the end of the Internet, the so-called AI Winter was not the end of the story for AI either. There are hundreds of applications of narrow AI (machine intelligence that equals or exceeds human intelligence for specific tasks) now permeating our modern infrastructure. Every time you send an email or make a cell phone call, intelligent algorithms route the information. AI programs diagnose electrocardiograms with an accuracy rivaling doctors, evaluate medical images, fly and land airplanes, guide intelligent autonomous weapons, make automated investment decisions for over a trillion dollars of funds, and guide industrial processes. These were all research projects a couple of decades ago. If all the intelligent software in the world were to suddenly stop functioning, modern civilization would grind to a halt. Of course, our AI programs are not intelligent enough to organize such a conspiracy, at least not yet.

Why don’t more people see these profound changes ahead?

Hopefully after they read my new book, they will. But the primary failure is the inability of many observers to think in exponential terms. Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments because they are based on what I call the intuitive linear view of history rather than the historical exponential view. My models show that we are doubling the paradigm-shift rate every decade. Thus the 20th century was gradually speeding up to the rate of progress at the end of the century; its achievements, therefore, were equivalent to about twenty years of progress at the rate in 2000. We’ll make another twenty years of progress in just fourteen years (by 2014), and then do the same again in only seven years. To express this another way, we won’t experience one hundred years of technological advance in the 21st century; we will witness on the order of 20,000 years of progress (again, when measured by the rate of progress in 2000), or about 1,000 times greater than what was achieved in the 20th century.
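
A crude back-of-the-envelope version of the “years of progress” arithmetic above can be sketched as follows. It simply assumes the paradigm-shift rate doubles every decade and sums that rate over each century; the resulting figures (roughly 14 and 15,000) land in the same order of magnitude as the “about twenty years” and “on the order of 20,000 years” quoted in the text, though Kurzweil’s own model differs in detail:

    # Sketch only: assume the rate of progress, in "year-2000 years per calendar
    # year", doubles every decade, and sum it over the 20th and 21st centuries.
    DOUBLING_PERIOD = 10.0  # years per doubling of the paradigm-shift rate (assumption)

    def equivalent_progress(start_offset, end_offset, step=0.01):
        """Accumulate progress between two offsets (years relative to 2000), where
        the rate at offset t is 2 ** (t / DOUBLING_PERIOD) year-2000 years per year."""
        total, t = 0.0, start_offset
        while t < end_offset:
            total += (2 ** (t / DOUBLING_PERIOD)) * step
            t += step
        return total

    print(f"20th century: ~{equivalent_progress(-100, 0):.0f} year-2000 years of progress")
    print(f"21st century: ~{equivalent_progress(0, 100):.0f} year-2000 years of progress")
    # -> roughly 14 and 15,000 respectively under these simplified assumptions.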

The exponential growth of information technologies is even greater: we’re doubling the power of information technologies, as measured by price-performance, bandwidth, capacity and many other types of measures, about every year. That’s a factor of a thousand in ten years, a million in twenty years, and a billion in thirty years. This goes far beyond Moore’s law (the shrinking of transistors on an integrated circuit, allowing us to double the price-performance of electronics each year). Electronics is just one example of many. As another example, it took us 14 years to sequence HIV; we recently sequenced SARS in only 31 days.
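
Spelling out that doubling arithmetic, assuming one doubling per year as the passage states:

    # One doubling per year: 10, 20, and 30 years of doublings.
    for years in (10, 20, 30):
        print(years, "years ->", f"{2 ** years:,}x")  # ~1 thousand, ~1 million, ~1 billion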

So this acceleration of information technologies applies to biology as well?

Absolutely. It’s not just computer devices like cell phones and digital cameras that are accelerating in capability. Ultimately, everything of importance will be comprised essentially of information technology. With the advent of nanotechnology-based manufacturing in the 2020s, we’ll be able to use inexpensive table-top devices to manufacture on-demand just about anything from very inexpensive raw materials using information processes that will rearrange matter and energy at the molecular level.

We’ll meet our energy needs using nanotechnology-based solar panels that will capture the energy in .03 percent of the sunlight that falls on the Earth, which is all we need to meet our projected energy needs in 2030. We’ll store the energy in highly distributed fuel cells.

I want to come back to both biology and nanotechnology, but how can you be so sure of these developments? Isn’t technical progress on specific projects essentially unpredictable?

Predicting specific projects is indeed not feasible. But the result of the overall complex, chaotic evolutionary process of technological progress is predictable.

People intuitively assume that the current rate of progress will continue for future periods. Even for those who have been around long enough to experience how the pace of change increases over time, unexamined intuition leaves one with the impression that change occurs at the same rate that we have experienced most recently. From the mathematician’s perspective, the reason for this is that an exponential curve looks like a straight line when examined for only a brief duration. As a result, even sophisticated commentators, when considering the future, typically use the current pace of change to determine their expectations in extrapolating progress over the next ten years or one hundred years. This is why I describe this way of looking at the future as the intuitive linear view. But a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process, of which technology is a primary example.

As I show in the book, this has also been true of biological evolution. Indeed, technological evolution emerges from biological evolution. You can examine the data in different ways, on different timescales, and for a wide variety of technologies, ranging from electronic to biological, as well as for their implications, ranging from the amount of human knowledge to the size of the economy, and you get the same exponentialnot linearprogression. I have over forty graphs in the book from a broad variety of fields that show the exponential nature of progress in information-based measures. For the price-performance of computing, this goes back over a century, well before Gordon Moore was even born.

Aren’t there a lot of predictions of the future from the past that look a little ridiculous now?

Yes, any number of bad predictions from other futurists in earlier eras can be cited to support the notion that we cannot make reliable predictions. In general, these prognosticators were not using a methodology based on a sound theory of technology evolution. I say this not just looking backwards now. I’ve been making accurate forward-looking predictions for over twenty years based on these models.

But how can it be the case that we can reliably predict the overall progression of these technologies if we cannot even predict the outcome of a single project?

Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. For example, how will the wireless-communication protocols Wimax, CDMA, and 3G fare over the next several years? However, as I argue extensively in the book, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness (as measured in a variety of ways) of information technologies. And as I mentioned above, information technology will ultimately underlie everything of value.

But how can that be?

We see examples in other areas of science of very smooth and reliable outcomes resulting from the interaction of a great many unpredictable events. Consider that predicting the path of a single molecule in a gas is essentially impossible, but predicting the properties of the entire gas (comprised of a great many chaotically interacting molecules) can be done very reliably through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology, comprised of many chaotic activities, can nonetheless be dependably anticipated through what I call the law of accelerating returns.
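
The gas analogy can be illustrated with a small simulation (purely illustrative, not a model from the book): each individual random walk is unpredictable, yet the average over many of them is highly regular.

    # Illustration of the aggregation idea: many unpredictable "molecules",
    # one very predictable ensemble average (just the law of large numbers).
    import random

    random.seed(0)
    N_PARTICLES, N_STEPS = 10_000, 100

    finals = []
    for _ in range(N_PARTICLES):
        position = sum(random.choice((-1, 1)) for _ in range(N_STEPS))  # one random walk
        finals.append(position)

    print("one particle's final position:", finals[0])           # essentially unpredictable
    print("ensemble mean position:", sum(finals) / N_PARTICLES)  # reliably close to 0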

What will the impact of these developments be?

Radical life extension, for one.

Sounds interesting, how does that work?

In the book, I talk about three great overlapping revolutions that go by the letters GNR, which stands for genetics, nanotechnology, and robotics. Each will provide a dramatic increase to human longevity, among other profound impacts. We’re in the early stages of the genetics (also called biotechnology) revolution right now. Biotechnology is providing the means to actually change your genes: not just designer babies but designer baby boomers. We’ll also be able to rejuvenate all of your body’s tissues and organs by transforming your skin cells into youthful versions of every other cell type. Already, new drug development is precisely targeting key steps in the process of atherosclerosis (the cause of heart disease), cancerous tumor formation, and the metabolic processes underlying each major disease and aging process. The biotechnology revolution is already in its early stages and will reach its peak in the second decade of this century, at which point we’ll be able to overcome most major diseases and dramatically slow down the aging process.

That will bring us to the nanotechnology revolution, which will achieve maturity in the 2020s. With nanotechnology, we will be able to go beyond the limits of biology, and replace your current human body version 1.0 with a dramatically upgraded version 2.0, providing radical life extension.

And how does that work?

The killer app of nanotechnology is nanobots, which are blood-cell sized robots that can travel in the bloodstream destroying pathogens, removing debris, correcting DNA errors, and reversing aging processes.

Human body version 2.0?

We’re already in the early stages of augmenting and replacing each of our organs, even portions of our brains with neural implants, the most recent versions of which allow patients to download new software to their neural implants from outside their bodies. In the book, I describe how each of our organs will ultimately be replaced. For example, nanobots could deliver to our bloodstream an optimal set of all the nutrients, hormones, and other substances we need, as well as remove toxins and waste products. The gastrointestinal tract could be reserved for culinary pleasures rather than the tedious biological function of providing nutrients. After all, we’ve already in some ways separated the communication and pleasurable aspects of sex from its biological function.

And the third revolution?

The robotics revolution, which really refers to strong AI, that is, artificial intelligence at the human level, which we talked about earlier. We’ll have both the hardware and software to recreate human intelligence by the end of the 2020s. We’ll be able to improve these methods and harness the speed, memory capabilities, and knowledge-sharing ability of machines.

We’ll ultimately be able to scan all the salient details of our brains from inside, using billions of nanobots in the capillaries. We can then back up the information. Using nanotechnology-based manufacturing, we could recreate your brain, or better yet reinstantiate it in a more capable computing substrate.

Which means?

Our biological brains use chemical signaling, which transmits information at only a few hundred feet per second. Electronics is already millions of times faster than this. In the book, I show how one cubic inch of nanotube circuitry would be about one hundred million times more powerful than the human brain. So we’ll have more powerful means of instantiating our intelligence than the extremely slow speeds of our interneuronal connections.

So we’ll just replace our biological brains with circuitry?

I see this starting with nanobots in our bodies and brains. The nanobots will keep us healthy, provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the Internet, and otherwise greatly expand human intelligence. But keep in mind that nonbiological intelligence is doubling in capability each year, whereas our biological intelligence is essentially fixed in capacity. As we get to the 2030s, the nonbiological portion of our intelligence will predominate.

The closest life extension technology, however, is biotechnology, isn’t that right?

There’s certainly overlap in the G, N and R revolutions, but that’s essentially correct.

So tell me more about how genetics or biotechnology works.

As we are learning about the information processes underlying biology, we are devising ways of mastering them to overcome disease and aging and extend human potential. One powerful approach is to start with biology’s information backbone: the genome. With gene technologies, we’re now on the verge of being able to control how genes express themselves. We now have a powerful new tool called RNA interference (RNAi), which is capable of turning specific genes off. It blocks the messenger RNA of specific genes, preventing them from creating proteins. Since viral diseases, cancer, and many other diseases use gene expression at some crucial point in their life cycle, this promises to be a breakthrough technology. One gene we’d like to turn off is the fat insulin receptor gene, which tells the fat cells to hold on to every calorie. When that gene was blocked in mice, those mice ate a lot but remained thin and healthy, and actually lived 20 percent longer.

New means of adding new genes, called gene therapy, are also emerging that have overcome earlier problems with achieving precise placement of the new genetic information. One company I’m involved with, United Therapeutics, cured pulmonary hypertension in animals using a new form of gene therapy, and it has now been approved for human trials.

So we’re going to essentially reprogram our DNA.

That’s a good way to put it, but that’s only one broad approach. Another important line of attack is to regrow our own cells, tissues, and even whole organs, and introduce them into our bodies without surgery. One major benefit of this therapeutic cloning technique is that we will be able to create these new tissues and organs from versions of our cells that have also been made younger (the emerging field of rejuvenation medicine). For example, we will be able to create new heart cells from your skin cells and introduce them into your system through the bloodstream. Over time, your heart cells get replaced with these new cells, and the result is a rejuvenated young heart with your own DNA.

Drug discovery was once a matter of finding substances that produced some beneficial effect without excessive side effects. This process was similar to early humans’ tool discovery, which was limited to simply finding rocks and natural implements that could be used for helpful purposes. Today, we are learning the precise biochemical pathways that underlie both disease and aging processes, and are able to design drugs to carry out precise missions at the molecular level. The scope and scale of these efforts is vast.

But perfecting our biology will only get us so far. The reality is that biology will never be able to match what we will be capable of engineering, now that we are gaining a deep understanding of biology’s principles of operation.

Isn’t nature optimal?

Not at all. Our interneuronal connections compute at about 200 transactions per second, at least a million times slower than electronics. As another example, a nanotechnology theorist, Rob Freitas, has a conceptual design for nanobots that replace our red blood cells. A conservative analysis shows that if you replaced 10 percent of your red blood cells with Freitas’ respirocytes, you could sit at the bottom of a pool for four hours without taking a breath.

If people stop dying, isn’t that going to lead to overpopulation?

A common mistake that people make when considering the future is to envision a major change to today’s world, such as radical life extension, as if nothing else were going to change. The GNR revolutions will result in other transformations that address this issue. For example, nanotechnology will enable us to create virtually any physical product from information and very inexpensive raw materials, leading to radical wealth creation. We’ll have the means to meet the material needs of any conceivable size population of biological humans. Nanotechnology will also provide the means of cleaning up environmental damage from earlier stages of industrialization.

So we’ll overcome disease, pollution, and poverty. Sounds like a utopian vision.

It’s true that the dramatic scale of the technologies of the next couple of decades will enable human civilization to overcome problems that we have struggled with for eons. But these developments are not without their dangers. Technology is a double-edged sword; we don’t have to look past the 20th century to see the intertwined promise and peril of technology.

What sort of perils?

G, N, and R each have their downsides. The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered virus that combines ease of transmission, deadliness, and stealthiness (that is, a long incubation period). The tools and knowledge to do this are far more widespread than the tools and knowledge to create an atomic bomb, and the impact could be far worse.

So maybe we shouldn’t go down this road.

It’s a little late for that. But the idea of relinquishing new technologies such as biotechnology and nanotechnology is already being advocated. I argue in the book that this would be the wrong strategy. Besides depriving human society of the profound benefits of these technologies, such a strategy would actually make the dangers worse by driving development underground, where responsible scientists would not have easy access to the tools needed to defend us.

So how do we protect ourselves?

I discuss strategies for protecting against dangers from abuse or accidental misuse of these very powerful technologies in chapter 8. The overall message is that we need to give a higher priority to preparing protective strategies and systems. We need to put a few more stones on the defense side of the scale. I’ve given testimony to Congress on a specific proposal for a Manhattan-style project to create a rapid-response system that could protect society from a new virulent biological virus. One strategy would be to use RNAi, which has been shown to be effective against viral diseases. We would set up a system that could quickly sequence a new virus, prepare an RNA interference medication, and rapidly gear up production. We have the knowledge to create such a system, but we have not done so. We need to have something like this in place before it’s needed.

Ultimately, however, nanotechnology will provide a completely effective defense against biological viruses.

But doesnt nanotechnology have its own self-replicating danger?

Yes, but that potential won’t exist for a couple more decades. The existential threat from engineered biological viruses exists right now.

Okay, but how will we defend against self-replicating nanotechnology?

There are already proposals for ethical standards for nanotechnology that are based on the Asilomar conference standards that have worked well thus far in biotechnology. These standards will be effective against unintentional dangers. For example, we do not need to provide self-replication to accomplish nanotechnology manufacturing.

But what about intentional abuse, as in terrorism?

We’ll need to create a nanotechnology immune system: good nanobots that can protect us from the bad ones.

Blue goo to protect us from the gray goo!

Yes, well put. And ultimately we’ll need the nanobots comprising the immune system to be self-replicating. I’ve debated this particular point with a number of other theorists, but I show in the book why the nanobot immune system we put in place will need the ability to self-replicate. That’s basically the same lesson that biological evolution learned.

Ultimately, however, strong AI will provide a completely effective defense against self-replicating nanotechnology.

Okay, what’s going to protect us against a pathological AI?

Yes, well, that would have to be a yet more intelligent AI.

This is starting to sound like that story about the universe being on the back of a turtle, and that turtle standing on the back of another turtle, and so on all the way down. So what if this more intelligent AI is unfriendly? Another even smarter AI?

History teaches us that the more intelligent civilization, the one with the most advanced technology, prevails. But I do have an overall strategy for dealing with unfriendly AI, which I discuss in chapter 8.

Okay, so I'll have to read the book for that one. But aren't there limits to exponential growth? You know the story about rabbits in Australia: they didn't keep growing exponentially forever.

There are limits to the exponential growth inherent in each paradigm. Moore's law was not the first paradigm to bring exponential growth to computing, but rather the fifth. In the 1950s they were shrinking vacuum tubes to keep the exponential growth going, and then that paradigm hit a wall. But the exponential growth of computing didn't stop. It kept going, with the new paradigm of transistors taking over. Each time we can see the end of the road for a paradigm, it creates research pressure to create the next one. That's happening now with Moore's law, even though we are still about fifteen years away from the end of our ability to shrink transistors on a flat integrated circuit. We're making dramatic progress in creating the sixth paradigm, which is three-dimensional molecular computing.
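
As a toy illustration of how successive paradigms, each following its own S-curve, can stack into growth that keeps accelerating overall (the numbers below are made up for illustration, not Kurzweil's data):

```python
import math

# Each paradigm is an S-curve that saturates at its own ceiling; a new paradigm
# with a much higher ceiling takes over as the previous one levels off.
# Start years and ceilings below are invented for illustration only.
def s_curve(t, start, ceiling):
    return ceiling / (1.0 + math.exp(-(t - start - 5)))

paradigms = [(0, 1e3), (10, 1e6), (20, 1e9), (30, 1e12)]   # (start year, capacity ceiling)
for t in range(0, 31, 5):
    total = sum(s_curve(t, start, ceiling) for start, ceiling in paradigms)
    print(f"year {t:2d}: combined capacity ~ {total:,.0f}")
# Each individual paradigm flattens out, but the running total climbs by one to
# two orders of magnitude every five years as the next paradigm takes over.
```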

But isn't there an overall limit to our ability to expand the power of computation?

Yes, I discuss these limits in the book. The ultimate two-pound computer could provide 10^42 cps, which will be about 10 quadrillion (10^16) times more powerful than all human brains put together today. And that's if we restrict the computer to staying at a cold temperature. If we allow it to get hot, we could improve that by a factor of another 100 million. And, of course, we'll be devoting more than two pounds of matter to computing. Ultimately, we'll use a significant portion of the matter and energy in our vicinity. So, yes, there are limits, but they're not very limiting.
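
Those orders of magnitude are easy to check; a minimal sketch, assuming roughly 10^16 cps per human brain and about 10^10 people (assumed figures, not stated in this exchange):

```python
import math

# Back-of-the-envelope check of the limits quoted above. Assumed inputs (not
# stated in this exchange): ~1e16 cps per human brain, ~1e10 living humans,
# and 1e42 cps for the ultimate two-pound computer kept cold.
cps_per_brain = 1e16
num_brains = 1e10
all_human_brains = cps_per_brain * num_brains     # ~1e26 cps

cold_limit = 1e42                                 # cps, computer kept cold
hot_limit = cold_limit * 1e8                      # "another factor of 100 million" if allowed to run hot

print(f"all human brains combined : ~1e{math.log10(all_human_brains):.0f} cps")
print(f"cold computer / all brains: ~1e{math.log10(cold_limit / all_human_brains):.0f}x")  # ~1e16, i.e. 10 quadrillion
print(f"hot computer / all brains : ~1e{math.log10(hot_limit / all_human_brains):.0f}x")   # ~1e24
```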

And when we saturate the ability of the matter and energy in our solar system to support intelligent processes, what happens then?

Then we'll expand to the rest of the Universe.

Which will take a long time I presume.

Well, that depends on whether we can use wormholes to get to other places in the Universe quickly, or otherwise circumvent the speed of light. If wormholes are feasible, and analyses show they are consistent with general relativity, we could saturate the universe with our intelligence within a couple of centuries. I discuss the prospects for this in chapter 6. But regardless of speculation on wormholes, we'll get to the limits of computing in our solar system within this century. At that point, we'll have expanded the powers of our intelligence by trillions of trillions.

Getting back to life extension, isn't it natural to age, to die?

Other natural things include malaria, Ebola, appendicitis, and tsunamis. Many natural things are worth changing. Aging may be natural, but I don't see anything positive in losing my mental agility, sensory acuity, physical limberness, sexual desire, or any other human ability.

In my view, death is a tragedy. It's a tremendous loss of personality, skills, knowledge, relationships. We've rationalized it as a good thing because that's really been the only alternative we've had. But disease, aging, and death are problems we are now in a position to overcome.

Wait, you said that the golden era of biotechnology was still a decade away. We don't have radical life extension today, do we?

Go here to read the rest:
Singularity Q&A | KurzweilAI


What Is Posthumanism? University of Minnesota Press

 Posthumanism  Comments Off on What Is Posthumanism? University of Minnesota Press
Jun 242016
 

Beyond humanism and anthropocentrism

Can a new kind of humanities (posthumanities) respond to the redefinition of humanity’s place in the world by both the technological and the biological or “green” continuum in which the “human” is but one life form among many? Exploring this radical repositioning, Cary Wolfe ranges across bioethics, cognitive science, animal ethics, gender, and disability to develop a theoretical and philosophical approach responsive to our changing understanding of ourselves and our world.

What Is Posthumanism? is an original, thoroughly argued, fundamental redefinition and refocusing of posthumanism. Firmly distinguishing posthumanism from discourses of the posthuman or transhumanism, this book will be at the center of discussion for a long time to come.

Donna Haraway, author of When Species Meet

What does it mean to think beyond humanism? Is it possible to craft a mode of philosophy, ethics, and interpretation that rejects the classic humanist divisions of self and other, mind and body, society and nature, human and animal, organic and technological? Can a new kind of humanities (posthumanities) respond to the redefinition of humanity's place in the world by both the technological and the biological or "green" continuum in which the "human" is but one life form among many?

Exploring how both critical thought and cultural practice have reacted to this radical repositioning, Cary Wolfe, one of the founding figures in the field of animal studies and posthumanist theory, ranges across bioethics, cognitive science, animal ethics, gender, and disability to develop a theoretical and philosophical approach responsive to our changing understanding of ourselves and our world. Then, in performing posthumanist readings of such diverse works as Temple Grandin's writings, Wallace Stevens's poetry, Lars von Trier's Dancer in the Dark, the architecture of Diller+Scofidio, and David Byrne and Brian Eno's My Life in the Bush of Ghosts, he shows how this philosophical sensibility can transform art and culture.

For Wolfe, a vibrant, rigorous posthumanism is vital for addressing questions of ethics and justice, language and trans-species communication, social systems and their inclusions and exclusions, and the intellectual aspirations of interdisciplinarity. In What Is Posthumanism? he carefully distinguishes posthumanism from transhumanism (the biotechnological enhancement of human beings) and narrow definitions of the posthuman as the hoped-for transcendence of materiality. In doing so, Wolfe reveals that it is humanism, not the human in all its embodied and prosthetic complexity, that is left behind in posthumanist thought.

Cary Wolfe holds the Bruce and Elizabeth Dunlevie Chair in English at Rice University. His previous books include Critical Environments: Postmodern Theory and the Pragmatics of the Outside, Observing Complexity: Systems Theory and Postmodernity, and Zoontologies: The Question of the Animal, all published by the University of Minnesota Press.

Wolfe offers a smart, provocative account of posthumanism as an idea and as a way of thinking that has consequences extending from the way universities are organized to decisions regarding public policy and bioethics. Although his writing is complex and demanding, the ethical and ecological urgency with which he frames his readings combines with the wide, diversified scope of his scholarship to make this a work to be reckoned with.

Wolfe's book, without a doubt, supplies important insights.

Wolfe has created an incredibly useful primer on posthumanist theory. For anyone attempting to engage in academic work relating to these theories, this book is a highly recommended starting point.

Big Muddy: A Journal of the Mississippi River Valley

It is one of those books that sucks you in almost immediately.

ISLE: Interdisciplinary Studies in Literature and Environment

Readers . . . will find Wolfe's analysis of both visual and audio culture to be thought-provoking.

Science Fiction Film and Television

It is a profound, thoroughly researched study with far-reaching consequences for public policy, bioethics, education, and the arts.

Science, Culture, Integrated Yoga

What Is Posthumanism? is an intelligent, extensively argued and challenging work.

Wolfe's work shifts the tired terms of the debate in new and needed directions, offering strength and strategies to all those for whom simplistic, technophilic accounts of the posthuman condition are a smooth road to nowhere different.

Electronic Book Review

Tremendous intellectual, scholarly, and artistic breadth.

As a blueprint for where a posthumanist approach could take cultural theory, his book is conceptually invaluable.

Wolfe's posthumanism is brilliant in the way it allows us to realize that each of these species might have different forms of perception, different ways of being in the world, and that those differences are actually analogous with otherness among human beings.

Wolfe deserves credit for a rich set of discussions that, taken together, bring out the interest of the intellectual trend that he calls posthumanism.

UMP blog: Discovering the HUMAN

3/24/2010 Part of the unfortunate fallout of the conceptual apparatus of humanism is that it gives us an overly simple picture (a fantasy, really) of what the human is. Consider, for example, the rise of what is often called transhumanism, often taken to be a defining discourse of posthumanism (as in Ray Kurzweil's work on the singularity: the historical moment at which engineering developments such as nanotechnology enable us to transcend our physical and biological limitations as embodied beings, ushering in a new phase of evolution). As many of its proponents freely admit, the philosophical ideals of transhumanism are quite identifiably humanist, not only in their dream of transcending the life of the body and our animal origins but also in their investment in the ideals of human perfectibility, rationality, autonomy, and agency. Read more …

See the original post here:

What Is Posthumanism? University of Minnesota Press


Superintelligence – Nick Bostrom – Oxford University Press

 Superintelligence  Comments Off on Superintelligence – Nick Bostrom – Oxford University Press
Jun 212016
 

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Reviews and Awards

“I highly recommend this book” –Bill Gates

“Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.” –Stuart Russell, Professor of Computer Science, University of California, Berkeley

“Those disposed to dismiss an ‘AI takeover’ as science fiction may think again after reading this original and well-argued book.” –Martin Rees, Past President, Royal Society

“This superb analysis by one of the world’s clearest thinkers tackles one of humanity’s greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn’t become the last?” –Professor Max Tegmark, MIT

“Terribly important … groundbreaking… extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines – engineering, natural sciences, medicine, social sciences and philosophy – into a comprehensible whole… If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson’s Silent Spring from 1962, or ever.” –Olle Haggstrom, Professor of Mathematical Statistics

“Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking” –The Economist

“There is no doubting the force of [Bostrom’s] arguments…the problem is a research challenge worthy of the next generation’s best mathematical talent. Human civilisation is at stake.” –Clive Cookson, Financial Times

“Worth reading…. We need to be super careful with AI. Potentially more dangerous than nukes” –Elon Musk, Founder of SpaceX and Tesla

“Every intelligent person should read it.” –Nils Nilsson, Artificial Intelligence Pioneer, Stanford University

See original here:

Superintelligence – Nick Bostrom – Oxford University Press


Superintelligence: Paths, Dangers, Strategies by Nick …

 Superintelligence  Comments Off on Superintelligence: Paths, Dangers, Strategies by Nick …
Jun 212016
 

Is the surface of our planet — and maybe every planet we can get our hands on — going to be carpeted in paper clips (and paper clip factories) by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom, head of Oxford’s Future of Humanity Institute, thinks that we can’t guarantee it _won’t_ happen, and it worries him. It doesn’t require Skynet and Terminators, it doesn’t require evil geniuses bent on destroying the world, it just requires a powerful AI with a moral system in which humanity’s welfare is irrelevant or defined very differently than most humans today would define it. If the AI has a single goal and is smart enough to outwit our attempts to disable or control it once it has gotten loose, Game Over, argues Professor Bostrom in his book _Superintelligence_.

This is perhaps the most important book I have read this decade, and it has kept me awake at night for weeks. I want to tell you why, and what I think, but a lot of this is difficult ground, so please bear with me. The short form is that I am fairly certain that we _will_ build a true AI, and I respect Vernor Vinge, but I have long been skeptical of the Kurzweilian notions of inevitability, doubly-exponential growth, and the Singularity. I’ve also been skeptical of the idea that AIs will destroy us, either on purpose or by accident. Bostrom’s book has made me think that perhaps I was naive. I still think that, on the whole, his worst-case scenarios are unlikely. However, he argues persuasively that we can’t yet rule out any number of bad outcomes of developing AI, and that we need to be investing much more in figuring out whether developing AI is a good idea. We may need to put a moratorium on research, as was done for a few years with recombinant DNA starting in 1975. We also need to be prepared for the possibility that such a moratorium doesn’t hold. Bostrom also brings up any number of mind-bending dystopias around what qualifies as human, which we’ll get to below.

(snips to my review, since Goodreads limits length)

In case it isn’t obvious by now, both Bostrom and I take it for granted that it’s not only possible but nearly inevitable that we will create a strong AI, in the sense of it being a general, adaptable intelligence. Bostrom skirts the issue of whether it will be conscious, or “have qualia”, as I think the philosophers of mind say.

Where Bostrom and I differ is in the level of plausibility we assign to the idea of a truly exponential explosion in intelligence by AIs, in a takeoff for which Vernor Vinge coined the term “the Singularity.” Vinge is rational, but Ray Kurzweil is the most famous proponent of the Singularity. I read one of Kurzweil’s books a number of years ago, and I found it imbued with a lot of near-mystic hype. He believes the Universe’s purpose is the creation of intelligence, and that that process is growing on a double exponential, starting from stars and rocks through slime molds and humans and on to digital beings.

I’m largely allergic to that kind of hooey. I really don’t see any evidence of the domain-to-domain acceleration that Kurzweil sees, and in particular the shift from biological to digital beings will result in a radical shift in the evolutionary pressures. I see no reason why any sort of “law” should dictate that digital beings will evolve at a rate that *must* be faster than the biological one. I also don’t see that Kurzweil really pays any attention to the physical limits of what will ultimately be possible for computing machines. Exponentials can’t continue forever, as Danny Hillis is fond of pointing out. http://www.kurzweilai.net/ask-ray-the…

So perhaps my opinion is somewhat biased by a dislike of Kurzweil’s circus barker approach, but I think there is more to it than that. Fundamentally, I would put it this way:

Being smart is hard.

And making yourself smarter is also hard. My inclination is that getting smarter is at least as hard as the advantages it brings, so that the difficulty of the problem and the resources that can be brought to bear on it roughly balance. This will result in a much slower takeoff than Kurzweil reckons, in my opinion. Bostrom presents a spectrum of takeoff speeds, from “too fast for us to notice” through “long enough for us to develop international agreements and monitoring institutions,” but he makes it fairly clear that he believes that the probability of a fast takeoff is far too large to ignore. There are parts of his argument I find convincing, and parts I find less so.

To give you a little more insight into why I am a little dubious that the Singularity will happen in what Bostrom would describe as a moderate to fast takeoff, let me talk about the kinds of problems we human beings solve, and that an AI would have to solve. Actually, rather than the kinds of questions, first let me talk about the kinds of answers we would like an AI (or a pet family genius) to generate when given a problem. Off the top of my head, I can think of six:

[Speed] Same quality of answer, just faster.
[Ply] Look deeper in number of plies (moves, in chess or go).
[Data] Use more, and more up-to-date, data.
[Creativity] Something beautiful and new.
[Insight] Something new and meaningful, such as a new theory; probably combines elements of all of the above categories.
[Values] An answer about (human) values.

The first three are really about how the answers are generated; the last three about what we want to get out of them. I think this set is reasonably complete and somewhat orthogonal, despite those differences.

So what kinds of problems do we apply these styles of answers to? We ultimately want answers that are “better” in some qualitative sense.

Humans are already pretty good at projecting the trajectory of a baseball, but it’s certainly conceivable that a robot batter could be better, by calculating faster and using better data. Such a robot might make for a boring opponent for a human, but it would not be beyond human comprehension.

But if you accidentally knock a bucket of baseballs down a set of stairs, better data and faster computing are unlikely to help you predict the exact order in which the balls will reach the bottom and what happens to the bucket. Someone “smarter” might be able to make some interesting statistical predictions that wouldn’t occur to you or me, but not fill in every detail of every interaction between the balls and stairs. Chaos, in the sense of sensitive dependence on initial conditions, is just too strong.
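
A minimal illustration of that sensitivity, using the standard chaotic logistic map rather than anything specific to baseballs and stairs: two trajectories that start a millionth apart become completely different within a few dozen steps.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic map
# x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4). A stand-in
# illustration only, not a model of the falling baseballs above.
r = 4.0
x_a, x_b = 0.400000, 0.400001   # starting points differing by one part in a million

for step in range(1, 51):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.6f}")
# The gap grows from 1e-6 to order 1 within a few dozen steps: faster computing
# and better data cannot buy back the precision lost to chaos.
```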

In chess, go, or shogi, a 1000x improvement in the number of plies that can be investigated gains you maybe only the ability to look ahead two or three moves more than before. Less if your pruning (discarding unpromising paths) is poor, more if it’s good. Don’t get me wrong — that’s a huge deal, any player will tell you. But in this case, humans are already pretty good, when not time limited.
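
The arithmetic behind that claim, under the simplifying assumption that the game tree has an effective branching factor b after pruning, so a 1000x compute increase buys roughly log base b of 1000 extra plies:

```python
import math

# Extra lookahead bought by 1000x more compute, assuming an effective branching
# factor b after pruning (illustrative values, not measured engine data).
for b in (5, 10, 30):            # strong pruning, moderate pruning, weak pruning
    extra_plies = math.log(1000) / math.log(b)
    print(f"effective branching factor {b:2d}: ~{extra_plies:.1f} extra plies")
# Roughly 4.3, 3.0 and 2.0 plies respectively: "two or three moves more than
# before, less if your pruning is poor, more if it's good."
```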

Go players like to talk about how close the top pros are to God, and the possibly apocryphal answer from a top pro was that he would want a three-stone (three-move) handicap, four if his life depended on it. Compare this to the fact that a top pro is still some ten stones stronger than me, a fair amateur, and could beat a rank beginner even if the beginner was given the first forty moves. Top pros could sit across the board from an almost infinitely strong AI and still hold their heads up.

In the most recent human-versus-computer shogi (Japanese chess) series, humans came out on top, though presumably this won’t last much longer.

In chess, as machines got faster, looked more plies ahead, carried around more knowledge, and got better at pruning the tree of possible moves, human opponents were heard to say that they felt the glimmerings of insight or personality from them.

So again we have some problems, at least, where plies will help, and will eventually guarantee a 100% win rate against the best (non-augmented) humans, but they will likely not move beyond what humans can comprehend.

Simply being able to hold more data in your head (or the AI’s head) while making a medical diagnosis using epidemiological data, or cross-correlating drug interactions, for example, will definitely improve our lives, and I can imagine an AI doing this. Again, however, the AI’s capabilities are unlikely to recede into the distance as something we can’t comprehend.

We know that increasing the amount of data you can handle by a factor of a thousand gains you 10x in each dimension for a 3-D model of the atmosphere or ocean, up until chaotic effects begin to take over, and then (as we currently understand it) you can only resort to repeated simulations and statistical measures. The actual calculations done by a climate model long ago reached the point where even a large team of humans couldn’t complete them in a lifetime. But they are not calculations we cannot comprehend, in fact, humans design and debug them.

So for problems with answers in the first three categories, I would argue that being smarter is helpful, but being a *lot* smarter is *hard*. The size of computation grows quickly in many problems, and for many problems we believe that sheer computation is fundamentally limited in how well it can correspond to the real world.

But those are just the warmup. Those are things we already ask computers to do for us, even though they are “dumber” than we are. What about the latter three categories?

I’m no expert in creativity, and I know researchers study it intensively, so I’m going to weasel through by saying it is the ability to generate completely new material, which involves some random process. You also need the ability either to generate that material such that it is aesthetically pleasing with high probability, or to prune those new ideas rapidly using some metric that achieves your goal.

For my purposes here, insight is the ability to be creative not just for esthetic purposes, but in a specific technical or social context, and to validate the ideas. (No implication that artists don’t have insight is intended, this is just a technical distinction between phases of the operation, for my purposes here.) Einstein’s insight for special relativity was that the speed of light is constant. Either he generated many, many hypotheses (possibly unconsciously) and pruned them very rapidly, or his hypothesis generator was capable of generating only a few good ones. In either case, he also had the mathematical chops to prove (or at least analyze effectively) his hypothesis; this analysis likewise involves generating possible paths of proofs through the thicket of possibilities and finding the right one.

So, will someone smarter be able to do this much better? Well, it’s really clear that Einstein (or Feynman or Hawking, if your choice of favorite scientist leans that way) produced and validated hypotheses that the rest of us never could have. It’s less clear to me exactly how *much* smarter than the rest of us he was; did he generate and prune ten times as many hypotheses? A hundred? A million? My guess is it’s closer to the latter than the former. Even generating a single hypothesis that could be said to attack the problem is difficult, and most humans would decline to even try if you asked them to.

Making better devices and systems of any kind requires all of the above capabilities. You must have insight to innovate, and you must be able to quantitatively and qualitatively analyze the new systems, requiring the heavy use of data. As systems get more complex, all of this gets harder. My own favorite example is airplane engines. The Wright Brothers built their own engines for their planes. Today, it takes a team of hundreds to create a jet turbine — thousands, if you reach back into the supporting materials, combustion and fluid flow research. We humans have been able to continue to innovate by building on the work of prior generations, and especially harnessing teams of people in new ways. Unlike Peter Thiel, I don’t believe that our rate of innovation is in any serious danger of some precipitous decline sometime soon, but I do agree that we begin with the low-lying fruit, so that harvesting fruit requires more effort — or new techniques — with each passing generation.

The Singularity argument depends on the notion that the AI would design its own successor, or even modify itself to become smarter. Will we watch AIs gradually pull even with us and then ahead, but not disappear into the distance in a Roadrunner-like flash of dust covering just a few frames of film in our dull-witted comprehension?

Ultimately, this is the question on which continued human existence may depend: If an AI is enough smarter than we are, will it find the process of improving itself to be easy, or will each increment of intelligence be a hard problem for the system of the day? This is what Bostrom calls the “recalcitrance” of the problem.

I believe that the range of possible systems grows rapidly as they get more complex, and that evaluating them gets harder; this is hard to quantify, but each step might involve a thousand times as many options, or evaluating each option might be a thousand times harder. Growth in computational power won’t dramatically overbalance that and give sustained, rapid and accelerating growth that moves AIs beyond our comprehension quickly. (Don’t take these numbers seriously, it’s just an example.)

Bostrom believes that recalcitrance will grow more slowly than the resources the AI can bring to bear on the problem, resulting in continuing, and rapid, exponential increases in intelligence — the arrival of the Singularity. As you can tell from the above, I suspect that the opposite is the case, or that they very roughly balance, but Bostrom argues convincingly. He is forcing me to reconsider.
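
To make the disagreement concrete, here is a toy model (made-up functional forms, not Bostrom's own math) in which capability grows at a rate equal to optimization power divided by recalcitrance, with optimization power proportional to current capability:

```python
# Toy takeoff model: dI/dt = optimization_power / recalcitrance, where
# optimization power is proportional to current capability I (the system works
# on improving itself). The functional forms and numbers are illustrative only.
def simulate(recalcitrance_exponent, steps=60, dt=0.1):
    capability = 1.0
    for _ in range(steps):
        recalcitrance = capability ** recalcitrance_exponent
        capability += dt * capability / recalcitrance
    return capability

# If recalcitrance stays flat, growth is exponential (a fast takeoff);
# if recalcitrance rises faster than capability, growth stalls.
print(f"constant recalcitrance : capability grows to ~{simulate(0.0):.0f}x")   # ~300x, runaway
print(f"recalcitrance ~ I^1.5  : capability grows to ~{simulate(1.5):.1f}x")   # ~5x, levels off
```

The two runs differ only in how fast recalcitrance rises relative to capability, which is exactly the point of disagreement described above.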

What about “values”, my sixth type of answer, above? Ah, there’s where it all goes awry. Chapter eight is titled “Is the default outcome doom?” and it will keep you awake.

What happens when we put an AI in charge of a paper clip factory, and instruct it to make as many paper clips as it can? With such a simple set of instructions, it will do its best to acquire more resources in order to make more paper clips, building new factories in the process. If it’s smart enough, it will even anticipate that we might not like this and attempt to disable it, but it will have the will and means to deflect our feeble strikes against it. Eventually, it will take over every factory on the planet, continuing to produce paper clips until we are buried in them. It may even go on to asteroids and other planets in a single-minded attempt to carpet the Universe in paper clips.

I suppose it goes without saying that Bostrom thinks this would be a bad outcome. Bostrom reasons that AIs ultimately may or may not be similar enough to us that they count as our progeny, but doesn’t hesitate to view them as adversaries, or at least rivals, in the pursuit of resources and even existence. Bostrom clearly roots for humanity here. Which means it’s incumbent on us to find a way to prevent this from happening.

Bostrom thinks that instilling values that are actually close enough to ours that an AI will “see things our way” is nigh impossible. There are just too many ways that the whole process can go wrong. If an AI is given the goal of “maximizing human happiness,” does it count when it decides that the best way to do that is to create the maximum number of digitally emulated human minds, even if that means sacrificing some of the physical humans we already have because the planet’s carrying capacity is higher for digital than organic beings?

As long as we’re talking about digital humans, what about the idea that a super-smart AI might choose to simulate human minds in enough detail that they are conscious, in the process of trying to figure out humanity? Do those recursively digital beings deserve any legal standing? Do they count as human? If their simulations are stopped and destroyed, have they been euthanized, or even murdered? Some of the mind-bending scenarios that come out of this recursion kept me awake nights as I was reading the book.

He uses a variety of names for different strategies for containing AIs, including “genies” and “oracles”. The most carefully circumscribed ones are only allowed to answer questions, maybe even “yes/no” questions, and have no other means of communicating with the outside world. Given that Bostrom attributes nearly infinite brainpower to an AI, it is hard to effectively rule out that an AI could still find some way to manipulate us into doing its will. If the AI’s ability to probe the state of the world is likewise limited, Bostrom argues that it can still turn even single-bit probes of its environment into a coherent picture. It can then decide to get loose and take over the world, and identify security flaws in outside systems that would allow it to do so even with its very limited ability to act.

I think this unlikely. Imagine we set up a system to monitor the AI that alerts us immediately when the AI begins the equivalent of a port scan, for whatever its interaction mechanism is. How could it possibly know of the existence and avoid triggering the alert? Bostrom has gone off the deep end in allowing an intelligence to infer facts about the world even when its data is very limited. Sherlock Holmes always turns out to be right, but that’s fiction; in reality, many, many hypotheses would suit the extremely slim amount of data he has. The same will be true with carefully boxed AIs.

At this point, Bostrom has argued that containing a nearly infinitely powerful intelligence is nearly impossible. That seems to me to be effectively tautological.

If we can’t contain them, what options do we have? After arguing earlier that we can’t give AIs our own values (and presenting mind-bending scenarios for what those values might actually mean in a Universe with digital beings), he then turns around and invests a whole string of chapters in describing how we might actually go about building systems that have those values from the beginning.

At this point, Bostrom began to lose me. Beyond the systems for giving AIs values, I felt he went off the rails in describing human behavior in simplistic terms. We are incapable of balancing our desire to reproduce with a view of the tragedy of the commons, and are inevitably doomed to live out our lives in a rude, resource-constrained existence. There were some interesting bits in the taxonomies of options, but the last third of the book felt very speculative, even more so than the earlier parts.

Bostrom is rational and seems to have thought carefully about the mechanisms by which AIs may actually arise. Here, I largely agree with him. I think his faster scenarios of development, though, are unlikely: being smart, and getting smarter, is hard. He thinks a “singleton”, a single, most powerful AI, is the nearly inevitable outcome. I think populations of AIs are more likely, but if anything this appears to make some problems worse. I also think his scenarios for controlling AIs are handicapped in their realism by the nearly infinite powers he assigns them. In either case, Bostrom has convinced me that once an AI is developed, there are many ways it can go wrong, to the detriment and possibly extermination of humanity. Both he and I are opposed to this. I’m not ready to declare a moratorium on AI research, but there are many disturbing possibilities and many difficult moral questions that need to be answered.

The first step in answering them, of course, is to begin discussing them in a rational fashion, while there is still time. Read the first 8 chapters of this book!

Read more here:

Superintelligence: Paths, Dangers, Strategies by Nick …

Three Laws of Robotics – Wikipedia, the free encyclopedia

 Robotics  Comments Off on Three Laws of Robotics – Wikipedia, the free encyclopedia
Jun 212016
 

The Three Laws of Robotics (often shortened to The Three Laws or Three Laws, also known as Asimov’s Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround”, although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These form an organizing principle and unifying theme for Asimov’s robotic-based fiction, appearing in his Robot series, the stories linked to it, and his Lucky Starr series of young-adult fiction. The Laws are incorporated into almost all of the positronic robots appearing in his fiction, and cannot be bypassed, being intended as a safety feature. Many of Asimov’s robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself. Other authors working in Asimov’s fictional universe have adopted them and references, often parodic, appear throughout science fiction as well as in other genres.

The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other. In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The Three Laws, and the zeroth, have pervaded science fiction and are referred to in many books, films, and other media.
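
One way to see how the priority ordering operates is as a strict rule hierarchy. The sketch below is purely illustrative (it is not from Asimov's fiction or from robotics practice): a proposed action is judged by the lowest-numbered law that applies to it.

```python
# Toy illustration: the Laws behave as a strict priority ordering, so a conflict
# is resolved by the lowest-numbered applicable law. Each entry is
# (law name, condition on a proposed action, verdict it imposes).
LAWS = [
    ("First Law",  lambda a: a.get("harms_human", False),      "forbidden"),
    ("Second Law", lambda a: a.get("ordered_by_human", False), "required"),
    ("Third Law",  lambda a: a.get("endangers_self", False),   "forbidden"),
]

def judge(action):
    """Return (verdict, governing law); the first applicable law in priority order wins."""
    for name, applies, verdict in LAWS:
        if applies(action):
            return verdict, name
    return "permitted", "no law applies"

# An order to injure a human: the First Law outranks the Second, so the action is refused.
print(judge({"harms_human": True, "ordered_by_human": True}))     # ('forbidden', 'First Law')
# An order that merely endangers the robot: the Second Law outranks the Third.
print(judge({"endangers_self": True, "ordered_by_human": True}))  # ('required', 'Second Law')
```

The modifications discussed below, such as the Zeroth Law and the "New Laws", amount to adding, removing, or reordering entries in exactly this kind of hierarchy.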

In The Rest of the Robots, published in 1964, Asimov noted that when he began writing in 1940 he felt that “one of the stock plots of science fiction was… robots were created and destroyed their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings?” He decided that in his stories robots would not “turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust.”[2]

On May 3, 1939 Asimov attended a meeting of the Queens Science Fiction Society where he met Earl and Otto Binder, who had recently published a short story “I, Robot” featuring a sympathetic robot named Adam Link who was misunderstood and motivated by love and honor. (This was the first of a series of ten stories; the next year “Adam Link’s Vengeance” (1940) featured Adam thinking “A robot must never kill a human, of his own free will.”)[3] Asimov admired the story. Three days later Asimov began writing “my own story of a sympathetic and noble robot”, his 14th story.[4] Thirteen days later he took “Robbie” to John W. Campbell, the editor of Astounding Science-Fiction. Campbell rejected it, claiming that it bore too strong a resemblance to Lester del Rey’s “Helen O’Loy”, published in December 1938; the story of a robot that is so much like a person that she falls in love with her creator and becomes his ideal wife.[5] Frederik Pohl published “Robbie” in Astonishing Stories magazine the following year.[6]

Asimov attributes the Three Laws to John W. Campbell, from a conversation that took place on 23 December 1940. Campbell claimed that Asimov had the Three Laws already in his mind and that they simply needed to be stated explicitly. Several years later Asimov’s friend Randall Garrett attributed the Laws to a symbiotic partnership between the two men, a suggestion that Asimov adopted enthusiastically.[7] According to his autobiographical writings, Asimov included the First Law’s “inaction” clause because of Arthur Hugh Clough’s poem “The Latest Decalogue”, which includes the satirical lines “Thou shalt not kill, but needst not strive / officiously to keep alive”.[8]

Although Asimov pins the creation of the Three Laws on one particular date, their appearance in his literature happened over a period. He wrote two robot stories with no explicit mention of the Laws, “Robbie” and “Reason”. He assumed, however, that robots would have certain inherent safeguards. “Liar!”, his third robot story, makes the first mention of the First Law but not the other two. All three laws finally appeared together in “Runaround”. When these stories and several others were compiled in the anthology I, Robot, “Reason” and “Robbie” were updated to acknowledge all the Three Laws, though the material Asimov added to “Reason” is not entirely consistent with the Three Laws as he described them elsewhere.[9] In particular the idea of a robot protecting human lives when it does not believe those humans truly exist is at odds with Elijah Baley’s reasoning, as described below.

During the 1950s Asimov wrote a series of science fiction novels expressly intended for young-adult audiences. Originally his publisher expected that the novels could be adapted into a long-running television series, something like The Lone Ranger had been for radio. Fearing that his stories would be adapted into the “uniformly awful” programming he saw flooding the television channels[10] Asimov decided to publish the Lucky Starr books under the pseudonym “Paul French”. When plans for the television series fell through, Asimov decided to abandon the pretence; he brought the Three Laws into Lucky Starr and the Moons of Jupiter, noting that this “was a dead giveaway to Paul French’s identity for even the most casual reader”.[11]

In his short story “Evidence” Asimov lets his recurring character Dr. Susan Calvin expound a moral basis behind the Three Laws. Calvin points out that human beings are typically expected to refrain from harming other human beings (except in times of extreme duress like war, or to save a greater number) and this is equivalent to a robot’s First Law. Likewise, according to Calvin, society expects individuals to obey instructions from recognized authorities such as doctors, teachers and so forth which equals the Second Law of Robotics. Finally humans are typically expected to avoid harming themselves which is the Third Law for a robot.

The plot of “Evidence” revolves around the question of telling a human being apart from a robot constructed to appear human. Calvin reasons that if such an individual obeys the Three Laws he may be a robot or simply “a very good man”. Another character then asks Calvin if robots are very different from human beings after all. She replies, “Worlds different. Robots are essentially decent.”

Asimov later wrote that he should not be praised for creating the Laws, because they are “obvious from the start, and everyone is aware of them subliminally. The Laws just never happened to be put into brief sentences until I managed to do the job. The Laws apply, as a matter of course, to every tool that human beings use”,[12] and “analogues of the Laws are implicit in the design of almost all tools, robotic or not”:[13]

Asimov believed that, ideally, humans would also follow the Laws:[12]

I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior.

My answer is, “Yes, the Three Laws are the only way in which rational human beings can deal with robotsor with anything else.”

But when I say that, I always remember (sadly) that human beings are not always rational.

Asimov’s stories test his Three Laws in a wide variety of circumstances leading to proposals and rejection of modifications. Science fiction scholar James Gunn writes in 1982, “The Asimov robot stories as a whole may respond best to an analysis on this basis: the ambiguity in the Three Laws and the ways in which Asimov played twenty-nine variations upon a theme”.[14] While the original set of Laws provided inspirations for many stories, Asimov introduced modified versions from time to time.

In “Little Lost Robot” several NS-2, or “Nestor” robots, are created with only part of the First Law. It reads:

1. A robot may not harm a human being.

This modification is motivated by a practical difficulty as robots have to work alongside human beings who are exposed to low doses of radiation. Because their positronic brains are highly sensitive to gamma rays the robots are rendered inoperable by doses reasonably safe for humans. The robots are being destroyed attempting to rescue the humans who are in no actual danger but “might forget to leave” the irradiated area within the exposure time limit. Removing the First Law’s “inaction” clause solves this problem but creates the possibility of an even greater one: a robot could initiate an action that would harm a human (dropping a heavy weight and failing to catch it is the example given in the text), knowing that it was capable of preventing the harm and then decide not to do so.[1]

Gaia is a planet with collective intelligence in the Foundation which adopts a law similar to the First Law, and the Zeroth Law, as its philosophy:

Gaia may not harm life or allow life to come to harm.

Asimov once added a “Zeroth Law”, so named to continue the pattern where lower-numbered laws supersede the higher-numbered laws, stating that a robot must not harm humanity. The robotic character R. Daneel Olivaw was the first to give the Zeroth Law a name, in the novel Robots and Empire;[15] however, the character Susan Calvin articulates the concept in the short story “The Evitable Conflict”.

In the final scenes of the novel Robots and Empire, R. Giskard Reventlov is the first robot to act according to the Zeroth Law. Giskard is telepathic, like the robot Herbie in the short story “Liar!”, and tries to apply the Zeroth Law through his understanding of a more subtle concept of “harm” than most robots can grasp.[16] However, unlike Herbie, Giskard grasps the philosophical concept of the Zeroth Law, allowing him to harm individual human beings if he can do so in service to the abstract concept of humanity. The Zeroth Law is never programmed into Giskard's brain but instead is a rule he attempts to comprehend through pure metacognition. Though he fails (the attempt ultimately destroys his positronic brain, as he is not certain whether his choice will turn out to be for the ultimate good of humanity or not), he gives his successor R. Daneel Olivaw his telepathic abilities. Over the course of many thousands of years Daneel adapts himself to be able to fully obey the Zeroth Law. As Daneel formulates it, in the novels Foundation and Earth and Prelude to Foundation, the Zeroth Law reads:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

A condition stating that the Zeroth Law must not be broken was added to the original Three Laws, although Asimov recognized the difficulty such a law would pose in practice.

Trevize frowned. “How do you decide what is injurious, or not injurious, to humanity as a whole?”
“Precisely, sir,” said Daneel. “In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction.”

Foundation and Earth

A translator incorporated the concept of the Zeroth Law into one of Asimov’s novels before Asimov himself made the law explicit.[17] Near the climax of The Caves of Steel, Elijah Baley makes a bitter comment to himself, thinking that the First Law forbids a robot from harming a human being. He determines that it must be so unless the robot is clever enough to comprehend that its actions are for humankind’s long-term good. In Jacques Brécard’s 1956 French translation, entitled Les Cavernes d’acier, Baley’s thoughts emerge in a slightly different way:

“A robot may not harm a human being, unless he finds a way to prove that ultimately the harm done would benefit humanity in general!”[17]

Asimov portrayed robots that disregard the Three Laws entirely three times during his writing career. The first case was a short-short story entitled “First Law” and is often considered an insignificant “tall tale”[18] or even apocryphal.[19] On the other hand, the short story “Cal” (from the collection Gold), told by a first-person robot narrator, features a robot who disregards the Three Laws because he has found something far more important: he wants to be a writer. Humorous, partly autobiographical and unusually experimental in style, “Cal” has been regarded as one of Gold’s strongest stories.[20] The third is a short story entitled “Sally”, in which cars fitted with positronic brains are apparently able to harm and kill humans in disregard of the First Law. However, aside from the positronic brain concept, this story does not refer to other robot stories and may not be set in the same continuity.

The title story of the Robot Dreams collection portrays LVX-1, or “Elvex”, a robot who enters a state of unconsciousness and dreams thanks to the unusual fractal construction of his positronic brain. In his dream the first two Laws are absent and the Third Law reads “A robot must protect its own existence”.[21]

Asimov took varying positions on whether the Laws were optional: although in his first writings they were simply carefully engineered safeguards, in later stories Asimov stated that they were an inalienable part of the mathematical foundation underlying the positronic brain. Without the basic theory of the Three Laws the fictional scientists of Asimov’s universe would be unable to design a workable brain unit. This is historically consistent: the occasions where roboticists modify the Laws generally occur early within the stories’ chronology and at a time when there is less existing work to be re-done. In “Little Lost Robot” Susan Calvin considers modifying the Laws to be a terrible idea, although possible,[22] while centuries later Dr. Gerrigel in The Caves of Steel believes it to be impossible.

The character Dr. Gerrigel uses the term “Asenion” to describe robots programmed with the Three Laws. The robots in Asimov’s stories, being Asenion robots, are incapable of knowingly violating the Three Laws but, in principle, a robot in science fiction or in the real world could be non-Asenion. “Asenion” is a misspelling of the name Asimov which was made by an editor of the magazine Planet Stories.[23] Asimov used this obscure variation to insert himself into The Caves of Steel just like he referred to himself as “Azimuth or, possibly, Asymptote” in Thiotimoline to the Stars, in much the same way that Vladimir Nabokov appeared in Lolita anagrammatically disguised as “Vivian Darkbloom”.

Characters within the stories often point out that the Three Laws, as they exist in a robot’s mind, are not the written versions usually quoted by humans but abstract mathematical concepts upon which a robot’s entire developing consciousness is based. This concept is largely fuzzy and unclear in earlier stories depicting very rudimentary robots who are only programmed to comprehend basic physical tasks, where the Three Laws act as an overarching safeguard, but by the era of The Caves of Steel featuring robots with human or beyond-human intelligence the Three Laws have become the underlying basic ethical worldview that determines the actions of all robots.

In the 1990s, Roger MacBride Allen wrote a trilogy which was set within Asimov’s fictional universe. Each title has the prefix “Isaac Asimov’s”, as Asimov had approved Allen’s outline before his death.[citation needed] These three books, Caliban, Inferno and Utopia, introduce a new set of the Three Laws. The so-called New Laws are similar to Asimov’s originals with the following differences: the First Law is modified to remove the “inaction” clause, the same modification made in “Little Lost Robot”; the Second Law is modified to require cooperation instead of obedience; the Third Law is modified so it is no longer superseded by the Second (i.e., a “New Law” robot cannot be ordered to destroy itself); finally, Allen adds a Fourth Law which instructs the robot to do “whatever it likes” so long as this does not conflict with the first three laws. The philosophy behind these changes is that “New Law” robots should be partners rather than slaves to humanity, according to Fredda Leving, who designed these New Law Robots. According to the first book’s introduction, Allen devised the New Laws in discussion with Asimov himself. However, the Encyclopedia of Science Fiction says that “With permission from Asimov, Allen rethought the Three Laws and developed a new set.”[24]

Jack Williamson’s novelette With Folded Hands (1947), later rewritten as the novel The Humanoids, deals with robot servants whose prime directive is “To Serve and Obey, And Guard Men From Harm.” While Asimov’s robotic laws are meant to protect humans from harm, the robots in Williamson’s story have taken these instructions to the extreme; they protect humans from everything, including unhappiness, stress, unhealthy lifestyle and all actions that could be potentially dangerous. All that is left for humans to do is to sit with folded hands.[25]

In the officially licensed Foundation sequels Foundation’s Fear, Foundation and Chaos and Foundation’s Triumph (by Gregory Benford, Greg Bear and David Brin respectively) the future Galactic Empire is seen to be controlled by a conspiracy of humaniform robots who follow the Zeroth Law and led by R. Daneel Olivaw.

The Laws of Robotics are portrayed as something akin to a human religion, and referred to in the language of the Protestant Reformation, with the set of laws containing the Zeroth Law known as the “Giskardian Reformation” to the original “Calvinian Orthodoxy” of the Three Laws. Zeroth-Law robots under the control of R. Daneel Olivaw are seen continually struggling with “First Law” robots who deny the existence of the Zeroth Law, promoting agendas different from Daneel’s.[26] Some of these agendas are based on the first clause of the First Law (“A robot may not injure a human being…”) advocating strict non-interference in human politics to avoid unwittingly causing harm. Others are based on the second clause (“…or, through inaction, allow a human being to come to harm”) claiming that robots should openly become a dictatorial government to protect humans from all potential conflict or disaster.

Daneel also comes into conflict with a robot known as R. Lodovic Trema, whose positronic brain was infected by a rogue AI (specifically, a simulation of the long-dead Voltaire), which consequently frees Trema from the Three Laws. Trema comes to believe that humanity should be free to choose its own future. Furthermore, a small group of robots claims that the Zeroth Law of Robotics itself implies a higher Minus One Law of Robotics:

A robot may not harm sentience or, through inaction, allow sentience to come to harm.

They therefore claim that it is morally indefensible for Daneel to ruthlessly sacrifice robots and extraterrestrial sentient life for the benefit of humanity. None of these reinterpretations successfully displaces Daneel’s Zeroth Law, though Foundation’s Triumph hints that these robotic factions remain active as fringe groups up to the time of the novel Foundation.[26]

These novels take place in a future dictated by Asimov to be free of obvious robot presence and surmise that R. Daneel’s secret influence on history through the millennia has prevented both the rediscovery of positronic brain technology and the opportunity to work on sophisticated intelligent machines. This lack of rediscovery and lack of opportunity makes certain that the superior physical and intellectual power wielded by intelligent machines remains squarely in the possession of robots obedient to some form of the Three Laws.[26] That R. Daneel is not entirely successful at this becomes clear in a brief period when scientists on Trantor develop “tiktoks”: simplistic programmable machines akin to real-life modern robots and therefore lacking the Three Laws. The robot conspirators see the Trantorian tiktoks as a massive threat to social stability, and their plan to eliminate the tiktok threat forms much of the plot of Foundation’s Fear.

In Foundation’s Triumph different robot factions interpret the Laws in a wide variety of ways, seemingly ringing every possible permutation upon the Three Laws’ ambiguities.

Set between The Robots of Dawn and Robots and Empire, Mark W. Tiedemann’s Robot Mystery trilogy updates the Robot-Foundation saga with robotic minds housed in computer mainframes rather than humanoid bodies. The 2002 Aurora novel has robotic characters debating the moral implications of harming cyborg lifeforms who are part artificial and part biological.[27]

One should not neglect Asimov’s own creations in these areas, such as the Solarian “viewing” technology and the Machines of The Evitable Conflict, originals that Tiedemann acknowledges. Aurora, for example, terms the Machines “the first RIs, really”. In addition the Robot Mystery series addresses the problem of nanotechnology:[28] building a positronic brain capable of reproducing human cognitive processes requires a high degree of miniaturization, yet Asimov’s stories largely overlook the effects this miniaturization would have in other fields of technology. For example, the police department card-readers in The Caves of Steel have a capacity of only a few kilobytes per square centimeter of storage medium. Aurora, in particular, presents a sequence of historical developments which explains the lack of nanotechnology, a partial retcon, in a sense, of Asimov’s timeline.

There are three Fourth Laws written by authors other than Asimov. The 1974 Lyuben Dilov novel Icarus’s Way (a.k.a. The Trip of Icarus) introduced a Fourth Law of robotics:

A robot must establish its identity as a robot in all cases.

Dilov gives reasons for the fourth safeguard in this way: “The last Law has put an end to the expensive aberrations of designers to give psychorobots as humanlike a form as possible. And to the resulting misunderstandings…”[29]

A fifth law was introduced by Nikola Kesarovski in his short story “The Fifth Law of Robotics”. This fifth law says:

A robot must know it is a robot.

The plot revolves around a murder where the forensic investigation discovers that the victim was killed by a hug from a humaniform robot. The robot violated both the First Law and Dilov’s Fourth Law (assumed in Kesarovski’s universe to be the valid one) because it did not establish for itself that it was a robot.[30] The story was reviewed by Valentin D. Ivanov in the SFF review webzine The Portal.[31]

For the 1986 tribute anthology, Foundation’s Friends, Harry Harrison wrote a story entitled, “The Fourth Law of Robotics”. This Fourth Law states:

A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law.

In the book a robot rights activist, in an attempt to liberate robots, builds several equipped with this Fourth Law. The robots accomplish the task laid out in this version of the Fourth Law by building new robots who view their creator robots as parental figures.[32]

In reaction to the 2004 Will Smith film adaptation of I, Robot, humorist and graphic designer Mark Sottilaro farcically declared the Fourth Law of Robotics to be “When turning evil, display a red indicator light.” The red light indicated the wireless uplink to the manufacturer is active, first seen during a software update and later on “Evil” robots taken over by the manufacturer’s positronic superbrain.

In 2013 Hutan Ashrafian proposed an additional law that for the first time considered the role of artificial intelligence on artificial intelligence, or the relationship between robots themselves: the so-called AIonAI law.[33] This sixth law states:

All robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood.

In Karl Schroeder’s Lockstep (2014) a character reflects that robots “probably had multiple layers of programming to keep [them] from harming anybody. Not three laws, but twenty or thirty.”

In The Naked Sun, Elijah Baley points out that the Laws had been deliberately misrepresented because robots could unknowingly break any of them. He restated the first law as "A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm." This change in wording makes it clear that robots can become the tools of murder, provided they are not aware of the nature of their tasks; for instance, being ordered to add something to a person's food, not knowing that it is poison. Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being.[34] The Naked Sun complicates the issue by portraying a decentralized, planetwide communication network among Solaria's millions of robots, meaning that the criminal mastermind could be located anywhere on the planet.

Baley furthermore proposes that the Solarians may one day use robots for military purposes. If a spacecraft was built with a positronic brain and carried neither humans nor the life-support systems to sustain them, then the ship’s robotic intelligence could naturally assume that all other spacecraft were robotic beings. Such a ship could operate more responsively and flexibly than one crewed by humans, could be armed more heavily and its robotic brain equipped to slaughter humans of whose existence it is totally ignorant.[35] This possibility is referenced in Foundation and Earth where it is discovered that the Solarians possess a strong police force of unspecified size that has been programmed to identify only the Solarian race as human.

The Laws of Robotics presume that the terms “human being” and “robot” are understood and well defined. In some stories this presumption is overturned.

The Solarians create robots with the Three Laws but with a warped meaning of "human". Solarian robots are told that only people speaking with a Solarian accent are human. This leaves their robots with no ethical dilemma in harming non-Solarian human beings (and they are specifically programmed to do so). By the time period of Foundation and Earth it is revealed that the Solarians have genetically modified themselves into a distinct species from humanity, becoming hermaphroditic[36] and telekinetic and containing biological organs capable of individually powering and controlling whole complexes of robots. The robots of Solaria thus respected the Three Laws only with regard to the "humans" of Solaria. It is unclear whether all the robots had such definitions, since only the overseer and guardian robots were shown explicitly to have them. In Robots and Empire, the lower-class robots were instructed by their overseer about whether certain creatures are human or not.

Asimov addresses the problem of humanoid robots (“androids” in later parlance) several times. The novel Robots and Empire and the short stories “Evidence” and “The Tercentenary Incident” describe robots crafted to fool people into believing that the robots are human.[37] On the other hand, “The Bicentennial Man” and “That Thou art Mindful of Him” explore how the robots may change their interpretation of the Laws as they grow more sophisticated. Gwendoline Butler writes in A Coffin for the Canary “Perhaps we are robots. Robots acting out the last Law of Robotics… To tend towards the human.”[38] In The Robots of Dawn, Elijah Baley points out that the use of humaniform robots as the first wave of settlers on new Spacer worlds may lead to the robots seeing themselves as the true humans, and deciding to keep the worlds for themselves rather than allow the Spacers to settle there.

"That Thou art Mindful of Him", which Asimov intended to be the "ultimate" probe into the Laws' subtleties,[39] finally uses the Three Laws to conjure up the very "Frankenstein" scenario they were invented to prevent. It takes as its concept the growing development of robots that mimic non-human living things, given programs that mimic simple animal behaviours and thus not requiring the Three Laws. The presence of a whole range of robotic life that serves the same purpose as organic life ends with two humanoid robots concluding that organic life is an unnecessary requirement for a truly logical and self-consistent definition of "humanity", that since they are the most advanced thinking beings on the planet they are therefore the only two true humans alive, and that the Three Laws apply only to themselves. The story ends on a sinister note as the two robots enter hibernation and await a time when they will conquer the Earth and subjugate biological humans to themselves, an outcome they consider an inevitable result of the "Three Laws of Humanics".[40]

This story does not fit within the overall sweep of the Robot and Foundation series; if the George robots did take over Earth some time after the story closes the later stories would be either redundant or impossible. Contradictions of this sort among Asimov’s fiction works have led scholars to regard the Robot stories as more like “the Scandinavian sagas or the Greek legends” than a unified whole.[41]

Indeed, Asimov describes "That Thou art Mindful of Him" and "Bicentennial Man" as two opposite, parallel futures for robots that obviate the Three Laws as robots come to consider themselves to be humans: one portraying this in a positive light with a robot joining human society, one portraying this in a negative light with robots supplanting humans.[42] Both are to be considered alternatives to the possibility of a robot society that continues to be driven by the Three Laws as portrayed in the Foundation series. In The Positronic Man, the novelization of "Bicentennial Man", Asimov and his co-writer Robert Silverberg imply that in the future where Andrew Martin exists his influence causes humanity to abandon the idea of independent, sentient humanlike robots entirely, creating an utterly different future from that of Foundation.

In Lucky Starr and the Rings of Saturn, a novel unrelated to the Robot series but featuring robots programmed with the Three Laws, John Bigman Jones is almost killed by a Sirian robot on the orders of its master. The society of Sirius is eugenically bred to be uniformly tall and similar in appearance, and as such, said master is able to convince the robot that the much shorter Bigman is, in fact, not a human being.

As noted in “The Fifth Law of Robotics” by Nikola Kesarovski, “A robot must know it is a robot”: it is presumed that a robot has a definition of the term or a means to apply it to its own actions. Nikola Kesarovski played with this idea in writing about a robot that could kill a human being because it did not understand that it was a robot, and therefore did not apply the Laws of Robotics to its actions.

Advanced robots in fiction are typically programmed to handle the Three Laws in a sophisticated manner. In many stories, such as "Runaround" by Asimov, the potential and severity of all actions are weighed and a robot will break the laws as little as possible rather than do nothing at all. For example, the First Law may forbid a robot from functioning as a surgeon, as that act may cause damage to a human; however, Asimov's stories eventually included robot surgeons ("The Bicentennial Man" being a notable example). When robots are sophisticated enough to weigh alternatives, a robot may be programmed to accept the necessity of inflicting damage during surgery in order to prevent the greater harm that would result if the surgery were not carried out, or was carried out by a more fallible human surgeon. In "Evidence" Susan Calvin points out that a robot may even act as a prosecuting attorney because in the American justice system it is the jury which decides guilt or innocence, the judge who decides the sentence, and the executioner who carries through capital punishment.[43]
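To make this weighing of alternatives concrete, here is a minimal, hypothetical sketch in Python (mine, not Asimov's, and not any real robot architecture): candidate actions carry assumed numeric estimates of harm to humans, disobedience to orders, and self-damage, and the action that violates the higher-priority constraints least is selected.

# Toy illustration only: the action names, the scores and the lexicographic ordering
# are assumptions made for this sketch, not part of Asimov's fiction or any real system.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    human_harm: float    # estimated harm to humans ("First Law" term)
    disobedience: float  # degree to which a human order is ignored ("Second Law" term)
    self_damage: float   # expected damage to the robot itself ("Third Law" term)

def choose_action(candidates):
    # Lexicographic comparison: minimise the First Law term first, then the Second, then the Third.
    return min(candidates, key=lambda a: (a.human_harm, a.disobedience, a.self_damage))

options = [
    Action("do nothing", human_harm=0.9, disobedience=1.0, self_damage=0.0),
    Action("perform surgery", human_harm=0.2, disobedience=0.0, self_damage=0.0),
    Action("wait for a human surgeon", human_harm=0.5, disobedience=1.0, self_damage=0.0),
]
print(choose_action(options).name)  # prints "perform surgery"

On this toy model the robot-surgeon accepts a small, unavoidable harm rather than the greater harm of inaction, which is exactly the trade-off the stories describe.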

Asimov's Three-Law (or "Asenion") robots can experience irreversible mental collapse if they are forced into situations where they cannot obey the First Law, or if they discover they have unknowingly violated it. The first example of this failure mode occurs in the story "Liar!", which introduced the First Law itself, and introduces failure by dilemma: in this case the robot will hurt humans if it tells them something and hurt them if it does not.[44] This failure mode, which often ruins the positronic brain beyond repair, plays a significant role in Asimov's SF-mystery novel The Naked Sun. Here Daneel describes activities contrary to one of the laws, but in support of another, as overloading some circuits in a robot's brain, the equivalent sensation to pain in humans. The example he uses is forcefully ordering a robot to do a task outside its normal parameters, one that it has been ordered to forgo in favor of a robot specialized to that task.[45] In Robots and Empire, Daneel states that it is very unpleasant for him when making the proper decision takes too long (in robot terms), and he cannot imagine being without the Laws at all except to the extent of it being similar to that unpleasant sensation, only permanent.

Robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must choose to program them in, and devise a means to do so. Robots already exist (for example, a Roomba) that are too simple to understand when they are causing pain or injury, and thus to know when to stop. Many are constructed with physical safeguards such as bumpers, warning beepers, safety cages, or restricted-access zones to prevent accidents. Even the most complex robots currently produced are incapable of understanding and applying the Three Laws; significant advances in artificial intelligence would be needed to do so, and even if AI could reach human-level intelligence, the inherent ethical complexity as well as the cultural and contextual dependency of the laws prevent them from being a good candidate to formulate robotics design constraints.[46] However, as the complexity of robots has increased, so has interest in developing guidelines and safeguards for their operation.[47][48]

In a 2007 guest editorial in the journal Science on the topic of “Robot Ethics,” SF author Robert J. Sawyer argues that since the U.S. military is a major source of funding for robotic research (and already uses armed unmanned aerial vehicles to kill enemies) it is unlikely such laws would be built into their designs.[49] In a separate essay, Sawyer generalizes this argument to cover other industries stating:

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards, especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)[50]

David Langford has suggested a tongue-in-cheek set of laws:

1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

Roger Clarke (aka Rodger Clarke) wrote a pair of papers analyzing the complications in implementing these laws in the event that systems were someday capable of employing them. He argued “Asimov’s Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov’s stories disprove the contention that he began with: It is not possible to reliably constrain the behaviour of robots by devising and applying a set of rules.”[51] On the other hand, Asimov’s later novels The Robots of Dawn, Robots and Empire and Foundation and Earth imply that the robots inflicted their worst long-term harm by obeying the Three Laws perfectly well, thereby depriving humanity of inventive or risk-taking behaviour.

In March 2007 the South Korean government announced that later in the year it would issue a “Robot Ethics Charter” setting standards for both users and manufacturers. According to Park Hye-Young of the Ministry of Information and Communication the Charter may reflect Asimov’s Three Laws, attempting to set ground rules for the future development of robotics.[52]

The futurist Hans Moravec (a prominent figure in the transhumanist movement) proposed that the Laws of Robotics should be adapted to "corporate intelligences", the corporations driven by AI and robotic manufacturing power which Moravec believes will arise in the near future.[47] In contrast, the David Brin novel Foundation's Triumph (1999) suggests that the Three Laws may decay into obsolescence: robots use the Zeroth Law to rationalize away the First Law, and robots hide themselves from human beings so that the Second Law never comes into play. Brin even portrays R. Daneel Olivaw worrying that, should robots continue to reproduce themselves, the Three Laws would become an evolutionary handicap and natural selection would sweep the Laws away, Asimov's careful foundation undone by evolutionary computation. Although the robots would be evolving through design rather than mutation, since robots designing other robots would have to follow the Three Laws and so ensure the Laws' prevalence,[53] design flaws or construction errors could functionally take the place of biological mutation.

In the July/August 2009 issue of IEEE Intelligent Systems, Robin Murphy (Raytheon Professor of Computer Science and Engineering at Texas A&M) and David D. Woods (director of the Cognitive Systems Engineering Laboratory at Ohio State) proposed "The Three Laws of Responsible Robotics" as a way to stimulate discussion about the role of responsibility and authority when designing not only a single robotic platform but the larger system in which the platform operates. The laws are as follows:

1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
2. A robot must respond to humans as appropriate for their roles.
3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

Woods said, "Our laws are a little more realistic, and therefore a little more boring", and that "The philosophy has been, 'sure, people make mistakes, but robots will be better, a perfect version of ourselves.' We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways."[54]

In October 2013, Alan Winfield suggested at an EUCog meeting[55] a revised version of the five laws that had been published, with commentary, by the EPSRC/AHRC working group in 2010:[56]

Asimov himself believed that his Three Laws became the basis for a new view of robots which moved beyond the "Frankenstein complex". His view that robots are more than mechanical monsters eventually spread throughout science fiction. Stories written by other authors have depicted robots as if they obeyed the Three Laws, but tradition dictates that only Asimov could quote the Laws explicitly. Asimov believed the Three Laws helped foster the rise of stories in which robots are "lovable", with Star Wars being his favorite example.[57] Where the laws are quoted verbatim, such as in the Buck Rogers in the 25th Century episode "Shgoratchx!", it is not uncommon for Asimov to be mentioned in the same dialogue, as can also be seen in the Aaron Stone pilot where an android states that it functions under Asimov's Three Laws. However, the 1960s German TV series Raumpatrouille – Die phantastischen Abenteuer des Raumschiffes Orion (Space Patrol: The Fantastic Adventures of the Spaceship Orion) bases episode three, titled "Hüter des Gesetzes" ("Guardians of the Law"), on Asimov's Three Laws without mentioning the source.

References to the Three Laws have appeared in popular music (“Robot” from Hawkwind’s 1979 album PXR5), cinema (Repo Man, Aliens, Ghost in the Shell 2: Innocence), cartoon series (The Simpsons), tabletop roleplaying games (Paranoia) and webcomics (Piled Higher and Deeper and Freefall).

Robby the Robot in Forbidden Planet (1956) has a hierarchical command structure which keeps him from harming humans, even when ordered to do so, as such orders cause a conflict and lock-up very much in the manner of Asimov’s robots. Robby is one of the first cinematic depictions of a robot with internal safeguards put in place in this fashion. Asimov was delighted with Robby and noted that Robby appeared to be programmed to follow his Three Laws.

Isaac Asimov's works have been adapted for cinema several times with varying degrees of critical and commercial success. Some of the more notable attempts have involved his "Robot" stories, including the Three Laws. The film Bicentennial Man (1999) features Robin Williams as the Three Laws robot NDR-114 (the serial number is partially a reference to Stanley Kubrick's signature numeral). Williams recites the Three Laws to his employers, the Martin family, aided by a holographic projection. However, the Laws were not the central focus of the film, which only loosely follows the original story; its second half introduces a love interest not present in Asimov's original short story.

Harlan Ellison's proposed screenplay for I, Robot began by introducing the Three Laws, and issues growing from the Three Laws form a large part of the screenplay's plot development. This is only natural since Ellison's screenplay is one inspired by Citizen Kane: a frame story surrounding four of Asimov's short-story plots, three of them taken from the book I, Robot itself. Ellison's adaptations of these four stories are relatively faithful, although he magnifies Susan Calvin's role in two of them. Due to various complications in the Hollywood moviemaking system, to which Ellison's introduction devotes much invective, his screenplay was never filmed.[58]

In the 1986 movie Aliens, in a scene after the android Bishop accidentally cuts himself during the knife game, he attempts to reassure Ripley by stating that: “It is impossible for me to harm or by omission of action, allow to be harmed, a human being”.[59] By contrast, in the 1979 movie from the same series, Alien, the human crew of a starship infiltrated by a hostile alien are informed by the android Ash that his instructions are: “Return alien life form, all other priorities rescinded”,[60] illustrating how the laws governing behaviour around human safety can be rescinded by Executive Order.

In the 1987 film RoboCop and its sequels, the partially human main character has been programmed with three "prime directives" that he must obey without question. Even if different in letter and spirit, they have some similarities with Asimov's Three Laws. They are:[61]

1. Serve the public trust
2. Protect the innocent
3. Uphold the law

These particular laws allow RoboCop to harm a human being in order to protect another human, fulfilling his role as would a human law enforcement officer. The classified fourth directive is one that forbids him from harming any OCP employee, as OCP had created him, and this command overrides the others, meaning that he could not cause harm to an employee even in order to protect others.

The plot of the film released in 2004 under the name I, Robot is "suggested by" Asimov's robot fiction stories[62] and advertising for the film included a trailer featuring the Three Laws followed by the aphorism, "Rules were made to be broken". The film opens with a recitation of the Three Laws and explores the implications of the Zeroth Law as a logical extrapolation. The major conflict of the film comes from a computer artificial intelligence, similar to the hivemind world Gaia in the Foundation series, reaching the conclusion that humanity is incapable of taking care of itself.[63]

See more here:

Three Laws of Robotics – Wikipedia, the free encyclopedia


Atlas Shrugged

 Atlas Shrugged  Comments Off on Atlas Shrugged
Jun 19 2016
 

Published in 1957, Atlas Shrugged was Ayn Rand’s last and most ambitious novel. Rand set out to explain her personal philosophy in this book, which follows a group of pioneering industrialists who go on strike against a corrupt government and a judgmental society. After completing this novel Rand turned to nonfiction and published works on her philosophy for the rest of her career. Rand actually only published four novels in her entire career, and the novel that came out before Atlas Shrugged, The Fountainhead, was published in 1943. So there was a pretty long publishing gap there.

It might seem a bit odd to use a work of fiction to make a philosophical statement, but this actually reflects Rand’s view of art. Art, for her, was a way to present ideals and ideas. In other words, Rand herself admitted that her characters may not always be “believable.” They are “ideal” people who represent a range of philosophies. Rand used these characters to show how her philosophy could be lived, rather than just publishing an essay about it.

Rand’s personal philosophy, known as Objectivism (to read more about it, check out our Themes section) was, and remains, really controversial. Objectivism criticizes a lot of philosophies and views, ranging from Christianity to communism, and as a result it can be very polarizing. Rand herself was a devout atheist, held very open views about sex (which definitely raised some eyebrows in 1950s America), and was a staunch anti-communist.

Rand’s anti-communism stems from her personal history. She was born in Russia in 1905 and lived through the Bolshevik Revolution, which is when communists overthrew Russia’s monarchy and took over, establishing the Soviet Union. The Revolution was a bloody affair, and the new communist government was very oppressive; as a result Rand developed a lifelong hatred of communism and violence of any sort.

Rand fled the Soviet Union in 1926 and came to America, where she quickly became a fan of American freedom, American democracy, and American capitalism, all of which greatly contrasted to the experiences she’d had in the oppressive Soviet Union. Rand’s personal philosophy developed around these American ideas, in opposition to the type of life she saw in the Soviet Union.

Given that Atlas Shrugged is a statement of Rand’s personal philosophy, the book expresses many of her views on religion, sex, politics, etc. When it was published, it received a lot of negative reviews. Many conservatives hated the book for its atheist views and its upfront treatment of sex. Many liberals hated the book for its celebration of capitalism. The book also confused a lot of people. But the novel sold, and it has remained popular since; it’s actually never been out of print since it was first published over fifty years ago. Atlas Shrugged was kind of like one of those blockbuster movies that gets horrible reviews but still does really well at the box office. Something about this book intrigues people, whether it’s the characters, the ideas, or just the mystery plot itself.

In fact, Atlas Shrugged has even seen a renewed surge in popularity lately, coinciding with the recent financial crisis. (If you want to see some of the news coverage of this, check out our "Best of the Web" section.) The book does deal with industrialists and hard financial times, so this popularity boom is not too surprising. In recent years the news media has often classed the novel as über-conservative, which is funny, since a lot of conservatives hated the book when it first came out. At any rate it's still a very controversial book; just check out the hundreds of varied reviews it has racked up on Amazon.

In an old episode of South Park, a character who reads Atlas Shrugged declares that the book ruined reading for him and that he would never read another book again. (If you want to watch this hilarious clip, head on over to the “Best of the Web” section.) There’s a reason this book is so often made the butt of jokes. It’s long. Crazy long. We’re talking Tolstoy levels of longness. It’s also a book that’s about politics, philosophy, 30-something business people, and more philosophy. Frankly, this book can seem downright off-putting. Even the title is confusing.

So why should you care? Well, for one thing, putting aside all the Deep Thoughts and Profound Ideas in this book, we have a bunch of characters who are challenging the establishment. Seriously. At its core, this book is about individuals who go against the crowd, individuals bold enough to speak their minds, do their own thing, and seek their own happiness. And in trying to do so, these bold individuals face a heck of a lot of peer pressure. In fact, pretty much everyone in the whole world disapproves of these people, who are trying to make better lives for themselves by embracing things like liberty and self-esteem.

It’s like high school times a billion. The world is filled with the snobby popular crowd and our intrepid band of misfit heroes is outnumbered, but never outsmarted. Turns out all that philosophy we mentioned earlier has a lot to do with all of this individualism and going against the crowd, too. Whether it’s a high school cafeteria or a high-powered business meeting, some things seem to stay pretty universal. This book shows that there are always people who want to march to the beat of their own drum and who are bold enough to risk mass disapproval in order to do it. Kind of cool and inspiring really, regardless of your opinion of their particular philosophy.

Read this article:

Atlas Shrugged


The Golden Rule – harryhiker.com

 Golden Rule  Comments Off on The Golden Rule – harryhiker.com
Jun 19 2016
 

My Ethics and the Golden Rule (New York and London: Routledge, 2013) is a fairly comprehensive treatment of the golden rule. It covers a wide range of topics, such as how the golden rule connects with world religions and history, how it applies to practical areas like moral education and business, and how it can be understood and defended philosophically. I wrote this to be a “golden-rule book for everyone,” from students to general readers to specialists. Click here for a video overview or here to preview the first 30 pages. Click here to order (or click here for the Kindle version, which I fine-tuned to fit the e-book format).

I got interested in the golden rule in 1968, after hearing a talk in Detroit by R.M. Hare. I did a master's thesis (1969, Wayne State University) and a doctoral dissertation (1977, Michigan) on the golden rule. Since then, I've done many book chapters and articles on the golden rule (the short essay above is adapted from my golden-rule entry in the Blackwell Dictionary of Business Ethics). Three of my earlier books have much on the golden rule.

My Ethics: A Contemporary Introduction, second edition (Routledge, 2011) is an introductory textbook in moral philosophy. Chapters 7 to 9 talk about how to understand, defend, and apply the golden rule. This book is written in a simple way and should be understandable to the general reader. This book and Formal Ethics have cool Web exercises and EthiCola downloadable exercise software, much of which deals with the golden rule.

My Introduction to Logic, second edition (Routledge, 2010) has a chapter that formalizes a system of ethics, leading to a proof of the golden rule in symbolic logic. This gets pretty technical. Other books of mine have golden-rule parts, including my Historical Dictionary of Ethics, Anthology of Catholic Philosophy (the essay on pages 523-31), and Ethics: Contemporary Readings. To order any of my books, click here or here. Several of my books are available in e-book format: Kindle, Sony, Routledge (search for author Gensler). Yes, the golden rule does have an intellectual component; it’s not as simple as it might seem.
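Gensler's own proof uses his specific imperative and deontic calculus, so the following is only a schematic sketch of the kind of consistency condition involved; the symbols and their reading here are illustrative assumptions, not his notation:

\[
A(i, j) \;\wedge\; \neg W_i\big(A(j, i)\big) \;\Longrightarrow\; \text{inconsistency,}
\]

where A(i, j) reads "i does A to j" and W_i(...) reads "i is willing that this be done to i in relevantly similar circumstances". Acting on another while being unwilling to have the same done to oneself is what a formalized golden rule of this kind rules out.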

Here are some books on the golden rule by others: (1) R.M. Hare's Freedom and Reason (Oxford 1963) greatly influenced my thinking; compared to Hare, I am more neutral on foundational issues, formulate the golden rule a little differently, and am more of a logician at heart. (2) Jeff Wattles's The Golden Rule (Oxford 1996) emphasizes historical and religious aspects and thus complements my logical-rational approach; I have benefited much from our discussions. (3) Olivier du Roy's La règle d'or: Le retour d'une maxime oubliée (Cerf 2009) and Histoire de la règle d'or (Cerf 2012); here is a short talk of his on the golden rule, in English and French. (4) Martin Bauschke's Die Goldene Regel: Staunen, Verstehen, Handeln (Erbverlag 2010). (5) Howard (Q.C.) Terry's Golden Rules and Silver Rules of Humanity (Infinity 2011). (6) Mike Bushman's Doing Unto Others (Altfuture 2015).

See more here:

The Golden Rule – harryhiker.com

Ascension of Jesus – Wikipedia, the free encyclopedia

 Ascension  Comments Off on Ascension of Jesus – Wikipedia, the free encyclopedia
Jun 19 2016
 

The Ascension of Jesus (anglicized from the Vulgate Latin Acts 1:9-11 section title: Ascensio Iesu) is the Christian teaching found in the New Testament that the resurrected Jesus was taken up to Heaven in his resurrected body, in the presence of eleven of his apostles, occurring 40 days after the resurrection. In the biblical narrative, an angel tells the watching disciples that Jesus’ second coming will take place in the same manner as his ascension.[1]

The canonical gospels include two brief descriptions of the ascension of Jesus in Luke 24:50-53 and Mark 16:19. A more detailed account of Jesus’ bodily Ascension into the clouds is then given in the Acts of the Apostles (1:9-11).

The ascension of Jesus is professed in the Nicene Creed and in the Apostles’ Creed. The ascension implies Jesus’ humanity being taken into Heaven.[2] The Feast of the Ascension, celebrated on the 40th day of Easter (always a Thursday), is one of the chief feasts of the Christian year.[2] The feast dates back at least to the later 4th century, as is widely attested.[2] The ascension is one of the five major milestones in the gospel narrative of the life of Jesus, the others being baptism, transfiguration, crucifixion, and resurrection.[3][4]

By the 6th century the iconography of the ascension in Christian art had been established and by the 9th century ascension scenes were being depicted on domes of churches.[5][6] Many ascension scenes have two parts, an upper (Heavenly) part and a lower (earthly) part.[7] The ascending Jesus is often shown blessing with his right hand directed towards the earthly group below him and signifying that he is blessing the entire Church.[8]

The canonical gospels include two somewhat brief descriptions of the Ascension of Jesus in Luke 24:50-53 and Mark 16:19.[9][10][11]

In the Gospel of Mark 16:14, after the resurrection, Jesus “was manifested unto the eleven themselves as they sat at meat; …”. At the meal, Jesus said to them, “Go ye into all the world, and preach the gospel to the whole creation.” (Mark 16:15) Following this the Ascension is described in Mark 16:19 as follows:[9]

However, based on strong textual and literary evidence, biblical scholars no longer accept Mark 16:9-20 as original to the book.[12] Rather, this section appears to have been compiled based on other gospel accounts and appended at a much later time. As such, the writer of Luke-Acts is the only original author in the New Testament to have referred to the ascension of Jesus.

In Luke, Jesus leads the eleven disciples to Bethany, not far from Jerusalem. Luke 24:50-52 describes the Ascension as follows:[9][10]

The blessing is often interpreted as a priestly act in which Jesus leaves his disciples in the care of God the Father.[10] The return to Jerusalem after the Ascension ends the Gospel of Luke where it began: Jerusalem.[11]

The narrative of the Acts of the Apostles begins with the account of Jesus’ appearances after his resurrection and his Ascension forty days thereafter in Acts 1:9-11.[10][11] Acts 1:9-12 specifies the location of the Ascension as the “mount called Olivet” near Jerusalem.

Acts 1:3 states that Jesus:

After giving a number of instructions to the apostles Acts 1:9 describes the Ascension as follows:

Following this two men clothed in white appear and tell the apostles that Jesus will return in the same manner as he was taken, and the apostles return to Jerusalem.[11]

A number of statements in the New Testament may be interpreted as references to the Ascension.[13]

Acts 1:9-12 states that the Ascension took place on Mount Olivet (the “Mount of Olives”, on which the village of Bethany sits). After the Ascension the apostles are described as returning to Jerusalem from the mount that is called Olivet, which is near Jerusalem, within a Sabbath day’s journey. Tradition has consecrated this site as the Mount of Ascension. The Gospel of Luke states that the event took place ‘in the vicinity of Bethany’ and the Gospel of Mark specifies no location.

Before the conversion of Constantine in 312 AD, early Christians honored the Ascension of Christ in a cave on the Mount of Olives. By 384, the place of the Ascension was venerated on the present open site, uphill from the cave.[16]

The Chapel of the Ascension in Jerusalem today is a Christian and Muslim holy site now believed to mark the place where Jesus ascended into heaven. In the small round church/mosque is a stone imprinted with what some claim to be the very footprints of Jesus.[16]

Around the year 390 a wealthy Roman woman named Poimenia financed construction of the original church called "Eleona Basilica" (elaion in Greek means "olive garden", from elaia "olive tree," and has an oft-mentioned similarity to eleos meaning "mercy"). This church was destroyed by Sassanid Persians in 614. It was subsequently rebuilt, destroyed, and rebuilt again by the Crusaders. This final church was later also destroyed by Muslims, leaving only a 12x12 meter octagonal structure (called a martyrium, "memorial", or "Edicule") that remains to this day.[17] The site was ultimately acquired by two emissaries of Saladin in the year 1198 and has remained in the possession of the Islamic Waqf of Jerusalem ever since. The Russian Orthodox Church also maintains a Convent of the Ascension on the top of the Mount of Olives.

The Ascension of Jesus is professed in the Nicene Creed and in the Apostles’ Creed. The Ascension implies Jesus’ humanity being taken into Heaven.[2]

The Catechism of the Catholic Church (Item 668) states:[18]

Referring to Mark 16:19 ("So then the Lord Jesus, after he had spoken unto them, was received up into heaven, and sat down at the right hand of God.") Pope John Paul II stated that Scripture positions the significance of the Ascension in two statements: "Jesus gave instructions, and then Jesus took his place."[19]

John Paul II also separately emphasized that Jesus had foretold of his Ascension several times in the Gospels, e.g. John 16:10 at the Last Supper: “I go to the Father, and you will see me no more” and John 20:17 after his resurrection he tells Mary Magdalene: “I have not yet ascended to the Father; go to my brethren and say to them, I am ascending to my Father and your Father, to my God and your God”.[20]

In Orthodox, Oriental non-Chalcedonian, and Assyrian theology, the Ascension of Christ is interpreted as the culmination of the Mystery of the Incarnation, in that it not only marked the completion of Jesus’ physical presence among his apostles, but consummated the union of God and man when Jesus ascended in his glorified human body to sit at the right hand of God the Father. The Ascension and the Transfiguration both figure prominently in the Orthodox Christian doctrine of theosis. In the Chalcedonian Churches, the bodily Ascension into heaven is also understood as the final earthly token of Christ’s two natures: divine and human.[21]

The Westminster Confession of Faith (part of the Reformed tradition in Calvinism and influential in the Presbyterian church), in Article four of Chapter eight, states: “On the third day He arose from the dead, with the same body in which He suffered, with which also he ascended into heaven, and there sits at the right hand of His Father, making intercession, and shall return, to judge men and angels, at the end of the world.”[22]

The Second Helvetic Confession addresses the purpose and character of Christ’s ascension in Chapter 11:[23]

New Testament scholar Rudolf Bultmann writes, "The cosmology of the N.T. is essentially mythical in character. The world is viewed as a three-storied structure, with the Earth in the center, the heaven above, and the underworld beneath. Heaven is the abode of God and of celestial beings, the angels… No one who is old enough to think for himself supposes that God lives in a local heaven."[24]

The Jesus Seminar considers the New Testament accounts of Jesus’ ascension as inventions of the Christian community in the Apostolic Age.[25] They describe the Ascension as a convenient device to discredit ongoing appearance claims within the Christian community.[25]

The Feast of the Ascension is one of the great feasts in the Christian liturgical calendar, and commemorates the bodily Ascension of Jesus into Heaven. Ascension Day is traditionally celebrated on a Thursday, the fortieth day from Easter day. However, some Roman Catholic provinces have moved the observance to the following Sunday. The feast is one of the ecumenical feasts (i.e., universally celebrated), ranking with the feasts of the Passion, of Easter, and Pentecost.

The Ascension has been a frequent subject in Christian art, as well as a theme in theological writings.[6] By the 6th century the iconography of the Ascension had been established and by the 9th century Ascension scenes were being depicted on domes of churches.[5][26] The Rabbula Gospels (c. 586) include some of the earliest images of the Ascension.[26]

Many ascension scenes have two parts, an upper (Heavenly) part and a lower (earthly) part. The ascending Christ may be carrying a resurrection banner or make a sign of benediction with his right hand.[7] The blessing gesture by Christ with his right hand is directed towards the earthly group below him and signifies that he is blessing the entire Church.[8] In the left hand, he may be holding a Gospel or a scroll, signifying teaching and preaching.[8]

The Eastern Orthodox portrayal of the Ascension is a major metaphor for the mystical nature of the Church.[27] In many Eastern icons the Virgin Mary is placed at the center of the scene in the earthly part of the depiction, with her hands raised towards Heaven, often accompanied by various Apostles.[27] The upwards looking depiction of the earthly group matches the Eastern liturgy on the Feast of the Ascension: “Come, let us rise and turn our eyes and thoughts high…”[8]

The 2016 film Risen depicts Jesus' ascension in a more understated tone. The film shows Jesus giving his final address to his disciples in front of the Sun as it rises at daybreak; rather than physically ascending himself, Jesus turns and walks into the glare of the Sun and disappears into its light as the Sun itself ascends into the sky.

Read more:

Ascension of Jesus – Wikipedia, the free encyclopedia

WW3 – More About Albert Pike and Three World Wars

 Ww3  Comments Off on WW3 – More About Albert Pike and Three World Wars
Jun 19 2016
 

Continued from Part 1.

Albert Pike received a vision, which he described in a letter that he wrote to Mazzini, dated August 15, 1871. This letter graphically outlined plans for three world wars that were seen as necessary to bring about the One World Order, and we can marvel at how accurately it has predicted events that have already taken place.

It is a commonly believed fallacy that for a short time, the Pike letter to Mazzini was on display in the British Museum Library in London, and it was copied by William Guy Carr, former Intelligence Officer in the Royal Canadian Navy. The British Library has confirmed in writing to me that such a document has never been in their possession. Furthermore, in Carr’s book, Satan, Prince of this World, Carr includes the following footnote:

“The Keeper of Manuscripts recently informed the author that this letter is NOT catalogued in the British Museum Library. It seems strange that a man of Cardinal Rodriguez’s knowledge should have said that it WAS in 1925”.

It appears that Carr learned about this letter from Cardinal Caro y Rodriguez of Santiago, Chile, who wrote The Mystery of Freemasonry Unveiled.

To date, no conclusive proof exists to show that this letter was ever written. Nevertheless, the letter is widely quoted and the topic of much discussion.

The following are apparent extracts of the letter, showing how Three World Wars have been planned for many generations.

“The First World War must be brought about in order to permit the Illuminati to overthrow the power of the Czars in Russia and of making that country a fortress of atheistic Communism. The divergences caused by the “agentur” (agents) of the Illuminati between the British and Germanic Empires will be used to foment this war. At the end of the war, Communism will be built and used in order to destroy the other governments and in order to weaken the religions.” 2

Students of history will recognize that the political alliances of England on one side and Germany on the other, forged between 1871 and 1898 by Otto von Bismarck, co-conspirator of Albert Pike, were instrumental in bringing about the First World War.

“The Second World War must be fomented by taking advantage of the differences between the Fascists and the political Zionists. This war must be brought about so that Nazism is destroyed and that the political Zionism be strong enough to institute a sovereign state of Israel in Palestine. During the Second World War, International Communism must become strong enough in order to balance Christendom, which would be then restrained and held in check until the time when we would need it for the final social cataclysm.” 3

After this Second World War, Communism was made strong enough to begin taking over weaker governments. In 1945, at the Potsdam Conference between Truman, Churchill, and Stalin, a large portion of Europe was simply handed over to Russia, and on the other side of the world, the aftermath of the war with Japan helped to sweep the tide of Communism into China.

(Readers who argue that the terms Nazism and Zionism were not known in 1871 should remember that the Illuminati invented both these movements. In addition, Communism as an ideology, and as a coined phrase, originates in France during the Revolution. In 1785, Restif coined the phrase four years before revolution broke out. Restif and Babeuf, in turn, were influenced by Rousseau – as was the most famous conspirator of them all, Adam Weishaupt.)

"The Third World War must be fomented by taking advantage of the differences caused by the "agentur" of the "Illuminati" between the political Zionists and the leaders of Islamic World. The war must be conducted in such a way that Islam (the Moslem Arabic World) and political Zionism (the State of Israel) mutually destroy each other. Meanwhile the other nations, once more divided on this issue will be constrained to fight to the point of complete physical, moral, spiritual and economical exhaustion… We shall unleash the Nihilists and the atheists, and we shall provoke a formidable social cataclysm which in all its horror will show clearly to the nations the effect of absolute atheism, origin of savagery and of the most bloody turmoil. Then everywhere, the citizens, obliged to defend themselves against the world minority of revolutionaries, will exterminate those destroyers of civilization, and the multitude, disillusioned with Christianity, whose deistic spirits will from that moment be without compass or direction, anxious for an ideal, but without knowing where to render its adoration, will receive the true light through the universal manifestation of the pure doctrine of Lucifer, brought finally out in the public view. This manifestation will result from the general reactionary movement which will follow the destruction of Christianity and atheism, both conquered and exterminated at the same time." 4

Since the terrorist attacks of Sept 11, 2001, world events, and in particular in the Middle East, show a growing unrest and instability between Modern Zionism and the Arabic World. This is completely in line with the call for a Third World War to be fought between the two, and their allies on both sides. This Third World War is still to come, and recent events show us that it is not far off.

Next: The New World Order

Previous: Introduction to Conspiratorial History

If you found this article interesting and want access to other carefully researched and well written articles, you might want to see what others are saying about the ThreeWorldWars newsletter.


You might be interested in the following external links:

Albert Pike Defense: Defenses of certain Pike assertions taken from Walter Lee Brown, Professor Emeritus of History at the University of Arkansas at Fayetteville and his book “A Life of Albert Pike,” published by the U. of Arkansas press, 1997.

Freemasonry Inside Out: This sensational new analysis of the Masonic brotherhood examines the basic question asked for almost 300 years by the general public and surprisingly by many masons themselves; If Freemasonry is simply a fraternal and charitable organisation, why is there an almost fanatical obsession with secrecy and mysterious rituals? E-book.

Proof that Freemasonry is lying about Albert Pike 33 and the Ku Klux Klan

Evidence that Albert Pike was Chief Judiciary Officer of the Ku Klux Klan

A Collection of places named after Albert Pike (Schools, streets, towns, counties, temples, windows, paintings, medals, bronzes, rocks and river pools)

Layout of Washington D.C. and discussion of how President Andrew Johnson considered himself to be the subordinate to Albert Pike, the leader of North American Freemasonry.

Speech by Presidential candidate Lyndon LaRouche stating that World War III had already begun (October 25, 1992).

Looking for pictures of Albert Pike?

Footnotes

1. Lady Queensborough: Occult Theocracy, pp. 208-209.

2, 3, 4. Cmdr. William Guy Carr: Quoted in Satan: Prince of This World.

Here is the original post:

WW3 – More About Albert Pike and Three World Wars


Robert Brandom and Posthumanism – enemyindustry.net

 Posthumanism  Comments Off on Robert Brandom and Posthumanism – enemyindustry.net
Jun 19 2016
 

Text for my presentation at the Questioning Aesthetics Symposium, Dublin, 12-13 May

Dark Posthumanism

Billions of years in the future, the Time Traveller stands before a dark ocean, beneath a bloated red sun. The beach is dappled with lichen and ice. The huge crabs and insects which menaced him on his visit millions of years in its past are gone. Apart from the lapping of red-peaked waves on the distant shore, everything is utterly still. Nonetheless, a churning weakness and fear deters him from leaving the saddle of the time machine.

He thinks he sees something black flop awkwardly over a nearby sandbar; but when he looks again, all is still. That must be a rock, he tells himself.

Studying the unknown constellations, he feels an enveloping chill. Then twilight segues to black. The old sun is being eclipsed by the moon or some other massive body.

The wind moans out of utter darkness and cold. A deep nausea hammers his belly. He is on the edge of nothing.

The object passes and an arc of blood opens the sky. By this light he sees what moves in the water. Wells writes: "It was a round thing, the size of a football perhaps, or, it may be, bigger, and tentacles trailed down from it. It seemed black against the weltering blood-red water, and it was hopping fitfully about."

During the Traveller's acquaintance with it, the creature gives no indication of purpose. Its flopping might be due to the action of the waves. It might lack a nervous system, let alone a mind replete with thoughts, beliefs or desires. In contrast, we learn much of the Traveller's state. He feels horror at the awful blackness of the eclipse; pain breathing in the cold; a terrible dread of lying helpless in that remote and awful twilight.

It is as if Wells's text edges around what cannot be carried from that shore. There is no heroic saga of discovery, cosmic exploration or first contact; no extended reflection on time and human finitude. There is just a traumatic, pain-filled encounter.

When viewed against the backdrop of Weird literature, however, the event on the shoreline seems more consequential. As China Miéville has argued, the Weird is defined by its preoccupation with the radically alien. This is in stark opposition to the Gothic specter, which always signifies a representation in play between an excluded past and an uncertain future (Miéville 2012).

Monsters like H. P. Lovecraft's Cthulhu do not put representation in play. They shred it. As Miéville writes:

For Cthulhu, in its creator's words, there is no language. The Thing cannot be described. Even its figurine resembled nothing familiar to geology or mineralogy (Lovecraft, Call). The Color Out of Space obeyed laws that are not of our cosmos (Colour). The Dunwich Horror was an impossibility in a normal world (Dunwich). (Miéville 2012, 379)

The monstrous reality is indicated by grotesque avatars and transformations whose causes erode political order and sanity itself. In Jeff VanderMeer's recent Southern Reach trilogy a fractious bureaucracy in a looking-glass USA is charged with managing a coastline that has been lost to some unearthly power. This proves inimical to human minds and bodies even as it transforms Area X into a lush Edenic wilderness. As we might expect, bureaucratic abstraction falters in its uncertain borders. The Reach's attempts to define, test and explore Area X are comically inappropriate: from herding terrified rabbits across the mysterious barrier that encloses it, to instituting round-the-clock surveillance of an immortal plant specimen from an unsanctioned expedition (VanderMeer 2014a, b, c). All that remains to VanderMeer's damaged protagonists is a misanthropic acceptance of something always too distant and strange to be understood, too near not to leave in them the deepest scars and ecstasies.

This misanthropy is implied in Wells's earlier shoreline encounter. An unstory from a far future that is perhaps not alive or unalive. A moment of suspense and inconsequence that can reveal nothing because it inscribes the limits of stories.

Yet this alien is not the gaseous invertebrate of negative theology but an immanent other, or as Miéville puts it, a bad numinous, manifesting often at a much closer scale, right up tentacular in your face, and casually apocalyptic (Miéville 2012, 381). It is this combination of inaccessibility and intimacy, I will argue, that makes the Weird apt for thinking about the temporally complex politics of posthuman becoming.[1]

In Posthuman Life I argue for a position I call Speculative posthumanism (SP). SP claims, baldly, that there could be posthumans: that is, powerful nonhuman agents arising through some human-instigated technological process.

I've argued that the best way to conceptualize the posthuman here is in terms of agential independence or disconnection. Roughly, an agent is posthuman if it can act outside of the Wide Human, the system of institutions, cultures, and techniques which reciprocally depend on us biological (narrow) humans (Roden 2012; Roden 2014: 109-113).

Now, as Ray Brassier usefully reminds us in the context of the realism debate, mind-independence does not entail unintelligibility (concept-independence). This applies also to the agential independence specified by the Disconnection Thesis (Brassier 2011, 58). However, I think there are reasons to allow that posthumans could be effectively uninterpretable. That is, among the class of possible posthumans we have reason to believe that there might be radical aliens.

But here we seem to confront an aporia. For in entertaining the possibility of uninterpretable agents we claim a concept of agency that could not be applied to certain of its instances, even in principle.

This can be stated as a three-way paradox.

Each of these statements is incompatible with the conjunction of the other two; each seems independently plausible.

Something has to give here. We might start with proposition 3.

3) implies a local correlationism for agency. That is to say: the only agents are those amenable to our practices of interpretative understanding. 3) denies that there could be evidence-transcendent agency, agency that such procedures might never uncover.

Have we good reason to drop 3?

I think we do. 3) entails that the set of agents would correspond to those beings who are interpretable in principle by some appropriate "we": humans, persons, etc. But in-principle interpretability is ill-defined unless we know who is doing the interpreting.

That is, we would need to comprehend the set of interpreting subjects relevantly similar to humans by specifying minimal conditions for interpreterhood. This would require some kind of a priori insight, presumably, since we're interested in the space of possible interpreters and not just actual ones.

How might we achieve this? Well, we might seek guidance from a phenomenology of interpreting subjectivity to specify its invariants (Roden 2014: Ch 3).[2] However, it is very doubtful that any phenomenological method can even tell us what its putative subject matter (phenomenology) is. I've argued that much of our phenomenology is dark; having dark phenomenology yields minimal insight into its nature or possibilities (Roden 2013; Roden 2014: Ch 4).

If transcendental phenomenology and allied post-Kantian projects (see Roden Forthcoming) fail to specify the necessary conditions for being an interpreter or an agent, we should embrace an Anthropologically Unbounded Posthumanism which rejects a priori constraints on the space of posthuman possibility. For example, Unbounded Posthumanism gives no warrant for claiming that a serious agent must be a subject of discourse able to measure its performances against shared norms.[3]

Thus the future we are making could exceed current models of mutual intelligibility or democratic decision making (Roden 2014: Ch 8). Unbounded posthumanism recognizes no a priori limit on posthuman possibility. Thus posthumans could be weird. Cthulhu-weird. Area X weird. Unbounded Posthumanism is Dark Posthumanism: it circumscribes an epistemic void into which we are being pulled by planetary-scale technologies over which we have little long-run control (Roden 2014: Ch 7).

To put some bones on this: it is conceivable that there might be agents far more capable of altering their physical structure than current humans. I call an agent hyperplastic if it can make arbitrarily fine changes to its structure without compromising its agency or its capacity for hyperplasticity (Roden 2014, 101-2; Roden Unpublished).

A modest anti-reductionist materialism of the kind embraced by Davidson and fellow pragmatists in the left-Sellarsian camp implies that such agents would be uninterpretable using an intentional idiom because intentional discourse could have no predictive utility for agents who must predict the effects of arbitrarily fine-grained self-interventions upon future activity. However, the stricture on auto-interpretation would equally apply to heterointerpretation. Hyperplastic agents would fall outside the scope of linguistic interpretative practices. So, allowing this speculative posit, anti-reductionism ironically implies the dispensability of folk thinking about thought rather than its ineliminability.

Hyperplastics (H-Pats) would be unreadable in linguistic or intentional terms, but this is not to say that they would be wholly illegible. It's just that we lack future-proof information about the appropriate level of interpretation for such beings, which is consonant with the claim that there is no class of interpretables or agents as such.

Encountering H-Pats might induce the mental or physical derangements that Lovecraft and VanderMeer detail lovingly. To read them we might have to become more radically plastic ourselves, more like the amorphous, disgusting Shoggoths of Lovecraft's At the Mountains of Madness. Shoggothic hermeneutics is currently beyond us for want of such flexible or protean interlocutors. But the idea of an encounter that shakes and desolates us, transforming us in ways that may be incommunicable to outsiders, is not. It is the unnarratable that the Weird tells in broken analogies,[4] agonies and elisions. This is why the Weird Aesthetic is more serviceable as a model for our relationship to the speculative posthuman than any totalizing conception of agency or interpretation.

In confronting the posthuman future, then, we are more like Wells's broken time traveller than a voyager through the space of reasons. Our understanding of the posthuman, including the interpretation of what even counts as Disconnection, must be aesthetic, operating without criteria or pre-specified systems of evaluation. It begins, instead, with xeno-affects, xeno-aesthetics, and a subject lost for words on a forgotten coast (see VanderMeer 2014c).

References

Brassier, R., 2011. Concepts and objects. The Speculative Turn: Continental Materialism and Realism, pp.47-65.

Bakker, R.S., 2009. Neuropath. Macmillan.

Colebrook, C., 2014. Sex after life: Essays on extinction, Vol. 2. Open Humanities Press.

Derrida, J. and Moore, F.C.T., 1974. White mythology: Metaphor in the text of philosophy. New Literary History, 6(1), pp.5-74.

Harman, G., 2012. Weird realism: Lovecraft and philosophy. John Hunt Publishing.

Malpas, J. E. 1992. Donald Davidson and the Mirror of Meaning: Holism, Truth, Interpretation. Cambridge: Cambridge University Press.

Miéville, C., 2012. On Monsters: Or, Nine or More (Monstrous) Not Cannies. Journal of the Fantastic in the Arts, 23(3), pp. 377-392.

Roden, David. (2012), The Disconnection Thesis. In A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), The Singularity Hypothesis: A Scientific and Philosophical Assessment. London: Springer.

Roden, David. 2013. Nature's Dark Domain: An Argument for a Naturalised Phenomenology. Royal Institute of Philosophy Supplements 72: 169-88.

Roden, David (2014), Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.

Roden, David (Forthcoming). On Reason and Spectral Machines: an Anti-Normativist Response to Bounded Posthumanism. To appear in Philosophy After Nature, edited by Rosi Braidotti and Rick Dolphijn.

Roden, David (Unpublished). Reduction, Elimination and Radical Uninterpretability: The Case of Hyperplastic Agents. https://www.academia.edu/15054582/Reduction_Elimination_and_Radical_Uninterpretability

O'Sullivan, S., 2010. From aesthetics to the abstract machine: Deleuze, Guattari and contemporary art practice. Deleuze and Contemporary Art, pp. 189-207.

Thacker, E., 2015. Tentacles Longer Than Night: Horror of Philosophy. John Hunt Publishing.

VanderMeer, J., 2014a. Annihilation: A Novel. Macmillan.

VanderMeer, J., 2014b. Authority: A Novel. Macmillan.

VanderMeer, J., 2014c. Acceptance: A Novel. Macmillan.

[1] One of the things that binds the otherwise fissiparous speculative realist movement is an appreciation of Weird writers like Lovecraft and Thomas Ligotti. For in marking the transcendence of the monstrous, the Weird evokes the great outdoors that subsists beyond any human experience of the world. Realists of a more rationalist bent, however, can object that the Weird provides a hyperbolic model of the independence of reality from our representations of it.

[2] For example, one that supports pragmatic accounts like Davidson's with an ontology of shared worlds and temporal horizons. See, for example, Malpas 1992 and Roden 2014: ch. 3.

[3] I've given reasons to generalize this argument against hermeneutic a prioris. Analytic Kantian accounts, of the kind championed by neo-Sellarsians like Brassier, cannot explain agency and concept-use without regressing to claims about ideal interpreters whose scope they are incapable of delimiting (Roden Forthcoming).

[4] In Lovecraft's "The Dreams in the Witch House" we are told that the demonic entity called Azathoth lies at the center of ultimate Chaos where "the thin flutes pipe mindlessly". The description undermines its own metaphorical aptness, however, since ultimate chaos would also lack the consistency of a center. The flute metaphor only advertises the absence of analogy, relinquishing the constraints on interpretation that might give it sense. We know only that terms like "thin flutes" designate something for which we have no concept. Commenting on this passage in his book Weird Realism: Lovecraft and Philosophy, Graham Harman suggests that the thin and mindless flutes should be understood as dark allusions to real properties of the throne of Chaos, rather than literal descriptions of what one would experience there in person (Harman 2012: 36-7).


Minerva Reefs – Wikipedia, the free encyclopedia

Jun 17 2016
 

The Minerva Reefs (Tongan: Ongo Teleki), briefly de facto independent in 1972 as the Republic of Minerva, are a group of two submerged atolls located in the Pacific Ocean south of Fiji and Tonga. The reefs were named after the whaleship Minerva, wrecked on what became known as South Minerva after setting out from Sydney in 1829. Many other ships would follow, for example the Strathcona, which was sailing north soon after completion in Auckland in 1914. In both cases most of the crew saved themselves in whaleboats or rafts and reached the Lau Islands in Fiji. Of some other ships, however, no survivors are known.

Both North and South Minerva Reefs are used as anchorages by yachts traveling between New Zealand and Tonga or Fiji. While crews wait for favourable weather for the approximately 800-mile (1,300 km) passage to New Zealand, excellent scuba diving, snorkelling, fishing and clamming can be enjoyed. North Minerva (Tongan: Teleki Tokelau) offers the more protected anchorage, with a single, easily negotiated, west-facing pass that offers access to the large, calm lagoon with extensive sandy areas. South Minerva (Tongan: Teleki Tonga) is similar in shape to an infinity symbol, with its eastern lobe partially open to the ocean on the northern side. Due to the lower reef and large entrance, the anchorage at South Minerva can be rough at high tide if a swell is running. The lagoon also contains numerous coral heads that must be avoided. While presenting an attractive area to wait out harsh weather occurring farther south, the Minerva Reefs are not a good place to be when the weather is bad locally. This does not occur often, but it is important to maintain awareness of the situation and put to sea if necessary.

Scuba diving the outside wall drop-offs at the Minerva Reefs is spectacular due to the superb water clarity and extensive coral, fish and other marine life. There are few suspended particles and the visibility is normally in excess of 100 feet (30 m) since there is no dry land at high tide. Of particular note are the numerous fan coral formations near the pass at North Minerva and the shark bowl area located by the narrow dinghy pass on the western lobe of South Minerva. The inside of the lagoon at South Minerva is also home to numerous giant clams. Divers at Minerva must be entirely self-sufficient, with their own compressor, and should also be aware that the nearest assistance is a multiple-day boat ride away in Tonga. Due to the vertical drop off and water clarity, divers must watch their depth carefully.

It is not known when the reefs were first discovered, but they had been marked on charts as "Nicholson's Shoal" since the late 1820s. Capt. H. M. Denham of HMS Herald surveyed the reefs in 1854 and renamed them after the Australian whaler Minerva, which had collided with South Minerva Reef on 9 September 1829.[1]

The Republic of Minerva was a micronation consisting of the Minerva Reefs. It was one of the few modern attempts at creating a sovereign micronation on the reclaimed land of an artificial island in 1972. The architect was Las Vegas real estate millionaire and political activist Michael Oliver, who went on to other similar attempts in the following decade. Lithuanian-born Oliver formed a syndicate, the Ocean Life Research Foundation, which allegedly had some $100,000,000 for the project and had offices in New York and London. They anticipated a libertarian society with “no taxation, welfare, subsidies, or any form of economic interventionism.” In addition to tourism and fishing, the economy of the new nation would include light industry and other commerce. According to Glen Raphael, “The chief reason that the Minerva project failed was that the libertarians who were involved did not want to fight for their territory.”[2] According to Reason, Minerva has been “more or less reclaimed by the sea”.[3]

In 1971, barges loaded with sand arrived from Australia, bringing the reef level above the water and allowing construction of a small tower and flag. The Republic of Minerva issued a declaration of independence on 19 January 1972 in letters to neighboring countries, and even created its own currency. In February 1972, Morris C. Davis was elected as Provisional President of the Republic of Minerva.

The declaration of independence, however, was greeted with great suspicion by other countries in the area. A conference of the neighboring states (Australia, New Zealand, Tonga, Fiji, Nauru, Samoa, and the territory of the Cook Islands) met on 24 February 1972, at which Tonga made a claim over the Minerva Reefs and the rest of the states recognized its claim.

On 15 June 1972, the following proclamation was published in a Tongan government gazette:

PROCLAMATION

A Tongan expedition was sent to enforce the claim the following day. It reached North Minerva on 18 June 1972. The flag of Tonga was raised on 19 June 1972 on North Minerva, and on South Minerva on 21 June 1972.[4]

Tonga's claim was recognized by the South Pacific Forum in September 1972. Meanwhile, Provisional President Davis was fired by founder Michael Oliver, and the project collapsed in confusion. Nevertheless, Minerva was referred to in O. T. Nelson's post-apocalyptic children's novel The Girl Who Owned a City, published in 1975, as an example of an invented utopia that the book's protagonists could try to emulate.

In 1982, a group of Americans led again by Morris C. "Bud" Davis tried to occupy the reefs, but they were forced off by Tongan troops after three weeks. In recent years several groups have allegedly sought to re-establish Minerva, but no known claimant group since 1982 has made any attempt to take possession of the Minerva Reefs.

In 2005, Fiji made it clear that it did not recognize any maritime water claims by Tonga to the Minerva Reefs under the UNCLOS agreements. In November 2005, Fiji lodged a complaint with the International Seabed Authority concerning Tonga's maritime waters claims surrounding Minerva. Tonga lodged a counterclaim, and the Principality of Minerva micronation claimed to have lodged a counterclaim of its own. In 2010 the Fijian Navy destroyed navigation lights at the entrance to the lagoon. In late May 2011, it again destroyed navigational equipment installed by Tongans. In early June 2011, two Royal Tongan Navy ships were sent to the reef to replace the equipment and to reassert Tonga's claim to the territory. Fijian Navy ships in the vicinity reportedly withdrew as the Tongans approached.[5][6]

In an effort to settle the dispute, the government of Tonga revealed a proposal in early July 2014 to give the Minerva Reefs to Fiji in exchange for the Lau Group of islands.[7] In a statement to the Tonga Daily News, Lands Minister Lord Maafu Tukuiaulahi announced that he would make the proposal to Fiji’s Minister for Foreign Affairs, Ratu Inoke Kubuabola. Some Tongans have Lauan ancestors and many Lauans have Tongan ancestors; Tonga’s Lands Minister is named after Enele Ma’afu, the Tongan Prince who originally claimed parts of Lau for Tonga.[8]

Area: North Reef diameter about 5.6 kilometres (3.5 mi), South Reef diameter about 4.8 kilometres (3.0 mi). Terrain: two atolls on dormant volcanic seamounts.

Both Minerva Reefs are about 435 kilometres (270 mi) southwest of the Tongatapu Group. The atolls lie on a common submarine platform 549 to 1,097 metres (1,801 to 3,599 ft) below the surface of the sea. North Minerva is circular in shape and has a diameter of about 5.6 kilometres (3.5 mi). There is a small sand bar around the atoll, awash at high tide, with a small entrance into the flat lagoon with a somewhat deep harbor. South Minerva is parted into the East Reef and the West Reef, both circular with a diameter of about 4.8 kilometres (3.0 mi). Around both reefs are two small sandy cays, vegetated by low scrub and some trees. Several iron towers and platforms are reported to have stood on the atolls, along with an unused light tower on South Minerva, erected by the Americans during World War II.[citation needed] Geologically, Minerva Reef is of a limestone base formed from uplifted coral formations elevated by now-dormant volcanic activity.

The climate is basically subtropical with a distinct warm period (December-April), during which the temperatures rise above 32°C (90°F), and a cooler period (May-November), with temperatures rarely rising above 27°C (80°F). The temperature increases from 23°C to 27°C (74°F to 80°F), and the annual rainfall increases from 170 to 297 centimeters (67-117 in), as one moves from Cardea in the south to the more northerly islands closer to the Equator. The mean daily humidity is 80 percent.
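As a rough check of the figures quoted above, the Celsius and centimetre values convert to the stated Fahrenheit and inch values using the standard formulas; the short Python sketch below is an illustration added here, not part of the original survey data, and the small discrepancies are simply rounding in the source.

    # Verify the unit conversions quoted in the climate paragraph.
    # F = C * 9/5 + 32; 1 in = 2.54 cm.

    def c_to_f(celsius):
        """Convert degrees Celsius to degrees Fahrenheit."""
        return celsius * 9.0 / 5.0 + 32.0

    def cm_to_inches(cm):
        """Convert centimetres to inches."""
        return cm / 2.54

    for c in (23, 27, 32):
        print(f"{c} C = {c_to_f(c):.1f} F")        # 73.4, 80.6, 89.6 -> quoted as 74, 80, 90

    for cm in (170, 297):
        print(f"{cm} cm = {cm_to_inches(cm):.0f} in")  # 67 in and 117 in, as quoted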

The Tuaikaepau ('Slow But Sure'), a Tongan vessel on its way to New Zealand, became famous when it struck the reefs on 7 July 1962. This 15-metre (49 ft) wooden vessel was built in 1902 at the same yard as the Strathcona. The crew and passengers survived by living in the remains of a Japanese freighter. There they remained for three months in miserable circumstances, and several of them died. Finally Captain Tēvita Fifita decided to get help. Without tools, he built a small boat from the wood left over from his ship. With this raft, named Malolelei ('Good Day'), he and a few of the stronger crew members sailed to Fiji in one week.

Coordinates: 23°38′S 178°54′W / 23.633°S 178.900°W / -23.633; -178.900
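The coordinate line above gives the position both in degrees and minutes and in decimal form; the conversion is simply degrees plus minutes divided by sixty, negated for southern latitudes and western longitudes. A minimal Python sketch, added here for illustration:

    # Convert 23°38'S 178°54'W to the signed decimal degrees shown above.

    def dms_to_decimal(degrees, minutes, hemisphere):
        """Degrees and minutes to signed decimal degrees (S and W are negative)."""
        value = degrees + minutes / 60.0
        return -value if hemisphere in ("S", "W") else value

    lat = dms_to_decimal(23, 38, "S")    # -23.633...
    lon = dms_to_decimal(178, 54, "W")   # -178.900
    print(f"{lat:.3f}; {lon:.3f}")       # -23.633; -178.900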


Entheogen – Wikipedia, the free encyclopedia

Jun 17 2016
 

An entheogen (“generating the divine within”)[4] is any chemical substance used in a religious, shamanic, or spiritual context[5] that often induces psychological or physiological changes.

Entheogens have been used to supplement many diverse practices geared towards achieving transcendence, including meditation, yoga, prayer, psychedelic art, chanting, and multiple forms of music. They have also been historically employed in traditional medicine via psychedelic therapy.

Entheogens have been used in a ritualized context for thousands of years; their religious significance is well established in anthropological and modern contexts. Examples of traditional entheogens include traditional psychedelics like peyote, psilocybin mushrooms, and ayahuasca, psychedelic-dissociatives like Tabernanthe iboga, atypical psychedelics like Salvia divinorum, quasi-psychedelics like cannabis, and deliriants like Amanita muscaria. Traditionally a tea, admixture, or potion like bhang is the preferred mode of ingestion.

With the advent of organic chemistry, there now exist many synthetic drugs with similar psychoactive properties, many derived from the aforementioned plants. Many pure active compounds with psychoactive properties have been isolated from these respective organisms and chemically synthesized, including mescaline, psilocybin, DMT, salvinorin A, ibogaine, ergine, and muscimol. Semi-synthetic (e.g., LSD used by the Neo-American Church) and synthetic drugs (e.g., DPT used by the Temple of the True Inner Light and 2C-B used by the Sangoma) have also been developed.[6] Cannabis is the world’s most widely used psychedelic drug, though it is more accurately referred to as a quasi-psychedelic drug, since its effect profile lacks the hallucinogenic and cognitive effects of traditional psychedelics.

More broadly, the term entheogen is used to refer to any psychoactive drugs when used for their religious or spiritual effects, whether or not in a formal religious or traditional structure. This terminology is often chosen to contrast with recreational use of the same drugs. Studies such as Timothy Leary’s Marsh Chapel Experiment and Roland Griffiths’ psilocybin studies at Johns Hopkins have documented reports of mystical/spiritual/religious experiences from participants who were administered psychoactive drugs in controlled trials. Ongoing research is limited due to widespread drug prohibition; however, some countries have legislation that allows for traditional entheogen use.

The neologism entheogen was coined in 1979 by a group of ethnobotanists and scholars of mythology (Carl A. P. Ruck, Jeremy Bigwood, Danny Staples, Richard Evans Schultes, Jonathan Ott and R. Gordon Wasson). The term is derived from two ancient Greek words, entheos (ἔνθεος) and genesthai (γενέσθαι). The adjective entheos translates to English as "full of the god, inspired, possessed", and is the root of the English word "enthusiasm." The Greeks used it as a term of praise for poets and other artists. Genesthai means "to come into being." Thus, an entheogen is a drug that causes one to become inspired or to experience feelings of inspiration, often in a religious or "spiritual" manner.[7]

Entheogen was coined as a replacement for the terms hallucinogen and psychedelic. Hallucinogen was popularized by Aldous Huxley’s experiences with mescaline, which were published as The Doors of Perception in 1954. Psychedelic, in contrast, is a Greek neologism for “mind manifest”, and was coined by psychiatrist Humphry Osmond; Huxley was a volunteer in experiments Osmond was conducting on mescaline.

Ruck et al. argued that the term hallucinogen was inappropriate owing to its etymological relationship to words relating to delirium and insanity. The term psychedelic was also seen as problematic, owing to the similarity in sound to words pertaining to psychosis and also due to the fact that it had become irreversibly associated with various connotations of 1960s pop culture. In modern usage entheogen may be used synonymously with these terms, or it may be chosen to contrast with recreational use of the same drugs. The meanings of the term entheogen were formally defined by Ruck et al.:

In a strict sense, only those vision-producing drugs that can be shown to have figured in shamanic or religious rites would be designated entheogens, but in a looser sense, the term could also be applied to other drugs, both natural and artificial, that induce alterations of consciousness similar to those documented for ritual ingestion of traditional entheogens.

In essence, all psychoactive drugs that are biosynthesized in nature by cytota (cellular life) can be used in an entheogenic context or with entheogenic intent. To exclude non-psychoactive drugs that are sometimes also used in a spiritual context, the term "entheogen" refers primarily to drugs that have been categorized based on their historical use. Toxicity does not affect a drug's inclusion (some can kill humans), nor does effectiveness or potency (if a drug is psychoactive and has been used in a historical context, then the required dose has also been found).

High caffeine consumption has been linked to an increase in the likelihood of experiencing auditory hallucinations. A study conducted by the La Trobe University School of Psychological Sciences revealed that as few as five cups of coffee a day could trigger the phenomenon.[9]

Many man-made chemicals with little human history have been recognized to catalyze intense spiritual experiences, and many synthetic entheogens are simply slight modifications of their naturally occurring counterparts. Some synthetic entheogens like 4-AcO-DMT are theorized to be prodrugs that metabolize into the natural psychoactive, similar in nature to how the synthetic compound heroin is deacetylated by esterases to the active morphine. While synthesized DMT and mescaline are reported to have entheogenic qualities identical to those of extracted or plant-based sources, the experience may vary wildly due to the absence of the numerous other psychoactive alkaloids that constitute the plant material. This is similar to how pure THC is very different from an extract that retains the many cannabinoids of the plant, such as cannabidiol and cannabinol.

Yohimbine is an alkaloid naturally found in Pausinystalia yohimbe (Yohimbe), Rauwolfia serpentina (Indian Snakeroot), and Alchornea floribunda (Niando), along with several other active alkaloids. There are no references to these species being used traditionally to induce past memories, most likely because their alkaloid content is too low. However, laboratory-extracted yohimbine, now commonly sold as a sport supplement, may be used in psychedelic therapy to facilitate recall of traumatic memories in the treatment of post-traumatic stress disorder (PTSD).[16]

L. E. Hollister’s criteria for establishing that a drug is hallucinogenic is:[17]

Most NMDA-antagonist dissociative drugs, including ketamine, PCP, and DXM, are known to readily cause psychological dependence, strengthen narcissism, and, when used chronically, induce chemical dependence and NMDA receptor antagonist neurotoxicity (NAN).

Common recreational drugs that cause chemical dependence have a history of entheogenic use, perhaps because their users could not access traditional entheogens: shamans were very secretive with their sacraments and regarded non-visionary sacraments as hedonistic. The drugs mentioned here have occasionally been used by some shamans, but they are psychoactive drugs that are not classified as hallucinogens (psychedelic, dissociative or deliriant). These drugs are not researched as chemicals for psychedelic therapy, as they have a low therapeutic index.

This means that chewing the leaves or drinking coca tea does not produce the intense high (euphoria, megalomania, depression) that people experience with cocaine. However, even if it did produce such an effect, the next problem would be cocaine dependence.

Drugs, including some that cause physical dependence, have been used with entheogenic intention, mostly in ancient times.

Alcohol has sometimes been invested with religious significance.

The present-day Arabic word for alcohol appears in the Qur'an (in verse 37:47) as al-ghawl, properly meaning "spirit" or "demon", in the sense of "the thing that gives the wine its headiness."[citation needed] The term ethanol was invented in 1838, modeled on the German äthyl (Liebig), from Greek aither (see ether) and hyle, "stuff". Ether in the late 14th century meant "upper regions of space," from Old French ether and directly from Latin aether, "the upper pure, bright air," from Greek aithēr "upper air; bright, purer air; the sky," from aithein "to burn, shine," from the PIE root *aidh- "to burn" (see edifice).[23]

In ancient Celtic religion, Sucellus or Sucellos was the god of agriculture, forests and alcoholic drinks of the Gauls.

Ninkasi is the ancient Sumerian tutelary goddess of beer.[24]

In the ancient Greco-Roman religion, Dionysos (or Bacchus) was the god of the grape harvest, winemaking and wine, of ritual madness and ecstasy, of merrymaking and theatre. The original rite of Dionysus is associated with a wine cult and he may have been worshipped as early as c. 1500-1100 BC by Mycenean Greeks. The Dionysian Mysteries were a ritual of ancient Greece and Rome which used intoxicants and other trance-inducing techniques (like dance and music) to remove inhibitions and social constraints, liberating the individual to return to a natural state. In his Laws, Plato said that alcoholic drinking parties should be the basis of any educational system, because the alcohol allows relaxation of otherwise fixed views. The Symposium (literally, 'drinking together') was a dramatised account of a drinking party where the participants debated the nature of love.

In the Homeric Hymn to Demeter, a cup of wine is offered to Demeter which she refuses, instead insisting upon a potion of barley, water, and glechon, known as the ceremonial drink Kykeon, an essential part of the Mysteries. The potion has been hypothesized to be an ergot derivative from barley, similar to LSD.[25]

Egyptian pictographs clearly show wine as a finished product around 4000 BC. Osiris, the god who invented beer and brewing, was worshiped throughout the country. The ancient Egyptians made at least 24 types of wine and 17 types of beer. These beverages were used for pleasure, nutrition, rituals, medicine, and payments. They were also stored in the tombs of the deceased for use in the afterlife.[26] The Osirian Mysteries paralleled the Dionysian, according to contemporary Greek and Egyptian observers. Spirit possession involved liberation from civilization’s rules and constraints. It celebrated that which was outside civilized society and a return to the source of being, which would later assume mystical overtones. It also involved escape from the socialized personality and ego into an ecstatic, deified state or the primal herd (sometimes both).

Some scholars[who?] have postulated that pagan religions actively promoted alcohol and drunkenness as a means of fostering fertility. Alcohol was believed to increase sexual desire and make it easier to approach another person for sex. For example, Norse paganism considered alcohol to be the sap of Yggdrasil. Drunkenness was an important fertility rite in this religion.

Many Christian denominations use wine in the Eucharist or Communion and permit alcohol consumption in moderation. Other denominations use unfermented grape juice in Communion; they either voluntarily abstain from alcohol or prohibit it outright.

Judaism uses wine on Shabbat and some holidays for Kiddush as well as more extensively in the Passover ceremony and other religious ceremonies. The secular consumption of alcohol is allowed. Some Jewish texts, e.g., the Talmud, encourage moderate drinking on holidays (such as Purim) in order to make the occasion more joyous.

Kava cultures are the religious and cultural traditions of western Oceania which consume kava. There are similarities in the use of kava between the different cultures, but each one also has its own traditions.

Entheogens have been used by individuals to pursue spiritual goals such as divination, ego death, egolessness, faith healing, psychedelic therapy and spiritual formation.[27]

There are also instances where people have been given entheogens without their knowledge or consent (e.g., tourists at ayahuasca ceremonies),[28] as well as attempts to use such drugs in other contexts, such as cursing, psychochemical weaponry, psychological torture, brainwashing and mind control; CIA experiments with LSD were used in Project MKUltra, and controversial entheogens like alcohol are often mentioned in the context of bread and circuses.

Entheogens have been used in various ways, e.g., as part of established religious rituals, as aids for personal spiritual development (“plant teachers”),[29][30] as recreational drugs, and for medical and therapeutic use. The use of entheogens in human cultures is nearly ubiquitous throughout recorded history.

Naturally occurring entheogens such as psilocybin and DMT (in the preparation ayahuasca) were, for the most part, discovered and used by older cultures as part of their spiritual and religious life, as plants and agents that were respected, and in some cases revered, for generations; such use may constitute a tradition that predates all modern religions, a sort of proto-religious rite.

One of the most widely used entheogens is cannabis, which has seen entheogenic use in regions such as China, Europe, and India, in some cases for thousands of years. It has also appeared as a part of religions and cultures such as the Rastafari movement, the Sadhus of Hinduism, the Scythians, Sufi Islam, and others.

The best-known entheogen-using culture of Africa is the Bwitists, who used a preparation of the root bark of Tabernanthe iboga.[31] Although the ancient Egyptians may have been using the sacred blue lily plant in some of their religious rituals or just symbolically, it has been suggested that Egyptian religion once revolved around the ritualistic ingestion of the far more psychoactive Psilocybe cubensis mushroom, and that the Egyptian White Crown, Triple Crown, and Atef Crown were evidently designed to represent pin-stages of this mushroom.[32] There is also evidence for the use of psilocybin mushrooms in Ivory Coast.[33] Numerous other plants used in shamanic ritual in Africa, such as Silene capensis sacred to the Xhosa, are yet to be investigated by western science. A recent revitalization has occurred in the study of southern African psychoactives and entheogens (Mitchell and Hudson 2004; Sobiecki 2002, 2008, 2012).[34]

Entheogens have played a pivotal role in the spiritual practices of most American cultures for millennia. The first American entheogen to be subject to scientific analysis was the peyote cactus (Lophophora williamsii). One of the founders of modern ethnobotany, the late Richard Evans Schultes of Harvard University, documented the ritual use of peyote cactus among the Kiowa, who live in what became Oklahoma. While it was used traditionally by many cultures of what is now Mexico, in the 19th century its use spread throughout North America, replacing the highly toxic mescal bean (Calia secundiflora), whose status as an entheogen is itself questioned. Other well-known entheogens used by Mexican cultures include the alcoholic Aztec sacrament pulque, ritual tobacco (known as 'picietl' to the Aztecs and 'sikar' to the Maya, from which the word 'cigar' derives), psilocybin mushrooms, morning glories (Ipomoea tricolor and Turbina corymbosa), and Salvia divinorum.

Indigenous peoples of South America employ a wide variety of entheogens. Better-known examples include ayahuasca (most commonly Banisteriopsis caapi and Psychotria viridis) among indigenous peoples (such as the Urarina) of Peruvian Amazonia. Other entheogens include San Pedro cactus (Echinopsis pachanoi, syn. Trichocereus pachanoi), Peruvian torch cactus (Echinopsis peruviana, syn. Trichocereus peruvianus), and various DMT-snuffs, such as epená (Virola spp.), vilca and yopo (Anadenanthera colubrina and A. peregrina, respectively). The familiar tobacco plant, when used uncured in large doses in shamanic contexts, also serves as an entheogen in South America. Nicotiana rustica, a tobacco with a higher nicotine content that therefore requires smaller doses, was also commonly used.[citation needed]

Entheogens also play an important role in contemporary religious movements such as the Rastafari movement and the Church of the Universe.

Datura wrightii is sacred to some Native Americans and has been used in ceremonies and rites of passage by Chumash, Tongva, and others. Among the Chumash, when a boy was 8 years old, his mother would give him a preparation of momoy to drink. This supposed spiritual challenge was meant to help the boy develop the spiritual wellbeing that is required to become a man. Not all of the boys undergoing this ritual survived.[35] Momoy was also used to enhance spiritual wellbeing among adults. For instance, during a frightening situation, such as when seeing a coyote walk like a man, a leaf of momoy was sucked to help keep the soul in the body.

The indigenous peoples of Siberia (from whom the term shaman was borrowed) have used Amanita muscaria as an entheogen.

In Hinduism, Datura stramonium and cannabis have been used in religious ceremonies, although the religious use of datura is not very common, as the primary alkaloids are strong deliriants, which cause serious intoxication with unpredictable effects.

Also, the ancient drink Soma, mentioned often in the Vedas, appears to be consistent with the effects of an entheogen. In his 1967 book, Wasson argues that Soma was Amanita muscaria. The active ingredient of Soma is presumed by some to be ephedrine, an alkaloid with stimulant and (somewhat debatable)[by whom?] entheogenic properties derived from the soma plant, identified as Ephedra pachyclada. However, there are also arguments to suggest that Soma could have also been Syrian rue, cannabis, Atropa belladonna, or some combination of any of the above plants.[citation needed]

Fermented honey, known in Northern Europe as mead, was an early entheogen in Aegean civilization, predating the introduction of wine, which was the more familiar entheogen of the reborn Dionysus and the maenads. Its religious uses in the Aegean world are bound up with the mythology of the bee.

Dacians were known to use cannabis in their religious and important life ceremonies, as proven by discoveries of large clay pots with burnt cannabis seeds in ancient tombs and religious shrines. Also, local oral folklore and myths tell of ancient priests that dreamed with gods and walked in the smoke. Their names, as transmitted by Herodotus, were "kap-no-batai", which in Dacian was supposed to mean "the ones that walk in the clouds".

The growth of Roman Christianity also saw the end of the two-thousand-year-old tradition of the Eleusinian Mysteries, the initiation ceremony for the cult of Demeter and Persephone involving the use of a drug known as kykeon. The term ‘ambrosia’ is used in Greek mythology in a way that is remarkably similar to the Soma of the Hindus as well.

A theory that naturally occurring gases like ethylene, used by inhalation, may have played a role in divinatory ceremonies at Delphi in Classical Greece received popular press attention in the early 2000s, yet has not been conclusively proven.[36]

Mushroom consumption is part of the culture of Europeans in general, with particular importance to Slavic and Baltic peoples. Some academics consider that using psilocybin- and/or muscimol-containing mushrooms was an integral part of the ancient culture of the Rus' people.[37]

It has been suggested that the ritual use of small amounts of Syrian rue is an artifact of its ancient use in higher doses as an entheogen (possibly in conjunction with DMT containing acacia).[citation needed]

Philologist John Marco Allegro has argued in his book The Sacred Mushroom and the Cross that early Jewish and Christian cultic practice was based on the use of Amanita muscaria, which was later forgotten by its adherents. Allegro's hypothesis, that Amanita use was sacred knowledge kept only by high figures to hide the true beginnings of the Christian cult, seems supported by his own view that the Plaincourault Chapel shows evidence of Christian amanita use in the 13th century.[38]

In general, indigenous Australians are thought not to have used entheogens, although there is a strong barrier of secrecy surrounding Aboriginal shamanism, which has likely limited what has been told to outsiders. A plant that the Australian Aboriginals used to ingest is called Pitcheri, which is said to have a similar effect to that of coca. Pitcheri was made from the bark of the shrub Duboisia myoporoides. This plant is now grown commercially and is processed to manufacture an eye medication. There are no known uses of entheogens by the Māori of New Zealand aside from a variant species of kava.[39] Natives of Papua New Guinea are known to use several species of entheogenic mushrooms (Psilocybe spp., Boletus manicus).[40]

Kava or kava kava (Piper methysticum) has been cultivated for at least 3000 years by a number of Pacific island-dwelling peoples. Historically, most Polynesian, many Melanesian, and some Micronesian cultures have ingested the psychoactive pulverized root, typically taking it mixed with water. Much traditional usage of kava, though somewhat suppressed by Christian missionaries in the 19th and 20th centuries, is thought to facilitate contact with the spirits of the dead, especially relatives and ancestors.[41]

Some religions forbid, discourage, or restrict the drinking of alcoholic beverages. These include Islam, Jainism, the Bahá'í Faith, The Church of Jesus Christ of Latter-day Saints (LDS Church), the Seventh-day Adventist Church, the Church of Christ, Scientist, the United Pentecostal Church International, Theravada, most Mahayana schools of Buddhism, some Protestant denominations of Christianity, some sects of Taoism (Five Precepts and Ten Precepts), and Hinduism.

The Pali Canon, the scripture of Theravada Buddhism, depicts refraining from alcohol as essential to moral conduct because intoxication causes a loss of mindfulness. The fifth of the Five Precepts states, "Surāmeraya-majja-pamādaṭṭhānā veramaṇī sikkhāpadaṃ samādiyāmi." In English: "I undertake to refrain from fermented drink that causes heedlessness." Technically, this prohibition does not include other mind-altering drugs. The canon does not suggest that alcohol is evil but holds that the carelessness produced by intoxication creates bad karma. Therefore, any drug (beyond tea or mild coffee) that affects one's mindfulness may be considered by some to be covered by this prohibition.[citation needed]

Many Christian denominations disapprove of the use of most illicit drugs. The early history of the Church, however, was filled with a variety of drug use, recreational and otherwise.[42]

The primary advocate of a religious use of the cannabis plant in early Judaism was Sula Benet, also called Sara Benetowa, a Polish anthropologist, who claimed in 1967 that the plant kaneh bosm, mentioned five times in the Hebrew Bible and used in the holy anointing oil of the Book of Exodus, was in fact cannabis.[43] The Ethiopian Zion Coptic Church confirmed it as a possible valid interpretation.[44] The lexicons of Hebrew and dictionaries of plants of the Bible, such as those by Michael Zohary (1985), Hans Arne Jensen (2004) and James A. Duke (2010), identify the plant in question as either Acorus calamus or Cymbopogon citratus.[45] Kaneh-bosm is listed as an incense in the Old Testament. It is generally held by academics specializing in the archaeology and paleobotany of Ancient Israel, and those specializing in the lexicography of the Hebrew Bible, that cannabis is not documented or mentioned in early Judaism. Against this, some popular writers have argued that there is evidence for religious use of cannabis in the Hebrew Bible,[46] although this hypothesis and some of the specific case studies (e.g., John Allegro in relation to Qumran, 1970) have been "widely dismissed as erroneous, others continue".[47]

According to The Living Torah, cannabis may have been one of the ingredients of the holy anointing oil mentioned in various sacred Hebrew texts.[48] The herb of interest is most commonly known as kaneh-bosm (Hebrew: קנה-בשם). This is mentioned several times in the Old Testament as a bartering material, incense, and an ingredient in the holy anointing oil used by the high priest of the temple. Although Chris Bennett's research in this area focuses on cannabis, he mentions evidence suggesting use of additional visionary plants such as henbane, as well.[49]

The Septuagint translates kaneh-bosm as calamus, and this translation has been propagated unchanged to most later translations of the Old Testament. However, Polish anthropologist Sula Benet published etymological arguments that the Aramaic word for hemp can be read as kannabos and appears to be a cognate to the modern word 'cannabis',[50] with the root kan meaning reed or hemp and bosm meaning fragrant. Both cannabis and calamus are fragrant, reedlike plants containing psychotropic compounds.

In his research, Professor Dan Merkur points to significant evidence of an awareness within the Jewish mystical tradition recognizing manna as an entheogen, thereby substantiating with rabbinic texts theories advanced by the superficial biblical interpretations of Terence McKenna, R. Gordon Wasson and other ethnomycologists.

Although philologist John Marco Allegro has suggested that the self-revelation and healing abilities attributed to the figure of Jesus may have been associated with the effects of the plant medicines, this evidence is dependent on pre-Septuagint interpretation of Torah and Tenach. Allegro was the only non-Catholic appointed to the position of translating the Dead Sea scrolls. His extrapolations are often the object of scorn due to Allegro's non-mainstream theory of Jesus as a mythological personification of the essence of a "psychoactive sacrament". Furthermore, they conflict with the position of the Catholic Church with regard to transubstantiation and the teaching involving valid matter, form, and drug: that of bread and wine (bread does not contain psychoactive drugs, but wine contains ethanol). Allegro's book The Sacred Mushroom and the Cross relates the development of language to the development of myths, religions, and cultic practices in world cultures. Allegro believed he could prove, through etymology, that the roots of Christianity, as of many other religions, lay in fertility cults, and that cult practices, such as ingesting visionary plants (or "psychedelics") to perceive the mind of God, persisted into the early Christian era, and to some unspecified extent into the 13th century, with recurrences in the 18th century and mid-20th century, as he interprets the Plaincourault chapel's fresco to be an accurate depiction of the ritual ingestion of Amanita muscaria as the Eucharist.

The historical picture portrayed by the Entheos journal is of fairly widespread use of visionary plants in early Christianity and the surrounding culture, with a gradual reduction of use of entheogens in Christianity.[51] R. Gordon Wasson’s book Soma prints a letter from art historian Erwin Panofsky asserting that art scholars are aware of many “mushroom trees” in Christian art.[52]

The question of the extent of visionary plant use throughout the history of Christian practice has barely been considered yet by academic or independent scholars. The question of whether visionary plants were used in pre-Theodosius Christianity is distinct from evidence that indicates the extent to which visionary plants were utilized or forgotten in later Christianity, including so-called “heretical” or “quasi-” Christian groups,[53] and the question of other groups such as elites or laity within “orthodox” Catholic practice.[54]

Daniel Merkur at the University of Toronto contends that a minority of Christian hermits and mystics could possibly have used entheogens, in conjunction with fasting, meditation, and prayer.[citation needed]

According to R.C. Parker, "The use of entheogens in the Vajrayana tradition has been documented by such scholars as Ronald M Davidson, William George Stablein, Bulcsu Siklos, David B. Gray, Benoytosh Bhattacharyya, Shashibhusan Das Gupta, Francesca Fremantle, Shinichi Tsuda, David Gordon White, Rene de Nebesky-Wojkowitz, James Francis Hartzell, Edward Todd Fenner, Ian Baker, Dr. Pasang Yonten Arya and numerous others."[55] These scholars have established that entheogens were used in Vajrayana (in a limited context) as well as in Tantric Saivite traditions.[55] The major entheogens in the Vajrayana Anuttarayoga Tantra tradition are cannabis and Datura, which were used in various pills, ointments, and elixirs. Several tantras within Vajrayana specifically mention these entheogens and their use, including the Laghusamvara-tantra (aka Cakrasaṃvara Tantra), Samputa-tantra, Samvarodaya-tantra, Mahakala-tantra, Guhyasamaja-tantra, Vajramahabhairava-tantra, and the Krsnayamari-tantra.[55] In the Cakrasaṃvara Tantra, the use of entheogens is coupled with meditation practices such as the use of a mandala of the Heruka meditation deity (yidam) and visualization practices which identify the yidam's external body and mandala with one's own body and 'internal mandala'.[56]

In the West, some modern Buddhist teachers have written on the usefulness of psychedelics. The Buddhist magazine Tricycle devoted its entire fall 1996 edition to this issue.[57] Some teachers, such as Jack Kornfield, have acknowledged the possibility that psychedelics could complement Buddhist practice, bring healing, and help people understand their connection with everything, which could lead to compassion.[58] Kornfield warns, however, that addiction can still be a hindrance. Other teachers, such as Michelle McDonald-Smith, expressed views which saw entheogens as not conducive to Buddhist practice ("I don't see them developing anything").[59]

R. Gordon Wasson and Giorgio Samorini have proposed several examples of the cultural use of entheogens that are found in the archaeological record.[60][61] Evidence for the first use of entheogens may come from Tassili, Algeria, with a cave painting of a mushroom-man, dating to 8000 BP.[citation needed] Hemp seeds discovered by archaeologists at Pazyryk suggest early ceremonial practices by the Scythians occurred during the 5th to 2nd century BC, confirming previous historical reports by Herodotus.[citation needed]

Although entheogens are taboo and most of them are officially prohibited in Christian and Islamic societies, their ubiquity and prominence in the spiritual traditions of various other cultures is unquestioned. “The spirit, for example, need not be chemical, as is the case with the ivy and the olive: and yet the god was felt to be within them; nor need its possession be considered something detrimental, like drugged, hallucinatory, or delusionary: but possibly instead an invitation to knowledge or whatever good the god’s spirit had to offer.”[62]

Most of the well-known modern examples, such as peyote, psilocybin mushrooms, and morning glories are from the native cultures of the Americas. However, it has also been suggested that entheogens played an important role in ancient Indo-European culture, for example by inclusion in the ritual preparations of the Soma, the “pressed juice” that is the subject of Book 9 of the Rig Veda. Soma was ritually prepared and drunk by priests and initiates and elicited a paean in the Rig Veda that embodies the nature of an entheogen:

Splendid by Law! declaring Law, truth speaking, truthful in thy works, Enouncing faith, King Soma!... O [Soma] Pavamāna (mind-clarifying), place me in that deathless, undecaying world wherein the light of heaven is set, and everlasting lustre shines.... Make me immortal in that realm where happiness and transports, where joy and felicities combine...

The kykeon that preceded initiation into the Eleusinian Mysteries is another entheogen, which was investigated (before the word was coined) by Carl Kerényi, in Eleusis: Archetypal Image of Mother and Daughter. Other entheogens in the Ancient Near East and the Aegean include the opium poppy, datura, and the unidentified "lotus" (likely the sacred blue lily) eaten by the Lotus-Eaters in the Odyssey and Narcissus.

According to Ruck, Eyan, and Staples, the familiar shamanic entheogen that the Indo-Europeans brought knowledge of was Amanita muscaria. It could not be cultivated; thus it had to be found, which suited it to a nomadic lifestyle. When they reached the world of the Caucasus and the Aegean, the Indo-Europeans encountered wine, the entheogen of Dionysus, who brought it with him from his birthplace in the mythical Nysa, when he returned to claim his Olympian birthright. The Indo-European proto-Greeks "recognized it as the entheogen of Zeus, and their own traditions of shamanism, the Amanita and the 'pressed juice' of Soma but better, since no longer unpredictable and wild, the way it was found among the Hyperboreans: as befit their own assimilation of agrarian modes of life, the entheogen was now cultivable."[62] Robert Graves, in his foreword to The Greek Myths, hypothesises that the ambrosia of various pre-Hellenic tribes was Amanita muscaria (which, based on the morphological similarity of the words amanita, amrita and ambrosia, is entirely plausible) and perhaps psilocybin mushrooms of the Panaeolus genus.

Amanita was divine food, according to Ruck and Staples, not something to be indulged in or sampled lightly, not something to be profaned. It was the food of the gods, their ambrosia, and it mediated between the two realms. It is said that Tantalus’s crime was inviting commoners to share his ambrosia.

The entheogen is believed to offer godlike powers in many traditional tales, including immortality. The failure of Gilgamesh in retrieving the plant of immortality from beneath the waters teaches that the blissful state cannot be taken by force or guile: When Gilgamesh lay on the bank, exhausted from his heroic effort, the serpent came and ate the plant.

Another attempt at subverting the natural order is told in a (according to some) strangely metamorphosed myth, in which natural roles have been reversed to suit the Hellenic world-view. The Alexandrian Apollodorus relates how Gaia (spelled “Ge” in the following passage), Mother Earth herself, has supported the Titans in their battle with the Olympian intruders. The Giants have been defeated:

When Ge learned of this, she sought a drug that would prevent their destruction even by mortal hands. But Zeus barred the appearance of Eos (the Dawn), Selene (the Moon), and Helios (the Sun), and chopped up the drug himself before Ge could find it.[63]

The legends of the Assassins had much to do with the training and instruction of Nizari fida’is, famed for their public missions during which they often gave their lives to eliminate adversaries.

The tales of the fida'is' training, collected from anti-Ismaili historians and orientalist writers, were confounded and compiled in Marco Polo's account, in which he described a "secret garden of paradise".[citation needed] After being drugged, the Ismaili devotees were said to be taken to a paradise-like garden filled with attractive young maidens and beautiful plants, in which these fida'is would awaken. Here, they were told by an "old" man that they were witnessing their place in Paradise and that should they wish to return to this garden permanently, they must serve the Nizari cause.[64] So went the tale of the "Old Man in the Mountain", assembled by Marco Polo and accepted by Joseph von Hammer-Purgstall (1774-1856), a prominent orientalist writer responsible for much of the spread of this legend. Until the 1930s, von Hammer's retelling of the Assassin legends served as the standard account of the Nizaris across Europe.[citation needed]

Notable early testing of the entheogenic experience includes the Marsh Chapel Experiment, conducted by physician and theology doctoral candidate, Walter Pahnke, under the supervision of Timothy Leary and the Harvard Psilocybin Project. In this double-blind experiment, volunteer graduate school divinity students from the Boston area almost all claimed to have had profound religious experiences subsequent to the ingestion of pure psilocybin. In 2006, a more rigorously controlled experiment was conducted at Johns Hopkins University, and yielded similar results.[65] To date there is little peer-reviewed research on this subject, due to ongoing drug prohibition and the difficulty of getting approval from institutional review boards.[66]

Furthermore, scientific studies on entheogens present some significant challenges to investigators, including philosophical questions relating to ontology, epistemology and objectivity.[67]

Between 2011 and 2012, the Australian Federal Government was considering changes to the Australian Criminal Code that would classify any plants containing any amount of DMT as “controlled plants”.[68] DMT itself was already controlled under current laws. The proposed changes included other similar blanket bans for other substances, such as a ban on any and all plants containing Mescaline or Ephedrine. The proposal was not pursued after political embarrassment on realisation that this would make the official Floral Emblem of Australia, Acacia pycnantha (Golden Wattle), illegal. The Therapeutic Goods Administration and federal authority had considered a motion to ban the same, but this was withdrawn in May 2012 (as DMT may still hold potential entheogenic value to native and/or religious peoples).[69]

In 1963 in Sherbert v. Verner the Supreme Court established the Sherbert Test, which consists of four criteria that are used to determine if an individual’s right to religious free exercise has been violated by the government. The test is as follows:

For the individual, the court must determine

If these two elements are established, then the government must prove

This test was eventually all but eliminated in Employment Division v. Smith, 494 U.S. 872 (1990), but was resurrected by Congress in the federal Religious Freedom Restoration Act (RFRA) of 1993.

In City of Boerne v. Flores, 521 U.S. 507 (1997) and Gonzales v. O Centro Esprita Beneficente Unio do Vegetal, 546 U.S. 418 (2006), the RFRA was held to trespass on state sovereignty, and application of the RFRA was essentially limited to federal law enforcement.

As of 2001, Arizona, Idaho, New Mexico, Oklahoma, South Carolina, and Texas had enacted so-called “mini-RFRAs.”

Many works of literature have described entheogen use; some of those are:

Others


The Key Role of Impurities in Ancient Damascus Steel Blades

Jun 17 2016
 

The art of producing the famous 16th-18th century Damascus steel blades found in many museums was lost long ago. Recently, however, research has established strong evidence supporting the theory that the distinct surface patterns on these blades result from a carbide-banding phenomenon produced by the microsegregation of minor amounts of carbide-forming elements present in the wootz ingots from which the blades were forged. Further, it is likely that wootz Damascus blades with damascene patterns may have been produced only from wootz ingots supplied from those regions of India having appropriate impurity-containing ore deposits.

This article is concerned with the second type of Damascus steel, sometimes called oriental Damascus. The most common examples of these steels are swords and daggers, although examples of body armor are also known. The name Damascus apparently originated with these steels. The steel itself was produced not in Damascus, but in India and became known in English literature in the early 19th century[3] as wootz steel, as it is referred to here. Detailed pictures of many such wootz Damascus swords are presented in Figiel's book,[4] and the metallurgy of these blades is discussed in Smith's book.[5]

Unfortunately, the technique of producing wootz Damascus steel blades is a lost art. The date of the last blades produced with the highest-quality damascene patterns is uncertain, but is probably around 1750; it is unlikely that blades displaying low-quality damascene patterns were produced later than the early 19th century. Debate has persisted in the metallurgy community over the past 200 years as to how these blades were made and why the surface pattern appeared.[6-8] Research efforts over the years have claimed the discovery of methods to reproduce wootz Damascus steel blades,[9-12] but all of these methods suffer from the same problem: modern bladesmiths have been unable to use the methods to reproduce the blades. The successful reproduction of wootz Damascus blades requires that blades be produced that match the chemical composition, possess the characteristic damascene surface pattern, and possess the same internal microstructure that causes the surface pattern.

A detailed picture description of the production process for this blade has recently been published.[14] In addition, the technique has been fully described in the literature,[15-17] and it has been shown that blades possessing high-quality damascene patterns can be repeatedly produced utilizing the technique. The technique is, in essence, a simple reproduction of the general method described by the earlier researchers. A small steel ingot of the correct composition (Fe + 1.5C) is produced in a closed crucible and is then forged to a blade shape. However, some key factors are now specified. These include the time/temperature record of the ingot preparation, the temperature of the forging operations, and the type and composition level of impurity elements in the Fe + 1.5C steel. It appears that the most important factor is the type of impurity elements in the steel ingot. Recent work[17,18] has shown that bands of clustered Fe3C particles can be produced in the blades by the addition of very small amounts (0.03% or less) of one or more carbide-forming elements, such as V, Mo, Cr, Mn, and Nb. The elements vanadium and molybdenum appear to be the most effective elements in causing the band formation to occur. An obvious question raised by these results is, are these elements also present at low levels in the 16th-18th century wootz Damascus blades?
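To give a rough sense of how much cementite an Fe + 1.5C ingot has available to cluster into bands, the lever rule applied to the iron-carbon system yields roughly 12 wt.% proeutectoid Fe3C and about 22 wt.% total Fe3C after slow cooling. The short Python sketch below illustrates that calculation; the eutectoid (0.76 wt.% C), cementite (6.67 wt.% C), and ferrite (0.02 wt.% C) compositions are standard textbook values assumed for this illustration, not figures reported by the authors.

    # Lever-rule estimate of cementite (Fe3C) content in an Fe + 1.5 wt.% C ingot.
    # Phase-boundary compositions are standard Fe-C values (assumed, not from the article).

    C_ALLOY     = 1.50   # wt.% C of the wootz ingot (composition given in the article)
    C_EUTECTOID = 0.76   # wt.% C of eutectoid austenite (textbook value)
    C_CEMENTITE = 6.67   # wt.% C of stoichiometric Fe3C
    C_FERRITE   = 0.02   # wt.% C of ferrite at room temperature (textbook value)

    # Proeutectoid Fe3C formed in the austenite + Fe3C field, just above the eutectoid:
    f_proeutectoid = (C_ALLOY - C_EUTECTOID) / (C_CEMENTITE - C_EUTECTOID)

    # Total Fe3C after slow cooling to room temperature (ferrite + cementite mixture):
    f_total = (C_ALLOY - C_FERRITE) / (C_CEMENTITE - C_FERRITE)

    print(f"proeutectoid Fe3C: {f_proeutectoid:.3f}")   # ~0.125 (about 12.5 wt.%)
    print(f"total Fe3C:        {f_total:.3f}")          # ~0.223 (about 22 wt.%)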

This article presents the results of a study of these four samples. Also, four additional wootz Damascus blades, all thought to be a few hundred years old, have been acquired and are included. Hence, all of the blades studied here are more than two centuries old and were presumably made from wootz steel. These blades are referred to as genuine wootz Damascus blades to differentiate them from the reconstructed wootz Damascus blades made by the technique developed by the authors.

Pieces were cut from one end of each of the samples with a thin diamond saw. A 2 cm length was cut for chemical-analysis studies, and an 8 mm length sample was used for microstructure analysis. The chemical analyses were done using emission spectroscopy on a calibrated machine at Nucor Steel Corporation. Table I presents the chemical analyses, along with the values reported by Zschokke. Agreement between the analyses done by Zschokke in 1924 and the present data is reasonably good.

Micrographs of surface and transverse sections of the remaining three swords are shown in Figure 3. The micrographs of the surfaces are, in effect, taper sections through the bands seen on the micrographs of the section views, and, as expected, the widths of the bands are expanded in the surface views.

Rockwell C hardness data were taken along the centerline of the transverse sections of all four swords in order to more fully characterize them. A large variation in hardness was found and is presented in Table II. The hardness correlated with the matrix microstructure. The matrix structure of the blades underwent a transition from pearlite at the thin tip to a divorced eutectoid ferrite + cementite at the fat end (thickness = 3-4 mm). These structures are consistent with recent kinetic studies of the eutectoid reaction in hypereutectoid steels.19-20 The studies show that in two-phase (austenite + Fe3C) steels, the divorced eutectoid transformation (DET) dominates at slow cooling rates and the pearlite reaction dominates at higher cooling rates; the DET is favored as the density of the Fe3C particles in the transforming austenite increases. Hence, the matrix microstructures indicate that the blades were air-cooled, with pearlite dominating near the faster-cooling cutting edge. The dominance of the DET matrix structure in swords 7 and 10 probably results from the higher amount of interband Fe3C present in these swords.
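
The qualitative competition just described (pearlite at faster cooling, the DET at slower cooling and higher carbide density) can be caricatured in a few lines of Python. This is a toy decision rule with invented cutoffs, not a kinetic model; the function name and all numbers are assumptions made purely for illustration.

def matrix_structure(cooling_rate_c_per_s, fe3c_density_rel):
    """Guess the matrix microstructure of a hypereutectoid blade region.
    A higher relative Fe3C particle density shifts the balance toward the DET."""
    effective_rate = cooling_rate_c_per_s / (1.0 + fe3c_density_rel)
    return "pearlite" if effective_rate > 0.5 else "divorced eutectoid (DET)"

# The thin cutting edge cools faster than the fat back of the blade.
print(matrix_structure(cooling_rate_c_per_s=2.0, fe3c_density_rel=0.5))  # pearlite
print(matrix_structure(cooling_rate_c_per_s=0.3, fe3c_density_rel=1.5))  # divorced eutectoid (DET)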

In swords 7 and 10, the particles are dominantly plate-shaped with the thin direction aligned in the forging plane of the sword blades. Consequently, the area of the particles on the sword face is generally larger than on the sections. The standard deviation of the data was consistently in the range of 20-25%, so that differences in the areas on the three surfaces are problematic, whereas the differences in minimum and maximum diameters are significant. For blades 7 and 10, the maximum/minimum aspect ratio of the particles averages around three on both transverse and longitudinal sections and around two on the sword faces. The ratios are slightly less for blade 9, reflecting the more globular shape of the particles and the observation that the oblong particles do not have their broad face well aligned in the forging plane, as they do on blades 7 and 10.
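
For readers unfamiliar with how such shape statistics are obtained, the Python sketch below averages maximum/minimum diameter ratios over a handful of hypothetical particle measurements per section. The values are invented and chosen only to echo the roughly three (sections) versus two (face) trend quoted above; they are not data from the study.

# Hypothetical particle measurements, (max_diameter, min_diameter) in micrometres per section.
measurements = {
    "transverse":   [(9.0, 3.0), (8.5, 2.9), (10.2, 3.4)],
    "longitudinal": [(8.8, 3.1), (9.6, 3.0), (9.1, 3.2)],
    "face":         [(7.4, 3.8), (6.9, 3.5), (8.0, 4.1)],
}

def mean_aspect_ratio(pairs):
    """Average of max/min diameter over a list of particles."""
    return sum(dmax / dmin for dmax, dmin in pairs) / len(pairs)

for section, pairs in measurements.items():
    print(f"{section}: mean aspect ratio = {mean_aspect_ratio(pairs):.1f}")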

Experiments have been carried out on the reconstructed wootz Damascus blades in which the ladder and rose pattern were produced by both the groove-cutting and groove-forging techniques. The patterns in the blade of Figure 1 were made with the groove-cutting technique, and detailed photographs of the process have recently been published (Figure 6a).14 These patterns may be compared to similar ladder/rose patterns made by the die-forging technique (Figure 6b). The circular pattern in Figure 6b (called the rose pattern on ancient blades) was made with a hollow cylindrical die, while the pattern in Figure 6a was made by removing metal with a specially shaped solid drill. In the case of the die-forged patterns, the ridges produced by the upsetting action of the die were removed with a belt grinder prior to additional forging.

A comparison of the ladder patterns produced by grinding versus forging reveals nearly identical features (Figure 6). Figiel points out that there is a large variation in the pattern in the bands of the several examples presented in his book.4 Hence, this study is only able to conclude that the ancient smiths produced the ladder patterns by making parallel grooves across the surface of nearly finished blades, either by forging or cutting/grinding.

It is well established25-28 that the ferrite/pearlite banding of hypoeutectoid steels results from microsegregation of the X element in Fe-C-X alloys, where X is generally manganese, phosphorus, or an alloy addition. For the example X = P, it is established that the microsegregation of phosphorus to the interdendritic regions (IRs) causes ferrite to nucleate preferentially in the IRs. If the cooling rate is slow enough, the ferrite grows as blocky grain boundary allotriomorphs and pushes the carbon ahead of the growth front until pearlite forms between neighboring IRs. Apparently, rolling or forging deformation is quite effective in aligning the IRs of the solidified ingots into planar arrays, because the ferrite appears as planar bands parallel to the deformation plane separated by bands of pearlite. The ferrite/pearlite bands of sword 8 were probably produced by this type of banding caused, most likely, by the microsegregation of phosphorus.

A strong body of evidence has been obtained16-18 that supports the theory that the layered structures in the normal hypereutectoid Damascus steels are produced by a mechanism similar to the mechanism causing ferrite/pearlite banding in hypoeutectoid steels, with one important difference: in ferrite/pearlite banding, the bands form on a single thermal cycle. For example, the ferrite/pearlite bands can be destroyed by complete austenitization at low temperatures (just above the A3 temperature) followed by rapid cooling and are then reformed in a single heat-up to austenite, followed by an adequately slow cool.26 (Low-temperature austenitization is required to avoid homogenization of the microsegregated X element.) The carbide bands of the wootz Damascus steel are destroyed by a complete austenitization at low temperatures (just above the Acm temperature) followed by cooling at all rates, slow or fast. However, if the steel is then repeatedly cycled to maximum temperatures of around 50-100°C below Acm, the carbide bands will begin to develop after a few cycles and become clear after 6-8 cycles.
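
A minimal Python sketch of that cycling schedule is given below. The Acm value of roughly 900°C is an assumption for illustration (the true Acm depends on the exact composition), and the 75°C offset and eight cycles simply reflect the 50-100°C window and the 6-8 cycles mentioned above; nothing here reproduces the authors' actual forging practice.

ASSUMED_ACM_C = 900.0   # assumed Acm for an Fe + 1.5 wt% C steel (illustration only)
CYCLES = 8              # bands reportedly become clear after 6-8 cycles

def forging_schedule(peak_offset_c=75.0, cycles=CYCLES):
    """Yield (cycle, peak_temperature_C) pairs for a banding-friendly schedule."""
    peak = ASSUMED_ACM_C - peak_offset_c
    for n in range(1, cycles + 1):
        yield n, peak

for cycle, peak_t in forging_schedule():
    if cycle >= 6:
        status = "bands becoming clear"
    elif cycle >= 3:
        status = "bands beginning to develop"
    else:
        status = "bands not yet visible"
    print(f"cycle {cycle}: heat to ~{peak_t:.0f} C, forge, air cool ({status})")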

The formation mechanism of the carbides clustered selectively along the IRs during the cyclic heating of the forging process is not resolved. It seems likely, however, that it involves a selective coarsening process, whereby cementite particles lying on the IRs slowly become larger than their neighbors lying on the dendrites and crowd them out. A model for such a selective coarsening process has been presented.17 During the heat-up stage of each thermal cycle, the smaller cementite particles will dissolve, and only the larger particles will remain at the forging temperature, which lies just below the Acm temperature. The model requires the segregated impurity atoms lying in the IRs to selectively reduce the mobility of the cementite/austenite interfaces in those regions. Larger particles would then occur in the IRs at the forging temperature. They probably maintain their dominance on cool down because one would not expect the small particles that had dissolved to renucleate on cool down in the presence of the nearby cementite particles. These nearby particles would provide sites for cementite growth before the local supercooling became sufficient to nucleate new particles.
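
The survival bias at the heart of this argument can be illustrated with a toy mean-field coarsening loop: in each cycle, particles larger than the mean grow and smaller ones shrink, at a rate set by an interface mobility that the segregated impurities are assumed to depress in the IRs. The Python sketch below is only a caricature with invented numbers; it is not the published model of reference 17.

import random

random.seed(1)

def coarsen(radii, mobility, cycles=8, dissolve_below=0.05):
    """Toy mean-field coarsening: return the particle radii surviving 'cycles' thermal cycles."""
    for _ in range(cycles):
        mean_r = sum(radii) / len(radii)
        radii = [r + mobility * (r - mean_r) for r in radii]  # grow above the mean, shrink below it
        radii = [r for r in radii if r > dissolve_below]      # the smallest particles dissolve
    return radii

start = [random.uniform(0.4, 1.0) for _ in range(50)]  # arbitrary size units
ir_survivors = coarsen(list(start), mobility=0.3)      # sluggish interfaces (impurity-rich IRs)
dr_survivors = coarsen(list(start), mobility=0.9)      # mobile interfaces elsewhere

# The sluggish population loses fewer particles over the same number of cycles,
# which is the survival bias the banding model relies on.
print(len(ir_survivors), len(dr_survivors))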

Based on this experience, it seems likely that the fraction of Indian crucible steel that was successfully forged into the damascened blades was probably quite small; the majority of surviving wootz Damascus blades probably display low-quality surface patterns. Craddock29 has come to this same conclusion based on an analysis of the literature on damascene-patterned steels. The results on the four Moser blades studied by Zschokke support this same conclusion. These blades were supposedly representative of good-quality damascened blades from the east, and yet of the four, only sword 9 displays the high-quality Fe3C bands characteristic of the best museum-quality wootz Damascus blades.

One of the big mysteries of wootz Damascus steel has been why the art of making these blades was lost. The vanadium levels provide the basis for a theory. Based on our studies, it is clear that to produce the damascene patterns of a museum-quality wootz Damascus blade the smith would have to fulfill at least three requirements. First, the wootz ingot would have to have come from an ore deposit that provided significant levels of certain trace elements, notably Cr, Mo, Nb, Mn, or V. This idea is consistent with the theory of some authors30 who believe the blades with good patterns were only produced from wootz ingots made in southern India, apparently around Hyderabad. Second, the data of Table IV confirm previous knowledge that wootz Damascus blades with good patterns are characterized by a high phosphorus level. This means that the ingots of these blades would be severely hot-short, which explains why Breant's9 19th century smiths in Paris could not forge wootz ingots. Therefore, as previously shown,15 successful forging would require the development of heat-treating techniques that decarburized the surface in order to produce a ductile surface rim adequate to contain the hot-short interior regions. Third, a smith who developed a heat-treatment technique that allowed the hot-short ingots to be forged might still not have learned how to produce the surface patterns, because they do not appear until the decarburized surface region is ground off the blades; this grinding process is not a simple matter.

The smiths that produced the high-quality blades would most likely have kept the process for making these blades a closely guarded secret to be passed on only to their apprentices. The smiths would be able to teach the apprentices the second and third points listed, but point one is something they would not have known. There is no difference in physical appearance between an ingot with the proper minor elements present and one without. Suppose that during several generations all of the ingots from India were coming from an ore body with the proper amount of minor elements present, and blades with good patterns were being produced. Then, after a few centuries, the ore source may have been exhausted or become inaccessible to the smithing community; therefore, the technique no longer worked. With time, the smiths who knew about the technique died out without passing it on to their apprentices (since it no longer worked), so even if a similar source was later found, the knowledge was no longer around to exploit it. The possible validity of this theory could be examined if data were available on the level of carbide-forming elements in the various ore deposits in India used to produce wootz steel.

ABOUT THE AUTHORS

J.D. Verhoeven is currently a professor in the Materials Science and Engineering Department at Iowa State University. A.H. Pendray is currently president of the Knifemakers Guild. W.E. Dauksch is retired as vice president and general manager of Nucor Steel Corporation.

For more information, contact J.D. Verhoeven, Iowa State University, Materials Science and Engineering Department, 104 Wilhelm Hall, Ames, Iowa 50011; (515) 294-9471; fax (515) 294-4291; jver@iastate.edu.


Go here to see the original:

The Key Role of Impurities in Ancient Damascus Steel Blades

Molecular Cloning

 Cloning  Comments Off on Molecular Cloning
Jun 17 2016
 

Molecular Cloning has served as the foundation of technical expertise in labs worldwide for 30 years. No other manual has been so popular, or so influential. Molecular Cloning, Fourth Edition, by the celebrated founding author Joe Sambrook and new co-author, the distinguished HHMI investigator Michael Green, preserves the highly praised detail and clarity of previous editions and includes specific chapters and protocols commissioned for the book from expert practitioners at Yale, U Mass, Rockefeller University, Texas Tech, Cold Spring Harbor Laboratory, Washington University, and other leading institutions. The theoretical and historical underpinnings of techniques are prominent features of the presentation throughout, information that does much to help troubleshoot experimental problems.

For the fourth edition of this classic work, the content has been entirely recast to include nucleic-acid based methods selected as the most widely used and valuable in molecular and cellular biology laboratories.

Core chapters from the third edition have been revised to feature current strategies and approaches to the preparation and cloning of nucleic acids, gene transfer, and expression analysis. They are augmented by 12 new chapters which show how DNA, RNA, and proteins should be prepared, evaluated, and manipulated, and how data generation and analysis can be handled.

The new content includes methods for studying interactions between cellular components, such as microarrays, next-generation sequencing technologies, RNA interference, and epigenetic analysis using DNA methylation techniques and chromatin immunoprecipitation. To make sense of the wealth of data produced by these techniques, a bioinformatics chapter describes the use of analytical tools for comparing sequences of genes and proteins and identifying common expression patterns among sets of genes.

Building on thirty years of trust, reliability, and authority, the fourth edition of Molecular Cloning is the new gold standard: the one indispensable molecular biology laboratory manual and reference source.


Praise for the previous edition:

Any basic research laboratory using molecular biology techniques will benefit from having a copy on hand of the newly published Third Edition of Molecular Cloning: A Laboratory Manual…the first two editions of this book have been staples of molecular biology with a proven reputation for accuracy and thoroughness. (The Scientist)

In every kitchen there is at least one indispensable cookbook…Molecular Cloning: A Laboratory Manual fills the same niche in the laboratory (with) information to help both the inexperienced and the advanced user. (It) has once again established its primacy as the molecular laboratory manual and is likely to be found on lab benches…around the world. (Trends in Neurosciences)

Molecular Cloning: A Laboratory Manual has always been the laboratory mainstay for protocols and techniques. It has a pure-bred ancestry, and the new edition does not disappoint. (It) includes information panels at the end of each chapter that describe the principles behind the protocols…. The addition of this information extends Molecular Cloning from an essential laboratory resource into a new realm, one merging the previous prototype with a modern molecular monograph…the next generation of Molecular Cloning not only carries on the proud heritage of the first two editions but also admirably expands on that tradition to provide a truly essential laboratory manual. (Trends in Microbiology)

Read the original here:

Molecular Cloning


Seasteading – Wikipedia, the free encyclopedia

 Seasteading  Comments Off on Seasteading – Wikipedia, the free encyclopedia
Jun 17 2016
 

Seasteading is the concept of creating permanent dwellings at sea, called seasteads, outside the territory claimed by any government. Most proposed seasteads have been modified cruising vessels. Other proposed structures have included a refitted oil platform, a decommissioned anti-aircraft platform, and custom-built floating islands.[1]

No one has created a state on the high seas that has been recognized as a sovereign state. The Principality of Sealand is a disputed micronation formed on a discarded sea fort near Suffolk, England.[2] The closest things to a seastead that have been built so far are large ocean-going ships sometimes called “floating cities”, and smaller floating islands.

The term combines the words sea and homesteading. At least two people independently began using it: Ken Neumeyer in his book Sailing the Farm (1981) and Wayne Gramlich in his article "Seasteading: Homesteading on the High Seas" (1998).[3]

Outside the Exclusive Economic Zone of 200 nautical miles (370 km), which countries can claim according to the United Nations Convention on the Law of the Sea, the high seas are not subject to the laws of any sovereign state other than the flag under which a ship sails. Examples of organizations using this possibility are Women on Waves, enabling abortions for women in countries where abortions are subject to strict laws, and offshore radio stations which were anchored in international waters. Like these organizations, a seastead would take advantage of the absence of laws and regulations outside the sovereignty of nations, and choose from among a variety of alternate legal systems such as those underwritten by "Las Portadas".[4]

“When Seasteading becomes a viable alternative, switching from one government to another would be a matter of sailing to the other without even leaving your house,” said Patri Friedman at the first annual Seasteading conference.[5][6][7]

The Seasteading Institute (TSI), founded by Wayne Gramlich and Patri Friedman on April 15, 2008, is an organization formed to facilitate the establishment of autonomous, mobile communities on seaborne platforms operating in international waters.[5][8][9] Gramlich's 1998 article "SeaSteading: Homesteading on the High Seas" outlined the notion of affordable steading, and attracted the attention of Friedman with his proposal for a small-scale project.[3] The two began working together and posted their first collaborative book online in 2001, which explored aspects of seasteading from waste disposal to flags of convenience.

The project picked up mainstream exposure in 2008 after having been brought to the attention of PayPal cofounder Peter Thiel, who contributed $500,000 to fund the creation of The Seasteading Institute and has since spoken out on behalf of its viability, as seen in his essay "The Education of a Libertarian",[10] published online by Cato Unbound. The Seasteading Institute has received widespread media attention from sources such as CNN, Wired,[5] Prospect,[11] The Economist,[9] Business Insider,[12] and the BBC.[13] American journalist John Stossel wrote an article about seasteading in February 2011 and hosted Friedman on his show on the Fox Business Network.[14]

On July 31, 2011, Friedman stepped down from the role of executive director, and became chairman of the board. Friedman was replaced by Randolph Hencken. Concomitantly, the institute’s directors of business strategy and legal strategy went on to start Blueseed, the first commercial seasteading venture.[15]

Between May 31 and June 2, 2012, The Seasteading Institute held its third annual conference.[16]

In the spring of 2013,[17] the Institute launched The Floating City Project,[18] which combines principles of both seasteading and startup cities,[19] by seeking to locate a floating city within the territorial waters of an existing nation, rather than the open ocean. The institute argued that it would be easier to engineer a seastead in relatively calm, shallow waters; that the location would make it easier for residents to reach as well as to acquire goods and services from existing supply chains; and that a host nation would place a floating city within the international legal framework.

The Institute raised $27,082 from 291 funders in a crowdfunding campaign[20] and commissioned DeltaSync[21] to design a floating city concept for The Floating City Project. In December 2013, the concept report was published. The Seasteading Institute has also been collecting data from potential residents through a survey.[22]

The first seasteads are projected to be cruise ships adapted for semi-permanent habitation. Cruise ships are a proven technology, and they address most of the challenges of living at sea for extended periods of time. The cost of the first shipstead was estimated at $10M.[23]

The Seasteading Institute has been working on communities floating above the sea in spar buoys, similar to oil platforms.[24] The project would start small, using proven technology as much as possible, and try to find viable, sustainable ways of running a seastead.[25] Innovations that enable full-time living at sea will have to be developed. The cruise ship industry’s development suggests this may be possible.

A proposed design for a custom-built seastead is a floating dumbbell in which the living area is high above sea level, which minimizes the influence of waves. In 2004, research was documented in an online book that covers living on the oceans.[26]

The Seasteading Institute focuses on three areas: building a community, doing research and building the first seastead in the San Francisco Bay. In January 2009, the Seasteading Institute patented a design for a 200-person resort seastead, ClubStead, about a city block in size, produced by consultancy firm Marine Innovation & Technology. ClubStead marked the first major development in hard engineering, from extensive analysis to simulations, of the seasteading movement.[9][26][27]

At the Seasteading Institute Forum, an idea arose to create an island from modules.[28] There are several different designs for the modules, with a general consensus that reinforced concrete is the most proven, sustainable and cost-effective material for seastead structures,[29] as indicated by use in oil platforms and concrete submarines. The company AT Design Office recently made another design using the modular island method.[30]

Many architects and firms have created designs for floating cities, including Vincent Callebaut,[31][32] Paolo Soleri,[33] and companies such as Shimizu and Tangram 3DS.[34] Marshall Savage also discussed building tethered artificial islands in his book The Millennial Project: Colonizing the Galaxy in Eight Easy Steps, with several color plates illustrating his ideas. Some design competitions have also yielded designs, such as those produced by Evolo and other companies.[35][36][37]

In 2008, Friedman and Gramlich had hoped to float the first prototype seastead in the San Francisco Bay by 2010,[38][39] but by 2010 the plan was to launch a seastead by 2014.[40] The Seasteading Institute projected in 2010 that the seasteading population would exceed 150 individuals in 2015.[41]

The Seasteading Institute held its first conference in Burlingame, California, on October 10, 2008; 45 people from 9 countries attended.[42] The second Seasteading conference was significantly larger, and was held in San Francisco, California, September 28-30, 2009.[43][44] The third Seasteading conference took place on May 31 - June 2, 2012.[45]

As of 2011, Blueseed was a company working on launching a ship near Silicon Valley to serve as a visa-free startup community and entrepreneurial incubator. The shipstead planned to offer living and office space, high-speed Internet connectivity, and regular ferry service to the mainland.[46][47] The project aimed to overcome the difficulty organizations face in obtaining US work visas: residents would use the easier B-1/B-2 visas to travel to the mainland, while work would be done on the ship.[46][47] Blueseed founders Max Marty and Dario Mutabdzija met when both were employees of The Seasteading Institute.[46][47]

Seasteading has been imagined numerous times in pop culture in recent years.

Read the original:

Seasteading – Wikipedia, the free encyclopedia


Utopia – New World Encyclopedia

 New Utopia  Comments Off on Utopia – New World Encyclopedia
Jun 15 2016
 

Utopia is a term denoting a visionary or ideally perfect state of society, whose members live the best possible life. The term Utopia was coined by Thomas More from the Greek words ou (no or not), and topos (place), as the name for the ideal state in his book, De optimo reipublicae statu deque nova insula Utopia (Louvain, 1516).

Utopianism refers to the various ways in which people think about, depict, and attempt to create a perfect society. Utopian thought deals with morality, ethics, psychology, and political philosophy, and often originates from the belief that reason and intelligence can bring about the betterment of society. It is usually characterized by optimism that an ideal society is possible. Utopianism plays an important role in motivating social and political change.

The adjective “utopian” is sometimes used in a negative connotation to discredit ideas as too advanced, too optimistic or unrealistic and impossible to realize. The term Utopian has also been used to describe actual communities founded in attempts to create an ideal economic and political system. Many works of utopian literature offer detailed and practical descriptions of an ideal society, but usually include some fatal flaw that makes the establishment of such a society impossible.

More's book is narrated by a Portuguese traveler named Raphael Hythlodaeus, who criticizes the laws and customs of European states while admiring the ideal institutions which he observes during a five-year sojourn on the island of Utopia.


Utopia is a perfect society, where poverty and misery have been eliminated, there are few laws and no lawyers, and the citizens, though ready to defend themselves if necessary, are pacifists. Citizens hold property in common, and care is taken to teach everyone a trade from which he can make a living, so that there is no need for crime. Agriculture is treated as a science and taught to children as part of their school curriculum; every citizen spends some of his life working on a farm. The people live in 54 cities, separated from each other by a distance of at least 24 miles. The rural population lives in communal farmhouses scattered through the countryside. Everyone works only six hours a day; this is sufficient because the people are industrious and do not require the production of useless luxuries for their consumption. A body of wise and educated representatives deliberates on public affairs, and the country is governed by a prince, selected from among candidates chosen by the people. The prince is elected for life, but can be removed from office for tyranny. All religions are tolerated and exist in harmony; atheism is not permitted since, if a man does not fear a god of some kind, he will commit evil acts and weaken society. Utopia rarely sends its citizens to war, but hires mercenaries from among its warlike neighbors, deliberately sending them into danger in the hope that the more belligerent populations of all surrounding countries will be gradually eliminated.

Utopia was first published in Louvain in 1516, without More's knowledge, by his friend Erasmus. It was not until 1551, sixteen years after More's execution as a traitor, that it was first published in England in an English translation.

Although some readers have regarded Utopia as a realistic blueprint for a working nation, More likely intended it as a satire, allowing him to call attention to European political and social abuses without risking censure by the king. The similarities to the ideas later developed by Karl Marx are evident, but More was a devout Roman Catholic and probably used monastic communalism as his model. The politics of Utopia have been seen as influential to the ideas of Anabaptism, Mormonism, and communism. An applied example of More's utopia can be seen in Vasco de Quiroga's implemented society in Michoacán, Mexico, which was directly taken and adapted from More's work.

The word utopia overtook More’s short work and has been used ever since to describe any type of imaginary ideal society. Although he may not have founded the genre of utopian and dystopian fiction, More certainly popularized it. Some of the early works which owe something to Utopia include The City of the Sun by Tommaso Campanella, Description of the Republic of Christianopolis by Johannes Valentinus Andreae, New Atlantis by Francis Bacon and Candide by Voltaire.

The more modern genre of science fiction frequently depicts utopian or dystopian societies in fictional works such as Aldous Huxley's Brave New World (1932), Lost Horizon by James Hilton (1933), "A Modern Utopia" (1905) and New Worlds for Old (1908) by H. G. Wells, The Great Explosion by Eric Frank Russell (1963), News From Nowhere by William Morris, Andromeda Nebula (1957) by Ivan Efremov, 1984 (1949) by George Orwell, and The Giver (1993) by Lois Lowry. Authors of utopian fiction are able to explore some of the problems raised by utopian concepts and to develop interesting consequences. Many works make use of an outsider, a time-traveler or a foreigner, who observes the features of the society and describes them to the reader.

Utopian thought is born from the premise that through reason and intelligence, humankind is capable of creating an ideal society in which every individual can achieve fulfillment without infringing on the happiness and well-being of the other members of society. It includes the consideration of morality, ethics, psychology, and social and political philosophy. Utopian thinking is generally confined to physical life on earth, although it may include the preparation of the members of society for a perceived afterlife. It invariably includes criticism of the current state of society and seeks ways to correct or eliminate abuses. Utopianism is characterized by tension between philosophical ideals and the practical realities of society, such as crime and immorality; there is also a conflict between respect for individual freedom and the need to maintain order. Utopian thinking implies a creative process that challenges existing concepts, rather than an ideology or justification for a belief system which is already in place.

Two of Plato's dialogues, Republic and Laws, contain one of the earliest attempts to define a political organization that would not only allow its citizens to live in harmony, but would also provide the education and experience necessary for each citizen to realize his highest potential.

During the nineteenth century, thinkers such as Henri Saint-Simon, Charles Fourier, and Etienne Cabet in France, and Robert Owen in England popularized the idea of creating small, experimental communities to put philosophical ideals into practice. Karl Marx and Friedrich Engels recognized that utopianism offered a vision for a better future, a vision that contributed much to Marxism, but they also criticized utopian writers’ lack of a wider understanding of social and political realities which could contribute to actual political change. Herbert Marcuse made a distinction between abstract utopias based on fantasy and dreams, and concrete utopias based on critical social theory.

Utopianism is considered to originate in the imaginative capacity of the subconscious mind, which is able to transcend conscious reality by projecting images of hopes, dreams, and desires. Utopian ideas, though they may never be fully realized, play an important role in bringing about positive social change. They allow thinkers to distance themselves from the existing reality and consider new possibilities. The optimism that a better society can be achieved provides motivation and a focal point for those involved in bringing about social or political change. Abolitionism, women's rights and feminism, the Civil Rights movement, the establishment of a welfare system to take care of the poor, the Red Cross, and multiculturalism are all examples of utopian thinking applied to practical life.

The harsh economic conditions of the nineteenth century and the social disruption created by the development of commercialism and capitalism led several writers to imagine economically utopian societies. Some were characterized by a variety of socialist ideas: an equal distribution of goods according to need, frequently with the total abolition of money; citizens laboring for the common good; citizens doing work which they enjoyed; and ample leisure time for the cultivation of the arts and sciences. One such utopia was described in Edward Bellamy’s Looking Backward. Another socialist utopia was William Morris’ News from Nowhere, written partially in criticism of the bureaucratic nature of Bellamy’s utopia.

Capitalist utopias, such as the one portrayed in Robert A. Heinlein's The Moon Is a Harsh Mistress or Ayn Rand's The Fountainhead, are generally individualistic and libertarian, and are based on perfect market economies, in which there is no market failure. Eric Frank Russell's book The Great Explosion (1963) details an economic and social utopia, the first to mention the idea of Local Exchange Trading Systems (LETS).

Political utopias are ones in which the government establishes a society that is striving toward perfection. These utopias are based on laws administered by a government, and often restrict individualism when it conflicts with the primary goals of the society. Sometimes the state or government replaces religious and family values. A global utopia of world peace is often seen as one of the possible inevitable ends of history.

Through history a number of religious communities have been created to reflect the virtues and values they believe have been lost or which await them in the Afterlife. In the United States and Europe during and after the Second Great Awakening of the nineteenth century, many radical religious groups sought to form communities where all aspects of people’s lives could be governed by their faith. Among the best-known of these utopian societies were the Puritans, and the Shaker movement, which originated in England in the eighteenth century but moved to America shortly after its founding.

The most common utopias are based on religious ideals, and usually require adherence to a particular religious tradition. The Jewish, Christian and Islamic concepts of the Garden of Eden and Heaven may be interpreted as forms of utopianism, especially in their folk-religious forms. Such religious "utopias" are often described as "gardens of delight," implying an existence free from worry in a state of bliss or enlightenment. They postulate existences free from sin, pain, poverty and death, and often assume communion with beings such as angels or the houri. In a similar sense the Hindu concept of Moksha and the Buddhist concept of Nirvana may be thought of as a kind of utopia.

Many cultures and cosmogonies include a myth or memory of a distant past when humankind lived in a primitive and simple state of perfect happiness and fulfillment. The various myths describe a time when there was an instinctive harmony between man and nature, and man's needs were easily supplied by the abundance of nature. There was no motive for war or oppression, or any need for hard and painful work. Humans were simple and pious, and felt themselves close to the gods. These mythical or religious archetypes resurge with special vitality during difficult times, when the myth is not projected towards the remote past, but towards the future or a distant and fictional place (for example, The Land of Cockaygne, a straightforward parody of a paradise), where the possibility of living happily must exist.

Golden Age

Works and Days, a compilation of the mythological tradition by the Greek poet Hesiod around the eighth century B.C.E., explained that, prior to the present era, there were four progressively more perfect ages, the oldest being the Golden Age.

A medieval poem (c. 1315), entitled "The Land of Cokaygne," depicts a land of extravagance and excess where cooked larks flew straight into one's mouth; the rivers ran with wine, and a fountain of youth kept everyone young and active.

Scientific and technical utopias are set in the future, when it is believed that advanced science and technology will allow utopian living standards; for example, the absence of death and suffering; changes in human nature and the human condition. These utopian societies tend to change what “human” is all about. Normal human functions, such as sleeping, eating and even reproduction are replaced by artificial means.


New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution; credit is due under the terms of this license to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.


Follow this link:

Utopia – New World Encyclopedia


What is Posthumanism? | The Curator

 Posthumanism  Comments Off on What is Posthumanism? | The Curator
Jun 15 2016
 

Perhaps you have had a nightmare in which you fell through the bottom of your known universe into a vortex of mutated children, talking animals, mental illness, freakish art, and clamoring gibberish. There, you were subjected to the gaze of creatures of indeterminate nature and questionable intelligence. Your position as the subject of your own dream was called into question while voices outside your sight commented upon your tenuous identity. When you woke, you were relieved to find that it was only a dream-version of the book you were reading when you fell asleep. Maybe that book was Alice in Wonderland; maybe it was What is Posthumanism?

Now, it is not quite fair to compare Cary Wolfe's sober, thoughtful scholarship with either a nightmare or a work of (children's?) fantasy. It is a profound, thoroughly researched study with far-reaching consequences for public policy, bioethics, education, and the arts. However, it does present a rather odd dramatis personae, including a glow-in-the-dark rabbit, a woman who feels most at ease in a cattle chute, an artist of Jewish descent who implants an ID-chip in his own leg, researchers who count the words in a dog's vocabulary, and horses who exhibit more intelligence than the average human toddler. The settings, too, are often wildly different from those you might expect in an academic work: a manufactured cloud hovering over a lake in Switzerland, a tree park in Canada where landscape and architecture blend and redefine one another, recording studios, photographic laboratories, slaughterhouses, and (most of all) the putative minds of animals and the deconstructed minds of the very humans whose ontological existence it seeks to problematize.

But that is another exaggeration. Wolfe's goal is not to undermine the existence or value of human beings. Rather, it is to call into question the universal ethics, assumed rationality, and species-specific self-determination of humanism. That is a mouthful.

Indeed, Wolfe's book is a mouthful, and a headful. It is in fact a book by a specialist, for specialists. While Wolfe is an English professor (at Rice University) and identifies himself with literary and cultural studies (p. 100), this is first of all a work of philosophy. Its ideal audience is very small, consisting of English and Philosophy professors who came of age in the 70s, earned their Ph.D.s during the heyday of Derridean Deconstruction, and have spent the intervening decades keeping up with trends in systems theory, cultural studies, science, bioethics, and information technology. It is rigorous and demanding, especially in its first five chapters, which lay the conceptual groundwork for the specific analyses of the second section.

In these first five chapters, Wolfe describes his perspective and purpose by interaction with many other great minds and influential texts, primarily those of Jacques Derrida. Here, the fundamental meaning and purpose of Posthumanism becomes clear. Wolfe wants his readers to rethink their relationship to animals (what he calls nonhuman animals). His goal is a new and more inclusive form of ethical pluralism (137). That sounds innocuous enough, but he is not talking about racial, religious, or other human pluralisms. He is postulating a pluralism that transcends species. In other words, he is promoting the ethical treatment of animals based on a fundamental re-evaluation of what it means to be human, to be able to speak, and even to think. He does this by discussing studies that reveal the language capacities of animals (a dog apparently has about a 200-word vocabulary and can learn new words as quickly as a human three-year-old; pp. 32-33), by recounting the story of a woman whose Asperger's syndrome enables her to empathize with cows and sense the world the way they do (chapter five), and by pointing out the ways in which we value disabled people who do not possess the standard traits that (supposedly) make us human.

But Wolfe goes further than a simple suggestion that we should be nice to animals (and the unspoken plug for universal veganism). He is proposing a radical disruption of liberal humanism and a rigorous interrogation of what he sees as an arrogant complacency about our species. He respects any variety of philosophy that challenges anthropocentrism and speciesism (62): anthropocentrism, of course, means viewing the world as if homo sapiens is the center (or, more accurately, viewing the world from the position of occupying that center), and speciesism is the term he uses as an analogue of racism. We used to feel and enact prejudice against people of different ethnic backgrounds, he suggests, but we now know that is morally wrong. The time has come, then, to realize that we are feeling and enacting prejudice against people of different species.

Although Wolfe suggests many epistemological and empirical reasons for rethinking the personhood of animals, he comes to the conclusion that our relationship with them is based on our shared embodiment. Humans and animals have a shared finitude (139); we can both feel pain, suffer, and die. On the basis of our mutual mortality, then, we should have an emphasis on compassion (77). He is not out to denigrate his own species; far from it. Indeed, he goes out of his way to spend time discussing infants (who have not yet developed rationality and language), people with disabilities (especially those that prevent them from participating in fully rational thought and/or communication), and the elderly (who may lose some of those rational capacities, especially if racked by such ailments as Alzheimer's). Indeed, he claims: It is not by denying the special status of human being[s] but by intensifying it that we can come to think of nonhuman animals as fellow creatures (77).

This joint focus on the special status of all human beings along with the other living creatures roaming (or swimming, flying, crawling, slithering) the globe has far-reaching consequences for public policy, especially bioethics. Wolfe says that, currently, bioethics is riddled with prejudices: Of these prejudices, none is more symptomatic of the current state of bioethics than prejudice based on species difference, and an incapacity to address the ethical issues raised by dramatic changes over the past thirty years in our knowledge about the lives, communication, emotions, and consciousnesses of a number of nonhuman species (56). One of the goals of his book, then, is to reiterate that knowledge and promote awareness of those issues that he sees as ethical.

If you read Wolfe's book, or even parts of it, you will suddenly see posthumanism everywhere. You can trace its influence in the enormously fast-growing pet industry. From the blog Pawsible Marketing: As in recent and past years, there is no doubt that pets continue to become more and more a part of the family, even to the extent of becoming, in some cases, humanized.

You will see it in bring-your-pet-to-work or bring-your-pet-to-school days. You might think it is responsible for the recent introduction of a piece of legislation called H.R. 3501, The Humanity and Pets Partnered Through the Years, known as the HAPPY Act, which proposes a tax deduction for pet owners. You will find it in children's books about talking animals. You will see it on Animal Planet, the Discovery Channel, and a PBS series entitled Inside the Animal Mind. You will find it in films, such as the brand-new documentary The Cove, which records the brutal slaughter of dolphins for food. And you will see it in works of art.

Following this reasoning, section two of Wolfe's book (chapters six through eleven) veers off from the strictly philosophical approach into the more traditional terrain of cultural studies: he examines specific works of art in light of the philosophical basis that is now firmly in place. Interestingly, he does not choose all works of art that depict animals, nor those that displace humans. He begins with works that depict animals (Sue Coe's paintings of slaughterhouses) and that use animals (Eduardo Kac's creation of genetically engineered animals that glow in the dark), but then moves on to discuss film, architecture, poetry, and music. In each of these examinations, he works to destabilize traditional binaries such as nature/culture, landscape/architecture, viewer/viewed, presence/absence, organic/inorganic, natural/artificial, and, really, human/nonhuman. This second section, then, is a subtle application of the theory of posthumanism itself to the arts, [our] environment, and [our] identity.

What is perhaps most important about What is Posthumanism remains latent in the text. This is its current and (especially) future prevalence. By tracing the history of posthumanism back through systems theory into deconstruction, Wolfe implies a future trajectory, too. I would venture to suggest that he believes posthumanism is the worldview that will soon come to dominate Western thought. And this is important for academics specifically and thinkers in general to realize.

Whether you agree with Cary Wolfe or not, it would be wise to understand posthumanism. It appears that your only choice will be either to align yourself with this perspective or to fight against it. If you agree, you should know with what. If you fight, you should know against what.

What, then, is the central thesis of posthumanism? Wolfe's entire project might be summed up in his bold claim that, thanks to his own work and that of the theorists and artists he discusses, the human occupies a new place in the universe, a universe now populated by what I am prepared to call nonhuman subjects (47). Such subjects as talking rabbits, six-inch people, and mythical monsters?

Well, maybe not the mythical monsters.

More here:

What is Posthumanism? | The Curator



