
21 Actionable SEO Techniques You Can Use Right Now

May 23, 2016

by Brian Dean | Last updated May 20, 2016

People that succeed with SEO do two things very well:

First, they identify SEO techniques that get them results.

Second, they put 100% of their resources into executing and scaling those techniques.

But you're probably wondering:

How do I find SEO strategies that actually work?

Well, today I'm going to make it easy for you.

All you need to do is carve out a few minutes of your day and tackle one of the 21 white hat SEO techniques below.

Free PDF Download: Get access to the free bonus checklist that will show you how to quickly execute these strategies. Includes 2 bonus techniques not found in this post.

Broken link building has it all:

Scalable.

White hat.

Powerful.

There's only one problem: finding broken links is a HUGE pain.

That is, unless you know about a little-known wrinkle in Wikipedia's editing system.

You see, when a Wikipedia editor stumbles on a dead link, they don't delete the link right away.

Instead, they add a footnote next to the link that says "dead link":

This footnote gives other editors a chance to confirm that the link is actually dead before removing it.

And that simple footnote makes finding broken links dead simple.

Here's how:

First, use this simple search string:

site:wikipedia.org [keyword] + "dead link"

For example, if you were in the investing space you'd search for something like this: site:wikipedia.org investing + "dead link"

Next, visit a page in the search results that's relevant to your site:

Hit ctrl + f and search for "dead link":

Your browser will jump to any dead links in the references section:

Pro Tip: Wikipedia actually has a list of articles with dead links. This makes finding dead links in Wikipedia even easier.
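That "dead link" footnote corresponds to the {{dead link}} template in a page's wikitext, which means the hunt can even be scripted. Here is a minimal Python sketch; the regex and the sample wikitext are illustrative only, not a robust wikitext parser:

```python
import re

# Wikipedia editors tag dead citations with the {{dead link}} template.
# This sketch scans raw wikitext and returns each URL that is followed,
# on the same line, by such a tag.
DEAD_LINK = re.compile(
    r"(https?://[^\s\]]+)[^{\n]*\{\{\s*dead link",
    re.IGNORECASE,
)

def find_dead_links(wikitext: str) -> list:
    """Return URLs in `wikitext` tagged with {{dead link}}."""
    return DEAD_LINK.findall(wikitext)

sample = (
    "* [http://example.com/old-guide Investing guide]{{Dead link|date=May 2016}}\n"
    "* [http://example.com/live-page Live page]\n"
)
print(find_dead_links(sample))  # ['http://example.com/old-guide']
```

In practice you would fetch each page's wikitext (for example via Wikipedia's API) and run every relevant page in your niche through a function like this.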

OK. So once you've found a dead link, now what?

Well, you could re-create that dead resource on your site and replace the dead link in Wikipedia with yours.

But that would only land you a single link (and a nofollow link at that).

Instead, I recommend tapping into The Moving Man Method.

This post will show you everything you need to know:

Now for our next SEO technique…

Hit the play button to see how it's done:

Last year I got an email out of the blue:

Turns out Emil used The Skyscraper Technique to achieve these impressive results.

Not only that, but Emil wanted to share his case study with the Backlinko community.

That's when I had an idea:

Instead of writing a new post for Emil's case study, why don't I add it to an existing post?

So thats what I did.

Specifically, I added Emil's case study to this old post:

(I also updated the images and added some new tips)

The final result?

A new and improved version of the post:

To make sure the new post got the attention it deserved, I re-promoted it by sending an email to the Backlinko community:

I also shared it on social media:

The result?

A 111.37% increase in organic traffic to that page.

Pretty cool, right?

It's no secret that compelling title and description tags get more clicks in the SERPs.

(In fact, REALLY good copy can actually steal traffic from the top 3 results)

Question is: How do you know what people want to click on?

That's easy: look at that keyword's AdWords ads.

You see, the AdWords ads that you see for competitive keywords are the result of hundreds (if not thousands) of split tests.

Split tests to maximize clicks.

And you can use copy from these ads to turn your title and description tags into click magnets.

For example, let's say you were going to publish a blog post optimized around the keyword "glass water bottles".

First, take a look at the AdWords ads for that keyword:

Keep an eye out for interesting copy from the ads that you can work into your title and description. In our "glass water bottles" example, we have phrases like:

Here's how your title and description tags might look:

As you can see, these tags include words that are proven to generate clicks.
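For illustration, here is roughly what those tags might look like in your page's HTML; the specific benefit phrases ("BPA-free," "dishwasher-safe," "ship free") are hypothetical stand-ins for copy you would lift from the actual ads:

```html
<!-- Hypothetical title/description for a post targeting "glass water bottles".
     The benefit phrases stand in for copy borrowed from the AdWords ads. -->
<title>Glass Water Bottles: 15 BPA-Free, Dishwasher-Safe Picks</title>
<meta name="description" content="Looking for a glass water bottle? These 15 BPA-free, dishwasher-safe picks ship free and won't leak.">
```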

What if there was an up-to-date list of blogs in your niche that you could use to find quality link opportunities?

I have good news. There is.

And it's called AllTop.

AllTop is a modern-day directory that curates the best blogs in every industry under the sun.

To find blogs in your niche, just go to the AllTop homepage and search for a keyword:

Next, find a category that fits with your sites topic:

And AllTop will show you their hand-picked list of quality blogs in that category:

Now you have a long list of some of the best blogs in your industry. And these bloggers are the exact people that you want to start building relationships with.

Let's face it: Most content curation is pretty weak.

I think I speak for everyone when I say that I've read enough "top 100 posts you need to read" lists for one lifetime.

So how can you make your content curation stand out?

By tapping into Benefit-Focused Content Curation.

Benefit-Focused Content Curation is similar to most other types of curation, with one huge difference: it focuses on the outcomes that your audience wants.

I'm sure you'd like to see an example.

Here you go:

This is a guide I put together a while back called "Link Building: The Definitive Guide."

This guide has generated over 116,000 visitors from social media, forums, blogs and search engines:

(I should point out that the guide's design and promotion contributed to its success. But it all started with how the content itself was organized.)

What makes this guide's curation unique is that it's organized by benefits, not topics.

For example, Chapter 2 is called "How to Get Top Notch Links Using Content Marketing":

Note that the title isn't "Chapter 2: Content Marketing." And most of the other chapters follow the same benefit-driven formula.

Why is this so important?

When someone sees a list of curated links, they immediately ask themselves, "what's in it for me?"

And when you organize your content around outcomes and benefits, that answer becomes really, really obvious.

In other words, answering "what's in it for me?" makes the perceived value of your content MUCH higher than a list of 100 random resources.

With all the talk about Hummingbird and Penguin, it's easy to forget about an important Google algorithm update from 2003 called Hilltop.

Despite being over ten years old, Hilltop still plays a major role in today's search engine landscape.

Hilltop is essentially an on-page SEO signal that tells Google whether or not a page is a hub of information.

So: How does Google know which pages are hubs?

It's simple: Hubs are determined by the quality and relevancy of that page's outbound links.

This makes total sense if you think about it…

The pages you link out to tend to reflect the topic of your page.

And pages that link to helpful resources also tend to be higher-quality than pages that only link to their own stuff.

In other words, pages that link out to awesome resources establish themselves as hubs of helpful content in the eyes of Big G.

In fact, a recent industry study found a correlation between outbound links and Google rankings.

Bottom line:

Link to at least 3 quality, relevant resources in every piece of content that you publish.

That will show Google that your page is a Hilltop Hub.

Read the rest here:
21 Actionable SEO Techniques You Can Use Right Now

Transhumanism – RationalWiki

Mar 25, 2016

"You know what they say the modern version of Pascal's Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God." (Julie, from "Crystal Nights" by Greg Egan)

Transhumanism (or H+), broadly speaking, is a futurist movement whose beliefs share a common theme of anticipating an evolutionary plateau beyond the current Homo sapiens. The term was coined, and the movement founded, by the biologist Julian Huxley in 1957.

The general expectation is that in the near future greater manipulation of human nature will be possible because of the adoption of techniques apparent on the technological frontier: machine intelligence greater than that of contemporary humans, direct mind-computer interface, genetic engineering and nanotechnology. Transhumanists tend to believe that respect for human agency, even when practiced by humans in their current form, is valuable, however.

How plausible is transhumanism? In the 1930s, many sensible people were sure human beings would never get to the moon, and that was just one of many predictions that turned out incorrect.[1] Early 21st-century people do not know one way or the other what will be possible in the future.

While frequently dismissed as mere speculation at best by most rationalists[citation needed] (especially in light of the many failures of artificial intelligence), transhumanism is a strongly-held belief among many computer geeks, notably synthesizer and accessible-computing guru Ray Kurzweil, a believer in the "technological singularity," where technology evolves beyond humanity's current capacity to understand or anticipate it, and Sun Microsystems founder and Unix demigod Bill Joy, who believes the inevitable result of AI research is the obsolescence of humanity.[2]

Certain recent technological advances are making the possibility of the realization of transhumanism appear more plausible: Scientists funded by the military developed an implant that can translate motor neuron signals into a form that a computer can use, thus opening the door for advanced prosthetics capable of being manipulated like biological limbs and producing sensory information.[3] This is on top of the earlier development of cochlear implants, which translate sound waves into nerve signals; they are often called “bionic ears.”[4]

Even DIY transhumanism is becoming an option, with people installing magnetic implants, allowing them to feel magnetic and electric fields.[5] Others have taken to wearing belts of magnets, in order to always be able to find magnetic north. Prosthetic limbs with some level of touch are also now being developed, a major milestone. [6]

Sadly, the thinking of some followers of transhumanism[citation needed] is based on a sort of blind-men-and-the-elephant reasoning: assuming that because something can be imagined, it must be possible. Transhumanism is particularly associated with figures in computer science, which is a field that is in some ways more math and art than a true experimental science; as a result, a great many[citation needed] transhumanists are technophiles with inevitabilist, techno-utopian outlooks.

The example of the singularity is instructive; for a great many people, at least part of the singularity hinges on being able to create a true artificial intelligence. While it's reasonable to contend that the complexity inherent in the human brain is entirely the result of mundane physics, and therefore can be reproduced in principle, singularitarians[citation needed] tend to assume that because the emulation of human intelligence is not impossible, we will have the ability to do it in the near future.

“Whole brain emulation” (WBE) is a term used by transhumanists to refer to, quite obviously, the emulation of a brain on a computer. While this is no doubt a possibility, it encounters two problems that keep it from being a certainty anytime in the near future.

The first is a philosophical objection: For WBE to work, “strong AI” (i.e. AI equivalent to or greater than human intelligence) must be attainable. A number of philosophical objections have been raised against strong AI, generally contending either that the mind or consciousness is not computable or that a simulation of consciousness is not equivalent to true consciousness (whatever that is). There is still controversy over strong AI in the field of philosophy of mind.[7]

A second possible objection is technological: WBE may not defy physics, but the technology to fully simulate a human brain (in the sense meant by transhumanists, at least) is a long way away. Currently, no computer (or network of computers) is powerful enough to simulate a human brain. Henry Markram, head of the Blue Brain Project, estimates that simulating a brain would require 500 petabytes of data for storage and that the power required to run the simulation would cost about $3 billion annually. (However, in 2008 he optimistically predicted this would be possible within ten years.[8]) In addition to technological limitations in computing, there are also the limits of neuroscience. Neuroscience currently relies on technology that can only scan the brain at the level of gross anatomy (e.g., fMRI, PET). Forms of single-neuron imaging (SNI) have been developed recently, but they can only be used on animal subjects (usually rats) because they destroy neural tissue.[9]

Another transhumanist goal is mind uploading, which is one way they claim we will be able to achieve immortality. Aside from the problems with WBE listed above, mind uploading suffers from a philosophical problem, namely the "swamp man problem." That is, will the "uploaded" mind be "you" or simply a copy or facsimile of your mind? However, one possible way round this problem would be via incremental replacement of parts of the brain with their cybernetic equivalents (the patient being awake during each operation). Then there is no "breaking" of the continuity of the individual's consciousness, and it becomes difficult for proponents of the "swamp man" hypothesis to pinpoint exactly when the individual stops being "themselves."

Cryonics is another favorite of many transhumanists. In principle, cryonics is not impossible, but the current form of it is based largely on hypothetical future technologies and costs substantial amounts of money.

Fighting aging and extending life expectancy is possible; the field that studies aging and attempts to provide suggestions for anti-aging technology is known as "biogerontology." Aubrey de Grey, a transhumanist, has proposed a number of treatments for aging. In 2005, 28 scientists working in biogerontology signed a letter to EMBO Reports pointing out that de Grey's treatments had never been demonstrated to work and that many of his claims for anti-aging technology were extremely inflated.[10]

Worst of all, some transhumanists outright ignore what people in the fields they're interested in tell them; a few AI boosters, for example, believe that neurobiology is an outdated science because AI researchers can do it themselves anyway.[citation needed] They seem to have taken literally the analogy used to introduce the computational theory of mind: "the mind (or brain) is like a computer." Of course, the mind/brain is not a computer in the usual sense.[11] Debates with such people can take on the wearying feel of a debate with a creationist or climate change denialist, as such people will stick to their positions no matter what. Indeed, many critics are simply dismissed as Luddites or woolly-headed romantics who oppose scientific and technological progress.[12]

Transhumanism has often been criticized for not taking ethical issues seriously on a variety of topics,[13] including life extension technology,[14] cryonics,[15] and mind uploading and other enhancements.[16][17] Francis Fukuyama (in his doctrinaire neoconservative days) caused a stir by naming transhumanism “the world’s most dangerous idea.”[18] One of Fukuyama’s criticisms, that implementation of the technologies transhumanists push for will lead to severe inequality, is a rather common one.

A number of political criticisms of transhumanism have been made as well. Transhumanist organizations have been accused of being in the pocket of corporate and military interests.[19] The movement has been identified with Silicon Valley due to the fact that some of its biggest backers, such as Peter Thiel (of PayPal fame), reside in the region.[20][21] Some writers see transhumanism as a hive of cranky and obnoxious techno-libertarianism.[22][23] The fact that Julian Huxley coined the term "transhumanism", together with many transhumanists' obsession with constructing a Nietzschean Übermensch known as the "posthuman", has led to comparisons with eugenics.[24][19] Like eugenics, it has been characterized as a utopian political ideology.[25] Jaron Lanier slammed it as "cybernetic totalism".[26]

Some tension has developed between transhumanism and religion, particularly Christianity. Some transhumanists, generally being atheistic naturalists, see all religion as an impediment to scientific and technological advancement, and some Christians oppose transhumanism because of its stance on cloning and genetic engineering and label it as a heretical belief system.[27] Other transhumanists, however, have attempted to extend an olive branch to Christians.[28] Some have tried to reconcile their religion and techno-utopian beliefs, calling for a "scientific theology."[29] There is even a Mormon transhumanist organization.[30] Ironically for the atheistic transhumanists, the movement has itself been characterized as a religion and its rhetoric compared to Christian apologetics.[31][32]

The very small transhumanist political movement[wp] has gained momentum with Zoltan Istvan[wp] announcing his bid for US president, with the Transhumanist Party and other small political parties gaining support internationally.

The important thing about transhumanism is that while a lot of such predictions may in fact be possible (and may even be in their embryonic stages right now), a strong skeptical eye is required for any claimed prediction about the fields it covers. When evaluating such a claim, one will probably need a trip to a library (or Wikipedia, or a relevant scientist’s home page) to get up to speed on the basics.[33]

For decades, science fiction has portrayed the prospect of transcending the current human form as either positive, as in Arthur C. Clarke's 1953 novel Childhood's End, or negative, as in the film The Matrix, with its barely disguised salvationist theme, or the Terminator series of films, where humanity has been essentially replaced by machine life. Change so radical elicits fear, and thus it is unsurprising that many of the portrayals of transhumanism in popular culture are negative. The cyberpunk genre deals extensively with the theme of a transhumanist society gone wrong.

Among the utopian visions of transhumanism (fused with libertarianism) are those found in the collaborative online science fiction setting Orion's Arm. Set in a post-singularity future 10,000 years from now, Orion's Arm is massively optimistic about genetic engineering and continued improvements in computing and materials science. Because only technology which has been demonstrated to be impossible is excluded, even remotely plausible concepts have a tendency to be thrown in. At the highest end of the scale are artificial wormhole creation, baby universes, and inertia without mass.[34]

Read more from the original source:

Transhumanism – RationalWiki

Transhumanism by Julian Huxley (1957)

Mar 25, 2016

In New Bottles for New Wine, London: Chatto & Windus, 1957, pp. 13-17

As a result of a thousand million years of evolution, the universe is becoming conscious of itself, able to understand something of its past history and its possible future. This cosmic self-awareness is being realized in one tiny fragment of the universe: in a few of us human beings. Perhaps it has been realized elsewhere too, through the evolution of conscious living creatures on the planets of other stars. But on this our planet, it has never happened before.

Evolution on this planet is a history of the realization of ever new possibilities by the stuff of which earth (and the rest of the universe) is made: life; strength, speed and awareness; the flight of birds and the social polities of bees and ants; the emergence of mind, long before man was ever dreamt of, with the production of colour, beauty, communication, maternal care, and the beginnings of intelligence and insight. And finally, during the last few ticks of the cosmic clock, something wholly new and revolutionary: human beings with their capacities for conceptual thought and language, for self-conscious awareness and purpose, for accumulating and pooling conscious experience. For do not let us forget that the human species is as radically different from any of the microscopic single-celled animals that lived a thousand million years ago as they were from a fragment of stone or metal.

The new understanding of the universe has come about through the new knowledge amassed in the last hundred years by psychologists, biologists, and other scientists, by archaeologists, anthropologists, and historians. It has defined man's responsibility and destiny: to be an agent for the rest of the world in the job of realizing its inherent potentialities as fully as possible.

It is as if man had been suddenly appointed managing director of the biggest business of all, the business of evolution; appointed without being asked if he wanted it, and without proper warning and preparation. What is more, he can't refuse the job. Whether he wants to or not, whether he is conscious of what he is doing or not, he is in point of fact determining the future direction of evolution on this earth. That is his inescapable destiny, and the sooner he realizes it and starts believing in it, the better for all concerned.

What the job really boils down to is this: the fullest realization of man's possibilities, whether by the individual, by the community, or by the species in its processional adventure along the corridors of time. Every man-jack of us begins as a mere speck of potentiality, a spherical and microscopic egg-cell. During the nine months before birth, this automatically unfolds into a truly miraculous range of organization: after birth, in addition to continuing automatic growth and development, the individual begins to realize his mental possibilities: by building up a personality, by developing special talents, by acquiring knowledge and skills of various kinds, by playing his part in keeping society going. This post-natal process is not an automatic or a predetermined one. It may proceed in very different ways according to circumstances and according to the individual's own efforts. The degree to which capacities are realized can be more or less complete. The end-result can be satisfactory or very much the reverse: in particular, the personality may grievously fail in attaining any real wholeness. One thing is certain, that the well-developed, well-integrated personality is the highest product of evolution, the fullest realization we know of in the universe.

The first thing that the human species has to do to prepare itself for the cosmic office to which it finds itself appointed is to explore human nature, to find out what are the possibilities open to it (including, of course, its limitations, whether inherent or imposed by the facts of external nature). We have pretty well finished the geographical exploration of the earth; we have pushed the scientific exploration of nature, both lifeless and living, to a point at which its main outlines have become clear; but the exploration of human nature and its possibilities has scarcely begun. A vast New World of uncharted possibilities awaits its Columbus.

The great men of the past have given us glimpses of what is possible in the way of personality, of intellectual understanding, of spiritual achievement, of artistic creation. But these are scarcely more than Pisgah glimpses. We need to explore and map the whole realm of human possibility, as the realm of physical geography has been explored and mapped. How to create new possibilities for ordinary living? What can be done to bring out the latent capacities of the ordinary man and woman for understanding and enjoyment; to teach people the techniques of achieving spiritual experience (after all, one can acquire the technique of dancing or tennis, so why not of mystical ecstasy or spiritual peace?); to develop native talent and intelligence in the growing child, instead of frustrating or distorting them? Already we know that painting and thinking, music and mathematics, acting and science can come to mean something very real to quite ordinary average boys and girls, provided only that the right methods are adopted for bringing out the children's possibilities. We are beginning to realize that even the most fortunate people are living far below capacity, and that most human beings develop not more than a small fraction of their potential mental and spiritual efficiency. The human race, in fact, is surrounded by a large area of unrealized possibilities, a challenge to the spirit of exploration.

The scientific and technical explorations have given the Common Man all over the world a notion of physical possibilities. Thanks to science, the under-privileged are coming to believe that no one need be underfed or chronically diseased, or deprived of the benefits of science's technical and practical applications.

The world's unrest is largely due to this new belief. People are determined not to put up with a subnormal standard of physical health and material living now that science has revealed the possibility of raising it. The unrest will produce some unpleasant consequences before it is dissipated; but it is in essence a beneficent unrest, a dynamic force which will not be stilled until it has laid the physiological foundations of human destiny.

Once we have explored the possibilities open to consciousness and personality, and the knowledge of them has become common property, a new source of unrest will have emerged. People will realize and believe that if proper measures are taken, no one need be starved of true satisfaction, or condemned to sub-standard fulfillment. This process too will begin by being unpleasant, and end by being beneficent. It will begin by destroying the ideas and the institutions that stand in the way of our realizing our possibilities (or even deny that the possibilities are there to be realized), and will go on by at least making a start with the actual construction of true human destiny.

Up till now human life has generally been, as Hobbes described it, nasty, brutish and short; the great majority of human beings (if they have not already died young) have been afflicted with misery in one form or another: poverty, disease, ill-health, over-work, cruelty, or oppression. They have attempted to lighten their misery by means of their hopes and their ideals. The trouble has been that the hopes have generally been unjustified, the ideals have generally failed to correspond with reality.

The zestful but scientific exploration of possibilities and of the techniques for realizing them will make our hopes rational, and will set our ideals within the framework of reality, by showing how much of them are indeed realizable. Already, we can justifiably hold the belief that these lands of possibility exist, and that the present limitations and miserable frustrations of our existence could be in large measure surmounted. We are already justified in the conviction that human life as we know it in history is a wretched makeshift, rooted in ignorance; and that it could be transcended by a state of existence based on the illumination of knowledge and comprehension, just as our modern control of physical nature based on science transcends the tentative fumblings of our ancestors, that were rooted in superstition and professional secrecy.

To do this, we must study the possibilities of creating a more favourable social environment, as we have already done in large measure with our physical environment. We shall start from new premises. For instance, that beauty (something to enjoy and something to be proud of) is indispensable, and therefore that ugly or depressing towns are immoral; that quality of people, not mere quantity, is what we must aim at, and therefore that a concerted policy is required to prevent the present flood of population-increase from wrecking all our hopes for a better world; that true understanding and enjoyment are ends in themselves, as well as tools for or relaxations from a job, and that therefore we must explore and make fully available the techniques of education and self-education; that the most ultimate satisfaction comes from a depth and wholeness of the inner life, and therefore that we must explore and make fully available the techniques of spiritual development; above all, that there are two complementary parts of our cosmic duty: one to ourselves, to be fulfilled in the realization and enjoyment of our capacities, the other to others, to be fulfilled in service to the community and in promoting the welfare of the generations to come and the advancement of our species as a whole.

The human species can, if it wishes, transcend itself not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.

I believe in transhumanism: once there are enough people who can truly say that, the human species will be on the threshold of a new kind of existence, as different from ours as ours is from that of Pekin man. It will at last be consciously fulfilling its real destiny.

Original post:

Transhumanism by Julian Huxley (1957)

Transhumanist Values – Nick Bostrom

Mar 23, 2016

1. What is Transhumanism?

Transhumanism is a loosely defined movement that has developed gradually over the past two decades.[1] It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.

The enhancement options being discussed include radical extension of human health-span, eradication of disease, elimination of unnecessary suffering, and augmentation of human intellectual, physical, and emotional capacities. Other transhumanist themes include space colonization and the possibility of creating superintelligent machines, along with other potential developments that could profoundly alter the human condition. The ambit is not limited to gadgets and medicine, but encompasses also economic, social, institutional designs, cultural development, and psychological skills and techniques.

Transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman, beings with vastly greater capacities than present human beings have.

Some transhumanists take active steps to increase the probability that they personally will survive long enough to become posthuman, for example by choosing a healthy lifestyle or by making provisions for having themselves cryonically suspended in case of de-animation.[2] In contrast to many other ethical outlooks, which in practice often reflect a reactionary attitude to new technologies, the transhumanist view is guided by an evolving vision to take a more proactive approach to technology policy. This vision, in broad strokes, is to create the opportunity to live much longer and healthier lives, to enhance our memory and other intellectual faculties, to refine our emotional experiences and increase our subjective sense of well-being, and generally to achieve a greater degree of control over our own lives. This affirmation of human potential is offered as an alternative to customary injunctions against playing God, messing with nature, tampering with our human essence, or displaying punishable hubris.

Transhumanism does not entail technological optimism. While future technological capabilities carry immense potential for beneficial deployments, they also could be misused to cause enormous harm, ranging all the way to the extreme possibility of intelligent life becoming extinct. Other potential negative outcomes include widening social inequalities or a gradual erosion of the hard-to-quantify assets that we care deeply about but tend to neglect in our daily struggle for material gain, such as meaningful human relationships and ecological diversity. Such risks must be taken very seriously, as thoughtful transhumanists fully acknowledge.[3]

Transhumanism has roots in secular humanist thinking, yet is more radical in that it promotes not only traditional means of improving human nature, such as education and cultural refinement, but also direct application of medicine and technology to overcome some of our basic biological limits.

The range of thoughts, feelings, experiences, and activities accessible to human organisms presumably constitutes only a tiny part of what is possible. There is no reason to think that the human mode of being is any more free of limitations imposed by our biological nature than are those of other animals. In much the same way as chimpanzees lack the cognitive wherewithal to understand what it is like to be human (the ambitions we humans have, our philosophies, the complexities of human society, or the subtleties of our relationships with one another), so we humans may lack the capacity to form a realistic intuitive understanding of what it would be like to be a radically enhanced human (a posthuman) and of the thoughts, concerns, aspirations, and social relations that such humans may have.

Our own current mode of being, therefore, spans but a minute subspace of what is possible or permitted by the physical constraints of the universe (see Figure 1). It is not farfetched to suppose that there are parts of this larger space that represent extremely valuable ways of living, relating, feeling, and thinking.

The limitations of the human mode of being are so pervasive and familiar that we often fail to notice them, and to question them requires manifesting an almost childlike naiveté. Let us consider some of the more basic ones.

Lifespan. Because of the precarious conditions in which our Pleistocene ancestors lived, the human lifespan has evolved to be a paltry seven or eight decades. This is, from many perspectives, a rather short period of time. Even tortoises do better than that.

We don't have to use geological or cosmological comparisons to highlight the meagerness of our allotted time budgets. To get a sense that we might be missing out on something important by our tendency to die early, we only have to bring to mind some of the worthwhile things that we could have done or attempted to do if we had had more time. For gardeners, educators, scholars, artists, city planners, and those who simply relish observing and participating in the cultural or political variety shows of life, three score and ten is often insufficient for seeing even one major project through to completion, let alone for undertaking many such projects in sequence.

Human character development is also cut short by aging and death. Imagine what might have become of a Beethoven or a Goethe if they had still been with us today. Maybe they would have developed into rigid old grumps interested exclusively in conversing about the achievements of their youth. But maybe, if they had continued to enjoy health and youthful vitality, they would have continued to grow as men and artists, to reach levels of maturity that we can barely imagine. We certainly cannot rule that out based on what we know today. Therefore, there is at least a serious possibility of there being something very precious outside the human sphere. This constitutes a reason to pursue the means that will let us go there and find out.

Intellectual capacity. We have all had moments when we wished we were a little smarter. The three-pound, cheese-like thinking machine that we lug around in our skulls can do some neat tricks, but it also has significant shortcomings. Some of these such as forgetting to buy milk or failing to attain native fluency in languages you learn as an adult are obvious and require no elaboration. These shortcomings are inconveniences but hardly fundamental barriers to human development.

Yet there is a more profound sense in which the constraints of our intellectual apparatus limit our modes of mentation. I mentioned the chimpanzee analogy earlier: just as is the case for the great apes, our own cognitive makeup may foreclose whole strata of understanding and mental activity. The point here is not about any logical or metaphysical impossibility: we need not suppose that posthumans would not be Turing computable or that they would have concepts that could not be expressed by any finite sentences in our language, or anything of that sort. The impossibility that I am referring to is more like the impossibility for us current humans to visualize a 200-dimensional hypersphere or to read, with perfect recollection and understanding, every book in the Library of Congress. These things are impossible for us because, simply put, we lack the brainpower. In the same way, we may lack the ability to intuitively understand what being a posthuman would be like or to grok the playing field of posthuman concerns.

Further, our human brains may cap our ability to discover philosophical and scientific truths. It is possible that failure of philosophical research to arrive at solid, generally accepted answers to many of the traditional big philosophical questions could be due to the fact that we are not smart enough to be successful in this kind of enquiry. Our cognitive limitations may be confining us in a Platonic cave, where the best we can do is theorize about shadows, that is, representations that are sufficiently oversimplified and dumbed-down to fit inside a human brain.

Bodily functionality. We enhance our natural immune systems by getting vaccinations, and we can imagine further enhancements to our bodies that would protect us from disease or help us shape our bodies according to our desires (e.g. by letting us control our bodies' metabolic rate). Such enhancements could improve the quality of our lives.

A more radical kind of upgrade might be possible if we suppose a computational view of the mind. It may then be possible to upload a human mind to a computer, by replicating in silico the detailed computational processes that would normally take place in a particular human brain.[4] Being an upload would have many potential advantages, such as the ability to make back-up copies of oneself (favorably impacting on ones life-expectancy) and the ability to transmit oneself as information at the speed of light. Uploads might live either in virtual reality or directly in physical reality by controlling a robot proxy.

Sensory modalities, special faculties and sensibilities. The current human sensory modalities are not the only possible ones, and they are certainly not as highly developed as they could be. Some animals have sonar, magnetic orientation, or sensors for electricity and vibration; many have a much keener sense of smell, sharper eyesight, etc. The range of possible sensory modalities is not limited to those we find in the animal kingdom. There is no fundamental block to adding, say, a capacity to see infrared radiation or to perceive radio signals, and perhaps to add some kind of telepathic sense by augmenting our brains with suitably interfaced radio transmitters.

Humans also enjoy a variety of special faculties, such as appreciation of music and a sense of humor, and sensibilities such as the capacity for sexual arousal in response to erotic stimuli. Again, there is no reason to think that what we have exhausts the range of the possible, and we can certainly imagine higher levels of sensitivity and responsiveness.

Mood, energy, and self-control. Despite our best efforts, we often fail to feel as happy as we would like. Our chronic levels of subjective well-being seem to be largely genetically determined. Life-events have little long-term impact; the crests and troughs of fortune push us up and bring us down, but there is little long-term effect on self-reported well-being. Lasting joy remains elusive except for those of us who are lucky enough to have been born with a temperament that plays in a major key.

In addition to being at the mercy of a genetically determined setpoint for our levels of well-being, we are limited in regard to energy, will-power, and ability to shape our own character in accordance with our ideals. Even such simple goals as losing weight or quitting smoking prove unattainable to many.

Some subset of these kinds of problems might be necessary rather than contingent upon our current nature. For example, we cannot both have the ability easily to break any habit and the ability to form stable, hard-to-break habits. (In this regard, the best one can hope for may be the ability to easily get rid of habits we didn't deliberately choose for ourselves in the first place, and perhaps a more versatile habit-formation system that would let us choose with more precision when to acquire a habit and how much effort it should cost to break it.)

The conjecture that there are greater values than we can currently fathom does not imply that values are not defined in terms of our current dispositions. Take, for example, a dispositional theory of value such as the one described by David Lewis.[5] According to Lewis's theory, something is a value for you if and only if you would want to want it if you were perfectly acquainted with it and you were thinking and deliberating as clearly as possible about it. On this view, there may be values that we do not currently want, and that we do not even currently want to want, because we may not be perfectly acquainted with them or because we are not ideal deliberators. Some values pertaining to certain forms of posthuman existence may well be of this sort; they may be values for us now, and they may be so in virtue of our current dispositions, and yet we may not be able to fully appreciate them with our current limited deliberative capacities and our lack of the receptive faculties required for full acquaintance with them. This point is important because it shows that the transhumanist view that we ought to explore the realm of posthuman values does not entail that we should forego our current values. The posthuman values can be our current values, albeit ones that we have not yet clearly comprehended. Transhumanism does not require us to say that we should favor posthuman beings over human beings, but that the right way of favoring human beings is by enabling us to realize our ideals better, and that some of our ideals may well be located outside the space of modes of being that are accessible to us with our current biological constitution.

We can overcome many of our biological limitations. It is possible that there are some limitations that are impossible for us to transcend, not only because of technological difficulties but on metaphysical grounds. Depending on what our views are about what constitutes personal identity, it could be that certain modes of being, while possible, are not possible for us, because any being of such a kind would be so different from us that they could not be us. Concerns of this kind are familiar from theological discussions of the afterlife. In Christian theology, some souls will be allowed by God to go to heaven after their time as corporal creatures is over. Before being admitted to heaven, the souls would undergo a purification process in which they would lose many of their previous bodily attributes. Skeptics may doubt that the resulting minds would be sufficiently similar to our current minds for it to be possible for them to be the same person. A similar predicament arises within transhumanism: if the mode of being of a posthuman being is radically different from that of a human being, then we may doubt whether a posthuman being could be the same person as a human being, even if the posthuman being originated from a human being.

We can, however, envision many enhancements that would not make it impossible for the post-transformation someone to be the same person as the pre-transformation person. A person could obtain quite a bit of increased life expectancy, intelligence, health, memory, and emotional sensitivity without ceasing to exist in the process. A person's intellectual life can be transformed radically by getting an education. A person's life expectancy can be extended substantially by being unexpectedly cured of a lethal disease. Yet these developments are not viewed as spelling the end of the original person. In particular, it seems that modifications that add to a person's capacities can be more substantial than modifications that subtract, such as brain damage. If most of what someone currently is, including her most important memories, activities, and feelings, is preserved, then adding extra capacities on top of that would not easily cause the person to cease to exist.

Preservation of personal identity, especially if this notion is given a narrow construal, is not everything. We can value other things than ourselves, or we might regard it as satisfactory if some parts or aspects of ourselves survive and flourish, even if that entails giving up some parts of ourselves such that we no longer count as being the same person. Which parts of ourselves we might be willing to sacrifice may not become clear until we are more fully acquainted with the full meaning of the options. A careful, incremental exploration of the posthuman realm may be indispensable for acquiring such an understanding, although we may also be able to learn from each others experiences and from works of the imagination.

Additionally, we may favor future people being posthuman rather than human, if the posthumans would lead lives more worthwhile than the alternative humans would. Any reasons stemming from such considerations would not depend on the assumption that we ourselves could become posthuman beings.

Transhumanism promotes the quest to develop further so that we can explore hitherto inaccessible realms of value. Technological enhancement of human organisms is a means that we ought to pursue to this end. There are limits to how much can be achieved by low-tech means such as education, philosophical contemplation, moral self-scrutiny and other such methods proposed by classical philosophers with perfectionist leanings, including Plato, Aristotle, and Nietzsche, or by means of creating a fairer and better society, as envisioned by social reformists such as Marx or Martin Luther King. This is not to denigrate what we can do with the tools we have today. Yet ultimately, transhumanists hope to go further.

If this is the grand vision, what are the more particular objectives that it translates into when considered as a guide to policy?

What is needed for the realization of the transhumanist dream is that technological means necessary for venturing into the posthuman space are made available to those who wish to use them, and that society be organized in such a manner that such explorations can be undertaken without causing unacceptable damage to the social fabric and without imposing unacceptable existential risks.

Global security. While disasters and setbacks are inevitable in the implementation of the transhumanist project (just as they are if the transhumanist project is not pursued), there is one kind of catastrophe that must be avoided at any cost:

Existential risk: one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Several recent discussions have argued that the combined probability of the existential risks is very substantial. The relevance of the condition of existential safety to the transhumanist vision is obvious: if we go extinct or permanently destroy our potential to develop further, then the transhumanist core value will not be realized. Global security is the most fundamental and nonnegotiable requirement of the transhumanist project.

Technological progress. That technological progress is generally desirable from a transhumanist point of view is also self-evident. Many of our biological shortcomings (aging, disease, feeble memories and intellects, a limited emotional repertoire and inadequate capacity for sustained well-being) are difficult to overcome, and to do so will require advanced tools. Developing these tools is a gargantuan challenge for the collective problem-solving capacities of our species. Since technological progress is closely linked to economic development, economic growth (or, more precisely, productivity growth) can in some cases serve as a proxy for technological progress. (Productivity growth is, of course, only an imperfect measure of the relevant form of technological progress, which, in turn, is an imperfect measure of overall improvement, since it omits such factors as equity of distribution, ecological diversity, and quality of human relationships.)

The history of economic and technological development, and the concomitant growth of civilization, is appropriately regarded with awe, as humanity's most glorious achievement. Thanks to the gradual accumulation of improvements over the past several thousand years, large portions of humanity have been freed from illiteracy, life-expectancies of twenty years, alarming infant-mortality rates, horrible diseases endured without palliatives, and periodic starvation and water shortages. Technology, in this context, is not just gadgets but includes all instrumentally useful objects and systems that have been deliberately created. This broad definition encompasses practices and institutions, such as double-entry accounting, scientific peer-review, legal systems, and the applied sciences.

Wide access. It is not enough that the posthuman realm be explored by someone. The full realization of the core transhumanist value requires that, ideally, everybody should have the opportunity to become posthuman. It would be sub-optimal if the opportunity to become posthuman were restricted to a tiny elite.

There are many reasons for supporting wide access: to reduce inequality; because it would be a fairer arrangement; to express solidarity and respect for fellow humans; to help gain support for the transhumanist project; to increase the chances that you will get the opportunity to become posthuman; to increase the chances that those you care about can become posthuman; because it might increase the range of the posthuman realm that gets explored; and to alleviate human suffering on as wide a scale as possible.

The wide access requirement underlies the moral urgency of the transhumanist vision. Wide access does not argue for holding back. On the contrary, other things being equal, it is an argument for moving forward as quickly as possible. Every day, 150,000 human beings on our planet die without having had any access to the anticipated enhancement technologies that would make it possible to become posthuman. The sooner these technologies are developed, the fewer people will have died without access.

Consider a hypothetical case in which there is a choice between (a) allowing the current human population to continue to exist, and (b) having it instantaneously and painlessly killed and replaced by six billion new human beings who are very similar but non-identical to the people that exist today. Such a replacement ought to be strongly resisted on moral grounds, for it would entail the involuntary death of six billion people. The fact that they would be replaced by six billion newly created similar people does not make the substitution acceptable. Human beings are not disposable. For analogous reasons, it is important that the opportunity to become posthuman is made available to as many humans as possible, rather than having the existing population merely supplemented (or worse, replaced) by a new set of posthuman people. The transhumanist ideal will be maximally realized only if the benefits of technologies are widely shared and if they are made available as soon as possible, preferably within our lifetime.

From these specific requirements flow a number of derivative transhumanist values that translate the transhumanist vision into practice. (Some of these values may also have independent justifications, and transhumanism does not imply that the list of values provided below is exhaustive.)

To start with, transhumanists typically place emphasis on individual freedom and individual choice in the area of enhancement technologies. Humans differ widely in their conceptions of what their own perfection or improvement would consist in. Some want to develop in one direction, others in different directions, and some prefer to stay the way they are. It would be morally unacceptable for anybody to impose a single standard to which we would all have to conform. People should have the right to choose which enhancement technologies, if any, they want to use. In cases where individual choices impact substantially on other people, this general principle may need to be restricted, but the mere fact that somebody may be disgusted or morally affronted by somebody else's using technology to modify herself would not normally be a legitimate ground for coercive interference. Furthermore, the poor track record of centrally planned efforts to create better people (e.g. the eugenics movement and Soviet totalitarianism) shows that we need to be wary of collective decision-making in the field of human modification.

Another transhumanist priority is to put ourselves in a better position to make wise choices about where we are going. We will need all the wisdom we can get when negotiating the posthuman transition. Transhumanists place a high value on improvements in our individual and collective powers of understanding and in our ability to implement responsible decisions. Collectively, we might get smarter and more informed through such means as scientific research, public debate and open discussion of the future, information markets,[8] and collaborative information filtering.[9] On an individual level, we can benefit from education, critical thinking, open-mindedness, study techniques, information technology, and perhaps memory- or attention-enhancing drugs and other cognitive enhancement technologies. Our ability to implement responsible decisions can be improved by expanding the rule of law and democracy on the international plane. Additionally, artificial intelligence, especially if and when it reaches human-equivalence or greater, could give an enormous boost to the quest for knowledge and wisdom.

Given the limitations of our current wisdom, a certain epistemic tentativeness is appropriate, along with a readiness to continually reassess our assumptions as more information becomes available. We cannot take for granted that our old habits and beliefs will prove adequate in navigating our new circumstances.

Global security can be improved by promoting international peace and cooperation, and by strongly counteracting the proliferation of weapons of mass destruction. Improvements in surveillance technology may make it easier to detect illicit weapons programs. Other security measures might also be appropriate to counteract various existential risks. More studies on such risks would help us get a better understanding of the long-term threats to human flourishing and of what can be done to reduce them.

Since technological development is necessary to realize the transhumanist vision, entrepreneurship, science, and the engineering spirit are to be promoted. More generally, transhumanists favor a pragmatic attitude and a constructive, problem-solving approach to challenges, preferring methods that experience tells us give good results. They think it better to take the initiative to do something about it rather than sit around complaining. This is one sense in which transhumanism is optimistic. (It is not optimistic in the sense of advocating an inflated belief in the probability of success or in the Panglossian sense of inventing excuses for the shortcomings of the status quo.)

Transhumanism advocates the well-being of all sentience, whether in artificial intellects, humans, or non-human animals (including extraterrestrial species, if there are any). Racism, sexism, speciesism, belligerent nationalism and religious intolerance are unacceptable. In addition to the usual grounds for deeming such practices objectionable, there is also a specifically transhumanist motivation for this. In order to prepare for a time when the human species may start branching out in various directions, we need to start now to strongly encourage the development of moral sentiments that are broad enough to encompass within the sphere of moral concern sentiences that are constituted differently from ourselves.

Finally, transhumanism stresses the moral urgency of saving lives, or, more precisely, of preventing involuntary deaths among people whose lives are worth living. In the developed world, aging is currently the number one killer. Aging is also the biggest cause of illness, disability and dementia. (Even if all heart disease and cancer could be cured, life expectancy would increase by merely six to seven years.) Anti-aging medicine is therefore a key transhumanist priority. The goal, of course, is to radically extend people's active health-spans, not to add a few extra years on a ventilator at the end of life.

Since we are still far from being able to halt or reverse aging, cryonic suspension of the dead should be made available as an option for those who desire it. It is possible that future technologies will make it possible to reanimate people who have been cryonically suspended.[10] While cryonics might be a long shot, it definitely carries better odds than cremation or burial.

The table below summarizes the transhumanist values that we have discussed.

Read the rest here:

Transhumanist Values – Nick Bostrom

Natasha Vita-More | Transhuman Art

 Transhuman  Comments Off on Natasha Vita-More | Transhuman Art
Jan 18, 2016
 

Natasha's research concerns the aesthetics of human enhancement and radical life extension, with a focus on the sciences and technologies of nanotechnology, biotechnology, information technology, and the cognitive and neuro sciences (NBIC). Her conceptual future human design Primo Posthuman has been featured in Wired, Harper's Bazaar, Marie Claire, The New York Times, U.S. News & World Report, Net Business, Teleopolis, and Village Voice. She has appeared in over twenty-four televised documentaries on the future and culture, and has exhibited media artworks at the National Centre for Contemporary Arts, Brooks Memorial Museum, Institute of Contemporary Art, Women In Video, Telluride Film Festival, and United States Film Festival, and recently in Evolution Haute Couture: Art and Science in the Post-Biological Age. Natasha has been the recipient of several awards: First Place Award at Brooks Memorial Museum, Special Recognition at Women in Video, and Best Graduate Student Project of 2005 for her Futures Podcast Series at the University of Houston Future Studies program.

Natasha is a proponent of human rights and ethical means for human enhancement, and is published in Artifact, Technoetic Arts, Nanotechnology Perceptions, the Annual Workshop on Geoethical Nanotechnology, and Death and Anti-Death. She has a bi-monthly column in Nanotechnology Now, is a Guest Editor of The Global Spiral academic journal, and is on the Editorial Board of the International Journal of Green Nanotechnology. Natasha authored Create / Recreate: the 3rd Millennial Culture, on the emerging cybernetic culture and the future of humanism and the arts and sciences. She co-authored One on One Fitness, a guide to nutrition and aerobic and anaerobic exercise for women. Her new book The Transhumanist Reader: Classical and Contemporary Look at Philosophy and Technology is scheduled for publication in 2012 through Wiley-Blackwell.

Natasha is Chair of Humanity+, an international non-profit 501(c)(3) organization, and is the former president of Extropy Institute, a networking organization. Natasha continues to work with academic institutions, non-profit organizations, and businesses on human futures. She is a track advisor at Singularity University, on the Scientific Board of the Lifeboat Foundation, a Fellow of the Institute for Ethics and Emerging Technologies, a Visiting Scholar at 21st Century Medicine, and advises non-profit organizations including Adaptive A.I. and the Alcor Life Extension Foundation. She has been a consultant to IBM on the future of human performance.

See the original post here:
Natasha Vita-More | Transhuman Art


Freedom to Tinker Research and expert commentary on …

 Freedom  Comments Off on Freedom to Tinker Research and expert commentary on …
Nov 03, 2015
 

Yesterday I posted some thoughts about Purdue University's decision to destroy a video recording of my keynote address at its Dawn or Doom colloquium. The organizers had gone dark, and a promised public link was not forthcoming. After a couple of weeks of hoping to resolve the matter quietly, I did some digging and decided to write up what I learned. I posted on the web site of the Century Foundation, my main professional home:

It turns out that Purdue has wiped all copies of my video and slides from university servers, on grounds that I displayed classified documents briefly on screen. A breach report was filed with the university's Research Information Assurance Officer, also known as the Site Security Officer, under the terms of Defense Department Operating Manual 5220.22-M. I am told that Purdue briefly considered, among other things, whether to destroy the projector I borrowed, lest contaminants remain.

I was, perhaps, naive, but pretty much all of that came as a real surprise.

Let's rewind. Information Assurance? Site Security?

These are familiar terms elsewhere, but new to me in a university context. I learned that Purdue, like a number of its peers, has a facility security clearance to perform classified U.S. government research. The manual of regulations runs to 141 pages. (Its terms forbid uncleared trustees to ask about the work underway on their campus, but that's a subject for another day.) The pertinent provision here, spelled out at length in a section called Classified Information Spillage, requires sanitization, physical removal, or destruction of classified information discovered on unauthorized media.

Two things happened in rapid sequence around the time I told Purdue about my post.

First, the university broke a week-long silence and expressed a measure of regret:

UPDATE: Just after posting this item I received an email from Julie Rosa, who heads strategic communications for Purdue. She confirmed that Purdue wiped my video after consulting the Defense Security Service, but the university now believes it went too far.

In an overreaction while attempting to comply with regulations, the video was ordered to be deleted instead of just blocking the piece of information in question. Just FYI: The conference organizers were not even aware that any of this had happened until well after the video was already gone.

I'm told we are attempting to recover the video, but I have not heard yet whether that is going to be possible. When I find out, I will let you know and we will, of course, provide a copy to you.

Then Edward Snowden tweeted the link, and the Century Foundation's web site melted down. It now redirects to Medium, where you can find the full story.

I have not heard back from Purdue today about recovery of the video. It is not clear to me how recovery is even possible, if Purdue followed Pentagon guidelines for secure destruction. Moreover, although the university seems to suggest it could have posted most of the video, it does not promise to do so now. Most importantly, the best that I can hope for here is that my remarks and slides will be made available in redacted form with classified images removed, and some of my central points therefore missing. There would be one version of the talk for the few hundred people who were in the room on Sept. 24, and for however many watched the live stream, and another version left as the only record.

For our purposes here, the most notable questions have to do with academic freedom in the context of national security. How did a university come to sanitize a public lecture it had solicited, on the subject of NSA surveillance, from an author known to possess the Snowden documents? How could it profess to be shocked to find that spillage is going on at such a talk? The beginning of an answer came, I now see, in the question and answer period after my Purdue remarks. A post-doctoral research engineer stood up to ask whether the documents I had put on display were unclassified. "No," I replied. "They're classified still." Eugene Spafford, a professor of computer science there, later attributed that concern to "junior security rangers" on the faculty and staff. But the display of Top Secret material, he said, once noted, is something that cannot be unnoted.

Someone reported my answer to Purdue's Research Information Assurance Officer, who reported in turn to Purdue's representative at the Defense Security Service. By the terms of its Pentagon agreement, Purdue decided it was now obliged to wipe the video of my talk in its entirety. I regard this as a rather devout reading of the rules, which allowed Purdue to realistically consider the potential harm that may result from compromise of spilled information. The slides I showed had been viewed already by millions of people online. Even so, federal funding might be at stake for Purdue, and the notoriously vague terms of the Espionage Act hung over the decision. For most lawyers, abundance of caution would be the default choice. Certainly that kind of thinking is commonplace, and sometimes appropriate, in military and intelligence services.

But universities are not secret agencies. They cannot lightly wear the shackles of a National Industrial Security Program, as Purdue agreed to do. The values at their core, in principle and often in practice, are open inquiry and expression.

I do not claim I suffered any great harm when Purdue purged my remarks from its conference proceedings. I do not lack for publishers or public forums. But the next person whose talk is disappeared may have fewer resources.

More importantly, to my mind, Purdue has compromised its own independence and that of its students and faculty. It set an unhappy precedent, even if the people responsible thought they were merely following routine procedures.

One can criticize the university for its choices, and quite a few have since I published my post. What interests me is how nearly the results were foreordained once Purdue made itself eligible for Top Secret work.

Think of it as a classic case of mission creep. Purdue invited the secret-keepers of the Defense Security Service into one cloistered corner of campus (a small but significant fraction of research in certain fields, as the university counsel put it). The trustees accepted what may have seemed a limited burden, confined to the precincts of classified research.

Now the security apparatus claims jurisdiction over the campus (facility) at large. The university finds itself sanitizing a conference that has nothing to do with any government contract.

I am glad to see that Princeton takes the view that "[s]ecurity regulations and classification of information are at variance with the basic objectives of a University." It does not permit faculty members to do classified work on campus, which avoids Purdue's facility problem. And even so, at Princeton and elsewhere, there may be an undercurrent of self-censorship and informal restraint against the use of documents derived from unauthorized leaks.

Two of my best students nearly dropped a course I taught a few years back, called Secrecy, Accountability and the National Security State, when they learned the syllabus would include documents from Wikileaks. Both had security clearances, for summer jobs, and feared losing them. I told them I would put the documents on Blackboard, so they need not visit the Wikileaks site itself, but the readings were mandatory. Both, to their credit, stayed in the course. They did so against the advice of some of their mentors, including faculty members. The advice was purely practical. The U.S. government will not give a clear answer when asked whether this sort of exposure to published secrets will harm job prospects or future security clearances. Why take the risk?

Every student and scholar must decide for him- or herself, but I think universities should push back harder, and perhaps in concert. There is a treasure trove of primary documents in the archives made available by Snowden and Chelsea Manning. The government may wish otherwise, but that information is irretrievably in the public domain. Should a faculty member ignore the Snowden documents when designing a course on network security architecture? Should a student write a dissertation on modern U.S.-Saudi relations without consulting the numerous diplomatic cables on Wikileaks? To me, those would be abdications of the basic duty to seek out authoritative sources of knowledge, wherever they reside.

I would be interested to learn how others have grappled with these questions. I expect to write about them in my forthcoming book on surveillance, privacy and secrecy.

See more here:
Freedom to Tinker Research and expert commentary on …

 Posted by at 8:42 pm  Tagged with:

An SEO Driven Approach To Content Marketing: The Complete …

 SEO  Comments Off on An SEO Driven Approach To Content Marketing: The Complete …
Sep 232015
 

Should you be worried about SEO on your content marketing blog?

In recent months, the necessity of search engine optimization has come under major fire. As Google released its Panda and Penguin algorithms, we all saw a major reduction in search spam, and almost overnight we began noticing major changes in the type of content we saw in our own search results.

Longtime SEO Jill Whalen is now internet famous for quitting her career as an SEO following these major announcements. "Google works now," said Jill. "This means, my friends, that my work here is done."

What does she mean? Is SEO really dead?

As is often the case, nothing is really dead. SEO has changed, dramatically, and as Jill points out, this is a good thing. The good news for content creators like you is that it has changed in your favor. Google now rewards content marketing over spam bots and link-building tricks. It's a victory for good content and a loss for tactics of a questionable nature.

This is a good thing.

You may be wondering why you still need to consider SEO in your writing with all of the changes that have been made by Google. The answer is relatively simple: For a long time, SEO was all about tricks and tactics. It was truly about optimization and opportunism, but not anymore. Now, SEO is about content. Lots of content.

In other words, SEO as we know it picked up camp and moved in with content marketing. We have a new roommate. Why not get to know it a little?

From what I can see, the opportunity for content marketers to use SEO-driven tactics is more applicable now than ever. We already have the content. What if we add a little science and tactics to our work? Who knows where we might go in the future? We could even put ourselves on page one of search. Wouldn't that be something?

So how should content marketers approach the search engines with their writing? This guide aims to answer that question. SEO may not be dead, but it has dramatically changed, and that means there is a big opportunity for the content marketer who is paying attention.

Here's a step-by-step guide to what you need to do to have a modern SEO-driven approach to content marketing.

When outlining an SEO strategy for content marketing, we take a slightly different approach than what we were used to. It is probably best to begin by understanding how (and why) Google is rewarding longer-form content and other content that is visually focused. Google has started to see these elements as symbols of quality, and is doing a better job of connecting search users to quality content.

Again, that's a good thing, but it doesn't mean that some of the tried-and-true techniques of old SEO aren't still viable. That's where keywords come in.

One of the most important aspects of search engine optimization has always been the keywords, those words that people use to find our content in search.

In the early days of SEO, the goal was to achieve exact keyword matching. This meant that the page we wanted people to find was perfectly tuned to show up in the search results when someone searched for that phrase. If you searched for exact keyword match, for example, you would find pages that used that phrase exactly as written. Not anymore. Now, you will find pages that discuss the general topic of exact keyword matching.

It may be subtle, but it is an important difference. Rand Fishkin of Moz explains it well in his Whiteboard Friday video.

All of that said, though, I still believe that most good SEO still begins with the keyword. This hasn't changed.

What's changed is the framework we need to use for implementing those keywords into our writing. This is the method that I am going to break down for you in this guide. I am going to show you, step by step, how to use keywords to create an SEO-driven approach to content marketing. Try not to think of it as SEO so much as smart content marketing.

The first step is to find the keywords that matter most for you. There are several tools that will help you do this. The most notable is the Google AdWords Keyword Planner, a tool that is freely available with any AdWords account.

Should content marketers be using keywords in their writing process? Yes.

The concept here is very simple. Start by typing in one of the keywords that is most crucial to your business. Here at CoSchedule, for example, this would be something like "content marketing" or "editorial calendars." From there, Google will automatically provide you with a list of words related to your primary keyword that people all around the world are searching for.

As a content marketer, this is incredibly valuable! Not only do you get a host of keyword ideas, but you should also begin to understand your readers more than ever. This is what they are searching for. How cool is that?

Keywords are located based on your website URL and product category. They are customized to you!

Once you have a list of results from Google, you can individually add keywords that stand out to you to your keyword plan.

Avoid getting overly aggressive, though. For example, in this screenshot I probably don't need to add both "content marketing strategy" and "content marketing strategies." They are a bit redundant, and not likely different enough for me to care about. Since "content marketing strategy" gets more attention, it would make sense to go with that.

Add important keywords and phrases to your keyword plan.

Your goal here is to create a list of 30-100 keywords that matter to your business, your audience, and to Google. You are doing research here, so the most important thing is that you learn what your audience wants, and what Google will reward.

Once youve created a good list, use the export option to download it as an Excel file, or whatever format you want to work with.

Key Point: Create a list of keywords that your blog should be targeting and keep it handy.
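As a rough illustration (not part of the original guide), here is a small Python sketch of how you might load an exported keyword list and sort it by search volume before adding it to your plan. The column names are assumptions; adjust them to match whatever format your export actually uses.

```python
import csv
from io import StringIO

def load_keywords(csv_text, min_volume=0):
    """Parse a Keyword Planner-style CSV export and return (keyword, volume)
    pairs sorted by search volume, highest first. The column names below
    are assumptions for this sketch."""
    rows = csv.DictReader(StringIO(csv_text))
    keywords = []
    for row in rows:
        volume = int(row["Avg. Monthly Searches"])
        if volume >= min_volume:
            keywords.append((row["Keyword"], volume))
    return sorted(keywords, key=lambda kv: kv[1], reverse=True)

# Illustrative data only.
sample = """Keyword,Avg. Monthly Searches
content marketing,12000
editorial calendar,3600
content marketing strategy,2400
"""

for kw, vol in load_keywords(sample, min_volume=1000):
    print(f"{kw}: {vol}")
```

Sorting by volume up front makes it easy to spot near-duplicates (like the singular/plural pairs mentioned above) and keep only the stronger variant.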

The list of keywords you built using the keyword planner is your new content marketing checklist. These are the words that you want your site to rank well for on Google. I consider them a list of keyword goals to shoot for.

The next step is to load these goals into a tool that will help you track and monitor where your site ranks for each of these terms. For this, I like to use Positionly, but larger SEO tools like Moz and RavenTools are good options as well. For me, Positionly offers a simplicity that the others don't. It does less, but sometimes that is more.

The purpose of Positionly is very simple. They aim to monitor daily changes to your search engine rankings and help improve where you show up in search engine listings. In other words, they will tell you where your site ranks on Google with respect to each keyword term that you add for your site.

Positionly will tell you how your site ranks for each term. They will also monitor and report daily changes.

This is valuable information because it gives you a benchmark to work against. When you upload your initial list of terms, Positionly will give you an overall assessment of your site in comparison to your selected terms. Depending on how long you have been writing or working on SEO, your results may vary.

Positionly will assess how well your site currently ranks for the keywords entered.

One of the hazards of a tool like Positionly is the frequency of information. On any given day, you may log in to find that your rankings on several keywords have dropped for no particular reason. This is a natural occurrence, and not something that you should worry about too deeply. Ranking well on Google is an art, not a science. It is also a process, so don't expect to land on top and stay there forever. 😉

Key Point: Use a tool like Positionly to monitor your keyword ranking and track your progress.
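The daily-change reporting described above boils down to comparing two snapshots of your rankings. As a hypothetical sketch (real trackers like Positionly collect positions from their own crawls, not from your code), it might look like this in Python:

```python
def rank_changes(yesterday, today):
    """Compare two snapshots mapping keyword -> Google position (1 = top)
    and return only the keywords whose position moved, as (old, new) pairs.
    The data used here is purely illustrative."""
    changes = {}
    for kw, pos in today.items():
        old = yesterday.get(kw)
        if old is not None and old != pos:
            changes[kw] = (old, pos)
    return changes

yesterday = {"content marketing": 12, "editorial calendar": 7}
today = {"content marketing": 9, "editorial calendar": 7}

print(rank_changes(yesterday, today))  # {'content marketing': (12, 9)}
```

Day-to-day noise is exactly why the article warns against over-reading single drops: a tracker will report every wiggle, but only sustained movement matters.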

Once you have your marching orders (keyword goals), it is time to start incorporating them into your content marketing process.

At CoSchedule, our goal is to focus on one keyword phrase each week by adding a blog post with that keyword phrase to our editorial calendar. We don't get overly scientific about it; we just plop it on there and leave things up to the designated writer to figure out.

Incorporate your keyword based posts into your editorial calendar.

Once the post is on the calendar, it will get written. If you aren't managing an editorial calendar for your team, this is an excellent reason to do so, and one that we heartily recommend. When you pre-plan your content, you can become much more purposeful and strategic with your goals.

Once youve worked through your keyword goals list the first time, be sure to refer back to Positionly regularly to help prioritize the keywords that you want (and need) to improve on.

Key Point: Add keyword goals to your editorial calendar each week to keep yourself accountable.

It is worth mentioning at this point that you should never be writing a blog post where a specific keyword isn't identified.

On our team, we try as often as possible to identify the keyword immediately when scheduling a post. Each time we create a post, we either identify the keyword in the headline itself, or note it in the comments field if we are choosing to write the headline later on.

Identify SEO keywords before writing your content.

This is a good practice to get your team into, and will make a big impact on the quality of your posts. Not only will it add SEO value, but it will force your writing team to focus their writing on a well-selected and focused topic.

If you are having trouble identifying your keywords for one-off posts, there are two easy places you can go. First, you could always head back over to the Google AdWords Keyword Planner, but that might be overkill at this point. What I like to do is simply run a basic Google search and take a look at the recommended search terms at the bottom of the page.

Related search terms on Google provide a wealth of keyword knowledge.

Another way to do this research is to use a content creation tool like Scribe by Copyblogger.

This tool allows you to do headline research right inside of your WordPress add/edit page, and provides additional details about the popularity and competition level of each keyword option. It will also provide data regarding your keywords from both Twitter and Google+.

The Scribe plugin by Copyblogger is a handy tool for content marketing SEO.

Key Point: Develop good habits, and declare a keyword for each post that you write.

Once you have a keyword selected for your post, you will need a few tools to ensure that your content stays on point. The two tools that we use here at CoSchedule are the Scribe plugin by Copyblogger and WordPress SEO from Yoast. If you are on a budget, the Yoast plugin is free, and will get you 90% of the way to where you need to go.

Both of these plugins work in a similar way. With each, you start by declaring the keyword phrase that you are using for the post. From there, the plugins will tell you how well your content ranks for those keywords. These plugins will evaluate your post based on several key factors:

Article Headline: It is considered best practice to include your exact keyword phrase in the headline of your post.

Page Title: The page title is the bit of text that will show up in your browser tab or, more importantly, at the top of your Google search listing. You will definitely want to include your keyword in full here.

The Yoast snippet preview will give you a preview of your forthcoming search listing.

Page URL: Your keyword should be included in the slug of your URL. WordPress makes this easy to customize as long as you do it before the post is published.

Content: Both Yoast and Scribe will want to see that the keyword is mentioned within the content of your post. Here, more mentions are generally better. If you can include the keyword in various sub-headlines, you will even get bonus points.

Meta Description: The meta description is the short description of your post that will show up on Google. You will want to use your keyword phrase in this copy.
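To make the factors above concrete, here is a hypothetical Python sketch of the kind of pass/fail checks these plugins run against a post. The field names and thresholds are assumptions for illustration, not the plugins' actual API.

```python
def check_on_page_seo(post, keyword):
    """Return a dict of pass/fail checks mirroring the on-page factors:
    headline, page title, URL slug, content, and meta description.
    The `post` field names are assumptions for this sketch."""
    kw = keyword.lower()
    return {
        "headline": kw in post["headline"].lower(),
        "page_title": kw in post["page_title"].lower(),
        "url_slug": kw.replace(" ", "-") in post["slug"].lower(),
        "content": post["content"].lower().count(kw) >= 2,  # mentioned more than once
        "meta_description": kw in post["meta_description"].lower(),
    }

# Illustrative post data.
post = {
    "headline": "10 Content Marketing Lessons",
    "page_title": "10 Content Marketing Lessons | Example Blog",
    "slug": "content-marketing-lessons",
    "content": "Content marketing works. Good content marketing takes planning.",
    "meta_description": "Ten lessons on content marketing.",
}

print(check_on_page_seo(post, "content marketing"))
```

A post that passes every check is roughly what "shooting for green" in Yoast means, though the real plugins weigh many more signals.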

When writing your posts, you want to make sure they are as optimized as possible for the keywords that you are trying to reach. Both Scribe and Yoast will give you visual confirmation of your success.

Both the WordPress SEO plugin and Scribe will visually show you how your articles rank SEO-wise.

At our office, we always shoot for green before we publish every post. Clicking through both plugins will provide additional information and suggestions.

Yoast page analysis. Lots of good tips here.

Topics vs. Strict Match: One thing that I want to point out is that you need to be careful about the difference between the strict matching of keywords and topic-related search.

As Rand Fishkin pointed out in the video posted above, Google cares more about how you cover the topic overall than about the exact keyword itself. Yoast tends to lean too heavily on the strict-match method, which is outdated by Google's standards. Scribe, however, seems to handle this much more gracefully and might be worth the extra investment.

Key Point: Optimize your posts so that they perform well for the chosen keywords.

Even though SEO is no longer about the tools and tricks, there are still a few you need to use to make sure that everything is in order. As any good web designer will tell you, most SEO happens in the page itself. If the structure and makeup of your webpage isnt properly optimized, you are already fighting an uphill battle.

You can always use Positionly or this free tool from Neil Patel to get an assessment of how your site performs.

Here are a few additional WordPress plugins that will help you get things in order:

WordPress SEO by Yoast: WordPress SEO is a powerful plugin. Use it to set up sitemaps on your site and optimize your social sharing meta tags. Seriously, spend some time with this one.

WP Rocket: Site speed can make a huge impact on your SEO performance. WP Rocket is a paid plugin, but unlike many of the free options, it shouldn't mess up your site. It is worth the few extra bucks.

In-Depth Articles Generator: Generates post metadata for your pages to better present search results to users. There are other plugins that do this, but this one is simple and easy. If you need to validate that it is working, you can use the Google testing tool.

Google's Search Engine Optimization Starter Guide: This free guide made available by Google is a great place to start in the world of SEO and optimization.

SEO isn't dead; it has just changed. The good news is that the new world of SEO is better than ever for content marketers like yourself. When combined with a few SEO basics, there is nothing stopping you from making SEO a core part of your inbound marketing strategy.


Federal court rules that only drug companies, not supplement …

 Misc  Comments Off on Federal court rules that only drug companies, not supplement …
Sep 102015
 

(NaturalNews) In a ruling that many holistic healers and homeopathic physicians are likely to find hypocritical, a federal court has handed Big Pharma an unprecedented victory by giving a drug company preliminary approval to market a drug for a condition for which it has yet to be approved by the Food and Drug Administration.

The drug, Vascepa, manufactured by Amarin Pharma, is approved for use in treating very high levels of fats known as triglycerides over 500 mg per deciliter in a patient’s bloodstream, reports AllGov.com. But Amarin also wanted to promote the medication for use in patients who have “persistently high levels” of triglycerides, from 200 to 499 mg/deciliter.

The FDA denied that request earlier this year over concerns that Vascepa would not help such patients avoid heart attacks or heart disease. That decision led Amarin to file suit in court, claiming its First Amendment rights permitted the company to provide information to physicians and other primary care providers.

Providers have long prescribed medications for "off-label" uses (those not included in a drug's literature or not specifically approved by federal regulators), but the drug companies have traditionally been banned from marketing their products for such off-label uses.

“This is huge,” Jacob Sherkow, an associate professor at New York Law School, told The Washington Post. “There have been other instances a court has held that off-label marketing is protected by the First Amendment, but… this is the first time, I think, that any federal court that any court has held in such a clear, full-throated way that off-label marketing is protected by the First Amendment, period, full stop.”

AllGov.com reported that the case stemmed from a 2012 New York City federal appeals court ruling finding that a Big Pharma sales rep had not violated FDA regulations by promoting off-label use of a drug to treat narcolepsy, Xyrem, because his speech, as long as he was not being misleading, was protected by the First Amendment. However, in the Amarin case, the FDA said that the Xyrem decision was limited in scope and therefore could not be applied to Vascepa, but Judge Paul Engelmayer, who presided over the Amarin suit, disagreed.

However, the parameter of “truthful speech” and a complete statement of facts has proved concerning to some.

“I find the decision very troubling. It’s a big push off on to a very slippery slope, a very steep slippery slope toward removing the government’s authority to limit the claims that drug companies can make about the effectiveness of their products,” Harvard Medical School professor Jerry Avorn told the Post.

“There’s an enormous amount, enormous numbers of statements that drug companies could make about their products that are not overtly fraudulent, but are not the same as a comprehensive review of all the good and bad evidence, that the FDA undertakes when it reviews a drug,” Avorn added.

Makers and consumers of health-related supplements, however, are also decrying the ruling, especially companies whose First Amendment rights have been ignored by courts and the FDA in the past.

In December 2012, we reported that a federal appeals court in New York upheld the free speech rights of a pharmaceutical company regarding off-label uses of Xyrem, even as courts and the FDA were gagging makers of natural supplements.

And in March 2013, we reported that the FDA used a truth-in-labeling regulation in issuing warning letters to a pair of supplement companies whose “crime” was nothing more than having customer-related interactions via the Internet.

It appears that there are two separate standards for Big Pharma and holistic and homeopathic healers.

Sources:

AllGov.com

WashingtonPost.com

WSJ.com

NaturalNews.com

Permalink to this article: http://www.naturalnews.com/051109_drug_companies_First_Amendment_rights_nutritional_supplements.html



Moz Blog – SEO and Inbound Marketing Blog – Moz

 SEO  Comments Off on Moz Blog – SEO and Inbound Marketing Blog – Moz
Aug 262015
 

Learn SEO: Broaden your SEO with marketing resources for all skill levels: best practices, beginner guides, industry survey results, videos, webinars and more.

Get started with: The Beginner’s Guide to SEO

The industry's top wizards, doctors, and other experts offer their best advice, research, how-tos, and insights, all in the name of helping you level up your SEO and online marketing skills.

A waterfall diagram, such as those produced by WebPageTest, is a powerful indicator of optimization opportunities. Do you know how to read them?

Are you a local business owner? Explore the hows and whys of submitting your business to local business directories in order to boost your local search visibility on Google.

Do search engines collect and utilise user behaviour data for ranking purposes? We’ve got a deep-dive into the data and theories behind user behaviour, search visibility, and more.

If you’re targeting a certain keyword, knowing where and how often to use that keyword in the various elements of your page is essential. In today’s Whiteboard Friday, Rand offers his recommendation.

There’s a compelling indicator of how our industry is evolving in an area that helps us become better marketers: gender equality. What’s changed over time and what are we doing to improve gender diversity in the workplace?

Have you seen the new Snack Pack? Explore Casey Meraz’s click test results on Google’s new local 3-pack, seeing what’s changed, what works, and what the future holds.

Those of you who have logged into your Moz Local dashboard recently may have noticed a few updates this week! I thought I’d post a quick announcement to highlight them.

Google recently shook up the local results in its SERPs, killing the local 7-packs in favor of a 3-pack that resembles the mobile experience. This post tells you everything you need to know about the change and what it means for your local marketing.

Brand fatigue is a real threat to your marketing strategy. In today’s Whiteboard Friday, Rand highlights some common causes of brand fatigue and how to combat it.

It’s here! We’re excited to announce the results of Moz’s biennial Search Engine Ranking Correlation Study and Expert Survey, aka Ranking Factors. Moz’s Ranking Factors study helps identify which attributes of pages and sites have the strongest association with ranking highly in Google. The study consists of t…

Today we’re excited to announce the results of Moz’s famous Ranking Factors study. The study helps to identify which attributes of webpages and sites have the strongest association with higher rankings in Google. Ready to dive in?

How do commercial and informational queries differ? Does one type of SERP show more or fewer results that are mobile-friendly or using HTTPS? Find those answers in this examination of more than 345,000 search results.

While SEO is a different field than it once was, technical chops are still required to do things really well. In today’s Whiteboard Friday, Rand pushes back against the idea that those skills are no longer necessary.

Buy your MozCon 2015 Video Bundle and access 27 sessions (over 15 hours) from top industry speakers on topics ranging from SEO and content strategy to email marketing and CRO.


How the Bitcoin protocol actually works | DDI

 Bitcoin  Comments Off on How the Bitcoin protocol actually works | DDI
Aug 182015
 

Many thousands of articles have been written purporting to explain Bitcoin, the online, peer-to-peer currency. Most of those articles give a hand-wavy account of the underlying cryptographic protocol, omitting many details. Even those articles which delve deeper often gloss over crucial points. My aim in this post is to explain the major ideas behind the Bitcoin protocol in a clear, easily comprehensible way. We'll start from first principles, build up to a broad theoretical understanding of how the protocol works, and then dig down into the nitty-gritty, examining the raw data in a Bitcoin transaction.

Understanding the protocol in this detailed way is hard work. It is tempting instead to take Bitcoin as given, and to engage in speculation about how to get rich with Bitcoin, whether Bitcoin is a bubble, whether Bitcoin might one day mean the end of taxation, and so on. That's fun, but severely limits your understanding. Understanding the details of the Bitcoin protocol opens up otherwise inaccessible vistas. In particular, it's the basis for understanding Bitcoin's built-in scripting language, which makes it possible to use Bitcoin to create new types of financial instruments, such as smart contracts. New financial instruments can, in turn, be used to create new markets and to enable new forms of collective human behaviour. Talk about fun!

I'll describe Bitcoin scripting and concepts such as smart contracts in future posts. This post concentrates on explaining the nuts-and-bolts of the Bitcoin protocol. To understand the post, you need to be comfortable with public key cryptography, and with the closely related idea of digital signatures. I'll also assume you're familiar with cryptographic hashing. None of this is especially difficult. The basic ideas can be taught in freshman university mathematics or computer science classes. The ideas are beautiful, so if you're not familiar with them, I recommend taking a few hours to get familiar.
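If cryptographic hashing is new to you, a one-minute experiment (not from the original post) shows the key property: the digest has a fixed length, and changing even one character of the input produces a completely different, unpredictable output.

```python
import hashlib

# A hash function maps arbitrary input to a fixed-length digest.
print(hashlib.sha256(b"hello").hexdigest())

# Changing a single character gives a completely different digest.
print(hashlib.sha256(b"hellp").hexdigest())
```

It is this unpredictability that lets hashes act as tamper-evident fingerprints for transactions and blocks.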

It may seem surprising that Bitcoin's basis is cryptography. Isn't Bitcoin a currency, not a way of sending secret messages? In fact, the problems Bitcoin needs to solve are largely about securing transactions: making sure people can't steal from one another, or impersonate one another, and so on. In the world of atoms we achieve security with devices such as locks, safes, signatures, and bank vaults. In the world of bits we achieve this kind of security with cryptography. And that's why Bitcoin is at heart a cryptographic protocol.

My strategy in the post is to build Bitcoin up in stages. I'll begin by explaining a very simple digital currency, based on ideas that are almost obvious. We'll call that currency Infocoin, to distinguish it from Bitcoin. Of course, our first version of Infocoin will have many deficiencies, and so we'll go through several iterations of Infocoin, with each iteration introducing just one or two simple new ideas. After several such iterations, we'll arrive at the full Bitcoin protocol. We will have reinvented Bitcoin!

This strategy is slower than if I explained the entire Bitcoin protocol in one shot. But while you can understand the mechanics of Bitcoin through such a one-shot explanation, it would be difficult to understand why Bitcoin is designed the way it is. The advantage of the slower iterative explanation is that it gives us a much sharper understanding of each element of Bitcoin.

Finally, I should mention that I'm a relative newcomer to Bitcoin. I've been following it loosely since 2011 (and cryptocurrencies since the late 1990s), but only got seriously into the details of the Bitcoin protocol earlier this year. So I'd certainly appreciate corrections of any misapprehensions on my part. Also in the post I've included a number of "problems for the author": notes to myself about questions that came up during the writing. You may find these interesting, but you can also skip them entirely without losing track of the main text.

So how can we design a digital currency?

On the face of it, a digital currency sounds impossible. Suppose some person (let's call her Alice) has some digital money which she wants to spend. If Alice can use a string of bits as money, how can we prevent her from using the same bit string over and over, thus minting an infinite supply of money? Or, if we can somehow solve that problem, how can we prevent someone else forging such a string of bits, and using that to steal from Alice?

These are just two of the many problems that must be overcome in order to use information as money.

As a first version of Infocoin, let's find a way that Alice can use a string of bits as a (very primitive and incomplete) form of money, in a way that gives her at least some protection against forgery. Suppose Alice wants to give another person, Bob, an infocoin. To do this, Alice writes down the message "I, Alice, am giving Bob one infocoin." She then digitally signs the message using a private cryptographic key, and announces the signed string of bits to the entire world.

(By the way, I'm using capitalized "Infocoin" to refer to the protocol and general concept, and lowercase "infocoin" to refer to specific denominations of the currency. A similar usage is common, though not universal, in the Bitcoin world.)

This isn't terribly impressive as a prototype digital currency! But it does have some virtues. Anyone in the world (including Bob) can use Alice's public key to verify that Alice really was the person who signed the message "I, Alice, am giving Bob one infocoin." No one else could have created that bit string, and so Alice can't turn around and say "No, I didn't mean to give Bob an infocoin." So the protocol establishes that Alice truly intends to give Bob one infocoin. The same fact (no one else could compose such a signed message) also gives Alice some limited protection from forgery. Of course, after Alice has published her message it's possible for other people to duplicate the message, so in that sense forgery is possible. But it's not possible from scratch. These two properties, establishment of intent on Alice's part and the limited protection from forgery, are genuinely notable features of this protocol.

I haven't (quite) said exactly what digital money is in this protocol. To make this explicit: it's just the message itself, i.e., the string of bits representing the digitally signed message "I, Alice, am giving Bob one infocoin." Later protocols will be similar, in that all our forms of digital money will be just more and more elaborate messages [1].
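To make the signing idea concrete, here is a minimal sketch in Python. It uses a deliberately tiny textbook-RSA key pair purely for illustration; real Bitcoin uses ECDSA over the secp256k1 curve, with far larger keys and proper encoding, so treat the primes, exponents, and message below as toy assumptions:

```python
import hashlib

# Toy textbook-RSA parameters (far too small for real use).
p, q = 61, 53
n = p * q            # public modulus, 3233
e = 17               # public exponent
d = 413              # private exponent: (e * d) % lcm(p - 1, q - 1) == 1

def h(message: str) -> int:
    """Reduce a SHA-256 digest into the RSA modulus (toy only)."""
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message: str) -> int:
    # Only Alice, who knows the private exponent d, can compute this.
    return pow(h(message), d, n)

def verify(message: str, signature: int) -> bool:
    # Anyone who knows the public key (n, e) can check the signature.
    return pow(signature, e, n) == h(message)

msg = "I, Alice, am giving Bob one infocoin."
sig = sign(msg)
assert verify(msg, sig)                  # genuine signature accepted
assert not verify(msg, (sig + 1) % n)    # tampered signature rejected
```

The signed pair (msg, sig) is the "money" in this first protocol: anyone can check it came from Alice, but no one else can produce it.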

A problem with the first version of Infocoin is that Alice could keep sending Bob the same signed message over and over. Suppose Bob receives ten copies of the signed message "I, Alice, am giving Bob one infocoin." Does that mean Alice sent Bob ten different infocoins? Was her message accidentally duplicated? Perhaps she was trying to trick Bob into believing that she had given him ten different infocoins, when the message only proves to the world that she intends to transfer one infocoin.

What we'd like is a way of making infocoins unique. They need a label or serial number. Alice would sign the message "I, Alice, am giving Bob one infocoin, with serial number 8740348." Then, later, Alice could sign the message "I, Alice, am giving Bob one infocoin, with serial number 8770431", and Bob (and everyone else) would know that a different infocoin was being transferred.

To make this scheme work we need a trusted source of serial numbers for the infocoins. One way to create such a source is to introduce a bank. This bank would provide serial numbers for infocoins, keep track of who has which infocoins, and verify that transactions really are legitimate.

In more detail, let's suppose Alice goes into the bank, and says "I want to withdraw one infocoin from my account." The bank reduces her account balance by one infocoin, and assigns her a new, never-before used serial number, let's say 1234567. Then, when Alice wants to transfer her infocoin to Bob, she signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567." But Bob doesn't just accept the infocoin. Instead, he contacts the bank, and verifies that: (a) the infocoin with that serial number belongs to Alice; and (b) Alice hasn't already spent the infocoin. If both those things are true, then Bob tells the bank he wants to accept the infocoin, and the bank updates their records to show that the infocoin with that serial number is now in Bob's possession, and no longer belongs to Alice.

This last solution looks pretty promising. However, it turns out that we can do something much more ambitious. We can eliminate the bank entirely from the protocol. This changes the nature of the currency considerably. It means that there is no longer any single organization in charge of the currency. And when you think about the enormous power a central bank has (control over the money supply) that's a pretty huge change.

The idea is to make it so everyone (collectively) is the bank. In particular, we'll assume that everyone using Infocoin keeps a complete record of which infocoins belong to which person. You can think of this as a shared public ledger showing all Infocoin transactions. We'll call this ledger the block chain, since that's what the complete record will be called in Bitcoin, once we get to it.

Now, suppose Alice wants to transfer an infocoin to Bob. She signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567", and gives the signed message to Bob. Bob can use his copy of the block chain to check that, indeed, the infocoin is Alice's to give. If that checks out then he broadcasts both Alice's message and his acceptance of the transaction to the entire network, and everyone updates their copy of the block chain.
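As a sketch of what "everyone is the bank" might look like, here is a toy ledger mapping serial numbers to owners. The serial number and names come from the running example; the broadcast step, and the signature checking from earlier, are omitted for brevity:

```python
# Every participant keeps an identical copy of this mapping:
# serial number -> current owner.
ledger = {1234567: "Alice"}

def transfer(serial: int, sender: str, recipient: str) -> bool:
    """Apply a transfer if (and only if) the coin belongs to the sender."""
    if ledger.get(serial) != sender:
        return False                 # not the sender's coin to give
    ledger[serial] = recipient
    return True

assert transfer(1234567, "Alice", "Bob")
assert ledger[1234567] == "Bob"
# A second spend of the same coin by Alice now fails:
assert not transfer(1234567, "Alice", "Charlie")
```

The catch, as the next paragraphs explain, is keeping everyone's copy of this mapping consistent when updates race each other.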

We still have the "where do serial numbers come from" problem, but that turns out to be pretty easy to solve, and so I will defer it to later, in the discussion of Bitcoin. A more challenging problem is that this protocol allows Alice to cheat by double spending her infocoin. She sends the signed message "I, Alice, am giving Bob one infocoin, with serial number 1234567" to Bob, and the message "I, Alice, am giving Charlie one infocoin, with [the same] serial number 1234567" to Charlie. Both Bob and Charlie use their copy of the block chain to verify that the infocoin is Alice's to spend. Provided they do this verification at nearly the same time (before they've had a chance to hear from one another), both will find that, yes, the block chain shows the coin belongs to Alice. And so they will both accept the transaction, and also broadcast their acceptance of the transaction. Now there's a problem. How should other people update their block chains? There may be no easy way to achieve a consistent shared ledger of transactions. And even if everyone can agree on a consistent way to update their block chains, there is still the problem that either Bob or Charlie will be cheated.

At first glance double spending seems difficult for Alice to pull off. After all, if Alice sends the message first to Bob, then Bob can verify the message, and tell everyone else in the network (including Charlie) to update their block chain. Once that has happened, Charlie would no longer be fooled by Alice. So there is most likely only a brief period of time in which Alice can double spend. However, it's obviously undesirable to have any such period of time. Worse, there are techniques Alice could use to make that period longer. She could, for example, use network traffic analysis to find times when Bob and Charlie are likely to have a lot of latency in communication. Or perhaps she could do something to deliberately disrupt their communications. If she can slow communication even a little that makes her task of double spending much easier.

How can we address the problem of double spending? The obvious solution is that when Alice sends Bob an infocoin, Bob shouldn't try to verify the transaction alone. Rather, he should broadcast the possible transaction to the entire network of Infocoin users, and ask them to help determine whether the transaction is legitimate. If they collectively decide that the transaction is okay, then Bob can accept the infocoin, and everyone will update their block chain. This type of protocol can help prevent double spending, since if Alice tries to spend her infocoin with both Bob and Charlie, other people on the network will notice, and network users will tell both Bob and Charlie that there is a problem with the transaction, and the transaction shouldn't go through.

In more detail, let's suppose Alice wants to give Bob an infocoin. As before, she signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567", and gives the signed message to Bob. Also as before, Bob does a sanity check, using his copy of the block chain to check that, indeed, the coin currently belongs to Alice. But at that point the protocol is modified. Bob doesn't just go ahead and accept the transaction. Instead, he broadcasts Alice's message to the entire network. Other members of the network check to see whether Alice owns that infocoin. If so, they broadcast the message "Yes, Alice owns infocoin 1234567; it can now be transferred to Bob." Once enough people have broadcast that message, everyone updates their block chain to show that infocoin 1234567 now belongs to Bob, and the transaction is complete.

This protocol has many imprecise elements at present. For instance, what does it mean to say "once enough people have broadcast that message"? What exactly does "enough" mean here? It can't mean everyone in the network, since we don't a priori know who is on the Infocoin network. For the same reason, it can't mean some fixed fraction of users in the network. We won't try to make these ideas precise right now. Instead, in the next section I'll point out a serious problem with the approach as described. Fixing that problem will at the same time have the pleasant side effect of making the ideas above much more precise.

Suppose Alice wants to double spend in the network-based protocol I just described. She could do this by taking over the Infocoin network. Let's suppose she uses an automated system to set up a large number of separate identities, let's say a billion, on the Infocoin network. As before, she tries to double spend the same infocoin with both Bob and Charlie. But when Bob and Charlie ask the network to validate their respective transactions, Alice's sock puppet identities swamp the network, announcing to Bob that they've validated his transaction, and to Charlie that they've validated his transaction, possibly fooling one or both into accepting the transaction.

There's a clever way of avoiding this problem, using an idea known as proof-of-work. The idea is counterintuitive and involves a combination of two ideas: (1) to (artificially) make it computationally costly for network users to validate transactions; and (2) to reward them for trying to help validate transactions. The reward is used so that people on the network will try to help validate transactions, even though that's now been made a computationally costly process. The benefit of making it costly to validate transactions is that validation can no longer be influenced by the number of network identities someone controls, but only by the total computational power they can bring to bear on validation. As we'll see, with some clever design we can make it so a cheater would need enormous computational resources to cheat, making it impractical.

That's the gist of proof-of-work. But to really understand proof-of-work, we need to go through the details.

Suppose Alice broadcasts to the network the news that "I, Alice, am giving Bob one infocoin, with serial number 1234567."

As other people on the network hear that message, each adds it to a queue of pending transactions that they've been told about, but which haven't yet been approved by the network. For instance, another network user named David might have the following queue of pending transactions:

"I, Tom, am giving Sue one infocoin, with serial number 1201174."

"I, Sydney, am giving Cynthia one infocoin, with serial number 1295618."

"I, Alice, am giving Bob one infocoin, with serial number 1234567."

David checks his copy of the block chain, and can see that each transaction is valid. He would like to help out by broadcasting news of that validity to the entire network.

However, before doing that, as part of the validation protocol David is required to solve a hard computational puzzle: the proof-of-work. Without the solution to that puzzle, the rest of the network won't accept his validation of the transaction.

What puzzle does David need to solve? To explain that, let h be a fixed hash function known by everyone in the network; it's built into the protocol. Bitcoin uses the well-known SHA-256 hash function, but any cryptographically secure hash function will do. Let's give David's queue of pending transactions a label, l, just so it's got a name we can refer to. Suppose David appends a number x (called the nonce) to l and hashes the combination h(l + x). For example, if we use l = "Hello, world!" (obviously this is not a list of transactions, just a string used for illustrative purposes) and the nonce x = 0 then (output is in hexadecimal, abbreviated here)

h("Hello, world!0") = 1312af…

The puzzle David has to solve (the proof-of-work) is to find a nonce x such that when we append x to l and hash the combination the output hash begins with a long run of zeroes. The puzzle can be made more or less difficult by varying the number of zeroes required to solve the puzzle. A relatively simple proof-of-work puzzle might require just three or four zeroes at the start of the hash, while a more difficult proof-of-work puzzle might require a much longer run of zeros, say 15 consecutive zeroes. In either case, the above attempt to find a suitable nonce, with x = 0, is a failure, since the output doesn't begin with any zeroes at all. Trying x = 1 doesn't work either:

h("Hello, world!1") = e9afc4…

We can keep trying different values for the nonce, x. Finally, at x = 4250 we obtain:

h("Hello, world!4250") = 0000c3…

This nonce gives us a string of four zeroes at the beginning of the output of the hash. This will be enough to solve a simple proof-of-work puzzle, but not enough to solve a more difficult proof-of-work puzzle.

What makes this puzzle hard to solve is the fact that the output from a cryptographic hash function behaves like a random number: change the input even a tiny bit and the output from the hash function changes completely, in a way that's hard to predict. So if we want the output hash value to begin with 10 zeroes, say, then David will need, on average, to try 16^10 ≈ 10^12 different values for x before he finds a suitable nonce. That's a pretty challenging task, requiring lots of computational power.
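The nonce search can be sketched in a few lines of Python using the standard library's SHA-256, counting leading zeroes in the hexadecimal output as in the "Hello, world!" example. Four hex zeroes is an easy puzzle, needing roughly 16^4 ≈ 65,000 attempts on average:

```python
import hashlib

def proof_of_work(l: str, zeroes: int) -> int:
    """Find the smallest nonce x such that sha256(l + str(x)) begins
    with `zeroes` zero hex digits -- a simplified Bitcoin-style puzzle."""
    x = 0
    while True:
        digest = hashlib.sha256((l + str(x)).encode()).hexdigest()
        if digest.startswith("0" * zeroes):
            return x
        x += 1

nonce = proof_of_work("Hello, world!", 4)
winning = hashlib.sha256(("Hello, world!" + str(nonce)).encode()).hexdigest()
assert winning.startswith("0000")
```

Finding the nonce takes thousands of hash evaluations; checking someone else's claimed nonce takes just one. That asymmetry is what makes proof-of-work useful.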

Obviously, it's possible to make this puzzle more or less difficult to solve by requiring more or fewer zeroes in the output from the hash function. In fact, the Bitcoin protocol gets quite a fine level of control over the difficulty of the puzzle, by using a slight variation on the proof-of-work puzzle described above. Instead of requiring leading zeroes, the Bitcoin proof-of-work puzzle requires the hash of a block's header to be lower than or equal to a number known as the target. This target is automatically adjusted to ensure that a Bitcoin block takes, on average, about ten minutes to validate.

(In practice there is sizeable randomness in how long it takes to validate a block: sometimes a new block is validated in just a minute or two, other times it may take 20 minutes or even longer. It's straightforward to modify the Bitcoin protocol so that the time to validation is much more sharply peaked around ten minutes. Instead of solving a single puzzle, we can require that multiple puzzles be solved; with some careful design it is possible to considerably reduce the variance in the time to validate a block of transactions.)

Alright, let's suppose David is lucky and finds a suitable nonce, x. Celebration! (He'll be rewarded for finding the nonce, as described below.) He broadcasts the block of transactions he's approving to the network, together with the value for x. Other participants in the Infocoin network can verify that x is a valid solution to the proof-of-work puzzle. And they then update their block chains to include the new block of transactions.

For the proof-of-work idea to have any chance of succeeding, network users need an incentive to help validate transactions. Without such an incentive, they have no reason to expend valuable computational power, merely to help validate other people's transactions. And if network users are not willing to expend that power, then the whole system won't work. The solution to this problem is to reward people who help validate transactions. In particular, suppose we reward whoever successfully validates a block of transactions by crediting them with some infocoins. Provided the infocoin reward is large enough, that will give them an incentive to participate in validation.

In the Bitcoin protocol, this validation process is called mining. For each block of transactions validated, the successful miner receives a bitcoin reward. Initially, this was set to be a 50 bitcoin reward. But for every 210,000 validated blocks (roughly, once every four years) the reward halves. This has happened just once, to date, and so the current reward for mining a block is 25 bitcoins. This halving in the rate will continue every four years until the year 2140 CE. At that point, the reward for mining will drop below 10^-8 bitcoins per block. 10^-8 bitcoins is actually the minimal unit of Bitcoin, and is known as a satoshi. So in 2140 CE the total supply of bitcoins will cease to increase. However, that won't eliminate the incentive to help validate transactions. Bitcoin also makes it possible to set aside some currency in a transaction as a transaction fee, which goes to the miner who helps validate it. In the early days of Bitcoin transaction fees were mostly set to zero, but as Bitcoin has gained in popularity, transaction fees have gradually risen, and are now a substantial additional incentive on top of the 25 bitcoin reward for mining a block.
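The halving schedule lends itself to a small illustrative calculation. This sketch uses floats for simplicity (real Bitcoin accounts in integer satoshis), and shows why the total supply converges to the well-known 21 million coin cap:

```python
def block_reward(height: int) -> float:
    """Mining reward in bitcoins at a given block height:
    50 BTC, halving every 210,000 blocks."""
    return 50.0 / 2 ** (height // 210_000)

assert block_reward(0) == 50.0          # initial reward
assert block_reward(210_000) == 25.0    # after the first halving

# Sum the geometric series of rewards across all halving eras:
total = sum(210_000 * block_reward(era * 210_000) for era in range(64))
assert abs(total - 21_000_000) < 1      # approaches 21 million coins
```

Each era contributes half as much new currency as the one before, so the supply is a geometric series summing to 2 × 210,000 × 50 = 21 million.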

You can think of proof-of-work as a competition to approve transactions. Each entry in the competition costs a little bit of computing power. A miner's chance of winning the competition is (roughly, and with some caveats) equal to the proportion of the total computing power that they control. So, for instance, if a miner controls one percent of the computing power being used to validate Bitcoin transactions, then they have roughly a one percent chance of winning the competition. So provided a lot of computing power is being brought to bear on the competition, a dishonest miner is likely to have only a relatively small chance to corrupt the validation process, unless they expend a huge amount of computing resources.

Of course, while it's encouraging that a dishonest party has only a relatively small chance to corrupt the block chain, that's not enough to give us confidence in the currency. In particular, we haven't yet conclusively addressed the issue of double spending.

I'll analyse double spending shortly. Before doing that, I want to fill in an important detail in the description of Infocoin. We'd ideally like the Infocoin network to agree upon the order in which transactions have occurred. If we don't have such an ordering then at any given moment it may not be clear who owns which infocoins. To help do this we'll require that new blocks always include a pointer to the last block validated in the chain, in addition to the list of transactions in the block. (The pointer is actually just a hash of the previous block.) So typically the block chain is just a linear chain of blocks of transactions, one after the other, with later blocks each containing a pointer to the immediately prior block:
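The "pointer is just a hash of the previous block" idea can be sketched in a few lines; the block format here is entirely made up for illustration:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically (toy serialization)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"prev": None, "txs": ["coinbase -> Alice: 50"]}
block1 = {"prev": block_hash(genesis), "txs": ["Alice -> Bob: 1"]}

# Tampering with an earlier block breaks every later pointer:
tampered = {"prev": None, "txs": ["coinbase -> Eve: 50"]}
assert block1["prev"] == block_hash(genesis)
assert block1["prev"] != block_hash(tampered)
```

Because each block commits to its predecessor's hash, rewriting history anywhere in the chain forces you to redo the proof-of-work for every block after it.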

Occasionally, a fork will appear in the block chain. This can happen, for instance, if by chance two miners happen to validate a block of transactions near-simultaneously. Both broadcast their newly-validated block out to the network, and some people update their block chain one way, and others update their block chain the other way:

This causes exactly the problem we're trying to avoid: it's no longer clear in what order transactions have occurred, and it may not be clear who owns which infocoins. Fortunately, there's a simple idea that can be used to remove any forks. The rule is this: if a fork occurs, people on the network keep track of both forks. But at any given time, miners only work to extend whichever fork is longest in their copy of the block chain.

Suppose, for example, that we have a fork in which some miners receive block A first, and some miners receive block B first. Those miners who receive block A first will continue mining along that fork, while the others will mine along fork B. Let's suppose that the miners working on fork B are the next to successfully mine a block:

After they receive news that this has happened, the miners working on fork A will notice that fork B is now longer, and will switch to working on that fork. Presto, in short order work on fork A will cease, and everyone will be working on the same linear chain, and block A can be ignored. Of course, any still-pending transactions in A will still be pending in the queues of the miners working on fork B, and so all transactions will eventually be validated.

Likewise, it may be that the miners working on fork A are the first to extend their fork. In that case work on fork B will quickly cease, and again we have a single linear chain.

No matter what the outcome, this process ensures that the block chain has an agreed-upon time ordering of the blocks. In Bitcoin proper, a transaction is not considered confirmed until: (1) it is part of a block in the longest fork, and (2) at least 5 blocks follow it in the longest fork. In this case we say that the transaction has "6 confirmations". This gives the network time to come to an agreed-upon ordering of the blocks. We'll also use this strategy for Infocoin.

With the time-ordering now understood, let's return to think about what happens if a dishonest party tries to double spend. Suppose Alice tries to double spend with Bob and Charlie. One possible approach is for her to try to validate a block that includes both transactions. Assuming she has one percent of the computing power, she will occasionally get lucky and validate the block by solving the proof-of-work. Unfortunately for Alice, the double spending will be immediately spotted by other people in the Infocoin network and rejected, despite solving the proof-of-work problem. So that's not something we need to worry about.

A more serious problem occurs if she broadcasts two separate transactions in which she spends the same infocoin with Bob and Charlie, respectively. She might, for example, broadcast one transaction to a subset of the miners, and the other transaction to another set of miners, hoping to get both transactions validated in this way. Fortunately, in this case, as we've seen, the network will eventually confirm one of these transactions, but not both. So, for instance, Bob's transaction might ultimately be confirmed, in which case Bob can go ahead confidently. Meanwhile, Charlie will see that his transaction has not been confirmed, and so will decline Alice's offer. So this isn't a problem either. In fact, knowing that this will be the case, there is little reason for Alice to try this in the first place.

An important variant on double spending is if Alice = Bob, i.e., Alice tries to spend a coin with Charlie which she is also spending with herself (i.e., giving back to herself). This sounds like it ought to be easy to detect and deal with, but, of course, it's easy on a network to set up multiple identities associated with the same person or organization, so this possibility needs to be considered. In this case, Alice's strategy is to wait until Charlie accepts the infocoin, which happens after the transaction has been confirmed 6 times in the longest chain. She will then attempt to fork the chain before the transaction with Charlie, adding a block which includes a transaction in which she pays herself:

Unfortunately for Alice, it's now very difficult for her to catch up with the longer fork. Other miners won't want to help her out, since they'll be working on the longer fork. And unless Alice is able to solve the proof-of-work at least as fast as everyone else in the network combined (roughly, that means controlling more than fifty percent of the computing power), she will just keep falling further and further behind. Of course, she might get lucky. We can, for example, imagine a scenario in which Alice controls one percent of the computing power, but happens to get lucky and finds six extra blocks in a row, before the rest of the network has found any extra blocks. In this case, she might be able to get ahead, and get control of the block chain. But this particular event will occur with probability (1/100)^6 = 10^-12. A more general analysis along these lines shows that Alice's probability of ever catching up is infinitesimal, unless she is able to solve proof-of-work puzzles at a rate approaching all other miners combined.
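A rough version of this calculation can be written down directly, including the gambler's-ruin style estimate from the original Bitcoin paper: an attacker with fraction q of the hash power, starting z blocks behind, catches up with probability (q/(1-q))^z when q < 1/2 (and with certainty otherwise). This is a simplified form of that analysis, not a full security proof:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability an attacker with hash-power fraction q ever closes
    a deficit of z blocks (gambler's-ruin estimate)."""
    p = 1 - q
    return 1.0 if q >= p else (q / p) ** z

# Six blocks in a row with one percent of the hash power:
streak = 0.01 ** 6
assert abs(streak - 1e-12) < 1e-20

# The ever-catching-up probability is of the same tiny order:
assert catch_up_probability(0.01, 6) < 1e-11
# With a majority of the hash power, catching up is guaranteed:
assert catch_up_probability(0.6, 6) == 1.0
```

This is why "more confirmations" means "exponentially harder to reverse": each extra block multiplies the attacker's catch-up probability by q/(1-q).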

Of course, this is not a rigorous security analysis showing that Alice cannot double spend. It's merely an informal plausibility argument. The original paper introducing Bitcoin did not, in fact, contain a rigorous security analysis, only informal arguments along the lines I've presented here. The security community is still analysing Bitcoin, and trying to understand possible vulnerabilities. You can see some of this research listed here, and I mention a few related problems in the "Problems for the author" below. At this point I think it's fair to say that the jury is still out on how secure Bitcoin is.

The proof-of-work and mining ideas give rise to many questions. How much reward is enough to persuade people to mine? How does the change in supply of infocoins affect the Infocoin economy? Will Infocoin mining end up concentrated in the hands of a few, or many? If it's just a few, doesn't that endanger the security of the system? Presumably transaction fees will eventually equilibrate; won't this introduce an unwanted source of friction, and make small transactions less desirable? These are all great questions, but beyond the scope of this post. I may come back to the questions (in the context of Bitcoin) in a future post. For now, we'll stick to our focus on understanding how the Bitcoin protocol works.

Let's move away from Infocoin, and describe the actual Bitcoin protocol. There are a few new ideas here, but with one exception (discussed below) they're mostly obvious modifications to Infocoin.

To use Bitcoin in practice, you first install a wallet program on your computer. To give you a sense of what that means, here's a screenshot of a wallet called MultiBit. You can see the Bitcoin balance on the left (0.06555555 Bitcoins, or about 70 dollars at the exchange rate on the day I took this screenshot) and on the right two recent transactions, which deposited those 0.06555555 Bitcoins:

Suppose you're a merchant who has set up an online store, and you've decided to allow people to pay using Bitcoin. What you do is tell your wallet program to generate a Bitcoin address. In response, it will generate a public / private key pair, and then hash the public key to form your Bitcoin address:

You then send your Bitcoin address to the person who wants to buy from you. You could do this in email, or even put the address up publicly on a webpage. This is safe, since the address is merely a hash of your public key, which can safely be known by the world anyway. (I'll return later to the question of why the Bitcoin address is a hash, and not just the public key.)
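A simplified sketch of "address = hash of public key" follows. Real Bitcoin hashes with SHA-256 followed by RIPEMD-160, then adds a version byte and a checksum and encodes the result in Base58; this toy version omits all of that and just demonstrates the one-way derivation:

```python
import hashlib

def toy_address(public_key: bytes) -> str:
    """Derive an address-like identifier from a public key (toy scheme;
    real Bitcoin uses RIPEMD160(SHA256(pubkey)) plus checksum/Base58)."""
    return hashlib.sha256(public_key).hexdigest()[:40]

addr = toy_address(b"merchant-public-key")   # made-up key bytes
assert len(addr) == 40
# Different keys yield different addresses; the key can't be
# recovered from the address, since hashing is one-way.
assert addr != toy_address(b"another-public-key")
```

Publishing the address reveals nothing useful about the private key, which is why it is safe to post on a webpage.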

The person who is going to pay you then generates a transaction. Let's take a look at the data from an actual transaction transferring bitcoins. What's shown below is very nearly the raw data. It's changed in three ways: (1) the data has been deserialized; (2) line numbers have been added, for ease of reference; and (3) I've abbreviated various hashes and public keys, just putting in the first six hexadecimal digits of each, when in reality they are much longer. Here's the data:

Let's go through this, line by line.

Line 1 contains the hash of the remainder of the transaction, 7c4025…, expressed in hexadecimal. This is used as an identifier for the transaction.

Line 2 tells us that this is a transaction in version 1 of the Bitcoin protocol.

Lines 3 and 4 tell us that the transaction has one input and one output, respectively. I'll talk below about transactions with more inputs and outputs, and why that's useful.

Line 5 contains the value for lock_time, which can be used to control when a transaction is finalized. For most Bitcoin transactions being carried out today the lock_time is set to 0, which means the transaction is finalized immediately.

Line 6 tells us the size (in bytes) of the transaction. Note that it's not the monetary amount being transferred! That comes later.

Lines 7 through 11 define the input to the transaction. In particular, lines 8 through 10 tell us that the input is to be taken from the output from an earlier transaction, with the given hash, which is expressed in hexadecimal as 2007ae…. The n=0 tells us it's to be the first output from that transaction; we'll see soon how multiple outputs (and inputs) from a transaction work, so don't worry too much about this for now. Line 11 contains the signature of the person sending the money, 304502…, followed by a space, and then the corresponding public key, 04b2d…. Again, these are both in hexadecimal.

One thing to note about the input is that there's nothing explicitly specifying how many bitcoins from the previous transaction should be spent in this transaction. In fact, all the bitcoins from the n=0th output of the previous transaction are spent. So, for example, if the n=0th output of the earlier transaction was 2 bitcoins, then 2 bitcoins will be spent in this transaction. This seems like an inconvenient restriction, like trying to buy bread with a 20 dollar note, and not being able to break the note down. The solution, of course, is to have a mechanism for providing change. This can be done using transactions with multiple inputs and outputs, which we'll discuss in the next section.
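A small worked example of the change mechanism, with made-up addresses and a zero fee assumed (in practice the difference between inputs and outputs is collected by the miner as a fee):

```python
# Spending a 2 BTC output to pay 0.319 BTC: the whole input is consumed,
# so the payer adds a second output returning the change to an address
# they control. Values are in bitcoins; addresses are illustrative.
input_value = 2.0
payment = 0.319
change = input_value - payment       # 1.681 BTC back to the payer

outputs = [
    ("recipient-address", payment),
    ("payer-change-address", change),
]
# With a zero fee, outputs account for the entire consumed input:
assert abs(sum(v for _, v in outputs) - input_value) < 1e-9
```

This is the digital analogue of handing over the whole 20 dollar note and receiving change back in the same transaction.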

Lines 12 through 14 define the output from the transaction. In particular, line 13 tells us the value of the output, 0.319 bitcoins. Line 14 is somewhat complicated. The main thing to note is that the string a7db6f… is the Bitcoin address of the intended recipient of the funds (written in hexadecimal). In fact, line 14 is actually an expression in Bitcoin's scripting language. I'm not going to describe that language in detail in this post; the important thing to take away now is just that a7db6f… is the Bitcoin address.

You can now see, by the way, how Bitcoin addresses the question I swept under the rug in the last section: where do Bitcoin serial numbers come from? In fact, the role of the serial number is played by transaction hashes. In the transaction above, for example, the recipient is receiving 0.319 Bitcoins, which come out of the first output of an earlier transaction with hash 2007ae… (line 9). If you go and look in the block chain for that transaction, you'd see that its output comes from a still earlier transaction. And so on.

There are two clever things about using transaction hashes instead of serial numbers. First, in Bitcoin there aren't really any separate, persistent coins at all, just a long series of transactions in the block chain. It's a clever idea to realize that you don't need persistent coins, and can just get by with a ledger of transactions. Second, by operating in this way we remove the need for any central authority issuing serial numbers. Instead, the serial numbers can be self-generated, merely by hashing the transaction.

In fact, it's possible to keep following the chain of transactions further back in history. Ultimately, this process must terminate. This can happen in one of two ways. The first possibility is that you'll arrive at the very first Bitcoin transaction, contained in the so-called Genesis block. This is a special transaction, having no inputs, but a 50 bitcoin output. In other words, this transaction establishes an initial money supply. The Genesis block is treated separately by Bitcoin clients, and I won't get into the details here, although it's along similar lines to the transaction above. You can see the deserialized raw data here, and read about the Genesis block here.
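The walk back through history can be pictured with a toy ledger; the dictionary layout here is hypothetical, standing in for a real block chain lookup, and each entry simply names the transaction whose output it spends:

```python
def trace_back(ledger, tx_id):
    """Follow a transaction's input to earlier and earlier transactions,
    stopping at one with no input (a coinbase or the Genesis transaction)."""
    path = [tx_id]
    while ledger[tx_id]["prev"] is not None:
        tx_id = ledger[tx_id]["prev"]
        path.append(tx_id)
    return path

# Toy ledger: tx2 spends an output of tx1, which spends an output of genesis.
ledger = {
    "genesis": {"prev": None},
    "tx1": {"prev": "genesis"},
    "tx2": {"prev": "tx1"},
}
```

Running `trace_back(ledger, "tx2")` walks the chain back to the input-less transaction at its root, which is exactly the termination described above.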

The second possibility when you follow a chain of transactions back in time is that eventually you'll arrive at a so-called coinbase transaction. With the exception of the Genesis block, every block of transactions in the block chain starts with a special coinbase transaction. This is the transaction rewarding the miner who validated that block of transactions. It uses a similar but not identical format to the transaction above. I won't go through the format in detail, but if you want to see an example, see here. You can read a little more about coinbase transactions here.

Something I haven't been precise about above is what exactly is being signed by the digital signature in line 11. The obvious thing to do is for the payer to sign the whole transaction (apart from the transaction hash, which, of course, must be generated later). Currently, this is not what is done: some pieces of the transaction are omitted. This makes some pieces of the transaction malleable, i.e., they can be changed later. However, this malleability does not extend to the amounts being paid out, or the senders and recipients, which can't be changed later. I must admit I haven't dug down into the details here. I gather that this malleability is under discussion in the Bitcoin developer community, and there are efforts afoot to reduce or eliminate it.

In the last section I described how a transaction with a single input and a single output works. In practice, it's often extremely convenient to create Bitcoin transactions with multiple inputs or multiple outputs. I'll talk below about why this can be useful. But first let's take a look at the data from an actual transaction:

Let's go through the data, line by line. It's very similar to the single-input-single-output transaction, so I'll do this pretty quickly.

Line 1 contains the hash of the remainder of the transaction. This is used as an identifier for the transaction.

Line 2 tells us that this is a transaction in version 1 of the Bitcoin protocol.

Lines 3 and 4 tell us that the transaction has three inputs and two outputs, respectively.

Line 5 contains the lock_time. As in the single-input-single-output case this is set to 0, which means the transaction is finalized immediately.

Line 6 tells us the size of the transaction in bytes.

Lines 7 through 19 define a list of the inputs to the transaction. Each corresponds to an output from a previous Bitcoin transaction.

The first input is defined in lines 8 through 11.

In particular, lines 8 through 10 tell us that the input is to be taken from the n=0th output from the transaction with hash 3beabc…. Line 11 contains the signature, followed by a space, and then the public key of the person sending the bitcoins.

Lines 12 through 15 define the second input, with a similar format to lines 8 through 11. And lines 16 through 19 define the third input.

Lines 20 through 24 define a list containing the two outputs from the transaction.

The first output is defined in lines 21 and 22. Line 21 tells us the value of the output, 0.01068000 bitcoins. As before, line 22 is an expression in Bitcoin's scripting language. The main thing to take away here is that the string e8c30622… is the Bitcoin address of the intended recipient of the funds.

The second output is defined in lines 23 and 24, with a similar format to the first output.
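One consequence of the multiple-input, multiple-output format is worth making explicit: in a valid transaction the outputs cannot total more than the inputs, and any shortfall is implicitly claimed by the miner as a fee. A sketch, with my own helper name and illustrative values in satoshis:

```python
def implied_fee(input_sats, output_sats):
    """Fee implicitly offered by a transaction: total in minus total out.
    A transaction whose outputs exceed its inputs is invalid."""
    fee = sum(input_sats) - sum(output_sats)
    if fee < 0:
        raise ValueError("outputs exceed inputs: invalid transaction")
    return fee

# Three inputs, two outputs, as in the transaction above (values illustrative).
fee = implied_fee([700_000, 300_000, 100_000], [1_068_000, 20_000])
```

Here the inputs total 1,100,000 satoshis and the outputs 1,088,000, leaving 12,000 satoshis for the miner.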

More here:
How the Bitcoin protocol actually works | DDI

Decoding Illuminati Symbolism: Triangles, Pyramids and the Sun

 Illuminati  Comments Off on Decoding Illuminati Symbolism: Triangles, Pyramids and the Sun
Jul 222015
 

Psychiatrist Carl Jung once said about symbols that their purpose was to give a meaning to the life of man. Catapulted into the mainstream by Jay-Z's infamous Roc-diamond (which only looks like a triangle, although he has said that it's a four-sided diamond for the Rock in Roc-A-Fella Records), the symbolism of the triangle and pyramid are key players in the realm of conspiracy theories and Illuminati symbolism. You can find these symbols in almost any big industry: music, film, corporate logos, etc. But why do we see these symbols so often? What do they truly mean?

The symbol of the triangle is commonly held to have a much deeper and more esoteric meaning than the basic geometric shape we common folk see. The symbolism, or meaning, of the triangle is usually viewed as one of spiritual importance. The Christian faith views the three sides of the triangle as the Holy Trinity: God the Father, God the Son, and God the Holy Spirit. Ancient Egyptians believed the right-angled triangle represented their form of the Trinity, with the hypotenuse being the child god Horus, the upright side being the sacred feminine goddess Isis, and the base being the male Osiris.

This concept was kept in a sort of chain of custody when the Greek mathematician Pythagoras learned much from the ancient Egyptians and then applied it to geometry. He even went as far as to set up one of the first mystery schools, a religious sect that practiced his philosophy, mathematics, and conferring of esoteric principles. In theory, the secret societies, cults, occultists, and other nefarious groups, collectively known as the Illuminati, maintain all of this knowledge and use it in a much different manner.

To understand why all of this matters, you must learn about the belief system of the occult. A researcher named Marty Leeds wrote books on mathematics and the universal language that nature uses to communicate with us. He believes that various languages are sacred and have a basis in ancient symbols through mathematics. I find his argument compelling, and I've tried to incorporate some of its logic into this post, as I find it important to the argument.

The three sides of a triangle represent the number 3, and this concept is used in gematria, the ancient Babylonian/Hebrew numerology practice that assigns numbers to words or letters (and also in other mystical schools of thought). The number 3 is representative of the spirit realm (or the Heavens), while in contrast, the number 4 represents the physical realm (the material, three-dimensional world we can relate to). The number 3 is a number of the divine, showing the union of male and female that creates a third being. It's the number of manifestation: to make something happen.

Another analogy to consider is that the upright triangle points towards the Heavens, while the inverted one points to the Earth (or Hell, if you want to get all fire-and-brimstone about it).

Read more:
Decoding Illuminati Symbolism: Triangles, Pyramids and the Sun

Illuminati NWO explained 2015 with Gary Harbinger – Pope, Climate Change, Aliens, Christ F – Video

 Illuminati  Comments Off on Illuminati NWO explained 2015 with Gary Harbinger – Pope, Climate Change, Aliens, Christ F – Video
Apr 122015
 



Illuminati NWO explained 2015 with Gary Harbinger – Pope, Climate Change, Aliens, Christ F
hope you guys can glean from this post too.

By: Evangeline France

View post:
Illuminati NWO explained 2015 with Gary Harbinger – Pope, Climate Change, Aliens, Christ F – Video

Harper to meet with NATO secretary general

 NATO  Comments Off on Harper to meet with NATO secretary general
Apr 052015
 

By The Canadian Press

OTTAWA – NATO Secretary General Jens Stoltenberg is making his first visit to Canada this week, with the subject of how to address rising tensions with Russia likely to feature high on the agenda.

Prime Minister Stephen Harper is scheduled to meet with Stoltenberg on Monday, and a Canadian source close to the meetings said the threat posed by the Islamic State of Iraq and the Levant (ISIL) would be a major topic of discussion.

Harper plans to ask the House of Commons to extend and expand Canada’s military involvement this week.

But NATO has had no formal role to date in fighting ISIL. Russian President Vladimir Putin's recent moves, including mobilizing 45,000 northern troops for military exercises last week, have been the alliance's major preoccupation.

On Sunday, NATO’s supreme allied commander Gen. Philip Breedlove told a news conference that the west should consider sending defensive weapons into Ukraine. The UN has said 6,000 people have died in the country over the past year.

The United States has been actively considering sending lethal defensive weapons to Ukraine to help that country defend itself against Russian-backed fighters. Germany has urged caution, warning that supplying Ukraine could escalate tensions.

Defence Minister Jason Kenney has dropped broad hints that Canada could be poised to provide Ukraine with more military assistance. He has said cabinet is considering whether Canada should join the U.S. and Britain in a military training mission to help Ukrainian troops.

Stoltenberg, a former Norwegian cabinet minister who took up the post last October, has warned that snap Russian military exercises and less communication between Russia and NATO could have dire consequences.

“It is important we keep the channels for military communication open to have as much transparency as possible to avoid misunderstandings and to make sure that incidents don’t spiral and get out of control,” Stoltenberg told the Guardian newspaper last week.

See the rest here:
Harper to meet with NATO secretary general

WordPress WordPress SEO by Yoast WordPress Plugins

 SEO  Comments Off on WordPress WordPress SEO by Yoast WordPress Plugins
Apr 052015
 

WordPress out of the box is already technically quite a good platform for SEO. This was true when Joost wrote his original WordPress SEO article in 2008 (and it has been updated every few months since), and it's still true today, but that doesn't mean you can't improve it further! This plugin is written from the ground up by Joost de Valk and his team at Yoast to improve your site's SEO on all needed aspects. While this WordPress SEO plugin goes the extra mile to take care of all the technical optimization, more on that below, it first and foremost helps you write better content. WordPress SEO forces you to choose a focus keyword when you're writing your articles, and then makes sure you use that focus keyword everywhere.

Premium Support The Yoast team does not provide support for the WordPress SEO plugin on the WordPress.org forums. One on one email support is available to people who bought the Premium WordPress SEO plugin only. Note that the Premium SEO plugin has several extra features too so it might be well worth your investment!

You should also check out the Local SEO, News SEO and Video SEO extensions to WordPress SEO, these of course come with support too.

Take a look at the explanation of the General tab in WordPress SEO, this is one of the 13 tutorial videos included in the Premium version of WordPress SEO:

Bug Reports Bug reports for WordPress SEO are welcomed on GitHub. Please note GitHub is not a support forum and issues that aren’t properly qualified as bugs will be closed.

Using the snippet preview you can see a rendering of what your post or page will look like in the search results, whether your title is too long or too short and your meta description makes sense in the context of a search result. This way the plugin will help you not only increase rankings but also increase the click through for organic search results.

The WordPress SEO plugin's Page Analysis functionality checks simple things you're bound to forget. It checks, for instance, if you have images in your post and whether they have an alt tag containing the focus keyword for that post. It also checks whether your posts are long enough, if you've written a meta description and if that meta description contains your focus keyword, if you've used any subheadings within your post, etc.

The plugin also allows you to write meta titles and descriptions for all your category, tag and custom taxonomy archives, giving you the option to further optimize those pages.

Combined, this plugin makes sure that your content is the type of content search engines will love!

While out of the box WordPress is pretty good for SEO, it needs some tweaks here and there. This WordPress SEO plugin guides you through some of the settings needed, for instance by reminding you to enable pretty permalinks. But it also goes beyond that, by automatically optimizing and inserting the meta tags and link elements that Google and other search engines like so much:

See more here:
WordPress WordPress SEO by Yoast WordPress Plugins

The SEO-Hater’s Guide to Better Search Rankings – Forbes

 SEO  Comments Off on The SEO-Hater’s Guide to Better Search Rankings – Forbes
Apr 012015
 

Do the letters S, E, and O strike fear in your heart? When people talk about keyword research or link building, do you want to hide under a rock?

Not everyone is a fan of SEO, and that's okay. Many business owners are (understandably) more interested in performing core tasks related to their business; and while most would love to have more search engine traffic, the specifics of how to get it may feel like a bit of a mystery.

This post will give even the most SEO-indifferent or SEO-hating business owners a palatable strategy for improving their search rankings. You don't have to love it; you just have to do it!

In my experience, and as I've been doing a lot of SEO recently on my free invoicing software, people who think they hate SEO often have a skewed idea of what SEO actually is. They think it's all very technical (too technical), and that there's some kind of secret SEO formula they need to learn. And not only do they not have time to learn this formula, they have no interest in learning it.

I get it. So let me give you a very short, to-the-point overview of what SEO really is, and why it's not worth hating: SEO is simply letting the search engines know what your content is about. That's it. While there are technical elements to it, almost anyone can understand and implement the basics. And the basics are often enough to get you increased search rankings.

Bottom line: Stop avoiding SEO. You can do this. The basics are just that: basic.

Assuming you now hate SEO slightly less, it's time to move on to step 2.

There are a number of big mistakes that can pretty much ensure you won't get search traffic. Before we look at the stuff you can do, let's make sure you're not doing the stuff you shouldn't do. Take a look through this list to make sure you're not breaking any of these cardinal sins of SEO:

I know you don't like the sound of this. Keywords sound technical, and we don't like technical. But keywords are actually just the words and phrases on your site that you want to rank for.

These might include:

See the original post here:
The SEO-Hater’s Guide to Better Search Rankings – Forbes

Strauss: Liberty University students say they were required to attend Ted Cruz speech

 Liberty  Comments Off on Strauss: Liberty University students say they were required to attend Ted Cruz speech
Mar 272015
 

Republican Sen. Ted Cruz appeared earlier this week at Liberty University in Virginia to announce that he was running for president in 2016, becoming the first major candidate in the Democratic or Republican parties to formally declare. Cruz delivered the news at Liberty, the largest Christian university in the world, before a gathering of students. What you might not know about that gathering is that the students were required to attend. This post explains what happened and why. It was written by Alexandra Markovich, a 19-year-old student at Princeton University who is a member of the University Press Club, a selective group of undergraduate students who freelance for regional and national publications. Markovich is an intended major in the Woodrow Wilson School of International and Public Affairs with a focus in Russian, East European, and Eurasian Studies.

By Alexandra Markovich

Ted Cruz became the first Republican candidate to officially announce his 2016 presidential campaign in front of an audience of 11,000 college students on March 23, 2015. The catch: they had to be there. Cruz made his announcement on Monday morning at Liberty University, where Convocation is mandatory for students living on campus, at risk of a $10 fine for failing to attend.

As sophomore Luke Wittel walked through the doors of the Vines Center, home to Liberty's men's and women's basketball teams, he was offered an American flag, the first thing that made Wittel realize this was going to be more political rally than spiritual gathering. The American flags blended patriotism and support for Ted Cruz, Wittel said, in a way that made him nervous.

All Wittel could do to show his disagreement was not to take the flag. Wittel said that when he asked his RA if he could be excused and not be forced into apparent political association, he was sternly reminded of school policy. Throughout the hour-long Convocation, Wittel said he was not allowed to leave.

Liberty University is an evangelical university in Lynchburg, Virginia founded by the late pastor Jerry Falwell in 1971. A recent Washington Post article called Liberty the symbolic center of the GOP political-religious universe in recent years. The social conservative youth ticket will be an important card to punch in the GOP campaign.

Seeing the American flags handed out left Wittel with bitter feelings of political exploitation. Nothing makes you feel more like a pawn than being told to hold this and sit down, he said. But, Wittel sees logic behind holding the announcement at Liberty.

Link:
Strauss: Liberty University students say they were required to attend Ted Cruz speech


Nato head tells David Cameron: We are counting on your leadership

 NATO  Comments Off on Nato head tells David Cameron: We are counting on your leadership
Mar 132015
 

"We appreciate the leadership that the UK shows in the Alliance, and we count on leadership also in the future," Mr Stoltenberg said.

In the same press release, Nato said he would be meeting Michael Fallon to ensure important decisions from the Wales summit last year were being implemented.

A central outcome of the summit was a promise for all European allies to recommit to spending 2 per cent of their GDP on defence, a long-standing obligation.

At the time the Prime Minister called on those countries below the mark to meet the obligation within a decade and signed a pledge saying Britain would aim to continue to hit the 2 per cent mark.

In a separate development, Mr Cameron appeared to admit the difficulty in justifying why a government should protect aid spending during austerity while not ring-fencing defence.

Pushed by the Financial Times on how the Prime Minister could say defence was more about deployability of forces than raw spending numbers while enshrining legal aid spending in law, Mr Cameron reportedly said: "It's a fair point."

A No 10 spokesperson said of the meeting between Mr Cameron and Mr Stoltenberg: The Prime Minister explained that the UK would continue to meet the 2 per cent target this financial year and next, but decisions beyond this would be made in the next Spending Review.

The Secretary General said he appreciated the UK's leadership within the Alliance and that the Government was using its defence spending to focus on investment in new capabilities.

Last month two former Nato heads warned that Mr Cameron would embolden Mr Putin and Islamic terrorists if he reneged on a commitment to spend two per cent of GDP on defence.

Anders Fogh Rasmussen, who left the post of Nato secretary general last year, and his predecessor Jaap de Hoop Scheffer said cutting defence after the election would strengthen Britain's enemies.

Go here to read the rest:
Nato head tells David Cameron: We are counting on your leadership
