
Our MPs | Liberal Party of Canada

Jun 24 2016

Justin Trudeau is the Leader of the Liberal Party of Canada.

Justin was first elected to Parliament in the Montreal riding of Papineau in 2008, defying political insiders who believed that a federalist candidate would have little chance against an incumbent member of the Bloc Québécois. For Justin, the people of Papineau, 50 percent of whom speak neither French nor English as their mother tongue, exemplify Canada's rich diversity, evolving identity, and the struggle for equality of opportunity. He has served the hard-working middle-class families and small businesses of his constituency, who, in recent years, have faced economic challenges. He has worked alongside local community organisations by bringing together different cultures and religions, and establishing local initiatives on social issues, the environment, and the arts.

As a Member of Parliament, Justin has held many responsibilities, including serving as the Liberal Party Critic for Youth, Post-Secondary Education, Amateur Sports, Multiculturalism, and Citizenship and Immigration. He also previously sat on the Parliamentary Committees on Environment and Sustainable Development and on Citizenship and Immigration.

As a Parliamentarian, and prior to that, Justin travelled the country and met with Canadians in every region, consistently speaking about shared values, the importance of youth empowerment, protecting our wilderness, and living up to our place in the world. Some of Justin's proudest accomplishments include his advocacy for victims of the earthquake in Haiti in 2010, his activism to protect the Nahanni River in the Northwest Territories in 2005, and his service as chair of Katimavik, Canada's national youth service program, from 2002 to 2006.

At the heart of Justin's professional achievements, whether as a math and French teacher in British Columbia, in his leadership role at Katimavik, or in his strong defense of Quebec as a Member of Parliament, is a deep respect for Canadians from coast to coast to coast and his desire to serve them.

On April 14, 2013, Justin was elected Leader of the Liberal Party of Canada in the most open and accessible leadership election in Canadian history, in which tens of thousands of Canadians participated.

Justin has a Bachelor of Arts degree from McGill University and a Bachelor of Education degree from the University of British Columbia. He was born on December 25, 1971, the eldest son of the late former Prime Minister Pierre Elliott Trudeau and Margaret Sinclair Trudeau Kemper. Justin is married to Sophie Grégoire. The couple welcomed their first child, Xavier James Trudeau, on October 18, 2007, and added to their family with the arrival of Ella-Grace Trudeau on February 5, 2009, and Hadrien Trudeau on February 28, 2014.

See the rest here:

Our MPs | Liberal Party of Canada


Socio-Economic Collapse | Prometheism.net

Jun 17 2016

In archaeology, the classic Maya collapse refers to the decline of Maya civilization and the abandonment of Maya cities in the southern Maya lowlands of Mesoamerica between the 8th and 9th centuries, at the end of the Classic Maya Period. The Preclassic Maya experienced a similar collapse in the 2nd century.

The Classic Period of Mesoamerican chronology is generally defined as the period from 250 to 900, the last century of which is referred to as the Terminal Classic.[1] The classic Maya collapse is one of the greatest unsolved mysteries in archaeology. Urban centers of the southern lowlands, among them Palenque, Copán, Tikal, and Calakmul, went into decline during the 8th and 9th centuries and were abandoned shortly thereafter. Archaeologically, this decline is indicated by the cessation of monumental inscriptions and the reduction of large-scale architectural construction at the primary urban centers of the classic period.

Although termed a collapse, it did not mark the end of the Maya civilization; the northern Yucatán in particular prospered afterwards, although with very different artistic and architectural styles, and with much less use of monumental hieroglyphic writing. In the post-classic period following the collapse, the state of Chichén Itzá built an empire that briefly united much of the Maya region,[citation needed] and centers such as Mayapán and Uxmal flourished, as did the highland states of the K'iche' and Kaqchikel Maya. Independent Maya civilization continued until 1697, when the Spanish conquered Nojpetén, the last independent city-state. Millions of Maya people still inhabit the Yucatán peninsula today.

Because parts of Maya civilization unambiguously continued, a number of scholars strongly dislike the term "collapse."[2] Regarding the proposed collapse, E. W. Andrews IV went as far as to say, "in my belief no such thing happened."[3]

The Maya often recorded dates on monuments they built. Few dated monuments were being built circa 500: around ten per year in 514, for example. The number steadily increased, reaching twenty per year by 672 and forty by around 750. After this, the number of dated monuments begins to falter relatively quickly, collapsing back to ten by 800 and to zero by 900. Likewise, recorded lists of kings complement this analysis. Altar Q shows a dynasty of kings from 426 to 763. One last king not recorded on Altar Q was Ukit Took, Patron of Flint, who was probably a usurper. The dynasty is believed to have collapsed entirely shortly thereafter. In Quiriguá, twenty miles north of Copán, the last king Jade Sky began his rule between 895 and 900, and throughout the Maya area all kingdoms similarly fell around that time.[4]

A third piece of evidence of the progression of Maya decline, gathered by Ann Corinne Freter, Nancy Gonlin, and David Webster, uses a technique called obsidian hydration. The technique allowed them to map the spread and growth of settlements in the Copán Valley and estimate their populations. The population, which had been growing since roughly 400 to 450, was estimated at a peak of twenty-eight thousand between 750 and 800, larger than London at the time. Population then began to steadily decline. By 900 the population had fallen to fifteen thousand, and by 1200 the population was again fewer than 1,000.

Some 88 different theories or variations of theories attempting to explain the Classic Maya Collapse have been identified.[5] From climate change to deforestation to lack of action by Mayan kings, there is no universally accepted collapse theory, although drought is gaining momentum as the leading explanation.[6]

The archaeological evidence of the Toltec intrusion into Seibal, Petén, suggests to some the theory of foreign invasion. The latest hypothesis states that the southern lowlands were invaded by a non-Maya group whose homelands were probably in the gulf coast lowlands. This invasion began in the 9th century and set off, within 100 years, a group of events that destroyed the Classic Maya. It is believed that this invasion was somehow influenced by the Toltec people of central Mexico. However, most Mayanists do not believe that foreign invasion was the main cause of the Classic Maya Collapse; they postulate that no military defeat can explain or be the cause of the protracted and complex Classic Collapse process. Teotihuacan influence across the Maya region may have involved some form of military invasion; however, it is generally noted that significant Teotihuacan-Maya interactions date from at least the Early Classic period, well before the episodes of Late Classic collapse.[7]

The foreign invasion theory does not answer the question of where the inhabitants went. David Webster believed that the population should have increased because of the lack of elite power. Further, it is not understood why the governmental institutions were not remade following the revolts, which actually happened under similar circumstances in places like China. A study by anthropologist Elliot M. Abrams came to the conclusion that buildings, specifically in Copán, did not actually require an extensive amount of time and workers to construct.[8] However, this theory was developed during a time period when the archaeological evidence showed that there were fewer Maya people than there are now known to have been.[9] Revolutions, peasant revolts, and social turmoil change circumstances, and are often followed by foreign wars, but they run their course. There are no documented revolutions that caused wholesale abandonment of entire regions.

It has been hypothesized that the decline of the Maya is related to the collapse of their intricate trade systems, especially those connected to the central Mexican city of Teotihuacán. Before improved knowledge of the chronology of Mesoamerica, Teotihuacan was believed to have fallen during 700–750, forcing the restructuring of economic relations throughout highland Mesoamerica and the Gulf Coast.[10] This remaking of relationships between civilizations would have then given the collapse of the Classic Maya a slightly later date. However, with better knowledge of the events and the periods in which they occurred, it is now believed that the strongest Teotihuacan influence was during the 4th and 5th centuries. In addition, the civilization of Teotihuacan started to lose its power, and may even have abandoned the city, during 600–650. This differs greatly from the previous belief that Teotihuacano power decreased during 700–750.[11] Since the new decline date of 600–650 has been accepted, the Maya civilizations are now thought to have lived on and prospered for another century or more[12] than was previously believed. Rather than the decline of Teotihuacan directly preceding the collapse of the Maya, their decline is now seen as contributing to the 6th-century hiatus.[12]

The disease theory is also a contender as a factor in the Classic Maya Collapse. Widespread disease could explain some rapid depopulation, both directly through the spread of infection itself and indirectly as an inhibition to recovery over the long run. According to Dunn (1968) and Shimkin (1973), infectious diseases spread by parasites are common in tropical rainforest regions, such as the Maya lowlands. Shimkin specifically suggests that the Maya may have encountered endemic infections related to American trypanosomiasis, Ascaris, and some enteropathogens that cause acute diarrheal illness. Furthermore, some experts believe that, through development of their civilization (that is, development of agriculture and settlements), the Maya could have created a disturbed environment, in which parasitic and pathogen-carrying insects often thrive.[13] Among the pathogens listed above, it is thought that those that cause the acute diarrheal illnesses would have been the most devastating to the Maya population. This is because such illness would have struck a victim at an early age, thereby hampering nutritional health and the natural growth and development of a child. This would have made them more susceptible to other diseases later in life. Such ideas as this could explain the role of disease as at least a possible partial reason for the Classic Maya Collapse.[14]

Mega-droughts hit the Yucatán Peninsula and Petén Basin areas with particular ferocity, as thin tropical soils decline in fertility and become unworkable when deprived of forest cover,[15] and due to regular seasonal drought drying up surface water.[16] Colonial Spanish officials accurately documented cycles of drought, famine, disease, and war, providing a reliable historical record of the basic drought pattern in the Maya region.[17]

Climatic factors were first implicated in the Collapse as early as 1931 by Mayanists Thomas Gann and J.E.S. Thompson.[18] In The Great Maya Droughts, Richardson Gill gathers and analyzes an array of climatic, historical, hydrologic, tree ring, volcanic, geologic, lake bed, and archeological research, and demonstrates that a prolonged series of droughts probably caused the Classic Maya Collapse.[19] The drought theory provides a comprehensive explanation, because non-environmental and cultural factors (excessive warfare, foreign invasion, peasant revolt, less trade, etc.) can all be explained by the effects of prolonged drought on Classic Maya civilization.[20]

Climatic changes are, with increasing frequency, found to be major drivers in the rise and fall of civilizations all over the world.[21] Professors Harvey Weiss of Yale University and Raymond S. Bradley of the University of Massachusetts have written, "Many lines of evidence now point to climate forcing as the primary agent in repeated social collapse."[22] In a separate publication, Weiss illustrates an emerging understanding among scientists:

Within the past five years new tools and new data for archaeologists, climatologists, and historians have brought us to the edge of a new era in the study of global and hemispheric climate change and its cultural impacts. The climate of the Holocene, previously assumed static, now displays a surprising dynamism, which has affected the agricultural bases of pre-industrial societies. The list of Holocene climate alterations and their socio-economic effects has rapidly become too complex for brief summary.[23]

The drought theory holds that rapid climate change in the form of severe drought brought about the Classic Maya collapse. According to the particular version put forward by Gill in The Great Maya Droughts,

[Studies of] Yucatecan lake sediment cores provide unambiguous evidence for a severe 200-year drought from AD 800 to 1000, the most severe in the last 7,000 years, precisely at the time of the Maya Collapse.[24]

Climatic modeling, tree ring data, and historical climate data show that cold weather in the Northern Hemisphere is associated with drought in Mesoamerica.[25] Northern Europe suffered extremely low temperatures around the same time as the Maya droughts. The same connection between drought in the Maya areas and extreme cold in northern Europe was found again at the beginning of the 20th century. Volcanic activity, within and outside Mesoamerica, is also correlated with colder weather and resulting drought, as the effects of the Tambora volcano eruption in 1815 indicate.[26]

Mesoamerican civilization provides a remarkable exception: civilization prospering in the tropical swampland. The Maya are often perceived as having lived in a rainforest, but technically, they lived in a seasonal desert without access to stable sources of drinking water.[27] The exceptional accomplishments of the Maya are even more remarkable because of their engineered response to the fundamental environmental difficulty of relying upon rainwater rather than permanent sources of water. The Maya succeeded in creating a civilization in a seasonal desert by creating a system of water storage and management which was totally dependent on consistent rainfall.[28] The constant need for water kept the Maya on the edge of survival. Given this precarious balance of wet and dry conditions, even a slight shift in the distribution of annual precipitation can have serious consequences.[16] Water and civilization were vitally connected in ancient Mesoamerica. Archaeologist and specialist in pre-industrial land and water usage practices, Vernon Scarborough, believes water management and access were critical to the development of Maya civilization.[29]

Critics of the drought theory wonder why the southern and central lowland cities were abandoned while northern cities like Chichén Itzá, Uxmal, and Cobá continued to thrive.[30] One critic argued that Chichén Itzá revamped its political, military, religious, and economic institutions away from powerful lords or kings.[31] Inhabitants of the northern Yucatán also had access to seafood, which might have explained the survival of Chichén Itzá and Mayapán, cities away from the coast but within reach of coastal food supplies.[32] Critics of the drought theory also point to current weather patterns: much heavier rainfall in the southern lowlands compared to the lighter amount of rain in the northern Yucatán. Drought theory supporters state that the entire regional climate changed, including the amount of rainfall, so that modern rainfall patterns are not indicative of rainfall from 800 to 900. LSU archaeologist Heather McKillop found a significant rise in sea level along the coast nearest the southern Maya lowlands, coinciding with the end of the Classic period and indicating climate change.[33]

David Webster, a critic of the megadrought theory, says that much of the evidence provided by Gill comes from the northern Yucatán and not from the southern part of the peninsula, where Classic Maya civilization flourished. He also states that if water sources had dried up, then several city-states would have moved to other water sources. Gill's suggestion that all water in the region dried up and destroyed Maya civilization is a stretch, according to Webster.[34]

A study published in Science in 2012 found that modest rainfall reductions, of only 25 to 40 percent of annual rainfall, may have been the tipping point for the Maya collapse. Based on samples of lake and cave sediments from the areas surrounding major Maya cities, the researchers were able to determine the amount of annual rainfall in the region. The mild droughts that took place between 800 and 950 would therefore have been enough to rapidly deplete seasonal water supplies in the Yucatán lowlands, where there are no rivers.[35][36][37]

Some ecological theories of Maya decline focus on the worsening agricultural and resource conditions in the late Classic period. It was originally thought that the majority of Maya agriculture was dependent on a simple slash-and-burn system. Based on this method, the hypothesis of soil exhaustion was advanced by Orator F. Cook in 1921. Similar soil exhaustion assumptions are associated with erosion, intensive agriculture, and savanna grass competition.

More recent investigations have shown a complicated variety of intensive agricultural techniques utilized by the Maya, explaining the high population of the Classic Maya polities. Modern archaeologists now comprehend the sophisticated intensive and productive agricultural techniques of the ancient Maya, and several of the Maya agricultural methods have not yet been reproduced. Intensive agricultural methods were developed and utilized by all the Mesoamerican cultures to boost their food production and give them a competitive advantage over less skillful peoples.[38] These intensive agricultural methods included canals, terracing, raised fields, ridged fields, chinampas, the use of human feces as fertilizer, seasonal swamps or bajos, using muck from the bajos to create fertile fields, dikes, dams, irrigation, water reservoirs, several types of water storage systems, hydraulic systems, swamp reclamation, swidden systems, and other agricultural techniques that have not yet been fully understood.[39] Systemic ecological collapse is said to be evidenced by deforestation, siltation, and the decline of biological diversity.

In addition to mountainous terrain, Mesoamericans successfully exploited the very problematic tropical rainforest for 1,500 years.[40] The agricultural techniques utilized by the Maya were entirely dependent upon ample supplies of water. The Maya thrived in territory that would be uninhabitable to most peoples. Their success over two millennia in this environment was amazing.[41]

Anthropologist Joseph Tainter wrote extensively about the collapse of the Southern Lowland Maya in his 1988 study, The Collapse of Complex Societies. His theory about Maya collapse encompasses some of the above explanations, but focuses specifically on the development of, and the declining marginal returns from, the increasing social complexity of the competing Maya city-states.[42] Psychologist Julian Jaynes suggested that the collapse was due to a failure in the social control systems of religion and political authority, as increasing socioeconomic complexity overwhelmed the power of traditional rituals and the kings' authority to compel obedience.[43]

Originally posted here:

Classic Maya collapse - Wikipedia, the free encyclopedia

Read more here:

Socio-Economic Collapse | Prometheism.net

Urban Dictionary: liberal

Jun 17 2016

A liberal, in the American sense, is one who falls to the left on the political spectrum; in other parts of the world, however, liberalism is the belief in laissez-faire capitalism and free-market systems - hence the recently coined term "neoliberalism."

Although I do not like to generalize, for the purposes of a (somewhat) concise dictionary definition, here is the very basic liberal (American sense) ideology:

Politics: The federal government exists to protect and serve the people, and therefore, should be given sufficient power to fulfill its role successfully. Ways in which this can be accomplished include giving the federal government more power than local governments and having the government provide programs designed to protect the interests of the people (these include welfare, Medicare, and social security). Overall, these programs have helped extensively in aiding the poor and unfortunate, as well as the elderly and middle class. To make sure that the interests of the people are served, it was liberals (or so they were considered in their time) that devised the idea of a direct democracy, a republic, and modern democracy. This way, it is ensured that the federal government represents the interests of the people, and the extensive power that it is given is not used to further unpopular goals. Liberals do not concentrate on military power (though that is not to say they ignore it), but rather focus on funding towards education, improving wages, protecting the environment, etc. Many propose the dismantling of heavy-cost programs such as the Star Wars program (no, not the film series), in order to use the money to fund more practical needs.

Social Ideology: As one travels further left on the political spectrum, it is noticed that tolerance, acceptance, and general compassion for all people steadily increase (in theory at least). Liberals are typically concerned with the rights of the oppressed and unfortunate; this, of course, does not mean that they ignore the rights of others (liberals represent the best interests of the middle class in America). This has led many liberals to lobby for the rights of homosexuals, women, minorities, single mothers, etc. Many fundamentalists see this as immoral; however, it is, in reality, the most mature and progressive way in which to deal with social differences. Liberals are identified with fighting for equal rights, such as those who wanted to abolish slavery and those who fought hard for a woman's reproductive right (see Abortion). Liberals have also often fought for ecological integrity, protecting the environment and the diversity of species, as well as indigenous populations' rights. Almost all social betterment programs are funded by liberal institutions, and government-funded social programs on education improvement, children's rights, women's rights, etc. are all supported by liberals. Basically, social liberalism is the mature, understanding way in which to embrace individual differences, not according to ancient dogma or religious prejudice, but according to the ideals of humanity that have been cultivated by our experiences throughout history, summed up in that famous American maxim: with liberty and justice for all.

Economics: Using the term liberal when speaking of economics is very confusing, as liberal in America is completely opposite to the rest of the world. Therefore, here, as I have been doing, I will concentrate on the American definition of liberal concerning economics. Liberals believe that the rights of the people, of the majority, are to be valued much more sincerely than those of corporations, and therefore have frequently proposed the weakening of corporate power through heavier taxation (of corporations), environmental regulations, and the formation of unions. Liberals often propose the heavier taxation of WEALTHY individuals, while alleviating taxes on the middle class, and especially the poor. Liberals (American sense) do not support laissez-faire economics because, to put it simply, multinational corporations take advantage of developing countries and encourage exploitation and child labor (multinational corporations are spawned from laissez-faire policies). Instead, many propose the nationalization of several industries, which would make sure that wealth and power are not concentrated in a few hands, but are in the hands of the people (represented by elected officials in government). I am not going to go into the extreme intricacies of the economic implications of privatization of resources, etc., but will say that privatization and globalization have greatly damaged the economies of Latin America, namely Argentina and Mexico (see NAFTA).

This summation of the leftist ideology may not be 100% correct in all situations, as there are many variations on several issues, and I may have depicted the current definition of liberal as further to the left than is generally accepted. On that note, many leftists are critical of the political situation in America, claiming that the left is now in the center, as the general populace has been conditioned by institutions such as Fox News to consider everything left of Hitler (as one clever person put it) as radical liberalism. I, myself, have observed that, in America, there are two basic types of liberals: those who concern themselves only with liberal policies on the domestic front, and either ignore international affairs or remain patriotic and dedicated to the American way (Al Franken, Bill Clinton, etc.), and then there are those, despite the criticism they face from many fellow liberals (classified under the former definition), who are highly critical of US foreign policy, addressing such issues as Iran-Contra, the Sandinistas, Pinochet, Vietnam, NATO's intervention in Kosovo, our trade embargo on Cuba, etc. (such as Noam Chomsky, William Blum, etc.). Unfortunately, it seems that adolescent rage has run rampant on this particular word, and most definitions are either incoherent jumbles of insults and generalizations or deliberate spewing of misinformation (see the definition that describes the situation in Iraq without addressing our suppression of popular revolts in Iraq, our pre-war sanctions on Iraq that have caused the death of some 5 million children, our support for Saddam during the Iran-Iraq war, and even our post-war sale of biological elements usable in weapons to Saddam's regime).

Read more here:

Urban Dictionary: liberal


High Seas Fleet – Wikipedia, the free encyclopedia

Jun 17 2016

The High Seas Fleet (Hochseeflotte) was the battle fleet of the German Imperial Navy and saw action during the First World War. The formation was created in February 1907, when the Home Fleet (Heimatflotte) was renamed as the High Seas Fleet. Admiral Alfred von Tirpitz was the architect of the fleet; he envisioned a force powerful enough to challenge the Royal Navy’s predominance. Kaiser Wilhelm II, the German Emperor, championed the fleet as the instrument by which he would seize overseas possessions and make Germany a global power. By concentrating a powerful battle fleet in the North Sea while the Royal Navy was required to disperse its forces around the British Empire, Tirpitz believed Germany could achieve a balance of force that could seriously damage British naval hegemony. This was the heart of Tirpitz’s “Risk Theory,” which held that Britain would not challenge Germany if the latter’s fleet posed such a significant threat to its own.

The primary component of the Fleet was its battleships, typically organized in eight-ship squadrons, though it also contained various other formations, including the I Scouting Group. At its creation in 1907, the High Seas Fleet consisted of two squadrons of battleships, and by 1914, a third squadron had been added. The dreadnought revolution in 1906 greatly affected the composition of the fleet; the twenty-four pre-dreadnoughts in the fleet were rendered obsolete and required replacement. Enough dreadnoughts for two full squadrons were completed by the outbreak of war in mid-1914; the eight most modern pre-dreadnoughts were used to constitute a third squadron. Two additional squadrons of older vessels were mobilized at the onset of hostilities, though by the end of the conflict, these formations were disbanded.

The fleet conducted a series of sorties into the North Sea during the war designed to lure out an isolated portion of the numerically superior British Grand Fleet. These operations frequently used the fast battlecruisers of the I Scouting Group to raid the British coast as the bait for the Royal Navy. These operations culminated in the Battle of Jutland, on 31 May – 1 June 1916, where the High Seas Fleet confronted the whole of the Grand Fleet. The battle was inconclusive, but the British won strategically, as it convinced Admiral Reinhard Scheer, the German fleet commander, that even a highly favorable outcome to a fleet action would not secure German victory in the war. Scheer and other leading admirals therefore advised the Kaiser to order a resumption of the unrestricted submarine warfare campaign. The primary responsibility of the High Seas Fleet in 1917 and 1918 was to secure the German naval bases in the North Sea for U-boat operations. Nevertheless, the fleet continued to conduct sorties into the North Sea and detached units for special operations in the Baltic Sea against the Russian Baltic Fleet. Following the German defeat in November 1918, the Allies interned the bulk of the High Seas Fleet in Scapa Flow, where it was ultimately scuttled by its crew in June 1919, days before the belligerents signed the Treaty of Versailles.

In 1898, Admiral Alfred von Tirpitz became the State Secretary for the Imperial Navy Office (Reichsmarineamt, RMA);[1] Tirpitz was an ardent supporter of naval expansion. During a speech in support of the First Naval Law on 6 December 1897, Tirpitz stated that the navy was "a question of survival" for Germany.[2] He also viewed Great Britain, with its powerful Royal Navy, as the primary threat to Germany. In a discussion with the Kaiser during his first month in his post as State Secretary, he stated that "for Germany the most dangerous naval enemy at present is England."[3] Tirpitz theorized that an attacking fleet would require a 33 percent advantage in strength to achieve victory, and so decided that a 2:3 ratio would be required for the German navy. For a final total of 60 German battleships, Britain would be required to build 90 to meet the 2:3 ratio envisioned by Tirpitz.[3]

The Royal Navy had heretofore adhered to the so-called “two-power standard,” first formulated in the Naval Defence Act of 1889, which required a larger fleet than those of the next two largest naval powers combined.[4] The crux of Tirpitz’s “risk theory” was that by building a fleet to the 2:3 ratio, Germany would be strong enough that even in the event of a British naval victory, the Royal Navy would incur damage so serious as to allow the third-ranked naval power to rise to preeminence. Implicit in Tirpitz’s theory was the assumption that the British would adopt an offensive strategy that would allow the Germans to use mines and submarines to even the numerical odds before fighting a decisive battle between Heligoland and the Thames. Tirpitz in fact believed Germany would emerge victorious from a naval struggle with Britain, as he believed Germany to possess superior ships manned by better-trained crews, more effective tactics, and led by more capable officers.[3]

In his first program, Tirpitz envisioned a fleet of nineteen battleships, divided into two eight-ship squadrons, one ship as a flagship, and two in reserve. The squadrons were further divided into four-ship divisions. This would be supported by the eight Siegfried- and Odin-class coastal defense ships, six large and eighteen small cruisers, and twelve divisions of torpedo boats, all assigned to the Home Fleet (Heimatflotte).[5] This fleet was secured by the First Naval Law, which passed in the Reichstag on 28 March 1898.[6] Construction of the fleet was to be completed by 1 April 1904. Rising international tensions, particularly as a result of the outbreak of the Boer War in South Africa and the Boxer Rebellion in China, allowed Tirpitz to push through an expanded fleet plan in 1900. The Second Naval Law was passed on 14 June 1900; it doubled the size of the fleet to 38 battleships and 20 large and 38 small cruisers. Tirpitz planned an even larger fleet. As early as September 1899, he had informed the Kaiser that he sought at least 45 battleships, and potentially might secure a third double-squadron, for a total strength of 48 battleships.[7]

During the initial period of German naval expansion, Britain did not feel particularly threatened.[6] The Lords of the Admiralty felt the implications of the Second Naval Law were not a significantly more dangerous threat than the fleet set by the First Naval Law; they believed it was more important to focus on the practical situation rather than speculation on future programs that might easily be reduced or cut entirely. Segments of the British public, however, quickly seized on the perceived threat posed by the German construction programs.[8] Despite their dismissive reaction, the Admiralty resolved to surpass German battleship construction. Admiral John Fisher, who became the First Sea Lord and head of the Admiralty in 1904, introduced sweeping reforms in large part to counter the growing threat posed by the expanding German fleet. Training programs were modernized, old and obsolete vessels were discarded, and the scattered squadrons of battleships were consolidated into four main fleets, three of which were based in Europe. Britain also made a series of diplomatic arrangements, including an alliance with Japan that allowed a greater concentration of British battleships in the North Sea.[9]

Fisher’s reforms caused serious problems for Tirpitz’s plans; he counted on a dispersal of British naval forces early in a conflict that would allow Germany’s smaller but more concentrated fleet to achieve a local superiority. Tirpitz could also no longer depend on the higher level of training in both the German officer corps and the enlisted ranks, nor the superiority of the more modern and homogenized German squadrons over the heterogeneous British fleet. In 1904, Britain signed the Entente cordiale with France, Britain’s primary naval rival. The destruction of two Russian fleets during the Russo-Japanese War in 1905 further strengthened Britain’s position, as it removed the second of her two traditional naval rivals.[10] These developments allowed Britain to discard the “two power standard” and focus solely on out-building Germany. In October 1906, Admiral Fisher stated “our only probable enemy is Germany. Germany keeps her whole Fleet always concentrated within a few hours of England. We must therefore keep a Fleet twice as powerful concentrated within a few hours of Germany.”[11]

The most damaging blow to Tirpitz's plan came with the launch of HMS Dreadnought in February 1906. The new battleship, armed with a main battery of ten 12-inch (30 cm) guns, was considerably more powerful than any battleship afloat. Ships capable of battle with Dreadnought would need to be significantly larger than the old pre-dreadnoughts, which increased their cost and necessitated expensive dredging of canals and harbors to accommodate them. The German naval budget was already stretched thin; without new funding, Tirpitz would have to abandon his challenge to Britain.[12] As a result, Tirpitz went before the Reichstag in May 1906 with a request for additional funding. The First Amendment to the Second Naval Law was passed on 19 May and appropriated funding for the new battleships, as well as for the dredging required by their increased size.[6]

The Reichstag passed a second amendment to the Naval Law in March 1908 to provide an additional billion marks to cope with the growing cost of the latest battleships. The law also reduced the service life of all battleships from 25 to 20 years, which allowed Tirpitz to push for the replacement of older vessels earlier. A third and final amendment, passed in May 1912, represented a compromise between Tirpitz and moderates in parliament. The amendment authorized three new battleships and two light cruisers. The amendment called for the High Seas Fleet to be equipped with three squadrons of eight battleships each, one squadron of eight battlecruisers, and eighteen light cruisers. Two 8-ship squadrons would be placed in reserve, along with two armored and twelve light cruisers.[13] By the outbreak of war in August 1914, only one eight-ship squadron of dreadnoughts, the I Battle Squadron, had been assembled, with the Nassau- and Helgoland-class battleships. The second squadron of dreadnoughts, the III Battle Squadron, which included four of the Kaiser-class battleships, was only completed when the four König-class battleships entered service in early 1915.[14] As a result, the third squadron, the II Battle Squadron, remained composed of pre-dreadnoughts through 1916.[15]

Before the 1912 naval law was passed, Britain and Germany attempted to reach a compromise with the Haldane Mission, led by the British War Minister Richard Haldane. The arms reduction mission ended in failure, however, and the 1912 law was announced shortly thereafter. The Germans were aware that, as early as 1911, the Royal Navy had abandoned the idea of a decisive battle with the German fleet, in favor of a distant blockade at the entrances to the North Sea, which the British could easily control due to their geographical position. There emerged the distinct possibility that the German fleet would be unable to force a battle on its own terms, which would render it militarily useless. When the war came in 1914, the British did in fact adopt this strategy. Coupled with the restrictive orders of the Kaiser, who preferred to keep the fleet intact to be used as a bargaining chip in the peace settlements, the ability of the High Seas Fleet to affect the military situation was markedly reduced.[16]

The German Navy’s pre-war planning held that the British would be compelled to mount either a direct attack on the German coast to defeat the High Seas Fleet, or to put in place a close blockade. Either course of action would permit the Germans to whittle away at the numerical superiority of the Grand Fleet with submarines and torpedo boats. Once a rough equality of forces could be achieved, the High Seas Fleet would be able to attack and destroy the British fleet.[17] Implicit in Tirpitz’s strategy was the assumption that German vessels were better-designed, had better-trained crews, and would be employed with superior tactics. In addition, Tirpitz assumed that Britain would not be able to concentrate its fleet in the North Sea, owing to the demands of its global empire. At the start of a conflict between the two powers, the Germans would therefore be able to attack the Royal Navy with local superiority.[18]

The British, however, did not accommodate Tirpitz’s projections; from his appointment as the First Sea Lord in 1904, Fisher began a major reorganization of the Royal Navy. He concentrated British battleship strength in home waters, launched the Dreadnought revolution, and introduced rigorous training for the fleet personnel.[19] In 1912, the British concluded a joint defense agreement with France that allowed the British to concentrate in the North Sea while the French defended the Mediterranean.[20] Worse still, the British began developing the strategy of the distant blockade of Germany starting in 1904;[21] this removed the ability of German light craft to reduce Britain’s superiority in numbers and essentially invalidated German naval planning before the start of World War I.[22]

The primary base for the High Seas Fleet in the North Sea was Wilhelmshaven on the western side of the Jade Bight; the port of Cuxhaven, located on the mouth of the Elbe, was also a major base in the North Sea. The island of Heligoland provided a fortified forward position in the German Bight.[23] Kiel was the most important base in the Baltic, which supported the forward bases at Pillau and Danzig.[24] The Kaiser Wilhelm Canal through Schleswig-Holstein connected the Baltic and North Seas and allowed the German Navy to quickly shift naval forces between the two seas.[25] In peacetime, all ships on active duty in the High Seas Fleet were stationed in Wilhelmshaven, Kiel, or Danzig.[26] Germany possessed only one major overseas base, at Kiautschou in China,[27] where the East Asia Squadron was stationed.[28]

Steam ships of the period, which burned coal to fire their boilers, were naturally tied to coaling stations in friendly ports. The German Navy lacked sufficient overseas bases for sustained operations, even for single ships operating as commerce raiders.[29] The Navy experimented with a device to transfer coal from colliers to warships while underway in 1907, though the practice was not put into general use.[30] Nevertheless, German capital ships had a cruising range of at least 4,000 nmi (7,400 km; 4,600 mi),[31] more than enough to operate in the Atlantic Ocean.[Note 1]

In 1897, the year Tirpitz came to his position as State Secretary of the Navy Office, the Imperial Navy consisted of a total of around 26,000 officers, petty officers, and enlisted men of various ranks, branches, and positions. By the outbreak of war in 1914, this had increased significantly to about 80,000 officers, petty officers, and men.[35] Capital ships were typically commanded by a Kapitän zur See (Captain at Sea) or Korvettenkapitän (corvette captain).[26] Each of these ships typically had a total crew in excess of 1,000 officers and men;[31] the light cruisers that screened for the fleet had crew sizes between 300 and 550.[36] The fleet torpedo boats had crews of about 80 to 100 officers and men, though some later classes approached 200.[37]

In early 1907, enough battleships of the Braunschweig and Deutschland classes had been constructed to allow for the creation of a second full squadron.[38] On 16 February 1907,[39] Kaiser Wilhelm renamed the Home Fleet the High Seas Fleet. Admiral Prince Heinrich of Prussia, Wilhelm II's brother, became the first commander of the High Seas Fleet; his flagship was SMS Deutschland.[38] While on a peacetime footing, the Fleet conducted a routine pattern of training exercises, with individual ships, with squadrons, and with the combined fleet, throughout the year. The entire fleet conducted several cruises into the Atlantic Ocean and the Baltic Sea.[40] Prince Heinrich was replaced in late 1909 by Vice Admiral Henning von Holtzendorff, who served until April 1913. Vice Admiral Friedrich von Ingenohl, who would command the High Seas Fleet in the first months of World War I, took command following the departure of Vice Admiral von Holtzendorff.[41] SMS Friedrich der Grosse replaced Deutschland as the fleet flagship on 2 March 1913.[42]

Despite the rising international tensions following the assassination of Archduke Franz Ferdinand on 28 June, the High Seas Fleet began its summer cruise to Norway on 13 July. During the last peacetime cruise of the Imperial Navy, the fleet conducted drills off Skagen before proceeding to the Norwegian fjords on 25 July. The following day the fleet began to steam back to Germany, as a result of Austria-Hungary’s ultimatum to Serbia. On the 27th, the entire fleet assembled off Cape Skudenes before returning to port, where the ships remained at a heightened state of readiness.[42] War between Austria-Hungary and Serbia broke out the following day, and in the span of a week all of the major European powers had joined the conflict.[43]

The High Seas Fleet conducted a number of sweeps and advances into the North Sea. The first occurred on 23 November 1914, though no British forces were encountered. Admiral von Ingenohl, the commander of the High Seas Fleet, adopted a strategy in which the battlecruisers of Rear Admiral Franz von Hipper's I Scouting Group raided British coastal towns to lure out portions of the Grand Fleet where they could be destroyed by the High Seas Fleet.[44] The raid on Scarborough, Hartlepool and Whitby on 15–16 December 1914 was the first such operation.[45] On the evening of 15 December, the German battle fleet of some twelve dreadnoughts and eight pre-dreadnoughts came to within 10 nmi (19 km; 12 mi) of an isolated squadron of six British battleships. However, skirmishes between the rival destroyer screens in the darkness convinced von Ingenohl that he was faced with the entire Grand Fleet. Under orders from the Kaiser to avoid risking the fleet unnecessarily, von Ingenohl broke off the engagement and turned the fleet back toward Germany.[46]

Following the loss of SMS Blücher at the Battle of Dogger Bank in January 1915, the Kaiser removed Admiral von Ingenohl from his post on 2 February. Admiral Hugo von Pohl replaced him as commander of the fleet.[47] Admiral von Pohl conducted a series of fleet advances in 1915; in the first one on 29–30 March, the fleet steamed out to the north of Terschelling and returned without incident. Another followed on 17–18 April, where the fleet covered a mining operation by the II Scouting Group. Three days later, on 21–22 April, the High Seas Fleet advanced towards the Dogger Bank, though again failed to meet any British forces.[48] Another sortie followed on 29–30 May, during which the fleet advanced as far as Schiermonnikoog before being forced to turn back by inclement weather. On 10 August, the fleet steamed to the north of Heligoland to cover the return of the auxiliary cruiser Meteor. A month later, on 11–12 September, the fleet covered another mine-laying operation off the Swarte Bank. The last operation of the year, conducted on 23–24 October, was an advance without result in the direction of Horns Reef.[48]

Vice Admiral Reinhard Scheer became Commander in chief of the High Seas Fleet on 18 January 1916 when Admiral von Pohl became too ill to continue in that post.[49] Scheer favored a much more aggressive policy than that of his predecessor, and advocated greater usage of U-boats and zeppelins in coordinated attacks on the Grand Fleet; Scheer received approval from the Kaiser in February 1916 to carry out his intentions.[50] Scheer ordered the fleet on sweeps of the North Sea on 26 March, 2–3 April, and 21–22 April. The battlecruisers conducted another raid on the English coast on 24–25 April, during which the fleet provided distant support.[51] Scheer planned another raid for mid-May, but the battlecruiser Seydlitz had struck a mine during the previous raid and the repair work forced the operation to be pushed back until the end of the month.[52]

Admiral Scheer’s fleet, composed of 16 dreadnoughts, six pre-dreadnoughts, six light cruisers, and 31 torpedo boats departed the Jade early on the morning of 31 May. The fleet sailed in concert with Hipper’s five battlecruisers and supporting cruisers and torpedo boats.[53] The British navy’s Room 40 had intercepted and decrypted German radio traffic containing plans of the operation. The Admiralty ordered the Grand Fleet, totaling some 28 dreadnoughts and 9 battlecruisers, to sortie the night before in order to cut off and destroy the High Seas Fleet.[54]

At 16:00 UTC, the two battlecruiser forces encountered each other and began a running gun fight south, back towards Scheer’s battle fleet.[55] Upon reaching the High Seas Fleet, Vice Admiral David Beatty’s battlecruisers turned back to the north to lure the Germans towards the rapidly approaching Grand Fleet, under the command of Admiral John Jellicoe.[56] During the run to the north, Scheer’s leading ships engaged the Queen Elizabeth-class battleships of the 5th Battle Squadron.[57] By 18:30, the Grand Fleet had arrived on the scene, and was deployed into a position that would cross Scheer’s “T” from the northeast. To extricate his fleet from this precarious position, Scheer ordered a 16-point turn to the south-west.[58] At 18:55, Scheer decided to conduct another 16-point turn to launch an attack on the British fleet.[59]

This maneuver again put Scheer in a dangerous position; Jellicoe had turned his fleet south and again crossed Scheer’s “T.”[60] A third 16-point turn followed; Hipper’s mauled battlecruisers charged the British line to cover the retreat.[61] Scheer then ordered the fleet to adopt the night cruising formation, which was completed by 23:40.[62] A series of ferocious engagements between Scheer’s battleships and Jellicoe’s destroyer screen ensued, though the Germans managed to punch their way through the destroyers and make for Horns Reef.[63] The High Seas Fleet reached the Jade between 13:00 and 14:45 on 1 June; Scheer ordered the undamaged battleships of the I Battle Squadron to take up defensive positions in the Jade roadstead while the Kaiser-class battleships were to maintain a state of readiness just outside Wilhelmshaven.[64] The High Seas Fleet had sunk more British vessels than the Grand Fleet had sunk German, though Scheer’s leading battleships had taken a terrible hammering. Several capital ships, including SMSKnig, which had been the first vessel in the line, and most of the battlecruisers, were in drydock for extensive repairs for at least two months. On 1 June, the British had twenty-four capital ships in fighting condition, compared to only ten German warships.[65]

By August, enough warships had been repaired to allow Scheer to undertake another fleet operation on 18–19 August. Due to the serious damage incurred by Seydlitz and SMS Derfflinger and the loss of SMS Lützow at Jutland, the only battlecruisers available for the operation were SMS Von der Tann and SMS Moltke, which were joined by SMS Markgraf, SMS Grosser Kurfürst, and the new battleship SMS Bayern.[66] Scheer turned north after receiving a false report from a zeppelin about a British unit in the area.[48] As a result, the bombardment was not carried out, and by 14:35, Scheer had been warned of the Grand Fleet's approach and so turned his forces around and retreated to German ports.[67] Another fleet sortie took place on 18–19 October 1916 to attack enemy shipping east of Dogger Bank. Despite being forewarned by signal intelligence, the Grand Fleet did not attempt to intercept. The operation was however cancelled due to poor weather after the cruiser München was torpedoed by the British submarine HMS E38.[68] The fleet was reorganized on 1 December;[48] the four König-class battleships remained in the III Squadron, along with the newly commissioned Bayern, while the five Kaiser-class ships were transferred to the IV Squadron.[69] In March 1917 the new battleship Baden, built to serve as fleet flagship, entered service;[70] on the 17th, Scheer hauled down his flag from Friedrich der Grosse and transferred it to Baden.[48]

The war, now in its fourth year, was by 1917 taking its toll on the crews of the ships of the High Seas Fleet. Acts of passive resistance, such as the posting of anti-war slogans in the battleships SMS Oldenburg and SMS Posen in January 1917, began to appear.[71] In June and July, the crews began to conduct more active forms of resistance. These activities included work refusals, hunger strikes, and taking unauthorized leave from their ships.[72] The disruptions came to a head in August, when a series of protests, anti-war speeches, and demonstrations resulted in the arrest of dozens of sailors.[73] Scheer ordered the arrest of over 200 men from the battleship Prinzregent Luitpold, the center of the anti-war activities. A series of courts-martial followed, which resulted in 77 guilty verdicts; nine men were sentenced to death for their roles, though only two men, Albin Köbis and Max Reichpietsch, were executed.[74]

In early September 1917, following the German conquest of the Russian port of Riga, the German navy decided to eliminate the Russian naval forces that still held the Gulf of Riga. The Navy High Command (Admiralstab) planned an operation, codenamed Operation Albion, to seize the Baltic island of Ösel, and specifically the Russian gun batteries on the Sworbe Peninsula.[75] On 18 September, the order was issued for a joint operation with the army to capture Ösel and Moon Islands; the primary naval component was to comprise its flagship, Moltke, and the III and IV Battle Squadrons of the High Seas Fleet.[76] The operation began on the morning of 12 October, when Moltke and the III Squadron ships engaged Russian positions in Tagga Bay while the IV Squadron shelled Russian gun batteries on the Sworbe Peninsula on Ösel.[77] By 20 October, the fighting on the islands was winding down; Moon, Ösel, and Dagö were in German possession. The previous day, the Admiralstab had ordered the cessation of naval actions and the return of the dreadnoughts to the High Seas Fleet as soon as possible.[78]

Admiral Scheer had used light surface forces to attack British convoys to Norway beginning in late 1917. As a result, the Royal Navy attached a squadron of battleships to protect the convoys, which presented Scheer with the possibility of destroying a detached squadron of the Grand Fleet. The operation called for Hipper's battlecruisers to attack the convoy and its escorts on 23 April while the battleships of the High Seas Fleet stood by in support. On 22 April, the German fleet assembled in the Schillig Roads outside Wilhelmshaven and departed the following morning.[79] Despite the success in reaching the convoy route undetected, the operation failed due to faulty intelligence. Reports from U-boats indicated to Scheer that the convoys sailed at the start and middle of each week, but a west-bound convoy had left Bergen on Tuesday the 22nd and an east-bound group left Methil, Scotland, on the 24th, a Thursday. As a result, there was no convoy for Hipper to attack.[80] Beatty sortied with a force of 31 battleships and four battlecruisers, but was too late to intercept the retreating Germans. The Germans reached their defensive minefields early on 25 April, though approximately 40 nmi (74 km; 46 mi) off Heligoland, Moltke was torpedoed by the submarine E42; she successfully returned to port.[81]

A final fleet action was planned for the end of October 1918, days before the Armistice was to take effect. The bulk of the High Seas Fleet was to have sortied from their base in Wilhelmshaven to engage the British Grand Fleet; Scheer, by now the Grand Admiral (Grossadmiral) of the fleet, intended to inflict as much damage as possible on the British navy, in order to retain a better bargaining position for Germany, despite the expected casualties. However, many of the war-weary sailors felt the operation would disrupt the peace process and prolong the war.[82] On the morning of 29 October 1918, the order was given to sail from Wilhelmshaven the following day. Starting on the night of 29 October, sailors on Thüringen and then on several other battleships mutinied.[83] The unrest ultimately forced Hipper and Scheer to cancel the operation.[84] When informed of the situation, the Kaiser stated "I no longer have a navy."[85]

Following the capitulation of Germany in November 1918, most of the High Seas Fleet, under the command of Rear Admiral Ludwig von Reuter, was interned in the British naval base of Scapa Flow.[84] Prior to the departure of the German fleet, Admiral Adolf von Trotha made clear to von Reuter that he could not allow the Allies to seize the ships, under any conditions.[86] The fleet rendezvoused with the British light cruiser Cardiff, which led the ships to the Allied fleet that was to escort the Germans to Scapa Flow. The massive flotilla consisted of some 370 British, American, and French warships.[87] Once the ships were interned, their guns were disabled through the removal of their breech blocks, and their crews were reduced to 200 officers and enlisted men on each of the capital ships.[88]

The fleet remained in captivity during the negotiations that ultimately produced the Treaty of Versailles. Von Reuter believed that the British intended to seize the German ships on 21 June 1919, which was the deadline for Germany to have signed the peace treaty. Unaware that the deadline had been extended to the 23rd, Reuter ordered the ships to be sunk at the next opportunity. On the morning of 21 June, the British fleet left Scapa Flow to conduct training maneuvers, and at 11:20 Reuter transmitted the order to his ships.[86] Out of the interned fleet, only one battleship, Baden, three light cruisers, and eighteen destroyers were saved from sinking by the British harbor personnel. The Royal Navy, initially opposed to salvage operations, decided to allow private firms to attempt to raise the vessels for scrapping.[89] Cox and Danks, a company founded by Ernest Cox, handled most of the salvage operations, including those of the heaviest vessels raised.[90] After Cox's withdrawal due to financial losses in the early 1930s, Metal Industries Group, Inc. took over the salvage operation for the remaining ships. Five more capital ships were raised, though three (SMS König, SMS Kronprinz, and SMS Markgraf) were too deep to permit raising. They remain on the bottom of Scapa Flow, along with four light cruisers.[91]

The High Seas Fleet, particularly its wartime impotence and ultimate fate, strongly influenced the later German navies, the Reichsmarine and Kriegsmarine. Former Imperial Navy officers continued to serve in the subsequent institutions, including Admiral Erich Raeder, Hipper's former chief of staff, who became the commander in chief of the Reichsmarine. Raeder advocated long-range commerce raiding by surface ships, rather than constructing a large surface fleet to challenge the Royal Navy, which he viewed as a futile endeavor. His initial version of Plan Z, the construction program for the Kriegsmarine in the late 1930s, called for a large number of P-class cruisers, long-range light cruisers, and reconnaissance forces for attacking enemy shipping, though he was overruled by Adolf Hitler, who advocated a large fleet of battleships.[92]

See the article here:

High Seas Fleet – Wikipedia, the free encyclopedia

 Posted by at 5:00 am  Tagged with:

nootropics / smart-drugs

 Nootropics  Comments Off on nootropics / smart-drugs
Jun 172016
 

Sceptics about nootropics (“smart drugs”) are unwitting victims of the so-called Panglossian paradigm of evolution. They believe that our cognitive architecture has been so fine-honed by natural selection that any tinkering with such a wonderfully all-adaptive suite of mechanisms is bound to do more harm than good. Certainly the notion that merely popping a pill could make you brighter sounds implausible. It sounds like the sort of journalistic excess that sits more comfortably in the pages of Fortean Times than any scholarly journal of repute.

Yet as Dean, Morgenthaler and Fowkes’ (hereafter “DMF”) book attests, the debunkers are wrong. On the one hand, numerous agents with anticholinergic properties are essentially dumb drugs. They impair memory, alertness, verbal facility and creative thought. Conversely, a variety of cholinergic drugs and nutrients, which form a large part of the smart-chemist’s arsenal, can subtly but significantly enhance cognitive performance on a whole range of tests. This holds true for victims of Alzheimer’s Disease, who suffer in particular from a progressive and disproportionate loss of cholinergic neurons. Yet, potentially at least, cognitive enhancers can aid non-demented people too. Members of the “normally” ageing population can benefit from an increased availability of acetylcholine, improved blood-flow to the brain, increased ATP production and enhanced oxygen and glucose uptake. Most recently, research with ampakines, modulators of neurotrophin-regulating AMPA-type glutamate receptors, suggests that designer nootropics will soon deliver sharper intellectual performance even to healthy young adults.

DMF provide updates from Smart Drugs (1) on piracetam, acetyl-l-carnitine, vasopressin, and several vitamin therapies. Smart Drugs II offers profiles of agents such as selegiline (l-deprenyl), melatonin, pregnenolone, DHEA and ondansetron (Zofran). There is also a provocative question-and-answer section; a discussion of product sources; and a guide to further reading.

So what’s the catch? One problem, to which not all authorities on nootropics give enough emphasis, is the complex interplay between cognition and mood. Thus great care should be taken before tampering with the noradrenaline/acetylcholine axis. Thought-frenzied hypercholinergic states, for instance, are characteristic of one “noradrenergic” sub-type of depression. A predominance of forebrain cholinergic activity, frequently triggered by chronic uncontrolled stress, can lead to a reduced sensitivity to reward, an inability to sustain effort, and behavioural suppression.

This mood-modulating effect does make some sort of cruel genetic sense. Extreme intensity of reflective thought may function as an evolutionarily adaptive response when things go wrong. When they’re going right, as in optimal states of “flow experience”, we don’t need to bother. Hence boosting cholinergic function, alone and in the absence of further pharmacologic intervention, can subdue mood. It can even induce depression in susceptible subjects. Likewise, beta-adrenergic antagonists (e.g. propranolol (Inderal)) can induce depression and fatigue. Conversely, “dumb-drug” anticholinergics may sometimes have mood-brightening – progressing to deliriant – effects. Indeed antimuscarinic agents acting in the nucleus accumbens may even induce a “mindless” euphoria.

Now it might seem axiomatic that helping everyone think more deeply is just what the doctor ordered. Yet our education system is already pervaded by an intellectual snobbery that exalts academic excellence over emotional well-being. In the modern era, examination rituals bordering on institutionalised child-abuse take a heavy toll on young lives. Depression and anxiety-disorders among young teens are endemic – and still rising. It’s worth recalling that research laboratories routinely subject non-human animals to a regimen of “chronic mild uncontrolled stress” to induce depression in their captive animal population; investigators then test putative new antidepressants on the depressed animals to see if their despair can be experimentally reversed by patentable drugs. The “chronic mild stressors” that we standardly inflict on adolescent humans can have no less harmful effects on the mental health of captive school-students; but in this case, no organised effort is made to reverse it. Instead its victims often go on to self-medicate with ethyl alcohol, tobacco and street drugs. So arguably at least, the deformed and emotionally pre-literate minds churned out by our schools stand in need of safe, high-octane mood-brighteners more urgently than cognitive-tweakers. Memory-enhancers might be more worthwhile if we had more experiences worth remembering.

One possible solution to this dilemma involves taking a cholinergic agent such as piracetam (Nootropil) or aniracetam (Draganon, Ampamet) that also enhances dopamine function. Some researchers tentatively believe that the mesolimbic dopamine system acts as the final common pathway for pleasure in the brain. This hypothesis may well prove simplistic. There are certainly complications: it is not the neurotransmitter dopamine itself, but the post-synaptic metabolic cascades it triggers, that underlies motivated bliss. Other research suggests that it is the endogenous opioid system, and in particular activation of the mu opioid receptors, that mediates pure pleasure. Mesolimbic dopamine amplifies “incentive-motivation”: “wanting” and “liking” may have different substrates, albeit intimately linked. Moreover there are mood-elevating memory-enhancers such as phosphodiesterase inhibitors (e.g. the selective PDE4 inhibitor rolipram) that act on different neural pathways – speeding and strengthening memory-formation by prolonging the availability of CREB. In any event, several of the most popular smart drugs discussed by DMF do indeed act on both the cholinergic and dopaminergic systems. In addition, agents like aniracetam and its analogs increase hippocampal glutaminergic activity. Hippocampal function is critical to memory – and mood. Thus newly developed ampakines, agents promoting long-term potentiation of AMPA-type glutamate receptors, are powerful memory-enhancers and future nootropics.

Another approach to enhancing mood and intellect alike involves swapping or combining a choline agonist with a different, primarily dopaminergic drug. Here admittedly there are methodological problems. The improved test score performances reported on so-called smart dopaminergics may have other explanations. Not all studies adequately exclude the confounding variables of increased alertness, sharper sensory acuity, greater motor activity or improved motivation – as distinct from any “pure” nootropic action. Yet the selective dopamine reuptake blocker amineptine (Survector) is both a mood-brightener and a possible smart-drug. Likewise selegiline, popularly known as l-deprenyl, has potentially life-enhancing properties. Selegiline is a selective, irreversible MAO-b inhibitor with antioxidant, immune-system-boosting and anti-neurodegenerative effects. It retards the metabolism not just of dopamine but also of phenylethylamine, a trace amine also found in chocolate and released when we’re in love. Selegiline also stimulates the release of superoxide dismutase (SOD); SOD is a key enzyme which helps to quench damaging free-radicals. Taken consistently in low doses, selegiline extends the life-expectancy of rats by some 20%; enhances drive, libido and endurance; and independently improves cognitive performance in Alzheimer’s patients and in some healthy normals. It is used successfully to treat canine cognitive dysfunction syndrome (CDS) in dogs. In 2006, higher dose (i.e. less MAO-b selective) selegiline was licensed as the antidepressant EMSAM, a transdermal patch. Selegiline also protects the brain’s dopamine cells from oxidative stress. The brain has only about 30-40 thousand dopaminergic neurons in all. It tends to lose perhaps 13% a decade in adult life. An eventual 70%-80% loss leads to the dopamine-deficiency disorder Parkinson’s disease and frequently depression. Clearly anything that spares so precious a resource might prove a valuable tool for life-enrichment.

In 2005, a second selective MAO-b inhibitor, rasagiline (Azilect), gained an EC product license; its introduction in the USA followed a year later. Unlike selegiline, rasagiline doesn't have amphetamine trace metabolites – a distinct if modest therapeutic advantage.

Looking further ahead, the bifunctional cholinesterase inhibitor and MAO-b inhibitor ladostigil acts both as a cognitive enhancer and a mood brightener. Ladostigil has neuroprotective and potential antiaging properties too. Its product-license is several years away at best.

Consider, for instance, the plight of genetically engineered “smart mice” endowed with an extra copy of the NR2B subtype of NMDA receptor. It is now known that such brainy “Doogie” mice suffer from a chronically increased sensitivity to pain. Memory-enhancing drugs and potential gene-therapies targeting the same receptor subtype might cause equally disturbing side-effects in humans. Conversely, NMDA antagonists like the dissociative anaesthetic drug ketamine exert amnestic, antidepressant and analgesic effects in humans and non-humans alike.

Amplified memory can itself be a mixed blessing. Even among the drug-naïve and chronically forgetful, all kinds of embarrassing, intrusive and traumatic memories may haunt our lives. Such memories sometimes persist for months, years or even decades afterwards. Unpleasant memories can sour the well-being even of people who don't suffer from clinical PTSD. Using all-round memory enhancers might do something worse than merely fill our heads with clutter. Such agents could etch traumatic experiences more indelibly into our memories. Or worse, such all-round enhancers might promote the involuntary recall of our nastiest memories with truly nightmarish intensity.

By contrast, the design of chemical tools that empower us selectively to forget unpleasant memories may prove to be at least as life-enriching as agents that help us remember more effectively. Unlike the software of digital computers, human memories can't be specifically deleted to order. But this design-limitation may soon be overcome. The synthesis of enhanced versions of protein synthesis inhibitors such as anisomycin may enable us selectively to erase horrible memories. If such agents can be refined for our personal medicine cabinets, then we'll potentially be able to rid ourselves of nasty or unwanted memories at will – as distinct from drowning our sorrows with alcohol or indiscriminately dulling our wits with tranquillisers. In future, the twin availability of 1] technologies to amplify desirable memories, and 2] selective amnestics to extinguish undesirable memories, promises to improve our quality of life far more dramatically than use of today's lame smart drugs.

Such a utopian pharmaceutical toolkit is still some way off. Given our current primitive state of knowledge, it’s hard to boost the function of one neurotransmitter signalling system or receptor sub-type without eliciting compensatory and often unwanted responses from others. Life’s successful, dopamine-driven go-getters, for instance, whether naturally propelled or otherwise, may be highly productive individuals. Yet they are rarely warm, relaxed and socially empathetic. This is because, crudely, dopamine overdrive tends to impair “civilising serotonin” function. Unfortunately, tests of putative smart drugs typically reflect an impoverished and culture-bound conception of intelligence. Indeed today’s “high IQ” alpha males may strike posterity as more akin to idiot savants than imposing intellectual giants. IQ tests, and all conventional scholastic examinations, neglect creative and practical intelligence. They simply ignore social cognition. Social intelligence, and its cognate notion of “emotional IQ”, isn’t some second-rate substitute for people who can’t do IQ tests. On the contrary, according to the Machiavellian ape hypothesis, the evolution of human intelligence has been driven by our superior “mind-reading” skills. Higher-order intentionality [e.g. “you believe that I hope that she thinks that I want…”, etc] is central to the lives of advanced social beings. The unique development of human mind is an adaptation to social problem-solving and the selective advantages it brings. Yet pharmaceuticals that enhance our capacity for empathy, enrich our social skills, expand our “state-space” of experience, or deepen our introspective self-knowledge are not conventional candidates for smart drugs. For such faculties don’t reflect our traditional [male] scientific value-judgements on what qualifies as “intelligence”. Thus in academia, for instance, competitive dominance behaviour among “alpha” male human primates often masquerades as the pursuit of scholarship. Emotional literacy is certainly harder to quantify scientifically than mathematical puzzle-solving ability or performance in verbal memory-tests. But to misquote Robert McNamara, we need to stop making what is measurable important, and find ways to make the important measurable. By some criteria, contemporary IQ tests are better measures of high-grade autism than mature intelligence. So before chemically manipulating one’s mind, it’s worth critically examining which capacities one wants to enhance; and to what end?

In practice, the first and most boring advice is often the most important. Many potential users of smart pills would be better and more simply advised to stop taking tranquillisers, sleeping tablets or toxic recreational drugs; eat omega-3 rich foods, more vegetables and generally improve their diet; and try more mentally challenging tasks. One of the easiest ways of improving memory, for instance, is to increase the flow of oxygenated blood to the brain. This can be achieved by running, swimming, dancing, brisk walking, and more sex. Regular vigorous exercise also promotes nerve cell growth in the hippocampus. Hippocampal brain cell growth potentially enhances mood, memory and cognitive vitality alike. Intellectuals are prone to echo J.S. Mill: “Better to be an unhappy Socrates than a happy pig”. But happiness is typically good for the hippocampus; by contrast, the reduced hippocampal volume anatomically characteristic of depressives correlates with the length of their depression.

In our current state of ignorance, homely remedies are still sometimes best. Thus moderate consumption of adenosine-inhibiting, common-or-garden caffeine improves concentration, mood and alertness; enhances acetylcholine release in the hippocampus; and statistically reduces the risk of suicide. Regular coffee drinking induces competitive and reversible inhibition of MAO enzymes type A and B owing to coffee’s neuroactive beta-carbolines. Coffee is also rich in antioxidants. Non-coffee drinkers are around three times more likely to contract Parkinson’s disease. A Michigan study found caffeine use was correlated with enhanced male virility in later life.

Before resorting to pills, aspiring intellectual heavyweights might do well to start the day with a low-fat/high carbohydrate breakfast: muesli rather than tasty well-buttered croissants. This will enhance memory, energy and blood glucose levels. An omega-3 rich diet will enhance all-round emotional and intellectual health too. A large greasy fry-up, on the other hand, can easily leave one feeling muddle-headed, drowsy and lethargic. If one wants to stay sharp, and to blunt the normal mid-afternoon dip, then eating big fatty lunches isn’t a good idea either. Fat releases cholecystokinin (CCK) from the duodenum. Modest intravenous infusions of CCK make one demonstrably dopey and subdued.

To urge such caveats is not to throw up one’s hands in defeatist resignation. Creative psychopharmacology can often in principle circumvent such problems, even today. Complementary and sometimes effective combinations such as sustained-release methylphenidate (Ritalin) and SSRIs such as fluoxetine (Prozac), for instance, are arguably still under-used. They could be more widely applied both in clinical psychiatry and, at least in the context of a general harm-reduction strategy, on the street. There may indeed be no safe drugs but just safe dosages. Yet some smart drugs, such as piracetam, really do seem to be at worst pretty innocuous. Agents such as the alpha-1 adrenergic agonist adrafinil (Olmifron) typically do have both mood-brightening and intellectually invigorating effects. Adrafinil, like its chemical cousin modafinil (Provigil), promotes alertness, vigilance and mental focus; and its more-or-less pure CNS action ensures it doesn’t cause unwanted peripheral sympathetic stimulation.

Unfortunately the lay public is currently ill-served, a few shining exceptions aside, by the professionals. A condition of ignorance and dependence is actively fostered where it isn’t just connived at in the wider population. So there’s often relatively little point in advising anyone contemplating acting on DMF’s book to consult their physician first. For it’s likely their physician won’t want to know, or want them to know, in the first instance.

As traditional forms of censorship, news-management and governmental information-control break down, however, and the Net insinuates itself into ever more areas of daily life, more and more people are stumbling upon – initially – and then exploring, the variety of drugs and combination therapies which leading-edge pharmaceutical research puts on offer. They are increasingly doing so as customers, and not as patronisingly labelled role-bound “patients”. Those outside the charmed circle have previously been cast in the obligatory role of humble supplicants. The more jaundiced or libertarian among the excluded may have felt themselves at the mercy of prescription-wielding, or -withholding, agents of one arm of the licensed drug cartels. So when the control of the cartels and their agents falters, there is an especially urgent need for incisive and high-quality information to be made readily accessible. Do DMF fulfil it?

Smart Drugs 2 lays itself wide open to criticism; but then it takes on an impossible task. In the perennial trade-off between accessibility and scholarly rigour, compromises are made on both sides. Ritual disclaimers aside, DMF's tone can at times seem too uncritically gung-ho. Their drug-profiles and cited studies don't always give due weight to the variations in sample size and the quality of controls. Nor do they highlight the uncertain calibre of the scholarly journals in which some of the most interesting results are published. DMF's inclusion of anecdote-studded personal testimonials is almost calculated to inflame medical orthodoxy. Moreover it should be stressed that large, placebo-controlled, double-blind, cross-over prospective trials – the scientific gold standard – are still quite rare in this field as a whole.

Looking ahead, this century's mood-boosting, intellect-sharpening, empathy-enhancing and personality-enriching drugs are themselves likely to prove only stopgaps. This is because invincible, life-long happiness and supergenius intellect may one day be genetically pre-programmed and possibly ubiquitous in our transhuman successors. Taking drugs to repair Nature's deficiencies may eventually become redundant. Memory- and intelligence-boosting gene therapies are already imminent. But in repairing the deficiencies of an educational system geared to producing dysthymic pharmacological illiterates, Smart Drugs 1 and 2 offer a warmly welcomed start.


Link:

nootropics / smart-drugs

 Posted by at 4:56 am  Tagged with:

Classic Maya collapse – Wikipedia, the free encyclopedia

 Socio-economic Collapse  Comments Off on Classic Maya collapse – Wikipedia, the free encyclopedia
Jun 152016
 

In archaeology, the classic Maya collapse refers to the decline of Maya civilization and abandonment of Maya cities in the southern Maya lowlands of Mesoamerica between the 8th and 9th centuries, at the end of the Classic Mayan Period. Preclassic Maya experienced a similar collapse in the 2nd century.

The Classic Period of Mesoamerican chronology is generally defined as the period from 250 to 900, the last century of which is referred to as the Terminal Classic.[1] The classic Maya collapse is one of the greatest unsolved mysteries in archaeology. Urban centers of the southern lowlands, among them Palenque, Copán, Tikal, and Calakmul, went into decline during the 8th and 9th centuries and were abandoned shortly thereafter. Archaeologically, this decline is indicated by the cessation of monumental inscriptions and the reduction of large-scale architectural construction at the primary urban centers of the classic period.

Although termed a 'collapse', it did not mark the end of the Maya civilization; Northern Yucatán in particular prospered afterwards, although with very different artistic and architectural styles, and with much less use of monumental hieroglyphic writing. In the post-classic period following the collapse, the state of Chichén Itzá built an empire that briefly united much of the Maya region, and centers such as Mayapán and Uxmal flourished, as did the Highland states of the K'iche' and Kaqchikel Maya. Independent Maya civilization continued until 1697 when the Spanish conquered Nojpetén, the last independent city-state. Millions of Maya people still inhabit the Yucatán peninsula today.

Because parts of Maya civilization unambiguously continued, a number of scholars strongly dislike the term “collapse.”[2] Regarding the proposed collapse, E. W. Andrews IV went as far as to say, “in my belief no such thing happened.”[3]

The Maya often recorded dates on monuments they built. Few dated monuments were being built circa 500 – around ten per year in 514, for example. The number rose steadily, reaching twenty per year by 672 and forty by around 750. After this, the number of dated monuments begins to falter relatively quickly, collapsing back to ten by 800 and to zero by 900. Likewise, recorded lists of kings complement this analysis. Altar Q shows a reign of kings from 426 to 763. One last king not recorded on Altar Q was Ukit Took, "Patron of Flint", who was probably a usurper. The dynasty is believed to have collapsed entirely shortly thereafter. In Quiriguá, twenty miles north of Copán, the last king Jade Sky began his rule between 895 and 900, and throughout the Maya area all kingdoms similarly fell around that time.[4]

A third piece of evidence of the progression of Maya decline, gathered by Ann Corinne Freter, Nancy Gonlin, and David Webster, uses a technique called obsidian hydration. The technique allowed them to map the spread and growth of settlements in the Copán Valley and estimate their populations. Settlement growth began between 400 and 450, and the population was estimated to have peaked at about twenty-eight thousand between 750 and 800 – larger than London at the time. Population then began to steadily decline. By 900 the population had fallen to fifteen thousand, and by 1200 the population was again less than 1,000.

Some 88 different theories or variations of theories attempting to explain the Classic Maya Collapse have been identified.[5] They range from climate change to deforestation to a lack of action by Mayan kings. There is no universally accepted collapse theory, although drought is gaining momentum as the leading explanation.[6]

The archaeological evidence of the Toltec intrusion into Seibal, Petén, suggests to some the theory of foreign invasion. The latest hypothesis states that the southern lowlands were invaded by a non-Maya group whose homelands were probably in the gulf coast lowlands. This invasion began in the 9th century and set off, within 100 years, a group of events that destroyed the Classic Maya. It is believed that this invasion was somehow influenced by the Toltec people of central Mexico. However, most Mayanists do not believe that foreign invasion was the main cause of the Classic Maya Collapse; they postulate that no military defeat can explain or be the cause of the protracted and complex Classic Collapse process. Teotihuacan influence across the Maya region may have involved some form of military invasion; however, it is generally noted that significant Teotihuacan-Maya interactions date from at least the Early Classic period, well before the episodes of Late Classic collapse.[7]

The foreign invasion theory does not answer the question of where the inhabitants went. David Webster believed that the population should have increased because of the lack of elite power. Further, it is not understood why the governmental institutions were not remade following the revolts, which actually happened under similar circumstances in places like China. A study by anthropologist Elliot M. Abrams came to the conclusion that buildings, specifically in Copan, did not actually require an extensive amount of time and workers to construct.[8] However, this theory was developed during a time period when the archaeological evidence showed that there were fewer Maya people than there are now known to have been.[9] Revolutions, peasant revolts, and social turmoil change circumstances, and are often followed by foreign wars, but they run their course. There are no documented revolutions that caused wholesale abandonment of entire regions.

It has been hypothesized that the decline of the Maya is related to the collapse of their intricate trade systems, especially those connected to the central Mexican city of Teotihuacán. Preceding improved knowledge of the chronology of Mesoamerica, Teotihuacan was believed to have fallen during 700–750, forcing the "restructuring of economic relations throughout highland Mesoamerica and the Gulf Coast".[10] This remaking of relationships between civilizations would have then given the collapse of the Classic Maya a slightly later date. However, after knowing more about the events and the time periods in which they occurred, it is now believed that the strongest Teotihuacan influence was during the 4th and 5th centuries. In addition, the civilization of Teotihuacan started to lose its power, and may even have abandoned the city, during 600–650. This differs greatly from the previous belief that Teotihuacano power decreased during 700–750.[11] But since the new decline date of 600–650 has been accepted, the Maya civilizations are now thought to have lived on and prospered for another century or more[12] than was previously believed. Rather than the decline of Teotihuacan directly preceding the collapse of the Maya, their decline is now seen as contributing to the 6th-century hiatus.[12]

The disease theory is also a contender as a factor in the Classic Maya Collapse. Widespread disease could explain some rapid depopulation, both directly through the spread of infection itself and indirectly as an inhibition to recovery over the long run. According to Dunn (1968) and Shimkin (1973), infectious diseases spread by parasites are common in tropical rainforest regions, such as the Maya lowlands. Shimkin specifically suggests that the Maya may have encountered endemic infections related to American trypanosomiasis, Ascaris, and some enteropathogens that cause acute diarrheal illness. Furthermore, some experts believe that, through development of their civilization (that is, development of agriculture and settlements), the Maya could have created a “disturbed environment,” in which parasitic and pathogen-carrying insects often thrive.[13] Among the pathogens listed above, it is thought that those that cause the acute diarrheal illnesses would have been the most devastating to the Maya population. This is because such illness would have struck a victim at an early age, thereby hampering nutritional health and the natural growth and development of a child. This would have made them more susceptible to other diseases later in life. Such ideas as this could explain the role of disease as at least a possible partial reason for the Classic Maya Collapse.[14]

Mega-droughts hit the Yucatán Peninsula and Petén Basin areas with particular ferocity, as thin tropical soils decline in fertility and become unworkable when deprived of forest cover,[15] and due to regular seasonal drought drying up surface water.[16] Colonial Spanish officials accurately documented cycles of drought, famine, disease, and war, providing a reliable historical record of the basic drought pattern in the Maya region.[17]

Climatic factors were first implicated in the Collapse as early as 1931 by Mayanists Thomas Gann and J.E.S. Thompson.[18] In The Great Maya Droughts, Richardson Gill gathers and analyzes an array of climatic, historical, hydrologic, tree ring, volcanic, geologic, lake bed, and archeological research, and demonstrates that a prolonged series of droughts probably caused the Classic Maya Collapse.[19] The drought theory provides a comprehensive explanation, because non-environmental and cultural factors (excessive warfare, foreign invasion, peasant revolt, less trade, etc.) can all be explained by the effects of prolonged drought on Classic Maya civilization.[20]

Climatic changes are, with increasing frequency, found to be major drivers in the rise and fall of civilizations all over the world.[21] Professors Harvey Weiss of Yale University and Raymond S. Bradley of the University of Massachusetts have written, “Many lines of evidence now point to climate forcing as the primary agent in repeated social collapse.”[22] In a separate publication, Weiss illustrates an emerging understanding of scientists:

Within the past five years new tools and new data for archaeologists, climatologists, and historians have brought us to the edge of a new era in the study of global and hemispheric climate change and its cultural impacts. The climate of the Holocene, previously assumed static, now displays a surprising dynamism, which has affected the agricultural bases of pre-industrial societies. The list of Holocene climate alterations and their socio-economic effects has rapidly become too complex for brief summary.[23]

The drought theory holds that rapid climate change in the form of severe drought brought about the Classic Maya collapse. According to the particular version put forward by Gill in The Great Maya Droughts,

[Studies of] Yucatecan lake sediment cores … provide unambiguous evidence for a severe 200-year drought from AD 800 to 1000 … the most severe in the last 7,000 years … precisely at the time of the Maya Collapse.[24]

Climatic modeling, tree ring data, and historical climate data show that cold weather in the Northern Hemisphere is associated with drought in Mesoamerica.[25] Northern Europe suffered extremely low temperatures around the same time as the Maya droughts. The same connection between drought in the Maya areas and extreme cold in northern Europe was found again at the beginning of the 20th century. Volcanic activity, within and outside Mesoamerica, is also correlated with colder weather and resulting drought, as the effects of the Tambora volcano eruption in 1815 indicate.[26]

Mesoamerican civilization provides a remarkable exception: civilization prospering in the tropical swampland. The Maya are often perceived as having lived in a rainforest, but technically, they lived in a seasonal desert without access to stable sources of drinking water.[27] The exceptional accomplishments of the Maya are even more remarkable because of their engineered response to the fundamental environmental difficulty of relying upon rainwater rather than permanent sources of water. The Maya succeeded in creating a civilization in a seasonal desert by creating a system of water storage and management which was totally dependent on consistent rainfall.[28] The constant need for water kept the Maya on the edge of survival. Given this precarious balance of wet and dry conditions, even a slight shift in the distribution of annual precipitation can have serious consequences.[16] Water and civilization were vitally connected in ancient Mesoamerica. Archaeologist and specialist in pre-industrial land and water usage practices, Vernon Scarborough, believes water management and access were critical to the development of Maya civilization.[29]

Critics of the drought theory wonder why the southern and central lowland cities were abandoned and the northern cities like Chichen Itza, Uxmal, and Coba continued to thrive.[30] One critic argued that Chichen Itza revamped its political, military, religious, and economic institutions away from powerful lords or kings.[31] Inhabitants of the northern Yucatán also had access to seafood, which might have explained the survival of Chichen Itza and Mayapan, cities away from the coast but within reach of coastal food supplies.[32] Critics of the drought theory also point to current weather patterns: much heavier rainfall in the southern lowlands compared to the lighter amount of rain in the northern Yucatán. Drought theory supporters state that the entire regional climate changed, including the amount of rainfall, so that modern rainfall patterns are not indicative of rainfall from 800 to 900. LSU archaeologist Heather McKillop found a significant rise in sea level along the coast nearest the southern Maya lowlands, coinciding with the end of the Classic period, and indicating climate change.[33]

David Webster, a critic of the megadrought theory, says that much of the evidence provided by Gill comes from the northern Yucatán and not the southern part of the peninsula, where Classic Maya civilization flourished. He also states that if water sources were to have dried up, then several city-states would have moved to other water sources. Gill's suggestion that all water in the region would have dried up and destroyed Maya civilization is a stretch, according to Webster.[34]

A study published in Science in 2012 found that modest rainfall reductions, amounting to only 25 to 40 percent of annual rainfall, may have been the tipping point for the Mayan collapse. Based on samples of lake and cave sediments in the areas surrounding major Mayan cities, the researchers were able to determine the amount of annual rainfall in the region. The mild droughts that took place between 800 and 950 would therefore be enough to rapidly deplete seasonal water supplies in the Yucatán lowlands, where there are no rivers.[35][36][37]

Some ecological theories of Maya decline focus on the worsening agricultural and resource conditions in the late Classic period. It was originally thought that the majority of Maya agriculture was dependent on a simple slash-and-burn system. Based on this method, the hypothesis of soil exhaustion was advanced by Orator F. Cook in 1921. Similar soil exhaustion assumptions are associated with erosion, intensive agriculture, and savanna grass competition.

More recent investigations have shown a complicated variety of intensive agricultural techniques utilized by the Maya, explaining the high population of the Classic Maya polities. Modern archaeologists now comprehend the sophisticated intensive and productive agricultural techniques of the ancient Maya, and several of the Maya agricultural methods have not yet been reproduced. Intensive agricultural methods were developed and utilized by all the Mesoamerican cultures to boost their food production and give them a competitive advantage over less skillful peoples.[38] These intensive agricultural methods included canals, terracing, raised fields, ridged fields, chinampas, the use of human feces as fertilizer, seasonal swamps or bajos, using muck from the bajos to create fertile fields, dikes, dams, irrigation, water reservoirs, several types of water storage systems, hydraulic systems, swamp reclamation, swidden systems, and other agricultural techniques that have not yet been fully understood.[39] Systemic ecological collapse is said to be evidenced by deforestation, siltation, and the decline of biological diversity.

In addition to mountainous terrain, Mesoamericans successfully exploited the very problematic tropical rainforest for 1,500 years.[40] The agricultural techniques utilized by the Maya were entirely dependent upon ample supplies of water. The Maya thrived in territory that would be uninhabitable to most peoples. Their success over two millennia in this environment was "amazing."[41]

Anthropologist Joseph Tainter wrote extensively about the collapse of the Southern Lowland Maya in his 1988 study, The Collapse of Complex Societies. His theory about Mayan collapse encompasses some of the above explanations, but focuses specifically on the development of and the declining marginal returns from the increasing social complexity of the competing Mayan city-states.[42] Psychologist Julian Jaynes suggested that the collapse was due to a failure in the social control systems of religion and political authority, due to increasing socioeconomic complexity that overwhelmed the power of traditional rituals and the king’s authority to compel obedience.[43]

Go here to see the original:

Classic Maya collapse – Wikipedia, the free encyclopedia

 Posted by at 3:30 pm  Tagged with:

2016 Nootropics Survey Results | Slate Star Codex

 Nootropics  Comments Off on 2016 Nootropics Survey Results | Slate Star Codex
Jun 152016
 

[Disclaimer: Nothing here should be taken to endorse using illegal or dangerous substances. This was a quick informal survey and you should not make any important health decisions based on it. Talk to your doctor before trying anything.]

Nootropics are traditionally defined as substances that improve mental function. In practice they usually refer to psychoactive chemicals that are neither recreational drugs like cocaine and heroin, nor officially-endorsed psychiatric drugs like Prozac or Risperdal. Most are natural supplements, foreign medications available in the US without prescription, or experimental compounds. They promise various benefits including clearer thinking, better concentration, improved mood, et cetera. You can read more about them here.

Although a few have been tested formally in small trials, many are known to work only based on anecdote and word of mouth. There are some online communities like r/nootropics where people get together, discuss them, and compare results. I've hung out there for a while, and two years ago, in order to satisfy my own curiosity about which of these were most worth looking into, I got 150 people to answer a short questionnaire about their experiences with different drugs.

Since then the field has changed and I wanted to get updated data. This year 850 (!) people agreed to fill out my questionnaire and rate various nootropics on a scale of 0 to 10; thanks again to everyone who completed the survey.

Before the results themselves, a few comments.

Last time around I complained about noisy results. This year the sample size was five times larger and the results were less noisy. Here's an example: the ratings for caffeine form a beautiful bell curve:

Even better, even though this survey was 80% new people, when it asked the same questions as last year's, the results were quite similar – they correlated at r = 0.76, about what you'd get from making students take the same test twice. Whatever's producing these effects is pretty stable.

A possible objection: since this survey didn't have placebo control, might all the results be placebo? Yes. But one check on this is that the different nootropics controlled against one another. If we believe that picamilon (rated 3.7) is a placebo, this suggests that PRL-8-53 (rated 5.6) does 19 percentage points better than placebo.

But might this be confounded by lack of blinding? Yes. That is, if companies have really hyped PRL-8-53, and it comes in special packaging, and it just generally looks cooler than picamilon, maybe that would give it a stronger placebo effect.

Against this hypothesis I can only plead big differences between superficially similar drugs. For example, rhodiola and ashwagandha are both about equally popular. They're both usually sold by the same companies in the same packaging. They're both classified as adaptogens by the people who classify these sorts of things. But ashwagandha outperforms rhodiola by 0.9 points, which in a paired-samples t-test is significant at the p = 0.03 level. While you can always find some kind of difference in advertising or word-of-mouth that could conceivably have caused a placebo effect, there are at least some reasons to think something's going on here.
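For readers curious what that comparison looks like mechanically, here is a minimal sketch of a paired-samples t-test of the kind mentioned above. The ratings below are synthetic stand-ins generated to mimic a 0.9-point gap on the 0 to 10 scale (they are not the survey data), and the respondent count is a placeholder; only the procedure itself is the point.

```python
# Illustrative only: a paired-samples t-test like the one described above.
# The arrays are synthetic stand-ins, NOT the survey responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_respondents = 200  # placeholder; not the real sample size

# Hypothetical per-respondent ratings on a 0-10 scale, built with a ~0.9-point gap
rhodiola = np.clip(rng.normal(5.0, 2.0, n_respondents), 0, 10)
ashwagandha = np.clip(rhodiola + rng.normal(0.9, 2.5, n_respondents), 0, 10)

t_stat, p_value = stats.ttest_rel(ashwagandha, rhodiola)
print(f"mean gap = {np.mean(ashwagandha - rhodiola):.2f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```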

Without further ado, here's what I found:

Some very predictable winners: Adderall is a prescription drug and probably doesn't even qualify as a nootropic; I included it as a reference point, and it unsurprisingly did very well. LSD microdosing is the practice of taking LSD at one-tenth or less of the normal hallucinogenic dose; users say that it improves creativity and happiness without any of the typical craziness. Phenibut is a Russian anxiolytic drug of undenied effectiveness which is sort of notorious for building tolerance and addiction if used incorrectly. And modafinil is a prescription medication for sleep issues which makes users more awake and energetic. All of these are undeniably effective but all are either addictive, illegal without prescription, or both.

I'm more interested in a second tier of winners, including tianeptine, Semax, and ashwagandha. Tianeptine is a French antidepressant available (legally? kind of a gray area) without prescription in the US; users say it both provides a quick fix for depression and makes them happier and more energetic in general. Semax is a Russian peptide supposed to improve mental clarity and general well-being. Ashwagandha might seem weird to include here since it's all the way down at #15, but a lot of the ones above it had low sample size or were things like caffeine that everyone already knows about, and its high position surprised me. It's an old Indian herb that's supposed to treat anxiety.

The biggest loser here is Alpha Brain, a proprietary supplement sold by a flashy-looking company for $35 a bottle. Many people, including myself, have previously been skeptical that they can be doing much given how many random things they throw into one little pill. But it looks like Alpha Brain underperformed even the nootropics that I think of as likely placebo – things like choline and DMAE. It's possible that survey respondents penalized the company for commercializing what is otherwise a pretty un-branded space, ranking it lower than they otherwise might have to avoid endorsing that kind of thing.

(I was surprised to see picamilon, a Russian modification of the important neurotransmitter GABA, doing so badly. I thought it was pretty well-respected in the community. As far as I can tell, this one is just genuinely bad.)

Finally, a note on addiction.

Adderall, phenibut, and nicotine have all raised concern about possible addictive potential. I wanted to learn a little bit about people's experiences here, so I asked a few questions about how often people were taking things at what dose and whether they got addicted or not.

In retrospect, these were poorly phrased and didn't get me the data I wanted. When people said they were taking Adderall every day and got addicted, I didn't know whether they meant they became addicted because they were using it every day, or that they were using it every day because they were addicted. People gave some really weird answers here and I'm not sure how seriously I can take them. Moving on anyway:

A bit under 15% of users got addicted to Adderall. The conventional wisdom says recreational users are more likely to get addicted than people who take it for a psychiatric condition with a doctor's prescription. There was no sign of this; people who took it legally and people who took it for ADHD were actually much more likely to get addicted than people who described themselves as illegal or recreational users. In retrospect this isn't surprising; typical psychiatric use is every day; typical recreational use is once in a while.

Only 3% of users got addicted to phenibut. This came as a big surprise to me given the caution most people show about this substance. Both of the two people who reported major addictions were using it daily at doses > 2g. The four people who reported minor addictions were less consistent, and some people gave confusing answers like that they had never used it more than once a month but still considered themselves addicted. People were more likely to report tolerance with more frequent use; of those who used it monthly or less, only 6% developed tolerance; of those who used it several times per month, 13%; of those who used it several times per week, 18%; of those who used it daily, 36%.

Then there was nicotine. About 35% of users reported becoming addicted, but this was heavily dependent upon variety of nicotine. Among users who smoked normal tobacco cigarettes, 65% reported addiction. Among those who smoked e-cigarettes, only 25% reported addiction (and again, since there's no time data, it's possible these people switched to e-cigarettes because they were addicted and not vice versa). Among users of nicotine gum and lozenges, only 7% reported addiction, and only 1% reported major addiction. Although cigarettes are a known gigantic health/addiction risk, the nootropic community's use of isolated nicotine as a stimulant seems from this survey (subject to the above caveat) to be comparatively but not completely safe.

I asked people to name their favorite nootropic not on the list. The three most popular answers were ALCAR, pramiracetam, and Ritalin. ALCAR and pramiracetam were on last year's survey and ended up around the middle. Ritalin is no doubt very effective, in much the same way Adderall is very effective – and equally illegal without a prescription.

People also gave their personal stacks and their comments; you can find them in the raw data (.xlsx, .csv) or the fixed-up data (.csv, notes). If you find anything else interesting in there, please post it in the comments here and I'll add a link to it in this post.

EDIT: Jacobian adjusts for user bias

See the original post here:

2016 Nootropics Survey Results | Slate Star Codex

emergent by design

 Neurohacking  Comments Off on emergent by design
Jun 122016
 

illustration by Kirsten Zirngibl

this post originally appeared on Neurohacker Collective

The term "hacker" has its origins in computer programming subcultures from the '60s, and was used to describe people who wanted to take on hard problems in a spirit of playful exploration and a resistance to unearned authority. Although the methods, means and intentions of hackers varied widely, all seemed to share a unique ethos that mixed a deep commitment to individual autonomy and agency with an equally deep commitment to collaboration and co-creation.

Over time, the concept of hacking has traveled far from its origins, finding its way into a number of domains like Biohacking, Consciousness Hacking, Flow Hacking and Life Hacking. Each is a kind of hacking because each shares this hacker's ethos and a commitment to using it to find the most effective ways to optimize the human experience.

We call the common thread that links these hacking communities together "empowered responsibility." This notion expresses the dual recognition that we are no longer able to rely on external authorities to take care of us (in any domain) but through a combination of ubiquitous information, individual experimentation and open collaboration, we are increasingly empowered to take responsibility for ourselves.

In the Biohacking community, the spirit of empowered responsibility drives the process of optimizing one's biological health and performance. Biohackers learn from each other how they can modify their nutrition, exercise, sleep, movement, and mindset to achieve the specific kind of well-being that they individually desire.

The Consciousness Hacking community takes empowered responsibility in using technology as a catalyst for psychological, emotional and spiritual flourishing. They utilize mindfulness techniques and biofeedback tools for self-exploration, taking personal responsibility for their conscious experience in this most individual of journeys.

Emerging from within and alongside these movements, we are observing the coalescence of a new and important domain: Neurohacking.

Whereas biohacking concentrates on the body, and consciousness hacking explores the inner experience, neurohacking is somewhere in the middle, focusing on the mind-brain interface – the intersection of neurology and consciousness. Specifically, neurohacking involves applying science and technology to influence the brain and body in order to optimize subjective experience.

The desired outcomes of neurohacking cover everything from focused productivity, to expanded creativity, more restful sleep, reduced anxiety, enhanced empathy, and anything else that contributes to the psychological well-being and emotional health of whole, thriving human beings.

The technologies of neurohacking run the gamut from chemical technologies like nootropics and entheogens, probiotics to support the gut-brain connection, bioelectrical technologies like neurofeedback and transcranial stimulation, photic therapies like low level laser therapy and all the way to embodied practices like somatics and meditation. So long as there is a scientifically accessible biological mechanism for effecting subjective experience, it belongs in the domain of neurohacking.

Of course, like all emergent phenomena, neurohacking didn't just come from nowhere. For years there have been many movements and communities out there, playing in and pioneering some aspect of the neurohacking space.

Some of these domains include:

We propose that it is now timely and useful to perceive the commonality among these different movements and communities as shared aspects of Neurohacking. And in an effort to make these commonalities more visible and legible to each other, in the upcoming weeks we will take a deeper dive into each, highlight some notable people and projects in each space and explore the frontiers of the community from the point of view of Neurohacking.

In our next post, we will begin this exploration with the domain of Nootropics.

Excerpt from:

emergent by design

 Posted by at 12:39 am  Tagged with:

21 Actionable SEO Techniques You Can Use Right Now

 SEO  Comments Off on 21 Actionable SEO Techniques You Can Use Right Now
May 232016
 

by Brian Dean | Last updated May 20, 2016

People that succeed with SEO do two things very well:

First, they identify SEO techniques that get them results.

Second, they put 100% of their resources into executing and scaling those techniques.

But you're probably wondering:

How do I find SEO strategies that actually work?

Well, today I'm going to make it easy for you.

All you need to do is carve out a few minutes of your day and tackle one of the 21 white hat SEO techniques below.


Broken link building has it all

Scalable.

White hat.

Powerful.

There's only one problem: finding broken links is a HUGE pain.

That is, unless you know about a little-known wrinkle in Wikipedia's editing system.

You see, when a Wikipedia editor stumbles on a dead link, they don't delete the link right away.

Instead, they add a footnote next to the link that says "dead link":

This footnote gives other editors a chance to confirm that the link is actually dead before removing it.

And that simple footnote makes finding broken links dead simple.

Here's how:

First, use this simple search string:

site:wikipedia.org [keyword] + "dead link"

For example, if you were in the investing space you'd search for something like this:

Next, visit a page in the search results that's relevant to your site:

Hit ctrl + f and search for "dead link":

Your browser will jump to any dead links in the references section:

Pro Tip: Wikipedia actually has a list of articles with dead links. This makes finding dead links in Wikipedia even easier.
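If you would rather script this step than run manual searches, here is a rough sketch that performs a similar lookup through Wikipedia's public MediaWiki search API. The keyword "investing" and the result limit are placeholder choices, and this sketch is an illustration rather than part of the original guide; the API parameters used (action=query, list=search, srsearch) are standard MediaWiki ones.

```python
# Sketch: find Wikipedia articles in a niche that mention "dead link",
# roughly mirroring the manual site:wikipedia.org search described above.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def pages_mentioning_dead_links(keyword, limit=10):
    """Return titles of Wikipedia articles matching the keyword plus 'dead link'."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": f'{keyword} "dead link"',
        "srlimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()
    return [hit["title"] for hit in response.json()["query"]["search"]]

if __name__ == "__main__":
    for title in pages_mentioning_dead_links("investing"):
        print(title)
```

From there you would still open each article and confirm which footnoted links are actually dead, exactly as described above.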

OK. So once you've found a dead link, now what?

Well you could re-create that dead resource on your site and replace the dead link in Wikipedia with yours.

But that would only land you a single link (and a nofollow link at that).

Instead, I recommend tapping into The Moving Man Method.

This post will show you everything you need to know:

Now for our next SEO technique

Hit the play button to see how it's done:

Last year I got an email out of the blue:

Turns out Emil used The Skyscraper Technique to achieve these impressive results.

Not only that, but Emil wanted to share his case study with the Backlinko community.

That's when I had an idea:

Instead of writing a new post for Emil's case study, why don't I add it to an existing post?

So that's what I did.

Specifically, I added Emil's case study to this old post:

(I also updated the images and added some new tips)

The final result?

A new and improved version of the post:

To make sure the new post got the attention it deserved, I re-promoted it by sending an email to the Backlinko community:

I also shared it on social media:

The result?

A 111.37% increase in organic traffic to that page.

Pretty cool, right?

It's no secret that compelling title and description tags get more clicks in the SERPs.

(In fact, REALLY good copy can actually steal traffic from the top 3 results)

Question is: How do you know what people want to click on?

That's easy: look at that keyword's AdWords ads.

You see, the AdWords ads that you see for competitive keywords are the result of hundreds (if not thousands) of split tests.

Split tests to maximize clicks.

And you can use copy from these ads to turn your title and description tags into click magnets.

For example, let's say you were going to publish a blog post optimized around the keyword "glass water bottles".

First, take a look at the AdWords ads for that keyword:

Keep an eye out for interesting copy from the ads that you can work into your title and description. In our "glass water bottles" example, we have phrases like:

Here's how your title and description tags might look:

As you can see, these tags include words that are proven to generate clicks.

What if there was an up-to-date list of blogs in your niche that you could use to find quality link opportunities?

I have good news. There is.

And it's called AllTop.

AllTop is a modern day directory that curates the best blogs in every industry under the sun.

To find blogs in your niche, just go to the AllTop homepage and search for a keyword:

Next, find a category that fits with your site's topic:

And AllTop will show you their hand-picked list of quality blogs in that category:

Now you have a long list of some of the best blogs in your industry. And these bloggers are the exact people that you want to start building relationships with.

Let's face it: Most content curation is pretty weak.

I think I speak for everyone when I say that I've read enough "top 100 posts you need to read" lists for one lifetime.

So how can you make your content curation stand out?

By tapping into Benefit-Focused Content Curation.

Benefit-Focused Content Curation is similar to most other types of curation, with one huge difference: it focuses on the outcomes that your audience wants.

I'm sure you'd like to see an example.

Here you go:

This is a guide I put together a while back called "Link Building: The Definitive Guide".

This guide has generated over 116,000 visitors from social media, forums, blogs and search engines.

(I should point out that the guide's design and promotion contributed to its success. But it all started with how the content itself was organized.)

What makes this guide's curation unique is that it's organized by benefits, not topics.

For example, Chapter 2 is called "How to Get Top Notch Links Using Content Marketing".

Note that the title isn't "Chapter 2: Content Marketing". And most of the other chapters follow the same benefit-driven formula.

Why is this so important?

When someone sees a list of curated links, they immediately ask themselves, "What's in it for me?"

And when you organize your content around outcomes and benefits, that answer becomes really, really obvious.

In other words, answering "What's in it for me?" makes the perceived value of your content MUCH higher than a list of 100 random resources.

With all the talk about Hummingbirds and Penguins it's easy to forget about an important Google algorithm update from 2003 called Hilltop.

Despite being over ten years old, Hilltop still plays a major role in today's search engine landscape.

Hilltop is essentially an on-page SEO signal that tells Google whether or not a page is a hub of information.

So: How does Google know which pages are hubs?

It's simple: hubs are determined by the quality and relevancy of that page's outbound links.

This makes total sense if you think about it…

The pages you link out to tend to reflect the topic of your page.

And pages that link to helpful resources also tend to be higher-quality than pages that only link to their own stuff.

In other words, pages that link out to awesome resources establish themselves as hubs of helpful content in the eyes of Big G.

In fact, a recent industry study found a correlation between outbound links and Google rankings.

Bottom line:

Link to at least 3 quality, relevant resources in every piece of content that you publish.

That will show Google that your page is a Hilltop Hub.
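
If your posts go through any kind of build or publishing step, it only takes a few lines to check a draft against that rule of thumb before it goes live. Here's a minimal, standard-library-only sketch; the domain and sample HTML are placeholders, and the three-link threshold simply mirrors the guideline above.

# Minimal sketch: count outbound links in a post's HTML and flag drafts with fewer
# than three external references. Standard library only; OWN_DOMAIN and the sample
# HTML are placeholders.
from html.parser import HTMLParser
from urllib.parse import urlparse

OWN_DOMAIN = "example.com"  # placeholder for your own site

class OutboundLinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.outbound = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        host = urlparse(href).netloc
        if host and OWN_DOMAIN not in host:
            self.outbound.append(href)

def count_outbound(html):
    counter = OutboundLinkCounter()
    counter.feed(html)
    return len(counter.outbound)

sample = '<p>See <a href="https://moz.com/blog">this study</a> and <a href="/about">our about page</a>.</p>'
n = count_outbound(sample)
print(f"{n} outbound link(s)" + (" - consider adding more external references" if n < 3 else ""))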

Read the rest here:
21 Actionable SEO Techniques You Can Use Right Now

Transhumanism – RationalWiki

 Transhumanism  Comments Off on Transhumanism – RationalWiki
Mar 25 2016
 

"You know what they say the modern version of Pascal's Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God." (Julie, from "Crystal Nights" by Greg Egan)

Transhumanism (or H+), broadly speaking, is a futurist movement with a set of beliefs sharing a common theme of anticipating an evolutionary plateau beyond the current Homo sapiens. The term was coined, and the movement founded, by the biologist Julian Huxley in 1957.

The general expectation is that in the near future greater manipulation of human nature will be possible because of the adoption of techniques apparent on the technological frontier: machine intelligence greater than that of contemporary humans, direct mind-computer interface, genetic engineering and nanotechnology. Transhumanists tend to believe, however, that respect for human agency, even when practiced by humans in their current form, is valuable.

How plausible is transhumanism? In the 1930s, many sensible people were sure human beings would never get to the moon, and that was just one of many predictions that turned out to be incorrect.[1] Early 21st century people do not know one way or the other what will be possible in the future.

While frequently dismissed as mere speculation at best by most rationalists[citation needed] (especially in light of the many failures of artificial intelligence), transhumanism is a strongly-held belief among many computer geeks, notably synthesizer and accessible computing guru Ray Kurzweil, a believer in the "technological singularity," where technology evolves beyond humanity's current capacity to understand or anticipate it, and Sun Microsystems founder and Unix demigod Bill Joy, who believes the inevitable result of AI research is the obsolescence of humanity.[2]

Certain recent technological advances are making the possibility of the realization of transhumanism appear more plausible: Scientists funded by the military developed an implant that can translate motor neuron signals into a form that a computer can use, thus opening the door for advanced prosthetics capable of being manipulated like biological limbs and producing sensory information.[3] This is on top of the earlier development of cochlear implants, which translate sound waves into nerve signals; they are often called “bionic ears.”[4]

Even DIY transhumanism is becoming an option, with people installing magnetic implants, allowing them to feel magnetic and electric fields.[5] Others have taken to wearing belts of magnets, in order to always be able to find magnetic north. Prosthetic limbs with some level of touch are also now being developed, a major milestone.[6]

Sadly, some followers of transhumanism[citation needed] engage in a sort of blind-men-at-the-elephant thinking, assuming that because something can be imagined, it must be possible. Transhumanism is particularly associated with figures in computer science, which is a field that is in some ways more math and art than a true experimental science; as a result, a great many[citation needed] transhumanists are technophiles with inevitabilist techno-utopian outlooks.

The example of the singularity is instructive; for a great many people, at least part of the singularity hinges on being able to create a true artificial intelligence. While it's reasonable to contend that the complexity inherent in the human brain is entirely the result of mundane physics, and therefore can be reproduced in principle, singularitarians[citation needed] tend to assume that because the emulation of human intelligence is not impossible, we will have the ability to do it in the near future.

“Whole brain emulation” (WBE) is a term used by transhumanists to refer to, quite obviously, the emulation of a brain on a computer. While this is no doubt a possibility, it encounters two problems that keep it from being a certainty anytime in the near future.

The first is a philosophical objection: For WBE to work, “strong AI” (i.e. AI equivalent to or greater than human intelligence) must be attainable. A number of philosophical objections have been raised against strong AI, generally contending either that the mind or consciousness is not computable or that a simulation of consciousness is not equivalent to true consciousness (whatever that is). There is still controversy over strong AI in the field of philosophy of mind.[7]

A second possible objection is technological: WBE may not defy physics, but the technology to fully simulate a human brain (in the sense meant by transhumanists, at least) is a long way away. Currently, no computer (or network of computers) is powerful enough to simulate a human brain. Henry Markram, head of the Blue Brain Project, estimates that simulating a brain would require 500 petabytes of data for storage and that the power required to run the simulation would cost about $3 billion annually. (However, in 2008, he optimistically predicted this would be possible within ten years.[8]) In addition to technological limitations in computing, there are also the limits of neuroscience. Neuroscience currently relies on technology that can only scan the brain at the level of gross anatomy (e.g., fMRI, PET). Forms of single neuron imaging (SNI) have been developed recently, but they can only be used on animal subjects (usually rats) because they destroy neural tissue.[9]

Another transhumanist goal is mind uploading, which is one way they claim we will be able to achieve immortality. Aside from the problems with WBE listed above, mind uploading suffers a philosophical problem, namely the “swamp man problem.” That is, will the “uploaded” mind be “you” or simply a copy or facsimile of your mind? However, one possible way round this problem would be via incremental replacement of parts of the brain with their cybernetic equivalents (the patient being awake during each operation). Then there is no “breaking” of the continuity of the individual’s consciousness, and it becomes difficult for proponents of the “swamp man” hypothesis to pinpoint exactly when the individual stops being “themselves.”

Cryonics is another favorite of many transhumanists. In principle, cryonics is not impossible, but the current form of it is based largely on hypothetical future technologies and costs substantial amounts of money.

Fighting aging and extending life expectancy is possible; the field that studies aging and attempts to provide suggestions for anti-aging technology is known as "biogerontology." Aubrey de Grey, a transhumanist, has proposed a number of treatments for aging. In 2005, 28 scientists working in biogerontology signed a letter to EMBO Reports pointing out that de Grey's treatments had never been demonstrated to work and that many of his claims for anti-aging technology were extremely inflated.[10]

Worst of all, some transhumanists outright ignore what people in the fields they're interested in tell them; a few AI boosters, for example, believe that neurobiology is an outdated science because AI researchers can do it themselves anyway.[citation needed] They seem to have taken literally the analogy used to introduce the computational theory of mind, "the mind (or brain) is like a computer." Of course, the mind/brain is not a computer in the usual sense.[11] Debates with such people can take on the wearying feel of a debate with a creationist or climate change denialist, as such people will stick to their positions no matter what. Indeed, many critics are simply dismissed as Luddites or woolly-headed romantics who oppose scientific and technological progress.[12]

Transhumanism has often been criticized for not taking ethical issues seriously on a variety of topics,[13] including life extension technology,[14] cryonics,[15] and mind uploading and other enhancements.[16][17] Francis Fukuyama (in his doctrinaire neoconservative days) caused a stir by naming transhumanism “the world’s most dangerous idea.”[18] One of Fukuyama’s criticisms, that implementation of the technologies transhumanists push for will lead to severe inequality, is a rather common one.

A number of political criticisms of transhumanism have been made as well. Transhumanist organizations have been accused of being in the pocket of corporate and military interests.[19] The movement has been identified with Silicon Valley due to the fact that some of its biggest backers, such as Peter Thiel (of PayPal and Bitcoin fame), reside in the region.[20][21] Some writers see transhumanism as a hive of cranky and obnoxious techno-libertarianism.[22][23] The fact that Julian Huxley coined the term “transhumanism” and many transhumanists’ obsession with constructing a Nietzschean ubermensch known as the “posthuman” has led to comparisons with eugenics.[24][19] Like eugenics, it has been characterized as a utopian political ideology.[25] Jaron Lanier slammed it as “cybernetic totalism”.[26]

Some tension has developed between transhumanism and religion, namely Christianity. Some transhumanists, generally being atheistic naturalists, see all religion as an impediment to scientific and technological advancement and some Christians oppose transhumanism because of its stance on cloning and genetic engineering and label it as a heretical belief system.[27] Other transhumanists, however, have attempted to extend an olive branch to Christians.[28] Some have tried to reconcile their religion and techno-utopian beliefs, calling for a "scientific theology."[29] There is even a Mormon transhumanist organization.[30] Ironically for the atheistic transhumanists, the movement has itself been characterized as a religion and its rhetoric compared to Christian apologetics.[31][32]

The very small transhumanist political movement[wp] has gained momentum with Zoltan Istvan[wp] announcing his bid for US president, with the Transhumanist Party and other small political parties gaining support internationally.

The important thing about transhumanism is that while a lot of such predictions may in fact be possible (and may even be in their embryonic stages right now), a strong skeptical eye is required for any claimed prediction about the fields it covers. When evaluating such a claim, one will probably need a trip to a library (or Wikipedia, or a relevant scientist’s home page) to get up to speed on the basics.[33]

A common trope in science fiction for decades is that the prospect of transcending the current form may be positive, as in Arthur C. Clarke's 1953 novel Childhood's End, or negative, as in the film The Matrix, with its barely disguised salvationist theme, or the Terminator series of films, where humanity has been essentially replaced by machine life. Change so radical elicits fear, and thus it is unsurprising that many of the portrayals of transhumanism in popular culture are negative. The cyberpunk genre deals extensively with the theme of a transhumanist society gone wrong.

Among the utopian visions of transhumanism (fused with libertarianism) are those found in the collaborative online science fiction setting Orion's Arm. Temporally located in the post-singularity future, 10,000 years from now, Orion's Arm is massively optimistic about genetic engineering and continued improvements in computing and materials science. Because only technology which has been demonstrated to be impossible is excluded, even remotely plausible concepts have a tendency to be thrown in. At the highest end of the scale are artificial wormhole creation, baby universes and inertia without mass.[34]

Read more from the original source:

Transhumanism – RationalWiki

Transhumanism by Julian Huxley (1957)

 Transhumanism  Comments Off on Transhumanism by Julian Huxley (1957)
Mar 25 2016
 

In New Bottles for New Wine, London: Chatto & Windus, 1957, pp. 13-17

As a result of a thousand million years of evolution, the universe is becoming conscious of itself, able to understand something of its past history and its possible future. This cosmic self-awareness is being realized in one tiny fragment of the universe in a few of us human beings. Perhaps it has been realized elsewhere too, through the evolution of conscious living creatures on the planets of other stars. But on this our planet, it has never happened before.

Evolution on this planet is a history of the realization of ever new possibilities by the stuff of which earth (and the rest of the universe) is made: life; strength, speed and awareness; the flight of birds and the social polities of bees and ants; the emergence of mind, long before man was ever dreamt of, with the production of colour, beauty, communication, maternal care, and the beginnings of intelligence and insight. And finally, during the last few ticks of the cosmic clock, something wholly new and revolutionary, human beings with their capacities for conceptual thought and language, for self-conscious awareness and purpose, for accumulating and pooling conscious experience. For do not let us forget that the human species is as radically different from any of the microscopic single-celled animals that lived a thousand million years ago as they were from a fragment of stone or metal.

The new understanding of the universe has come about through the new knowledge amassed in the last hundred years by psychologists, biologists, and other scientists, by archaeologists, anthropologists, and historians. It has defined man's responsibility and destiny: to be an agent for the rest of the world in the job of realizing its inherent potentialities as fully as possible.

It is as if man had been suddenly appointed managing director of the biggest business of all, the business of evolution, appointed without being asked if he wanted it, and without proper warning and preparation. What is more, he can't refuse the job. Whether he wants to or not, whether he is conscious of what he is doing or not, he is in point of fact determining the future direction of evolution on this earth. That is his inescapable destiny, and the sooner he realizes it and starts believing in it, the better for all concerned.

What the job really boils down to is this: the fullest realization of man's possibilities, whether by the individual, by the community, or by the species in its processional adventure along the corridors of time. Every man-jack of us begins as a mere speck of potentiality, a spherical and microscopic egg-cell. During the nine months before birth, this automatically unfolds into a truly miraculous range of organization: after birth, in addition to continuing automatic growth and development, the individual begins to realize his mental possibilities by building up a personality, by developing special talents, by acquiring knowledge and skills of various kinds, by playing his part in keeping society going. This post-natal process is not an automatic or a predetermined one. It may proceed in very different ways according to circumstances and according to the individual's own efforts. The degree to which capacities are realized can be more or less complete. The end-result can be satisfactory or very much the reverse: in particular, the personality may grievously fail in attaining any real wholeness. One thing is certain, that the well-developed, well-integrated personality is the highest product of evolution, the fullest realization we know of in the universe.

The first thing that the human species has to do to prepare itself for the cosmic office to which it finds itself appointed is to explore human nature, to find out what are the possibilities open to it (including, of course, its limitations, whether inherent or imposed by the facts of external nature). We have pretty well finished the geographical exploration of the earth; we have pushed the scientific exploration of nature, both lifeless and living, to a point at which its main outlines have become clear; but the exploration of human nature and its possibilities has scarcely begun. A vast New World of uncharted possibilities awaits its Columbus.

The great men of the past have given us glimpses of what is possible in the way of personality, of intellectual understanding, of spiritual achievement, of artistic creation. But these are scarcely more than Pisgah glimpses. We need to explore and map the whole realm of human possibility, as the realm of physical geography has been explored and mapped. How to create new possibilities for ordinary living? What can be done to bring out the latent capacities of the ordinary man and woman for understanding and enjoyment; to teach people the techniques of achieving spiritual experience (after all, one can acquire the technique of dancing or tennis, so why not of mystical ecstasy or spiritual peace?); to develop native talent and intelligence in the growing child, instead of frustrating or distorting them? Already we know that painting and thinking, music and mathematics, acting and science can come to mean something very real to quite ordinary average boys and girls, provided only that the right methods are adopted for bringing out the children's possibilities. We are beginning to realize that even the most fortunate people are living far below capacity, and that most human beings develop not more than a small fraction of their potential mental and spiritual efficiency. The human race, in fact, is surrounded by a large area of unrealized possibilities, a challenge to the spirit of exploration.

The scientific and technical explorations have given the Common Man all over the world a notion of physical possibilities. Thanks to science, the under-privileged are coming to believe that no one need be underfed or chronically diseased, or deprived of the benefits of its technical and practical applications.

The world's unrest is largely due to this new belief. People are determined not to put up with a subnormal standard of physical health and material living now that science has revealed the possibility of raising it. The unrest will produce some unpleasant consequences before it is dissipated; but it is in essence a beneficent unrest, a dynamic force which will not be stilled until it has laid the physiological foundations of human destiny.

Once we have explored the possibilities open to consciousness and personality, and the knowledge of them has become common property, a new source of unrest will have emerged: people will realize and believe that if proper measures are taken, no one need be starved of true satisfaction, or condemned to sub-standard fulfillment. This process too will begin by being unpleasant, and end by being beneficent. It will begin by destroying the ideas and the institutions that stand in the way of our realizing our possibilities (or even deny that the possibilities are there to be realized), and will go on by at least making a start with the actual construction of true human destiny.

Up till now human life has generally been, as Hobbes described it, nasty, brutish and short; the great majority of human beings (if they have not already died young) have been afflicted with misery in one form or another: poverty, disease, ill-health, over-work, cruelty, or oppression. They have attempted to lighten their misery by means of their hopes and their ideals. The trouble has been that the hopes have generally been unjustified, the ideals have generally failed to correspond with reality.

The zestful but scientific exploration of possibilities and of the techniques for realizing them will make our hopes rational, and will set our ideals within the framework of reality, by showing how much of them are indeed realizable. Already, we can justifiably hold the belief that these lands of possibility exist, and that the present limitations and miserable frustrations of our existence could be in large measure surmounted. We are already justified in the conviction that human life as we know it in history is a wretched makeshift, rooted in ignorance; and that it could be transcended by a state of existence based on the illumination of knowledge and comprehension, just as our modern control of physical nature based on science transcends the tentative fumblings of our ancestors, that were rooted in superstition and professional secrecy.

To do this, we must study the possibilities of creating a more favourable social environment, as we have already done in large measure with our physical environment. We shall start from new premises. For instance, that beauty (something to enjoy and something to be proud of) is indispensable, and therefore that ugly or depressing towns are immoral; that quality of people, not mere quantity, is what we must aim at, and therefore that a concerted policy is required to prevent the present flood of population-increase from wrecking all our hopes for a better world; that true understanding and enjoyment are ends in themselves, as well as tools for or relaxations from a job, and that therefore we must explore and make fully available the techniques of education and self-education; that the most ultimate satisfaction comes from a depth and wholeness of the inner life, and therefore that we must explore and make fully available the techniques of spiritual development; above all, that there are two complementary parts of our cosmic duty: one to ourselves, to be fulfilled in the realization and enjoyment of our capacities, the other to others, to be fulfilled in service to the community and in promoting the welfare of the generations to come and the advancement of our species as a whole.

The human species can, if it wishes, transcend itself not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.

I believe in transhumanism: once there are enough people who can truly say that, the human species will be on the threshold of a new kind of existence, as different from ours as ours is from that of Pekin man. It will at last be consciously fulfilling its real destiny.

Original post:

Transhumanism by Julian Huxley (1957)

Transhumanist Values – Nick Bostrom

 Transhumanism  Comments Off on Transhumanist Values – Nick Bostrom
Mar 23 2016
 

1. What is Transhumanism?

Transhumanism is a loosely defined movement that has developed gradually over the past two decades.[1] It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.

The enhancement options being discussed include radical extension of human health-span, eradication of disease, elimination of unnecessary suffering, and augmentation of human intellectual, physical, and emotional capacities. Other transhumanist themes include space colonization and the possibility of creating superintelligent machines, along with other potential developments that could profoundly alter the human condition. The ambit is not limited to gadgets and medicine, but encompasses also economic, social, institutional designs, cultural development, and psychological skills and techniques.

Transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman, beings with vastly greater capacities than present human beings have.

Some transhumanists take active steps to increase the probability that they personally will survive long enough to become posthuman, for example by choosing a healthy lifestyle or by making provisions for having themselves cryonically suspended in case of de-animation.[2] In contrast to many other ethical outlooks, which in practice often reflect a reactionary attitude to new technologies, the transhumanist view is guided by an evolving vision to take a more proactive approach to technology policy. This vision, in broad strokes, is to create the opportunity to live much longer and healthier lives, to enhance our memory and other intellectual faculties, to refine our emotional experiences and increase our subjective sense of well-being, and generally to achieve a greater degree of control over our own lives. This affirmation of human potential is offered as an alternative to customary injunctions against playing God, messing with nature, tampering with our human essence, or displaying punishable hubris.

Transhumanism does not entail technological optimism. While future technological capabilities carry immense potential for beneficial deployments, they also could be misused to cause enormous harm, ranging all the way to the extreme possibility of intelligent life becoming extinct. Other potential negative outcomes include widening social inequalities or a gradual erosion of the hard-to-quantify assets that we care deeply about but tend to neglect in our daily struggle for material gain, such as meaningful human relationships and ecological diversity. Such risks must be taken very seriously, as thoughtful transhumanists fully acknowledge.[3]

Transhumanism has roots in secular humanist thinking, yet is more radical in that it promotes not only traditional means of improving human nature, such as education and cultural refinement, but also direct application of medicine and technology to overcome some of our basic biological limits.

The range of thoughts, feelings, experiences, and activities accessible to human organisms presumably constitute only a tiny part of what is possible. There is no reason to think that the human mode of being is any more free of limitations imposed by our biological nature than are those of other animals. In much the same way as chimpanzees lack the cognitive wherewithal to understand what it is like to be human (the ambitions we humans have, our philosophies, the complexities of human society, or the subtleties of our relationships with one another), so we humans may lack the capacity to form a realistic intuitive understanding of what it would be like to be a radically enhanced human (a posthuman) and of the thoughts, concerns, aspirations, and social relations that such humans may have.

Our own current mode of being, therefore, spans but a minute subspace of what is possible or permitted by the physical constraints of the universe (see Figure 1). It is not farfetched to suppose that there are parts of this larger space that represent extremely valuable ways of living, relating, feeling, and thinking.

The limitations of the human mode of being are so pervasive and familiar that we often fail to notice them, and to question them requires manifesting an almost childlike naiveté. Let us consider some of the more basic ones.

Lifespan. Because of the precarious conditions in which our Pleistocene ancestors lived, the human lifespan has evolved to be a paltry seven or eight decades. This is, from many perspectives, a rather short period of time. Even tortoises do better than that.

We don't have to use geological or cosmological comparisons to highlight the meagerness of our allotted time budgets. To get a sense that we might be missing out on something important by our tendency to die early, we only have to bring to mind some of the worthwhile things that we could have done or attempted to do if we had had more time. For gardeners, educators, scholars, artists, city planners, and those who simply relish observing and participating in the cultural or political variety shows of life, three score and ten is often insufficient for seeing even one major project through to completion, let alone for undertaking many such projects in sequence.

Human character development is also cut short by aging and death. Imagine what might have become of a Beethoven or a Goethe if they had still been with us today. Maybe they would have developed into rigid old grumps interested exclusively in conversing about the achievements of their youth. But maybe, if they had continued to enjoy health and youthful vitality, they would have continued to grow as men and artists, to reach levels of maturity that we can barely imagine. We certainly cannot rule that out based on what we know today. Therefore, there is at least a serious possibility of there being something very precious outside the human sphere. This constitutes a reason to pursue the means that will let us go there and find out.

Intellectual capacity. We have all had moments when we wished we were a little smarter. The three-pound, cheese-like thinking machine that we lug around in our skulls can do some neat tricks, but it also has significant shortcomings. Some of these, such as forgetting to buy milk or failing to attain native fluency in languages you learn as an adult, are obvious and require no elaboration. These shortcomings are inconveniences but hardly fundamental barriers to human development.

Yet there is a more profound sense in which the constraints of our intellectual apparatus limit our modes of mentation. I mentioned the Chimpanzee analogy earlier: just as is the case for the great apes, our own cognitive makeup may foreclose whole strata of understanding and mental activity. The point here is not about any logical or metaphysical impossibility: we need not suppose that posthumans would not be Turing computable or that they would have concepts that could not be expressed by any finite sentences in our language, or anything of that sort. The impossibility that I am referring to is more like the impossibility for us current humans to visualize a 200-dimensional hypersphere or to read, with perfect recollection and understanding, every book in the Library of Congress. These things are impossible for us because, simply put, we lack the brainpower. In the same way, we may lack the ability to intuitively understand what being a posthuman would be like or to grok the playing field of posthuman concerns.

Further, our human brains may cap our ability to discover philosophical and scientific truths. It is possible that failure of philosophical research to arrive at solid, generally accepted answers to many of the traditional big philosophical questions could be due to the fact that we are not smart enough to be successful in this kind of enquiry. Our cognitive limitations may be confining us in a Platonic cave, where the best we can do is theorize about shadows, that is, representations that are sufficiently oversimplified and dumbed-down to fit inside a human brain.

Bodily functionality. We enhance our natural immune systems by getting vaccinations, and we can imagine further enhancements to our bodies that would protect us from disease or help us shape our bodies according to our desires (e.g. by letting us control our bodies' metabolic rate). Such enhancements could improve the quality of our lives.

A more radical kind of upgrade might be possible if we suppose a computational view of the mind. It may then be possible to upload a human mind to a computer, by replicating in silico the detailed computational processes that would normally take place in a particular human brain.[4] Being an upload would have many potential advantages, such as the ability to make back-up copies of oneself (favorably impacting on one's life-expectancy) and the ability to transmit oneself as information at the speed of light. Uploads might live either in virtual reality or directly in physical reality by controlling a robot proxy.

Sensory modalities, special faculties and sensibilities. The current human sensory modalities are not the only possible ones, and they are certainly not as highly developed as they could be. Some animals have sonar, magnetic orientation, or sensors for electricity and vibration; many have a much keener sense of smell, sharper eyesight, etc. The range of possible sensory modalities is not limited to those we find in the animal kingdom. There is no fundamental block to adding, say, a capacity to see infrared radiation or to perceive radio signals, and perhaps to add some kind of telepathic sense by augmenting our brains with suitably interfaced radio transmitters.

Humans also enjoy a variety of special faculties, such as appreciation of music and a sense of humor, and sensibilities such as the capacity for sexual arousal in response to erotic stimuli. Again, there is no reason to think that what we have exhausts the range of the possible, and we can certainly imagine higher levels of sensitivity and responsiveness.

Mood, energy, and self-control. Despite our best efforts, we often fail to feel as happy as we would like. Our chronic levels of subjective well-being seem to be largely genetically determined. Life-events have little long-term impact; the crests and troughs of fortune push us up and bring us down, but there is little long-term effect on self-reported well-being. Lasting joy remains elusive except for those of us who are lucky enough to have been born with a temperament that plays in a major key.

In addition to being at the mercy of a genetically determined setpoint for our levels of well-being, we are limited in regard to energy, will-power, and ability to shape our own character in accordance with our ideals. Even such simple goals as losing weight or quitting smoking prove unattainable to many.

Some subset of these kinds of problems might be necessary rather than contingent upon our current nature. For example, we cannot both have the ability easily to break any habit and the ability to form stable, hard-to-break habits. (In this regard, the best one can hope for may be the ability to easily get rid of habits we didn't deliberately choose for ourselves in the first place, and perhaps a more versatile habit-formation system that would let us choose with more precision when to acquire a habit and how much effort it should cost to break it.)

The conjecture that there are greater values than we can currently fathom does not imply that values are not defined in terms of our current dispositions. Take, for example, a dispositional theory of value such as the one described by David Lewis.[5] According to Lewis's theory, something is a value for you if and only if you would want to want it if you were perfectly acquainted with it and you were thinking and deliberating as clearly as possible about it. On this view, there may be values that we do not currently want, and that we do not even currently want to want, because we may not be perfectly acquainted with them or because we are not ideal deliberators. Some values pertaining to certain forms of posthuman existence may well be of this sort; they may be values for us now, and they may be so in virtue of our current dispositions, and yet we may not be able to fully appreciate them with our current limited deliberative capacities and our lack of the receptive faculties required for full acquaintance with them. This point is important because it shows that the transhumanist view that we ought to explore the realm of posthuman values does not entail that we should forego our current values. The posthuman values can be our current values, albeit ones that we have not yet clearly comprehended. Transhumanism does not require us to say that we should favor posthuman beings over human beings, but that the right way of favoring human beings is by enabling us to realize our ideals better and that some of our ideals may well be located outside the space of modes of being that are accessible to us with our current biological constitution.

We can overcome many of our biological limitations. It is possible that there are some limitations that are impossible for us to transcend, not only because of technological difficulties but on metaphysical grounds. Depending on what our views are about what constitutes personal identity, it could be that certain modes of being, while possible, are not possible for us, because any being of such a kind would be so different from us that they could not be us. Concerns of this kind are familiar from theological discussions of the afterlife. In Christian theology, some souls will be allowed by God to go to heaven after their time as corporal creatures is over. Before being admitted to heaven, the souls would undergo a purification process in which they would lose many of their previous bodily attributes. Skeptics may doubt that the resulting minds would be sufficiently similar to our current minds for it to be possible for them to be the same person. A similar predicament arises within transhumanism: if the mode of being of a posthuman being is radically different from that of a human being, then we may doubt whether a posthuman being could be the same person as a human being, even if the posthuman being originated from a human being.

We can, however, envision many enhancements that would not make it impossible for the post-transformation someone to be the same person as the pre-transformation person. A person could obtain quite a bit of increased life expectancy, intelligence, health, memory, and emotional sensitivity, without ceasing to exist in the process. A person's intellectual life can be transformed radically by getting an education. A person's life expectancy can be extended substantially by being unexpectedly cured from a lethal disease. Yet these developments are not viewed as spelling the end of the original person. In particular, it seems that modifications that add to a person's capacities can be more substantial than modifications that subtract, such as brain damage. If most of what someone currently is, including her most important memories, activities, and feelings, is preserved, then adding extra capacities on top of that would not easily cause the person to cease to exist.

Preservation of personal identity, especially if this notion is given a narrow construal, is not everything. We can value other things than ourselves, or we might regard it as satisfactory if some parts or aspects of ourselves survive and flourish, even if that entails giving up some parts of ourselves such that we no longer count as being the same person. Which parts of ourselves we might be willing to sacrifice may not become clear until we are more fully acquainted with the full meaning of the options. A careful, incremental exploration of the posthuman realm may be indispensable for acquiring such an understanding, although we may also be able to learn from each others experiences and from works of the imagination.

Additionally, we may favor future people being posthuman rather than human, if the posthumans would lead lives more worthwhile than the alternative humans would. Any reasons stemming from such considerations would not depend on the assumption that we ourselves could become posthuman beings.

Transhumanism promotes the quest to develop further so that we can explore hitherto inaccessible realms of value. Technological enhancement of human organisms is a means that we ought to pursue to this end. There are limits to how much can be achieved by low-tech means such as education, philosophical contemplation, moral self-scrutiny and other such methods proposed by classical philosophers with perfectionist leanings, including Plato, Aristotle, and Nietzsche, or by means of creating a fairer and better society, as envisioned by social reformists such as Marx or Martin Luther King. This is not to denigrate what we can do with the tools we have today. Yet ultimately, transhumanists hope to go further.

If this is the grand vision, what are the more particular objectives that it translates into when considered as a guide to policy?

What is needed for the realization of the transhumanist dream is that technological means necessary for venturing into the posthuman space are made available to those who wish to use them, and that society be organized in such a manner that such explorations can be undertaken without causing unacceptable damage to the social fabric and without imposing unacceptable existential risks.

Global security. While disasters and setbacks are inevitable in the implementation of the transhumanist project (just as they are if the transhumanist project is not pursued), there is one kind of catastrophe that must be avoided at any cost:

Existential risk: one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Several recent discussions have argued that the combined probability of the existential risks is very substantial. The relevance of the condition of existential safety to the transhumanist vision is obvious: if we go extinct or permanently destroy our potential to develop further, then the transhumanist core value will not be realized. Global security is the most fundamental and nonnegotiable requirement of the transhumanist project.

Technological progress. That technological progress is generally desirable from a transhumanist point of view is also self-evident. Many of our biological shortcomings (aging, disease, feeble memories and intellects, a limited emotional repertoire and inadequate capacity for sustained well-being) are difficult to overcome, and to do so will require advanced tools. Developing these tools is a gargantuan challenge for the collective problem-solving capacities of our species. Since technological progress is closely linked to economic development, economic growth (or, more precisely, productivity growth) can in some cases serve as a proxy for technological progress. (Productivity growth is, of course, only an imperfect measure of the relevant form of technological progress, which, in turn, is an imperfect measure of overall improvement, since it omits such factors as equity of distribution, ecological diversity, and quality of human relationships.)

The history of economic and technological development, and the concomitant growth of civilization, is appropriately regarded with awe, as humanity's most glorious achievement. Thanks to the gradual accumulation of improvements over the past several thousand years, large portions of humanity have been freed from illiteracy, life-expectancies of twenty years, alarming infant-mortality rates, horrible diseases endured without palliatives, and periodic starvation and water shortages. Technology, in this context, is not just gadgets but includes all instrumentally useful objects and systems that have been deliberately created. This broad definition encompasses practices and institutions, such as double-entry accounting, scientific peer-review, legal systems, and the applied sciences.

Wide access. It is not enough that the posthuman realm be explored by someone. The full realization of the core transhumanist value requires that, ideally, everybody should have the opportunity to become posthuman. It would be sub-optimal if the opportunity to become posthuman were restricted to a tiny elite.

There are many reasons for supporting wide access: to reduce inequality; because it would be a fairer arrangement; to express solidarity and respect for fellow humans; to help gain support for the transhumanist project; to increase the chances that you will get the opportunity to become posthuman; to increase the chances that those you care about can become posthuman; because it might increase the range of the posthuman realm that gets explored; and to alleviate human suffering on as wide a scale as possible.

The wide access requirement underlies the moral urgency of the transhumanist vision. Wide access does not argue for holding back. On the contrary, other things being equal, it is an argument for moving forward as quickly as possible. 150,000 human beings on our planet die every day, without having had any access to the anticipated enhancement technologies that will make it possible to become posthuman. The sooner this technology develops, the fewer people will have died without access.

Consider a hypothetical case in which there is a choice between (a) allowing the current human population to continue to exist, and (b) having it instantaneously and painlessly killed and replaced by six billion new human beings who are very similar but non-identical to the people that exist today. Such a replacement ought to be strongly resisted on moral grounds, for it would entail the involuntary death of six billion people. The fact that they would be replaced by six billion newly created similar people does not make the substitution acceptable. Human beings are not disposable. For analogous reasons, it is important that the opportunity to become posthuman is made available to as many humans as possible, rather than having the existing population merely supplemented (or worse, replaced) by a new set of posthuman people. The transhumanist ideal will be maximally realized only if the benefits of technologies are widely shared and if they are made available as soon as possible, preferably within our lifetime.

From these specific requirements flow a number of derivative transhumanist values that translate the transhumanist vision into practice. (Some of these values may also have independent justifications, and transhumanism does not imply that the list of values provided below is exhaustive.)

To start with, transhumanists typically place emphasis on individual freedom and individual choice in the area of enhancement technologies. Humans differ widely in their conceptions of what their own perfection or improvement would consist in. Some want to develop in one direction, others in different directions, and some prefer to stay the way they are. It would be morally unacceptable for anybody to impose a single standard to which we would all have to conform. People should have the right to choose which enhancement technologies, if any, they want to use. In cases where individual choices impact substantially on other people, this general principle may need to be restricted, but the mere fact that somebody may be disgusted or morally affronted by somebody else's using technology to modify herself would not normally be a legitimate ground for coercive interference. Furthermore, the poor track record of centrally planned efforts to create better people (e.g. the eugenics movement and Soviet totalitarianism) shows that we need to be wary of collective decision-making in the field of human modification.

Another transhumanist priority is to put ourselves in a better position to make wise choices about where we are going. We will need all the wisdom we can get when negotiating the posthuman transition. Transhumanists place a high value on improvements in our individual and collective powers of understanding and in our ability to implement responsible decisions. Collectively, we might get smarter and more informed through such means as scientific research, public debate and open discussion of the future, information markets[8], and collaborative information filtering[9]. On an individual level, we can benefit from education, critical thinking, open-mindedness, study techniques, information technology, and perhaps memory- or attention-enhancing drugs and other cognitive enhancement technologies. Our ability to implement responsible decisions can be improved by expanding the rule of law and democracy on the international plane. Additionally, artificial intelligence, especially if and when it reaches human-equivalence or greater, could give an enormous boost to the quest for knowledge and wisdom.

Given the limitations of our current wisdom, a certain epistemic tentativeness is appropriate, along with a readiness to continually reassess our assumptions as more information becomes available. We cannot take for granted that our old habits and beliefs will prove adequate in navigating our new circumstances.

Global security can be improved by promoting international peace and cooperation, and by strongly counteracting the proliferation of weapons of mass destruction. Improvements in surveillance technology may make it easier to detect illicit weapons programs. Other security measures might also be appropriate to counteract various existential risks. More studies on such risks would help us get a better understanding of the long-term threats to human flourishing and of what can be done to reduce them.

Since technological development is necessary to realize the transhumanist vision, entrepreneurship, science, and the engineering spirit are to be promoted. More generally, transhumanists favor a pragmatic attitude and a constructive, problem-solving approach to challenges, preferring methods that experience tells us give good results. They think it better to take the initiative to do something about it rather than sit around complaining. This is one sense in which transhumanism is optimistic. (It is not optimistic in the sense of advocating an inflated belief in the probability of success or in the Panglossian sense of inventing excuses for the shortcomings of the status quo.)

Transhumanism advocates the well-being of all sentience, whether in artificial intellects, humans, or non-human animals (including extraterrestrial species, if there are any). Racism, sexism, speciesism, belligerent nationalism and religious intolerance are unacceptable. In addition to the usual grounds for deeming such practices objectionable, there is also a specifically transhumanist motivation for this. In order to prepare for a time when the human species may start branching out in various directions, we need to start now to strongly encourage the development of moral sentiments that are broad enough to encompass within the sphere of moral concern sentiences that are constituted differently from ourselves.

Finally, transhumanism stresses the moral urgency of saving lives, or, more precisely, of preventing involuntary deaths among people whose lives are worth living. In the developed world, aging is currently the number one killer. Aging is also the biggest cause of illness, disability and dementia. (Even if all heart disease and cancer could be cured, life expectancy would increase by merely six to seven years.) Anti-aging medicine is therefore a key transhumanist priority. The goal, of course, is to radically extend people's active health-spans, not to add a few extra years on a ventilator at the end of life.

Since we are still far from being able to halt or reverse aging, cryonic suspension of the dead should be made available as an option for those who desire it. It is possible that future technologies will make it possible to reanimate people who have been cryonically suspended.[10] While cryonics might be a long shot, it definitely carries better odds than cremation or burial.

The table below summarizes the transhumanist values that we have discussed.

Read the rest here:

Transhumanist Values – Nick Bostrom

Natasha Vita-More | Transhuman Art

 Transhuman  Comments Off on Natasha Vita-More | Transhuman Art
Jan 18 2016
 

Natasha's research concerns the aesthetics of human enhancement and radical life extension, with a focus on sciences and technologies of nanotechnology, biotechnology, information technology, and cognitive and neuro sciences (NBIC). Her conceptual future human design Primo Posthuman has been featured in Wired, Harper's Bazaar, Marie Claire, The New York Times, U.S. News & World Report, Net Business, Teleopolis, and Village Voice. She has appeared in over twenty-four televised documentaries on the future and culture, and has exhibited media artworks at National Centre for Contemporary Arts, Brooks Memorial Museum, Institute of Contemporary Art, Women In Video, Telluride Film Festival, and United States Film Festival, and recently Evolution Haute Couture: Art and Science in the Post-Biological Age. Natasha has been the recipient of several awards: First Place Award at Brooks Memorial Museum, Special Recognition at Women in Video, and Best Graduate Student Project of 2005 for her Futures Podcast Series at the University of Houston Future Studies program.

Natasha is a proponent of human rights and ethical means for human enhancement, and is published in Artifact, Technoetic Arts, Nanotechnology Perceptions, Annual Workshop on Geoethical Nanotechnology, and Death And Anti-Death. She has a bi-monthly column in Nanotechnology Now, is a Guest Editor of The Global Spiral academic journal, and is on the Editorial Board of the International Journal of Green Nanotechnology. Natasha authored Create / Recreate: the 3rd Millennial Culture on the emerging cybernetic culture and the future of humanism and the arts and sciences. She co-authored One on One Fitness, a guide to nutrition and aerobic and anaerobic exercise for women. Her new book The Transhumanist Reader: Classical and Contemporary Look at Philosophy and Technology is scheduled for publication in 2012 through Wiley-Blackwell.

Natasha is Chair of Humanity+, an international non-profit 501(c)(3) organization, and is a former president of Extropy Institute, a networking organization. Natasha continues to work with academic institutions, non-profit organizations and businesses on human futures. She is a track advisor at the Singularity University, on the Scientific Board of Lifeboat Foundation, a Fellow of the Institute for Ethics and Emerging Technologies, Visiting Scholar at 21st Century Medicine, and advises non-profit organizations including Adaptive A.I. and Alcor Life Extension Foundation. She has been a consultant to IBM on the future of human performance.

See the original post here:
Natasha Vita-More | Transhuman Art

 Posted by at 6:40 am  Tagged with:

Freedom to Tinker Research and expert commentary on …

 Freedom  Comments Off on Freedom to Tinker Research and expert commentary on …
Nov 03 2015
 

Yesterday I posted some thoughts about Purdue University's decision to destroy a video recording of my keynote address at its Dawn or Doom colloquium. The organizers had gone dark, and a promised public link was not forthcoming. After a couple of weeks of hoping to resolve the matter quietly, I did some digging and decided to write up what I learned. I posted on the web site of the Century Foundation, my main professional home:

It turns out that Purdue has wiped all copies of my video and slides from university servers, on grounds that I displayed classified documents briefly on screen. A breach report was filed with the university's Research Information Assurance Officer, also known as the Site Security Officer, under the terms of Defense Department Operating Manual 5220.22-M. I am told that Purdue briefly considered, among other things, whether to destroy the projector I borrowed, lest contaminants remain.

I was, perhaps, naive, but pretty much all of that came as a real surprise.

Let's rewind. Information Assurance? Site Security?

These are familiar terms elsewhere, but new to me in a university context. I learned that Purdue, like a number of its peers, has a facility security clearance to perform classified U.S. government research. The manual of regulations runs to 141 pages. (Its terms forbid uncleared trustees to ask about the work underway on their campus, but that's a subject for another day.) The pertinent provision here, spelled out at length in a manual called Classified Information Spillage, requires sanitization, physical removal, or destruction of classified information discovered on unauthorized media.

Two things happened in rapid sequence around the time I told Purdue about my post.

First, the university broke a week-long silence and expressed a measure of regret:

UPDATE: Just after posting this item I received an email from Julie Rosa, who heads strategic communications for Purdue. She confirmed that Purdue wiped my video after consulting the Defense Security Service, but the university now believes it went too far.

In an overreaction while attempting to comply with regulations, the video was ordered to be deleted instead of just blocking the piece of information in question. Just FYI: The conference organizers were not even aware that any of this had happened until well after the video was already gone.

I'm told we are attempting to recover the video, but I have not heard yet whether that is going to be possible. When I find out, I will let you know and we will, of course, provide a copy to you.

Then Edward Snowden tweeted the link, and the Century Foundation's web site melted down. It now redirects to Medium, where you can find the full story.

I have not heard back from Purdue today about recovery of the video. It is not clear to me how recovery is even possible, if Purdue followed Pentagon guidelines for secure destruction. Moreover, although the university seems to suggest it could have posted most of the video, it does not promise to do so now. Most importantly, the best that I can hope for here is that my remarks and slides will be made available in redacted form with classified images removed, and some of my central points therefore missing. There would be one version of the talk for the few hundred people who were in the room on Sept. 24, and for however many watched the live stream, and another version left as the only record.

For our purposes here, the most notable questions have to do with academic freedom in the context of national security. How did a university come to sanitize a public lecture it had solicited, on the subject of NSA surveillance, from an author known to possess the Snowden documents? How could it profess to be shocked to find that spillage is going on at such a talk? The beginning of an answer came, I now see, in the question and answer period after my Purdue remarks. A post-doctoral research engineer stood up to ask whether the documents I had put on display were unclassified. "No," I replied. "They're classified still." Eugene Spafford, a professor of computer science there, later attributed that concern to junior security rangers on the faculty and staff. But the display of Top Secret material, he said, once noted, is something that cannot be unnoted.

Someone reported my answer to Purdue's Research Information Assurance Officer, who reported in turn to Purdue's representative at the Defense Security Service. By the terms of its Pentagon agreement, Purdue decided it was now obliged to wipe the video of my talk in its entirety. I regard this as a rather devout reading of the rules, which allowed Purdue to realistically consider the potential harm that may result from compromise of spilled information. The slides I showed had been viewed already by millions of people online. Even so, federal funding might be at stake for Purdue, and the notoriously vague terms of the Espionage Act hung over the decision. For most lawyers, abundance of caution would be the default choice. Certainly that kind of thinking is commonplace, and sometimes appropriate, in military and intelligence services.

But universities are not secret agencies. They cannot lightly wear the shackles of a National Industrial Security Program, as Purdue agreed to do. The values at their core, in principle and often in practice, are open inquiry and expression.

I do not claim I suffered any great harm when Purdue purged my remarks from its conference proceedings. I do not lack for publishers or public forums. But the next person whose talk is disappeared may have fewer resources.

More importantly, to my mind, Purdue has compromised its own independence and that of its students and faculty. It set an unhappy precedent, even if the people responsible thought they were merely following routine procedures.

One can criticize the university for its choices, and quite a few have since I published my post. What interests me is how nearly the results were foreordained once Purdue made itself eligible for Top Secret work.

Think of it as a classic case of mission creep. Purdue invited the secret-keepers of the Defense Security Service into one cloistered corner of campus (a small but significant fraction of research in certain fields, as the university counsel put it). The trustees accepted what may have seemed a limited burden, confined to the precincts of classified research.

Now the security apparatus claims jurisdiction over the campus (facility) at large. The university finds itself sanitizing a conference that has nothing to do with any government contract.

I am glad to see that Princeton takes the view that "[s]ecurity regulations and classification of information are at variance with the basic objectives of a University." It does not permit faculty members to do classified work on campus, which avoids Purdue's facility problem. And even so, at Princeton and elsewhere, there may be an undercurrent of self-censorship and informal restraint against the use of documents derived from unauthorized leaks.

Two of my best students nearly dropped a course I taught a few years back, called Secrecy, Accountability and the National Security State, when they learned the syllabus would include documents from Wikileaks. Both had security clearances, for summer jobs, and feared losing them. I told them I would put the documents on Blackboard, so they need not visit the Wikileaks site itself, but the readings were mandatory. Both, to their credit, stayed in the course. They did so against the advice of some of their mentors, including faculty members. The advice was purely practical. The U.S. government will not give a clear answer when asked whether this sort of exposure to published secrets will harm job prospects or future security clearances. Why take the risk?

Every student and scholar must decide for him- or herself, but I think universities should push back harder, and perhaps in concert. There is a treasure trove of primary documents in the archives made available by Snowden and Chelsea Manning. The government may wish otherwise, but that information is irretrievably in the public domain. Should a faculty member ignore the Snowden documents when designing a course on network security architecture? Should a student write a dissertation on modern U.S.-Saudi relations without consulting the numerous diplomatic cables on Wikileaks? To me, those would be abdications of the basic duty to seek out authoritative sources of knowledge, wherever they reside.

I would be interested to learn how others have grappled with these questions. I expect to write about them in my forthcoming book on surveillance, privacy and secrecy.

See more here:
Freedom to Tinker Research and expert commentary on …

 Posted by at 8:42 pm  Tagged with:

An SEO Driven Approach To Content Marketing: The Complete …

 SEO  Comments Off on An SEO Driven Approach To Content Marketing: The Complete …
Sep 232015
 

Should you be worried about SEO on your content marketing blog?

In recent months, the necessity of search engine optimization has come under major fire. As Google released its Panda and Penguin algorithms, we all saw a major reduction in search spam, and almost overnight we began noticing major changes in the type of content we saw in our own search results.

Long-time SEO Jill Whalen is now internet famous for quitting her career as an SEO following these major announcements. "Google works now," said Jill. "This means, my friends, that my work here is done."

What does she mean? Is SEO really dead?

As is often the case, nothing is really dead. SEO has changed, dramatically, and as Jill points out, this is a good thing. The good news for content creators like you is that it has changed in your favor. Google now rewards content marketing over spam bots and link-building tricks. It's a victory for good content and a loss for tactics of a questionable nature.

This is a good thing.

You may be wondering why you still need to consider SEO in your writing with all of the changes that have been made by Google. The answer is relatively simple: For a long time, SEO was all about tricks and tactics. It was truly about optimization and opportunism, but not anymore. Now, SEO is about content. Lots of content.

In other words, SEO as we know it picked up camp and moved in with content marketing. We have a new roommate. Why not get to know it a little?

From what I can see, the opportunity for content marketers to use SEO-driven tactics is greater now than ever. We already have the content. What if we add a little science and tactics to our work? Who knows where we might go in the future? We could even put ourselves on page one of search. Wouldn't that be something?

How should the content marketer be approaching the search engines with our writing? This guide aims to answer these questions. SEO may not be dead, but it has dramatically changed, and that means there is a big opportunity for the content marketer who is paying attention.

Here's a step-by-step guide to what you need to do to have a modern SEO-driven approach to content marketing.

When outlining an SEO strategy for content marketing, we take a slightly different approach than what we were used to. It is probably best to begin by understanding how (and why) Google is rewarding longer-form content and other content that is visually focused. Google has started to see these elements as symbols of quality, and is doing a better job of connecting search users to quality content.

Again, that's a good thing, but it doesn't mean that some of the tried and true techniques of old SEO aren't still viable. That's where keywords come in.

One of the most important aspects of search engine optimization has always been the keywords, those words that people use to find our content in search.

In the early days of SEO, the goal was to achieve exact keyword matching. This meant that the page we wanted people to find was perfectly tuned to show up in the search results when someone searched for that phrase. If you searched for exact keyword match, for example, you would find pages that used that phrase exactly as written. Not anymore. Now, you will find pages that discuss the general topic of exact keyword matching.

It may be subtle, but it is an important difference. Rand Fishkin of Moz explains it well in his Whiteboard Friday video.

All of that said, though, I still believe that most good SEO still begins with the keyword. This hasn't changed.

What's changed is the framework we need to use for implementing those keywords in our writing. This is the method that I am going to break down for you in this guide. I am going to show you, step by step, how to use keywords to create an SEO-driven approach to content marketing. Try not to think of it as SEO so much as smart content marketing.

The first step is to find the keywords that matter most for you. There are several tools that will help you do this. The most notable is the Google Adwords Keyword Planner, a tool that is freely available with any Adwords account.

Should content marketers be using keywords in their writing process? Yes.

The concept here is very simple. Start by typing in one of the keywords that is most crucial to your business. Here at CoSchedule, for example, this would be something like content marketing or editorial calendars. From there, Google will automatically provide you with a list of words related to your primary keyword that people all around the world are searching for.

As a content marketer, this is incredibly valuable! Not only do you get a host of keyword ideas, but you should also begin to understand your readers more than ever. This is what they are searching for. How cool is that?

Keywords are located based on your website URL and product category. They are customized to you!

Once you have a list of results from Google, you can individually add keywords that stand out to you to your keyword plan.

Avoid getting overly aggressive, though. For example, in this screenshot I probably don't need to add both content marketing strategy and content marketing strategies. They are a bit redundant, and not likely different enough for me to care about. Since content marketing strategy gets more attention, it would make sense to go with that.

Add important keywords and phrases to your keyword plan.

Your goal here is to create a list of 30-100 keywords that matter to your business, your audience, and to Google. You are doing research here, so the most important thing is that you learn what your audience wants, and what Google will reward.

Once you've created a good list, use the export option to download it as an Excel file, or whatever format you want to work with.
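If you work with that exported file programmatically, a short script can tidy the list before it becomes your checklist. Here is a minimal sketch, assuming a CSV export with a 'Keyword' column; the column name and file name are hypothetical, so adjust them to whatever your export actually contains:

```python
import csv

def load_keyword_goals(path, limit=100):
    """Read an exported keyword file and build a de-duplicated goal list.
    Assumes a CSV with a 'Keyword' column; adjust for your actual export."""
    seen, goals = set(), []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            keyword = row["Keyword"].strip().lower()
            key = keyword.rstrip("s")   # crude merge of simple plural variants
            if key and key not in seen:
                seen.add(key)
                goals.append(keyword)
    return goals[:limit]

print(load_keyword_goals("keyword_plan.csv"))   # hypothetical file name
```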

Key Point: Create a list of keywords that your blog should be targeting and keep it handy.

The list of keywords you built using the keyword planner is your new content marketing checklist. These are the words that you want your site to rank well for on Google. I consider them a list of keyword goals to shoot for.

The next step is to load these goals into a tool that will help you track and monitor where your site ranks for each of these terms. For this, I like to use Positionly, but larger SEO tools like Moz and RavenTools are good options as well. For me, Positionly offers a simplicity that the others don't. It does less, but sometimes that is more.

The purpose of Positionly is very simple. They aim to monitor daily changes to your search engine rankings and help improve where you show up in search engine listings. In other words, they will tell you where your site ranks on Google in respect to each keyword term that you add for your site.

Positionly will tell you how your site ranks for each term. They will also monitor and report daily changes.

This is valuable information because it gives you a benchmark to work against. When you upload your initial list of terms, Positionly will give you an overall assessment of your site in comparison to your selected terms. Depending on how long you have been writing or working on SEO, your results may vary.

Positionly will assess how well your site currently rates for the keywords entered.

One of the hazards of a tool like Positionly is the frequency of information. On any given day you may log in to find that your rankings on several keywords have dropped for no particular reason. This is a natural occurrence, and not something that you should worry about too deeply. Ranking well on Google is an art, not a science. It is also a process, so don't expect to land on top and stay there forever. 😉

Key Point: Use a tool like Positionly to monitor your keyword rankings and track your progress.

Once you have your marching orders (keyword goals), it is time to start incorporating them into your content marketing process.

At CoSchedule, our goal is to focus on one keyword phrase each week by adding a blog post with that keyword phrase to our editorial calendar. We don't get overly scientific about it; we just plop it on there and leave things up to the designated writer to figure out.

Incorporate your keyword based posts into your editorial calendar.

Once the post is on the calendar, it will get written. If you aren't managing an editorial calendar for your team, this is an excellent reason to do so, and one that we heartily recommend. When you pre-plan your content you can become much more purposeful and strategic with your goals.

Once you've worked through your keyword goals list the first time, be sure to refer back to Positionly regularly to help prioritize the keywords that you want (and need) to improve on.

Key Point: Add keyword goals to your editorial calendar each week to keep yourself accountable.

It is worth mentioning at this point that you should never be writing a blog post where a specific keyword isn't identified.

On our team, we try as often as possible to identify the keyword immediately when scheduling a post. Each time we create a post, we either identify the keyword in the headline itself, or note it in the comments field if we are choosing to write the headline later on.

Identify SEO keywords before writing your content.

This is a good practice to get your team into, and will make a big impact on the quality of your posts. Not only will it add SEO value, but it will force your writing team to focus their writing on a well-selected and focused topic.

If you are having trouble identifying your keywords for one-off posts, there are two easy places you can go. First, you could always head back over to the Google Adwords Keyword Planning tool, but that might be overkill at this point. What I like to do is simply complete a basic Google search and take a look at the recommended search terms at the bottom of the page.

Related search terms on Google provide a wealth of keyword knowledge.

Another way to do this research is to use a content creation tool like Scribe by Copyblogger.

This tool allows you to do headline research right inside of your WordPress add/edit page, and provides additional details about the popularity and competition level of each keyword option. It will also provide data regarding your keywords from both Twitter and Google+.

The Scribe plugin by Copyblogger is a handy tool for content marketing SEO.

Key Point:Develop good habits, and declare a keyword for each post that you write.

Once you have a keyword selected for your post, you will need a few tools to ensure that your content stays on point. The two tools that we use here at CoSchedule are the Scribe plugin by Copyblogger and WordPress SEO from Yoast. If you are on a budget, the Yoast plugin is free and will get you 90% of the way to where you need to go.

Both of these plugins work in a similar way. With each, you start by declaring the keyword phrase that you are using for the post. From there, the plugins will tell you how well your content ranks for those keywords. These plugins will evaluate your post based on several key factors:

Article Headline: It is considered best practice to include your exact keyword phrase in the headline of your post.

Page Title: The page title is the bit of text that will show up in your browser tab, or more importantly, at the top of your Google Search listing. You will definitely want to include your keyword in full here.

The Yoast snippet preview will give you a preview of your forthcoming search listing.

Page URL: Your keyword should be included in the slug of your URL. WordPress makes this easy to customize as long as you do it before the post is published.

Content: Both Yoast and Scribe will want to see that the keyword is mentioned within the content of your post. With this, the more you have the better. If you can include the keywords in various sub-headlines you will even get bonus points.

Meta Description: The meta description is the short description of your post that will show up on Google. You will want to use your keyword phrase in this copy.
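A rough way to see how these checks fit together is a small script that looks for the focus keyword in each element. This is only an illustrative sketch of the idea, not the actual scoring logic of Yoast or Scribe:

```python
def keyword_checklist(keyword, headline, slug, meta_description, body):
    """Rough on-page checks in the spirit of the Yoast/Scribe traffic lights.
    An illustration only, not either plugin's actual scoring logic."""
    kw = keyword.lower()
    return {
        "headline contains keyword": kw in headline.lower(),
        "slug contains keyword": kw.replace(" ", "-") in slug.lower(),
        "meta description contains keyword": kw in meta_description.lower(),
        "keyword mentions in body": body.lower().count(kw),
    }

print(keyword_checklist(
    keyword="content marketing strategy",
    headline="How To Build A Content Marketing Strategy",
    slug="content-marketing-strategy",
    meta_description="A step-by-step content marketing strategy guide.",
    body="Your content marketing strategy should start with keyword research...",
))
```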

When writing your posts, you want to make sure they are as optimized as possible for the keywords that you are trying to reach. Both Scribe and Yoast will give your visual confirmation of your success.

Both the WordPress SEO plugin and Scribe will visually show you how your article ranks SEO-wise.

At our office, we always shoot for green before we publish every post. Clicking through both plugins will provide additional information and suggestions.

Yoast page analysis. Lots of good tips here.

Topics vs. Strict Match: One thing that I want to point out is that you need to be careful about the difference between the strict matching of keywords and topic-related search.

As Rand Fishkin pointed out in the video posted above, Google cares more about how you cover the topic overall rather than the exact keyword itself. Yoast tends to lean too heavily on the strict-match method, which is outdated by Google's standards. Scribe, however, seems to handle this much more gracefully and might be worth the extra investment.

Key Point: Optimize your posts so that they perform well for the chosen keywords.

Even though SEO is no longer about the tools and tricks, there are still a few you need to use to make sure that everything is in order. As any good web designer will tell you, most SEO happens in the page itself. If the structure and makeup of your webpage isn't properly optimized, you are already fighting an uphill battle.

You can always use Positionly or this free tool from Neil Patel to get an assessment of how your site performs.

Here are a few additional WordPress plugins that will help you get things in order:

WordPress SEO by Yoast: WordPress SEO is a powerful plugin. Use it to set up sitemaps on your site and optimize your social sharing meta tags. Seriously, spend some time with this one.

WP Rocket: Site speed can make a huge impact on your SEO performance. WP Rocket is a paid plugin, but unlike many of the free options, it shouldn't mess up your site. It is worth the few extra bucks.

In Depth Articles Generator: Generates post metadata for your pages to better present search results to users. There are other plugins that do this, but this one is simple and easy. If you need to validate that it is working, you can use the Google testing tool.
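The metadata such plugins emit is typically structured data along the lines of schema.org Article markup. As a hedged illustration of the general idea (not the output of any particular plugin), something like this ends up in the page head:

```python
import json

def article_metadata(headline, author, published, url):
    """Build schema.org Article markup as JSON-LD, one common way of handing
    search engines post metadata. Any particular plugin's output may differ."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": published,
        "mainEntityOfPage": url,
    }
    return '<script type="application/ld+json">' + json.dumps(data) + "</script>"

print(article_metadata(
    "An SEO Driven Approach To Content Marketing",
    "Example Author",
    "2015-09-23",
    "https://example.com/seo-driven-content-marketing/",  # hypothetical URL
))
```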

Google's Search Engine Optimization Starter Guide: This free guide made available by Google is a great place to start in the world of SEO and optimization.

SEO isn't dead; it has just changed. The good news is that the new world of SEO is better than ever for content marketers like yourself. When combined with a few SEO basics, there is nothing stopping you from making SEO a core part of your inbound marketing strategy.

Read more:
An SEO Driven Approach To Content Marketing: The Complete …

 Posted by at 3:45 am  Tagged with:

Federal court rules that only drug companies, not supplement …

 Misc  Comments Off on Federal court rules that only drug companies, not supplement …
Sep 102015
 

(NaturalNews) In a ruling that many holistic healers and homeopathic physicians are likely to find hypocritical, a federal court has handed Big Pharma an unprecedented victory by giving a drug company preliminary approval to market a drug for a condition for which it has yet to be approved by the Food and Drug Administration.

The drug, Vascepa, manufactured by Amarin Pharma, is approved for use in treating very high levels of fats known as triglycerides over 500 mg per deciliter in a patient’s bloodstream, reports AllGov.com. But Amarin also wanted to promote the medication for use in patients who have “persistently high levels” of triglycerides, from 200 to 499 mg/deciliter.

The FDA denied that request earlier this year over concerns that Vascepa would not help such patients avoid heart attacks or heart disease. That decision led Amarin to file suit in court, claiming its First Amendment rights permitted the company to provide information to physicians and other primary care providers.

Providers have long prescribed medications for “off-label” uses those not included in a drug’s literature or for uses not specifically approved by federal regulators but the drug companies have traditionally been banned from marketing their products for such off-label uses.

“This is huge,” Jacob Sherkow, an associate professor at New York Law School, told The Washington Post. “There have been other instances a court has held that off-label marketing is protected by the First Amendment, but… this is the first time, I think, that any federal court that any court has held in such a clear, full-throated way that off-label marketing is protected by the First Amendment, period, full stop.”

AllGov.com reported that the case stemmed from a 2012 New York City federal appeals court ruling finding that a Big Pharma sales rep had not violated FDA regulations by promoting off-label use for a drug to treat narcolepsy, Xyrem, because his speech, as long as he was not being misleading, was protected by the First Amendment. However, in the Amarin case, the FDA said that the Xyrem decision was limited in scope and therefore could not be applied to Vascepa, but Paul Engelmayer, the federal judge hearing Amarin's case, disagreed.

However, the parameters of "truthful speech" and a complete statement of facts have proved concerning to some.

“I find the decision very troubling. It’s a big push off on to a very slippery slope, a very steep slippery slope toward removing the government’s authority to limit the claims that drug companies can make about the effectiveness of their products,” Harvard Medical School professor Jerry Avorn told the Post.

“There’s an enormous amount, enormous numbers of statements that drug companies could make about their products that are not overtly fraudulent, but are not the same as a comprehensive review of all the good and bad evidence, that the FDA undertakes when it reviews a drug,” Avorn added.

Makers and consumers of health-related supplements, however, are also decrying the ruling, especially companies whose First Amendment rights have been ignored by courts and the FDA in the past.

In December 2012, we reported that a federal appeals court in New York upheld the free speech rights of a pharmaceutical company regarding off-label uses of Xyrem, even as courts and the FDA were gagging makers of natural supplements.

And in March 2013, we reported that the FDA used a truth-in-labeling regulation in issuing warning letters to a pair of supplement companies whose “crime” was nothing more than having customer-related interactions via the Internet.

It appears that there are two separate standards for Big Pharma and holistic and homeopathic healers.

Sources:

AllGov.com

WashingtonPost.com

WSJ.com

NaturalNews.com


Read this article:
Federal court rules that only drug companies, not supplement …

 Posted by at 10:44 am  Tagged with:

Moz Blog – SEO and Inbound Marketing Blog – Moz

 SEO  Comments Off on Moz Blog – SEO and Inbound Marketing Blog – Moz
Aug 262015
 

Learn SEO: Broaden your SEO skills with marketing resources for all skill levels: best practices, beginner guides, industry survey results, videos, webinars, and more.

Get started with: The Beginner’s Guide to SEO

The industry’s top wizards, doctors, and other experts offer their best advice, research, how-tos, and insightsall in the name of helping you level-up your SEO and online marketing skills.

A waterfall diagram, such as those produced by WebPageTest, is a powerful indicator of optimization opportunities. Do you know how to read them?

Are you a local business owner? Explore the hows and whys of submitting your business to local business directories in order to boost your local search visibility on Google.

Do search engines collect and utilise user behaviour data for ranking purposes? We’ve got a deep-dive into the data and theories behind user behaviour, search visibility, and more.

If you’re targeting a certain keyword, knowing where and how often to use that keyword in the various elements of your page is essential. In today’s Whiteboard Friday, Rand offers his recommendation.

There’s a compelling indicator of how our industry is evolving in an area that helps us become better marketers: gender equality. What’s changed over time and what are we doing to improve gender diversity in the workplace?

Have you seen the new Snack Pack? Explore Casey Meraz’s click test results on Google’s new local 3-pack, seeing what’s changed, what works, and what the future holds.

Those of you who have logged into your Moz Local dashboard recently may have noticed a few updates this week! I thought I’d post a quick announcement to highlight them.

Google recently shook up the local results in its SERPs, killing the local 7-packs in favor of a 3-pack that resembles the mobile experience. This post tells you everything you need to know about the change and what it means for your local marketing.

Brand fatigue is a real threat to your marketing strategy. In today’s Whiteboard Friday, Rand highlights some common causes of brand fatigue and how to combat it.

It’s here! We’re excited to announce the results of Moz’s biennial Search Engine Ranking Correlation Study and Expert Survey, aka Ranking Factors. Moz’s Ranking Factors study helps identify which attributes of pages and sites have the strongest association with ranking highly in Google. The study consists of t…

Today we’re excited to announce the results of Moz’s famous Ranking Factors study. The study helps to identify which attributes of webpages and sites have the strongest association with higher rankings in Google. Ready to dive in?

How do commercial and informational queries differ? Does one type of SERP show more or fewer results that are mobile-friendly or using HTTPS? Find those answers in this examination of more than 345,000 search results.

While SEO is a different field than it once was, technical chops are still required to do things really well. In today’s Whiteboard Friday, Rand pushes back against the idea that those skills are no longer necessary.

Buy your MozCon 2015 Video Bundle and access 27 sessions (over 15 hours) from top industry speakers on topics ranging from SEO and content strategy to email marketing and CRO.


View post:
Moz Blog – SEO and Inbound Marketing Blog – Moz

How the Bitcoin protocol actually works | DDI

 Bitcoin  Comments Off on How the Bitcoin protocol actually works | DDI
Aug 182015
 

Many thousands of articles have been written purporting to explain Bitcoin, the online, peer-to-peer currency. Most of those articles give a hand-wavy account of the underlying cryptographic protocol, omitting many details. Even those articles which delve deeper often gloss over crucial points. My aim in this post is to explain the major ideas behind the Bitcoin protocol in a clear, easily comprehensible way. Well start from first principles, build up to a broad theoretical understanding of how the protocol works, and then dig down into the nitty-gritty, examining the raw data in a Bitcoin transaction.

Understanding the protocol in this detailed way is hard work. It is tempting instead to take Bitcoin as given, and to engage in speculation about how to get rich with Bitcoin, whether Bitcoin is a bubble, whether Bitcoin might one day mean the end of taxation, and so on. Thats fun, but severely limits your understanding. Understanding the details of the Bitcoin protocol opens up otherwise inaccessible vistas. In particular, its the basis for understanding Bitcoins built-in scripting language, which makes it possible to use Bitcoin to create new types of financial instruments, such as smart contracts. New financial instruments can, in turn, be used to create new markets and to enable new forms of collective human behaviour. Talk about fun!

Ill describe Bitcoin scripting and concepts such as smart contracts in future posts. This post concentrates on explaining the nuts-and-bolts of the Bitcoin protocol. To understand the post, you need to be comfortable with public key cryptography, and with the closely related idea of digital signatures. Ill also assume youre familiar with cryptographic hashing. None of this is especially difficult. The basic ideas can be taught in freshman university mathematics or computer science classes. The ideas are beautiful, so if youre not familiar with them, I recommend taking a few hours to get familiar.

It may seem surprising that Bitcoins basis is cryptography. Isnt Bitcoin a currency, not a way of sending secret messages? In fact, the problems Bitcoin needs to solve are largely about securing transactions making sure people cant steal from one another, or impersonate one another, and so on. In the world of atoms we achieve security with devices such as locks, safes, signatures, and bank vaults. In the world of bits we achieve this kind of security with cryptography. And thats why Bitcoin is at heart a cryptographic protocol.

My strategy in the post is to build Bitcoin up in stages. Ill begin by explaining a very simple digital currency, based on ideas that are almost obvious. Well call that currency Infocoin, to distinguish it from Bitcoin. Of course, our first version of Infocoin will have many deficiencies, and so well go through several iterations of Infocoin, with each iteration introducing just one or two simple new ideas. After several such iterations, well arrive at the full Bitcoin protocol. We will have reinvented Bitcoin!

This strategy is slower than if I explained the entire Bitcoin protocol in one shot. But while you can understand the mechanics of Bitcoin through such a one-shot explanation, it would be difficult to understand why Bitcoin is designed the way it is. The advantage of the slower iterative explanation is that it gives us a much sharper understanding of each element of Bitcoin.

Finally, I should mention that Im a relative newcomer to Bitcoin. Ive been following it loosely since 2011 (and cryptocurrencies since the late 1990s), but only got seriously into the details of the Bitcoin protocol earlier this year. So Id certainly appreciate corrections of any misapprehensions on my part. Also in the post Ive included a number of problems for the author notes to myself about questions that came up during the writing. You may find these interesting, but you can also skip them entirely without losing track of the main text.

So how can we design a digital currency?

On the face of it, a digital currency sounds impossible. Suppose some person (let's call her Alice) has some digital money which she wants to spend. If Alice can use a string of bits as money, how can we prevent her from using the same bit string over and over, thus minting an infinite supply of money? Or, if we can somehow solve that problem, how can we prevent someone else forging such a string of bits, and using that to steal from Alice?

These are just two of the many problems that must be overcome in order to use information as money.

As a first version of Infocoin, lets find a way that Alice can use a string of bits as a (very primitive and incomplete) form of money, in a way that gives her at least some protection against forgery. Suppose Alice wants to give another person, Bob, an infocoin. To do this, Alice writes down the message I, Alice, am giving Bob one infocoin. She then digitally signs the message using a private cryptographic key, and announces the signed string of bits to the entire world.

(By the way, I'm using capitalized Infocoin to refer to the protocol and general concept, and lowercase infocoin to refer to specific denominations of the currency. A similar usage is common, though not universal, in the Bitcoin world.)

This isn't terribly impressive as a prototype digital currency! But it does have some virtues. Anyone in the world (including Bob) can use Alice's public key to verify that Alice really was the person who signed the message I, Alice, am giving Bob one infocoin. No-one else could have created that bit string, and so Alice can't turn around and say No, I didn't mean to give Bob an infocoin. So the protocol establishes that Alice truly intends to give Bob one infocoin. The same fact (no-one else could compose such a signed message) also gives Alice some limited protection from forgery. Of course, after Alice has published her message it's possible for other people to duplicate the message, so in that sense forgery is possible. But it's not possible from scratch. These two properties (establishment of intent on Alice's part, and the limited protection from forgery) are genuinely notable features of this protocol.
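To make the sign-and-verify step concrete, here is a minimal sketch using the third-party Python ecdsa package. Bitcoin also uses ECDSA over the secp256k1 curve, though its real message and signature formats are more involved than this toy example:

```python
from ecdsa import SigningKey, SECP256k1, BadSignatureError

alice_private = SigningKey.generate(curve=SECP256k1)   # known only to Alice
alice_public = alice_private.get_verifying_key()       # announced to the world

message = b"I, Alice, am giving Bob one infocoin."
signature = alice_private.sign(message)

# Anyone holding Alice's public key can check the signature.
try:
    alice_public.verify(signature, message)
    print("Signature is valid: Alice really signed this message.")
except BadSignatureError:
    print("Signature is invalid.")
```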

I haven't (quite) said exactly what digital money is in this protocol. To make this explicit: it's just the message itself, i.e., the string of bits representing the digitally signed message I, Alice, am giving Bob one infocoin. Later protocols will be similar, in that all our forms of digital money will be just more and more elaborate messages [1].

A problem with the first version of Infocoin is that Alice could keep sending Bob the same signed message over and over. Suppose Bob receives ten copies of the signed message I, Alice, am giving Bob one infocoin. Does that mean Alice sent Bob ten different infocoins? Was her message accidentally duplicated? Perhaps she was trying to trick Bob into believing that she had given him ten different infocoins, when the message only proves to the world that she intends to transfer one infocoin.

What wed like is a way of making infocoins unique. They need a label or serial number. Alice would sign the message I, Alice, am giving Bob one infocoin, with serial number 8740348. Then, later, Alice could sign the message I, Alice, am giving Bob one infocoin, with serial number 8770431, and Bob (and everyone else) would know that a different infocoin was being transferred.

To make this scheme work we need a trusted source of serial numbers for the infocoins. One way to create such a source is to introduce a bank. This bank would provide serial numbers for infocoins, keep track of who has which infocoins, and verify that transactions really are legitimate.

In more detail, lets suppose Alice goes into the bank, and says I want to withdraw one infocoin from my account. The bank reduces her account balance by one infocoin, and assigns her a new, never-before used serial number, lets say 1234567. Then, when Alice wants to transfer her infocoin to Bob, she signs the message I, Alice, am giving Bob one infocoin, with serial number 1234567. But Bob doesnt just accept the infocoin. Instead, he contacts the bank, and verifies that: (a) the infocoin with that serial number belongs to Alice; and (b) Alice hasnt already spent the infocoin. If both those things are true, then Bob tells the bank he wants to accept the infocoin, and the bank updates their records to show that the infocoin with that serial number is now in Bobs possession, and no longer belongs to Alice.

This last solution looks pretty promising. However, it turns out that we can do something much more ambitious. We can eliminate the bank entirely from the protocol. This changes the nature of the currency considerably. It means that there is no longer any single organization in charge of the currency. And when you think about the enormous power a central bank has control over the money supply thats a pretty huge change.

The idea is to make it so everyone (collectively) is the bank. In particular, well assume that everyone using Infocoin keeps a complete record of which infocoins belong to which person. You can think of this as a shared public ledger showing all Infocoin transactions. Well call this ledger the block chain, since thats what the complete record will be called in Bitcoin, once we get to it.

Now, suppose Alice wants to transfer an infocoin to Bob. She signs the message I, Alice, am giving Bob one infocoin, with serial number 1234567, and gives the signed message to Bob. Bob can use his copy of the block chain to check that, indeed, the infocoin is Alices to give. If that checks out then he broadcasts both Alices message and his acceptance of the transaction to the entire network, and everyone updates their copy of the block chain.

We still have the where do serial number come from problem, but that turns out to be pretty easy to solve, and so I will defer it to later, in the discussion of Bitcoin. A more challenging problem is that this protocol allows Alice to cheat by double spending her infocoin. She sends the signed message I, Alice, am giving Bob one infocoin, with serial number 1234567 to Bob, and the messageI, Alice, am giving Charlie one infocoin, with [the same] serial number 1234567 to Charlie. Both Bob and Charlie use their copy of the block chain to verify that the infocoin is Alices to spend. Provided they do this verification at nearly the same time (before theyve had a chance to hear from one another), both will find that, yes, the block chain shows the coin belongs to Alice. And so they will both accept the transaction, and also broadcast their acceptance of the transaction. Now theres a problem. How should other people update their block chains? There may be no easy way to achieve a consistent shared ledger of transactions. And even if everyone can agree on a consistent way to update their block chains, there is still the problem that either Bob or Charlie will be cheated.

At first glance double spending seems difficult for Alice to pull off. After all, if Alice sends the message first to Bob, then Bob can verify the message, and tell everyone else in the network (including Charlie) to update their block chain. Once that has happened, Charlie would no longer be fooled by Alice. So there is most likely only a brief period of time in which Alice can double spend. However, its obviously undesirable to have any such a period of time. Worse, there are techniques Alice could use to make that period longer. She could, for example, use network traffic analysis to find times when Bob and Charlie are likely to have a lot of latency in communication. Or perhaps she could do something to deliberately disrupt their communications. If she can slow communication even a little that makes her task of double spending much easier.

How can we address the problem of double spending? The obvious solution is that when Alice sends Bob an infocoin, Bob shouldnt try to verify the transaction alone. Rather, he should broadcast the possible transaction to the entire network of Infocoin users, and ask them to help determine whether the transaction is legitimate. If they collectively decide that the transaction is okay, then Bob can accept the infocoin, and everyone will update their block chain. This type of protocol can help prevent double spending, since if Alice tries to spend her infocoin with both Bob and Charlie, other people on the network will notice, and network users will tell both Bob and Charlie that there is a problem with the transaction, and the transaction shouldnt go through.

In more detail, lets suppose Alice wants to give Bob an infocoin. As before, she signs the message I, Alice, am giving Bob one infocoin, with serial number 1234567, and gives the signed message to Bob. Also as before, Bob does a sanity check, using his copy of the block chain to check that, indeed, the coin currently belongs to Alice. But at that point the protocol is modified. Bob doesnt just go ahead and accept the transaction. Instead, he broadcasts Alices message to the entire network. Other members of the network check to see whether Alice owns that infocoin. If so, they broadcast the message Yes, Alice owns infocoin 1234567, it can now be transferred to Bob. Once enough people have broadcast that message, everyone updates their block chain to show that infocoin 1234567 now belongs to Bob, and the transaction is complete.
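As a toy illustration of the shared-ledger bookkeeping behind this protocol (a sketch only; the real data structure is the block chain described below), each participant can be thought of as holding a mapping from serial numbers to owners and refusing to endorse a transfer that doesn't match it:

```python
# A toy shared ledger: every participant keeps the same mapping from serial
# number to current owner, and refuses to endorse a transfer that doesn't match it.
ledger = {1234567: "Alice", 1201174: "Tom", 1295618: "Sydney"}

def validate_and_apply(ledger, serial, sender, recipient):
    """Endorse a transfer only if the coin really belongs to the sender."""
    if ledger.get(serial) != sender:
        return False                  # forgery or an attempted double spend
    ledger[serial] = recipient        # everyone updates their copy of the ledger
    return True

print(validate_and_apply(ledger, 1234567, "Alice", "Bob"))      # True
print(validate_and_apply(ledger, 1234567, "Alice", "Charlie"))  # False: already spent
```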

This protocol has many imprecise elements at present. For instance, what does it mean to say once enough people have broadcast that message? What exactly does enough mean here? It cant mean everyone in the network, since we dont a priori know who is on the Infocoin network. For the same reason, it cant mean some fixed fraction of users in the network. We wont try to make these ideas precise right now. Instead, in the next section Ill point out a serious problem with the approach as described. Fixing that problem will at the same time have the pleasant side effect of making the ideas above much more precise.

Suppose Alice wants to double spend in the network-based protocol I just described. She could do this by taking over the Infocoin network. Lets suppose she uses an automated system to set up a large number of separate identities, lets say a billion, on the Infocoin network. As before, she tries to double spend the same infocoin with both Bob and Charlie. But when Bob and Charlie ask the network to validate their respective transactions, Alices sock puppet identities swamp the network, announcing to Bob that theyve validated his transaction, and to Charlie that theyve validated his transaction, possibly fooling one or both into accepting the transaction.

Theres a clever way of avoiding this problem, using an idea known as proof-of-work. The idea is counterintuitive and involves a combination of two ideas: (1) to (artificially) make it computationally costly for network users to validate transactions; and (2) to reward them for trying to help validate transactions. The reward is used so that people on the network will try to help validate transactions, even though thats now been made a computationally costly process. The benefit of making it costly to validate transactions is that validation can no longer be influenced by the number of network identities someone controls, but only by the total computational power they can bring to bear on validation. As well see, with some clever design we can make it so a cheater would need enormous computational resources to cheat, making it impractical.

Thats the gist of proof-of-work. But to really understand proof-of-work, we need to go through the details.

Suppose Alice broadcasts to the network the news that I, Alice, am giving Bob one infocoin, with serial number 1234567.

As other people on the network hear that message, each adds it to a queue of pending transactions that they've been told about, but which haven't yet been approved by the network. For instance, another network user named David might have the following queue of pending transactions:

I, Tom, am giving Sue one infocoin, with serial number 1201174.

I, Sydney, am giving Cynthia one infocoin, with serial number 1295618.

I, Alice, am giving Bob one infocoin, with serial number 1234567.

David checks his copy of the block chain, and can see that each transaction is valid. He would like to help out by broadcasting news of that validity to the entire network.

However, before doing that, as part of the validation protocol David is required to solve a hard computational puzzle, the proof-of-work. Without the solution to that puzzle, the rest of the network won't accept his validation of the transaction.

What puzzle does David need to solve? To explain that, let h be a fixed hash function known by everyone in the network; it's built into the protocol. (Bitcoin uses the well-known SHA-256 hash function, but any cryptographically secure hash function will do.) Let's give David's queue of pending transactions a label, l, just so it's got a name we can refer to. Suppose David appends a number x (called the nonce) to l and hashes the combination h(l + x). For example, we can take l to be the string "Hello, world!" (obviously this is not a list of transactions, just a string used for illustrative purposes), append the nonce x, and compute the SHA-256 hash of the result in hexadecimal.

The puzzle David has to solve, the proof-of-work, is to find a nonce x such that when we append x to l and hash the combination, the output hash begins with a long run of zeroes. The puzzle can be made more or less difficult by varying the number of zeroes required to solve the puzzle. A relatively simple proof-of-work puzzle might require just three or four zeroes at the start of the hash, while a more difficult proof-of-work puzzle might require a much longer run of zeroes, say 15 consecutive zeroes. In our example, the attempt with the nonce x = 0 is a failure, since the output doesn't begin with any zeroes at all. Trying x = 1 doesn't work either.

We can keep trying different values for the nonce x. Eventually, we find a value of x whose hash output begins with four zeroes.

This nonce gives us a string of four zeroes at the beginning of the output of the hash. This will be enough to solve a simple proof-of-work puzzle, but not enough to solve a more difficult proof-of-work puzzle.
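A minimal sketch of this nonce search, using Python's standard hashlib (the same computation, with the digests printed rather than reproduced here):

```python
import hashlib
import itertools

def find_nonce(transactions, difficulty=4):
    """Search for a nonce x such that sha256(transactions + x) starts with
    `difficulty` leading zeroes in its hexadecimal form."""
    for nonce in itertools.count():
        digest = hashlib.sha256((transactions + str(nonce)).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest

nonce, digest = find_nonce("Hello, world!")
print(nonce, digest)  # a nonce whose hash begins with four hex zeroes
```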

What makes this puzzle hard to solve is the fact that the output from a cryptographic hash function behaves like a random number: change the input even a tiny bit and the output from the hash function changes completely, in a way that's hard to predict. So if we want the output hash value to begin with 10 zeroes, say, then David will need, on average, to try on the order of 16^10 (roughly 10^12) different values for the nonce before he finds a suitable one. That's a pretty challenging task, requiring lots of computational power.

Obviously, it's possible to make this puzzle more or less difficult to solve by requiring more or fewer zeroes in the output from the hash function. In fact, the Bitcoin protocol gets quite a fine level of control over the difficulty of the puzzle, by using a slight variation on the proof-of-work puzzle described above. Instead of requiring leading zeroes, the Bitcoin proof-of-work puzzle requires the hash of a block's header to be lower than or equal to a number known as the target. This target is automatically adjusted to ensure that a Bitcoin block takes, on average, about ten minutes to validate.
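In sketch form, the target variant is just a numeric comparison (glossing over Bitcoin's double SHA-256 of the header, byte-order conventions, and the compact encoding of the target in the block itself):

```python
import hashlib

def meets_target(header: bytes, target: int) -> bool:
    """Treat the (double) SHA-256 hash of the block header as a big integer
    and require it to be at or below the target; a lower target means a
    harder puzzle. Byte order and the compact target encoding are simplified."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "big") <= target

# Roughly equivalent to demanding four leading zero hex digits:
print(meets_target(b"example header", (1 << (256 - 16)) - 1))
```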

(In practice there is a sizeable randomness in how long it takes to validate a block; sometimes a new block is validated in just a minute or two, other times it may take 20 minutes or even longer. It's straightforward to modify the Bitcoin protocol so that the time to validation is much more sharply peaked around ten minutes. Instead of solving a single puzzle, we can require that multiple puzzles be solved; with some careful design it is possible to considerably reduce the variance in the time to validate a block of transactions.)

Alright, let's suppose David is lucky and finds a suitable nonce x. Celebration! (He'll be rewarded for finding the nonce, as described below.) He broadcasts the block of transactions he's approving to the network, together with the value for x. Other participants in the Infocoin network can verify that x is a valid solution to the proof-of-work puzzle. And they then update their block chains to include the new block of transactions.

For the proof-of-work idea to have any chance of succeeding, network users need an incentive to help validate transactions. Without such an incentive, they have no reason to expend valuable computational power, merely to help validate other people's transactions. And if network users are not willing to expend that power, then the whole system won't work. The solution to this problem is to reward people who help validate transactions. In particular, suppose we reward whoever successfully validates a block of transactions by crediting them with some infocoins. Provided the infocoin reward is large enough, that will give them an incentive to participate in validation.

In the Bitcoin protocol, this validation process is called mining. For each block of transactions validated, the successful miner receives a bitcoin reward. Initially, this was set to be a 50 bitcoin reward. But for every 210,000 validated blocks (roughly, once every four years) the reward halves. This has happened just once, to date, and so the current reward for mining a block is 25 bitcoins. This halving in the rate will continue every four years until the year 2140 CE. At that point, the reward for mining will drop below 10^-8 bitcoins per block. 10^-8 bitcoins is actually the minimal unit of Bitcoin, and is known as a satoshi. So in 2140 CE the total supply of bitcoins will cease to increase. However, that won't eliminate the incentive to help validate transactions. Bitcoin also makes it possible to set aside some currency in a transaction as a transaction fee, which goes to the miner who helps validate it. In the early days of Bitcoin transaction fees were mostly set to zero, but as Bitcoin has gained in popularity, transaction fees have gradually risen, and are now a substantial additional incentive on top of the 25 bitcoin reward for mining a block.
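The halving schedule is simple enough to write down directly; here is a sketch of a reward calculation along these lines (in satoshis, with a guard so the shift eventually bottoms out at zero):

```python
def block_subsidy(height, initial=50 * 100_000_000, interval=210_000):
    """Mining reward in satoshis at a given block height: 50 BTC at first,
    halved every 210,000 blocks until it rounds down to zero."""
    halvings = height // interval
    if halvings >= 64:            # shifting any further would always give zero
        return 0
    return initial >> halvings

print(block_subsidy(0) / 1e8)        # 50.0
print(block_subsidy(210_000) / 1e8)  # 25.0
print(block_subsidy(420_000) / 1e8)  # 12.5
```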

You can think of proof-of-work as a competition to approve transactions. Each entry in the competition costs a little bit of computing power. A miner's chance of winning the competition is (roughly, and with some caveats) equal to the proportion of the total computing power that they control. So, for instance, if a miner controls one percent of the computing power being used to validate Bitcoin transactions, then they have roughly a one percent chance of winning the competition. So provided a lot of computing power is being brought to bear on the competition, a dishonest miner is likely to have only a relatively small chance to corrupt the validation process, unless they expend a huge amount of computing resources.

Of course, while it's encouraging that a dishonest party has only a relatively small chance to corrupt the block chain, that's not enough to give us confidence in the currency. In particular, we haven't yet conclusively addressed the issue of double spending.

I'll analyse double spending shortly. Before doing that, I want to fill in an important detail in the description of Infocoin. We'd ideally like the Infocoin network to agree upon the order in which transactions have occurred. If we don't have such an ordering then at any given moment it may not be clear who owns which infocoins. To help do this we'll require that new blocks always include a pointer to the last block validated in the chain, in addition to the list of transactions in the block. (The pointer is actually just a hash of the previous block.) So typically the block chain is just a linear chain of blocks of transactions, one after the other, with later blocks each containing a pointer to the immediately prior block.
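A toy sketch of that chaining (the field names are hypothetical; Bitcoin's real block headers carry more than this, including the nonce and the target):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (a stand-in for Bitcoin's header hashing)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"prev_hash": None,
           "transactions": ["coinbase: 50 infocoins to David"]}
block_1 = {"prev_hash": block_hash(genesis),
           "transactions": ["I, Alice, am giving Bob one infocoin, with serial number 1234567."]}

# Each block points to its predecessor by hash; altering an earlier block
# would change its hash and break every later pointer in the chain.
print(block_1["prev_hash"])
```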

Occasionally, a fork will appear in the block chain. This can happen, for instance, if by chance two miners happen to validate a block of transactions near-simultaneously: both broadcast their newly-validated block out to the network, and some people update their block chain one way, and others update their block chain the other way.

This causes exactly the problem we're trying to avoid: it's no longer clear in what order transactions have occurred, and it may not be clear who owns which infocoins. Fortunately, there's a simple idea that can be used to remove any forks. The rule is this: if a fork occurs, people on the network keep track of both forks. But at any given time, miners only work to extend whichever fork is longest in their copy of the block chain.

Suppose, for example, that we have a fork in which some miners receive block A first, and some miners receive block B first. Those miners who receive block A first will continue mining along that fork, while the others will mine along fork B. Let's suppose that the miners working on fork B are the next to successfully mine a block.

After they receive news that this has happened, the miners working on fork A will notice that fork B is now longer, and will switch to working on that fork. Presto, in short order work on fork A will cease, and everyone will be working on the same linear chain, and block A can be ignored. Of course, any still-pending transactions in A will still be pending in the queues of the miners working on fork B, and so all transactions will eventually be validated.

Likewise, it may be that the miners working on fork A are the first to extend their fork. In that case work on fork B will quickly cease, and again we have a single linear chain.

No matter what the outcome, this process ensures that the block chain has an agreed-upon time ordering of the blocks. In Bitcoin proper, a transaction is not considered confirmed until: (1) it is part of a block in the longest fork, and (2) at least 5 blocks follow it in the longest fork. In this case we say that the transaction has 6 confirmations. This gives the network time to come to an agreed-upon ordering of the blocks. We'll also use this strategy for Infocoin.
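
In code, the confirmation rule amounts to counting how many blocks sit at or above the block containing the transaction in the longest fork (the containing block counts as the first confirmation).

def confirmations(longest_fork, tx):
    """Confirmations of tx in the longest fork: 0 if it has not been
    included, otherwise 1 for the block containing it plus 1 for every
    block mined on top of that block."""
    for height, block in enumerate(longest_fork):
        if tx in block:
            return len(longest_fork) - height
    return 0

# A seven-block fork: the transaction sits in the second block, with five
# blocks mined on top of it, giving six confirmations in all.
fork = [["coinbase"], ["Alice pays Bob"], [], [], [], [], []]
print(confirmations(fork, "Alice pays Bob"))       # 6
print(confirmations(fork, "Alice pays Bob") >= 6)  # True: treat as confirmed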

With the time-ordering now understood, let's return to think about what happens if a dishonest party tries to double spend. Suppose Alice tries to double spend with Bob and Charlie. One possible approach is for her to try to validate a block that includes both transactions. Assuming she has one percent of the computing power, she will occasionally get lucky and validate the block by solving the proof-of-work. Unfortunately for Alice, the double spending will be immediately spotted by other people in the Infocoin network and rejected, despite solving the proof-of-work problem. So that's not something we need to worry about.

A more serious problem occurs if she broadcasts two separate transactions in which she spends the same infocoin with Bob and Charlie, respectively. She might, for example, broadcast one transaction to a subset of the miners, and the other transaction to another set of miners, hoping to get both transactions validated in this way. Fortunately, in this case, as we've seen, the network will eventually confirm one of these transactions, but not both. So, for instance, Bob's transaction might ultimately be confirmed, in which case Bob can go ahead confidently. Meanwhile, Charlie will see that his transaction has not been confirmed, and so will decline Alice's offer. So this isn't a problem either. In fact, knowing that this will be the case, there is little reason for Alice to try this in the first place.

An important variant on double spending is if Alice = Bob, i.e., Alice tries to spend a coin with Charlie which she is also spending with herself (i.e., giving back to herself). This sounds like it ought to be easy to detect and deal with, but, of course, it's easy on a network to set up multiple identities associated with the same person or organization, so this possibility needs to be considered. In this case, Alice's strategy is to wait until Charlie accepts the infocoin, which happens after the transaction has been confirmed 6 times in the longest chain. She will then attempt to fork the chain before the transaction with Charlie, adding a block which includes a transaction in which she pays herself:

Unfortunately for Alice, it's now very difficult for her to catch up with the longer fork. Other miners won't want to help her out, since they'll be working on the longer fork. And unless Alice is able to solve the proof-of-work at least as fast as everyone else in the network combined (roughly, that means controlling more than fifty percent of the computing power), she will just keep falling further and further behind. Of course, she might get lucky. We can, for example, imagine a scenario in which Alice controls one percent of the computing power, but happens to get lucky and finds six extra blocks in a row, before the rest of the network has found any extra blocks. In this case, she might be able to get ahead, and get control of the block chain. But this particular event will occur with probability (1/100)^6 = 10^-12. A more general analysis along these lines shows that Alice's probability of ever catching up is infinitesimal, unless she is able to solve proof-of-work puzzles at a rate approaching all other miners combined.
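
For a slightly more general estimate, a calculation in the spirit of the gambler's-ruin argument sketched alongside the original Bitcoin paper treats each block as an independent race that the attacker wins with probability equal to her share of the hash power. I'm paraphrasing that analysis rather than quoting it, so treat the formula below as an assumption of the sketch.

def catch_up_probability(q, z):
    """Rough gambler's-ruin estimate of the chance that an attacker
    controlling a fraction q of the hash power ever catches up from
    z blocks behind (assumption: each block is an independent race the
    attacker wins with probability q). p is the honest network's share."""
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

print(catch_up_probability(0.01, 6))  # about 1e-12: hopeless for Alice
print(catch_up_probability(0.45, 6))  # about 0.3: a near-majority attacker is dangerous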

Of course, this is not a rigorous security analysis showing that Alice cannot double spend. It's merely an informal plausibility argument. The original paper introducing Bitcoin did not, in fact, contain a rigorous security analysis, only informal arguments along the lines I've presented here. The security community is still analysing Bitcoin, and trying to understand possible vulnerabilities. You can see some of this research listed here, and I mention a few related problems in the Problems for the author below. At this point I think it's fair to say that the jury is still out on how secure Bitcoin is.

The proof-of-work and mining ideas give rise to many questions. How much reward is enough to persuade people to mine? How does the change in supply of infocoins affect the Infocoin economy? Will Infocoin mining end up concentrated in the hands of a few, or many? If it's just a few, doesn't that endanger the security of the system? Presumably transaction fees will eventually equilibrate; won't this introduce an unwanted source of friction, and make small transactions less desirable? These are all great questions, but beyond the scope of this post. I may come back to the questions (in the context of Bitcoin) in a future post. For now, we'll stick to our focus on understanding how the Bitcoin protocol works.

Let's move away from Infocoin, and describe the actual Bitcoin protocol. There are a few new ideas here, but with one exception (discussed below) they're mostly obvious modifications to Infocoin.

To use Bitcoin in practice, you first install a wallet program on your computer. To give you a sense of what that means, here's a screenshot of a wallet called MultiBit. You can see the Bitcoin balance on the left (0.06555555 Bitcoins, or about 70 dollars at the exchange rate on the day I took this screenshot) and on the right two recent transactions, which deposited those 0.06555555 Bitcoins:

Suppose you're a merchant who has set up an online store, and you've decided to allow people to pay using Bitcoin. What you do is tell your wallet program to generate a Bitcoin address. In response, it will generate a public / private key pair, and then hash the public key to form your Bitcoin address:
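
Here is a hedged sketch of that step. Real Bitcoin generates an ECDSA key pair on the secp256k1 curve and then hashes the public key with SHA-256 followed by RIPEMD-160 (plus a Base58Check encoding I'm leaving out); in the sketch the "key pair" is just random bytes, because the only point being illustrated is that the address is a hash of the public key.

import hashlib, os

# Stand-in key pair: real Bitcoin uses ECDSA over secp256k1. Random bytes
# play the role of the keys here so that the hashing step is the focus.
private_key = os.urandom(32)
public_key = hashlib.sha256(private_key).digest()  # placeholder derivation, not ECDSA

# The address is a hash of the public key: SHA-256, then RIPEMD-160
# (Base58Check encoding of the result is omitted in this sketch).
sha = hashlib.sha256(public_key).digest()
try:
    address_hash = hashlib.new("ripemd160", sha).hexdigest()
except ValueError:
    # Some Python/OpenSSL builds do not ship RIPEMD-160; fall back to
    # SHA-256 so the sketch still runs, at the cost of realism.
    address_hash = hashlib.sha256(sha).hexdigest()

print("safe to publish:", address_hash)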

You then send your Bitcoin address to the person who wants to buy from you. You could do this in email, or even put the address up publicly on a webpage. This is safe, since the address is merely a hash of your public key, which can safely be known by the world anyway. (I'll return later to the question of why the Bitcoin address is a hash, and not just the public key.)

The person who is going to pay you then generates a transaction. Let's take a look at the data from an actual transaction transferring bitcoins. What's shown below is very nearly the raw data. It's changed in three ways: (1) the data has been deserialized; (2) line numbers have been added, for ease of reference; and (3) I've abbreviated various hashes and public keys, just putting in the first six hexadecimal digits of each, when in reality they are much longer. Here's the data:
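
(The transaction data itself has not survived in this copy of the post, so what follows is a reconstruction pieced together from the line-by-line walkthrough below: the abbreviated hashes are the ones quoted there, the scriptPubKey on line 14 is written in the standard pay-to-address form rather than quoted verbatim, and the byte count on line 6 is a made-up placeholder.)

1.  {"hash":"7c4025...",
2.  "ver":1,
3.  "vin_sz":1,
4.  "vout_sz":1,
5.  "lock_time":0,
6.  "size":224,
7.  "in":[
8.    {"prev_out":{
9.      "hash":"2007ae...",
10.     "n":0},
11.    "scriptSig":"304502... 04b2d..."}],
12. "out":[
13.   {"value":"0.31900000",
14.    "scriptPubKey":"OP_DUP OP_HASH160 a7db6f... OP_EQUALVERIFY OP_CHECKSIG"}]}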

Let's go through this, line by line.

Line 1 contains the hash of the remainder of the transaction, 7c4025…, expressed in hexadecimal. This is used as an identifier for the transaction.

Line 2 tells us that this is a transaction in version 1 of the Bitcoin protocol.

Lines 3 and 4 tell us that the transaction has one input and one output, respectively. I'll talk below about transactions with more inputs and outputs, and why that's useful.

Line 5 contains the value for lock_time, which can be used to control when a transaction is finalized. For most Bitcoin transactions being carried out today the lock_time is set to 0, which means the transaction is finalized immediately.

Line 6 tells us the size (in bytes) of the transaction. Note that it's not the monetary amount being transferred! That comes later.

Lines 7 through 11 define the input to the transaction. In particular, lines 8 through 10 tell us that the input is to be taken from the output from an earlier transaction, with the given hash, which is expressed in hexadecimal as 2007ae…. The n=0 tells us it's to be the first output from that transaction; we'll see soon how multiple outputs (and inputs) from a transaction work, so don't worry too much about this for now. Line 11 contains the signature of the person sending the money, 304502…, followed by a space, and then the corresponding public key, 04b2d…. Again, these are both in hexadecimal.

One thing to note about the input is that there's nothing explicitly specifying how many bitcoins from the previous transaction should be spent in this transaction. In fact, all the bitcoins from the n=0th output of the previous transaction are spent. So, for example, if the n=0th output of the earlier transaction was 2 bitcoins, then 2 bitcoins will be spent in this transaction. This seems like an inconvenient restriction, rather like trying to buy bread with a 20 dollar note and not being able to break the note down. The solution, of course, is to have a mechanism for providing change. This can be done using transactions with multiple inputs and outputs, which we'll discuss in the next section.
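
To make the change mechanism concrete, here is a sketch with made-up amounts: the entire earlier output is consumed as the input, one output pays the merchant, a second output returns the remainder to an address the payer controls, and anything left unaccounted for becomes the transaction fee.

# Spending a 2.0 bitcoin output to pay 1.5 bitcoins. The whole input is
# consumed, so a second "change" output returns the rest to the payer.
# (Floats for brevity; real software counts integer satoshis.)
input_value = 2.0      # value of the earlier output being spent (made up)
payment = 1.5          # amount the recipient should receive (made up)
fee = 0.0005           # left implicit in a real transaction: inputs minus outputs

outputs = {
    "recipient_address": payment,
    "payer_change_address": input_value - payment - fee,
}
print(outputs)                                        # 0.4995 comes back as change
print(round(input_value - sum(outputs.values()), 8))  # 0.0005 goes to the miner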

Lines 12 through 14 define the output from the transaction. In particular, line 13 tells us the value of the output, 0.319 bitcoins. Line 14 is somewhat complicated. The main thing to note is that the string a7db6f… is the Bitcoin address of the intended recipient of the funds (written in hexadecimal). In fact, Line 14 is actually an expression in Bitcoin's scripting language. I'm not going to describe that language in detail in this post; the important thing to take away now is just that a7db6f… is the Bitcoin address.

You can now see, by the way, how Bitcoin addresses the question I swept under the rug in the last section: where do Bitcoin serial numbers come from? In fact, the role of the serial number is played by transaction hashes. In the transaction above, for example, the recipient is receiving 0.319 Bitcoins, which come out of the first output of an earlier transaction with hash 2007ae… (line 9). If you go and look in the block chain for that transaction, you'd see that its output comes from a still earlier transaction. And so on.

There are two clever things about using transaction hashes instead of serial numbers. First, in Bitcoin there aren't really any separate, persistent coins at all, just a long series of transactions in the block chain. It's a clever idea to realize that you don't need persistent coins, and can just get by with a ledger of transactions. Second, by operating in this way we remove the need for any central authority issuing serial numbers. Instead, the serial numbers can be self-generated, merely by hashing the transaction.
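
As a sketch of that self-generated serial number: Bitcoin identifies a transaction by hashing its serialized bytes (the real protocol applies SHA-256 twice to a binary serialization; the JSON serialization below is just a convenient stand-in to show the idea).

import hashlib, json

def txid(transaction):
    """Self-generated identifier for a transaction: a hash of its
    serialized contents. Real Bitcoin double-SHA-256es a binary
    serialization; JSON is used here only as a stand-in."""
    serialized = json.dumps(transaction, sort_keys=True).encode()
    return hashlib.sha256(hashlib.sha256(serialized).digest()).hexdigest()

tx = {"in": [{"prev_out": "2007ae...", "n": 0}], "out": [{"value": "0.319"}]}
print(txid(tx))  # plays the role of a serial number, with no central issuer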

In fact, it's possible to keep following the chain of transactions further back in history. Ultimately, this process must terminate. This can happen in one of two ways. The first possibility is that you'll arrive at the very first Bitcoin transaction, contained in the so-called Genesis block. This is a special transaction, having no inputs, but a 50 Bitcoin output. In other words, this transaction establishes an initial money supply. The Genesis block is treated separately by Bitcoin clients, and I won't get into the details here, although it's along similar lines to the transaction above. You can see the deserialized raw data here, and read about the Genesis block here.

The second possibility when you follow a chain of transactions back in time is that eventually you'll arrive at a so-called coinbase transaction. With the exception of the Genesis block, every block of transactions in the block chain starts with a special coinbase transaction. This is the transaction rewarding the miner who validated that block of transactions. It uses a similar but not identical format to the transaction above. I won't go through the format in detail, but if you want to see an example, see here. You can read a little more about coinbase transactions here.

Something I haven't been precise about above is what exactly is being signed by the digital signature in line 11. The obvious thing to do is for the payer to sign the whole transaction (apart from the transaction hash, which, of course, must be generated later). Currently, this is not what is done: some pieces of the transaction are omitted. This makes some pieces of the transaction malleable, i.e., they can be changed later. However, this malleability does not include the amounts being paid out, senders and recipients, which can't be changed later. I must admit I haven't dug down into the details here. I gather that this malleability is under discussion in the Bitcoin developer community, and there are efforts afoot to reduce or eliminate this malleability.

In the last section I described how a transaction with a single input and a single output works. In practice, it's often extremely convenient to create Bitcoin transactions with multiple inputs or multiple outputs. I'll talk below about why this can be useful. But first let's take a look at the data from an actual transaction:

Let's go through the data, line by line. It's very similar to the single-input-single-output transaction, so I'll do this pretty quickly.

Line 1 contains the hash of the remainder of the transaction. This is used as an identifier for the transaction.

Line 2 tells us that this is a transaction in version 1 of the Bitcoin protocol.

Lines 3 and 4 tell us that the transaction has three inputs and two outputs, respectively.

Line 5 contains the lock_time. As in the single-input-single-output case this is set to 0, which means the transaction is finalized immediately.

Line 6 tells us the size of the transaction in bytes.

Lines 7 through 19 define a list of the inputs to the transaction. Each corresponds to an output from a previous Bitcoin transaction.

The first input is defined in lines 8 through 11.

In particular, lines 8 through 10 tell us that the input is to be taken from the n=0th output from the transaction with hash 3beabc…. Line 11 contains the signature, followed by a space, and then the public key of the person sending the bitcoins.

Lines 12 through 15 define the second input, with a similar format to lines 8 through 11. And lines 16 through 19 define the third input.

Lines 20 through 24 define a list containing the two outputs from the transaction.

The first output is defined in lines 21 and 22. Line 21 tells us the value of the output, 0.01068000 bitcoins. As before, line 22 is an expression in Bitcoin's scripting language. The main thing to take away here is that the string e8c30622… is the Bitcoin address of the intended recipient of the funds.

The second output is defined in lines 23 and 24, with a similar format to the first output.
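
One detail worth making explicit about transactions like this: the transaction fee is never written down anywhere in the data; it is simply the total of the inputs minus the total of the outputs. A quick sketch, in which only the first output value (0.01068000) comes from the transaction above and the other amounts are made up:

# The fee is implicit: whatever the inputs add up to beyond the outputs.
# Only the 0.01068000 figure comes from the transaction discussed above;
# the other amounts are invented for illustration.
inputs = [0.005, 0.004, 0.003]        # values of the three outputs being spent
outputs = [0.01068000, 0.00122000]    # the two outputs of this transaction

fee = sum(inputs) - sum(outputs)
print(round(fee, 8))  # 0.0001, collected by whichever miner includes the transaction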

More here:
How the Bitcoin protocol actually works | DDI

Decoding Illuminati Symbolism: Triangles, Pyramids and the Sun

 Illuminati  Comments Off on Decoding Illuminati Symbolism: Triangles, Pyramids and the Sun
Jul 22 2015
 

Psychiatrist Carl Jung once said about symbols that their purpose was to give a meaning to the life of man. Catapulted into the mainstream by Jay-Z's infamous Roc-diamond (which only looks like a triangle, although he has said that it's a four-sided diamond for the Rock in Roc-A-Fella records), the symbolism of the triangle and pyramid are key players in the realm of conspiracy theories and Illuminati symbolism. You can find these symbols in most any big industry: music, film, corporate logos, etc. But why do we see these symbols so often? What do they truly mean?

The symbol of the triangle is commonly held to have a much deeper and esoteric meaning than the basic geometric shape we common-folk see. The symbolism, or meaning, of the triangle is usually viewed as one of spiritual importance. The Christian faith views the three sides of the triangle as the Holy Trinity: God the Father, God the Son, and God the Holy Spirit. Ancient Egyptians believed the right-angled triangle represented their form of the Trinity, with the hypotenuse being the child god Horus, the upright side being the sacred feminine goddess Isis, and the base being the male Osiris.

This concept was kept in a sort of chain of custody when the Greek mathematician Pythagoras learned much from the ancient Egyptians and then applied it to geometry. He even went as far as to set up one of the first schools of mystery, with a religious sect that practiced his philosophy, mathematics, and the conferring of esoteric principles. In theory, the secret societies, cults, occultists, and other nefarious groups, collectively known as the Illuminati, maintain all of this knowledge and use it in a much different manner.

To understand why all of this matters, you must learn about the belief system of the occult. A researcher named Marty Leeds wrote books on mathematics and the universal language that nature uses to communicate to us. He believes that various languages are sacred and have a basis in ancient symbols through mathematics. I find his argument compelling, and I've tried to incorporate some of its logic into this post, as I find it important to the argument.

The three sides of a triangle represent the number 3, and this concept is used in gematria, the ancient Babylonian/Hebrew numerology practice that assigns numbers to words or letters (and also other mystical schools of thought). The number 3 is representative of the spirit realm (or the Heavens), while in contrast, the number 4 represents the physical realm (the material, three-dimensional world we can relate to). The number 3 is a number of the divine, showing the union of male and female that create a third being. It's the number of manifestation: to make something happen.

Another analogy to consider is that the upright triangle points towards the Heavens, while the inverted one points to the Earth (or Hell, if you want to get all fire and brimstone with it).

Read more:
Decoding Illuminati Symbolism: Triangles, Pyramids and the Sun

 Posted by at 7:55 pm  Tagged with:

Illuminati NWO explained 2015 with Gary Harbinger – Pope, Climate Change, Aliens, Christ F – Video

 Illuminati  Comments Off on Illuminati NWO explained 2015 with Gary Harbinger – Pope, Climate Change, Aliens, Christ F – Video
Apr 12 2015
 



Illuminati NWO explained 2015 with Gary Harbinger – Pope, Climate Change, Aliens, Christ F
hope you guys can glean from this post too.

By: Evangeline France

View post:
Illuminati NWO explained 2015 with Gary Harbinger – Pope, Climate Change, Aliens, Christ F – Video



