
The Four Stages of Financial Independence – The Simple Dollar

Jun 26, 2016

Financial independence is a tricky phrase because it can mean different things to different people.

Right now, I view financial independence as being a state where I no longer have to work for money. Yet, seven or eight years ago, I might have viewed it as simply being free from worrying about my next paycheck. At different points in there, I might have seen financial independence completely differently.

Along the way, I've come to realize that financial independence is made up of a series of stages. Some people might see more stages, while others might see fewer; I see four clear ones.

In my own financial journey, and in the journeys of others I've had conversations with, financial independence generally means the next stage that hasn't been achieved yet.

For example, once upon a time, I viewed financial independence as not needing to rely on my parents or on my very next paycheck to survive. As I achieved that, my definition changed.

Let's walk through these four stages and look at what needs to be done to achieve each one.

According to recent studies, 76% of Americans live paycheck to paycheck. In the words of the article:

Fewer than one in four Americans have enough money in their savings account to cover at least six months of expenses, enough to help cushion the blow of a job loss, medical emergency or some other unexpected event, according to the survey of 1,000 adults. Meanwhile, 50% of those surveyed have less than a three-month cushion and 27% had no savings at all.

In other words, a person is in this category if they're going to see significant financial problems within a short period of losing their primary job. You can define "short period" however you want: a month, six months, whatever.

I tend to define it as six months. If you were fired tomorrow and could survive for six months without getting a comparable job, without facing complete financial apocalypse, and without a huge explosion in your debt, you're probably enjoying freedom from the paycheck-to-paycheck cycle. Believe it or not, three in four Americans can't match that level.
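The six-month test above boils down to simple arithmetic: divide your emergency fund by your essential monthly spending and see whether the result clears six. A minimal sketch (the dollar figures are hypothetical examples, not recommendations):

```python
# Rough runway check: how many months could you survive on savings alone?
# The dollar figures below are hypothetical examples.

def months_of_runway(emergency_fund: float, monthly_expenses: float) -> float:
    """Months of essential expenses covered with no income at all."""
    return emergency_fund / monthly_expenses

savings = 12_000.0    # cash set aside for emergencies
expenses = 2_500.0    # essential monthly spending
print(f"Runway: {months_of_runway(savings, expenses):.1f} months")  # Runway: 4.8 months
```

Anything under 6.0 here means you are, by this definition, still living paycheck to paycheck.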

A part of this is freedom from "financial outpatient support" from the Bank of Mom and Dad. If you still require a regular influx of cash from your parents to maintain your current lifestyle, then you are unquestionably still living paycheck to paycheck. (The exception is people who receive these kinds of gifts and channel all of them into savings, which is the best way to make financial progress with parental support.) To reach this stage, you must stand on your own two feet.

How did I do it? We achieved this level in late 2006 or early 2007, perhaps nine months after the beginning of our financial turnaround. We paid off several credit cards and built a very healthy emergency fund during those early months, but it took until the end of the year for us to begin to feel a bit of security about our situation.

How do you get here? The best method is to cut expenses. Live as cheaply as possible and use the excess to get your bills up to date and build up some cash in your savings account. If you're not fully employed, look for work, as you need income to make this happen. Master spending less than you earn, as you'll always want to be in that state.

The next level of financial independence, in my experience, is freedom from debt. Why is this such a vital level? It represents the clearest possible case for minimizing one's monthly expenditures. Once your debts are gone, your set of monthly bills is going to be awfully small, plus you won't be giving away money in the form of interest payments.

When most people reach debt freedom, they're often stunned at the amount of cash sitting in their checking account. It becomes much, much easier to invest for the future, as you can take the money that was disappearing into a black hole of debt and instead apply it to your future. You're building wealth instead of undoing earlier mistakes.

How did I do it? We achieved freedom from all non-mortgage debts in 2008 and complete debt freedom in 2011. Not only did it feel like a huge weight left our shoulders at that point, we noticed that our financial growth really began to accelerate. With no debt payments, we moved to a model where we have been banking my entire income since early 2012.

How do you get here? Build and execute a debt repayment plan. Keep your expenses low so that you can blow through that plan.
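One common way to structure such a plan is the "debt avalanche": pay the minimum on everything, then direct every spare dollar at the highest-interest debt first. A sketch (the debts, balances, and rates are made-up examples):

```python
# Debt-avalanche ordering: attack the highest APR first, since that debt
# bleeds the most interest per dollar of balance. Example debts are hypothetical.

debts = [
    {"name": "credit card",  "balance": 4_000,  "apr": 0.22},
    {"name": "car loan",     "balance": 9_000,  "apr": 0.06},
    {"name": "student loan", "balance": 15_000, "apr": 0.045},
]

# Pay minimums on everything, then throw all extra cash at the top of this list.
payoff_order = sorted(debts, key=lambda d: d["apr"], reverse=True)
for d in payoff_order:
    print(f"{d['name']}: ${d['balance']:,} at {d['apr']:.1%}")
```

Some people prefer the "debt snowball" (smallest balance first) for the motivational wins; mathematically, the avalanche minimizes total interest paid.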

At this point, you can lose any of your family income streams and you'll still survive. If you lose your primary job, you can keep rolling in perpetuity. You still need to work for a living, but no single method of earning money is a requirement. A pink slip is just shrugged off, and changing career paths is completely fine.

Ideally, this is true because you have income arriving from a number of different sources. Maybe you earn money from your primary job, your investments, an array of YouTube videos you've posted, and maybe a book you wrote a year or two ago. If you lose any of those streams, you're still fine; it just means you devote more time to the other ones. If your passion for one of those streams is gone, you can simply close it out and move to another one.

How did I do it? Sarah and I achieved this threshold sometime in late 2012 or early 2013. During that time, Sarah realistically thought about leaving her current career path for a while to pursue other things, and we realized during that conversation that our finances really weren't the primary part of the discussion any more. Yes, there would have been a financial impact from that choice, but the discussion mostly revolved around Sarah's personal goals. She was free from her job at that point; she chose to stick with it because she realized how much she loved her work.

How do you get here? Invest for the future so that your money starts producing income on its own. Spend some of your spare time creating things that generate income for you, like writing a book or recording YouTube videos. Keep your expenses low so that you can afford to invest a lot and so that losing an income stream isn't devastating.

The final level, which is the target that Sarah and I have for the future, comes when your investment income exceeds your living expenses, meaning you no longer have to work for money. You can spend your time however you wish, as long as you don't spend money foolishly. Ideally, your income from investments exceeds your spending enough that you can roll some of that investment income into more investments, making you more or less inflation-proof.

Our goal is to achieve this level by 2020 or so. We're aware that we have the expensive mountain of three children entering postsecondary education, all within five years, in the early 2020s; otherwise, we'd probably be able to achieve it sooner than that.

How do you get here? Keep investing. Eventually, when you get close, invest in things that produce direct income for you, such as dividend-paying stocks or rental properties. Keep your expenses low, too.
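A common back-of-the-envelope test for this final stage compares a sustainable annual withdrawal from your portfolio against your annual spending. The sketch below uses the oft-cited 4% withdrawal rate; treat both the rate and the dollar figures as illustrative assumptions, not the author's numbers:

```python
# Are you financially independent under a given safe-withdrawal rate?
# The 4% rate and all dollar figures are illustrative assumptions.

def financially_independent(portfolio: float, annual_expenses: float,
                            withdrawal_rate: float = 0.04) -> bool:
    """True if sustainable investment income covers living expenses."""
    return portfolio * withdrawal_rate >= annual_expenses

print(financially_independent(900_000.0, 40_000.0))   # False: $36,000 < $40,000

# Equivalently, the target portfolio is annual expenses divided by the rate:
print(f"Target: ${40_000.0 / 0.04:,.0f}")             # Target: $1,000,000
```

Lower expenses shrink the target directly, which is why the article keeps returning to spending control.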

There's one common thread running through all of these stages: keep your spending under control. No matter where you're at financially, frivolous spending is your enemy. It pushes your financial goals away from you and increases the time it takes to move to the next level.

Still, it's a balancing act. Sometimes expenses that bring joy today are worth the delay in financial progress. The key is recognizing that such spending slows your progress. Always question your unnecessary desires; indulge them sometimes, but put the stops on the sillier ones.

This progression isn't going to happen immediately. Sarah and I took years to move from level to level, and we threw ourselves deeply into frugality while we were both working full-time jobs and I was starting a significant side business that was very time- and labor-intensive. Be patient.

One final thought: it feels great to achieve each level. It makes your day-to-day life feel less challenging and less stressful. You experience much more freedom than before, and you can weigh many more factors, such as personal happiness and engagement, when making major life and career decisions.

Best of luck in your financial journey!



Prison Bill Gates And Neo-Eugenics: Vaccines …


F. William Engdahl, Financial Sense, Friday, March 5, 2010

Microsoft founder and one of the world's wealthiest men, Bill Gates, projects an image of a benign philanthropist using his billions, via his (tax-exempt) Bill & Melinda Gates Foundation, to tackle diseases, solve food shortages in Africa, and alleviate poverty. At a recent conference in California, Gates revealed a less public agenda of his philanthropy: population reduction, otherwise known as eugenics.

Gates made his remarks to the invitation-only Long Beach, California TED2010 Conference, in a speech titled "Innovating to Zero!" Along with the scientifically absurd proposition of reducing manmade CO2 emissions worldwide to zero by 2050, approximately four and a half minutes into the talk Gates declares, "First we got population. The world today has 6.8 billion people. That's headed up to about 9 billion. Now if we do a really great job on new vaccines, health care, reproductive health services, we lower that by perhaps 10 or 15 percent."1 (author's emphasis)

In plain English, one of the most powerful men in the world states clearly that he expects vaccines to be used to reduce population growth. When Bill Gates speaks about vaccines, he speaks with authority. In January 2010 at the elite Davos World Economic Forum, Gates announced his foundation would give $10 billion (circa 7.5 billion) over the next decade to develop and deliver new vaccines to children in the developing world. 2

The primary focus of his multi-billion dollar Gates Foundation is vaccinations, especially in Africa and other underdeveloped countries. Bill and Melinda Gates Foundation is a founding member of the GAVI Alliance (Global Alliance for Vaccinations and Immunization) in partnership with the World Bank, WHO and the vaccine industry. The goal of GAVI is to vaccinate every newborn child in the developing world.

Now that sounds like noble philanthropic work. The problem is that the vaccine industry has been repeatedly caught dumping dangerous (meaning unsafe because untested or proven harmful) vaccines onto unwitting Third World populations when it cannot get rid of the vaccines in the West.3 Some organizations have suggested that the true aim of the vaccinations is to make people sicker and even more susceptible to disease and premature death.4

Dumping toxins on the Third World

In the aftermath of the most recent unnecessary "Pandemic" declaration of a global H1N1 swine flu emergency, industrial countries were left sitting on hundreds of millions of doses of untested vaccines. They decided to get rid of the embarrassing leftover drugs by handing them over to the WHO, which in turn plans to dump them for free on select poor countries. France has given 91 million of the 94 million doses the Sarkozy government bought from the pharma giants; Britain gave 55 million of its 60 million doses. The story for Germany and Norway is similar.5

As Dr. Thomas Jefferson, an epidemiologist with the Cochrane Research Center in Rome, noted, "Why do they give the vaccines to the developing countries at all? The pandemic has been called off in most parts of the world. The greatest threat in poor countries right now is heart and circulatory diseases while the virus figures at the bottom of the list. What is the medical reason for donating 180 million doses?"6 As well, flu is a minor problem in countries with abundant sunshine, and it turned out that the feared H1N1 pandemic "new great plague" was the mildest flu on record.

The pharmaceutical vaccine makers do not speak about the enormous health damage from infant vaccination, including autism and numerous neuro-muscular deformities that have been traced back to the toxic adjuvants and preservatives used in most vaccines. Many vaccines, especially multi-dose vaccines that are made more cheaply for sale to the Third World, contain something called Thimerosal (Thiomersal in the EU), a compound (sodium ethylmercurithiosalicylate) containing some 50% mercury, used as a preservative.

In July 1999 the US National Vaccine Information Center declared in a press release that "The cumulative effects of ingesting mercury can cause brain damage." The same month, the American Academy of Pediatrics (AAP) and the Centers for Disease Control and Prevention (CDC) alerted the public about the possible health effects associated with thimerosal-containing vaccines. They strongly recommended that thimerosal be removed from vaccines as soon as possible. Under the directive of the FDA Modernization Act of 1997, the Food and Drug Administration also determined that infants who received several thimerosal-containing vaccines may be receiving mercury exposure over and above the recommended federal guidelines.7


A new form of eugenics?

Gates' interest in inducing population reduction among black and other minority populations is unfortunately not new. As I document in my book Seeds of Destruction,8 since the 1920s the Rockefeller Foundation had funded eugenics research in Germany through the Kaiser Wilhelm Institutes in Berlin and Munich, including well into the Third Reich. They praised the forced sterilization of people by Hitler's Germany, and the Nazi ideas on race purity. It was John D. Rockefeller III, a life-long advocate of eugenics, who used his tax-free foundation money to initiate the population reduction neo-Malthusian movement through his private Population Council in New York beginning in the 1950s.

The idea of using vaccines to covertly reduce births in the Third World is also not new. Bill Gates' good friend David Rockefeller and his Rockefeller Foundation were involved as early as 1972 in a major project, together with WHO and others, to perfect another "new vaccine."

The results of the WHO-Rockefeller project were put into mass application on human guinea pigs in the early 1990s. The WHO oversaw massive vaccination campaigns against tetanus in Nicaragua, Mexico and the Philippines. Comite Pro Vida de Mexico, a Roman Catholic lay organization, became suspicious of the motives behind the WHO program, decided to test numerous vials of the vaccine, and found them to contain human Chorionic Gonadotrophin, or hCG. That was a curious component for a vaccine designed to protect people against lockjaw arising from infection of rusty-nail wounds or other contact with certain bacteria found in soil. The tetanus disease was, indeed, also rather rare. It was also curious because hCG is a natural hormone needed to maintain a pregnancy. When combined with a tetanus toxoid carrier, however, it stimulated formation of antibodies against hCG, rendering a woman incapable of maintaining a pregnancy, a form of concealed abortion. Similar reports of vaccines laced with hCG hormones came from the Philippines and Nicaragua.9

Gates' Gene Revolution in Africa

The Bill and Melinda Gates Foundation, along with David Rockefeller's Rockefeller Foundation, the creators of the GMO biotechnology, are also financing a project called The Alliance for a Green Revolution in Africa (AGRA), headed by former UN chief Kofi Annan. Accepting the role as AGRA head in June 2007, Annan expressed his "gratitude to the Rockefeller Foundation, the Bill & Melinda Gates Foundation, and all others who support our African campaign." The AGRA board is dominated by people from both the Gates and Rockefeller foundations.10

Monsanto, DuPont, Dow, Syngenta and other major GMO agribusiness giants are reported at the heart of AGRA, using it as a back-door to spread their patented GMO seeds across Africa under the deceptive label "bio-technology," a euphemism for genetically engineered patented seeds. The person from the Gates Foundation responsible for its work with AGRA is Dr. Robert Horsch, a 25-year Monsanto GMO veteran who was on the team that developed Monsanto's RoundUp Ready GMO technologies. His job is reportedly to use Gates' money to introduce GMO into Africa.11

To date, South Africa is the only African country permitting legal planting of GMO crops. In 2003 Burkina Faso authorized GMO trials. In 2005 Kofi Annan's Ghana drafted bio-safety legislation, and key officials expressed their intention to pursue research into GMO crops. AGRA is being used to create networks of "agro-dealers" across Africa, at first with no mention of GMO seeds or herbicides, in order to have the infrastructure in place to massively introduce GMO.12

GMO, glyphosate and population reduction

GMO crops have never been proven safe for human or animal consumption. Moreover, they are inherently genetically unstable, as they are an unnatural product of introducing foreign material, such as the bacterium Bacillus thuringiensis (Bt), into the DNA of a given seed to change its traits. Perhaps equally dangerous are the paired chemical herbicides sold as a mandatory part of a GMO contract, such as Monsanto's Roundup, the most widely used such herbicide in the world. It contains highly toxic glyphosate compounds that have been independently tested and proven to exist in toxic concentrations in GMO applications far above what is safe for humans or animals. Tests show that tiny amounts of glyphosate compounds would damage the umbilical, embryonic and placental cells of a pregnant woman drinking the ground water near a GMO field.13

One long-standing project of the US Government has been to perfect a genetically-modified variety of corn, the diet staple in Mexico and many other Latin American countries. The corn was field tested in trials financed by the US Department of Agriculture along with a small California biotech company named Epicyte. Announcing his success at a 2001 press conference, the president of Epicyte, Mitch Hein, pointing to his GMO corn plants, announced, "We have a hothouse filled with corn plants that make anti-sperm antibodies."14

Hein explained that they had taken antibodies from women with a rare condition known as immune infertility, isolated the genes that regulated the manufacture of those infertility antibodies, and, using genetic engineering techniques, had inserted the genes into ordinary corn seeds used to produce corn plants. In this manner, they had in reality produced a concealed contraceptive embedded in corn meant for human consumption. "Essentially, the antibodies are attracted to surface receptors on the sperm," said Hein. "They latch on and make each sperm so heavy it cannot move forward. It just shakes about as if it was doing the lambada."15 Hein claimed it was a possible solution to world over-population. The moral and ethical issues of feeding it, without their knowledge, to humans in poor Third World countries he left out of his remarks.

Spermicides hidden in GMO corn provided to starving Third World populations through the "generosity" of the Gates Foundation, the Rockefeller Foundation, and Kofi Annan's AGRA, or vaccines that contain undisclosed sterilization agents, are just two documented cases of using vaccines or GMO seeds to reduce population.

And the Good Club

Gates' TED2010 speech on zero emissions and population reduction is consistent with a report that appeared in New York City's ethnic media in May 2009. According to the report, a secret meeting took place on May 5, 2009 at the home of Sir Paul Nurse, President of Rockefeller University, among some of the wealthiest people in America. Investment guru Warren Buffett, who in 2006 decided to pool his $30 billion Buffett Foundation into the Gates Foundation to create the world's largest private foundation, with some $60 billion of tax-free dollars, was present. Banker David Rockefeller was the host.

The exclusive letter of invitation was signed by Gates, Rockefeller and Buffett. They decided to call themselves the "Good Club." Also present was media czar Ted Turner, billionaire founder of CNN, who stated in a 1996 interview for the Audubon nature magazine that a 95% reduction of world population, to between 225 and 300 million, would be "ideal." In a 2008 interview at Philadelphia's Temple University, Turner fine-tuned the number to 2 billion, a cut of more than 70% from today's population. Even less elegantly than Gates, Turner stated, "we have too many people. That's why we have global warming. We need less people using less stuff" (sic).16

Others reportedly attending this first meeting of the Good Club were real estate billionaire Eli Broad, New York's billionaire Mayor Michael Bloomberg, and Wall Street billionaire and former Council on Foreign Relations head Peter G. Peterson.

In addition, there was Julian H. Robertson, Jr., the hedge-fund billionaire who worked with Soros attacking the currencies of Thailand, Indonesia, South Korea and the other Asian Tiger economies, precipitating the 1997-98 Asia Crisis. Also present at the first session of the Good Club were Patty Stonesifer, former chief executive of the Gates Foundation, and John Morgridge of Cisco Systems. The group represented a combined fortune of more than $125 billion.17

According to reports apparently leaked by one of the attendees, the meeting was held in response to the global economic downturn and the numerous health and environmental crises that are plaguing the globe.

But the central theme and purpose of the secret Good Club meeting of the plutocrats was the priority concern posed by Bill Gates: how to advance more effectively their agenda of birth control and global population reduction. In the talks, a consensus reportedly emerged that they would back "a strategy in which population growth would be tackled as a potentially disastrous environmental, social and industrial threat."18

Global Eugenics agenda

Gates and Buffett are major funders of global population reduction programs, as is Turner, whose UN Foundation was created to funnel $1 billion of his tax-free stock option earnings in AOL-Time-Warner into various birth reduction programs in the developing world.19 The programs in Africa and elsewhere are masked as philanthropy and as providing health services for poor Africans. In reality they involve involuntary population sterilization via vaccination and other medicines that make women of child-bearing age infertile. The Gates Foundation, where Buffett deposited the bulk of his wealth two years ago, is also backing the introduction of GMO seeds into Africa under the cloak of the Kofi Annan-led Second Green Revolution in Africa. The introduction of GMO patented seeds in Africa has to date met with enormous indigenous resistance.

Health experts point out that were the intent of Gates really to improve the health and well-being of black Africans, the same hundreds of millions of dollars the Gates Foundation has invested in untested and unsafe vaccines could be used in providing minimal sanitary water and sewage systems. Vaccinating a child who then goes to drink feces-polluted river water is hardly healthy in any respect. But of course cleaning up the water and sewage systems of Africa would revolutionize the health conditions of the Continent.

Gates' TED2010 comments about having new vaccines "to reduce global population" were obviously no off-the-cuff remark. For those who doubt, the presentation Gates made at the TED2009 annual gathering said almost exactly the same thing about reducing population to cut global warming. For the mighty and powerful of the Good Club, human beings seem to be a form of pollution equal to CO2.


1 Bill Gates, Innovating to Zero!, speech to the TED2010 annual conference, Long Beach, California, February 18, 2010, accessed in

2 Bill Gates makes $10 billion vaccine pledge, London Telegraph, January 29, 2010, accessed in

3 Louise Voller, Kristian Villesen, WHO Donates Millions of Doses of Surplus Medical Supplies to Developing countries, Danish Information, 22 December 2009, accessed in

4 One is the Population Research Institute in Washington,

5 Louise Voller et al, op. cit.

6 Ibid.

7 Noted in Vaccinations and Autism, accessed in

8 F. William Engdahl, Seeds of Destruction: The Hidden Agenda of Genetic Manipulation, Global Research, Montreal, 2007, pp. 79-84.

9 James A. Miller, Are New Vaccines Laced With Birth-Control Drugs?, HLI Reports, Human Life International, Gaithersburg, Maryland; June-July 1995.

10 Cited in F. William Engdahl, Doomsday Seed Vault in the Arctic: Bill Gates, Rockefeller and the GMO giants know something we dont, Global Research, December 4, 2007, accessed in

11 Mariam Mayet, Africas Green Revolution rolls out the Gene Revolution, African Centre for Biosafety, ACB Briefing Paper No. 6/2009, Melville, South Africa, April 2009.

12 Ibid.

13 Nora Benachour and Gilles-Eric Seralini, Glyphosate Formulations Induce Apoptosis and Necrosis in Human Umbilical, Embryonic, and Placental Cells, Chemical Research in Toxicology Journal, American Chemical Society, 2009, 22 (1), pp. 97-105.

14 Robin McKie, GMO Corn Set to Stop Man Spreading His Seed, London, The Observer, 9 September 2001.

15 Ibid. McKie writes, The pregnancy prevention plants are the handiwork of the San Diego biotechnology company Epicyte, where researchers have discovered a rare class of human antibodies that attack spermthe company has created tiny horticultural factories that make contraceptivesEssentially, the antibodies are attracted to surface receptors on the sperm, said Hein. They latch on and make each sperm so heavy it cannot move forward. It just shakes about as if it was doing the lambada.

16 Ted Turner, cited along with YouTube video of Turner in Aaron Dykes, Ted Turner: World Needs a Voluntary One-Child Policy for the Next Hundred Years, Jones, April 29, 2008. Accessed in

17 John Harlow, Billionaire club in bid to curb overpopulation, London, The Sunday Times May 24, 2009. Accessed online in

18 Ibid.

19 United Nations Foundation, Women and Population Program, accessed in


Ayn Rand Institute – Wikipedia, the free encyclopedia

Jun 21, 2016

The Ayn Rand Institute (ARI) is a 501(c)(3) nonprofit think tank in Irvine, California that promotes Ayn Rand’s philosophy, Objectivism. It was established in 1985, three years after Rand’s death, by Leonard Peikoff, Rand’s legal heir. Its executive director is Yaron Brook.[2]

ARI’s stated goal is:

. . . to spearhead a cultural renaissance that will reverse the anti-reason, anti-individualism, anti-freedom, anti-capitalist trends in today's culture. The major battleground in this fight for reason and capitalism is the educational institutions: high schools and, above all, the universities, where students learn the ideas that shape their lives.[3]

ARI is mainly an educational organization, but also has “outreach programs.” Its various programs include classes on Objectivism and related subjects offered through its Objectivist Academic Center, public lectures, op-ed articles, letters to the editor, competitions for essays about Rand’s novels, materials for Objectivist campus clubs, supplying Rand’s writings to schools and professors, and providing intellectuals for radio and TV interviews.[4]

During her lifetime, Rand helped establish The Foundation for the New Intellectual to promote Objectivist ideas. The Foundation was dissolved some 15 years after her death, having been made redundant by the Ayn Rand Institute. Although Rand never intended for Objectivism to become an organized movement, she heartily approved of rational individuals with the same ideas working toward a common goal.[5] Peikoff, her legal heir, was convinced to start the organization after businessman Ed Snider organized a meeting of possible financial supporters in New York in the fall of 1983.[6] Peikoff also agreed to be the first chairman of the organization's board of directors.[7]

ARI began operations on February 1, 1985, three years after Rand's death. The first board of directors included Snider and psychologist Edith Packer. Snider was also one of the founding donors for the organization.[7] Its first executive director was Michael Berliner, who was previously the chairman of the Department of Social and Philosophical Foundations of Education at California State University, Northridge.[8] ARI also established a board of governors, which initially included Harry Binswanger, Robert Hessen, Edwin A. Locke, Arthur Mode, George Reisman, Jay Snider, and Mary Ann Sures, with Peter Schwartz as its chairman.[9] M. Northrup Buechner and George Walsh joined the board of advisors shortly thereafter.[10]

ARI’s first two projects were aimed at students. One was developing a network of college clubs to study Objectivism. The other was a college scholarship contest for high-school students based on writing an essay about Rand’s novel The Fountainhead.[10] Later, additional essay contests were added based on Anthem, We the Living and Atlas Shrugged.[11] In 1988 the institute began publishing a newsletter for contributors, called Impact.[12]

In 1989, a philosophical dispute resulted in ARI ending its association with philosopher David Kelley.[13] Board of advisors member George Walsh, who agreed with Kelley, also left.[14] Kelley subsequently founded his own competing institute now known as The Atlas Society, which remains critical of ARI’s stance on loyalty.[15]

In January 2000, Berliner retired as Executive Director, replaced by Yaron Brook, then an assistant professor of finance at Santa Clara University.[2] The institute was originally headquartered in Marina del Rey, California, but in 2002, it moved to larger offices in Irvine, California.[16]

Charity Navigator, which rates charitable and educational organizations to inform potential donors, gives ARI four out of four stars. According to the latest data from Charity Navigator, ARI spends 86.7% of its expenses on programs, 8.6% on fundraising, and 4.6% on administration.[17] As of June 2012 the institute's board of directors[18] consists of Brook; Berliner (co-chair); Arline Mann (co-chair), retired attorney, formerly of Goldman, Sachs & Co.; Carl Barney, CEO of several private colleges; Harry Binswanger, long-time associate of Ayn Rand; Peter LePort, a surgeon in private practice; Tara Smith, professor of philosophy at the University of Texas at Austin;[19] and John Allison, CEO of the Cato Institute and former CEO of BB&T.[20]

Peikoff retains a cooperative and influential relationship with ARI.[21] In 2006, he remarked that he approved of the work ARI has done[22] and in November 2010 that the executive director “has done a splendid job.”[23] Peikoff was a featured speaker at ARI summer conferences in 2007 and 2010.[24] In August, 2010, he demanded and received a change to ARI’s board of directors.[25]

ARI runs a variety of programs:

In 2008, ARI opened the Ayn Rand Center for Individual Rights (“ARC”) in Washington, D.C. to specialize in issues of public policy.[27]

During the current economic crisis, the ARC has been a vocal proponent of the position that government intervention is responsible for the crisis, and that the solution lies not in further government regulation but in moving toward full laissez-faire capitalism.[28][29]

On foreign policy, the ARC advocates American national self-interest, including ending the regimes that sponsor terrorism, rather than the Bush Administration’s policies which they see as timid, halfway measures that only weaken America’s position in the world.[30]

ARI sponsored writers and speakers have promoted a number of specific positions in contemporary political and social controversies.[31]

Since Objectivism advocates atheism, ARI promotes the separation of church and state, and its writers argue that the Religious Right poses a threat to individual rights.[32] They have argued against displaying religious symbols (such as the Ten Commandments) in government facilities[33] and against faith-based initiatives.[34] The institute argues that religion is incompatible with American ideals[35] and opposes the teaching of “intelligent design” in public schools.[36]

ARI has taken many controversial positions with respect to the Muslim world. They hold that the motivation for Islamic terrorism comes from religiosity, not poverty or a reaction to Western policies.[37] They have urged that the US use overwhelming, retaliatory force to “end states who sponsor terrorism”, using whatever means are necessary to end the threat.[38] In his article “End States Who Sponsor Terrorism”, which was published as a full-page ad in The New York Times, Peikoff wrote, “The choice today is mass death in the United States or mass death in the terrorist nations. Our Commander-In-Chief must decide whether it is his duty to save Americans or the governments who conspire to kill them.” Although some at ARI initially supported the invasion of Iraq, it has criticized how the Iraq War was handled.[39] Since October 2, 2001, the institute has held that Iran should be the primary target in the war against “Islamic totalitarianism”.[38]

ARI is generally supportive of Israel.[40] Of Zionism, executive director of the institute Yaron Brook writes: “Zionism fused a valid concern – self-preservation amid a storm of hostility – with a toxic premise – ethnically based collectivism and religion.”[41]

In response to the Muhammad cartoons controversy, ARI started a Free Speech Campaign in 2006.[42]

ARI is highly critical of environmentalism and animal rights, arguing that they are destructive of human well-being.[43][44]

The institute is also highly critical of diversity and affirmative action programs, as well as multiculturalism, arguing that they are based on racist premises that ignore the commonality of a shared humanity.[45][46]

ARI supports women’s right to choose abortion,[47] voluntary euthanasia, and assisted suicide.[48]

ARI denounces neoconservatism in general. For example, C. Bradley Thompson wrote an article entitled “The Decline and Fall of American Conservatism”,[49] which was later turned into the book (with Yaron Brook) Neoconservatism: An Obituary for an Idea.[50]


Convention on the High Seas – Wikipedia, the free encyclopedia

Jun 21, 2016

The Convention on the High Seas is an international treaty which codifies the rules of international law relating to the high seas, otherwise known as international waters.[1] The treaty was one of four treaties created at the United Nations Conference on the Law of the Sea (UNCLOS I).[2] The treaty was signed 29 April 1958 and entered into force 30 September 1962.[3] As of 2013, the treaty had been ratified by 63 states.[4]

The treaty is divided into 37 articles:

Article 1: Definition of “high seas”.

Article 2: Statement of principles

Article 3: Access to the sea for landlocked states

Articles 4–7: The concept of a Flag State

Article 8: Warships

Article 9: Other ships in government service

Articles 10–12: Safety, rescue

Article 13: Outlawing transport of slaves at sea

Articles 14–21: Piracy

Article 22: Boarding of merchant ships by warships

Article 23: Hot pursuit, that is, pursuit of a vessel across borders for the purposes of law enforcement

Articles 24–25: Pollution

Articles 26–29: Submarine cables and pipelines

Articles 30–37: Legal framework, ratification, accession


WW3 – World War Three in Detail, showing Start Date …

Jun 21, 2016

A Three World War scenario was developed several decades ago (see Conspiratorial History). Two World Wars have already been achieved, and the Third and final World War envisions an attack on Iraq, Iran and/or Syria as being the trigger to set the entire Middle East into fiery conflagration. Once America is firmly entrenched into the Middle East with the majority of her first-line units, North Korea is to attack South Korea. Then, with America’s forces stretched well beyond the limit, China is to invade Taiwan. This will usher in the start of World War Three.

What constitutes a ‘world war’? How many countries need to be involved? And who decides at which point a number of regional skirmishes can be grouped together and called a World War? At the time, who called the official start of World War 1 and World War 2?

And have you noticed that although the term ‘World War Three’ is freely used in the alternative press and on the Internet, all the major news networks have stoically avoided using any phrase reminiscent of World War?

Since it’s difficult to find a definition for an event which has only happened twice in modern history, here’s my attempt at an answer to the question ‘what constitutes a world war’?

A World War is a military conflict spanning more than 2 continents, in which at least 20 major countries participate in an attack against a common enemy, and which has the attention of the man-in-the-street due to the significant loss of life.

With that definition, we can agree that WW1 and WW2 were in fact World Wars (both wars involved some degree of participation from most of the world’s then-existing countries: Britain, France, Germany, Italy, Japan, the United States and the Soviet Union). We can also agree that we are very close to achieving World War 3. The only requirement left to fulfill the start of WW3 is that of a military conflict spanning more than 2 continents. As soon as Israel attacks Palestine, or North Korea attacks South Korea or the US, or China invades Taiwan, we will have the next World War well underway.

These are, I believe, the stages of the planned Third World War:

Both Biblical prophecy and the Illuminati plan state that Israel is the key. The Third World War is planned to begin when Israel goes to war against her Arab enemies. Then, and only then, will all the other elements begin to occur and they will do so in rapid succession. The plan is to have one disaster following another in such rapid succession that, before people can mentally and emotionally handle one disastrous news event, they will be hit with another. It is also accurate to say that until ALL of the elements for WW3 are in place, the plan will not commence.

While it would be naive to suggest a specific timeline for the events leading up to and including World War 3, we do know that the plans for World War 3 are well advanced, and our leaders involved in this secret plan are waiting only for the right signal before all-out war begins.

We are in the last stages of the preparation to so globalize the world that the Masonic New Age Christ (Antichrist) can appear to receive all the political and economic power of the world’s rulers. This is the Illuminati plan and Biblical prophecy (Revelation 17:12-17).

In the words of Peter Lemesurier, author of The Armageddon Script:

“Their script is now written, subject only to last-minute editing and stage-directions. The stage itself, albeit in darkness, is almost ready. Down in the pit, the subterranean orchestra is already tuning up. The last-minute, walk-on parts are even now being filled. Most of the main actors, one suspects, have already taken up their roles. Soon it will be time for them to come on stage, ready for the curtain to rise. The time for action will have come.”

Ladies and Gentlemen, please take your seats and welcome on stage the players of this Grand Play:

World War Three!

Intro | Prelude | Act I | Act II | Act III | Act IV | Act V | Act VI

For a detailed look at WW3 statistics, including the running cost of World War Three, the number of lives lost and the countries involved in World War Three, please see our World War Overview. Further details will be added as events dictate.

If you found this article interesting and want access to other carefully researched and well written articles, you might want to see what others are saying about the ThreeWorldWars newsletter.

Next: How the tragic events of 9/11 fit in with the planned World War 3.

Previous: The true cause of World War 2.

Top of Page


The History of Gambling – Complete Gambling History Timeline

Jun 19, 2016

The history of humanity is inextricably linked with the history of gambling: no matter how far back in time you go, there are signs that wherever groups of people gathered together, gambling was sure to have been taking place. We are not going to attempt to track every single twist and turn in the evolution of gambling in this article; instead, we will pick out some of the most important dates to act as milestones on the road to today’s gambling experience.



While it is almost certain that some forms of betting have been taking place since the dawn of human history, the earliest concrete evidence comes from Ancient China, where tiles were unearthed which appear to have been used for a rudimentary game of chance. The Chinese Book of Songs makes reference to “the drawing of wood”, which suggests that the tiles may have formed part of a lottery-type game. We also have evidence in the form of keno slips, used in about 200 BC as some sort of lottery to fund state works, possibly including construction of the Great Wall of China. Lotteries continued to be used for civic purposes throughout history – Harvard and Yale were both established using lottery funds – and they are still used that way to the present day.


The Greek poet Sophocles claimed that dice were invented by a mythological hero during the siege of Troy, and while this may have a somewhat dubious basis in fact, his writings around 500 BC were the first mention of dice in Greek history. We know that dice existed far earlier than this, since a pair had been uncovered in an Egyptian tomb from 3000 BC, but what is certain is that the Ancient Greeks and Romans loved to gamble on all manner of things, seemingly at any given opportunity. In fact, all forms of gambling, including dice games, were forbidden within the ancient city of Rome, and a penalty worth four times the stake being bet was imposed on those caught. As a result, ingenious Roman citizens invented the first gambling chips, so that if they were nabbed by the guards they could claim to be playing only for chips and not for real money. (Note that this ruse will not work if attempted at a Vegas casino.)



Most scholars agree that the first playing cards appeared in China in the 9th century, although the exact rules of the games they were used for have been lost to history. Some suggest that the cards were both the game and the stake, like trading card games played by children today, while other sources believe the first packs of cards to have been paper forms of Chinese domino. Certainly the cards used at this time bore very little relation to the standard 52 card decks we know today.


The earliest game still played in casinos today is the two player card game of Baccarat, a version of which was first mentioned as long ago as the 1400s when it migrated from Italy to France. Despite its early genesis, it took hundreds of years and various evolutions to arrive at the game we know today. Although different incarnations of the game have come and gone, the standard version played in casinos all over the world came from Cuba via Britain to the US, with a few alterations to the rules along the way. Although baccarat is effectively more of a spectator sport than a game, it is a feature of just about every casino due to its popularity with high rolling gamblers.



Some suggest that the earliest forms of blackjack came from a Spanish game called veintiuna (21), as this game appeared in a book written by the author of Don Quixote in 1601. Or was it the game of trente-un (31) from 1570? Or even quinze (15) from France decades earlier? As with all of these origin stories, the inventors of games of chance were rarely noted in the historical annals. The French game of vingt-et-un in the seventeenth century is certainly a direct forefather of the modern game, and this is the game that arrived in the US along with early settlers from France. The name blackjack was an American innovation, linked to special promotions in Nevada casinos in the 1930s. To attract extra customers, 10 to 1 odds were paid out if the player won with a black Jack of Clubs or Spades together with an Ace of Spades. The special odds didn’t last long, but the name is still with us today.


The earliest gambling houses which could reasonably be compared to casinos started to appear in the early 17th century in Italy. For example, in 1638, the Ridotto was established in Venice to provide a controlled gambling environment amidst the chaos of the annual carnival season. Casinos started to spring up all over continental Europe during the 19th century, while at the same time in the US much more informal gambling houses were in vogue. In fact steam boats taking prosperous farmers and traders up and down the Mississippi provided the venue for a lot of informal gambling stateside. Now when we think of casinos we tend to picture the Las Vegas Strip, which grew out of the ashes of the Depression in America.



Roulette as we know it today originated in the gaming houses of Paris, where players would have been familiar with the wheel we now refer to (ironically enough) as the American Roulette wheel. It took another 50 years until the European version came along with just one green zero, and generations of roulette players can be grateful for that. During the course of the 19th century roulette grew in popularity, and when the famous Monte Carlo casino adopted the single zero form of the game this spread throughout Europe and most of the world, although the Americans stuck to the original double zero wheels.
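The arithmetic behind that gratitude is easy to check: a straight-up bet pays 35 to 1 on either wheel, so every extra zero pocket goes straight to the house. A minimal sketch of the calculation (the `house_edge` helper is my own illustration, not from the original article):

```python
# House edge on a straight-up (single-number) roulette bet.
# A European wheel has 37 pockets (0 and 1-36); an American wheel has
# 38 (0, 00, and 1-36). Both pay 35 to 1 on a winning number.

def house_edge(pockets: int, payout: int = 35) -> float:
    """Player's expected loss per unit staked on a straight-up bet."""
    win_prob = 1 / pockets
    # Player EV: win `payout` units with prob 1/pockets, lose 1 otherwise.
    ev = win_prob * payout - (1 - win_prob)
    return -ev  # the house edge is the negative of the player's EV

european = house_edge(37)  # single green zero
american = house_edge(38)  # zero and double zero

print(f"European wheel: {european:.2%}")  # about 2.70%
print(f"American wheel: {american:.2%}")  # about 5.26%
```

The single zero roughly halves the casino’s take on every bet, which is why the Monte Carlo version spread so quickly once it appeared.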


It’s hard to pin down the precise origin of poker – as with a lot of the games mentioned here, poker seems to have grown organically over decades and possibly centuries from various different card games. Some have poker’s antecedents coming from seventeenth-century Persia, while others say that the game we know today was inspired by a French game called Poque. What we do know for sure is that an English actor by the name of Joseph Crowell reported that a recognizable form of the game was being played in New Orleans in 1829, so that is as good a date as any for the birth of poker. The growth of the game’s popularity was fairly sluggish up until world poker tournaments started being played in Vegas in the 1970s. However, poker really exploded with the advent of online poker and televised events allowing spectators to see the players’ hands. When amateur player Chris Moneymaker qualified through online play and went on to win the 2003 world poker championship, it allowed everyone to picture themselves as online poker millionaires.



The first gambling machine resembling the slots we know today was developed by Messrs Sittman and Pitt in New York; it used 52 cards on drum reels to make a sort of poker game. Around the same time, the Liberty Bell machine was invented by Charles Fey in San Francisco. This machine proved much more practical in the sense that winnings could be precisely regulated, and it marked the beginning of the real slot game revolution. The fact that some new video slot games still feature bell symbols dates back to this early invention. While early machines spewed out cigars and gum instead of money, the money-dispensing versions soon became a staple in bars and casinos around the globe, and when the first video slot was invented in 1976 this paved the way for the online video slots which were to follow.



The United States has always had an up-and-down relationship with gambling, dating back to when the very first European settlers arrived. Whereas Puritan bands of settlers banned gambling outright in their new settlements, those emigrating from England had a more lenient view of gambling and were more than happy to tolerate it. This dichotomous relationship has continued until now, and in 1910 public pressure led to a nationwide prohibition on gambling. Just like the alcohol prohibition of the same era, this proved somewhat difficult to enforce, and gambling continued on in an only slightly discreet manner. The Wall Street Crash and the Great Depression that it spawned in the early 1930s led to gambling being legalized again, as for many this was the only prospect of alleviating grinding poverty. Although gambling is legal in a number of states today – most famously in Las Vegas, Nevada – online gambling is still something of a grey area in the United States. Right now, many international internet casinos are unable to accept American clients, although the signs are that this will change in the near future.


Microgaming is one of the largest casino and slot game developers in the world today, and they are also considered to be pioneers of online gambling. The leap into the world of virtual casinos was taken all the way back in 1994, which in internet terms is kind of like 2300 BC! Online gaming was worth over a billion dollars within 5 years, and today is a multibillion-dollar industry with over a thousand online casinos and growing. The first live dealer casinos appeared in 2003 courtesy of Playtech, bringing us closer to a hybrid between brick-and-mortar casinos and the virtual world.



Since New Jersey legalized online gambling in 2011, there has been a boom in the interest people have in it. America has seen a move towards legalizing it state by state, as well as experiencing the rapid rise in mobile gambling. Across the globe, internet users are gradually veering away from their desktops and towards their handheld devices. This is true of online gamblers too, wanting to be able to enjoy their favorite games whilst on the go. The top gambling sites out there have recognized a market and have stepped up to deliver. With a wave of impressive mobile focused online gambling destinations taking the world by storm, it’s safe to say that desktops are being left far behind in favour of more mobile alternatives.

What Comes Next?

It is just about as difficult to predict the future for gambling as it is to uncover some of the origins of the gambling games we know so well today. Much of the focus at the moment is on the mobile gaming market, with online casinos scrambling to make more content compatible with the latest handheld devices. Virtual reality technology is just taking its first steps as a commercial proposition, and you can be sure that there will be gambling applications down the road. How would you like to sit around a virtual poker table with a bunch of your friends from all over the world, share a few laughs, and see if you can spot a tell-tale facial tic – all from the comfort of your home? VR headsets can make it happen – maybe not today, but certainly just a few years down the track if technology continues to advance in leaps and bounds.

And after that? Well who knows, but when it comes to gambling all things are possible.


Ascension of Jesus – Wikipedia, the free encyclopedia

Jun 19, 2016

The Ascension of Jesus (anglicized from the Vulgate Latin Acts 1:9-11 section title: Ascensio Iesu) is the Christian teaching found in the New Testament that the resurrected Jesus was taken up to Heaven in his resurrected body, in the presence of eleven of his apostles, occurring 40 days after the resurrection. In the biblical narrative, an angel tells the watching disciples that Jesus’ second coming will take place in the same manner as his ascension.[1]

The canonical gospels include two brief descriptions of the ascension of Jesus in Luke 24:50-53 and Mark 16:19. A more detailed account of Jesus’ bodily Ascension into the clouds is then given in the Acts of the Apostles (1:9-11).

The ascension of Jesus is professed in the Nicene Creed and in the Apostles’ Creed. The ascension implies Jesus’ humanity being taken into Heaven.[2] The Feast of the Ascension, celebrated on the 40th day of Easter (always a Thursday), is one of the chief feasts of the Christian year.[2] The feast dates back at least to the later 4th century, as is widely attested.[2] The ascension is one of the five major milestones in the gospel narrative of the life of Jesus, the others being baptism, transfiguration, crucifixion, and resurrection.[3][4]

By the 6th century the iconography of the ascension in Christian art had been established and by the 9th century ascension scenes were being depicted on domes of churches.[5][6] Many ascension scenes have two parts, an upper (Heavenly) part and a lower (earthly) part.[7] The ascending Jesus is often shown blessing with his right hand directed towards the earthly group below him and signifying that he is blessing the entire Church.[8]

The canonical gospels include two somewhat brief descriptions of the Ascension of Jesus in Luke 24:50-53 and Mark 16:19.[9][10][11]

In the Gospel of Mark 16:14, after the resurrection, Jesus “was manifested unto the eleven themselves as they sat at meat; …”. At the meal, Jesus said to them, “Go ye into all the world, and preach the gospel to the whole creation.” (Mark 16:15) Following this the Ascension is described in Mark 16:19 as follows:[9]

However, based on strong textual and literary evidence, biblical scholars no longer accept Mark 16:9-20 as original to the book.[12] Rather, this section appears to have been compiled based on other gospel accounts and appended at a much later time. As such, the writer of Luke-Acts is the only original author in the New Testament to have referred to the ascension of Jesus.

In Luke, Jesus leads the eleven disciples to Bethany, not far from Jerusalem. Luke 24:50-52 describes the Ascension as follows:[9][10]

The blessing is often interpreted as a priestly act in which Jesus leaves his disciples in the care of God the Father.[10] The return to Jerusalem after the Ascension ends the Gospel of Luke where it began: Jerusalem.[11]

The narrative of the Acts of the Apostles begins with the account of Jesus’ appearances after his resurrection and his Ascension forty days thereafter in Acts 1:9-11.[10][11] Acts 1:9-12 specifies the location of the Ascension as the “mount called Olivet” near Jerusalem.

Acts 1:3 states that Jesus:

After giving a number of instructions to the apostles Acts 1:9 describes the Ascension as follows:

Following this two men clothed in white appear and tell the apostles that Jesus will return in the same manner as he was taken, and the apostles return to Jerusalem.[11]

A number of statements in the New Testament may be interpreted as references to the Ascension.[13]

Acts 1:9-12 states that the Ascension took place on Mount Olivet (the “Mount of Olives”, on which the village of Bethany sits). After the Ascension the apostles are described as returning to Jerusalem from the mount that is called Olivet, which is near Jerusalem, within a Sabbath day’s journey. Tradition has consecrated this site as the Mount of Ascension. The Gospel of Luke states that the event took place ‘in the vicinity of Bethany’ and the Gospel of Mark specifies no location.

Before the conversion of Constantine in 312 AD, early Christians honored the Ascension of Christ in a cave on the Mount of Olives. By 384, the place of the Ascension was venerated on the present open site, uphill from the cave.[16]

The Chapel of the Ascension in Jerusalem today is a Christian and Muslim holy site now believed to mark the place where Jesus ascended into heaven. In the small round church/mosque is a stone imprinted with what some claim to be the very footprints of Jesus.[16]

Around the year 390 a wealthy Roman woman named Poimenia financed construction of the original church called “Eleona Basilica” (elaion in Greek means “olive garden”, from elaia “olive tree”, and has an oft-mentioned similarity to eleos meaning “mercy”). This church was destroyed by Sassanid Persians in 614. It was subsequently rebuilt, destroyed, and rebuilt again by the Crusaders. This final church was later also destroyed by Muslims, leaving only a 12×12 meter octagonal structure (called a martyrium, “memorial”, or “Edicule”) that remains to this day.[17] The site was ultimately acquired by two emissaries of Saladin in the year 1198 and has remained in the possession of the Islamic Waqf of Jerusalem ever since. The Russian Orthodox Church also maintains a Convent of the Ascension on the top of the Mount of Olives.

The Ascension of Jesus is professed in the Nicene Creed and in the Apostles’ Creed. The Ascension implies Jesus’ humanity being taken into Heaven.[2]

The Catechism of the Catholic Church (Item 668) states:[18]

Referring to Mark 16:19 (“So then the Lord Jesus, after he had spoken unto them, was received up into heaven, and sat down at the right hand of God.”), Pope John Paul II stated that Scripture positions the significance of the Ascension in two statements: “Jesus gave instructions, and then Jesus took his place.”[19]

John Paul II also separately emphasized that Jesus had foretold of his Ascension several times in the Gospels, e.g. John 16:10 at the Last Supper: “I go to the Father, and you will see me no more” and John 20:17 after his resurrection he tells Mary Magdalene: “I have not yet ascended to the Father; go to my brethren and say to them, I am ascending to my Father and your Father, to my God and your God”.[20]

In Orthodox, Oriental non-Chalcedonian, and Assyrian theology, the Ascension of Christ is interpreted as the culmination of the Mystery of the Incarnation, in that it not only marked the completion of Jesus’ physical presence among his apostles, but consummated the union of God and man when Jesus ascended in his glorified human body to sit at the right hand of God the Father. The Ascension and the Transfiguration both figure prominently in the Orthodox Christian doctrine of theosis. In the Chalcedonian Churches, the bodily Ascension into heaven is also understood as the final earthly token of Christ’s two natures: divine and human.[21]

The Westminster Confession of Faith (part of the Reformed tradition in Calvinism and influential in the Presbyterian church), in Article four of Chapter eight, states: “On the third day He arose from the dead, with the same body in which He suffered, with which also he ascended into heaven, and there sits at the right hand of His Father, making intercession, and shall return, to judge men and angels, at the end of the world.”[22]

The Second Helvetic Confession addresses the purpose and character of Christ’s ascension in Chapter 11:[23]

New Testament scholar Rudolf Bultmann writes, “The cosmology of the N.T. is essentially mythical in character. The world is viewed as a three-storied structure, with the Earth in the center, the heaven above, and the underworld beneath. Heaven is the abode of God and of celestial beings – angels… No one who is old enough to think for himself supposes that God lives in a local heaven.”[24]

The Jesus Seminar considers the New Testament accounts of Jesus’ ascension as inventions of the Christian community in the Apostolic Age.[25] They describe the Ascension as a convenient device to discredit ongoing appearance claims within the Christian community.[25]

The Feast of the Ascension is one of the great feasts in the Christian liturgical calendar, and commemorates the bodily Ascension of Jesus into Heaven. Ascension Day is traditionally celebrated on a Thursday, the fortieth day from Easter day. However, some Roman Catholic provinces have moved the observance to the following Sunday. The feast is one of the ecumenical feasts (i.e., universally celebrated), ranking with the feasts of the Passion, of Easter, and Pentecost.

The Ascension has been a frequent subject in Christian art, as well as a theme in theological writings.[6] By the 6th century the iconography of the Ascension had been established and by the 9th century Ascension scenes were being depicted on domes of churches.[5][26] The Rabbula Gospels (c. 586) include some of the earliest images of the Ascension.[26]

Many ascension scenes have two parts, an upper (Heavenly) part and a lower (earthly) part. The ascending Christ may be carrying a resurrection banner or make a sign of benediction with his right hand.[7] The blessing gesture by Christ with his right hand is directed towards the earthly group below him and signifies that he is blessing the entire Church.[8] In the left hand, he may be holding a Gospel or a scroll, signifying teaching and preaching.[8]

The Eastern Orthodox portrayal of the Ascension is a major metaphor for the mystical nature of the Church.[27] In many Eastern icons the Virgin Mary is placed at the center of the scene in the earthly part of the depiction, with her hands raised towards Heaven, often accompanied by various Apostles.[27] The upwards looking depiction of the earthly group matches the Eastern liturgy on the Feast of the Ascension: “Come, let us rise and turn our eyes and thoughts high…”[8]

The 2016 film, Risen, depicts Jesus’ ascension in a more understated tone. The film depicts Jesus giving his final address to his disciples while in front of the Sun as it rises on daybreak, and rather than himself physically ascending, Jesus turns and walks into the glare of the Sun and disappears into its light as the Sun itself ascends into the sky.


Entheogens: What’s in a Name? The Untold History of …

Jun 19, 2016

Articles in this Series:

1) R. Gordon Wasson: The Man, the Legend, the Myth. Beginning a New History of Magic Mushrooms, Ethnomycology, and the Psychedelic Revolution. By Jan Irvin, May 13, 2012

2) How Darwin, Huxley, and the Esalen Institute launched the 2012 and psychedelic revolutions and began one of the largest mind control operations in history. Some brief notes. By Jan Irvin, August 28, 2012

3) Manufacturing the Deadhead: A Product of Social Engineering, by Joe Atwill and Jan Irvin, May 13, 2013

4) Entheogens: What’s in a Name? The Untold History of Psychedelic Spirituality, Social Control, and the CIA, by Jan Irvin, November 11, 2014

5) Spies in Academic Clothing: The Untold History of MKULTRA and the Counterculture, and How the Intelligence Community Misleads the 99%, by Jan Irvin, May 13, 2015

PDF version: Download latest version v3.5 – Nov. 20, 2014

Computer generated Text-Aloud audio version:

Youtube computer generated version with onscreen citations:

Français: (Full text translated to French)

Français (French) translation PDF:

Today there are many names for drug substances that we commonly refer to as hallucinogens, psychedelics, psychoactives, or entheogens, et al. But it hasn't always been that way. The study of the history and etymology of the words for these fascinating substances takes us, surprisingly, right into the heart of military intelligence, and what became the CIA's infamous MKULTRA mind control program, and reveals how the names themselves were used in marketing these substances to the public, and especially to the youth and countercultures.[1]

The official history has it that the CIA personnel involved in MKULTRA were just dupes, kind of stupid, and that, by their egregious errors, the psychedelic revolution happened in spite of their efforts. The claim is that these substances got out of the CIA's control. Words like "blowback" and "incompetence" are often tossed around in such theories regarding the CIA and military intelligence, but without much, if any, supporting evidence.

It's almost impossible today to have a discussion regarding the actual documents and facts of MKULTRA and the psychedelic revolution without someone interrupting to inform you how it "really" happened – even though, most often, they have never studied anything on the subject.

As we get started, I would like to propose that we question this idea of blowback: who benefits from the belief that it was all an accident and that the CIA and military intelligence were just dupes? Does it benefit you, or them? It might be uncomfortable for a moment for some of us to admit that maybe they (the agents) weren't so stupid, and maybe we were the ones duped. Sometimes the best medicine is to just admit "hey, you got me" and laugh it off. For those of you who've heard these blowback theories and haven't considered the possibility that the CIA created these movements intentionally, this article may be challenging, but stick with it, as it will be worth your while.

Now we're ready. Because, defenses aside, a more honest, and less biased, inquiry into the history and facts reveals, startlingly, something quite different from the popular myths. This paper reveals, for the first time, how the opposite of the official history is true, and that the CIA did, in fact, create the psychedelic revolution and countercultures intentionally.

As I'll show in this article, the goal had changed, and they wanted a name that would help sell these substances to the masses as sources of spiritual enlightenment rather than insanity. In their book The Psychedelic Experience: A Manual Based on the Tibetan Book of the Dead, we see doctors Timothy Leary, Ralph Metzner, and Richard Alpert explain:

Of course, the drug dose does not produce the transcendent experience. It merely acts as a chemical key – it opens the mind, frees the nervous system of its ordinary patterns and structures. The nature of the experience depends almost entirely on set and setting. Set denotes the preparation of the individual, including his personality structure and his mood at the time. Setting is physical – the weather, the room’s atmosphere; social – feelings of persons present towards one another; and cultural – prevailing views as to what is real. It is for this reason that manuals or guide-books are necessary. Their purpose is to enable a person to understand the new realities of the expanded consciousness, to serve as road maps for new interior territories which modern science has made accessible.[2] ~ Timothy Leary, Ralph Metzner, Richard Alpert

But what was the purpose of all of this? They state: "The nature of the experience depends almost entirely on set and setting." As we'll discover on this etymological trip, it was all about marketing – the CIA's marketing – regarding set and setting. Sound like a wacky conspiracy theory yet? As we'll soon discover, it's not. The CIA's MKULTRA program was very real, was exposed before Congress in the Rockefeller and Church Commissions, and was all over the news media in the 1970s. But that was 40 years ago and this is now. So why should we care? Because much of the program wasn't revealed in the 1970s and persists to the present, and it affected just about everyone. It wasn't limited to just a few thousand victims of the CIA's secret human experiments. There were actually many more victims – millions more. You may have been one of them.

As we'll see, this idea that the psychedelic revolution and counterculture were intentionally created affects most of us: the youth caught up in drug use, the parents, the anti-war movement, those involved in the psychedelic revolution or in politics, as well as artists, people who use these substances for spirituality, or even anyone who's ever spoken the word "psychedelic." It affects us because, as we'll see, that's what it was meant to do.

In the early years of research into these drugs, psychology researchers and military intelligence communities sometimes called them, aside from "hallucinogen," by the name "psychotomimetic," which means "psychosis mimicking." The word "hallucinogen," to generate hallucinations, came just a few years before "psychotomimetic." The same year that "psychotomimetic" was created we also saw the creation of the word "psychedelic," which means "to manifest the mind." The last stage of this etymological evolution, as we'll see, was the word "entheogen," which means "to generate god within." We'll return to "hallucinogen" and these other words in the course of our journey.

While these words may have told what these substances do in the intelligence community's collective understanding, accurate or not, they are loaded with implications. Suggestibility, otherwise known as set and setting, is one of them. The study of the history of these words, their etymology, reveals how MKULTRA researchers covered up – and kept covered up, until now that is – this aspect of the MKULTRA mind control program.

In the 1950s most CIA candidates and agents were required to take psychedelic or hallucinogenic drugs to prepare them for chemical and biological warfare attack. This requirement didn't turn the agency into hippies. As this article will show, that end result was created by the marketing and PR people the Agency later hired.

19 November 1953


The Medical Office commented also on the draft memorandum to DCI from Director of Security, subject: Project Experimental Project Utilizing Trainee Volunteers; to the effect that it was recommended the program not be confined merely to male volunteer trainee personnel but that the field of selection be broadened to include all components of the Agency and recommended that the subject memorandum be changed as appropriate to the broadening of such scope. The Project committee verbally concurred in this recommendation. […][3] ~ CIA MKULTRA files

As Jay Stevens, author of Storming Heaven, reveals in the following quote, suggestibility plays a large part in the way psychedelic drugs work.

To drive someone crazy with LSD was no great accomplishment, particularly if you told the person he was taking a psychotomimetic and you gave it to him in one of those pastel hospital cells with a grim nurse standing by scribbling notes.[4] ~Jay Stevens

Psychotomimetic (psychosis mimicking) is a word loaded with implications, suggestibility being the most important.

This is something that Aldous Huxley, Dr. Timothy Leary, R. Gordon Wasson and others made clear in their books and articles. In order to suggest what the creators of the psychedelic revolution wanted, they had to pay particular attention to the name(s) used for these substances.

What’s in a name? … Answer, practically everything.[5] ~ Aldous Huxley

However, for marketing and PR purposes, the word psychotomimetic was abandoned, or remarketed, not long after it was created in 1957.

But why is all of this important?

As Huxley just admitted above: What’s in a name? … Answer, practically everything.

Insanity, or "psychosis mimicking," or even "generating hallucinations," aren't attractive terms and don't work well for marketing purposes, or for the outcome of the psychedelic – or, more importantly, the entheogenic – experience.

Though this may sound implausible at first, the purpose of making these substances more attractive was to intentionally sell them, and not just to patients in hospital wards and to those in a chair with their therapists, but, especially, to the youth and countercultures of the world – a nefarious purpose indeed. Here Leary reflects on Arthur Koestler's work regarding juvenilization:

From Koestler I learned about juvenilization, the theory that evolution occurs not in the adult (final form) of a species but in juveniles, larvals, adolescents, pre-adults. The practical conclusion: if you want to bring about mutations in a species, work with the young. Koestler's teaching about paedomorphosis prepared me to understand the genetic implications of the 1960s youth movement and its rejection of the old culture.[6] ~ Timothy Leary

The understanding of suggestibility, or set and setting, including the name given these substances, is everything in how psychedelics work and were studied (and used) by the CIA for social control.

What could the name be replaced with? This was the problem set before those interested in remarketing these substances to the youth, counterculture and artists around the world. When discussing how to market these drugs with Humphry Osmond, Aldous Huxley remarked:

About a name for these drugs – what a problem![7] ~ Aldous Huxley

Over a couple of decades this project would be undertaken by two different teams: the first comprised Aldous Huxley, Humphry Osmond and Abram Hoffer; the second, headed by Professor Carl A. P. Ruck of Boston University, included R. Gordon Wasson, Jonathan Ott, Jeremy Bigwood and Daniel Staples.

Some of us formed a committee under the Chairmanship of Carl Ruck to devise a new word for the potions that held Antiquity in awe. After trying out a number of words he came up with entheogen, "god generated within," which his committee unanimously adopted […].[8] ~ Gordon Wasson

And though they defend them, Martin Lee and Bruce Shlain reveal some of these remarketing tactics in Acid Dreams:

The scientist who directly oversaw this research project was Dr. Paul Hoch, an early advocate of the theory that LSD and other hallucinogens were essentially psychosis-producing drugs. In succeeding years Hoch performed a number of bizarre experiments for the army while also serving as a CIA consultant. Intraspinal injections of mescaline and LSD were administered to psychiatric patients, causing an “immediate, massive, and almost shocklike picture with higher doses.”

Aftereffects (“generalized discomfort,” “withdrawal,” “oddness,” and “unreality feelings”) lingered for two to three days following the injections. Hoch, who later became New York State Commissioner for Mental Hygiene, also gave LSD to psychiatric patients and then lobotomized them in order to compare the effects of acid before and after psychosurgery. (“It is possible that a certain amount of brain damage is of therapeutic value,” Hoch once stated.) In one experiment a hallucinogen was administered along with a local anesthetic and the subject was told to describe his visual experiences as surgeons removed chunks of his cerebral cortex.[9] ~ Martin Lee and Bruce Shlain

In the following quote the authors reveal their bias in the situation, arguing for the spiritual aspects, while in the same book denying the psychosis aspects and that the psychedelic revolution was intentionally created by the CIA:

Many other researchers, however, dismissed transcendental insight as either "happy psychosis" or a lot of nonsense. The knee-jerk reaction on the part of the psychotomimetic stalwarts was indicative of a deeply ingrained prejudice against certain varieties of experience. In advanced industrial societies "paranormal" states of consciousness are readily disparaged as "abnormal" or pathological. Such attitudes, cultural as much as professional, played a crucial role in circumscribing the horizon of scientific investigation into hallucinogenic agents.[10] ~ Martin Lee and Bruce Shlain

Here Lee and Shlain resort to name calling and ridicule – for example, referring to "psychotomimetic stalwarts" and "deeply ingrained prejudice" – as the foundation of their argument, rather than looking at the evidence itself; which sounds ironic in a book about the CIA using these same substances for mind control. And who were these "psychotomimetic stalwarts"? Was it only Dr. Hoch? As we'll see, Lee and Shlain seem to also be referring to Aldous Huxley, Humphry Osmond, Albert Hofmann and Sasha Shulgin.

Lee and Shlain, while partially exposing MKULTRA, then promote the idea that the psychotomimetic theory was invalid. They continue:

Despite widespread acknowledgment that the model psychosis concept had outlived its usefulness, the psychiatric orientation articulated by those of Dr. Hoch's persuasion prevailed in the end. When it came time to lay down their hand, the medical establishment and the media both "mimicked" the line that for years had been secretly promoted by the CIA and the military – that hallucinogenic drugs were extremely dangerous because they drove people insane, and all this talk about creativity and personal growth was just a lot of hocus pocus. This perception of LSD governed the major policy decisions enacted by the FDA and the drug control apparatus in the years ahead.[11] [emphasis added] ~ Marty Lee and Bruce Shlain

Here we see the idea that the psychosis concept "had outlived its usefulness." What does that mean exactly? It's an ambiguous statement. Most assume it to mean that the substances didn't actually create psychosis. But is that true? What if, instead, due to the above-mentioned suggestibility factor and set and setting, they decided to remarket these drugs as spiritual rather than psychotic? If we entertain this idea, we realize it could take just a new name to change not only everything about the outcome of the experience, but how quickly the youth and counterculture would adopt them. We'll expand on this idea throughout this article.

On a side note, it should probably be mentioned that it was actually Timothy Leary and Arthur Kleps who went (along with Walter Bowart and Allen Ginsberg) before Congress in 1966 recommending regulation. You can't have a good youthful rebellion with legal substances!

Senator Dodd. Don’t you think that the drug needs to be put under control and restriction?

Dr. LEARY. Pardon, sir.

Senator Dodd. Let me rephrase my question. Don't you feel that LSD should be put under some control, or restriction as to its use?

Dr. LEARY. Yes, sir.

Senator Dodd. As to its sale, its possession, and its use?

Dr. LEARY. I definitely do. In the first place, I think that the 1965 Drug Control Act, which this committee, I understand, sponsored, is the high water mark in such legislation.

Dr. Leary. Yes, sir. I agree completely with your bill, the 1965 Drug Control Act. I think this is—

Senator Dodd. That the Federal Government and the State governments ought to control it?

Dr. Leary. Exactly. I am in 100 percent agreement with the 1965 drug control bill.

Senator Kennedy of Massachusetts. So there shouldn't be—

Dr. Leary. I wish the States, I might add, would follow the wisdom of this committee and the Senate and Congress of the United States and follow your lead with exactly that kind of legislation.

Senator Kennedy of Massachusetts. So there should not be indiscriminate distribution of this drug should there?

Dr. Leary. I have never suggested that, sir. I have never urged anyone to take LSD. I have always deplored indiscriminate or unprepared use.[12]

As the University of Richmond website relates:

Leary was one of many experts who testified at the 1966 subcommittee hearings, which showed both ardent support and uncompromising opposition to LSD. […] Just several months after the subcommittee hearings, LSD was banned in California. By October 1968, possession of LSD was banned federally in the United States with the passage of the Staggers-Dodd Bill, marking a tremendous step towards the War On Drugs campaign that would arise in the 1970s.[13]

But who within the CIA had promoted this term "psychotomimetic"?

For a moment, let's turn to the Oxford English Dictionary, where, under the definition of "psychotomimetic," it states:

psychotomimetic, a. and n.

A. adj. Having an effect on the mind orig. likened to that of a psychotic state, with abnormal changes in thought, perception, and mood and a subjective feeling of an expansion of consciousness; of or pertaining to a drug with this effect.[14]

Under the quotations in the OED for psychotomimetic, we further see that R. W. Gerard is listed for 1955, and the second entry for 1957 is from Dr. Humphry Osmond:

1956 R. W. Gerard in Neuropharmacology: Trans. 2nd Conf., 1955 132 Let us at least agree to speak of so-called psychoses when we are dealing with them in animals. Along that same line, I have liked a term which I have been using lately – psychosomimetic – for these agents instead of schizophrenogenic. 1957 Neuropharmacology: Trans. 3rd Conf., 1956 205 (heading) Effects of psychosomimetic drugs in animals and man. 1957 H. Osmond in Ann. N.Y. Acad. Sci. LXVI. 417 The designation psychotomimetic agents for those drugs that mimic some of the mental aberrations that occur in the psychoses had been suggested by Ralph Gerard and seemed especially appropriate.[15] [emphasis added]

If we read the OED entry carefully, what we see above is that Gerard actually used the term psychosomimetic, with an "s," rather than psychotomimetic, with a "t." In fact, it appears from the OED that it was Osmond himself who was first to begin using the term psychotomimetic, which was also adopted by the CIA and military for their purposes. This same Osmond, as we'll soon discover, just months later created the name psychedelic for these substances. Notice that Osmond states: "The designation psychotomimetic agents […] seemed especially appropriate." That Osmond created the word psychotomimetic is a fact that Lee and Shlain seem to want to avoid.

In another interesting quote in the OED from 1970, we see none other than Sasha Shulgin referring to ibogaine as a psychotomimetic:

1970 A. T. Shulgin in D. H. Efron Psychotomimetic Drugs 25 Ibogaine… is another example in the family of psychotomimetics, with complex structures and no resemblance to known metabolic materials.[16]

Was this a slip by authors Lee and Shlain revealing that Osmond and Shulgin were CIA?

It is true, in fact, that both worked for the government. While Shulgin worked for the DEA, he was also a member of the infamous Bohemian Club[17]; and as we'll see below, Osmond is revealed in the CIA's MKULTRA documents.[18] But let's not get ahead of ourselves. We'll come back to this shortly.

In 1954, pre-dating the OED's reference to Huxley's close friend Humphry Osmond, in The Doors of Perception Huxley stated:

Most takers of mescalin [sic] experience only the heavenly part of schizophrenia. The drug brings hell and purgatory only to those who have had a recent case of jaundice, or who suffer from periodical depressions or chronic anxiety.[19] ~ Aldous Huxley

He continued:

The schizophrenic is a soul not merely unregenerate, but desperately sick into the bargain. His sickness consists in the inability to take refuge from inner and outer reality (as the sane person habitually does) in the homemade universe of common sensethe strictly human world of useful notions, shared symbols and socially acceptable conventions. The schizophrenic is like a man permanently under the influence of mescaline[20] ~ Aldous Huxley

In Heaven and Hell Huxley went on:

Many schizophrenics have their times of heavenly happiness; but the fact that (unlike the mascalin [sic] taker) they do not know when, if ever, they will be permitted to return to the reassuring banality of everyday experience causes even heaven to seem appalling.[21] ~ Aldous Huxley

In their letters, Aldous Huxley and Humphry Osmond were very concerned over what to call these substances, but why should the public have cared what these two people wanted to call them? They were still mostly secret at this time and hardly anyone knew about them except through marketing efforts and publications. Furthermore, why were Huxley and Osmond so concerned, and why would it be a problem, unless there were an ulterior motive?

The issue here is a Bernaysian/Koestler-type marketing strategy. With a word like psychotomimetic these substances would never have taken hold in the youth and countercultures. It was fine for underground LSD and other studies by the intelligence community, but for the new purpose, they'd need a new name. From Huxley's letters in a book titled Moksha, we find:

740 North Kings Road, Los Angeles 46, Cal. 30 March, 1956

Dear Humphry,

Thank you for your letter, which I shall answer only briefly, since I look forward to talking to you at length in New York before very long. About a name for these drugs – what a problem! I have looked into Liddell and Scott and find that there is a verb phaneroein, “to make visible or manifest,” and an adjective phaneros, meaning “manifest, open to sight, evident.” The word is used in botany – phanerogam as opposed to cryptogam. Psychodetic (4) is something I don’t quite get the hang of it. Is it an analogue of geodetic, geodesy? If so, it would mean mind-dividing, as geodesy means earth-dividing, from ge and daiein. Could you call these drugs psychophans? or phaneropsychic drugs? Or what about phanerothymes? Thymos means soul, in its primary usage, and is the equivalent of Latin animus. The word is euphonious and easy to pronounce; besides it has relatives in the jargon of psychology-e.g. cyclothyme. On the whole I think this is better than psychophan or phaneropsychic. []

Yours, Aldous

“To make this trivial world sublime,

Take half a gram of phanerothyme.”

(4) Osmond had mentioned psychedelics, as a new name for mind-changing drugs to replace the term psychotomimetics. Huxley apparently misread the word as “psychodetics,” hence his mystification. Osmond replied: “To fathom Hell or soar angelic, Just take a pinch of psychedelic.”

Huxley still did not get the spelling, which he made psychodelic [Smith’s note]. Huxley invariably uses psychodelic for psychedelic, as he and others thought the latter term incorrect. Huxley’s spelling has been retained, as this was undoubtedly his preference. However, it fails one criterion of Osmond, which is that the term be “uncontaminated by other associations.”[22] [emphasis added]

Why was it important to meet the criterion for the new word to be "uncontaminated by other associations"? They don't say, but we can surmise that it's because of this remarketing strategy – they needed to be careful with the term chosen. The word psychodelic contains "psycho," and "psycho" carries negative associations. This explains why psychedelic is the only word in the English language to use "psyche" rather than "psycho" – the criterion it failed was complete avoidance of any name that could imply a negative experience. Lee and Shlain in Acid Dreams give their version of the story thus:

The two men had been close friends ever since Huxley’s initial mescaline experience, and they carried on a lively correspondence. At first Huxley proposed the word phanerothyme, which derived from roots relating to “spirit” or “soul.” A letter to Osmond included the following couplet:

To make this trivial world sublime,

Take half a Gramme of phanerothyme.

To which Osmond responded:

To fathom hell or soar angelic

Just take a pinch of psychedelic.

And so it came to pass that the word psychedelic was coined. Osmond introduced it to the psychiatric establishment in 1957. Addressing a meeting of the New York Academy of Sciences, he argued that hallucinogenic drugs did “much more” than mimic psychosis, and therefore an appropriate name must “include concepts of enriching the mind and enlarging the vision.” He suggested a neutral term to replace psychotomimetic, and his choice was certainly vague enough. Literally translated, psychedelic means “mind-manifesting,” implying that drugs of this category do not produce a predictable sequence of events but bring to the fore whatever is latent within the unconscious. Accordingly Osmond recognized that LSD could be a valuable tool for psychotherapy. This notion represented a marked departure from the military-medical paradigm, which held that every LSD experience was automatically an experimental psychosis.[23] ~ Marty Lee & Bruce Shlain

It's ironic that they claimed the term psychedelic, for "mind manifesting," is neutral. A more appropriate word to describe it would be ambiguous. But notice that it's gone from mimicking psychosis to manifesting the mind. And just months earlier Osmond was promoting the word "psychotomimetic," which he said "seemed especially appropriate." Here Lee and Shlain admit that Albert Hofmann was involved with this public relations scheme:

Read the rest here:

Entheogens: What's in a Name? The Untold History of …


Talk:Transhumanism/Archive 2 – Wikipedia, the free …

Jun 19, 2016

Split long article

Might this longish entry be better presented as a series of pages? JasonS 03:34 Jan 13, 2003 (UTC)

Dnagod 20:56, 9 Feb 2005 (UTC)

In the interest of ensuring transhumanism is NPOV: who decides what the definition of transhumanism is?

This element of humanism – is that from Huxley or someone else?

Does the man who invented the word, Julian Huxley, decide the definition of transhumanism? Does one in modern times who publicly states the definition decide, or does the World Transhumanist Association decide?

I would like clarity as to who ultimately determines what transhumanism means, because the definition used by the WTA and other groups differs. More importantly, what gives one the authority to define, in an undisputed way, what transhumanism is, so that other POVs can be excluded?

For instance I have reviewed the entire, and site, and I can't seem to figure out how you could label it as disputed in the links section?

What is to say the world transhumanism association isn't disputed?

I can see how one might label cosmotheism as white racial separatist, but I would like more discussion as to why it is disputed as a transhumanism group. And why is Cosmotheism a disputed offshoot? Cosmotheism was developed in the 1960's and 1970's, which came before Extropy and the WTA, so why is it an offshoot? I thought "offshoot" meant that something existed and a branch or seed came off that plant. Can you please define "offshoot" and explain who decides what is or is not transhumanism?

More on this humanism element of transhumanism – is that from Huxley or someone else? Thanks.

Why does the link to cosmotheism keep getting deleted? Just because that article had a banned user associated with it doesn't make it any less relevant. Sam [Spade] 20:56, 4 Aug 2004 (UTC)

I’d like to incorporate a mention of the Human Cognome Project into this article, as it is relevent to human brain augmentation and AI research. Any suggestions? — Dave User:Sydhart

Why is, and labelled pseudotranshuman organizations? To me that represents bias; why would those web sites be labelled pseudo, and what makes a web site pseudo?

On the front page of it states the following

(Prometheism is) The First Sovereign Transtopian & Neo-Eugenic Libertarian Religious-State.

In the principles section of prometheism it states:

Our Promethean Species embraces Conscious Evolution

Our immediate aim is to create a neo-eugenically enhanced race that will eventually become a new, superior species with whatever scientific means are available at the present time. In the short-term, this will be achieved via neo-eugenics, ie. voluntary positive eugenics, human cloning, germ-line engineering, gene therapy and genetic engineering.

In the long-term, when the science becomes available we intend to utilize transhuman technologies: nanotechnology, mind uploading, A/I and other variations of ultra exo-tech.

Our goal is to enable total and unlimited self-transformation, consciousness and expansion across the universe of our species.

It also states – note the key words – Transhuman Technologies… and the embracing of transhumanism and extropy.

We Define neo-eugenics as conscious evolution (these words are interchangeable). Purposefully directed evolution via voluntary positive neo-eugenics (including voluntary selective breeding), cloning, genetic engineering and ultimately any and all transhuman technologies. Neo-Eugenics means harnessing all science, technology and knowledge available now or in the future, guiding it with spirituality, ethical considerations and higher consciousness, ultimately towards achieving total and unlimited self transformation. The term Neo-Eugenics embodies the sciences and philosophies involved in Biotechnology, Extropy and Transhumanism all merged in a philosophy of spiritual Conscious Evolution.

I believe removing prometheism from this page will be cause to bring this issue to arbitration, to confirm that the individual who keeps removing it is obviously biased and lacks an understanding of what transhumanism is. NPOV – that's your problem, Brian: NPOV and blatant bias.

Dnagod 22:22, 7 Feb 2005 (UTC)

Extropy and a lot of the other sites listed under manifestos are linked elsewhere in the article, so I felt it important to also include these manifestos.

Please do not resort to childish insults and a biased personal agenda by removing these links; they belong there and represent Principles which I dare say are some of the most interesting, fascinating and creative principles.

Don’t abuse your privileges here and force your agenda on this topic of transhumanism, all perspectives are welcome here whether you like it or not.

Dnagod 17:26, 8 Feb 2005 (UTC)

What makes you think transtopianism ( is not secular?

STOP removing these links, you are biased, emotional, unfair, unbalanced and lacking in neutrality.

These links are to stay, and you have no right to remove them. They are valid and legit links, Do not abuse your privileges on this project or you will be revoked.

Dnagod 02:55, 9 Feb 2005 (UTC)

The man who invented the word Transhumanism (Huxley) was an open, avid and published advocate of state sponsored coercive eugenics, selective breeding, and elitist eugenic communities. Therefore you are wrong, and thus the specific issue of VOLUNTARY eugenics does NOT violate, in any way, shape or form, being part of transhumanism. You are wrong, biased, unfair, unbalanced, and lacking in neutrality. and DO NOT SUPPORT COERCIVE EUGENICS in their PRINCIPLES, THEY SUPPORT VOLUNTARY – EUGENICS – READ VOLUNTARY. Forgive the capitalization, but I do that for emphasis, not to scream.

please stop removing these links, you are biased, emotional, unfair, unbalanced and lacking in neutrality. These are not personal attacks, these are stated facts that you have not read the web site.

These links are to stay, and you have no right to remove them. They are valid and legit links, Do not abuse your privileges on this project.

I ask you to bring arbitration and discussion on this fact. Your censorship, bias and personal agenda will not win. Go to right now and find one place on this site that says prometheism supports COERCIVE EUGENICS. You will not find it anywhere. clearly states that it only supports voluntary eugenics. Read the sworn oath on

The Sworn Oath of Prometheism (front page of

We Prometheans are voluntarily coming together to purposefully direct the creation of a new post-human species. A species with higher intellect, creativity, consciousness and love of one's people. A communion of intellect and beauty, for the simple reason that it can be done. This creation is what gives us purpose and meaning. No other justification is required for this program to advance our Promethean species.

Next I want you to read the Principles of prometheism

2. Our Promethean Species embraces Conscious Evolution

Our immediate aim is to create a neo-eugenically enhanced race that will eventually become a new, superior species with whatever scientific means are available at the present time. In the short-term, this will be achieved via neo-eugenics, i.e. voluntary positive eugenics, human cloning, germ-line engineering, gene therapy and genetic engineering.

5. Total Freedom, Liberty and Self-Determination

Our Libertarian religious nation is founded on the principles of total freedom of speech (including offensive language and language which hurts people's feelings), freedom of thought, the right to bear arms, liberty, progress, productivity and the pursuit of individual happiness.

nation is VOLUNTARY ONLY. We REJECT all totalitarianism and believe COERCIVE neo-eugenics is counter to the ideal of individual freedom. The promethean government's sole purpose is to protect the rights of the individual. We DO NOT wish to STERILIZE anyone or FORCE anyone to practice neo-eugenics.

DNA or genetic capital is the most valuable commodity in the universe. Our primary goal is to promote positive and voluntary neo-eugenics by channeling national resources to the best, brightest and most creative.

We define neo-eugenics as conscious evolution (these words are interchangeable). Purposefully directed evolution via voluntary positive neo-eugenics (including voluntary selective breeding), cloning, genetic engineering and ultimately any and all transhuman technologies. Neo-Eugenics means harnessing all science, technology and knowledge available now or in the future, guiding it with spirituality, ethical considerations and higher consciousness, ultimately towards achieving total and unlimited self-transformation. The term Neo-Eugenics embodies the sciences and philosophies involved in Biotechnology, Extropy and Transhumanism all merged in a philosophy of spiritual Conscious Evolution.

This is from the principles of Last Updated: 3/13/03. This means that prometheism is NOT FRINGE; it does not support the fringe philosophy of FORCED COERCIVE EUGENICS. Again, the capitalization is not screaming, it's meant to provide emphasis. Also, my comments about you not being very knowledgeable about and are not meant as personal insults or personal attacks, but as an observation.

Dnagod 20:06, 9 Feb 2005 (UTC)

Read more:

Talk:Transhumanism/Archive 2 – Wikipedia, the free …


Yeah, About That Second Amendment

Jun 19, 2016

Source: Jim Jesus /

The Second Amendment of the United States Constitution reads: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”

While there have been countless debates, tests and judgments that have defined and re-defined how to interpret this amendment, the current prevailing interpretation and belief in America is that individual gun ownership is a constitutional right. As a result, America has seen a steady and consistent stream of deregulation around gun ownership, even as mass shootings appear to be on the rise. As progressives get increasingly concerned about the gun culture in America, as a tactic, they try to make their case by comparing gun ownership to other safety-related, common-sense laws:

While certainly humorous in making a practical point, this tweet burn completely misses the larger point: people don’t have a constitutional right to buy Sudafed. You simply cannot compare a constitutional right to anything else not on the fundamental-rights playing field.

This lack of focus on the constitutional argument is where progressives have lost their way. They have been so focused on the practical utility of public policy that they end up losing the larger fights that define America. Constitutional interpretation lends itself to a more strategic (and philosophical) debate platform than arguing the facts and stats on how laws can and should protect people. Constitutional theory is the debate platform that conservatives have been playing on for decades while progressives get frustrated and lose ground.

The remarkable irony is that the wording and intent of the Second Amendment are actually on progressives’ side. In fact, the Second Amendment is a progressive’s dream: the third word in the amendment is “regulated,” for heaven’s sake.

No matter the interpretation of every other word and phrase after the first three, the entire context of the amendment is that it will be a regulated right. Through this lens, the Second Amendment is barely even comparable to the First Amendment in terms of the rights it enables. There is simply no language in the First Amendment that regulates the right to free speech… and yet we still regulate speech despite the unassailable strength of the First Amendment’s constitutional language.

The upshot? Even in today’s hardcore gun rights environment and culture, the Constitution itself provides the guidance — and mandate — to not just regulate militia (i.e., groups of people) and arms, but to regulate them well.

How our culture defines “well” can and will certainly evolve over time, but we shouldn’t let gun rights ideologues and arms industry special interests continue to convince the public that they’re the only ones who have the Constitution on their side in this debate.

Yes, current Supreme Court interpretation is that every citizen has the right to bear arms. But it’s also constitutionally mandated that we regulate these armed people (i.e., militia) and their arms well. Seeing as the right to bear arms has been implemented pretty effectively in America, perhaps now it’s time to start implementing regulation well too, as the Constitution also mandates.

Editor’s note: On 6/18, I revised the article to include “people (i.e., militia)” as well as arms, because I originally mistakenly linked regulation only to arms, not to the people who have the right to own them.


Gambling Addiction and Problem Gambling –

Jun 17, 2016

Warning Signs and How to Get Help for Your Gambling Problem

Whether you bet on sports, scratch cards, roulette, poker, or slots (in a casino, at the track, or online), problem gambling can strain relationships, interfere with work, and lead to financial catastrophe. You may even do things you never thought you would, like stealing money to gamble or pay debts. You may think you can't stop, but with the right help, you can overcome a gambling problem or addiction and regain control of your life.

Gambling addiction, also known as compulsive gambling or gambling disorder, is an impulse-control disorder. Compulsive gamblers can't control the impulse to gamble, even when it is hurting them or their loved ones.

Gamblers can have a problem without being totally out of control. Problem gambling is any gambling behavior that disrupts your life. If you're preoccupied with gambling, spending more and more time and money on it, or gambling despite serious consequences, you have a gambling problem.

MYTH: You have to gamble every day to be a problem gambler. FACT: A problem gambler may gamble frequently or infrequently. Gambling is a problem if it causes problems.

MYTH: Problem gambling is not really a problem if the gambler can afford it. FACT: Problems caused by excessive gambling are not just financial. Too much time spent on gambling can also lead to relationship and legal problems, job loss, mental health problems including depression and anxiety, and even suicide.

MYTH: Partners of problem gamblers often drive problem gamblers to gamble. FACT: Problem gamblers often rationalize their behavior. Blaming others is one way to avoid taking responsibility for their actions, including what is needed to overcome the problem.

MYTH: If a problem gambler builds up a debt, you should help them take care of it. FACT: Quick fix solutions may appear to be the right thing to do. However, bailing the gambler out of debt may actually make matters worse by enabling gambling problems to continue.

Gambling addiction is sometimes referred to as a “hidden illness” because there are no obvious physical signs or symptoms like there are in drug or alcohol addiction. You may have a gambling problem if you:

Take this test to find out.

The biggest step to overcoming a gambling addiction is realizing that you have a problem. It takes tremendous strength and courage to own up to this, especially if you have lost a lot of money and strained or broken relationships along the way. But many others have been in your shoes and have been able to break the habit. You can too.

Staying in recovery (making a permanent commitment to stop gambling) is possible if you:

One way to stop gambling is to remove the elements necessary for gambling to occur in your life and replace them with healthier choices. The four elements needed for gambling to continue are:

Maintaining recovery from gambling addiction or problem gambling depends a lot on finding alternative behaviors you can substitute for gambling. Some examples include:

To provide excitement or get a rush of adrenaline: a sport or a challenging hobby, such as mountain biking, rock climbing, or go-kart racing.

To be more social, or to overcome shyness or isolation: counseling, a public speaking class, a social group, connecting with family and friends, volunteering, finding new friends.

To numb unpleasant feelings or avoid thinking about problems: therapy, or Helpguide's Emotional Intelligence toolkit.

To relieve boredom or loneliness: something you're passionate about, such as art, music, sports, or books, and then finding others with the same interests.

To relax after a stressful day: as little as 15 minutes of daily exercise can relieve stress, as can deep breathing, meditation, or massage.

To solve money problems: the odds are always stacked against you, so it's far better to seek help with debts from a credit counselor.

Feeling the urge to gamble is normal, but as you build healthier choices and a strong support network, resisting cravings will become easier. When a gambling craving strikes:

If you aren't able to resist the gambling craving, don't be too hard on yourself or use it as an excuse to give up. Overcoming a gambling addiction is a tough process. You may slip from time to time; the important thing is to learn from your mistakes and continue working towards recovery.

Seeking professional help or seeing a therapist does not mean you are weak or can't handle your problems. Therapy can give you tools and support for coping with your addiction that will last a lifetime. Problem gambling can sometimes be a symptom of bipolar disorder, so your doctor or therapist may need to rule out this disorder before making a diagnosis.

While compulsive gamblers need the support of their family and friends to stop gambling, it's common for loved ones to have conflicting emotions. You may have tried to cover up for the gambler or spent a lot of time trying to keep him or her from gambling. At the same time, you might be furious at your loved one for gambling again and tired of trying to keep up the charade.

When gamblers feel hopeless, the risk of suicide is high. It's very important to take any thoughts or talk of suicide seriously. Call the National Suicide Prevention Lifeline at 1-800-273-8255 or, for a suicide helpline outside the U.S., visit Befrienders Worldwide.



Source: Dept. of Mental Health & Addiction Services



The National Council on Problem Gambling Helpline – Offers a confidential, 24-hour helpline for problem gamblers or their family members in the U.S. Call 1-800-522-4700. (NCPG)

Gamblers Anonymous – A twelve-step program and international support network of meetings to assist people who have a gambling problem. (Gamblers Anonymous)

Gamcare – Offers support, information, and advice for those with a gambling problem in the UK. Call the helpline 0845 6000 133. (Gamcare)

Gambling Help Online – Provides a 24-hour helpline in Australia for counseling, information, and referrals. Call 1800 858 858. (Gambling Help Online)

Canadian Resources for Those Affected by Problem Gambling – Find help and information on problem gambling in your area of Canada. (Centre for Addiction and Mental Health)

What Is Problem Gambling? – Learn about the gambling continuum and the key differences between recreational gambling and problem gambling. (British Columbia Responsible & Problem Gambling Program)

Do I Need Help?: Helpful Questions for Self-evaluation – Includes questions for self-evaluation, as well as questions for family members who suspect a gambling problem. (Connecticut Department of Mental Health and Addiction Services)

Gamblers Self-Assessment – Online questionnaire to help gamblers determine if they have a problem or a gambling addiction. (California Department of Public Health)

Your First Step to Change: Gambling – Self-change toolkit that helps problem gamblers learn about their addiction and take steps to overcome it. (The Division on Addictions, Cambridge Health Alliance and Harvard Medical School)

The Four Steps – Although the article is written for Obsessive-Compulsive Disorder, it outlines in more detail the four steps used in a variant of cognitive behavioral therapy and how you can apply them to change thought processes and control impulses. (Westwood Institute of Anxiety Disorders, commercial site)

Freedom from Problem Gambling (PDF) – Self-help workbook for compulsive gamblers, with tips on how to avoid relapse and fight gambling urges. (UCLA Gambling Studies Program and California Department of Public Health)

Choosing a Treatment Facility – Learn what treatments are appropriate for problem gambling and what questions you should ask when looking at facilities. (National Council on Problem Gambling)

Problem Gamblers and their Finances (PDF) – In-depth guide for treatment professionals on how to help a problem gambler cope with financial problems and pressures. (National Endowment for Financial Education)

Help for Family, Friends, Employers, and Co-Workers – Learn how gambling addiction affects family and friends and what you can do to address the problem. (Connecticut Department of Mental Health and Addiction Services)

Personal Financial Strategies for the Loved Ones of Problem Gamblers (PDF) – Designed to help families deal with personal financial issues due to a loved one’s problem gambling. (National Council on Problem Gambling)

Information for Families – Explore resources for family members of problem gamblers. Includes a downloadable PDF guide for families. (Centre for Addiction and Mental Health)

Gam-Anon – Twelve-step program for the problem gambler's spouse, family members, or close friends. (Gam-Anon International Service Office, Inc.)

Authors: Jeanne Segal, Ph.D., Melinda Smith, M.A., and Lawrence Robinson. Last updated: April 2016.


Entheogens & Existential Intelligence: The Use of Plant …

Jun 17, 2016

Used with permission. The official published version:

Painting by Yvonne McGillivray

In light of recent specific liberalizations in drug laws in some countries, this article investigates the potential of entheogens (i.e. psychoactive plants used as spiritual sacraments) as tools to facilitate existential intelligence. Plant teachers from the Americas such as ayahuasca, psilocybin mushrooms, peyote, and the Indo-Aryan soma of Eurasia are examples of both past- and presently-used entheogens. These have all been revered as spiritual or cognitive tools to provide a richer cosmological understanding of the world for both human individuals and cultures. I use Howard Gardner's (1999a) revised multiple intelligence theory and his postulation of an existential intelligence as a theoretical lens through which to account for the cognitive possibilities of entheogens and explore potential ramifications for education.

In this article I assess and further develop the possibility of an existential intelligence as postulated by Howard Gardner (1999a). Moreover, I entertain the possibility that some kinds of psychoactive substances (entheogens) have the potential to facilitate this kind of intelligence. This issue arises from the recent liberalization of drug laws in several Western industrialized countries to allow for the sacramental use of ayahuasca, a psychoactive tea brewed from plants indigenous to the Amazon. I challenge readers to step outside a long-standing dominant paradigm in modern Western culture that a priori regards hallucinogenic drug use as necessarily maleficent and devoid of any merit. I intend for my discussion to confront assumptions about drugs that have unjustly perpetuated the disparagement and prohibition of some kinds of psychoactive substance use. More broadly, I intend for it to challenge assumptions about intelligence that constrain contemporary educational thought.

Entheogen is a word coined by scholars proposing to replace the term psychedelic (Ruck, Bigwood, Staples, Ott, & Wasson, 1979), which was felt to overly connote psychological and clinical paradigms and to be too socio-culturally loaded from its 1960s roots to appropriately designate the revered plants and substances used in traditional rituals. I use both terms in this article: entheogen when referring to a substance used as a spiritual or sacramental tool, and psychedelic when referring to one used for any number of purposes during or following the so-called psychedelic era of the 1960s (recognizing that some contemporary non-indigenous uses may be entheogenic; the categories are by no means clearly discrete). What kinds of plants or chemicals fall into the category of entheogen is a matter of debate, as a large number of inebriants, from coca and marijuana to alcohol and opium, have been venerated as gifts from the gods (or God) in different cultures at different times. For the purposes of this article, however, I focus on the class of drugs that Lewin (1924/1997) termed phantastica, a name deriving from the Greek word for the faculty of imagination (Shorter Oxford English Dictionary, 1973). Later these substances became known as hallucinogens or psychedelics, a class whose members include lysergic acid derivatives, psilocybin, mescaline and dimethyltryptamine. With the exception of mescaline, these all share similar chemical structures; all, including mescaline, produce similar phenomenological effects; and, more importantly for the present discussion, all have a history of ritual use as psychospiritual medicines or, as I argue, cultural tools to facilitate cognition (Schultes & Hofmann, 1992).

The issue of entheogen use in modern Western culture becomes more significant in light of several legal precedents in countries such as Brazil, Holland, Spain and soon perhaps the United States and Canada. Ayahuasca, which I discuss in more detail in the following section on plant teachers, was legalized for religious use by non-indigenous people in Brazil in 1987 [i]. One Brazilian group, the Santo Daime, was using its sacrament in ceremonies in the Netherlands when, in the autumn of 1999, authorities intervened and arrested its leaders. This was the first case of religious intolerance by a Dutch government in over three hundred years. A subsequent legal challenge, based on European Union religious freedom laws, saw them acquitted of all charges, setting a precedent for the rest of Europe (Adelaars, 2001). A similar case in Spain resulted in the Spanish government granting the right to use ayahuasca in that country. A recent court decision in the United States by the 10th Circuit Court of Appeals, September 4th, 2003, ruled in favour of religious freedom to use ayahuasca (Center for Cognitive Liberty and Ethics, 2003). And in Canada, an application to Health Canada and the Department of Justice for exemption to the Controlled Drugs and Substances Act is pending, which may permit the Santo Daime Church the religious use of their sacrament, known as Daime or Santo Daime [ii] (J.W. Rochester, personal communication, October 8th, 2003).

One of the questions raised by this trend of liberalization in otherwise prohibitionist regulatory regimes is what benefits substances such as ayahuasca have. The discussion that follows takes up this question with respect to contemporary psychological theories about intelligence and touches on potential ramifications for education. The next section examines the metaphor of plant teachers, which is not uncommon among cultures that have traditionally practiced the entheogenic use of plants. Following that, I use Howard Gardner's theory of multiple intelligences (1983) as a theoretical framework with which to account for cognitive implications of entheogen use. Finally, I take up a discussion of the possible relevance of existential intelligence and entheogens to education.

Before moving on to a broader discussion of intelligence(s), I will provide some background on ayahuasca and entheogens. Ayahuasca has been a revered plant teacher among dozens of South American indigenous peoples for centuries, if not longer (Luna, 1984; Schultes & Hofmann, 1992). The word ayahuasca is from the Quechua language of indigenous peoples of Ecuador and Peru, and translates as vine of the soul (Metzner, 1999). Typically, it refers to a tea made from a jungle liana, Banisteriopsis caapi, with admixtures of other plants, but most commonly the leaves of a plant from the coffee family, Psychotria viridis (McKenna, 1999). These two plants respectively contain harmala alkaloids and dimethyltryptamine, two substances that when ingested orally create a biochemical synergy capable of producing profound alterations in consciousness (Grob, et al., 1996; McKenna, Towers & Abbot, 1984). Among the indigenous peoples of the Amazon, ayahuasca is one of the most valuable medicinal and sacramental plants in their pharmacopoeias. Although shamans in different tribes use the tea for various purposes, and have varying recipes for it, the application of ayahuasca as an effective tool to attain understanding and wisdom is one of the most prevalent (Brown, 1986; Dobkin de Rios, 1984).

Notwithstanding the explosion of popular interest in psychoactive drugs during the 1960s, ayahuasca until quite recently managed to remain relatively obscure in Western culture [iii]. However, the late 20th century saw the growth of religious movements among non-indigenous people in Brazil syncretizing the use of ayahuasca with Christian symbolism, African spiritualism, and native ritual. Two of the more widespread ayahuasca churches are the Santo Daime (Santo Daime, 2004) and the União do Vegetal (União do Vegetal, 2004). These organizations have in the past few decades gained legitimacy as valid, indeed valuable, spiritual practices providing social, psychological and spiritual benefits (Grob, 1999; Riba, et al., 2001).

Ayahuasca is not the only plant teacher in the pantheon of entheogenic tools. Other indigenous peoples of the Americas have used psilocybin mushrooms for millennia for spiritual and healing purposes (Dobkin de Rios, 1973; Wasson, 1980). Similarly, the peyote cactus has a long history of use by Mexican indigenous groups (Fikes, 1996; Myerhoff, 1974; Stewart, 1987), and is currently widely used in the United States by the Native American Church (LaBarre, 1989; Smith & Snake, 1996). And even in the early history of Western culture, the ancient Indo-Aryan texts of the Rig Veda sing the praises of the deified Soma (Pande, 1984). Although the taxonomic identity of Soma is lost, it seems to have been a plant or mushroom and had the power to reliably induce mystical experiences: an entheogen par excellence (Eliade, 1978; Wasson, 1968). The variety of entheogens extends far beyond the limited examples I have offered here. However, ayahuasca, psilocybin mushrooms, peyote and Soma are exemplars of plants which have been culturally esteemed for their psychological and spiritual impacts on both individuals and communities.

In this article I argue that the importance of entheogens lies in their role as tools, as mediators between mind and environment. Defining a psychoactive drug as a tool (perhaps a novel concept for some) invokes its capacity to effect a purposeful change on the mind/body. Commenting on Vygotsky's notions of psychological tools, John-Steiner and Souberman (1978) note that tool use has ". . . important effects upon internal and functional relationships within the human brain" (p. 133). Although they were likely not thinking of drugs as tools, the significance of this observation becomes even more literal when the tools in question are plants or chemicals ingested with the intent of affecting consciousness through the manipulation of brain chemistry. Indeed, psychoactive plants or chemicals seem to defy the traditional bifurcation between physical and psychological tools, as they affect the mind/body (understood by modern psychologists to be identical).

It is important to consider the degree to which the potential of entheogens comes not only from their immediate neuropsychological effects, but also from the social practices (rituals) into which their use has traditionally been incorporated (Dobkin de Rios, 1996; Smith, 2000). The protective value that ritual provides for entheogen use is evident from its universal application in traditional practices (Weil, 1972/1986). Medical evidence suggests that there are minimal physiological risks associated with psychedelic drugs (Callaway, et al., 1999; Grinspoon & Bakalar, 1979/1998; Julien, 1998). Albert Hofmann (1980), the chemist who first accidentally synthesized and ingested LSD, contends that the psychological risks associated with psychedelics in modern Western culture are a function of their recreational use in unsafe circumstances. A ritual context, however, offers psychospiritual safeguards that make the potential of entheogenic plant teachers to enhance cognition an intriguing possibility.

Howard Gardner (1983) developed a theory of multiple intelligences that originally postulated seven types of intelligence [iv]. Since then, he has added a naturalist intelligence and entertained the possibility of a spiritual intelligence (1999a; 1999b). Not wanting to delve too far into territory fraught with theological pitfalls, Gardner (1999a) settled on looking at existential intelligence rather than spiritual intelligence (p. 123). Existential intelligence, as Gardner characterizes it, involves having a heightened capacity to appreciate and attend to the cosmological enigmas that define the human condition, an exceptional awareness of the metaphysical, ontological and epistemological mysteries that have been a perennial concern for people of all cultures (1999a).

In his original formulation of the theory, Gardner challenges (narrow) mainstream definitions of intelligence with a broader one that sees intelligence as the ability to solve problems or to fashion products that are valued in at least one culture or community (1999a, p. 113). He lays out eight criteria, or signs, that he argues should be used to identify an intelligence; however, he notes that these do not constitute necessary conditions for determining an intelligence, merely desiderata that a candidate intelligence should meet (1983, p. 62). He also admits that none of his original seven intelligences fulfilled all the criteria, although each met a majority of the eight. For existential intelligence, Gardner himself identifies six criteria that it seems to meet; I will look at each of these and discuss their merits in relation to entheogens.

One criterion applicable to existential intelligence is the identification of a neural substrate to which the intelligence may correlate. Gardner (1999a) notes that recent neuropsychological evidence supports the hypothesis that the brain's temporal lobe plays a key role in producing mystical states of consciousness and spiritual awareness (pp. 124-5; LaPlante, 1993; Newberg, D'Aquili & Rause, 2001). He also recognizes that certain brain centres and neural transmitters are mobilized in [altered consciousness] states, whether they are induced by the ingestion of substances or by a control of the will (Gardner, 1999a, p. 125). Another possibility, which Gardner does not explore, is that endogenous dimethyltryptamine (DMT) in humans may play a significant role in the production of spontaneous or induced altered states of consciousness (Pert, 2001). DMT is a powerful entheogenic substance that exists naturally in the mammalian brain (Barker, Monti & Christian, 1981), as well as being a common constituent of ayahuasca and the Amazonian snuff, yopo (Ott, 1994). Furthermore, DMT is a close analogue of the neurotransmitter 5-hydroxytryptamine, or serotonin. It has been known for decades that the primary neuropharmacological action of psychedelics is on serotonin systems, and serotonin is now understood to be correlated with healthy modes of consciousness.

One psychiatric researcher has recently hypothesized that endogenous DMT stimulates the pineal gland to create such spontaneous psychedelic states as near-death experiences (Strassman, 2001). Whether this is correct or not, the role of DMT in the brain is an area of empirical research that deserves much more attention, especially insofar as it may contribute to an evidential foundation for existential intelligence.

Another criterion for an intelligence is the existence of individuals of exceptional ability within the domain of that intelligence. Unfortunately, existential precocity is not sufficiently valued in modern Western culture for savants in this domain to be commonly celebrated today. Gardner (1999a) observes that within Tibetan Buddhism, the choosing of lamas may involve the detection of a predisposition to existential intellect (if it is not identifying the reincarnation of a previous lama, as Tibetan Buddhists themselves believe) (p. 124). Gardner also cites Czikszentmilhalyi's consideration of the early-emerging concerns for cosmic issues of the sort reported in the childhoods of future religious leaders like Gandhi and of several future physicists (Gardner, 1999a, p. 124; Czikszentmilhalyi, 1996). Presumably, some individuals who are enjoined to enter a monastery or nunnery at a young age may be so directed due to an appreciable manifestation of existential awareness. Likewise, individuals from indigenous cultures who take up shamanic practice, those with abilities beyond others to dream, to imagine, to enter states of trance (Larsen, 1976, p. 9), often do so because of a significant interest in cosmological concerns at a young age, which could be construed as a prodigious capacity in the domain of existential intelligence [v] (Eliade, 1964; Greeley, 1974; Halifax, 1979).

The third criterion for determining an intelligence that Gardner suggests is an identifiable set of core operational abilities that manifest that intelligence. Gardner finds this relatively unproblematic and articulates the core operations for existential intelligence as:

the capacity to locate oneself with respect to the farthest reaches of the cosmos (the infinite no less than the infinitesimal) and the related capacity to locate oneself with respect to the most existential aspects of the human condition: the significance of life, the meaning of death, the ultimate fate of the physical and psychological worlds, such profound experiences as love of another human being or total immersion in a work of art. (1999a, p. 123)

Gardner notes that as with other more readily accepted types of intelligence, there is no specific truth that one would attain with existential intelligence; for example, just as musical intelligence does not have to manifest itself in any specific genre or category of music, neither does existential intelligence privilege any one philosophical system or spiritual doctrine. As Gardner (1999a) puts it, there exists [with existential intelligence] a species potential, or capacity, to engage in transcendental concerns that can be aroused and deployed under certain circumstances (p. 123). Reports on uses of psychedelics by Westerners in the 1950s and early 1960s, generated prior to their prohibition and, some might say, profanation, reveal a recurrent theme of spontaneous mystical experiences that are consistent with an enhanced capacity for existential intelligence (Huxley, 1954/1971; Masters & Houston, 1966; Pahnke, 1970; Smith, 1964; Watts, 1958/1969).

Another criterion for admitting an intelligence is an identifiable developmental history, along with a set of expert end-state performances. Pertaining to existential intelligence, Gardner notes that all cultures have devised spiritual or metaphysical systems to deal with the inherent human concern with existential issues, and further that these systems invariably have steps or levels of sophistication separating the novice from the adept. He uses Pope John XXIII's description of his training to advance up the ecclesiastic hierarchy as a contemporary illustration of this point (1999a, p. 124). However, the instruction of the neophyte is a manifest part of almost all spiritual training, and, again, the demanding process of imparting shamanic wisdom, often including how to use entheogens effectively and appropriately, is an excellent example of this process in indigenous cultures (Eliade, 1964).

A fifth criterion Gardner suggests for an intelligence is a determinable evolutionary history and evolutionary plausibility. The self-reflexive question of when and why existential intelligence first arose in the Homo genus is itself one of the perennial existential questions of humankind. That it is an exclusively human trait is almost axiomatic, although a small but increasing number of researchers are willing to admit the possibility of higher forms of cognition in non-human animals (Masson & McCarthy, 1995; Vonk, 2003). Gardner (1999a) argues that only by the Upper Paleolithic period did human beings within a culture possess a brain capable of considering the cosmological issues central to existential intelligence (p. 124), and that the development of a capacity for existential thinking may be linked to "a conscious sense of finite space and irreversible time, two promising loci for stimulating imaginative explorations of transcendental spheres" (p. 124). He also suggests that thoughts about existential issues may well have evolved as responses to necessarily occurring pain, perhaps as a way of reducing pain or better equipping individuals to cope with it (Gardner, 1999a, p. 125). As with determining the evolutionary origin of language, tracing a phylogenesis of existential intelligence is conjectural at best. Its role in the development of the species is equally difficult to assess, although Winkelman (2000) argues that consciousness and shamanic practices, and presumably existential intelligence as well, stem from psychobiological adaptations integrating older and more recently evolved structures in the triune hominid brain. McKenna (1992) even goes so far as to postulate that the ingestion of psychoactive substances such as entheogenic mushrooms may have helped stimulate cognitive developments such as existential and linguistic thinking in our proto-human ancestors.
Some researchers in the 1950s and 1960s found enhanced creativity and problem-solving skills among subjects given LSD and other psychedelic drugs (Harman, McKim, Mogar, Fadiman & Stolaroff, 1966; Izumi, 1970; Krippner, 1985; Stafford & Golightly, 1967), skills which would certainly have been evolutionarily advantageous to our hominid ancestors. Such avenues of investigation are beginning to be broached again by both academic scholars and amateur psychonauts (Dobkin de Rios & Janiger, 2003; Spitzer et al., 1996; MAPS Bulletin, 2000).

The final criterion Gardner mentions as applicable to existential intelligence is susceptibility to encoding in a symbol system. Here, again, Gardner concedes that there is abundant evidence in favour of accepting existential thinking as an intelligence. In his words, "many of the most important and most enduring sets of symbol systems (e.g., those featured in the Catholic liturgy) represent crystallizations of key ideas and experiences that have evolved within [cultural] institutions" (1999a, p. 123). Another salient example is the mytho-symbolism ascribed to ayahuasca visions among the Tukano, an Amazonian indigenous people. Reichel-Dolmatoff (1975) made a detailed study of these visions by asking a variety of informants to draw representations with sticks in the dirt (p. 174). He compiled twenty common motifs, observing that most of them bear a striking resemblance to the phosphene patterns (i.e., visual phenomena perceived in the absence of external stimuli or produced by applying light pressure to the eyeball) compiled by Max Knoll (Oster, 1970). The Tukano interpret these universal human neuropsychological phenomena as symbolically significant according to their traditional ayahuasca-steeped mythology, reflecting the codification of existential ideas within their culture.

Narby (1998) also examines the codification of symbols generated during ayahuasca experiences by tracing similarities between the intertwining snake motifs in the visions of Amazonian shamans and the double-helix structure of deoxyribonucleic acid, finding remarkable parallels between representations of biological knowledge by indigenous shamans and those of modern geneticists. More recently, Narby (2002) has followed up on this work by bringing molecular biologists to the Amazon to participate in ayahuasca ceremonies with experienced shamans, an endeavour he suggests may provide useful cross-fertilization between divergent realms of human knowledge.

The two other criteria of an intelligence are support from experimental psychological tasks and support from psychometric findings. Gardner suggests that existential intelligence is more debatable within these domains, citing personality inventories that attempt to measure religiosity or spirituality; he notes that "it remains unclear just what is being probed by such instruments and whether self-report is a reliable index of existential intelligence" (1999a, p. 125). Transcendental states of consciousness and the cognition they engender do not seem to lend themselves to quantification or easy replication in psychology laboratories. However, Strassman, Qualls, Uhlenhuth, & Kellner (1994) developed a psychometric instrument, the Hallucinogen Rating Scale, to measure human responses to intravenous administration of DMT, and it has since been used reliably for other psychedelic experiences (Riba, Rodriguez-Fornells, Strassman, & Barbanoj, 2001).

One historical area of empirical psychological research that did ostensibly stimulate a form of what might be considered existential intelligence was clinical investigation into psychedelics. Until such research became academically unfashionable, and then politically impossible in the early 1970s, psychologists and clinical researchers actively explored experimentally induced transcendent experiences using drugs, in the interests of both pure science and applied medical treatment (Abramson, 1967; Cohen, 1964; Grinspoon & Bakalar, 1979/1998; Masters & Houston, 1966). One of the more famous of these was Pahnke's (1970) so-called Good Friday experiment, which attempted to induce spiritual experiences with psilocybin within a randomized double-blind control methodology. His conclusion that mystical experiences were indeed reliably produced was, despite methodological problems with the study design, borne out by a critical long-term follow-up (Doblin, 1991), which raises intriguing questions about both entheogens and existential intelligence.

Studies such as Pahnke's (1970), despite their promise, were prematurely terminated due to public pressure from a populace alarmed by burgeoning recreational drug use. Only about a decade ago did the United States government give researchers permission to renew, on a very small scale, investigations into psychedelics (Strassman, 2001; Strassman & Qualls, 1994). Cognitive psychologists are also taking an interest in entheogens such as ayahuasca (Shanon, 2002). Regardless of whether support for existential intelligence can be established psychometrically or in experimental psychological tasks, Gardner's theory expressly stipulates that not all eight criteria must be uniformly met for an intelligence to qualify. Nevertheless, Gardner claims to find "the phenomenon perplexing enough, and the distance from other intelligences great enough" (1999a, p. 127) to be "reluctant at present to add existential intelligence to the list. . . . At most [he is] willing, Fellini-style, to joke about 8½ intelligences" (p. 127). I contend that research into entheogens and other means of altering consciousness will further support the case for treating existential intelligence as a valid cognitive domain.

By recapitulating and augmenting Gardner's discussion of existential intelligence, I hope to have strengthened the case for its inclusion as a valid cognitive domain. Accepting it, however, raises the question of what ramifications such acceptance would have for contemporary Western educational theory and practice. How might we foster this hitherto neglected intelligence and allow it to be used in constructive ways? There is likely a range of educational practices that could stimulate cognition in this domain, many of which could be readily implemented. Yet I intentionally raise the prospect of using entheogens in this capacity (not with young children, but perhaps with older teens in the passage to adulthood) to challenge theorists, policy-makers and practitioners.

The potential of entheogens as tools for education in contemporary Western culture was identified by Aldous Huxley. Although better known as a novelist than as a philosopher of education, Huxley spent considerable time, particularly as he neared the end of his life, addressing the topic of education. Like much of his literature, his observations and critiques of the socio-cultural forces at work in his time were cannily prescient; they bear as much relevance in the 21st century as when they were written, if not more. Most remarkably, and most relevant to my thesis, Huxley saw entheogens as possible educational tools:

Under the current dispensation the vast majority of individuals lose, in the course of education, all the openness to inspiration, all the capacity to be aware of other things than those enumerated in the Sears-Roebuck catalogue which constitutes the conventionally real world . . . . Is it too much to hope that a system of education may some day be devised, which shall give results, in terms of human development, commensurate with the time, money, energy and devotion expended? In such a system of education it may be that mescalin or some other chemical substance may play a part by making it possible for young people to taste and see what they have learned about at second hand . . . in the writings of the religious, or the works of poets, painters and musicians. (Letter to Dr. Humphrey Osmond, April 10th, 1953; in Horowitz & Palmer, 1999, p. 30)

In a more literary expression of this notion, Huxley's final novel, Island (1962), portrays an ideal culture that has achieved a balance of scientific and spiritual thinking and that incorporates the ritualized use of entheogens for education. The representation of drug use in Island contrasts markedly with the more widely known soma of his earlier novel Brave New World (1932/1946): whereas soma was a pacifier that muted curiosity and served the interests of the controlling elite, the entheogenic moksha-medicine of Island offered young adults liminal experiences that stimulated profound reflection, self-actualization and, I submit, existential intelligence.

Huxley's writings point to an implicit recognition of the capacity of entheogens to serve as educational tools. The concept of "tool" here refers not merely to the physical devices fashioned to aid material production but, following Vygotsky (1978), more broadly to those means of symbolic and/or cultural mediation between the mind and the world (Cole, 1996; Wertsch, 1991). Of course, deriving educational benefit from a tool requires much more than simply having and wielding it; one must also have an intrinsic respect for the object qua tool, a cultural system in which the tool is valued as such, and guides or teachers who are adept at using the tool to provide helpful direction. As Larsen (1976) remarks in discussing the phenomenon of would-be shamans in Western culture experimenting with mind-altering chemicals, "we have no symbolic vocabulary, no grounded mythological tradition to make our experiences comprehensible to us . . . no senior shamans to help ensure that our [shamanic experience of] dismemberment be followed by a rebirth" (p. 81). Given the recent history of these substances in modern Western culture, it is hardly surprising that they have been demonized (Hofmann, 1980). However, cultural practices that have traditionally used entheogens as therapeutic agents consistently incorporate protective safeguards: set and setting, established dosages, and mythocultural respect (Zinberg, 1984). The fear that inevitably arises in modern Western culture when addressing the issue of entheogens stems, I submit, not from any properties intrinsic to the substances themselves, but rather from a general misunderstanding of their power and capacity as tools. Just as a sharp knife can be used for good or ill, depending on whether it is in the hands of a skilled surgeon or a reckless youth, so too can entheogens be used or misused.

The use of entheogens such as ayahuasca exemplifies a long and ongoing tradition in many cultures of employing psychoactives as tools that stimulate foundational types of understanding (Tupper, in press). That such substances are capable of stimulating profoundly transcendent experiences is evident from both the academic literature and anecdotal reports. Accounting fully for their action, however, requires going beyond the usual explanatory schemas: applying Gardner's (1999a) multiple intelligence theory as a heuristic framework opens new ways of understanding entheogens and their potential benefits. At the same time, entheogens bolster the case for Gardner's proposed addition of existential intelligence. This article has attempted to present these concepts in such a way that the possibility of using entheogens as tools is taken seriously by those with an interest in new and transformative ideas in education.

Abramson, H. A. (Ed.). (1967). The use of LSD in psychotherapy and alcoholism. New York: Bobbs-Merrill Co. Ltd.

Adelaars, A. (2001, 21 April). Court case in Holland against the use of ayahuasca by the Dutch Santo Daime Church. Retrieved January 2, 2002 from

Barker, S.A., Monti, J.A. & Christian, S.T. (1981). N,N-Dimethyltryptamine: An endogenous hallucinogen. International Review of Neurobiology. 22, 83-110.

Brown, M.F. (1986). Tsewa's gift: Magic and meaning in an Amazonian society. Washington, D.C.: Smithsonian Institution Press.

Burroughs, W. S., & Ginsberg, A. (1963). The yage letters. San Francisco, CA: City Lights Books.

Callaway, J.C., McKenna, D.J., Grob, C.S., Brito, G.S., Raymon, L.P., Poland, R.E., Andrade, E.N., & Mash, D.C. (1999). Pharmacokinetics of hoasca alkaloids in healthy humans. Journal of Ethnopharmacology. 65, 243-256.

Center for Cognitive Liberty and Ethics. (2003, September 5). 10th Circuit: Church likely to prevail in dispute over hallucinogenic tea. Retrieved February 7, 2004, from

Cohen, S. (1964). The beyond within: The LSD story. New York: Atheneum.

Cole, M. (1996). Culture in mind. Cambridge, MA: Harvard University Press.

Cremin, L. A. (1961). The transformation of the school: Progressivism in American education, 1867-1957. New York: Vintage Books.

Czikszentmilhalyi, M. (1996). Creativity. New York: Harper Collins.

Davis, W. (2001, January 23). In Coulter, P. (Producer). The end of the wild [radio program]. Toronto: Canadian Broadcasting Corporation.

Dobkin de Rios, M. (1973). The influence of psychotropic flora and fauna on Maya religion. Current Anthropology. 15(2), 147-64.

Dobkin de Rios, M. (1984). Hallucinogens: Cross-cultural perspectives. Albuquerque, NM: University of New Mexico Press.

Dobkin de Rios, M. (1996). On human pharmacology of hoasca: A medical anthropology perspective. The Journal of Nervous and Mental Disease. 184(2), 95-98.

Dobkin de Rios, M., & Janiger, O. (2003). LSD, spirituality, and the creative process. Rochester, VT: Park Street Press.

Doblin, R. (1991). Pahnke's Good Friday experiment: A long-term follow-up and methodological critique. The Journal of Transpersonal Psychology. 23(1): 1-28.

Egan, K. (2002). Getting it wrong from the beginning: Our progressivist inheritance from Herbert Spencer, John Dewey, and Jean Piaget. New Haven, CT: Yale University Press.

Eliade, M. (1964). Shamanism: Archaic techniques of ecstasy. (W.R. Trask, Trans.). New York: Pantheon Books.

Eliade, M. (1978). A history of religious ideas: From the stone age to the Eleusinian mysteries (Vol. 1). Chicago, IL: University of Chicago Press.

Fikes, J. C. (1996). A brief history of the Native American Church. In H. Smith & R. Snake (Eds.), One nation under God: The triumph of the Native American Church (p. 167-73). Santa Fe, NM: Clear Light Publishers.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Gardner, H. (1999a). Are there additional intelligences? In J. Kane (Ed.), Education, information, transformation: Essays on learning and thinking (p. 111-131). Upper Saddle River, NJ: Prentice-Hall.

Gardner, H. (1999b). Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books.

Gotz, I.L. (1970). The psychedelic teacher: Drugs, mysticism, and schools. Philadelphia, PA: Westminster Press.

Greeley, A. M. (1974). Ecstasy: A way of knowing. Englewood Cliffs, NJ: Prentice-Hall.

Grinspoon, L., & Bakalar, J. B. (1998). Psychedelic drugs reconsidered. New York: The Lindesmith Center (Original work published 1979).

Grob, C. S., McKenna, D. J., Callaway, J. C., Brito, G. C., Neves, E. S., Oberlander, G., Saide, O. L., Labigalini, E., Tacla, C., Miranda, C. T., Strassman, R. J., & Boone, K. B. (1996). Human psychopharmacology of hoasca, a plant hallucinogen used in ritual context in Brazil. The Journal of Nervous and Mental Disease. 184(2), 86-94.

Grob, C. S. (1999). The psychology of ayahuasca. In R. Metzner (Ed.), Ayahuasca: Hallucinogens, consciousness, and the spirit of nature (p. 214-249). New York: Thunder's Mouth Press.

Halifax, J. (1979). Shamanic voices: A survey of visionary narratives. New York: Dutton.

Harman, W. W., McKim, R. H., Mogar, R. E., Fadiman, J., and Stolaroff, M. (1966). Psychedelic agents in creative problem-solving: A pilot study. Psychological Reports. 19: 211-227.

Hofmann, A. (1980). LSD: My problem child. (J. Ott, Trans.). New York: McGraw-Hill.

Horowitz, M., & Palmer, C. (Eds.). (1999). Moksha: Aldous Huxley's classic writings on psychedelics and the visionary experience. Rochester, VT: Park Street Press.

Huxley, A. (1946). Brave new world: A novel. New York: Harper & Row. (Original work published 1932).

Huxley, A. (1962). Island. New York: Harper & Row.

Huxley, A. (1971). The doors of perception & heaven and hell. Middlesex, England: Penguin Books. (Original work published 1954).

Izumi, K. (1970). LSD and architectural design. In B. Aaronson & H. Osmond, (Eds.), Psychedelics: The uses and implications of hallucinogenic drugs (p. 381-397). Garden City, NY: Anchor Books.

John-Steiner, V., & Souberman, E. (1978). Afterword. In L. Vygotsky, Mind in society: The development of higher psychological processes (p. 121-133). Cambridge, MA: Harvard University Press.

Julien, R.M. (1998). A primer of drug action: A concise, non-technical guide to the actions, uses, and side effects of psychoactive drugs (8th ed.). Portland, OR: W.H. Freeman & Company.

Krippner, S. (1985). Psychedelic drugs and creativity. Journal of Psychoactive Drugs. 17(4): 235-245.

LaBarre, W. (1989). The peyote cult (5th ed.). Hamden, CT: Shoe String Press.

LaPlante, E. (1993). Seized: Temporal lobe epilepsy as a medical, historical, and artistic phenomenon. New York: Harper-Collins.

Larsen, S. (1976). The shaman's doorway: Opening the mythic imagination to contemporary consciousness. New York: Harper & Row.

Lewin, L. (1997). Phantastica: A classic survey on the use and abuse of mind-altering plants. Rochester, VT: Park Street Press. (Original work published 1924).

Luna, L.E. (1984). The concept of plants as teachers among four mestizo shamans of Iquitos, northeastern Peru. Journal of Ethnopharmacology. 11(2), 135-156.

MAPS (Multidisciplinary Association for Psychedelic Studies) Bulletin. (2000). Psychedelics & Creativity. 10(3). Retrieved February 15th, 2004 from:

Masson, J. M., & McCarthy, S. (1995). When elephants weep: The emotional lives of animals. New York: Delta Books.

Masters, R. E. L., & Houston, J. (1966). The varieties of psychedelic experience. New York: Holt, Rinehart and Winston.

McKenna, D.J. (1999). Ayahuasca: An ethnopharmacologic history. In R. Metzner (Ed.), Ayahuasca: Hallucinogens, consciousness, and the spirit of nature (p. 187-213). New York: Thunder's Mouth Press.

McKenna, D. J., Towers, G. H. N., & Abbot, F. (1984). Monoamine oxidase inhibitors in South American hallucinogenic plants: Tryptamine and β-carboline constituents of ayahuasca. Journal of Ethnopharmacology. 10(2), 195-223.

McKenna, T. (1992). Food of the gods: The search for the original tree of knowledge. New York: Bantam.

Metzner, R. (1999). Introduction: Amazonian vine of visions. In R. Metzner (Ed.), Ayahuasca: Hallucinogens, consciousness, and the spirit of nature (p. 1-45). New York: Thunder's Mouth Press.

Myerhoff, B. G. (1974). Peyote hunt: The sacred journey of the Huichol Indians. Ithaca, NY: Cornell University Press.

Narby, J. (1998). The cosmic serpent: DNA and the origins of knowledge. New York: Jeremy P. Tarcher/Putnam.

Narby, J. (2002). Shamans and scientists. In C.S. Grob (Ed.), Hallucinogens: A reader (p. 159-163). New York: Jeremy P. Tarcher/Putnam.

Newberg, A., d'Aquili, E., & Rause, V. (2001). Why God won't go away: Brain science and the biology of belief. New York: Ballantine Books.

Oster, G. (1970). Phosphenes. Scientific American. 222(2), 83-87.

Ott, J. (1994). Ayahuasca analogues: Pangæan entheogens. Kennewick, WA: Natural Products Co.

Pahnke, W. (1970). Drugs and Mysticism. In B. Aaronson & H. Osmond, (Eds.), Psychedelics: The uses and implications of hallucinogenic drugs (p. 145-165). Cambridge, MA: Schenkman.

Pande, C. G. (1984). Foundations of Indian culture: Spiritual vision and symbolic forms in ancient India. New Delhi: Books & Books.

Pert, C. (2001, May 26). The matter of emotions. Paper presented at the Remaining Human Forum, University of British Columbia, Vancouver.

Reichel-Dolmatoff, G. (1975). The shaman and the jaguar: A study of narcotic drugs among the Indians of Colombia. Philadelphia, PA: Temple University Press.

Riba, J., Rodriguez-Fornells, A., Urbano, G., Morte, A., Antonijoan, R., Montero, M., Callaway, J.C., & Barbanoj, M.J. (2001). Subjective effects and tolerability of the South American psychoactive beverage Ayahuasca in healthy volunteers. Psychopharmacology. 154, 85-95.

Riba, J., Rodriguez-Fornells, A., Strassman, R.J., & Barbanoj, M.J. (2001). Psychometric assessment of the Hallucinogen Rating Scale in two different populations of hallucinogen users. Drug and Alcohol Dependence. 62(3): 215-223.

Ruck, C., Bigwood, J., Staples, B., Ott, J., & Wasson, R. G. (1979). Entheogens. The Journal of Psychedelic Drugs. 11(1-2), 145-146.

Santo Daime. (2004). Santo Daime: The rainforest's doctrine. Retrieved February 7th, 2004 from

Schultes, R. E., & Hofmann, A. (1992). Plants of the gods: Their sacred, healing, and hallucinogenic powers. Rochester, VT: Healing Arts Press.

Source: Entheogens & Existential Intelligence: The Use of Plant …

The Key Role of Impurities in Ancient Damascus Steel Blades

Jun 17, 2016

The art of producing the famous 16-18th century Damascus steel blades found in many museums was lost long ago. Recently, however, research has established strong evidence supporting the theory that the distinct surface patterns on these blades result from a carbide-banding phenomenon produced by the microsegregation of minor amounts of carbide-forming elements present in the wootz ingots from which the blades were forged. Further, it is likely that wootz Damascus blades with damascene patterns may have been produced only from wootz ingots supplied from those regions of India having appropriate impurity-containing ore deposits.

This article is concerned with the second type of Damascus steel, sometimes called oriental Damascus. The most common examples of these steels are swords and daggers, although examples of body armor are also known. The name Damascus apparently originated with these steels. The steel itself was produced not in Damascus, but in India and became known in English literature in the early 19th century3 as wootz steel, as it is referred to here. Detailed pictures of many such wootz Damascus swords are presented in Figiel’s book,4 and the metallurgy of these blades is discussed in Smith’s book.5

Unfortunately, the technique of producing wootz Damascus steel blades is a lost art. The date of the last blades produced with the highest-quality damascene patterns is uncertain, but it is probably around 1750; it is unlikely that blades displaying low-quality damascene patterns were produced later than the early 19th century. Debate has persisted in the metallurgy community over the past 200 years as to how these blades were made and why the surface pattern appeared.6-8 Research efforts over the years have claimed the discovery of methods to reproduce wootz Damascus steel blades,9-12 but all of these methods suffer from the same problem: modern bladesmiths have been unable to use them to reproduce the blades. Successful reproduction of wootz Damascus blades requires producing blades that match the chemical composition, possess the characteristic damascene surface pattern, and possess the same internal microstructure that causes the surface pattern.

A detailed pictorial description of the production process for this blade has recently been published.14 In addition, the technique has been fully described in the literature,15-17 and it has been shown that blades possessing high-quality damascene patterns can be repeatedly produced with it. The technique is, in essence, a simple reproduction of the general method described by the earlier researchers. A small steel ingot of the correct composition (Fe + 1.5C) is produced in a closed crucible and then forged to a blade shape. However, some key factors are now specified: the time/temperature record of the ingot preparation, the temperature of the forging operations, and the type and composition level of impurity elements in the Fe + 1.5C steel. The most important factor appears to be the type of impurity elements in the steel ingot. Recent work17-18 has shown that bands of clustered Fe3C particles can be produced in the blades by the addition of very small amounts (0.03% or less) of one or more carbide-forming elements, such as V, Mo, Cr, Mn, and Nb. Vanadium and molybdenum appear to be the most effective in causing band formation to occur. An obvious question raised by these results is whether these elements are also present at low levels in the 16-18th century wootz Damascus blades.
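The composition constraints above (an Fe + 1.5C ingot with one or more trace carbide-forming elements at 0.03% or less) can be captured in a brief sketch. The `WootzIngotSpec` name and the particular trace values are hypothetical illustrations, not data from the study; only the 1.5% C composition and the 0.03% ceiling come from the text:

```python
# Illustrative sketch of the reconstructed-wootz composition constraints.
# Only the Fe + 1.5C composition and the <= 0.03 wt% carbide-former level
# come from the article; the names and trace values are assumed placeholders.
from dataclasses import dataclass, field

@dataclass
class WootzIngotSpec:
    carbon_wt_pct: float = 1.5  # Fe + 1.5C ingot (from the text)
    # Trace carbide-forming elements, wt% (V and Mo reported most effective)
    carbide_formers: dict = field(default_factory=lambda: {"V": 0.003, "Mo": 0.002})

    def forms_bands(self) -> bool:
        """Banding of clustered Fe3C requires at least one carbide-forming
        element present in a small but non-zero amount (<= 0.03 wt%)."""
        return any(0.0 < w <= 0.03 for w in self.carbide_formers.values())

spec = WootzIngotSpec()
print(spec.forms_bands())  # True: V and Mo present at trace levels
```

The threshold check is a simplification of the reported finding, not a quantitative model of microsegregation.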

This article presents the results of a study of these four samples. Also, four additional wootz Damascus blades, all thought to be a few hundred years old, have been acquired and are included. Hence, all of the blades studied here are more than two centuries old and were presumably made from wootz steel. These blades are referred to as genuine wootz Damascus blades to differentiate them from the reconstructed wootz Damascus blades made by the technique developed by the authors.

Pieces were cut from one end of each of the samples with a thin diamond saw. A 2 cm length was cut for chemical-analysis studies, and an 8 mm length sample was used for microstructure analysis. The chemical analyses were done using emission spectroscopy on a calibrated machine at Nucor Steel Corporation. Table I presents the chemical analyses, along with the values reported by Zschokke. Agreement between the analyses done by Zschokke in 1924 and the present data is reasonably good.

Micrographs of surface and transverse sections of the remaining three swords are shown in Figure 3. The micrographs of the surfaces are, in effect, taper sections through the bands seen on the micrographs of the section views, and, as expected, the widths of the bands are expanded in the surface views.

Rockwell C hardness data were taken along the centerline of the transverse sections of all four swords in order to more fully characterize them. A large variation in hardness was found and is presented in Table II. The hardness correlated with the matrix microstructure. The matrix structure of the blades underwent a transition from pearlite at the thin tip to a divorced eutectoid ferrite + cementite at the thicker end (thickness = 3-4 mm). These structures are consistent with recent kinetic studies of the eutectoid reaction in hypereutectoid steels.19-20 The studies show that in two-phase (austenite + Fe3C) steels, the divorced eutectoid transformation (DET) dominates at slow cooling rates and the pearlite reaction dominates at higher cooling rates; the DET is favored as the density of the Fe3C particles in the transforming austenite increases. Hence, the matrix microstructures indicate that the blades were air-cooled, with pearlite dominating near the faster-cooling cutting edge. The dominance of the DET matrix structure in swords 7 and 10 probably results from the higher amount of interband Fe3C present in these swords.

In swords 7 and 10, the particles are predominantly plate-shaped, with the thin direction aligned in the forging plane of the sword blades. Consequently, the area of the particles on the sword face is generally larger than on the sections. The standard deviation of the data was consistently in the range of 20-25%, so the differences in the areas on the three surfaces are problematic, whereas the differences in minimum and maximum diameters are significant. For blades 7 and 10, the maximum/minimum aspect ratio of the particles averages around three on both transverse and longitudinal sections and around two on the sword faces. The ratios are slightly lower for blade 9, reflecting the more globular shape of its particles and the observation that the oblong particles do not have their broad faces well aligned in the forging plane, as they do on blades 7 and 10.

Experiments have been carried out on the reconstructed wootz Damascus blades in which the ladder and rose patterns were produced by both the groove-cutting and groove-forging techniques. The patterns in the blade of Figure 1 were made with the groove-cutting technique, and detailed photographs of the process have recently been published (Figure 6a).14 These patterns may be compared to similar ladder/rose patterns made by the die-forging technique (Figure 6b). The circular pattern in Figure 6b (called the rose pattern on ancient blades) was made with a hollow cylindrical die, while the pattern in Figure 6a was made by removing metal with a specially shaped solid drill. In the case of the die-forged patterns, the ridges produced by the upsetting action of the die were removed with a belt grinder prior to additional forging.

A comparison of the ladder patterns produced by grinding versus forging reveals nearly identical features (Figure 6). Figiel points out that there is a large variation in the pattern in the bands of the several examples presented in his book.4 Hence, this study is only able to conclude that the ancient smiths produced the ladder patterns by making parallel grooves across the surface of nearly finished blades, either by forging or cutting/grinding.

It is well established25-28 that the ferrite/pearlite banding of hypoeutectoid steels results from microsegregation of the X element in Fe-C-X alloys, where X is generally manganese, phosphorus, or an alloy addition. For the example X = P, it is established that the microsegregation of phosphorus to the interdendritic regions (IRs) causes ferrite to nucleate preferentially in the IRs. If the cooling rate is slow enough, the ferrite grows as blocky grain boundary allotriomorphs and pushes the carbon ahead of the growth front until pearlite forms between neighboring IRs. Apparently, rolling or forging deformation is quite effective in aligning the IRs of the solidified ingots into planar arrays, because the ferrite appears as planar bands parallel to the deformation plane separated by bands of pearlite. The ferrite/pearlite bands of sword 8 were probably produced by this type of banding caused, most likely, by the microsegregation of phosphorus.

A strong body of evidence has been obtained16-18 that supports the theory that the layered structures in the normal hypereutectoid Damascus steels are produced by a mechanism similar to the one causing ferrite/pearlite banding in hypoeutectoid steels, with one important difference: in ferrite/pearlite banding, the bands form on a single thermal cycle. For example, the ferrite/pearlite bands can be destroyed by complete austenitization at low temperatures (just above the A3 temperature) followed by rapid cooling, and are then reformed in a single heat-up to austenite followed by an adequately slow cool.26 (Low-temperature austenitization is required to avoid homogenization of the microsegregated X element.) The carbide bands of the wootz Damascus steel are destroyed by a complete austenitization at low temperatures (just above the Acm temperature) followed by cooling at any rate, slow or fast. However, if the steel is then repeatedly cycled to maximum temperatures of around 50–100°C below Acm, the carbide bands will begin to develop after a few cycles and become clear after 6-8 cycles.

The formation mechanism of the carbides clustered selectively along the IRs during the cyclic heating of the forging process is not resolved. It seems likely, however, that it involves a selective coarsening process, whereby cementite particles lying on the IRs slowly become larger than their neighbors lying in dendrite regions and crowd them out. A model for such a selective coarsening process has been presented.17 During the heat-up stage of each thermal cycle, the smaller cementite particles will dissolve, and only the larger particles will remain at the forging temperature, which lies just below the Acm temperature. The model requires the segregated impurity atoms lying in the IRs to selectively reduce the mobility of the cementite/austenite interfaces in those regions. Larger particles would then occur in the IRs at the forging temperature. They probably maintain their dominance on cool-down because one would not expect the small particles that had dissolved to renucleate in the presence of the nearby cementite particles; these nearby particles would provide sites for cementite growth before the local supercooling became sufficient to nucleate new particles.
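
The selective coarsening picture above can be caricatured in a few lines of code. This is an illustrative sketch, not the published model:17 each thermal cycle dissolves any particle that falls below a mean-referenced critical size scaled by its interface mobility (mobile interfaces shrink faster during heat-up, so a high-mobility particle must be larger to survive), and survivors grow by a fixed increment on cool-down, loosely representing cementite regrowth onto existing particles rather than renucleation. All numbers are arbitrary.

```python
import random

def thermal_cycle(particles, relative_threshold):
    """One forging cycle just below Acm. A particle (radius, mobility) dissolves
    during heat-up if its radius is below a critical size proportional to the
    population mean and to its interface mobility; survivors grow by a fixed
    increment on cool-down (regrowth onto existing particles, no renucleation)."""
    mean_r = sum(r for r, _ in particles) / len(particles)
    survivors = [(r, m) for r, m in particles
                 if r > relative_threshold * mean_r * m]
    return [(r + 0.1, m) for r, m in survivors]

random.seed(0)
# Interdendritic (IR) particles: segregated impurities pin their interfaces (low mobility).
ir = [(random.uniform(0.8, 1.2), 0.6) for _ in range(50)]
# Dendrite-region particles: same starting size distribution, unpinned interfaces.
dr = [(random.uniform(0.8, 1.2), 1.0) for _ in range(50)]

particles = ir + dr
for _ in range(8):                       # bands emerge over ~6-8 cycles in practice
    particles = thermal_cycle(particles, relative_threshold=1.0)

ir_left = sum(1 for _, m in particles if m == 0.6)
dr_left = sum(1 for _, m in particles if m == 1.0)
# IR particles come to dominate the survivors, mimicking the selective
# clustering of carbides along the interdendritic regions.
```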

Based on this experience, it seems likely that the fraction of Indian crucible steel successfully forged into damascened blades was quite small, and that the majority of surviving wootz Damascus blades display low-quality surface patterns. Craddock29 has come to this same conclusion based on an analysis of the literature on damascene-patterned steels. The results on the four Moser blades studied by Zschokke support this same conclusion. These blades were supposedly representative of good-quality damascened blades from the east, and yet of the four, only sword 9 displays the high-quality Fe3C bands characteristic of the best museum-quality wootz Damascus blades.

One of the big mysteries of wootz Damascus steel has been why the art of making these blades was lost. The vanadium levels provide the basis for a theory. Based on our studies, it is clear that to produce the damascene patterns of a museum-quality wootz Damascus blade the smith would have to fulfill at least three requirements. First, the wootz ingot would have to have come from an ore deposit that provided significant levels of certain trace elements, notably Cr, Mo, Nb, Mn, or V. This idea is consistent with the theory of some authors30 who believe the blades with good patterns were only produced from wootz ingots made in southern India, apparently around Hyderabad. Second, the data of Table IV confirm previous knowledge that wootz Damascus blades with good patterns are characterized by a high phosphorus level. This means that the ingots of these blades would be severely hot short, which explains why Breant's9 19th-century smiths in Paris could not forge wootz ingots. Therefore, as previously shown,15 successful forging would require the development of heat-treating techniques that decarburized the surface in order to produce a ductile surface rim adequate to contain the hot-short interior regions. Third, a smith who developed a heat-treatment technique that allowed the hot-short ingots to be forged might still not have learned how to produce the surface patterns, because the patterns do not appear until the surface-decarburized region is ground off the blades; this grinding process is not a simple matter.

The smiths who produced the high-quality blades would most likely have kept the process a closely guarded secret, to be passed on only to their apprentices. The smiths would be able to teach the apprentices the second and third points listed, but point one is something they would not have known. There is no difference in physical appearance between an ingot with the proper minor elements present and one without. Suppose that for several generations all of the ingots from India were coming from an ore body with the proper amount of minor elements present, and blades with good patterns were being produced. Then, after a few centuries, the ore source may have been exhausted or become inaccessible to the smithing community, so the technique no longer worked. With time, the smiths who knew about the technique died out without passing it on to their apprentices (since it no longer worked), so even if a similar source was later found, the knowledge was no longer around to exploit it. The possible validity of this theory could be examined if data were available on the level of carbide-forming elements in the various ore deposits in India used to produce wootz steel.


J.D. Verhoeven is currently a professor in the Materials Science and Engineering Department at Iowa State University. A.H. Pendray is currently president of the Knifemakers Guild. W.E. Dauksch is retired as vice president and general manager of Nucor Steel Corporation.

For more information, contact J.D. Verhoeven, Iowa State University, Materials Science and Engineering Department, 104 Wilhelm Hall, Ames, Iowa 50011; (515) 294-9471; fax (515) 294-4291.

Original article: The Key Role of Impurities in Ancient Damascus Steel Blades

Neurotechnology – Wikipedia, the free encyclopedia

Jun 17, 2016

Neurotechnology is any technology that has a fundamental influence on how people understand the brain and various aspects of consciousness, thought, and higher order activities in the brain. It also includes technologies that are designed to improve and repair brain function and allow researchers and clinicians to visualize the brain.

The field of neurotechnology has been around for nearly half a century but has only reached maturity in the last twenty years. The advent of brain imaging revolutionized the field, allowing researchers to directly monitor the brain's activities during experiments. Neurotechnology has made a significant impact on society, though its presence is so commonplace that many do not realize its ubiquity. From pharmaceutical drugs to brain scanning, neurotechnology affects nearly all industrialized people either directly or indirectly, whether from drugs for depression, sleep, ADD, or anti-neurotics, to cancer scanning, stroke rehabilitation, and much more.

As the field's depth increases, it will potentially allow society to control and harness more of what the brain does and how it influences lifestyles and personalities. Commonplace technologies already attempt to do this; games like BrainAge[1] and programs like Fast ForWord[2] that aim to improve brain function are neurotechnologies.

Currently, modern science can image nearly all aspects of the brain as well as control a degree of its function. It can help control depression, over-activation, sleep deprivation, and many other conditions. Therapeutically, it can help improve stroke victims' motor coordination, improve brain function, reduce epileptic episodes (see epilepsy), improve the condition of patients with degenerative motor diseases (Parkinson's disease, Huntington's disease, ALS), and can even help alleviate phantom pain perception.[3] Advances in the field promise many new enhancements and rehabilitation methods for patients suffering from neurological problems. The neurotechnology revolution has given rise to the Decade of the Mind initiative, which was started in 2007.[4] It also offers the possibility of revealing the mechanisms by which mind and consciousness emerge from the brain.

Magnetoencephalography is a functional neuroimaging technique for mapping brain activity by recording magnetic fields produced by electrical currents occurring naturally in the brain, using very sensitive magnetometers. Arrays of SQUIDs (superconducting quantum interference devices) are the most common magnetometers. Applications of MEG include basic research into perceptual and cognitive brain processes, localizing regions affected by pathology before surgical removal, determining the function of various parts of the brain, and neurofeedback. This can be applied in a clinical setting to find locations of abnormalities as well as in an experimental setting to simply measure brain activity.[5]

Magnetic resonance imaging (MRI) is used for scanning the brain for topological and landmark structure, but can also be used for imaging activation in the brain.[6] While detail about how MRI works is reserved for the actual MRI article, its uses are far-reaching in the study of neuroscience. It is a cornerstone technology in studying the mind, especially with the advent of functional MRI (fMRI).[7] Functional MRI measures oxygen levels in the brain upon activation (higher oxygen content = neural activation) and allows researchers to understand which loci are responsible for activation under a given stimulus. This technology is a large improvement over single-cell or locus activation studies, which required exposing the brain and stimulating it by direct contact. Functional MRI allows researchers to draw associative relationships between different loci and regions of the brain and provides a large amount of knowledge in establishing new landmarks and loci in the brain.[8]
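
The core fMRI idea described here, flagging a locus as active when its signal tracks the stimulus, can be illustrated with a toy correlation analysis on synthetic data. This is a hedged sketch, not a real fMRI pipeline: actual analyses convolve the stimulus with a hemodynamic response function and fit a general linear model, and all signals below are made up.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
# Block-design stimulus: 10 scans of rest, 10 scans of task, repeated.
stimulus = ([0] * 10 + [1] * 10) * 4

# A "responding" voxel: signal rises with the stimulus, plus noise.
active = [s + random.gauss(0, 0.3) for s in stimulus]
# A "non-responding" voxel: noise only.
inactive = [random.gauss(0, 0.3) for _ in stimulus]

r_active = pearson(active, stimulus)
r_inactive = pearson(inactive, stimulus)
# A high correlation with the stimulus flags the voxel's locus as activated.
```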

Computed tomography (CT) is another technology used for scanning the brain. It has been used since the 1970s and is another tool used by neuroscientists to track brain structure and activation.[6] While many of the functions of CT scans are now done using MRI, CT can still be used as the mode by which brain activation and brain injury are detected. Using an X-ray, researchers can detect radioactive markers in the brain that indicate brain activation as a tool to establish relationships in the brain as well as detect many injuries/diseases that can cause lasting damage to the brain such as aneurysms, degeneration, and cancer.

Positron emission tomography (PET) is another imaging technology that aids researchers. Instead of using magnetic resonance or X-rays, PET scans rely on positron-emitting markers that are bound to a biologically relevant molecule such as glucose.[9] The more activation in a brain region, the more nutrients that region requires, so higher activation appears more brightly on an image of the brain. PET scans are being used more frequently by researchers because they respond to metabolism, whereas MRI responds to a more physiological signal (sugar activation versus oxygen activation).

Transcranial magnetic stimulation (TMS) is essentially direct magnetic stimulation to the brain. Because electric currents and magnetic fields are intrinsically related, by stimulating the brain with magnetic pulses it is possible to interfere with specific loci in the brain to produce a predictable effect.[10] This field of study is currently receiving a large amount of attention due to the potential benefits that could come out of better understanding this technology.[11] Transcranial magnetic movement of particles in the brain shows promise for drug targeting and delivery as studies have demonstrated this to be noninvasive on brain physiology.[12]

Transcranial direct current stimulation (tDCS) is a form of neurostimulation which uses constant, low current delivered via electrodes placed on the scalp. The mechanisms underlying tDCS effects are still incompletely understood, but recent advances in neurotechnology allowing for in vivo assessment of brain electric activity during tDCS[13] promise to advance understanding of these mechanisms. Research into using tDCS on healthy adults has demonstrated that tDCS can increase cognitive performance on a variety of tasks, depending on the area of the brain being stimulated. tDCS has been used to enhance language and mathematical ability (though one form of tDCS was also found to inhibit math learning),[14] attention span, problem solving, memory,[15] and coordination.

Electroencephalography (EEG) is a method of measuring brainwave activity non-invasively. A number of electrodes are placed around the head and scalp, and electrical signals are measured. Typically, EEGs are used when dealing with sleep, as there are characteristic wave patterns associated with different stages of sleep.[16] Clinically, EEGs are used to study epilepsy as well as stroke and tumor presence in the brain. EEGs are another method for understanding the electrical signaling in the brain during activation.
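
The characteristic wave patterns mentioned above are usually quantified as power in standard frequency bands (delta, theta, alpha, beta). A minimal sketch of that computation, using a direct DFT on a synthetic one-second trace rather than any real EEG tooling; the signal and noise levels are invented for illustration.

```python
import math
import random

def band_power(signal, fs, f_lo, f_hi):
    """Total spectral power in [f_lo, f_hi) Hz via a direct DFT
    (fs = sampling rate in Hz; fine for short illustrative traces)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

random.seed(2)
fs = 128  # samples per second
# One second of a relaxed-wakefulness-like trace: a 10 Hz alpha rhythm plus noise.
eeg = [math.sin(2 * math.pi * 10 * t / fs) + random.gauss(0, 0.2) for t in range(fs)]

delta = band_power(eeg, fs, 0.5, 4)   # delta band, prominent in deep sleep
alpha = band_power(eeg, fs, 8, 13)    # alpha band
# For this synthetic trace, alpha power dominates delta power.
```

Comparing band powers like this over successive epochs is essentially how sleep stages are distinguished in an EEG recording.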

Magnetoencephalography (MEG) is another method of measuring activity in the brain by measuring the magnetic fields that arise from electrical currents in the brain.[17] The benefit of using MEG instead of EEG is that these fields are highly localized and give rise to a better understanding of how specific loci react to stimulation or whether these regions over-activate (as in epileptic seizures).

Neurodevices are any devices used to monitor or regulate brain activity. Currently, a few are available for clinical use as treatments for Parkinson's disease. The most common neurodevices are deep brain stimulators (DBS), which are used to deliver electrical stimulation to areas stricken by inactivity.[18] Parkinson's disease is known to be caused by inactivation of the basal ganglia (nuclei), and DBS has recently become the preferred form of treatment for Parkinson's disease, although current research questions the efficacy of DBS for movement disorders.[18]

Neuromodulation is a relatively new field that combines the use of neurodevices and neurochemistry. The basis of this field is that the brain can be regulated using a number of different factors (metabolic, electrical stimulation, physiological) and that all of these can be modulated by devices implanted in the neural network. While this field is still in the research phase, it represents a new type of technological integration in the field of neurotechnology. The brain is a very sensitive organ, so in addition to researching the remarkable things that neuromodulation and implanted neural devices can achieve, it is important to research ways to create devices that elicit as few negative responses from the body as possible. This can be done by modifying the surface chemistry of the materials used in neural implants.

Researchers have begun looking at uses for stem cells in the brain, which recently have been found in a few loci. A large number of studies are being done to determine if this form of therapy could be used on a large scale. Experiments have successfully used stem cells in the brains of children who suffered injuries in gestation and of elderly people with degenerative diseases, in order to induce the brain to produce new cells and to make more connections between neurons.

Pharmaceuticals play a vital role in maintaining stable brain chemistry and are the most commonly used neurotechnology by the general public and in medicine. Drugs like sertraline, methylphenidate, and zolpidem act as chemical modulators in the brain, and they allow for normal activity in many people whose brains cannot act normally under physiological conditions. While pharmaceuticals are usually not mentioned here and have their own field, their role is perhaps the most far-reaching and commonplace in modern society (this article largely ignores neuropharmaceuticals; for more information, see neuropsychopharmacology). Movement of magnetic particles to targeted brain regions for drug delivery is an emerging field of study and causes no detectable circuit damage.[19]

Stimulation with low-intensity magnetic fields is currently under study for depression at Harvard Medical School, and has previously been explored by Bell et al.,[20] Marino et al.,[21] and others.

Magnetic resonance imaging is a vital tool in neurological research in showing activation in the brain as well as providing a comprehensive image of the brain being studied. While MRI is used clinically for showing brain size, it still has relevance in the study of brains because it can be used to determine the extent of injuries or deformation. These can have a significant effect on personality, sense perception, memory, higher-order thinking, movement, and spatial understanding. However, current research tends to focus more on fMRI or real-time functional MRI (rtfMRI).[22] These two methods allow the scientist or the participant, respectively, to view activation in the brain. This is incredibly vital in understanding how a person thinks and how their brain reacts to their environment, as well as understanding how the brain works under various stressors or dysfunctions. Real-time functional MRI is a revolutionary tool available to neurologists and neuroscientists because patients can see how their brain reacts to stressors and can perceive visual feedback.[8] CT scans are very similar to MRI in their academic use because they can be used to image the brain upon injury, but they are more limited in perceptual feedback.[6] CTs are generally used in clinical studies far more than in academic studies, and are found far more often in a hospital than in a research facility. PET scans are also finding more relevance in academia because they can be used to observe metabolic uptake of neurons, giving researchers a wider perspective on neural activity in the brain for a given condition.[9] Combinations of these methods can provide researchers with knowledge of both the physiological and metabolic behaviors of loci in the brain and can be used to explain activation and deactivation of parts of the brain under specific conditions.

Transcranial magnetic stimulation is a relatively new method of studying how the brain functions and is used in many research labs focused on behavioral disorders and hallucinations. What makes TMS research so interesting to the neuroscience community is that it can target specific regions of the brain and shut them down or activate them temporarily, thereby changing the way the brain behaves. Personality disorders can stem from a variety of external factors, but when a disorder stems from the circuitry of the brain, TMS can be used to deactivate that circuitry. This can give rise to a number of responses, ranging from normality to something more unexpected, but current research is based on the theory that the use of TMS could radically change treatment and perhaps act as a cure for personality disorders and hallucinations.[11] Currently, repetitive transcranial magnetic stimulation (rTMS) is being researched to see if this deactivation effect can be made more permanent in patients suffering from these disorders. Some techniques combine TMS with another scanning method such as EEG to get additional information about brain activity such as cortical response.[23]

Both EEG and MEG are currently being used to study the brain's activity under different conditions. Each uses similar principles but allows researchers to examine individual regions of the brain, allowing isolation and potentially specific classification of active regions. As mentioned above, EEG is very useful in the analysis of immobile patients, typically during the sleep cycle. While there are other types of research that utilize EEG,[23] EEG has been fundamental in understanding the resting brain during sleep.[16] There are other potential uses for EEG and MEG, such as charting rehabilitation and improvement after trauma, as well as testing neural conductivity in specific regions of epileptics or patients with personality disorders.

Neuromodulation can involve numerous technologies combined or used independently to achieve a desired effect in the brain. Gene and cell therapy are becoming more prevalent in research and clinical trials, and these technologies could help stunt or even reverse disease progression in the central nervous system. Deep brain stimulation is currently used in many patients with movement disorders and is used to improve the quality of life in patients.[18] While deep brain stimulation is not a method to study how the brain functions per se, it provides both surgeons and neurologists important information about how the brain works when certain small regions of the basal ganglia (nuclei) are stimulated by electrical currents.

The future of neurotechnologies lies in how they are fundamentally applied, not so much in what new versions will be developed. Current technologies give a large amount of insight into the mind and how the brain functions, but basic research is still needed to demonstrate the more applied functions of these technologies. Currently, rtfMRI is being researched as a method for pain therapy. deCharms et al. have shown that there is a significant improvement in the way people perceive pain if they are made aware of how their brain is functioning while in pain. By providing direct and understandable feedback, researchers can help patients with chronic pain decrease their symptoms. This new type of bio/mechanical feedback is a new development in pain therapy.[8] Functional MRI is also being considered for a number of uses outside the clinic. Research has tested the efficacy of mapping the brain while someone lies as a new way of detecting lying.[24] In the same vein, EEG has been considered for use in lie detection as well.[25] TMS is being used in a variety of potential therapies for patients with personality disorders, epilepsy, PTSD, migraine, and other brain-firing disorders, but has been found to have varying clinical success for each condition.[11] The end result of such research would be to develop a method to alter the brain's perception and firing and to train patients' brains to rewire permanently under inhibiting conditions (for more information see rTMS).[11] In addition, PET scans have been found to be 93% accurate in detecting Alzheimer's disease nearly 3 years before conventional diagnosis, indicating that PET scanning is becoming more useful in both the laboratory and the clinic.[26]

Stem cell technologies are always salient both in the minds of the general public and of scientists because of their large potential. Recent advances in stem cell research have allowed researchers to ethically pursue studies in nearly every facet of the body, including the brain. Research has shown that while most of the brain does not regenerate and is typically a very difficult environment in which to foster regeneration,[27] there are portions of the brain with regenerative capabilities (specifically the hippocampus and the olfactory bulbs).[28] Much of the research in central nervous system regeneration concerns how to overcome this poor regenerative quality of the brain. It is important to note that there are therapies that improve cognition and increase the number of neural pathways,[2] but this does not mean that there is a proliferation of neural cells in the brain. Rather, it is called plastic rewiring of the brain (plastic because it indicates malleability) and is considered a vital part of growth. Nevertheless, many problems in patients stem from the death of neurons in the brain, and researchers in the field are striving to produce technologies that enable regeneration in patients with stroke, Parkinson's disease, severe trauma, and Alzheimer's disease, as well as many others. While still in the fledgling stages of development, researchers have recently begun making very interesting progress in attempting to treat these diseases.
Researchers have recently succeeded in producing dopaminergic neurons for transplant in patients with Parkinson's disease, with the hope that a steadier supply of dopamine will allow these patients to move again.[29] Many researchers are building scaffolds that could be transplanted into a patient with spinal cord trauma to present an environment that promotes growth of axons (the portions of the cell that transmit electrical signals) so that patients unable to move or feel might be able to do so again.[30] The potential is wide-ranging, but it is important to note that many of these therapies are still in the laboratory phase and are only slowly being adopted in the clinic.[31] Some scientists remain skeptical about the development of the field, and warn that there is a much larger chance that electrical prostheses will be developed to solve clinical problems such as hearing loss or paralysis before cell therapy is used in a clinic.[32]

Novel drug delivery systems are being researched in order to improve the lives of those who struggle with brain disorders that might not be treatable with stem cells, modulation, or rehabilitation. Pharmaceuticals play a very important role in society, and the brain has a very selective barrier that prevents some drugs from passing from the blood into the brain. There are some diseases of the brain, such as meningitis, that require doctors to inject medicine directly into the spinal cord because the drug cannot cross the blood–brain barrier.[33] Research is being conducted to investigate new methods of targeting the brain using the blood supply, as it is much easier to inject into the blood than into the spine. New technologies such as nanotechnology are being researched for selective drug delivery, but these technologies have problems as does any other. One of the major setbacks is that when a particle is too large, the patient's liver will take up the particle and degrade it for excretion, but if the particle is too small there will not be enough drug in the particle to take effect.[34] In addition, the size of the capillary pore is important, because too large a particle might not fit through or might even plug the pore, preventing adequate supply of the drug to the brain.[34] Other research involves integrating a protein device between the layers to create a free-flowing gate that is unimpeded by the limitations of the body. Another direction is receptor-mediated transport, where receptors in the brain used to transport nutrients are manipulated to transport drugs across the blood–brain barrier.[35] Some have even suggested that focused ultrasound opens the blood–brain barrier momentarily and allows free passage of chemicals into the brain.[36] Ultimately, the goal of drug delivery is to develop a method that maximizes the amount of drug in the loci with as little as possible degraded in the bloodstream.
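
The size trade-off described above (large particles cleared by the liver or blocked at capillary pores, small particles carrying too little drug) can be expressed as a toy feasibility screen. Every threshold and number below is a made-up illustration, not a physiological value; payload is approximated simply by particle volume.

```python
import math

def particle_feasible(diameter_nm, min_payload_nm3,
                      clearance_limit_nm=200.0, pore_limit_nm=100.0):
    """Toy screen for a spherical drug carrier: reject particles large enough
    to be cleared by the liver or to plug a capillary pore, and particles
    whose volume (a crude proxy for drug payload) is too small to take effect.
    All thresholds are illustrative assumptions, not physiological data."""
    radius = diameter_nm / 2
    volume = 4 / 3 * math.pi * radius ** 3   # payload scales with particle volume
    if diameter_nm >= min(clearance_limit_nm, pore_limit_nm):
        return False                         # liver uptake or pore blockage
    return volume >= min_payload_nm3

# A mid-sized carrier passes; the extremes fail for opposite reasons.
ok = particle_feasible(60, min_payload_nm3=5e4)
too_big = particle_feasible(300, min_payload_nm3=5e4)
too_small = particle_feasible(10, min_payload_nm3=5e4)
```

The window between the two failure modes is what real delivery-system design has to widen, by packing more drug per unit volume or by evading clearance.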

Neuromodulation is a technology currently used for patients with movement disorders, although research is being done to apply it to other disorders. Recently, a study examined whether DBS could improve depression, with positive results, indicating that this technology might have potential as a therapy for multiple disorders in the brain.[32] DBS is limited by its high cost, however, and in developing countries the availability of DBS is very limited.[18] A new version of DBS is under investigation and has developed into a novel field, optogenetics.[31] Optogenetics is the combination of deep brain stimulation with fiber optics and gene therapy. Essentially, fiber-optic cables are designed to light up under electrical stimulation, and a protein is added to a neuron via gene therapy to excite it under light stimuli.[37] By combining these three independent fields, a surgeon could excite a single, specific neuron in order to help treat a patient with some disorder. Neuromodulation offers a wide degree of therapy for many patients, but due to the nature of the disorders it is currently used to treat, its effects are often temporary. Future goals in the field are to alleviate that problem by extending the duration of effect until DBS can be used for the remainder of the patient's life. Another use for neuromodulation would be in building neuro-interface prosthetic devices that would give quadriplegics the ability to maneuver a cursor on a screen with their thoughts, thereby increasing their ability to interact with others around them. By understanding the motor cortex and how the brain signals motion, it is possible to emulate this response on a computer screen.[38]

The ethical debate about the use of embryonic stem cells has stirred controversy both in the United States and abroad, although these debates have lessened recently due to advances in creating induced pluripotent stem cells from adult cells. The greatest advantage of embryonic stem cells is that they can differentiate into (become) nearly any type of cell given the right conditions and signals. However, recent advances by Shinya Yamanaka et al. have found ways to create pluripotent cells without the use of such controversial cell cultures.[39] Using the patient's own cells and re-differentiating them into the desired cell type bypasses both possible patient rejection of embryonic stem cells and any ethical concerns associated with using them, while also providing researchers a larger supply of available cells. However, induced pluripotent cells have the potential to form tumors (usually benign, though potentially malignant), and tend to have poor survivability in vivo (in the living body) on damaged tissue.[40] Much of the ethical concern over the use of stem cells has shifted away from the embryonic/adult stem cell debate, which has been rendered moot, and societies now find themselves debating whether this technology can be used ethically. Enhancement of traits, the use of animals for tissue scaffolding, and even arguments about moral degeneration have been raised, with fears that if this technology reaches its full potential a new paradigm shift will occur in human behavior.

New neurotechnologies have always garnered the interest of governments, from lie-detection technology and virtual reality to rehabilitation and understanding the psyche. Due to the Iraq War and the War on Terror, up to 12% of American soldiers returning from Iraq and Afghanistan are reported to have PTSD.[41] Many researchers hope to improve the condition of these veterans by implementing new strategies for recovery. By combining pharmaceuticals and neurotechnologies, some researchers have discovered ways of lowering the "fear" response and theorize that it may be applicable to PTSD.[42] Virtual reality is another technology that has drawn much attention in the military. If improved, it could be possible to train soldiers how to deal with complex situations in times of peace, in order to better prepare and train a modern army.

Finally, as these technologies are developed, society must understand that neurotechnologies could reveal the one thing people have always been able to keep secret: what they are thinking. While these technologies carry substantial benefits, it is necessary for scientists and policymakers alike to consider their implications for cognitive liberty.[43] This term is important in many ethical circles concerned with the state and goals of progress in the field of neurotechnology (see Neuroethics). Current developments such as brain fingerprinting and lie detection using EEG or fMRI could eventually establish fixed mappings between brain loci and emotional states, although these technologies are still years away from full application.[43] It is important to consider how all these neurotechnologies might affect the future of society, and it is suggested that political, scientific, and civil debates be heard about the implementation of these newer technologies, which potentially offer access to a wealth of once-private information.[43] Some ethicists are also concerned about the use of TMS, fearing that the technique could be used to alter patients in ways they do not desire.[11]

Visit link:

Neurotechnology – Wikipedia, the free encyclopedia

Digital Darwinism: How Disruptive Technology Is Changing …

 Darwinism  Comments Off on Digital Darwinism: How Disruptive Technology Is Changing …
Jun 17, 2016

Image: keoni101/Flickr

Social media, mobile, wearables, the Internet of Things, real-time: these are just some of the technologies that are disrupting markets. Changes in how people communicate, connect, and discover carry incredible implications for businesses and just about anything where people are involved. It's not so much that technology is part of our everyday life or that technology is relentless in its barrage on humanity.

The real threat and opportunity in technology's disruption lies in the evolution of customer and employee behavior, values, and expectations. Companies face a quandary as they invest resources and budgets in current technology and business strategies (business as usual) versus the unknown of how those investments align, or don't, with market and behavior shifts.

This is a time of digital Darwinism: an era in which technology and society are evolving faster than businesses can naturally adapt. This sets the stage for a new era of leadership and a new generation of business models, charging behind a mantra of adapt or die.

Rather than react to change or be disrupted by it, some forward-looking companies are investing in digital transformation to adapt and outperform peers. In November 2012, research-based consultancy Capgemini published a report studying the digital maturity of companies pursuing digital transformation. In that report, The Digital Advantage: How digital leaders outperform their peers in every industry, Capgemini found that companies highly invested in both digital intensity and transformation management intensity, aka "The Digirati," derive more revenue from their physical assets, are more profitable, and possess higher market valuations.

Why is That?

It comes down to one word: relevance. If consumer behavior is evolving as a result of technology, businesses either compete to get ahead of it, perpetually react to it, or belittle it. One of the most problematic aspects of digital maturity is that technology is both part of the solution and part of the problem.

Enter digital transformation.

Digital transformation may sound like something you'd hear in buzzword bingo, but it is one of the most important movements facing businesses today. It forces businesses to look beyond the world as they know it and observe how things are changing on the outside, in order to change, and indeed transform, philosophies, models, and systems on the inside. Ask 10 different experts in digital transformation for their definition, though, and you may just get 10 different answers. Before strategists can pursue digital transformation, they at least have to know what it is, why it's important, and what they need to do.

In 2013, I set out to better understand the catalysts and challenges around digital transformation, as well as the people driving it forward. It is indeed a deep and complex topic, so I had to focus my research. Capgemini, among others, had already made tremendous headway with technology and process models defining the evolution of digital maturity. One thing I heard over and over was the need to know who's responsible for digital transformation and how companies can take steps in the right direction. Specifically, strategists wanted to know how to make the case in the absence of executive leadership pointing in new directions and leading teams to adapt or die. As a result, I explored digital transformation from a more human perspective. After a year of interviewing 20 leading digital strategists at some of the biggest brands around the world, I released my latest report, Digital Transformation: Why and How Companies are Investing in New Business Models to Lead Digital Customer Experiences.

What is Digital Transformation?

Again, it is a sweeping topic. Simply defined, digital transformation is the intentional effort to adapt to this onslaught of disruptive technologies and to how it is affecting customer and employee behavior. As technology becomes a permanent fixture in everyday life, organizations are forced to update legacy technology strategies and supporting methodologies to better reflect how the real world is evolving. And the need to do so is becoming increasingly obligatory.

In my research, I concentrated on how businesses are pursuing digital transformation in their quest to specifically understand how disruptive technology affects the customer experience. In turn, I learned how companies are reverse engineering investments, processes, and systems to better align with how markets are changing.

Because it focuses on customer behavior, digital transformation is, in its own way, making businesses more human. As such, digital transformation is not specifically about technology; it is empowered by it. Without a fixed end in mind, digital transformation continually seeks out ways to use technology to improve customer experiences and relationships. It also introduces new models for business and, equally, creates a way of staying in business as customers become increasingly digital.

Some key findings from my research include:

While early in its evolution, digital transformation represents the next big thing in customer experience and, ultimately, how business is done. Those companies that get it and invest more in learning about their digital customers' behaviors, preferences, and expectations will carry a significant competitive advantage over those that figure it out later (if at all). What separates typical new technology investments from those pursued by companies in my report is the ongoing search for answers to problems and opportunities presented by the nuances of digital customers.

For example:

In the end, digital transformation is not a fad or a trendy moniker. It represents the future of business through the realignment of, or new investment in, technology and business models to more effectively engage digital consumers at every touchpoint in the customer experience lifecycle. It is bigger than any one area of technology disruption, though, and that's the point. Social media, mobile, cloud, et al. are converging into a greater force that pushes businesses out of comfort zones and into areas where true innovation can manifest.

The Result?

The roles and objectives of everyday marketing, social media, web, mobile, customer service, and loyalty programs can evolve to meet the needs and expectations of a more connected and discerning digital customer. Additionally, even the smallest investments in change bring together typically disparate groups to work in harmony across the entire customer journey. This allows teams to cooperate, or merge into new groups, in uniting the digital journey to improve engagement; deliver a holistic experience; and eliminate friction, gaps, and overlap.

Perhaps the most important takeaway from my research is the pure ambition to make businesses relevant in a digital era.

The road to digital transformation is far from easy, but it carries great rewards for businesses and customers alike. It takes a village to bring about change, and it also takes the spark and perseverance of one person to spot important trends and create a sense of urgency around new possibilities.

But make no mistake: digital transformation efforts grow market opportunities and profits while scaling efficiently in the process.


Brian Solis is a principal analyst at Altimeter Group. He is also an award-winning author, prominent blogger, and keynote speaker. @briansolis

See the article here:

Digital Darwinism: How Disruptive Technology Is Changing …

 Posted by at 4:54 am  Tagged with:

How Virtual Reality Works | HowStuffWorks

 Virtual Reality  Comments Off on How Virtual Reality Works | HowStuffWorks
Jun 17, 2016

What do you think of when you hear the words virtual reality (VR)? Do you imagine someone wearing a clunky helmet attached to a computer with a thick cable? Do visions of crudely rendered pterodactyls haunt you? Do you think of Neo and Morpheus traipsing about the Matrix? Or do you wince at the term, wishing it would just go away?

If the last applies to you, you’re likely a computer scientist or engineer, many of whom now avoid the words virtual reality even while they work on technologies most of us associate with VR. Today, you’re more likely to hear someone use the words virtual environment (VE) to refer to what the public knows as virtual reality. We’ll use the terms interchangeably in this article.

Naming discrepancies aside, the concept remains the same – using computer technology to create a simulated, three-dimensional world that users can manipulate and explore while feeling as if they were in that world. Scientists, theorists, and engineers have designed dozens of devices and applications to achieve this goal. Opinions differ on what exactly constitutes a true VR experience, but in general it should include:

In this article, we’ll look at the defining characteristics of VR, some of the technology used in VR systems, a few of its applications, some concerns about virtual reality and a brief history of the discipline. In the next section, we’ll look at how experts define virtual environments, starting with immersion.

Read more here:

How Virtual Reality Works | HowStuffWorks


The Zeitgeist Movement – Skeptic Project

 Zeitgeist Movement  Comments Off on The Zeitgeist Movement – Skeptic Project
Jun 17, 2016

Author: Edward L Winston. Added: June 13, 2010.

Over the last couple of months, mainly since Zeitgeist Movement (TZM) members began trekking to our forums, I’ve gotten a lot of emails from TZM members asking me various questions. This post is to outline the topics covered in my correspondence with said members.

I’ll likely update this page as I get feedback from people.

Primarily the issues discussed are why I believe TZM will fail and why I think it’s impossible to find common ground with TZM. I want to be clear that, given a different set of circumstances which I will discuss, maybe TZM could be successful and we could find common ground, but if things don’t change, neither will my stance.

The leader of TZM, Peter Joseph, is far more damaging to his own movement than I imagine many of the hardcore members want to believe:

More could be said about Peter Joseph, and is said in later sections, but our forums are full of former TZM members who shed even more light on the emerging cult of personality around him.

The most important issue here is that Peter Joseph is the leader of TZM and his word is law. Despite claiming that he doesn't consider himself the leader, he acts unilaterally to forbid members from talking to outsiders, for example by banning members who post on our forums anything that isn't glorifying him.

Something I never stop hearing is the phrase "the movies aren't the movement." This refers to the fact that the movies promote conspiracy theories, but TZM is supposedly something else entirely, existing separately from the films. I would believe that if not for the following issues:

A lot of people don't like that I use foul language, but I needed to display the utter lack of compassion for other human beings that TZM leadership, along with some hardcore members, seems to have. The situation in Haiti, again, is a great example of this: reading many posts on the forums from members, it's quite clear that, in their view, unless The Venus Project (TVP) is going to be the solution to the problems in Haiti, there's no use in helping Haitians after the earthquake there.

I get asked "well, what are YOU doing to improve the world?" by TZM members a lot. I constantly bring up that I volunteer pretty much every weekend and donate 10% of my income to charity, and a lot of the time I donate more than that. Most come back with "charity doesn't fix the problem." While they're right that charity doesn't fix the problem permanently, sitting on a forum doesn't either, though some members have the audacity to claim that TZM is a charity, despite never lifting a finger for anyone else.

The example I use when talking to TZM members about this is:

If you saw a starving/dying man in the street, would you do something to help him, or would you say “once our movement gets to 50 million members, I’ll be able to help you, but until then, see you later!”?

That's essentially the logic behind the leadership of TZM and what many members parrot to me, just in a much nicer way. They love talking about how many children are starving to death today, but they refuse to help them today, and instead speak of some far-off future that they can't figure out how to get to.

I know and understand that not all TZM members are like this. I’ve seen some wonderful generosity and so forth coming from members, but more often than not, these members also don’t follow Peter Joseph blindly, because the ones that do refuse to help anyone else.

Here’s a list of problems that I believe TZM has:

There could be more added here later.

I don't really see a future for TZM beyond degrading into a core of hardcore members. Peter Joseph talks about a new movie coming out in October of 2010 that's going to get "millions" of new members, so essentially nearly two years of doing nothing but waiting for yet another film is all TZM has to show for itself.

I think it's all a shame, however, because getting all of those people together could have done something, could have led to actual success in some way, but it's not even close to that. This hasn't stopped members from discussing the transition to the Resource-Based Economy, despite the fact that they're discussing step 10,000 when they haven't even reached step 1 and don't seem to want to.

At this point, TZM is essentially a way to stroke Peter Joseph's ego rather than accomplish any goals.

Sometimes I’m asked what I’d change about TZM, in order to make it more acceptable. Well, while I don’t think most of these changes are possible due to the way TZM is run, I usually humor those who ask:

So, essentially my “5 point plan” is completely incompatible with a movement where Peter Joseph is the overlord.

Would you like to know more?

Visit link:

The Zeitgeist Movement – Skeptic Project


Seasteading – Wikipedia, the free encyclopedia

 Seasteading  Comments Off on Seasteading – Wikipedia, the free encyclopedia
Jun 17, 2016

Seasteading is the concept of creating permanent dwellings at sea, called seasteads, outside the territory claimed by any government. Most proposed seasteads have been modified cruising vessels. Other proposed structures have included a refitted oil platform, a decommissioned anti-aircraft platform, and custom-built floating islands.[1]

No one has created a state on the high seas that has been recognized as a sovereign state. The Principality of Sealand is a disputed micronation formed on a discarded sea fort near Suffolk, England.[2] The closest things to a seastead that have been built so far are large ocean-going ships sometimes called “floating cities”, and smaller floating islands.

The term combines the words sea and homesteading. At least two people independently began using it: Ken Neumeyer in his book Sailing the Farm (1981) and Wayne Gramlich in his article "Seasteading: Homesteading on the High Seas" (1998).[3]

Outside the Exclusive Economic Zone of 200 nautical miles (370 km), which countries can claim according to the United Nations Convention on the Law of the Sea, the high seas are not subject to the laws of any sovereign state other than that of the flag under which a ship sails. Examples of organizations using this possibility are Women on Waves, which enables abortions for women in countries where abortion is subject to strict laws, and offshore radio stations anchored in international waters. Like these organizations, a seastead would take advantage of the absence of laws and regulations outside the sovereignty of nations, choosing from among a variety of alternative legal systems such as those underwritten by "Las Portadas".[4]

“When Seasteading becomes a viable alternative, switching from one government to another would be a matter of sailing to the other without even leaving your house,” said Patri Friedman at the first annual Seasteading conference.[5][6][7]

The Seasteading Institute (TSI), founded by Wayne Gramlich and Patri Friedman on April 15, 2008, is an organization formed to facilitate the establishment of autonomous, mobile communities on seaborne platforms operating in international waters.[5][8][9] Gramlich's 1998 article "SeaSteading: Homesteading on the High Seas" outlined the notion of affordable steading and attracted the attention of Friedman with its proposal for a small-scale project.[3] The two began working together and posted their first collaborative book online in 2001, which explored aspects of seasteading from waste disposal to flags of convenience.

The project picked up mainstream exposure in 2008 after having been brought to the attention of PayPal cofounder Peter Thiel, who contributed $500,000 to fund the creation of The Seasteading Institute and has since spoken out on behalf of its viability, as seen in his essay "The Education of a Libertarian",[10] published online by Cato Unbound. The Seasteading Institute has received widespread media attention from sources such as CNN, Wired,[5] Prospect,[11] The Economist,[9] Business Insider,[12] and the BBC.[13] American journalist John Stossel wrote an article about seasteading in February 2011 and hosted Friedman on his show on the Fox Business Network.[14]

On July 31, 2011, Friedman stepped down from the role of executive director, and became chairman of the board. Friedman was replaced by Randolph Hencken. Concomitantly, the institute’s directors of business strategy and legal strategy went on to start Blueseed, the first commercial seasteading venture.[15]

Between May 31 and June 2, 2012, The Seasteading Institute held its third annual conference.[16]

In the spring of 2013,[17] the Institute launched The Floating City Project,[18] which combines principles of both seasteading and startup cities,[19] by seeking to locate a floating city within the territorial waters of an existing nation, rather than the open ocean. The institute argued that it would be easier to engineer a seastead in relatively calm, shallow waters; that the location would make it easier for residents to reach as well as to acquire goods and services from existing supply chains; and that a host nation would place a floating city within the international legal framework.

The Institute raised $27,082 from 291 funders in a crowdfunding campaign[20] and commissioned DeltaSync[21] to design a floating city concept for The Floating City Project. In December 2013, the concept report was published. The Seasteading Institute has also been collecting data from potential residents through a survey.[22]

The first seasteads are projected to be cruise ships adapted for semi-permanent habitation. Cruise ships are a proven technology, and they address most of the challenges of living at sea for extended periods of time. The cost of the first shipstead was estimated at $10M.[23]

The Seasteading Institute has been working on communities floating above the sea in spar buoys, similar to oil platforms.[24] The project would start small, using proven technology as much as possible, and try to find viable, sustainable ways of running a seastead.[25] Innovations that enable full-time living at sea will have to be developed. The cruise ship industry’s development suggests this may be possible.

A proposed design for a custom-built seastead is a floating dumbbell in which the living area is high above sea level, which minimizes the influence of waves. In 2004, research was documented in an online book that covers living on the oceans.[26]

The Seasteading Institute focuses on three areas: building a community, doing research and building the first seastead in the San Francisco Bay. In January 2009, the Seasteading Institute patented a design for a 200-person resort seastead, ClubStead, about a city block in size, produced by consultancy firm Marine Innovation & Technology. ClubStead marked the first major development in hard engineering, from extensive analysis to simulations, of the seasteading movement.[9][26][27]

At the Seasteading Institute Forum, an idea arose to create an island from modules.[28] There are several different designs for the modules, with a general consensus that reinforced concrete is the most proven, sustainable and cost-effective material for seastead structures,[29] as indicated by use in oil platforms and concrete submarines. The company AT Design Office recently made another design using the modular island method.[30]

Many architects and firms have created designs for floating cities, including Vincent Callebaut[31][32] and Paolo Soleri,[33] and companies such as Shimizu and Tangram 3DS.[34] Marshall Savage also discussed building tethered artificial islands in his book The Millennial Project: Colonizing the Galaxy in Eight Easy Steps, with several color plates illustrating his ideas. Some design competitions have also yielded designs, such as those produced by Evolo and other companies.[35][36][37]

In 2008, Friedman and Gramlich had hoped to float the first prototype seastead in the San Francisco Bay by 2010,[38][39] but by 2010 the plan was to launch a seastead by 2014.[40] The Seasteading Institute projected in 2010 that the seasteading population would exceed 150 individuals by 2015.[41]

The Seasteading Institute held its first conference in Burlingame, California, on October 10, 2008; 45 people from 9 countries attended.[42] The second Seasteading conference was significantly larger and was held in San Francisco, California, September 28–30, 2009.[43][44] The third Seasteading conference took place May 31 – June 2, 2012.[45]

As of 2011, Blueseed was a company working to launch a ship near Silicon Valley that was to serve as a visa-free startup community and entrepreneurial incubator. The shipstead planned to offer living and office space, high-speed Internet connectivity, and regular ferry service to the mainland.[46][47] The project aimed to overcome the difficulty organizations face in obtaining US work visas: residents would use the easier B-1/B-2 visas to travel to the mainland, while work would be done on the ship.[46][47] Blueseed founders Max Marty and Dario Mutabdzija met when both were employees of The Seasteading Institute.[46][47]

Seasteading has been imagined numerous times in pop culture in recent years.

Read the original:

Seasteading – Wikipedia, the free encyclopedia


Technology – Wikipedia, the free encyclopedia

 Technology  Comments Off on Technology – Wikipedia, the free encyclopedia
Jun 17, 2016

This article is about the use and knowledge of techniques and processes for producing goods and services. For other uses, see Technology (disambiguation).

Technology (“science of craft”, from Greek , techne, “art, skill, cunning of hand”; and -, -logia[3]) is the collection of techniques, skills, methods and processes used in the production of goods or services or in the accomplishment of objectives, such as scientific investigation. Technology can be the knowledge of techniques, processes, etc. or it can be embedded in machines, computers, devices and factories, which can be operated by individuals without detailed knowledge of the workings of such things.

The human species’ use of technology began with the conversion of natural resources into simple tools. The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food and the invention of the wheel helped humans to travel in and control their environment. Developments in historic times, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale. The steady progress of military technology has brought weapons of ever-increasing destructive power, from clubs to nuclear weapons.

Technology has many effects. It has helped develop more advanced economies (including today’s global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products, known as pollution, and deplete natural resources, to the detriment of Earth’s environment. Various implementations of technology influence the values of a society and new technology often raises new ethical questions. Examples include the rise of the notion of efficiency in terms of human productivity, a term originally applied only to machines, and the challenge of traditional norms.

Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticise the pervasiveness of technology in the modern world, arguing that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition.

Until recently, it was believed that the development of technology was restricted only to human beings, but 21st century scientific studies indicate that other primates and certain dolphin communities have developed simple tools and passed their knowledge to other generations.

The use of the term “technology” has changed significantly over the last 200 years. Before the 20th century, the term was uncommon in English, and usually referred to the description or study of the useful arts.[4] The term was often connected to technical education, as in the Massachusetts Institute of Technology (chartered in 1861).[5]

The term “technology” rose to prominence in the 20th century in connection with the Second Industrial Revolution. The term’s meanings changed in the early 20th century when American social scientists, beginning with Thorstein Veblen, translated ideas from the German concept of Technik into “technology”. In German and other European languages, a distinction exists between technik and technologie that is absent in English, which usually translates both terms as “technology”. By the 1930s, “technology” referred not only to the study of the industrial arts but to the industrial arts themselves.[6]

In 1937, the American sociologist Read Bain wrote that “technology includes all tools, machines, utensils, weapons, instruments, housing, clothing, communicating and transporting devices and the skills by which we produce and use them.”[7] Bain’s definition remains common among scholars today, especially social scientists. But equally prominent is the definition of technology as applied science, especially among scientists and engineers, although most social scientists who study technology reject this definition.[8] More recently, scholars have borrowed from European philosophers of “technique” to extend the meaning of technology to various forms of instrumental reason, as in Foucault’s work on technologies of the self (techniques de soi).

Dictionaries and scholars have offered a variety of definitions. The Merriam-Webster Dictionary offers a definition of the term: "the practical application of knowledge especially in a particular area" and "a capability given by the practical application of knowledge".[9] Ursula Franklin, in her 1989 "Real World of Technology" lecture, gave another definition of the concept; it is "practice, the way we do things around here".[10] The term is often used to imply a specific field of technology, or to refer to high technology or just consumer electronics, rather than technology as a whole.[11] Bernard Stiegler, in Technics and Time, 1, defines technology in two ways: as "the pursuit of life by means other than life", and as "organized inorganic matter."[12]

Technology can be most broadly defined as the entities, both material and immaterial, created by the application of mental and physical effort in order to achieve some value. In this usage, technology refers to tools and machines that may be used to solve real-world problems. It is a far-reaching term that may include simple tools, such as a crowbar or wooden spoon, or more complex machines, such as a space station or particle accelerator. Tools and machines need not be material; virtual technology, such as computer software and business methods, falls under this definition of technology.[13] W. Brian Arthur defines technology in a similarly broad way as "a means to fulfill a human purpose".[14]

The word “technology” can also be used to refer to a collection of techniques. In this context, it is the current state of humanity’s knowledge of how to combine resources to produce desired products, to solve problems, fulfill needs, or satisfy wants; it includes technical methods, skills, processes, techniques, tools and raw materials. When combined with another term, such as “medical technology” or “space technology”, it refers to the state of the respective field’s knowledge and tools. “State-of-the-art technology” refers to the high technology available to humanity in any field.

Technology can be viewed as an activity that forms or changes culture.[15] Additionally, technology is the application of math, science, and the arts for the benefit of life as it is known. A modern example is the rise of communication technology, which has lessened barriers to human interaction and, as a result, has helped spawn new subcultures; the rise of cyberculture has, at its basis, the development of the Internet and the computer.[16] Not all technology enhances culture in a creative way; technology can also help facilitate political oppression and war via tools such as guns. As a cultural activity, technology predates both science and engineering, each of which formalize some aspects of technological endeavor.

The distinction between science, engineering and technology is not always clear. Science is the reasoned investigation or study of natural phenomena, aimed at discovering enduring principles among elements of the phenomenal world by employing formal techniques such as the scientific method.[17] Technologies are not usually exclusively products of science, because they have to satisfy requirements such as utility, usability and safety.

Engineering is the goal-oriented process of designing and making tools and systems to exploit natural phenomena for practical human means, often (but not always) using results and techniques from science. The development of technology may draw upon many fields of knowledge, including scientific, engineering, mathematical, linguistic, and historical knowledge, to achieve some practical result.

Technology is often a consequence of science and engineering although technology as a human activity precedes the two fields. For example, science might study the flow of electrons in electrical conductors, by using already-existing tools and knowledge. This new-found knowledge may then be used by engineers to create new tools and machines, such as semiconductors, computers, and other forms of advanced technology. In this sense, scientists and engineers may both be considered technologists; the three fields are often considered as one for the purposes of research and reference.[18]

The exact relations between science and technology in particular have been debated by scientists, historians, and policymakers in the late 20th century, in part because the debate can inform the funding of basic and applied science. In the immediate wake of World War II, for example, in the United States it was widely considered that technology was simply “applied science” and that to fund basic science was to reap technological results in due time. An articulation of this philosophy could be found explicitly in Vannevar Bush’s treatise on postwar science policy, Science, the Endless Frontier: “New products, new industries, and more jobs require continuous additions to knowledge of the laws of nature… This essential new knowledge can be obtained only through basic scientific research.” In the late 1960s, however, this view came under direct attack, leading towards initiatives to fund science for specific tasks (initiatives resisted by the scientific community). The issue remains contentious, though most analysts resist the model that technology simply is a result of scientific research.[19][20]

The use of tools by early humans was partly a process of discovery and of evolution. Early humans evolved from a species of foraging hominids which were already bipedal,[21] with a brain mass approximately one third of modern humans.[22] Tool use remained relatively unchanged for most of early human history. Approximately 50,000 years ago, the use of tools and complex set of behaviors emerged, believed by many archaeologists to be connected to the emergence of fully modern language.[23]

Hominids started using primitive stone tools millions of years ago. The earliest stone tools were little more than a fractured rock, but approximately 40,000 years ago, pressure flaking provided a way to make much finer work.

The discovery and utilization of fire, a simple energy source with many profound uses, was a turning point in the technological evolution of humankind.[24] The exact date of its discovery is not known; evidence of burnt animal bones at the Cradle of Humankind suggests that the domestication of fire occurred before 1,000,000 BC;[25] scholarly consensus indicates that Homo erectus had controlled fire by between 500,000 BC and 400,000 BC.[26][27] Fire, fueled with wood and charcoal, allowed early humans to cook their food to increase its digestibility, improving its nutrient value and broadening the number of foods that could be eaten.[28]

Other technological advances made during the Paleolithic era were clothing and shelter; the adoption of both technologies cannot be dated exactly, but they were a key to humanity’s progress. As the Paleolithic era progressed, dwellings became more sophisticated and more elaborate; as early as 380,000 BC, humans were constructing temporary wood huts.[29][30] Clothing, adapted from the fur and hides of hunted animals, helped humanity expand into colder regions; humans began to migrate out of Africa by 200,000 BC and into other continents, such as Eurasia.[31]

Man’s technological ascent began in earnest in what is known as the Neolithic period (“New stone age”). The invention of polished stone axes was a major advance that allowed forest clearance on a large scale to create farms. Agriculture fed larger populations, and the transition to sedentism allowed simultaneously raising more children, as infants no longer needed to be carried, as nomadic ones must. Additionally, children could contribute labor to the raising of crops more readily than they could to the hunter-gatherer economy.[32][33]

With this increase in population and availability of labor came an increase in labor specialization.[34] What triggered the progression from early Neolithic villages to the first cities, such as Uruk, and the first civilizations, such as Sumer, is not specifically known; however, the emergence of increasingly hierarchical social structures and specialized labor, of trade and war amongst adjacent cultures, and the need for collective action to overcome environmental challenges such as irrigation, are all thought to have played a role.[35]

Continuing improvements led to the furnace and bellows and provided the ability to smelt and forge native metals (naturally occurring in relatively pure form).[36] Gold, copper, silver, and lead were such early metals. The advantages of copper tools over stone, bone, and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of Neolithic times (about 8000 BC).[37] Native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires. Eventually, the working of metals led to the discovery of alloys such as bronze and brass (about 4000 BC). The first uses of iron alloys such as steel date to around 1400 BC.

Meanwhile, humans were learning to harness other forms of energy. The earliest known use of wind power is the sailboat.[38] The earliest record of a ship under sail is shown on an Egyptian pot dating back to 3200 BC.[39] From prehistoric times, Egyptians probably used the power of the annual flooding of the Nile to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and ‘catch’ basins. Similarly, the early peoples of Mesopotamia, the Sumerians, learned to use the Tigris and Euphrates rivers for much the same purposes. But more extensive use of wind and water (and even human) power required another invention.

According to archaeologists, the wheel was invented around 4000 B.C. probably independently and nearly simultaneously in Mesopotamia (in present-day Iraq), the Northern Caucasus (Maykop culture) and Central Europe. Estimates on when this may have occurred range from 5500 to 3000 B.C., with most experts putting it closer to 4000 B.C. The oldest artifacts with drawings that depict wheeled carts date from about 3000 B.C.; however, the wheel may have been in use for millennia before these drawings were made. There is also evidence from the same period for the use of the potter’s wheel. More recently, the oldest-known wooden wheel in the world was found in the Ljubljana marshes of Slovenia.[40]

The invention of the wheel revolutionized trade and war. It did not take long to discover that wheeled wagons could be used to carry heavy loads. Fast (rotary) potters’ wheels enabled early mass production of pottery. But it was the use of the wheel as a transformer of energy (through water wheels, windmills, and even treadmills) that revolutionized the application of nonhuman power sources.

Innovation continued through the Middle Ages with advances such as silk, the horse collar and horseshoes in the first few hundred years after the fall of the Roman Empire. Medieval technology saw the use of simple machines (such as the lever, the screw, and the pulley) being combined to form more complicated tools, such as the wheelbarrow, windmills and clocks. The Renaissance brought forth many of these innovations, including the printing press (which facilitated the greater communication of knowledge), and technology became increasingly associated with science, beginning a cycle of mutual advancement. The advancements in technology in this era allowed a more steady supply of food, followed by the wider availability of consumer goods.

Starting in the United Kingdom in the 18th century, the Industrial Revolution was a period of great technological discovery, particularly in the areas of agriculture, manufacturing, mining, metallurgy and transport, driven by the discovery of steam power. Technology took another step in a second industrial revolution with the harnessing of electricity to create such innovations as the electric motor, light bulb and countless others. Scientific advancement and the discovery of new concepts later allowed for powered flight, and advancements in medicine, chemistry, physics and engineering. The rise in technology has led to skyscrapers and broad urban areas whose inhabitants rely on motors to transport them and their daily bread. Communication was also greatly improved with the invention of the telegraph, telephone, radio and television. The late 19th and early 20th centuries saw a revolution in transportation with the invention of the airplane and automobile.

The 20th century brought a host of innovations. In physics, the discovery of nuclear fission has led to both nuclear weapons and nuclear power. Computers were also invented and later miniaturized using transistors and integrated circuits. Information technology subsequently led to the creation of the Internet, which ushered in the current Information Age. Humans have also been able to explore space with satellites (later used for telecommunication) and in manned missions going all the way to the moon. In medicine, this era brought innovations such as open-heart surgery and later stem cell therapy along with new medications and treatments.

Complex manufacturing and construction techniques and organizations are needed to make and maintain these new technologies, and entire industries have arisen to support and develop succeeding generations of increasingly more complex tools. Modern technology increasingly relies on training and education; their designers, builders, maintainers, and users often require sophisticated general and specific training. Moreover, these technologies have become so complex that entire fields have been created to support them, including engineering, medicine, and computer science, and other fields have been made more complex, such as construction, transportation and architecture.

Generally, technicism is a reliance or confidence in technology as a benefactor of society. Taken to extreme, technicism is the belief that humanity will ultimately be able to control the entirety of existence using technology. In other words, human beings will someday be able to master all problems and possibly even control the future using technology. Some, such as Stephen V. Monsma,[41] connect these ideas to the abdication of religion as a higher moral authority.

Optimistic assumptions are made by proponents of ideologies such as transhumanism and singularitarianism, which view technological development as generally having beneficial effects for the society and the human condition. In these ideologies, technological development is morally good. Some critics see these ideologies as examples of scientism and techno-utopianism and fear the notion of human enhancement and technological singularity which they support. Some have described Karl Marx as a techno-optimist.[42]

On the somewhat skeptical side are certain philosophers like Herbert Marcuse and John Zerzan, who believe that technological societies are inherently flawed. They suggest that the inevitable result of such a society is to become evermore technological at the cost of freedom and psychological health.

Many, such as the Luddites and prominent philosopher Martin Heidegger, hold serious, although not entirely deterministic reservations, about technology (see “The Question Concerning Technology”[43]). According to Heidegger scholars Hubert Dreyfus and Charles Spinosa, “Heidegger does not oppose technology. He hopes to reveal the essence of technology in a way that ‘in no way confines us to a stultified compulsion to push on blindly with technology or, what comes to the same thing, to rebel helplessly against it.’ Indeed, he promises that ‘when we once open ourselves expressly to the essence of technology, we find ourselves unexpectedly taken into a freeing claim.'[44]” What this entails is a more complex relationship to technology than either techno-optimists or techno-pessimists tend to allow.[45]

Some of the most poignant criticisms of technology are found in what are now considered to be dystopian literary classics, for example Aldous Huxley’s Brave New World and other writings, Anthony Burgess’s A Clockwork Orange, and George Orwell’s Nineteen Eighty-Four. In Goethe’s Faust, the selling of Faust’s soul to the devil in return for power over the physical world is also often interpreted as a metaphor for the adoption of industrial technology. More recently, modern works of science fiction, such as those by Philip K. Dick and William Gibson, and films (e.g. Blade Runner, Ghost in the Shell) project highly ambivalent or cautionary attitudes toward technology’s impact on human society and identity.

The late cultural critic Neil Postman distinguished tool-using societies from technological societies and, finally, what he called “technopolies,” that is, societies that are dominated by the ideology of technological and scientific progress, to the exclusion or harm of other cultural practices, values and world-views.[46]

Darin Barney has written about technology’s impact on practices of citizenship and democratic culture, suggesting that technology can be construed as (1) an object of political debate, (2) a means or medium of discussion, and (3) a setting for democratic deliberation and citizenship. As a setting for democratic culture, Barney suggests that technology tends to make ethical questions, including the question of what a good life consists in, nearly impossible to raise, because technological societies already give an answer to the question: a good life is one that includes the use of more and more technology.[47]

Nikolas Kompridis has also written about the dangers of new technology, such as genetic engineering, nanotechnology, synthetic biology and robotics. He warns that these technologies introduce unprecedented new challenges to human beings, including the possibility of the permanent alteration of our biological nature. These concerns are shared by other philosophers, scientists and public intellectuals who have written about similar issues (e.g. Francis Fukuyama, Jürgen Habermas, Bill Joy, and Michael Sandel).[48]

Another prominent critic of technology is Hubert Dreyfus, who has published the books On the Internet and What Computers Still Can’t Do.

Another, more infamous anti-technological treatise is Industrial Society and Its Future, written by Theodore Kaczynski (aka The Unabomber) and printed in several major newspapers (and later books) as part of an effort to end his bombing campaign of the techno-industrial infrastructure.

The notion of appropriate technology, however, was developed in the 20th century (e.g., see the work of E. F. Schumacher and of Jacques Ellul) to describe situations where it was not desirable to use very new technologies or those that required access to some centralized infrastructure or parts or skills imported from elsewhere. The eco-village movement emerged in part due to this concern.

This article mainly focuses on American concerns, though its arguments can reasonably be generalized to other Western countries.

The inadequate quantity and quality of American jobs is one of the most fundamental economic challenges we face. […] What’s the linkage between technology and this fundamental problem?

In his article, Jared Bernstein, a Senior Fellow at the Center on Budget and Policy Priorities,[49] questions the widespread idea that automation, and more broadly technological advances, have mainly contributed to this growing labor market problem. His thesis appears to be a third way between optimism and skepticism. Essentially, he takes a neutral view of the linkage between technology and American concerns about unemployment and eroding wages.

He uses two main arguments to defend his point. First, because of recent technological advances, an increasing number of workers are losing their jobs. Yet scientific evidence fails to clearly demonstrate that technology has displaced so many workers that it has created more problems than it has solved. Indeed, automation threatens repetitive jobs, but higher-end jobs are still necessary because they complement technology, and manual jobs that “require flexibility, judgment and common sense”[50] remain hard to replace with machines. Second, studies have not defined clear links between recent technological advances and the wage trends of the last decades.

Therefore, according to Jared Bernstein, instead of focusing on technology and its hypothetical influence on rising American unemployment and eroding wages, one needs to worry more about “bad policy that fails to offset the imbalances in demand, trade, income and opportunity.”[50]

Thomas P. Hughes pointed out that because technology has been considered a key way to solve problems, we need to be aware of its complex and varied character in order to use it more efficiently.[51] What is the difference between a wheel or a compass and cooking machines such as an oven or a gas stove? Can we consider all of them, only a part of them, or none of them as technologies?

Technology is often considered too narrowly: according to Thomas P. Hughes, “Technology is a creative process involving human ingenuity.”[51] This definition’s emphasis on creativity avoids unbounded definitions that may mistakenly include cooking “technologies,” but it also highlights the prominent role of humans and therefore their responsibilities for the use of complex technological systems.

Yet, because technology is everywhere and has dramatically changed landscapes and societies, Hughes argued that engineers, scientists, and managers often have believed that they can use technology to shape the world as they want. They have often supposed that technology is easily controllable, and this assumption has to be thoroughly questioned.[51] For instance, Evgeny Morozov particularly challenges two concepts: Internet-centrism and solutionism.[52] Internet-centrism refers to the idea that our society is convinced that the Internet is one of the most stable and coherent forces. Solutionism is the ideology that every social issue can be solved thanks to technology, and especially thanks to the internet. In fact, technology intrinsically contains uncertainties and limitations. According to Alexis Madrigal’s critique of Morozov’s theory, ignoring these will lead to unexpected consequences that could eventually cause more damage than the problems they seek to address.[53] Benjamin Cohen and Gwen Ottinger have also discussed the multivalent effects of technology.[54]

Therefore, recognition of the limitations of technology, and more broadly of scientific knowledge, is needed, especially in cases dealing with environmental justice and health issues. Gwen Ottinger continues this reasoning and argues that the ongoing recognition of the limitations of scientific knowledge goes hand in hand with scientists’ and engineers’ new comprehension of their role. Such an approach to technology and science “[requires] technical professionals to conceive of their roles in the process differently. [They have to consider themselves as] collaborators in research and problem solving rather than simply providers of information and technical solutions”.[55]

Technology is properly defined as any application of science to accomplish a function. The science can be leading edge or well established and the function can have high visibility or be significantly more mundane but it is all technology, and its exploitation is the foundation of all competitive advantage.

Technology-based planning is what was used to build the US industrial giants before WWII (e.g., Dow, DuPont, GM), and it is what was used to transform the US into a superpower. It was not economic-based planning.

In 1983 Project Socrates was initiated in the US intelligence community to determine the source of declining US economic and military competitiveness. Project Socrates concluded that technology exploitation is the foundation of all competitive advantage and that declining US competitiveness was from decision-making in the private and public sectors switching from technology exploitation (technology-based planning) to money exploitation (economic-based planning) at the end of World War II.

Project Socrates determined that to rebuild US competitiveness, decision making throughout the US had to readopt technology-based planning. Project Socrates also determined that countries like China and India had continued executing technology-based (while the US took its detour into economic-based) planning, and as a result had considerably advanced the process and were using it to build themselves into superpowers. To rebuild US competitiveness the US decision-makers needed to adopt a form of technology-based planning that was far more advanced than that used by China and India.

Project Socrates determined that technology-based planning makes an evolutionary leap forward every few hundred years and the next evolutionary leap, the Automated Innovation Revolution, was poised to occur. In the Automated Innovation Revolution the process for determining how to acquire and utilize technology for a competitive advantage (which includes R&D) is automated so that it can be executed with unprecedented speed, efficiency and agility.

Project Socrates developed the means for automated innovation so that the US could lead the Automated Innovation Revolution in order to rebuild and maintain the country’s economic competitiveness for many generations.[56][57][58]

The use of basic technology is also a feature of animal species other than humans. These include primates such as chimpanzees, some dolphin communities,[59][60] and crows.[61][62] On a more generic view of technology as the ethology of active environmental conditioning and control, we can also point to animal examples such as beavers and their dams, or bees and their honeycombs.

The ability to make and use tools was once considered a defining characteristic of the genus Homo.[63] However, the discovery of tool construction among chimpanzees and related primates has discarded the notion that the use of technology is unique to humans. For example, researchers have observed wild chimpanzees utilising tools for foraging: some of the tools used include leaf sponges, termite fishing probes, pestles and levers.[64] West African chimpanzees also use stone hammers and anvils for cracking nuts,[65] as do capuchin monkeys of Boa Vista, Brazil.[66]

Theories of technology often attempt to predict the future of technology based on the high technology and science of the time.

View original post here:

Technology – Wikipedia, the free encyclopedia

Posted at 4:52 am

Vaccination Agenda: An Implicit Transhumanism / Dehumanism

Jun 17, 2016

Let’s face it: the only real justification for using vaccines to “immunize” ourselves against disease is derived from the natural fact that when challenged our immune systems launch a successful response. Were it not for the elegance, proficiency, and mostly asymptomatic success of our recombinatorial (antibody-based) immune systems in dealing so well with infectious challenges, vaccination would have no cause, no scientific explanation, no justification whatsoever.*

In fact, ever since the adaptive, antigen-specific immune system evolved in early vertebrates 500 million years ago, our bodies have been doing a pretty good job of keeping us alive on this planet without need for synthetic, vaccine-mediated immunity. Indeed, infectious challenges are necessary for the development of a healthy immune system and in order to prevent autoimmune conditions from emerging as a result of TH2 dominance.

In other words, take away these natural infectious challenges and the immune system can and will turn upon itself; take away these infectious challenges and lasting immunity against the tens, if not hundreds of thousands, of pathogens we are exposed to throughout our lives would not be possible.

Can vaccines really co-opt, improve upon, and replace natural immunity with synthetic immunity?

How many will this require?

Are we not already at the critical threshold of vaccine overload?

By “improving” on our humanness in this way, are we not also at the same moment departing dramatically from it?

Presently, compliance with the CDC’s immunization schedule for children from birth through 6 years of age requires 60+ vaccines* be administered, purportedly to make them healthier than non-vaccinated or naturally immunized ones.** Sixty vaccines, while a disturbingly high number (for those who retain the complementary human faculties of reason and intuition), does not, however, correctly convey just how many antigenic challenges these children face in total…

A new paper published in the journal Lupus entitled, “Mechanisms of aluminum adjuvant toxicity and autoimmunity in pediatric populations,” points out that as many as 125 antigenic compounds, along with high amounts of aluminum (Al) adjuvants, are given to children by the time they are 4 to 6 years old in some “developed” countries.

The authors also state: “Immune challenges during early development, including those vaccine-induced, can lead to permanent detrimental alterations of the brain and immune function. Experimental evidence also shows that simultaneous administration of as little as two to three immune adjuvants can overcome genetic resistance to autoimmunity.”

Vaccine adjuvants are agents that accelerate, enhance or prolong the antigen-specific immune responses vaccines intend to elicit. In essence, they enhance vaccine “efficacy,” which is defined by the ability to raise antibody titers. A vaccine’s “effectiveness,” on the other hand — and which is the real-world measure of whether a vaccine works or not — is not ascertainable through the number of antibodies produced. Whether or not a vaccine or vaccine adjuvant boosts antibodies that have actual affinity with the intended pathogen is what counts in the real world, i.e. antibody-antigen affinity, (and not the sheer volume of antibodies produced) determines whether a vaccine will be effective or not.

The semantic confusion between “vaccine efficacy” and “vaccine effectiveness” ensures that vaccines which disrupt/harm/hypersensitize the immune system by stimulating unnaturally elevated antibody titers may obtain FDA approval, despite the fact that they have never been shown to confer real-world protection. *** Some vaccine researchers have even suggested that breastfeeding, which may reduce vaccine-induced elevations in antibody titers in infants, i.e. its iatrogenic disease-promoting effects, should temporarily be delayed in order not to interfere with the vaccine’s so-called “efficacy.”

Common adjuvants include: aluminum, mineral oil, detergent-stabilized squalene-in-water, pertactin, formaldehyde, viral DNA, and phosphate, all of which are inherently toxic, no matter what the route of exposure.

Many parents today do not consider how dangerous it is to inject adjuvants directly into the muscle (and sometimes the blood, due to incorrect and/or non-existent aspiration techniques), especially in non-infected, healthy offspring whose immune systems are only just learning to launch effective responses to the innumerable pathogens already blanketing their environment.

Adequate breastfeeding, in fact, is the most successful strategy for preventing the morbidity and mortality associated with infectious challenges, and it is so distinctively mammalian (i.e., obtaining nourishment and immunity through the mammary glands) that without adequate levels, infants become much more readily susceptible to illness; only 11.3% of infants in the US were exclusively breastfed through the first six months of life (Source: CDC, 2004).

Not only have humans strayed from their mammalian roots by creating and promoting infant formula over breast milk, and then promoting synthetic immunity via vaccines over the natural immunity conferred through breastfeeding and sunlight exposure, for instance; implicit within the dominant medical model’s drive to replace natural immunity with a synthetic one is a philosophy of transhumanism, a movement that intends to improve upon and transcend our humanity and has close affiliations with some aspects of eugenics.***

The CDC’s immunization schedule reflects a callous lack of regard for the 3 billion years of evolution that brought us to our present, intact form, without elaborate technologies like vaccination — and likely only because we never had them at our disposal to inflict potentially catastrophic harm to ourselves.

The CDC is largely responsible for generating the mass public perception that there is greater harm in not “prophylactically” injecting well over 100 distinct disease-promoting and immune-disruptive substances into the bodies of healthy children than in doing so. They have been successful in instilling in the masses the concept that Nature failed in her design, and that medical and genetic technologies and interventions can be used to create a superior human being.


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of GreenMedInfo or its staff.



