
Euthanasia – Wikipedia, the free encyclopedia


This article is about euthanasia of humans. For mercy killings performed on other animals, see Animal euthanasia.

Euthanasia (from Greek: εὐθανασία; “good death”: εὖ, eu, “well” or “good”; θάνατος, thanatos, “death”) is the practice of intentionally ending a life in order to relieve pain and suffering.[1]

There are different euthanasia laws in each country. The British House of Lords Select Committee on Medical Ethics defines euthanasia as “a deliberate intervention undertaken with the express intention of ending a life, to relieve intractable suffering”.[2] In the Netherlands and Flanders, euthanasia is understood as “termination of life by a doctor at the request of a patient”.[3]

Euthanasia is categorized in different ways, including voluntary, non-voluntary, and involuntary. Voluntary euthanasia is legal in some countries. Non-voluntary euthanasia (where the patient’s consent is unavailable) is illegal in all countries. Involuntary euthanasia (conducted without asking consent or against the patient’s will) is also illegal in all countries and is usually considered murder.[4] As of 2006, euthanasia was the most active area of research in contemporary bioethics.[5]

In some countries there is a divisive public controversy over the moral, ethical, and legal issues of euthanasia. Those who are against euthanasia may argue for the sanctity of life, while proponents of euthanasia rights emphasize alleviating suffering and preserving bodily integrity, self-determination, and personal autonomy.[6] Jurisdictions where euthanasia is legal include the Netherlands, Canada,[7] Colombia, Belgium, and Luxembourg.

Like other terms borrowed from history, “euthanasia” has had different meanings depending on usage. The first apparent usage of the term “euthanasia” belongs to the historian Suetonius, who described how the Emperor Augustus, “dying quickly and without suffering in the arms of his wife, Livia, experienced the ‘euthanasia’ he had wished for.”[8] The word “euthanasia” was first used in a medical context by Francis Bacon in the 17th century, to refer to an easy, painless, happy death, during which it was a “physician’s responsibility to alleviate the ‘physical sufferings’ of the body.” Bacon referred to an “outward euthanasia” (he used the term “outward” to distinguish it from a spiritual concept, the euthanasia “which regards the preparation of the soul”).[9]

In current usage, euthanasia has been defined as the “painless inducement of a quick death”.[10] However, it is argued that this approach fails to properly define euthanasia, as it leaves open a number of possible actions which would meet the requirements of the definition, but would not be seen as euthanasia. In particular, these include situations where a person kills another, painlessly, but for no reason beyond that of personal gain; or accidental deaths that are quick and painless, but not intentional.[11][12]

Another approach incorporates the notion of suffering into the definition.[11] The definition offered by the Oxford English Dictionary incorporates suffering as a necessary condition: “the painless killing of a patient suffering from an incurable and painful disease or in an irreversible coma”.[13] This approach is included in Marvin Kohl and Paul Kurtz’s definition of it as “a mode or act of inducing or permitting death painlessly as a relief from suffering”.[14] Counterexamples can be given: such definitions may encompass killing a person suffering from an incurable disease for personal gain (such as to claim an inheritance), and commentators such as Tom Beauchamp and Arnold Davidson have argued that doing so would constitute “murder simpliciter” rather than euthanasia.[11]

The third element incorporated into many definitions is that of intentionality: the death must be intended, rather than being accidental, and the intent of the action must be a “merciful death”.[11] Michael Wreen argued that “the principal thing that distinguishes euthanasia from intentional killing simpliciter is the agent’s motive: it must be a good motive insofar as the good of the person killed is concerned.”[15] Similarly, Heather Draper speaks to the importance of motive, arguing that “the motive forms a crucial part of arguments for euthanasia, because it must be in the best interests of the person on the receiving end.”[12] Definitions such as that offered by the House of Lords Select Committee on Medical Ethics take this path, where euthanasia is defined as “a deliberate intervention undertaken with the express intention of ending a life, to relieve intractable suffering.”[2] Beauchamp and Davidson also highlight Baruch Brody’s “an act of euthanasia is one in which one person… (A) kills another person (B) for the benefit of the second person, who actually does benefit from being killed”.[16]

Draper argued that any definition of euthanasia must incorporate four elements: an agent and a subject; an intention; a causal proximity, such that the actions of the agent lead to the outcome; and an outcome. Based on this, she offered a definition incorporating those elements, stating that euthanasia “must be defined as death that results from the intention of one person to kill another person, using the most gentle and painless means possible, that is motivated solely by the best interests of the person who dies.”[17] Prior to Draper, Beauchamp and Davidson had also offered a definition that includes these elements. Their definition specifically discounts fetuses in order to distinguish between abortions and euthanasia:[18]

“In summary, we have argued… that the death of a human being, A, is an instance of euthanasia if and only if (1) A’s death is intended by at least one other human being, B, where B is either the cause of death or a causally relevant feature of the event resulting in death (whether by action or by omission); (2) there is either sufficient current evidence for B to believe that A is acutely suffering or irreversibly comatose, or there is sufficient current evidence related to A’s present condition such that one or more known causal laws supports B’s belief that A will be in a condition of acute suffering or irreversible comatoseness; (3) (a) B’s primary reason for intending A’s death is cessation of A’s (actual or predicted future) suffering or irreversible comatoseness, where B does not intend A’s death for a different primary reason, though there may be other relevant reasons, and (b) there is sufficient current evidence for either A or B that causal means to A’s death will not produce any more suffering than would be produced for A if B were not to intervene; (4) the causal means to the event of A’s death are chosen by A or B to be as painless as possible, unless either A or B has an overriding reason for a more painful causal means, where the reason for choosing the latter causal means does not conflict with the evidence in 3b; (5) A is a nonfetal organism.”[19]

Wreen, in part responding to Beauchamp and Davidson, offered a six-part definition:

“Person A committed an act of euthanasia if and only if (1) A killed B or let her die; (2) A intended to kill B; (3) the intention specified in (2) was at least partial cause of the action specified in (1); (4) the causal journey from the intention specified in (2) to the action specified in (1) is more or less in accordance with A’s plan of action; (5) A’s killing of B is a voluntary action; (6) the motive for the action specified in (1), the motive standing behind the intention specified in (2), is the good of the person killed.”[20]

Wreen also considered a seventh requirement: “(7) The good specified in (6) is, or at least includes, the avoidance of evil”, although as Wreen noted in the paper, he was not convinced that the restriction was required.[21]

In discussing his definition, Wreen noted the difficulty of justifying euthanasia when faced with the notion of the subject’s “right to life”. In response, Wreen argued that euthanasia has to be voluntary, and that “involuntary euthanasia is, as such, a great wrong”.[21] Other commentators incorporate consent more directly into their definitions. For example, in a discussion of euthanasia presented in 2003 by the European Association of Palliative Care (EAPC) Ethics Task Force, the authors offered: “Medicalized killing of a person without the person’s consent, whether nonvoluntary (where the person is unable to consent) or involuntary (against the person’s will) is not euthanasia: it is murder. Hence, euthanasia can be voluntary only.”[22] Although the EAPC Ethics Task Force argued that both non-voluntary and involuntary euthanasia could not be included in the definition of euthanasia, there is discussion in the literature about excluding one but not the other.[21]

Euthanasia may be classified according to whether a person gives informed consent into three types: voluntary, non-voluntary and involuntary.[23][24]

There is a debate within the medical and bioethics literature about whether or not the non-voluntary (and by extension, involuntary) killing of patients can be regarded as euthanasia, irrespective of intent or the patient’s circumstances. In the definitions offered by Beauchamp and Davidson and, later, by Wreen, consent on the part of the patient was not considered as one of their criteria, although it may have been required to justify euthanasia.[11][25] However, others see consent as essential.

Euthanasia conducted with the consent of the patient is termed voluntary euthanasia. Active voluntary euthanasia is legal in Belgium, Luxembourg and the Netherlands. Passive voluntary euthanasia is legal throughout the U.S. per Cruzan v. Director, Missouri Department of Health. When the patient brings about his or her own death with the assistance of a physician, the term assisted suicide is often used instead. Assisted suicide is legal in Switzerland and the U.S. states of California, Oregon, Washington, Montana and Vermont.

Euthanasia conducted when the consent of the patient is unavailable is termed non-voluntary euthanasia. Examples include child euthanasia, which is illegal worldwide but decriminalised under certain specific circumstances in the Netherlands under the Groningen Protocol.

Euthanasia conducted against the will of the patient is termed involuntary euthanasia.

Voluntary, non-voluntary and involuntary euthanasia can all be further divided into passive or active variants.[26] Passive euthanasia entails the withholding of common treatments, such as antibiotics, necessary for the continuance of life.[2] Active euthanasia entails the use of lethal substances or forces, such as administering a lethal injection, to kill and is the most controversial means. A number of authors consider these terms to be misleading and unhelpful.[2]

According to the historian N. D. A. Kemp, the origin of the contemporary debate on euthanasia started in 1870.[27] Euthanasia is known to have been debated and practiced long before that date. Euthanasia was practiced in Ancient Greece and Rome: for example, hemlock was employed as a means of hastening death on the island of Kea, a technique also employed in Marseilles. Euthanasia, in the sense of the deliberate hastening of a person’s death, was supported by Socrates, Plato and Seneca the Elder in the ancient world, although Hippocrates appears to have spoken against the practice, writing “I will not prescribe a deadly drug to please someone, nor give advice that may cause his death” (noting there is some debate in the literature about whether or not this was intended to encompass euthanasia).[28][29][30]

The term “euthanasia” in the earlier sense of supporting someone as they died was used for the first time by Francis Bacon (1561–1626). In his work, Euthanasia medica, he chose this ancient Greek word and, in doing so, distinguished between euthanasia interior, the preparation of the soul for death, and euthanasia exterior, which was intended to make the end of life easier and painless, in exceptional circumstances by shortening life. That the ancient meaning of an easy death came to the fore again in the early modern period can be seen from its definition in the 18th-century Zedlers Universallexikon.

The concept of euthanasia in the sense of alleviating the process of death goes back to the medical historian Karl Friedrich Heinrich Marx, who drew on Bacon’s philosophical ideas. According to Marx, a doctor had a moral duty to ease the suffering of death through encouragement, support and mitigation using medication. Such an “alleviation of death” reflected the contemporary Zeitgeist, but was brought into the medical canon of responsibility for the first time by Marx. Marx also stressed the distinction between the theological care of the soul of sick people and the physical care and medical treatment by doctors.[32][33]

Euthanasia in its modern sense has always been strongly opposed in the Judeo-Christian tradition. Thomas Aquinas opposed both suicide and euthanasia, arguing that the practice of euthanasia contradicted our natural human instincts of survival,[34] as did Francois Ranchin (1565–1641), a French physician and professor of medicine, and Michael Boudewijns (1601–1681), a physician and teacher.[29]:208[35] Other voices argued for euthanasia, such as John Donne in 1624,[36] and euthanasia continued to be practised. In 1678, the publication of Caspar Questel’s De pulvinari morientibus non subtrahendo (“On the pillow of which the dying should not be deprived”) initiated debate on the topic. Questel described various customs which were employed at the time to hasten the death of the dying (including the sudden removal of a pillow, which was believed to accelerate death), and argued against their use, as doing so was “against the laws of God and Nature”.[29]:209–211 This view was shared by many who followed, including Philipp Jakob Spener, Veit Riedlin and Johann Georg Krünitz.[29]:211 Despite opposition, euthanasia continued to be practised, involving techniques such as bleeding, suffocation, and removing people from their beds to be placed on the cold ground.[29]:211–214

Suicide and euthanasia became more accepted during the Age of Enlightenment.[35] Thomas More wrote of euthanasia in Utopia, although it is not clear if More was intending to endorse the practice.[29]:208–209 Other cultures have taken different approaches: for example, in Japan suicide has not traditionally been viewed as a sin, as it is used in cases of honor, and accordingly, the perceptions of euthanasia are different from those in other parts of the world.[37]

In the mid-1800s, the use of morphine to treat “the pains of death” emerged, with John Warren recommending its use in 1848. A similar use of chloroform was revealed by Joseph Bullar in 1866. However, in neither case was it recommended that the use should be to hasten death. In 1870 Samuel Williams, a schoolteacher, initiated the contemporary euthanasia debate through a speech given at the Birmingham Speculative Club in England, which was subsequently published in a one-off publication entitled Essays of the Birmingham Speculative Club, the collected works of a number of members of an amateur philosophical society.[38]:794 Williams’ proposal was to use chloroform to deliberately hasten the death of terminally ill patients:

That in all cases of hopeless and painful illness, it should be the recognized duty of the medical attendant, whenever so desired by the patient, to administer chloroform or such other anaesthetic as may by-and-bye supersede chloroform so as to destroy consciousness at once, and put the sufferer to a quick and painless death; all needful precautions being adopted to prevent any possible abuse of such duty; and means being taken to establish, beyond the possibility of doubt or question, that the remedy was applied at the express wish of the patient.

The essay was favourably reviewed in The Saturday Review, but an editorial against the essay appeared in The Spectator.[27] From there it proved to be influential, and other writers came out in support of such views: Lionel Tollemache wrote in favour of euthanasia, as did Annie Besant, the essayist and reformer who later became involved with the National Secular Society, considering it a duty to society to “die voluntarily and painlessly” when one reaches the point of becoming a ‘burden’.[27][39] Popular Science analyzed the issue in May 1873, assessing both sides of the argument.[40] Kemp notes that at the time, medical doctors did not participate in the discussion; it was “essentially a philosophical enterprise… tied inextricably to a number of objections to the Christian doctrine of the sanctity of human life”.[27]

The rise of the euthanasia movement in the United States coincided with the so-called Gilded Age, a time of social and technological change that encompassed an “individualistic conservatism that praised laissez-faire economics, scientific method, and rationalism”, along with major depressions, industrialisation and conflict between corporations and labour unions.[38]:794 It was also the period in which the modern hospital system was developed, which has been seen as a factor in the emergence of the euthanasia debate.[41]

Robert Ingersoll argued for euthanasia, stating in 1894 that where someone is suffering from a terminal illness, such as terminal cancer, they should have a right to end their pain through suicide. Felix Adler offered a similar approach, although, unlike Ingersoll, Adler did not reject religion. In fact, he argued from an Ethical Culture framework. In 1891, Adler argued that those suffering from overwhelming pain should have the right to commit suicide, and, furthermore, that it should be permissible for a doctor to assist, thus making Adler the first “prominent American” to argue for suicide in cases where people were suffering from chronic illness.[42] Both Ingersoll and Adler argued for voluntary euthanasia of adults suffering from terminal ailments.[42] Dowbiggin argues that by breaking down prior moral objections to euthanasia and suicide, Ingersoll and Adler enabled others to stretch the definition of euthanasia.[43]

The first attempt to legalise euthanasia took place in the United States, when Henry Hunt introduced legislation into the General Assembly of Ohio in 1906.[44]:614 Hunt did so at the behest of Anna Hall, a wealthy heiress who was a major figure in the euthanasia movement during the early 20th century in the United States. Hall had watched her mother die after an extended battle with liver cancer, and had dedicated herself to ensuring that others would not have to endure the same suffering. Towards this end she engaged in an extensive letter-writing campaign, recruited Lurana Sheldon and Maud Ballington Booth, and organised a debate on euthanasia at the annual meeting of the American Humane Association in 1905, described by Jacob Appel as the first significant public debate on the topic in the 20th century.[44]:614–616

Hunt’s bill called for the administration of an anesthetic to bring about a patient’s death, so long as the person was of lawful age and sound mind and was suffering from a fatal injury, an irrevocable illness, or great physical pain. It also required that the case be heard by a physician, required informed consent in front of three witnesses, and required the attendance of three physicians who had to agree that the patient’s recovery was impossible. A motion to reject the bill outright was voted down, but the bill failed to pass, 79 to 23.[38]:796[44]:618–619

Along with the Ohio euthanasia proposal, in 1906 Assemblyman Ross Gregory introduced a proposal to permit euthanasia to the Iowa legislature. However, the Iowa legislation was far broader in scope than that offered in Ohio. It allowed for the death of any person of at least ten years of age who suffered from an ailment that would prove fatal and cause extreme pain, should they be of sound mind and express a desire to artificially hasten their death. In addition, it allowed for infants to be euthanised if they were sufficiently deformed, and permitted guardians to request euthanasia on behalf of their wards. The proposed legislation also imposed penalties on physicians who refused to perform euthanasia when requested: a 6–12 month prison term and a fine of between $200 and $1,000. The proposal proved to be controversial.[44]:619–621 It engendered considerable debate and failed to pass, having been withdrawn from consideration after being passed to the Committee on Public Health.[44]:623

After 1906 the euthanasia debate reduced in intensity, resurfacing periodically but not returning to the same level of debate until the 1930s in the United Kingdom.[38]:796

The Voluntary Euthanasia Legalisation Society (now called Dignity in Dying) was founded in 1935 by Charles Killick Millard. The movement campaigned for the legalisation of euthanasia in Great Britain.

In January 1936, King George V was given a fatal dose of morphine and cocaine in order to hasten his death. At the time he was suffering from cardio-respiratory failure, and the decision to end his life was made by his physician, Lord Dawson.[45] Although this event was kept secret for over 50 years, the death of George V coincided with proposed legislation in the House of Lords to legalise euthanasia. The legislation came through the British Voluntary Euthanasia Legalisation Society.[46]

Euthanasia opponent Ian Dowbiggin argues that the early membership of the Euthanasia Society of America (ESA) reflected how many perceived euthanasia at the time, often seeing it as a eugenics matter rather than an issue concerning individual rights.[42] Dowbiggin argues that not every eugenist joined the ESA “solely for eugenic reasons”, but he postulates that there were clear ideological connections between the eugenics and euthanasia movements.[42]

A 24 July 1939 killing of a severely disabled infant in Nazi Germany was described in a BBC “Genocide Under the Nazis Timeline” as the first “state-sponsored euthanasia”.[47] Parties that consented to the killing included Hitler’s office, the parents, and the Reich Committee for the Scientific Registration of Serious and Congenitally Based Illnesses.[47] The Telegraph noted that the killing of the disabled infant (whose name was Gerhard Kretschmar; born blind, with missing limbs, subject to convulsions, and reportedly “an idiot”) provided “the rationale for a secret Nazi decree that led to ‘mercy killings’ of almost 300,000 mentally and physically handicapped people”.[48] While Kretschmar’s killing received parental consent, most of the 5,000 to 8,000 children killed afterwards were forcibly taken from their parents.[47][48]

The “euthanasia campaign” of mass murder gathered momentum on 14 January 1940 when the “handicapped” were killed with gas vans and killing centres, eventually leading to the deaths of 70,000 adult Germans.[49] Professor Robert Jay Lifton, author of The Nazi Doctors and a leading authority on the T4 program, contrasts this program with what he considers to be a genuine euthanasia. He explains that the Nazi version of “euthanasia” was based on the work of Adolf Jost, who published The Right to Death (Das Recht auf den Tod) in 1895. Lifton writes: “Jost argued that control over the death of the individual must ultimately belong to the social organism, the state. This concept is in direct opposition to the Anglo-American concept of euthanasia, which emphasizes the individual’s ‘right to die’ or ‘right to death’ or ‘right to his or her own death,’ as the ultimate human claim. In contrast, Jost was pointing to the state’s right to kill…. Ultimately the argument was biological: ‘The rights to death [are] the key to the fitness of life.’ The state must own death – must kill – in order to keep the social organism alive and healthy.”[50]

In modern terms, the use of “euthanasia” in the context of Action T4 is seen to be a euphemism to disguise a program of genocide, in which people were killed on the grounds of “disabilities, religious beliefs, and discordant individual values”.[51] Compared to the discussions of euthanasia that emerged post-war, the Nazi program may have been worded in terms that appear similar to the modern use of “euthanasia”, but there was no “mercy” and the patients were not necessarily terminally ill.[51] Despite these differences, historian and euthanasia opponent Ian Dowbiggin writes that “the origins of Nazi euthanasia, like those of the American euthanasia movement, predate the Third Reich and were intertwined with the history of eugenics and Social Darwinism, and with efforts to discredit traditional morality and ethics.”[42]:65

On January 6, 1949, the Euthanasia Society of America presented to the New York State Legislature a petition to legalize euthanasia, signed by 379 leading Protestant and Jewish ministers, the largest group of religious leaders ever to have taken this stance. A similar petition had been sent to the New York State Legislature in 1947, signed by approximately 1,000 New York physicians. Catholic religious leaders criticized the petition, saying that such a bill would “legalize a suicide-murder pact” and a “rationalization of the fifth commandment of God, ‘Thou Shalt Not Kill.'”[52] The Right Reverend Robert E. McCormick stated that

“The ultimate object of the Euthanasia Society is based on the Totalitarian principle that the state is supreme and that the individual does not have the right to live if his continuance in life is a burden or hindrance to the state. The Nazis followed this principle and compulsory Euthanasia was practiced as a part of their program during the recent war. We American citizens of New York State must ask ourselves this question: ‘Are we going to finish Hitler’s job?'”[52]

The petition brought tensions between the American Euthanasia Society and the Catholic Church to a head, contributing to a climate of anti-Catholic sentiment around issues such as birth control, eugenics, and population control.[42]

The petition did not lead to a law.

Historically, the euthanasia debate has tended to focus on a number of key concerns. According to euthanasia opponent Ezekiel Emanuel, proponents of euthanasia have presented four main arguments: a) that people have a right to self-determination, and thus should be allowed to choose their own fate; b) that assisting a subject to die might be a better choice than requiring that they continue to suffer; c) that the distinction between passive euthanasia, which is often permitted, and active euthanasia, which is not, is not substantive (or that the underlying principle, the doctrine of double effect, is unreasonable or unsound); and d) that permitting euthanasia will not necessarily lead to unacceptable consequences. Pro-euthanasia activists often point to countries like the Netherlands and Belgium, and states like Oregon, where euthanasia has been legalized, to argue that it is mostly unproblematic.

Similarly, Emanuel argues that there are four major arguments presented by opponents of euthanasia: a) not all deaths are painful; b) alternatives, such as cessation of active treatment, combined with the use of effective pain relief, are available; c) the distinction between active and passive euthanasia is morally significant; and d) legalising euthanasia will place society on a slippery slope,[53] which will lead to unacceptable consequences.[38]:797–798 In fact, in Oregon, in 2013, pain was not one of the top five reasons people sought euthanasia. Top reasons were loss of dignity and fear of burdening others.[54]

In the United States in 2013, 47% nationwide supported doctor-assisted suicide. This included 32% of Latinos, 29% of African-Americans, and almost nobody with disabilities.[54]

West’s Encyclopedia of American Law states that “a ‘mercy killing’ or euthanasia is generally considered to be a criminal homicide”[55] and is normally used as a synonym of homicide committed at a request made by the patient.[56]

The judicial sense of the term “homicide” includes any intervention undertaken with the express intention of ending a life, even to relieve intractable suffering.[56][57][58] Not all homicide is unlawful.[59] Two designations of homicide that carry no criminal punishment are justifiable and excusable homicide.[59] In most countries this is not the status of euthanasia. The term “euthanasia” is usually confined to the active variety; the University of Washington website states that “euthanasia generally means that the physician would act directly, for instance by giving a lethal injection, to end the patient’s life”.[60] Physician-assisted suicide is thus not classified as euthanasia by the US State of Oregon, where it is legal under the Oregon Death with Dignity Act, and despite its name, it is not legally classified as suicide either.[61] Unlike physician-assisted suicide, withholding or withdrawing life-sustaining treatments with patient consent (voluntary) is almost unanimously considered, at least in the United States, to be legal.[62] The use of pain medication in order to relieve suffering, even if it hastens death, has been held as legal in several court decisions.[60]

Some governments around the world have legalized voluntary euthanasia but most commonly it is still considered to be criminal homicide. In the Netherlands and Belgium, where euthanasia has been legalized, it still remains homicide although it is not prosecuted and not punishable if the perpetrator (the doctor) meets certain legal conditions.[63][64][65][66]

A survey in the United States of more than 10,000 physicians found that approximately 16% of physicians would consider halting life-sustaining therapy because the family demanded it, even if they believed that it was premature. Approximately 55% would not, and for the remaining 29%, it would depend on the circumstances.[67]

This study also stated that approximately 46% of physicians agree that physician-assisted suicide should be allowed in some cases; 41% do not, and the remaining 14% think it depends.[67]

In the United Kingdom, the pro-assisted dying group Dignity in Dying cites conflicting research on doctors’ attitudes to assisted dying: a 2009 survey published in Palliative Medicine found 64% support (to 34% opposition) for assisted dying in cases where a patient has an incurable and painful disease, while a study published in BMC Medical Ethics found 49% of doctors opposed to changing the law on assisted dying and 39% in favour.[68]


World War III – Wikipedia, the free encyclopedia


World War III (WWIII or WW3), also known as the Third World War, is a hypothetical worldwide military conflict subsequent to World Wars I and II. Because of the development and use of nuclear weapons near the end of World War II and their subsequent acquisition and deployment by many countries, it is feared that a Third World War could lead to a nuclear holocaust causing the end of human civilization and most or all human life on Earth. A common hypothesis is that a small number of people could survive such an Armageddon, possibly in deep underground blast shelters or away from Earth, such as on the Moon or Mars or in space vehicles. Another major concern is that biological warfare could cause a very large number of casualties, either intentionally or inadvertently by an accidental release of a biological agent, the unexpected mutation of an agent, or its adaptation to other species after use.

One of the first imagined scenarios, hypothesized shortly after World War II, was a nuclear war between the United States and the Soviet Union, which emerged as superpowers following World War II. This has been widely used as a premise or plot device in books, films, television productions, and video games. A few writers have instead applied the term “World War III” to the Cold War, arguing that it met the definition of a world war even though there was no direct armed conflict between the superpowers.

World War I (1914–1918) was regarded at the time as the “war to end all wars,” as it was believed there could never again be another global conflict of such magnitude. World War II (1939–1945) proved that to be false, and with the advent of the Cold War in 1947 and the adoption of nuclear weapons, the possibility of a third global conflict became more plausible. The perceived threat then decreased with the end of the Cold War in 1991 when the Soviet Union collapsed, leaving the United States as the sole global superpower. A Third World War was anticipated and planned for by military and civil authorities in many countries. Scenarios ranged from conventional warfare to limited or total nuclear warfare, even leading to the destruction of civilization.

Military planners have been war gaming various scenarios, preparing for the worst, since the early days of the Cold War. Some of those plans are now out of date and have been partially or fully declassified.

British Prime Minister Winston Churchill was concerned that, with the enormous size of Soviet forces deployed in Europe at the end of WWII and the unreliability of the Soviet leader Joseph Stalin, there was a serious threat to Western Europe. In April–May 1945, British Armed Forces developed Operation Unthinkable, thought to be the first scenario of the Third World War.[1] Its primary goal was “to impose upon Russia the will of the United States and the British Empire”.[2] The plan was rejected by the British Chiefs of Staff Committee as militarily unfeasible.

“Operation Dropshot” was the 1950s United States contingency plan for a possible nuclear and conventional war with the Soviet Union in the Western European and Asian theaters.

At the time the US nuclear arsenal was limited in size, based mostly in the United States, and depended on bombers for delivery. Dropshot included mission profiles that would have used 300 nuclear bombs and 29,000 high-explosive bombs on 200 targets in 100 cities and towns to wipe out 85% of the Soviet Union’s industrial potential at a single stroke. Between 75 and 100 of the 300 nuclear weapons were targeted to destroy Soviet combat aircraft on the ground.

The scenario was devised prior to the development of intercontinental ballistic missiles. It was also devised before Robert McNamara and President Kennedy changed the US nuclear war plan from the ‘city killing’ countervalue strike plan to “counterforce” (targeted more at military forces). Nuclear weapons at this time were not accurate enough to hit a naval base without destroying the city adjacent to it, so the aim in using them was to destroy the enemy’s industrial capacity in an effort to cripple its war economy.

In January 1950, the North Atlantic Council approved NATO’s military strategy of containment.[3] NATO military planning took on a renewed urgency following the outbreak of the Korean War in mid-1950, prompting NATO to establish a “force under a centralised command, adequate to deter aggression and to ensure the defence of Western Europe”. Allied Command Europe was established under General of the Army Dwight D. Eisenhower, US Army, on 2 April 1951.[4][5] The Western Union Defence Organization had previously carried out Exercise Verity, a 1949 multilateral exercise involving naval air strikes and submarine attacks.

Exercise Mainbrace brought together 200 ships and over 50,000 personnel to practice the defence of Denmark and Norway from Russian attack in 1952. It was the first major NATO exercise. The exercise was jointly commanded by Supreme Allied Commander Atlantic Admiral Lynde D. McCormick, USN, and Supreme Allied Commander Europe General Matthew B. Ridgeway, US Army, during the Fall of 1952.

The US, UK, Canada, France, Denmark, Norway, Portugal, Netherlands, and Belgium all participated.

Exercises Grand Slam and Longstep were naval exercises held in the Mediterranean Sea during 1952 to practice dislodging an enemy occupying force and amphibious assault. They involved over 170 warships and 700 aircraft under the overall command of Admiral Carney, who summarized the accomplishments of Exercise Grand Slam by stating: “We have demonstrated that the senior commanders of all four powers can successfully take charge of a mixed task force and handle it effectively as a working unit.”[citation needed]

The USSR called the exercises “war-like acts” by NATO, with particular reference to the participation of Norway and Denmark, and prepared for its own military maneuvers in the Soviet Zone.[6][7]

Operation Strikeback was a major NATO naval exercise held in 1957, simulating a response to an all-out Soviet attack on NATO. The exercise involved over 200 warships, 650 aircraft, and 75,000 personnel from the United States Navy, the United Kingdom’s Royal Navy, the Royal Canadian Navy, the French Navy, the Royal Netherlands Navy, and the Royal Norwegian Navy. As the largest peacetime naval operation up to that time, Operation Strikeback was characterized by military analyst Hanson W. Baldwin of The New York Times as “constituting the strongest striking fleet assembled since World War II”.[8]

Exercise Reforger (from return of forces to Germany) was an annual exercise conducted, during the Cold War, by NATO. The exercise was intended to ensure that NATO had the ability to quickly deploy forces to West Germany in the event of a conflict with the Warsaw Pact. The Warsaw Pact outnumbered NATO throughout the Cold War in conventional forces, especially armor. Therefore, in the event of a Soviet invasion, in order not to resort to tactical nuclear strikes, NATO forces holding the line against a Warsaw Pact armored spearhead would have to be quickly resupplied and replaced. Most of this support would have come across the Atlantic from the US and Canada.

Reforger was not merely a show of force; in the event of a conflict, it would be the actual plan to strengthen the NATO presence in Europe. In that instance, it would have been referred to as Operation Reforger. Important components in Reforger included the Military Airlift Command, the Military Sealift Command, and the Civil Reserve Air Fleet.

Seven Days to the River Rhine was a top secret military simulation exercise developed in 1979 by the Warsaw Pact. It started with the assumption that NATO would launch a nuclear attack on the Vistula river valley in a first-strike scenario, which would result in as many as two million Polish civilian casualties.[9] In response, a Soviet counter-strike would be carried out against West Germany, Belgium, the Netherlands and Denmark, with Warsaw Pact forces invading West Germany and aiming to stop at the River Rhine by the seventh day. Other USSR plans stopped only upon reaching the French border on day nine. Individual Warsaw Pact states were only assigned their own subpart of the strategic picture; in this case, the Polish forces were only expected to go as far as Germany. The Seven Days to the Rhine plan envisioned that Poland and Germany would be largely destroyed by nuclear exchanges, and that large numbers of troops would die of radiation sickness. It was estimated that NATO would fire nuclear weapons behind the advancing Soviet lines to cut off their supply lines and thus blunt their advance. While this plan assumed that NATO would use nuclear weapons to push back any Warsaw Pact invasion, it did not include nuclear strikes on France or the United Kingdom. When the plan was declassified, newspapers speculated that France and the UK were not to be hit in order to encourage them to withhold use of their own nuclear weapons.

Exercise Able Archer was an annual exercise by the United States military in Europe that practiced command and control procedures, with emphasis on transition from solely conventional operations to chemical, nuclear, and conventional operations during a time of war.

“Able Archer 83” was a five-day North Atlantic Treaty Organization (NATO) command post exercise starting on 7 November 1983, that spanned Western Europe, centered on the Supreme Headquarters Allied Powers Europe (SHAPE) Headquarters in Casteau, north of the city of Mons. Able Archer exercises simulated a period of conflict escalation, culminating in a coordinated nuclear attack.[10]

The realistic nature of the 1983 exercise, coupled with deteriorating relations between the United States and the Soviet Union and the anticipated arrival of strategic Pershing II nuclear missiles in Europe, led some members of the Soviet Politburo and military to believe that Able Archer 83 was a ruse of war, obscuring preparations for a genuine nuclear first strike.[10][11][12][13] In response, the Soviets readied their nuclear forces and placed air units in East Germany and Poland on alert.[14][15] This “1983 war scare” is considered by many historians to be the closest the world has come to nuclear war since the Cuban Missile Crisis of 1962.[16] The threat of nuclear war ended with the conclusion of the exercise on 11 November.[17][18]

The Strategic Defense Initiative (SDI) was proposed by US President Ronald Reagan on 23 March 1983.[19] In the later part of his presidency, numerous factors (which included watching the 1983 movie The Day After and hearing through a Soviet defector that Able Archer 83 almost triggered a Russian first strike) had turned Ronald Reagan against the concept of winnable nuclear war, and he began to see nuclear weapons as more of a “wild card” than a strategic deterrent. Although he later believed in disarmament treaties slowly blunting the danger of nuclear weaponry by reducing their number and alert status, he also believed a technological solution might allow incoming ICBMs to be shot down, thus making the US invulnerable to a first strike. However, the USSR saw the SDI concept as a major threat, since unilateral deployment of the system would allow the US to launch a massive first strike on the Soviet Union without any fear of retaliation.

The SDI concept was to use ground-based and space-based systems to protect the United States from attack by strategic nuclear ballistic missiles. The initiative focused on strategic defense rather than the prior strategic offense doctrine of Mutual Assured Destruction (MAD). The Strategic Defense Initiative Organization (SDIO) was set up in 1984 within the United States Department of Defense to oversee the Strategic Defense Initiative.

NATO operational plans for a Third World War have involved NATO allies who do not have their own nuclear weapons, using nuclear weapons supplied by the United States as part of a general NATO war plan, under the direction of NATO’s Supreme Allied Commander.

Of the three nuclear powers in NATO (France, the United Kingdom and the United States), only the United States has provided weapons for nuclear sharing. As of November 2009[update], Belgium, Germany, Italy, the Netherlands and Turkey are still hosting US nuclear weapons as part of NATO’s nuclear sharing policy.[20][21] Canada hosted weapons until 1984,[22] and Greece until 2001.[20][23] The United Kingdom also received US tactical nuclear weapons such as nuclear artillery and Lance missiles until 1992, despite the UK being a nuclear weapons state in its own right; these were mainly deployed in Germany.

In peacetime, the nuclear weapons stored in non-nuclear countries are guarded by US airmen, though previously some artillery and missile systems were guarded by US Army soldiers; the codes required for detonating them are under American control. In case of war, the weapons are to be mounted on the participating countries’ warplanes. The weapons are under the custody and control of USAF Munitions Support Squadrons co-located on NATO main operating bases, which work together with the host nation forces.[20]

As of 2005[update], 180 tactical B61 nuclear bombs of the 480 US nuclear weapons believed to be deployed in Europe fall under the nuclear sharing arrangement.[24] The weapons are stored within a vault in hardened aircraft shelters, using the USAF WS3 Weapon Storage and Security System. The delivery warplanes used are F-16s and Panavia Tornados.[25]

With the development of the arms race in the 1950s, an apocalyptic war between the United States and the Soviet Union was considered possible, and a number of events have been described as potential triggers for a nuclear conflict.

Norman Podhoretz has suggested that the Cold War can be identified as World War III[37] because it was fought, although by proxy, on a global scale, involving the United States, NATO, the Soviet Union and Warsaw Pact countries.[citation needed] Similarly, Eliot Cohen, the director of strategic studies at the Paul H. Nitze School of Advanced International Studies at Johns Hopkins University, declared, in The Wall Street Journal, that he considers World War III to be history, writing: “The Cold War was World War III, which reminds us that not all global conflicts entail the movement of multi-million-man armies, or conventional front lines on a map.”[38] On the 24 May 2011 edition of CNBC’s Kudlow and Company, host Lawrence Kudlow, discussing a book by former deputy Under-Secretary of Defense Jed Babbin, accepted the view of the Cold War as World War III, adding, “World War IV is the terror war, and war with China would be World War V.”[39]

On 1 February 2015, Iraq’s Prime Minister declared that the War on ISIS was effectively “World War III”, due to ISIS’ declaration of a Worldwide Caliphate, its aims to conquer the world, and its success in spreading the conflict to multiple countries outside of the Levant region.[40] In response to the November 2015 Paris attacks, Jordan’s King Abdullah II and Pope Francis stated that World War III was underway.[41][42]

In his State of the Union Address on 12 January 2016, US President Barack Obama countered: “as we focus on destroying ISIS, over-the-top claims that this is World War III just play into their hands. Masses of fighters on the back of pickup trucks and twisted souls plotting in apartments or garages pose an enormous danger to civilians and must be stopped. But they do not threaten our national existence.”[43]

In February 2016 Russian prime minister Dmitry Medvedev stated that sending foreign ground troops into Syria could result in a world war.[44]


Entheogens & Existential Intelligence: The Use of Plant …


Used with permission. The official published version: http://www.csse.ca/CJE/Articles/FullText/CJE27-4/CJE27-4-tupper.pdf

Painting by Yvonne McGillivray

In light of recent specific liberalizations in drug laws in some countries, this article investigates the potential of entheogens (i.e. psychoactive plants used as spiritual sacraments) as tools to facilitate existential intelligence. Plant teachers from the Americas such as ayahuasca, psilocybin mushrooms, and peyote, as well as the Indo-Aryan soma of Eurasia, are examples of both past- and presently-used entheogens. These have all been revered as spiritual or cognitive tools to provide a richer cosmological understanding of the world for both human individuals and cultures. I use Howard Gardner’s (1999a) revised multiple intelligence theory and his postulation of an existential intelligence as a theoretical lens through which to account for the cognitive possibilities of entheogens and explore potential ramifications for education.

In this article I assess and further develop the possibility of an existential intelligence as postulated by Howard Gardner (1999a). Moreover, I entertain the possibility that some kinds of psychoactive substances, entheogens, have the potential to facilitate this kind of intelligence. This issue arises from the recent liberalization of drug laws in several Western industrialized countries to allow for the sacramental use of ayahuasca, a psychoactive tea brewed from plants indigenous to the Amazon. I challenge readers to step outside a long-standing dominant paradigm in modern Western culture that a priori regards hallucinogenic drug use as necessarily maleficent and devoid of any merit. I intend for my discussion to confront assumptions about drugs that have unjustly perpetuated the disparagement and prohibition of some kinds of psychoactive substance use. More broadly, I intend for it to challenge assumptions about intelligence that constrain contemporary educational thought.

Entheogen is a word coined by scholars proposing to replace the term psychedelic (Ruck, Bigwood, Staples, Ott, & Wasson, 1979), which was felt to overly connote psychological and clinical paradigms and to be too socio-culturally loaded from its 1960s roots to appropriately designate the revered plants and substances used in traditional rituals. I use both terms in this article: entheogen when referring to a substance used as a spiritual or sacramental tool, and psychedelic when referring to one used for any number of purposes during or following the so-called psychedelic era of the 1960s (recognizing that some contemporary non-indigenous uses may be entheogenic; the categories are by no means clearly discrete). What kinds of plants or chemicals fall into the category of entheogen is a matter of debate, as a large number of inebriants, from coca and marijuana to alcohol and opium, have been venerated as gifts from the gods (or God) in different cultures at different times. For the purposes of this article, however, I focus on the class of drugs that Lewin (1924/1997) termed phantastica, a name deriving from the Greek word for the faculty of imagination (Shorter Oxford English Dictionary, 1973). Later these substances became known as hallucinogens or psychedelics, a class whose members include lysergic acid derivatives, psilocybin, mescaline and dimethyltryptamine. With the exception of mescaline, these all share similar chemical structures; all, including mescaline, produce similar phenomenological effects; and, more importantly for the present discussion, all have a history of ritual use as psychospiritual medicines or, as I argue, cultural tools to facilitate cognition (Schultes & Hofmann, 1992).

The issue of entheogen use in modern Western culture becomes more significant in light of several legal precedents in countries such as Brazil, Holland, Spain and soon perhaps the United States and Canada. Ayahuasca, which I discuss in more detail in the following section on plant teachers, was legalized for religious use by non-indigenous people in Brazil in 1987.[i] One Brazilian group, the Santo Daime, was using its sacrament in ceremonies in the Netherlands when, in the autumn of 1999, authorities intervened and arrested its leaders. This was the first case of religious intolerance by a Dutch government in over three hundred years. A subsequent legal challenge, based on European Union religious freedom laws, saw them acquitted of all charges, setting a precedent for the rest of Europe (Adelaars, 2001). A similar case in Spain resulted in the Spanish government granting the right to use ayahuasca in that country. A recent court decision in the United States by the 10th Circuit Court of Appeals, September 4th, 2003, ruled in favour of religious freedom to use ayahuasca (Center for Cognitive Liberty and Ethics, 2003). And in Canada, an application to Health Canada and the Department of Justice for exemption to the Controlled Drugs and Substances Act is pending, which may permit the Santo Daime Church the religious use of their sacrament, known as Daime or Santo Daime[ii] (J.W. Rochester, personal communication, October 8th, 2003).

One of the questions raised by this trend of liberalization in otherwise prohibitionist regulatory regimes is what benefits substances such as ayahuasca have. The discussion that follows takes up this question with respect to contemporary psychological theories about intelligence and touches on potential ramifications for education. The next section examines the metaphor of plant teachers, which is not uncommon among cultures that have traditionally practiced the entheogenic use of plants. Following that, I use Howard Gardner’s theory of multiple intelligences (1983) as a theoretical framework with which to account for cognitive implications of entheogen use. Finally, I take up a discussion of possible relevance of existential intelligence and entheogens to education.

Before moving on to a broader discussion of intelligence(s), I will provide some background on ayahuasca and entheogens. Ayahuasca has been a revered plant teacher among dozens of South American indigenous peoples for centuries, if not longer (Luna, 1984; Schultes & Hofmann, 1992). The word ayahuasca is from the Quechua language of indigenous peoples of Ecuador and Peru, and translates as “vine of the soul” (Metzner, 1999). Typically, it refers to a tea made from a jungle liana, Banisteriopsis caapi, with admixtures of other plants, but most commonly the leaves of a plant from the coffee family, Psychotria viridis (McKenna, 1999). These two plants respectively contain harmala alkaloids and dimethyltryptamine, two substances that when ingested orally create a biochemical synergy capable of producing profound alterations in consciousness (Grob, et al., 1996; McKenna, Towers & Abbot, 1984). Among the indigenous peoples of the Amazon, ayahuasca is one of the most valuable medicinal and sacramental plants in their pharmacopoeias. Although shamans in different tribes use the tea for various purposes, and have varying recipes for it, the application of ayahuasca as an effective tool to attain understanding and wisdom is one of the most prevalent (Brown, 1986; Dobkin de Rios, 1984).

Notwithstanding the explosion of popular interest in psychoactive drugs during the 1960s, ayahuasca until quite recently managed to remain relatively obscure in Western culture.[iii] However, the late 20th century saw the growth of religious movements among non-indigenous people in Brazil syncretizing the use of ayahuasca with Christian symbolism, African spiritualism, and native ritual. Two of the more widespread ayahuasca churches are the Santo Daime (Santo Daime, 2004) and the União do Vegetal (União do Vegetal, 2004). These organizations have in the past few decades gained legitimacy as valid, indeed valuable, spiritual practices providing social, psychological and spiritual benefits (Grob, 1999; Riba, et al., 2001).

Ayahuasca is not the only plant teacher in the pantheon of entheogenic tools. Other indigenous peoples of the Americas have used psilocybin mushrooms for millennia for spiritual and healing purposes (Dobkin de Rios, 1973; Wasson, 1980). Similarly, the peyote cactus has a long history of use by Mexican indigenous groups (Fikes, 1996; Myerhoff, 1974; Stewart, 1987), and is currently widely used in the United States by the Native American Church (LaBarre, 1989; Smith & Snake, 1996). And even in the early history of Western culture, the ancient Indo-Aryan texts of the Rig Veda sing the praises of the deified Soma (Pande, 1984). Although the taxonomic identity of Soma is lost, it seems to have been a plant or mushroom and had the power to reliably induce mystical experiences, an entheogen par excellence (Eliade, 1978; Wasson, 1968). The variety of entheogens extends far beyond the limited examples I have offered here. However, ayahuasca, psilocybin mushrooms, peyote and Soma are exemplars of plants which have been culturally esteemed for their psychological and spiritual impacts on both individuals and communities.

In this article I argue that the importance of entheogens lies in their role as tools, as mediators between mind and environment. Defining a psychoactive drug as a tool (perhaps a novel concept for some) invokes its capacity to effect a purposeful change on the mind/body. Commenting on Vygotsky’s notions of psychological tools, John-Steiner and Souberman (1978) note that tool use has “. . . important effects upon internal and functional relationships within the human brain” (p. 133). Although they were likely not thinking of drugs as tools, the significance of this observation becomes even more literal when the tools in question are plants or chemicals ingested with the intent of affecting consciousness through the manipulation of brain chemistry. Indeed, psychoactive plants or chemicals seem to defy the traditional bifurcation between physical and psychological tools, as they affect the mind/body (understood by modern psychologists to be identical).

It is important to consider the degree to which the potential of entheogens comes not only from their immediate neuropsychological effects, but also from the social practices (rituals) into which their use has traditionally been incorporated (Dobkin de Rios, 1996; Smith, 2000). The protective value that ritual provides for entheogen use is evident from its universal application in traditional practices (Weil, 1972/1986). Medical evidence suggests that there are minimal physiological risks associated with psychedelic drugs (Callaway, et al., 1999; Grinspoon & Bakalar, 1979/1998; Julien, 1998). Albert Hofmann (1980), the chemist who first accidentally synthesized and ingested LSD, contends that the psychological risks associated with psychedelics in modern Western culture are a function of their recreational use in unsafe circumstances. A ritual context, however, offers psychospiritual safeguards that make the potential of entheogenic plant teachers to enhance cognition an intriguing possibility.

Howard Gardner (1983) developed a theory of multiple intelligences that originally postulated seven types of intelligence.[iv] Since then, he has added a naturalist intelligence and entertained the possibility of a spiritual intelligence (1999a; 1999b). Not wanting to delve too far into territory fraught with theological pitfalls, Gardner (1999a) settled on looking at existential intelligence rather than spiritual intelligence (p. 123). Existential intelligence, as Gardner characterizes it, involves having a heightened capacity to appreciate and attend to the cosmological enigmas that define the human condition, and an exceptional awareness of the metaphysical, ontological and epistemological mysteries that have been a perennial concern for people of all cultures (1999a).

In his original formulation of the theory, Gardner challenges (narrow) mainstream definitions of intelligence with a broader one that sees intelligence as the ability to solve problems or to fashion products that are valued in at least one culture or community (1999a, p. 113). He lays out eight criteria, or signs, that he argues should be used to identify an intelligence; however, he notes that these do not constitute necessary conditions for determining an intelligence, merely desiderata that a candidate intelligence should meet (1983, p. 62). He also admits that none of his original seven intelligences fulfilled all the criteria, although they all met a majority of the eight. For existential intelligence, Gardner himself identifies six which it seems to meet; I will look at each of these and discuss their merits in relation to entheogens.

One criterion applicable to existential intelligence is the identification of a neural substrate to which the intelligence may correlate. Gardner (1999a) notes that recent neuropsychological evidence supports the hypothesis that the brain's temporal lobe plays a key role in producing mystical states of consciousness and spiritual awareness (pp. 124-125; LaPlante, 1993; Newberg, D'Aquili & Rause, 2001). He also recognizes that "certain brain centres and neural transmitters are mobilized in [altered consciousness] states, whether they are induced by the ingestion of substances or by a control of the will" (Gardner, 1999a, p. 125). Another possibility, which Gardner does not explore, is that endogenous dimethyltryptamine (DMT) in humans may play a significant role in the production of spontaneous or induced altered states of consciousness (Pert, 2001). DMT is a powerful entheogenic substance that occurs naturally in the mammalian brain (Barker, Monti & Christian, 1981) and is also a common constituent of ayahuasca and the Amazonian snuff yopo (Ott, 1994). Furthermore, DMT is a close analogue of the neurotransmitter 5-hydroxytryptamine, or serotonin. It has been known for decades that the primary neuropharmacological action of psychedelics is on serotonin systems, and serotonin is understood to be correlated with healthy modes of consciousness.

One psychiatric researcher has recently hypothesized that endogenous DMT stimulates the pineal gland to create such spontaneous psychedelic states as near-death experiences (Strassman, 2001). Whether this is correct or not, the role of DMT in the brain is an area of empirical research that deserves much more attention, especially insofar as it may contribute to an evidential foundation for existential intelligence.

Another criterion for an intelligence is the existence of individuals of exceptional ability within the domain of that intelligence. Unfortunately, existential precocity is not sufficiently valued in modern Western culture for savants in this domain to be commonly celebrated today. Gardner (1999a) observes that within Tibetan Buddhism, the choosing of lamas may involve the detection of a predisposition to existential intellect (if it is not identifying the reincarnation of a previous lama, as Tibetan Buddhists themselves believe) (p. 124). Gardner also cites Csikszentmihalyi's consideration of the early-emerging concerns for cosmic issues of the sort reported in the childhoods of future religious leaders like Gandhi and of several future physicists (Gardner, 1999a, p. 124; Csikszentmihalyi, 1996). Presumably, some individuals who are enjoined to enter a monastery or nunnery at a young age may be so directed due to an appreciable manifestation of existential awareness. Likewise, individuals from indigenous cultures who take up shamanic practice, those who "have abilities beyond others to dream, to imagine, to enter states of trance" (Larsen, 1976, p. 9), often do so because of a significant interest in cosmological concerns at a young age, which could be construed as a prodigious capacity in the domain of existential intelligence (Eliade, 1964; Greeley, 1974; Halifax, 1979).[v]

The third criterion for determining an intelligence that Gardner suggests is an identifiable set of core operational abilities that manifest that intelligence. Gardner finds this relatively unproblematic and articulates the core operations for existential intelligence as:

the capacity to locate oneself with respect to the farthest reaches of the cosmos, the infinite no less than the infinitesimal, and the related capacity to locate oneself with respect to the most existential aspects of the human condition: the significance of life, the meaning of death, the ultimate fate of the physical and psychological worlds, such profound experiences as love of another human being or total immersion in a work of art. (1999a, p. 123)

Gardner notes that, as with other more readily accepted types of intelligence, there is no specific truth that one would attain with existential intelligence: just as musical intelligence does not have to manifest itself in any specific genre or category of music, existential intelligence does not privilege any one philosophical system or spiritual doctrine. As Gardner (1999a) puts it, "there exists [with existential intelligence] a species potential, or capacity, to engage in transcendental concerns that can be aroused and deployed under certain circumstances" (p. 123). Reports on uses of psychedelics by Westerners in the 1950s and early 1960s, generated prior to their prohibition and, some might say, profanation, reveal a recurrent theme of spontaneous mystical experiences consistent with an enhanced capacity for existential intelligence (Huxley, 1954/1971; Masters & Houston, 1966; Pahnke, 1970; Smith, 1964; Watts, 1958/1969).

Another criterion for admitting an intelligence is identifying a developmental history and a set of expert end-state performances for it. Pertaining to existential intelligence, Gardner notes that all cultures have devised spiritual or metaphysical systems to deal with the inherent human capacity for existential issues, and further that these respective systems invariably have steps or levels of sophistication separating the novice from the adept. He uses the example of Pope John XXIII's description of his training to advance up the ecclesiastic hierarchy as a contemporary illustration of this point (1999a, p. 124). However, the instruction of the neophyte is a manifest part of almost all spiritual training and, again, the demanding process of imparting shamanic wisdom (which often includes how to use entheogens effectively and appropriately) is an excellent example of this process in indigenous cultures (Eliade, 1964).

A fifth criterion Gardner suggests for an intelligence is determining its evolutionary history and evolutionary plausibility. The self-reflexive question of when and why existential intelligence first arose in the Homo genus is itself one of the perennial existential questions of humankind. That it is an exclusively human trait is almost axiomatic, although a small but increasing number of researchers are willing to admit the possibility of higher forms of cognition in non-human animals (Masson & McCarthy, 1995; Vonk, 2003). Gardner (1999a) argues that only by the Upper Paleolithic period did human beings within a culture possess a brain capable of considering the cosmological issues central to existential intelligence (p. 124), and that the development of a capacity for existential thinking may be linked to a conscious sense of finite space and irreversible time, "two promising loci for stimulating imaginative explorations of transcendental spheres" (p. 124). He also suggests that thoughts about existential issues may well have evolved as responses to necessarily occurring pain, perhaps as a way of reducing pain or better equipping individuals to cope with it (Gardner, 1999a, p. 125). As with determining the evolutionary origin of language, tracing a phylogenesis of existential intelligence is conjectural at best. Its role in the development of the species is equally difficult to assess, although Winkelman (2000) argues that consciousness and shamanic practices (and presumably existential intelligence as well) stem from psychobiological adaptations integrating older and more recently evolved structures in the triune hominid brain. McKenna (1992) even goes so far as to postulate that the ingestion of psychoactive substances such as entheogenic mushrooms may have helped stimulate cognitive developments such as existential and linguistic thinking in our proto-human ancestors. Some researchers in the 1950s and 1960s found enhanced creativity and problem-solving skills among subjects given LSD and other psychedelic drugs (Harman, McKim, Mogar, Fadiman & Stolaroff, 1966; Izumi, 1970; Krippner, 1985; Stafford & Golightly, 1967), skills which certainly would have been evolutionarily advantageous to our hominid ancestors. Such avenues of investigation are beginning to be broached again by both academic scholars and amateur psychonauts (Dobkin de Rios & Janiger, 2003; Spitzer, et al., 1996; MAPS Bulletin, 2000).

The final criterion Gardner mentions as applicable to existential intelligence is susceptibility to encoding in a symbol system. Here, again, Gardner concedes that there is abundant evidence in favour of accepting existential thinking as an intelligence. In his words, "many of the most important and most enduring sets of symbol systems (e.g., those featured in the Catholic liturgy) represent crystallizations of key ideas and experiences that have evolved within [cultural] institutions" (1999a, p. 123). Another salient example that illustrates this point is the mytho-symbolism ascribed to ayahuasca visions among the Tukano, an Amazonian indigenous people. Reichel-Dolmatoff (1975) made a detailed study of these visions by asking a variety of informants to draw representations of them with sticks in the dirt (p. 174). He compiled twenty common motifs, observing that most of them bear a striking resemblance to the phosphene patterns (i.e., visual phenomena perceived in the absence of external stimuli or by applying light pressure to the eyeball) compiled by Max Knoll (Oster, 1970). The Tukano interpret these universal human neuropsychological phenomena as symbolically significant according to their traditional ayahuasca-steeped mythology, reflecting the codification of existential ideas within their culture.

Narby (1998) also examines the codification of symbols generated during ayahuasca experiences by tracing similarities between intertwining snake motifs in the visions of Amazonian shamans and the double-helix structure of deoxyribonucleic acid. He found remarkable similarities between representations of biological knowledge by indigenous shamans and those of modern geneticists. More recently, Narby (2002) has followed up on this work by bringing molecular biologists to the Amazon to participate in ayahuasca ceremonies with experienced shamans, an endeavour he suggests may provide useful cross-fertilization between divergent realms of human knowledge.

The two other criteria of an intelligence are support from experimental psychological tasks and support from psychometric findings. Gardner suggests that existential intelligence is more debatable within these domains, citing personality inventories that attempt to measure religiosity or spirituality; he notes that "it remains unclear just what is being probed by such instruments and whether self-report is a reliable index" of existential intelligence (1999a, p. 125). It seems transcendental states of consciousness and the cognition they engender do not lend themselves to quantification or easy replication in psychology laboratories. However, Strassman, Qualls, Uhlenhuth, & Kellner (1994) developed a psychometric instrument, the Hallucinogen Rating Scale, to measure human responses to intravenous administration of DMT, and it has since been used reliably for other psychedelic experiences (Riba, Rodriguez-Fornells, Strassman, & Barbanoj, 2001).

One historical area of empirical psychological research that did ostensibly stimulate a form of what might be considered existential intelligence was clinical investigation into psychedelics. Until such research became academically unfashionable and then politically impossible in the early 1970s, psychologists and clinical researchers actively explored experimentally induced transcendent experiences using drugs, in the interests of both pure science and applied medical treatments (Abramson, 1967; Cohen, 1964; Grinspoon & Bakalar, 1979/1998; Masters & Houston, 1966). One of the more famous of these was Pahnke's (1970) so-called Good Friday experiment, which attempted to induce spiritual experiences with psilocybin within a randomized, double-blind, controlled methodology. His conclusion that mystical experiences were indeed reliably produced, despite methodological problems with the study design, was borne out by a critical long-term follow-up (Doblin, 1991), which raises intriguing questions about both entheogens and existential intelligence.

Studies such as Pahnke's (1970), despite their promise, were prematurely terminated due to public pressure from a populace alarmed by burgeoning recreational drug use. Only about a decade ago did the United States government give researchers permission to renew (on a very small scale) investigations into psychedelics (Strassman, 2001; Strassman & Qualls, 1994). Cognitive psychologists are also taking an interest in entheogens such as ayahuasca (Shanon, 2002). Regardless of whether support for existential intelligence can be established psychometrically or in experimental psychological tasks, Gardner's theory expressly stipulates that not all eight criteria must be uniformly met in order for an intelligence to qualify. Nevertheless, Gardner claims to find "the phenomenon perplexing enough, and the distance from other intelligences great enough" (1999a, p. 127) to be reluctant "at present to add existential intelligence to the list. . . . At most [he is] willing, Fellini-style, to joke about 8½ intelligences" (p. 127). I contend that research into entheogens and other means of altering consciousness will further support the case for treating existential intelligence as a valid cognitive domain.

By recapitulating and augmenting Gardner's discussion of existential intelligence, I hope to have strengthened the case for its inclusion as a valid cognitive domain. However, doing so raises questions about what ramifications an acceptance of existential intelligence would have for contemporary Western educational theory and practice. How might we foster this hitherto neglected intelligence and allow it to be used in constructive ways? There is likely a range of educational practices that could be used to stimulate cognition in this domain, many of which could be readily implemented without much controversy.[vi] Yet I intentionally raise the prospect of using entheogens in this capacity (not with young children, but perhaps with older teens in the passage to adulthood) to challenge theorists, policy-makers and practitioners.[vii]

The potential of entheogens as tools for education in contemporary Western culture was identified by Aldous Huxley. Although better known as a novelist than as a philosopher of education, Huxley spent a considerable amount of time, particularly as he neared the end of his life, addressing the topic of education. As in much of his fiction, Huxley's observations and critiques of the socio-cultural forces at work in his time were cannily prescient; they bear as much, if not more, relevance in the 21st century as when they were written. Most remarkably, and most relevant to my thesis, Huxley saw entheogens as possible educational tools:

Under the current dispensation the vast majority of individuals lose, in the course of education, all the openness to inspiration, all the capacity to be aware of other things than those enumerated in the Sears-Roebuck catalogue which constitutes the conventionally real world . . . . Is it too much to hope that a system of education may some day be devised, which shall give results, in terms of human development, commensurate with the time, money, energy and devotion expended? In such a system of education it may be that mescalin or some other chemical substance may play a part by making it possible for young people to taste and see what they have learned about at second hand . . . in the writings of the religious, or the works of poets, painters and musicians. (Letter to Dr. Humphry Osmond, April 10, 1953; in Horowitz & Palmer, 1999, p. 30)

In a more literary expression of this notion, Huxley's final novel Island (1962) portrays an ideal culture that has achieved a balance of scientific and spiritual thinking, and which also incorporates the ritualized use of entheogens for education. The representation of drug use in Island contrasts markedly with the more widely known soma of his earlier novel, Brave New World (1932/1946): whereas soma was a pacifier that muted curiosity and served the interests of the controlling elite, the entheogenic moksha medicine of Island offered young adults liminal experiences that stimulated profound reflection, self-actualization and, I submit, existential intelligence.

Huxley's writings point to an implicit recognition of the capacity of entheogens to be used as educational tools. The concept of tool here refers not merely to the physical devices fashioned to aid material production but, following Vygotsky (1978), more broadly to those means of symbolic and/or cultural mediation between the mind and the world (Cole, 1996; Wertsch, 1991). Of course, deriving educational benefit from a tool requires much more than simply having and wielding it; one must also have an intrinsic respect for the object qua tool, a cultural system in which the tool is valued as such, and guides or teachers who are adept at using the tool to provide helpful direction. As Larsen (1976) remarks in discussing the phenomenon of would-be shamans in Western culture experimenting with mind-altering chemicals: "we have no symbolic vocabulary, no grounded mythological tradition to make our experiences comprehensible to us . . . no senior shamans to help ensure that our [shamanic experience of] dismemberment be followed by a rebirth" (p. 81). Given the recent history of these substances in modern Western culture, it is hardly surprising that they have been demonized (Hofmann, 1980). However, cultural practices that have traditionally used entheogens as therapeutic agents consistently incorporate protective safeguards: set and setting,[viii] established dosages, and mythocultural respect (Zinberg, 1984). The fear that inevitably arises in modern Western culture when addressing the issue of entheogens stems, I submit, not from any properties intrinsic to the substances themselves, but rather from a general misunderstanding of their power and capacity as tools. Just as a sharp knife can be used for good or ill, depending on whether it is in the hands of a skilled surgeon or a reckless youth, so too can entheogens be used or misused.

The use of entheogens such as ayahuasca exemplifies a long and ongoing tradition in many cultures of employing psychoactives as tools that stimulate foundational types of understanding (Tupper, in press). That such substances are capable of stimulating profoundly transcendent experiences is evident from both the academic literature and anecdotal reports. Accounting fully for their action, however, requires going beyond the usual explanatory schemas: applying Gardner's (1999a) theory of multiple intelligences as a heuristic framework opens new ways of understanding entheogens and their potential benefits. At the same time, entheogens bolster the case for Gardner's proposed addition of existential intelligence. This article attempts to present these concepts in such a way that the possibility of using entheogens as tools is taken seriously by those with an interest in new and transformative ideas in education.

Abramson, H. A. (Ed.). (1967). The use of LSD in psychotherapy and alcoholism. New York: Bobbs-Merrill Co. Ltd.

Adelaars, A. (2001, 21 April). Court case in Holland against the use of ayahuasca by the Dutch Santo Daime Church. Retrieved January 2, 2002 from http://www.santodaime.org/community/news/2105_holland.htm

Barker, S.A., Monti, J.A. & Christian, S.T. (1981). N,N-Dimethyltryptamine: An endogenous hallucinogen. International Review of Neurobiology. 22, 83-110.

Brown, M.F. (1986). Tsewa's gift: Magic and meaning in an Amazonian society. Washington, D.C.: Smithsonian Institution Press.

Burroughs, W. S., & Ginsberg, A. (1963). The yage letters. San Francisco, CA: City Lights Books.

Callaway, J.C., McKenna, D.J., Grob, C.S., Brito, G.S., Raymon, L.P., Poland, R.E., Andrade, E.N., & Mash, D.C. (1999). Pharmacokinetics of hoasca alkaloids in healthy humans. Journal of Ethnopharmacology. 65, 243-256.

Center for Cognitive Liberty and Ethics. (2003, September 5). 10th Circuit: Church likely to prevail in dispute over hallucinogenic tea. Retrieved February 7, 2004, from http://www.cognitiveliberty.org/dll/udv_10prelim.htm

Cohen, S. (1964). The beyond within: The LSD story. New York: Atheneum.

Cole, M. (1996). Culture in mind. Cambridge, MA: Harvard University Press.

Cremin, L. A. (1961). The transformation of the school: Progressivism in American education, 1867-1957. New York: Vintage Books.

Csikszentmihalyi, M. (1996). Creativity. New York: Harper Collins.

Davis, W. (2001, January 23). In Coulter, P. (Producer). The end of the wild [radio program]. Toronto: Canadian Broadcasting Corporation.

Dobkin de Rios, M. (1973). The influence of psychotropic flora and fauna on Maya religion. Current Anthropology. 15(2), 147-64.

Dobkin de Rios, M. (1984). Hallucinogens: Cross-cultural perspectives. Albuquerque, NM: University of New Mexico Press.

Dobkin de Rios, M. (1996). On human pharmacology of hoasca: A medical anthropology perspective. The Journal of Nervous and Mental Disease. 184(2), 95-98.

Dobkin de Rios, M., & Janiger, O. (2003). LSD, spirituality, and the creative process. Rochester, VT: Park Street Press.

Doblin, R. (1991). Pahnke's "Good Friday experiment": A long-term follow-up and methodological critique. The Journal of Transpersonal Psychology. 23(1): 1-28.

Egan, K. (2002). Getting it wrong from the beginning: Our progressivist inheritance from Herbert Spencer, John Dewey, and Jean Piaget. New Haven, CT: Yale University Press.

Eliade, M. (1964). Shamanism: Archaic techniques of ecstasy. (W.R. Trask, Trans.). New York: Pantheon Books.

Eliade, M. (1978). A history of religious ideas: From the stone age to the Eleusinian mysteries (Vol. 1). Chicago, IL: University of Chicago Press.

Fikes, J. C. (1996). A brief history of the Native American Church. In H. Smith & R. Snake (Eds.), One nation under god: The triumph of the Native American church (p. 167-73). Santa Fe, NM: Clear Light Publishers.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Gardner, H. (1999a). Are there additional intelligences? In J. Kane (Ed.), Education, information, transformation: Essays on learning and thinking (p. 111-131). Upper Saddle River, NJ: Prentice-Hall.

Gardner, H. (1999b). Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books.

Gotz, I.L. (1970). The psychedelic teacher: Drugs, mysticism, and schools. Philadelphia, PA: Westminster Press.

Greeley, A. M. (1974). Ecstasy: A way of knowing. Englewood Cliffs, NJ: Prentice-Hall.

Grinspoon, L., & Bakalar, J. B. (1998). Psychedelic drugs reconsidered. New York: The Lindesmith Center (Original work published 1979).

Grob, C. S., McKenna, D. J., Callaway, J. C., Brito, G. C., Neves, E. S., Oberlander, G., Saide, O. L., Labigalini, E., Tacla, C., Miranda, C. T., Strassman, R. J., & Boone, K. B. (1996). Human psychopharmacology of hoasca, a plant hallucinogen used in ritual context in Brazil. The Journal of Nervous and Mental Disease. 184(2), 86-94.

Grob, C. S. (1999). The psychology of ayahuasca. In R. Metzner (Ed.), Ayahuasca: Hallucinogens, consciousness, and the spirit of nature (p. 214-249). New York: Thunder's Mouth Press.

Halifax, J. (1979). Shamanic voices: A survey of visionary narratives. New York: Dutton.

Harman, W. W., McKim, R. H., Mogar, R. E., Fadiman, J., and Stolaroff, M. (1966). Psychedelic agents in creative problem-solving: A pilot study. Psychological Reports. 19: 211-227.

Hofmann, A. (1980). LSD: My problem child. (J. Ott, Trans.). New York: McGraw-Hill.

Horowitz, M., & Palmer, C. (Eds.). (1999). Moksha: Aldous Huxleys classic writings on psychedelics and the visionary experience. Rochester, VT: Park Street Press.

Huxley, A. (1946). Brave new world: A novel. New York: Harper & Row. (Original work published 1932).

Huxley, A. (1962). Island. New York: Harper & Row.

Huxley, A. (1971). The doors of perception & heaven and hell. Middlesex, England: Penguin Books. (Original work published 1954).

Izumi, K. (1970). LSD and architectural design. In B. Aaronson & H. Osmond, (Eds.), Psychedelics: The uses and implications of hallucinogenic drugs (p. 381-397). Garden City, NY: Anchor Books.

John-Steiner, V., & Souberman, E. (1978). Afterword. In L. Vygotsky, Mind in society: The development of higher psychological processes (p. 121-133). Cambridge, MA: Harvard University Press.

Julien, R.M. (1998). A primer of drug action: A concise, non-technical guide to the actions, uses, and side effects of psychoactive drugs (8th ed.). Portland, OR: W.H. Freeman & Company.

Krippner, S. (1985). Psychedelic drugs and creativity. Journal of Psychoactive Drugs. 17(4): 235-245.

LaBarre, W. (1989). The peyote cult (5th ed.). Hamden, CT: Shoe String Press.

LaPlante, E. (1993). Seized: Temporal lobe epilepsy as a medical, historical, and artistic phenomenon. New York: Harper-Collins.

Larsen, S. (1976). The shaman's doorway: Opening the mythic imagination to contemporary consciousness. New York: Harper & Row.

Lewin, L. (1997). Phantastica: A classic survey on the use and abuse of mind-altering plants. Rochester, VT: Park Street Press. (Original work published 1924).

Luna, L.E. (1984). The concept of plants as teachers among four mestizo shamans of Iquitos, northeastern Peru. Journal of Ethnopharmacology. 11(2), 135-156.

MAPS (Multidisciplinary Association for Psychedelic Studies) Bulletin. (2000). Psychedelics & Creativity. 10(3). Retrieved February 15th, 2004 from: http://www.maps.org/news-letters/v10n3/

Masson, J. M., & McCarthy, S. (1995). When elephants weep: The emotional lives of animals. New York: Delta Books.

Masters, R. E. L., & Houston, J. (1966). The varieties of psychedelic experience. New York: Holt, Rinehart and Winston.

McKenna, D.J. (1999). Ayahuasca: An ethnopharmacologic history. In R. Metzner (Ed.), Ayahuasca: Hallucinogens, consciousness, and the spirit of nature (p. 187-213). New York: Thunder's Mouth Press.

McKenna, D. J., Towers, G. H. N., & Abbot, F. (1984). Monoamine oxidase inhibitors in South American hallucinogenic plants: Tryptamine and β-carboline constituents of ayahuasca. Journal of Ethnopharmacology. 10(2), 195-223.

McKenna, T. (1992). Food of the gods: The search for the original tree of knowledge. New York: Bantam.

Metzner, R. (1999). Introduction: Amazonian vine of visions. In R. Metzner (Ed.), Ayahuasca: Hallucinogens, consciousness, and the spirit of nature (p. 1-45). New York: Thunder's Mouth Press.

Myerhoff, B. G. (1974). Peyote hunt: The sacred journey of the Huichol Indians. Ithaca, NY: Cornell University Press.

Narby, J. (1998). The cosmic serpent: DNA and the origins of knowledge. New York: Jeremy P. Tarcher/Putnam.

Narby, J. (2002). Shamans and scientists. In C.S. Grob (Ed.), Hallucinogens: A reader (p. 159-163). New York: Jeremy P. Tarcher/Putnam.

Newberg, A., D'Aquili, E., & Rause, V. (2001). Why God won't go away: Brain science and the biology of belief. New York: Ballantine Books.

Oster, G. (1970). Phosphenes. Scientific American. 222(2), 83-87.

Ott, J. (1994). Ayahuasca analogues: Pangaean entheogens. Kennewick, WA: Natural Products Co.

Pahnke, W. (1970). Drugs and Mysticism. In B. Aaronson & H. Osmond, (Eds.), Psychedelics: The uses and implications of hallucinogenic drugs (p. 145-165). Cambridge, MA: Schenkman.

Pande, C. G. (1984). Foundations of Indian culture: Spiritual vision and symbolic forms in ancient India. New Delhi: Books & Books.

Pert, C. (2001, May 26). The matter of emotions. Paper presented at the Remaining Human Forum, University of British Columbia, Vancouver.

Reichel-Dolmatoff, G. (1975). The shaman and the jaguar: A study of narcotic drugs among the Indians of Colombia. Philadelphia, PA: Temple University Press.

Riba, J., Rodriguez-Fornells, A., Urbano, G., Morte, A., Antonijoan, R., Montero, M., Callaway, J.C., & Barbanoj, M.J. (2001). Subjective effects and tolerability of the South American psychoactive beverage Ayahuasca in healthy volunteers. Psychopharmacology. 154, 85-95.

Riba, J., Rodriguez-Fornells, A., Strassman, R.J., & Barbanoj, M.J. (2001). Psychometric assessment of the Hallucinogen Rating Scale in two different populations of hallucinogen users. Drug and Alcohol Dependence. 62(3): 215-223.

Ruck, C., Bigwood, J., Staples, B., Ott, J., & Wasson, R. G. (1979). Entheogens. The Journal of Psychedelic Drugs. 11(1-2), 145-146.

Santo Daime. (2004). Santo Daime: The rainforest's doctrine. Retrieved February 7th, 2004 from http://www.santodaime.org/indexy.htm

Schultes, R. E., & Hofmann, A. (1992). Plants of the gods: Their sacred, healing, and hallucinogenic powers. Rochester, VT: Healing Arts Press.


Zinc Health Professional Fact Sheet

 Food Supplements  Comments Off on Zinc Health Professional Fact Sheet
Jun 152016
 

Introduction

See the consumer fact sheet for easy-to-read facts about zinc.

Zinc is an essential mineral that is naturally present in some foods, added to others, and available as a dietary supplement. Zinc is also found in many cold lozenges and some over-the-counter drugs sold as cold remedies.

Zinc is involved in numerous aspects of cellular metabolism. It is required for the catalytic activity of approximately 100 enzymes [1,2] and it plays a role in immune function [3,4], protein synthesis [4], wound healing [5], DNA synthesis [2,4], and cell division [4]. Zinc also supports normal growth and development during pregnancy, childhood, and adolescence [6-8] and is required for proper sense of taste and smell [9]. A daily intake of zinc is required to maintain a steady state because the body has no specialized zinc storage system [10].

Intake recommendations for zinc and other nutrients are provided in the Dietary Reference Intakes (DRIs) developed by the Food and Nutrition Board (FNB) at the Institute of Medicine of the National Academies (formerly National Academy of Sciences) [2]. DRI is the general term for a set of reference values used for planning and assessing nutrient intakes of healthy people. These values, which vary by age and gender [2], include the following:

Recommended Dietary Allowance (RDA): average daily level of intake sufficient to meet the nutrient requirements of nearly all (97%-98%) healthy individuals.
Adequate Intake (AI): established when evidence is insufficient to develop an RDA; intake at this level is assumed to ensure nutritional adequacy.
Estimated Average Requirement (EAR): average daily level of intake estimated to meet the requirements of 50% of healthy individuals; usually used to assess the nutrient intakes of groups of people.
Tolerable Upper Intake Level (UL): maximum daily intake unlikely to cause adverse health effects.

The current RDAs for zinc are listed in Table 1 [2]. For infants aged 0 to 6 months, the FNB established an AI for zinc that is equivalent to the mean intake of zinc in healthy, breastfed infants.


Food

A wide variety of foods contain zinc (Table 2) [2]. Oysters contain more zinc per serving than any other food, but red meat and poultry provide the majority of zinc in the American diet. Other good food sources include beans, nuts, certain types of seafood (such as crab and lobster), whole grains, fortified breakfast cereals, and dairy products [2,11].

Phytates, which are present in whole-grain breads, cereals, legumes, and other foods, bind zinc and inhibit its absorption [2,12,13]. Thus, the bioavailability of zinc from grains and plant foods is lower than that from animal foods, although many grain- and plant-based foods are still good sources of zinc [2].

* DV = Daily Value. DVs were developed by the U.S. Food and Drug Administration to help consumers compare the nutrient contents of products within the context of a total diet. The DV for zinc is 15 mg for adults and children age 4 and older. Food labels, however, are not required to list zinc content unless a food has been fortified with this nutrient. Foods providing 20% or more of the DV are considered to be high sources of a nutrient.
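
The %DV arithmetic described above can be sketched in a few lines of Python. The 15 mg adult Daily Value and the 20% "high source" cutoff come from this fact sheet; the function names are illustrative assumptions, and current food labels may use a different DV.

```python
# Sketch of the %DV calculation described above, assuming the 15 mg adult DV
# quoted in this fact sheet.

DV_ZINC_MG = 15.0  # Daily Value for adults and children age 4 and older (as stated above)

def percent_dv(zinc_mg_per_serving: float) -> float:
    """Percentage of the Daily Value supplied by one serving."""
    return 100.0 * zinc_mg_per_serving / DV_ZINC_MG

def is_high_source(zinc_mg_per_serving: float) -> bool:
    """Foods providing 20% or more of the DV are considered high sources."""
    return percent_dv(zinc_mg_per_serving) >= 20.0

print(percent_dv(3.0), is_high_source(3.0))  # 20.0 True
```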

The U.S. Department of Agriculture’s (USDA’s) Nutrient Database Web site [11] lists the nutrient content of many foods and provides a comprehensive list of foods containing zinc arranged by nutrient content and by food name.

Dietary supplements

Supplements contain several forms of zinc, including zinc gluconate, zinc sulfate, and zinc acetate. The percentage of elemental zinc varies by form. For example, approximately 23% of zinc sulfate consists of elemental zinc; thus, 220 mg of zinc sulfate contains 50 mg of elemental zinc. The elemental zinc content appears in the Supplement Facts panel on the supplement container. Research has not determined whether differences exist among forms of zinc in absorption, bioavailability, or tolerability.
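
To make the elemental-zinc arithmetic above concrete, here is a minimal Python sketch. The roughly 23% figure for zinc sulfate comes from the text; the lookup-table structure and function name are illustrative assumptions, not label guidance.

```python
# Estimating elemental zinc from the compound form listed on a supplement label.
# The 23% figure for zinc sulfate is taken from the paragraph above; fractions
# for other forms would need to come from the product's Supplement Facts panel.

ELEMENTAL_FRACTION = {
    "zinc sulfate": 0.23,
}

def elemental_zinc_mg(form: str, compound_mg: float) -> float:
    """Approximate elemental zinc (mg) in a given mass of a zinc compound."""
    return compound_mg * ELEMENTAL_FRACTION[form]

print(round(elemental_zinc_mg("zinc sulfate", 220)))  # ~51 mg, matching the ~50 mg example above
```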

In addition to standard tablets and capsules, some zinc-containing cold lozenges are labeled as dietary supplements.

Other sources

Zinc is present in several products, including some labeled as homeopathic medications, sold over the counter for the treatment and prevention of colds. Numerous case reports of anosmia (loss of the sense of smell), in some cases long-lasting or permanent, have been associated with the use of zinc-containing nasal gels or sprays [14,15]. In June 2009, the FDA warned consumers to stop using three zinc-containing intranasal products because they might cause anosmia [16]. The manufacturer recalled these products from the marketplace. Currently, these safety concerns have not been found to be associated with cold lozenges containing zinc.

Zinc is also present in some denture adhesive creams at levels ranging from 17–34 mg/g [17]. While use of these products as directed (0.5–1.5 g/day) is not of concern, chronic, excessive use can lead to zinc toxicity, resulting in copper deficiency and neurologic disease. Such toxicity has been reported in individuals who used 2 or more standard 2.4 oz tubes of denture cream per week [17,18]. Many denture creams have now been reformulated to eliminate zinc.
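
The rough arithmetic behind that denture-cream scenario can be sketched as follows. The 17–34 mg/g range and the two-tubes-per-week figure come from the text; the ounce-to-gram conversion and function names are mine.

```python
# Rough estimate of daily zinc exposure from heavy denture-cream use.
# Inputs (2.4 oz tubes, 17-34 mg zinc per gram) are taken from the text above.

OZ_TO_G = 28.35  # grams per ounce

def daily_zinc_mg(tubes_per_week: float, tube_oz: float = 2.4, zinc_mg_per_g: float = 34.0) -> float:
    """Estimated elemental zinc (mg/day) from denture adhesive cream."""
    grams_per_day = tubes_per_week * tube_oz * OZ_TO_G / 7.0
    return grams_per_day * zinc_mg_per_g

# Two tubes per week at the top of the zinc range works out to roughly 660 mg/day,
# whereas directed use (0.5-1.5 g of cream per day) would supply only about 9-50 mg/day.
print(round(daily_zinc_mg(2)))
```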

Most infants (especially those who are formula fed), children, and adults in the United States consume recommended amounts of zinc according to two national surveys, the 1988–1991 National Health and Nutrition Examination Survey (NHANES III) [19] and the 1994 Continuing Survey of Food Intakes by Individuals (CSFII) [20].

However, some evidence suggests that zinc intakes among older adults might be marginal. An analysis of NHANES III data found that 35%–45% of adults aged 60 years or older had zinc intakes below the estimated average requirement of 6.8 mg/day for elderly females and 9.4 mg/day for elderly males. When the investigators considered intakes from both food and dietary supplements, they found that 20%–25% of older adults still had inadequate zinc intakes [21].
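
The kind of intake-adequacy tabulation behind those NHANES figures can be illustrated with a short sketch. The elderly EAR values are the ones quoted above; the sample data and function name are invented for the example.

```python
# Fraction of older adults with zinc intakes below the sex-specific EAR.
# EAR values (6.8 mg/day for elderly women, 9.4 mg/day for elderly men) are
# taken from the paragraph above; the records below are made-up sample data.

EAR_MG = {"female": 6.8, "male": 9.4}

def fraction_below_ear(records):
    """records: iterable of (sex, zinc_mg_per_day) tuples for adults aged 60+."""
    records = list(records)
    below = sum(1 for sex, mg in records if mg < EAR_MG[sex])
    return below / len(records)

sample = [("female", 5.9), ("male", 10.2), ("female", 7.4), ("male", 8.0)]
print(fraction_below_ear(sample))  # 0.5 in this toy sample
```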

Zinc intakes might also be low in older adults from the 2%–4% of U.S. households that are food insufficient (sometimes or often not having enough food) [22]. Data from NHANES III indicate that adults aged 60 years or older from food-insufficient families had lower intakes of zinc and several other nutrients and were more likely to have zinc intakes below 50% of the RDA on a given day than those from food-sufficient families [23].

Zinc deficiency is characterized by growth retardation, loss of appetite, and impaired immune function. In more severe cases, zinc deficiency causes hair loss, diarrhea, delayed sexual maturation, impotence, hypogonadism in males, and eye and skin lesions [2,8,24,25]. Weight loss, delayed healing of wounds, taste abnormalities, and mental lethargy can also occur [5,8,26-30]. Many of these symptoms are non-specific and often associated with other health conditions; therefore, a medical examination is necessary to ascertain whether a zinc deficiency is present.

Zinc nutritional status is difficult to measure adequately using laboratory tests [2,31,32] due to its distribution throughout the body as a component of various proteins and nucleic acids [33]. Plasma or serum zinc levels are the most commonly used indices for evaluating zinc deficiency, but these levels do not necessarily reflect cellular zinc status due to tight homeostatic control mechanisms [8]. Clinical effects of zinc deficiency can be present in the absence of abnormal laboratory indices [8]. Clinicians consider risk factors (such as inadequate caloric intake, alcoholism, and digestive diseases) and symptoms of zinc deficiency (such as impaired growth in infants and children) when determining the need for zinc supplementation [2].

In North America, overt zinc deficiency is uncommon [2]. When zinc deficiency does occur, it is usually due to inadequate zinc intake or absorption, increased losses of zinc from the body, or increased requirements for zinc [26,27,34]. People at risk of zinc deficiency or inadequacy need to include good sources of zinc in their daily diets. Supplemental zinc might also be appropriate in certain situations.

People with gastrointestinal and other diseases

Gastrointestinal surgery and digestive disorders (such as ulcerative colitis, Crohn's disease, and short bowel syndrome) can decrease zinc absorption and increase endogenous zinc losses primarily from the gastrointestinal tract and, to a lesser extent, from the kidney [2,26,35,36]. Other diseases associated with zinc deficiency include malabsorption syndrome, chronic liver disease, chronic renal disease, sickle cell disease, diabetes, malignancy, and other chronic illnesses [37]. Chronic diarrhea also leads to excessive loss of zinc [24].

Vegetarians

The bioavailability of zinc from vegetarian diets is lower than from non-vegetarian diets because vegetarians do not eat meat, which is high in bioavailable zinc and may enhance zinc absorption. In addition, vegetarians typically eat high levels of legumes and whole grains, which contain phytates that bind zinc and inhibit its absorption [31,38].

Vegetarians sometimes require as much as 50% more of the RDA for zinc than non-vegetarians [2]. In addition, they might benefit from using certain food preparation techniques that reduce the binding of zinc by phytates and increase its bioavailability. Techniques to increase zinc bioavailability include soaking beans, grains, and seeds in water for several hours before cooking them and allowing them to sit after soaking until sprouts form [38]. Vegetarians can also increase their zinc intake by consuming more leavened grain products (such as bread) than unleavened products (such as crackers) because leavening partially breaks down the phytate; thus, the body absorbs more zinc from leavened grains than unleavened grains.

Pregnant and lactating women

Pregnant women, particularly those starting their pregnancy with marginal zinc status, are at increased risk of becoming zinc insufficient due, in part, to high fetal requirements for zinc [39]. Lactation can also deplete maternal zinc stores [40]. For these reasons, the RDA for zinc is higher for pregnant and lactating women than for other women (see Table 1) [2].

Older infants who are exclusively breastfed

Breast milk provides sufficient zinc (2 mg/day) for the first 4–6 months of life but does not provide recommended amounts of zinc for infants aged 7–12 months, who need 3 mg/day [2,33]. In addition to breast milk, infants aged 7–12 months should consume age-appropriate foods or formula containing zinc [2]. Zinc supplementation has improved the growth rate in some children who demonstrate mild-to-moderate growth failure and who have a zinc deficiency [24,41].

People with sickle cell disease

Results from a large cross-sectional survey suggest that 44% of children with sickle cell disease have a low plasma zinc concentration [42], possibly due to increased nutrient requirements and/or poor nutritional status [43]. Zinc deficiency also affects approximately 60%–70% of adults with sickle cell disease [44]. Zinc supplementation has been shown to improve growth in children with sickle cell disease [43].

Alcoholics

Approximately 30%–50% of alcoholics have low zinc status because ethanol consumption decreases intestinal absorption of zinc and increases urinary zinc excretion [44]. In addition, the variety and amount of food consumed by many alcoholics is limited, leading to inadequate zinc intake [2,46,47].

Immune function

Severe zinc deficiency depresses immune function [48], and even mild to moderate degrees of zinc deficiency can impair macrophage and neutrophil functions, natural killer cell activity, and complement activity [49]. The body requires zinc to develop and activate T-lymphocytes [2,50]. Individuals with low zinc levels have shown reduced lymphocyte proliferation response to mitogens and other adverse alterations in immunity that can be corrected by zinc supplementation [49,51]. These alterations in immune function might explain why low zinc status has been associated with increased susceptibility to pneumonia and other infections in children in developing countries and the elderly [52-55].

Wound healing

Zinc helps maintain the integrity of skin and mucosal membranes [49]. Patients with chronic leg ulcers have abnormal zinc metabolism and low serum zinc levels [56], and clinicians frequently treat skin ulcers with zinc supplements [57]. The authors of a systematic review concluded that zinc sulfate might be effective for treating leg ulcers in some patients who have low serum zinc levels [58,59]. However, research has not shown that the general use of zinc sulfate in patients with chronic leg ulcers or arterial or venous ulcers is effective [58,59].

Diarrhea

Acute diarrhea is associated with high rates of mortality among children in developing countries [60]. Zinc deficiency causes alterations in immune response that probably contribute to increased susceptibility to infections, such as those that cause diarrhea, especially in children [49].

Studies show that poor, malnourished children in India, Africa, South America, and Southeast Asia experience shorter courses of infectious diarrhea after taking zinc supplements [61]. The children in these studies received 4–40 mg of zinc a day in the form of zinc acetate, zinc gluconate, or zinc sulfate [61].

In addition, results from a pooled analysis of randomized controlled trials of zinc supplementation in developing countries suggest that zinc helps reduce the duration and severity of diarrhea in zinc-deficient or otherwise malnourished children [62]. Similar findings were reported in a meta-analysis published in 2008 and a 2007 review of zinc supplementation for preventing and treating diarrhea [63,64]. The effects of zinc supplementation on diarrhea in children with adequate zinc status, such as most children in the United States, are not clear.

The World Health Organization and UNICEF now recommend short-term zinc supplementation (20 mg of zinc per day, or 10 mg for infants under 6 months, for 10–14 days) to treat acute childhood diarrhea [60].
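
As a way of restating the WHO/UNICEF short-course recommendation just described, the sketch below encodes the quoted doses and duration; it is illustrative only, not clinical guidance, and the function name is mine.

```python
# Encodes the WHO/UNICEF short-term zinc regimen for acute childhood diarrhea
# as described above: 20 mg/day (10 mg/day for infants under 6 months) for 10-14 days.

def diarrhea_zinc_course(age_months: int) -> dict:
    dose_mg = 10 if age_months < 6 else 20
    return {"dose_mg_per_day": dose_mg, "duration_days": (10, 14)}

print(diarrhea_zinc_course(4))   # {'dose_mg_per_day': 10, 'duration_days': (10, 14)}
print(diarrhea_zinc_course(24))  # {'dose_mg_per_day': 20, 'duration_days': (10, 14)}
```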

The common cold

Researchers have hypothesized that zinc could reduce the severity and duration of cold symptoms by directly inhibiting rhinovirus binding and replication in the nasal mucosa and suppressing inflammation [65,66]. Although studies examining the effect of zinc treatment on cold symptoms have had somewhat conflicting results, overall zinc appears to be beneficial under certain circumstances. Several studies are described below in which zinc is administered as a lozenge or zinc-containing syrup that temporarily "sticks" in the mouth and throat. This allows zinc to make contact with the rhinovirus in those areas.

In a randomized, double-blind, placebo-controlled clinical trial, 50 subjects (within 24 hours of developing the common cold) took a zinc acetate lozenge (13.3 mg zinc) or placebo every 2–3 wakeful hours. Compared with placebo, the zinc lozenges significantly reduced the duration of cold symptoms (cough, nasal discharge, and muscle aches) [67].

In another clinical trial involving 273 participants with experimentally induced colds, zinc gluconate lozenges (providing 13.3 mg zinc) significantly reduced the duration of illness compared with placebo but had no effect on symptom severity [68]. However, treatment with zinc acetate lozenges (providing 5 or 11.5 mg zinc) had no effect on either cold duration or severity. Neither zinc gluconate nor zinc acetate lozenges affected the duration or severity of cold symptoms in 281 subjects with natural (not experimentally induced) colds in another trial [68].

In 77 participants with natural colds, a combination of zinc gluconate nasal spray and zinc orotate lozenges (37 mg zinc every 2–3 wakeful hours) was also found to have no effect on the number of asymptomatic patients after 7 days of treatment [69].

In September of 2007, Caruso and colleagues published a structured review of the effects of zinc lozenges, nasal sprays, and nasal gels on the common cold [66]. Of the 14 randomized, placebo-controlled studies included, 7 (5 using zinc lozenges, 2 using a nasal gel) showed that the zinc treatment had a beneficial effect and 7 (5 using zinc lozenges, 1 using a nasal spray, and 1 using lozenges and a nasal spray) showed no effect.

More recently, a Cochrane review concluded that “zinc (lozenges or syrup) is beneficial in reducing the duration and severity of the common cold in healthy people, when taken within 24 hours of onset of symptoms” [70]. The author of another review completed in 2004 also concluded that zinc can reduce the duration and severity of cold symptoms [65]. However, more research is needed to determine the optimal dosage, zinc formulation and duration of treatment before a general recommendation for zinc in the treatment of the common cold can be made [70].

As previously noted, the safety of intranasal zinc has been called into question because of numerous reports of anosmia (loss of smell), in some cases long-lasting or permanent, from the use of zinc-containing nasal gels or sprays [14-16].

Age-related macular degeneration

Researchers have suggested that both zinc and antioxidants delay the progression of age-related macular degeneration (AMD) and vision loss, possibly by preventing cellular damage in the retina [71,72]. In a population-based cohort study in the Netherlands, high dietary intake of zinc as well as beta carotene, vitamin C, and vitamin E was associated with reduced risk of AMD in elderly subjects [73]. However, the authors of a systematic review and meta-analysis published in 2007 concluded that zinc is not effective for the primary prevention of early AMD [74], although zinc might reduce the risk of progression to advanced AMD.

The Age-Related Eye Disease Study (AREDS), a large, randomized, placebo-controlled, clinical trial (n = 3,597), evaluated the effect of high doses of selected antioxidants (500 mg vitamin C, 400 IU vitamin E, and 15 mg beta-carotene) with or without zinc (80 mg as zinc oxide) on the development of advanced AMD in older individuals with varying degrees of AMD [72]. Participants also received 2 mg copper to prevent the copper deficiency associated with high zinc intakes. After an average follow-up period of 6.3 years, supplementation with antioxidants plus zinc (but not antioxidants alone) significantly reduced the risk of developing advanced AMD and reduced visual acuity loss. Zinc supplementation alone significantly reduced the risk of developing advanced AMD in subjects at higher risk but not in the total study population. Visual acuity loss was not significantly affected by zinc supplementation alone. A follow-up AREDS2 study confirmed the value of this supplement in reducing the progression of AMD over a median follow-up period of 5 years [75]. Importantly, AREDS2 revealed that a formulation providing 25 mg zinc (about one-third the amount in the original AREDS formulation) provided the same protective effect against developing advanced AMD.

Two other small clinical trials evaluated the effects of supplementation with 200 mg zinc sulfate (providing 45 mg zinc) for 2 years in subjects with drusen or macular degeneration. Zinc supplementation significantly reduced visual acuity loss in one of the studies [76] but had no effect in the other [77].

A Cochrane review concluded that the evidence supporting the use of antioxidant vitamins and zinc for AMD comes primarily from the AREDS study [71]. Individuals who have or are developing AMD should talk to their health care provider about taking a zinc-containing AREDS supplement.

Interactions with iron and copper

Iron-deficiency anemia is a serious world-wide public health problem. Iron fortification programs have been credited with improving the iron status of millions of women, infants, and children. Fortification of foods with iron does not significantly affect zinc absorption. However, large amounts of supplemental iron (greater than 25 mg) might decrease zinc absorption [2,78]. Taking iron supplements between meals helps decrease its effect on zinc absorption [78].

High zinc intakes can inhibit copper absorption, sometimes producing copper deficiency and associated anemia [79,80]. For this reason, dietary supplement formulations containing high levels of zinc, such as the one used in the AREDS study [72], sometimes contain copper.

Zinc toxicity can occur in both acute and chronic forms. Acute adverse effects of high zinc intake include nausea, vomiting, loss of appetite, abdominal cramps, diarrhea, and headaches [2]. One case report cited severe nausea and vomiting within 30 minutes of ingesting 4 g of zinc gluconate (570 mg elemental zinc) [81]. Intakes of 150–450 mg of zinc per day have been associated with such chronic effects as low copper status, altered iron function, reduced immune function, and reduced levels of high-density lipoproteins [82]. Reductions in a copper-containing enzyme, a marker of copper status, have been reported with even moderately high zinc intakes of approximately 60 mg/day for up to 10 weeks [2]. The doses of zinc used in the AREDS study (80 mg per day of zinc in the form of zinc oxide for 6.3 years, on average) have been associated with a significant increase in hospitalizations for genitourinary causes, raising the possibility that chronically high intakes of zinc adversely affect some aspects of urinary physiology [83].

The FNB has established ULs for zinc (Table 3). Long-term intakes above the UL increase the risk of adverse health effects [2]. The ULs do not apply to individuals receiving zinc for medical treatment, but such individuals should be under the care of a physician who monitors them for adverse health effects.
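
Because Table 3 is not reproduced above, the short sketch below checks total intake only against the commonly cited adult UL of 40 mg/day, which should be treated as an assumption to verify against the table; children's ULs are lower and age-specific.

```python
# Simple check of total daily zinc intake against an assumed adult UL.
# 40 mg/day is the commonly cited adult Tolerable Upper Intake Level; verify
# against Table 3 of the fact sheet, since pediatric ULs are lower.

ADULT_UL_MG = 40.0  # assumed adult UL

def exceeds_adult_ul(total_daily_zinc_mg: float) -> bool:
    """True if combined intake from food, supplements, and other sources exceeds the assumed adult UL."""
    return total_daily_zinc_mg > ADULT_UL_MG

print(exceeds_adult_ul(80.0))  # True: the 80 mg AREDS dose discussed earlier exceeds this UL
```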

Zinc supplements have the potential to interact with several types of medications. A few examples are provided below. Individuals taking these medications on a regular basis should discuss their zinc intakes with their healthcare providers.

Antibiotics

Both quinolone antibiotics (such as Cipro) and tetracycline antibiotics (such as Achromycin and Sumycin) interact with zinc in the gastrointestinal tract, inhibiting the absorption of both zinc and the antibiotic [84,85]. Taking the antibiotic at least 2 hours before or 4–6 hours after taking a zinc supplement minimizes this interaction [86].
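
An illustrative timing helper for the spacing rule above (antibiotic at least 2 hours before, or 4–6 hours after, a zinc supplement): the function name and the choice of 4 hours as the lower "after" bound are mine, not from any drug-interaction reference.

```python
# Checks whether an antibiotic dose is spaced far enough from a zinc supplement,
# per the rule quoted above: at least 2 h before, or 4-6 h after, the zinc dose.

from datetime import datetime, timedelta

def antibiotic_timing_ok(antibiotic_time: datetime, zinc_time: datetime) -> bool:
    gap = antibiotic_time - zinc_time  # positive if the antibiotic comes after the zinc dose
    return gap <= timedelta(hours=-2) or gap >= timedelta(hours=4)

zinc_dose = datetime(2016, 2, 11, 8, 0)
print(antibiotic_timing_ok(datetime(2016, 2, 11, 5, 30), zinc_dose))  # True: 2.5 h before
print(antibiotic_timing_ok(datetime(2016, 2, 11, 9, 0), zinc_dose))   # False: only 1 h after
```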

Penicillamine

Zinc can reduce the absorption and action of penicillamine, a drug used to treat rheumatoid arthritis [87]. To minimize this interaction, individuals should take zinc supplements at least 2 hours before or after taking penicillamine [85].

Diuretics

Thiazide diuretics such as chlorthalidone (Hygroton) and hydrochlorothiazide (Esidrix and HydroDIURIL) increase urinary zinc excretion by as much as 60% [88]. Prolonged use of thiazide diuretics could deplete zinc tissue levels, so clinicians should monitor zinc status in patients taking these medications.

The federal government’s 2015-2020 Dietary Guidelines for Americans notes that “Nutritional needs should be met primarily from foods. … Foods in nutrient-dense forms contain essential vitamins and minerals and also dietary fiber and other naturally occurring substances that may have positive health effects. In some cases, fortified foods and dietary supplements may be useful in providing one or more nutrients that otherwise may be consumed in less-than-recommended amounts.”

For more information about building a healthy diet, refer to the Dietary Guidelines for Americans and the U.S. Department of Agriculture’s MyPlate.

The Dietary Guidelines for Americans describes a healthy eating pattern as one that:

This fact sheet by the Office of Dietary Supplements provides information that should not take the place of medical advice. We encourage you to talk to your healthcare providers (doctor, registered dietitian, pharmacist, etc.) about your interest in, questions about, or use of dietary supplements and what may be best for your overall health. Any mention in this publication of a specific brand name is not an endorsement of the product.

Updated: February 11, 2016


Principality of Sealand – Wikipedia, the free encyclopedia

 Sealand  Comments Off on Principality of Sealand – Wikipedia, the free encyclopedia
Jun 132016
 

For more information about the structure claimed by Sealand see HM Fort Roughs

The Principality of Sealand is an unrecognised micronation that claims Roughs Tower, an offshore platform located in the North Sea approximately 12 kilometres (7.5 mi) off the coast of Suffolk, England, as its territory. Roughs Tower is a disused Maunsell Sea Fort, originally called HM Fort Roughs, built as an anti-aircraft defensive gun platform by the British during World War II.[3][4]

Since 1967, the decommissioned HM Fort Roughs has been occupied by family and associates of Paddy Roy Bates, who claim that it is an independent sovereign state.[3] Bates seized it from a group of pirate radio broadcasters in 1967 with the intention of setting up his own station at the site.[5] He attempted to establish Sealand as a nation-state in 1975 with the writing of a national constitution and establishment of other national symbols.[3]

While it has been described as the world’s smallest country[6] or nation,[7] Sealand is not officially recognised by any established sovereign state in spite of Sealand’s government’s claim that it has been de facto recognised by the United Kingdom[3] and Germany.[8] The United Nations Convention on the Law of the Sea in force since 1994 states “Artificial islands, installations and structures do not possess the status of islands. They have no territorial sea of their own, and their presence does not affect the delimitation of the territorial sea, the exclusive economic zone or the continental shelf”.[9] Sealand was not grandfathered, and sits in British waters.

Bates moved to the mainland when he became elderly, naming his son Michael regent. Bates died in October 2012 at the age of 91.[10] Michael lives in England.[11]

In 1943, during World War II, HM Fort Roughs (sometimes called Roughs Tower) was constructed by the United Kingdom as one of the Maunsell Forts,[12] primarily to defend the vital shipping lanes in nearby estuaries against German Navy mine-laying aircraft. It consisted of a floating pontoon base with a superstructure of two hollow towers joined by a deck upon which other structures could be added. The fort was towed to a position above the Rough Sands sandbar, where its base was deliberately flooded to sink it on its final resting place. This is approximately 7 nautical miles (13 km) from the coast of Suffolk, outside the then 3 nmi (6 km) claim of the United Kingdom and, therefore, in international waters.[12] The facility was occupied by 150 to 300 Royal Navy personnel throughout World War II; the last full-time personnel left in 1956.[12]

Roughs Tower was occupied in February and August 1965 by Jack Moore and his daughter Jane, squatting on behalf of the pirate station Wonderful Radio London.

On 2 September 1967, the fort was occupied by Major Paddy Roy Bates, a British subject and pirate radio broadcaster, who ejected a competing group of pirate broadcasters.[5] Bates intended to broadcast his pirate radio station called Radio Essex from the platform.[13] Despite having the necessary equipment, he never began broadcasting.[14] Bates declared the independence of Roughs Tower and deemed it the Principality of Sealand.[5]

In 1968, British workmen entered what Bates claimed to be his territorial waters in order to service a navigational buoy near the platform. Michael Bates (son of Paddy Roy Bates) tried to scare the workmen off by firing warning shots from the former fort. As Bates was a British subject at the time, he was summoned to court in England on firearms charges following the incident.[15] But as the court ruled that the platform (which Bates was now calling "Sealand") was outside British territorial limits, being beyond the then 3-nautical-mile (6 km) limit of the country's waters, the case could not proceed.[16]

In 1975, Bates introduced a constitution for Sealand, followed by a national flag, a national anthem, a currency and passports.[3]

In August 1978, Alexander Achenbach, who describes himself as the Prime Minister of Sealand, hired several German and Dutch mercenaries to spearhead an attack on Sealand while Bates and his wife were in England.[8] They stormed the platform with speedboats, jet skis and helicopters, and took Bates' son Michael hostage. Michael was able to retake Sealand and capture Achenbach and the mercenaries using weapons stashed on the platform. Achenbach, a German lawyer who held a Sealand passport, was charged with treason against Sealand[8] and was held unless he paid DM75,000 (more than US$35,000 or £23,000).[17] The governments of the Netherlands, Austria and Germany petitioned the British government for his release, but the United Kingdom disavowed his imprisonment, citing the 1968 court decision.[3] Germany then sent a diplomat from its London embassy to Sealand to negotiate for Achenbach's release. Roy Bates relented after several weeks of negotiations and subsequently claimed that the diplomat's visit constituted de facto recognition of Sealand by Germany.[8]

Following his repatriation, Achenbach and Gernot Pütz established a government in exile, sometimes known as the Sealand Rebel Government or Sealandic Rebel Government, in Germany.[8] Achenbach's appointed successor, Johannes Seiger, continues to claim via his website that he is Sealand's legitimate ruling authority.[18]

The claim that Sealand is an independent sovereign state is based on an interpretation of a 1968 decision of an English court, in which it was held that Roughs Tower was in international waters and thus outside the jurisdiction of the domestic courts.[3]

In international law, the most common schools of thought for the creation of statehood are the constitutive and declaratory theories of state creation. The constitutive theory is the standard nineteenth-century model of statehood, and the declaratory theory was developed in the twentieth century to address shortcomings of the constitutive theory. In the constitutive theory, a state exists exclusively via recognition by other states. The theory splits on whether this recognition requires ‘diplomatic recognition’ or merely ‘recognition of existence’. No other state grants Sealand official recognition, but it has been argued by Bates that negotiations carried out by Germany following a brief hostage incident constituted ‘recognition of existence’ (and, since the German government reportedly sent an ambassador to the tower, diplomatic recognition). In the declaratory theory of statehood, an entity becomes a state as soon as it meets the minimal criteria for statehood. Therefore, recognition by other states is purely ‘declaratory’.[33]

In 1987, the UK extended its territorial waters from 3 to 12 nautical miles (6 to 22 km). Sealand now sits inside British waters.[34] The United Kingdom is one of 165 parties to the United Nations Convention on the Law of the Sea (in force since 1994), which states in Part V, Article 60, that: 'Artificial islands, installations and structures do not possess the status of islands. They have no territorial sea of their own, and their presence does not affect the delimitation of the territorial sea, the exclusive economic zone or the continental shelf'.[9] In the opinion of law academic John Gibson, there is little chance that Sealand would be recognised as a nation because it is a man-made structure.[34]

Irrespective of its legal status, Sealand is managed by the Bates family as if it were a recognised sovereign entity and they are its hereditary royal rulers. Roy Bates styled himself as ‘Prince Roy’ and his widow ‘Princess Joan’. Their son is known as ‘His Royal Highness Prince Michael’ and has been referred to as the ‘Prince Regent’ by the Bates family since 1999.[35] In this role, he apparently serves as Sealand’s acting ‘Head of State’ and also its ‘Head of Government’.[36] At a micronations conference hosted by the University of Sunderland in 2004, Sealand was represented by Michael Bates’ son James. The facility is now occupied by one or more caretakers representing Michael Bates, who himself resides in Essex, England.[35]

Sealand's constitution was instituted in 1974. It consists of a preamble and seven articles.[37] The preamble asserts Sealand's independence, while the articles variously deal with Sealand's status as a constitutional monarchy, the empowerment of government bureaux, the role of an appointed, advisory senate, the functions of an appointed, advisory legal tribunal, a proscription against the bearing of arms except by members of a designated 'Sealand Guard', the exclusive right of the sovereign to formulate foreign policy and alter the constitution, and the hereditary patrilinear succession of the monarchy.[38] Sealand's legal system is claimed to follow British common law, and statutes take the form of decrees enacted by the sovereign.[39] Sealand has issued "fantasy passports" (as termed by the Council of the European Union), which are not valid for international travel,[40] and holds the Guinness World Record for 'the smallest area to lay claim to nation status'.[41] Sealand's motto is E Mare Libertas (From the Sea, Freedom). It appears on Sealandic items such as stamps, passports and coins, and is the title of the Sealandic anthem. The anthem was composed by Londoner Basil Simonenko;[42] being an instrumental anthem, it does not have lyrics. In 2005, the anthem was recorded by the Slovak Radio Symphony Orchestra and released on their CD National Anthems of the World, Vol. 7: Qatar–Syria.

Sealand has been involved in several commercial operations, including the issuing of coins and postage stamps and the establishment of an offshore Internet hosting facility, or ‘data haven’.[43][44] Sealand also has an official website and publishes an online newspaper, Sealand News.[45] In addition, a number of amateur athletes ‘represent’ Sealand in sporting events, including unconventional events like the egg throwing world championship, which the Sealand team won in 2008.[46]

Several dozen different Sealand coins have been minted since 1972. In the early 1990s, Achenbach's German group also produced a coin, featuring a likeness of 'Prime Minister Seiger'.[47] Sealand's coins and postage stamps are denominated in 'Sealand dollars', which it deems to be at parity with the U.S. dollar.[48] Sealand first issued postage stamps in 1969 and continued issuing them through 1977; no further stamps were produced until 2010. Sealand is not a member of the Universal Postal Union, so its inward address is a PO box in the United Kingdom.[49] Mail sent to that address is received by Sealand's tourist and government office and then brought to Sealand. Sealand has only one street address, The Row.[50]

A Sealand mailing address looks like this:[50]

Bureau of Internal Affairs
5, The Row
SEALAND 1001
(c/o Sealand Post Bag, IP11 9SZ, UK)

Sealand also sells titles of individual nobility including Lord, Baron, Count and those titles’ distaff equivalents. Following Roy Bates’ 2012 death, Sealand also began publicly offering knighthoods.[51][52]

In 2000, worldwide publicity was created about Sealand following the establishment of a new entity called HavenCo, a data haven, which effectively took control of Roughs Tower itself; however, Ryan Lackey, HavenCo's founder, later quit and claimed that Bates had lied to him by keeping the 1990–1991 court case from him and that, as a result, he had lost the money he had invested in the venture.[53] In November 2008, operations of HavenCo ceased without explanation.[54]

Sealand is not recognized by any major international sporting body, and its population is insufficient to maintain a team composed entirely of Sealanders in any team sport. However, Sealand claims to have official national athletes, including non-Sealanders. These athletes take part in various sports, such as curling, mini-golf, football, fencing, ultimate frisbee, table football and athletics, although all its teams compete out of the country.[55] The Sealand National Football Association is an associate member of the Nouvelle Fédération-Board, a football sanctioning body for non-recognised states and states not members of FIFA. It administers the Sealand national football team. In 2004 the national team played its first international game against the Åland Islands national football team, drawing 2–2.[56]

Sealand claims that its first official athlete was Darren Blackburn of Oakville, Ontario, Canada, who was appointed in 2003. Blackburn has represented Sealand at a number of local sporting events, including marathons and off-trail races.[57] In 2004, mountaineer Slader Oviatt carried the Sealandic flag to the top of Muztagh Ata.[58] In 2007, Michael Martelle represented the Principality of Sealand in the World Cup of Kung Fu, held in Quebec City, Canada; bearing the designation of Athleta Principalitas Bellatorius (Principal Martial Arts Athlete and Champion), Martelle won two silver medals, becoming the first-ever Sealand athlete to appear on a world championship podium.[59]

In 2008, Sealand hosted a skateboarding event with Church and East, sponsored by Red Bull.[60][61][62] Sealand's fencing team is located in the United States, affiliated with the University of California, Irvine.

In 2009, Sealand announced the revival of the Football Association and their intention to compete in a future Viva World Cup. Scottish author Neil Forsyth was appointed as President of the Sealand Football Association.[63] Sealand played the second game in their history against Chagos Islands on 5 May 2012, losing 3–1. The team included actor Ralf Little and former Bolton Wanderers defender Simon Charlton.[64]

In 2009 and 2010, Sealand sent teams to play in various ultimate frisbee club tournaments in the United Kingdom, Ireland and the Netherlands. They placed 11th at UK nationals in 2010.[65]

Since early summer 2012, Sealand has been represented in the flat-track variant of roller derby by a team principally composed of skaters from the South Wales area.[66]

Sealand played a friendly match in aid of charity against an "All Stars" team from Fulham F.C. on 18 May 2013, losing 5–7.[67][68]

On 22 May 2013, the mountaineer Kenton Cool placed a Sealand flag at the summit of Mount Everest.[69]

Coordinates: 51°53′42.6″N 1°28′49.8″E / 51.895167°N 1.480500°E / 51.895167; 1.480500

Read more from the original source:

Principality of Sealand – Wikipedia, the free encyclopedia


Biological warfare – Wikipedia, the free encyclopedia

 Germ Warfare  Comments Off on Biological warfare – Wikipedia, the free encyclopedia
Jun 12, 2016
 

Biological warfare (BW), also known as germ warfare, is the use of biological toxins or infectious agents such as bacteria, viruses, and fungi with the intent to kill or incapacitate humans, animals or plants as an act of war. Biological weapons (often termed "bio-weapons", "biological threat agents", or "bio-agents") are living organisms or replicating entities (viruses, which are not universally considered "alive") that reproduce or replicate within their host victims. Entomological (insect) warfare is also considered a type of biological weapon. This type of warfare is distinct from nuclear warfare and chemical warfare, which together with biological warfare make up NBC, the military acronym for nuclear, biological, and chemical warfare using weapons of mass destruction (WMDs). None of these are conventional weapons, which are deployed primarily for their explosive, kinetic, or incendiary potential.

Biological weapons may be employed in various ways to gain a strategic or tactical advantage over the enemy, either by threats or by actual deployments. Like some of the chemical weapons, biological weapons may also be useful as area denial weapons. These agents may be lethal or non-lethal, and may be targeted against a single individual, a group of people, or even an entire population. They may be developed, acquired, stockpiled or deployed by nation states or by non-national groups. In the latter case, or if a nation-state uses it clandestinely, it may also be considered bioterrorism.[1]

There is an overlap between biological warfare and chemical warfare, as the use of toxins produced by living organisms is considered under the provisions of both the Biological Weapons Convention and the Chemical Weapons Convention. Toxins and psychochemical weapons are often referred to as midspectrum agents. Unlike bioweapons, these midspectrum agents do not reproduce in their host and are typically characterized by shorter incubation periods.[2]

Offensive biological warfare, including mass production, stockpiling and use of biological weapons, was outlawed by the 1972 Biological Weapons Convention (BWC). The rationale behind this treaty, which has been ratified or acceded to by 170 countries as of April 2013,[3] is to prevent a biological attack which could conceivably result in large numbers of civilian casualties and cause severe disruption to economic and societal infrastructure. Many countries, including signatories of the BWC, currently pursue research into the defense or protection against BW, which is not prohibited by the BWC.

A nation or group that can pose a credible threat of mass casualty has the ability to alter the terms on which other nations or groups interact with it. Biological weapons allow for the potential to create a level of destruction and loss of life far in excess of nuclear, chemical or conventional weapons, relative to their mass and cost of development and storage. Therefore, biological agents may be useful as strategic deterrents in addition to their utility as offensive weapons on the battlefield.[4][5]

As a tactical weapon for military use, a significant problem with a BW attack is that it would take days to be effective, and therefore might not immediately stop an opposing force. Some biological agents (smallpox, pneumonic plague) have the capability of person-to-person transmission via aerosolized respiratory droplets. This feature can be undesirable, as the agent(s) may be transmitted by this mechanism to unintended populations, including neutral or even friendly forces. While containment of BW is less of a concern for certain criminal or terrorist organizations, it remains a significant concern for the military and civilian populations of virtually all nations.

Rudimentary forms of biological warfare have been practiced since antiquity.[6] During the 6th century BC, the Assyrians poisoned enemy wells with a fungus that would render the enemy delirious. In 1346, the bodies of Mongol warriors of the Golden Horde who had died of plague were thrown over the walls of the besieged Crimean city of Kaffa. Specialists disagree over whether this operation may have been responsible for the spread of the Black Death into Europe.[7][8][9][10]

It has been claimed that the British Marines used smallpox in New South Wales in 1789.[11] Historians have long debated inconclusively whether the British Army used smallpox in an episode against Native Americans in 1763.[12]

By 1900 the germ theory and advances in bacteriology brought a new level of sophistication to the techniques for possible use of bio-agents in war. Biological sabotage, in the form of anthrax and glanders, was undertaken on behalf of the Imperial German government during World War I (1914–1918), with indifferent results.[13] The Geneva Protocol of 1925 prohibited the use of chemical weapons and biological weapons.

With the onset of World War II, the Ministry of Supply in the United Kingdom established a BW program at Porton Down, headed by the microbiologist Paul Fildes. The research was championed by Winston Churchill, and soon tularemia, anthrax, brucellosis, and botulism toxins had been effectively weaponized. In particular, Gruinard Island in Scotland, during a series of extensive tests, was contaminated with anthrax for the next 56 years. Although the UK never offensively used the biological weapons it developed on its own, its program was the first to successfully weaponize a variety of deadly pathogens and bring them into industrial production.[14]

When the United States entered the war, mounting British pressure for the creation of a similar research program for an Allied pooling of resources led to the creation of a large industrial complex at Fort Detrick, Maryland, in 1942 under the direction of George W. Merck.[15] The biological and chemical weapons developed during that period were tested at the Dugway Proving Ground in Utah. Soon there were facilities for the mass production of anthrax spores, brucellosis, and botulism toxins, although the war was over before these weapons could be of much operational use.[16]

The most notorious program of the period was run by the secret Imperial Japanese Army Unit 731 during the war, based at Pingfan in Manchuria and commanded by Lieutenant General Shirō Ishii. This unit did research on BW, conducted often fatal human experiments on prisoners, and produced biological weapons for combat use.[17] Although the Japanese effort lacked the technological sophistication of the American or British programs, it far outstripped them in its widespread application and indiscriminate brutality. Biological weapons were used against both Chinese soldiers and civilians in several military campaigns.[18] In 1940, the Japanese Army Air Force bombed Ningbo with ceramic bombs full of fleas carrying the bubonic plague.[19] Many of these operations were ineffective due to inefficient delivery systems,[17] although up to 400,000 people may have died.[20] During the Zhejiang-Jiangxi Campaign in 1942, around 1,700 Japanese troops died out of a total 10,000 Japanese soldiers who fell ill with disease when their own biological weapons attack rebounded on their own forces.[21][22]

During the final months of World War II, Japan planned to use plague as a biological weapon against U.S. civilians in San Diego, California, during Operation Cherry Blossoms at Night. The plan was set to launch on 22 September 1945, but it was not executed because of Japan’s surrender on 15 August 1945.[23][24][25][26]

In Britain, the 1950s saw the weaponization of plague, brucellosis, tularemia and later equine encephalomyelitis and vaccinia viruses, but the programme was unilaterally cancelled in 1956. The United States Army Biological Warfare Laboratories weaponized anthrax, tularemia, brucellosis, Q-fever and others.

In 1969, the UK and the Warsaw Pact, separately, introduced proposals to the UN to ban biological weapons, and US President Richard Nixon terminated production of biological weapons, allowing only scientific research for defensive measures. The Biological and Toxin Weapons Convention was signed by the US, UK, USSR and other nations, as a ban on "development, production and stockpiling of microbes or their poisonous products except in amounts necessary for protective and peaceful research" in 1972. However, the Soviet Union continued research and production of massive offensive biological weapons in a program called Biopreparat, despite having signed the convention.[27] By 2011, 165 countries had signed the treaty and none are proven, though nine are still suspected, to possess offensive BW programs.[28]

It has been argued that rational state actors would never use biological weapons offensively. The argument is that biological weapons cannot be controlled: the weapon could backfire and harm the army on the offensive, perhaps having even worse effects than on the target. An agent like smallpox or other airborne viruses would almost certainly spread worldwide and ultimately infect the user's home country. However, this argument does not necessarily apply to bacteria. For example, anthrax can easily be controlled and even created in a garden shed; the FBI suspects it can be done for as little as $2,500 using readily available laboratory equipment.[29] Also, using microbial methods, bacteria can be suitably modified to be effective in only a narrow environmental range, the range of the target that distinctly differs from the army on the offensive. Thus only the target might be affected adversely. The weapon may be further used to bog down an advancing army, making it more vulnerable to counterattack by the defending force.

Ideal characteristics of a biological agent to be used as a weapon against humans are high infectivity, high virulence, non-availability of vaccines, and availability of an effective and efficient delivery system. Stability of the weaponized agent (ability of the agent to retain its infectivity and virulence after a prolonged period of storage) may also be desirable, particularly for military applications, and the ease of creating one is often considered. Control of the spread of the agent may be another desired characteristic.

The primary difficulty is not the production of the biological agent, as many biological agents used in weapons can often be manufactured relatively quickly, cheaply and easily. Rather, it is the weaponization, storage and delivery in an effective vehicle to a vulnerable target that pose significant problems.

For example, Bacillus anthracis is considered an effective agent for several reasons. First, it forms hardy spores, perfect for dispersal in aerosols. Second, this organism is not considered transmissible from person to person, and thus rarely if ever causes secondary infections. A pulmonary anthrax infection starts with ordinary influenza-like symptoms and progresses to a lethal hemorrhagic mediastinitis within 3–7 days, with a fatality rate that is 90% or higher in untreated patients.[30] Finally, friendly personnel can be protected with suitable antibiotics.

A large-scale attack using anthrax would require the creation of aerosol particles of 1.5 to 5 μm: larger particles would not reach the lower respiratory tract, while smaller particles would be exhaled back out into the atmosphere. At this size, conductive powders tend to aggregate because of electrostatic charges, hindering dispersion. So the material must be treated to insulate and neutralize the charges. The weaponized agent must be resistant to degradation by rain and ultraviolet radiation from sunlight, while retaining the ability to efficiently infect the human lung. There are other technological difficulties as well, chiefly relating to storage of the weaponized agent.

Agents considered for weaponization, or known to be weaponized, include bacteria such as Bacillus anthracis, Brucella spp., Burkholderia mallei, Burkholderia pseudomallei, Chlamydophila psittaci, Coxiella burnetii, Francisella tularensis, some of the Rickettsiaceae (especially Rickettsia prowazekii and Rickettsia rickettsii), Shigella spp., Vibrio cholerae, and Yersinia pestis. Many viral agents have been studied and/or weaponized, including some of the Bunyaviridae (especially Rift Valley fever virus), Ebolavirus, many of the Flaviviridae (especially Japanese encephalitis virus), Machupo virus, Marburg virus, Variola virus, and Yellow fever virus. Fungal agents that have been studied include Coccidioides spp.[31][32]

Toxins that can be used as weapons include ricin, staphylococcal enterotoxin B, botulinum toxin, saxitoxin, and many mycotoxins. These toxins and the organisms that produce them are sometimes referred to as select agents. In the United States, their possession, use, and transfer are regulated by the Centers for Disease Control and Prevention’s Select Agent Program.

The former US biological warfare program categorized its weaponized anti-personnel bio-agents as either Lethal Agents (Bacillus anthracis, Francisella tularensis, Botulinum toxin) or Incapacitating Agents (Brucella suis, Coxiella burnetii, Venezuelan equine encephalitis virus, Staphylococcal enterotoxin B).

The United States developed an anti-crop capability during the Cold War that used plant diseases (bioherbicides, or mycoherbicides) for destroying enemy agriculture. Biological weapons also target fisheries as well as water-based vegetation. It was believed that destruction of enemy agriculture on a strategic scale could thwart Sino-Soviet aggression in a general war. Diseases such as wheat blast and rice blast were weaponized in aerial spray tanks and cluster bombs for delivery to enemy watersheds in agricultural regions to initiate epiphytotics (epidemics among plants). When the United States renounced its offensive biological warfare program in 1969 and 1970, the vast majority of its biological arsenal was composed of these plant diseases. Enterotoxins and mycotoxins were not affected by Nixon's order.

Though herbicides are chemicals, they are often grouped with biological warfare and chemical warfare because they may work in a similar manner as biotoxins or bioregulators. The Army Biological Laboratory tested each agent and the Army's Technical Escort Unit was responsible for transport of all chemical, biological, radiological (nuclear) materials. Scorched earth tactics or destroying livestock and farmland were carried out in the Vietnam War (cf. Agent Orange)[33] and the Eelam War in Sri Lanka.

Biological warfare can also specifically target plants to destroy crops or defoliate vegetation. The United States and Britain discovered plant growth regulators (i.e., herbicides) during the Second World War, and initiated a herbicidal warfare program that was eventually used in Malaya and Vietnam in counterinsurgency operations.

In the 1980s, the Soviet Ministry of Agriculture successfully developed variants of foot-and-mouth disease and rinderpest against cows, African swine fever for pigs, and psittacosis to kill chickens. These agents were prepared to be sprayed from tanks attached to airplanes over hundreds of miles. The secret program was code-named "Ecology".[31]

Attacking animals is another area of biological warfare intended to eliminate animal resources for transportation and food. In the First World War, German agents were arrested attempting to inoculate draft animals with anthrax, and they were believed to be responsible for outbreaks of glanders in horses and mules. The British tainted small feed cakes with anthrax in the Second World War as a potential means of attacking German cattle for food denial, but never employed the weapon. In the 1950s, the United States had a field trial with hog cholera. During the Mau Mau Uprising in 1952, the poisonous latex of the African milk bush was used to kill cattle.[34]

Outside the context of war, humans have deliberately introduced the rabbit disease myxomatosis, which originated in South America, into Australia and Europe with the intention of reducing the rabbit population. The results were devastating but temporary: wild rabbit populations were reduced to a fraction of their former size, but survivors developed immunity and increased again.

Entomological warfare (EW) is a type of biological warfare that uses insects to attack the enemy. The concept has existed for centuries, and research and development have continued into the modern era. EW has been used in battle by Japan, and several other nations have developed, and been accused of using, entomological warfare programs. EW may employ insects in a direct attack or as vectors to deliver a biological agent, such as plague. Essentially, EW exists in three varieties. One type of EW involves infecting insects with a pathogen and then dispersing the insects over target areas.[35] The insects then act as a vector, infecting any person or animal they might bite. Another type of EW is a direct insect attack against crops; the insect may not be infected with any pathogen but instead represents a threat to agriculture. The final method uses uninfected insects, such as bees and wasps, to directly attack the enemy.[36]

In 2010, at the Meeting of the States Parties to the Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and Their Destruction in Geneva,[37] sanitary epidemiological reconnaissance was suggested as a well-tested means of enhancing the monitoring of infections and parasitic agents, in support of the practical implementation of the International Health Regulations (2005). The aim was to prevent and minimize the consequences of natural outbreaks of dangerous infectious diseases as well as the threat of the alleged use of biological weapons against BTWC States Parties.

Most classical and modern biological weapons pathogens can be obtained from a plant or an animal that is naturally infected.[38]

Indeed, in the largest known biological weapons accident, the anthrax outbreak in Sverdlovsk (now Yekaterinburg) in the Soviet Union in 1979, sheep became ill with anthrax as far as 200 kilometers from the release point of the organism, a military facility in the southeastern portion of the city that is still off limits to visitors today (see Sverdlovsk anthrax leak).[39]

Thus, a robust surveillance system involving human clinicians and veterinarians may identify a bioweapons attack early in the course of an epidemic, permitting the prophylaxis of disease in the vast majority of people (and/or animals) exposed but not yet ill.

For example, in the case of anthrax, it is likely that by 24–36 hours after an attack, some small percentage of individuals (those with compromised immune systems or who had received a large dose of the organism due to proximity to the release point) will become ill with classical symptoms and signs (including a virtually unique chest X-ray finding, often recognized by public health officials if they receive timely reports).[40] The incubation period for humans is estimated to be about 11.8 days to 12.1 days. This suggested period is the first model that is independently consistent with data from the largest known human outbreak. These projections refine previous estimates of the distribution of early-onset cases after a release and support a recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses of anthrax.[41] By making these data available to local public health officials in real time, most models of anthrax epidemics indicate that more than 80% of an exposed population can receive antibiotic treatment before becoming symptomatic, and thus avoid the moderately high mortality of the disease.[40]

From most specific to least specific:[42]

1. A single case of disease caused by an uncommon agent, with no epidemiological explanation.

2. Unusual, rare, genetically engineered strain of an agent.

3. High morbidity and mortality rates among patients with the same or similar symptoms.

4. Unusual presentation of the disease.

5. Unusual geographic or seasonal distribution.

6. Stable endemic disease, but with an unexplained increase in incidence.

7. Transmission via an atypical route (aerosols, food, water).

8. No illness in people who were not exposed to common ventilation systems (i.e., those with separate, closed ventilation systems), when illness is seen in persons in close proximity who share a common ventilation system.

9. Different and unexplained diseases coexisting in the same patient without any other explanation.

10. Rare illness that affects a large, disparate population (respiratory disease might suggest the pathogen or agent was inhaled).

11. Illness that is unusual for the population or age group in which it appears.

12. Unusual trends of death and/or illness in animal populations, preceding or accompanying illness in humans.

13. Many affected people seeking treatment at the same time.

14. Similar genetic makeup of agents in affected individuals.

15. Simultaneous clusters of similar illness in non-contiguous areas, domestic or foreign.

16. An abundance of cases of unexplained diseases and deaths.

The goal of biodefense is to integrate the sustained efforts of the national and homeland security, medical, public health, intelligence, diplomatic, and law enforcement communities. Health care providers and public health officers are among the first lines of defense. In some countries private, local, and provincial (state) capabilities are being augmented by and coordinated with federal assets, to provide layered defenses against biological weapon attacks. During the first Gulf War the United Nations activated a biological and chemical response team, Task Force Scorpio, to respond to any potential use of weapons of mass destruction on civilians.

The traditional approach toward protecting agriculture, food, and water, which focuses on the natural or unintentional introduction of a disease, is being strengthened by focused efforts to address current and anticipated future biological weapons threats that may be deliberate, multiple, and repetitive.

The growing threat of biowarfare agents and bioterrorism has led to the development of specific field tools that perform on-the-spot analysis and identification of encountered suspect materials. One such technology, being developed by researchers from the Lawrence Livermore National Laboratory (LLNL), employs a “sandwich immunoassay”, in which fluorescent dye-labeled antibodies aimed at specific pathogens are attached to silver and gold nanowires.[43]

In the Netherlands, the company TNO has designed Bioaerosol Single Particle Recognition eQuipment (BiosparQ). This system would be implemented into the national response plan for bioweapon attacks in the Netherlands.[44]

Researchers at Ben Gurion University in Israel are developing a different device called the BioPen, essentially a "Lab-in-a-Pen", which can detect known biological agents in under 20 minutes using an adaptation of the ELISA, a widely employed immunological technique, that in this case incorporates fiber optics.[45]

Theoretically, novel approaches in biotechnology, such as synthetic biology, could be used in the future to design novel types of biological warfare agents.[46][47][48][49] Special attention has to be paid to future experiments of concern.[50]

Most of the biosecurity concerns in synthetic biology, however, are focused on the role of DNA synthesis and the risk of producing genetic material of lethal viruses (e.g. 1918 Spanish flu, polio) in the lab.[51][52][53] Recently, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed by The Washington Post as “the most important innovation in the synthetic biology space in nearly 30 years.”[54] While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks.[54] However, due to its ease of use and accessibility, it has raised a number of ethical concerns, especially surrounding its use in the biohacking space.[54][55][56]


Read the original here:

Biological warfare – Wikipedia, the free encyclopedia


THE WAR ON DRUGS, EXPLAINED – Vox

 War On Drugs  Comments Off on THE WAR ON DRUGS, EXPLAINED – Vox
Jun 12, 2016
 

Card 1 of 17

In the 1970s, President Richard Nixon formally launched the war on drugs to eradicate illicit drug use in the US. "If we cannot destroy the drug menace in America, then it will surely in time destroy us," Nixon told Congress in 1971. "I am not prepared to accept this alternative."

Over the next couple decades, particularly under the Reagan administration, what followed was the escalation of global military and police efforts against drugs. But in that process, the drug war led to unintended consequences that have proliferated violence around the world and contributed to mass incarceration in the US, even if it has made drugs less accessible and reduced potential levels of drug abuse.

Nixon inaugurated the war on drugs at a time when America was in hysterics over widespread drug use. Drug use had become more public and prevalent during the 1960s due in part to the counterculture movement, and many Americans felt that drug use had become a serious threat to the country and its moral standing.

Over the past four decades, the US has committed more than $1 trillion to the war on drugs. But the crackdown has in some ways failed to produce the desired results: Drug use remains a very serious problem in the US, even though the drug war has made these substances less accessible. The drug war also led to several unintended negative consequences, including a big strain on America's criminal justice system and the proliferation of drug-related violence around the world.

While Nixon began the modern war on drugs, America has a long history of trying to control the use of certain drugs. Laws passed in the early 20th century attempted to restrict drug production and sales. Some of this history is racially tinged, and, perhaps as a result, the war on drugs has long hit minority communities the hardest.

In response to the failures and unintended consequences, many drug policy experts and historians have called for reforms: a larger focus on rehabilitation, the decriminalization of currently illicit substances, and even the legalization of all drugs.

The question with these policies, as with the drug war more broadly, is whether the risks and costs are worth the benefits. Drug policy is often described as choosing between a bunch of bad or mediocre options, rather than finding the perfect solution. In the case of the war on drugs, the question is whether the very real drawbacks of prohibition (more racially skewed arrests, drug-related violence around the world, and financial costs) are worth the potential gains from outlawing and hopefully depressing drug abuse in the US.

Card 2 of 17

The goal of the war on drugs is to reduce drug use. The specific aim is to destroy and inhibit the international drug trade, making drugs scarcer and costlier, and therefore making drug habits in the US unaffordable. And although some of the data shows drugs getting cheaper, drug policy experts generally believe that the drug war is nonetheless preventing some drug abuse by making the substances less accessible.

The prices of most drugs, as tracked by the Office of National Drug Control Policy, have plummeted. Between 1981 and 2007, the median bulk price of heroin fell by roughly 93 percent, and the median bulk price of powder cocaine fell by about 87 percent. Between 1986 and 2007, the median bulk price of crack cocaine fell by around 54 percent. The prices of meth and marijuana, meanwhile, have remained largely stable since the 1980s.
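
Those declines are start-to-end comparisons of median bulk prices. A minimal sketch of the underlying arithmetic in Python, using hypothetical prices rather than the actual ONDCP series (which is not reproduced here):

def percent_decline(start_price, end_price):
    # Percentage drop from a starting price to an ending price.
    return (start_price - end_price) / start_price * 100

# Hypothetical illustration only: a fall from $1,000 to $70 per unit of bulk weight.
print(round(percent_decline(1000.0, 70.0), 1))  # 93.0, i.e. "down by roughly 93 percent"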

Much of this is explained by what's known as the balloon effect: Cracking down on drugs in one area doesn't necessarily reduce the overall supply of drugs. Instead, drug production and trafficking shift elsewhere, because the drug trade is so lucrative that someone will always want to take it up, particularly in countries where the drug trade might be one of the only economic opportunities and governments won't be strong enough to suppress the drug trade.

The balloon effect has been documented in multiple instances, including Peru and Bolivia to Colombia in the 1990s, the Netherlands Antilles to West Africa in the early 2000s, and Colombia and Mexico to El Salvador, Honduras, and Guatemala in the 2000s and 2010s.

Sometimes the drug war has failed to push down production altogether, as in Afghanistan. The US spent $7.6 billion between 2002 and 2014 to crack down on opium in Afghanistan, where the bulk of the world's heroin supply comes from. Despite the efforts, Afghanistan's opium poppy cultivation reached record levels in 2013.

On the demand side, illicit drug use has dramatically fluctuated since the drug war began. The Monitoring the Future survey, which tracks illicit drug use among high school students, offers a useful proxy: In 1975, four years after President Richard Nixon launched the war on drugs, 30.7 percent of high school seniors reportedly used drugs in the previous month. In 1992, the rate was 14.4 percent. In 2013, it was back up to 25.5 percent.

Still, prohibition does likely make drugs less accessible than they would be if they were legal. A 2014 study by Jon Caulkins, a drug policy expert at Carnegie Mellon University, suggested that prohibition multiplies the price of hard drugs like cocaine by as much as 10 times. And illicit drugs obviously aren't available through easy means: one can't just walk into a CVS and buy heroin. So the drug war is likely stopping some drug use: Caulkins estimates that legalization could lead hard drug abuse to triple, although he told me it could go much higher.

But there's also evidence that the drug war is too punitive: A 2014 study from Peter Reuter at the University of Maryland and Harold Pollack at the University of Chicago found there's no good evidence that tougher punishments or harsher supply-elimination efforts do a better job of pushing down access to drugs and substance abuse than lighter penalties. So increasing the severity of the punishment doesn't do much, if anything, to slow the flow of drugs.

Instead, most of the reduction in accessibility from the drug war appears to be a result of the simple fact that drugs are illegal, which by itself makes drugs more expensive and less accessible by eliminating avenues toward mass production and distribution.

The question is whether the possible reduction of potential drug use is worth the drawbacks that come in other areas, including a strained criminal justice system and the global proliferation of violence fueled by illegal drug markets. If the drug war has failed to significantly reduce drug use, production, and trafficking, then perhaps it’s not worth these costs, and a new approach is preferable.

Card 3 of 17

The US uses what's called the drug scheduling system. Under the Controlled Substances Act, there are five categories of controlled substances known as schedules, which weigh a drug's medical value and abuse potential.

Medical value is typically evaluated through scientific research, particularly large-scale clinical trials similar to those used by the Food and Drug Administration for pharmaceuticals. Potential for abuse isn’t clearly defined by the Controlled Substances Act, but for the federal government, abuse is when individuals take a substance on their own initiative, leading to personal health hazards or dangers to society as a whole.

Under this system, Schedule 1 drugs are considered to have no medical value and a high potential for abuse. Schedule 2 drugs have high potential for abuse but some medical value. As the rank goes down to Schedule 5, a drug’s potential for abuse generally decreases.

It may be helpful to think of the scheduling system as made up of two distinct groups: nonmedical and medical. The nonmedical group is the Schedule 1 drugs, which are considered to have no medical value and high potential for abuse. The medical group is the Schedule 2 to 5 drugs, which have some medical value and are numerically ranked based on abuse potential (from high to low).

Marijuana and heroin are Schedule 1 drugs, so the federal government says they have no medical value and a high potential for abuse. Cocaine, meth, and opioid painkillers are Schedule 2 drugs, so they're considered to have some medical value and high potential for abuse. Steroids and testosterone products are Schedule 3, Xanax and Valium are Schedule 4, and cough preparations with limited amounts of codeine are Schedule 5. Congress specifically exempted alcohol and tobacco from the schedules in 1970.
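
For readers who prefer to see the five tiers as data, here is a minimal illustrative sketch in Python that encodes the schedule descriptions above as a lookup table. The table layout, the qualitative abuse-potential labels for Schedules 3 through 5, and the helper function are assumptions made for illustration, not an official data model; the example substances are only those named in the text.

# Illustrative lookup table for the drug scheduling system described above.
# Alcohol and tobacco are deliberately absent: Congress exempted them from
# the schedules in 1970.
SCHEDULES = {
    1: {"medical_value": False, "abuse_potential": "high",
        "examples": ["marijuana", "heroin"]},
    2: {"medical_value": True, "abuse_potential": "high",
        "examples": ["cocaine", "meth", "opioid painkillers"]},
    3: {"medical_value": True, "abuse_potential": "moderate",
        "examples": ["steroids", "testosterone products"]},
    4: {"medical_value": True, "abuse_potential": "low",
        "examples": ["Xanax", "Valium"]},
    5: {"medical_value": True, "abuse_potential": "lowest",
        "examples": ["cough preparations with limited codeine"]},
}

def schedule_of(drug):
    """Return the schedule number for a named example substance, or None."""
    for number, info in SCHEDULES.items():
        if drug in info["examples"]:
            return number
    return None

print(schedule_of("heroin"))   # 1: no accepted medical value, high abuse potential
print(schedule_of("alcohol"))  # None: exempted from the schedules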

Although these schedules help shape criminal penalties for illicit drug possession and sales, they're not always the final word. Congress, for instance, massively increased penalties against crack cocaine in 1986 in response to concerns about a crack epidemic and its potential link to crime. And state governments can set up their own criminal penalties and schedules for drugs as well.

Other countries, like the UK and Australia, use similar systems to the US, although their specific rankings for some drugs differ.

Card 4 of 17

The US fights the war on drugs both domestically and overseas.

On the domestic front, the federal government supplies local and state police departments with funds, legal flexibility, and special equipment to crack down on illicit drugs. Local and state police then use this funding to go after drug dealing organizations.

“[Federal] assistance helped us take out major drug organizations, and we took out a number of them in Baltimore,” said Neill Franklin, a retired police major and executive director of Law Enforcement Against Prohibition, which opposes the war on drugs. “But to do that, we took out the low-hanging fruit to work up the chain to find who was at the top of the pyramid. It started with low-level drug dealers, working our way up to midlevel management, all the way up to the kingpins.”

Some of the funding, particularly from the Byrne Justice Assistance Grant program, encourages local and state police to participate in anti-drug operations. If police don't use the money to go after illicit substances, they risk losing it, providing a financial incentive for cops to continue the war on drugs.

Although the focus is on criminal groups, casual users still get caught in the criminal justice system. Between 1999 and 2007, Human Rights Watch found at least 80 percent of drug-related arrests were for possession, not sales.

It seems, however, that arrests for possession don’t typically turn into convictions and prison time. According to federal statistics, only 5.3 percent of drug offenders in federal prisons and 27.9 percent of drug offenders in state prisons in 2004 were in for drug possession. The overwhelming majority were in for trafficking, and a small few were in for an unspecified “other” category.

Photo caption: Mexican officials incinerate 130 tons of seized marijuana.

Internationally, the US regularly aids other countries in their efforts to crack down on drugs. For example, the US in the 2000s provided military aid and training to Colombia in what’s known as Plan Colombia to help the Latin American country go after criminal organizations and paramilitaries funded through drug trafficking.

Federal officials argue that helping countries like Colombia attacks the source of illicit drugs, since such substances are often produced in Latin America and shipped north to the US. But the international efforts have consistently displaced drug trafficking, and the violence that comes with it, to other countries rather than eliminating them.

Given the struggles of the war on drugs to meet its goals, federal and state officials have begun moving away from harsh enforcement tactics and tough-on-crime stances. The White House Office of National Drug Control Policy now advocates for a bigger focus on rehabilitation and less on law enforcement. Even some conservatives, like former Texas Governor Rick Perry, have embraced drug courts, which place drug offenders into rehabilitation programs instead of jail or prison.

The idea behind these reforms is to find a better balance between locking up people for drug trafficking and moving genuinely problematic drug users to rehabilitation and treatment services that could help them. "We can't arrest our way out of the problem," Michael Botticelli, US drug czar, said, "and we really need to focus our attention on proven public health strategies to make a significant difference as it relates to drug use and consequences to that in the United States."

Card 5 of 17

The escalation of the criminal justice system’s reach over the past few decades, ranging from more incarceration to seizures of private property and militarization, can be traced back to the war on drugs.

After the US stepped up the drug war throughout the 1970s and '80s, harsher sentences for drug offenses played a role in turning the country into the world's leader in incarceration. (But drug offenders still make up a small part of the prison population: About 54 percent of people in state prisons, which house more than 86 percent of the US prison population, were violent offenders in 2012, and 16 percent were drug offenders, according to the Bureau of Justice Statistics.)

Still, mass incarceration has massively strained the criminal justice system and led to a lot of overcrowding in US prisons, to the point that some states, such as California, have rolled back penalties for nonviolent drug users and sellers with the explicit goal of reducing their incarcerated population.

In terms of police powers, civil asset forfeitures have been justified as a way to go after drug dealing organizations. These forfeitures allow law enforcement agencies to take the organizations' assets, cash in particular, and then use the gains to fund more anti-drug operations. The idea is to turn drug dealers' ill-gotten gains against them.

But there have been many documented cases in which police abused civil asset forfeiture, including instances in which police took people's cars and cash simply because they suspected, but couldn't prove, that there was some sort of illegal activity going on. In these cases, it's actually up to the people whose private property was taken to prove that they weren't doing anything illegal, instead of traditional legal standards in which police have to prove wrongdoing, or reasonable suspicion of it, before they act.

Similarly, the federal government helped militarize local and state police departments in an attempt to better equip them in the fight against drugs. The Pentagon's 1033 program, which gives surplus military-grade equipment to police, was created in the 1990s as part of President George H.W. Bush's escalation of the war on drugs. The deployment of SWAT teams, as reported by the ACLU, also increased during the past few decades, and 62 percent of SWAT raids in 2011 and 2012 were for drug searches.

Various groups have complained that these increases in police power are often abused and misused. The ACLU, for instance, argues that civil asset forfeitures threaten Americans’ civil liberties and property rights, because police can often seize assets without even filing charges. Such seizures also might encourage police to focus on drug crimes, since a raid can result in actual cash that goes back to the police department, while a violent crime conviction likely would not. The libertarian Cato Institute has also criticized the war on drugs for decades, because anti-drug efforts gave cover to a huge expansion of law enforcement’s surveillance capabilities, including wiretaps and US mail searches.

The militarization of police became a particular sticking point during the 2014 protests in Ferguson, Missouri, over the police shooting of Michael Brown. After heavily armed police responded to largely peaceful protesters with armored vehicles that resemble tanks, tear gas, and sound cannons, law enforcement experts and journalists criticized the tactics.

Since the beginning of the war on drugs, the general trend has been to massively grow police powers and expand the criminal justice system as a means of combating drug use. But as the drug war struggles to halt drug use and trafficking, the heavy-handed policies, which many describe as draconian, have been called into question. If the war on drugs isn't meeting its goals, critics say these expansions of the criminal justice system aren't worth the financial strain and costs to liberty in the US.

Card 6 of 17

The war on drugs has created a black market for illicit drugs that criminal organizations around the world can rely on for revenue that funds other, more violent activities. This market supplies so much revenue that drug trafficking organizations can actually rival developing countries' weak government institutions.

In Mexico, for example, drug cartels have leveraged their profits from the drug trade to violently maintain their stranglehold over the market despite the government’s war on drugs. As a result, public decapitations have become a particularly prominent tactic of ruthless drug cartels. As many as 80,000 people have died in the war. Tens of thousands of people have gone missing since 2007, including 43 students who vanished in 2014 in a widely publicized case.

But even if Mexico were to actually defeat drug cartels, this potentially wouldn't reduce drug war violence on a global scale. Instead, drug production and trafficking, and the violence that comes with both, would likely shift elsewhere, because the drug trade is so lucrative that someone will always want to take it up, particularly in countries where the drug trade might be one of the only economic opportunities and governments won't be strong enough to suppress the drug trade.

In 2014, for instance, the drug war significantly contributed to the child migrant crisis. After some drug trafficking was pushed out of Mexico, gangs and drug cartels stepped up their operations in Central America's Northern Triangle of El Salvador, Honduras, and Guatemala. These countries, with their weak criminal justice and law enforcement systems, didn't seem to have the capacity to deal with the influx of violence and crime.

The war on drugs "drove a lot of the activities to Central America, a region that has extremely weakened systems," Adriana Beltran of the Washington Office on Latin America explained. "Unfortunately, there hasn't been a strong commitment to building the criminal justice system and the police."

As a result, children fled their countries by the thousands in a major humanitarian crisis. Many of these children ended up in the US, where the refugee system simply doesn't have the capacity to handle the rush of child migrants.

Although the child migrant crisis is fairly unique in its specific circumstances and effects, the series of events (a government cracks down on drugs, trafficking moves to another country, and the drug trade brings violence and crime) is pretty typical in the history of the war on drugs. In the past couple of decades it happened in Colombia, Mexico, Venezuela, and Ecuador after successful anti-drug crackdowns in other Latin American countries.

The Wall Street Journal explained:

Ironically, the shift is partly a by-product of a drug-war success story, Plan Colombia. In a little over a decade, the U.S. spent nearly $8 billion to back Colombia’s efforts to eradicate coca fields, arrest traffickers and battle drug-funded guerrilla armies such as the Revolutionary Armed Forces of Colombia, or FARC. Colombian cocaine production declined, the murder rate plunged and the FARC is on the run.

But traffickers adjusted. Cartels moved south across the Ecuadorean border to set up new storage facilities and pioneer new smuggling routes from Ecuador’s Pacific coast. Colombia’s neighbor to the east, Venezuela, is now the departure point for half of the cocaine going to Europe by sea.

As a 2012 report from the UN Office on Drugs and Crime explained, "one country's success became the problem of others."

This global proliferation of violence is one of the most prominent costs of the drug war. When evaluating whether the war on drugs has been successful, experts and historians weigh this cost, along with the rise of incarceration in the US, against the benefits, such as potentially depressed drug use, to gauge whether anti-drug efforts have been worth it.

Card 7 of 17

Enforcing the war on drugs costs the US more than $51 billion each year, according to the Drug Policy Alliance. As of 2012, the US had spent $1 trillion on anti-drug efforts.

The spending estimates don’t account for the loss of potential taxes on currently illegal substances. According to a 2010 paper from the libertarian Cato Institute, taxing and regulating illicit drugs similarly to tobacco and alcohol could raise $46.7 billion in tax revenue each year.

These annual costs, the spending and the lost potential taxes, add up to nearly 2 percent of state and federal budgets, which totaled an estimated $6.1 trillion in 2013. That's not a huge amount of money, but it may not be worth the cost if the war on drugs is leading to drug-related violence around the world and isn't significantly reducing drug abuse.
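
As a rough sanity check on that figure, here is a minimal arithmetic sketch using only the numbers quoted above; the variable names are ours, and the totals are the cited estimates rather than official budget lines:

    # Rough check of the "nearly 2 percent" claim, using only figures quoted above.
    enforcement_spending = 51e9    # annual US enforcement cost (Drug Policy Alliance)
    forgone_tax_revenue = 46.7e9   # potential annual tax revenue (Cato Institute, 2010)
    combined_budgets = 6.1e12      # estimated state and federal budgets, 2013

    share = (enforcement_spending + forgone_tax_revenue) / combined_budgets
    print("{:.1%} of combined budgets".format(share))  # ~1.6%, i.e. "nearly 2 percent"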

Card 8 of 17

In the US, the war on drugs mostly impacts minority, particularly black, communities. This disproportionate effect is why critics often call the war on drugs racist.

Although black communities aren't more likely to use or sell drugs, they are much more likely to be arrested and incarcerated for drug offenses.

When black defendants are convicted for drug crimes, they face longer prison sentences as well. Drug sentences for black men were 13.1 percent longer than drug sentences for white men between 2007 and 2009, according to a 2012 report from the US Sentencing Commission.

The Sentencing Project explained the differences in a February 2015 report: "Myriad criminal justice policies that appear to be race-neutral collide with broader socioeconomic patterns to create a disparate racial impact. ... Socioeconomic inequality does lead people of color to disproportionately use and sell drugs outdoors, where they are more readily apprehended by police."

One example: Trafficking crack cocaine, one of the few illicit drugs that's more popular among black Americans, carries the harshest punishment. The threshold for a five-year mandatory minimum sentence for crack is 28 grams. In comparison, the threshold for powder cocaine, which is more popular among white than black Americans but pharmacologically similar to crack, is 500 grams.

As for the broader racial disparities, federal programs that encourage local and state police departments to crack down on drugs may create perverse incentives to go after minority communities. Some federal grants, for instance, previously required police to make more drug arrests in order to obtain more funding for anti-drug efforts. Neill Franklin, a retired police major from Maryland and executive director of Law Enforcement Against Prohibition, said minority communities are “the low-hanging fruit” for police departments because they tend to sell in open-air markets, such as public street corners, and have less political and financial power than white Americans.

In Chicago, for instance, an analysis by Project Know, a drug addiction resource center, found enforcement of anti-drug laws is concentrated in poor neighborhoods, which tend to have more crime but are predominantly black.

“Doing these evening and afternoon sweeps meant 20 to 30 arrests, and now you have some great numbers for your grant application,” Franklin said. “In that process, we also ended up seizing a lot of money and a lot of property. That’s another cash cow.”

The disproportionate arrest and incarceration rates have clearly detrimental effects on minority communities. A 2014 study published in the journal Sociological Science found boys with imprisoned fathers are much less likely to possess the behavioral skills needed to succeed in school by the age of 5, starting them on a vicious path known as the school-to-prison pipeline.

As the drug war continues, these racial disparities have become one of the major points of criticism against it. It's not just whether the war on drugs has led to the widespread, costly incarceration of millions of Americans, but whether incarceration has created "the new Jim Crow," a reference to policies, such as segregation and voting restrictions, that subjugated black communities in America.

Card 9 of 17

Beyond the goal of curtailing drug use, the motivations behind the US war on drugs have been rooted in historical fears of immigrants and minority groups.

The US began regulating and restricting drugs during the first half of the 20th century, particularly through the Pure Food and Drug Act of 1906, the Harrison Narcotics Tax Act of 1914, and the Marijuana Tax Act of 1937. During this period, racial and ethnic tensions were particularly high across the country, not just toward African Americans but toward Mexican and Chinese immigrants as well.

As the New York Times explained, the federal prohibition of marijuana came during a period of national hysteria about the effect of the drug on Mexican immigrants and black communities. Concerns about a new, exotic drug, coupled with feelings of xenophobia and racism that were all too common in the 1930s, drove law enforcement, the broader public, and eventually legislators to demand the drug’s prohibition. “Police in Texas border towns demonized the plant in racial terms as the drug of ‘immoral’ populations who were promptly labeled ‘fiends,'” wrote the Times’s Brent Staples.

These beliefs extended to practically all forms of drug prohibition. According to historian Peter Knight, opium largely came over to America with Chinese immigrants on the West Coast. Americans, already skeptical of the drug, quickly latched on to xenophobic beliefs that opium somehow made Chinese immigrants dangerous. “Stories of Chinese immigrants who lured white females into prostitution, along with the media depictions of the Chinese as depraved and unclean, bolstered the enactment of anti-opium laws in eleven states between 1877 and 1900,” Knight wrote.

Cocaine was similarly attached in fear to black communities, neuroscientist Carl Hart wrote for the Nation. The belief was so widespread that the New York Times even felt comfortable writing headlines in 1914 that claimed "Negro cocaine 'fiends' are a new southern menace." The author of the Times piece, a physician, wrote, "[The cocaine user] imagines that he hears people taunting and abusing him, and this often incites homicidal attacks upon innocent and unsuspecting victims." He later added, "Many of the wholesale killings in the South may be cited as indicating that accuracy in shooting is not interfered with - is, indeed, probably improved - by cocaine. I believe the record of the 'cocaine n----r' near Asheville who dropped five men dead in their tracks using only one cartridge for each, offers evidence that is sufficiently convincing."

Most recently, these fears of drugs and the connection to minorities came up during what law enforcement officials characterized as a crack cocaine epidemic in the 1980s and ’90s. Lawmakers, judges, and police in particular linked crack to violence in minority communities. The connection was part of the rationale for making it 100 times easier to get a mandatory minimum sentence for crack cocaine over powder cocaine, even though the two drugs are pharmacologically identical. As a result, minority groups have received considerably harsher prison sentences for illegal drugs. (In 2010, the ratio between crack’s sentence and cocaine’s was reduced from 100-to-1 to 18-to-1.)
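
To see the disparity in numbers, here is a small arithmetic sketch using the thresholds cited in this explainer; the 5-gram pre-2010 crack figure is not stated in the text but is implied by the 100-to-1 ratio, so treat it as our assumption:

    # Grams of each drug that trigger the same five-year federal mandatory minimum.
    powder_cocaine_g = 500
    crack_before_2010_g = 5    # assumed from the 100-to-1 ratio cited above
    crack_after_2010_g = 28    # threshold given earlier in this explainer

    print(powder_cocaine_g / crack_before_2010_g)  # 100.0 -> the old 100-to-1 ratio
    print(powder_cocaine_g / crack_after_2010_g)   # ~17.9 -> the 18-to-1 ratio after 2010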

Hart explained, after noting the New York Times's coverage in particular: "Over the [late 1980s], a barrage of similar articles connected crack and its associated problems with black people. Entire specialty police units were deployed to 'troubled neighborhoods,' making excessive arrests and subjecting the targeted communities to dehumanizing treatment. Along the way, complex economic and social forces were reduced to criminal justice problems; resources were directed toward law enforcement rather than neighborhoods' real needs, such as job creation."

Follow this link:

THE WAR ON DRUGS, EXPLAINED – Vox


Euthanasia – The New York Times

 Euthanasia  Comments Off on Euthanasia – The New York Times
Jun 12, 2016
 

Latest Articles

As the state begins to allow what has come to be known as aid in dying, two patients and two doctors explain how it will affect them and how they are preparing for the changes.

By JENNIFER MEDINA

Canada's Supreme Court overturned criminal laws banning assisted suicide last year, and new legislation has not been put into place.

By IAN AUSTEN

Advocates on both sides of the issue respond.

It's important to provide a humane option to the dying. It's also essential that people have access to palliative and hospice care.

By THE EDITORIAL BOARD

On Thursday, Prime Minister Justin Trudeau of Canada unveiled a new bill that would legalize doctor-assisted suicides for people suffering from serious medical ailments.

By REUTERS

The prime minister has introduced legislation to address the legal void left after Canada's Supreme Court overturned a ban on doctor-assisted death.

By IAN AUSTEN

Researchers who looked at doctor-assisted deaths in the Netherlands found that some patients had declined treatment that might have helped.

By BENEDICT CAREY

Doctors, it turns out, aren't much different from everyone else when it comes to where they die.

By DANIELLE OFRI, M.D.

Although Mr. Hooker repeatedly lost his races for elective office in Tennessee, he managed to advance a progressive agenda through his candidacies and as a plaintiff.

By SAM ROBERTS

The magazine's ethicist on a sibling's struggle, favors in the workplace and secrets between friends.

By KWAME ANTHONY APPIAH

Shared Belief, a gelding, won 10 of 12 career starts and had earnings of more than $2.9 million.

As Holocaust survivors, my parents insisted on being in control of their own deaths.

By ANN M. ALTMAN

When it comes to the end of life, what role should patients play in deciding the terms of their own death?

By SUSAN GUBAR

A son in Colombia helps his mother die while making plans of his own.

CARLOS FRAMB

A son helps his sick mother and faces his own destiny.

CARLOS FRAMB

In signing legislation allowing physician-assisted suicide, the California governor reflected on what he would want in the face of his own death.

By PHILIP M. BOFFEY

The law allowing doctors to prescribe life-ending drugs for terminally ill patients is expected to take effect sometime next year.

By IAN LOVETT and RICHARD PÉREZ-PEÑA

End of Life Choices New York writes that such actions are not a suicide or assisted suicide.

The governor should sign into law a bill that would allow some terminally ill patients to hasten their death.

By THE EDITORIAL BOARD

If Gov. Jerry Brown signs the measure, the state will become the fifth to allow doctors to prescribe life-ending medication to some patients.

By IAN LOVETT

Link:

Euthanasia – The New York Times

Sealand – Simple English Wikipedia, the free encyclopedia

 Sealand  Comments Off on Sealand – Simple English Wikipedia, the free encyclopedia
Jun 12, 2016
 

Sealand is a self-claimed country in the North Sea, but it is not an island. A structure called Roughs Tower was built in the sea by the British Royal Navy, and later became Sealand. It is very small. There is only room for 10 people on it. Even though a man named Michael Bates says Sealand is a country, not all countries agree with him. Once some people from The Netherlands went to Sealand to take it over. Michael Bates did not want this to happen, so he used helicopters and fought them to get it back. He won, and put some of those people in jail until he was pressured to let them go by other countries. If a boat goes too near Sealand, people from Sealand might fire guns at the boat. Even though Michael Bates says he is the prince of Sealand, it is very small so he usually is not there. Other people stay there to take care of Sealand. Sealand has its own stamps, national anthem, money, flag, and more things just like a real country. Bates’ reasons why Sealand should be a real country are these: Sealand is out in the ocean, and when Sealand was created no country owned the ocean. Also, people asked Michael Bates to let his prisoners from The Netherlands go. Bates said that if they thought Sealand was not a country, they would not ask him to do that.

The owners claim that Sealand is an independent sovereign state because in 1968 an English court decided that Roughs Tower was in international waters and outside the jurisdiction of the British courts.[8]

In international law, the two most common rules of statehood are the constitutive and declaratory theories of state creation. In the constitutive theory, a state exists by recognition by other states. The theory splits on whether this recognition requires “diplomatic recognition” or just “recognition of existence”. No other state grants Sealand official recognition, but it has been argued by Bates that negotiations carried out by Germany constituted “recognition of existence”. In the declaratory theory of statehood, an entity becomes a state as soon as it meets the minimal criteria for statehood. Therefore recognition by other states is purely “declaratory”.[9] In 1987, the UK extended its territorial waters from three to twelve miles. Sealand now sits inside waters that Britain claims as its territory.[10]

Irrespective of its legal status, Sealand is managed by the Bates family as if it were a recognised sovereign entity, and they are its hereditary royal rulers. Roy Bates styles himself “Prince Roy” and his wife “Princess Joan”. Their son is known as “His Royal Highness Prince Michael” and has been referred to as the “Prince Regent” by the Bates family since 1999.[11] In this role, he apparently serves as Sealand’s acting “Head of State” and also its “Head of Government”.[12] At a micronations conference hosted by the University of Sunderland in 2004, Sealand was represented by Michael Bates’ son James, who was referred to as “Prince Royal James.”[13] The facility is now occupied by one or more caretakers representing Michael Bates, who himself resides in Essex, England.[11] Sealand’s constitution was instituted in 1974. It consists of a preamble and seven articles. The preamble asserts Sealand’s independence, while the articles variously deal with Sealand’s status as a constitutional monarchy, the empowerment of government bureaus, the role of an appointed, advisory senate, the functions of an appointed, advisory legal tribunal, a proscription against the bearing of arms except by members of a designated “Sealand Guard”, the exclusive right of the sovereign to formulate foreign policy and alter the constitution, and the hereditary patrilinear succession of the monarchy.[14] Sealand’s legal system is claimed to follow British common law, and statutes take the form of decrees enacted by the sovereign.[15] Sealand has issued passports and has operated as a flag of convenience state, and it also holds the Guinness World Record for “the smallest area to lay claim to nation status”.[16] Sealand’s motto is E Mare Libertas (English: From the Sea, Freedom).[17] It appears on Sealandic items, such as stamps, passports, and coins, and is the title of the Sealandic anthem. The anthem was composed by Londoner Basil Simonenko;[18] it does not have lyrics.

At the beginning of 2007, the Bates family placed a newspaper advertisement offering to sell Sealand for 65 million pounds.[19][20] National motto: E mare libertas (Latin: From the sea, freedom)

More:

Sealand – Simple English Wikipedia, the free encyclopedia

Tor Browser – uk.pcmag.com

 Tor Browser  Comments Off on Tor Browser – uk.pcmag.com
Jun 03, 2016
 

Need to hire an assassin, buy some contraband, view illegal porn, or just bypass government, corporate, or identity thief snooping? Tor is your answer. Tor, which stands for "The Onion Router," is not a product but a protocol that lets you hide your Web browsing as though it were obscured by the many layers of an onion. The most common way to view the so-called Dark Web that comprises Tor sites is by using the Tor Browser, a modded version of Mozilla Firefox. Using this Web browser also hides your location, IP address, and other identifying data from regular websites. Accessing Tor has long been beyond the ability of the average user. Tor Browser manages to simplify the process of protecting your identity online, but at the price of performance.

What Is Tor? If you're thinking that Tor comes from a sketchy group of hackers, know that its core technology was developed by the U.S. Naval Research Lab and DARPA. The Tor Project non-profit receives sizeable donations from various federal entities such as the National Science Foundation. The Tor Project has a page listing many examples of legitimate types of Tor users, such as political dissidents in countries with tight control over the Internet and individuals concerned about personal privacy.

Tor won't encrypt your data; for that, you'll need a Virtual Private Network (VPN). Instead, Tor routes your Internet traffic through a series of intermediary nodes. This makes it very difficult for government snoops or aggressive advertisers to track you online. Using Tor affords far more privacy than other browsers' private (or Incognito) modes, since it obscures your IP address so that you can't be tracked with it. Standard browsers' private browsing modes discard your cached pages and browsing history after your browsing session. Even Firefox's new, enhanced private browsing mode doesn't hide your identifiable IP address from the sites you visit, though it does prevent them from tracking you based on cookies.
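
To make the "series of intermediary nodes" idea concrete, here is a toy Python sketch of onion-style layering. It is purely illustrative: real Tor encrypts each layer with a separate key negotiated with that relay, whereas this sketch just nests labeled wrappers, and the relay names and request are made up.

    def wrap(message, relays):
        # Wrap innermost-first, so the first relay's layer ends up outermost.
        for name in reversed(relays):
            message = "[{}|{}]".format(name, message)
        return message

    def peel(wrapped, name):
        # Each relay removes only its own layer and learns only the next hop.
        prefix, suffix = "[{}|".format(name), "]"
        assert wrapped.startswith(prefix) and wrapped.endswith(suffix)
        return wrapped[len(prefix):-len(suffix)]

    # Hypothetical three-hop circuit, mirroring the kind of path described later in this review.
    circuit = ["guard-de", "middle-nl", "exit-nl"]
    packet = wrap("GET https://example.com/", circuit)
    for relay in circuit:
        packet = peel(packet, relay)   # one layer per hop
    print(packet)                      # only the last hop sees the final request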

Starting Up: Connecting to the Tor network entails more than just installing a browser and firing up websites. You need to install support code, but luckily, the free Tor Browser bundle streamlines the process. Installers are available for Windows, Mac, and Linux. The Tor Project recommends installing the browser on a USB drive for more anonymity and portability; the drive needs to have 80MB of free space.

We tested a standard Windows installer, with choices to create desktop icons and run the browser immediately. The browser itself is a heavily modified version of Firefox 38.5 (as of this writing), and includes several security plug-ins as well as security tweaks such as not caching any website data. For a full rundown of the PCMag Editors’ Choice browser’s many features, read our full review of Firefox.

Before merrily browsing along anonymously, you need to inform Tor about your Web connection. If your Internet connection is censored, you configure it one way; if not, you can connect directly to the network. Since we live in a free society and work for benevolent corporate overlords, we connected directly for testing. After connecting to the Tor relay system (a dialog with a progress bar appears at this stage), the browser launches, and you see the Tor Project's page.

Interface: The browser's home page includes a plea for financial support to the project, a search box using the anonymized Disconnect.me search, and a Test Tor Network Settings link. Hitting the latter loads a page that indicates whether you're successfully anonymized. We recommend taking this step. The page even shows your apparent IP address, "apparent" because it's by no means your actual IP address. We verified this by opening Microsoft Edge and checking our actual IP address on Web search sites. The two addresses couldn't have been more different, because the Tor Browser reports the IP address of a Tor node.
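
If you want to script the same check the Test page performs, here is a hedged sketch: fetch your public IP twice, once directly and once through Tor's local SOCKS proxy, and compare. It assumes the requests package with SOCKS support is installed, that the Tor Browser is running with its usual proxy on 127.0.0.1:9150 (standalone Tor typically listens on 9050), and that ECHO_URL is a placeholder for any service that returns your public IP as plain text.

    import requests  # assumes: pip install requests[socks]

    ECHO_URL = "https://example.com/ip"  # hypothetical IP-echo endpoint; substitute a real one

    # The Tor Browser usually exposes a SOCKS proxy on 127.0.0.1:9150.
    tor_proxies = {"http": "socks5h://127.0.0.1:9150",
                   "https": "socks5h://127.0.0.1:9150"}

    direct_ip = requests.get(ECHO_URL, timeout=30).text.strip()
    tor_ip = requests.get(ECHO_URL, proxies=tor_proxies, timeout=30).text.strip()
    print("Direct:", direct_ip)
    print("Via Tor:", tor_ip)  # should differ from the direct address if Tor is working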

The browser interface is identical to Firefox's, except with some necessary add-ons installed. NoScript, a commonly used Firefox add-on, is preinstalled and can be used to block most non-HTML content on the Web. The green onion button to the left of the address bar is the Torbutton add-on. It lets you see your Tor network settings, but also the circuit you're using: Our circuit started in Germany and passed through two different addresses in the Netherlands before reaching the good old Internet. If that doesn't suit you, you can request a new circuit, either for the current session or for the current site. This was one of our favorite features.

One thing we really like about the Tor Browser is how it makes existing security and privacy tools easier to use. NoScript, for example, can be a harsh mistress, difficult to configure and prone to breaking websites. But a security panel in the Torbutton presents you with a simple security slider. At the lowest, default setting, all browser features are enabled. At the highest setting, all JavaScript and even some image types are blocked, among other settings. This makes it easy to raise or lower the level of protection you need, without having to muck around in multiple settings windows.

Everything you do in the browser is tested for anonymity: When we tried full-screening the browser window, a message told us that that could provide sites a way to track us, and recommended leaving the window at the default size. And the project's site specifically states that using Tor alone doesn't guarantee anonymity, but rather that you have to abide by safe browsing guidelines: don't use BitTorrent, don't install additional browser add-ons, don't open documents or media while online. The recommendation to only visit secure HTTPS sites is optionally enforced by a plug-in called HTTPS Everywhere.

Even if you follow these recommendations, though, someone could detect the simple fact that you’re using Tor, unless you set it up to use a Tor bridge relay. Those are not listed in the Tor directory, so hackers (and governments) would have more trouble finding them.

One thing we noticed while browsing the standard Web through Tor was the need to enter a CAPTCHA to access many sites. This is because your cloaked URL looks suspicious to website security services such as CloudFlare, used by millions of sites to protect themselves. It’s just one more price you pay for anonymity.

We also had trouble finding the correct version of websites we wished to visit. Directing the Tor Browser to PCMag.com, for example, took us to the Netherlands localization of our website. We could not find any way to direct us back to the main URL, which lets you access the U.S. site.

The Dark Web: You can use Tor to anonymize browsing to standard websites, of course, but there's a whole hidden network of sites that don't appear on the standard Web at all, and are only visible if you're using a Tor connection. You can read all about it in our feature, Inside the Dark Web. If you use a standard search engine, even one anonymized by Disconnect.me, you just see standard websites. By the way, you may improve your privacy by switching to an anonymous search provider such as DuckDuckGo or Startpage.com. DuckDuckGo even offers a hidden search version, and Sinbad Search is only available through Tor. Ahmia is another search engine, on the open Web, for finding hidden Tor sites, with the twist of only showing sites that are on the up-and-up.

Tor hidden sites have URLs that end in .onion, preceded by 16 alphanumeric characters. You can find directories of these hidden sites with categories resembling the good old days of Yahoo. There's even a Tor Links Directory page (on the regular Web) that's a directory of these directories. There are many chat and message boards, but you can also find directories of things like lossless audio files, video game hacks, and financial services such as anonymous bitcoin, and even a Tor version of Facebook. Many onion sites are very slow or completely down; keep in mind that they're not run by deep-pocketed Web companies. Very often we clicked an onion link only to be greeted with an "Unable to Connect" error. Sinbad helpfully displays a red "Offline on last crawl" bullet to let you know that a site is probably nonfunctional.
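
For the address format described above, a quick validity check is easy to script. The sketch below assumes the 16 characters are base32 (lowercase letters and the digits 2 through 7), which is what the 16-character onion addresses of this era used; both sample hostnames are made up.

    import re

    # Matches the 16-character hidden-service addresses described above.
    ONION_V2 = re.compile(r"^[a-z2-7]{16}\.onion$")

    for host in ("exampleonion2345.onion", "pcmag.com"):
        print(host, bool(ONION_V2.match(host)))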

Speed and Compatibility: Webpage loading time under Tor is typically far slower than browsing with a standard Internet connection. It's really not possible to state definitively by how much your browsing will be slowed down if you use Tor, because it depends on the particular relay servers your traffic is being routed through, and this can change with every browsing session. As a very rough rule of thumb, however, PCMag.com took 11.3 seconds to load in Firefox and 28.7 seconds in the Tor Browser, at the same time, over the same FiOS connection on the open Web. Your mileage, of course, will vary.

As for browser benchmarks, the results hew to Firefox's own performance, with near-leading performance on all the major JavaScript tests, JetStream and Octane, for example. On our test laptop, the Tor Browser scored 20,195 on Octane, compared with 22,297 for standard Firefox, not a huge difference. The Tor network routing is a far more significant factor in browsing performance than browser JavaScript speed. That is, unless you've blocked all JavaScript.
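
Putting the two comparisons in the same terms, here is a quick calculation with the numbers reported in this review (our test figures, not general benchmarks):

    load_firefox_s, load_tor_s = 11.3, 28.7     # seconds to load PCMag.com
    octane_firefox, octane_tor = 22297, 20195   # Octane JavaScript scores

    print("Page load: {:.1f}x slower over Tor".format(load_tor_s / load_firefox_s))                 # ~2.5x
    print("Octane score: {:.0%} lower in the Tor Browser".format(1 - octane_tor / octane_firefox))  # ~9%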

Keep in mind, though, that the Tor Browser is based on the Firefox Extended Support Release versions, which update less frequently so that large organizations have time to maintain their custom code. That means you don't get quite the latest in Firefox performance and features, but security updates are delivered at the same time as new main versions.

There’s a similar story when it comes to standards compatibility: On the HTML5Test.com site, which quantifies the number of new Web standards supported by a browser, the Tor Browser gets a score of 412, compared with 468 for the latest Firefox version. You may run into incompatible sites, though. For example, none of the Internet speed connection test sites performed correctly in the Tor Browser.

Tor, Browser of Thunder: With the near-complete lack of privacy on today's Web, Tor is becoming more and more necessary. It lets you browse the Web knowing that all those tracking services aren't watching your every move. Most of us have experienced how an ad follows you from site to site, just because you clicked on, or searched for, a product or service once. All that goes away.

Of course, you pay a price of extra setup and slower performance with the Tor Browser, but it’s less onerous than you may think. And the included support for fine-grain privacy and security protection is excellent. If you take your online privacy seriously, you owe it to yourself to check out the Tor Browser. For standard, full-speed Web browsing, however, check out PCMag Editors’ Choice Web browser, Firefox.

More here:
Tor Browser – uk.pcmag.com


The SCF Group | Licensed Company & Trust Managers

 Offshore Companies  Comments Off on The SCF Group | Licensed Company & Trust Managers
May 22, 2016
 

A FULL RANGE OF LEGAL, ACCOUNTANCY & TAX PLANNING SERVICES

The SCF Group specializes in providing tax-efficient structures, be they for general investment, trading abroad or consultancy. The Group also has considerable experience in advising on UK & overseas property investment, international tax planning & international structures for computer consultants. Our fiscal migration and tax planning department is operated by UK-qualified lawyers and accountants and can advise both domiciled and non-domiciled individuals on how to mitigate their individual and corporate tax exposure, be it in the UK, Ireland or internationally. Our legal & business department can provide specialized advice on all domestic and international tax planning issues, as well as on 'key' issues such as asset protection, be it in the form of trusts, private interest foundations (PIFs), pre-nuptial and post-nuptial agreements, divorce and general family law, legal drafting, leave-to-remove applications, etc.

“INVESTING OR TRADING ABROAD”

At a more basic level we provide a wide range of Irish and UK company formation, Irish and UK accountancy, “offshore company formation” and offshore/international accountancy services for most European and international company formation jurisdictions.

For overseas property investors and those wishing to set up tax-efficient international trading companies – employing mechanisms such as offshore companies and low-tax, managed companies based in jurisdictions such as Ireland, Cyprus, Switzerland and the Isle of Man – particularly where investment in Eastern Europe is involved, our Group has established its own offices in Ireland, the UK and Cyprus, and affiliates in jurisdictions such as Switzerland, Luxembourg and the Netherlands, where we can carry out Swiss company formation, Luxembourg company formation and Netherlands company formation. In fact, we are confident that in the case of UK company formation, Irish company formation and Cyprus company formation we are probably the most comprehensive service providers in Europe, be it at a company formation, accountancy or company management level. In addition, we have affiliates and agents in all other 'key' tax planning jurisdictions including Switzerland, the Netherlands, Luxembourg, Monaco, Gibraltar, Malta, Singapore and Hong Kong. We can provide a wide range of asset protection trusts and private interest foundations ("PIFs & asset protection") in countries such as Liechtenstein, Panama and Switzerland, both as general methods of limiting exposure to income, corporate, inheritance and capital gains taxes and as ideal pre-nuptial insurance policies. The SCF Group has its sales representative office in London and fully owned offices in Madeira, Cyprus and Ireland, where we can provide company formation, nominee directors, company administration, accountancy, VAT management and confidential offshore bank accounts in all major jurisdictions, including Switzerland, Cyprus, the United Kingdom, the Isle of Man and Panama, for offshore banking and tax planning.


Here is the original post:
The SCF Group | Licensed Company & Trust Managers


What is NATO?

 NATO  Comments Off on What is NATO?
Apr 14, 2016
 

NATO is a political and military alliance of 28 North American and European countries, bound by shared democratic values, that have joined together to best pursue security and defense. In addition to the United States, the other NATO Allies are Albania, Belgium, Bulgaria, Canada, Croatia, the Czech Republic, Denmark, Estonia, France, Germany, Greece, Hungary, Iceland, Italy, Latvia, Lithuania, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Turkey, and the United Kingdom. The principle of collective defense is at the heart of NATO and is enshrined in Article 5 of the Alliance's founding Washington Treaty, which asserts that an attack on one Ally is to be considered an attack on all. NATO invoked Article 5 of the Washington Treaty for the first time in its history following the 9/11 terrorist attacks against the United States.

Founded in 1949, NATO played a unique role in maintaining stability and security in the trans-Atlantic area during the Cold War. Since the end of the Cold War, the Alliance has transformed itself to meet the security challenges of the new century, continuing with adoption of a new NATO Strategic Concept at the Lisbon NATO Summit in 2010. Today, NATO's operations include leading the International Security Assistance Force (ISAF) mission in Afghanistan, ensuring a safe and secure environment in Kosovo through the KFOR mission, and contributing to international counter-piracy efforts off the Horn of Africa through Operation Ocean Shield. In 2011, NATO successfully carried out the UN-mandated mission in Libya to protect civilians, enforce a no-fly zone, and enforce a maritime arms embargo. NATO has also provided airlift and sealift support to the African Union (AU) missions in Somalia and Sudan, has engaged in a number of humanitarian relief operations in recent years, including delivery of over 100 tons of supplies from Europe to the United States following Hurricane Katrina, and leads the counterterrorism Operation Active Endeavor in the Mediterranean Sea.

Recognizing that the security challenges Allies face often emerge beyond Europe, NATO has become the hub of a global security network, establishing partnerships with over thirty countries. These ties provide opportunities for practical military cooperation and political dialogue. Partners have contributed significantly to NATO operations in Afghanistan, Kosovo, Iraq, and Libya.

Here is the original post:
What is NATO?

Kids.Net.Au – Encyclopedia > NATO

 NATO  Comments Off on Kids.Net.Au – Encyclopedia > NATO
Mar 02, 2016
 

The North Atlantic Treaty Organization (NATO) is an international organization for defence collaboration established in 1949, in support of the North Atlantic Treaty signed in Washington, D.C. on April 4, 1949.

The core provision of the treaty is Article V, under which an armed attack against one or more of the members is to be considered an attack against them all.

This provision was intended so that if the Soviet Union launched an attack against the European allies of the United States, it would be treated as if it were an attack on the United States itself. However, the feared Soviet invasion of Europe never came. Instead, the provision was used for the first time in the treaty's history on September 12, 2001, in response to the September 11, 2001 terrorist attacks.

Member States: from the foundation in 1949 or with the year of accession.

Greece and Turkey joined the organization in February 1952. Germany joined as West Germany in 1955 and German unification in 1990 extended the membership to the areas of former East Germany. Spain was admitted on May 30, 1982 and the former Warsaw Pact Countries of Poland, Hungary and the Czech Republic made history by becoming members on March 12, 1999.

France is still a member of NATO but withdrew from the integrated military command in 1966. Iceland, the sole member of NATO that does not have its own military force, joined on the condition that it would not be forced to participate in warfare.

History

On March 17, 1948, the Benelux countries, France, and the United Kingdom signed the Treaty of Brussels, a precursor to the NATO agreement.

The Soviet Union and its satellite states formed the Warsaw Pact in the 1950s in order to counterbalance NATO. Both organisations were opposing sides in the cold war. After the fall of the Iron Curtain in 1989, the Warsaw Pact disintegrated.

NATO saw its first military engagement in the Kosovo War, where it waged an 11-week bombing campaign against Serbian forces starting on March 24, 1999.

Three former communist countries, Hungary, the Czech Republic and Poland, joined NATO in 1999. At the Prague (Czech Republic) summit of November 21-22, 2002, seven countries were invited to start talks in order to join the Alliance: Estonia, Latvia, Lithuania, Slovenia, Slovakia, Bulgaria and Romania. The invited countries are expected to join NATO in 2004. Albania and the Former Yugoslav Republic of Macedonia will probably be told they have not met the economic, political and military reform criteria and will have to wait. Croatia applied only in 2002 and has just started the process.

Charles de Gaulle's decision to remove France from NATO's military command in 1966 to pursue its own nuclear defence program precipitated the relocation of the NATO Headquarters from Paris, France to Brussels, Belgium by October 16, 1967. While the political headquarters is located in Brussels, the military headquarters, the Supreme Headquarters Allied Powers Europe (SHAPE), is located just south of Brussels, in the town of Mons.

On September 12, 2001, NATO invoked, for the first time in its history, the article in its charter that states that any attack on a member state is considered an attack against the entire alliance. This came in response to the September 11, 2001 terrorist attacks.

On February 10, 2003, NATO faced a serious crisis when France and Belgium broke the procedure of silent approval concerning the timing of protective measures for Turkey in case of a possible war with Iraq. Germany did not use its right to break the procedure but said it supported the veto.

On April 16, 2003, NATO agreed to take command in August of the International Security Assistance Force (ISAF) in Afghanistan. The decision came at the request of Germany and the Netherlands, the two nations leading ISAF at the time of the agreement. It was approved unanimously by all 19 NATO ambassadors. This marked the first time in NATO's history that it took charge of a mission outside the North Atlantic area. Canada had originally been slated to take over ISAF in August.

See also: Euro-Atlantic Partnership Council, OSCE, WEU, UN

All Wikipedia text is available under the terms of the GNU Free Documentation License

View post:
Kids.Net.Au – Encyclopedia > NATO

Island – definition of island by The Free Dictionary

 Islands  Comments Off on Island – definition of island by The Free Dictionary
Feb 29, 2016
 

noun: isle, inch (Scot. & Irish), atoll, holm (dialect), islet, ait or eyot (dialect), cay or key. Example: a day trip to the island of Gozo.

Islands and island groups: Achill, Admiralty, Aegean, Aegina, Alcatraz, Aldabra, Alderney, Aleutian, Alexander, Amboina, Andaman, Andaman and Nicobar, Andreanof, Andros, Anglesey, Anguilla, Anticosti, Antigua, Antilles, Antipodes, Aran, Arran, Aru or Arru, Aruba, Ascension, Auckland, Azores, Baffin, Bahamas, Balearic, Bali, Banaba, Bangka, Banks, Baranof, Barbados, Barbuda, Bardsey, Barra, Basilan, Basse-Terre, Batan, Belau, Belle, Benbecula, Bermuda, Biak, Billiton, Bioko, Bohol, Bonaire, Bonin, Bora Bora, Borneo, Bornholm, Bougainville, British, Bute, Butung, Caicos, Caldy, Calf of Man, Campobello, Canary, Canna, Canvey, Cape Breton, Capri, Caroline, Cayman, Cebú, Ceylon, Channel, Chatham, Cheju, Chichagof, Chiloé, Chios, Choiseul, Christmas, Cocos, Coll, Colonsay, Coney, Cook, Corfu, Corregidor, Corsica, Crete, Cuba, Curaçao, Cyclades, Cyprus, Cythera, Delos, D'Entrecasteaux, Diomede, Disko, Diu, Djerba or Jerba, Dodecanese, Dominica, Dry Tortugas, Easter, Eigg, Elba, Ellesmere, Espíritu Santo, Euboea, Faeroes, Faial or Fayal, Fair, Falkland, Falster, Farquhar, Fernando de Noronha, Fiji, Flannan, Flinders, Flores, Florida Keys, Foula, Foulness, Franz Josef Land, French West Indies, Frisian, Fyn, Galápagos, Gambier, Gigha, Gilbert, Gotland, Gothland, or Gottland, Grand Bahama, Grand Canary, Grande-Terre, Grand Manan, Greater Antilles, Greater Sunda, Greenland, Grenada, Grenadines, Guadalcanal, Guam, Guernsey, Hainan or Hainan Tao, Handa, Hawaii, Hayling, Heard and McDonald, Hebrides, Heimaey, Heligoland, Herm, Hispaniola, Hokkaido, Holy, Hong Kong, Honshu, Hormuz or Ormuz, Howland, Ibiza, Icaria, Iceland, Imbros, Iona, Ionian, Ireland, Ischia, Islay, Isle Royale, Ithaca, Iwo Jima, Jamaica, Jan Mayen, Java, Jersey, Jolo, Juan Fernández, Jura, Kangaroo, Kauai, Keos, Kerrera, Kiritimati, Kodiak, Kos or Cos, Kosrae, Krakatoa or Krakatau, Kuril or Kurile, Kyushu or Kiushu, La Palma, Labuan, Lakshadweep, Lampedusa, Lanai, Lavongai, Leeward, Lemnos, Lesbos, Lesser Antilles, Levkás, Leukas, or Leucas, Lewis with Harris or Lewis and Harris, Leyte, Liberty, Lindisfarne, Line, Lipari, Lismore, Lolland or Laaland, Lombok, Long, Longa, Lord Howe, Luing, Lundy, Luzon, Mackinac, Macquarie, Madagascar, Madeira, Madura, Maewo, Mahé, Mainland, Majorca, Maldives, Malé, Malta, Man, Manhattan, Manitoulin, Marajó, Margarita, Marie Galante, Marinduque, Marquesas, Marshall, Martinique, Masbate, Mascarene, Matsu or Mazu, Maui, Mauritius, May, Mayotte, Melanesia, Melos, Melville, Mersea, Micronesia, Mindanao, Mindoro, Minorca, Miquelon, Molokai, Moluccas, Montserrat, Mount Desert, Muck, Mull, Mykonos, Nantucket, Nauru, Naxos, Negros, Netherlands Antilles, Nevis, New Britain, New Caledonia, Newfoundland, New Georgia, New Guinea, New Ireland, New Providence, New Siberian, Nicobar, Niue, Norfolk, North, North Uist, Nusa Tenggara, Oahu, Oceania, Okinawa, Orkneys or Orkney, Palawan, Palmyra, Panay, Pantelleria, Páros, Patmos, Pelagian, Pemba, Penang, Pescadores, Philae, Philippines, Phoenix, Pitcairn, Polynesia, Ponape, Pribilof, Prince Edward, Prince of Wales, Principe, Qeshm or Qishm, Queen Charlotte, Queen Elizabeth, Quemoy, Raasay, Ramsey, Rarotonga, Rathlin, Réunion, Rhodes, Rhum, Rialto, Roanoke, Robben, Rockall, Rona, Ross, Ryukyu, Saba, Safety, Saipan, Sakhalin, Salamis, Saltee, Samar, Samoa, Samos, Samothrace, San Cristóbal, San Juan, San Salvador, Santa Catalina, Sao Miguel, Sao Tomé, Sardinia, Sark, Savaii, Scalpay, Schouten, Scilly, Sea, Seil, Seram or Ceram, Seychelles, Sheppey, Shetland, Sicily, Singapore, Sjælland, Shikoku, Skokholm, Skomer, Skye, Skyros or Scyros, Society, Socotra, South, Southampton, South Georgia, South Orkney, South Shetland, South Uist, Spitsbergen, Sporades, Sri Lanka, St. Croix, St. Helena, St. John, St. Kilda, St. Kitts or St. Christopher, St. Lucia, St. Martin, St. Tudwal's, St. Vincent, Staffa, Staten, Stewart, Stroma, Stromboli, Sulawesi, Sumatra, Sumba or Soemba, Sumbawa or Soembawa, Summer, Sunda or Soenda, Tahiti, Taiwan, Tasmania, Tenedos, Tenerife, Terceira, Thanet, Thásos, Thera, Thousand, Thursday, Timor, Tiree, Tobago, Tokelau, Tombo, Tonga, Tortola, Tortuga, Trinidad, Tristan da Cunha, Trobriand, Truk, Tsushima, Tuamotu, Tubuai, Turks, Tutuila, Tuvalu, Ulva, Unimak, Upolu, Ushant, Vancouver, Vanua Levu, Vanuatu, Vestmannaeyjar, Victoria, Virgin, Visayan, Viti Levu, Volcano, Walcheren, Walney, West Indies, Western, Wight, Windward, Wrangel, Yap, Youth, Zante, Zanzibar

Visit link:
Island – definition of island by The Free Dictionary

Rationalism | Article about rationalism by The Free Dictionary

 Rationalism  Comments Off on Rationalism | Article about rationalism by The Free Dictionary
Feb 02, 2016
 

[Lat.,=belonging to reason], in philosophy, a theory that holds that reason alone, unaided by experience, can arrive at basic truth regarding the world. Associated with rationalism is the doctrine of innate ideas and the method of logically deducing truths about the world from "self-evident" premises. Rationalism is opposed to empiricism on the question of the source of knowledge and the techniques for verification of knowledge. René Descartes, G. W. von Leibniz, and Baruch Spinoza all represent the rationalist position, and John Locke the empirical. Immanuel Kant in his critical philosophy attempted a synthesis of these two positions. More loosely, rationalism may signify confidence in the intelligible, orderly character of the world and in the mind's ability to discern such order. It is opposed by irrationalism, a view that either denies meaning and coherence in reality or discredits the ability of reason to discern such coherence. Irrational philosophies accordingly stress the will at the expense of reason, as exemplified in the existentialism of Jean-Paul Sartre or Karl Jaspers. In religion, rationalism is the view that recognizes as true only that content of faith that can be made to appeal to reason. In the Middle Ages the relationship of faith to reason was a fundamental concern of scholasticism.

See E. Heimann, Reason and Faith in Modern Society (1961); T. F. Torrance, God and Rationality (1971); R. L. Arrington, Rationalism, Realism, and Relativism (1989).

... endangered by world events as well as by sceptical movements in philosophy. However, rationalism in the sense of a belief in progress survives in a modified form in many areas of sociology and philosophy. A further view is that it is a mistake to polarize rationalism and empiricism, since both of these play a role in human knowledge, which always involves both conception (rationalism) and perception (empiricism).

a collective designation for the architectural schools of the first half of the 20th century that made use of the achievements of modern science and technology. In the broad sense, rationalism in architecture is sometimes equated with the concept of modern architecture, as represented by the work of L. H. Sullivan in the United States, H. P. Berlage in the Netherlands, A. Loos in Austria, the masters of the Deutscher Werkbund in Germany, and A. Perret in France.

The establishment of rationalism in the early 1920s was largely promoted by the theories propagated by the circle of architects associated with the journal L'Esprit nouveau. The movement's leaders were Le Corbusier in France and W. Gropius of the Bauhaus school of architecture in Germany.

Rationalism flourished essentially from the 1920s through the 1950s. In 1928 its supporters organized the International Congress for Modern Architecture, which met until 1959. Rationalist ideas concerning urban planning were set forth in 1933 in the Athens Charter. In the 1950s the general architectural principles of rationalism led to the creation of the international style, represented by the work of L. Mies van der Rohe and many others. The dogmatic architectural ideas and the social-reformist utopianism of the proponents of rationalism led to a crisis in the movement by the late 1950s.

The Russian architects of Asnova (Association of New Architects), including N. A. Ladovskii and K. S. Melnikov, proclaimed themselves to be rationalists. They emphasized psychological and physiological factors in the appreciation of architectural form and sought rational principles in the visual aspect of architecture.

a philosophical school that considers reason to be the foundation of human understanding and behavior. Rationalism is the opposite of fideism, irrationalism, and sensationalism (empiricism). The term rationalism has been used to designate and characterize philosophical concepts since the 19th century, but historically the rationalist tradition originated in ancient Greek philosophy. For example, Parmenides, who distinguished between the knowledge of truth (obtained through reason) and the knowledge of opinion (obtained through sensory perception), considered reason to be the criterion of truth.

Rationalism took shape in modern times as an integral system of epistemological views, as a result of the development of mathematics and the natural sciences. In contrast to medieval Scholasticism and religious dogmatism, the classical rationalism of the 17th and 18th centuries (Descartes, Spinoza, Malebranche, and Leibniz) was based on the idea of natural order, an infinite chain of causality pervading the world. Thus, the principles of rationalism were accepted by both materialists (Spinoza) and idealists (Leibniz), although the character of rationalism differed in the two philosophical trends, depending on how the question of the origin of knowledge was resolved.

The rationalism of the 17th and 18th centuries, which asserted the decisive role of reason in both human cognition and human activity, was one of the philosophical sources of the ideology of the Enlightenment. The cult of reason was also characteristic of the 18th-century French materialists, who adopted a philosophical position of materialistic sensationalism and criticized the speculative constructs of rationalism.

Seeking to substantiate the absolute reliability of the principles of science and the tenets of mathematics and the natural sciences, rationalism attempted to explain how knowledge obtained through human cognitive activity could be objective, universal, and necessary. Unlike sensationalism, rationalism maintained that scientific knowledge, which possesses these logical properties, could be attained through reason, which served as the source of knowledge and as the criterion of truth. For example, the rationalist Leibniz modified the basic thesis of sensationalism, as stated by Locke ("there is nothing in reason that was not previously present in sensations"), by appending to it the phrase "other than reason itself." In other words, reason is capable of grasping not only the particular and the accidental, to which sensory perception is limited, but also the universal and the essential.

The concept of reason as the single source of scientific knowledge led rationalists to an idealist conclusion regarding the existence of innate ideas (Descartes) or of predispositions and inclinations in thought that are independent of sensory impressions (Leibniz). The underestimation by rationalists of the role of sensory perception, man's link with the external world, led to the separation of thought from the object of cognition.

Kant, who attempted to reconcile the ideas of rationalism and sensationalism, proposed that "all our knowledge begins with the senses, passes to the faculty of understanding, and ends with reason" (I. Kant, Soch., vol. 3, Moscow, 1964, p. 340). According to Kant, reason cannot serve as the universal criterion of truth. In order to explain the properties of knowledge, Kant introduced the concept of the apriority (a priori knowledge) of both conceptual forms (as in classical rationalism) and forms of contemplation: space and time. However, Kantian rationalism retains its force only at the price of adopting an agnostic position; that is, it deals only with the world of phenomena and excludes consideration of things-in-themselves, or objective reality.

In Hegel's philosophy the absolute idea, or absolute reason, is the original principle and essence of the world, and the process of cognition is viewed as the self-cognition of reason, which comprehends its own content in the world. In Hegel, therefore, the development of the objective world is represented as a purely logical, rational process, and rationalism assumes the character of panlogism.

Bourgeois philosophy of the 19th and 20th centuries (positivism and neopositivism, for example) lost faith in the unlimited power of reason. The prevailing trend in 19th- and 20th-century bourgeois philosophy is a critique of classical rationalism, with its ideals of the power of reason and man's unlimited rational activity. This critique is based either on irrationalism or on a moderate, limited rationalism. For example, Freudianism, which asserts the dominant role of irrational, subconscious elements, criticizes rationalism from the standpoint of irrationalism, as do intuitionism and existentialism. The concepts of M. Weber and K. Mannheim are representative of the critique of rationalism from the standpoint of moderate, limited rationalism, which is associated less with the logical problems of cognition and more with a search for the sociocultural bases and limits of rationalism.

The narrow, one-sided character of rationalism was overcome in Marxism. It was possible to resolve the contradiction between empiricism and rationalism on the basis of fundamentally new principles developed in the theory of cognition of dialectical materialism. The basic condition for resolving the contradiction between empiricism and rationalism was an analysis of the process of cognition, in integral association with practical activity for transforming reality. V. I. Lenin wrote: "From living perception to abstract thought, and from this to practice - such is the dialectical path of the cognition of truth and the cognition of objective reality" (Poln. sobr. soch., 5th ed., vol. 29, pp. 152-53).

Originally posted here:

Rationalism | Article about rationalism by The Free Dictionary

The Tor Browser: Tor Browser – au.pcmag.com

 Tor Browser  Comments Off on The Tor Browser: Tor Browser – au.pcmag.com
Jan 28, 2016
 

The Tor Browser makes the tricky work of surfing the Web anonymously as easy as using any other browser, but with a significant performance hit.

Jan. 26, 2016

Need to hire an assassin, buy some contraband, view illegal porn, or just bypass government, corporate, or identity thief snooping? Tor is your answer. Tor, which stands for “The Onion Router” is not a product, but a protocol that lets you hide your Web browsing as though it were obscured by the many layers of an onion. The most common way to view the so-called Dark Web that comprises Tor sites is by using the Tor Browser, a modded version of Mozilla Firefox. Using this Web browser also hides your location, IP address, and other identifying data from regular websites. Accessing Tor has long been beyond the ability of the average user. Tor Browser manages to simplify the process of protecting your identity onlinebut at the price of performance.

What Is Tor? Ifyou’re thinking that Tor comes from a sketchy group of hackers, know that its core technology was developed by the U.S. Naval Research Lab and D.A.R.P.A.. The Tor Project non-profit receives sizeable donations from various federal entities such as The National Science Foundation. The Tor Project has a page listing many examples of legitimate types of Tor users, such as political dissidents in countries with tight control over the Internet and individuals concerned about personal privacy.

Tor won’t encrypt your datafor that, you’ll need a Virtual Private Network (VPN). Instead, Tor routes your Internet traffic through a series of intermediary nodes. This makes it very difficult for government snoops or aggressive advertisers to track you online. Using Tor affords far more privacy than other browsers’ private (or Incognito) modes, since it obscures your IP address so that you can’t be trackedwith it. Standard browsers’ private browsing modes discard your cached pages and browsing history afteryour browsing session.Even Firefox’s new, enhanced private browsing mode doesn’t hide your identifiable IP address from the sites you visit, though it does prevent them tracking you based on cookies.

We tested a standard Windows installer, with choices to create desktop icons and run the browser immediately. The browser itself is a heavily modified version of Firefox 38.5 (as of this writing), and includes several security plug-ins as well as security tweaks such as not caching any website data. For a full rundown of the PCMag Editors’ Choice browser’s many features, read our full review of Firefox.

Before merrily browsing along anonymously, you need to inform Tor about your Web connection. If your Internet connection is censored, you configure the browser one way; if not, you can connect directly to the network. Since we live in a free society and work for benevolent corporate overlords, we connected directly for testing. After connecting to the Tor relay system (a dialog with a progress bar appears at this stage), the browser launches, and you see the Tor project's page.

The browser interface is identical to Firefox's, except with some necessary add-ons installed. NoScript, a commonly used Firefox add-on, is preinstalled and can be used to block most non-HTML content on the Web. The green onion button to the left of the address bar is the Torbutton add-on. It lets you see your Tor network settings, but also the circuit you're using: our circuit started in Germany and passed through two different addresses in the Netherlands before reaching the good old Internet. If that doesn't suit you, you can request a new circuit, either for the current session or for the current site. This was one of our favorite features.
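
Inside the Tor Browser you request a new circuit from the Torbutton menu, as described above. For readers who want to script the equivalent outside the browser, the hedged sketch below uses the third-party stem library to send Tor's NEWNYM signal over the control port, which asks a running Tor instance to place new connections on fresh circuits. It assumes a local Tor with ControlPort 9051 enabled in torrc.

    # Sketch: ask a running Tor instance for fresh circuits via its control port.
    # Assumes ControlPort 9051 is enabled in torrc and stem is installed (pip install stem).
    from stem import Signal
    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()         # uses the control cookie or a configured password
        controller.signal(Signal.NEWNYM)  # request that new streams be placed on new circuits

Tor rate-limits this signal, so rapid repeated requests may simply be ignored.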

One thing we really like about the Tor Browser is how it makes existing security and privacy tools easier to use. NoScript, for example, can be a harsh mistress, which can be difficult to configure and can break websites. But a security panel in the Torbutton presents you with a simple security slider. At the lowest, default setting, all browser features are enabled. At the highest setting, all JavaScript and even some image types are blocked, among other restrictions. This makes it easy to raise or lower the level of protection you need, without having to muck around in multiple settings windows.

Everything you do in the browser is tested for anonymity: when we tried full-screening the browser window, a message told us that doing so could give sites a way to track us, and recommended leaving the window at the default size. And the project's site specifically states that using Tor alone doesn't guarantee anonymity, but rather that you have to abide by safe-browsing guidelines: don't use BitTorrent, don't install additional browser add-ons, don't open documents or media while online. The recommendation to visit only secure HTTPS sites is optionally enforced by a plug-in called HTTPS Everywhere.

Even if you follow these recommendations, though, someone could detect the simple fact that you’re using Tor, unless you set it up to use a Tor bridge relay. Those are not listed in the Tor directory, so hackers (and governments) would have more trouble finding them.

One thing we noticed while browsing the standard Web through Tor was the need to enter a CAPTCHA to access many sites. This is because traffic arriving from Tor exit nodes looks suspicious to website security services such as CloudFlare, used by millions of sites to protect themselves. It's just one more price you pay for anonymity.

We also had trouble finding the correct version of websites we wished to visit. Directing the Tor Browser to PCMag.com, for example, took us to the Netherlands localization of our website, and we could not find any way to get back to the main URL for the U.S. site.

Tor hidden sites have URLs that end in .onion, preceded by 16 alphanumeric characters. You can find directories of these hidden sites with categories resembling the good old days of Yahoo. There's even a Tor Links Directory page (on the regular Web) that's a directory of these directories. There are many chat and message boards, but you can even find directories of things like lossless audio files, video game hacks, financial services such as anonymous bitcoin, and even a Tor version of Facebook. Many onion sites are very slow or completely down; keep in mind that they're not run by deep-pocketed Web companies. Very often we clicked an onion link only to be greeted with an "Unable to Connect" error. Sinbad helpfully displays a red "Offline on last crawl" bullet to let you know that a site is probably nonfunctional.
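
As an aside on the address format: classic (version 2) hidden-service names are 16 base32 characters, the letters a through z plus the digits 2 through 7, before the .onion suffix. The rough Python check below is illustrative only, not an official validator, and the sample address is hypothetical.

    # Rough check for classic 16-character (v2) .onion hostnames; illustrative only.
    import re

    V2_ONION = re.compile(r"^[a-z2-7]{16}\.onion$")  # base32 alphabet: a-z and digits 2-7

    def looks_like_v2_onion(host: str) -> bool:
        return bool(V2_ONION.match(host.strip().lower()))

    print(looks_like_v2_onion("abcdefghij234567.onion"))  # True: format matches (hypothetical address)
    print(looks_like_v2_onion("example.com"))             # False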

As for browser benchmarks, the results hew to Firefox's own performance, with near-leading scores on all the major JavaScript tests, JetStream and Octane, for example. On our test laptop, the Tor Browser scored 20,195 on Octane, compared with 22,297 for standard Firefox, not a huge difference. The Tor network routing is a far more significant factor in browsing performance than browser JavaScript speed. That is, unless you've blocked all JavaScript.
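
For context, a quick calculation on the two Octane scores quoted above shows the JavaScript gap is under ten percent, which supports the point that the network routing, not the engine, dominates perceived speed.

    # Relative JavaScript slowdown implied by the Octane scores quoted above.
    tor_score, firefox_score = 20_195, 22_297
    slowdown = (firefox_score - tor_score) / firefox_score
    print(f"Tor Browser trails standard Firefox by about {slowdown:.1%} on Octane")  # ~9.4%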

Keep in mind, though, that the Tor Browser is based on Firefox Extended Support Release versions, which update less frequently so that large organizations have time to maintain their custom code. That means you don't get quite the latest in Firefox performance and features, but security updates are delivered at the same time as new main versions.

There's a similar story when it comes to standards compatibility: on HTML5Test.com, which quantifies the number of new Web standards a browser supports, the Tor Browser gets a score of 412, compared with 468 for the latest Firefox version. You may run into incompatible sites as a result; for example, none of the Internet connection speed-test sites performed correctly in the Tor Browser.
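
The same kind of quick arithmetic applies to the HTML5Test scores just quoted: the gap is real but not dramatic.

    # Standards-coverage gap implied by the HTML5Test scores quoted above.
    tor_html5, firefox_html5 = 412, 468
    print(f"Tor Browser scores about {tor_html5 / firefox_html5:.0%} of what the latest Firefox scores")  # ~88%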

Of course, you pay a price in extra setup and slower performance with the Tor Browser, but it's less onerous than you may think. And the included support for fine-grained privacy and security protection is excellent. If you take your online privacy seriously, you owe it to yourself to check out the Tor Browser. For standard, full-speed Web browsing, however, check out PCMag Editors' Choice Web browser, Firefox.

Go here to read the rest:
The Tor Browser: Tor Browser – au.pcmag.com

NATO – U-S-History.com

 NATO  Comments Off on NATO – U-S-History.com
Oct 28 2015
 

NATO is based on the North Atlantic Treaty, which provides the organization a framework. The treaty provides that an armed attack against one or more of NATO's member nations shall be considered an attack against them all. NATO is headquartered in Brussels, Belgium. The organization was formed in 1949. Many nations have joined NATO, even Iceland, the only member without a military force.

The organization was originally formed out of the fear that the Soviet Union would ally militarily with Eastern European nations, i.e. the Warsaw Pact, and thus become a threat to Western Europe and the United States. In short, the alliance is an association of free states united in their determination to preserve their security through mutual guarantees and stable relations with other countries.

From 1945 to 1949, Europe faced the crucial need for economic reconstruction. Western European countries and their North American allies viewed with apprehension the expansionist policies and methods of the U.S.S.R. Having fulfilled their own wartime commitments, and desiring to reduce their defense establishments and demobilize forces, Western governments became increasingly alarmed as it became clear that the Soviet leadership intended to maintain its own military forces at full strength.

Furthermore, in view of the Soviet Communist Party`s avowed ideology, it was evident that appeals to the United Nations Charter, and international settlements reached at the end of the war, would not assure democratic states their autonomy. The rise of nondemocratic governments in many central and eastern European countries, and the resultant repression of opposition parties and basic human rights, raised more alarm in the West.

Between 1947 and 1949, a series of extraordinary political events brought matters to a head. They included direct threats to the sovereignty of Norway, Greece, Turkey, and other countries; the February 1948 coup in Czechoslovakia; and the blockade of Berlin that began in June of the same year. The signing of the Brussels Treaty in March 1948 marked the commitment of five Western European countries (Belgium, France, Luxembourg, the Netherlands, and the United Kingdom) to develop a common defense system and strengthen the ties among them in a manner that would enable them to resist ideological, political, and military threats to their security. Later, Denmark, Iceland, Italy, Norway, and Portugal were invited by the Brussels Treaty powers to become participants in that process.

Then followed negotiations with the United States and Canada on the creation of a single North Atlantic alliance based on security guarantees and mutual commitments between Europe and North America. The alliance would become the transatlantic link by which the security of North America was permanently tied to the security of Europe.

Negotiations culminated in the signing of the treaty in April 1949, entered into freely by each country following public debate and due parliamentary process. The treaty, the legal and contractual basis for the alliance, was established within the framework of Article 51 of the United Nations Charter, which reaffirms the inherent right of independent states to individual or collective defense. The treaty requires each of them not to enter into any other international commitment that might conflict with its provisions. The preamble to the treaty states that the aim of the allies is to promote peaceful and friendly relations in the North Atlantic area.

However, at the time of the treaty's signing, the immediate purpose of NATO was to defend its members against a potential threat resulting from the policies and growing military capacity of the Soviet Union. The treaty created a common security system based on a partnership among the 12 countries. Others joined later.

The means by which the alliance carries out its security policies include the maintenance of a sufficient military capability to prevent war and to provide for effective defense; an overall capability to manage crises affecting the security of its members; and active promotion of dialogue with other nations. The alliance also performs a number of fundamental security tasks.

A continent evolves

NATO has worked since its inception for the establishment of a just and lasting peaceful order in Europe based on common values of democracy, human rights, and the rule of law. That central alliance objective has taken on renewed significance since the end of the Cold War because, for the first time in the post-World War II history of Europe, the prospect of its achievement has become a reality, as embodied by the European Union.

From time to time, the alliance met at the summit level, with heads of state and government participating. Their direct participation in the process of taking decisions by consensus raised the public profile of such meetings and bestowed on them increased historical significance.

By 1991, the major transformation of international security at the end of the 1980s was dictating the shape of the new NATO that would emerge over the next few years. The first of a series of four summit meetings that would plot the course of the alliance's adaptation to the coming decade took place in Rome in November 1991. It would be followed by a summit meeting in Brussels in January 1994 and two further meetings, in Madrid in July 1997 and in Washington in April 1999.

Epilogue

The world has seen many changes since the inception of NATO. NATO peacekeeping forces maintain vigilance at hot spots around the world. Kosovo, Afghanistan and Somalia all enjoy a NATO presence. NATO announced on June 9, 2005, that it would help the African Union (AU) expand its peacekeeping mission in Darfur, Sudan, by airlifting additional AU peacekeepers into the region and assisting with training.

The following is from a speech by former NATO Secretary General Lord Robertson on November 12, 2003. The occasion was hosted by the George C. Marshall Foundation, the Center for Transatlantic Relations at Johns Hopkins School of Advanced International Studies, and the Royal Norwegian Embassy:

Another excerpt from the same speech:

The following is an illustration of how the world has changed. General Ray Henault of the Canadian Air Force accepted the chairmanship of NATO's Military Committee on June 16, 2005, from his predecessor, General Harald Kujat of the German Air Force. The Military Committee is the highest military decision-making authority in NATO, assisting and advising the North Atlantic Council. The Chairman of the Military Committee is selected by the Chiefs of Defense and appointed for a three-year term of office.

See the article here:
NATO – U-S-History.com


North Atlantic Treaty Organization (NATO), 1949 – 1945–1952 …

 NATO  Comments Off on North Atlantic Treaty Organization (NATO), 1949 – 1945–1952 …
Oct 23 2015
 

North Atlantic Treaty Organization (NATO), 1949

The North Atlantic Treaty Organization was created in 1949 by the United States, Canada, and several Western European nations to provide collective security against the Soviet Union.

Signing of the NATO Treaty

NATO was the first peacetime military alliance the United States entered into outside of the Western Hemisphere. After the destruction of the Second World War, the nations of Europe struggled to rebuild their economies and ensure their security. The former required a massive influx of aid to help the war-torn landscapes re-establish industries and produce food, and the latter required assurances against a resurgent Germany or incursions from the Soviet Union. The United States viewed an economically strong, rearmed, and integrated Europe as vital to the prevention of communist expansion across the continent. As a result, Secretary of State George Marshall proposed a program of large-scale economic aid to Europe. The resulting European Recovery Program, or Marshall Plan, not only facilitated European economic integration but promoted the idea of shared interests and cooperation between the United States and Europe. Soviet refusal either to participate in the Marshall Plan or to allow its satellite states in Eastern Europe to accept the economic assistance helped to reinforce the growing division between east and west in Europe.

In 1947–1948, a series of events caused the nations of Western Europe to become concerned about their physical and political security and the United States to become more closely involved with European affairs. The ongoing civil war in Greece, along with tensions in Turkey, led President Harry S. Truman to assert that the United States would provide economic and military aid to both countries, as well as to any other nation struggling against an attempt at subjugation. A Soviet-sponsored coup in Czechoslovakia resulted in a communist government coming to power on the borders of Germany. Attention also focused on elections in Italy as the communist party had made significant gains among Italian voters. Furthermore, events in Germany also caused concern. The occupation and governance of Germany after the war had long been disputed, and in mid-1948, Soviet premier Joseph Stalin chose to test Western resolve by implementing a blockade against West Berlin, which was then under joint U.S., British, and French control but surrounded by Soviet-controlled East Germany. This Berlin Crisis brought the United States and the Soviet Union to the brink of conflict, although a massive airlift to resupply the city for the duration of the blockade helped to prevent an outright confrontation. These events caused U.S. officials to grow increasingly wary of the possibility that the countries of Western Europe might deal with their security concerns by negotiating with the Soviets. To counter this possible turn of events, the Truman Administration considered the possibility of forming a European-American alliance that would commit the United States to bolstering the security of Western Europe.

Signing of the Brussels Treaty

The Western European countries were willing to consider a collective security solution. In response to increasing tensions and security concerns, representatives of several countries of Western Europe gathered together to create a military alliance. Great Britain, France, Belgium, the Netherlands, and Luxembourg signed the Brussels Treaty in March 1948. Their treaty provided collective defense; if any one of these nations was attacked, the others were bound to help defend it. At the same time, the Truman Administration instituted a peacetime draft, increased military spending, and called upon the historically isolationist Republican Congress to consider a military alliance with Europe. In May of 1948, Republican Senator Arthur H. Vandenberg proposed a resolution suggesting that the President seek a security treaty with Western Europe that would adhere to the United Nations Charter but exist outside of the Security Council, where the Soviet Union held veto power. The Vandenberg Resolution passed, and negotiations began for the North Atlantic Treaty.

In spite of general agreement on the concept behind the treaty, it took several months to work out the exact terms. The U.S. Congress had embraced the pursuit of the international alliance, but it remained concerned about the wording of the treaty. The nations of Western Europe wanted assurances that the United States would intervene automatically in the event of an attack, but under the U.S. Constitution the power to declare war rested with Congress. Negotiations worked toward finding language that would reassure the European states but not obligate the United States to act in a way that violated its own laws. Additionally, European contributions to collective security would require large-scale military assistance from the United States to help rebuild Western Europe's defense capabilities. While the European nations argued for individual grants and aid, the United States wanted to make aid conditional on regional coordination. A third issue was the question of scope. The Brussels Treaty signatories preferred that membership in the alliance be restricted to the members of that treaty plus the United States. The U.S. negotiators felt there was more to be gained from enlarging the new treaty to include the countries of the North Atlantic, including Canada, Iceland, Denmark, Norway, Ireland, and Portugal. Together, these countries held territory that formed a bridge between the opposite shores of the Atlantic Ocean, which would facilitate military action if it became necessary.

President Truman inspecting a tank produced under the Mutual Defense Assistance Program

The result of these extensive negotiations was the signing of the North Atlantic Treaty in 1949. In this agreement, the United States, Canada, Belgium, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, and the United Kingdom agreed to consider an attack against one an attack against all, along with consultations about threats and defense matters. This collective defense arrangement only formally applied to attacks against the signatories that occurred in Europe or North America; it did not include conflicts in colonial territories. After the treaty was signed, a number of the signatories made requests to the United States for military aid. Later in 1949, President Truman proposed a military assistance program, and the Mutual Defense Assistance Program passed the U.S. Congress in October, appropriating some $1.4 billion for the purpose of building Western European defenses.

Soon after the creation of the North Atlantic Treaty Organization, the outbreak of the Korean War led the members to move quickly to integrate and coordinate their defense forces through a centralized headquarters. The North Korean attack on South Korea was widely viewed at the time to be an example of communist aggression directed by Moscow, so the United States bolstered its troop commitments to Europe to provide assurances against Soviet aggression on the European continent. The members agreed to admit Greece and Turkey in 1952 and added the Federal Republic of Germany in 1955. West German entry led the Soviet Union to retaliate with its own regional alliance, which took the form of the Warsaw Treaty Organization and included the Soviet satellite states of Eastern Europe as members.

The collective defense arrangements in NATO served to place the whole of Western Europe under the American nuclear umbrella. In the 1950s, one of the first military doctrines of NATO emerged in the form of massive retaliation, or the idea that if any member was attacked, the United States would respond with a large-scale nuclear attack. The threat of this form of response was meant to serve as a deterrent against Soviet aggression on the continent. Although formed in response to the exigencies of the developing Cold War, NATO has lasted beyond the end of that conflict, with membership even expanding to include some former Soviet states. It remains the largest peacetime military alliance in the world.

Read more here:
North Atlantic Treaty Organization (NATO), 1949 – 1945–1952 …


Top 12 tax havens for US companies – RT Business

 Tax Havens  Comments Off on Top 12 tax havens for US companies – RT Business
Oct 23 2015
 

US corporations are making record profits in tax havens like Bermuda, the Cayman Islands, and the British Virgin Islands (BVI). Some of the profits exceed the GDP of the host country, with Bermuda's offshore profits at 1,643% of total economic output.

As a share of Gross Domestic Product (GDP), profits from subsidiaries of US companies operating in the Netherlands are more than 100 percent of the country's annual economic output, according to a new study by Citizens for Tax Justice, published Tuesday.

In Bermuda, US companies reported $94 billion in profit, but the island's GDP is only $6 billion. The report draws on data collected by the US Internal Revenue Service from subsidiaries reporting profits outside of the US in 2010.
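
Using only the rounded figures quoted here, the mismatch is easy to reproduce; the study's headline figure of 1,643 percent presumably comes from unrounded data, but even these rounded numbers put reported profits at well over fifteen times Bermuda's GDP.

    # Reported subsidiary profits versus host-country GDP, from the rounded figures above.
    bermuda_profit_bn = 94   # US-company profits reported in Bermuda, in billions of dollars
    bermuda_gdp_bn = 6       # Bermuda's GDP, in billions of dollars
    ratio = bermuda_profit_bn / bermuda_gdp_bn
    print(f"Reported profits are roughly {ratio:.0%} of Bermuda's GDP")  # about 1567%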

"Clearly, American corporations are using various tax gimmicks to shift profits actually earned in the US and other countries where they actually do business into their subsidiaries in these tiny countries," the report says.

US companies filed the largest profits in the Netherlands, Bermuda, Ireland, Luxembourg, the Cayman Islands, Switzerland, Singapore, the Bahamas, the British Virgin Islands, Cyprus, the Netherlands Antilles, and Barbados. But none of these profits are factored into the GDP of the host countries.

When filing US income taxes, a foreign corporation is treated as US-controlled if its US shareholders control more than 50 percent of the outstanding voting stock.
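
As a concrete reading of that threshold, the sketch below flags a foreign subsidiary as US-controlled when US shareholders hold more than half of its voting stock; it is an illustration of the rule as summarized here, not the statutory test.

    # Illustration of the 50 percent voting-stock test described above.
    def is_us_controlled(us_voting_shares: int, total_voting_shares: int) -> bool:
        """True when US shareholders hold more than half of the voting stock."""
        return us_voting_shares / total_voting_shares > 0.5

    print(is_us_controlled(600, 1_000))  # True: 60 percent of the vote
    print(is_us_controlled(500, 1_000))  # False: exactly half does not exceed the threshold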

Offshore wealth, money that is kept abroad for tax purposes, is a popular tactic for American companies seeking to avoid paying high taxes in the US. Google, Apple, and other hi-tech companies have all been accused of sheltering money abroad and not contributing enough to the tax system of the US, which is their main market.

Many US companies use a loophole tied to repatriation in order to delay paying the US government taxes. Under US tax law, companies with offshore subsidiaries can defer paying taxes on foreign profits until that money is repatriated, or returned to the US. This rule encourages US companies to report profits outside of the US, where they are safe from high taxes.

Other countries can offer very attractive corporate tax rates compared to the required 40 percent in America. Bermuda, the Cayman Islands, and the Bahamas for example, have a rate of 0 percent.

Ireland has a corporate tax rate of 12.5 percent, Switzerland 17.92 percent, and Luxembourg a local rate of 29.22 percent, according to data from KPMG Global.
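
Putting the deferral mechanism and these rates together, here is a rough, deliberately simplified illustration (ignoring foreign-tax-credit details, state taxes, and everything else real tax law involves) of how much US tax a company could postpone by reporting $1 billion of profit through an Irish or Bermudian subsidiary rather than at home; the dollar figure is hypothetical.

    # Simplified sketch of the deferral incentive, using the rates quoted above.
    US_RATE, IRELAND_RATE, BERMUDA_RATE = 0.40, 0.125, 0.0
    profit = 1_000_000_000   # hypothetical $1 billion of profit booked offshore

    def residual_us_tax(foreign_rate: float) -> float:
        """US tax that could be deferred until repatriation, after the foreign tax already paid."""
        return profit * US_RATE - profit * foreign_rate

    print(f"Booked in Ireland: ${residual_us_tax(IRELAND_RATE):,.0f} deferred")  # $275,000,000
    print(f"Booked in Bermuda: ${residual_us_tax(BERMUDA_RATE):,.0f} deferred")  # $400,000,000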

The only country where companies pay more taxes than in America is the United Arab Emirates, which has a 55 percent corporate tax rate.

Countries, or tax havens, can provide opportunities for investors by lowering their corporate tax rates as well as income tax rates.

Low income tax rates can make investment more competitive and the business climate more attractive for some investors looking for loopholes. An estimate by Boston Consulting Group pegs offshore wealth at $8.5 trillion. Other independent estimates peg it as high as $20 trillion.

With the G20 and OECD countries focused on curbing tax evasion and avoidance, several Caribbean countries (Bermuda, Barbados, and the Cayman Islands) would be subject to a tightening tax noose. These countries could face a deceleration in economic activity if international tax structures are dismantled.

Continue reading here:
Top 12 tax havens for US companies – RT Business


What Is NATO? Purpose, History, Members and Alliances

 NATO  Comments Off on What Is NATO? Purpose, History, Members and Alliances
Sep 03 2015
 

U.S. Infantry Troops Arrive In Poland For NATO Exercises. Photo: Sean Gallup/Getty Images

NATO stands for the North Atlantic Treaty Organization. It’s an alliance of 28 member countries roughly bordering the North Atlantic Ocean: Canada, U.S., Turkey and most members of the European Union. NATO’s purpose is to protect the freedom of its members. As famously defined in Article 5, “…an armed attack upon one…shall be considered an attack upon them all.”

In recent years, NATO’s purpose has expanded to include defense against weapons of mass destruction, terrorism, and cyber attacks.

Since its inception following World War II, NATO has had to continually redefine its focus as a military and political alliance to keep up with the changing face of war.

What Is the Purpose of NATO Today?:

NATO protects the security of its members. However, it must also take into consideration aggression against non-members that threatens the stability of the region. That's why its September 2014 summit focused on President Putin's goal to create a "Little Russia" out of Ukraine's eastern region. Although Ukraine is not a NATO member, other former USSR countries are, and they're worried. President Obama vowed to defend countries such as Latvia, Lithuania, and Estonia. The U.S. contributes three-quarters of NATO's budget. (Source: WSJ, U.S. Vows NATO Defense of Baltics, Sep. 4, 2014)

On August 28, 2014, NATO announced it had photos proving that Russia was invading Ukraine. Although Ukraine is not a NATO member, it has been working closely with NATO over the years. Russia's invasion of Ukraine threatens NATO members who are afraid they will be next, because they too were former U.S.S.R. satellite countries.

NATO expanded its role after the 9/11 attacks to include the war on terrorism. NATO is winding down its mission in Afghanistan, which deployed 84,000 troops at its peak from both NATO-member countries and at least a dozen non-members. By the end of 2014, NATO expected to transition all security responsibilities to the Afghan military.

NATO itself admits that “Peacekeeping has become at least as difficult as peacemaking.” As a result, NATO is strengthening alliances throughout the world. In the age of globalization, transatlantic peace has become a worldwide effort that extends beyond military might alone. (Source: NATO History)

What Is the History of NATO?:

NATO was established after World War II within the framework of the United Nations Charter. Its primary purpose was to defend member nations against the large number of troops in pro-communist countries. The U.S. also wanted to maintain a presence in Europe, to prevent a resurgence of military nationalism and to foster political union. In this way, NATO made the European Union possible.

NATO and the Cold War:

During the Cold War, NATO's mission expanded to prevent nuclear war. After West Germany joined NATO, the communist countries formed the Warsaw Pact alliance, including the USSR, Bulgaria, Hungary, Romania, Poland, Czechoslovakia, and East Germany. In response, NATO adopted the "Massive Retaliation" policy, which promised to use nuclear weapons if the Pact attacked. This deterrence policy allowed Europe to focus on economic development instead of building large conventional armies.

The Soviet Union, on the other hand, continued to build its military presence. By the end of the Cold War, it was spending three times what the U.S. was spending, with only one-third the economic power. When the Berlin Wall fell in 1989, it was due to economic as well as ideological reasons. After the USSR dissolved in 1991, NATO's relationship with Russia thawed. In 1997, the NATO-Russia Founding Act was signed to build bilateral cooperation. In 2002, the NATO-Russia Council was formed to allow NATO members and Russia to partner on common security issues.

The collapse of the USSR led to unrest in its former satellite states. NATO expanded its focus to address this instability when a civil war in the former Yugoslavia turned into ethnic cleansing and genocide. NATO's initial support of a United Nations naval embargo led to the enforcement of a no-fly zone. Violations then led to a few airstrikes until September 1995, when NATO conducted a heavy nine-day air campaign that ended the war. By December of that year, NATO had deployed a peacekeeping force of 60,000 soldiers, a mission that ended in 2004, when NATO transferred this function to the European Union.

NATO Member Countries:

NATO's 28 members are: Albania, Belgium, Bulgaria, Canada, Croatia, the Czech Republic, Denmark, Estonia, France, Germany, Greece, Hungary, Iceland, Italy, Latvia, Lithuania, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Turkey, the United Kingdom, and the United States. Each member is represented by an ambassador, who is supported by officials who serve on the different NATO committees. From time to time, each country's President or Prime Minister, Foreign Affairs Minister, or head of Defense will meet to discuss NATO business.

NATO Alliances:

NATO is involved with three alliances that expand its influence beyond its 28 member countries.

In addition, NATO cooperates with eight other countries on joint security issues. These countries include five in Asia (Australia, Japan, the Republic of Korea, Mongolia, and New Zealand) and two in the Middle East (Afghanistan and Pakistan). (Source: NATO, Partnerships) Article updated August 28, 2014.

Continued here:
What Is NATO? Purpose, History, Members and Alliances




Pierre Teilhard De Chardin | Designer Children | Prometheism | Euvolution