
Reasons Against Cloning – VIDEOS & ARTICLES

Jun 17, 2016

Written by Patrick Dixon

Futurist Keynote Speaker: Posts, Slides, Videos – What is Human Cloning? How to Clone – But Is It Ethical?

Human cloning: who is cloning humans and arguments against cloning (2007)

How human clones are being made – for medical research. Arguments for and against human cloning research. Why some people want to clone themselves or even to clone the dead (and not just cloning pets).

Why investors are moving away from human cloning and why human cloning now looks a last-century way to fight disease (2007)

Should we ban human cloning? Arguments against cloning

An abnormal baby would be a nightmare come true. The technique is extremely risky right now. A particular worry is the possibility that the genetic material used from the adult will continue to age so that the genes in a newborn baby clone could be – say – 30 years old or more on the day of birth. Many attempts at animal cloning produced disfigured monsters with severe abnormalities. So that would mean creating cloned embryos, implanting them and destroying (presumably) those that look imperfect as they grow in the womb. However some abnormalities may not appear till after birth. A cloned cow recently died several weeks after birth with a huge abnormality of blood cell production. Dolly the Sheep died prematurely of severe lung disease in February 2003, and also suffered from arthritis at an unexpectedly early age – probably linked to the cloning process.

Even if a few cloned babies are born apparently normal, we will have to wait up to 20 years to be sure they are not going to have problems later – for example, growing old too fast. Every time a clone is made it is like throwing the dice, and even a string of “healthy” clones being born would not change the likelihood that many clones born in future may have severe medical problems. And of course, that’s just the ones born. What about all the disfigured and highly abnormal clones that either spontaneously aborted or were destroyed / terminated by scientists worried about the horrors they might be creating?

A child grows up knowing her mother is her sister, her grandmother is her mother. Her father is her brother-in-law. Every time her mother looks at her, she is seeing herself growing up. There would be unbearable emotional pressures on a teenager trying to establish his or her identity. What happens to a marriage when the “father” sees his wife’s clone grow up into the exact replica (by appearance) of the beautiful 18-year-old he fell in love with 35 years ago? A sexual relationship would of course be with his wife’s twin – technically, no incest would be involved.

Or maybe the child knows it is the twin of a dead brother or sister. What kind of pressures will he or she feel, knowing they were made as a direct replacement for another? It is a human experiment doomed to failure, because the child will NOT be identical in every way, despite the hopes of the parents. One huge reason is that the child will be brought up in a highly abnormal household: one where grief has been diverted into making a clone instead of adjusting to loss. The family environment will be totally different from the one the other twin experienced. That in itself will place great pressures on the emotional development of the child. You will not find a child psychiatrist in the world who could possibly say that there will not be very significant emotional risk to the cloned child as a result of these pressures.

What would Hitler have done with cloning technology if available in the 1940s? There are powerful leaders in every generation who will seek to abuse this technology for their own purposes. Going ahead with cloning technology makes this far more likely. You cannot have so-called therapeutic cloning without reproductive cloning because the technique to make cloned babies is the same as to make a cloned embryo to try to make replacement tissues. And at the speed at which biotech is accelerating there will soon be other ways to get such cells – adult stem cell technology. It is rather crude to create a complete embryonic identical twin embryo just to get hold of stem cells to make – say – nervous tissue. Much better to take cells from the adult and trigger them directly to regress to a more primitive form without the ethical issues raised by inserting a full adult set of genes into an unfertilised egg.

Related news items:

Older news items:


Robotics – Wikipedia, the free encyclopedia

Jun 17, 2016

Robotics is the branch of mechanical engineering, electrical engineering and computer science that deals with the design, construction, operation, and application of robots,[1] as well as computer systems for their control, sensory feedback, and information processing.

These technologies deal with automated machines (robots, for short) that can take the place of humans in dangerous environments or manufacturing processes, or that resemble humans in appearance, behaviour, or cognition. Many of today’s robots are inspired by nature, contributing to the field of bio-inspired robotics.

The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century.[2] Throughout history, it has been frequently assumed that robots will one day be able to mimic human behavior and manage tasks in a human-like fashion. Today, robotics is a rapidly growing field, as technological advances continue; researching, designing, and building new robots serve various practical purposes, whether domestically, commercially, or militarily. Many robots are built to do jobs that are hazardous to people such as defusing bombs, finding survivors in unstable ruins, and exploring mines and shipwrecks. Robotics is also used in STEM (Science, Technology, Engineering, and Mathematics) as a teaching aid.

The word robotics was derived from the word robot, which was introduced to the public by Czech writer Karel Čapek in his play R.U.R. (Rossum’s Universal Robots), published in 1920.[3] The word robot comes from the Slavic word robota, which means labour. The play begins in a factory that makes artificial people called robots, creatures who can be mistaken for humans – very similar to the modern idea of androids. Karel Čapek himself did not coin the word. He wrote a short letter in reference to an etymology in the Oxford English Dictionary in which he named his brother Josef Čapek as its actual originator.[3]

According to the Oxford English Dictionary, the word robotics was first used in print by Isaac Asimov, in his science fiction short story “Liar!”, published in May 1941 in Astounding Science Fiction. Asimov was unaware that he was coining the term; since the science and technology of electrical devices is electronics, he assumed robotics already referred to the science and technology of robots. In some of Asimov’s other works, he states that the first use of the word robotics was in his short story Runaround (Astounding Science Fiction, March 1942).[4][5] However, the original publication of “Liar!” predates that of “Runaround” by ten months, so the former is generally cited as the word’s origin.

In 1942 the science fiction writer Isaac Asimov created his Three Laws of Robotics.

In 1948 Norbert Wiener formulated the principles of cybernetics, the basis of practical robotics.

Fully autonomous robots only appeared in the second half of the 20th century. The first digitally operated and programmable robot, the Unimate, was installed in 1961 to lift hot pieces of metal from a die casting machine and stack them. Commercial and industrial robots are widespread today, used to perform jobs more cheaply, accurately, and reliably than humans. They are also employed in some jobs which are too dirty, dangerous, or dull to be suitable for humans. Robots are widely used in manufacturing, assembly, packing and packaging, transport, earth and space exploration, surgery, weaponry, laboratory research, safety, and the mass production of consumer and industrial goods.[6]

There are many types of robots, used in many different environments and for many different purposes. Although very diverse in application and form, they all share three basic similarities when it comes to their construction:

As more and more robots are designed for specific tasks this method of classification becomes more relevant. For example, many robots are designed for assembly work, which may not be readily adaptable for other applications. They are termed as “assembly robots”. For seam welding, some suppliers provide complete welding systems with the robot i.e. the welding equipment along with other material handling facilities like turntables etc. as an integrated unit. Such an integrated robotic system is called a “welding robot” even though its discrete manipulator unit could be adapted to a variety of tasks. Some robots are specifically designed for heavy load manipulation, and are labelled as “heavy duty robots.”

Current and potential applications include:

At present, mostly lead-acid batteries are used as a power source. Many different types of batteries can be used as a power source for robots, ranging from lead-acid batteries, which are safe and have relatively long shelf lives but are rather heavy, to silver-cadmium batteries, which are much smaller in volume but currently much more expensive. Designing a battery-powered robot needs to take into account factors such as safety, cycle lifetime, and weight. Generators, often some type of internal combustion engine, can also be used. However, such designs are often mechanically complex, need fuel, require heat dissipation, and are relatively heavy. A tether connecting the robot to a power supply would remove the power supply from the robot entirely. This has the advantage of saving weight and space by moving all power generation and storage components elsewhere. However, this design does come with the drawback of constantly having a cable connected to the robot, which can be difficult to manage.[20] Potential power sources could be:
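The trade-offs described above (pack energy versus weight, usable capacity, cycle life) can be illustrated with a rough runtime estimate. The pack size, load, and usable-capacity figures below are hypothetical, chosen only for illustration:

```python
# Rough battery runtime estimate for a mobile robot.
# All figures are illustrative assumptions, not measured values.

def runtime_hours(capacity_wh: float, avg_load_w: float,
                  usable_fraction: float = 0.8) -> float:
    """Estimated runtime from pack energy, average electrical load,
    and the fraction of capacity usable without damaging the cells."""
    return capacity_wh * usable_fraction / avg_load_w

# Hypothetical 12 V, 7 Ah lead-acid pack (84 Wh) driving a 30 W robot:
print(round(runtime_hours(84.0, 30.0), 2))  # about 2.24 h
```

The same arithmetic makes the weight penalty concrete: a heavier chemistry with lower energy density forces a larger pack, which in turn raises the average load needed to move it.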

Actuators are the “muscles” of a robot, the parts which convert stored energy into movement. By far the most popular actuators are electric motors that rotate a wheel or gear, and linear actuators that control industrial robots in factories. There are some recent advances in alternative types of actuators, powered by electricity, chemicals, or compressed air.

The vast majority of robots use electric motors, often brushed and brushless DC motors in portable robots or AC motors in industrial robots and CNC machines. These motors are often preferred in systems with lighter loads, and where the predominant form of motion is rotational.

Various types of linear actuators move in and out instead of spinning, and often have quicker direction changes, particularly when very large forces are needed, such as in industrial robotics. They are typically powered by compressed air (pneumatic actuators) or oil (hydraulic actuators).

A spring can be designed as part of the motor actuator, to allow improved force control. It has been used in various robots, particularly walking humanoid robots.[21]

Pneumatic artificial muscles, also known as air muscles, are special tubes that expand (typically up to 40%) when air is forced inside them. They are used in some robot applications.[22][23][24]

Muscle wire, also known as shape memory alloy, Nitinol or Flexinol wire, is a material which contracts (by under 5%) when electricity is applied. It has been used for some small robot applications.[25][26]

EAPs or EPAMs are a relatively new plastic material that can contract substantially (up to 380% activation strain) under electricity. They have been used in the facial muscles and arms of humanoid robots,[27] and to enable new robots to float,[28] fly, swim or walk.[29]

Recent alternatives to DC motors are piezo motors or ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic elements, vibrating many thousands of times per second, cause linear or rotary motion. There are different mechanisms of operation; one type uses the vibration of the piezo elements to step the motor in a circle or a straight line.[30] Another type uses the piezo elements to cause a nut to vibrate or to drive a screw. The advantages of these motors are nanometer resolution, speed, and available force for their size.[31] These motors are already available commercially, and being used on some robots.[32][33]

Elastic nanotubes are a promising artificial muscle technology in early-stage experimental development. The absence of defects in carbon nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm³ for metal nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such compact “muscle” might allow future robots to outrun and outjump humans.[34]

Sensors allow robots to receive information about a certain measurement of the environment, or internal components. This is essential for robots to perform their tasks, and act upon any changes in the environment to calculate the appropriate response. They are used for various forms of measurements, to give the robots warnings about safety or malfunctions, and to provide real time information of the task it is performing.

Current robotic and prosthetic hands receive far less tactile information than the human hand. Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips.[35][36] The sensor array is constructed as a rigid core surrounded by conductive fluid contained by an elastomeric skin. Electrodes are mounted on the surface of the rigid core and are connected to an impedance-measuring device within the core. When the artificial skin touches an object the fluid path around the electrodes is deformed, producing impedance changes that map the forces received from the object. The researchers expect that an important function of such artificial fingertips will be adjusting robotic grip on held objects.

Scientists from several European countries and Israel developed a prosthetic hand in 2009, called SmartHand, which functions like a real one, allowing patients to write with it, type on a keyboard, play piano and perform other fine movements. The prosthesis has sensors which enable the patient to sense real feeling in its fingertips.[37]

Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences and views from cameras.

In most practical computer vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common.

Computer vision systems rely on image sensors which detect electromagnetic radiation which is typically in the form of either visible light or infra-red light. The sensors are designed using solid-state physics. The process by which light propagates and reflects off surfaces is explained using optics. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Robots can also be equipped with multiple vision sensors to be better able to compute the sense of depth in the environment. Like human eyes, robots’ “eyes” must also be able to focus on a particular area of interest, and also adjust to variations in light intensities.

There is a subfield within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have their background in biology.

Other common forms of sensing in robotics use lidar, radar and sonar.[citation needed]

Robots need to manipulate objects: to pick up, modify, destroy, or otherwise have an effect on them. Thus the “hands” of a robot are often referred to as end effectors,[38] while the “arm” is referred to as a manipulator.[39] Most robot arms have replaceable effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator which cannot be replaced, while a few have one very general-purpose manipulator, for example a humanoid hand.[40] Learning to manipulate a robot often requires close feedback between the human and the robot, although there are several methods for remote manipulation of robots.[41]

One of the most common effectors is the gripper. In its simplest manifestation it consists of just two fingers which can open and close to pick up and let go of a range of small objects. Fingers can for example be made of a chain with a metal wire run through it.[42] Hands that resemble and work more like a human hand include the Shadow Hand and the Robonaut hand.[43] Hands that are of a mid-level complexity include the Delft hand.[44][45] Mechanical grippers can come in various types, including friction and encompassing jaws. Friction jaws use all the force of the gripper to hold the object in place using friction. Encompassing jaws cradle the object in place, using less friction.

Vacuum grippers are very simple astrictive[46] devices, but can hold very large loads provided the prehension surface is smooth enough to ensure suction.

Pick and place robots for electronic components and for large objects like car windscreens, often use very simple vacuum grippers.

Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand, MANUS,[47] and the Schunk hand.[48] These are highly dexterous manipulators, with as many as 20 degrees of freedom and hundreds of tactile sensors.[49]

For simplicity most mobile robots have four wheels or a number of continuous tracks. Some researchers have tried to create more complex wheeled robots with only one or two wheels. These can have certain advantages such as greater efficiency and reduced parts, as well as allowing a robot to navigate in confined places that a four-wheeled robot would not be able to.

Balancing robots generally use a gyroscope to detect how much the robot is falling and then drive the wheels proportionally in the same direction, to counterbalance the fall hundreds of times per second, based on the dynamics of an inverted pendulum.[50] Many different balancing robots have been designed.[51] While the Segway is not commonly thought of as a robot, it can be used as a component of one; when used in this way, Segway refers to the platform as an RMP (Robotic Mobility Platform). An example of this use is NASA’s Robonaut, which has been mounted on a Segway.[52]
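The balancing loop described above can be sketched as a toy simulation: estimate the tilt, command a correction proportional to the tilt and its rate, and repeat many times per second. The pendulum model and PD gains below are illustrative assumptions, not taken from any real robot:

```python
import math

# Toy inverted-pendulum balancer: a PD controller drives wheel
# acceleration against the measured tilt, at a 1 kHz loop rate.
# Model and gains are hypothetical, for illustration only.

G, L = 9.81, 0.5          # gravity (m/s^2), effective pendulum length (m)
KP, KD = 60.0, 12.0       # hand-tuned proportional/derivative gains
DT = 0.001                # control period: 1000 updates per second

def simulate(theta0: float, steps: int = 2000) -> float:
    """Return the final tilt angle (rad) after running the loop."""
    theta, omega = theta0, 0.0               # tilt angle and angular rate
    for _ in range(steps):
        u = KP * theta + KD * omega          # control: oppose tilt and its rate
        alpha = (G / L) * math.sin(theta) - u  # net angular acceleration
        omega += alpha * DT                  # simple Euler integration
        theta += omega * DT
    return theta

# Starting 0.1 rad (~6 degrees) off vertical, the controller
# drives the tilt back toward zero within the 2-second run.
print(abs(simulate(0.1)) < 0.01)  # True
```

The key point is the rate: each individual correction is tiny, but at a thousand updates per second the loop catches the fall long before it becomes visible.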

A one-wheeled balancing robot is an extension of a two-wheeled balancing robot so that it can move in any 2D direction using a round ball as its only wheel. Several one-wheeled balancing robots have been designed recently, such as Carnegie Mellon University’s “Ballbot” that is the approximate height and width of a person, and Tohoku Gakuin University’s “BallIP”.[53] Because of the long, thin shape and ability to maneuver in tight spaces, they have the potential to function better than other robots in environments with people.[54]

Several attempts have been made to build robots that are completely enclosed in a spherical ball, either by spinning a weight inside the ball,[55][56] or by rotating the outer shells of the sphere.[57][58] These have also been referred to as an orb bot[59] or a ball bot.[60][61]

Using six wheels instead of four wheels can give better traction or grip in outdoor terrain such as on rocky dirt or grass.

Tank tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels, and are therefore very common for outdoor and military robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, such as on carpets and smooth floors. Examples include NASA’s Urban Robot “Urbie”.[62]

Walking is a difficult and dynamic problem to solve. Several robots have been made which can walk reliably on two legs; however, none have yet been made which are as robust as a human. There has been much study of human-inspired walking, such as at the AMBER lab, which was established in 2008 by the Mechanical Engineering Department at Texas A&M University.[63] Many other robots have been built that walk on more than two legs, because such robots are significantly easier to construct.[64][65] Walking robots can be used on uneven terrain, where they can provide better mobility and energy efficiency than other locomotion methods. Hybrids have also been proposed in movies such as I, Robot, where robots walk on two legs and switch to four (arms plus legs) when sprinting. Typically, robots on two legs can walk well on flat floors and can occasionally walk up stairs. None can walk over rocky, uneven terrain. Some of the methods which have been tried are:

The Zero Moment Point (ZMP) is the algorithm used by robots such as Honda’s ASIMO. The robot’s onboard computer tries to keep the total inertial forces (the combination of Earth’s gravity and the acceleration and deceleration of walking), exactly opposed by the floor reaction force (the force of the floor pushing back on the robot’s foot). In this way, the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over).[66] However, this is not exactly how a human walks, and the difference is obvious to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory.[67][68][69] ASIMO’s walking algorithm is not static, and some dynamic balancing is used (see below). However, it still requires a smooth surface to walk on.
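The ZMP idea can be made concrete with a simplified planar model: treating the robot as a set of point masses on flat ground, the ZMP is the point on the floor about which the combined gravity and inertial forces produce no horizontal moment. The formula and example values below are a textbook-style sketch under that flat-ground assumption, not ASIMO’s actual controller:

```python
# Simplified planar Zero Moment Point for point masses on flat ground:
#   x_zmp = sum(m_i * ((zdd_i + g) * x_i - xdd_i * z_i))
#           / sum(m_i * (zdd_i + g))
# Mass positions and accelerations below are hypothetical illustrations.

G = 9.81  # gravity, m/s^2

def zmp_x(masses):
    """masses: iterable of (m, x, z, xdd, zdd) tuples, where x/z are
    positions and xdd/zdd accelerations. Returns the ZMP x-coordinate."""
    num = sum(m * ((zdd + G) * x - xdd * z) for m, x, z, xdd, zdd in masses)
    den = sum(m * (zdd + G) for m, x, z, xdd, zdd in masses)
    return num / den

# A single static 50 kg mass at x = 0.1 m: the ZMP sits directly under it.
print(zmp_x([(50.0, 0.1, 0.9, 0.0, 0.0)]))
```

Keeping this point inside the foot’s support polygon is what the walking controller is continuously solving for; when the masses accelerate, the `xdd` terms shift the ZMP away from the point directly beneath the centre of mass.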

Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg, and a very small foot, could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick. As the robot falls to one side, it would jump slightly in that direction, in order to catch itself.[70] Soon, the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults.[71] A quadruped was also demonstrated which could trot, run, pace, and bound.[72] For a full list of these robots, see the MIT Leg Lab Robots page.[73]

A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the Zero Moment Point technique, as it constantly monitors the robot’s motion, and places the feet in order to maintain stability.[74] This technique was recently demonstrated by Anybots’ Dexter Robot,[75] which is so stable, it can even jump.[76] Another example is the TU Delft Flame.

Perhaps the most promising approach utilizes passive dynamics where the momentum of swinging limbs is used for greater efficiency. It has been shown that totally unpowered humanoid mechanisms can walk down a gentle slope, using only gravity to propel themselves. Using this technique, a robot need only supply a small amount of motor power to walk along a flat surface or a little more to walk up a hill. This technique promises to make walking robots at least ten times more efficient than ZMP walkers, like ASIMO.[77][78]

A modern passenger airliner is essentially a flying robot, with two humans to manage it. The autopilot can control the plane for each stage of the journey, including takeoff, normal flight, and even landing.[79] Other flying robots are uninhabited, and are known as unmanned aerial vehicles (UAVs). They can be smaller and lighter without a human pilot on board, and fly into dangerous territory for military surveillance missions. Some can even fire on targets under command. UAVs are also being developed which can fire on targets automatically, without the need for a command from a human. Other flying robots include cruise missiles, the Entomopter, and the Epson micro helicopter robot. Robots such as the Air Penguin, Air Ray, and Air Jelly have lighter-than-air bodies, propelled by paddles, and guided by sonar.

Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings.[80] The Japanese ACM-R5 snake robot[81] can even navigate both on land and in water.[82]

A small number of skating robots have been developed, one of which is a multi-mode walking and skating device. It has four legs, with unpowered wheels, which can either step or roll.[83] Another robot, Plen, can use a miniature skateboard or roller-skates, and skate across a desktop.[84]

Several different approaches have been used to develop robots that have the ability to climb vertical surfaces. One approach mimics the movements of a human climber on a wall with protrusions; adjusting the center of mass and moving each limb in turn to gain leverage. An example of this is Capuchin,[85] built by Dr. Ruixiang Zhang at Stanford University, California. Another approach uses the specialized toe pad method of wall-climbing geckoes, which can run on smooth surfaces such as vertical glass. Examples of this approach include Wallbot[86] and Stickybot.[87] China’s Technology Daily reported on November 15, 2008 that Dr. Li Hiu Yeung and his research group of New Concept Aircraft (Zhuhai) Co., Ltd. had successfully developed a bionic gecko robot named “Speedy Freelander”. According to Dr. Li, the gecko robot could rapidly climb up and down a variety of building walls, navigate through ground and wall fissures, and walk upside-down on the ceiling. It was also able to adapt to the surfaces of smooth glass, rough, sticky or dusty walls as well as various types of metallic materials. It could also identify and circumvent obstacles automatically. Its flexibility and speed were comparable to a natural gecko. A third approach is to mimic the motion of a snake climbing a pole.[citation needed]

It is calculated that when swimming some fish can achieve a propulsive efficiency greater than 90%.[88] Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion.[89] Notable examples are the Essex University Computer Science Robotic Fish G9,[90] and the Robot Tuna built by the Institute of Field Robotics, to analyze and mathematically model thunniform motion.[91] The Aqua Penguin,[92] designed and built by Festo of Germany, copies the streamlined shape and propulsion by front “flippers” of penguins. Festo have also built the Aqua Ray and Aqua Jelly, which emulate the locomotion of manta ray, and jellyfish, respectively.

In 2014, iSplash-II was developed by R. J. Clapham at Essex University. It was the first robotic fish capable of outperforming real carangiform fish in terms of average maximum velocity (measured in body lengths per second) and endurance, the duration for which top speed is maintained. This build attained swimming speeds of 11.6 BL/s (i.e. 3.7 m/s).[93] The first build, iSplash-I (2014), was the first robotic platform to apply a full-body-length carangiform swimming motion, which was found to increase swimming speed by 27% over the traditional approach of a posterior-confined wave form.[94]
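As a sanity check on the quoted figures: speed in body lengths per second times body length gives absolute speed, so 11.6 BL/s at 3.7 m/s implies a body roughly 0.32 m long (an inference from the numbers above, not a length stated in the text):

```python
# Convert between body lengths per second (BL/s) and m/s.
# The implied body length is derived from the figures quoted above.

def speed_ms(bl_per_s: float, body_length_m: float) -> float:
    """Absolute speed in m/s from relative speed and body length."""
    return bl_per_s * body_length_m

implied_length = 3.7 / 11.6                      # metres
print(round(implied_length, 3))                  # 0.319
print(round(speed_ms(11.6, implied_length), 1))  # 3.7
```

Normalising by body length is what makes the comparison with live fish fair: a small robot at 3.7 m/s is far more impressive, relative to its size, than a large one at the same absolute speed.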

Sailboat robots have also been developed in order to make measurements at the surface of the ocean. A typical sailboat robot is Vaimos,[95] built by IFREMER and ENSTA-Bretagne. Since sailboat robots are propelled by the wind, battery energy is only used for the computer, for communication, and for the actuators (to tune the rudder and the sail). If the robot is equipped with solar panels, it could theoretically navigate forever. The two main competitions for sailboat robots are WRSC, which takes place every year in Europe, and Sailbot.

Though a significant percentage of robots in commission today are either human controlled or operate in a static environment, there is an increasing interest in robots that can operate autonomously in a dynamic environment. These robots require some combination of navigation hardware and software in order to traverse their environment. In particular, unforeseen events (e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some highly advanced robots such as ASIMO and the Meinü robot have particularly good robot navigation hardware and software. Also, self-controlled cars, Ernst Dickmanns’ driverless car, and the entries in the DARPA Grand Challenge are capable of sensing the environment well and subsequently making navigational decisions based on this information. Most of these robots employ a GPS navigation device with waypoints, along with radar, sometimes combined with other sensory data such as lidar, video cameras, and inertial guidance systems, for better navigation between waypoints.

The state of the art in sensory intelligence for robots will have to progress through several orders of magnitude if we want robots working in our homes to go beyond vacuum-cleaning the floors. If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO, or Data of Star Trek: The Next Generation.

Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech.[96] The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, and so on. It becomes even harder when the speaker has a different accent.[97] Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first “voice input system”, which recognized “ten digits spoken by a single user with 100% accuracy” in 1952.[98] Currently, the best systems can recognize continuous, natural speech at up to 160 words per minute, with an accuracy of 95%.[99]

Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium,[100] making it necessary to develop the emotional component of robotic voice through various techniques.[101][102]

One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate “down the road, then turn right”. It is likely that gestures will make up a part of the interaction between humans and robots.[103] A great many systems have been developed to recognize human hand gestures.[104]
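Many gesture recognizers are built on some form of template matching over tracked hand trajectories. As a toy illustration of that idea only (the gesture names and coordinates below are invented; real systems work on camera-tracked keypoints), a nearest-template classifier over short 2D trajectories:

```python
import math

# Toy nearest-template gesture classifier. Each gesture is a short
# sequence of 2D points; an observed trajectory is assigned to the
# template with the smallest pointwise Euclidean distance.

TEMPLATES = {
    "wave":        [(0, 0), (1, 1), (2, 0), (3, 1)],
    "point_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
}

def classify(trajectory):
    def total_distance(a, b):
        return sum(math.dist(p, q) for p, q in zip(a, b))
    return min(TEMPLATES, key=lambda name: total_distance(trajectory, TEMPLATES[name]))

# A nearly flat trajectory is closer to the "point_right" template.
print(classify([(0, 0), (1, 0.1), (2, 0), (3, 0.1)]))  # point_right
```

Production systems add invariances (scale, rotation, timing) that this sketch omits.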

Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots. Robotic faces have been constructed by Hanson Robotics using their elastic polymer called Frubber, allowing a large number of facial expressions due to the elasticity of the rubber facial coating and embedded subsurface motors (servos).[105] The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language. Whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent addition, Nexi[106] can produce a range of facial expressions, allowing it to have meaningful social exchanges with humans.[107]

Artificial emotions can also be generated, composed of a sequence of facial expressions and/or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots.

Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future.[108] Nevertheless, researchers are trying to create robots which appear to have a personality:[109][110] i.e. they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.[111]

The Socially Intelligent Machines Lab of the Georgia Institute of Technology researches new concepts of guided teaching interaction with robots. The aim of the project is for a social robot to learn task goals from human demonstrations without prior knowledge of high-level concepts. These concepts are grounded in low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned using a Bayesian approach. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. The results are demonstrated by the robot Curi, which can scoop some pasta from a pot onto a plate and serve the sauce on top.[112]
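The Bayesian step can be illustrated with a minimal sketch. This is not the lab's actual algorithm; the candidate goals, observations, and probabilities below are invented to show the shape of the computation: a posterior over goals is updated from demonstrated actions.

```python
# Bayesian goal inference (illustrative only): given candidate task
# goals and the likelihood of each observed action under each goal,
# update a posterior distribution over goals after each demonstration.

def goal_posterior(prior, likelihood, observations):
    # prior: {goal: p(goal)}; likelihood: {goal: {obs: p(obs | goal)}}
    post = dict(prior)
    for obs in observations:
        for g in post:
            post[g] *= likelihood[g].get(obs, 1e-9)  # Bayes numerator
        total = sum(post.values())
        post = {g: p / total for g, p in post.items()}  # normalize
    return post

prior = {"scoop_pasta": 0.5, "pour_sauce": 0.5}
likelihood = {
    "scoop_pasta": {"spoon_in_pot": 0.9, "tilt_jar": 0.1},
    "pour_sauce":  {"spoon_in_pot": 0.2, "tilt_jar": 0.8},
}
# Two demonstrations of spooning strongly favor the scooping goal.
print(goal_posterior(prior, likelihood, ["spoon_in_pot", "spoon_in_pot"]))
```

Each repeated consistent demonstration sharpens the posterior, which is why a handful of demonstrations can suffice.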

The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases: perception, processing, and action (robotic paradigms). Sensors give information about the environment or about the robot itself (e.g. the position of its joints or its end effector). This information is then processed to be stored or transmitted, and to calculate the appropriate signals to the actuators (motors), which move the mechanism.

The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands. Sensor fusion may first be used to estimate parameters of interest (e.g. the position of the robot’s gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction) is inferred from these estimates. Techniques from control theory convert the task into commands that drive the actuators.
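The reactive level described above can be sketched as a sense-process-act loop with a simple proportional controller. This is a toy one-dimensional example, not any particular robot's controller: the "sensor" reads a position, the processing phase turns the error into a command, and the "actuator" applies it.

```python
# Minimal sense-process-act loop: a proportional controller drives a
# one-dimensional gripper position toward a target.

def control_step(position, target, gain=0.5):
    error = target - position    # perception: sensed error
    command = gain * error       # processing: control law -> command
    return position + command    # action: actuator moves the joint

pos = 0.0
for _ in range(20):
    pos = control_step(pos, target=1.0)
print(round(pos, 3))  # the position converges toward the target, 1.0
```

Real controllers add integral and derivative terms (PID) and must respect actuator limits and noise, which this sketch ignores.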

At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a “cognitive” model. Cognitive models try to represent the robot, the world, and how they interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
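As a minimal illustration of planning "how to achieve a task without hitting obstacles", a breadth-first search over a toy grid world (real planners work in continuous configuration spaces, but the obstacle-avoiding shortest-path idea is the same):

```python
from collections import deque

# Toy motion planner: breadth-first search on a grid. Cells marked 1
# are obstacles; the planner returns a shortest obstacle-free path.

def plan(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no obstacle-free path exists

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
# The wall in the middle column forces a detour through the bottom row.
print(plan(grid, (0, 0), (0, 2)))
```

Practical planners (A*, RRT, and relatives) differ mainly in how they explore the space, not in this basic structure.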

Control systems may also have varying levels of autonomy.

Another classification takes into account the interaction between human control and the machine motions.

Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robots, alternative ways to think about or design robots, and new ways to manufacture them. Other investigations, such as MIT’s cyberflora project, are almost wholly academic.

One notable innovation in robot design is the open-sourcing of robot projects. To describe the level of advancement of a robot, the term “Generation Robots” can be used. The term was coined by Professor Hans Moravec, Principal Research Scientist at the Carnegie Mellon University Robotics Institute, to describe the near-future evolution of robot technology. First-generation robots, Moravec predicted in 1997, should have an intellectual capacity comparable to that of a lizard and should become available by 2010. Because the first-generation robot would be incapable of learning, however, Moravec predicted that second-generation robots would improve on the first and become available by 2020, with intelligence perhaps comparable to that of a mouse. Third-generation robots should have intelligence comparable to that of a monkey. Though fourth-generation robots, robots with human intelligence, would become possible, Moravec does not predict this happening before around 2040 or 2050.[114]

A second innovation is evolutionary robotics, a methodology that uses evolutionary computation to help design robots, especially their body form, or their motion and behavior controllers. In a manner analogous to natural evolution, a large population of robots is allowed to compete in some way, or their ability to perform a task is measured using a fitness function. Those that perform worst are removed from the population and replaced by a new set, whose behaviors are based on those of the winners. Over time the population improves, and eventually a satisfactory robot may appear. This happens without any direct programming of the robots by the researchers. Researchers use this method both to create better robots[115] and to explore the nature of evolution.[116] Because the process often requires many generations of robots to be simulated,[117] the technique may be run entirely or mostly in simulation, then tested on real robots once the evolved algorithms are good enough.[118] Currently, there are about 10 million industrial robots toiling around the world, and Japan leads in the density of robots used in its manufacturing industry.[citation needed]
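The measure-select-replace loop described above can be sketched in a few lines. This is a toy example: each "robot" is just a parameter vector, and the fitness function (matching a hidden target vector) stands in for a real task; the population size, mutation scale, and generation count are illustrative.

```python
import random

# Minimal evolutionary loop: keep the fitter half of the population,
# replace the rest with mutated copies of survivors, repeat.

random.seed(0)          # deterministic run for reproducibility
TARGET = [0.2, 0.8, 0.5]

def fitness(genome):
    # Higher is better; 0 is a perfect match to the hidden target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

population = [[random.random() for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    children = [[g + random.gauss(0, 0.05)            # mutation
                 for g in random.choice(survivors)]
                for _ in range(10)]
    population = survivors + children                 # replacement

best = max(population, key=fitness)
print([round(g, 1) for g in best])
```

No one ever programs the solution directly; the fitness function and variation do the work, which is exactly the appeal for robot body and controller design.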

The study of motion can be divided into kinematics and dynamics.[119] Direct kinematics refers to the calculation of end effector position, orientation, velocity, and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance, and singularity avoidance. Once all relevant positions, velocities, and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end effector acceleration. This information can be used to improve the control algorithms of a robot.
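The direct/inverse distinction is easiest to see for a planar two-link arm, the textbook kinematics example (the link lengths below are arbitrary illustrative values):

```python
import math

# Forward (direct) and inverse kinematics for a planar two-link arm.
L1, L2 = 1.0, 1.0  # link lengths (illustrative)

def forward(theta1, theta2):
    # Direct kinematics: end-effector position from known joint angles.
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    # Inverse kinematics: joint angles for a desired end-effector
    # position (elbow-down solution, via the law of cosines).
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

x, y = forward(0.3, 0.7)
t1, t2 = inverse(x, y)
print(round(t1, 6), round(t2, 6))  # recovers the angles 0.3 and 0.7
```

Redundancy shows up even here: mirrored elbow-up angles reach the same point, which is why real inverse-kinematics solvers must choose among multiple solutions.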

In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones, and improve the interaction between these areas. To do this, criteria for “optimal” performance and ways to optimize design, structure, and control of robots must be developed and implemented.

Bionics and biomimetics apply the physiology and methods of locomotion of animals to the design of robots. For example, the design of BionicKangaroo was based on the way kangaroos jump.

Robotics engineers design robots, maintain them, develop new applications for them, and conduct research to expand the potential of robotics.[120] Robots have become a popular educational tool in some middle and high schools, particularly in parts of the USA,[121] as well as in numerous youth summer camps, raising interest in programming, artificial intelligence and robotics among students. First-year computer science courses at some universities now include programming of a robot in addition to traditional software engineering-based coursework.[122][123]

Universities offer bachelor’s, master’s, and doctoral degrees in the field of robotics.[124] Vocational schools offer robotics training aimed at careers in robotics.

The Robotics Certification Standards Alliance (RCSA) is an international robotics certification authority that confers various industry- and educational-related robotics certifications.

Several national summer camp programs include robotics as part of their core curriculum, including Digital Media Academy, RoboTech, and Cybercamps. In addition, youth summer robotics programs are frequently offered by celebrated museums such as the American Museum of Natural History[125] and The Tech Museum of Innovation in Silicon Valley, CA. An educational robotics lab also exists at the Industrial Engineering and Management Faculty of the Technion; it was created by Dr. Jacob Rubinovitz.

Some examples of summer camps are: EdTech, the Robotics Camp-Montreal, AfterFour-Toronto, Exceed Robotics-Thornhill, among many others.

All of these camps offer:

There are many robotics competitions around the globe. One of the most important is the FLL, or FIRST Lego League. The idea of this competition is that kids start developing knowledge of, and an interest in, robotics by playing with Lego bricks from the age of 9. This competition is associated with NI (National Instruments).

Many schools across the country are beginning to add robotics programs to their after school curriculum. Some major programs for afterschool robotics include FIRST Robotics Competition, Botball and B.E.S.T. Robotics.[126] Robotics competitions often include aspects of business and marketing as well as engineering and design.

The Lego company began a program for children to learn and get excited about robotics at a young age.[127]

Robotics is an essential component in many modern manufacturing environments. As factories increase their use of robots, the number of robotics-related jobs has been observed to be steadily rising.[128] The employment of robots in industry has increased productivity and efficiency and is typically seen as a long-term investment.

A discussion paper drawn up by EU-OSHA highlights how the spread of robotics presents both opportunities and challenges for occupational safety and health (OSH).[129]

The greatest OSH benefits stemming from the wider use of robotics should come from substituting robots for people working in unhealthy or dangerous environments. In space, defence, security, or the nuclear industry, but also in logistics, maintenance and inspection, autonomous robots are particularly useful in replacing human workers performing dirty, dull or unsafe tasks, thus avoiding workers’ exposure to hazardous agents and conditions and reducing physical, ergonomic and psychosocial risks. For example, robots are already used to perform repetitive and monotonous tasks, to handle radioactive material or to work in explosive atmospheres. In the future, many other highly repetitive, risky or unpleasant tasks will be performed by robots in a variety of sectors such as agriculture, construction, transport, healthcare, firefighting or cleaning services.

Despite these advances, there are certain skills to which humans will remain better suited than machines for some time to come, and the question is how to achieve the best combination of human and robot skills. The advantages of robots include heavy-duty jobs performed with precision and repeatability, whereas the advantages of humans include creativity, decision-making, flexibility and adaptability. This need to combine optimal skills has resulted in collaborative robots and humans sharing a common workspace more closely, and has led to the development of new approaches and standards to guarantee the safety of humans and robots working together. Some European countries are including robotics in their national programmes and trying to promote safe and flexible co-operation between robots and operators to achieve better productivity. For example, the German Federal Institute for Occupational Safety and Health (BAuA) organises annual workshops on the topic of human-robot collaboration.

In the future, co-operation between robots and humans will be diversified, with robots increasing their autonomy and human-robot collaboration reaching completely new forms. Current approaches and technical standards[130][131] aiming to protect employees from the risks of working with collaborative robots will have to be revised.

119. FLL. (2016, March 24). Retrieved March 25, 2016, from http://www.firstinspires.org/robotics/fll
120. Robotics Summer Camps. (n.d.). Retrieved March 25, 2016, from http://www.ourkids.net/robotics-camps.php
121. Practical Ed Tech Summer Camp. (2016). Retrieved March 25, 2016, from http://practicaledtech.com/practical-ed-tech-summer-camp
122. VEX Robotics Competitions. (2015). Retrieved March 25, 2016, from http://www.robotevents.com/robot-competitions/vex-robotics-competition?limit=500

The rest is here:

Robotics – Wikipedia, the free encyclopedia

 Posted by at 4:54 am  Tagged with:

War On Drugs: Pictures, Videos, Breaking News

 War On Drugs  Comments Off on War On Drugs: Pictures, Videos, Breaking News
Jun 122016
 

According to Fox 45 Now, a classy upstanding citizen was robbed & assaulted in an alley by her drug dealer, Tutu after asking him to turn away while s…

Brian Smith

Native New Englander now residing in South Carolina

I love our president, his passion and I appreciate his proposal to heal the epidemic of opioid abuse, but I believe the bigger picture goes beyond treatment centers and expanding scope of practice. We need to assist our patients from the inside, the roots, only then can they truly begin to heal.

Erica Benedicto

Root-Cause Integrative PA-C, Yoga Teacher, Storyteller, Community/Clinical Curator, Speaker, Thinker, Doer, Rapscallion

Becoming a mother has really opened my heart. Besides being with family, I notice the compassion most when I am teaching yoga. One of my favorite plac…

Pia Artesona

Los Angeles-based yoga teacher, writer, life coach and mother-to-be

A robust public conversation is currently unfolding led by the formerly incarcerated and seized by President Obama himself to reflect on our current criminal justice system and the lasting stigma and damage it causes those who have been in contact with it. But does a nation of second chances include those of us who are immigrants?

Tania Unzueta

Legal and Policy Director for Mijente and the #Not1More Campaign

Iceland may be the world’s most progressive country at reducing teenage substance abuse. In the more than 4 decades that I have studied, researched …

In 2014, the U.S. Department of Justice confirmed Louisiana remained number 1, among the 50 states, with 38,030 in prison, a rate of 816 per 100,000 o…

There are two problems with threatening long sentences to extract cooperation from low-level drug offenders. This strategy is ineffective in impacting the drug trade. It also inflicts immense collateral damage on innocent people and low-level offenders, while letting the guiltiest offenders off more easily.

Amos Irwin

Chief of Staff at the Criminal Justice Policy Foundation; Training Director at Law Enforcement Against Prohibition

Boy, it isn’t every day you get to write a headline like that! But those are the kinds of feelings Ted Cruz seems to bring out in everyone — left, right, and center.

During my imprisonment I had tried to commit suicide, been stuck with a knife, and was beat down with a pipe–but nothing hurt me more than my separat…

Anthony Papa

Manager of Media & Artist Relations, Drug Policy Alliance

Jason Hernandez never thought he would see the outside world again.

Today, drug cartels are playing the political activism game and are increasing their support base by appealing to the hearts and minds of millions of people through the widespread social discontent and the ideal of social justice.

Ana Davila

Masters in Science in Global Affairs and Transnational Security Candidate at New York University

The disdain that the Amish faithful feel for family members who reject their all-encompassing religious worldview is such that they refuse to dine with them at the same table.

Kathleen Frydl

Historian studying US state power, policies, and the institutions that shape American life.

In the United States, while there are shifting patterns of drug use, there is no simple relationship between use and the severity of the nation’s drug laws. The caveat, from the European study, is that relaxing penalties had equally unpredictable results. Annan’s statement needs that bit of context. We rate this claim Mostly True.

Undeniably, the world is splintering. Geopolitical blocs are forming once again, the nuclear arms race is reigniting and religious war rages. Globalization is in retreat as publics across the planet suspect trade agreements, politicians talk about building walls and refugees are turned away. Yet, as Parag Khanna, author of the new book, “Connectography,” writes this week from Singapore, “the same world that appears to be falling apart is actually coming together.” (continued)

While mostly ignored by the media (and almost completely ignored in the debates), the issue is going to become a lot more important in the general election, as many states will have recreational legalization ballot initiatives to vote on.

LISBON, Portugal — This week’s U.N. summit on the global drug problem is already a turning point in our collective journey toward improving global drug policy. Whatever the final formal conclusions, reforms are on and history is in the making.

See the original post here:

War On Drugs: Pictures, Videos, Breaking News

 Posted by at 12:44 am  Tagged with:

Annenberg Classroom – First Amendment

 Misc  Comments Off on Annenberg Classroom – First Amendment
Jan 312016
 

First Amendment – The Text[1]

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

[1] On September 25, 1789, Congress transmitted to the states twelve proposed amendments. Two of these, which involved congressional representation and pay, were not adopted. The remaining ten amendments, known as the Bill of Rights, were ratified on December 15, 1791.

First Amendment – The Meaning

Freedom of Speech and of the Press: The First Amendment allows citizens to express and to be exposed to a wide range of opinions and views. It was intended to ensure a free exchange of ideas even if the ideas are unpopular.

Freedom of speech encompasses not only the spoken and written word, but also all kinds of expression (including non-verbal communications, such as sit-ins, art, photographs, films and advertisements). Under its provisions, the media, including television, radio and the Internet, is free to distribute a wide range of news, facts, opinions and pictures. The amendment protects not only the speaker, but also the person who receives the information. The right to read, hear, see and obtain different points of view is a First Amendment right as well.

But the right to free speech is not absolute. The U.S. Supreme Court has ruled that the government sometimes may be allowed to limit speech. For example, the government may limit or ban libel (the communication of false statements about a person that may injure his or her reputation), obscenity, fighting words, and words that present a clear and present danger of inciting violence. The government also may regulate speech by limiting the time, place or manner in which it is made. For example, the government may require activists to obtain a permit before holding a large protest rally on a public street.

Freedom of Assembly and Right to Petition the Government: The First Amendment also protects the freedom of assembly, which can mean physically gathering with a group of people to picket or protest; or associating with one another in groups for economic, political or religious purposes.

The First Amendment also protects the right not to associate, which means that the government cannot force people to join a group they do not wish to join. A related right is the right to petition the government, including everything from signing a petition to filing a lawsuit.

Freedom of Religion: The First Amendment’s free exercise clause allows a person to hold whatever religious beliefs he or she wants, and to exercise that belief by attending religious services, praying in public or in private, proselytizing or wearing religious clothing, such as yarmulkes or headscarves. Also included in the free exercise clause is the right not to believe in any religion, and the right not to participate in religious activities.

Second, the establishment clause prevents the government from creating a church, endorsing religion in general, or favoring one set of religious beliefs over another. As the U.S. Supreme Court decided in 1947 in Everson v. Board of Education of Ewing Township, the establishment clause was intended to erect “a wall of separation between church and state,” although the degree to which government should accommodate religion in public life has been debated in numerous Supreme Court decisions since then.

Go here to read the rest:
Annenberg Classroom – First Amendment

National Speakers Association New York Chapter

 NSA  Comments Off on National Speakers Association New York Chapter
Jan 142016
 

Events :: Recent

December 15, 2015, 6:00pm: Working Session (Professional Members Only): How To Build Your Personal Brand and Become A Local Celebrity Entrepreneur, with Ramon Ray
December 9, 2015, 6:30pm: NSA-NYC Holiday Party! The Leadership Team of the New York Chapter of the National Speakers Association hosts its annual festive holiday party.
November 13, 2015, 9:00am: Do YOU! Finding Your Points of Distinction, with Jessica Pettitt
October 28, 2015, 6:00pm: Working Session (Professional Members Only): Pitching the Media, with Jess Todtfeld, CSP
October 20, 2015, 6:30pm: Meet Up: Elevate Your Presentation Skills! 2 Hours, 2 Speakers, an Interactive Discussion and Tangible Take-a-Ways!
October 16, 2015, 8:30am: Sell Your Thoughts: How to Build Your Thought Leadership, Increase Your Revenue, with Neen James, CSP
September 18, 2015, 8:30am: Season kick-off: Creating Systems to Put You Center Stage, with Ruby Newell-Legner
August 5, 2015, 6:00pm: Working Session (Professional Members Only): The Technology of Presentations, with Trevor Perry
June 24, 2015, 6:30pm: Meet Up: NSA Convention Sneak Peek and Navigate
June 12, 2015, 12:00pm: 2015 Annual NSA-NYC Luncheon, the final 2014-2015 meeting, sharing a meal and honoring members who have contributed so much to the chapter.

Read more from the original source:
National Speakers Association New York Chapter

 Posted by at 9:43 am  Tagged with:

University of Illinois Repeals the First Amendment for Its …

 Misc  Comments Off on University of Illinois Repeals the First Amendment for Its …
Dec 182015
 

Late Friday afternoon (August 22), the University of Illinois broke its three-week long silence on the controversy regarding the Chancellor’s revocation of a tenured offer to Steven Salaita, who had accepted a faculty position in the American Indian Studies Program at the flagship campus at Urbana-Champaign. Chancellor Phyllis Wise and Board of Trustees Chairman Christopher Kennedy both issued statements explaining the revocation, but in terms far more alarming than the original decision itself. It is not an exaggeration to say that the Chancellor and the Board of Trustees have now declared that the First Amendment does not apply to any tenured faculty at the University of Illinois.

A bit of background to Friday’s bombshell statements. Last October, Professor Salaita, then teaching at Virginia Tech, accepted a tenured offer from the Urbana-Champaign campus. He went through the regular appointments process at the University of Illinois, and received approval by the relevant departments and deans after a review of his scholarship and teaching. The offer, which he accepted, was conditional on approval by the Board of Trustees. Such approval clauses are typical in all teaching contracts and had, previously, been pro forma at Illinois, as they are at all serious universities: it is not the job of the Board of Trustees of a research institution to second-guess the judgment of academics and scholars. Well before the Board took the matter up, even University officials were describing Salaita as a faculty member, and he moved to Illinois and was scheduled to teach two classes this fall.

Salaita also has a Twitter account. “Tweets” are limited to 140 characters, so the medium is conducive primarily to spontaneous and superficial commentary. As a Palestinian-American and scholar of colonialism, Salaita tweeted extensively about the Israeli attack on Gaza. Contrary to the initial misrepresentations put into circulation by far right websites, none of the tweets were either anti-semitic or incitements to violence. Some were vulgar, some juvenile, some insulting, some banal. The First Amendment unequivocally protects Salaita’s right to express every one of those opinions on a matter of public concern, and to do so, if he wants, with vulgarity and insults. As a matter of American constitutional law, this is not a close case.

Part of the First Amendment’s protection of such speech is that government, including a state university, is prohibited from punishing the speaker for his expression or viewpoint. Revoking a job offer because of such speech would, again, be clearly unconstitutional. Salaita’s constitutional and contractual claims will no doubt be adjudicated in court, and the University should lose.

That now brings us to Friday’s shocking statements. Chancellor Wise declared that “we cannot… tolerate… personal and disrespectful words or actions that demean and abuse either viewpoints themselves or those who express them.” Yet as a matter of well-settled American constitutional law, the University of Illinois must tolerate “words… that demean and abuse either viewpoints themselves or those who express them.” The University has no choice, both as a matter of constitutional law and as a matter of its contractual commitment with its faculty to academic freedom. Scathing critiques of both viewpoints and authors abound in almost all scholarly fields; it would be the end of serious scholarly inquiry and debate were administrators to become the arbiters of “good manners.” More simply, it would be illegal for the University to start punishing its faculty for failure to live up to the Chancellor’s expectations for “civil” speech and disagreement.

The university, of course, need not and should not tolerate the mistreatment of students in the classroom, but there is no evidence of any such pedagogical misconduct in this case; indeed, the public evidence is that Salaita is a successful and popular teacher. No serious university evaluates pedagogical fitness based on speculative inferences from twitter accounts, yet the Chancellor’s statement implies that this is what Illinois has done in this instance. Faculty have pedagogical and professional obligations to their students, but that does not include the obligation to refrain from expressing views, whether about matters of public concern or matters within the purview of a faculty member’s scholarship, that some student somewhere might find upsetting, leading that student to conclude that that faculty member might not “value[] that student as a human being.” A student’s entitlement is to be treated seriously and professionally in the classroom; students have no entitlement to never find the views of their professors offensive or upsetting.

Chairman Kennedy’s statement is even worse than the Chancellor’s. While endorsing the Chancellor’s abrogation of the constitutional and contractual rights of the faculty, he goes even further, declaring that “there can be no place” for “disrespectful and demeaning speech” “in our democracy, and therefore, there will be no place for it in our university.” We may certainly hope for more civility in public life, but “disrespectful and demeaning speech” not only has an extensive presence in our democracy (as everyone knows), it has a constitutionally protected place as well, as the United States Supreme Court has repeatedly affirmed. Yet Chairman Kennedy says he believes only in “free speech tempered in respect for human rights.” But there is no doctrine of “free speech tempered in respect for human rights” in American constitutional law. It is a national embarrassment that a public official, the Chairman of the University of Illinois’s Board of Trustees, apparently does not know even the basic facts about the American constitutional system.

At moments like this, one wonders: Where are the lawyers? Chancellor Wise and Chairman Kennedy have made statements that commit the University of Illinois to illegal because unconstitutional courses of action. They should resign, or be removed from office, before doing further damage to one of the nation’s great research universities. Their public statements make clear they are unfit to lead academic institutions in which both freedom of speech and freedom of research and inquiry are upheld.


Free Speech, Language, and the Rule of Law

Oct 03 2015

Some Thoughts on Free Speech, Language, and the Rule of Law
by Thomas Streeter

(from Robert Jensen and David S. Allen (eds.), Freeing the First Amendment: Critical Perspectives on Freedom of Expression, New York University Press, 1995, pp. 31-53.)

This chapter discusses the relevance of research and reflection on language to recent critical trends in thinking on free speech. There is a tendency to interpret many of the recent revisionist approaches to free speech as if they were simply calls for exceptions to otherwise clear-cut rules and principles, as if, say, pornography or racism were so exceptionally evil that they fall outside the parameters of the kinds of speech that are “obviously” protected under the First Amendment. This misses the fact that the new approaches, with varying degrees of explicitness, involve theoretical and epistemological challenges to the underlying premises of free speech law in general; over the long run, what the new approaches are calling for are not exceptions but a restructuring of free speech law as a whole. The ideas driving this profound rethinking come from a variety of traditions, including various currents of feminism, literary theory, and theories of race and ethnicity. This chapter focuses on just one of those traditions: the complex twentieth-century theorizing of language, sometimes called the “linguistic turn” in twentieth-century philosophy. Although the linguistic turn is only one aspect of the new thinking about free speech, and although its importance and character are not agreed upon by all those advocating the new thinking, calling attention to it is useful because it nicely highlights some conceptual difficulties of the traditional framework and because it helps differentiate the revisionist criticisms from social determinist and other subtly authoritarian criticisms of free speech.

On the one hand, this chapter argues that the linguistic turn involves some revelations about the nature of language and human communication that do not accord well with the understandings of language implicit in free speech law, particularly with the metaphor of the marketplace of ideas. On the other, it argues that part of what is at stake is the way American culture envisions the rule of law as a whole. In particular, important currents of the understanding of the rule of law suggest the possibility and necessity of constructing rules, procedures, and meanings that transcend or can be abstracted from context, whereas the linguistic turn suggests that this is impossible, that meanings can be determined only in relation to particular contexts. The final part of this chapter, therefore, suggests some avenues for exploring free speech in its historical and social context, as opposed to efforts to abstract it out of context.

In the course of a discussion of the campus hate speech controversy, literary critic Henry Louis Gates (speaking from an African American position) provided the following hypothetical examples of potentially “harmful” speech directed at a minority student:

Sociolinguistics offers an answer to the first question: the social phenomenon of linguistic style. It is not the contents of the first statement that give it force; the argument it makes is, at best, dubious and obfuscatory, whereas the second statement at least would communicate the true feelings of the speaker towards the hearer with considerable precision. The first statement’s power comes from its style.

It is a well-established fact that fluency in any language involves mastery, not just of a single, “correct” version of a language, but of a variety of styles or codes appropriate to specific contexts.[2] Gates’s first example is a case of the formal or “elaborated” style of contemporary English, which is highly valued in academic and professional settings. It is characterized by, among other things, Latinate vocabulary (“demanding educational environments” instead of “tough schools”) and elaborate syntax. The second is an example of informal or restricted style, characterized by ellipsis (omitting “You get out of my face . . . “) and colloquial constructions.

Linguists also have long insisted that, in an absolute sense, formal style is no more correct or better for communication than informal style. Scientifically speaking, what makes a style appropriate or inappropriate is the social context in which it is used: in an academic setting, the formal character of the first example gives the statement force, but in another context, say, a working class bar, it might only elicit laughter and derision whereas the second statement might have considerable impact. In the appropriate context, therefore, one can use informal style brilliantly and subtly, and conversely, it is quite possible to speak in a thoroughly formal style and yet be inept, offensive, or simply unclear.[3]

What style differences communicate, then, are not specific contents, but social relations between speakers and listeners, i.e., relations of power, hierarchy, solidarity, intimacy, and so forth. In particular, formal language suggests a relation of impersonal authority between speaker and listener, whereas informal language suggests a more intimate (though not necessarily friendly) relationship. You can petrify a child by interjecting into an otherwise informal conversation, “No you may not.” The shift to formal style (no ellipsis, “may not” instead of “can’t”) shows that the speaker is not just making a request, but is asserting his or her powers of authority as an adult over the child listener.

Gates’s first example would be more wounding to a minority student, therefore, because, by couching itself in a formal, academic style, it is rhetorically structured as the expression of “impersonal,” rational, and thus institutionally sanctioned, sentiments. It thereby invokes the full force of the authority of the university against the student’s efforts to succeed in it. Gates’s second example, with its informal style, suggests that one individual, the speaker, harbors racist ill will towards the listener. The first example, by contrast, suggests that, not just one individual, but the entire institution of the university in all its impersonal, “rational” majesty, looks upon the student as unfit.

So why is it easier to penalize the second kind of statement than the first, when it is the first that is potentially more damaging (which is not necessarily to suggest that we should penalize the first kind of statement)? Contemporary law in general is insensitive to matters of linguistic style. Hollywood action movies have made a cliché of lampooning the incongruity of reading the highly formal, legalistic Miranda warning during arrests, which are typically emotional encounters between working class cops and criminals, i.e., contexts where informal style would be appropriate.[4] In First Amendment jurisprudence, where language is not only the vehicle but the subject matter of the law, this insensitivity can lead to conceptual confusion. Linguistic style may be a fact of life, but traditional legal liberal ways of thinking about free speech, especially those encapsulated in the metaphor of the “marketplace of ideas,” are strangely incapable of addressing it.

The marketplace metaphor in free speech law involves imagining symbolic and linguistic phenomena as if they were analogous to market exchange, which implies a number of things about language. Most obviously, it implies that language is primarily an exchange, a transference of something (perhaps “information”), from one person to another. Hence, in linguistic exchanges what matters is the contents of the exchange, not the style or form in which it is “packaged,” just as in real market exchanges it makes little difference if you pay by check or cash. Yet, as in Gates’s example, in language the “package” can be everything. The marketplace metaphor, then, draws our attention away from the importance of just the kind of stylistic differences that sociolinguists say are central to the workings of everyday language.

The marketplace metaphor, furthermore, tends to imply that the good that comes from unconstrained human speech comes from some neutral, universal, mechanical, and leveling process, a linguistic equivalent to the economist’s invisible hand out of which will emerge truth, or at least some form of democratic justice. That neutral, mechanical process, furthermore, is contrasted in law with “arbitrary” government interference. And yet, in several ways, linguistics has taught that language itself is arbitrary at its core; in language, the boundary between “natural” processes and arbitrary ones is difficult, some would argue impossible, to discern.

Linguists say that language is “arbitrary” in the sense that meaning emerges, not from anything logically inherent in words or their arrangement, but from the specific conventions and expectations shared by members of a given speech community, conventions and expectations that can and do change dramatically from time to time and place to place. Aside from language in general and perhaps some very deep-level aspects of syntax, there is very little that is universal, neutral, or mechanical about human languages. This insight grew out of the observation that languages differ profoundly from one another, not only in terms of the meanings of specific words, but in terms of basic aspects of the ways those words are arranged: some languages, for example, have only two or three words for color, while others have nothing English speakers would recognize as verb tenses. But it has also been bolstered by detailed analysis of the workings of language in general. Meanings are fixed neither by logic nor by some natural relation of words to things, but by the contextual and shifting system of interpretation shared by the members of a given speech community.

The arbitrariness of language presents two problems for traditional thinking about freedom of speech. One problem involves legal interpretation, the belief that properly expert judges and lawyers following the proper procedures can arrive at the correct interpretation of a dispute. Often described as the problem of the indeterminacy of law, the purely contextual character of meaning would suggest that legal decisions will always be forced to fall back on contingent, social or political values to decide where the boundaries in the law lie.[5] It is in the character of language, in other words, that a judge will never be able to look at the text of the Bill of Rights and legal precedents to decide whether or not flag burning is protected by the First Amendment; she will always in one way or another be forced to make a choice about whether or not she thinks it should be protected, and will always be faced with the possibility that a reasonable person could plausibly disagree.

Indeterminacy should not be mistaken for the absurd assertion that any word can mean any thing, that there is no stability to meaning whatsoever. As deconstructionist literary critic Barbara Johnson puts it,

A second problem suggested by the arbitrariness of language involves the impossibility of the abstraction from context that is a linchpin of the formalist legal logic which today dominates thinking about freedom of speech. According to some understandings of the rule of law, justice is best served when applied according to indisputable, clear rules of procedure and decisionmaking. Hence the First Amendment protects Nazis marching in Skokie and flag burning, not because anything good is being accomplished in either case, but because the important thing is to uphold the rules impartially and unequivocally. And being impartial and unequivocal typically means that rules are upheld regardless of context.

If one were to suggest, say, that the harm from Nazis marching in a Jewish suburb outweighs the value of protecting their speech because of the history of the Holocaust and the irrational and violent character of Nazi ideology, or that flag burning is such an ineffectual form of political expression and so potentially offensive that nothing would be lost by restricting it, the formalist counterargument is that this would “blur” the boundaries, cross what lawyers call the bright lines, upon which our system of justice rests: the rules are more important than the context.

An important example of formalist reasoning is the Bellotti case, in which the Supreme Court struck down a Massachusetts law limiting corporate campaign spending. The Court reached its decision, not simply by weighing the positive and negative effects of the law, nor by deciding that it was a good thing in this case to grant large corporations the same rights as private individuals. The decision was based on the argument that even considering the source of the campaign spending (the “speech” in question) was inappropriate; every individual has a right to unrestricted political speech, and even asking whether corporate “individuals” are as worthy of protection as ordinary individuals would blur the bright lines upon which the rule of law is based.[7] Another example would be American Booksellers Association, Inc. v. Hudnut, when the court threw out an anti-pornography ordinance. The court argued that, even if pornography has negative effects, the same might be said of other forms of protected speech. From this it concluded that “[i]f the fact that speech plays a role in a process of conditioning were enough to permit governmental regulation, that would be the end of freedom of speech,” and thus negative effects do not justify restrictions. As Stanley Fish has pointed out, this is a peculiar logic: faced with facts which call into question the speech/action distinction which underlies the law, the court upholds the law against the facts which would undermine it. But it is a typically formalist logic: the point is to uphold the rule of law, i.e., abstract, neutral principles and procedures; if the coherence of those abstract principles is threatened by facts, you throw out the facts, not the principles.[8]

The problem is that, if the meanings of statements emerge from convention, from social context, then the insistence on excluding context, on divorcing rules and their enforcement from the social and political complexities of a situation, is an impossibility. This is not simply an argument that it would be reasonable to sometimes include a little bit of context in legal decisionmaking, that First Amendment law should lean towards a more policy-oriented weighing and balancing of principles and rights in special circumstances such as highly concentrated or technologically inaccessible media. Rather, the argument is that formalist arguments about free speech cannot be doing what they claim, that context is present in decisions in spite of claims to the contrary. Decisions that grant protection to marching Nazis and flag burning are not simply decisions that show a preference for bright line rules over context; on the contrary, such decisions are themselves a product of a particular social and historical context, and in turn contribute to the making of particular contexts.

The collapse of the boundary between “natural” speech and arbitrary interference with it implied by indeterminacy creates a further problem for First Amendment interpretation: the collapse of the distinction between speech and conduct or speech and action. The exercise of free speech, the “free marketplace of ideas,” is imagined as a kind of neutral, free and equal exchange, contrasted with unfree or arbitrary coercion. What disappears in the face of the arbitrariness of language is the coherence of that contrast, the faith that there is an important categorical distinction between people talking and arguing and people coercing one another through some kind of action. It is now an axiom of sociolinguistics and many other schools of thought that language use is an important kind of social action, that words do not merely reflect reality or express ideas, they primarily are a way of doing things, a way of acting in the social world. Although J. L. Austin began his classic How to Do Things With Words by describing a limited category of statements that do things–“performatives”–he later enlarged the category and made its boundaries much less clear by acknowledging the frequency of “indirect performatives,” i.e., statements that might appear to be merely descriptive but in context can be shown to be in fact doing something.[9] Some have since argued that in a sense all utterances are performatives.

None of which is to suggest that a subtle verbal snub is identical to punching someone in the nose. We do not call trespassing on someone’s lawn and shooting them identical, though they are both categorized as violations, as coercive. When Stanley Fish argues that speech in everyday life should not be imagined as if it takes place in “the sterilized and weightless atmosphere of a philosophy seminar,”[10] or when Matsuda et al. argue that words can wound, the argument is not that every slight or insult ought to be treated as if it were assault and battery.[11] What they are criticizing is the belief that there is a fundamental, categorical dichotomy between speech and conduct, that the dichotomy is clear and generalizable enough to form one of the principal structures of our law and democracy.

All this points to a deeper critique of the marketplace metaphor. The metaphor implies that linguistic exchanges, like market exchanges, take place between individuals who, in the absence of some outside interference, exist merely as individuals, not as persons in particular contexts with particular backgrounds. These are the famous abstract individuals of legal liberalism, the persons referred to as “A” and “B” in law school lectures on contracts: persons bereft, in legal liberalism’s ideal world, of gender, class, ethnicity, history. People the world over, the marketplace metaphor suggests, all share the characteristics of being in essence rational, self-interested individuals, inherently active and desirous. Language use, then, is a matter of expressing pre-existing interests; it is a tool used by individuals to buy cheap and sell dear in the marketplace of ideas. Language is something one uses.

But, according to at least some schools of linguistics and language philosophy, language is also something that happens to us, something that “speaks us” as much as we speak it. Language is an inherently collective, social precondition to individuality. Most definitions of language exclude any notion of a language possessed by only one individual; for language to be language it must be shared. People do not choose, after all, their first language; in a sense it chooses people. And the particularities of the language that chooses people, many would say, in turn shapes their consciousness, their sense of what counts as reason, their perceptions of the world and their selves within it, even their desires.[12]

This is not to imply, however, some kind of simple social determinism. Here is where the linguistic turn in philosophy suggests something very different from the common assertion that individual behaviors are “caused” by social structures. For one of the central discoveries of linguistics and language theory is what Barthes called “a paradoxical idea of structure: a system with neither close nor center.”[13] Except for analytical purposes, linguistic structure does not exist outside of anyone’s use of it. Language is certainly structured, in some sense of that word; linguistic grammar is the central example of structure, although scholars have brought to our attention many higher-level structures like linguistic style. But that structure is not simply some kind of exterior constraint, a Hobbesian limit on individual action; it is not the “structure” of, say, Durkheimian sociology or orthodox Marxism. It is dynamic, changing, and creative. As Chomsky pointed out, one grammatical system is capable of generating an infinite variety of sentences. And grammar is a practical, thoroughly collective human accomplishment, not an exterior system imposed upon individuals by a reified “society.” It is enabling as well as constraining: linguistic structure is a precondition of self-expression, not just a limit to it.
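Chomsky's point that a single grammatical system can generate an infinite variety of sentences can be sketched computationally. The toy context-free grammar below is an invented illustration (not anything from the chapter): a handful of fixed rules, one of them recursive ("S -> S and S"), suffices to generate an unbounded set of sentences.

```python
import random

# A toy context-free grammar: a finite, shared rule system that is
# nonetheless generative. The recursive rule "S -> S and S" means there
# is no longest sentence the grammar can produce.
GRAMMAR = {
    "S": [["NP", "VP"], ["S", "and", "S"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N": [["linguist"], ["judge"], ["student"]],
    "V": [["questions"], ["answers"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol into a list of words, bounding recursion for the demo."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        # Past the depth bound, prefer non-recursive rules so generation halts.
        rules = [r for r in rules if symbol not in r] or rules
    words = []
    for part in random.choice(rules):
        words.extend(generate(part, depth + 1, max_depth))
    return words

print(" ".join(generate()))
```

Every run yields a grammatical sentence drawn from an unbounded set, which is the sense in which linguistic structure is enabling rather than merely constraining.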

Language thus troubles both legal liberalism’s happy vision of rational individuals and its dark side, its Hobbesian view of society as the basic constraint on individuals; it calls into question the marketplace metaphor’s notions of both individual freedom and social order. The attraction of the marketplace metaphor in law is much the same as the attraction of marketplace theory itself: it posits a realm that is both free of arbitrary constraint, and yet ordered by the certain yet neutral and unequivocal rules of the marketplace. What the fact of linguistic structure calls into question is not merely the “freedom” of linguistic exchange but also its certainty, its divisibility from “arbitrary” external restraints and interference.

When MacKinnon argues that pornography is a form of action, not of speech, or when Matsuda argues that the context of racism and the subjective experiences of minorities in the U.S. ought to be a primary consideration in the creation and interpretation of hate speech laws, in the long run what motivates these scholars is not just a desire for specific exceptions to an otherwise intact First Amendment doctrine.[14] The suggestion is not simply that pornography is so damaging, or that the specific horrors of slavery and its legacy of racism so evil that unusual exceptions to free speech protection are called for (though the evils of rape-culture and racism very well might be the most urgent problems in the U.S. today). Rather, the suggestion, at least implicitly, is that the evils of rape-culture and contemporary racism force us, or should force us, to fundamentally reconsider how American law thinks about freedom, speech, and their regulation.

Furthermore, the critique of the oppositions that underpin free speech law such as speech and action, rules and context, or politics and law, need not be read as a simple denial that any differences exist. It is obviously not the case that there is no difference between slighting someone with a racial epithet and hitting them in the head, or between decisionmaking in courts and decisionmaking in legislatures. The argument is rather that these differences are neither clear nor generalizable enough to coherently underwrite a system of decisionmaking that claims to be able to transcend context and achieve the neutrality that is the goal of law in the first place.

Inquiry does not come to an end when one accepts the criticisms of the formalist First Amendment framework, and acknowledges the inevitability of politics and context. Stanley Fish’s quip notwithstanding, there is such a thing as free speech. If something is not what we think it is, it does not follow that it does not exist. Free speech is one of the major and most influential political and legal discourses of this century; for better or worse, it has helped make American society, our world, what it is. So the task is to rethink the character of free speech, to specify its historical context and political incidence. This is a large task; here I can only speculate about one aspect of the historical context of free speech, its relation to notions of the rule of law, and one aspect of its political incidence, its relations to social class.

The concept of a neutral, objective system of law that transcends politics is not just an abstraction important to lawyers and judges. (Lawyers and judges, in fact, are often acutely aware of just how political and unstable legal interpretation can sometimes be on a day-to-day basis.) A faith in the neutral rule of law is an important element of American culture, of the popular imagination. Evidence for this can be seen in the way that legal institutions and documents are more often celebrated, more often used to define American democracy, than political institutions and accomplishments. One might think, for example, that in an electoral democracy the most important historical event, the event most widely celebrated, would be the extension of the vote to the majority of the population. Yet most citizens do not know the amendment or the year in which the vote was extended to women, much less the history of the long political struggles that led to the passage of the nineteenth amendment in 1920. On the other hand, the Constitution is regularly celebrated in fora ranging from scholarly conferences to reverential Philip Morris ads, even though that hallowed document underwrote a legal system that upheld slavery for three quarters of a century, excluded women from voting for more than half a century after that, and did not come to rigorously protect political dissent until about fifty years ago. Nonetheless, American culture tends to worship the Constitution and remain ignorant of the history of universal suffrage. The story of the Constitution is a story of law, whereas the story of women’s suffrage is a story of protracted political struggle. And in some ways, at least, mainstream American political culture worships the former more than the latter.

What is the substance of this worship? What makes law neutral, and how does it support democracy? The short answer might be that if a society makes its decisions according to fixed rules instead of individual or collective whims, individuals will be less able to gain systematic advantage over others. The long answer would involve an extended and controversial discussion of a large chunk of the literature of legal theory and political science. But there is a mid-range answer based in historical observations, which suggests that in the U.S. two patterns of argument or logics have tended to shape legal decisionmaking, particularly in this century. One logic has been variously called formalist, classical, bright line, rule-based, or simply legal justice; the other, standards-based, revisionist, policy oriented, realist, or substantive justice.[15]

Arguably, the First Amendment has become the centerpiece of the American faith in the rule of law in this century, and not coincidentally, First Amendment law is also highly formalist. Formalism is not simply absolutism, a belief that there should be no exceptions. It is more a way of thinking about what law and legal interpretation are and how they work. (Describing the ACLU’s position on the First Amendment as “absolutist” is thus a bit of a red herring.) In at least many of its variations, formalism involves the claim that law is apolitical and neutral because it rests on a rigid, formal model, based on an ideal of axiomatic deduction from rules and unequivocal, “bright line” legal distinctions. The role of law, then, is to locate and uphold clear boundaries–bright lines–between the rights of individuals and between individuals and the state. Legal language and legal expertise are thought valuable precisely because they provide fixed, rigorous meanings unsullied by the political and social winds of the moment. Given a certain set of legal rules and a certain legally defined situation, it is assumed, a properly trained judge or lawyer, within certain boundaries, can use expertise in legal language and reasoning to arrive at, or at least approximate, the correct interpretation, which is generally a matter of pinpointing exactly where the boundaries lie.

Policy oriented decisionmaking, in contrast, tends to be context sensitive, accepting of blurry boundaries, functionalist, and messier. It is also much more common in legal decisionmaking than popular wisdom would suggest. In policy argument, justice is thought to be best served by subtle, well-informed analyses of particular contexts and judicial “balancing” of competing interests and principles; rights and values are treated, not as hard rules distinguished by bright lines, but as general standards that can be differentially implemented according to context. Administrative law, such as that involved in enacting the Federal Communication Commission’s public interest standard for broadcasters, is a classic example of policy oriented decisionmaking. Brown v. Board of Education also includes some exemplary policy argument.

Policy-oriented decisionmaking sometimes is justified in terms of head-on attacks on formalism of the type associated with the critiques of free speech just discussed. Both in practice and in theory, the argument goes, the supposedly “bright line” distinctions upon which formalism is based are rarely if ever as bright as imagined. Stanley Fish’s polemic, “There is no such thing as free speech,” is a recent example of such a critique, but in some ways his position echoes, for example, Felix Cohen’s legal realist argument earlier in the century, in “Transcendental Nonsense and the Functional Approach.”[16]

It is important to note, however, that outside the academy policy-oriented legal decisionmaking has been justified less by theoretical criticisms of formalism as a whole and more by a sense that, in certain limited and specialized contexts, policy-oriented decisionmaking is simply practical. Formalism seems to be the place where our culture celebrates the ideal of the rule of law; policy argument seems to be the place where most of the detailed legal work of ordering society goes on. Policy argument dominates largely in domains unrelated to communication: the law of corporations, environmental law, urban planning, and so forth. The prominent example of policy logic in communication is probably government licensing of broadcast stations according to the public interest standard. Licensing was originally created because communication by radio waves was understood to be characterized by spectrum scarcity and other complicated and contingent technical matters, such as rapidly evolving technologies and the strategic needs of the military. Treating broadcasters differently than newspapers was thus thought to be simply called for by context, not because there was thought to be a formal right or principle at stake such as the public’s right of access to communication.

It is sometimes suggested that policy arguments began to replace formalist ones in legal argument somewhere around the turn of the century, and that formalism was finally defeated with the end of the Lochner era in 1937. On the level of legal metatheory, there may be truth to this, but in practice both logics persist today. Sometimes the two logics are associated with competing sides in a legal controversy. The argument that television violence ought to be censored because its measurably harmful effects on children outweigh considerations of free speech is a typical policy argument; arguing against such censorship because it would open the door to more serious restrictions of freedom of speech is to lean in a formalist direction. But the two logics are also often mixed in the context of any given argument. Conservatives argue that broadcast licensing violates free speech rights but also is inefficient in the context of new technologies; liberals argue that guaranteed citizen access to mass communications would be beneficial for industrial society but also should be treated as a “new First Amendment right.”[17]

So it is perhaps the case that what has been changing over the years is not simply a shift from one kind of argument to the other, but a shift in the “mix” of the two, a shift in how the two kinds of argument have been used in which cases. And here the historical literature suggests that, gradually in this century, the focus of formalist argumentation has shifted from the realm of property and contract to free speech. Up through the late nineteenth century, during what Mensch calls the classical era of jurisprudence, property was the central, formal right; in theory property was celebrated as the essence of legal liberalism, and in practice it was used aggressively in a wide variety of areas. Property rights were invoked to justify bans on speaking in public parks and on the picketing of factories during union drives, and to strike down turn-of-the-century social legislation. Gradually, this formalist application of property fell out of favor, and met its demise in the 1937 overturn of Lochner, during the New Deal.[18]

Perhaps it is not entirely coincidental that, as formalist notions of property declined, the formalist understanding of free speech rose. In a familiar history, the First Amendment was gradually elevated to its current legal status, both in case law and in the popular imagination. What has triumphed in this period is not a policy-oriented understanding of free speech (in spite of the best efforts of a long line of scholars from Alexander Meiklejohn to Sunstein), but a rigidly formalist one. So today, property rights advocates who would like to see a return to something like the Lochner-era interpretations of property, like Richard Epstein, argue that the rules applied to free speech should also be applied to property. Conversely, from somewhere towards the other end of the political spectrum, Cass Sunstein has called for “A New Deal for Free Speech,” in which the 1930s revisions of property law would be extended to communication.[19]

Why has formalism in legal discourse shifted from property and contract to free speech? At this point, I can only speculate. It is possible to put a cynical economic interpretation on the shift: formal interpretations of property were abandoned because they became increasingly impractical in the face of the bureaucratic corporate form of business and other late nineteenth and early twentieth century economic developments. Conversely, the soap box speaker became sanctified in law precisely during the historical period in which the soap box ceased being effective. In the nineteenth century, union organizers, pacifists, and other “radicals” all made good use of the soap box–of face-to-face speaking in public places–as a communicative tool, and were regularly arrested for doing so. In this century, however, the key to popular communication has become access to radio, television, and other expensive technology-based mass media, which have rendered the soap box increasingly irrelevant as an organizing tool. A formalist interpretation of the First Amendment grants symbolic protection to soap boxes while in practice protecting media corporations much more effectively than dissidents.

Such an account of the shift, however, risks a functionalist tautology (explaining historical events in terms of the needs they serve for the power bloc) and fails to account for the imaginative power of First Amendment formalism. So a more comprehensive explanation might add two observations. First, from a distance, formalism is satisfying to a legal liberal vision of the rule of law, whereas policy argument can appear as arbitrary, obscure, and haughtily technocratic. College sophomores have little trouble understanding why it might be good for the rule of law to protect Nazis marching in Skokie, but it takes a lot of effort to convince them of the grand principles at stake in, say, the regulation requiring TV stations to charge political candidates the same rate for advertising time they charge their most favored advertiser instead of their standard rates. Second, from up close, from the perspective of those involved in everyday, small legal decisions, formalism is frequently impractical, whereas policy-oriented decisions seem reasonable and pragmatic. Few suburban homeowners would take kindly to the suggestion that their neighbors should be allowed to raise pigs or let their lawns go to weed on the grounds that to do so would be to uphold the sanctity of formal property rights.

It seems, then, that the American polity wants a legal system that can satisfy both the desire for legitimacy provided by formalism and the “practical” effectiveness of policy-oriented decisionmaking. Perhaps, therefore, the formalist interpretation of the First Amendment became popular in part because it came to take property’s place as a symbol of legal clarity and formal justice. In both the popular and legal imaginations, the image of the property-holding yeoman farmer was gradually supplanted by that of the soap box speaker as the central archetype and emblem of the legally protected exercise of rights and freedoms in a democratic society.

1. Labor and Management

The polity, however, is not the public. The community of individuals who appreciate the formalist interpretation of free speech may include a wide range of people, such as lawyers, judges, politicians, journalists, professors, and many others in positions to directly or indirectly influence legal and political consciousness. And it includes a wide range of political positions: liberals at the ACLU seem to have little trouble agreeing with conservatives on the Supreme Court that flag burning is protected speech. But it certainly does not include everyone. The majority of the American public has a hard time seeing the justice of protecting flag burning. And this may not mean simply that the public disdains free speech. The ACLU reports that the majority of the complaints it receives come from workers who feel their speech has been restricted by their bosses–a kind of speech that the Supreme Court and the ACLU agree is not protected.

Elizabeth Mensch has remarked that, although many formerly bright lines have been blurred in twentieth century law, the boundary between capital and labor remains as bright and impermeable as ever.[20] The First Amendment, as it is currently interpreted, protects owners and managers more than individual speakers. It prevents government agencies from interfering with the speech of private agencies delineated by boundaries of ownership and management, not by individual human beings.

As a result, employees have essentially no free speech rights with regard to their employers, including employees of media businesses. When a journalist is told by an editor to drop a story because it is politically inflammatory, the journalist can find little comfort in First Amendment law. Network program practices departments engage in systematic and thorough censorship of scripts for television series with all the zeal (if not the same principles) of Communist Party apparatchiks. Under law, there is a sense in which A. J. Liebling’s bon mot–that freedom of the press is guaranteed only to those who own one–is literally true.

For all that, Liebling’s quip is an oversimplification. There are many limits on the power of media owners to influence content, such as the resistance of the community of professional journalists to owner manipulation on both ethical and self-interested grounds. Evidence suggests that, among some groups, there probably is a popular ethic of free speech in the U.S. that extends beyond the powers of owners and managers. When conservative newspaper tycoon Rupert Murdoch bought the left-wing Village Voice and tried to dismiss its editor, for example, the threat of a staff walkout forced him to back down, and he left the paper’s editorial content alone thereafter.[21]

2. Social Class and Linguistic Style

Bringing “popular ethics” into the discussion, however, brings us back to the second question suggested by Gates’ examples: why does it seem easier to pass rules prohibiting direct racial epithets than elaborate, formal statements? It is well established that linguistic style is associated with social class. Sociolinguist Basil Bernstein demonstrated that children from middle and professional classes tend to do better in school than working class students in part because they speak more often and more fluently in formal style, or what Bernstein calls “elaborated code.” Working class students, in contrast, tend to be more comfortable with, and are probably more fluent in, informal style, or what Bernstein calls “restricted code.”[22]

One style is not better than the other. Rather, each style is an adaptation to specific patterns of life and work. Informal style has the effect of stressing membership within a group; it is useful for interactions among people who are familiar with each other and work with each other on a regular basis, and thus live in “dense” social networks, i.e., high levels of interaction with a limited number of people. It has a high proportion of ellipsis and colloquialisms, not because such language is simpler, but because these take advantage of a higher degree of shared knowledge between speaker and listener. Similarly, it has a higher proportion of personal pronouns (you and they) and tag-questions soliciting agreement of the listener (nice day, isn’t it?), because these express a sense of cooperation and solidarity.[23]

Formal style, in contrast, is for people whose social networks are less dense, who regularly deal with strangers and thus communicate in contexts in which ellipsis and colloquialisms are more likely to generate confusion than solidarity. Similarly, formal style’s high proportion of subordinate clauses, passive verbs, and adjectives (besides connoting high-mindedness through its echo of Latin grammar) is an adaptation to the need to explain details comprehensively when speaker and listener do not share as much background knowledge and cannot easily rely on features of the extra-linguistic context. Interestingly, in spite of the frequency of passive verbs, formal style also contains a higher proportion of the pronoun “I.” This has the effect of imposing the speaker’s individuality on the utterance, of stressing her or his unique nature as a person, as opposed to expressing membership in a group. Some research suggests that formal style leads speakers to be judged as more intelligent and more educated, but also as less friendly and less likable, than informal style does.

It is not the case that working class people use only informal style and middle class people use only formal style. A garage mechanic will probably shift to formal speech when dealing with a customer irate over a bill, and only the most hopelessly pompous college professors use formal style when speaking with their friends and families. But mastery over the different styles is not evenly distributed. Bernstein’s work suggests that middle and professional class students’ relatively better skills and comfort with formal style functions as a form of what Bourdieu calls “cultural capital,” enhancing their life prospects.[24] Given the relation of style to the character of work, moreover, fluency in formal style (though not accent) is probably associated with a person’s present occupation, regardless of class background.

What does this have to do with free speech? James Carey has argued that the speech/action distinction in free speech law is an expression of distinctly middle class values and sensibilities. Carey tells the story of a middle class man who enters a working class bar and not long thereafter comes flying out the plate glass window; the man then says with astonishment, “but all I did was use words!” Carey’s point is that, to the working class individuals in the bar, words have power. For them, the difference between insulting someone’s mother and punching them in the nose is not as obvious or absolute as it is for the middle class person.

Carolyn Marvin has elaborated on these contrasting sets of values in our culture in terms of what she calls “text” and “body.”[25]

The First Amendment as currently interpreted is envisioned largely in terms of that which middle and professional class people have mastery over, abstract formal expression in speech and writing. This is why it is harder to censure Gates’ first example than the second. Within the community of people who share those values, there is something equalizing about free speech. But it should not be surprising that, for people who do not make a living that way, for workers and other people whose bodies are the source of their value to society, formalist protection of free speech may not make sense, and might even appear as simply another way that people with privileges (such as academics writing about free speech) exercise their power over people who don’t.

The analyses and arguments of this chapter do not offer resolutions to all of the many important debates among non-formalist theorists of freedom of speech, such as those between Gates and Matsuda et al. over campus hate speech codes. But the chapter does do two things. First, it tries to clarify some of the underlying principles and issues at stake in today’s debates over free speech, particularly the inevitability of context and the problems this poses for traditional formalist understandings of the rule of law. Second, it points in the direction of a rethinking of free speech based in context, and suggests two (among many possible) avenues to pursue: the historical shift of formalism from property to free speech, and the role of language and social class in both legal discourse and nonlegal situations. Clearly, these examples of context-based analysis are intended only to be suggestive. But what they suggest, it is hoped, is that this kind of inquiry, if expanded into rich and subtle contextual analyses, might indeed help resolve some debates and contribute to a more fully democratic, substantive interpretation of the role of free speech in law and culture.

[1]. Henry Louis Gates, “Let Them Talk,” The New Republic, Sept. 20 & 27, 1993, pp. 37-49: p. 45.

[2]. “Style” is the generally accepted sociolinguistic term for language varieties that can be classified on a continuum from formal to informal. The word “code” is used by Basil Bernstein, Class, Codes and Control, 2d edition (Boston: Routledge & K. Paul, 1974).

[3]. William Labov, “The Logic of Nonstandard English,” in Giglioli (ed.) Language and Social Context (Penguin, 1972), pp. 179-216.

[4]. For a sociolinguistically informed analysis of the role of linguistic style during arrest and interrogation see, Janet E. Ainsworth, “In a Different Register: The Pragmatics of Powerlessness in Police Interrogation,” Yale Law Journal, 103 (November, 1993): 259-322.

[5]. Mark Kelman, A Guide to Critical Legal Studies (Cambridge, Mass.: Harvard University Press, 1987), p. 12 and passim.

[6]. Barbara Johnson, A World of Difference (Baltimore: Johns Hopkins Univ. Press, 1987), p. 6.

[7]. First National Bank of Boston v. Bellotti, 435 U.S. 765, 776 (1978).

[8]. American Booksellers Ass’n v. Hudnut, 771 F.2d 323 (7th Cir. 1985), aff’d, 475 U.S. 1001 (1986), p. 329; quoted in Stanley Fish, “Fraught With Death: Skepticism, Progressivism, and the First Amendment,” University of Colorado Law Review, 64 (Fall 1993): 1061-1086, p. 1065.

[9]. See Ainsworth, “In a Different Register,” note 15: “Austin initially adopts the intuitively appealing assumption that constative utterances, unlike performatives, are true or false. Having set up these opposing categories of performative and constative utterances, Austin ultimately deconstructs this dichotomy” with his analysis of indirect performatives.

[10]. Fish, “Fraught With Death,” p. 1061.

[11]. Mari J. Matsuda, Charles R. Lawrence III, Richard Delgado, and Kimberle Williams Crenshaw, Words that Wound: Critical Race Theory, Assaultive Speech, and the First Amendment (Boulder, Colorado: Westview Press, 1993).

[12]. The classic and extreme version of this notion is the “Sapir-Whorf hypothesis” named after linguists Edward Sapir and Benjamin Whorf. For a post-structuralist variation of it, see Rosalind Coward and John Ellis, Language and Materialism: Developments in Semiology and the Theory of the Subject (London: Routledge and Kegan Paul, 1977).

[13]. Roland Barthes, Image, Music, Text (New York: Hill and Wang, 1977), p. 159.

[14]. Catharine A. MacKinnon, Only Words (Cambridge, Mass: Harvard University Press, 1993).

[15]. Elizabeth Mensch divides legal thought into classical and realist or revisionist forms. Duncan Kennedy talks of the distinction between rules and standards. Roberto Unger speaks of “legal justice” and “substantive justice.” See Elizabeth Mensch, “The History of Mainstream Legal Thought” in David Kairys, ed., The Politics of Law: A Progressive Critique (New York: Pantheon, 1982), pp. 18-39; Duncan Kennedy, “Form and Substance in Private Law Adjudication,” Harvard Law Review, 89 (1976): 1685, pp. 1687-89; see also Roberto M. Unger, Knowledge and Politics (New York: The Free Press, 1975), p. 91.

[16]. Stanley Fish, “There’s No Such Thing As Free Speech And It’s a Good Thing Too,” Boston Review, Feb. 1992, p. 3; Felix Cohen, “Transcendental Nonsense and the Functional Approach,” Columbia Law Review 35 (1935): 809.

[17]. For example, Jerome A. Barron, Freedom Of The Press For Whom? The Right Of Access To Mass Media (Bloomington: Indiana University Press 1973).

[18]. Jennifer Nedelsky, Private Property and the Limits of American Constitutionalism: The Madisonian Framework and Its Legacy (Chicago: University of Chicago Press, 1990).

[19]. Cass R. Sunstein, “Free Speech Now,” The University of Chicago Law Review, 59 (Winter 1992): 255; Richard A. Epstein, “Property, Speech, and the Politics of Distrust,” The University of Chicago Law Review, 59 (Winter 1992): 41.

[20]. Mensch, “The History of Mainstream Legal Thought,” p. 26.

[21]. Alex S. Jones, “At Village Voice, A Clashing Of Visions,” The New York Times, June 28, 1985, Section B; p. 5, Column 1.

[22]. Bernstein, Class, Codes And Control.

[23]. This survey of Bernstein’s work relies heavily on Peter Trudgill, Sociolinguistics: An Introduction to Language and Society (London: Penguin Books, 1983, revised edition), pp. 132-140.

[24]. Pierre Bourdieu, Distinction: A Social Critique of the Judgment of Taste, trans. R. Nice (London: Routledge & Kegan Paul, 1984).

[25]. Carolyn Marvin, “Theorizing the Flagbody: Symbolic Dimensions of the Flag Desecration Debate, or Why the Bill of Rights Does Not Fly in the Ballpark,” Critical Studies in Mass Communication, 8 (June 1991): 120-121.

[26]. Social class is of course a complex construct, and is used here suggestively, not comprehensively or precisely. Marvin points out that the values of “body” in fact extend to and in many ways are exemplified by military personnel, a group which overlaps with but is not limited to working class individuals.

More here:
Free Speech, Language, and the Rule of Law

First Amendment of our country's Bill of Rights

 Misc  Comments Off on First Amendment of our country's Bill of Rights
Sep 10, 2015
 

Freedom of Speech and of the Press: The First Amendment allows citizens to express and to be exposed to a wide range of opinions and views. It was intended to ensure a free exchange of ideas even if the ideas are unpopular.

Freedom of speech encompasses not only the spoken and written word, but also all kinds of expression (including non-verbal communications, such as sit-ins, art, photographs, films and advertisements). Under its provisions, the media, including television, radio and the Internet, is free to distribute a wide range of news, facts, opinions and pictures. The amendment protects not only the speaker, but also the person who receives the information. The right to read, hear, see and obtain different points of view is a First Amendment right as well.

But the right to free speech is not absolute. The U.S. Supreme Court has ruled that the government sometimes may be allowed to limit speech. For example, the government may limit or ban libel (the communication of false statements about a person that may injure his or her reputation), obscenity, fighting words, and words that present a clear and present danger of inciting violence. The government also may regulate speech by limiting the time, place or manner in which it is made. For example, the government may require activists to obtain a permit before holding a large protest rally on a public street.

Freedom of Assembly and Right to Petition the Government: The First Amendment also protects the freedom of assembly, which can mean physically gathering with a group of people to picket or protest, or associating with one another in groups for economic, political or religious purposes.

The First Amendment also protects the right not to associate, which means that the government cannot force people to join a group they do not wish to join. A related right is the right to petition the government, including everything from signing a petition to filing a lawsuit.

Freedom of Religion: The First Amendment’s free exercise clause allows a person to hold whatever religious beliefs he or she wants, and to exercise that belief by attending religious services, praying in public or in private, proselytizing or wearing religious clothing, such as yarmulkes or headscarves. Also included in the free exercise clause is the right not to believe in any religion, and the right not to participate in religious activities.

The establishment clause, in turn, prevents the government from creating a church, endorsing religion in general, or favoring one set of religious beliefs over another. As the U.S. Supreme Court decided in 1947 in Everson v. Board of Education of Ewing Township, the establishment clause was intended to erect “a wall of separation between church and state,” although the degree to which government should accommodate religion in public life has been debated in numerous Supreme Court decisions since then.

See the original post:
First Amendment of our country's Bill of Rights

Human Genetics Alert – The Threat of Human Genetic Engineering

 Human Genetic Engineering  Comments Off on Human Genetics Alert – The Threat of Human Genetic Engineering
Jul 28, 2015
 

David King

The main debate around human genetics currently centres on the ethics of genetic testing, and possibilities for genetic discrimination and selective eugenics. But while ethicists and the media constantly re-hash these issues, a small group of scientists and publicists are working towards an even more frightening prospect: the intentional genetic engineering of human beings. Just as Ian Wilmut presented us with the first clone of an adult mammal, Dolly, as a fait accompli, so these scientists aim to set in place the tools of a new techno-eugenics before the public has ever had a chance to decide whether this is the direction we want to go in. The publicists, meanwhile, are trying to convince us that these developments are inevitable. The Campaign Against Human Genetic Engineering has been set up in response to this threat.

Currently, genetic engineering is only applied to non-reproductive cells (this is known as ‘gene therapy’) in order to treat diseases in a single patient, rather than in all their descendants. Gene therapy is still very unsuccessful, and we are often told that the prospect of reproductive genetic engineering is remote. In fact, the basic technologies for human genetic engineering (HGE) have been available for some time and at present are being refined and improved in a number of ways. We should not make the same mistake that was made with cloning, and assume that the issue is one for the far future.

In the first instance, the likely justifications of HGE will be medical. One major step towards reproductive genetic engineering is the proposal by US gene therapy pioneer French Anderson to begin doing gene therapy on foetuses, to treat certain genetic diseases. Although not directly targeted at reproductive cells, Anderson’s proposed technique poses a relatively high risk that genes will be ‘inadvertently’ altered in the reproductive cells of the foetus, as well as in the blood cells which he wants to fix. Thus, if he is allowed to go ahead, the descendants of the foetus will be genetically engineered in every cell of their body. Another scientist, James Grifo of New York University, is transferring cell nuclei from the eggs of older women into the eggs of younger women, using techniques similar to those used in cloning. He aims to overcome certain fertility problems, but the result would be babies with three genetic parents, arguably a form of HGE. In addition to the two normal parents, these babies will have mitochondria (gene-containing subcellular bodies which control energy production in cells) from the younger woman.

Anderson is a declared advocate of HGE for medical purposes, and was a speaker at a symposium last year at UCLA, at which advocates of HGE set out their stall. At the symposium, which was attended by nearly 1,000 people, James Watson, of DNA discovery fame, advocated the use of HGE not merely for medical purposes, but for ‘enhancement’: ‘And the other thing, because no one really has the guts to say it, I mean, if we could make better human beings by knowing how to add genes, why shouldn’t we do it?’

In his recent book, Re-Making Eden (1998), Princeton biologist Lee Silver celebrates the coming future of human ‘enhancement’, in which the health, appearance, personality, cognitive ability, sensory capacity, and life-span of our children all become artifacts of genetic engineering, literally selected from a catalog. Silver acknowledges that the costs of these technologies will limit their full use to only a small ‘elite’, so that over time society will segregate into the “GenRich” and the “Naturals”:

“The GenRich – who account for 10 percent of the American population – all carry synthetic genes… that were created in the laboratory …All aspects of the economy, the media, the entertainment industry, and the knowledge industry are controlled by members of the GenRich class…Naturals work as low-paid service providers or as labourers, and their children go to public schools… If the accumulation of genetic knowledge and advances in genetic enhancement technology continue … the GenRich class and the Natural class will become…entirely separate species with no ability to cross-breed, and with as much romantic interest in each other as a current human would have for a chimpanzee.”

Silver, another speaker at the UCLA symposium, believes that these trends should not and cannot be stopped, because to do so would infringe on liberty.

Most scientists say that what is preventing them from embarking on HGE is the risk that the process will itself generate new mutations, which will be passed on to future generations. Official scientific and ethical bodies tend to rely on this as the basis for forbidding attempts at HGE, rather than any principled opposition to the idea.

In my view, we should not allow ourselves to be lulled into a false sense of security by this argument. Experience with genetically engineered crops, for example, shows that we are unlikely ever to arrive at a situation when we can be sure that the risks are zero. Instead, when scientists are ready to proceed, we will be told that the risks are ‘acceptable’, compared to the benefits. Meanwhile, there will be people telling us loudly that since they are taking the risks with their children, we have no right to interfere.

See the original post here:

Human Genetics Alert – The Threat of Human Genetic Engineering

Twenty-second Amendment to the United States Constitution

 Second Amendment  Comments Off on Twenty-second Amendment to the United States Constitution
Apr 14, 2015
 

The Twenty-second Amendment of the United States Constitution sets a term limit for election to the office of President of the United States. Congress passed the amendment on March 21, 1947. It was ratified by the requisite number of states on February 27, 1951.

Section 1. No person shall be elected to the office of the President more than twice, and no person who has held the office of President, or acted as President, for more than two years of a term to which some other person was elected President shall be elected to the office of the President more than once. But this article shall not apply to any person holding the office of President when this article was proposed by the Congress, and shall not prevent any person who may be holding the office of President, or acting as President, during the term within which this article becomes operative from holding the office of President or acting as President during the remainder of such term.
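The two-part rule in Section 1 can be sketched as a small decision function. This is a simplified illustration only; it ignores, for example, the grandfather clause exempting the person holding office when the amendment was proposed:

```python
def eligible_for_election(times_elected: int, years_of_anothers_term: float) -> bool:
    """Simplified sketch of the Twenty-second Amendment, Section 1."""
    # "No person shall be elected to the office of the President more than twice"
    if times_elected >= 2:
        return False
    # A person who served more than two years of a term to which some other
    # person was elected may be elected only once.
    if years_of_anothers_term > 2 and times_elected >= 1:
        return False
    return True

print(eligible_for_election(1, 2))  # True: exactly two years of another's term is allowed
print(eligible_for_election(1, 3))  # False: more than two years, so only one election
```

Read this way, the longest anyone could now serve is just under ten years: slightly less than two years of a predecessor's term plus two full elected terms.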

Section 2. This article shall be inoperative unless it shall have been ratified as an amendment to the Constitution by the legislatures of three-fourths of the several states within seven years from the date of its submission to the states by the Congress.
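As a back-of-the-envelope check of Section 2's threshold (assuming the 48 states of 1947, before Alaska and Hawaii joined the Union):

```python
import math

states_in_1947 = 48  # states in the Union when Congress proposed the amendment
needed = math.ceil(states_in_1947 * 3 / 4)
print(needed)  # 36 state legislatures required for ratification
```

Thirty-six states did ratify within the seven-year window, completing the process in under four years.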

Historians point to George Washington’s decision not to seek a third term as evidence that the founders saw a two-term limit as a bulwark against monarchy, although his Farewell Address suggests that he declined to seek re-election because of his age. Thomas Jefferson also contributed to the convention of a two-term limit when he wrote in 1807, “if some termination to the services of the chief Magistrate be not fixed by the Constitution, or supplied by practice, his office, nominally four years, will in fact become for life.”[1] Jefferson’s immediate successors, James Madison and James Monroe, adhered to the two-term principle as well. In a new political atmosphere several years later, Andrew Jackson continued the precedent.

Prior to Franklin D. Roosevelt, few Presidents attempted to serve for more than two terms. Ulysses S. Grant sought a third term in 1880 after serving from 1869 to 1877, but narrowly lost his party’s nomination to James Garfield. Grover Cleveland tried to win a third term (and second consecutive term) in 1896, but did not have enough support in the wake of the Panic of 1893. Cleveland lost support to the Silverites led by William Jennings Bryan, and declined to head the Gold Democrat ticket, though he did endorse the Gold Democrats. Theodore Roosevelt succeeded to the presidency upon William McKinley’s assassination and was himself elected in 1904 to a full term, serving from 1901 to 1909. He sought to be elected to another (non-consecutive) term in 1912 but lost to Woodrow Wilson. Wilson himself tried to win a third term in 1920 by deadlocking the convention: he deliberately blocked the nomination of his Secretary of the Treasury and son-in-law, William Gibbs McAdoo. However, Wilson was too unpopular even within his own party at the time, and James M. Cox was nominated. In 1940, Franklin D. Roosevelt became the only president to be elected to a third term; supporters cited the war in Europe as a reason for breaking with precedent.

In the 1944 election, during World War II, Roosevelt won a fourth term but suffered a cerebral hemorrhage and died in office the following year. Thus, Franklin Roosevelt was the only President to have served more than two terms. Near the end of the 1944 campaign, Republican nominee Thomas E. Dewey, the governor of New York, announced support of an amendment that would limit future presidents to two terms. According to Dewey, “Four terms, or sixteen years, is the most dangerous threat to our freedom ever proposed.”[2]

The Republican-controlled 80th Congress approved a 22nd Amendment in March 1947;[3] it was signed by Speaker of the House Joseph W. Martin and acting President pro tempore of the Senate William F. Knowland.[4] Nearly four years later, in February 1951, enough states ratified the amendment for its adoption. While excluded from the amendment’s restrictions, then-President Harry S. Truman ultimately decided not to seek another term in 1952.[3]

The Congress proposed the Twenty-second Amendment on March 24, 1947, and ratification was completed on February 27, 1951, when the requisite three-fourths of the state legislatures had approved it. The amendment was subsequently ratified by several additional states, while a small number of states voted to reject it.

Read more here:
Twenty-second Amendment to the United States Constitution


Pence signs 'fix' for religious freedom law

 Freedom  Comments Off on Pence signs 'fix' for religious freedom law
Apr 8, 2015
 

Top Indiana Republican lawmakers overhauled their week-old religious freedom law Thursday with a follow-up measure intended to ease concerns driven by businesses that it could lead to discrimination. Gov. Mike Pence then signed it into law.

The changes appear to have tamped down some of the criticism — but in doing so Pence and lawmakers infuriated social conservative activists and set the stage for a bigger fight next year over expanding Indiana’s anti-discrimination law to cover gays and lesbians.

Republican legislative leaders unveiled their series of changes Thursday morning to the law that triggered intense backlash from businesses, sports associations, pro-LGBT groups and even fiscally-focused conservatives when Pence signed it last week.

The GOP-dominated House and Senate approved a legislative fix, which was added into an unrelated bill, on Thursday, sending it to Pence’s desk almost immediately.

Despite last-minute lobbying from conservative groups like Indiana Right to Life to get Pence to veto the fix, the governor signed it Thursday evening.

“In the midst of this furious debate, I have prayed earnestly for wisdom and compassion, and I have felt the prayers of people across this state and across this nation. For that I will be forever grateful,” Pence said in a statement.

“There will be some who think this legislation goes too far and some who think it does not go far enough, but as governor I must always put the interest of our state first and ask myself every day, ‘What is best for Indiana?'” he said. “I believe resolving this controversy and making clear that every person feels welcome and respected in our state is best for Indiana.”

The changes prohibit businesses from using the law as a defense in court for refusing “to offer or provide services, facilities, use of public accommodations, goods, employment, or housing” to any customers based on “race, color, religion, ancestry, age, national origin, disability, sex, sexual orientation, gender identity, or United States military service.”

It doesn’t accomplish what the law’s critics wanted most: Adding sexual orientation to the list of categories protected by Indiana’s anti-discrimination law.

But that debate, GOP legislators acknowledged, is coming soon. House Speaker Brian Bosma said the backlash against the religious freedom law has “opened many perspectives” and that the anti-discrimination law “needs to be discussed.”

See original here:
Pence signs 'fix' for religious freedom law

 Posted at 11:46 am

Hip Hop Illuminati 101 Part 2 BEYONCE AND JAY Z – Video

 Illuminati  Comments Off on Hip Hop Illuminati 101 Part 2 BEYONCE AND JAY Z – Video
Apr 01 2015
 



Hip Hop Illuminati 101 Part 2 BEYONCE AND JAY Z
Rebecca Holly Hood Scott, Amazon Top-Selling Author, Speaker, Activist. http:/.

By: Prher Pro

Read more:
Hip Hop Illuminati 101 Part 2 BEYONCE AND JAY Z – Video

Toronto’s Green Lotus Celebrates the Launch of their SEO Tools with a Party

 SEO  Comments Off on Toronto’s Green Lotus Celebrates the Launch of their SEO Tools with a Party
Mar 31 2015
 

Toronto, Ontario (PRWEB) March 31, 2015

Officially launched on March 10, 2015, Green Lotus SEO Tools is a suite of tools designed to help entrepreneurs, small business owners and marketing professionals develop and manage online brand presence and search engine optimization.

With a simple, user-friendly dashboard and an extensive do-it-yourself SEO Checklist, Green Lotus SEO Tools provide users with an easy-to-follow SEO strategy in the form of tasks. On-Site Audits, Keyword Research and Content Optimization tools help users begin to optimize their websites and ensure they are search engine friendly.

Green Lotus SEO Tools also provide a platform for conducting competitor research. Competitor data can be insanely helpful when it comes to keyword selection, landing page optimization and link building. With the Competitor Backlink Spy Tool users can view all links to competitors and start developing their own link building strategy. Competitor Content Analysis and Rank Spy Tools provide the ability to track keywords, rankings, paid advertisements and vertical metrics (including news, videos, images, etc), and help users remain competitive online.

Social metrics have become an important factor to consider in search engine optimization. Green Lotus SEO & Social Media Tools collect brand social data, tracking social metrics over time in an easy-to-view chart with the Social Monitoring Tool, and online branding with the Web Buzz Monitoring Tool.

Combined, the 30+ SEO Tools are a perfect suite for businesses focused on managing their own search engine optimization. A complimentary SEO Tools 30 Day Trial is available to the public, no credit card required!

Green Lotus Tools provide exceptional value at a low cost! Starting at $20/month on annual packages, these tools are not only insightful but affordable.

More about Green Lotus: Bassem Ghali is the driving force behind Green Lotus and has a knack for creating innovative online marketing strategies for small, medium and large businesses. Bassem is a Toronto Search Engine Marketing Strategist and Speaker with more than 8 years of experience managing online marketing strategies for some of Canada’s largest corporations, including Canadian Tire, Direct Energy, and Toronto Star – New in Homes.

Demonstrated success in online marketing has led to speaking engagements at various events including Search Engine Strategies (SES) Toronto, University of Toronto, Ryerson University, Humber College, American Marketing Association, SOHO Business Expo, Online Revealed Canada Conference, Newmarket Chamber of Commerce and more.

Green Lotus: 1 Yonge Street – Suite 1801. Toronto, Ontario M5E 1E5. Toll Free: 1 800-878-1667

Go here to read the rest:
Toronto’s Green Lotus Celebrates the Launch of their SEO Tools with a Party

Hasse Carlsson on Why He Chose to Be a Member of Talarföreningen NSA Sweden – Video

 NSA  Comments Off on Hasse Carlsson on Why He Chose to Be a Member of Talarföreningen NSA Sweden – Video
Mar 13 2015
 



Hasse Carlsson on Why He Chose to Be a Member of Talarföreningen NSA Sweden
Hasse explains why he chose to be a member of Talarföreningen, the National Speakers Association of Sweden.

By: Talarföreningen National Speakers Association of Sweden

Read more:
Hasse Carlsson on Why He Chose to Be a Member of Talarföreningen NSA Sweden – Video

House Freedom Caucus hires first staffer

 Freedom  Comments Off on House Freedom Caucus hires first staffer
Mar 13 2015
 

The House Freedom Caucus, the nascent group of conservative lawmakers who’ve frustrated GOP leaders, has hired its first staff member as it gears up for upcoming spending battles.

Steve Chartan will serve as executive director of the Freedom Caucus, the group said Friday.

“Steve’s experience on the Senate Steering Committee makes him an ideal executive director for the House Freedom Caucus,” the group’s chairman, Rep. Jim Jordan (R-Ohio), said in a statement.

“His relationships on and off the Hill will help HFC to successfully promote common-sense solutions that benefit the countless Americans who feel that they are forgotten by Washington.”

Chartan’s hiring is yet another sign that the Freedom Caucus, which launched in January, is taking additional steps to organize ahead of looming fights within the GOP over the budget, lifting the debt ceiling and whether the Export-Import Bank should be renewed.

The Freedom Caucus, which Jordan described to The Wall Street Journal as an agile, active group of about 40 Republicans devoted to limited-government principles, helped derail Speaker John Boehner’s GOP plan to extend funding for the Homeland Security Department for three weeks.

House Republicans’ failure to pass the bill meant Boehner, an Ohio Republican like Jordan, was forced to rely on Democrats to prevent a shutdown at the agency, raising more questions about the Speaker’s political vulnerabilities.

The invite-only group is seen as a rival to the much larger, more inclusive Republican Study Committee, which is closer to Boehner and his team. Majority whip Steve Scalise (R-La.) had served as RSC chairman before stepping down to take the No. 3 job in leadership. And current RSC Chairman Bill Flores (R-Texas) has expressed a willingness to work with leadership behind the scenes.

Follow this link:
House Freedom Caucus hires first staffer

 Posted at 2:46 pm

Bitcoin Cryptocurrency Crash Course with Andreas …

 Cryptocurrency  Comments Off on Bitcoin Cryptocurrency Crash Course with Andreas …
Feb 07 2015
 

The Jefferson Club, Silicon Valley

Website: https://www.jeffersonclub.org

Meetup.com: https://www.meetup.com/Jefferson-Club…

The Jefferson Club is for anyone interested in making sound and informed long term, strategic decisions in view of the current economic, social, and political climate. Our goal is to better understand the direction of the economy, society, and politics, both in terms of trends and surprises. If you seek a better understanding of what’s happening in the world and how it will impact you and your family, then join us.

——————————————————- Information About The Speaker: Andreas Antonopoulos

Website: https://antonopoulos.com

Bio: A former co-founder of Nemertes Research, an industry analyst firm, Andreas is a broad-reaching technologist who is well-versed in many technology subjects. He is a serial tech entrepreneur, having launched businesses in London, New York, and the California Bay Area. He has earned degrees in Computer Science, Data Communications and Distributed Systems. With experience ranging from hardware and electronics to high-level business and financial systems technology consulting, and decades as CTO/CIO/CSO in many companies, he combines authority and deep knowledge with an ability to make complex subjects easy to understand. He often brings a fresh perspective to a topic, with surprising insights and an ability to identify underlying principles and connections between different topics. More than 200 of his articles on security, cloud computing and data centers have been published in print and syndicated worldwide. His areas of expertise include Bitcoin, crypto-currencies, information security, cryptography, cloud computing, data centers, Linux, open source and robotics software development. He has also been CISSP certified for 12 years.

As a bitcoin entrepreneur, Andreas has founded three bitcoin businesses and launched several community open source projects. He often writes articles and blog posts on bitcoin, is a permanent host on Let’s Talk Bitcoin, and is a prolific public speaker at technology events; he was a recent speaker/presenter at the San Jose Bitcoin 2013 Conference.

Video Produced by Come Correct Media http://www.comecorrectmedia.com http://www.youtube.com/comecorrectmedia

Follow this link:
Bitcoin Cryptocurrency Crash Course with Andreas …

 Posted at 6:50 am

Does the First Amendment need a New Deal?

 Misc  Comments Off on Does the First Amendment need a New Deal?
Jan 27 2015
 

Lindsay France

Adam Liptak, Supreme Court correspondent for The New York Times, delivers a 2015 Frank Irvine Endowed Lecture (FIELS), “A New Deal for the First Amendment?”

The terrorist attack on the office of Charlie Hebdo, a satirical magazine in Paris, sparked a heated debate on freedom of speech around the world. In America, this new dialogue was a continuation of a much longer, equally passionate debate on First Amendment rights, one that has been taking place in the Supreme Court.

Adam Liptak, the Supreme Court correspondent for The New York Times, discussed the First Amendment in “A New Deal for the First Amendment?” at Cornell Law School Jan. 22.

Liptak began his talk with a 2011 Supreme Court case, Sorrell v. IMS Health Inc., which determined the legality of selling a doctor’s prescription information. The case was decided using the First Amendment, causing Justice Stephen Breyer to accuse the court of Lochnerism, a reference to the contentious 1905 Lochner v. New York decision based on the amendment. The Lochner case, Liptak explained, is often placed in the anti-canon of Supreme Court cases, along with other notorious decisions such as Dred Scott and Plessy v. Ferguson.

What made the Sorrell and Lochner cases so controversial, Liptak continued, is how the law was interpreted and applied. In Sorrell, prescription information was a form of speech, which could be protected, but it was also an economic activity, which could be regulated. A similar duality existed in Lochner. Reconciling this duality led to the controversy: The state legislatures tried to impose economic regulations and the contradicting court decision was dismissed as judicial activism.

Liptak mentioned another possible consideration in applying the law: If judging is, as he phrased it, “weighing competing interests and putting a thumb on the scale in favor of marginalized speech,” then should a deciding factor in applying the First Amendment be the relative power of the speaker? Though Liptak did not have an answer to this question, an audience member raised the possibility that a power-based consideration could lead to influential organizations, like major newspapers, being censored.

This brought Liptak to the dangers of applying the First Amendment liberally. “I practiced First Amendment law for 14 years, and I drank the Kool-Aid,” he said, describing his previous faith in the amendment. Over the years, many important decisions have been made using it, including allowing protestors near funerals and decriminalizing flag burning. However, he added, “There is something troubling we should think about: economic regulations being struck down on the basis of free speech.”

The Lochner era, which was characterized by such decisions, ended in the 1930s with the New Deal. To end our modern era of First Amendment law, Liptak suggested, a new New Deal is needed.

The lecture was presented by the Law School’s Frank Irvine Endowed Lecture Series.

Original post:
Does the First Amendment need a New Deal?

 Posted at 1:41 pm

Crowd calls for prayer and justice at Freedom Corner

 Freedom  Comments Off on Crowd calls for prayer and justice at Freedom Corner
Jan 05 2015
 

They came by the dozens, totaling upward of 300, some carrying signs bearing “Together We Can” and “Justice for All,” most wearing their Sunday best, many holding hands or raising them in supplication.

The members of the 19 churches of the Hill District Ministers Alliance turned out Sunday afternoon, the first Sunday of the new year, at the historic Freedom Corner in the Hill District to get the ear not only of elected officials but of God, said organizer Victor Grigsby, chairman of the alliance and pastor of the Central Baptist Church in the Hill District.

There was stirring talk and singing. Mostly, there was prayer.

“There’s an urgency in our community. We felt a strong need to voice our concerns. We wanted to talk about injustice. We wanted to pray, as a community,” Rev. Grigsby said. Pittsburgh Police Chief Cameron McLay and Cmdr. Eric Holmes attended. “I’m here because I support anything that brings community together [to support peace],” Cmdr. Holmes said.

Recent national incidents involving the deaths of African-American citizens during encounters with police sparked the rally for justice and against violence, which began at 1:30 p.m. with a march down Centre Avenue to Freedom Corner, where speakers took turns at the microphone.

“The precipitating factors were [shootings in] Ferguson [Mo.] and Brooklyn, New York, but this is the way we wanted to begin 2015, the first Sunday in 2015, in a peaceful voicing of our concerns,” Rev. Grigsby said.

Speaker after speaker asked the group to join in prayer for government leaders, for funding of programs to benefit the community, for police, for peace.

Karen Kane: kkane@post-gazette.com or at 724-772-9180.

Go here to see the original:
Crowd calls for prayer and justice at Freedom Corner

Keynote Speaker: Ken Banks Presented by SpeakInc Freedom from Money – Video

 Freedom  Comments Off on Keynote Speaker: Ken Banks Presented by SpeakInc Freedom from Money – Video
Dec 11 2014
 



Keynote Speaker: Ken Banks Presented by SpeakInc Freedom from Money
Mobile Technology and Social/Environmental Change Expert – Where Technology Meets Anthropology, Conservation and Development. Ken Banks, Founder of kiwanja.net, devotes himself to the …

By: speakinc

Read the original post:
Keynote Speaker: Ken Banks Presented by SpeakInc Freedom from Money – Video

 Posted at 6:48 pm


