• 20 Most Unethical Experiments in Psychology

Humanity often pays a high price for progress and understanding, or so it seems from many famous psychological experiments. Human experimentation is a fraught topic in psychology. While some famous experiments left their subjects only temporarily distressed, others left participants with lifelong psychological problems. In either case, it’s natural to ask: what is ethical when it comes to science? Then there are the experiments involving children, animals, and subjects who were unaware they were being experimented on. How far is too far, if the result is a better understanding of the human mind and behavior? We think we’ve found 20 answers to that question with our list of the most unethical experiments in psychology.

  • Emma Eckstein
  • Electroshock Therapy on Children
  • Operation Midnight Climax
  • The Monster Study
  • Project MKUltra
  • The Aversion Project
  • Unnecessary Sexual Reassignment
  • Stanford Prison Experiment
  • Milgram Experiment
  • The Monkey Drug Trials
  • Facial Expressions Experiment
  • Little Albert
  • Bobo Doll Experiment
  • The Pit of Despair
  • The Bystander Effect
  • Learned Helplessness Experiment
  • Racism Among Elementary School Students
  • UCLA Schizophrenia Experiments
  • The Good Samaritan Experiment
  • Robbers Cave Experiment



Unethical experiments’ painful contributions to today’s medicine.

Chinese scientist He Jiankui reacts during a panel discussion after his speech at the Second International Summit on Human Genome Editing in Hong Kong on November 28, 2018.

CNN Films’ “Three Identical Strangers,” the astonishing story of triplets separated at birth who discover a dark secret about their past, premieres Sunday, January 27, at 9 p.m. ET.

Chinese scientist He Jiankui sent shockwaves around the world last year with his claim that he had modified twin babies’ DNA before their birth. The modification was made with gene editing tool CRISPR-Cas9, he said, and made the babies resistant to HIV. Scientists from China and around the world spoke out about the experiment, which many say was unethical and not needed to prevent the virus. The scientist had also been warned by peers not to go down this path.

His experiments, which are still clouded with the uncertainty of his claims and his whereabouts, open a Pandora’s box of questions around ethics in experiments with humans – even though these dilemmas aren’t new.

Historic examples of human experimentation include wartime atrocities by Nazi doctors that tested the limits of human survival. Another experiment led to the creation of the hepatitis B vaccine prototype. Wendell Johnson, who made several contributions to the field of communication disorders, tried to induce stuttering in normally fluent children. In the 1940s, prisoners in Illinois were infected with malaria to test anti-malaria drugs.


Such experiments have been criticized as unethical but have advanced medicine and its ethical codes, such as the Nuremberg Code.

When He made his claim of genetically altering humans, the response from the global medical community was swift and condemning.

“It is out of the question that the experiment is unethical,” said Jing Bao Nie, professor of bioethics at the University of Otago in New Zealand. Without “medical necessity, it is not ethical to carry out” gene editing.

Sarah Chan, director of the University of Edinburgh’s Mason Institute for Medicine, Life Sciences and the Law, adds that the balance of risks and benefits makes it hard to justify this experiment. Genome editing of embryos is still not fully established, and “virtually all scientists will say we don’t yet know enough about it to be able to recommend that we just go ahead with it clinically,” she said.


If it were a case of a life-threatening disease that would cause tremendous pain, and the only way to alleviate it were a risky experimental procedure, then Chan thinks that “given the immense benefit we could produce, perhaps taking that risk is justified.”

When it comes to medical ethics, different principles need to be weighed against one another by an institutional review board, which decides whether experiments involving human participants may go ahead.

A definition of medical ethics

Medical ethicists and researchers commonly hold that there are seven general rules for an ethical experiment involving humans, explained Govind Persad, assistant law professor at the University of Denver.

Experiments should be socially valuable and scientifically valid, and people have to be selected fairly and respected. The risks and benefits to participants and the benefits to society need to be weighed against each other, and there needs to be an independent outside review of the ethics of the experiment, Persad said.

The risks-and-benefits equation sometimes includes third-party considerations, such as tests of a vaccine containing a virus that can “shed” and infect others who are not research participants, Persad said. Research on the smallpox vaccine is one example.

If He’s experiment produced any mutations, these could be passed down to the twins’ children and then diffuse into the general population, which didn’t consent to that change, Persad explained.

“I don’t know how large of a risk that is,” Persad said. “Because again, it depends on the odds of the mutation, whether the mutation was one that would end up staying in the population or whether it would be selected out over time.”

Many national and international protocols, like the 2005 UN Universal Declaration on Bioethics and Human Rights, include some of these seven principles, Persad said. But as with most international documents, these protocols are not legally binding.

The first document outlining how research should be done in a fair way was a product of Nazi war atrocities.

The Nuremberg Trials began November 20, 1945, in Germany.

During the 1940s, Nazi doctors conducted human experiments on prisoners in concentration camps. In these experiments, which the Jewish Virtual Library describes as “acts of torture,” prisoners were forced into danger; nearly all endured mutilation and pain, and many experiments had fatal outcomes. Most famously, experiments were conducted by Dr. Josef Mengele, who was interested in twins and performed “agonizing and often lethal” research on them.

Renate Guttmann was one of the “Mengele Twins,” according to the Holocaust Encyclopedia, subjected to experiments such as injections that made her vomit and have diarrhea, and having blood drawn from her neck.

Twenty Nazi doctors were tried in the Nuremberg “Doctors’ Trial” of 1946-47. The process resulted in the first research ethics document, the Nuremberg Code, a 10-point declaration on how to conduct ethical scientific research.


But some doctors felt that this code did not apply to them.

A decade later, pediatrician Dr. Saul Krugman was asked to do something about rampant hepatitis in the Willowbrook State School for children with intellectual disabilities on Staten Island, New York. Krugman found that over 90% of children at the school were infected.

Contracting hepatitis was “inevitable” and “predictable” due to poor hygiene at the overcrowded school, according to the first study Krugman and his colleagues carried out in Willowbrook. He decided to try to develop a vaccine, and parents were informed and asked for consent.

Krugman’s experiments helped him identify two types of hepatitis, A and B, and how each spreads: A via the fecal-oral route, and B through intimate contact and the transfer of body fluids. Fifteen years later, he developed a prototype hepatitis B vaccine.

In his paper, Krugman agrees with the criticism that the ends do not justify the means but says he does not believe it applies to his own work, since all children at the school were constantly exposed to the risk of acquiring hepatitis anyway.


The subsequent debate pointed out that the central ethical question around Krugman’s work is whether it can be acceptable to perform a dangerous experiment on a person, in this case the Willowbrook students, who will themselves see no benefit from it.

Kelly Edwards, professor of bioethics at the University of Washington, thinks back to the needed balance of risks and benefits in an experiment. “We had a trend of saying ‘this group of people is already suffering,’ ” she says, which inspired researchers to study these populations for some generalizable knowledge that would help others. “But we still are really taking advantage of this one group of people suffering.”

She believes there are now other methods that would have brought the same results. But because the vaccine was acquired in this unethical way and we are using the “tainted data” – results from unethical experiments – Edwards says we owe some recognition to “the children who contributed to that knowledge.”

Tainted medical past

The need for restitution and compensation is illustrated by another famously unethical experiment: the Tuskegee syphilis study. Syphilis was seen as a major health problem in the 1920s, so in 1932, the US Public Health Service and the Tuskegee Institute in Alabama began a study to record the natural progression of the disease.

The study observed 600 black men, 201 of whom did not have the disease. In order to incentivize participants, they were offered free medical exams, meals and burial insurance. But they were not informed of what was being investigated; instead, they were told that they would receive treatment for “bad blood” – a local term that the Centers for Disease Control and Prevention says was used to describe several illnesses, including syphilis, anemia and fatigue.

Those who carried the disease were not treated for syphilis, even after penicillin became an effective cure in 1947.

After the first reports about the study appeared in 1972, an advisory panel was appointed to review it. The panel found that the knowledge gained “was sparse” compared with the risks to the subjects, and the study was halted in October of that year.

Shortly after, a class-action lawsuit was filed on behalf of the participants and their families. A $10 million settlement was reached.

The Tuskegee Health Benefit Program was established to provide compensation, such as lifetime medical benefits and burial services, to all living participants and their wives and children. President Bill Clinton publicly apologized for the study in 1997.


Edwards noted that many medicines and vaccines now in routine use were obtained initially through unethical means, “and some of them are not even as much on our consciousness.”

The birth control pill was tested in 1955 on women in Puerto Rico who were not told that they were involved in a clinical trial or that the pill was experimental and had potentially dangerous side effects.

The 1979 Belmont Report laid out ethical guidelines for research on human subjects; its principles, including informed consent, were subsequently written into US federal regulations, making such experiments illegal.

The commission responsible for the Belmont Report also wrote topic-specific reports, one of which was on the use of prisoners in experiments. “It was a pretty widespread practice to use prisoner populations,” Edwards said, because it was seen as offering them a way to repay their debts to society.

One place where prisoners were used in experiments was Holmesburg Prison in Philadelphia in the 1950s. Dermatologist Dr. Albert M. Kligman, famous for patenting the acne treatment Retin-A, conducted many tests on inmates there; according to a report, Retin-A was partially based on those experiments. Other tests included exposing inmates to dangerous chemicals such as dioxin, an Agent Orange ingredient, removing thumbnails to see how fingers react to abuse, and infesting inmates with ringworm.

Holmesburg Prison, in the northeast section of Philadelphia, in 1970.

One psychiatrist working at Holmesburg at the same time as Kligman reported that tranquilizers, antibiotics and Johnson & Johnson toothpaste and mouthwash were all tested on inmates, according to Sana Loue in “Textbook of Research Ethics: Theory and Practice.”

Participating in these experiments was one way for prisoners to earn money, and a further means of controlling them, Loue said.

Prisoners’ inability to give free consent, because their lives are completely controlled by others, and the large risk of coercion are what inspired the Belmont Report to rule out experiments on this vulnerable population, Edwards said.

The present and future of ethics

The reports that followed these experiments were used to draw up laws and governance bodies, such as institutional review boards. These boards are made up of a small group of representatives from the institution that would like to carry out the experiment and one non-scientific community representative; they decide whether an experiment is ethical and should go ahead.


Edwards says institutional review boards offer only a one-time assessment of the situation. She hopes for more ongoing ethical review during experiments, such as the data safety monitoring used mainly in clinical trials; this monitoring tool can halt an experiment at any time.

Chan also sees the need for more discussions around ethics. He’s experiment and the second international human genome editing summit in Hong Kong, where He publicly defended his work , showed that there “is a real will to have these discussions seriously [and] to consider both what the benefits are but also to consider very carefully the conditions under which we should be using these technologies,” she said.


10 Psychological Experiments That Could Never Happen Today

The Chronicle of Higher Education

Nowadays, the American Psychological Association has a Code of Conduct in place when it comes to ethics in psychological experiments. Experimenters must adhere to various rules pertaining to everything from confidentiality to consent to overall beneficence. Review boards are in place to enforce these ethics. But the standards were not always so strict, which is how some of the most famous studies in psychology came about. 

1. The Little Albert Experiment

At Johns Hopkins University in 1920, John B. Watson conducted a study of classical conditioning, a phenomenon that pairs a conditioned stimulus with an unconditioned stimulus until they produce the same result. This type of conditioning can create a response in a person or animal towards an object or sound that was previously neutral. Classical conditioning is commonly associated with Ivan Pavlov, who rang a bell every time he fed his dog until the mere sound of the bell caused his dog to salivate.

Watson tested classical conditioning on a 9-month-old baby he called Albert B. The young boy started the experiment loving animals, particularly a white rat. Watson started pairing the presence of the rat with the loud sound of a hammer hitting metal. Albert began to develop a fear of the white rat as well as most animals and furry objects. The experiment is considered particularly unethical today because Albert was never desensitized to the phobias that Watson produced in him. (The child died of an unrelated illness at age 6, so doctors were unable to determine if his phobias would have lasted into adulthood.)

2. Asch Conformity Experiments

Solomon Asch tested conformity at Swarthmore College in 1951 by placing a participant in a group of people whose task was to match line lengths. Each individual was expected to announce which of three lines was closest in length to a reference line. But the participant was placed among actors, who were all told to give the correct answer twice and then switch to unanimously giving the same incorrect answer. Asch wanted to see whether the participant would conform and start giving the wrong answer as well, knowing that he would otherwise be a lone outlier.

Thirty-seven of the 50 participants agreed with the incorrect group despite physical evidence to the contrary. Asch used deception without getting informed consent from his participants, so his study could not be conducted in the same way today.

3. The Bystander Effect

Some psychological experiments designed to test the bystander effect are considered unethical by today’s standards. In 1968, John Darley and Bibb Latané developed an interest in crime witnesses who did not take action. They were particularly intrigued by the murder of Kitty Genovese, a young woman whose killing was reportedly witnessed by many but not prevented.

The pair conducted a study at Columbia University in which they would give a participant a survey and leave him alone in a room to fill it out. Harmless smoke would begin to seep into the room after a short time. The study showed that a solo participant was much faster to report the smoke than participants who had the exact same experience but were in a group.

The studies became progressively unethical by putting participants at risk of psychological harm. Darley and Latané played a recording of an actor pretending to have a seizure in the headphones of a person, who believed he or she was listening to an actual medical emergency that was taking place down the hall. Again, participants were much quicker to react when they thought they were the sole person who could hear the seizure.

4. The Milgram Experiment

Yale psychologist Stanley Milgram hoped to better understand how so many people came to participate in the cruel acts of the Holocaust. He theorized that people are generally inclined to obey authority figures, posing the question: “Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?” In 1961, he began to conduct his experiments on obedience.

Participants were under the impression that they were part of a study of memory. Each trial paired a “teacher” with a “learner,” but one of the pair was an actor, so only one was a true participant. The drawing was rigged so that the participant always took the role of “teacher.” The two were moved into separate rooms, and the “teacher” was instructed to press a button to shock the “learner” each time an incorrect answer was provided, with the shocks increasing in voltage each time. Eventually, the actor would begin to complain, followed by more and more desperate screaming. Milgram found that the majority of participants followed orders to continue delivering shocks despite the clear distress of the “learner.”

Had the shocks been real and at the voltage they were labeled, the majority of participants would have killed the “learner” in the next room. Revealing this fact to participants after the study concluded was a clear source of psychological harm.

5. Harlow’s Monkey Experiments

In the 1950s, Harry Harlow of the University of Wisconsin tested infant dependency using rhesus monkeys rather than human babies. Each infant monkey was removed from its actual mother, which was replaced with two surrogate “mothers”: one made of cloth and one made of wire. The cloth “mother” served no purpose other than its comforting feel, whereas the wire “mother” fed the monkey through a bottle. Yet the monkey spent the majority of its day next to the cloth “mother” and only around one hour a day next to the wire “mother,” despite the association between the wire model and food.

Harlow also used intimidation to show that the monkeys found the cloth “mother” superior: he would frighten the infants and watch as they ran toward the cloth model. Harlow also conducted experiments that isolated monkeys from other monkeys, in order to show that those who did not learn to be part of a group at a young age were unable to assimilate and mate when they got older. Harlow’s line of experiments ceased in 1985 due to APA rules against the mistreatment of animals as well as humans. However, Ned H. Kalin, M.D., chair of the Department of Psychiatry at the University of Wisconsin School of Medicine and Public Health, has more recently begun similar experiments that involve isolating infant monkeys and exposing them to frightening stimuli. He hopes to produce data on human anxiety but is meeting resistance from animal welfare organizations and the general public.

6. Learned Helplessness

The ethics of Martin Seligman’s experiments on learned helplessness would also be called into question today due to his mistreatment of animals. In 1965, Seligman and his team used dogs as subjects to test how one might perceive control. The group would place a dog on one side of a box that was divided in half by a low barrier. Then they would administer a shock, which was avoidable if the dog jumped over the barrier to the other half. Dogs quickly learned how to prevent themselves from being shocked.

Seligman’s group then harnessed a group of dogs and randomly administered shocks that were completely unavoidable. The next day, these dogs were placed in the box with the barrier. Despite new circumstances that would have allowed them to escape the painful shocks, the dogs did not even try to jump the barrier; they only whimpered, demonstrating learned helplessness.

7. Robbers Cave Experiment

Muzafer Sherif conducted the Robbers Cave Experiment in the summer of 1954, testing group dynamics in the face of conflict. A group of preteen boys were brought to a summer camp, but they did not know that the counselors were actually psychological researchers. The boys were split into two groups, which were kept very separate. The groups only came into contact with each other when they were competing in sporting events or other activities.

The experimenters orchestrated increased tension between the two groups, particularly by keeping competitions close in points. Then Sherif created problems, such as a water shortage, that required both teams to unite and work together to achieve a goal. After a few of these tasks, the division dissolved and the groups became amicable.

Though the experiment seems simple and perhaps harmless, it would still be considered unethical today because Sherif used deception: the boys did not know they were participating in a psychological experiment. Sherif also did not obtain informed consent from participants.

8. The Monster Study

At the University of Iowa in 1939, Wendell Johnson and his team hoped to discover the cause of stuttering by attempting to turn orphans into stutterers. There were 22 young subjects, 12 of whom were non-stutterers. Half of the group received positive teaching, whereas the other half received negative treatment: the teachers continually told them that they had stutters. No one in either group became a stutterer by the end of the experiment, but those who received the negative treatment did develop many of the self-esteem problems that stutterers often show. Perhaps Johnson’s interest in this phenomenon stemmed from his own childhood stutter, but the study would never pass a contemporary review board.

Johnson’s reputation as an unethical psychologist has not caused the University of Iowa to remove his name from its Speech and Hearing Clinic .

9. Blue-Eyed Versus Brown-Eyed Students

Jane Elliott was not a psychologist, but she developed one of the most famously controversial exercises in 1968 by dividing students into a blue-eyed group and a brown-eyed group. Elliott was an elementary school teacher in Iowa who was trying to give her students hands-on experience with discrimination the day after Martin Luther King Jr. was shot, but the exercise still has significance to psychology today. It even transformed Elliott’s career into one centered on diversity training.

After dividing the class into groups, Elliott would cite phony scientific research claiming that one group was superior to the other. Throughout the day, each group was treated accordingly. Elliott learned that it took only a day for the “superior” group to turn crueler and the “inferior” group to become more insecure. The blue-eyed and brown-eyed groups then switched, so that all students endured the same prejudices.

Elliott’s exercise (which she repeated in 1969 and 1970) received plenty of public backlash, which is probably why it would not be replicated in a psychological experiment or classroom today. The main ethical concerns would be with deception and consent, though some of the original participants still regard the experiment as life-changing .

10. The Stanford Prison Experiment

In 1971, Philip Zimbardo of Stanford University conducted his famous prison experiment, which aimed to examine group behavior and the importance of roles. Zimbardo and his team picked a group of 24 male college students who were considered “healthy,” both physically and psychologically. The men had signed up to participate in a “ psychological study of prison life ,” which would pay them $15 per day. Half were randomly assigned to be prisoners and the other half were assigned to be prison guards. The experiment played out in the basement of the Stanford psychology department where Zimbardo’s team had created a makeshift prison. The experimenters went to great lengths to create a realistic experience for the prisoners, including fake arrests at the participants’ homes.

The prisoners were given a fairly standard introduction to prison life, which included being deloused and assigned an embarrassing uniform. The guards were given vague instructions that they should never be violent with the prisoners, but needed to stay in control. The first day passed without incident, but the prisoners rebelled on the second day by barricading themselves in their cells and ignoring the guards. This behavior shocked the guards and presumably led to the psychological abuse that followed. The guards started separating “good” and “bad” prisoners, and doled out punishments including push ups, solitary confinement, and public humiliation to rebellious prisoners.

Zimbardo explained , “In only a few days, our guards became sadistic and our prisoners became depressed and showed signs of extreme stress.” Two prisoners dropped out of the experiment; one eventually became a psychologist and a consultant for prisons . The experiment was originally supposed to last for two weeks, but it ended early when Zimbardo’s future wife, psychologist Christina Maslach, visited the experiment on the fifth day and told him , “I think it’s terrible what you’re doing to those boys.”

Despite the unethical experiment, Zimbardo is still a working psychologist today. He was even honored by the American Psychological Association with a Gold Medal Award for Life Achievement in the Science of Psychology in 2012.


Understanding the Milgram Experiment in Psychology

A closer look at Milgram's controversial studies of obedience



How far do you think people would go to obey an authority figure? Would they refuse to obey if the order went against their values or social expectations? Those questions were at the heart of an infamous and controversial study known as the Milgram obedience experiments.

Yale University psychologist Stanley Milgram conducted these experiments during the 1960s. They explored the effects of authority on obedience. In the experiments, an authority figure ordered participants to deliver what they believed were dangerous electric shocks to another person. The results suggested that people are highly influenced by authority and highly obedient. More recent investigations have cast doubt on some of the implications of Milgram's findings, and even on the results and procedures themselves. Despite its problems, the study has, without question, made a significant impact on psychology.

At a Glance

Milgram's experiments posed the question: Would people obey orders, even if they believed doing so would harm another person? Milgram's findings suggested the answer was yes, they would. The experiments have long been controversial, both because of the startling findings and the ethical problems with the research. More recently, experts have re-examined the studies, suggesting that participants were often coerced into obeying and that at least some participants recognized that the other person was just pretending to be shocked. Such findings call into question the study's validity and authenticity, but some replications suggest that people are surprisingly prone to obeying authority.

History of the Milgram Experiments

Milgram started his experiments in 1961, shortly after the trial of Nazi war criminal Adolf Eichmann had begun. Eichmann’s defense that he was merely following instructions when he ordered the deaths of millions of Jews roused Milgram’s interest.

In his 1974 book "Obedience to Authority," Milgram posed the question, "Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?"

Procedure in the Milgram Experiment

The participants in the most famous variation of the Milgram experiment were 40 men recruited using newspaper ads. In exchange for their participation, each person was paid $4.50.

Milgram developed an intimidating shock generator, with shock levels starting at 15 volts and increasing in 15-volt increments all the way up to 450 volts. The many switches were labeled with terms including "slight shock," "moderate shock," and "danger: severe shock." The final three switches were labeled simply with an ominous "XXX."
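The arithmetic of the generator's design is worth spelling out: 15-volt steps from 15 up to 450 volts give exactly 30 switches. A minimal, purely illustrative Python sketch (not from the study itself) enumerates them:

```python
# Illustrative sketch: the shock-generator switches described above,
# spaced 15 V apart from 15 V up to 450 V (range's stop is exclusive,
# so 451 is used to include the 450 V switch).
levels = list(range(15, 451, 15))

print(len(levels))             # 30 switches in total
print(levels[0], levels[-1])   # 15 450
```

The last three of those 30 switches correspond to the ones labeled "XXX."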

Each participant took the role of a "teacher" who would deliver a shock to the "learner" in a neighboring room whenever an incorrect answer was given. While participants believed that they were delivering real shocks, the "learner" was a confederate in the experiment who was only pretending to be shocked.

As the experiment progressed, the participant would hear the learner plead to be released or even complain about a heart condition. Once they reached the 300-volt level, the learner would bang on the wall and demand to be released.

Beyond this point, the learner became completely silent and refused to answer any more questions. The experimenter then instructed the participant to treat this silence as an incorrect response and deliver a further shock.

Most participants asked the experimenter whether they should continue. The experimenter then responded with a series of commands to prod the participant along:

  • "Please continue."
  • "The experiment requires that you continue."
  • "It is absolutely essential that you continue."
  • "You have no other choice; you must go on."

Results of the Milgram Experiment

In the Milgram experiment, obedience was measured by the level of shock that the participant was willing to deliver. While many of the subjects became extremely agitated, distraught, and angry at the experimenter, they nevertheless continued to follow orders all the way to the end.

Milgram's results showed that 65% of the participants in the study delivered the maximum shocks. Of the 40 participants in the study, 26 delivered the maximum shocks, while 14 stopped before reaching the highest levels.

Why did so many of the participants in this experiment perform a seemingly brutal act when instructed by an authority figure? According to Milgram, there are some situational factors that can explain such high levels of obedience:

  • The physical presence of an authority figure dramatically increased compliance.
  • The fact that Yale (a trusted and authoritative academic institution) sponsored the study led many participants to believe that the experiment must be safe.
  • The selection of teacher and learner status seemed random.
  • Participants assumed that the experimenter was a competent expert.
  • The shocks were said to be painful, not dangerous.

Later experiments conducted by Milgram indicated that the presence of rebellious peers dramatically reduced obedience levels. When other people refused to go along with the experimenter's orders, 36 out of 40 participants refused to deliver the maximum shocks.

More recent work by researchers suggests that while people do tend to obey authority figures, the process is not necessarily as cut-and-dried as Milgram depicted it.

In a 2012 essay published in PLoS Biology , researchers suggested that the degree to which people are willing to obey the questionable orders of an authority figure depends largely on two key factors:

  • How much the individual agrees with the orders
  • How much they identify with the person giving the orders

While it is clear that people are often far more susceptible to influence, persuasion, and obedience than they might like to believe, they are far from mindless machines simply taking orders.

Another study that analyzed Milgram's results identified several factors that influenced the likelihood that people would progress all the way to the 450-volt shock:

  • The experimenter's directiveness
  • Legitimacy and consistency
  • Group pressure to disobey
  • Indirectness of proximity
  • Intimacy of the relation between the teacher and learner
  • Distance between the teacher and learner

Ethical Concerns in the Milgram Experiment

Milgram's experiments have long been the source of considerable criticism and controversy. From the get-go, the ethics of his experiments were highly dubious. Participants were subjected to significant psychological and emotional distress.

Some of the major ethical issues in the experiment were related to:

  • The use of deception
  • The lack of protection for the participants who were involved
  • Pressure from the experimenter to continue even after asking to stop, interfering with participants' right to withdraw

Due to concerns about the amount of anxiety experienced by many of the participants, everyone was supposedly debriefed at the end of the experiment. The researchers reported that they explained the procedures and the use of deception.

Critics of the study have argued that many of the participants were still confused about the exact nature of the experiment, and recent findings suggest that many participants were not debriefed at all.

Replications of the Milgram Experiment

While Milgram’s research raised serious ethical questions about the use of human subjects in psychology experiments, his results have also been consistently replicated in further experiments. One review of subsequent research on obedience found that Milgram’s findings hold true in other experiments. In one notable case, researchers conducted a partial replication of Milgram's classic obedience experiment, making several alterations to the original design:

  • The maximum shock level was 150 volts as opposed to the original 450 volts.
  • Participants were also carefully screened to eliminate those who might experience adverse reactions to the experiment.

The results of the new experiment revealed that participants obeyed at roughly the same rate that they did when Milgram conducted his original study more than 40 years ago.

Some psychologists suggested that in spite of the changes made in the replication, the study still had merit and could be used to further explore some of the situational factors that also influenced the results of Milgram's study. But other psychologists suggested that the replication was too dissimilar to Milgram's original study to draw any meaningful comparisons.

One study examined people's beliefs about how they would do compared to the participants in Milgram's experiments. They found that most people believed they would stop sooner than the average participants. These findings applied to both those who had never heard of Milgram's experiments and those who were familiar with them. In fact, those who knew about Milgram's experiments actually believed that they would stop even sooner than other people.

Another novel replication involved recruiting participants in pairs and having them take turns acting as either an 'agent' or 'victim.' Agents then received orders to shock the victim. The results suggest that only around 3.3% disobeyed the experimenter's orders.

Recent Criticisms and New Findings

Psychologist Gina Perry suggests that much of what we think we know about Milgram's famous experiments is only part of the story. While researching an article on the topic, she stumbled across hundreds of audiotapes found in Yale archives that documented numerous variations of Milgram's shock experiments.

Participants Were Often Coerced

While Milgram's published accounts describe methodical and uniform procedures, the audiotapes reveal something different. During the experimental sessions, the experimenters often went off-script and coerced the subjects into continuing the shocks.

"The slavish obedience to authority we have come to associate with Milgram’s experiments comes to sound much more like bullying and coercion when you listen to these recordings," Perry suggested in an article for Discover Magazine.

Few Participants Were Really Debriefed

Milgram suggested that the subjects were "de-hoaxed" after the experiments. He claimed he later surveyed the participants and found that 84% were glad to have participated, while only 1% regretted their involvement.

However, Perry's findings revealed that of the 700 or so people who took part in different variations of his studies between 1961 and 1962, very few were truly debriefed.

A true debriefing would have involved explaining that the shocks weren't real and that the other person was not injured. Instead, Milgram's sessions were mainly focused on calming the subjects down before sending them on their way.

Many participants left the experiment in a state of considerable distress. While the truth was revealed to some months or even years later, many were simply never told a thing.

Variations Led to Differing Results

Another problem is that the version of the study presented by Milgram and the one that's most often retold does not tell the whole story. The statistic that 65% of people obeyed orders applied only to one variation of the experiment, in which 26 out of 40 subjects obeyed.

In other variations, far fewer people were willing to follow the experimenters' orders, and in some versions of the study, not a single participant obeyed.

Participants Guessed the Learner Was Faking

Perry even tracked down some of the people who took part in the experiments, as well as Milgram's research assistants. What she discovered is that many of his subjects had deduced what Milgram's intent was and knew that the "learner" was merely pretending.

Such findings cast Milgram's results in a new light. They suggest not only that Milgram intentionally engaged in some hefty misdirection to obtain the results he wanted, but also that many of his participants were simply playing along.

An analysis of an unpublished study by Milgram's assistant, Taketo Murata, found that participants who believed they were really delivering a shock were less likely to obey, while those who did not believe they were actually inflicting pain were more willing to obey. In other words, the perception of pain increased defiance, while skepticism of pain increased obedience.

A review of Milgram's research materials suggests that the experiments exerted more pressure to obey than the original results suggested. Other variations of the experiment revealed much lower rates of obedience, and many of the participants actually altered their behavior when they guessed the true nature of the experiment.

Impact of the Milgram Experiment

Since there is no way to truly replicate the experiment due to its serious ethical and moral problems, it is impossible to determine whether Milgram's experiment really tells us anything about the power of obedience.

So why does Milgram's experiment maintain such a powerful hold on our imaginations, even decades after the fact? Perry believes that despite all its ethical issues and the problem of never truly being able to replicate Milgram's procedures, the study has taken on the role of what she calls a "powerful parable."

Milgram's work might not hold the answers to what makes people obey or even the degree to which they truly obey. It has, however, inspired other researchers to explore what makes people follow orders and, perhaps more importantly, what leads them to question authority.

Recent findings undermine the scientific validity of the study. Milgram's work is also not truly replicable due to its ethical problems. However, the study has led to additional research on how situational factors can affect obedience to authority.

Milgram’s experiment has become a classic in psychology, demonstrating the dangers of obedience. The research suggests that situational variables hold a stronger sway than personality factors in determining whether people will obey an authority figure. However, other psychologists argue that obedience is heavily influenced by both external and internal factors, such as personal beliefs and overall temperament.

Milgram S. Obedience to Authority: An Experimental View. Harper & Row; 1974.

Russell N, Gregory R. The Milgram-Holocaust linkage: challenging the present consensus. State Crim J. 2015;4(2):128-153.

Russell NJC. Milgram's obedience to authority experiments: origins and early evolution. Br J Soc Psychol. 2011;50:140-162. doi:10.1348/014466610X492205

Haslam SA, Reicher SD. Contesting the "nature" of conformity: What Milgram and Zimbardo's studies really show. PLoS Biol. 2012;10(11):e1001426. doi:10.1371/journal.pbio.1001426

Milgram S. Liberating effects of group pressure. J Person Soc Psychol. 1965;1(2):127-234. doi:10.1037/h0021650

Haslam N, Loughnan S, Perry G. Meta-Milgram: an empirical synthesis of the obedience experiments. PLoS One. 2014;9(4):e93927. doi:10.1371/journal.pone.0093927

Perry G. Deception and illusion in Milgram's accounts of the obedience experiments. Theory Appl Ethics. 2013;2(2):79-92.

Blass T. The Milgram paradigm after 35 years: some things we now know about obedience to authority. J Appl Soc Psychol. 1999;29(5):955-978. doi:10.1111/j.1559-1816.1999.tb00134.x

Burger J. Replicating Milgram: Would people still obey today? Am Psychol. 2009;64(1):1-11. doi:10.1037/a0010932

Elms AC. Obedience lite. Am Psychol. 2009;64(1):32-36. doi:10.1037/a0014473

Miller AG. Reflections on "replicating Milgram" (Burger, 2009). Am Psychol. 2009;64(1):20-27. doi:10.1037/a0014407

Grzyb T, Dolinski D. Beliefs about obedience levels in studies conducted within the Milgram paradigm: Better than average effect and comparisons of typical behaviors by residents of various nations. Front Psychol. 2017;8:1632. doi:10.3389/fpsyg.2017.01632

Caspar EA. A novel experimental approach to study disobedience to authority. Sci Rep. 2021;11(1):22927. doi:10.1038/s41598-021-02334-8

Haslam SA, Reicher SD, Millard K, McDonald R. ‘Happy to have been of service’: The Yale archive as a window into the engaged followership of participants in Milgram’s ‘obedience’ experiments. Br J Soc Psychol. 2015;54:55-83. doi:10.1111/bjso.12074

Perry G, Brannigan A, Wanner RA, Stam H. Credibility and incredulity in Milgram’s obedience experiments: A reanalysis of an unpublished test. Soc Psychol Q. 2020;83(1):88-106. doi:10.1177/0190272519861952

By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Encyclopedia Britannica



Milgram experiment


Milgram experiment, controversial series of experiments examining obedience to authority conducted by social psychologist Stanley Milgram. In the experiment, an authority figure, the conductor of the experiment, would instruct a volunteer participant, labeled the “teacher,” to administer painful, even dangerous, electric shocks to the “learner,” who was actually an actor. Although the shocks were faked, the experiments are widely considered unethical today due to the lack of proper disclosure, informed consent, and subsequent debriefing related to the deception and trauma experienced by the teachers. Some of Milgram’s conclusions have been called into question. Nevertheless, the experiments and their results have been widely cited for their insight into how average people respond to authority.

Milgram conducted his experiments as an assistant professor at Yale University in the early 1960s. In 1961 he began to recruit men from New Haven, Connecticut, for participation in a study he claimed would be focused on memory and learning. The recruits were paid $4.50 at the beginning of the study and were generally between the ages of 20 and 50 and from a variety of employment backgrounds. When they volunteered, they were told that the experiment would test the effect of punishment on learning ability. In truth, the volunteers were the subjects of an experiment on obedience to authority. In all, about 780 people, only about 40 of them women, participated in the experiments, and Milgram published his results in 1963.


Volunteers were told that they would be randomly assigned either a “teacher” or “learner” role, with each teacher administering electric shocks to a learner in another room if the learner failed to answer questions correctly. In actuality, the random draw was fixed so that all the volunteer participants were assigned to the teacher role and the actors were assigned to the learner role. The teachers were then instructed in the electroshock “punishment” they would be administering, with 30 shock levels ranging from 15 to 450 volts. The different shock levels were labeled with descriptions of their effects, such as “Slight Shock,” “Intense Shock,” and “Danger: Severe Shock,” with the final label a grim “XXX.” Each teacher was given a 45-volt shock themselves so that they would better understand the punishment they believed the learner would be receiving. Teachers were then given a series of questions for the learner to answer, with each incorrect answer generally earning the learner a progressively stronger shock. The actor portraying the learner, who was seated out of sight of the teacher, had pre-recorded responses to these shocks that ranged from grunts of pain to screaming and pleading, claims of suffering a heart condition, and eventually dead silence. The experimenter, acting as an authority figure, would encourage the teachers to continue administering shocks, telling them with scripted responses that the experiment must continue despite the reactions of the learner. The infamous result of these experiments was that a disturbingly high number of the teachers were willing to proceed to the maximum voltage level, despite the pleas of the learner and the supposed danger of proceeding.

Milgram’s interest in the subject of authority, and his dark view of the results of his experiments, were deeply informed by his Jewish identity and the context of the Holocaust, which had occurred only a few years before. He had expected that Americans, known for their individualism, would differ from Germans in their willingness to obey authority when it might lead to harming others. Milgram and his students had predicted that only 1–3% of participants would administer the maximum shock level. However, in his first official study, 26 of 40 male participants (65%) were convinced to do so, and nearly 80% of teachers who continued to administer shocks after 150 volts—the point at which the learner was heard to scream—continued to the maximum of 450 volts. Teachers displayed a range of negative emotional responses to the experiment even as they continued to obey, sometimes pleading with the experimenters to stop the experiment while still participating in it. One teacher believed that he had killed the learner and was moved to tears when he eventually found out that he had not.


Milgram included several variants on the original design of the experiment. In one, the teachers were allowed to select their own voltage levels. In this case, only about 2.5% of participants used the maximum shock level, indicating that they were not inclined to do so without the prompting of an authority figure. In another, there were three teachers, two of whom were not test subjects, but instead had been instructed to protest against the shocks. The existence of peers protesting the experiment made the volunteer teachers less likely to obey. Teachers were also less likely to obey in a variant where they could see the learner and were forced to interact with him.

The Milgram experiment has been highly controversial, both for the ethics of its design and for the reliability of its results and conclusions. It is commonly accepted that the ethics of the experiment would be rejected by mainstream science today, due not only to the handling of the deception involved but also to the extreme stress placed on the teachers, who often reacted emotionally to the experiment and were not debriefed. Some teachers were actually left believing they had genuinely and repeatedly shocked a learner before having the truth revealed to them later. Later researchers examining Milgram’s data also found that the experimenters conducting the tests had sometimes gone off-script in their attempts to coerce the teachers into continuing, and noted that some teachers guessed that they were the subjects of the experiment. However, attempts to validate Milgram’s findings in more ethical ways have often produced similar results.

History Collection - Covering History's Untold Stories


The 10 Cruelest Human Experimentation Cases in History

“First, do no harm,” is the oath taken by physicians the world over. And this has been the case for centuries now. For the most part, these men and women of science stay faithful to this oath, even defying orders to the contrary. But sometimes they not only break it, they do so in the worst way imaginable. There have been numerous instances of doctors and other scientists going way beyond the limitations of what’s moral or ethical in the name of ‘progress’. They have used humans as experimental guinea pigs for their tests.

In many cases, the test subjects were either kept in ignorance about what an experiment involved or they were simply in no position to offer their resistance or consent. Of course, it may well be the case that such dubious methods produced results. Indeed, some of the most controversial experiments of the past century produced results that continue to inform scientific understanding to this day. But that will never mean such experiments will be seen as just. Sometimes, the perpetrators of cruel research lose their good names or reputations. Sometimes they are prosecuted for their attempts to ‘play God’. Or sometimes they just get away with it.

You might want to brace yourself as we look at the ten weirdest and cruelest human experiments carried out in history:


Dr. Shiro Ishii and Unit 731

During World War II, Imperial Japan committed a number of crimes against humanity. But perhaps few were crueler than the experiments that were conducted at Unit 731. Part of the Imperial Japanese Army, this was a super-secret unit dedicated to undertaking research into biological and chemical weapons. Quite simply, the Imperial authority wanted to build weapons that were deadlier – or just crueler – than anything that had gone before. And they weren’t opposed to using human guinea pigs to test their creations.

Based in Harbin, the biggest city of Manchukuo, the puppet state Japan created in north-east China, Unit 731 was constructed between 1934 and 1939. Overseeing its construction was General Shiro Ishii. Though he was a medical doctor, Ishii was also a fanatical soldier, and so he was happy to set his ethics aside in the name of total victory for Imperial Japan. In all, it’s estimated that as many as 3,000 men, women and children were used as forced participants in the experiments conducted here. For the most part, the horrific tests were carried out on Chinese people, though prisoners-of-war, including men from Korea and Mongolia, were also used.

For more than five years, General Ishii oversaw a wide range of experiments, many of them of dubious medical value to say the least. Thousands were subjected to vivisections, usually without anaesthetic. Often, these were fatal. Countless types of surgery, including brain surgery and amputations, were also carried out without anaesthetic. At other times, inmates were injected directly with diseases such as syphilis and gonorrhoea, or with chemicals used in bombs. Other twisted experiments included tying men up naked outside and observing the effects of frostbite, or simply starving people and seeing how long they took to die.

Once it was clear Japan was going to lose the war, General Ishii tried to destroy all evidence of the tests. He burned down the facilities and swore his men to silence. He needn’t have worried. Senior researchers from Unit 731 were granted immunity by the U.S. In exchange, they contributed their knowledge to America’s own biological and chemical weapons programs. For decades, any stories of atrocities were dismissed as ‘Communist Propaganda’. In more recent years, the Japanese government has acknowledged the Unit’s existence as well as its work, though it maintains most official records have been lost to history.


“The Little Albert Experiment”

After many months of observing young children, Johns Hopkins University psychologist Dr. John B. Watson concluded that infants could be conditioned to be scared of non-threatening objects or stimuli. All he needed was first-hand proof. Since it was 1919 and experimental ethics were nowhere near as strict as they are today, Watson, along with his graduate student Rosalie Rayner, set about designing an experiment to test their theory. Thanks to their connections at a Baltimore hospital, they were able to find a young baby, named ‘Albert’, and ‘borrow’ him for the afternoon. While Albert’s mother might have consented to her son helping out scientific research, she had no idea what Watson was actually planning.

The young Albert was just nine months old when he was taken from a hospital and put to work as Watson’s guinea pig. At first, Watson carried out a series of baseline tests, to see that the child was indeed emotionally stable and at the accepted stage of development. But then the tests got creepier. Albert was shown several furry animals. These included a dog, a white rat and a rabbit. Watson would show these toys to Albert while at the same time banging a hammer against a metal bar. This was repeated a number of times. Before long, Albert was associating the sight of the furry animals with the fear provoked by the loud, unpleasant noise. Indeed, within just a short space of time, just seeing the furry rat could distress the child.

Watson noted at the time: “The instant the rat was shown, the baby began to cry. Almost instantly he turned sharply to the left, fell over on [his] left side, raised himself on all fours and began to crawl away so rapidly that he was caught with difficulty before reaching the edge of the table.” The scientist and his research partner had achieved their goal: they had proof that, just as in animals, classical conditioning can be used to influence or even dictate emotional responses in humans. Watson published his findings the following year, in the prestigious Journal of Experimental Psychology.

Even at the time, Watson’s methods were seen as unethical. After all, isn’t a doctor supposed to ‘do no harm’? What’s more, Watson never worked with Little Albert again, so he wasn’t able to reverse the process. But still, the results were heralded as a breakthrough in our understanding of popular psychology. Notably, Watson recorded the Little Albert Experiment, and the videos can be seen online today. And, for what it’s worth, most experts now agree that, though he would have most likely feared furry objects for a short spell of time during his childhood, Little Albert probably lost the association between cute toys and loud noises.


The “Monster” Study

These days, any tests carried out on children are subject to strict ethical rules and guidelines. This wasn’t the case back in the 1930s, however. So, when Wendell Johnson, a speech pathologist at the University of Iowa, wanted to carry out research on young participants, his institution was happy to oblige. Working with Mary Tudor, a graduate student he was supervising, Johnson began the experiment in 1939. Over the next few years, dozens of kids would be subjected to speech-related tests, with the effects of the experiment lasting for decades.

The purpose of the research sounded noble enough: Johnson wanted to see how a child’s upbringing affects their speech. In particular, he was fascinated by stuttering and was determined to discover what made one child stutter while another spoke fluently. A local orphanage ‘supplied’ Johnson and Tudor with 22 children to work with. All of the young participants spoke without a stutter when they arrived at the University of Iowa labs for the first time. They were then divided into two equal groups, and the experiment got underway.

Both groups were asked to speak for the researchers. How they were treated, however, was completely different. In the first group, all of the children received positive feedback. They were praised for their fluent speech and command of the English language. The second group received the opposite kind of treatment. They were ridiculed for their inability to speak like adults. Johnson and Tudor would listen carefully for any little mistakes, and above all for any signs of stuttering, and criticize the children harshly for them.

Johnson’s methods shocked his academic peers, though perhaps they shouldn’t have been surprised. As a young researcher at the University of Iowa, Johnson had gained a reputation for experimenting with shock tactics. As a postgraduate student himself, for instance, he would work with his colleagues on trying to cure his own stutter, even administering electric shocks to himself to see if that made a difference. But still, inflicting deliberate cruelty on children was seen as a step too far. As such, the Iowa academics nicknamed Johnson’s 1939 research ‘The Monster Study’. And the name was just about the only thing of significance it gave us.

With the University of Iowa keen to distance itself from news of human experimentation being carried out by the Nazis in war-torn Europe, the institution hushed up the Monster Study. None of the findings were ever published in any academic journal of note; only the original thesis remains. The effects were clear, however. Many of the children in the second group went on to develop serious stutters, and some had serious speech problems for the rest of their lives. The university finally acknowledged the experiment in 2001, apologising to those involved. Then, in 2007, six of the original orphans were awarded almost $1 million in compensation for the psychological impact Johnson’s work had on them.

Interestingly, however, while the methods used for the Monster Study have widely been condemned as being cruel and simply indefensible, some have argued that Johnson may have been onto something. Certainly, Mary Tudor said before her death that she and her research partner might have made serious contributions to our understanding of speech and speech pathology had they been allowed to publish their work. Instead, the experiment is now shorthand for bad science and a complete lack of ethics.


The Stanford Prison Experiment

Of all the ill-advised – and indeed cruel – experiments North American universities have carried out over the decades, none is more infamous than the Stanford Prison Experiment. It’s so famous, in fact, that movies have been made about the experiment, which took place at Stanford University over one week in August 1971. Furthermore, while undoubtedly cruel, its findings are still used to inform popular understanding of psychological manipulation. Moreover, the behaviour of the participants involved is often held up as a warning about what can happen when humans are given power without accountability.

The experiment was led by Professor Philip Zimbardo. As a psychologist, he was eager to see whether abuse in prisons could be explained by the inherent psychological traits of both guards and prisoners. Given the topic, he received funding from the U.S. Office of Naval Research. Funding in hand, Zimbardo set about recruiting participants. This turned out to be no problem at all, as a number of Stanford students volunteered to take part. Zimbardo then appointed some of the volunteers as guards and designated the others as prisoners. The experiment could begin.

In the basement of the university’s psychology department, Zimbardo had built a makeshift ‘prison’. In all, 12 prisoners were kept here in small cells, while 12 guards were assigned a different part of the basement. While the prisoners had to endure tough conditions, the guards enjoyed comfortable, furnished quarters. The participants were also dressed for their parts: the guards were given uniforms and wooden batons, and were kitted out with dark sunglasses so they could avoid eye contact with the people they were tasked with guarding.

Within 24 hours, any semblance of calm had gone. The prisoners started to revolt, and the guards reacted. Special cells were set up to give well-behaved prisoners preferential treatment. The guards – who were barred from actually physically hitting their charges – began using psychological methods to keep the prisoners down. They would deny prisoners food, put them in darkened cells, and deprive them of sleep. After six days, Zimbardo agreed to halt the experiment. He did, at least, have more than enough evidence – some of it filmed – to draw on when making his conclusions.

Professor Zimbardo noted that around one third of the guards – again, young men drawn from the Stanford student population – exhibited genuine sadistic tendencies. At the same time, most of the inmates were seen to ‘internalise’ their roles. They took on the mentality of prisoners: though they could have left at any time, they instead gave up and became weak and passive. In the end, the experiment received, and continues to receive, criticism for its harsh methods. Nevertheless, the findings of the Stanford Prison Experiment changed the way U.S. prisons are run, and they are often held up as proof that most people can inflict cruelty and suffering on another human being if they are given a position of power and ordered to do so.


The South African ‘Aversion Project’

In Apartheid-era South Africa, national service was compulsory for all white males. At the same time, homosexuality was classed as a crime. Inevitably, therefore, any gay men who found themselves called into service were in for a tough time. But it wasn’t just name-calling or casual discrimination they had to contend with: many were subjected to cruel experiments. The so-called ‘Aversion Project’, run through the 1970s and 1980s, was aimed at ‘treating’ homosexuals. As well as psychological treatments, it used physical ‘treatments’, many of which would rightly be regarded as torture.

The project really got started in 1969, with the creation of Ward 22. The creepily-named ward was part of a larger military hospital just outside Pretoria and was designed to treat mentally-ill soldiers. For the unit chief, Dr Aubrey Levin, this included homosexuals, whom he regarded as unstable, or even ‘deviants’. For the most part, the doctor was determined to prove that electric shock therapy and conditioning could ‘cure’ the patients of their desires. Hundreds of men were given electric shocks, often while being forced to look at pictures of gay men. The current would then be turned off and pictures of naked women shown instead, in the hope that this would alter their desires.

Inmates subjected to such experimental treatment would sometimes be tested, given temptations to see if they really were ‘cured’. Persistent ‘offenders’ were given hormone treatments, almost always against their will, and many were even chemically castrated. Even by the middle of the 1970s, when numerous, more ethical studies had shown that ‘conversion therapies’ could not change a person’s sexuality, Ward 22 carried on with its work. In fact, it only ended with the fall of the apartheid regime. To the very end of the project, Dr Levin maintained that all the men he treated were volunteers who had asked for his help. Many of his peers disagreed, as did a judge, who sentenced him to five years in prison in 2014.


Project 4.1

On March 1, 1954, the United States carried out Castle Bravo, testing a nuclear bomb on Bikini Atoll, in the middle of the Pacific Ocean. The test not only went off without a hitch, it exceeded expectations: the yield produced by the bomb was much higher than scientists had anticipated. At the same time, the weather conditions in this part of the Pacific turned out to be different from what had been predicted. Radiation fallout from the blast was blown downwind, towards the Marshall Islands. But, instead of alerting the islanders to the danger, the project heads sensed an opportunity. When else would they be able to observe the effects of radiation fallout on a real population?

Making the most of the opportunity, the American scientists simply sat back and observed. That is, they watched innocent people suffer the fallout of an American nuclear bomb. Over the next decade, the project observers noted an upturn in the number of women on the Marshall Islands suffering miscarriages or stillbirths. But then, after ten years or so, this spike ended. Things seemingly returned to normal, and so scientists were unable – or unwilling – to draw any formal conclusions. But then, things started to go downhill again.

At first, children on the Marshall Islands were observed to be growing less than would be expected. But then, it became clear that not only were they suffering from stunted growth, but a higher-than-expected proportion of youngsters were developing thyroid cancer. What’s more, by 1974, the data was showing that one in three islanders had developed at least one tumor. Later analysis, published in 2010, estimated that around half of all cancer cases recorded on the Marshall Islands could be attributed to the 1954 nuclear test, even if people never displayed any obvious signs of radiation poisoning in the immediate aftermath of the explosion.

Given that the initial findings of Project 4.1, as it was known, were published in professional medical journals as early as 1955, the American government has never really denied that the experiment took place. Rather, what has been, and continues to be, contested is whether the U.S. actually knew that the islands would be affected before it carried out the test. Many on the Marshall Islands believe that Project 4.1 was premeditated, while the American authorities maintain that it was improvised in the wake of the explosion. The debate continues to rage.


The Tuskegee Experiments

For four decades, African-American men in Macon County, Alabama, were told by medical researchers that they had ‘bad blood’. The scientists knew that this was a term used by sharecroppers in this part of the country to refer to a wide range of ailments. They knew, therefore, that the men wouldn’t question the diagnosis, nor would they raise any concerns when the same researchers gave them injections. This is how doctors working on behalf of the U.S. Public Health Service (PHS) were able to look on as hundreds of men went mad, went blind or even died as a result of untreated syphilis.

When the experiment began back in 1932, there was no known cure for syphilis. As such, PHS researchers were determined to make a breakthrough, and they enlisted the help of the Tuskegee Institute in Alabama. Together, they recruited 622 African-American men, almost all of them very poor. Of these men, 431 had already contracted syphilis prior to 1932, while the remaining 169 were free from the disease. The men were told that the experiment would last for just six years, during which time they would be provided with free meals and medical care as doctors observed the development of the disease.

In 1947, penicillin became the recommended treatment for syphilis. Surely the doctors would give this to the men participating in the Tuskegee Experiment? Not so. Even though they knew the men could be cured, the PHS workers only gave them placebos, including aspirin and even combinations of minerals. With their condition untreated, the men slowly succumbed to syphilis. Some went blind, others went insane, and some died within a few years. What’s more, in the years after 1947, 19 syphilitic children were born to men enrolled in the study.

It was only in the mid-1960s that concerns started to be raised about the morality of the experiment. San Francisco-based PHS researcher Peter Buxtun learned about what was happening in Alabama and raised his concerns. However, his superiors were unresponsive, so Buxtun leaked the story to a journalist friend. The story broke in 1972. Unsurprisingly, the public was outraged. The experiment was halted immediately, and congressional inquiries began soon after. The surviving participants, as well as the children of those men who had died, were awarded $10 million in an out-of-court settlement. Finally, in 1997, President Bill Clinton offered a formal and official apology on behalf of the U.S. government to everyone affected by the experiment.


Project MK-Ultra

Even though it had the Bomb, in the 1950s the CIA was still determined to enjoy every advantage over its enemies, and to achieve this it was willing to think outside the box. Perhaps the best example is MK-Ultra, a top-secret project in which the CIA attempted to alter brain function and explore the possibility of mind control. While much of the written evidence, including files and witness testimonies, was destroyed soon after the experiments were brought to an end, we do know that the project involved a lot of drugs, some sex, and countless instances of rule bending and breaking.

Project MK-Ultra was kick-started by the CIA’s Office of Scientific Intelligence at the start of the 1950s. Central to the project was determining how LSD affects the mind – and, more importantly, whether this could be turned to America’s advantage. In order to learn more, hundreds, perhaps even thousands, of individuals were given doses of the drug, in almost all cases without their explicit knowledge or consent. For example, during Operation Midnight Climax in the early 1960s, the CIA opened up brothels where male clients were dosed with LSD and then observed by scientists through one-way mirrors.

The experiments also included subjecting American citizens to sleep deprivation and hypnosis. Not all of the tests went to plan. Several people died as a direct result of Project MK-Ultra, including a US Army biochemist by the name of Frank Olson. In 1953, the scientist was given a dose of LSD without his knowledge and, just a week later, died after falling from a window. While the official cause of his death was recorded as suicide, Olson’s family have always maintained that he was effectively killed by the CIA.

When President Gerald Ford launched a special commission on CIA activities in the United States in 1975, the work of Project MK-Ultra came to light. Two years previously, however, the then-Director of the CIA, Richard Helms, had ordered all files relating to the experiments to be destroyed. Witness testimonies show that around 80 institutions were involved in the experiments, with thousands of people given hallucinogenic drugs, usually by CIA officers with no medical background. So, in the end, was it all worth it? The CIA has acknowledged that the experiments produced nothing of real scientific value. Project MK-Ultra has, however, lived on in the popular imagination and has inspired numerous books, video games and movies.


Guatemalan Syphilis Experiment

For more than two years in the middle of the 20th century, the United States worked directly with the health ministries of Guatemala to infect thousands of people with a range of sexually transmitted diseases, above all syphilis. Since they wanted to do this without the study subjects knowing about it – after all, who would give their consent to being injected with syphilis? – it was decided that the experiment should take place in Guatemala, with soldiers and the most vulnerable members of society to serve as the guinea pigs.

The Guatemalan Syphilis Experiment (it was not given an official codename or even a formal project title) began in 1946. It was headed up by John Charles Cutler of the US Public Health Service (PHS). Despite being a physician himself, Cutler was happy to overlook the principle of ‘First, do no harm’ in order to carry out his work. Making use of local health clinics, he tasked his staff with infecting around 5,500 subjects. Most of them were soldiers or prisoners, though mental health patients and prostitutes were also used to see how syphilis and other diseases affect the body. Children living in orphanages were even used for the experiments.

In all cases, the subjects were told they were getting medication that was good for them. And, while all subjects were given antibiotics, an estimated 83 people died. In 1948, with the wider medical community hearing rumors of what was being done in Central America, and with the American government wary of the potential fallout, the experiments were brought to an abrupt end. Cutler would go on to carry out similar experiments in Alabama, though even here he stopped short of actually infecting his subjects with life-threatening diseases.

It was only in 2010, however, that the United States government issued a formal apology to Guatemala for the experiments it carried out in the 1940s. What’s more, President Barack Obama called the project “a crime against humanity”. That didn’t mean that the victims could get compensation, however. In 2011, several cases were put forward but then rejected, with the presiding judge noting that the U.S. government could not be held liable for actions carried out in its name outside of the country. A $1 billion lawsuit against Johns Hopkins University and the Rockefeller Foundation is still open.


Mengele’s Twins

A world at war gave the Nazi regime the ideal cover under which they would carry out some of the most horrific human experiments imaginable. At Auschwitz concentration camp, Dr Josef Mengele made full use of the tens of thousands of prisoners available to him. He would carry out unnecessarily cruel and unusual experiments, often with little or no scientific merit. And, above all, he was fascinated with twins. Or, more precisely, with identical twins. These would be the subjects of his most gruesome experiments.

Mengele would personally select prospective subjects from the ramps leading off the transport trains at the entrance to the concentration camp. Initially, his chosen twins were provided with relatively comfortable accommodation, as well as more generous rations than the rest of the inmate population. However, this was just a temporary respite. Mengele’s experiments were as varied as they were horrific. He would amputate one twin’s limbs and then compare the growth of both over the following days. Or he would infect one twin with a disease like typhoid. When the infected twin died, he would kill the healthy twin too, and then compare their bodies.

Gruesomely, the records show that on one particularly bloody night, Mengele injected chloroform directly into the hearts of 14 twins. All died almost immediately. Another infamous tale tells of Mengele trying to create his own conjoined twins: he simply stitched two young Romani children back-to-back. They both died of gangrene after several long and painful days. Mengele also had a team of assistants working for him, and they were no less cruel.

Nobody will ever know just how many children or adults were victims of Mengele’s experiments. Despite being meticulous record keepers, the Nazis kept some things secret. Tragically for his victims and their relatives, Mengele never faced justice for his actions. He was smuggled out of Europe by Nazi sympathisers at the end of the war and lived for another 30 years, in hiding, in South America.

Where did we find this stuff? Here are our sources:

“Unmasking Horror: A special report.; Japan Confronting Gruesome War Atrocity”. Nicholas D. Kristof, The New York Times, 1995.

“Little Albert regains his identity”. American Psychology Association, 2010.

“Unit 731: Japan discloses details of notorious chemical warfare division”. Justin McCurry, The Guardian, April 2018.

“The Stuttering Doctor’s ‘Monster Study'”. Gretchen Reynolds, The New York Times, March 2003.

“The Real Lesson of the Stanford Prison Experiment”. Maria Konnikova, The New Yorker, June 2015.

“Gays tell of mutilation by apartheid army”. Chris McGreal, The Guardian, July 2000.

“Nuclear Savage: The Islands of Secret Project 4.1”. The Environment & Society Portal.

“Tuskegee Experiment: The Infamous Syphilis Study”. Elizabeth Nix, History.com, May 2017.

“The secret LSD-fuelled CIA experiment that inspired Stranger Things”. Richard Vine, The Guardian, August 2016.

“Guatemala victims of US syphilis study still haunted by the ‘devil’s experiment’”. Rory Carroll, The Guardian, June 2011.

“Nazi Medical Experiments”. The United States Holocaust Memorial Museum.


What Are The Top 10 Unethical Psychology Experiments?

  • By Cliff Stamp, BS Psychology, MS Rehabilitation Counseling
  • Published September 9, 2019
  • Last Updated November 13, 2023
  • Read Time 8 mins



Like all sciences, psychology relies on experimentation to validate its hypotheses. This puts researchers in a bind: experimentation requires the manipulation of variables, and manipulating human beings can be unethical, with the potential to cause outright harm. Nowadays, experiments that involve human beings must meet a high standard of safety and security for all research participants. But while 21st-century ethical and safety standards protect people from the potential ill effects of experiments and studies, conditions just a few decades ago were far from ideal and were, in many cases, outright harmful.

The Top 10 Unethical Psychology Experiments

10. The Stanford Prison Experiment (1971). In August 1971, Dr. Philip Zimbardo of Stanford University began a Navy-funded experiment examining the power dynamics between prison officers and prisoners. It took only six days for the experiment to collapse. Participants absorbed their roles so completely that the “officers” began psychologically tormenting the prisoners, while the prisoners grew aggressive in return. By the second day, prisoners were refusing to obey the guards, and the guards were threatening prisoners with violence, going far beyond their instructions. By the sixth day, guards were harassing the prisoners physically and mentally, and some had harmed prisoners. Zimbardo stopped the experiment at that point.

9. The Monster Study (1939). The Monster Study is a prime example of an unethical psychology experiment on humans. Wendell Johnson, a speech pathologist at the University of Iowa, conducted an experiment on stuttering using 22 orphans. His graduate student, Mary Tudor, ran the experiment while Johnson supervised her work. She divided the 22 children into two groups, each a mixture of children with and without speech problems. One group received encouragement and positive feedback; the other was ridiculed for any speech problems, including non-existent ones. The children who were ridiculed naturally made no progress, and some of the orphans with no speech problems developed those very problems.

The study continued for six months and caused lasting, chronic psychological issues for some of the children. The study caused so much harm that some of the former subjects secured a monetary award from the University of Iowa in 2007 due to the harm they’d suffered.

8. The Milgram Obedience Experiment (1961). After the horrors of the Second World War, psychological researchers like Stanley Milgram wondered what had made average citizens in Germany act like those who committed atrocities. Milgram wanted to determine how far people would go in carrying out actions that might be detrimental to others if they were ordered or encouraged to do so by an authority figure. The Milgram experiment exposed the tension between obedience to the orders of an authority figure and personal conscience.

In each experiment, Milgram designated three people as teacher, learner and experimenter. The “learner” was an actor planted by Milgram who stayed in a room separate from the experimenter and the teacher. The teacher attempted to teach the learner small sets of word associations, and whenever the learner got a pair wrong, the teacher delivered what he believed was an electric shock. In reality, no shock was given; the learner simply pretended to be in increasingly greater distress. When some teachers expressed hesitation about increasing the level of the shocks, the experimenter encouraged them to continue.

Many of the subjects (the teachers) experienced severe and lasting psychological distress. The Milgram experiment has become a byword for well-intentioned psychological experiments gone wrong.

8. David Reimer (1967–1977). When David Reimer was eight months old, his penis was seriously damaged during a failed circumcision. His parents contacted John Money, a professor of psychology and pediatrics at Johns Hopkins and a researcher into the development of gender. As David had an identical twin brother, Money viewed the situation as a rare opportunity to conduct his own experiment into the nature of gender, advising Reimer’s parents to have David surgically reassigned as female. Money’s theory was that gender was a completely sociological construct, influenced primarily by nurture rather than nature. Money was catastrophically wrong.

Reimer never identified as female and developed a strong psychological attachment to being male. At age 14, Reimer’s parents told him the truth about his condition and he elected to switch to a male identity. Although he later had surgery to correct the initial sex reassignment, he suffered from profound depression related to his sex and gender issues and took his own life in 2004. Money’s desire to test his controversial theory on a human subject without consent cost someone his life.

7. Landis’ Facial Expressions Experiment (1924). In 1924, at the University of Minnesota, Carney Landis devised an experiment to investigate the similarity of different people’s facial expressions. He wanted to determine whether people display similar or different facial expressions while experiencing common emotions.

Landis chose students as participants and exposed them to different situations designed to prompt emotional reactions. To elicit revulsion, however, he ordered the participants to behead a live rat. One-third of the participants refused; the other two-thirds complied, with a great deal of trauma done to them – and to the rats. This unethical experiment is one of many reasons review boards were created and have since made drastic changes in policy over experiments done on humans.

6. The Aversion Project (1970s and 80s). During the apartheid years in South Africa, doctors in the South African military tried to “cure” homosexuality in conscripts by forcing them to undergo electroshock therapy and chemical castration. The military also forced gay conscripts to undergo sex-change operations. This happened as part of a secret military program, headed by Dr. Aubrey Levin, that sought to study and eliminate homosexuality in the military as recently as 1989. Except for several cases of lesbian soldiers who were abused, most of the roughly 900 victims were very young male conscripts, aged 16 to 24. Astoundingly, Dr. Levin relocated to Canada, where he worked until he was sent to prison for assaulting a patient.

5. Monkey Drug Trials (1969). The Monkey Drug Trials experiment was ostensibly meant to test the effects of illicit drugs on monkeys. Given that monkeys and humans have similar reactions to drugs, and that animals have long been part of medical experiments, on its face this experiment might not look too bad. It was actually horrific. Monkeys and rats learned to self-administer a range of drugs, including cocaine, amphetamines, codeine, morphine and alcohol. The animals suffered horribly, tearing off their fingers, clawing away their fur, and in some cases breaking bones attempting to break out of their cages. The experiment had no purpose other than to re-validate findings that had been confirmed many times before.

4. Little Albert (1920). John Watson, the founder of the psychological school of behaviorism, believed that behaviors were primarily learned. Eager to test his hypothesis, he used an orphan, “Little Albert,” as an experimental subject. Over several months, he exposed the child to a laboratory rat, which initially provoked no fear response. Then, each time the child was shown the rat, Watson struck a steel bar with a hammer, scaring the little boy and producing a fear response. By associating the appearance of the rat with the loud noise, Little Albert became afraid of the rat. Naturally, the fear was a condition that needed to be undone, but the boy left the facility before Watson could remedy things.

3. Harlow’s Pit of Despair (1970s). Harry Harlow, a psychologist at the University of Wisconsin-Madison, was a researcher in the field of maternal bonding. To investigate the effects of attachment on development, he took young monkeys and isolated them after they’d bonded with their mothers. He kept them completely alone, in a vertical chamber that prevented all contact with other monkeys. They developed no social skills and became completely unable to function as normal rhesus monkeys. These controversial psychology experiments were not only staggeringly cruel but revealed no data that wasn’t already known.

2. Learned Helplessness (1965). In 1965, Doctors Martin Seligman and Steven Maier investigated the concept of learned helplessness. Three sets of dogs were placed in harnesses. The first group were control subjects; nothing happened to them. Dogs in the second group received electric shocks, but were able to stop the shocks by pressing a lever. In the third group, the dogs were shocked, but pressing the lever did nothing. Next, the psychologists placed the dogs in an open box they could easily jump out of. The dogs from the third group, however, made no attempt to escape, even when shocked. They had developed learned helplessness: a perceived inability to take successful action to change a bad situation.

1. The Robbers Cave Experiment (1954). Although the Robbers Cave Experiment is much less disturbing than some of the others in the list, it’s still a good example of the need for informed consent. In 1954, Muzafer Sherif, a psychologist interested in group dynamics and conflict, brought a group of preteen boys to a summer camp. He divided them into groups and engaged the boys in competitions. However, Sherif manipulated the outcomes of the contests, keeping the results close for each group. Then he gave the boys tasks to complete as a unified group, with everyone working together. The conflicts that had arisen when the boys were competing vanished when they worked as one large group.

More Psychology Rankings of Interest:

  • 10 Key Moments in the History of Psychology
  • 5 Key Moments in Social Psychology
  • 5 Learning Theories in Psychology
  • Top 10 Movies About Psychology

30 Most Disturbing Human Experiments in History

Disturbing human experiments aren’t something the average person thinks much about. Rather, the progress achieved over the last 150 years of human history is an accomplishment we’re reminded of almost daily. Achievements in biomedicine and the field of psychology mean that we no longer need to fear once-deadly diseases or treat masturbation as a form of insanity. For better or worse, we have developed more effective ways to gather information, treat skin abnormalities, and even kill each other. What we are not constantly reminded of, however, are the human lives that have been damaged or lost in the name of this progress. The following is a list of the 30 most disturbing human experiments in history.

30. The Tearoom Sex Study

Sociologist Laud Humphreys often wondered about the men who commit impersonal sexual acts with one another in public restrooms. He wondered why “tearoom sex” — fellatio in public restrooms — led to the majority of homosexual arrests in the United States. For his Ph.D. dissertation at Washington University, Humphreys decided to become a “watchqueen,” the person who keeps watch and coughs when a cop or stranger gets near. Throughout his research, Humphreys observed hundreds of acts of fellatio and interviewed many of the participants. He found that 54% of his subjects were married, and 38% were very clearly neither bisexual nor homosexual. Humphreys’ research shattered a number of stereotypes held by both the public and law enforcement.

29. Prison Inmates as Test Subjects

In 1951, Dr. Albert M. Kligman, a dermatologist at the University of Pennsylvania and future inventor of Retin-A, began experimenting on inmates at Philadelphia’s Holmesburg Prison. As Kligman later told a newspaper reporter, “All I saw before me were acres of skin. It was like a farmer seeing a field for the first time.” Over the next 20 years, inmates willingly allowed Kligman to use their bodies in experiments involving toothpaste, deodorant, shampoo, skin creams, detergents, liquid diets, eye drops, foot powders, and hair dyes. Though the tests required constant biopsies and painful procedures, none of the inmates reportedly experienced long-term harm.

28. Henrietta Lacks

In 1951, Henrietta Lacks, a poor, uneducated African-American woman from Baltimore, became the unwitting source of cells that were then cultured for the purpose of medical research. Though researchers had tried to grow human cells before, Henrietta’s were the first to be successfully kept alive and cloned. Henrietta’s cells, known as HeLa cells, have been instrumental in the development of the polio vaccine, cancer research, AIDS research, gene mapping, and countless other scientific endeavors. Henrietta died penniless and was buried without a tombstone in a family cemetery. For decades, her husband and five children were left in the dark about their wife and mother’s amazing contribution to modern medicine.

27. Project QKHILLTOP

In 1954, the CIA developed an experiment called Project QKHILLTOP to study Chinese brainwashing techniques, which it then used to develop new methods of interrogation. Leading the research was Dr. Harold Wolff of Cornell University Medical School. After requesting that the CIA provide him with information on imprisonment, deprivation, humiliation, torture, brainwashing, hypnosis, and more, Wolff’s research team began to formulate a plan to develop secret drugs and various brain-damaging procedures. According to a letter he wrote, in order to fully test the effects of the harmful research, Wolff expected the CIA to “make available suitable subjects.”

26. Stateville Penitentiary Malaria Study

During World War II, malaria and other tropical diseases were impeding the efforts of the American military in the Pacific. To get a handle on the problem, the Malaria Research Project was established at Stateville Penitentiary in Joliet, Illinois. Doctors from the University of Chicago exposed 441 volunteer inmates to bites from malaria-infected mosquitos. Though one inmate died of a heart attack, researchers insisted his death was unrelated to the study. The widely praised experiment continued at Stateville for 29 years, and included the first human test of primaquine, a medication still used in the treatment of malaria and Pneumocystis pneumonia.

25. Emma Eckstein and Sigmund Freud

Despite seeking the help of Sigmund Freud for vague symptoms like stomach ailments and slight depression, 27-year-old Emma Eckstein was “treated” by the Austrian doctor for hysteria and excessive masturbation, a habit then considered dangerous to mental health. Emma’s treatment included a disturbing experimental surgery in which she was anesthetized with only a local anesthetic and cocaine before the inside of her nose was cauterized. Not surprisingly, Emma’s surgery was a disaster. Whether Emma was a legitimate medical patient or, as a recent movie suggests, a source of more amorous interest for Freud, he continued to treat her for three years.

24. Dr. William Beaumont and the Stomach

In 1822, a fur trader on Mackinac Island in Michigan was accidentally shot in the stomach and treated by Dr. William Beaumont. Despite dire predictions, the fur trader survived — but with a hole (fistula) in his stomach that never healed. Recognizing a unique opportunity to observe the digestive process, Beaumont began conducting experiments: he would tie food to a string, insert it through the hole in the trader’s stomach, and remove it every few hours to observe how it had been digested. Though gruesome, Beaumont’s experiments led to the worldwide acceptance that digestion is a chemical, not a mechanical, process.

23. Electroshock Therapy on Children

In the 1960s, Dr. Lauretta Bender of New York’s Creedmoor Hospital began what she believed to be a revolutionary treatment for children with social issues: electroshock therapy. Bender’s methods included interviewing and analyzing a sensitive child in front of a large group, then applying a gentle amount of pressure to the child’s head. Supposedly, any child who moved with the pressure was showing early signs of schizophrenia. Herself the victim of a misunderstood childhood, Bender was said to be unsympathetic to the children in her care. By the time her treatments were shut down, Bender had used electroshock therapy on more than 100 children, the youngest of whom was three years old.

22. Project Artichoke

In the 1950s, the CIA’s Office of Scientific Intelligence ran a series of mind-control projects in an attempt to answer the question, “Can we get control of an individual to the point where he will do our bidding against his will and even against fundamental laws of nature?” One of these programs, Project Artichoke, studied hypnosis, forced morphine addiction, drug withdrawal, and the use of chemicals to induce amnesia in unwitting human subjects. Though the project was shut down in the mid-1960s, it opened the door to extensive research on the use of mind control in field operations.

21. Hepatitis in Mentally Disabled Children

In the 1950s, Willowbrook State School, a New York state-run institution for mentally handicapped children, began experiencing outbreaks of hepatitis. Due to unsanitary conditions, it was virtually inevitable that these children would contract the disease. Dr. Saul Krugman, sent to investigate the outbreak, proposed an experiment that would assist in developing a vaccine. However, the experiment required deliberately infecting children with hepatitis. Though Krugman’s study was controversial from the start, critics were eventually silenced by the permission letters obtained from each child’s parents. In reality, offering one’s child to the experiment was often the only way to guarantee admittance into the overcrowded institution.

20. Operation Midnight Climax

Initially established in the 1950s as a sub-project of a CIA-sponsored mind-control research program, Operation Midnight Climax sought to study the effects of LSD on individuals. In San Francisco and New York, unconsenting subjects were lured to safehouses by prostitutes on the CIA payroll, unknowingly dosed with LSD and other mind-altering substances, and monitored from behind one-way glass. Though the safehouses were shut down in 1965, after it was discovered that the CIA was administering LSD to human subjects, Operation Midnight Climax had served as a theater for extensive research on sexual blackmail, surveillance technology, and the use of mind-altering drugs in field operations.

19. Study of Humans Accidentally Exposed to Fallout Radiation

The 1954 “Study of Response of Human Beings exposed to Significant Beta and Gamma Radiation due to Fall-out from High-Yield Weapons,” better known as Project 4.1, was a medical study conducted by the U.S. on residents of the Marshall Islands. When the Castle Bravo nuclear test produced a yield far larger than originally expected, the government instituted a top-secret study to “evaluate the severity of radiation injury” to those accidentally exposed. Though most sources agree the exposure was unintentional, many Marshallese believed Project 4.1 was planned before the Castle Bravo test. In all, 239 Marshallese were exposed to significant levels of radiation.

18. The Monster Study

In 1939, University of Iowa researchers Wendell Johnson and Mary Tudor conducted a stuttering experiment on 22 orphan children in Davenport, Iowa. The children were separated into two groups, the first of which received positive speech therapy in which the children were praised for speech fluency. Children in the second group received negative speech therapy and were belittled for every speech imperfection. Normal-speaking children in the second group developed speech problems that they retained for the rest of their lives. Fearful of comparisons to the human experiments conducted by the Nazis, Johnson and Tudor never published the results of their “Monster Study.”

17. Project MKUltra

Project MKUltra is the code name of a CIA-sponsored research operation that experimented in human behavioral engineering. From 1953 to 1973, the program employed various methodologies to manipulate the mental states of American and Canadian citizens. These unwitting human test subjects were plied with LSD and other mind-altering drugs, hypnosis, sensory deprivation, isolation, verbal and sexual abuse, and various forms of torture. Research occurred at universities, hospitals, prisons, and pharmaceutical companies. Though the project sought to develop “chemical […] materials capable of employment in clandestine operations,” Project MKUltra was ended by a Congress-commissioned investigation into CIA activities within the U.S.

16. Experiments on Newborns

In the 1960s, researchers at the University of California began an experiment to study changes in blood pressure and blood flow. The researchers used 113 newborns ranging in age from one hour to three days old as test subjects. In one experiment, a catheter was inserted through the umbilical arteries and into the aorta, and the newborn’s feet were then immersed in ice water to test aortic pressure. In another experiment, up to 50 newborns were individually strapped onto a circumcision board, then tilted so that the blood rushed to their heads and their blood pressure could be monitored.

15. The Aversion Project

In 1969, during South Africa’s detestable apartheid era, thousands of homosexuals were handed over to the care of Dr. Aubrey Levin, an army colonel and psychologist convinced he could “cure” homosexuality. At the Voortrekkerhoogte military hospital near Pretoria, Levin used electroconvulsive aversion therapy to “reorientate” his patients. Electrodes were strapped to a patient’s upper arm with wires running to a dial calibrated from 1 to 10. Homosexual men were shown pictures of a naked man and encouraged to fantasize, at which point they were subjected to severe shocks. When Levin was warned that he would be named an abuser of human rights, he emigrated to Canada, where he went on to work at a teaching hospital.

14. Medical Experiments on Prison Inmates

Perhaps one benefit of being an inmate at California’s San Quentin prison was easy access to acclaimed Bay Area doctors. The downside was that these doctors also had easy access to inmates. From 1913 to 1951, Dr. Leo Stanley, chief surgeon at San Quentin, used prisoners as test subjects in a variety of bizarre medical experiments. Stanley’s experiments included sterilization and potential treatments for the Spanish flu. In one particularly disturbing experiment, Stanley performed testicle transplants on living prisoners using testicles from executed prisoners and, in some cases, from goats and boars.

13. Sexual Reassignment

In 1965, Canadian David Peter Reimer was born biologically male. At seven months old, his penis was accidentally destroyed during an unconventional circumcision by cauterization. John Money, a psychologist and proponent of the idea that gender is learned, convinced the Reimers that their son would be more likely to achieve successful, functional sexual maturation if he were raised as a girl. Though Money continued to report the reassignment as a success over the years, David’s own account insisted that he had never identified as female. He spent his childhood teased, ostracized, and seriously depressed, and at age 38, he committed suicide by shooting himself in the head.

12. Effect of Radiation on Testicles

Between 1963 and 1973, dozens of Washington and Oregon prison inmates were used as test subjects in an experiment designed to test the effects of radiation on testicles. Bribed with cash and the suggestion of parole, 130 inmates willingly agreed to participate in the experiments, conducted by the University of Washington on behalf of the U.S. government. In most cases, subjects were zapped with over 400 rads of radiation (the equivalent of 2,400 chest x-rays) in 10-minute intervals. Only much later did the inmates learn that the experiments were far more dangerous than they had been told. In 2000, the former participants reached a $2.4 million settlement in a class-action suit against the University.

11. Stanford Prison Experiment

Conducted at Stanford University from August 14–20, 1971, the Stanford Prison Experiment was an investigation into the causes of conflict between military guards and prisoners. Twenty-four male students were chosen and randomly assigned the roles of prisoners and guards, then situated in a specially designed mock prison in the basement of the Stanford psychology building. The subjects assigned to be guards enforced authoritarian measures and subjected the prisoners to psychological torture. Surprisingly, many of the prisoners simply accepted the abuses. Though the experiment exceeded all of the researchers’ expectations, it was abruptly ended after only six days.

10. Syphilis Experiments in Guatemala

From 1946 to 1948, the United States government, Guatemalan president Juan José Arévalo, and some Guatemalan health ministries cooperated in a disturbing human experiment on unwitting Guatemalan citizens. Doctors deliberately infected soldiers, prostitutes, prisoners, and mental patients with syphilis and other sexually transmitted diseases in an attempt to track their untreated natural progression. Though some subjects were treated with antibiotics, the experiment resulted in at least 30 documented deaths. In 2010, the United States made a formal apology to Guatemala for its involvement in these experiments.

9. Tuskegee Syphilis Study

In 1932, the U.S. Public Health Service began working with the Tuskegee Institute to track the natural progression of untreated syphilis. Six hundred poor, illiterate, male sharecroppers were recruited in Macon County, Alabama. Of the 600 men, 399 had previously contracted syphilis, and none of them were told they had a life-threatening disease. Instead, the men were told they were receiving free healthcare, meals, and burial insurance in exchange for participating. Even after penicillin was proven an effective cure for syphilis in 1947, the study continued until 1972. In addition to the original subjects, victims of the study included wives who contracted the disease and children born with congenital syphilis. In 1997, President Bill Clinton formally apologized to those affected by what is often called the “most infamous biomedical experiment in U.S. history.”

8. Milgram Experiment

In 1961, Stanley Milgram, a psychologist at Yale University, began a series of social psychology experiments that measured the willingness of test subjects to obey an authority figure. Conducted only three months after the start of the trial of German Nazi war criminal Adolf Eichmann, Milgram’s experiment sought to answer the question, “Could it be that Eichmann and his million accomplices in the Holocaust were just following orders?” In the experiment, two participants (one secretly an actor and one an unwitting test subject) were separated into two rooms where they could hear, but not see, each other. The test subject would then read a series of questions to the actor, punishing each wrong answer with an electric shock. Though many people would indicate their desire to stop the experiment, almost all subjects continued when they were told they would not be held responsible, or that there would not be any permanent damage.

7. Infected Mosquitos in Towns

In 1956 and 1957, the United States Army conducted a number of biological warfare experiments on the cities of Savannah, Georgia and Avon Park, Florida. In one such experiment, millions of infected mosquitos were released into the two cities to see whether the insects could spread yellow fever and dengue fever. Not surprisingly, hundreds of residents contracted illnesses that included fevers, respiratory problems, stillbirths, encephalitis, and typhoid. In order to photograph the results of their experiments, Army researchers pretended to be public health workers. Several people died as a result of the research.

6. Human Experimentation in the Soviet Union

Beginning in 1921 and continuing for most of the 20th century, the Soviet Union employed poison laboratories — known as Laboratory 1, Laboratory 12, and Kamera — as covert research facilities of the secret police agencies. Prisoners from the Gulags were exposed to a number of deadly poisons, the goal being to find a tasteless, odorless chemical that could not be detected post mortem. Tested poisons included mustard gas, ricin, digitoxin, and curare, among others. Men and women of varying ages and physical conditions were brought to the laboratories and given the poisons as “medication,” or as part of a meal or drink.

5. Human Experimentation in North Korea

Several North Korean defectors have described witnessing disturbing cases of human experimentation. In one alleged experiment, 50 healthy women prisoners were given poisoned cabbage leaves — all 50 were dead within 20 minutes. Other experiments described include surgery on prisoners without anesthesia, purposeful starvation, beating prisoners over the head before using the zombie-like victims for target practice, and chambers in which whole families are murdered with suffocation gas. It is said that each month, a black van known as “the crow” collects 40–50 people from a camp and takes them to an unknown location for experiments.

4. Nazi Human Experimentation

Over the course of the Third Reich and the Holocaust, Nazi Germany conducted a series of medical experiments on Jews, POWs, Romani, and other persecuted groups. The experiments were conducted in concentration camps and in most cases resulted in death, disfigurement, or permanent disability. Especially disturbing experiments included attempts to genetically manipulate twins; bone, muscle, and nerve transplantation; exposure to diseases and chemical gases; sterilization; and anything else the infamous Nazi doctors could think up. After the war, these crimes were tried at the Nuremberg Trials and ultimately led to the development of the Nuremberg Code of medical ethics.

3. Unit 731

From 1937 to 1945, the Imperial Japanese Army ran a covert biological and chemical warfare research unit called Unit 731. Based in the large Chinese city of Harbin, Unit 731 was responsible for some of the most atrocious war crimes in history. Chinese and Russian subjects — men, women, children, infants, the elderly, and pregnant women — were subjected to experiments that included the removal of organs from a live body, amputation for the study of blood loss, germ warfare attacks, and weapons testing. Some prisoners even had their stomachs surgically removed and their esophagus reattached to their intestines. Many of the scientists involved in Unit 731 went on to prominent careers in politics, academia, business, and medicine.

2. Radioactive Materials in Pregnant Women

Shortly after World War II, with the impending Cold War at the forefront of American minds, many medical researchers were preoccupied with radioactivity and chemical warfare. In an experiment at Vanderbilt University, 829 pregnant women were given “vitamin drinks” they were told would improve the health of their unborn babies. In reality, the drinks contained radioactive iron, and the researchers were studying how quickly the radioisotope crossed into the placenta. At least seven of the babies later died from cancers and leukemia, and the women themselves experienced rashes, bruises, anemia, hair and tooth loss, and cancer.

1. Mustard Gas Tested on American Military

In 1943, the U.S. Navy exposed its own sailors to mustard gas. Officially, the Navy was testing the effectiveness of new clothing and gas masks against the deadly gas that had proven so terrifying in the First World War. The worst of the experiments occurred at the Naval Research Laboratory in Washington, where 17- and 18-year-old boys were approached after eight weeks of boot camp and asked if they wanted to participate in an experiment that would help shorten the war. Only when the boys reached the laboratory were they told the experiment involved mustard gas. The participants, almost all of whom suffered severe external and internal burns, were ignored by the Navy and, in some cases, threatened with the Espionage Act. In 1991, the reports were finally declassified and taken before Congress.


Addressing Social Justice Through the Lens of Henrietta Lacks

Among the many disruptions of the pandemic, one particular disappointment was the cancellation of the in-person annual meeting of the American Society for Bioethics and Humanities (ASBH), scheduled for Baltimore and set to coincide with the Berman Institute’s 25th Anniversary Celebration and the centennial of Henrietta Lacks’s birth. Yet despite the switch to a virtual format, the Berman Institute was able to host a plenary session that was the talk of the meeting and continues to reverberate.

“Social Justice and Bioethics Through the Lens of the Story of Henrietta Lacks,” was moderated by Jeffrey Kahn and featured Ruth Faden as a panelist. She was joined by Henrietta Lacks’s granddaughter, Jeri Lacks, architect Victor Vines, and Georgetown University Law Center bioethicist Patricia King.

Faden began the session by providing an overview of the Henrietta Lacks story, framed in the context of structural injustice.

“The structural injustice of racism defined in pretty much every way how this story unfolded,” she said. “What is wrong about what happened to the Lacks family engages every core element of human well-being. There were assaults on the social basis of respect, and of self-determination, on attachments, on personal security and on health. Mrs. Lacks and her children were poor Black people in a segregated world in which the most profound injustices of racial oppression were daily features of their lives.”

Faden was followed by Jeri Lacks who expressed the importance of continuing to let the world know about her grandmother’s story.

“Her cells were used to develop the polio vaccine and to treat HIV, and in creating in vitro fertilization. She is a person who continues to give life, and to preserve life,” said Lacks. “No matter what your race, your age, your social circumstances, she continues to improve your life.”

Victor Vines, an architect who was part of the architect team leading programming and planning for the National Museum of African American History and Culture and led the feasibility study for what will be Johns Hopkins University’s Henrietta Lacks Hall, spoke next about addressing racial injustice through architecture and design.

“When we started work on Lacks Hall, we didn’t talk a lot about architecture or design. We talked about what that story is that we want to tell through the building. Meeting with the Lacks family was critically important to that,” Vines said. “We had to understand what they went through and what they care about. The building still has to function and house the Berman Institute, so we had to meet their needs. And we discovered a third client, the East Baltimore community. At the end of the day, this building and university reside within that community, and they will be called to embrace this project – or not.”

King concluded the panel with a riveting and wide-ranging discussion that touched upon intersectionality, segregation, the Tuskegee experiments and participation in clinical trials, COVID, race as a social construct, and the role of consent, all within the framework of Henrietta Lacks’s story.

“Our narratives are important and should be thought of as lessons or homework for institutions,” she said. “They not only document the deep distrust we bring to health encounters but also convey relevant aspects of our lives that should be appreciated.”

As the session ended Kahn noted that perhaps it was fortunate the session had been virtual, so the recording “could be shared with others for posterity. I’m not quite speechless, but maybe close,” he said.

Honoring an Immortal Contribution

Johns Hopkins University President Ronald J. Daniels and Paul B. Rothman, CEO of Johns Hopkins Medicine and dean of the medical faculty of the Johns Hopkins University School of Medicine, along with Berman Institute Executive Director Jeffrey Kahn and descendants of Henrietta Lacks, recently announced plans to name a new multidisciplinary building on the Johns Hopkins East Baltimore campus in honor of Henrietta Lacks, who was the source of the HeLa cell line that has been critical to numerous advances in medicine.

Surrounded by descendants of Lacks, Daniels made the announcement at the 9th annual Henrietta Lacks Memorial Lecture in the Turner Auditorium in East Baltimore.

“Through her life and her immortal cells, Henrietta Lacks made an immeasurable impact on science and medicine that has touched countless lives around the world,” Daniels said. “This building will stand as a testament to her transformative impact on scientific discovery and the ethics that must undergird its pursuit. We at Johns Hopkins are profoundly grateful to the Lacks family for their partnership as we continue to learn from Mrs. Lacks’ life and to honor her enduring legacy.”

Henrietta Lacks’ contributions to science were not widely known until the 2010 release of the book The Immortal Life of Henrietta Lacks by Rebecca Skloot, which explored Lacks’ life story, her impact on medical science and important bioethical issues. In 2017, HBO and Harpo Studios released a movie based on the book, with Oprah Winfrey starring as Deborah Lacks, Henrietta Lacks’ daughter.

Several Lacks family members attended today’s event. “It is a proud day for the Lacks family. We have been working with Hopkins for many years now on events and projects that honor our grandmother,” said Jeri Lacks, granddaughter of Henrietta Lacks. “They are all meaningful, but this is the ultimate honor, one befitting of her role in advancing modern medicine.”

The building, which will adjoin the Berman Institute of Bioethics’ current home in Deering Hall, will support programs that enhance participation and partnership with members of the community in research that can benefit the community. It will also expand the Berman Institute and its work, extending opportunities to study and promote research ethics and community engagement in research.

The story portrayed in The Immortal Life of Henrietta Lacks points to several important bioethical issues, including informed consent, medical records privacy, and communication with tissue donors and research participants.

“The story of Henrietta Lacks has encouraged us all to examine, discuss and wrestle with difficult issues that are at the foundation of the ethics of research, and must inform our relationships with the individuals and communities that are part of that research,” said Jeffrey Kahn, director of the Johns Hopkins University Berman Institute of Bioethics. “As a result, students, faculty and the entire research community at Johns Hopkins and around the world do their work with a greater sensitivity to these critical issues.”

In 2013, Johns Hopkins worked with members of the Lacks family and the National Institutes of Health (NIH) to help broker an agreement that requires scientists to receive permission to use Henrietta Lacks’ genetic blueprint in NIH-funded research.

The NIH committee tasked with overseeing the use of HeLa cells now includes two members of the Lacks family. The medical research community has also made significant strides in improving research practices, in part thanks to the lessons learned from Henrietta Lacks’ story.

“It has been an honor for me to work with the Lacks family on how we can recognize the contribution of Henrietta Lacks to medical research and the community. Their willingness to focus on the positive impact of the HeLa cells has been inspiring to me. The Henrietta Lacks story has led many researchers to rededicate themselves to working more closely with patients,” said Daniel E. Ford, vice dean for clinical investigation in the school of medicine. “The new building will be a hub for the community engagement and collaboration program of the NIH-supported Institute for Clinical and Translational Research.”

Groundbreaking on the building that will be named for Henrietta Lacks is scheduled for 2020 with an anticipated completion in 2022.

To learn more about Henrietta Lacks and the wide-ranging impact of HeLa cells on medical research, please visit: www.hopkinsmedicine.org/henriettalacks.

Alan Regenberg, MBE

Alan is also engaged in a broad range of research projects and programs, including the Berman Institute’s science programs: the Stem Cell Policy and Ethics (SCOPE) Program; the Program in Ethics and Brain Sciences (PEBS-Neuroethics); the Hinxton Group, an international consortium on stem cells, ethics and law; and the eSchool+ Initiative. Recent research has focused on using deliberative democracy tools to engage communities about their values for allocating scarce medical resources, such as ventilators, in disasters like pandemics. Additional recent work has focused on ethical challenges related to gene editing, stem cell research, social media, public engagement, vaccines, and neuroethics.

Joseph Ali, JD

Vaccinating Pregnant Women Against Ebola

In a STAT News opinion piece, Johns Hopkins University experts, including the Berman Institute’s Ruth Faden, argued it is unfair to deny pregnant and lactating women the experimental Ebola vaccine if they wish to take it, given the great risk the virus poses to those who are exposed to it.

“From a public health perspective and an ethical perspective, the decision to exclude pregnant and lactating women is utterly indefensible,” they wrote.

The authors are members of the Pregnancy Research Ethics for Vaccines, Epidemics, and New Technologies (PREVENT) Working Group, an international team of experts in bioethics, maternal immunization, maternal-fetal medicine, obstetrics, pediatrics, philosophy, public health, and vaccine research convened to provide specific recommendations addressing this critical gap in vaccine research, development, and epidemic response. The group holds that excluding pregnant women from efforts to develop and deploy vaccines against emerging threats is not acceptable.

Nancy E. Kass, ScD

Dr. Kass is coeditor (with Ruth Faden) of HIV, AIDS and Childbearing: Public Policy, Private Lives (Oxford University Press, 1996).

She has served as consultant to the President’s Advisory Committee on Human Radiation Experiments, to the National Bioethics Advisory Commission, and to the National Academy of Sciences. Dr. Kass currently serves as the Chair of the NIH Precision Medicine Initiative Central IRB; she previously co-chaired the National Cancer Institute (NCI) Committee to develop Recommendations for Informed Consent Documents for Cancer Clinical Trials and served on the NCI’s central IRB. Current research projects examine improving informed consent in human research, ethical guidance development for Ebola and other infectious outbreaks, and ethics and learning health care. Dr. Kass teaches the Bloomberg School of Public Health’s course on U.S. and International Research Ethics and Integrity, she served as the director of the School’s PhD program in bioethics and health policy from its inception until 2016, and she has directed (with Adnan Hyder) the Johns Hopkins Fogarty African Bioethics Training Program since its inception in 2000. Dr. Kass is an elected member of the Institute of Medicine (now National Academy of Medicine) and an elected Fellow of the Hastings Center.

Jeremy Sugarman, MD, MPH, MA

He was the founding director of the Trent Center for Bioethics, Humanities and History of Medicine at Duke University where he was also a professor of medicine and philosophy. He was appointed as an Academic Icon at the University of Malaya and is a faculty affiliate of the Kennedy Institute of Ethics at Georgetown University.

Dr. Sugarman was the longstanding chair of the Ethics Working Group of the HIV Prevention Trials Network. He is currently a member of the Scientific and Research Advisory Board for the Canadian Blood Service and the Ethics and Public Policy Committees of the International Society for Stem Cell Research. He co-leads the Ethics and Regulatory Core of the NIH Health Care Systems Research Collaboratory and is co-chair of the Johns Hopkins’ Institutional Stem Cell Research Oversight Committee.

Dr. Sugarman has been elected as a member of the American Society of Clinical Investigation, Association of American Physicians, and the National Academy of Medicine (formerly the Institute of Medicine). He is a fellow of the American Association for the Advancement of Science, the American College of Physicians and the Hastings Center. He also received a Doctor of Science, honoris causa, from New York Medical College.

Stanford Prison Experiment: Zimbardo’s Famous Study

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


  • The experiment was conducted in 1971 by psychologist Philip Zimbardo to examine situational forces versus dispositions in human behavior.
  • 24 young, healthy, psychologically normal men were randomly assigned to be “prisoners” or “guards” in a simulated prison environment.
  • The experiment had to be terminated after only 6 days due to the extreme, pathological behavior emerging in both groups. The situational forces overwhelmed the dispositions of the participants.
  • Pacifist young men assigned as guards began behaving sadistically, inflicting humiliation and suffering on the prisoners. Prisoners became blindly obedient and allowed themselves to be dehumanized.
  • The principal investigator, Zimbardo, was also transformed into a rigid authority figure as the Prison Superintendent.
  • The experiment demonstrated the power of situations to alter human behavior dramatically. Even good, normal people can do evil things when situational forces push them in that direction.

Zimbardo and his colleagues (1973) were interested in finding out whether the brutality reported among guards in American prisons was due to the sadistic personalities of the guards (i.e., dispositional) or had more to do with the prison environment (i.e., situational).

For example, prisoners and guards may have personalities that make conflict inevitable, with prisoners lacking respect for law and order and guards being domineering and aggressive.

Alternatively, prisoners and guards may behave in a hostile manner due to the rigid power structure of the social environment in prisons.

Zimbardo predicted the situation made people act the way they do rather than their disposition (personality).

To study people’s roles in prison situations, Zimbardo converted a basement of the Stanford University psychology building into a mock prison.

He advertised asking for volunteers to participate in a study of the psychological effects of prison life.

The 75 applicants who answered the ad were given diagnostic interviews and personality tests to eliminate candidates with psychological problems, medical disabilities, or a history of crime or drug abuse.

Twenty-four men judged to be the most physically and mentally stable, the most mature, and the least involved in antisocial behaviors were chosen to participate.

The participants did not know each other prior to the study and were paid $15 per day to take part in the experiment.

Participants were randomly assigned to either the role of prisoner or guard in a simulated prison environment. There were two reserves, and one participant dropped out, finally leaving 10 prisoners and 11 guards.

Prisoners were treated like real criminals: they were arrested at their own homes, without warning, and taken to the local police station, where they were fingerprinted, photographed, and ‘booked.’

Then they were blindfolded and driven to the psychology department of Stanford University, where Zimbardo had had the basement set out as a prison, with barred doors and windows, bare walls and small cells. Here the deindividuation process began.

When the prisoners arrived at the prison, they were stripped naked, deloused, had all their personal possessions removed and locked away, and were given prison clothes and bedding. They were issued a uniform and referred to by their number only.

The use of ID numbers was a way to make prisoners feel anonymous. Each prisoner had to be called only by his ID number and could only refer to himself and the other prisoners by number.

Their clothes comprised a smock with their number written on it, but no underclothes. They also had a tight nylon cap to cover their hair, and a locked chain around one ankle.

All guards were dressed in identical uniforms of khaki, and they carried a whistle around their neck and a billy club borrowed from the police. Guards also wore special sunglasses, to make eye contact with prisoners impossible.

Three guards worked shifts of eight hours each (the other guards remained on call). Guards were instructed to do whatever they thought was necessary to maintain law and order in the prison and to command the respect of the prisoners. No physical violence was permitted.

Zimbardo observed the behavior of the prisoners and guards (as a researcher), and also acted as a prison warden.

Within a very short time both guards and prisoners were settling into their new roles, with the guards adopting theirs quickly and easily.

Asserting Authority

Within hours of beginning the experiment, some guards began to harass prisoners. At 2:30 A.M. prisoners were awakened from sleep by blasting whistles for the first of many “counts.”

The counts served as a way to familiarize the prisoners with their numbers. More importantly, they provided a regular occasion for the guards to exercise control over the prisoners.

The prisoners soon adopted prisoner-like behavior too. They talked about prison issues a great deal of the time. They ‘told tales’ on each other to the guards.

They started taking the prison rules very seriously, as though they were there for the prisoners’ benefit and infringement would spell disaster for all of them. Some even began siding with the guards against prisoners who did not obey the rules.

Physical Punishment

The prisoners were taunted with insults and petty orders, they were given pointless and boring tasks to accomplish, and they were generally dehumanized.

Push-ups were a common form of physical punishment imposed by the guards. One of the guards stepped on the prisoners’ backs while they did push-ups, or made other prisoners sit on the backs of fellow prisoners doing their push-ups.

Asserting Independence

Because the first day passed without incident, the guards were surprised and totally unprepared for the rebellion which broke out on the morning of the second day.

During the second day of the experiment, the prisoners removed their stocking caps, ripped off their numbers, and barricaded themselves inside the cells by putting their beds against the door.

The guards called in reinforcements. The three guards who were waiting on stand-by duty came in and the night shift guards voluntarily remained on duty.

Putting Down the Rebellion

The guards retaliated by using a fire extinguisher which shot a stream of skin-chilling carbon dioxide, and they forced the prisoners away from the doors. Next, the guards broke into each cell, stripped the prisoners naked and took the beds out.

The ringleaders of the prisoner rebellion were placed into solitary confinement. After this, the guards generally began to harass and intimidate the prisoners.

Special Privileges

One of the three cells was designated as a “privilege cell.” The three prisoners least involved in the rebellion were given special privileges. The guards gave them back their uniforms and beds and allowed them to wash their hair and brush their teeth.

Privileged prisoners also got to eat special food in the presence of the other prisoners who had temporarily lost the privilege of eating. The effect was to break the solidarity among prisoners.

Consequences of the Rebellion

Over the next few days, the relationships between the guards and the prisoners changed, with a change in one leading to a change in the other. Remember that the guards were firmly in control and the prisoners were totally dependent on them.

As the prisoners became more dependent, the guards became more derisive towards them. They held the prisoners in contempt and let the prisoners know it. As the guards’ contempt for them grew, the prisoners became more submissive.

As the prisoners became more submissive, the guards became more aggressive and assertive. They demanded ever greater obedience from the prisoners. The prisoners were dependent on the guards for everything, so tried to find ways to please the guards, such as telling tales on fellow prisoners.

Prisoner #8612

Less than 36 hours into the experiment, Prisoner #8612 began suffering from acute emotional disturbance, disorganized thinking, uncontrollable crying, and rage.

After a meeting with the guards, where they told him he was weak but offered him “informant” status, #8612 returned to the other prisoners and said, “You can’t leave. You can’t quit.”

Soon #8612 “began to act ‘crazy,’ to scream, to curse, to go into a rage that seemed out of control.” It wasn’t until this point that the psychologists realized they had to let him out.

A Visit from Parents

The next day, the guards held a visiting hour for parents and friends. They were worried that when the parents saw the state of the jail, they might insist on taking their sons home. Guards washed the prisoners, had them clean and polish their cells, fed them a big dinner and played music on the intercom.

After the visit, rumors spread of a mass escape plan. Afraid that they would lose the prisoners, the guards and experimenters tried to enlist the help and facilities of the Palo Alto police department.

The guards again escalated the level of harassment, forcing the prisoners to do menial, repetitive work such as cleaning toilets with their bare hands.

Catholic Priest

Zimbardo invited a Catholic priest who had been a prison chaplain to evaluate how realistic the prison situation was. Half of the prisoners introduced themselves to him by their number rather than their name.

The chaplain interviewed each prisoner individually. The priest told them the only way they would get out was with the help of a lawyer.

Prisoner #819

Eventually, while talking to the priest, #819 broke down and began to cry hysterically, just like two previously released prisoners had.

The psychologists removed the chain from his foot and the cap from his head, and told him to go and rest in a room adjacent to the prison yard. They told him they would get him some food and then take him to see a doctor.

While this was going on, one of the guards lined up the other prisoners and had them chant aloud:

“Prisoner #819 is a bad prisoner. Because of what Prisoner #819 did, my cell is a mess, Mr. Correctional Officer.”

The psychologists realized #819 could hear the chanting and went back into the room where they found him sobbing uncontrollably. The psychologists tried to get him to agree to leave the experiment, but he said he could not leave because the others had labeled him a bad prisoner.

Back to Reality

At that point, Zimbardo said, “Listen, you are not #819. You are [his name], and my name is Dr. Zimbardo. I am a psychologist, not a prison superintendent, and this is not a real prison. This is just an experiment, and those are students, not prisoners, just like you. Let’s go.”

He stopped crying suddenly, looked up, and replied, “Okay, let’s go,” as if nothing had been wrong.

An End to the Experiment

Zimbardo (1973) had intended the experiment to run for two weeks, but it was terminated on the sixth day due to the emotional breakdowns of the prisoners and the excessive aggression of the guards.

Christina Maslach, a recent Stanford Ph.D. brought in to conduct interviews with the guards and prisoners, strongly objected when she saw the prisoners being abused by the guards.

Filled with outrage, she said, “It’s terrible what you are doing to these boys!” Of the 50 or more outsiders who had seen the prison, she was the only one who ever questioned its morality.

Zimbardo (2008) later noted, “It wasn’t until much later that I realized how far into my prison role I was at that point — that I was thinking like a prison superintendent rather than a research psychologist.”

This led him to prioritize maintaining the experiment’s structure over the well-being and ethics involved, thereby highlighting the blurring of roles and the profound impact of the situation on human behavior.

Here’s a quote from an interview (April 19, 2011) that illustrates how Philip Zimbardo, initially the principal investigator, became deeply immersed in his role as the “Stanford Prison Superintendent”:

“By the third day, when the second prisoner broke down, I had already slipped into or been transformed into the role of “Stanford Prison Superintendent.” And in that role, I was no longer the principal investigator, worried about ethics. When a prisoner broke down, what was my job? It was to replace him with somebody on our standby list. And that’s what I did. There was a weakness in the study in not separating those two roles. I should only have been the principal investigator, in charge of two graduate students and one undergraduate.”

According to Zimbardo and his colleagues, the Stanford Prison Experiment revealed how people will readily conform to the social roles they are expected to play, especially if the roles are as strongly stereotyped as those of the prison guards.

Because the guards were placed in a position of authority, they began to act in ways they would not usually behave in their normal lives.

The “prison” environment was an important factor in creating the guards’ brutal behavior (none of the participants who acted as guards showed sadistic tendencies before the study).

Therefore, the findings support the situational explanation of behavior rather than the dispositional one.

Zimbardo proposed that two processes can explain the prisoners’ “final submission.”

Deindividuation may explain the behavior of the participants, especially the guards. This is a state in which you become so immersed in the norms of the group that you lose your sense of identity and personal responsibility.

The guards may have been so sadistic because they did not feel what happened was down to them personally – it was a group norm. They also may have lost their sense of personal identity because of the uniform they wore.

Learned helplessness could also explain the prisoners’ submission to the guards. The prisoners learned that whatever they did had little effect on what happened to them. In the mock prison, the unpredictable decisions of the guards led the prisoners to give up responding.

After the prison experiment was terminated, Zimbardo interviewed the participants. Here’s an excerpt:

‘Most of the participants said they had felt involved and committed. The research had felt “real” to them. One guard said, “I was surprised at myself. I made them call each other names and clean the toilets out with their bare hands. I practically considered the prisoners cattle and I kept thinking I had to watch out for them in case they tried something.” Another guard said “Acting authoritatively can be fun. Power can be a great pleasure.” And another: “… during the inspection I went to Cell Two to mess up a bed which a prisoner had just made and he grabbed me, screaming that he had just made it and that he was not going to let me mess it up. He grabbed me by the throat and although he was laughing I was pretty scared. I lashed out with my stick and hit him on the chin although not very hard, and when I freed myself I became angry.”’

Most of the guards found it difficult to believe that they had behaved in the brutal ways that they had. Many said they hadn’t known this side of them existed or that they were capable of such things.

The prisoners, too, couldn’t believe that they had responded in the submissive, cowering, dependent way they had. Several claimed to be assertive types normally.

When asked about the guards, they described the usual three stereotypes that can be found in any prison: some guards were good, some were tough but fair, and some were cruel.

A further explanation for the behavior of the participants can be described in terms of reinforcement.  The escalation of aggression and abuse by the guards could be seen as being due to the positive reinforcement they received both from fellow guards and intrinsically in terms of how good it made them feel to have so much power.

Similarly, the prisoners could have learned through negative reinforcement that if they kept their heads down and did as they were told, they could avoid further unpleasant experiences.

Critical Evaluation

Ecological validity

The Stanford Prison Experiment is criticized for lacking ecological validity in its attempt to simulate a real prison environment. Specifically, the “prison” was merely a setup in the basement of Stanford University’s psychology department.

The student “guards” lacked professional training, and the experiment’s duration was much shorter than real prison sentences. Furthermore, the participants, who were college students, didn’t reflect the diverse backgrounds typically found in actual prisons in terms of ethnicity, education, and socioeconomic status.

None had prior prison experience, and they were chosen due to their mental stability and low antisocial tendencies. Additionally, the mock prison lacked spaces for exercise or rehabilitative activities.

Demand characteristics

Demand characteristics could explain the findings of the study. Most of the guards later claimed they were simply acting. Because the guards and prisoners were playing roles, their behavior may not have been influenced by the same factors that affect behavior in real life. This means the study’s findings cannot reasonably be generalized to real-life settings such as prisons; that is, the study has low ecological validity.

One of the biggest criticisms is that strong demand characteristics confounded the study. Banuazizi and Movahedi (1975) found that the majority of respondents, when given a description of the study, were able to guess the hypothesis and predict how participants were expected to behave.

This suggests participants may have simply been playing out expected roles rather than genuinely conforming to their assigned identities.

In addition, revelations by Zimbardo (2007) indicate he actively encouraged the guards to be cruel and oppressive in his orientation instructions prior to the start of the study. For example, telling them “they [the prisoners] will be able to do nothing and say nothing that we don’t permit.”

He also tacitly approved of abusive behaviors as the study progressed. This deliberate cueing of how participants should act, rather than allowing behavior to unfold naturally, indicates the study findings were likely a result of strong demand characteristics rather than insightful revelations about human behavior.

However, there is considerable evidence that the participants did react to the situation as though it was real. For example, 90% of the prisoners’ private conversations, which were monitored by the researchers, were about prison conditions; only 10% concerned life outside the prison.

The guards, too, rarely exchanged personal information during their relaxation breaks – they either talked about ‘problem prisoners,’ other prison topics, or did not talk at all. The guards were always on time and even worked overtime for no extra pay.

When the prisoners were introduced to a priest, they referred to themselves by their prison number, rather than their first name. Some even asked him to get a lawyer to help get them out.

Fourteen years after his experience as prisoner 8612 in the Stanford Prison Experiment, Douglas Korpi, now a prison psychologist, reflected on his time and stated (Musen and Zimbardo 1992):

“The Stanford Prison Experiment was a very benign prison situation and it promotes everything a normal prison promotes — the guard role promotes sadism, the prisoner role promotes confusion and shame”.

Sample bias

The study may also lack population validity as the sample comprised US male students. The study’s findings cannot be applied to female prisons or those from other countries. For example, America is an individualist culture (where people are generally less conforming), and the results may be different in collectivist cultures (such as Asian countries).

Carnahan and McFarland (2007) have questioned whether self-selection may have influenced the results – i.e., did certain personality traits or dispositions lead some individuals to volunteer for a study of “prison life” in the first place?

In their study, Carnahan and McFarland recruited volunteers using two newspaper ads that were identical except that only one mentioned “a psychological study of prison life,” as in the original SPE. All participants completed personality measures assessing aggression, authoritarianism, Machiavellianism, narcissism, social dominance, empathy, and altruism, and answered questions on mental health and criminal history to screen out problems, as in the original study.

Results showed that volunteers for the prison study, compared to the control group, scored significantly higher on aggressiveness, authoritarianism, Machiavellianism, narcissism, and social dominance. They scored significantly lower on empathy and altruism.

A follow-up role-playing study found that self-presentation biases could not explain these differences. Overall, the findings suggest that volunteering for the prison study was influenced by personality traits associated with abusive tendencies.

Zimbardo’s conclusion may be wrong

While implications for the original SPE are speculative, this lends support to a person-situation interactionist perspective, rather than a purely situational account.

It implies that certain individuals are drawn to and selected into situations that fit their personality, and that group composition can shape behavior through mutual reinforcement.

Contributions to psychology

Another strength of the study is that the harmful treatment of participants led to the formal recognition of ethical guidelines by the American Psychological Association. Studies must now undergo an extensive review by an institutional review board (in the US) or an ethics committee (in the UK) before they are implemented.

Most institutions, such as universities, hospitals, and government agencies, require a review of research plans by a panel. These boards review whether the potential benefits of the research are justifiable in light of the possible risk of physical or psychological harm.

These boards may request researchers make changes to the study’s design or procedure, or, in extreme cases, deny approval of the study altogether.

Contribution to prison policy

A strength of the study is that it has altered the way US prisons are run. For example, juveniles accused of federal crimes are no longer housed before trial with adult prisoners (due to the risk of violence against them).

However, in the 25 years since the SPE, U.S. prison policy has transformed in ways counter to SPE insights (Haney & Zimbardo, 1998):

  • Rehabilitation was abandoned in favor of punishment and containment. Prison is now seen as inflicting pain rather than enabling productive re-entry.
  • Sentencing became rigid rather than accounting for inmates’ individual contexts. Mandatory minimums and “three strikes” laws over-incarcerate nonviolent crimes.
  • Prison construction boomed, and populations soared, disproportionately affecting minorities. From 1925 to 1975, incarceration rates held steady at around 100 per 100,000; by 1995, the rate had climbed to over 600 per 100,000.
  • Drug offenses account for an increasing proportion of prisoners. Nonviolent drug offenses make up a large share of the increased incarceration.
  • Psychological perspectives have been ignored in policymaking. Legislators overlooked insights from social psychology on the power of contexts in shaping behavior.
  • Oversight retreated, with courts deferring to prison officials and ending meaningful scrutiny of conditions. Standards like “evolving standards of decency” gave way to “legitimate” pain.
  • Supermax prisons proliferated, isolating prisoners in psychological trauma-inducing conditions.

The authors argue psychologists should reengage to:

  • Limit the use of imprisonment and adopt humane alternatives based on the harmful effects of prison environments
  • Assess prisons’ total environments, not just individual conditions, given situational forces interact
  • Prepare inmates for release by transforming criminogenic post-release contexts
  • Address socioeconomic risk factors, not just incarcerate individuals
  • Develop contextual prediction models vs. focusing only on static traits
  • Scrutinize prison systems independently, not just defer to officials shaped by those environments
  • Generate creative, evidence-based reforms to counter over-punitive policies

Psychology once contributed to a more humane system and can again counter the U.S. “rage to punish” with contextual insights (Haney & Zimbardo, 1998).

Evidence for situational factors

Zimbardo (1995) further demonstrates the power of situations to elicit evil actions from ordinary, educated people who likely would never have done such things otherwise, citing Japan’s Unit 731 as another situation-induced “transformation of human character.”

  • Unit 731 was a covert biological and chemical warfare research unit of the Japanese army during WWII.
  • It was led by General Shiro Ishii and involved thousands of doctors and researchers.
  • Unit 731 set up facilities near Harbin, China to conduct lethal human experimentation on prisoners, including Allied POWs.
  • Experiments involved exposing prisoners to things like plague, anthrax, mustard gas, and bullets to test biological weapons. They infected prisoners with diseases and monitored their deaths.
  • At least 3,000 prisoners died from these brutal experiments. Many were killed and dissected.
  • The doctors in Unit 731 obeyed orders unquestioningly and conducted these experiments in the name of “medical science.”
  • After the war, the vast majority of doctors who participated faced no punishment and went on to have prestigious careers. This was largely covered up by the U.S. in exchange for data.
  • It shows how normal, intelligent professionals can be led by situational forces to systematically dehumanize victims and conduct incredibly cruel and lethal experiments on people.
  • Even healers trained to preserve life used their expertise to destroy lives when the situational forces compelled obedience, nationalism, and wartime enmity.

Evidence for an interactionist approach

The results are also relevant for explaining abuses by American guards at Abu Ghraib prison in Iraq.

An interactionist perspective recognizes that volunteering for roles as prison guards attracts those already prone to abusive tendencies, which are intensified by the prison context.

This counters a solely situationist view of good people succumbing to evil situational forces.

Ethical Issues

The study has received many ethical criticisms, including a lack of fully informed consent: even Zimbardo did not know what would happen in the experiment (it was unpredictable), so participants could not be fully briefed on the risks. The prisoners also did not consent to being “arrested” at home. They were not told partly because final approval from the police was not given until minutes beforehand, and partly because the researchers wanted the arrests to come as a surprise. This, however, breached the contract that all of the participants had signed.

Protection of Participants

Participants playing the role of prisoners were not protected from psychological harm, experiencing incidents of humiliation and distress. For example, one prisoner had to be released after 36 hours because of uncontrollable bursts of screaming, crying, and anger.

Here’s a quote from Philip G. Zimbardo, taken from an interview on the Stanford Prison Experiment’s 40th anniversary (April 19, 2011):

“In the Stanford prison study, people were stressed, day and night, for 5 days, 24 hours a day. There’s no question that it was a high level of stress because five of the boys had emotional breakdowns, the first within 36 hours. Other boys that didn’t have emotional breakdowns were blindly obedient to corrupt authority by the guards and did terrible things to each other. And so it is no question that that was unethical. You can’t do research where you allow people to suffer at that level.”
“After the first one broke down, we didn’t believe it. We thought he was faking. There was actually a rumor he was faking to get out. He was going to bring his friends in to liberate the prison. And/or we believed our screening procedure was inadequate, [we believed] that he had some mental defect that we did not pick up. At that point, by the third day, when the second prisoner broke down, I had already slipped into or been transformed into the role of “Stanford Prison Superintendent.” And in that role, I was no longer the principal investigator, worried about ethics.”

However, in Zimbardo’s defense, the emotional distress experienced by the prisoners could not have been predicted from the outset.

Approval for the study was given by the Office of Naval Research, the Psychology Department, and the University Committee of Human Experimentation.

This Committee also did not anticipate the prisoners’ extreme reactions that were to follow. Alternative methodologies were looked at that would cause less distress to the participants but at the same time give the desired information, but nothing suitable could be found.

Withdrawal 

Although guards were explicitly instructed not to physically harm prisoners at the beginning of the Stanford Prison Experiment, they were allowed to induce feelings of boredom, frustration, arbitrariness, and powerlessness among the inmates.

This created a pervasive atmosphere in which prisoners genuinely believed, and even reinforced among each other, that they couldn’t leave the experiment until their “sentence” was completed, mirroring the inescapability of a real prison.

Even though two participants (8612 and 819) were released early, the impact of the environment was so profound that prisoner 416, reflecting on the experience two months later, described it as a “prison run by psychologists rather than by the state.”

Extensive group and individual debriefing sessions were held, and all participants returned post-experimental questionnaires several weeks, then several months later, and then at yearly intervals. Zimbardo concluded there were no lasting negative effects.

Zimbardo also strongly argues that the benefits gained in our understanding of human behavior, and of how we can improve society, outweigh the distress caused by the study.

However, it has been suggested that the US Navy was not so much interested in making prisons more humane as in using the study to train people in the armed services to cope with the stresses of captivity.

Discussion Questions

  • What are the effects of living in an environment with no clocks, no view of the outside world, and minimal sensory stimulation?
  • Consider the psychological consequences of stripping, delousing, and shaving the heads of prisoners or members of the military. What transformations take place when people go through an experience like this?
  • The prisoners could have left at any time, and yet they didn’t. Why?
  • After the study, how do you think the prisoners and guards felt?
  • If you were the experimenter in charge, would you have done this study? Would you have terminated it earlier? Would you have conducted a follow-up study?

Frequently Asked Questions

What happened to prisoner 8612 after the experiment?

Douglas Korpi, as prisoner 8612, was the first to show signs of severe distress and demanded to be released from the experiment. He was released on the second day, and his reaction to the simulated prison environment highlighted the study’s ethical issues and the potential harm inflicted on participants.

After the experiment, Douglas Korpi graduated from Stanford University and earned a Ph.D. in clinical psychology. He pursued a career as a psychotherapist, helping others with their mental health struggles.

Why did Zimbardo not stop the experiment?

Zimbardo did not initially stop the experiment because he became too immersed in his dual role as the principal investigator and the prison superintendent, causing him to overlook the escalating abuse and distress among participants.

It was only after an external observer, Christina Maslach, raised concerns about the participants’ well-being that Zimbardo terminated the study.

What happened to the guards in the Stanford Prison Experiment?

In the Stanford Prison Experiment, the guards exhibited abusive and authoritarian behavior, using psychological manipulation, humiliation, and control tactics to assert dominance over the prisoners. This ultimately led to the study’s early termination due to ethical concerns.

What did Zimbardo want to find out?

Zimbardo aimed to investigate the impact of situational factors and power dynamics on human behavior, specifically how individuals would conform to the roles of prisoners and guards in a simulated prison environment.

He wanted to explore whether the behavior displayed in prisons was due to the inherent personalities of prisoners and guards or the result of the social structure and environment of the prison itself.

What were the results of the Stanford Prison Experiment?

The results of the Stanford Prison Experiment showed that situational factors and power dynamics played a significant role in shaping participants’ behavior. The guards became abusive and authoritarian, while the prisoners became submissive and emotionally distressed.

The experiment revealed how quickly ordinary individuals could adopt and internalize harmful behaviors due to their assigned roles and the environment.

References

Banuazizi, A., & Movahedi, S. (1975). Interpersonal dynamics in a simulated prison: A methodological analysis. American Psychologist, 30, 152–160.

Carnahan, T., & McFarland, S. (2007). Revisiting the Stanford prison experiment: Could participant self-selection have led to the cruelty? Personality and Social Psychology Bulletin, 33, 603–614.

Drury, S., Hutchens, S. A., Shuttlesworth, D. E., & White, C. L. (2012). Philip G. Zimbardo on his career and the Stanford Prison Experiment’s 40th anniversary. History of Psychology, 15(2), 161.

Griggs, R. A., & Whitehead, G. I., III. (2014). Coverage of the Stanford Prison Experiment in introductory social psychology textbooks. Teaching of Psychology, 41, 318–324.

Haney, C., Banks, W. C., & Zimbardo, P. G. (1973). A study of prisoners and guards in a simulated prison. Naval Research Review, 30, 4–17.

Haney, C., & Zimbardo, P. (1998). The past and future of U.S. prison policy: Twenty-five years after the Stanford Prison Experiment. American Psychologist, 53(7), 709–727.

Musen, K., & Zimbardo, P. (1992). Quiet Rage: The Stanford Prison Experiment [DVD documentary].

Zimbardo, P. G. (Consultant, On-Screen Performer), Goldstein, L. (Producer), & Utley, G. (Correspondent). (1971, November 26). Prisoner 819 did a bad thing: The Stanford Prison Experiment [Television series episode]. In L. Goldstein (Producer), Chronolog. New York, NY: NBC-TV.

Zimbardo, P. G. (1973). On the ethics of intervention in human psychological research: With special reference to the Stanford prison experiment. Cognition, 2(2), 243–256.

Zimbardo, P. G. (1995). The psychology of evil: A situationist perspective on recruiting good people to engage in anti-social acts. Japanese Journal of Social Psychology, 11(2), 125–133.

Zimbardo, P. G. (2007). The Lucifer effect: Understanding how good people turn evil. New York, NY: Random House.

Further Information

  • Reicher, S., & Haslam, S. A. (2006). Rethinking the psychology of tyranny: The BBC prison study. The British Journal of Social Psychology, 45 , 1.
  • Coverage of the Stanford Prison Experiment in introductory psychology textbooks
  • The Stanford Prison Experiment Official Website

While Some Unethical, These 4 Social Experiments Helped Explain Human Behavior

How have we learned about human behavior? Some studies caused a baby to fear animals, and other experiments helped us explore human nature.


From the CIA’s secret mind control program, MK Ultra, to the stuttering “Monster” study, American researchers have a long history of engaging in human experiments. The studies have helped us better understand ourselves and why we do certain things.

These four experiments did just this and helped us better understand human behavior. However, some of them would be considered unethical today due to either lack of informed consent or the mental and/or emotional damage they caused.

1. Cognitive Dissonance Experiment

After proposing the concept of cognitive dissonance, psychologist Leon Festinger created an experiment, sometimes called the “boring experiment,” to test his theory.

Participants were paid either $1 or $20 to engage in mundane tasks, including turning pegs on a board and moving spools on and off a tray. Despite the boring nature of the activities, they were asked to tell the next participant that it was interesting and fun.

The people who were paid $20 felt more justified lying to others because they were better compensated — and they experienced less cognitive dissonance . Participants who were paid $1 felt greater cognitive dissonance due to their inability to rationalize lying.

In an attempt to reconcile their dissonance, they convinced themselves that the tasks were actually enjoyable.

2. The Little Albert Experiment  

In 1920, psychologist John B. Watson and graduate student (and future wife) Rosalie Rayner wanted to see if they could produce a conditioned response in humans using classical conditioning, the way Pavlov did with dogs.

They decided to expose a 9-month-old baby, whom they called Albert, to a white rat. At first, the baby displayed no fear and played with the rat. To startle Albert, Watson and Rayner would then make a loud noise by hitting a steel bar with a hammer. 

Each time they made the loud sound while Albert was playing with the rat, he became frightened, started crying, and crawled away from the rat. He had become classically conditioned to fear the rat because he associated it with something negative. He then developed stimulus generalization, where he feared other furry white objects — including a rabbit, white coat, and a Santa mask. 

3. Stanford Prison Experiment

In 1971, Stanford psychologist Philip Zimbardo designed a study to examine societal roles and situational power — through an experiment that recreated prison conditions. 

Zimbardo created a mock prison in a building on Stanford’s campus. He assigned study participants to be either guards or prisoners. Prisoners were given numbers instead of names, had a chain attached to one leg, and were dressed in smocks and stocking caps.

Those assigned to the role of a guard quickly conformed to their new position of power. They became hostile and aggressive toward the prisoners, subjecting them to psychological and verbal abuse — despite never having previously demonstrated such attitudes or behavior. The experiment was slated to last two weeks but needed to be ended after only six days. 

4. The Facial Expression Experiment

In 1924, psychology graduate student Carney Landis wanted to study how people’s emotions were reflected in their facial expressions, exploring whether certain emotions caused the same facial expressions in everyone.

Landis marked participants’ faces with black lines to study the movement of their facial muscles as they reacted. At first, he had them do innocuous tasks, such as listening to jazz music or smelling ammonia. 

As Landis grew frustrated that their responses weren’t strong enough, he had participants engage in increasingly shocking acts, such as sticking their hands into a bucket with live frogs in it. Eventually, Landis instructed participants to decapitate a live mouse. If they refused, he decapitated the mouse himself to elicit a strong reaction from them.

Read More: 5 Unethical Medical Experiments Brought Out of the Shadows of History

Article Sources

Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used below for this article:

Advance Research Journal of Social Science. Cognitive dissonance: its role in decision making

New Scientist. How a baby was trained to fear

Stanford Prison Experiment. Philip G. Zimbardo

Incarceration. The dirty work of the Stanford Prison Experiment: Re-reading the dramaturgy of coercion

Journal of Experimental Psychology. Studies of emotional reactions: I. A preliminary study of facial expression

The American Journal of Psychology. Carney Landis: 1897–1962

Allison Futterman is a Charlotte, N.C.-based writer whose science, history, and medical/health writing has appeared on a variety of platforms and in regional and national publications. These include Charlotte, People, Our State, and Philanthropy magazines, among others. She has a BA in communications and an MS in criminal justice.


25 April 2018

The ethics of experimenting with human brain tissue

  • Nita A. Farahany,
  • Henry T. Greely,
  • Steven Hyman,
  • Christof Koch,
  • Christine Grady,
  • Sergiu P. Pașca,
  • Nenad Sestan,
  • Paola Arlotta,
  • James L. Bernat,
  • Jonathan Ting,
  • Jeantine E. Lunshof,
  • Eswar P. R. Iyer,
  • Insoo Hyun,
  • Beatrice H. Capestany,
  • George M. Church,
  • Hao Huang &
  • Hongjun Song

Nita A. Farahany is professor of law and philosophy at Duke University, director of the Duke Initiative for Science & Society, Duke University, Durham, North Carolina, USA.


Henry T. Greely is professor of law, director of the Center for Law and the Biosciences, and director of the Stanford Program in Neuroscience and Society at Stanford University, California, USA.

Steven Hyman is director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard University; and Harvard University distinguished service professor in the Department of Stem Cell and Regenerative Biology at Harvard University, Cambridge, Massachusetts, USA.

Christof Koch is the chief scientist and president at the Allen Institute for Brain Science, Seattle, Washington, USA.

Christine Grady is chief of the Department of Bioethics at the National Institutes of Health Clinical Center, Bethesda, Maryland, USA.

Sergiu P. Pașca is assistant professor of psychiatry and behavioural sciences at Stanford University, Palo Alto, California, USA.

Nenad Sestan is professor of neuroscience, of genetics, of psychiatry, and of comparative medicine at the Yale School of Medicine, New Haven, Connecticut, USA.

Paola Arlotta is professor of stem cell and regenerative biology at Harvard University, Cambridge, Massachusetts, USA.

James L. Bernat is professor of neurology and medicine (active emeritus) at the Geisel School of Medicine at Dartmouth in Hanover, New Hampshire, USA.

Jonathan Ting is assistant investigator at the Allen Institute for Brain Science, Seattle, Washington, USA.

Jeantine E. Lunshof is research scientist-ethicist at MIT Media Lab in Cambridge, Massachusetts; ethics consultant to the Department of Genetics at Harvard Medical School, Boston, Massachusetts, USA; assistant professor in the Department of Genetics, University of Groningen, Groningen, the Netherlands.

Eswar P. R. Iyer is a postdoctoral fellow at Harvard Medical School and the Wyss Institute for Biologically Inspired Engineering at Harvard University.

Insoo Hyun is associate professor of bioethics and philosophy at Case Western Reserve University School of Medicine, Cleveland, Ohio, USA.

Beatrice H. Capestany is a postdoctoral fellow at the Science, Law, and Policy Lab at the Duke Initiative for Science & Society, Duke University, Durham, North Carolina, USA.

George M. Church is professor of genetics at Department of Genetics, Harvard Medical School, and Wyss Institute for Biologically Inspired Engineering, Boston, Massachusetts, USA.

Hao Huang is associate professor of radiology at University of Pennsylvania, Philadelphia, USA.

Hongjun Song is Perelman professor of neuroscience at University of Pennsylvania in Philadelphia, USA.

If researchers could create brain tissue in the laboratory that might appear to have conscious experiences or subjective phenomenal states, would that tissue deserve any of the protections routinely given to human or animal research subjects?


Nature 556 , 429-432 (2018)

doi: https://doi.org/10.1038/d41586-018-04813-x



Proc Natl Acad Sci U S A, v.117(48); 2020 Dec 1

Ethics in field experimentation: A call to establish new standards to protect the public from unwanted manipulation and real harms

Rose McDermott

a Political Science, Brown University, Providence, RI, 02912;

Peter K. Hatemi

b Political Science, Pennsylvania State University, University Park, PA, 16802;

c Microbiology and Biochemistry, Pennsylvania State University, University Park, PA, 16802

Author contributions: R.M. and P.K.H. designed research, performed research, and wrote the paper.

Associated Data

There are no data underlying this work.

In 1966, Henry Beecher published his foundational paper “Ethics and Clinical Research,” bringing to light unethical experiments that were routinely being conducted by leading universities and government agencies. A common theme was the lack of voluntary consent. Research regulations surrounding laboratory experiments flourished after his work. More than half a century later, we seek to follow in his footsteps and identify a new domain of risk to the public: certain types of field experiments. The nature of experimental research has changed greatly since the Belmont Report. Due in part to technological advances including social media, experimenters now target and affect whole societies, releasing interventions into a living public, often without sufficient review or controls. A large number of social science field experiments do not reflect compliance with current ethical and legal requirements that govern research with human participants. Real-world interventions are being conducted without consent or notice to the public they affect. Follow-ups and debriefing are routinely not being undertaken with the populations that experimenters injure. Importantly, even when ethical research guidelines are followed, researchers are following principles developed for experiments in controlled settings, with little assessment or protection for the wider societies within which individuals are embedded. We strive to improve the ethics of future work by advocating the creation of new norms, illustrating classes of field experiments where scholars do not appear to have recognized the ways such research circumvents ethical standards by putting people, including those outside the manipulated group, into harm’s way.

There has been a rapid and dangerous decline in adherence to the core foundations of ethical research on human participants when it comes to field experiments in the social, behavioral, and psychological sciences (1–7). For example, just looking at one discipline, a review of all articles published in the preeminent political science journals from 2013 to 2017 found that almost none of the field experiments in that period reflected compliance with the current ethical requirements that govern research with human participants*; it is common knowledge that many field experiments are conducted without the consent, knowledge, or debriefing of participants (7, 8). Critically, even when researchers adhere to ethical guidelines, they are following principles developed for a different era, designed to protect and limit risk to individual participants in controlled laboratory settings. The basic principles of “respect for persons,” “justice,” and “beneficence,”† while clearly applicable to the design of field experiments, were not written to address the influence that large-scale manipulations can have on entire societies. The rise of large-scale real-world interventions raises new ethical dilemmas because experimenters now routinely target outcomes that affect whole societies, and often do so without the public’s consent, knowledge, debriefing, or any means to identify or reverse long-term real-life negative effects. Manipulation effects routinely influence both the target population and the wider public, who are equally likely to be harmed by an intervention, without their awareness. Indeed, as far as we could find, no work in any discipline has even attempted a review of the long-term effects of real-life manipulations from social science field experiments. Efforts to change the outcome of real elections, purposely stoke intergroup resentment and sectarian conflict, retraumatize people in conflict zones, and increase corruption represent only a few examples of recent real-world social science field experiments (12–17).‡

Such experiments have become mainstream, and pose a fundamental challenge to the ethical principles enshrined in the Declaration of Helsinki, the Belmont Report, and other established ethical guidelines (19). Remarkably little attention has been given to the societal harms that result from real-world manipulations. As such, there is an acute need both for enforcement of current ethical norms and for updated ethical standards to protect individuals and populations from societal harms resulting from field interventions. Such an advance offers the additional benefit of working to protect the credibility and value of the scientific enterprise itself. It is also important to recognize the distinction between adhering to universal research standards and institutional ethics approval. While institutional review board (IRB) guidelines may increase ethical awareness, they exist to provide legal protection for institutions. As we illustrate below in a Cornell and Facebook experiment, IRBs may be necessary, but they are not sufficient to ensure the ethical treatment of subjects or the protection of broader communities, because they do not primarily exist for the purpose of protecting participants, nor do they necessarily accomplish this goal (20). Additionally, there is great variation in IRB standards, both within and across institutions and countries. Therefore, we do not advocate for increased responsibility on the part of IRBs. Rather, we focus on universal ethical standards, with a goal of updating those standards to shape appropriate ethical principles for field experiments going forward.

Here we discuss only some of the kinds of risks and contraventions of established ethical guidelines resulting from large-scale real-world experiments. Our examples are not provided to render any judgment on intent. Rather, just the opposite: We assume that none of these cases intended to bypass ethical concerns. Science is an undertaking of learning and trial and error, but, often, as an enterprise, it forgets the lessons of the past. Mistakes, including those retroactively declared as such, are how we learn, and discussing new dilemmas openly and honestly is how science improves. In that spirit, we strongly discourage any attempt to blame, shame, or play “gotcha.” We make these observations not to throw stones from afar, but rather in an attempt to aid from within, raise these concerns, and encourage a new consensus around the protection of populations during field experiments. Indeed, we too have come to learn as we have made our own mistakes; those are, in part, what led us to raise these concerns more broadly. With deep humility and respect for all those seeking to conduct good research, we recognize the need for correction and hope to change minds for the future, not to place blame for past decisions or judge anyone’s intentions. We strive to improve the ethics of future work by illustrating classes of field experiments where the broader academy does not appear to have fully recognized the ways such research circumvents ethical standards by putting people, including those outside the manipulated group, into harm’s way. In this way, we identify new risks that require the creation of new norms explicated below.

New Risks from Large-Scale Social Science Research

In our discussion of field experiments that appear to violate principles of respect for persons, justice, and beneficence, as well as our introduction of novel concerns, we do not provide a systematic review of problematic studies, since no such analysis exists. Rather, we selected classes of experiments that: 1) appeared in high-impact top-tier field journals and interdisciplinary journals such as PNAS, Science, and Nature; 2) have been highly cited; 3) are common; 4) are carried out in conjunction with large state entities, governments, or corporations; 5) affected large populations; or 6) caused real public harms. These are not the only studies that demonstrate the concerns we raise, but instead represent classes of studies that set trends for work in the future or follow a problematic trend now. Indeed, the types of experiments we selected do not constitute outliers, nor are they extreme or rare. However, it is important to keep in mind that the studies we discuss here represent only a handful of examples from hundreds of such studies.

One might be concerned that the classes of experiments we discuss, or the cases where violations of ethical guidelines are apparent, are the result of cherry-picking. The classic example of cherry-picking would be if we were claiming the barrel of cherries were all bad, and then we picked out only the handful of bad cherries to make the case, but this is not what we are doing here. Rather, we are picking out the bad cherries to save the barrel, and we think this is a critical difference. Nevertheless, there are a lot of bad cherries that are easy to find. Discussing them openly allows us to identify the dangers that such systematic ethical disease presents. To be clear, we are describing the kind of problems that have arisen because many experiments in certain classes are increasingly being conducted without adherence to basic research ethics. As a result, new problems have arisen from technological advances not covered by current ethics. Our goal is to facilitate potential solutions going forward.

Social Pressure Manipulations.

One of the most common contraventions of respect for persons and beneficence, including lack of informed consent and debriefing and disregard of the do-no-harm axiom, involves social pressure experiments. In seeking to identify what increases or depresses voter turnout, for example, scores of studies have undertaken large-scale interventions in real elections ( 21 – 25 ). Some explicitly state the intent is to change outcomes, generate feelings of group conflict, or pursue activist and partisan goals. These studies use a variety of tactics, including mailers, phone calls, and door-to-door visits, some from fake candidates. Several studies have targeted very large minority populations in such ventures, large enough to change electoral outcomes, by sending racially charged group-conflict messages and other anxiety-inducing stimuli. For example, one study targeted approximately half of the registered Black voters in a southern state in the United States with a history of racial inequality and sent a quarter of this population a racially charged group-conflict message ( 15 ). This produced a reduction in minority turnout in a real election. Such manipulations are reminiscent of the types of action that led to the installation of voting rights laws requiring Supreme Court supervision of protected classes during the Civil Rights era.

Shifting the actual outcome of an election has real effects on local and national society. This alone should merit discussion on what experimenters can ethically do. However, more critical to our concerns is the public’s welfare. Reducing a minority population’s turnout and representation will have negative consequences for that community for years to come. Tens if not hundreds of thousands of people were manipulated. This outcome violates the principle of beneficence where “persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being.” Two explicit rules serve to underscore beneficent actions: 1) “do no harm”; and 2) “maximize possible benefits and minimize possible harms.” Given historical racial inequality in access to voting, generating feelings of interethnic hostility plausibly adds to the discord, disharmony, and racial tension in a population that has not yet recovered from past (and current) transgressions. Furthermore, even those experiments that claim to increase voter turnout disproportionately advantage those with the means to vote, and thus enhance negative societal effects by reducing the relative turnout for the least advantaged in society, specifically minorities, resulting in further electoral inequality ( 26 ).

None of the scores of studies we found in this class reported obtaining informed consent prior to the manipulation or debriefed the unknowing participants, letting them know that they had been manipulated. By not doing so, the experimenters did not follow the basic principle of respect for persons that requires researchers to: 1) inform participants of the potential risks related to their participation and 2) acquire permission before conducting research on anyone. Informed consent allows participants to avoid potential harms by opting out and is a “moral prerequisite” for any study to take place ( 9 ); it constitutes “the fundamental principle of human-subjects protection” ( 27 ) where “…a researcher is only (ethically and/or legally) justified in using a research subject if the research subject has consented to being so used” ( 28 ). This applies to all experiments, laboratory or field. Debriefing also constitutes an integral component of respect for persons that serves two critical functions: to remediate negative consequences and to revert participants to their prior state, including allowing people to return to how they felt about themselves and others before the study began. This process gives researchers a chance to correct unintended harms that may have accrued through participation and to inform participants about the purpose of the study, thereby removing any confusion the experiment might have caused ( 29 ).

Participants did not have a chance to opt out through informed consent or to return to their prior psychological state subsequent to the intervention through debriefing (i.e., some attempt to demobilize group hostility or remove harmful consequences). The principles of respect for persons and informed consent rest on expectations of individual autonomy. Such self-determination is fundamentally violated when field experiments manipulate people and elections without consent or debriefing. As a result, participants’ access to an important public good (i.e., voting), critical for democratic governance, was influenced without their knowledge or approval. Indeed, if made aware of it, more than half the subjects would likely oppose the research, much less consent to taking part in it ( 8 , 30 ). Nor do such groups receive any benefit from the research. These two factors lie in perfect opposition to respect for persons and beneficence.

Other common manipulations explicitly “threatened” as many as tens of thousands of people with “exposure” to their peers and community if they did not vote ( 31 ), with the specific goal to shame the public or induce anxiety and negative emotions if they did not engage in the behaviors the researchers desired ( 32 , 33 ). Yet we could find no social pressure study in this group that addressed the effects of surveillance manipulation on public health, particularly regarding the effects that social pressure can have on vulnerable individuals. Social threats, such as posting one’s name or telling one’s neighbors about one’s personal life, are likely to produce higher rates of mental health trauma, especially for people with anxiety disorders. According to the National Institute of Mental Health, 18% of the US population suffers from some type of anxiety condition ( 34 ). People with anxiety and others often experience severe stress as a result of believing that some negative trait has been publicly exposed. This means that, for every 1,000 people pressured, the health of roughly 180 of them is likely to have been negatively affected as a direct result of this manipulation. We do not claim that all such individuals will inevitably experience enough anxiety to put them at a health risk as a result of this manipulation ( 35 ). Yet, it remains important to consider the more than minimal risk to vulnerable populations prior to such manipulations; this does not appear to have happened, despite evidence from medical and public health studies indicating that threatening anxious people can precipitate symptoms. While some healthy people can become more resilient following major crises ( 36 ), the opposite tends to be true for highly anxious people.
Such individuals may feel they cannot say no to social pressure manipulations because of fear of social stigma, and, as such, these people are not only denied the option of not participating, but they can also be pressured to act in a manner that the experimenter wants while still being more likely to suffer negative outcomes ( 37 ). Placing high-anxiety individuals under social pressure is equivalent to placing undue influence on at-risk populations, such as prisoners, children, or vulnerable others; even if unknowingly, it takes advantage of them. These experiments run counter to the principles of beneficence and justice that require fair and equal distribution of the risks and benefits of participation, including in the recruitment and selection of participants. Of critical importance, justice forbids exposing one group of people to risks solely for the benefit of another group.

As the number of unknowing participants increases, so too does the magnitude of unintended spillover. People are embedded in social networks and share their experiences, leading to a greater number of individuals affected and posing long-term consequences for the larger society, without any means for prevention or correction. Of all the field experiments of this class we could find, none conducted or reported assessments of informed consent or postexperimental checks on the health or neighbor relations of the intended (and forced) participants or the wider affected communities (e.g., increased rates of suicide, hospitalization or other medical treatments, burden on friends and family). The guiding documents of ethical research tell us the public should not be manipulated without consent, debriefing (respect for persons), and a full understanding of potential health risks consequent to intervention (beneficence and justice), yet this guidance is not reflected in many published studies in the highest-impact journals.

Social Media.

There is no more salient example of the influence field experiments can exert on the wider society than studies using social media platforms ( 6 ). Tens of millions of people have been manipulated in single academic experiments ( 12 , 38 ). We have already learned in a short time the negative effects of such manipulations, including the ability of domestic and foreign powers to weaponize social media and manipulate democratic elections. Basic truths are now questioned, and trust in public institutions is at an all-time low. The ability to engage in microtargeting, and the rapid way in which negative and hostile information, real or fake, is shared on social media, only serve to increase the potential danger of manipulating large groups of people without the ability to manage or understand the widespread effects that occur outside the investigator’s control. The now-infamous experiments by Facebook and its academic colleagues, which manipulated the mood of hundreds of thousands of people by randomly pushing positive or negative posts to their feeds, investigated how human emotional states are transferred to others by contagion. The studies, however, did not consider all of the untold negative events that occurred from this manipulation. How many people were put over the edge, thrown into a bad mood, engaged in domestic violence, caused emotional distress to others, or lost their jobs due to the manipulation? Such follow-up was never undertaken.

A recent emotional-contagion study ( 39 ) conducted on hundreds of thousands of people by researchers at Cornell University simply did not obtain any ethics approval ( 3 ). Cornell’s IRB decided that the study did not need approval because the data had been collected by Facebook. According to the defenders of these studies, users consent to this kind of manipulation when they agree to a company’s terms of service. This is factually untrue. Terms of service for social media platforms do not meet the standards of informed consent for ethical research; rather, they are designed for purposes of civil liability. Others argue that such manipulations reflect nothing more than what people encounter every day ( 40 , 41 ). First, this is a common misinterpretation and misapplication of Common Rule 45 CFR 46. This rule defines minimal risk as “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.” This class of experiments clearly rises above minimal risk, however, and thus requires review. Indeed, a consensus has formed that minimal risk is not defined relative to the population under study ( 42 ). Rather, for example, experimenters cannot put people who are normally at risk for death or corruption in a situation where those might occur. Second, and critically important, even if minimal risk is determined, it does not obviate the ethical requirements for consent. Third, it is questionable whether researchers have a right to influence someone’s mood for their own self-interest. Finally, claims that “things like this happen every day” must be taken to their logical conclusion. Rape happens every day; racism happens every day; sexism happens every day; homophobia happens every day. Frequency does not provide ethical justification.
If we follow the “every day” argument, this means that researchers have a right to conduct studies that launch racist profanity at others, that inspire sexist behavior, that create homophobic fear, that undermine public trust, and that delegitimize science. Terrible things do happen every day and people endure them, but to argue that the public must endure such things in the name of social science, without their consent, especially when such studies have yet to prove any tangible benefit to the manipulated public, is not an ethically defensible position. Clearly, it is beyond the purview of academics to try to regulate social media platforms; however, this does not absolve scholars of the responsibility to establish and police ethical principles for our own work. Indeed, no serious scholar should ever look to Facebook or Twitter for the ethical standard by which to guide their field research. Cambridge Analytica provides all of the evidence we need to demonstrate the folly of such an undertaking.

Resource Allocation.

Scholars increasingly partner with international organizations, governments, and others to examine the effects of various processes on outcomes such as electoral accountability or support for the government. These efforts are almost always proclaimed to be designed for the public good, but they often produce negative side effects. In one study, half of the “subjects”—people who were behind on rent and in danger of eviction—were denied monetary assistance for more than 1 y so researchers could determine who ended up homeless ( 43 ). Perhaps the most poignant example is a class of studies where scholars worked with activists and nongovernmental organizations (NGOs) to empower women in developing countries through microfinance or direct cash infusion. Women benefitted in numerous ways; however, domestic violence against the women also increased substantially, as they were seen to violate prevailing norms of patriarchal rule ( 44 , 45 ). The interventions also upset the matriarchal hierarchy that existed among the women, fracturing the support systems those women would need once the money ran out. These consequences go unaddressed, yet ethically they should be addressed before any intervention takes place, through thorough, context-specific assessments drawn from observational and qualitative research. Insufficient thought and attention to negative downstream consequences appears common in the design of these types of studies, where field experimenters do not engage the population to anticipate what effects their “good deeds” might have.

There are at least four sets of interrelated problems that emerge from these designs. First, when such experiments offer rewards that far exceed average monthly incomes, the design is coercive, since individuals do not have true freedom to refuse such a large influx of cash. This violates the principle of respect for persons. Second, giving life-altering benefits to some people and not to others, no matter how random the assignment, can often result in resentment and anger in the larger community toward those who do receive the benefit, including the stimulation of tribal warfare in developing countries. As any learned scientist knows, relative gains matter, and such effects can and do exacerbate inter- and intragroup conflict. Imposed inequality can and does have negative consequences, particularly when investigators are unaware of the history of tribal rivalries and familial hierarchies that their interventions exacerbate. Third, as some receive benefits and others do not, imbalances and inequity often wreak mayhem on social networks, families, and communities. These latter two problems violate the principles of justice and beneficence. Finally, when researchers partner with governments, NGOs, and other organizations, they are compromised. No matter how well-intentioned, the design and execution of research is influenced by the goals and resources provided by those organizations, which are not bound by the same standards of professional research ethics. Scholars cannot rely on the ethical requirements of such organizations any more than they can on the regulations of social media platforms.

Conflict Generation.

Some studies have stoked sectarian fighting, others have encouraged protest at the risk of jail and other real harms to the public ( 17 ), and still others have increased ideological, ethnic, and racial polarization ( 14 ). Some studies have presented subjects with videos or other media of actual violence being perpetrated against members of their in-group in order to investigate the effect on group identity and behavior toward members of the perpetrating out-group ( 16 ). It was no surprise that exposure to violent repression pushed subjects toward stronger in-group identification and out-group hostility. None of these studies reported any consideration of the downstream consequences for the larger society affected by these studies, or of the health of those in these vulnerable populations exposed to such videos. None of these studies reported a follow-up or a plan to monitor whether the study itself generated prolonged hatred or subsequent violent retaliation prompted by what subjects observed. Such effects are likely and could last for years, especially in the absence of debriefing. Rarely, if ever, do such studies report clinical professionals on staff to address these risks. Such unnecessary and disturbing exposure challenges the principles of respect for persons, justice, and beneficence.

Experiments that manipulate and change larger societies without consent, controls, proper testing, debriefing, and dialogue with the population are unethical regardless of whether the motivation, intent, or result is “good” or “bad.” Studies that seek to justify the means by the ends for the good of society ignore that their good is often very different from the subjects’ definition of good, and one researcher’s good can constitute another person’s notion of evil. This is particularly true for moral, political, religious, cultural, and social beliefs, where ideas on what is right, just, fair, or positive can be highly contentious and dependent on individual and local norms and culture.

Corruption.

Another increasingly common domain of field experiments involves corruption. The argument for such experiments is obvious; it is very difficult to study dishonest behavior openly ( 7 ). However, these studies pose significant risks to individuals who are peripheral to the subjects. This class of experiments is often conducted in underdeveloped countries. For example, one group of experimenters, in attempting to understand and reduce government officials’ demands for bribes, raised salaries in Ghana, believing higher incomes would reduce corruption. The intervention actually increased police demands for bribes and the amounts given by truck drivers to the police ( 46 ). The public, while not the target, was and will be negatively affected for the foreseeable future. Other studies involved creating false businesses and agencies. In a region where government corruption is high, these interactions further reduced public trust in societies where trust in institutions is already low, yet necessary to maintain stable governments and societies. In these and other studies, the principles of respect for persons and beneficence are violated as individual subjects’ and society’s welfare is superseded by investigator interest. Equally important, the effects on society from contagion cannot be controlled. Similar concerns apply to many other types of studies where international organizations and scholars seek to impose their own personal value-laden outcomes, all the while ignoring the negative societal effects on the affected population.

Life-Course Manipulation.

If there is a culmination of all of the preceding classes of field experiments surrounding what happens when experimenters alter the lives of subjects without their knowledge, consent, or debriefing, or without adherence to principles of respect for persons, justice, or beneficence, it is the class of studies that purposely seek to change life path. Some appear innocuous at first, such as researchers and OKCupid using dating sites to create intentional mismatches to see what happens in mating behavior ( 5 ). But imagine finding out years later that your relationship was based on a “lie.” How might that disrupt a life? Other examples, reminiscent of Watson’s “Little Albert,” are far more sinister. The case of the “three identical strangers” (Yale University) separated at least five sets of twins and triplets at birth, purposely placing children into families with different socioeconomic status and other characteristics to see what would happen to their lives. For years, researchers conducted home visits while lying to the families, stating that they were part of routine monitoring after adoption ( 4 ). The families were never told that their child had siblings. The study specifically targeted the vulnerable biological parents who could not take care of their children, children who needed to be adopted, and parents who deeply desired a child. Yale sealed all details until 2065, when the likelihood of all injured parties being dead or unable to recover damages is high, while the probability of finding surviving biological family members is low. None of the “subjects” provided informed consent. This is neither an extreme example nor an outlier. In fact, until very recently, researchers at Yale were still following up on the siblings. 
The fact that such deception is ongoing demonstrates that this is the type of study we invite if we rest our arguments on researchers as activists, putting faith in their own beliefs about what is good for others, as opposed to allowing people to choose for themselves whether they want to be part of an experiment. A natural question is “could this happen again?” Sadly, we believe so. Even if Yale were to change its approach, many institutions, including those in wealthy advanced democracies, either do not require ethics approval for social science research or simply do not have an IRB at the institutional level. More than IRB adherence is needed to protect the public from such experiments in the future.

Only a Small Sample of the Ongoing Harms.

All of the above examples are real studies that have been conducted. These examples illustrate only a tiny portion of the classes of already-realized harms that have resulted, and will continue to result, from large-scale real-world public manipulations without proper adherence to appropriate ethical standards. The lack of adherence to the basic principles of respect for persons, beneficence, and justice is particularly problematic when vulnerable populations, who are at the highest risk for serious psychological, economic, physical, sexual, and social harms, are involved.

New Requirements and a New Standard: Respect for Societies

We join a growing list of scholars across disciplines who argue we must respect the voice of the public who are asking not to be experimented on without consent ( 30 , 47 , 48 ). To promote that end, we strongly encourage greater interdisciplinary teaching of research ethics at the graduate level across the social sciences, including principles applied to both laboratory and field settings. In addition, we suggest that it is time for academic professional associations, journals, and institutions to update our policies to adhere to existing ethical norms and formulate new requirements to address potential harms raised by large-scale field experiments that impact entire populations, especially in light of technological advances. If we are to believe authors’ ostensible claims about the significant nature of their results, then we have all the evidence we need to know that these studies affect individuals and societies in negative ways, without their knowledge, consent, or the possibility for remediation of effects. It is not logically possible, nor ethically defensible, to have it both ways. Manipulations have effects, and the consequences of such widespread effects must be properly considered by the researcher at the design stage and evaluated again at the publication stage. If ethical procedures are not followed, or unnecessary harms occur, then publication should not follow.

Field experiments are a powerful tool, and some argue that they should not be subject to the same minimal ethical standards as other research ( 41 , 49 ). This is akin to arguing that pistols should have a 10-round limit on magazine capacity but assault weapons can have unlimited rounds. Both the realized and potential risks to larger populations from field experiments are far greater than those in social science lab experiments. To argue that, since they pose higher risk, they should be subject to lower ethical thresholds is not reasonable or scientific. There is an overwhelming consensus in the scholarly world that the benefits of research must outweigh the costs.

We further advocate extending ethical guidelines to include an additional standard providing for societies. This proposed fourth basic principle, respect for societies, requires addressing the potential effects manipulations can have on both local and large-scale societal outcomes. This consideration represents more than an aggregation of individual rights. Ethics designed for the protection of individuals are not designed to protect groups or to address the effects of manipulating entire communities and social structures. When manipulations are conducted in a living society, effects are unpredictable and influence more than the target population through contagion. When individuals have not been given the opportunity to consent, or are not in the group under direct manipulation, researchers are still ethically obligated to respect their rights and welfare.

We are not arguing that field experiments should be abandoned; just the opposite. Indeed, we recognize their value, and therefore wish to highlight the need for more responsible and stringent adherence to ethical guidelines designed for their particular effects and challenges. We encourage a more robust debate about how best to accomplish this goal. There is great value in understanding how small-scale processes can affect large-scale outcomes through real-world investigation, and no other method can outperform field experiments for external validity. However, they must be justified first, and at the very least adhere to the same minimal principles required of all other forms of experimental research. From an ethical standpoint (see the Nuremberg Code), an experiment should not be conducted if there are more appropriate ways to explore existing phenomena than to create real-life situations that harm actual populations. Simply put, there is no need to run an experiment on millions of people when a sample size of 1,000 will provide all of the power needed to detect a meaningful effect size.
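The sample-size point above can be checked with a standard back-of-the-envelope power calculation. The sketch below is purely illustrative and not drawn from any of the studies discussed; the effect size, significance level, and power are conventional assumptions (Cohen's d = 0.2, α = 0.05, 80% power), and it uses the usual normal approximation for comparing two group means:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate participants needed per arm for a two-sample comparison
    of means, via the normal approximation n = 2 * ((z_alpha/2 + z_beta) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a small standardized effect (d = 0.2) at 80% power needs only a few
# hundred participants per arm.
print(n_per_group(0.2))  # 393 per arm, i.e., 786 in total
```

Under these assumptions, fewer than 800 total participants suffice to detect even a small effect, which underscores why interventions on millions of people cannot be justified on statistical grounds alone.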

In cases where the larger society might be affected through large-scale intervention and experimentation, additional protections for the wider publics should be included. Manipulating real public outcomes should not occur without broader public discussion, debate, approval, and sanction. Indeed, in all other types of studies or efforts that affect the greater public or living societies, there are strict procedures. For example, the Food and Drug Administration has well-established guidelines for releasing interventions into the public that include at least four phases over the course of years, where small-scale controlled studies are scaled up gradually until the intervention is deemed safe to release into society. Such standards provide a valuable template for experimenters who seek to manipulate a large-scale living society. Just as no ethical researcher would release a new medication without major testing, even for the social good of eliminating horrible diseases or viruses, similarly impactful social manipulations should undergo equivalent scrutiny.

Yet, when it comes to social science field experiments, we have somehow entered into a Wild West where anything goes when it takes place in the public sphere in large populations, while small controlled laboratory experiments must follow established guidelines. Thus, we suggest that, for any field experiment on a real-life society, the relevant publics should be consented and debriefed. Otherwise, scholars will be engaging in public manipulation without public protection of a kind that, if it were conducted by a foreign government, would be considered a violation of international law, if not an act of war. To those who advocate that such consent is not possible, we argue, if one can manipulate millions, then one can consent millions. Even if individual consent is difficult, technology allows for a number of ways to inform the public that a large-scale experiment is about to be released. Local, state, and federal governments do this when making public service announcements, including notifications regarding road closures, risk of fire, and Amber alerts. Radio, Internet, billboards, phone notifications, and television do work. Giving the public the ability to be aware of, and potentially find a way to opt out of, a manipulation and the means to report negative side effects is enshrined in the Nuremberg Code, the Declaration of Helsinki, and the Belmont Report. Simply offering public notice is a low-threshold means for acquiring at least passive consent. In the face of such warning, large-scale public outcry might warn researchers that their design may pose serious risks to the welfare of the wider population and should be abandoned.

Some argue such processes are too onerous, too costly, or too time-consuming ( 1 , 41 ). Manipulating people’s real lives and changing outcomes in a real society, according to the basic standards of human protections, should be onerous and meticulous and take time. It should require substantial public debate and scrutiny. That is the purpose of individual and public protections. If we abandon such principles, how far should academic investigation be able to go? Should scholars be allowed to start a riot to see how violence spreads? This appears to be close to the case in Hong Kong ( 17 ). Or should they be allowed to place transgender people at risk to see how the public engages with them, as was done recently in the United States ( 10 )?

What matters is the standards we adopt, not simply the effects of any given study. Otherwise, we are placing our own welfare over that of the subjects and populations we purport to be investigating and often claim to be helping. If we advocate for unlimited and unlicensed real-world manipulations, we open a door that is not controllable, where there is little ability or avenue for people to recover and return to their original state and no ability to stop unintended spillover effects, which an investigator may not be able to anticipate or recognize in advance.

It is critically important to recognize the inherent conflict of interest in creating a real-world outcome, analyzing the results, benefiting from the findings, and then serving as judge and jury on the social value of such studies. Intentions to manipulate the public for the sake of changing a societal outcome may be the enterprise of private corporations, campaigns, foreign governments, and other entities, but manipulating a person's real life or an entire living society without consent or notification, or proper preliminary testing, ethically cannot, and should not, be the goal of legitimate scholarly research. Scholarly research is intended to understand, not change, public outcomes. Activism is a personal choice, not a scholarly one. One can be both, but any intention to change opinion in a given study must be declared and made part of the approval process for the study, as well as any publication, particularly if funding comes from an interested source. This important distinction is what lends credibility to academic research and heightens legitimacy, and any study that could negatively affect the public's health or incite violence and long-term discord should be regarded with increased scrutiny.

How can we institute these changes? The most proven avenue is through enhanced training and education, journals, external funding, and professional associations, which set the guidelines for each field and offer a potent mechanism to institutionalize norms and provide oversight of ethical issues. Once high-impact journals and funding opportunities require adherence to particular ethical standards, research incentives shift quickly, as has been the case with data transparency and replication. Such policies might include mandated ethics statements as part of the submission process, which is common in many psychology and health-related outlets. Changes in training may also help institutionalize the protections we advocate. Most researchers at US institutions complete only an online training module; this is no substitute for the kind of extensive, discipline-wide, consensual education that can take place through mentorship, apprenticeship, and coursework.

We hope to inspire greater discussion, debate, and the eventual emergence of a consensus around appropriate policy to address ethical concerns for wider public welfare in field experiments. What constitutes a well-designed study and what constitutes an ethical one can coincide or conflict, and serious thought must be given to the relevant trade-offs. Participant and societal protection, however, should never be sacrificed solely to advance individual research interests or professional success. When we began this work and circulated our first paper on this topic in 2013, the response was some curiosity and confusion, but often hostility. Most field experimenters appeared unaware of the ethical issues, and we were even told that field experiments were exempt from consent (we have yet to find this blanket exemption). A few years later, especially in the wake of the upheavals over the Montana, Facebook, and other experiments, a wave of recognition has emerged that serious problems often result from widespread social interventions. The public has made clear that they consider this a problem. High-profile news articles, public debate, and admonishment of experimenters, including by legislators, alongside a social-media firestorm, have provided ample evidence that the public does not want to be experimented upon without their consent. We see this article as one step forward in an attempt to address these concerns and explore ways to improve the ethical consensus surrounding field experiments. We also suggest that all types of research will benefit from more self-conscious ethical review. Ultimately, the welfare of participants and the public depends on knowledgeable, caring, and responsible investigators who place participant well-being and the public welfare ahead of all other aspects of the research enterprise.

Acknowledgments

We thank the American Academy of Arts and Sciences, especially President David Oxtoby, for their sponsorship of a meeting on ethics in field experiments held in Cambridge, MA, in November 2019; we also thank all participants. We thank the committee on revising Human Subjects Guidelines for the American Political Science Association; we are grateful to chair Scott Desposato for additional comments. We thank Margaret Levi and the Stanford Center for Advanced Study in the Behavioral Sciences for sponsoring a meeting on ethics in March 2018. We thank Ann Arvin and other participants at that meeting for helpful comments. We also thank participants at a meeting on ethics and methods at London School of Economics organized by Denisa Kostovicova and Ellie Knott; Dara Kay Cohen offered helpful additional comments after this meeting.

The authors declare no competing interest.

This article is a PNAS Direct Submission.

*R. McDermott, C. Crabtree, P. K. Hatemi, Ethics and Methods, May 4, 2018, London, England.

† The Nuremberg Code of 1947 ( 9 ) and the 1964 Declaration of Helsinki ( 10 ) adopted by the World Medical Association form a set of principles widely regarded as the cornerstone of ethical research on human subjects. They declared that participants must give informed consent, there must be a substantial scientific basis for the study, and experiments should yield findings that cannot be obtained any other way. High-profile cases of questionable research, including the Tuskegee Syphilis Study, Milgram’s Obedience Study, and the Stanford Prison Experiment, led to the National Commission for the Protection of Human Subjects and the Belmont Report ( 11 ), which codified a set of three basic principles to protect human participants: respect for persons, justice, and beneficence. In this article, we focus on the functional ability of these principles, not their underlying foundations. Nevertheless, we believe it important to recognize that these principles are grounded in basic principles of ethics and human rights, with a foundation in moral philosophy going back to Socrates. For instance, respect for persons rests on the principle of autonomy where people should not be treated as a means to a researcher’s end, but as individuals in their own right, and derives justifications from categorical imperative and moral law theories. Similarly, justice is grounded in contractualist perspectives, especially Rawlsian types, but is also informed by feminist care ethics types of approaches that focus attention on the experiences of individuals. Beneficence reflects an integration of virtue ethics and consequentialist perspectives and is designed to make sure that group benefits do not neglect individual welfare.

‡ Purposefully unethical behavior, including acts of libel, hate speech, fraud, or electoral violations [see Bonica, Rodden, and Dropp’s ( 18 ) attempt to influence elections in Montana, for example], while important to reduce, is not the focus of this discussion.


These 1950s experiments showed us the trauma of parent-child separation. Now experts say they’re too unethical to repeat—even on monkeys.

By Eleanor Cummins

Posted on Jun 22, 2018 7:00 PM EDT

John Gluck’s excitement about studying parent-child separation quickly soured. He’d been thrilled to arrive at the University of Wisconsin at Madison in the late 1960s, his spot in the lab of renowned behavioral psychologist Harry Harlow secure. Harlow had cemented his legacy more than a decade earlier when his experiments showed the devastating effects of broken parent-child bonds in rhesus monkeys. As a graduate student researcher, Gluck would use Harlow’s monkey colony to study the impact of such disruption on intellectual ability.

Gluck found academic success, and stayed in touch with Harlow long after graduation. His mentor even sent Gluck monkeys to use in his own laboratory. But in the three years Gluck spent with Harlow—and the subsequent three decades he spent as a leading animal researcher in his own right—his concern for the well-being of his former test subjects overshadowed his enthusiasm for animal research.

Separating parent and child, he’d decided, produced effects too cruel to inflict on monkeys.

Since the 1990s, Gluck’s focus has been on bioethics; he’s written research papers and even a book about the ramifications of conducting research on primates. Along the way, he has argued that continued lab experiments testing the effects of separation on monkeys are unethical. Many of his peers, from biology to psychology, agree. And while the rationale for discontinuing such testing has many factors, one reason stands out. The fundamental questions we had about parent-child separation, Gluck says, were answered long ago.

The first insights into attachment theory began with studious observations on the part of clinicians.

Starting in the 1910s and peaking in the 1930s, doctors and psychologists actively advised parents against hugging, kissing, or cuddling children on the assumption that such fawning attention would condition children to behave in a manner that was weak, codependent, and unbecoming. This theory of “behaviorism” was derived from research like Ivan Pavlov’s classical conditioning work on dogs and the work of Harvard psychologist B.F. Skinner, who believed free will to be an illusion. Applied in the context of the family unit, this research seemed to suggest that forceful detachment on the part of ma and pa was an essential ingredient in creating a strong, independent future adult. Parents were simply there to provide structure and essentials like food.

But after the end of World War II, doctors began to push back. In 1946, Dr. Benjamin Spock (no relation to Star Trek’s Mr. Spock) authored Baby and Child Care, the international bestseller that sold 50 million copies in Spock’s lifetime. The book, which was based on his professional observation of parent-child relationships, advised against the behaviorist theories of the day. Instead, Spock implored parents to see their children as individuals in need of customized care—and plenty of physical affection.

At the same time, the British psychiatrist John Bowlby was commissioned to write the World Health Organization’s Maternal Care and Mental Health report. Bowlby had gained renown before the war for his systematic study of the effects of institutionalization on children, from long-term hospital stays to childhoods confined to orphanages.

Published in 1951, Bowlby’s lengthy two-part document focused on the mental health of homeless children. In it, he brought together anecdotal reports and descriptive statistics to paint a portrait of the disastrous effects of the separation of children from their caretakers and the consequences of “deprivation” on both the body and mind. “Partial deprivation brings in its train acute anxiety, excessive need for love, powerful feelings of revenge, and, arising from these last, guilt and depression,” Bowlby wrote. Like Spock, this research countered behaviorist theories that structure and sustenance were all a child needed. Orphans were certainly fed, but in most cases they lacked love. The consequences, Bowlby argued, were dire—and long-lasting.

The evidence of the near-sanctity of parent-child attachment was growing thanks to the careful observation of experts like Spock and Bowlby. Still, many experts felt one crucial piece of evidence was missing: experimental data. Since the Enlightenment, scientists have worked to refine their methodology in the hopes of producing the most robust observations about the natural world. In the late 1800s, randomized, controlled trials were developed, and in the 20th century they came to be seen as the “gold standard” for research—a conviction that more or less continues to this day.

While Bowlby had clinically derived data, he knew that to advance his ideas in the wider world he would need data from a lab. But by 1947, the scientific establishment required informed consent for research participants (though notable cases like the Tuskegee syphilis study violated such rules into at least the 1970s). As a result, no one would condone forcibly separating parents and children for research purposes. Fortunately, Bowlby’s transatlantic correspondent, Harry Harlow, had another idea.

Over the course of his career, Harlow conducted countless studies of primate behavior and published more than 300 research papers and books. Unsurprisingly, in a 2002 ranking of the impact of 20th-century psychologists, the American Psychological Association named him the 26th most cited researcher of the era, below B.F. Skinner (1) but above Noam Chomsky (38). But the (ethically fraught) experiments that cemented his status in Psychology 101 textbooks for good began in earnest only in the 1950s.

Around the time Bowlby published the WHO report, Harlow began to push the psychological limits of monkeys in myriad ways—all in the name of science. He surgically altered their brains or beamed radiation through their skulls to cause lesions, and then watched the neurological effects, according to a 1997 paper by Gluck that spans history, biography, and ethics. He forced some animals to live in “deep, wedge-shaped, stainless steel chambers… graphically called the ‘pit of despair'” in order to study the effect of such solitary confinement on the mind, Gluck wrote. But Harlow’s most well-known study, begun in the 1950s and carefully documented in pictures and videos made available to the public, centered around milk.

To test the truth of the behaviorists’ claims that things like food mattered more than affection, Harlow set up an experiment that allowed baby monkeys, forcibly separated from their mothers at birth, to choose between two fake surrogates. One, known as the “iron maiden,” was made only of wire but had bottles full of milk protruding from its metal chest. The other was covered in a soft cloth but entirely devoid of food. If the behaviorists were right, babies should choose the surrogate who offered them food over the surrogate who offered them nothing but comfort.

As Spock or Bowlby may have predicted, this was far from the case.

“Results demonstrated that the monkeys overwhelmingly preferred to maintain physical contact with the soft mothers,” Gluck wrote. “It also was shown that the monkeys seemed to derive a form of emotional security by the very presence of the soft surrogate that lasted for years, and they ‘screamed their distress’ in ‘abject terror’ when the surrogate mothers were removed from them.” They visited the iron maiden when they were too hungry to avoid her metallic frame any longer.

As anyone in behavioral psychology will tell you, Harlow’s monkey studies are still considered foundational for the field of parent-child research to this day. But his work is not without controversy. In fact, it never has been. Even when Harlow was conducting his research, some of his peers criticized the experiments, which they considered cruel to the animals and degrading to the scientists who executed them. The chorus of dissenting voices is not new; it’s merely grown.

Animal research today is more carefully regulated by individual institutions, professional organizations like the American Psychological Association, and legislation like the federal Animal Welfare Act. Many activists and scholars argue research on primates should end entirely and that experiments like Harlow’s should never be repeated. “Academics should be on the front lines of condemning such work as well, for they represent a betrayal of the basic notions of dignity and decency we should all be upholding in our research, especially in the case of vulnerable populations in our samples—such as helpless animals or young children,” psychologist Azadeh Aalai wrote in Psychology Today.

Animal studies have not disappeared. Research on attachment in monkeys continues at the University of Wisconsin at Madison. But animal studies have declined. New methods—or, depending on how you look at it, old methods—have filled the void. Natural experiments and epidemiological studies, similar to the kind Bowlby employed, have added new insight into the importance of “tender age” attachment.

Romanian orphanages established after the fall of communism have served as such a study site. The facilities, which have been described as “slaughterhouses of the soul,” have historically had great disparities between the number of children and the number of caregivers (25 or more kids to one adult), meaning few if any children received the physical or emotional care they needed. Many of the children who were raised in these environments have exhibited mental health and behavioral disorders as a result. The deprivation even had physical effects, with neurological research showing a dramatic reduction in the literal size of their brains and low levels of brain activity as measured by electroencephalography (EEG).

Similarly, epidemiological research has tracked the trajectories of children in the foster care system in the United States and parts of Europe to see how they differ, on average, from youths in a more traditional home environment. These studies have shown that the risks of mental disorders, suicidal ideation and attempts, and obesity are elevated among these children. Many of these health outcomes appear to be even worse among children in an institutional setting, like a Romanian orphanage, than among children placed in foster care, which typically offers kids more individualized attention.

Scientists rarely say no to more data. After all, the more observations and perspectives we have, the better we understand a given topic. But alternatives to animal models are under development, and epidemiological methodologies are only growing stronger. As a result, we may be able to set some kinds of data—data collected at the expense of humans or animals—aside.

When it comes to lab experiments on parent-child attachment, we may know everything we need to know—and have for more than 60 years. Gluck believes that testing attachment theory at the expense of primates should have ended with Harry Harlow. And he continues to hope people will come to see the irony inherent in harming animals to prove, scientifically, that human children deserve compassion.

“Whether it is called mother-infant separation, social deprivation, or the more pleasant sounding ‘nursery rearing,'” Gluck wrote in a New York Times op-ed in 2016, “these manipulations cause such drastic damage across many behavioral and physiological systems that the work should not be repeated.”
