The Human Biomedical Research Act, passed in 2015,[1] and subsequent regulations, passed in 2017[2] and 2018[3] after extensive public consultations, attest to the Singapore government’s attention to the ethics of biomedical human subject research. The Act and regulations inform the decisions of a network[4] of biomedical institutional review boards (IRBs) registered with the Ministry of Health. This regulatory activity surrounding the ethics of biomedical human research overshadows the ethical salience of another sphere of human subject research, one that is not biomedical in nature. In this essay, I consider three prominent examples in the non-biomedical sphere to argue for this ethical salience.

An IRB is a committee set up within a research institution to exercise ethical oversight over research produced by members of the institution. The type of research that comes under the scope of IRB oversight is research that involves human subjects and that aims at producing generalizable knowledge. Archival research, for example, does not fall under IRB oversight because it does not involve human subjects, and research aimed only at improving the efficiency of intra-institutional processes does not fall under IRB oversight because it is not aimed at producing generalizable knowledge.

There are two types of research that involve human subjects and that are aimed at producing generalizable knowledge: biomedical research and social, behavioural and educational research (SBER). The first IRBs came into existence in the United States in reaction to heinous breaches of ethics in biomedical research. These ranged from the brutal treatment of prisoners of war in Nazi human experimentation to the generation-long mistreatment of poor farmers in the Tuskegee Syphilis Study. The former thrust informed consent into official consideration in the Nuremberg Code. The latter motivated the venerable Belmont Report, which laid out the ethical principles of beneficence, justice and respect for persons. This in turn informed the United States’ Code of Federal Regulations for research on human subjects.

The SBER cases are obscured by the attention that the biomedical cases received and by the policies safeguarding the ethics of human subject research that arose from them. Considering that death and permanent physical disability are among the consequences of unethical biomedical research, one may think that the relatively milder risks of SBER put such research outside the scope of ethical safeguards, and hence that IRBs need not bother with SBER. This is not the case. In what follows, I present three famous, ethically controversial cases of SBER and summarise their ethically relevant aspects.

In the 1970s, Laud Humphreys, a sociology doctoral student, conducted research on homosexual activity in public restrooms (known as ‘tearooms’), while posing as a lookout or a ‘watchqueen’. Immediately after observing the acts of fellatio, he would fill out a prepared template detailing the time, place, description of the participant and his car plate number, and would take notes with a voice recorder in his own car. A year later, with a different appearance and under the guise of a public health survey, he would call on the participants that he identified by their car plate numbers and interview them.[5] The resulting work was ground-breaking as it revealed that many men who engage in homosexual acts in public live everyday lives in exemplary heterosexual marriages. This revelation resulted in a reduction of arrests by the police because it convinced them that the transactions were ‘victimless’.[6]

The case is ethically controversial because homosexuality was highly stigmatised in American society in the 1970s. The participants in the research, holding regular jobs and in stable marriages, had much to lose were they to be identified. They had not consented to having their sexual behaviour studied, much less to having personally identifiable information like their car plate numbers recorded, or to the researcher re-contacting them a year later for interviews. The ethical principle of respect for persons may have been violated because consent was not sought; many believed the right to privacy to have been violated.[7] Respect for persons may also have been violated because participants were deceived twice: first into believing that the researcher was a watchqueen, and second into believing that the researcher was conducting an interview about public health. We have some idea of the extent to which the identifying information was secured.[8] But even if the information had been secured, a participant could have suffered mental distress knowing that “someone knows”, violating the ethical injunction to do no harm.[9]

In 1971, psychologist Philip Zimbardo paid research subjects to participate in an experiment in which they would be randomly assigned roles as prison guards or prisoners and confined in a ‘prison’ –– the basement of the Stanford University psychology department building –– for two weeks.[10] The aim of the study was to investigate the hypothesis that participants assigned roles as ‘guards’ or as ‘prisoners’ would exhibit different behaviours.[11] The participants enacted their roles so deeply that the ‘guards’ ill-treated the ‘prisoners’, subjecting them to insults and punishments; some ‘prisoners’ suffered nervous breakdowns, and the ‘prisoners’ rebelled.[12] The study was called off after six days.[13]

The study, which became known as ‘The Stanford Prison Experiment’, is ethically controversial primarily because participants were not allowed to withdraw from the study, despite having been informed by Zimbardo that they could do so. This violated not only the principle of respect for persons but also the injunction to do no harm. The ‘prisoners’ suffered psychological and physical duress at the hands of the ‘guards’. These, if known as possibilities, should have been communicated as risks when seeking participants’ consent. However, Zimbardo himself was taken by surprise by the turn of events.[14] There should have been a procedure for reporting and responding to unexpected adverse events such as these to mitigate harm, and a debrief of the participants at the end of the study. Such reporting policies did not exist at the time, and Zimbardo debriefed participants only a year after the study was completed.

Starting in 1961, Yale psychologist Stanley Milgram devised a series of experiments aimed at studying subjects’ obedience to authority figures. Participants were told that they had been recruited for a study on the effect of punishment on memorisation in learning. They would draw lots to be assigned the role of teacher or learner; unbeknownst to them, both lots said ‘teacher’. An actor assumed the role of learner and was strapped into a chair fitted with electrodes. He would then be made to memorise word lists, and the ‘teacher’ would be ordered by an authority figure (the researcher) to administer shocks of increasing voltage each time the ‘learner’ made a mistake. As the delivered shocks increased in voltage, the ‘teacher’ would hear vocalisations of increasing distress from the ‘learner’ until, at a very high voltage, there was silence. Many participants complied with the orders of the authority figure to deliver shocks of increasing voltage even though they were themselves very distressed.[15] In Milgram’s words:

In a large number of cases the degree of tension reached extremes that are rarely seen in sociopsychological laboratory studies. Subjects were observed to sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh.[16]

The study revealed that 65% of participants complied with the authority figure’s orders up to the highest shock level of 450 volts, contradicting predictions Milgram had gathered before the experiments that very few participants would do so.[17] Participants who showed signs of wanting to withdraw were encouraged to continue; indeed, this was part of the experimental design, because the study sought to ascertain at what point participants would withdraw rather than continue torturing the ‘learner’.[18] Participants were debriefed immediately after the studies and met the ‘victim’ to see that he had not been harmed; such measures were taken so that participants left the study “in a state of well being”.[19] There were no reported cases of sustained emotional trauma, and many participants were appreciative of the chance to learn about themselves.[20]

While the Milgram Experiment might trigger ethical qualms because participants could be unusually distressed, one is hard put to identify the locus of the qualm. One might argue that the experiment was unethical because the participants were deceived about the study in which they were about to participate. However, many psychological experiments must proceed by deception so that participants’ knowledge does not bias the results, and many significant contributions to generalizable knowledge have been generated through studies that involve deception. It is now standardly held that debriefing participants immediately after the study, explaining the aim of the deception, restores respect for persons. In Milgram’s case, the debrief included showing the participants that the ‘learner’ was not harmed. For studies like Milgram’s, the emotional distress of participants may not be avoidable if the generalizable knowledge is to be acquired.

Nowadays, whether studies like Milgram’s, and other less extreme ones, should be allowed is decided by the deliberations of IRBs. As these three examples show, ethical considerations can arise from non-biomedical research that aims to produce generalizable knowledge and involves human subjects. A researcher’s drive to advance a frontier of knowledge can infringe on the rights of research participants. When should ethical considerations impede the advancement of knowledge? This balance is at the heart of every IRB deliberation of a research protocol, for the role of IRBs is both to protect and to promote: in the first instance to ensure that reasonably robust safeguards are in place to protect research participants from physical or emotional harm, and then to promote the advancement of knowledge.

[1] Attorney-General’s Chambers, Human Biomedical Research Act 2015, accessed 13 February 2019.

[2] Attorney-General’s Chambers, Human Biomedical Research Regulations 2017, accessed 13 February 2019, and Attorney-General’s Chambers, Human Biomedical Research (Restricted Research) Regulations 2017, accessed 13 February 2019.

[3] Attorney-General’s Chambers, Human Biomedical Research (Exemption) Regulations 2018, accessed 13 February 2019.

[4] Ministry of Health, TIARAS, accessed 13 February 2019.

[5] Warwick, D. P., “Tearoom Trade: Means & Ends in Social Research,” The Hastings Center Studies, Vol. 1, No. 1 (1973), p. 28.

[6] Sieber, J., “Laud Humphreys and the Tearoom Sex Study,” The Kennedy Institute (1977), accessed 4 February 2019.

[7] Warwick, op. cit., p. 31.

[8] Warwick, op. cit., p. 34: “During the process of data-collection and analysis the master list of names was stored in a safe-deposit box, and then was later destroyed.”

[9] Ibid.

[10] McLeod, S. A., Zimbardo – Stanford Prison Experiment (September 16 2018), accessed 4 February 2019.

[11] Haney, C., Banks, C. and Zimbardo, P., “A Study of Prisoners and Guards in a Simulated Prison,” Naval Research Reviews (Washington D. C., Office of Naval Research, 1973), p. 4.

[12] McLeod, op. cit.

[13] Wikipedia Contributors, “Stanford Prison Experiment,” Wikipedia, The Free Encyclopedia, accessed 4 February 2019.

[14] McLeod, op. cit.

[15] Beauchamp, T. L. and Faden, R. R., A History and Theory of Informed Consent (New York: Oxford University Press, 1986), pp. 174-175.

[16] Milgram, S., “Behavioral Study of Obedience,” Journal of Abnormal and Social Psychology, vol. 67, no. 4 (1963), p. 375.

[17] Ibid., pp. 176 and 175.

[18] Ibid., p. 374.

[19] Ibid.

[20] Milgram, S., Obedience to Authority: An Experimental View (New York: Harper & Row, 1975), p. 195.