Of Babies and Bathwaters: A Response to Brannigan and Perry

By Nestar Russell and Robert Gregory

… in order to demonstrate that subjects may behave like so many Eichmanns the experimenter had to act the part, to some extent, of a Himmler.

Dannie Abse [1]

Introduction

In its Autumn 2015 edition, State Crime Journal (SCJ) published our article titled ‘The Milgram-Holocaust Linkage: Challenging the Present Consensus’ (Russell & Gregory, 2015). In it we attempted to show how aspects of Stanley Milgram’s Obedience to Authority (OTA) experiments can be generalised to better understand perpetrator behaviour during the Holocaust. Subsequently, the Autumn 2016 edition of the SCJ published a critique of our article by Brannigan and Perry (2016), ‘Milgram, Genocide and Bureaucracy: A Post-Weberian Perspective’. Their critique comprises two main sections. First, they claim that we rely on an outdated use of Weber’s classical model of bureaucracy to bolster our thesis. Secondly, they claim that because Milgram’s OTA experiments were methodologically flawed, it is misleading to generalise from them to the Holocaust. We address each part of Brannigan and Perry’s critique in turn.

  1. The Reliance on a Weberian Milgram-Holocaust Linkage 

Brannigan and Perry argue that what they call a ‘post-Weberian’ interpretation of modern bureaucracy shows that the Nazi state was ‘polycratic’, by no means a bureaucratic monolith, and in its practices departed greatly from Weber’s original conception. We would make three main points in reply. First, we focused in particular on the importance of a bureaucratic division of labour, which was a key element of Weber’s model of rationally organised hierarchy. Secondly, Weber’s model was an ‘ideal type’: a pure form of bureaucracy not to be found in practice, a heuristic concept only. It was not rigorously formulated by Weber and is therefore not amenable to a post-bureaucratic (or ‘post-Weberian’) comparison (Höpfl, 2006). In general, large, complex organisations can be loosely assessed as more or less bureaucratic when compared against the various elements that comprise Weber’s model. We did not deem it necessary to labour this well-recognised point in our article, nor did we have space to bolster our argument with detailed and specific examples of Nazi bureaucratic behaviour. Instead, our paper focused mostly on our interpretation of the OTA experiments, illustrating their relevance to a better, though by no means sufficient, understanding of the Holocaust. Thirdly, we at no stage suggested that Nazi bureaucrats were simply unthinking automatons locked within what Brannigan and Perry call ‘Weber’s iron cage of rationality’ (p. 296), such that, in their interpretation of our argument, ‘Experiment = Iron Cage = Holocaust’ (p. 296). Apart from the fact that they put their own convenient gloss on what Weber intended by his sweeping metaphor, as contestably translated into English by Talcott Parsons (Gregory, 2007; Ghosh, 2014; Kent, 1983), this would have been a misleading caricature, one completely at odds with our argument that many of Milgram’s compliant subjects struggled with a painful moral dilemma rather than displaying mindless obedience.

One of us has, however, addressed this point in another publication, of which both Brannigan and Perry are aware, since they cite it in their critique. Russell (2009: 199-261) delved in some detail into the Milgram-Holocaust (M-H) linkage discussed in our SCJ article. He included the following quotation from Browning (1978: 2, cited in Russell, 2009: 197), which ironically sounds much like the version of the Nazi bureaucracy that Brannigan and Perry claim we overlook:

The result of this management style was a fiercely competitive bureaucratic environment: ‘composed of factions centered around the Nazi chieftains, who were in perpetual competition to outperform one another. Like a feudal monarch, Hitler stood above his squabbling vassals. He allotted “fiefs” to build up the domains of his competing vassals as they demonstrated their ability to accomplish the tasks most appreciated by the Führer’.

In short, Brannigan and Perry’s ‘post-Weberian’ critique is a red herring. Or do they wish to argue that the Nazi state was not profoundly bureaucratic in its basic structures and operations? (They might consider, for example, the Nazis’ commitment to meticulous record-keeping or Bauer’s (2001: 102) conclusion that during the Holocaust, ‘ordinary people were guided by the bureaucratic machine.’)

They are, however, correct to argue that Milgram’s experiments were not standardised, because Williams (the experimenter) frequently ignored his boss’s (Milgram’s) hierarchical instructions to use a set number of prods when encountering subjects’ resistance, instead taking it upon himself to invent his own, sometimes even more effective prods (p. 292). This innovative behaviour by a lower-level operative like Williams within a microcosmic and inherently bureaucratic setting, however, is exactly the kind of M-H linkage that bolsters our original claim. More specifically, as Russell (2009: 181-183) argued, Williams seems to have been influenced by what Friedrich terms the ‘rule of anticipated reactions’ (Friedrich, 1946: 589) which, according to Brief, Buttram, Elliott, Reizenstein, and McCline (1995: 180), implies that subordinates are expected to ask themselves, ‘How would my superior wish me to behave under these circumstances?’ The implementation of authority does not necessarily require that a command be uttered; rather, the ‘order’ may be inferred by the subordinate.

Williams chose to use a degree of discretionary authority in anticipation of what he suspected his boss desired: the maximisation of completion rates across all experimental variations (Darley, 1995: 130-131). The exercise of discretionary authority is often an essential means of fulfilling an organisation’s purposes. Williams’ discretionary behaviour resembled perpetrator behaviour during the Holocaust (see, for example, Browning, 1995: 28-56). The combination of Milgram’s formal top-down authority and Williams’ informal bottom-up discretion has much in common, for example, with Browning’s chapter title, ‘German Killers: Orders from Above, Initiative from Below’ (2000: 116-142). We return to this part of Brannigan and Perry’s critique in the next section.

Brannigan and Perry show that Williams was tenacious in preventing some subjects from escaping, as if this further demonstrates the shortcomings of the classical Weberian bureaucratic model and undermines our argument about the M-H linkage. Apart from the fact that Weber’s classical model has nothing to say about discretionary authority, and we at no point argue that it does, we again find this claim of theirs to be supportive rather than dismissive of our argument. Williams’ determination to fulfil his superior’s main aim, in the knowledge that he would be rewarded generously for doing so, was buttressed by what we call ‘responsibility ambiguity’ and by his belief that he could probably act with impunity. Had any subjects been harmed by his aggressive and sometimes unrelenting prods, it would not have been in Williams’ best interests to mention his creative yet stress-inducing innovations. Instead, he could have claimed—as many a Holocaust perpetrator did after the war—that he was ‘just following [in his case, Milgram’s] orders.’

Brannigan and Perry also question another of our key arguments, asking: ‘What is the evidence that “both Yale and the NSF [National Science Foundation] signed off in support of the project” (Russell and Gregory 2015: 137)?’ (p. 295). They present evidence that the NSF refused Milgram’s requests for further funding (evidence which, ironically, implies that the NSF had initially financed, and therefore supported, the project). And they rightly show that Yale University became so concerned about what Milgram had done that it engaged a psychiatrist to assess many of his subjects for psychological and emotional damage.

However, both the NSF and Yale University expressed these concerns and took these actions after Milgram had collected his data, and after some of his critics had begun to question the ethics of the project. There is ample evidence to show that, before critics raised such questions, both the NSF [2] and Yale University [3] had been strongly supportive of Milgram’s experiments.

  2. Milgram’s OTA Experiments are Methodologically Flawed 

According to Brannigan and Perry, because Milgram’s OTA experiments are methodologically flawed, any attempt to infer from their results how humans behave in the world beyond the laboratory—like the behaviour displayed during the Holocaust—is futile. The experiments, in their view, have little internal validity and no external validity. In particular, as they repeatedly claim in their critique of our article, they do not believe that Milgram successfully deceived most of his subjects into believing that the learner was receiving dangerous electric shocks. They provide numerous examples of subjects who did not believe the learner was being shocked and who, being convinced of this, experienced no stress while inflicting every ‘shock’. Again, we did not have space in our article to examine this important methodological issue, but one of us had earlier discussed the matter in some detail (see Russell, 2014). As Brannigan and Perry do not seem to be aware of Russell’s (2014) argument, we briefly reiterate it here.

Orne and Holland (1968) provide a variety of reasons why Milgram’s subjects were unlikely to have been fooled by his experiments. Milgram’s (1972: 151-153) rebuttal addressed most of Orne and Holland’s methodological criticisms. One, however, lingered: the so-called issue of trust—the suggestion that subjects would have known all along that the experimenter or Yale University would not have allowed an innocent person to be seriously harmed, and that by demanding the subject continue to inflict more severe shocks, the experimenter was implying that it was safe to do so (Harré, 1979: 105). As one subject who completed the experiment later said, ‘The way I figured it, you’re not going to cause yourselves trouble by actually giving serious physical damage to a body’ (Stanley Milgram Papers [hereafter SMP], Box 153, Audiotape #2430).

For his part, Mixon’s (1976, 1989) research bolstered the validity of Orne and Holland’s issue of trust. He argued that observers find the OTA studies convincing because of the subjects’ outpouring of emotional tension (1976: 93). The subjects’ palpable displays of stress appear to illustrate that they believed the shocks were really harmful. But Mixon’s point of difference is that Milgram’s subjects were stressed not because they confronted an intense moral dilemma over whether to stop or continue inflicting shocks (as Milgram believed, and as we do), but because they were exposed to an ambiguous situation in which the information coming from the experimenter and the learner was starkly contradictory. The shocks were apparently harmful, but not dangerous; the experimenter was calm, but the learner was screaming in agony. As Mixon observed, ‘No wonder many subjects showed such stress. What to believe? The right thing to do depends on which actor is believed’ (1989: 33). According to Mixon (1989: 35), because ‘increasingly large chunks of the social and physical world that we live in can be understood only by experts’, most subjects resolved the stressfully ambiguous situation by trusting the authority figure’s word that the learner would not be hurt. If correct, Mixon’s critique suggests that Milgram failed to deceive most subjects into believing they were hurting the learner, even though they showed great stress: it was the purposefully ambiguous situation that caused Milgram’s high completion rates.

Indeed, during his pilot studies Milgram purposefully injected ambiguity into his basic procedure with the intention of maximising obedience. He changed the label of the last button on his shock machine from ‘LETHAL’ to ‘XXX’, and he replaced the translucent screen that separated subject and learner in the first pilot study with a full partition. Each change was intended to create a more ambiguous, and therefore more stressful and potentially more obedient, situation (Russell, 2011: 148-151).

However, Mixon’s methodological critique is not entirely convincing. While it may help to shed light on why some subjects completed the experiment, it cannot explain why others decided to stop, especially those in Milgram’s ‘Relationship Condition’. In this variation, subjects were required to bring a friend, with one becoming the teacher and the other the learner. The learners were covertly informed that the experiment was actually exploring whether their friend would obey an experimenter’s orders to hurt them, and were quickly trained in how to react to the impending ‘shocks’. This condition, which Milgram never published for fear of stimulating an ethical firestorm (Perry, 2012: 201-202), included just as much ambiguity as the New Baseline condition, yet generated a completion rate 50 percentage points lower (15 versus 65 percent, respectively). So despite the Relationship Condition’s inherent ambiguity, 85 percent of subjects refused to place their trust in the experimenter’s ‘expert’ status. Most subjects in this variation did not react with confusion; their actions and words indicated an unambiguous certitude. One subject was certain his friend was not being shocked—and was therefore confident that the experiment was a ruse—but he still refused to trust the experimenter:

Teacher: “I don’t believe this! I mean, go ahead.”

Experimenter: “You don’t believe what?” […]

Teacher: “I don’t believe you were giving him the shock.”

Experimenter: “Then why, why won’t you continue?”

Teacher: “Well I, I just don’t want to take a chance, I mean I, I.”

Experimenter: “Well if you don’t believe that he’s getting the shocks, why don’t you just continue with the test and we’ll finish it?”

Teacher: “Well I, I can’t, because I can’t take that chance” (SMP, Box 153, Audiotape #2439).

Although this subject did not believe his friend was being shocked, the fact is he could not be certain. The subject’s uncertainty dictated that he could not afford to ‘take that chance’ and trust the experimenter, because there was still a possibility his hunch might be wrong. And this subject was obviously well aware of the consequences that such a mistake would have for his friend. This kind of response was not limited to the Relationship Condition. As another suspicious and uncooperative subject in one of the non-Relationship conditions said, ‘When I decided that I wouldn’t go along with any more shocks, my feeling was “plant or not…I was not going to take a chance that our learner would get hurt”’ (SMP, Box 44, Divider (no label), #1106). Another subject noted that although he ‘wasn’t sure he [the learner] was getting the shocks’, when ‘he started to complain vigorously…I refused to go on’ (SMP, Box 44, Divider “8”, #1818). These and similar responses suggest that, as far as these subjects knew, the experimenter could have been, as Perry (2012: 135) herself has pointed out, a rogue ‘mad scientist’, someone not to be trusted. Mixon’s and Orne and Holland’s suspicions that trust caused the high completion rates become less credible when considered alongside examples of how some disobedient subjects resolved the ambiguous situation they confronted.

Nevertheless, as Mixon correctly argued, Milgram’s inherently ambiguous baseline procedure ensured that subjects could not be certain they were inflicting real shocks on the learner. This ambiguity was accentuated by subjects’ inability to see the learner because of the partition separating them. The ambiguity inherent in the procedure weakens the methodological strength of the OTA studies because certainty regarding the infliction of harm would likely have caused subjects to disobey. But it can also be argued that the deliberate creation of ambiguity was a necessary ingredient in the construction of the moral dilemma—either to stop or to continue inflicting shocks. If a subject suspected that the learner was not being shocked and, for whatever reason, continued doing as they were told, they would be taking a potentially devastating risk. The inherent ambiguity of the procedure therefore created the possibility of the subject being wrong. This risk, we would argue, is actually an essential methodological ingredient in the creation of the moral dilemma the studies tested. Would the subject continue to place the learner’s welfare at risk because of their suspicion that he was not being harmed? Or would they choose the safer option, eliminating any possibility of being wrong, by refusing to inflict further ‘shocks’?

The ambiguity and uncertainty inherent in Milgram’s basic experimental procedure were not a methodological weakness. On the contrary, they were a necessary component in the construction of the moral dilemma that subjects were forced to resolve, and thus a crucial component of his methodology (Russell, 2014). If all subjects had been certain that the learner was not being shocked, there would have been no chance of being wrong. And if there was no possibility of being wrong, then subjects would not have had to resolve a moral dilemma, which is what the experiment ultimately tested.

For his part, Mixon might dispute that the subjects’ dilemma had a moral dimension. As OTA scholar Jerry Burger has argued:

When you’re in that situation, wondering, should I continue or should I not, there are reasons to do both. What you do have is an expert in the room who knows all about this study, and presumably has been through this many times before with many participants, and he’s telling you, [t]here’s nothing wrong. The reasonable, rational thing to do is to listen to the guy who’s the expert when you’re not sure what to do (Quoted in Perry, 2012: 359).

Yet, according to our interpretation, doing as one was told was neither a reasonable nor a rational response. It was far better to err on the side of caution (Coutts, 1977: 520, as cited in Darley, 1995: 133). Doing so would eliminate the risk associated with being wrong and secure the wellbeing of a fellow human being. For these reasons, the subjects’ conundrum constituted a moral dilemma (see Russell & Gregory, 2011). When viewed from this perspective, the 35 percent of subjects who stopped the New Baseline experiment—whether fully deceived or suspicious—were unwilling to place the learner’s wellbeing at risk. Conversely, the 65 percent of subjects who completed the experiment—whether fully deceived or suspicious—were willing to place the learner’s wellbeing at risk (Russell, 2014).

This counter-argument is at the heart of our response to Brannigan and Perry’s critique. On the one hand, they argue that many subjects were not fooled by Milgram’s fabrications, and that this is why these unconcerned subjects inflicted every ‘shock’. On the other hand, they are strongly critical of Milgram for exposing many of his subjects to a believable, highly stressful, and unethical experiment—Milgram, they argue, mistreated many subjects by getting them to believe they were harming an innocent person who suffered from a heart condition.

But as this section illustrates, Brannigan and Perry’s methodological argument, namely that many subjects suspected the experiments were fake while many others were totally fooled, is a red herring. Although the experiments were indeed highly unethical, Milgram’s methodology enabled him to demonstrate that when subjects were faced with the choice between inflicting every ‘shock’ or withdrawing, most chose the former course of action.

Finally, we return to the point discussed earlier regarding Williams’ use of his own discretionary prods. We fully agree with Brannigan and Perry that this compromised the requirement for standardisation of the experimental procedures. Perry is on strong ground when she argues that this behaviour by Williams was a ‘case of moving the goalposts’ (2012: 134; see also Gibson, 2013a, 2013b). Nevertheless, unless it can be shown—which seems highly unlikely—that Williams’ actions in this regard were reproduced in the many replications of Milgram’s experiments that generated similar results, this methodological criticism loses its force.

The essential point is that independent replications with slight variations have largely shown that when people are faced with the moral dilemma of whether to hurt or help an innocent person during a Milgram-type ‘obedience’ experiment, most choose what independent observers identify as the morally dubious option. In support of this point, consider the following table produced by Blass (2012: 200):

Table 1: A cross-cultural comparison of obedience rates in replications of Milgram’s standard conditions

U.S. studies (obedience rate in %):
Milgram (1974), average of Exps. 1, 2, 3, 5, 6, 8, 10: 56.43
Holland (1967): 75
Rosenhan (1969): 85
Podd (1970): 31
Ring et al. (1970): 91
Bock (1972): 40
Powers and Geen (1972): 83
Rogers (1973): 37
Shalala (1974): 30
Costanzo (1976): 81

Foreign studies, by country (obedience rate in %):
Ancona and Pareyson (1968), Italy: 85
Edwards et al. (1969), South Africa: 87.5
Mantell (1971), Germany: 85
Kilham and Mann (1974), Australia: 28
Shanab and Yahya (1977), Jordan: 73
Shanab and Yahya (1978), Jordan: 62.5
Miranda et al. (1981), Spain: 50
Gupta (1983), average of 1 Remote and 3 Voice-Feedback conditions, India: 42.5
Schurz (1985), Austria: 80

U.S. mean obedience rate = 60.94%; Foreign mean obedience rate = 65.94%.[4]
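For transparency, these reported means are consistent with simple unweighted averages of the rates listed in Table 1; the following arithmetic check is our own and is not part of Blass’s table:

\[
\bar{x}_{\text{U.S.}} = \frac{56.43 + 75 + 85 + 31 + 91 + 40 + 83 + 37 + 30 + 81}{10} = \frac{609.43}{10} \approx 60.94
\]

\[
\bar{x}_{\text{Foreign}} = \frac{85 + 87.5 + 85 + 28 + 73 + 62.5 + 50 + 42.5 + 80}{9} = \frac{593.5}{9} \approx 65.94
\]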

It would seem to us that Milgram’s baffling yet replicable finding continues to render the OTA studies both fascinating and disturbing.

As Miller, Collins, and Brief (1995: 12) state, ‘a range of diverse methodologies that triangulate on the same conceptual issues is a vital criterion of scientific inquiry (e.g., Brewer & Collins, 1981; Campbell & Fiske, 1959)’. In the case of the OTA studies, this methodological triangulation continues to reinforce the internal validity of Milgram’s research. It is largely the replicability of Milgram’s studies, and the results produced by these replications, that have encouraged us to further explore the contribution of the OTA experiments to an understanding of human behaviour beyond Milgram’s laboratory walls—including the actions displayed by perpetrators of the Holocaust.

Concluding Remarks 

We agree with many of Brannigan and Perry’s criticisms of Milgram and his laboratory work. We too believe that his experiments were highly unethical and displayed some significant methodological weaknesses, that they may not in fact have been about ‘obedience’ per se, and that Milgram can be seen to have ruthlessly used people in pursuit of his own academic career. This is a dark side of the OTA experiments that most ‘Milgram-friendly’ social psychology textbooks continue to ignore (Griggs & Whitehead, 2015: 571, 573; see also Perry, 2012; Stam, Lubek, & Radtke, 1998). However, largely because Milgram’s OTA research has been independently replicated, and despite all our misgivings, we remain persuaded that it still speaks of how people—including the Nazi perpetrators of the Holocaust—can be induced to behave in ways that they might otherwise find morally objectionable, of how the humanly undoable can become doable.

 

References

Abse, D. (1973). The dogs of Pavlov. London: Vallentine, Mitchell.

Bauer, Y. (2001). Rethinking the Holocaust. New Haven, CT: Yale University Press.

Blass, T. (2004). The man who shocked the world: The life and legacy of Stanley Milgram. New York: Basic Books.

Blass, T. (2012). A Cross-Cultural Comparison of Studies of Obedience Using the Milgram Paradigm: A Review. Social and Personality Psychology Compass, 6 (2), 196–205.

Brannigan, A., & Perry, G. (2016). Milgram, Genocide and Bureaucracy: A Post-Weberian Perspective. State Crime Journal, 5(2), 287-305.

Brief, A. P., Buttram, R. T., Elliott, J. D., Reizenstein, R. M., & McCline, R. L. (1995). Releasing the beast: a study of compliance with orders to use race as a selection criterion. Journal of Social Issues, 51(3), 177-193.

Browning, C. R. (1978). The final solution and the German Foreign Office: A study of Referat D III of Abteilung Deutschland 1940-43. New York: Holmes and Meier.

Browning, C. R. (1995). The path to genocide: essays on launching the final solution. New York: Cambridge University Press.

Browning, C. R. (2000). Nazi policy, Jewish workers, German killers. New York: Cambridge University Press.

Darley, J. M. (1995). Constructive and destructive obedience: a taxonomy of principal-agent relationships. Journal of Social Issues, 51(3), 125-154.

Friedrich, C. J. (1946). Constitutional government and democracy: theory and practice in Europe and America. Boston: Ginn and Company.

Ghosh, P. (2014). Max Weber and The Protestant Ethic: Twin histories. Oxford, UK: Oxford University Press.

Gibson, S. (2013a). Milgram’s obedience experiments: A rhetorical analysis. British Journal of Social Psychology, 52(2), 290-309.

Gibson, S. (2013b). “The last possible resort”: A forgotten prod and the in situ standardization of Stanley Milgram’s voice-feedback condition. History of Psychology, 16(3), 177-194.

Gregory, R. (2007). New public management and the ghost of Max Weber: Exorcised or still haunting? In T. Christensen and P. Laegreid (eds), Transcending new public management: The transformation of public sector reforms. Surrey: Ashgate.

Griggs, R. A., & Whitehead, G. I. (2015). Coverage of recent criticisms of Milgram’s obedience experiments in introductory social psychology textbooks. Theory & Psychology, 25(5), 564–580.

Harré, R. (1979). Social being: a theory for social psychology. Oxford, United Kingdom: Basil Blackwell.

Höpfl, H. M. (2006). Post-bureaucracy and Weber’s ‘modern’ bureaucrat. Journal of Organizational Change Management, 19(1), 8-21.

Kent, S. (1983). Weber, Goethe, and the Nietzschean allusion: Capturing the source of the “Iron Cage” metaphor. Sociological Analysis, 44(4), 297-320.

Milgram, S. (1972). Interpreting obedience: error and evidence. A reply to Orne and Holland. In A. G. Miller (Ed.), The social psychology of psychological research (pp. 138-154). New York: Free Press.

Miller, A. G., Collins, B. E., & Brief, D. E. (1995). Perspectives on obedience to authority: the legacy of the Milgram experiments. Journal of Social Issues, 51(3), 1-19.

Mixon, D. (1976). Studying feignable behavior. Representative Research in Social Psychology, 7, 89-104.

Mixon, D. (1989). Obedience and civilization: authorized crime and the normality of evil. London: Pluto Press.

Orne, M. T., & Holland, C. C. (1968). On the ecological validity of laboratory deceptions. International Journal of Psychiatry, 6(4), 282-293.

Perry, G. (2012). Beyond the shock machine: The untold story of the Milgram obedience experiments. Melbourne: Scribe.

Russell, N. J. C. (2009). Stanley Milgram’s Obedience to Authority Experiments: Towards an Understanding of their Relevance in Explaining Aspects of the Nazi Holocaust. PhD thesis: Victoria University of Wellington.

Russell, N. J. C. (2011). Milgram’s obedience to authority experiments: Origins and early evolution. British Journal of Social Psychology, 50(1), 140-162.

Russell, N. (2014). Stanley Milgram’s Obedience to Authority “Relationship” Condition: Some Methodological and Theoretical Implications. Social Sciences, 3(2), 194-214.

Russell, N. J., & Gregory, R. J. (2011). Spinning an organizational “web of obligation”? Moral choice in Stanley Milgram’s “obedience” experiments. The American Review of Public Administration, 41(5), 495-518.

Russell, N., & Gregory, R. (2015). The Milgram-Holocaust Linkage: Challenging the Present Consensus. State Crime Journal, 4(2), 128-153.

Stam, H. J., Lubek, I., & Radtke, H. L. (1998). Repopulating social psychology texts: Disembodied “subjects” and embodied subjectivity. In B. M. Bayer & J. Shotter (Eds.), Reconstructing the psychological subject: Bodies, practices, and technologies (pp. 153–186). London, UK: Sage.

Notes

[1] Abse (1973, p. 29).

[2] According to one document from Milgram’s archive, Alan T. Waterman of the NSF officially informed Yale University on 13 July 1961 that the NSF had upheld its original decision to fund Milgram’s research (SMP, Box 43, Folder 127). In fact, according to Blass (2004: 72), on agreeing to fund Milgram’s research proposal, ‘The NSF final panel rating was “Meritorious.”’

[3] One of many examples is that Claude E. Buxton (chair of Yale’s Department of Psychology) was a strong backer of Milgram’s research proposal, informing the Provost, Norman S. Buck: “I have read and approve of Mr. Milgram’s application to the National Science Foundation and trust you will sign for the University and forward his application” (SMP, Box 43, Folder 127). On the same day, Buck did as Buxton had trusted he would and endorsed Milgram’s research proposal.

[4] See Perry (2012: 307) for a conflicting but comparatively unconvincing view.