Sir Thomas More coined the term utopia, from Greek roots meaning “no place,” in 1516, when he wrote a two-volume book on the perfect society called Utopia. He wanted to explore the various considerations of how a perfect society would coalesce. More used the term utopia to allude to the fact that there is no such thing as a perfect society.
Yet, what is morality? What is good and bad? How do we define these terms? Their meanings have evolved over time, and with a bevy of philosophers offering research and social hypotheses, we must accept that morality is grounded in ethics and current culture. As we progress from century to century, ethics and values change, but our interest in morality does not.

Are we good or bad?

When we start the debate on whether humans are inherently good or bad, we should begin with Thomas Hobbes and John Locke and their dispute about government and its interaction with people. According to Hobbes, people are vile beasts; therefore, government must be deeply involved in protecting people from themselves.

Locke later proposed that people are in fact good, and that government can therefore take a step back. He believed that when people interact with others, they will make the right choices because they know what is good.

The interesting part of this debate is that in Hobbes’ view, people are “vile beasts” and the government needs to be involved, yet the government is itself made up of people. So how can it protect people from themselves if its members are vile beasts too? Locke’s answer was that people will do what’s good. And the real question for Locke is: How do you define what’s good?

What is good?

When young parents were asked to name the most vital element of a child’s social development, morality was at the top of the list. Morality is the capacity to make evaluations about what is right or wrong and to act in accordance with what is deemed right (Broderick and Blewitt, 2015). When we approach the idea of learning morality, we give this broad definition in terms of what is right. But when we anchor morality around what is good, we must posit that it depends on values and ethical codes, which further obscure and complicate these ideas of right and wrong. Only through this understanding can we begin to consider how people make moral decisions.

One theory about this decision-making process is the Social Intuitionist Model (SIM) from Jonathan Haidt (2001). Haidt posits that a set of causal links join three psychological processes, namely intuition, judgment, and reasoning. SIM advances the notion that intuition is the driving force behind moral judgment, and once that judgment has been made, reasoning sets in post hoc.

Joshua D. Greene developed an alternate theoretical model, the dual processing model of moral judgment, which holds that morality can also be impelled by cognition, not intuition alone. Current research highlights the role of emotion and intuition in moral adjudication, countering the view that cognition and reasoning are the most integral factors in determining morality (Paxton and Greene, 2010).

Natural intuition

According to Haidt, the argument for the decision—the logical part—only comes after you’ve made the decision. There’s a set of causal links—intuition, judgment, and reasoning. Intuition makes you feel something, which generates the judgment you have about it, which then forces you to come up with a reason for your feeling and judgment.

Haidt is a determinist, and from his perspective cognition never really plays a role. A determinist believes that our decision-making is very narrow—meaning many unconscious mechanisms are at the core of the way we navigate life. So it makes sense that he would see intuition, rather than logic, as the way people make decisions. Rational thought is not deterministic; it implies a more autonomous way of experiencing life.


Intuition is a limbic indicator—an emotional beacon that points you in the right direction. SIM establishes that after you limbicly, or emotionally, conclude something, you generate an argument. The origination is not coming from a rational argument; it’s how you feel about the question.
SIM considers reasoned moral judgment to be a rarity, no matter the circumstances within an individual. According to the SIM principle, morality that guides behavior is intuitive, and no cerebral reasoning will alter another person’s behavior unless that person has a change in sentiment.

The dual processing model disagrees with this presumption, advancing that reasoned dialogue about morality can change a person’s thinking, which in turn engenders a new sentiment (Paxton and Greene, 2010).

Cognition and rationality

Paxton and Greene’s dual processing model does not discount the fact that the way you feel about a question is involved in the decision, but it adds that one can use moral logic to reach a conclusion before the decision is made. This is also a key component when discussing changing someone’s mind: rational thought must be present for change to be possible.

Paxton and Greene (2010) offered Martin Luther King Jr.’s “I Have a Dream” speech as an example of changing someone’s perspective. Its imagery and metaphor engage the emotions of others as a way of altering perspective. Yet Paxton and Greene (2010) suggest that, in reality, emotion alone cannot be the deciding factor when engaging another person, because emotional decisions can lead someone to make choices without considering the morality of the decision. For a person to engage another and alter moral tenets, reasoning must be employed; this is accomplished through the “pain of inconsistency.”

Another consideration was advanced by Pizarro, Uhlmann, and Bloom (2003), who studied people’s reactions to the fictitious case of Barbara, who wanted to kill her husband, John. She slipped poison into his food at a restaurant. The poison was not strong enough to kill him, but it impaired the taste of the food, causing him to change his order. The replacement dish contained something John is allergic to, and he dies after eating it. When participants evaluated this scenario, they initially indicated that Barbara seemed less blameworthy for her actions; nevertheless, they could not logically explain their intuitive judgment. However, when participants were first asked to make a rational moral judgment, they were unlikely to say that Barbara was less blameworthy, citing her intention as the prime reason for her moral culpability.

This buttresses the dual processing model’s account of the social influence of moral reasoning: it was the researchers’ prior instruction to make a rational moral judgment (social discourse) that elicited an altered cognitive response from the participants. Paxton and Greene (2010) further cited various studies of brain scans taken during moral reasoning, before judgment was made, which found activity in the dorsolateral prefrontal cortex, an area identified with cognitive processing. If SIM were correct, this activity should occur only after the moral judgment has been made.


Cultural impact

As we know, culture is an integral element of moral reasoning. According to Zhang and colleagues (2013), major distinctions exist between cultures in moral reasoning. They give the example of a moral dilemma told to Chinese and American fifth-grade students: Thomas was a poor child who never won anything in his life and who had few friends. Thomas finally gets a chance to win a model-car-making competition; however, he does not do so fairly, because he has help from his brother. Thomas tells his secret to Jack. The moral question asked is whether Jack should tell on Thomas.

Initial reactions by Chinese students were allocentric; their concern for Thomas centered on a collective perspective. Chinese children concluded that Jack should tell on Thomas so that it would help correct his ways and make him a better part of society in the future. American students were more idiocentric; their concern centered on an individualistic perspective. They thought that Jack should not tell because he would get in trouble with Thomas.

These reactions are in line with the collective element of Chinese culture and the individualistic mindset of American culture. Nevertheless, collaborative discussions regarding moral reasoning helped to modify and clarify subsequent behaviors within groups. This means that rational and logical dialogues aided each group to consider alternate ways of viewing the story and subsequently modify perspectives. This example suggests that the dual processing model regarding the social influence of cognition on morality is accurate.


The therapist’s dilemma

As clinicians, the division between SIM and the dual processing model becomes integral regarding the approach that the clinician will use when contending with the moral tenets of the client. The question is: Should the clinician engage the client’s intuitions, or should the clinician focus on reason and logical discussion?

In advancing the dual processing model as a way of addressing morality in the therapeutic environment, one must be cognizant of moral development. When we think back to how highly parents ranked teaching their children morality, this cognizance becomes all the more important.

According to Piaget (1932) and Kohlberg (1981), moral development is a cognitive process. Piaget advanced that children are initially egocentric in cognition; punishment is not connected to any specific act but is instead an arbitrary response imposed by powerful adult authority figures. At the end of the preoperational phase (about age 7), children start to understand the interplay between action and consequence as predicated on mutuality rather than arbitrary elements. This is an extension of the theory of mind, through which a child can recognize others and their perspectives and intents.

Callender (2002) suggests that depressed people, whom Beck (1979) understood to be those with a negative opinion of themselves, of the world, and of the future, may need to graduate from Piaget’s first moral stage of powerful authority figures to the second stage of recognizing others’ points of view through mutuality. Piaget’s third level, reached at about age 10 or 11, is when an appreciation for rules develops, along with the recognition that rules can change through consensus.


In therapy, people often frame their circumstances through a moral lens, asking, “Did I do the right thing?” or simply saying, “I’m bad,” especially when discussing addiction or in marriage therapy. Whiting (2008) advanced that in a clinical setting focused on couples’ therapy, couples spoke in moral terminology about responsibility for behavior and about their self-appraisals. Often, people on the defensive bend morality or modify their recollection of an episode in order to reconcile the event with a personal moral tenet.

Callender (2002) advances that this is common. Persons who believe they are failures will recast praise and progress as either not genuine or as a disappointment because of some perceived flaw. Clients who behave in conflict with moral principles, for example through violence and aggression, may rationalize that their behavior was defensive and that the victim deserved the consequence, thereby freeing themselves of moral culpability. It could also be that the moral stage in which a client is situated impedes understanding of the moral imperative of respecting others.

As a therapist, it’s important to recognize the role that the concept of good and bad may have in the conversation. However, “utopia” does not exist. There is no morally right society or set dichotomy of good and bad. When we consider the ways that the dual processing model affects our patients, we can more adequately assist them in their journeys.

About Rabbi Ron Finkelstein

Rabbi Ron Finkelstein serves as the director of a mental health and addiction clinic in Brooklyn, New York. Rabbi Finkelstein earned his master’s degree in clinical mental health counseling at Touro Graduate School of Behavioral Science and is currently obtaining a doctorate in clinical psychology at Saybrook. His research is focused on religion and psychology.