GPT users, (em)brace yourselves (for) cognitive dissonance!

Viktoria Popova
5 min read · May 29, 2023

[Image: Viktoria × DALL·E]

Remember how, as children, we would turn into magicians by casting shadows and creating worlds of known and unknown creatures? Our magic power was to blur the bounds between the reality of shadows and the imagined worlds of sentient beings. Most of us probably didn’t know the meaning of the word “sentient,” but we could easily traverse the realities and non-realities of our imagination. We were thriving in a state of cognitive dissonance. Fast forward to post-November 2022, and we are in a similar game … Have you noticed where cognitive dissonance has come into play in our GPT experiences?

First, a quick reminder of the definition of cognitive dissonance. Cognitive Dissonance Theory (CDT) is quite popular and has been creating a lot of buzz in the field of social psychology since its inception in 1957 (Vaidis & Bran, 2019). In the broadest terms, cognitive dissonance describes the tension that arises when we hold two contradictory beliefs (attitudes, practices, behaviors) at the same time.

So, what are the two contradictory behaviors we are practicing with GPT? Here they are:

1. We are encouraged to engage with GPT in an iterative dialogue, as we would with a sentient human being (for example, we simulate discussions with a tutor, an interviewer, a colleague, etc.).

2. We are cautioned not to anthropomorphize AI, as it is not sentient: it does not have emotions, it does not have opinions, and it does not even “understand” or “think” in the sense that we, humans, put into these faculties.

Let’s address each in a bit more detail:

1. The power of GPT lies not so much in answering a single prompt as in engaging the user in iterative dialogues that lead to problem solving, introspection, self-assessment, discovery, creativity, and other dynamic intellectual experiences. Furthermore, GPT adapts to various roles and can engage with us as a tutor, colleague, interviewer, etc. To unleash the powers of GPT, it is suggested that we lean into these experiences and explore where such interactions can take us (a minimal sketch of such an iterative, role-framed exchange appears after this list). These interactions can be very engaging, human-like experiences. You will also find yourself adopting social communication norms, such as saying “please” and “could you.” These behaviors are also encouraged, as they help GPT better understand and mimic social norms, generate more detailed responses, and create an overall enriching user experience. Bottom line: we are encouraged to engage in human-like behaviors with AI. These behaviors, however, stand in stark contrast with constant reminders not to anthropomorphize our interactions with GPT.

2. Understanding the concept of anthropomorphism, which involves attributing human characteristics to non-human entities, is crucial in the context of our interactions with AI: It has profound ethical implications. Therefore, a comprehensive understanding of our anthropomorphic tendencies and our relationship with AI is integral to navigating the ethical landscape of AI applications.
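For readers who like to peek under the hood, here is a minimal sketch of what such an iterative, role-framed dialogue can look like through OpenAI’s Chat Completions API. It is only an illustration: it assumes the pre-1.0 `openai` Python library and an `OPENAI_API_KEY` set in the environment, and the tutor prompt and follow-up question are made up for the example.

```python
# A minimal sketch of an iterative, role-framed dialogue with the Chat Completions API.
# Assumes the pre-1.0 openai Python library (e.g., openai==0.27.x) and that the
# OPENAI_API_KEY environment variable is set; the prompts are illustrative only.
import openai

messages = [
    {"role": "system", "content": "You are a patient statistics tutor."},
    {"role": "user", "content": "Could you please explain p-values with a simple example?"},
]

# First turn: the model answers in its tutor role.
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = reply["choices"][0]["message"]["content"]
print(answer)

# Iterative turn: append the model's answer plus a follow-up question, so the next
# response builds on the earlier context instead of starting from a single, isolated prompt.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Thank you! Could you quiz me on that now?"})

reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply["choices"][0]["message"]["content"])
```

The point is simply that each new turn carries the whole conversation forward, which is what makes the exchange feel like a dialogue with a colleague rather than a one-off query.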

If we ask GPT for its opinion on a given subject matter, it will most likely first remind us that it does not have opinions or feelings. And then it still goes on (in its inherent drive to support our knowledge queries) to provide the information we are seeking. Most of us understand that AI is non-sentient; we may even understand some of the logic that drives its outputs, so we recognize that AI does not actually “understand” what we are asking or what it is producing. After all, it is not even “aware” of its own existence, or, at least, of its changing states. It is “aware” of being ChatGPT: “I’m ChatGPT, a language model developed by OpenAI” and “I am an instance of GPT-3” (OpenAI, 2023). However, it is not “aware” of currently being a GPT-3.5 or a GPT-4.0 model: “As of my training cutoff in September 2021, the most recent version of me was GPT-3. I’m not able to provide information on any subsequent versions like GPT-4 because I wouldn’t have been trained on that information” (OpenAI, May 28, 2023).

Nevertheless, it is almost impossible for us, as users, not to engage in some anthropomorphic behaviors with AI (as addressed in the iterative dialogue discussion above). Besides, our language does not give us much of a choice in distinguishing between human and non-human references. Think about it: many of the words we use to describe the operation of technology are anthropomorphic, largely because human language evolved to describe human experiences, so when we need to describe new concepts or phenomena, we often borrow from existing human descriptions. For example, we say that a computer “runs” a program, that an AI “learns” from data, that a sensor “sees” an object, that a barcode scanner “reads” a barcode, and that a virtual assistant like Siri or Alexa “speaks” or “talks,” etc. These are all verbs that we also use to describe human actions. Thus, we could say that language itself is intrinsically anthropomorphic.

Where the distinction lies is in our awareness that we use these verbs metaphorically in relation to technology. A computer doesn’t “run” in the same way that a person does, and an AI doesn’t “learn” in the same way that a human does. It is our responsibility, as technology users, to understand that associating these words with technology (or other non-human entities) does not imply that the technology has human-like abilities. Thus, anthropomorphism itself could be considered a form of cognitive dissonance. The concept and behavior of anthropomorphism (especially with AI) is fascinating and important as we explore the ethical implications of our newly emerging relationship with AI. In his dissertation “Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI),” Pineda concludes that “[…] anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies” (2023, Abstract, para. 5).

To learn more about the duality of cognitive dissonance touched upon in this discussion, I asked GPT (GPT-4.0) how we should address this paradox throughout our GPT musings. You can access its full response here (OpenAI, May 27, 2023). One of the points it listed (point #5) is to “Embrace Learning Opportunities: Use this experience to deepen your understanding of the nature of AI and its implications. This could lead to intriguing questions about what consciousness really means, what role AI can and should play in our society, and how we can use these technologies responsibly” (OpenAI, May 27, 2023). Spooky, I thought, as I looked at the title I had created for this article. Should I take this resemblance (the use of the concept and the word “embrace”) as a “compliment”? Or should I view it as an affirmation of my generic thinking (since AI generates the most probable string of answers)? After all, it is our own reflection that we see in the shadows of AI. But let’s not judge ourselves, or AI; let’s learn to thrive in this new cognitive dissonance shadow play.

References:

OpenAI. (May 27, 2023). GPT-4.0 (ChatGPT May 24 Version) [Large language model]. https://chat.openai.com/share/48dbc25a-e031-40cb-9d56-ce806e58ab89

OpenAI. (May 28, 2023). GPT-4.0 (ChatGPT May 24 Version) [Large language model]. https://chat.openai.com/share/9abfee16-fc43-4621-a6b6-a15bb3cb56a7

Pineda, J. (2023). Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI) [Doctoral dissertation]. Florida International University. https://doi.org/10.25148/etd.fidc009720

Vaidis, D., & Bran, A. (2019). Respectable Challenges to Respectable Theory: Cognitive Dissonance Theory Requires Conceptualization Clarification and Operational Tools. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2019.01189

