Essentialism, Anthropomorphism, and Artificial Intelligence

How does essentialist thinking about technology affect attitudes about AI? How does anthropomorphic thinking about technology affect attitudes about AI? What is the relationship between anthropomorphism, essentialism, and attitudes about AI?

The aim of this research is to explore the relationship between attitudes about artificial intelligence (AI) and intuitive thought patterns by answering the above questions.

Anthropomorphism, one common type of intuitive thought, is the attribution of human characteristics to non-human entities. Prior research has examined the relationship between anthropomorphism and technology: technology is perceived as better able to perform its intended function when it seems to have a humanlike mind (Waytz, Heafner, & Epley, 2014).

Prior research by Gray and Wegner (2012) examined the uncanny valley, the unnerving quality of humanlike robots. They found that feelings of uncanniness are tied to perceptions of experience (the capacity to feel and sense), and they suggest that experience, but not agency (the capacity to act and do), is seen as fundamental to humans and fundamentally lacking in machines.

Essentialism, another common type of intuitive thought, is the belief that an underlying ‘essence’ exists, gives rise to observable features, and determines category membership. However, little research appears to have examined the relationship between essentialism and technology.

Procedure

Participants will be recruited through PsyLink, Northeastern University’s recruitment portal, or through a flyer.

Participants will be randomly placed into one of two conditions: paper or computer. Depending on their condition, they will complete the survey either on printed paper or in Qualtrics. They will be asked to read and sign the consent form prior to taking the survey.

The participants will answer questions from five measures, followed by a demographics questionnaire (Appendix A). They will be presented with the attitudes and comfort measure first, followed by the two essentialism and two anthropomorphism measures in a randomized order (see the sketch below).
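For concreteness, the intended presentation logic can be sketched as follows. This is purely an illustrative sketch with hypothetical measure labels; in practice the randomization would be handled by Qualtrics' built-in randomizer (or by shuffling paper packets), but the logic is: attitudes and comfort measure fixed first, the four remaining measures in random order, demographics last.

```python
import random

# Hypothetical labels for the four randomized measures; the real instruments
# are the adapted scales described under Methodology.
RANDOMIZED_MEASURES = [
    "essentialism_scale",          # adapted Bastian & Haslam (2006) scale
    "swab_task",                   # adapted switched-at-birth paradigm
    "gaia_measure",                # adapted Gaia measure
    "explicit_anthropomorphism",   # explicit anthropomorphism measure
]

def build_session():
    """Assign a condition and a measure order for one participant."""
    condition = random.choice(["paper", "computer"])  # random assignment to condition
    order = random.sample(RANDOMIZED_MEASURES, k=len(RANDOMIZED_MEASURES))
    # Attitudes/comfort measure always comes first; demographics always last.
    sequence = ["attitudes_and_comfort", *order, "demographics"]
    return condition, sequence

condition, sequence = build_session()
print(condition, sequence)
```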

At the end of the study, participants will be debriefed and compensated (with course credit or $20, depending on how they were recruited).

Methodology

Essentialist thinking will be measured using Bastian and Haslam’s (2006) 23-item scale and Gelman and Wellman’s (1991) switched-at-birth (SWAB) paradigm. The Bastian and Haslam scale is broken into three categories: informativeness, biological basis, and discreteness. This project will use an adapted version of the scale so that essentialist thinking about technology specifically can be measured. For example, one of the original statements is ‘The kind of person someone is can be largely attributed to their genetic inheritance’; participants in this study will instead see ‘The type of computer that it is can be largely attributed to the type of hardware inside of it.’ Participants will rate on a Likert scale how much they agree or disagree with each statement. The SWAB task has also been modified to fit the aims of the study: it will ask participants whether they think a certain action is more likely to be performed by a human baby or an AI robot baby.

Anthropomorphic thinking will be measured using Kelemen’s Gaia measure and an explicit anthropomorphism measure (Betz, Pitt, & Coley). As with the essentialism scale, these measures have been adapted to refer to technology. For example, the Gaia measure statement ‘I believe that everything in nature is balanced between yin and yang’ will be adapted to ‘I believe that everything in technology is balanced between computer and human.’ Both scales will present participants with anthropomorphic statements about technology, and participants will rate on a Likert scale how much they agree or disagree with each statement.

Lastly, participants will be presented with a measure examining their attitudes about AI. This measure is being developed specifically for this project. We will present participants with a scenario in which AI is integrated into their life and then ask them to rate on a Likert scale how comfortable they would be in that situation. We will also ask them to give a brief description of why they would be comfortable or uncomfortable.

The main objective of this project is a close examination of the similarities and differences in how essentialist thinking and anthropomorphic thinking affect the way people think about advancements in technology, specifically artificial intelligence.