Exploring Qualitative Research Using LLMs

06/23/2023
by Muneera Bano, et al.

The advent of AI-driven large language models (LLMs) has stirred discussions about their role in qualitative research. Some view them as tools to enrich human understanding, while others perceive them as threats to the core values of the discipline. This study aimed to compare and contrast the comprehension capabilities of humans and LLMs. We conducted an experiment with a small sample of Alexa app reviews, initially classified by a human analyst. LLMs were then asked to classify these reviews and provide the reasoning behind each classification. We compared the results with the human classification and reasoning. The research indicated a significant alignment between human and ChatGPT 3.5 classifications in one third of cases, and a slightly lower alignment with GPT-4 in over a quarter of cases. The two AI models showed higher alignment with each other, agreeing in more than half of the instances. However, a consensus across all three methods was seen in only about one fifth of the classifications. In comparing human and LLM reasoning, it appears that human analysts lean heavily on their individual experiences. As expected, LLMs instead base their reasoning on the specific word choices found in the app reviews and the functional components of the app itself. Our results highlight the potential for effective human-LLM collaboration, suggesting a synergistic rather than competitive relationship. Researchers must continuously evaluate LLMs' role in their work, thereby fostering a future where AI and humans jointly enrich qualitative research.
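To make the comparison step concrete, below is a minimal illustrative sketch (not the authors' code) of how pairwise agreement and three-way consensus between a human analyst and two LLM classifiers could be computed; the label lists are hypothetical placeholders standing in for the actual review classifications.

```python
from itertools import combinations

# Hypothetical classifications of the same app reviews by three annotators.
# In the study, these would come from a human analyst, ChatGPT 3.5, and GPT-4.
labels = {
    "human":  ["usability", "feature", "bug", "usability", "feature", "bug"],
    "gpt35":  ["usability", "bug",     "bug", "feature",   "feature", "bug"],
    "gpt4":   ["feature",   "feature", "bug", "usability", "feature", "other"],
}

def pairwise_agreement(a, b):
    """Fraction of reviews on which two annotators assign the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Agreement for every pair of annotators.
for (name_a, labs_a), (name_b, labs_b) in combinations(labels.items(), 2):
    print(f"{name_a} vs {name_b}: {pairwise_agreement(labs_a, labs_b):.2f}")

# Three-way consensus: all three annotators assign the same label to a review.
n = len(labels["human"])
consensus = sum(
    len({labels["human"][i], labels["gpt35"][i], labels["gpt4"][i]}) == 1
    for i in range(n)
) / n
print(f"consensus across all three: {consensus:.2f}")
```

Richer chance-corrected measures (e.g. Cohen's kappa) could be substituted for raw agreement, but the simple fractions above mirror the proportions reported in the abstract.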

