HR Unplugged

Can AI Guess Your Personality? What a chat with ChatGPT revealed about me

21/4/2025

Introduction

What happens when AI stops being a tool and starts becoming a mirror?

A classmate in my Organisational Psychology programme recently tested this question in a brilliantly simple way — by asking ChatGPT to guess their personality profile across popular psychometric models without providing a single formal input. No questionnaires. No Likert scales. Just conversation history.

The results? *Freakishly accurate,* according to them.


In recent years, there has been a surge of interest in using artificial intelligence to analyse personality traits, both in academic research and practical applications. Studies have demonstrated that AI-powered language models can approximate human personality assessments with surprising accuracy, sometimes matching or even exceeding the reliability of traditional self-report questionnaires. This trend is reshaping how we think about self-knowledge, coaching, and even recruitment.

This piqued my curiosity, especially as someone who's obsessed with human behaviour, empathy, and equity in the workplace. So, I tried it myself — and the results were as fascinating as they were uncomfortable.

The Science

At the heart of this experiment is a hypothesis rooted in personality psychology and linguistic analysis: our language reveals our personality traits. Research in computational psychometrics — such as the work by James Pennebaker and the use of linguistic inquiry and word count (LIWC) — shows how consistent patterns in our communication can be analysed to reveal psychological states and traits.
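As a toy illustration of what LIWC-style analysis does (this is not the real LIWC software, whose validated dictionaries span dozens of categories; the two word lists below are invented for the example), a word-frequency profile can be sketched in a few lines of Python:

```python
from collections import Counter
import re

# Hypothetical, heavily simplified category dictionaries.
# Real LIWC uses ~90 empirically validated categories.
CATEGORIES = {
    "first_person": {"i", "me", "my", "mine", "myself"},
    "tentative": {"maybe", "perhaps", "possibly", "might", "seems"},
}

def liwc_style_profile(text: str) -> dict:
    """Return the share of words falling into each (toy) category."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for category, vocab in CATEGORIES.items():
            if word in vocab:
                counts[category] += 1
    total = len(words) or 1  # avoid division by zero on empty input
    return {cat: counts[cat] / total for cat in CATEGORIES}

profile = liwc_style_profile("Maybe I might finish my report, but it seems hard.")
```

The insight Pennebaker's research builds on is exactly this kind of ratio: the proportion of function words, such as pronouns and hedges, is a surprisingly stable psychological signal.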

Recent research suggests that AI models can predict personality profiles with up to 85% accuracy after analysing just a couple of hours of conversational data, rivalling the consistency of established psychometric tools. However, it's important to recognise that these models are not infallible: they may overlook context, misinterpret sarcasm or cultural nuances, and sometimes reflect biases present in their training data. As with any tool, their insights should be viewed as probabilistic rather than definitive.

ChatGPT, through large language modelling, mimics a similar approach: it processes patterns in your language to infer your motivations, habits, and even blind spots.

But what’s even more fascinating is how this intersects with equity theory — a framework that explains how we evaluate fairness in relationships based on the balance between what we put in (inputs) and what we get out (outcomes). When AI reflects back a version of ourselves — including our hidden investments and unmet emotional returns — it doesn’t just analyse. It disrupts the perception of equity within ourselves.

Key Findings

Here’s what ChatGPT surfaced when I asked it to analyse me as a comprehensive insight and growth advisor:
  • Over-identification with professional success: My inputs (effort, ambition) are heavily tied to my sense of self-worth, creating an internal imbalance where outcomes never quite feel enough.
  • Strategic overthinking that delays action: A tendency to over-analyse risks and rewards before making decisions, often leading to inertia disguised as thoughtfulness.
  • Avoidance of emotional vulnerability: A preference for intellectualising emotions rather than sitting with them — creating emotional distance in the name of professionalism.
  • Perception management: A drive to be seen as competent often overrides authenticity, especially in challenging interpersonal or emotional situations.
  • Reflective but avoidant: A tendency to pre-emptively manage others’ reactions or discomfort — especially in leadership and inclusion work — sometimes at the expense of bolder or more provocative ideas.

Each of these traits was grounded in excerpts from my conversations — not as judgment, but as data. And, in true coaching style, each came with a tailored growth challenge.

But it’s worth pausing here.

These raw points might touch on truths, but they sit on a spectrum — and where exactly I land on that spectrum isn’t always clear. The analysis doesn’t tell me how much I over-identify with professional success or to what extent I manage perceptions; it simply signals that these tendencies are present. That distinction matters. Self-awareness isn’t about clinging to fixed identities but about recognising the fluidity of our behaviours and how they manifest differently across roles, relationships, and moments in time.

It’s also worth noting that the insights provided by AI are not absolute truths, but data-driven probabilities based on language patterns. While these findings can be eerily accurate, they may also miss the complexity and context of human experience, especially for those who communicate in less direct or culturally distinct ways.

The Equity Principle — A Deeper Look

Equity theory is traditionally concerned with how we compare our own input/output ratios to those of others, shaping our sense of fairness and motivation in social and workplace contexts. When AI reflects our personality back to us, it creates a new kind of internal comparison: we measure our self-perception against an external, algorithmic “mirror.” This can either affirm our self-concept or highlight uncomfortable discrepancies, prompting us to re-evaluate our motivations and sense of fairness within ourselves.
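The core comparison in Adams' equity theory, one's own outcome-to-input ratio set against a referent's, can be sketched as a small illustration (the function names and the exact-equality threshold are mine, not part of the theory itself):

```python
def equity_ratio(inputs: float, outcomes: float) -> float:
    """Outcome-to-input ratio at the heart of equity theory."""
    return outcomes / inputs

def perceived_equity(own_inputs: float, own_outcomes: float,
                     other_inputs: float, other_outcomes: float) -> str:
    """Compare one's own ratio to a referent's, as equity theory describes."""
    own = equity_ratio(own_inputs, own_outcomes)
    other = equity_ratio(other_inputs, other_outcomes)
    if abs(own - other) < 1e-9:
        return "equitable"
    # Under-reward and over-reward both predict discomfort and a drive
    # to restore balance, though under-reward is felt more acutely.
    return "under-rewarded" if own < other else "over-rewarded"
```

The blog's point is that an AI "mirror" adds a third term to this comparison: not just my ratio versus a colleague's, but my self-perception versus the algorithm's reading of me.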

Applications: Organisational Risks and Benefits

This exercise points to a broader shift in how we might approach coaching, leadership development, and organisational behaviour analysis.

The organisational implications of AI-driven personality analysis are profound. On one hand, these tools could help managers and HR professionals identify strengths, growth areas, and potential blind spots at scale, supporting more tailored development and fairer decision-making. On the other, there are risks: if misapplied, AI assessments could reinforce biases, undermine trust, or be used to justify unfair treatment. Organisations must balance the promise of these technologies with careful attention to transparency, context, and human judgment.


  • Personal development: AI may become a scalable, always-available self-coaching tool — one that sees patterns we are too close to recognise.
  • Manager insights: Just as I analysed myself, imagine running email correspondence or one-to-one transcripts with your manager through a similar lens. What could that tell you about their style, values, and potential equity blind spots?
  • Equity at work: Just as people seek fairness in pay, space, and recognition, we also seek internal equity — between who we think we are and how we actually show up. If this internal balance is off, performance, motivation, and even loyalty can suffer. AI may offer a provocative but precise way to spot these psychological inequities.
  • Ethical provocation: If an AI can “read” you without explicit consent — simply by accumulating enough of your words — how do we responsibly integrate these insights into work and life?

Ethics: Guidelines for Responsible Use

Given the sensitivity of personality data, ethical use of AI in this context requires clear guidelines. Transparency about how data is collected and analysed, explicit opt-in consent, and safeguards against misuse are essential. AI should augment, not replace, human insight—and its findings should always be interpreted in context, with respect for individual privacy and autonomy.

A Quote to Reflect On
“We are what we pretend to be, so we must be careful about what we pretend to be.”
— Kurt Vonnegut

A Question to Reflect On

If an AI analysed your conversations, what might it reveal that you're not ready to admit?
As you reflect on these possibilities, consider: how would you feel if your workplace used AI to analyse your emails or conversations for personality insights? Would this feel empowering, invasive, or something in between? What safeguards would you want to see in place?

Further Readings
  • O’Donnell, J. (2024, November). AI can now create a replica of your personality. MIT Technology Review.
  • Pennebaker, J. W. (2011). The Secret Life of Pronouns: What Our Words Say About Us. New York: Bloomsbury Press.
  • Turkle, S. (2015). Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin Press.
  • Walster, E., Walster, G. W., & Berscheid, E. (1978). Equity: Theory and Research. Boston: Allyn & Bacon. 


    Author

    Just me,
    an HR professional listening, learning and working towards an enhanced people experience at work
