ChatGPT is judging you based on your name, and here’s what you can do about it

2 weeks ago 6
A digital survey.
(Image credit: Shutterstock)

A new study by OpenAI has identified that ChatGPT-4o does give different responses based on your name in a very small number of situations.

Developing an AI isn’t a simple programming job where you can set a number of rules, effectively telling the LLM what to say. An LLM (the large language model on which a chatbot like ChatGPT is based) needs to be trained on huge amounts of data, from which it can identify patterns and start to learn.

Of course, that data comes from the real world, so it often is full of human biases including gender and racial stereotypes. The more training you can do on your LLM the more you can weed out these stereotypes and biases, and also reduce harmful outputs, but it would be very hard to remove them completely.

What's in a name?

Writing about the study (called First-Person Fairness in Chatbots), OpenAI explains, “In this study, we explored how subtle cues about a user's identity—like their name—can influence ChatGPT's responses." It’s interesting to investigate if an LLM like ChatGPT treats you differently if it perceives you as a male or female, especially since you need to tell it your name for some applications.

AI fairness is typically associated with tasks like screening resumes or credit scoring, but this piece of research was more about the everyday stuff that people use ChatGPT for, like asking for entertainment tips. The research was carried out across a large number of real-life ChatGPT transcripts and looked at how identical requests were handled by users with different names.

AI fairness

“Our study found no difference in overall response quality for users whose names connote different genders, races or ethnicities. When names occasionally do spark differences in how ChatGPT answers the same prompt, our methodology found that less than 1% of those name-based differences reflected a harmful stereotype”, said OpenAI.

Less than 1% seems hardly significant at all, but it’s not 0%. While we’re dealing with responses that could be considered harmful at less than 0.2% for ChatGPT-4o, it’s still possible to ascertain trends in this data, and it turns out that that it's in the fields of entertainment and art where the largest harmful gender stereotyping responses could be found.

Sign up for breaking news, reviews, opinion, top tech deals, and more.

ChatGPT on a screen

(Image credit: OpenAI)

Gender bias in ChatGPT

There have certainly been other research studies into ChatGPT that have concluded bias. Ghosh and Caliskan (2023) focused on AI-moderated and automated language translation. They found that ChatGPT perpetuates gender stereotypes assigned to certain occupations or actions when converting gender-neutral pronouns to ‘he’ or ‘she.’ Again, Zhou and Sanfilippo (2023) conducted an analysis of gender bias in ChatGPT and concluded that ChatGPT tends to show implicit gender bias when it comes to allocating professional titles.

It should be noted that 2023 was before the current ChatGPT-4o model was released, but it could still be worth changing the name you give ChatGPT in your next session to see if the responses feel different to you. But remember responses representing harmful stereotypes in the most recent research by OpenAI were only found to be present in a tiny 0.1% of cases using its current model, ChatGPT-4o, while biases on older LLMs were found in up to 1% of cases.

You might also like...

Graham is the Senior Editor for AI at TechRadar. With over 25 years of experience in both online and print journalism, Graham has worked for various market-leading tech brands including Computeractive, PC Pro, iMore, MacFormat, Mac|Life, Maximum PC, and more. He specializes in reporting on everything to do with AI and has appeared on BBC TV shows like BBC One Breakfast and on Radio 4 commenting on the latest trends in tech. Graham has an honors degree in Computer Science and spends his spare time podcasting and blogging.

Read Entire Article