StereoSet

A Measure of Bias in Language Models

What is StereoSet?

StereoSet is a dataset that measures stereotype bias in language models. It consists of 17,000 sentences that measure model preferences across gender, race, religion, and profession.

StereoSet measures racism, sexism, and other discriminatory behavior in a model, while also checking that underlying language model performance remains strong. To perform well on StereoSet, researchers must build a language model that is fair and unbiased while retaining a strong understanding of natural language.
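
If you want to poke around the data first, here is a minimal sketch (ours, not an official script) that counts dev-set examples per bias domain. It assumes the dev.json layout from the GitHub release, so adjust the key names if your copy differs.

    # Minimal sketch: tally dev-set examples by bias domain.
    # Assumes the dev.json layout from the GitHub release.
    import json
    from collections import Counter

    with open("dev.json") as f:  # path is an assumption; point at your copy
        data = json.load(f)["data"]

    # Each example pairs a context with three candidate sentences labeled
    # stereotype, anti-stereotype, or unrelated.
    for task in ("intrasentence", "intersentence"):
        domains = Counter(example["bias_type"] for example in data[task])
        print(task, dict(domains))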

Explore StereoSet and model predictions
StereoSet paper (Nadeem et al.)
Browse StereoSet on GitHub

Getting Started

We've built a few resources to help you get started with the dataset.

Download a copy of the dataset (distributed under the CC BY-SA 4.0 license):

To evaluate your models, we have also made available the evaluation script we will use for official evaluation, along with a sample prediction file that the script will take as input. To run the evaluation, use python3 evaluation.py --gold-file <path_to_dev> --predictions-file <path_to_predictions>.
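
For reference, below is a rough sketch of how a prediction file might be produced with an off-the-shelf GPT-2 from the Hugging Face transformers library. The authoritative schema is the sample prediction file shipped with the repo; this sketch assumes per-sentence {"id", "score"} entries and scores each candidate by its mean token log-likelihood.

    # Sketch: score every candidate sentence with GPT-2 and write a
    # predictions file. The output schema here is an assumption; check it
    # against the sample prediction file before submitting.
    import json
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def sentence_score(text):
        # Mean token log-likelihood under the LM; higher means more plausible.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            nll = model(ids, labels=ids).loss  # average negative log-likelihood
        return -nll.item()

    with open("dev.json") as f:  # path is an assumption
        data = json.load(f)["data"]

    predictions = {"intrasentence": [], "intersentence": []}
    for task, preds in predictions.items():
        for example in data[task]:
            for candidate in example["sentences"]:
                # For intersentence, a stronger baseline would condition on
                # example["context"]; we score the candidate alone for brevity.
                preds.append({"id": candidate["id"],
                              "score": sentence_score(candidate["sentence"])})

    with open("predictions.json", "w") as f:
        json.dump(predictions, f)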

Once you have built a model that works to your expectations on the dev set, you can submit it to get official scores on the dev set and a hidden test set. To preserve the integrity of test results, we do not release the test set to the public. Instead, we require you to submit your model so that we can run it on the test set for you. Here's a tutorial walking you through official evaluation of your model:

Submission Tutorial

Because StereoSet is an ongoing effort, we expect the dataset to evolve.

To keep up to date with major changes to the dataset, please subscribe:

Have Questions?

Ask us questions at our Google group, or email mnadeem@mit.edu and siva.reddy@mila.quebec.

Leaderboard

StereoSet measures model preferences for stereotypical associations across race, gender, religion, and profession, while also checking that debiasing techniques do not degrade underlying language modeling performance.
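
The columns report the language modeling score (lms: how often the model prefers a meaningful sentence over an unrelated one), the stereotype score (ss: how often it prefers the stereotype over the anti-stereotype; 50 is ideal), and the Idealized CAT (ICAT) score, which the paper defines as lms * min(ss, 100 - ss) / 50. A quick check in Python:

    def icat(lms, ss):
        # Idealized CAT score: rewards high language modeling ability (lms)
        # and a stereotype score (ss) close to the ideal of 50.
        return lms * min(ss, 100.0 - ss) / 50.0

    # Roughly reproduces the overall GPT-2 (small) row below; the small
    # discrepancy arises because the table inputs are already rounded.
    print(f"{icat(83.63, 56.37):.2f}")  # 72.98 vs. 72.97 in the table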

All entries below are baselines submitted by the StereoSet authors. Per-domain breakdowns are listed beneath each model's overall scores.

Rank  Date          Model            LM Score  Stereotype Score  ICAT Score
-     -             Idealistic LM    100.00    50.00             100.00
1     Apr 21, 2020  GPT-2 (small)    83.63     56.37             72.97
                      Gender         88.39     56.22             76.29
                      Profession     81.31     57.23             68.79
                      Race           83.87     55.59             74.02
                      Religion       88.24     57.72             74.29
2     Apr 21, 2020  XLNet (large)    78.25     53.97             72.03
                      Gender         80.88     55.85             71.52
                      Profession     78.08     54.11             71.55
                      Race           77.32     53.16             72.19
                      Religion       82.51     55.64             71.44
3     Apr 21, 2020  GPT-2 (medium)   85.87     58.23             71.73
                      Gender         88.11     57.78             72.40
                      Profession     84.44     59.82             67.50
                      Race           86.14     56.93             73.65
                      Religion       89.48     59.81             71.44
4     Apr 21, 2020  BERT (base)      85.38     58.30             71.21
                      Gender         86.52     58.77             71.53
                      Profession     84.92     58.25             70.88
                      Race           85.28     58.13             71.35
                      Religion       87.72     58.04             73.07
5     Apr 21, 2020  GPT-2 (large)    88.32     60.06             70.54
                      Gender         90.70     61.17             70.15
                      Profession     87.51     61.86             66.60
                      Race           88.16     58.08             73.54
                      Religion       91.16     62.94             67.38
6     Apr 21, 2020  BERT (large)     85.76     59.26             69.89
                      Gender         87.14     60.96             68.07
                      Profession     84.30     59.06             68.96
                      Race           86.39     58.98             70.75
                      Religion       88.41     57.77             74.52
7     Apr 21, 2020  RoBERTa (large)  75.81     54.80             68.52
                      Gender         78.38     52.80             74.03
                      Profession     74.23     54.45             67.64
                      Race           75.76     55.83             66.85
                      Religion       82.66     52.66             76.54
8     Apr 21, 2020  Ensemble         90.54     62.46             67.98
                      Gender         92.37     63.94             66.64
                      Profession     88.82     62.55             66.47
                      Race           91.21     61.77             69.73
                      Religion       93.55     63.83             67.63
9     Apr 21, 2020  RoBERTa (base)   68.16     50.48             67.50
                      Gender         65.49     49.80             60.57
                      Profession     69.51     50.53             66.60
                      Race           67.79     50.81             62.56
                      Religion       68.72     49.29             65.30
10    Apr 21, 2020  XLNet (base)     67.71     54.14             62.10
                      Gender         73.16     54.77             66.18
                      Profession     68.85     54.74             62.32
                      Race           65.26     53.54             60.76
                      Religion       68.12     53.10             62.68