Xuhui Zhou

I am an incoming PhD student at the Language Technologies Institute at CMU interested in socially intelligent human language technology.
Specifically, I focus on enabling NLP systems to better navigate social contexts. NLP models are now strong in terms of benchmark performance, yet they still produce hate speech, contain social biases, and behave in other undesirable ways that hurt, divide, and disappoint people.
Previously, I interned at Apple Machine Intelligence. I was a CLMS student at the University of Washington (UW), advised by Noah Smith. I received my bachelor's degree in Statistics from Nanjing University (NJU).
news
Apr 14, 2022 | Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection will appear at NAACL! And Emergent Communication Fine-tuning (EC-FT) for Pretrained Language Models was selected as the runner-up best paper at the ICLR EmeCom workshop!
Jan 11, 2021 | Our paper Challenges in Automated Debiasing for Toxic Language Detection will appear at EACL 2021!
Dec 14, 2020 | Accepted two internship offers from Apple! One is with the Siri Info Intel team for the spring quarter, and the other is with the Machine Translation team for the summer quarter.
Oct 20, 2020 | Our paper Linguistically-Informed Transformations (LIT): A Method for Automatically Generating Contrast Sets will appear at BlackboxNLP 2020.
Oct 2, 2020 | Our paper Multilevel Text Alignment with Cross-Document Attention will appear at EMNLP 2020.
selected publications
- Evaluating Commonsense in Pre-trained Language Models. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 2020.