Xuhui Zhou

I am an incoming PhD student at the Language Technologies Institute at CMU interested in socially intelligent human language technology.

Specifically, I focus on enabling NLP systems to better navigate social contexts. NLP models today are strong in terms of benchmark performance, yet they still produce hate speech, encode social biases, and behave in other undesirable ways that hurt, divide, and disappoint people.

Previously, I interned at Apple Machine Intelligence. I was a CLMS student at the University of Washington (UW), advised by Noah Smith. I received my bachelor's degree in Statistics from Nanjing University (NJU).

news

Apr 14, 2022 Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection will appear at NAACL! And Emergent Communication Fine-tuning (EC-FT) for Pretrained Language Models was selected as the runner-up best paper at the ICLR EmeCom workshop! :tada:
Jan 11, 2021 Our paper: Challenges in Automated Debiasing for Toxic Language Detection will appear at EACL 2021!
Dec 14, 2020 Accepted two internship offers from Apple! One is with the Siri Info Intel team for the spring quarter, and the other is with the Machine Translation team for the summer quarter :tada:
Oct 20, 2020 Our paper: Linguistically-Informed Transformations (LIT): A Method for Automatically Generating Contrast Sets will appear at BlackboxNLP 2020.
Oct 2, 2020 Our paper: Multilevel Text Alignment with Cross-Document Attention will appear at EMNLP 2020.

selected publications

  1. EACL
    Challenges in Automated Debiasing for Toxic Language Detection
    Zhou, Xuhui, Sap, Maarten, Swayamdipta, Swabha, Choi, Yejin, and Smith, Noah A.
    In EACL 2021
  2. EMNLP
    Multilevel Text Alignment with Cross-Document Attention
    Zhou, Xuhui, Pappas, Nikolaos, and Smith, Noah A.
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) 2020
  3. AAAI
    Evaluating Commonsense in Pre-trained Language Models
    Zhou, Xuhui, Zhang, Y., Cui, Leyang, and Huang, Dandan
    In Proceedings of the AAAI Conference on Artificial Intelligence, 34(05) 2020
