Xuhui Zhou🐳

Incoming CS PhD · SALT · Georgia Institute of Technology

I am an incoming PhD student in the School of Interactive Computing at Georgia Tech, advised by Diyi Yang. I am interested in socially intelligent human language technology.

Specifically, I focus on enabling NLP systems to better navigate social contexts. Today's NLP models achieve strong benchmark performance, yet they still produce hate speech, encode social biases, and behave in other undesirable ways that hurt, divide, and disappoint people.

Previously, I interned at Apple as a machine learning engineer. Before that, I was a CLMS student at the University of Washington (UW), advised by Noah Smith. I received my bachelor's degree in Statistics from Nanjing University (NJU).


Jan 11, 2021 Our paper: Challenges in Automated Debiasing for Toxic Language Detection will appear in EACL 2021!
Dec 14, 2020 Accepted two internship offers from Apple! One is with the Siri Info Intel team for the spring quarter, and the other with the Machine Translation team for the summer quarter!
Oct 20, 2020 Our paper: Linguistically-Informed Transformations (LIT): A Method for Automatically Generating Contrast Sets will appear in BlackboxNLP 2020.
Oct 2, 2020 Our paper: Multilevel Text Alignment with Cross-Document Attention will appear in EMNLP 2020.
Apr 20, 2020 My undergraduate thesis, RPD: A Distance Function Between Word Embeddings, was accepted to ACL SRW 2020.

selected publications

  1. EACL
    Challenges in Automated Debiasing for Toxic Language Detection
    Zhou, Xuhui, Sap, Maarten, Swayamdipta, Swabha, Choi, Yejin, and Smith, Noah A.
    In EACL 2021
  2. EMNLP
    Multilevel Text Alignment with Cross-Document Attention
    Zhou, Xuhui, Pappas, Nikolaos, and Smith, Noah A.
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) 2020
  3. AAAI
    Evaluating Commonsense in Pre-trained Language Models
    Zhou, Xuhui, Zhang, Y., Cui, Leyang, and Huang, Dandan
    In Proceedings of the AAAI Conference on Artificial Intelligence, 34(05) 2020