Specifically, I focus on enabling NLP systems to better navigate social contexts. Today's NLP models are strong in terms of benchmark performance; however, they produce hate speech, encode social biases, and behave in other undesired ways that hurt, divide, and disappoint people.
Previously, I interned at Apple as a machine learning engineer. I was a CLMS student at the University of Washington (UW), advised by Noah Smith. I received my bachelor's degree in Statistics from Nanjing University (NJU).
|Jan 11, 2021||Our paper: Challenges in Automated Debiasing for Toxic Language Detection will appear in EACL 2021!|
|Dec 14, 2020||Accepted two internship offers from Apple! One is with the Siri Info Intel team for the spring quarter, the other with the Machine Translation team for the summer quarter.|
|Oct 20, 2020||Our paper: Linguistically-Informed Transformations (LIT): A Method for Automatically Generating Contrast Sets will appear in BlackboxNLP 2020.|
|Oct 2, 2020||Our paper: Multilevel Text Alignment with Cross-Document Attention will appear in EMNLP 2020.|
|Apr 20, 2020||My undergraduate thesis: RPD: A Distance Function Between Word Embeddings was accepted by ACL SRW 2020.|
EACL | Challenges in Automated Debiasing for Toxic Language Detection. In EACL 2021.
EMNLP | Multilevel Text Alignment with Cross-Document Attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
AAAI | Evaluating Commonsense in Pre-trained Language Models. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 2020.