References
- On the dangers of stochastic parrots: can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), New York, NY, USA, pp. 610–623.
- Language (technology) is power: a critical survey of “bias” in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 5454–5476.
- Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS '16), Red Hook, NY, USA, pp. 4356–4364.
- Nuanced metrics for measuring unintended bias with real data for text classification. In Companion Proceedings of the 2019 World Wide Web Conference (WWW '19).
- ELECTRA: pre-training text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia, April 26–30, 2020.
- Funnel-Transformer: filtering out sequential redundancy for efficient language processing. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), virtual, December 6–12, 2020.
- BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019.
- Intrinsic bias metrics do not correlate with application bias. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, Online.
- Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29, pp. 3315–3323.
- DeBERTa: decoding-enhanced BERT with disentangled attention. In 9th International Conference on Learning Representations (ICLR 2021), Virtual Event, Austria, May 3–7, 2021.
- Characterising bias in compressed models. arXiv:2010.03058.
- Social biases in NLP models as barriers for persons with disabilities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 5491–5501.
- SqueezeBERT: what can computer vision teach NLP about efficient neural networks? In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, Online, pp. 124–135.
- ALBERT: a lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia, April 26–30, 2020.
- RoBERTa: a robustly optimized BERT pretraining approach. arXiv:1907.11692.
- Right for the wrong reasons: diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 3428–3448.
- Using Machine Learning to Reduce Toxicity Online. https://perspectiveapi.com/how-it-works/ [Online; accessed 21-July-2021].
- Language Models are Unsupervised Multitask Learners. OpenAI technical report, 2019.
- Know what you don’t know: unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia, pp. 784–789.
- A primer in BERTology: what we know about how BERT works. Transactions of the Association for Computational Linguistics 8, pp. 842–866.
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv:1910.01108.
- MobileBERT: a compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), Online, July 5–10, 2020, pp. 2158–2170.
- Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness (FairWare '18), New York, NY, USA, pp. 1–7.
- The State of Online Harassment. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/ [Online; accessed 21-July-2021].
- SuperGLUE: a stickier benchmark for general-purpose language understanding systems. arXiv:1905.00537.
- Measuring and reducing gendered correlations in pre-trained models. arXiv:2010.06032.
- Optimized score transformation for fair classification. In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), Online, August 26–28, 2020, Proceedings of Machine Learning Research, Vol. 108, pp. 1673–1683.