Visualizing and Explaining Language Models

04/30/2022
by Adrian M. P. Braşoveanu, et al.

During the last decade, Natural Language Processing has become, after Computer Vision, the second field of Artificial Intelligence to be massively reshaped by the advent of Deep Learning. Regardless of their architecture, today's language models need to process or generate text and to predict missing words, sentences, or relations depending on the task. Due to their black-box nature, such models are difficult to interpret and explain to third parties. Visualization is often the bridge that language model designers use to explain their work: coloring salient words and phrases, clustering, or plotting neuron activations lets readers quickly understand the underlying models. This paper showcases the techniques used in some of the most popular Deep Learning visualizations for NLP, with a special focus on interpretability and explainability.

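The abstract mentions coloring salient words and phrases as one way to make a language model's behavior visible. As a minimal, self-contained sketch (not taken from the survey), the following PyTorch snippet computes gradient-based token saliency for a toy embedding classifier; the vocabulary, model, and resulting scores are illustrative assumptions, and the '#' bars stand in for the color intensities a real visualization would render over the text.

```python
# Minimal sketch (illustrative, not the paper's method): gradient-based token
# saliency for a toy classifier, the kind of signal behind word-coloring plots.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and a single tokenized input sentence (assumptions).
vocab = ["the", "movie", "was", "surprisingly", "good"]
token_ids = torch.tensor([[0, 1, 2, 3, 4]])

embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)
classifier = nn.Linear(8, 2)  # toy binary sentiment head

# Forward pass, keeping the embedding output so we can read its gradient.
embedded = embedding(token_ids)            # shape (1, seq_len, 8)
embedded.retain_grad()
logits = classifier(embedded.mean(dim=1))  # mean-pool tokens, then classify
pred = logits.argmax(dim=-1).item()

# Backpropagate the predicted-class score to the token embeddings.
logits[0, pred].backward()

# Saliency = L2 norm of the gradient at each token, normalized for coloring.
saliency = embedded.grad.norm(dim=-1).squeeze(0)
saliency = saliency / saliency.max()

for token, score in zip(vocab, saliency.tolist()):
    print(f"{token:>12s}  {'#' * int(score * 20)}")
```

Visualization tools apply the same idea to pretrained models and map the normalized scores onto a color scale over the original text instead of text bars.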
Related research

03/20/2021 · Local Interpretations for Explainable Natural Language Processing: A Survey
As the use of deep learning techniques has grown across various fields o...

07/17/2019 · A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Recently, artificial intelligence, especially machine learning has demon...

06/12/2021 · Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features
Despite the high accuracy offered by state-of-the-art deep natural-langu...

01/17/2022 · Chatbot System Architecture
The conversational agents is one of the most interested topics in comput...

12/03/2020 · Self-Explaining Structures Improve NLP Models
Existing approaches to explaining deep learning models in NLP usually su...

03/10/2023 · Does ChatGPT resemble humans in language use?
Large language models (LLMs) and LLM-driven chatbots such as ChatGPT hav...
