Revising scientific papers based on peer feedback is a challenging task ...
Entity linking (EL) is the task of linking a textual mention to its corr...
Large language models have introduced exciting new opportunities and cha...
When reading a scholarly article, inline citations help researchers cont...
Scholars who want to research a scientific topic must take time to read,...
The volume of scientific output is creating an urgent need for automated...
Learned representations of scientific documents can serve as valuable in...
How to usefully encode compositional task structure has long been a core...
The vast scale and open-ended nature of knowledge graphs (KGs) make expl...
Training and inference with large neural models is expensive. However, f...
With the advent of large language models, methods for abstractive summar...
Knowledge graph (KG) link prediction is a fundamental task in artificial...
Systems that can automatically define unfamiliar terms hold the promise ...
We stand at the foot of a significant inflection in the trajectory of sc...
Mentorship is a critical component of academia, but is not as visible as...
The ever-increasing pace of scientific publication necessitates methods ...
Abstractive summarization systems today produce fluent and relevant outp...
Self-rationalization models that predict task labels and generate free-t...
Explanations are well-known to improve recommender systems' transparency...
Conversations aimed at determining good recommendations are iterative in...
Classifying the core textual components of a scientific paper – title, aut...
Determining coreference of concept mentions across multiple documents is...
Author Name Disambiguation (AND) is the task of resolving which author m...
Managing the data for Information Retrieval (IR) experiments can be chal...
Numerous studies have demonstrated the effectiveness of pretrained conte...
Identification of new concepts in scientific literature can help power f...
Neural Network Language Models (NNLMs) generate probability distribution...
Recent advances in commonsense reasoning depend on large-scale human-ann...
Language models pretrained on text from a wide variety of sources form t...
Representation learning is a critical ingredient for natural language pr...
Research in human-centered AI has shown the benefits of machine-learning...
Neural network language models (NNLMs) have achieved ever-improving accu...
Word embeddings capture syntactic and semantic information about words. ...
Abductive reasoning is inference to the most plausible explanation. For ...
Commonsense reasoning is a critical AI capability, but it is difficult t...
Topic models are in widespread use in natural language processing and be...
We describe a deployed scalable system for organizing published scientif...
Distributed representations of words have been shown to capture lexical ...