Can Foundation Models Talk Causality?

06/14/2022
by Moritz Willig et al.

Foundation models are the subject of an ongoing heated debate that leaves open the question of progress towards AGI and divides the community into two camps: those who see the arguably impressive results as evidence for the scaling hypothesis, and those who are worried about the lack of interpretability and reasoning capabilities. By investigating the extent to which causal representations might be captured by these large-scale language models, we make a humble effort towards resolving the ongoing philosophical conflict.
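
To make the kind of probing concrete, consider the minimal sketch below. It is purely illustrative and not the paper's protocol: it assumes the Hugging Face transformers library and the public gpt2 checkpoint, and the cause-effect query wording is hypothetical.

```python
# Minimal sketch of querying a language model about both directions of a
# putative causal relation. Assumptions: Hugging Face `transformers` is
# installed and the public `gpt2` checkpoint is used; the query wording is
# hypothetical, not the paper's benchmark.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Ask about both directions of the same putative cause-effect pair.
queries = [
    "Question: Does altitude cause changes in air temperature? Answer:",
    "Question: Does air temperature cause changes in altitude? Answer:",
]

for query in queries:
    # Greedy decoding, a few tokens: we only want the model's verdict.
    completion = generator(query, max_new_tokens=5, do_sample=False)
    answer = completion[0]["generated_text"][len(query):].strip()
    print(query, "->", answer)
```

A model whose answers flip consistently with the direction of the query would at least mimic knowledge of causal asymmetry; whether that reflects a genuine causal representation is precisely the question at issue.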


research · 05/26/2023
Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance
As large language models (LLMs) are continuously being developed, their ...

research · 02/27/2023
LLaMA: Open and Efficient Foundation Language Models
We introduce LLaMA, a collection of foundation language models ranging f...

research · 08/24/2023
Causal Parrots: Large Language Models May Talk Causality But Are Not Causal
Some argue scale is all that is needed to achieve AI, covering even caus...

research · 09/19/2023
Language Modeling Is Compression
It has long been established that predictive models can be transformed i...

research · 09/13/2022
LegalBench: Prototyping a Collaborative Benchmark for Legal Reasoning
Can foundation models be guided to execute tasks involving legal reasoni...

research · 09/13/2016
Some Open Problems related to Creative Telescoping
Creative telescoping is the method of choice for obtaining information a...

research · 08/27/2023
Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP
Mechanistic interpretability seeks to understand the neural mechanisms t...
