New Vistas to study Bhartrhari: Cognitive NLP

10/10/2018
by Jayashree Gajjam, et al.

The Sanskrit grammatical tradition, which commenced with Panini's Astadhyayi mostly as a Padasastra, culminated as a Vakyasastra at the hands of Bhartrhari. The grammarian-philosopher Bhartrhari and his authoritative work, the 'Vakyapadiya', have been studied by modern scholars for more than 50 years, since Ashok Aklujkar submitted his Ph.D. dissertation at Harvard University. The notions of the sentence and the word as meaningful linguistic units have been discussed in many works that followed. While some scholars have applied philological techniques to critically establish the text of Bhartrhari's works, others have devoted themselves to exploring their philosophical insights. Still others have studied his works from the point of view of modern linguistics and psychology, and a few have tried to justify his views through logical discussion. In this paper, we present a fresh approach to the study of Bhartrhari and his works, especially the 'Vakyapadiya'. This approach comes from the field of Natural Language Processing (NLP), more specifically from what is called Cognitive NLP. We studied the definitions of a sentence given by Bhartrhari at the beginning of the second chapter of the 'Vakyapadiya' and investigated one of these definitions experimentally, following a methodology of silent reading of Sanskrit paragraphs. We collected the gaze-behavior data of participants and analyzed it to understand the underlying comprehension procedure in the human mind, and we present our results. We evaluate the statistical significance of our results using a t-test and discuss the caveats of our work. We also offer some general remarks on this experiment and on the usefulness of this method for gaining further insight into the work of Bhartrhari.
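The abstract does not spell out the analysis pipeline, but a common workflow for this kind of silent-reading study is to aggregate an eye-tracker's fixation report per participant and condition, then compare conditions with a paired t-test. Below is a minimal Python sketch of that step; the file name fixations.csv, the column names, and the two condition labels condition_a and condition_b are illustrative assumptions, not details taken from the paper.

```python
import pandas as pd
from scipy import stats

# Hypothetical fixation-level export: one row per fixation, with columns
# participant, condition, and duration_ms. File name, column names, and
# condition labels are assumptions for illustration only.
df = pd.read_csv("fixations.csv")

# Average fixation duration per participant per condition, so the test
# compares participant-level means rather than individual fixations.
per_participant = (
    df.groupby(["participant", "condition"])["duration_ms"]
      .mean()
      .unstack("condition")
)

# Paired t-test across participants who read paragraphs in both conditions.
t_stat, p_value = stats.ttest_rel(
    per_participant["condition_a"],
    per_participant["condition_b"],
)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A paired test is the natural choice when every participant reads paragraphs of both kinds; with independent groups of readers, scipy.stats.ttest_ind would be used instead.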


Related research

- 05/13/2020: Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond
  Machine reading comprehension (MRC) aims to teach machines to read and c...
- 12/16/2021: Bridging between Cognitive Processing Signals and Linguistic Features via a Unified Attentional Network
  Cognitive processing signals can be used to improve natural language pro...
- 09/01/2023: When Do Discourse Markers Affect Computational Sentence Understanding?
  The capabilities and use cases of automatic natural language processing ...
- 10/15/2020: Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention
  A lack of corpora has so far limited advances in integrating human gaze ...
- 11/15/2021: Revisiting C.S.Peirce's Experiment: 150 Years Later
  An iconoclastic philosopher and polymath, Charles Sanders Peirce (1837-1...
- 06/10/2023: Universal Language Modelling agent
  Large Language Models are designed to understand complex Human Language....
- 02/02/2023: The Fewer Splits are Better: Deconstructing Readability in Sentence Splitting
  In this work, we focus on sentence splitting, a subfield of text simplif...
