On the universal structure of human lexical semantics

04/29/2015
by Hyejin Youn et al.

How universal is human conceptual structure? The way concepts are organized in the human brain may reflect distinct features of cultural, historical, and environmental background in addition to properties universal to human cognition. Semantics, or meaning expressed through language, provides direct access to the underlying conceptual structure, but meaning is notoriously difficult to measure, let alone parameterize. Here we provide an empirical measure of semantic proximity between concepts using cross-linguistic dictionaries. Across languages carefully selected from a phylogenetically and geographically stratified sample of genera, translations of words reveal cases where a particular language uses a single polysemous word to express concepts represented by distinct words in another. We use the frequency of polysemies linking two concepts as a measure of their semantic proximity, and represent the pattern of such linkages by a weighted network. This network is highly uneven and fragmented: certain concepts are far more prone to polysemy than others, and naturally interpretable clusters emerge, loosely connected to each other. Statistical analysis shows that such structural properties are consistent across different language groups, largely independent of geography, environment, and literacy. It is therefore possible to conclude that the conceptual structure connecting the basic vocabulary studied is primarily due to universal features of human cognition and language use.
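The network construction described in the abstract is straightforward to prototype. Below is a minimal sketch, assuming toy dictionary entries and the networkx library (neither is from the paper, whose data and statistical analysis are more involved): each polysemous word contributes a link between every pair of concepts it covers, link counts become edge weights, and clusters can then be read off with any weighted community-detection method.

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical toy data: for each (language, word) entry in a bilingual
# dictionary, the set of basic concepts that one polysemous word covers.
polysemous_entries = [
    ("lang_A", "word_1", {"SUN", "DAY"}),
    ("lang_B", "word_2", {"SUN", "DAY", "SKY"}),
    ("lang_C", "word_3", {"MOON", "MONTH"}),
    ("lang_D", "word_4", {"MOON", "MONTH"}),
    ("lang_E", "word_5", {"SEA", "WATER"}),
]

# Count how often each pair of concepts is linked by a single polysemous word.
pair_counts = Counter()
for _lang, _word, concepts in polysemous_entries:
    for c1, c2 in combinations(sorted(concepts), 2):
        pair_counts[(c1, c2)] += 1

# Build the weighted semantic-proximity network: nodes are concepts,
# edge weights are polysemy frequencies across languages.
G = nx.Graph()
for (c1, c2), count in pair_counts.items():
    G.add_edge(c1, c2, weight=count)

# Read off clusters with a weighted community-detection method
# (modularity maximisation here; the paper's own analysis differs).
communities = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"cluster {i}: {sorted(community)}")
```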

Related research

09/16/2017
SKOS Concepts and Natural Language Concepts: an Analysis of Latent Relationships in KOSs
The vehicle to represent Knowledge Organization Systems (KOSs) in the en...

05/15/2023
A Crosslingual Investigation of Conceptualization in 1335 Languages
Languages differ in how they divide up the world into concepts and words...

02/05/2018
Semantic projection: recovering human knowledge of multiple, distinct object features from word embeddings
The words of a language reflect the structure of the human mind, allowin...

04/05/2023
Behavioral estimates of conceptual structure are robust across tasks in humans but not large language models
Neural network models of language have long been used as a tool for deve...

05/22/2023
Discovering Universal Geometry in Embeddings with ICA
This study employs Independent Component Analysis (ICA) to uncover unive...

04/11/2023
Human-machine cooperation for semantic feature listing
Semantic feature norms, lists of features that concepts do and do not po...
