Semantic Channel and Shannon's Channel Mutually Match for Multi-Label Classification

05/02/2018
by   Chenguang Lu, et al.

A group of transition probability functions forms a Shannon channel, whereas a group of truth functions forms a semantic channel. Label learning lets semantic channels match Shannon channels; label selection lets Shannon channels match semantic channels. The Channel Matching (CM) algorithm is provided for multi-label classification. This algorithm adheres to the maximum semantic information criterion, which is compatible with the maximum likelihood criterion and the regularized least squares criterion. If the sample is very large, we can directly convert Shannon channels into semantic channels by the third kind of Bayes' theorem; otherwise, we can train parameterized truth functions from sampling distributions. A label may be a Boolean function of some atomic labels; to simplify learning, we may obtain only the truth functions of the atomic labels. For a given label, instances are divided into three kinds (positive, negative, and unclear) instead of the two kinds used in popular studies, so the problem with binary relevance is avoided. For each instance, the classifier selects the compound label with the most semantic information, i.e., the richest connotation. As a predictive model, the semantic channel does not change with the prior probability distribution (source) of instances; it still works when the source changes. The classifier, however, changes with the source, and hence can overcome the class-imbalance problem. It is shown that the growth of the old population changes the classifier for the label "Old" and has been impelling the semantic evolution of "Old". The CM iteration algorithm for classifying unseen instances is also introduced.
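The channel-matching step described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it assumes the conversion T(θ_j|x) = P(y_j|x) / max_x P(y_j|x) as the large-sample form of the third kind of Bayes' theorem, the logical probability T(θ_j) = Σ_x P(x) T(θ_j|x), and the semantic information measure I(x; θ_j) = log [T(θ_j|x) / T(θ_j)]; all array names and the toy numbers are hypothetical.

```python
import numpy as np

# Shannon channel: rows are labels y_j, columns are instances x.
# These transition probabilities P(y_j|x) are made-up toy values.
P_y_given_x = np.array([
    [0.8, 0.5, 0.1],   # P(y_1 | x) for each x
    [0.2, 0.5, 0.9],   # P(y_2 | x) for each x
])
P_x = np.array([0.5, 0.3, 0.2])  # prior (source) distribution of instances

# Semantic channel: convert each transition probability function into a
# truth function by dividing by its maximum over x (assumed large-sample
# form of the third kind of Bayes' theorem).
T = P_y_given_x / P_y_given_x.max(axis=1, keepdims=True)

# Logical probability of each label: T(theta_j) = sum_x P(x) T(theta_j|x).
T_logical = T @ P_x

# Semantic information: I(x; theta_j) = log [ T(theta_j|x) / T(theta_j) ].
I = np.log(T / T_logical[:, None])

# Label selection: for each instance, pick the label with the most
# semantic information.
labels = I.argmax(axis=0)
print(labels.tolist())  # one chosen label index per instance
```

Note how this mirrors the abstract's claim about the source: the truth functions `T` do not depend on `P_x`, but the logical probabilities `T_logical`, and hence the classifier, do, so changing the source shifts the decision boundary without retraining the semantic channel.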


