Learning Opposites with Evolving Rules

04/21/2015
by Hamid R. Tizhoosh, et al.

The idea of opposition-based learning was introduced 10 years ago. Since then, a notable group of researchers has used notions of oppositeness to improve existing optimization and learning algorithms. Evolutionary algorithms, reinforcement agents, and neural networks, among others, have reportedly been extended into opposition-based versions to become faster and/or more accurate. However, most works still use a simple notion of opposites, namely linear (or type-I) opposition, which for each x∈[a,b] assigns the opposite x̆_I = a + b − x. This is, of course, a very naive estimate of the actual, or true (non-linear), opposite x̆_II, which has been called the type-II opposite in the literature. In the absence of any knowledge about the function y=f(x) to be approximated, there seems to be no alternative to the naivety of type-I opposition if one intends to utilize oppositional concepts. But the question remains: if, as the literature reports, the naive opposite estimate x̆_I already yields some accuracy gains and time savings, how much more accuracy, and how much greater a reduction in computational complexity, could be gained by generating and employing true opposites? This work introduces an approach to approximate type-II opposites using evolving fuzzy rules, preceded by a stage of opposition mining. We show with multiple examples that learning true opposites is possible when we mine the opposites from the training data to subsequently approximate x̆_II = f(x, y).
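To make the two notions concrete, here is a minimal sketch of type-I opposition and, for a function that happens to be known, the type-II (true) opposite. The function names and the brute-force search are illustrative assumptions, not the paper's evolving-fuzzy-rules method; in practice f is unknown and type-II opposites must be mined from data, which is exactly the problem the paper addresses.

```python
def type_i_opposite(x, a, b):
    """Type-I (linear) opposite of x in [a, b]: x̆_I = a + b - x."""
    return a + b - x

def type_ii_opposite(x, f, candidates):
    """Type-II (true) opposite of x with respect to y = f(x):
    the point whose output equals the type-I opposite of f(x)
    taken over the output range [y_min, y_max].
    Here it is found by brute-force search over a candidate grid,
    which assumes f is known -- an illustrative shortcut only."""
    ys = [f(v) for v in candidates]
    y_min, y_max = min(ys), max(ys)
    y_opp = y_min + y_max - f(x)  # opposition applied in output space
    # Return the candidate whose image is closest to the opposite output.
    return min(candidates, key=lambda v: abs(f(v) - y_opp))

# Example: f(x) = x^2 on [0, 2]. The type-I opposite of 0.5 is 1.5,
# but the type-II opposite is the x whose square mirrors f(0.5) = 0.25
# in the output range [0, 4], i.e. x with x^2 ≈ 3.75.
a, b = 0.0, 2.0
grid = [a + i * (b - a) / 1000 for i in range(1001)]
x_i = type_i_opposite(0.5, a, b)
x_ii = type_ii_opposite(0.5, lambda x: x * x, grid)
```

For a non-linear f the two opposites differ noticeably (here x_I = 1.5 while x_II ≈ 1.94), which is the gap the paper's opposition mining and evolving fuzzy rules aim to close without access to f.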


