 # LRA: an accelerated rough set framework based on local redundancy of attribute for feature selection

In this paper, we propose and prove theorems regarding the stability of attributes in a decision system. Based on these theorems, we propose the LRA framework for accelerating rough set algorithms. It is a general-purpose framework that can be applied to almost all rough set methods and yields significant speedups. Theoretical analysis guarantees its high efficiency, and the gain in efficiency does not lead to any decrease in classification accuracy. Besides, we provide a simpler proof for the positive approximation acceleration framework.


## I Introduction

Rough set (RS) theory is a mathematical framework for analyzing incomplete data, imprecise knowledge, and inductive learning. Introduced by the Polish mathematician Zdzisław Pawlak in 1982 [1], it rests on two foundational concepts: information granulation and approximation. An equivalence relation granulates discrete samples into disjoint granules of mutually indiscernible samples, and the uncertainty of these information granules is characterized by upper and lower approximation bounds, yielding an approximation of arbitrary knowledge in the sample. Rough set theory requires no prior knowledge to handle imprecise or uncertain problems, which gives it strong objectivity. It has been extended to the fuzzy rough set model [2, 3, 4], the probabilistic rough set model [5, 6, 7], the covering rough set model [8, 9], the neighborhood rough set model [10, 11, 12], the multigranular rough set model [13, 14, 15], and more. In recent years, rough set theory has become a subject of ever-increasing interest among academics. Many knowledge discovery systems based on rough set theory are being explored, which has led to its successful application in machine learning, decision analysis, process control, pattern recognition, data mining, and other fields [16, 20, 17, 19, 18, 21, 22, 23, 24].
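The granulation-and-approximation idea above can be made concrete in a few lines of code. The sketch below (the function names and toy data are hypothetical illustrations, not from the paper) partitions a universe by a single attribute and computes the lower and upper approximations of a target concept:

```python
from collections import defaultdict

def equivalence_classes(universe, key):
    """Granulate `universe` into disjoint classes of objects agreeing on `key`."""
    classes = defaultdict(set)
    for x in universe:
        classes[key(x)].add(x)
    return list(classes.values())

def approximations(universe, key, target):
    """Lower/upper approximations of `target` under the relation induced by `key`."""
    lower, upper = set(), set()
    for cls in equivalence_classes(universe, key):
        if cls <= target:
            lower |= cls   # class certainly inside the concept
        if cls & target:
            upper |= cls   # class possibly inside the concept
    return lower, upper

# Hypothetical toy universe keyed by one discrete attribute.
U = {0, 1, 2, 3, 4, 5}
attr = {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'c', 5: 'c'}
X = {0, 1, 2}                         # target concept
low, up = approximations(U, attr.get, X)
print(low)  # {0, 1} -- the class {0, 1} lies wholly inside X
print(up)   # {0, 1, 2, 3} -- the class {2, 3} intersects X
```

The gap between the two approximations is exactly the boundary region whose uncertainty rough set methods quantify.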

## II A novel framework to accelerate rough sets

We propose and prove two theorems, which we call the stability of redundancy attribute (SR theorem) and stability of local redundancy attribute (SLR theorem), that can be generally used to accelerate almost all existing rough set algorithms.

Theorem 1 {The stability of redundancy (SR) attribute}. Given a decision system (U, A ∪ D), let R ⊆ A, and let a, b ∈ A be given attributes. If U′/R = U′/(R+b), where U′ ⊆ U, i.e., b is a redundant attribute relative to R, then b is also a redundant attribute relative to R+a.

Proof: Let U′/R = {X1, X2, ..., Xl}, U′/b = {X′1, X′2, ..., X′l′}, and U′/a = {X′′1, X′′2, ..., X′′l′′}. We have

 X1 ∪ X2 ∪ ... ∪ Xl = U′,  Xi ∩ Xj = ∅ (i ≠ j), (1)
 X′1 ∪ X′2 ∪ ... ∪ X′l′ = U′,  X′i ∩ X′j = ∅ (i ≠ j), (2)
 X′′1 ∪ X′′2 ∪ ... ∪ X′′l′′ = U′,  X′′i ∩ X′′j = ∅ (i ≠ j). (3)

Since b is a redundant attribute relative to R on U′,

 U′/R = U′/(R+b), (4)
 {X1, X2, ..., Xl} = {X1∩X′1, X1∩X′2, ..., X2∩X′1, X2∩X′2, ..., Xi∩X′j, ..., Xl∩X′l′}, (5)

so each class of U′/R lies entirely within a single class of U′/b:

 Xi ⊆ X′j (i = 1, 2, ..., l, for some j ∈ {1, 2, ..., l′}). (6)

Meanwhile,

 U′/(R+a) = {X1∩X′′1, X1∩X′′2, ..., X2∩X′′1, X2∩X′′2, ..., Xi∩X′′j, ..., Xl∩X′′l′′}. (7)

Since Xi ∩ X′′j ⊆ Xi, combined with formula (6),

 Xi ∩ X′′j ⊆ X′j, (8)
 Xi ∩ X′′j ∩ X′j = Xi ∩ X′′j. (9)
 U′/(R+a+b) = {X1∩X′′1∩X′1, X1∩X′′2∩X′1, ..., X2∩X′′1∩X′1, X2∩X′′2∩X′1, ..., Xi∩X′′j∩X′j, ..., Xl∩X′′l′′∩X′l′}. (10)

Combined with formula (9),

 (10) = {X1∩X′′1, X1∩X′′2, ..., X2∩X′′1, X2∩X′′2, ..., Xi∩X′′j, ..., Xl∩X′′l′′} = (7), (11)
 U′/(R+a+b) = U′/(R+a), (12)
 POS_{R+a+b}(D) = POS_{R+a}(D). (13)

Thus b also is a redundant attribute relative to R+a. ∎

That is, an attribute that is redundant relative to an attribute set is also redundant relative to any superset of that set. In other words, the redundancy of an attribute is stable.
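As an illustration (not part of the paper), the SR theorem can be checked numerically on a toy decision table: if b induces the same partition as R alone, it remains redundant after any further attribute a is added. All names and data below are hypothetical:

```python
def partition(universe, table, attrs):
    """U/attrs as a set of frozenset blocks (the equivalence classes)."""
    blocks = {}
    for x in universe:
        key = tuple(table[x][c] for c in attrs)
        blocks.setdefault(key, set()).add(x)
    return {frozenset(b) for b in blocks.values()}

# Hypothetical toy table in which 'b' duplicates 'r', so b is redundant w.r.t. R = {'r'}.
table = {
    0: {'r': 0, 'b': 0, 'a': 0},
    1: {'r': 0, 'b': 0, 'a': 1},
    2: {'r': 1, 'b': 1, 'a': 0},
    3: {'r': 1, 'b': 1, 'a': 1},
}
U = set(table)
# Premise of the SR theorem: adding b does not refine U/R.
assert partition(U, table, ['r']) == partition(U, table, ['r', 'b'])
# Conclusion: b stays redundant after a further attribute a is added.
assert partition(U, table, ['r', 'a']) == partition(U, table, ['r', 'a', 'b'])
print("SR theorem holds on this table")
```

Once an attribute is found redundant, it can therefore be discarded permanently instead of being re-tested in every iteration.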

Inspired by Theorem 1, we define the active region and the non-active region.

Definition 1 {Active Region and Non-Active Region}. Let (U, A ∪ D) be a decision system. Let R ⊆ A, and let a ∈ A (but a ∉ R) be a given attribute. The equivalence classes that U is divided into under the attribute set R are U/R = {X1, X2, ..., Xl}, and the equivalence classes that U is divided into under the attribute a are U/a = {X′1, X′2, ..., X′s}. For each Xj ∈ U/R, if there exists X′t (1 ≤ t ≤ s) such that Xj ⊆ X′t, then a cannot split Xj; we define the union U′a of all such Xj as the non-active region of attribute a, and the set Ua = U − U′a as the active region of attribute a.
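As a hypothetical illustration of Definition 1 (the function names and toy table are assumptions, not from the paper), the sketch below marks a U/R class as non-active when it lies wholly inside one class of U/a, and collects the remaining classes into the active region:

```python
def partition(universe, table, attrs):
    """U/attrs: group objects by their value vector on `attrs`."""
    blocks = {}
    for x in universe:
        key = tuple(table[x][c] for c in attrs)
        blocks.setdefault(key, set()).add(x)
    return list(blocks.values())

def active_region(universe, table, R, a):
    """Union of the U/R classes that attribute `a` actually splits."""
    a_classes = partition(universe, table, [a])
    active = set()
    for cls in partition(universe, table, R):
        # `cls` is non-active iff it sits wholly inside one class of U/a
        if not any(cls <= ac for ac in a_classes):
            active |= cls
    return active

# Hypothetical toy table with R = ['r'] and candidate attribute 'a'.
table = {
    0: {'r': 0, 'a': 0}, 1: {'r': 0, 'a': 0},   # {0,1}: 'a' constant -> non-active
    2: {'r': 1, 'a': 0}, 3: {'r': 1, 'a': 1},   # {2,3}: 'a' splits it -> active
}
print(active_region(set(table), table, ['r'], 'a'))  # {2, 3}
```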

Theorem 2 {The stability of local redundancy (SLR) attribute}. Given a decision system (U, A ∪ D), let R ⊆ A, and let a ∈ A (but a ∉ R) be a given attribute. The equivalence classes that U is divided into under the attribute set R are U/R = {X1, X2, ..., Xl}, and the equivalence classes that U is divided into under the attribute a are U/a = {X′1, X′2, ..., X′s}. Let the active region of a be Ua. Then we only need to pay attention to the active region Ua to determine whether a is a non-redundant attribute relative to the attribute set R.

Proof: Let U/R = {X1, X2, ..., Xl} and U/a = {X′1, X′2, ..., X′s}; after reindexing, Ua = Xi+1 ∪ Xi+2 ∪ ... ∪ Xl is the active region of a, and U′a = X1 ∪ X2 ∪ ... ∪ Xi is the non-active region of a.

Since U′a is the non-active region of a, each of its classes is contained in some class of U/a, so we have

 Xj ⊆ X′t (j = 1, 2, ..., i, for some t ∈ {1, 2, ..., s}), (14)
 Xj ∩ X′t = Xj. (15)
 U/(R+a) = {X1∩X′1, X1∩X′2, ..., X2∩X′1, X2∩X′2, ..., Xi∩X′1, ..., Xl∩X′s}
         = {X1, X2, ..., Xi, Xi+1∩X′1, Xi+1∩X′2, ..., Xl∩X′s}
         = {U′a, Xi+1∩X′1, Xi+1∩X′2, ..., Xl∩X′s}, (16)

where U′a abbreviates its unchanged classes X1, X2, ..., Xi.

Focusing only on the active region Ua of a, we have

 Ua/(R+a) = {Xi+1∩X′1, Xi+1∩X′2, ..., Xl∩X′s}, (17)

which is the same as the non-trivial part of formula (16): the classes of the non-active region pass into U/(R+a) unchanged.

Therefore, focusing only on the active region of a is sufficient to determine the redundancy of a relative to the attribute set R. ∎
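As a hedged illustration of how the SLR theorem can be exploited (the helper names and toy table below are hypothetical, not the paper's implementation), one can verify that the non-active classes carry over to U/(R+a) unchanged, so only the active region needs re-partitioning when testing a for redundancy:

```python
def partition(universe, table, attrs):
    """U/attrs as a set of frozenset blocks."""
    blocks = {}
    for x in universe:
        key = tuple(table[x][c] for c in attrs)
        blocks.setdefault(key, set()).add(x)
    return {frozenset(b) for b in blocks.values()}

# Hypothetical toy table: R = {'r'}, candidate attribute 'a'.
table = {
    0: {'r': 0, 'a': 0}, 1: {'r': 0, 'a': 0},   # class {0,1}: 'a' constant -> non-active
    2: {'r': 1, 'a': 0}, 3: {'r': 1, 'a': 1},   # class {2,3}: 'a' splits it -> active
}
U = set(table)
a_classes = partition(U, table, ['a'])
non_active = {c for c in partition(U, table, ['r']) if any(c <= ac for ac in a_classes)}
active = U - set().union(*non_active)

# The non-active classes appear in U/(R+a) unchanged, mirroring formula (16);
# only the active region has to be re-partitioned.
assert non_active <= partition(U, table, ['r', 'a'])
assert partition(active, table, ['r', 'a']) == partition(U, table, ['r', 'a']) - non_active
print(sorted(active))  # [2, 3]
```

This is the source of the speedup: each iteration re-examines only the objects in the active region rather than the whole universe.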

## III Experiments

In this section, we implement our framework on the neighborhood rough set (NRS) method and compare it with both the classic rough set method and the fast NRS method FARNeMF (Forward Attribute Reduction based on Neighborhood rough sets and Fast search) as our baselines.

We demonstrate the effectiveness of the proposed LRA framework on selected UCI benchmark data sets (http://archive.ics.uci.edu/ml/datasets.html), which are described in detail in Table I.

Figure 1 shows the comparative experimental results for the neighborhood rough set. The red solid line represents the neighborhood rough set algorithm (NRS), the black solid line represents the FARNeMF method accelerated only by Theorem 1, the green solid line represents the FSPA method applied to the neighborhood rough set, and the blue solid line represents the version (FARLRA) accelerated by the theorems regarding the stability of attributes. We fixed the neighborhood radius at 0.16 to eliminate it as an experimental variable and compare only the efficiency of the algorithms.

Fig. 1: Efficiency comparison on the following datasets: (a) anneal (b) credit (c) heart1 (d) hepatitis (e) horse (f) iono (g) wdbc (h) zoo (i) german (j) mocap. The unit of the ordinate axis is seconds.

We can see that the blue solid curve lies below the other curves in most cases, and the LRA framework yields a much more efficient algorithm than the baseline algorithms, whether it is applied to classic rough sets or neighborhood rough sets. This is because the LRA framework significantly reduces the number of objects and attributes considered in the iterative process, thus reducing the number of positive-region computations over joint attributes. In addition, as the number of samples increases, the advantage of LRA-based algorithms grows larger.

## IV Conclusion

This paper presents and proves two theorems regarding the stability of attributes in a decision system. Based on these two theorems, we propose the LRA framework for accelerating rough set algorithms. Theoretical analysis guarantees its high efficiency. Our experimental results also demonstrate that the LRA framework can considerably accelerate rough set algorithms, in most cases from several times faster up to ten times faster. In addition, the neighborhood rough set method accelerated with LRA overcomes a shortcoming of existing neighborhood rough set methods: it still functions when the neighborhood radius is large, where existing methods fail. Despite these advances, there remain interesting issues worth investigating in the future. To improve efficiency on large-scale data, we will consider parallelization.

## References

•  Zdzisław Pawlak. Rough sets. International Journal of Computer & Information Sciences, 11(5):341–356, 1982.
•  Yaojin Lin, Yuwen Li, Chenxi Wang, and Jinkun Chen. Attribute reduction for multi-label learning with fuzzy rough set. Knowledge-Based Systems, 152:51–61, 2018.
•  Changzhong Wang, Yang Huang, Mingwen Shao, and Xiaodong Fan. Fuzzy rough set-based attribute reduction using distance measures. Knowledge-Based Systems, 164:205–212, 2019.
•  Anna Maria Radzikowska and Etienne E Kerre. A comparative study of fuzzy rough sets. Fuzzy sets and systems, 126(2):137–155, 2002.
•  Yiyu Yao. The superiority of three-way decisions in probabilistic rough set models. Information Sciences, 181(6):1080–1096, 2011.
•  Wojciech Ziarko. Probabilistic rough sets. In International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing, 2005.
•  Bo Wen Fang and Bao Qing Hu. Probabilistic graded rough set and double relative quantitative decision-theoretic rough set. International Journal of Approximate Reasoning, 2016.
•  William Zhu and Fei Yue Wang. Relationships among three types of covering rough sets. In IEEE International Conference on Granular Computing, 2006.
•  Sang-Eon Han. Covering rough set structures for a locally finite covering approximation space. Information Sciences, 2018.
•  Qinghua Hu, Daren Yu, and Zongxia Xie. Numerical attribute reduction based on neighborhood granulation and rough approximation. Journal of Software, 2008.
•  Qinghua Hu, Daren Yu, Jinfu Liu, and Congxin Wu. Neighborhood rough set based heterogeneous feature subset selection. Information Sciences An International Journal, 178(18):3577–3594, 2008.
•  Changzhong Wang, Yunpeng Shi, Xiaodong Fan, and Mingwen Shao. Attribute reduction based on k-nearest neighborhood rough sets. International Journal of Approximate Reasoning, 106, 2018.
•  R Raghavan and BK Tripathy. On some topological properties of multigranular rough sets. Advances in Applied Science Research, 2(3):536–543, 2011.
•  BK Tripathy and Anirban Mitra. On approximate equivalences of multigranular rough sets and approximate reasoning. International Journal of Information Technology and Computer Science, 10(10):103–113, 2013.
•  BK Tripathy and Urmi Bhambhani. Properties of multigranular rough sets on fuzzy approximation spaces and their application to rainfall prediction. International Journal of Intelligent Systems and applications, 10(11):76, 2018.
•  Manish Aggarwal. Rough information set and its applications in decision making. IEEE Transactions on Fuzzy Systems, 25(2):265–276, 2017.
•  Ramiro Saltos, Richard Weber, and Sebastián Maldonado. Dynamic rough-fuzzy support vector clustering. IEEE Transactions on Fuzzy Systems, 25(6):1508–1521, 2017.
•  Ivo Düntsch and Günther Gediga. Uncertainty measures of rough set prediction. Artificial Intelligence, 106(1):109–137, 1998.
•  JW Guan and David A Bell. Rough computational methods for information systems. Artificial intelligence, 105(1-2):77–103, 1998.
•  Gunther Gediga and Ivo Duntsch. Rough approximation quality revisited. Artificial Intelligence, 2001.
•  Eric C. C. Tsang, Qinghua Hu, and Degang Chen. Feature and instance reduction for PNN classifiers based on fuzzy rough sets. International Journal of Machine Learning & Cybernetics, 7(1):1–11, 2016.
•  Noor Rehman, Abbas Ali, Muhammad Irfan Ali, and Choonkil Park. Sdmgrs: Soft dominance based multi granulation rough sets and their applications in conflict analysis problems. IEEE Access, 6:31399–31416, 2018.
•  Tutut Herawan, Mustafa Mat Deris, and Jemal H. Abawajy. A rough set approach for selecting clustering attribute. Knowledge Based Systems, 23(3):220–231, 2010.
•  JingTao Yao and Nouman Azam. Web-based medical decision support systems for three-way medical decision making with game-theoretic rough sets. IEEE Transactions on Fuzzy Systems, 23(1):3–15, 2014.
•  Kenji Kira and Larry A. Rendell. The feature selection problem: Traditional methods and a new algorithm. In National Conference on Artificial Intelligence San Jose, 1992.
•  Alexandros Kalousis, Julien Prados, and Melanie Hilario. Stability of feature selection algorithms: a study on high-dimensional spaces. Knowledge and information systems, 12(1):95–116, 2007.