I Introduction
Rough set (RS) theory mathematically analyzes the representation of incomplete data, imprecise knowledge, and inductive learning. Introduced by the Polish mathematician Zdzisław Pawlak in 1982 [1], its foundational concepts are information granulation and approximation. An equivalence relation granulates discrete samples into disjoint granules of samples carrying equivalent information, and the uncertainty of the information granules is characterized by upper and lower approximation bounds, yielding an approximation of arbitrary knowledge in the sample. Rough set theory requires no prior knowledge to handle imprecise or uncertain problems, which gives it strong objectivity. It has been extended to the fuzzy rough set model [2, 3, 4], the probabilistic rough set model [5, 6, 7], the covering rough set model [8, 9], the neighborhood rough set model [10, 11, 12], the multigranular rough set model [13, 14, 15], and more. In recent years, rough set theory has become a subject of ever-increasing interest among academics. Many knowledge discovery systems based on rough set theory are being explored, which has led to its successful application in machine learning, decision analysis, process control, pattern recognition, data mining, and other fields [16, 17, 18, 19, 20, 21, 22, 23, 24].
II A novel framework to accelerate rough sets
We propose and prove two theorems, which we call the stability of redundancy attribute (SR theorem) and stability of local redundancy attribute (SLR theorem), that can be generally used to accelerate almost all existing rough set algorithms.
Theorem 1 {The stability of redundancy (SR) attribute}. Given a decision system S = ⟨U, C, D⟩, let B ⊆ B′ ⊆ C and a ∈ C \ B′ be given attributes. If U/(B ∪ {a}) = U/B, where U/B denotes the partition of U into equivalence classes under B, i.e., a is a redundant attribute relative to B, then a is also a redundant attribute relative to B′.
Proof: Let IND(E) denote the indiscernibility relation induced by an attribute set E ⊆ C. We have
(1) IND(B) = {(x, y) ∈ U × U : ∀b ∈ B, b(x) = b(y)},
(2) IND({a}) = {(x, y) ∈ U × U : a(x) = a(y)},
(3) IND(B′) = {(x, y) ∈ U × U : ∀b ∈ B′, b(x) = b(y)},
(4) IND(B ∪ {a}) = IND(B) ∩ IND({a}),
(5) IND(B′ ∪ {a}) = IND(B′) ∩ IND({a}),
(6) U/(B ∪ {a}) = U/B ⟺ IND(B ∪ {a}) = IND(B),
(7) B ⊆ B′ ⟹ IND(B′) ⊆ IND(B).
Since U/(B ∪ {a}) = U/B, combined with formula (6),
(8) IND(B ∪ {a}) = IND(B),
(9) IND(B) ∩ IND({a}) = IND(B) (by formula (4)),
(10) IND(B) ⊆ IND({a}).
Combined with formulas (7) and (10),
(11) IND(B′) ⊆ IND({a}),
(12) IND(B′ ∪ {a}) = IND(B′) ∩ IND({a}) = IND(B′),
(13) U/(B′ ∪ {a}) = U/B′,
so a is also a redundant attribute relative to B′. ∎
That is, an attribute that is redundant relative to a child attribute set is also redundant relative to its parent attribute sets. In other words, the redundancy of an attribute is stable.
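The SR theorem can be checked computationally. The following minimal Python sketch is illustrative only (the names `partition` and `is_redundant` are our own, and attributes are modeled as column indices of a data matrix); it tests whether an attribute refines the partition induced by a given attribute set:

```python
from collections import defaultdict

def partition(data, attrs):
    """The partition U/attrs: group object indices by their value tuple on attrs."""
    blocks = defaultdict(list)
    for i, row in enumerate(data):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return sorted(blocks.values())

def is_redundant(data, B, a):
    """a is redundant relative to B if adding a does not refine U/B."""
    return partition(data, B) == partition(data, B + [a])

# Attribute 2 duplicates attribute 0, so it is redundant relative to {0}
# and, by the SR theorem, remains redundant relative to the superset {0, 1}.
data = [[0, 0, 0],
        [0, 1, 0],
        [1, 0, 1],
        [1, 1, 1]]
```

Here `is_redundant(data, [0], 2)` and `is_redundant(data, [0, 1], 2)` both hold, while `is_redundant(data, [1], 2)` does not, matching the one-way implication from child set to parent set.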
Inspired by Theorem 1, we define the active region and the non-active region.
Definition 1. {Active Region and Non-Active Region}. Let S = ⟨U, C, D⟩ be a decision system. Let B ⊆ C and let a ∈ C (but a ∉ B) be a given attribute. The equivalence classes that U is divided into under the attribute set B are U/B = {X_1, …, X_m}, and the equivalence classes that U is divided into under the attribute a are U/{a} = {Y_1, …, Y_s}. For X_i ∈ U/B, if some Y_j ∈ U/{a} with X_i ⊆ Y_j exists (i.e., a is constant on X_i), then we define the set NAR(a) = ∪{X_i : ∃Y_j, X_i ⊆ Y_j} as the non-active region of attribute a and the set AR(a) = U \ NAR(a) as the active region of the attribute a.
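Definition 1 translates directly into code. In the sketch below (our own naming; equivalence classes are lists of object indices), a block of U/B belongs to the non-active region of a exactly when a is constant on it:

```python
from collections import defaultdict

def split_regions(data, B, a):
    """Return (non_active, active) object sets of attribute a w.r.t. U/B."""
    blocks = defaultdict(list)
    for i, row in enumerate(data):
        blocks[tuple(row[b] for b in B)].append(i)
    non_active, active = set(), set()
    for block in blocks.values():
        if len({data[i][a] for i in block}) == 1:  # a cannot split this class
            non_active.update(block)
        else:                                      # a refines this class
            active.update(block)
    return non_active, active

# With B = {0}, attribute 1 splits the class {0, 1} but not the class {2, 3}.
data = [[0, 0], [0, 1], [1, 0], [1, 0]]
```

On this toy table, `split_regions(data, [0], 1)` returns `({2, 3}, {0, 1})`: objects 2 and 3 form the non-active region of attribute 1, objects 0 and 1 its active region.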
Theorem 2 {The stability of local redundancy (SLR) attribute}. Given a decision system S = ⟨U, C, D⟩, let B ⊆ C and let a ∈ C (but a ∉ B) be a given attribute. Let U/B = {X_1, …, X_m} be the equivalence classes that U is divided into under the attribute set B, let U/{a} = {Y_1, …, Y_s} be the equivalence classes that U is divided into under the attribute a, and let the active region of a be AR(a). Then we only need to pay attention to the active region of a to determine whether a is a non-redundant attribute relative to the attribute set B.
Proof: Let U = AR(a) ∪ NAR(a), where AR(a) is the active region of a and NAR(a) is the non-active region of a, and let [x]_E denote the equivalence class of x under an attribute set E.
Since NAR(a) is the non-active region of a, every class X_i ⊆ NAR(a) is contained in some Y_j ∈ U/{a}, so a is constant on X_i, and we have
(14) ∀x ∈ NAR(a): [x]_B ⊆ [x]_{a},
(15) ∀x ∈ NAR(a): [x]_{B ∪ {a}} = [x]_B ∩ [x]_{a} = [x]_B,
that is, adding a refines none of the equivalence classes inside NAR(a). Focusing only on the active region of a, we have
(16) U/(B ∪ {a}) = U/B ⟺ ∀x ∈ AR(a): [x]_{B ∪ {a}} = [x]_B,
where the condition on the right-hand side involves only objects of AR(a). Hence, focusing only on the active region of a can determine the redundancy of a relative to the attribute set B. ∎
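The practical consequence is that any change in the positive region caused by adding a is confined to the active region of a. A sketch under the same assumed representation (our own names; `d` is the index of the decision attribute):

```python
from collections import defaultdict

def positive_region(data, attrs, d):
    """POS_attrs(D): objects whose equivalence class under attrs is pure w.r.t. d."""
    blocks = defaultdict(list)
    for i, row in enumerate(data):
        blocks[tuple(row[a] for a in attrs)].append(i)
    pos = set()
    for block in blocks.values():
        if len({data[i][d] for i in block}) == 1:
            pos.update(block)
    return pos

def active_region(data, B, a):
    """Objects whose U/B class is split further by attribute a (Definition 1)."""
    blocks = defaultdict(list)
    for i, row in enumerate(data):
        blocks[tuple(row[b] for b in B)].append(i)
    active = set()
    for block in blocks.values():
        if len({data[i][a] for i in block}) > 1:
            active.update(block)
    return active

# Columns: condition attributes 0 and 1, decision attribute 2.
data = [[0, 0, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1]]
gained = positive_region(data, [0, 1], 2) - positive_region(data, [0], 2)
assert gained <= active_region(data, [0], 1)  # gain is confined to the active region
```

Only the objects in the active region of attribute 1 can enter the positive region when attribute 1 is added, so the positive-region computation never needs to revisit the non-active region.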
III Experiments
In this section, we implement our framework on the neighborhood rough set method and compare it with both the classic rough set method and the fast NRS method FARNeMF (Forward Attribute Reduction Based on Neighborhood Rough Sets and Fast Search) as our baselines.
We demonstrate the effectiveness of the proposed LRA framework on selected UCI benchmark data sets (http://archive.ics.uci.edu/ml/datasets.html), which are described in detail in Table I.
TABLE I: Description of the selected UCI data sets.

No. | Dataset   | Samples | Numerical | Categorical | Class
 1  | anneal    |   798   |     6     |     32      |   5
 2  | credit    |   690   |     6     |      9      |   2
 3  | german    |  1000   |     7     |     12      |   2
 4  | heart1    |   270   |     7     |      6      |   2
 5  | hepatitis |   155   |     6     |     13      |   2
 6  | horse     |   368   |     7     |     16      |   2
 7  | iono      |   351   |    34     |      0      |   2
 8  | wdbc      |   569   |    30     |      0      |   2
 9  | zoo       |   101   |     0     |     16      |   7
10  | mocap     | 78000   |    33     |      0      |   2
Figure 1 shows the comparative experimental results on the neighborhood rough set. The red solid line represents the neighborhood rough set algorithm (NRS), the black solid line represents the FARNeMF method accelerated only by Theorem 1, the green solid line represents the FSPA method applied to the neighborhood rough set, and the blue solid line represents the version (FARLRA) accelerated by the theorems regarding the stability of attributes. We fixed the neighborhood radius at 0.16 to eliminate it as an experimental variable and compare only the efficiency of the algorithms.
We can see that the blue solid curve lies below the other curves in most cases: the LRA framework yields a much more efficient algorithm than the baseline algorithms, whether it is applied to classic rough sets or neighborhood rough sets. This is because the LRA framework significantly reduces the number of objects and attributes considered in each iteration, and therefore the cost of computing the positive region of joint attributes. In addition, as the number of samples increases, the advantage of the LRA-based algorithms grows larger.
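Schematically, this pruning can be pictured in the classical (crisp) rough set setting as a greedy forward reduction in which the SR theorem lets us discard an attribute permanently the first time it fails to refine the current partition. The sketch below uses our own function names and data layout; the paper's FARLRA additionally restricts each evaluation to active regions of neighborhood granules:

```python
from collections import defaultdict

def blocks(data, attrs):
    """Equivalence classes U/attrs as sorted lists of object indices."""
    p = defaultdict(list)
    for i, row in enumerate(data):
        p[tuple(row[a] for a in attrs)].append(i)
    return sorted(p.values())

def pos_size(data, attrs, d):
    """Size of the positive region POS_attrs(D)."""
    return sum(len(b) for b in blocks(data, attrs)
               if len({data[i][d] for i in b}) == 1)

def forward_reduction(data, conds, d):
    """Greedy forward attribute reduction with SR pruning."""
    red, candidates = [], list(conds)
    while candidates:
        # SR pruning: an attribute that does not refine U/red now never will
        # for any superset of red, so it is removed from all later rounds.
        candidates = [a for a in candidates
                      if blocks(data, red + [a]) != blocks(data, red)]
        if not candidates:
            break
        best = max(candidates, key=lambda a: pos_size(data, red + [a], d))
        if pos_size(data, red + [best], d) <= pos_size(data, red, d):
            break  # no candidate enlarges the positive region
        red.append(best)
        candidates.remove(best)
    return red
```

On a toy table whose third condition attribute duplicates the first, that attribute is pruned in the second round instead of being re-evaluated against every later subset, which is exactly the saving the SR theorem licenses.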
IV Conclusion
This paper presents and proves two theorems regarding the stability of attributes in a decision system. Based on these two theorems, we propose the LRA framework for accelerating rough set algorithms. Theoretical analysis guarantees its high efficiency, and our experimental results demonstrate that the LRA framework considerably accelerates these rough set algorithms in most cases, ranging from several times faster up to ten times faster. In addition, the neighborhood rough set method accelerated with LRA overcomes a shortcoming of existing neighborhood rough set methods: it still functions when the neighborhood radius is large, where existing neighborhood rough set methods fail. In spite of these advances, some interesting issues remain that will be valuable to investigate in the future. To improve its efficiency on large-scale data, we will consider parallelization.
References
 [1] Zdzisław Pawlak. Rough sets. International Journal of Computer & Information Sciences, 11(5):341–356, 1982.
 [2] Yaojin Lin, Yuwen Li, Chenxi Wang, and Jinkun Chen. Attribute reduction for multi-label learning with fuzzy rough set. Knowledge-Based Systems, 152:51–61, 2018.
 [3] Changzhong Wang, Yang Huang, Mingwen Shao, and Xiaodong Fan. Fuzzy rough set-based attribute reduction using distance measures. Knowledge-Based Systems, 164:205–212, 2019.
 [4] Anna Maria Radzikowska and Etienne E. Kerre. A comparative study of fuzzy rough sets. Fuzzy Sets and Systems, 126(2):137–155, 2002.
 [5] Yiyu Yao. The superiority of three-way decisions in probabilistic rough set models. Information Sciences, 181(6):1080–1096, 2011.
 [6] Wojciech Ziarko. Probabilistic rough sets. In International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing, 2005.
 [7] Bo Wen Fang and Bao Qing Hu. Probabilistic graded rough set and double relative quantitative decision-theoretic rough set. International Journal of Approximate Reasoning, 2016.
 [8] William Zhu and Fei Yue Wang. Relationships among three types of covering rough sets. In IEEE International Conference on Granular Computing, 2006.
 [9] SangEon Han. Covering rough set structures for a locally finite covering approximation space. Information Sciences, 2018.
 [10] Qinghua Hu, Daren Yu, and Zongxia Xie. Numerical attribute reduction based on neighborhood granulation and rough approximation. Journal of Software, 2008.
 [11] Qinghua Hu, Daren Yu, Jinfu Liu, and Congxin Wu. Neighborhood rough set based heterogeneous feature subset selection. Information Sciences, 178(18):3577–3594, 2008.
 [12] Changzhong Wang, Yunpeng Shi, Xiaodong Fan, and Mingwen Shao. Attribute reduction based on k-nearest neighborhood rough sets. International Journal of Approximate Reasoning, 106, 2018.
 [13] R Raghavan and BK Tripathy. On some topological properties of multigranular rough sets. Advances in Applied Science Research, 2(3):536–543, 2011.
 [14] BK Tripathy and Anirban Mitra. On approximate equivalences of multigranular rough sets and approximate reasoning. International Journal of Information Technology and Computer Science, 10(10):103–113, 2013.
 [15] BK Tripathy and Urmi Bhambhani. Properties of multigranular rough sets on fuzzy approximation spaces and their application to rainfall prediction. International Journal of Intelligent Systems and applications, 10(11):76, 2018.
 [16] Manish Aggarwal. Rough information set and its applications in decision making. IEEE Transactions on Fuzzy Systems, 25(2):265–276, 2017.

 [17] Ramiro Saltos, Richard Weber, and Sebastián Maldonado. Dynamic rough-fuzzy support vector clustering. IEEE Transactions on Fuzzy Systems, 25(6):1508–1521, 2017.
 [18] Ivo Düntsch and Günther Gediga. Uncertainty measures of rough set prediction. Artificial Intelligence, 106(1):109–137, 1998.
 [19] JW Guan and David A. Bell. Rough computational methods for information systems. Artificial Intelligence, 105(1–2):77–103, 1998.
 [20] Günther Gediga and Ivo Düntsch. Rough approximation quality revisited. Artificial Intelligence, 2001.

 [21] Eric C. C. Tsang, Qinghua Hu, and Degang Chen. Feature and instance reduction for PNN classifiers based on fuzzy rough sets. International Journal of Machine Learning & Cybernetics, 7(1):1–11, 2016.
 [22] Noor Rehman, Abbas Ali, Muhammad Irfan Ali, and Choonkil Park. SDMGRS: Soft dominance based multi-granulation rough sets and their applications in conflict analysis problems. IEEE Access, 6:31399–31416, 2018.
 [23] Tutut Herawan, Mustafa Mat Deris, and Jemal H. Abawajy. A rough set approach for selecting clustering attribute. Knowledge-Based Systems, 23(3):220–231, 2010.
 [24] JingTao Yao and Nouman Azam. Web-based medical decision support systems for three-way medical decision making with game-theoretic rough sets. IEEE Transactions on Fuzzy Systems, 23(1):3–15, 2014.
 [25] Kenji Kira and Larry A. Rendell. The feature selection problem: Traditional methods and a new algorithm. In National Conference on Artificial Intelligence, San Jose, 1992.
 [26] Alexandros Kalousis, Julien Prados, and Melanie Hilario. Stability of feature selection algorithms: a study on high-dimensional spaces. Knowledge and Information Systems, 12(1):95–116, 2007.