
Cloud Service Provider Evaluation System using Fuzzy Rough Set Technique

10/17/2018
by Parwat Singh Anjana, et al.

Cloud Service Providers (CSPs) offer a wide variety of scalable, flexible, and cost-efficient services to cloud customers on an on-demand, pay-per-use basis. However, the vast diversity of available cloud services makes it challenging for users to identify and select the most suitable service. Moreover, users sometimes need to hire services from multiple CSPs, which introduces difficulties in managing interfaces, accounts, security, support, and Service Level Agreements (SLAs). To circumvent such problems, a Cloud Service Broker (CSB) that is aware of service offerings and of users' Quality of Service (QoS) requirements benefits both the CSPs and the users. In this work, we propose a Fuzzy Rough Set based Cloud Service Brokerage Architecture that ranks and selects services based on users' QoS requirements and then monitors service execution. We use the fuzzy rough set technique for dimension reduction and weighted Euclidean distance to rank the CSPs. To prioritize the user's QoS request, we use user-assigned weights, and we also incorporate system-assigned weights to capture the relative importance of QoS attributes. We compared the proposed ranking technique with an existing method based on system response time. The case study and experimental results show that the proposed approach is scalable, resilient, and produces better results with less search time.


1 Introduction

With the emergence of the Cloud, Cloud Service Providers (CSPs) offer a wide variety of flexible, scalable, on-demand, and pay-as-you-go online resources to cloud users [1]. Nowadays, the Cloud has become an indispensable component of many organizations; it requires the right approach for adoption, deployment, and preservation of resources [2]. It also introduces several challenges for cloud users in choosing the best CSP among a considerable number of CSPs [3]. Further, organizations may even need to rent different services from different CSPs, which leads to the challenges of operating multiple interfaces, accounts, support channels, and Service Level Agreements (SLAs) [2]. To facilitate the CSPs and to circumvent the difficulties an organization faces while ranking, selecting, and dealing with multiple CSPs, we need a Cloud Service Broker (CSB). According to Gartner [4], a CSB is a third party (an individual or an organization) between CSPs and users. It provides intermediation, aggregation, and arbitrage services to consult, mediate, and facilitate cloud computing solutions on behalf of users or business organizations [4]. An Intermediation service broker offers additional value-added services on top of existing services by appending specific capabilities, for example, identity, security, and access reporting and administration. To advance data transfer security and integration, an Aggregation service broker consolidates and integrates various services, for example, data integration and portability assurance between different CSPs and users. An Arbitrage service broker is similar to an aggregation broker, but the number of services being aggregated is not fixed; it acquires comprehensive services and presents them in smaller containers with greater cumulative value to the users, for example, a large block of bandwidth at wholesale rates. A CSB reduces processing costs, increases flexibility, provides access to various CSPs within one contract channel, and offers timely execution of services by removing acquisition limits through reporting and billing policies [5]. The proposed architecture is an Aggregation service broker.

Recent advancements in CSBs focus on developing efficient systems and strategies that help the user select and monitor cloud resources efficiently. Service execution and evaluation help in gathering historical information about services and are affected by dynamic, quantifiable, and non-quantifiable QoS attributes [6]. A quantifiable QoS attribute mainly refers to functional QoS requirements and can be measured efficiently without any ambiguity, e.g., vCPU speed (frequency), service response time, cost, etc. A non-quantifiable attribute primarily depends on non-functional QoS requirements and cannot be quantified easily; in general, it depends on user experience, e.g., accountability, security, feedback, support, etc. [1].

Although the importance of quantifiable and non-quantifiable attributes is recognized, existing approaches do not present an efficient technique to handle non-quantifiable QoS attributes objectively. Furthermore, existing techniques analyze user QoS requirements using quantified patterns [1][7]. When user requirements are imprecise or vague, we need a technique to handle them efficiently. A precise QoS attribute comprises only crisp (unambiguous) values, while an imprecise QoS attribute usually includes fuzzy values that cannot be quantified. We present a hybrid imprecision technique that covers both quantifiable and non-quantifiable QoS attributes. A service user submits the desired QoS requirements along with weights using a Graphical User Interface (GUI) during the CSP ranking phase.

In the proposed architecture, the information system consists of both quantifiable and non-quantifiable QoS attributes (to describe CSP service offerings and to formulate users' QoS requirements). We employ the Fuzzy Rough Set Technique (FRST) to deal with a hybrid information system with real-valued entries (providing a solution for real-valued conditional attributes with a fuzzy decision) and for search space reduction. For search space reduction, all reducts of the decision system are computed using FRST, and the best reduct is selected to generate the Reduced Decision System (RDS). The best reduct is the one that contains the maximum number of QoS attributes overlapping with the user QoS request. The proposed architecture is an Aggregation broker designed using FRST that offers the cloud user the facility to rank, select, and monitor services based on their desired QoS requirements.

The major contributions of this work include:

  • An effective cloud service broker to rank service providers based on the user QoS requirements, and to select and monitor the service execution of the selected service provider.

  • A principal focus on dealing with hybrid systems and on search space reduction using the fuzzy rough set technique, and on improving the accuracy of service provider selection by incorporating dynamic and network layer QoS parameters (which change with time and with the availability of resources).

  • Obliging the CSPs to provide satisfying services in order to compete in further assignments, by incorporating user experience (feedback) along with CSP performance monitoring during execution. By using these factors (past performance, user experience) in the service description, we enhance the correctness of the ranking procedure.

  • Introduction of user-assigned weights to let users prioritize their needs, along with system-assigned weights for QoS attributes, during the ranking procedure to improve ranking efficiency, subsequently allowing the user to choose the desired CSP from a ranked list.

The rest of the paper is structured as follows. Section 2 presents a detailed study of existing work and significant contributions to CSP selection (using rough and fuzzy sets). Section 3 provides an overview of the Fuzzy Rough Set and the need for the Fuzzy Rough Set approach. Section 4 introduces the proposed architecture and its basic blocks, along with the ranking algorithm. Section 5 gives a comprehensive case study on ranking compute cloud service providers (Infrastructure as a Service) along with results. Finally, Section 6 concludes with some future directions.

2 Related Work

With the advancement and increasing use of cloud services, researchers have analyzed CSPs for various types of applications. A wide range of discovery and selection techniques has been developed for the evaluation of CSPs based on the QoS requirements of the user. This section presents the work carried out by researchers on CSP ranking and selection using rough and fuzzy set theory based techniques, along with some other significant contributions that include essential specifications used in this paper.

To address the challenges of CSP discovery and selection, Sun et al. [8] presented a comprehensive survey of existing service selection approaches. They evaluated the CSP selection approaches based on five aspects and characterized them into four groups (Multi-Criteria Decision Making, Multi-Criteria Optimization, Logic-Based, and Other Approaches). The Multi-Criteria Decision-Making (MCDM) based approaches have been successfully applied to discover desired cloud services. They include Analytic Hierarchy Process (AHP), Multi-Attribute Utility Theory (MAUT), Outranking, and Simple Additive Weighting based techniques [8], which are extensions of Web services techniques. Godse et al. [9] presented an MCDM-based cloud service selection technique and performed a case study to demonstrate the significance of the methodology for SaaS selection. Garg et al. [1] introduced an AHP-based CSP ranking framework known as SMICloud. This framework enables users to compare and select CSPs using "Service Measurement Index (SMI)" attributes to satisfy users' QoS requirements. They computed key performance indicators defined in the "Cloud Service Measurement Index Consortium" [10] QoS standards to compare cloud services. However, they did not examine trustworthiness, user experience, or Network Layer QoS attributes for ranking and selection of CSPs.

Alhamad et al. [11] introduced a fuzzy set theory based technique for CSP selection using availability, security, usability, and scalability as QoS attributes. ur Rehman et al. [12] proposed an MCDM-based CSP selection technique for IaaS services and presented a case study on service provider selection among thirteen providers using five performance criteria. A Cloud-QoS Management Strategy (C-QoSMS) using the rough set technique was proposed by Ganghishetti et al. [13]. Specifically, they considered ranking of IaaS cloud services using SLA and QoS attributes. They also extended their work in Modified C-QoSMS [14] and presented a case study using Random and Round Robin algorithms. However, they did not examine non-quantifiable QoS attributes and considered only categorical information; hence they needed to discretize numerical values for the selection of cloud services. Qu et al. [7] introduced a fuzzy-hierarchy-based trust evaluation that uses an inference system to evaluate users' trust based on users' fuzzy QoS requirements and the progressive fulfillment of services, advancing CSP selection. With a case study and simulation, they illustrated the effectiveness and efficiency of their model. In [15], Patiniotakis et al. introduced PuLSaR, a preference-based cloud service recommender system, an MCDM-based optimization brokerage technique; to deal with the vagueness of imprecise QoS attributes, they used fuzzy set theory and conducted experiments to demonstrate the performance of the proposed procedure. In [16], Aruna et al. suggested a fuzzy set based ranking framework for a federated cloud IaaS infrastructure to rank the CSPs based on QoS attributes and SLAs. Their framework consists of the three phases of the AHP process, namely decomposition of the problem, priority judgment, and aggregation with simple rule inductions. The first contribution to Fuzzy Rough Set based CSP ranking was introduced by Anjana et al. [3]. They proposed a Fuzzy Rough Set Based Cloud Service Broker (FRSCB) architecture in which they categorized QoS attributes into different types, including network layer and non-quantifiable QoS attributes, and ranked the CSPs by means of a total score using Euclidean distance. They presented a case study with fifteen service providers along with fifteen SMI-based QoS attributes.

The proposed work, E-FRSCB, is an extension of the FRSCB architecture [3]. In our work, to deal with the dynamic, quantifiable, and non-quantifiable characteristics of Service Measurement Index based QoS attributes, we use a fuzzy rough set based hybrid technique. We assign different weights to attributes at different levels of the ranking procedure, incorporate quantifiable and non-quantifiable QoS attributes including network layer parameters, fetch real-time values of dynamic attributes, and monitor service execution to improve the next ranking assignment. We also simulated the behavior of our proposed work using CloudSim [17] and demonstrated the significance of E-FRSCB against FRSCB.

3 Fuzzy Rough Set Theory (FRST)

In traditional set theory, human reasoning is described using Boolean logic, i.e., true or false (0/1), which is not sufficient for reasoning efficiently. Therefore, there is a need for decision terms that take values ranging within the interval from 0 to 1 to represent human reasoning. The Fuzzy Set Theory (FST) proposed by Lotfi Zadeh [18] in 1965 can represent human reasoning in the form of a degree $d$ such that $0 \le d \le 1$. For example, in FST a person can be healthy to degree 70% ($d = 0.7$) or unhealthy to degree 30% ($d = 0.3$), while in traditional set theory a person can only be healthy (1/true) or unhealthy (0/false). The FST membership function is given by Equation 1:

$$A = \{ (x, \mu_A(x)) \mid x \in X \}, \qquad \mu_A : X \rightarrow [0, 1] \tag{1}$$

where $x \in X$, i.e., $x$ is an element of $X$, and $X$ is a set of elements.
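As a concrete illustration (a minimal sketch, not from the paper; the attribute name and thresholds are assumed), the following R function grades how "fast" a service response time is, returning a membership degree in [0, 1] instead of a crisp true/false answer:

```r
# A minimal sketch of an FST membership function mu_A(x) in [0, 1]:
# the degree to which a response time (sec) belongs to the fuzzy set "fast".
# The thresholds `full` and `none` are illustrative assumptions.
mu_fast <- function(x, full = 60, none = 100) {
  # x <= full -> degree 1 (fully "fast"); x >= none -> degree 0 (not "fast")
  pmin(pmax((none - x) / (none - full), 0), 1)
}
mu_fast(c(52, 83.5, 100))   # 1.0000, 0.4125, 0.0000
```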

Rough Set Theory (RST), proposed by Pawlak [19], is a mathematical way to deal with the vagueness (uncertainty) present in data. RST has proved its importance in Artificial Intelligence, mainly in expert systems, decision systems, knowledge discovery, acquisition, and many more areas. The fundamental advantage of using RST is that it does not require any prior knowledge or information about the data. The notion of a boundary region is used to describe the uncertainty associated with data in RST: a set is defined as a rough set when its boundary region is non-empty, and as a crisp set when its boundary region is empty [19].

One limitation of RST is that it can deal only with categorical or non-quantifiable attributes. In general, quantifiable and interval-fuzzy values also exist in real-world data, as explained in Section 1. RST fails when we have quantifiable data in our Information System (IS) (TABLE II). One plausible solution is to perform discretization so that quantifiable attributes become categorical and RST can be employed. Alternatively, we can use FST to deal directly with quantifiable characteristics in the IS. However, to deal with a hybrid information system (as shown in TABLE II), which consists of both categorical (non-quantifiable) and quantifiable attributes, we need a hybrid technique. Therefore, the Fuzzy Rough Set Theory (FRST) can be employed for CSP ranking and selection. FRST is a generalized form of crisp RST and FST, derived from the approximation of a fuzzy set in a crisp approximation space. FRST is helpful when we are dealing with a decision system in which the conditional attributes are real-valued [20]. It is primarily employed for search space reduction to improve classification accuracy in several respects, including storage, accuracy, and speed [20]. Search space reduction can be achieved by determining reducts of the system. A reduct is a minimal subset of attributes of the system that gives the same classification power as the entire set of attributes [21]. In a decision system with real-valued conditional attributes, this is done by finding a minimal set of conditional attributes that preserves the discernibility information with respect to the decision attribute. For a detailed treatment of discernibility matrix based computation of all reducts in FRST (which we use in this paper), readers can refer to [22], [20].
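For reference, the fuzzy lower and upper approximations that underlie FRST-based reducts are commonly defined as follows (the standard formulation from the fuzzy rough set literature [20], [22]; the notation here is assumed, not reproduced verbatim from this paper):

$$\mu_{\underline{R_P} X}(x) = \inf_{y \in U} \mathcal{I}\big(\mu_{R_P}(x, y),\ \mu_X(y)\big), \qquad \mu_{\overline{R_P} X}(x) = \sup_{y \in U} \mathcal{T}\big(\mu_{R_P}(x, y),\ \mu_X(y)\big)$$

where $U$ is the universe of objects (here, CSPs), $\mathcal{I}$ is a fuzzy implicator, $\mathcal{T}$ is a t-norm, $\mu_{R_P}(x, y)$ is the fuzzy similarity of objects $x$ and $y$ on the attribute subset $P$, and $\mu_X$ is the membership function of the fuzzy decision concept $X$. A reduct is then a minimal $P$ that preserves the lower approximations (the discernibility information) produced by the full attribute set.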

The proposed FRST-based hybrid technique deals with a hybrid real-valued information system and also provides a solution for real-valued conditional attributes with a fuzzy decision. It computes all possible reducts of the decision system (TABLE III) using the FRST all-reducts computation function presented in [22], and selects the best reduct using the Best Reduct Algorithm (Algorithm 3) for search space reduction. It incorporates user feedback and monitors cloud service execution using the Service Execution Monitor (Section 4.2.3) once a CSP selection is made by the user, to improve the accuracy of further CSP selections, a capability missing in most existing CSB techniques.

4 Proposed Brokerage Architecture

4.1 System Architecture

The proposed architecture attempts to help cloud users by providing cloud brokerage services such as ranking CSPs, selecting the best CSP, and, after selection, executing and monitoring the service. The proposed Extended Fuzzy Rough Set based Cloud Service Broker (E-FRSCB) brokerage architecture (Figure 1) consists of several basic software components classified into three layers: the Cloud User Layer, the Cloud Service Broker Layer, and the Resource Layer. The Cloud User layer includes the cloud users either requesting service provider ranking or using cloud services. The Cloud Service Broker (CSB) layer is the central component of the architecture, responsible for CSP ranking, selection, service execution, and monitoring (we focus on this layer only). It consists of the Cloud Service Repository (CSR), the Service Execution Monitor (SEM), and the Broker Resource Manager (BRM). Finally, the Resource layer includes the CSPs along with their service models, which is emulated using a simulator (CloudSim) [17]. A cloud user's brokering request at time 't', with its QoS values and attribute weights, is stored in an individual service definition document with the BRM. A detailed introduction to each component of E-FRSCB is given in Subsection 4.2.

Fig. 1: Extended Fuzzy Rough Set based Cloud Service Brokerage Architecture. (Square Bracket shows E-FRSCB algorithm steps)
Fig. 2: Service Ranking Procedure a High-level View (Information-flow)

Input: Definition Document, Feedback, CSPs Service Information
Output: Ranked List of Cloud Service Providers (CSPs), Service Execution

procedure e-frscb()
STEP [A] (i) E-FRSCB ← User-Request(QoS values, weights)
STEP [B] (i) Definition Document ← STEP [A] (i)
                 (ii) Ranked CSPs List ← RankingAlgorithm(DD, CF, SEM, WSC, CSR)
STEP [C] (i) User-ID ← STEP [B] (ii)
                (ii) User selects one CSP and sends the CSP-ID to E-FRSCB
                (iii) User-invoke(CS-API) ← Selected(CSP)
                (iv) E-FRSCB-SLA-User-ID ← establish-SLA(User-ID, CSP-ID)
STEP [D] (i) E-FRSCB-BRM-ResourceReservation(User-ID, Service API)
                 (ii) E-FRSCB-BRM-ServiceExe(User-ID, Service API, SLA-User-ID)
STEP [E] (i) Profile-CSP-ID ← User-ID-Feedback

BRM: Broker Resource Manager; CSR: Cloud Service Repository; DD: Definition Document; CF: User Feedback; SEM: Service Execution Monitor; SLA: Service Level Agreement; DS: Decision System; DQoS: Dynamic QoS Attributes; NLQoS: Network Layer QoS Attributes; CSP: Cloud Service Provider.

Algorithm 1 E-FRSCB Algorithm

4.2 E-FRSCB Components

4.2.1 Broker Resource Manager (BRM):

The overall functionality of the proposed system, from ranking CSPs to service execution monitoring, is controlled by the Broker Resource Manager (BRM). Figure 1 shows the BRM as the principal component of the E-FRSCB architecture. It has several sub-components: the Cloud Service Repository (CSR), the Ranking Component (RC), the Web Service Component (WSC), and the Service Execution Monitor (SEM). The profiles of service providers are accumulated in an information container known as the Cloud Service Repository. The Ranking Component takes the user QoS request, performs CSP ranking, and returns the ranked list as shown in Figure 2. On every new user request, the Web Service Component fetches the real-time values of dynamic QoS attributes, including network layer QoS parameters. A user submits a QoS request using the GUI; on submission, the E-FRSCB Algorithm (Algorithm 1) is invoked. Once a service selection is made by the user, the E-FRSCB algorithm contacts the selected service provider. An SLA is established between the user and the CSP via a negotiation process. Furthermore, resources are reserved and service execution is performed. Finally, the Service Execution Monitor monitors the execution of the services and maintains an Execution Monitoring Log to keep track of the CSPs' performance and to facilitate the next CSP ranking process.

(1) Definition Document: The Definition Document records the user QoS request along with the desired priority given by the user in the form of weights for each requested QoS attribute (TABLE IV). The Definition Document is utilized in the ranking and service execution monitoring phases.

(2) Ranking Component: Ranking begins with the invocation of the Ranking Algorithm (Algorithm 2) of the Ranking Component. It is the central part of the BRM and is composed of several phases: information gathering, analysis, search space reduction, and ranking. For these, it interacts with the Cloud Service Repository (CSR), the Service Execution Monitor (SEM), and the Web Service Component (WSC).

  • The Profiling Phase performs information gathering by sending requests to the CSR, SEM, and WSC. Once the profiling phase receives responses from all of these components, it generates the Information System (IS) based on the latest information from the monitoring phase and on the dynamic QoS attributes, including network layer QoS attributes (using the WSC to get the current values). At the end of this phase, it sends the IS along with the user QoS request to the next phase (the clustering phase) for further analysis.

  • In the Clustering Phase, to design the Decision System (DS), K-means [23] is applied to the IS, which yields clustering labels. Each object of the IS (a CSP) is associated with its clustering label to generate the DS; i.e., each cluster receives a distinct label, and the labels serve as the decision attribute of the DS. During this process, CSPs grouped under the same clustering label offer related services. The Elbow method [24] is employed to determine the optimal number of clusters, and the decision attribute is kept at the end of the DS (i.e., the last row), as shown in TABLE III (see the clustering sketch after this list).

  • In the Search Space Reduction Phase, the reduct concept of Fuzzy Rough Set Theory (FRST) is applied. All reducts of the DS (TABLE V) are computed using the all-reducts computation function presented in [22], and the best reduct is selected using the Best Reduct Algorithm (Algorithm 3). The best reduct is the one that contains the maximum number of QoS attributes overlapping with the user QoS request. If more than one reduct has the same number of user-requested QoS attributes, the Best Reduct Algorithm breaks the tie by selecting the reduct with the greater number of dynamic QoS attributes. Based on the selected reduct, the Reduced Decision System (RDS) is generated. Finally, the ranking of CSPs is performed based on the RDS and the Definition Document.

  • In the Ranking Phase, a Weighted Euclidean Distance (Score) is computed for each CSP using the attribute values of the CSP and the user request. A smaller Score represents a better CSP; therefore, ranking is done in increasing order of Score. Finally, the ranking algorithm terminates by sending the ranked list of CSPs to the BRM.
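The following R sketch illustrates the clustering phase described above (illustrative only: the data are random placeholders, not the paper's IS, and the chosen k is read off the elbow plot by hand):

```r
# Clustering Phase sketch: cluster CSPs (rows) on their QoS attributes and
# use the cluster label as the decision attribute of the Decision System.
set.seed(42)
is_matrix <- matrix(runif(10 * 5), nrow = 10)   # 10 CSPs x 5 QoS attributes
is_scaled <- scale(is_matrix)                   # normalize before K-means

# Elbow method: total within-cluster sum of squares for k = 1..6
wss <- sapply(1:6, function(k)
  kmeans(is_scaled, centers = k, nstart = 25)$tot.withinss)
plot(1:6, wss, type = "b", xlab = "k", ylab = "Total within-cluster SS")

k_opt <- 3                                      # chosen at the "elbow"
labels <- kmeans(is_scaled, centers = k_opt, nstart = 25)$cluster
ds <- cbind(is_matrix, decision = labels)       # Decision System (DS)
```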

Relative Weight Assignment for QoS Attributes: We need to consider the relative importance of QoS attributes for service provider comparison before calculating the respective Score of each CSP. For this, we assign weights to the QoS attributes. We employ System Assigned Weights and User Assigned Weights for the respective QoS attributes (user-requested QoS and Network Layer QoS in the RDS, TABLE VI). During weight assignment, at the first level the system assigns weights, while at the second level the customer can assign preferred weights to the QoS attributes. In this way, we incorporate both user preferences and the actual relative importance of quality attributes in the ranking process. Assigning weights at the lower level prioritizes the user request, while the higher level also gives importance to qualities that are not part of the user request but play a critical role in the ranking process (e.g., network QoS attributes).

System Assigned Weights: If a user-requested attribute is not present in the RDS, then that quality does not have enough potential with respect to the selected reduct and is not counted in the ranking process. All other attributes that are not part of the user request but are part of the RDS are assigned weights based on the Golden Section Search Method [25], where 66.67% (i.e., 0.67) of the weight is allocated to user-requested attributes and 33.33% (i.e., 0.33) to the other attributes of the RDS (e.g., Network Layer QoS attributes, monitored qualities). The sum of the weights is considered to be equal to 1. A critical remark: if the user does not wish to assign weights, then only one level of weight assignment is performed for the ranking process (a sketch of this weight split follows).
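A minimal sketch of this system-level weight assignment (the helper below is an assumption for illustration, not the authors' code): the 0.67/0.33 golden-section split is shared equally within each group, mirroring the per-attribute weights 0.095 and 0.082 of TABLE VI:

```r
# System Assigned Weights sketch: 67% of the total weight is split equally
# among user-requested attributes present in the RDS, 33% among the rest
# (e.g., Network Layer QoS); the weights sum to 1.
system_weights <- function(rds_attrs, user_attrs) {
  requested <- rds_attrs %in% user_attrs
  w <- numeric(length(rds_attrs))
  w[requested]  <- 0.67 / sum(requested)
  w[!requested] <- 0.33 / sum(!requested)
  setNames(w, rds_attrs)
}
# With 7 requested and 4 other attributes this yields ~0.095 and ~0.082 per
# attribute, matching TABLE VI.
```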

User Assigned Weights: The user-assigned weights indicate the relative importance of the QoS attributes sought by the users in their request. A user can use his or her own scale to assign different weights to the QoS attributes in the request [1] (as shown in TABLE I). This weight assignment is based on the suggestion given in the AHP technique [26]; here the sum of all weights need not equal one, unlike the system-assigned weights. However, for each first-level user-requested QoS attribute (e.g., Accountability, Agility, Cost, Assurance, Security, and Satisfaction), we consider the sum of the weights equal to 1, as shown in TABLE IV (a normalization sketch follows TABLE I). This weight assignment technique was originally proposed to assign different weights to QoS attributes in the AHP technique, and we adopted it for weight assignment in our E-FRSCB architecture.

| Relative Importance | Value |
|---|---|
| Equally important | 1 |
| Somewhat more important | 3 |
| Definitely more important | 5 |
| Much more important | 7 |
| Extremely more important | 9 |
TABLE I: Relative Importance of QoS Attributes
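As an illustration of the user-assigned weights (a sketch under the assumption that raw importance ratings on the TABLE I scale are simply normalized per first-level attribute; the ratings below are made up):

```r
# Normalize a user's 1-9 importance ratings so that the weights of one
# first-level QoS attribute's sub-attributes sum to 1 (cf. TABLE IV).
normalize_user_weights <- function(ratings) ratings / sum(ratings)
normalize_user_weights(c(vCPUs = 7, Speed = 3, Disk = 5, Memory = 1))
# -> 0.4375 0.1875 0.3125 0.0625 (close to the 0.4/0.2/0.3/0.1 of TABLE IV)
```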

(3) Web Service Component (WSC): The WSC fetches the current state of dynamic QoS attributes, including network layer QoS attributes, from CSPs using web services. The state of dynamic QoS attributes changes from one ranking process to the next, and we need real-time values to improve the accuracy of the ranking process. For this, we used various APIs (Cloud Harmony APIs [27]) to fetch the current state of the dynamic QoS attributes of the selected CSPs. The dynamic QoS attributes are specified in advance, so there is no need to determine them again during each ranking process.

4.2.2 Cloud Service Repository (CSR):

It is a repository service employed to store information about CSPs' services. CSPs need to register their QoS plans/services, capabilities, offerings, and initial SLAs in our Cloud Service Repository (CSR). In the proposed architecture, we assume that CSPs have registered their services in the CSR beforehand. From the CSR we can thus obtain the required information about the CSPs, their QoS service offerings, and other information to generate the initial Information System (IS), as shown in TABLE II. In the absence of a global CSR, the CSPs need to register their services in our local CSR registry. Figures 1 and 2 show our local CSR, as we used a local CSR for experimental purposes.

4.2.3 Service Execution Monitor (SEM):

The process of cloud monitoring includes dynamic tracking of the QoS attributes relevant to virtualized cloud resources, for example, vCPU, storage, network, etc. [28]. Configuring cloud computing resources is a genuinely challenging task involving various heterogeneous virtualized resources [29]. Furthermore, there can be massive demand for a specific cloud service at times, and because of changes in requirements and in the availability of various resources, including network parameters, performance may change, which directly and indirectly affects the user's service experience. Therefore, cloud monitoring is needed to keep track of resource operation under high demand, to detect fluctuations in performance, and to account for SLA breaches of specific QoS attributes [30]. Performance fluctuations may also occur due to failures and other runtime configurations. Cloud monitoring solutions (tools) can be classified into three types: generic solutions, cluster/grid solutions, and cloud-specific solutions. Cloud-specific solutions are designed explicitly to monitor cloud computing environments and are developed by academic researchers or commercial efforts [28]. Existing cloud-specific solutions (tools) include Amazon CloudWatch, Private Cloud Monitoring Systems (PCMONS) (open source [31]), Cloud Management System (CMS), Runtime Model for Cloud Monitoring (RMCM), cAdvisor (open source), and Flexible Automated Cloud Monitoring Slices (Flex-ACMS) [28].

In a CSB, cloud service monitoring can be used to improve CSP ranking and to build healthy competition between CSPs, encouraging them to provide excellent services to users in order to compete in the next CSP ranking process. In the proposed E-FRSCB, the execution monitor is implemented using PCMONS [31]. It could also be implemented as a third-party monitoring service offered by CSPs (e.g., Amazon CloudWatch, AppDynamics). The Service Execution Monitor provides the guarantee that the deployed cloud services perform at the expected level (performance monitoring) to satisfy the SLA established between the user and the CSPs. This component monitors the currently running services (offered by the CSPs selected by the respective users). This responsibility consists of the detection and collection of QoS attribute values. Data collected during monitoring is utilized for the next CSP ranking process and is sent to the BRM whenever the BRM issues a new request for data.

Input: Definition Document, User Feedback, QoS Attributes, Optimal Number of Clusters
Output: Ranked List of Cloud Service Providers (CSPs)

procedure ranking()
STEP [1] (i) RC ← definition_Document
                (ii) RC ← user_Feedback
STEP [2] (i) Fetch the latest QoS information from the different components
                    (a) qoS_Attribute ← request(CSR)
                    (b) dynamic_QoS_Values ← request(WSC)
                    (c) performance_QoS ← request(SEM)
                (ii) IS ← generate([1](ii), [2](i)(a), [2](i)(b), [2](i)(c))
STEP [3] (i) clustering_Labels ← kmeans(IS, optimal_Clusters, nstart)
                (ii) decision_Attribute ← clustering_Label(IS)
                (iii) DS ← generateDS(IS, decision_Attribute)
STEP [4] (i) all_Reducts ← FS.all.reducts.computation(DS)
                (ii) best_Reduct ← best.Reduct(DS, definition_Document)
                (iii) RDS ← generate(DS, best_Reduct)
STEP [5] (i) RDSN ← generate(RDS, NQoS)
                (ii) Weight Assignment
                    (a) W_RDSN ← assign(RDS QoS 67%, NQoS 33%)
                    (b) if (definition_Document(user.Weights))
                             W_RDSN ← assign(W_RDSN, user.Weights)
STEP [6] (i) score-CSP ← weighted_Euclidean_Distance(user_Request, W_RDSN)
                (ii) if CSPs' Scores are equal, give priority to the CSP with more Dynamic QoS
                (iii) ranked_List_CSPs ← ascending_order(score-CSP)
                (iv) return(ranked_List_CSPs)

RC: Ranking Component; IS: Information System; DS: Decision System; RDS: Reduced Decision System; NQoS: Network QoS Attributes; RDSN: Reduced Decision System + Network QoS Attributes; W_RDSN: Weighted RDSN;

Algorithm 2 Ranking Algorithm

Input: Decision System, All Reducts of Decision System, Definition Document
Output: Best Reduct

procedure best.Reduct()
STEP [A] find the number of QoS attributes overlapping with the User QoS Request for each Reduct
STEP [B] select all Reducts that have the maximum number of overlapping QoS attributes
STEP [C] if more than one such Reduct has the maximum number of overlapping QoS attributes
                  (i) count the number of Dynamic QoS attributes in each such Reduct
                  (ii) select the Reduct with the greater number of Dynamic QoS attributes
                  (iii) if more than one such Reduct has an equal number of Dynamic QoS attributes, select any one Reduct
STEP [D] return(selected.Reduct)
Algorithm 3 Best Reduct Algorithm
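A compact R rendering of Algorithm 3 (a sketch; reducts are represented simply as character vectors of attribute names, which is an assumption about the data layout, not the paper's implementation):

```r
# Best Reduct Algorithm sketch: pick the reduct with the most attributes
# overlapping the user QoS request; break ties by the number of dynamic
# QoS attributes; if still tied, take any one.
best_reduct <- function(reducts, user_qos, dynamic_qos) {
  overlap <- sapply(reducts, function(r) sum(r %in% user_qos))    # STEP [A]
  candidates <- reducts[overlap == max(overlap)]                  # STEP [B]
  if (length(candidates) > 1) {                                   # STEP [C]
    n_dyn <- sapply(candidates, function(r) sum(r %in% dynamic_qos))
    candidates <- candidates[n_dyn == max(n_dyn)]
  }
  candidates[[1]]                                                 # STEP [D]
}
```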

5 CASE STUDY: Compute Cloud Service Provider (IaaS CSP) Ranking based on User QoS Requirement

The ranking method of the E-FRSCB architecture given in Section 4 is analyzed in this section for Computation Services (IaaS) offered by CSPs, with the help of a case study. The method can also work with other types of services, such as SaaS and PaaS. RStudio [32] is used as the development IDE and R [33] for the implementation. A GUI, developed using the fgui package [34], is used to submit the user QoS request to the system. We referred to the Service Measurement Index (SMI) matrices defined by the Cloud Service Measurement Index Consortium (CSMIC) [10] for the evaluation of Compute Cloud Service Providers and for designing the initial Information System (IS) (TABLE II); for this we considered ten CSPs along with the QoS attributes shown in TABLE II (the design is scalable). We designed the IS for general-purpose Compute Cloud Service Provider (IaaS) services by considering first-level SMI matrices, which consist of third-level QoS attributes. We experimented with synthesized data; however, we tried to incorporate actual QoS values. We took data from different sources, including CSPs' websites, SLAs, the literature [1], and CloudSim [17], for most of the QoS attributes shown in TABLE II. For dynamic and Network Layer QoS attributes (Availability, Latency, Throughput, Service Downtime), data was collected using the Cloud Harmony API [27]. For performance-intensive QoS attributes (vCPU speed, Response Time), information from the Service Execution Monitor was used. However, in the initial IS design, the actual vCPU speeds offered by the CSPs are used, and Response Time values are assigned randomly. The categorical (non-quantifiable) QoS attribute values (such as security, accountability, user feedback, and support) are randomly assigned. System-assigned weights for QoS attributes are set using the Golden Section Search Method [25], while the user specifies the desired weights along with the QoS request (as explained in Section 4.2.1).

In the literature, there is no single globally accepted standard for quantifying the categorical (qualitative/non-quantifiable) QoS attributes used for CSP ranking. A categorical attribute represents an enumerated type of quality, where the expected value of the attribute is presented in the form of levels [35]. In the proposed architecture, for categorical attributes such as accountability, support, security, and user feedback, different levels ([1:10]) are introduced based on the work presented in [3]. We demonstrate here the quantification of security levels in the proposed technique (level quantification for support type, accountability, and user feedback is done similarly).

Security is one of the critical Cloud QoS metrics for ranking CSPs; its primary objective is to strengthen security mechanisms and mitigate threats. It helps develop user trust and improves operational performance. In E-FRSCB, security consists of ten different levels, with values randomly assigned for all the CSPs. It covers various performance indicators such as certificate provisioning, vCPU configuration, firewalls, access and log management policies, encryption/masking of data, etc. The assurance framework proposed by the European Union Network and Information Security Agency (ENISA) for the Cloud consists of 10, 69, and 130 first-, second-, and third-degree security indicators, respectively [36]. The Cloud Security Alliance introduced fundamental principles that service providers must follow to support users in security estimation [37]. We quantified security into ten levels [1:10] (easily extensible) based on the risk of security threats and on security performance indicators. In this quantization, level 1 represents the most straightforward security mechanisms, while level 10 represents the most complex and highest level of security mechanisms offered by CSPs. Each quantized security level consists of one or more security performance indices (drawn from the 130 third-degree security indicators). As a straightforward example, at level 1 only provider and user authentication is done; at level 2, in addition to level 1, multi-factor and fine-grained authentication and authorization are performed. Similarly, at further levels (i.e., levels 3-10), different firewall administration, privileged access controls, application identification, and other security mechanisms are added on top of the lower levels to achieve higher security. In the following, we present the proposed ranking method in multiple steps.

In the first step, whenever a new user submits a QoS request (TABLE IV), the Ranking Algorithm (Algorithm 2) is invoked. During this process, to generate the initial IS, it fetches the data from the CSR for the Compute Cloud Services (IaaS) offered by the CSPs, as shown in TABLE II. To fetch the actual values of the dynamic and network layer QoS attributes, the ranking algorithm sends a request to the WSC component to execute the web services.

| QoS Attribute | Unit | Google Compute Engine | Storm on Demand | Century Link | Amazon EC2 | Vultr Cloud | IBM Soft Layer | Linode | Digital Ocean | Microsoft Azure | Rackspace | Requirement | Type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Accountability | Levels (1-10) | 8 | 4 | 4 | 9 | 3 | 7 | 1 | 6 | 8 | 2 | Dynamic | Categorical |
| Agility (Capacity): Number of vCPUs | 4-core each | 16 | 8 | 1 | 8 | 6 | 4 | 3 | 8 | 8 | 8 | | Numerical |
| Agility (Capacity): vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 | | |
| Agility (Capacity): Disk | TB | 3 | 0.89 | 2 | 1 | 2 | 1 | 0.76 | 2 | 0.195 | 1 | | |
| Agility (Capacity): Memory | GB | 14.4 | 16 | 16 | 15 | 24 | 32 | 16 | 16 | 32 | 15 | | |
| Cost: vCPU | $/h | 0.56 | 0.48 | 0.56 | 0.39 | 0.44 | 0.69 | 0.48 | 0.23 | 0.38 | 0.51 | Static | |
| Cost: Data Transfer Bandwidth In | $/TB-m | 0 | 8 | 10 | 0 | 7 | 8 | 10 | 9 | 18.43 | 15.2 | | |
| Cost: Data Transfer Bandwidth Out | $/TB-m | 70.6 | 40 | 51.2 | 51.2 | 22.86 | 92.16 | 51.2 | 45.72 | 18.43 | 40 | | |
| Cost: Storage | $/TB-m | 40.96 | 122.8 | 40.96 | 21.5 | 102 | 36.86 | 80.86 | 40.96 | 40.96 | 36.86 | | |
| Assurance: Support | Levels (1-10) | 8 | 4 | 7 | 10 | 5 | 7 | 3 | 10 | 8 | 2 | | Categorical |
| Assurance: Availability | % | 99.99 | 100 | 99.97 | 99.99 | 99.89 | 99.97 | 100 | 99.99 | 100 | 99.95 | Dynamic | Numerical |
| Assurance: Security | Levels (1-10) | 9 | 8 | 6 | 10 | 8 | 10 | 6 | 8 | 10 | 2 | Static | Categorical |
| Satisfaction: Feedback | Levels (1-10) | 9 | 7 | 6 | 10 | 6 | 9 | 6 | 8 | 9 | 7 | Dynamic | |
| Performance: Response Time | sec | 83.5 | 90 | 97 | 52 | 90 | 100 | 97 | 85 | 76 | 57 | | Numerical |
| Performance: vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 | | |
| Network Layer QoS: Down Time | min | 1.02 | 0 | 1.98 | 0.51 | 9.05 | 7.5 | 0 | 1.53 | 2.83 | 2.83 | | |
| Network Layer QoS: Latency | ms | 31 | 57 | 31 | 29 | 28 | 29 | 28 | 28 | 32 | 32 | | |
| Network Layer QoS: Throughput | mb/s | 20.24 | 16.99 | 24.99 | 16.23 | 10.11 | 16.23 | 8.12 | 24.67 | 23.11 | 23.67 | | |
TABLE II: Information System (IS). Rows: attributes of the IS; columns: objects (CSPs) of the IS. Blank Requirement/Type cells correspond to cells merged across rows in the original table.

In the second step, K-means clustering is performed on the generated IS (the optimal number of clusters, i.e., the k value, is determined using the Elbow Method [24]). The generated clustering labels are used as the decision attribute to design the Decision System (DS), with the corresponding clustering label attached to each CSP. In the paper, for presentation clarity, we transposed the tables (IS, DS, Reduced Decision System, and Ranking Table) so that rows show the attributes and columns show the objects of the IS/DS; therefore, the clustering labels are kept in the last row, as shown in TABLE III.

| QoS Attribute | Unit | Google Compute Engine | Storm on Demand | Century Link | Amazon EC2 | Vultr Cloud | IBM Soft Layer | Linode | Digital Ocean | Microsoft Azure | Rackspace |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Accountability | Levels (1-10) | 8 | 4 | 4 | 9 | 3 | 7 | 1 | 6 | 8 | 2 |
| Agility (Capacity): Number of vCPUs | 4-core each | 16 | 8 | 1 | 8 | 6 | 4 | 3 | 8 | 8 | 8 |
| Agility (Capacity): vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 |
| Agility (Capacity): Disk | TB | 3 | 0.89 | 2 | 1 | 2 | 1 | 0.76 | 2 | 0.195 | 1 |
| Agility (Capacity): Memory | GB | 14.4 | 16 | 16 | 15 | 24 | 32 | 16 | 16 | 32 | 15 |
| Cost: vCPU | $/h | 0.56 | 0.48 | 0.56 | 0.39 | 0.44 | 0.69 | 0.48 | 0.23 | 0.38 | 0.51 |
| Cost: Data Transfer Bandwidth In | $/TB-m | 0 | 8 | 10 | 0 | 7 | 8 | 10 | 9 | 18.43 | 15.2 |
| Cost: Data Transfer Bandwidth Out | $/TB-m | 70.6 | 40 | 51.2 | 51.2 | 22.86 | 92.16 | 51.2 | 45.72 | 18.43 | 40 |
| Cost: Storage | $/TB-m | 40.96 | 122.8 | 40.96 | 21.5 | 102 | 36.86 | 80.86 | 40.96 | 40.96 | 36.86 |
| Assurance: Support | Levels (1-10) | 8 | 4 | 7 | 10 | 5 | 7 | 3 | 10 | 8 | 2 |
| Assurance: Availability | % | 99.99 | 100 | 99.97 | 99.99 | 99.89 | 99.97 | 100 | 99.99 | 100 | 99.95 |
| Assurance: Security | Levels (1-10) | 9 | 8 | 6 | 10 | 8 | 10 | 6 | 8 | 10 | 2 |
| Satisfaction: Feedback | Levels (1-10) | 9 | 7 | 6 | 10 | 6 | 9 | 6 | 8 | 9 | 7 |
| Performance: Response Time | sec | 83.5 | 90 | 97 | 52 | 90 | 100 | 97 | 85 | 76 | 57 |
| Performance: vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 |
| Network Layer QoS: Down Time | min | 1.02 | 0 | 1.98 | 0.51 | 9.05 | 7.5 | 0 | 1.53 | 2.83 | 2.83 |
| Network Layer QoS: Latency | ms | 31 | 57 | 31 | 29 | 28 | 29 | 28 | 28 | 32 | 32 |
| Network Layer QoS: Throughput | mb/s | 20.24 | 16.99 | 24.99 | 16.23 | 10.11 | 16.23 | 8.12 | 24.67 | 23.11 | 23.67 |
| Decision Attribute | — | 3 | 1 | 3 | 2 | 1 | 3 | 1 | 2 | 2 | 2 |
TABLE III: Decision System (DS)
| First-Level Attribute | QoS Attribute | Unit | Consumer QoS Request | Consumer QoS Weight |
|---|---|---|---|---|
| Accountability | Accountability | Levels (1-10) | 4 | 1.0 (Σ = 1) |
| Agility (Capacity) | Number of vCPUs | 4-core each | 4 | 0.4 (Σ = 1) |
| Agility (Capacity) | vCPU Speed | GHz | 3.6 | 0.2 |
| Agility (Capacity) | Disk | TB | 0.5 | 0.3 |
| Agility (Capacity) | Memory | GB | 16 | 0.1 |
| Cost | vCPU | $/h | 0.54 | 0.6 (Σ = 1) |
| Cost | Data Transfer Bandwidth In | $/TB-m | 10 | 0.1 |
| Cost | Data Transfer Bandwidth Out | $/TB-m | 51 | 0.1 |
| Cost | Storage | $/TB-m | 50 | 0.2 |
| Assurance | Support | Levels (1-10) | 8 | 0.3 (Σ = 1) |
| Assurance | Availability | % | 99.9 | 0.7 |
| Assurance | Security | Levels (1-10) | 10 | 1.0 (Σ = 1) |
| Satisfaction | Feedback | Levels (1-10) | 9 | 1.0 (Σ = 1) |
| Performance | Response Time | sec | — | — |
| Performance | vCPU Speed | GHz | — | — |
| Network Layer QoS | Down Time | min | — | — |
| Network Layer QoS | Latency | ms | — | — |
| Network Layer QoS | Throughput | mb/s | — | — |
TABLE IV: Definition Document: User QoS Request and Weights. "Σ = 1" marks that the weights within each group sum to 1; Performance and Network Layer QoS attributes are not part of the user request.

In the third step, once the DS is generated, search space reduction is achieved by employing the all-reducts concept of Fuzzy Rough Set Theory (via the R package RoughSets [38]). This function gives all possible reducts of the DS, out of which one reduct is selected. The number of reducts depends on the precision value; we also analyzed the impact of the precision value on the number of reducts, as shown in TABLE VIII and in Figures 4 and 5. As the precision value increases, the number of QoS attributes per reduct decreases, while the total number of reducts generally increases. For the experiment, we fixed the precision at its default value and obtained four reducts, as shown in TABLE V. Among the four reducts, the best reduct is selected using the Best Reduct Algorithm (Algorithm 3). Here all four reducts contain seven dynamic attributes, so any one of them can be selected. Based on the selected reduct (reduct 1), the Reduced Decision System (RDS) is generated, as shown in TABLE VI. During best reduct selection, Network Layer QoS attributes may not be present in the reduct, since the user request does not contain these attributes; the apparent reason is that users do not have control over Network Layer QoS, which depends on network traffic. Hence we add the Network Layer QoS attributes to the RDS if they are not already part of it. A critical observation is that the primary IS (TABLE II) contains the full set of quality attributes, while after search space reduction (based on the selected best reduct plus the Network QoS attributes added to the RDS) only the eleven QoS attributes of TABLE VI remain, a substantial reduction of the search space. At present, more than 500 CSPs offer more than a thousand different services (this statistic is based on the number of CSPs registered with Cloud Service Market [39] and Cloud Harmony [27]). So if the IS is vast (thousands of CSPs and a large number of QoS attributes), search space reduction will help significantly in ranking the CSPs.
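The following R sketch shows how this step can be driven from the RoughSets package [38] (an approximation of the authors' pipeline; the exact function arguments, especially the discernibility type, are assumptions based on the package manual and may differ from what was actually used):

```r
# All-reducts computation over the Decision System `ds` (last column =
# cluster label). Function names are from the RoughSets package [38];
# the argument choices here are illustrative assumptions.
library(RoughSets)

dt <- SF.asDecisionTable(ds, decision.attr = ncol(ds))
# Fuzzy-rough discernibility matrix over real-valued conditional attributes
disc <- BC.discernibility.mat.FRST(dt, type.discernibility = "standard.red")
all_reducts <- FS.all.reducts.computation(disc)   # all reducts of the DS
# Project the decision table onto one selected reduct (here, the first)
rds <- SF.applyDecTable(dt, all_reducts, control = list(indx.reduct = 1))
```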

Reduct 1: Accountability Level, vCPU Speed, Memory, vCPU Cost, In-Bound Cost, Out-Bound Cost, Availability, Response Time, Latency, Throughput
Reduct 2: vCPUs, vCPU Speed, Memory, vCPU Cost, In-Bound Cost, Out-Bound Cost, Availability, Response Time, Latency, Throughput
Reduct 3: Accountability Level, vCPU Speed, Memory, vCPU Cost, Security Level, Out-Bound Cost, Availability, Response Time, Latency, Throughput
Reduct 4: vCPUs, vCPU Speed, Memory, vCPU Cost, Security Level, Out-Bound Cost, Availability, Response Time, Latency, Throughput
(In the original, color marks the attribute kind: the cost attributes and Security Level are static; Latency and Throughput are Network QoS attributes, which are dynamic; the remaining attributes are dynamic.)
TABLE V: All Possible Reducts of the Decision System
| QoS Attribute | Unit | Google Compute Engine | Storm on Demand | Century Link | Amazon EC2 | Vultr Cloud | IBM Soft Layer | Linode | Digital Ocean | Microsoft Azure | Rackspace | System Weight (Level-1) | Consumer Weight (Level-2) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Accountability | Levels (1-10) | 8 | 4 | 4 | 9 | 3 | 7 | 1 | 6 | 8 | 2 | 0.095 | 1.0 |
| vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 | 0.095 | 0.2 |
| Memory | GB | 14.4 | 16 | 16 | 15 | 24 | 32 | 16 | 16 | 32 | 15 | 0.095 | 0.1 |
| vCPU Cost | $/h | 0.56 | 0.48 | 0.56 | 0.39 | 0.44 | 0.69 | 0.48 | 0.23 | 0.38 | 0.51 | 0.095 | 0.6 |
| Data Transfer Bandwidth In | $/TB-m | 0 | 8 | 10 | 0 | 7 | 8 | 10 | 9 | 18.43 | 15.2 | 0.095 | 0.1 |
| Data Transfer Bandwidth Out | $/TB-m | 70.6 | 40 | 51.2 | 51.2 | 22.86 | 92.16 | 51.2 | 45.72 | 18.43 | 40 | 0.095 | 0.1 |
| Availability | % | 99.99 | 100 | 99.97 | 99.99 | 99.89 | 99.97 | 100 | 99.99 | 100 | 99.95 | 0.095 | 0.1 |
| Response Time | sec | 83.5 | 90 | 97 | 52 | 90 | 100 | 97 | 85 | 76 | 57 | 0.082 | — |
| Down Time | min | 1.02 | 0 | 1.98 | 0.51 | 9.05 | 7.5 | 0 | 1.53 | 2.83 | 2.83 | 0.082 | — |
| Latency | ms | 31 | 57 | 31 | 29 | 28 | 29 | 28 | 28 | 32 | 32 | 0.082 | — |
| Throughput | mb/s | 20.24 | 16.99 | 24.99 | 16.23 | 10.11 | 16.23 | 8.12 | 24.67 | 23.11 | 23.67 | 0.082 | — |
| Decision Attribute | — | 3 | 1 | 3 | 2 | 1 | 3 | 1 | 2 | 2 | 2 | — | — |
TABLE VI: Reduced Decision System (RDS) with QoS Weights. System weights: 0.095 × 7 = 0.67 for the seven user-requested attributes; 0.082 × 4 = 0.33 for the remaining four.

In the fourth step, once the RDS is available, the two-level weight assignment to QoS attributes is performed (as explained in Section 4.2.1). After weight assignment, the Weighted Euclidean Distance (Score) of each CSP is computed using the user QoS request (TABLE IV) and the RDS (TABLE VI). Based on the Score, CSPs are sorted in ascending order, where a smaller score represents a better CSP with respect to the user QoS request (TABLE VII). Hence, the ranking of all the CSPs can be determined from the Scores (4.989, 5.404, 6.843, 7.372, 7.387, 7.916, 8.403, 8.450, 8.668, 8.814). The ranked list of CSPs is: Amazon EC2 > Rackspace > Microsoft Azure > Google Compute Engine > Digital Ocean > Vultr Cloud > Century Link > Linode > IBM Soft Layer > Storm on Demand. Thus, based on the user QoS request, Amazon EC2 gives the best service. In the next step, the ranked list of CSPs is sent to the user for service selection. Finally, the user selects a CSP and returns the selection to the system; the system communicates with the selected service provider for resource reservation and service execution. During service execution, we monitor the execution against the SLA to improve accuracy and use the monitoring data in designing the IS for the next ranking process (as shown in Figure 2).
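A minimal sketch of the Score computation (the normalization and the way the two weight levels are combined are assumptions made for illustration; the paper does not spell out the exact formula behind TABLE VII):

```r
# Weighted Euclidean distance between a CSP's normalized QoS vector and the
# user's normalized request; a smaller Score means a better match.
weighted_score <- function(csp, request, w_sys, w_user) {
  w <- w_sys * ifelse(is.na(w_user), 1, w_user)   # combine the two levels
  sqrt(sum(w * (csp - request)^2))
}
# Example with three attributes: one carrying a user weight, two system-only
weighted_score(csp     = c(0.86, 0.43, 2.39),
               request = c(0.40, 0.50, 0.00),
               w_sys   = c(0.095, 0.095, 0.082),
               w_user  = c(1.0, 0.2, NA))
```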

| QoS Attribute | Unit | Amazon EC2 | Rackspace | Microsoft Azure | Google Compute Engine | Digital Ocean | Vultr Cloud | Century Link | Linode | IBM Soft Layer | Storm on Demand |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Accountability | Levels (1-10) | 0.861 | 0.191 | 0.766 | 0.766 | 0.574 | 0.287 | 0.383 | 0.096 | 0.670 | 0.383 |
| vCPU Speed | GHz | 0.069 | 0.063 | 0.067 | 0.050 | 0.042 | 0.073 | 0.069 | 0.044 | 0.065 | 0.052 |
| Memory | GB | 0.431 | 0.431 | 0.919 | 0.413 | 0.459 | 0.689 | 0.459 | 0.459 | 0.919 | 0.459 |
| vCPU Cost | $/h | 0.022 | 0.029 | 0.022 | 0.032 | 0.013 | 0.025 | 0.032 | 0.028 | 0.040 | 0.028 |
| Data Transfer Bandwidth In | $/TB-m | 0.000 | 0.145 | 0.176 | 0.000 | 0.086 | 0.067 | 0.096 | 0.096 | 0.077 | 0.077 |
| Data Transfer Bandwidth Out | $/TB-m | 0.490 | 0.383 | 0.176 | 0.676 | 0.438 | 0.219 | 0.490 | 0.490 | 0.882 | 0.383 |
| Availability | % | 6.699 | 6.697 | 6.700 | 6.699 | 6.699 | 6.693 | 6.698 | 6.700 | 6.698 | 6.700 |
| Response Time | sec | 4.290 | 4.703 | 6.270 | 6.889 | 7.013 | 7.425 | 8.003 | 8.003 | 8.250 | 7.425 |
| Down Time | min | 0.042 | 0.233 | 0.233 | 0.084 | 0.126 | 0.747 | 0.163 | 0.000 | 0.619 | 0.000 |
| Latency | ms | 2.393 | 2.640 | 2.640 | 2.558 | 2.310 | 2.310 | 2.558 | 2.310 | 2.393 | 4.703 |
| Throughput | mb/s | 1.339 | 1.953 | 1.907 | 1.670 | 2.035 | 0.834 | 2.062 | 0.670 | 1.339 | 1.402 |
| Weighted Euclidean Distance (Score) | — | 4.989 | 5.404 | 6.843 | 7.372 | 7.387 | 7.916 | 8.403 | 8.450 | 8.668 | 8.814 |
| Service Provider Rank | — | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
TABLE VII: Normalized Weighted Reduced Decision System and Weighted Euclidean Distance
| Precision Value | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | 0.55 | 0.6 | 0.65 | 0.7 | 0.75 | 0.8 | 0.85 | 0.9 | 0.95 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Number of Reducts | 2 | 2 | 4 | 8 | 20 | 14 | 84 | 89 | 78 | 289 | 365 | 405 | 301 | 291 | 226 | 240 | 203 | 172 | 106 | 1 |
| Max-QoS Reduct: Dynamic Attributes | 6 | 6 | 8 | 7 | 7 | 6 | 5 | 5 | 4 | 6 | 4 | 5 | 5 | 4 | 3 | 4 | 4 | 3 | 2 | 12 |
| Max-QoS Reduct: Static Attributes | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 3 | 5 | 2 | 3 | 2 | 1 | 2 | 2 | 1 | 0 | 1 | 1 | 5 |
| Max-QoS Reduct: Total Attributes | 9 | 9 | 11 | 10 | 11 | 10 | 9 | 8 | 9 | 8 | 7 | 7 | 6 | 6 | 5 | 5 | 4 | 4 | 3 | 17 |
| Min-QoS Reduct: Dynamic Attributes | 6 | 6 | 8 | 6 | 6 | 5 | 4 | 4 | 3 | 2 | 2 | 2 | 1 | 2 | 1 | 1 | 0 | 1 | 1 | 12 |
| Min-QoS Reduct: Static Attributes | 3 | 3 | 3 | 4 | 3 | 2 | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 1 | 1 | 5 |
| Min-QoS Reduct: Total Attributes | 9 | 9 | 11 | 10 | 9 | 7 | 6 | 6 | 5 | 5 | 4 | 4 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 17 |
TABLE VIII: Change in the number of reducts and in the static and dynamic attributes of the maximum- and minimum-size QoS reducts with change in precision value.
Fig. 3: Cloud Service Provider QoS Values
Fig. 4: Precision Value vs Number of Reducts
Fig. 5: Precision Value vs Static and Dynamic Attributes in Maximum and Minimum Size QoS Reduct
Fig. 6: Number of QoS Request vs System Response Time
Fig. 7: Number of CSPs vs System Response Time

Two experiments were performed to compare the performance of the proposed technique with the existing fuzzy rough set based technique (FRSCB [3]). CloudSim [17] was used to simulate the Compute cloud service infrastructure. In the first experiment, the number of CSPs was fixed and an increasing number of user requests for CSP ranking was submitted. In the second experiment, the number of user QoS requests was fixed and the number of CSPs was varied. For the simulation, random QoS values were generated to design the IS based on the domain of each QoS attribute; QoS values and weights for the user requests were also assigned randomly. Furthermore, using the information available in the IS, service providers were created where each CSP consists of a time-shared Virtual Machine Scheduler and 50 computing hosts. During simulation, each task's execution time was selected randomly from a fixed interval (in ms). Finally, for both experimental setups, the response time (in seconds) of the proposed and FRSCB ranking techniques was recorded. The results (Figures 6, 7) show that the proposed method outperforms the FRSCB ranking technique, and that the proposed architecture scales both as the number of user QoS requests increases and as the number of CSPs increases.

6 Conclusions and Future Work

Because of the vast diversity of available cloud services, users face many challenges in discovering and managing their desired cloud services. As cloud service discovery and management involve various operational aspects, it is desirable to have a cloud service broker that can perform this task on behalf of the user. In this paper, we have presented an efficient Cloud Service Provider evaluation system using the Fuzzy Rough Set technique and presented a case study on IaaS service provider ranking. Our proposed architecture not only ranks cloud services but also monitors their execution. The significant contributions of the proposed brokerage system can be summarized as follows: 1) it evaluates a number of cloud service providers based on user QoS requirements and offers the user an opportunity to select the best cloud service from a ranked list of CSPs; 2) it prioritizes the user request by incorporating user-assigned weights and gives relative priority to non-user-requested QoS attributes via system-assigned weights; 3) its primary focus is on search space reduction, which minimizes the search time and improves the efficiency of the ranking procedure; 4) it uses a Weighted Euclidean Distance whose scores tend toward the ideal value (i.e., zero), giving an intuitive representation of the ranking; 5) it monitors service execution once a selection is made by the user, gathering historical data on the actual performance of the cloud services to improve the accuracy of the next service provider ranking process. The proposed approach can deal with hybrid information systems and is also scalable and efficient. This technique helps new users and brokerage-based organizations deal directly with a fuzzy information system given their rough QoS requirements for CSP ranking and selection. In the future, we plan to develop an online model that dynamically fetches all QoS attributes for service provider ranking and selection based on user QoS requirements.

References

  • [1] S. K. Garg, S. Versteeg, and R. Buyya, "A framework for ranking of cloud computing services," Future Generation Computer Systems, vol. 29, no. 4, pp. 1012–1023, 2013.
  • [2] D. Rane and A. Srivastava, "Cloud brokering architecture for dynamic placement of virtual machines," in Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, pp. 661–668, IEEE, 2015.
  • [3] P. S. Anjana, R. Wankar, and C. R. Rao, "Design of a cloud brokerage architecture using fuzzy rough set technique," in Multi-disciplinary Trends in Artificial Intelligence - 11th International Workshop, MIWAI 2017, Gadong, Brunei, November 20-22, 2017, Proceedings, pp. 54–68, 2017.
  • [4] "Cloud service broker," Accessed Jan 2018. https://www.techopedia.com/definition/26518/cloud-broker
  • [5] M. Guzek, A. Gniewek, P. Bouvry, J. Musial, and J. Blazewicz, "Cloud brokering: Current practices and upcoming challenges," IEEE Cloud Computing, vol. 2, no. 2, pp. 40–47, 2015.
  • [6] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.
  • [7] C. Qu and R. Buyya, "A cloud trust evaluation system using hierarchical fuzzy inference system for service selection," in Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 850–857, IEEE, 2014.
  • [8] L. Sun, H. Dong, F. K. Hussain, O. K. Hussain, and E. Chang, "Cloud service selection: State-of-the-art and future research directions," Journal of Network and Computer Applications, vol. 45, pp. 134–150, 2014.
  • [9] M. Godse and S. Mulik, "An approach for selecting software-as-a-service (SaaS) product," in Cloud Computing, 2009. CLOUD'09. IEEE International Conference on, pp. 155–158, IEEE, 2009.
  • [10] "Cloud Service Measurement Index Consortium (CSMIC), SMI framework," Accessed Jan 2018. http://csmic.org
  • [11] M. Alhamad, T. Dillon, and E. Chang, "A trust-evaluation metric for cloud applications," International Journal of Machine Learning and Computing, vol. 1, no. 4, p. 416, 2011.
  • [12] Z. ur Rehman, O. K. Hussain, and F. K. Hussain, "IaaS cloud selection using MCDM methods," in e-Business Engineering (ICEBE), 2012 IEEE Ninth International Conference on, pp. 246–251, IEEE, 2012.
  • [13] P. Ganghishetti and R. Wankar, "Quality of service design in clouds," CSI Communications, vol. 35, no. 2, pp. 12–15, 2011.
  • [14] P. Ganghishetti, R. Wankar, R. M. Almuttairi, and C. R. Rao, "Rough set based quality of service design for service provisioning in clouds," in Rough Sets and Knowledge Technology, pp. 268–273, Springer, 2011.
  • [15] I. Patiniotakis, Y. Verginadis, and G. Mentzas, "PuLSaR: preference-based cloud service selection for cloud service brokers," Journal of Internet Services and Applications, vol. 6, no. 1, p. 26, 2015.
  • [16] L. Aruna and M. Aramudhan, "Framework for ranking service providers of federated cloud architecture using fuzzy sets," International Journal of Technology, vol. 7, no. 4, pp. 643–653, 2016.
  • [17] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. De Rose, and R. Buyya, "CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms," Software: Practice and Experience, vol. 41, no. 1, pp. 23–50, 2011.
  • [18] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, no. 3, pp. 338–353, 1965.
  • [19] Z. Pawlak and A. Skowron, "Rudiments of rough sets," Information Sciences, vol. 177, no. 1, pp. 3–27, 2007.
  • [20] D. Chen, L. Zhang, S. Zhao, Q. Hu, and P. Zhu, "A novel algorithm for finding reducts with fuzzy rough sets," IEEE Transactions on Fuzzy Systems, vol. 20, no. 2, pp. 385–389, 2012.
  • [21] R. Jensen and Q. Shen, "Rough set-based feature selection," Rough Computing: Theories, Technologies, p. 70, 2008.
  • [22] "Package 'RoughSets'," Accessed Jan 2018. https://cran.r-project.org/web/packages/RoughSets/RoughSets.pdf
  • [23] A. K. Jain, "Data clustering: 50 years beyond k-means," Pattern Recognition Letters, vol. 31, no. 8, pp. 651–666, 2010.
  • [24] R. Tibshirani, G. Walther, and T. Hastie, "Estimating the number of clusters in a data set via the gap statistic," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 63, no. 2, pp. 411–423, 2001.
  • [25] B. Zhao and Y.-K. Tung, "Determination of optimal unit hydrographs by linear programming," Water Resources Management, vol. 8, no. 2, pp. 101–119, 1994.
  • [26] T. L. Saaty, Theory and Applications of the Analytic Network Process: Decision Making with Benefits, Opportunities, Costs, and Risks. RWS Publications, 2006.
  • [27] "Cloud Harmony," Accessed Jan 2018. http://cloudharmony.com
  • [28] G. Da Cunha Rodrigues, R. N. Calheiros, V. T. Guimaraes, G. L. d. Santos, M. B. De Carvalho, L. Z. Granville, L. M. R. Tarouco, and R. Buyya, "Monitoring of cloud computing environments: concepts, solutions, trends, and future directions," in Proceedings of the 31st Annual ACM Symposium on Applied Computing, pp. 378–383, ACM, 2016.
  • [29] K. Alhamazani, R. Ranjan, K. Mitra, F. Rabhi, P. P. Jayaraman, S. U. Khan, A. Guabtni, and V. Bhatnagar, "An overview of the commercial cloud monitoring tools: research dimensions, design issues, and state-of-the-art," Computing, vol. 97, no. 4, pp. 357–377, 2015.
  • [30] H. J. Syed, A. Gani, R. W. Ahmad, M. K. Khan, and A. I. A. Ahmed, "Cloud monitoring: A review, taxonomy, and open research issues," Journal of Network and Computer Applications, 2017.
  • [31] "Private Cloud Monitoring Systems (PCMONS)," Accessed Jan 2018. https://github.com/pedrovitti/pcmons
  • [32] "R development environment," Accessed Jan 2018. https://www.rstudio.com/
  • [33] R Core Team, "R language definition," Vienna, Austria: R Foundation for Statistical Computing, 2000.
  • [34] "CRAN package fgui: GUI interface," Accessed Jan 2018. https://cran.r-project.org/web/packages/fgui/index.html
  • [35] R. N. Gould and C. N. Ryan, Introductory Statistics: Exploring the World Through Data. Pearson, 2015.
  • [36] D. Catteddu, G. Hogben, et al., "Cloud computing information assurance framework," European Network and Information Security Agency (ENISA), 2009.
  • [37] "Cloud Security Alliance (CSA): Cloud Control Matrix (CCM)," Accessed Jan 2018. https://cloudsecurityalliance.org/group/cloud-controls-matrix/
  • [38] "CRAN package RoughSets," Online; Accessed July 2017. https://CRAN.R-project.org/package=RoughSets
  • [39] "Cloud Service Market: A comprehensive overview of cloud computing services," Accessed Jan 2018. http://www.cloudservicemarket.info