Behavior-aware Service Access Control Mechanism using Security Policy Monitoring for SOA Systems

08/23/2019 · Yunfei Meng et al.

Service-oriented architecture (SOA) systems are widely used in many business areas today. However, an SOA system is loosely coupled from multiple services and often lacks adequate security protection mechanisms, so it is vulnerable to unauthorized access and information theft. Existing access control mechanisms can only prevent unauthorized users from accessing the system; they cannot prevent authorized users (insiders) from attacking it. To address this problem, we propose a behavior-aware service access control mechanism that uses security policy monitoring for SOA systems. In our mechanism, a monitor program supervises a consumer's behaviors at run time. Using the trustful behavior model (TBM), if the monitor finds that a consumer's behavior constitutes misuse, it denies the request. If it finds that the behavior is malicious, it terminates the consumer's access authorizations early in the current session or adds the consumer to a blacklist, after which the consumer can no longer access the system. To evaluate the feasibility of the proposed mechanism, we implement a prototype system. The results show that our mechanism can effectively monitor consumers' behaviors and respond effectively when malicious behaviors occur at run time. Moreover, as the number of rules in the TBM increases, our mechanism continues to work well.


1 Introduction

In the cloud computing era, abundant free cloud services and low-cost computing resources are driving more and more enterprises to migrate their business processes into the cloud to maximize their profits. As an important enabling technique, service-oriented architecture (SOA) allows service consumers to build flexible, customized systems that integrate multiple component services into one large service composition SOA1 . Because it can interconnect heterogeneous systems at low cost, SOA has been widely adopted in different business areas, such as healthcare, education, government and e-commerce SOA2 . Especially with the rapid development of the Internet of Things (IoT), SOA-based IoT has been proposed and has attracted growing attention in recent years IOT1 IOT2 . However, an SOA system is loosely coupled from multiple cloud services or device services deployed on edge servers IOT1 , lacks adequate security protection mechanisms, and can therefore easily be attacked through unauthorized access and information theft. In an SOA system, service requests may come from service consumers, intermediate services or both. Consumers and intermediate services may be distributed across different trust domains, so service invocations across trust domains raise new challenges for service access authorization management. For service consumers, the services they invoke may need to access other services to fulfill additional requirements. In this case, the sensitive data carried in the request may be forwarded to other distrusted services, which may lead to information leakage for service consumers SOA3 .

Access control mechanisms have been introduced to address illegal and unauthorized access in SOA systems. However, traditional access control mechanisms, such as identity-based access control (IBAC) SOA4 and role-based access control (RBAC) SOA5 , are vulnerable to insider attacks, i.e., insiders misusing their privileges for malicious purposes insider1 . That is, traditional access control can only prevent unauthorized users from accessing the system; lacking an effective user behavior monitoring mechanism, it cannot prevent authorized users (insiders) from attacking the system, e.g., by accessing services at such high frequency that the entire system is paralyzed. To address these problems, runtime monitoring and assertion checking need to be introduced to enforce the security of SOA systems SOA6 . Runtime execution monitoring (REM) is a class of methods for tracking the temporal behavior of an underlying application; REM methods range from simple print-statement logging to run-time tracking of complete formal requirements for verification purposes SOA7 .

Based on these insights, we propose a behavior-aware service access control mechanism using security policy monitoring for SOA systems. The mechanism has two goals. The first is to implement fine-grained service access control by defining a service releasing policy. The service releasing policy is defined by the service provider and strictly regulates which service in the SOA system may be accessed by which service consumer and for what purpose. The second is to implement runtime monitoring using the trustful behavior model (TBM) and a behavior-aware access control algorithm. Guided by the TBM, the behavior-aware access control algorithm running in the monitor supervises consumers' behaviors at run time. When a consumer's behavior is judged to be malicious, the monitor automatically adjusts the consumer's access authorizations and prevents the consumer from continuing to use the system.

In summary, this paper makes the following contributions: (1) We propose a system model with which an SOA system can be formalized. (2) Based on the system model, we propose a behavior-aware service access control mechanism using security policy monitoring, which consists of the service releasing model (SRM), the trustful behavior model (TBM), behavior risk measurement models and a behavior-aware access control algorithm. (3) We implement a proof-of-concept experimental system with which we evaluate the effectiveness and performance of the proposed mechanism.

The remainder of the paper is structured as follows. Section 2 presents the system model. Section 3 is the main body of the paper, where we present the proposed mechanism in detail. Section 4 implements the mechanism and evaluates its effectiveness and performance. Section 5 reviews related work on runtime monitoring for SOA systems. Finally, Section 6 concludes the paper and outlines future directions.

2 System Model

In this paper, we only consider SOA systems with a single trust domain, i.e., if a service provider is the owner of the SOA system, the service provider can define the security policy for the entire system. In the following, we first establish formal definitions for modeling the service provider, the service consumer and the SOA system. Then, based on these definitions, we use an example to illustrate how to establish the SOA system model in detail.

Definition 1

(Service Provider): The service provider is the owner of the SOA system. It is defined as the pair SP = (id_p, k_p), where id_p is the identity of the service provider in the SOA system and k_p is the private key of the service provider in the SOA system.

Definition 2

(Service Consumer): The service consumer is a user of the SOA system. It is defined as the pair SC = (id_c, k_c), where id_c is the identity of the service consumer in the SOA system and k_c is the private key of the service consumer in the SOA system.

Definition 3

(SOA System): An SOA system is a service composition within a single trust domain. It is defined as the tuple SOA = (S, T, s0, U, f), where: S = S_sys ∪ S_sen is a finite set of services, with S_sys a finite set of system services and S_sen a finite set of sensitive services. System services can be accessed by any consumer, while sensitive services hold massive sensitive data or privacy; they are protected by the system and can only be accessed by authorized users defined directly by the service provider. T is a finite set of directed transitions; if s_i, s_j ∈ S, then the pair t = (s_i, s_j) is a transition from service s_i to service s_j. s0 is the initial service of the SOA system. U is a finite set of uniform resource identifiers (URIs) of service interfaces. f is a function from S to U; if s ∈ S and u ∈ U, then f(s) = u denotes that the URI of s's interface is u.

Example 1

The electronic medical records sharing system (EMRSS) is a typical SOA system used in healthcare. By means of the EMRSS, a medical facility can manage its patients' electronic medical records (EMR) and share these records with medical experts for disease treatment. We assume that Bob is the service provider of an EMRSS. The EMRSS provides two categories of sensitive services, i.e., electrocardiogram (ECG) services and X-Ray image services. The ECG services can be further divided into an ECG online exploring service and an ECG updating service, while the X-Ray image service only provides online image exploring. If we let x0 be the X-Ray image online exploring service, x1 the ECG online exploring service and x2 the ECG updating service, then the EMRSS can be modelled as the SOA system shown in Figure 1. Here S_sys = {s0, s1}, S_sen = {x0, x1, x2}, T = {(s0, s1), (s0, x0), (s1, x1), (s1, x2)}, the initial service is s0, U = {/SBA/0.jsp, /SBA/X0.jsp, /SBA/1.jsp, /SBA/X1.jsp, /SBA/X2.jsp}, and f(s0) = /SBA/0.jsp, f(x0) = /SBA/X0.jsp, f(s1) = /SBA/1.jsp, f(x1) = /SBA/X1.jsp, f(x2) = /SBA/X2.jsp.
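For illustration only, the following Java sketch shows one possible in-memory representation of the system model of Definition 3, instantiated with the EMRSS of Example 1. The class and field names (SoaSystemModel, Service, Transition, interfaceUri) are our own assumptions, not the paper's implementation.

```java
// Minimal sketch of Definition 3: services, transitions, an initial service,
// and the function f mapping each service to its interface URI.
import java.util.*;

public class SoaSystemModel {
    record Service(String name, boolean sensitive) {}
    record Transition(Service from, Service to) {}

    final Set<Service> services = new LinkedHashSet<>();
    final Set<Transition> transitions = new LinkedHashSet<>();
    final Map<Service, String> interfaceUri = new LinkedHashMap<>(); // the function f: S -> U
    Service initialService;

    Service addService(String name, boolean sensitive, String uri) {
        Service s = new Service(name, sensitive);
        services.add(s);
        interfaceUri.put(s, uri);
        return s;
    }

    void addTransition(Service from, Service to) {
        transitions.add(new Transition(from, to));
    }

    public static void main(String[] args) {
        SoaSystemModel emrss = new SoaSystemModel();
        // system services
        Service s0 = emrss.addService("s0", false, "/SBA/0.jsp");   // initial service
        Service s1 = emrss.addService("s1", false, "/SBA/1.jsp");
        // sensitive services of Example 1
        Service xray      = emrss.addService("X-Ray exploring", true, "/SBA/X0.jsp");
        Service ecgView   = emrss.addService("ECG exploring",   true, "/SBA/X1.jsp");
        Service ecgUpdate = emrss.addService("ECG updating",    true, "/SBA/X2.jsp");
        emrss.initialService = s0;
        // directed transitions of the composition
        emrss.addTransition(s0, s1);
        emrss.addTransition(s0, xray);
        emrss.addTransition(s1, ecgView);
        emrss.addTransition(s1, ecgUpdate);
        System.out.println(emrss.services.size() + " services, " + emrss.transitions.size() + " transitions");
    }
}
```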

3 Behavior-aware Service Access Control Mechanism

Based on the SOA system model above, we propose a behavior-aware service access control mechanism using security policy monitoring. In some sense, this mechanism can be thought of as a combination of fine-grained service access control and runtime monitoring. In the rest of this section, we first describe the mechanism as a whole. Then we introduce two important formal models, the service releasing model and the trustful behavior model. Next, we present a group of quantitative risk measurement models for evaluating users' behavior risks in the SOA system. Finally, we give a detailed algorithmic description of the behavior-aware access control algorithm.

3.1 Mechanism Description

The proposed mechanism is described by the flow chart shown in Figure 2. The mechanism has two goals: the first is to implement fine-grained service access control by defining the service releasing policy; the second is to implement runtime monitoring using the trustful behavior model and the behavior-aware access control algorithm.

Accordingly, we design an architecture based on check points and a monitor. A check point is a section of script code embedded in each service interface of the SOA system; it is responsible for collecting the service consumer (SC)'s behavior elements, such as identity, location and IP address. The monitor is an independent supervising program responsible for monitoring SCs' behaviors at run time.

First, the service provider (SP) defines a service releasing model (SRM), which is stored in the monitor and strictly regulates which sensitive service in the system may be released to which SC and for what purpose. Here, releasing a sensitive service means that the sensitive service may be accessed or operated by the SC. The SRM is then converted into a group of corresponding trustful behavior models (TBMs), one per SC. Each TBM is a set of trustful behavior rules that serve as the criterion of the behavior-aware access control algorithm. After that, whenever a service is invoked by an SC, the monitor is activated automatically. The SC's behavior elements collected by the check point are sent to the monitor for analysis. If the SC's access request does not satisfy the TBM, the monitor denies the request because the current behavior is not permitted. If the monitor finds that the SC's behaviors have malicious intentions, i.e., the new behavior risk calculated by the risk measurement models exceeds its risk threshold, the monitor terminates the SC's access authorizations early in the current session to protect the system. In this case, the SC cannot access any services in the current session, but may still access the system in a new session.
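For concreteness, the following Java sketch shows the kind of data a check point might collect and the decisions the monitor can return. The type names (BehaviorElements, Decision) and fields are illustrative assumptions, not the paper's implementation.

```java
// Sketch of the data flowing from check points to the monitor, and the
// monitor's possible responses as described in the mechanism overview.
public class MonitoringTypes {
    /** Behavior elements gathered by the check-point script on every request. */
    public record BehaviorElements(String consumerId,
                                   String currentServiceUri,
                                   String targetServiceUri,
                                   String clientIp,
                                   long timestampMillis) {}

    /** Possible outcomes of one monitoring step. */
    public enum Decision {
        PERMIT,             // behavior matches the consumer's TBM and risks are acceptable
        DENY_REQUEST,       // behavior not covered by the TBM: reject this request only
        TERMINATE_SESSION,  // a risk threshold was exceeded: revoke authorizations for this session
        BLACKLIST           // malicious behavior confirmed: block the consumer permanently
    }

    public static void main(String[] args) {
        BehaviorElements e = new BehaviorElements("Mike", "/SBA/0.jsp", "/SBA/X0.jsp",
                                                  "10.0.0.7", System.currentTimeMillis());
        System.out.println(e + " -> " + Decision.PERMIT);
    }
}
```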

3.2 Service Releasing Policy

Leveraging the P-Spec policy specification language we proposed in meng , we can formalize the SP's service releasing policy as a formal P-Spec policy model. By means of the SRM, the SP can precisely define which sensitive service may be released to which SC and for what purpose. The relevant definitions are as follows.

Definition 4

(Service Releasing Rule): Given a system SOA with sensitive service set S_sen, each service releasing rule is a pair consisting of a keyword part and a value part. The keyword part lists a service consumer, the SC's private key in the system, a specific purpose, and a sensitive service that may be released to that consumer for that purpose; the value part assigns to each keyword its corresponding variable, where the released service belongs to S_sen. The rule is defined as follows:

(1)

Here the symbol = denotes semantic equality, ∨ denotes logical or, and → denotes logical implication. Semantically, each rule states that if a service consumer, together with its private key and purpose, passes authentication, then the consumer is granted the corresponding authorizations to access or operate the released sensitive services.

Definition 5

(Service Releasing Model): The service releasing model (SRM) is defined as a set of service releasing rules.

Example 2

Following the setting of Example 1, we assume that Dr. Mike is a cardiologist who requires patients' electrocardiograms (ECG) and chest X-ray images to treat cardiopathy, and Dr. Mary is an internist who requires patients' chest X-ray images to treat influenza. Both Mike and Mary are service consumers. As the service provider, Bob has authorized Dr. Mike and Dr. Mary to access the EMRSS. Because all ECG and X-Ray images are sensitive patient privacy, Mike and Mary may not download or update any data; they may only explore it online. Hence, the SRM can be defined as follows. Let the consumers be Mike, with his private key in the system and the purpose cardiopathy, and Mary, with her private key in the system and the purpose influenza, and let the released services be the X-Ray exploring service and the ECG exploring service. The resulting SRM is the set of rules shown in Table 1.

Table 1: The established SRM.
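As an illustration only, the SRM of Table 1 could be represented as a list of rule objects. The record layout mirrors the keywords of Definition 4 (consumer, private key, purpose, released service), but the class and field names, and the split into three separate rules, are our assumptions.

```java
// Sketch of the releasing rules of Example 2: Mike may explore X-Ray and ECG
// data for cardiopathy, Mary may explore X-Ray data for influenza.
import java.util.List;

public class ReleasingPolicyExample {
    record ReleasingRule(String consumer, String consumerKey, String purpose, String releasedService) {}

    public static void main(String[] args) {
        List<ReleasingRule> srm = List.of(
            new ReleasingRule("Mike", "<Mike's private key>", "cardiopathy", "X-Ray exploring service"),
            new ReleasingRule("Mike", "<Mike's private key>", "cardiopathy", "ECG exploring service"),
            new ReleasingRule("Mary", "<Mary's private key>", "influenza",   "X-Ray exploring service"));
        srm.forEach(System.out::println);
    }
}
```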

3.3 Trustful Behavior Model

In some sense, our SRM is similar to existing role-based access control (RBAC) or attribute-based access control (ABAC): RBAC and ABAC can also authenticate an SC's legitimacy and grant the SC the relevant authorizations. However, they cannot monitor an authenticated SC's behaviors at run time. Hence, we introduce the trustful behavior model (TBM), which is converted from the SRM and helps the monitor supervise authenticated users' behaviors at run time. The relevant definitions are as follows.

Definition 6

(Trustful Behavior Rule): Each trustful behavior rule is defined as a 3-tuple (id, u_c, u_t), where id is the identification of an SC, u_c is the URI of the current service interface, and u_t is the URI of the target service interface. Semantically, each rule states that if an SC whose identity equals id wants to invoke the target service at u_t from the current service at u_c, then the behavior is believed trustful by the SP; otherwise it is distrustful.

Definition 7

(Model Conversion): Given a system model SOA and a service releasing rule whose released service is s ∈ S_sen, suppose there exists a route ρ = t_1, t_2, …, t_m in SOA whose initial service is s0 and whose end service is s. Then each pair consisting of the consumer's identity id and a transition t on ρ can be converted into a group of corresponding trustful behavior rules, and the conversion procedure is defined as the following function:

conv(id, t) = {(id, u_i, u_j), (id, u_i, u_i), (id, u_j, u_j)}   (2)

where t = (s_i, s_j), u_i = f(s_i) is the URI of s_i's service interface, and u_j = f(s_j) is the URI of s_j's service interface. Semantically, equation (2) states that the pair of id and t is converted into three trustful behavior rules: the first represents the behavior of invoking service s_j from service s_i, the second represents refreshing s_i's client, and the third represents refreshing s_j's client.
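A minimal Java sketch of this conversion step is given below: one released transition yields the three trustful behavior rules just described. The function and type names (convert, TrustfulRule) are assumptions made for illustration.

```java
// Definition 7 / equation (2): a transition (si, sj) released to a consumer
// yields three rules -- invoke sj from si, refresh si's client, refresh sj's client.
import java.util.List;

public class RuleConversion {
    record TrustfulRule(String consumerId, String currentUri, String targetUri) {}

    static List<TrustfulRule> convert(String consumerId, String uriOfSi, String uriOfSj) {
        return List.of(
            new TrustfulRule(consumerId, uriOfSi, uriOfSj),  // invoke sj from si
            new TrustfulRule(consumerId, uriOfSi, uriOfSi),  // refresh si's client
            new TrustfulRule(consumerId, uriOfSj, uriOfSj)); // refresh sj's client
    }

    public static void main(String[] args) {
        // transition (s0, x0) of the EMRSS example, released to Mary
        convert("Mary", "/SBA/0.jsp", "/SBA/X0.jsp").forEach(System.out::println);
    }
}
```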

Definition 8

(Trustful Behavior Model): Given a system model SOA and a service consumer sc whose releasing service set in the SRM is R = {s_1, s_2, …, s_k}, suppose that for every s_i ∈ R there exists a route ρ_i in SOA whose initial service is s0 and whose end service is s_i. Then the trustful behavior model of sc, denoted TBM(sc), can be converted from the identity id of sc and the route set {ρ_1, ρ_2, …, ρ_k}, and the conversion procedure is defined as the following function:

TBM(sc) = ⋃ conv(id, t) over all transitions t on the routes ρ_1, ρ_2, …, ρ_k   (3)

where each ρ_i is the route from s0 to the released service s_i and id is the identity of sc. TBM(sc) is thus the set of all trustful behavior rules converted from id and the transitions on these routes using equation (2).

Based on the SRM in Table 1 and the SOA system model in Figure 1, we can create the trustful behavior models for Dr. Mike and Dr. Mary. First, according to the SRM, the releasing service set of Mike is {x0, x1} and the releasing service set of Mary is {x0}. In the SOA system, the route from s0 to x0 is the transition set {(s0, x0)}, and the route from s0 to x1 is the transition set {(s0, s1), (s1, x1)}; the entire derivation process is depicted in Figure 3. Then, using equations (2) and (3), the TBMs of Mike and Mary can be established as shown in Table 2 and Table 3 respectively.

Table 2: Trustful behavior model of Mike.
Table 3: Trustful behavior model of Mary.
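The derivation of Table 2 can be reproduced with a short sketch that unions the rules converted from every transition on Mike's routes, following Definition 8. The naming and the hard-coded routes reflect our reading of Figure 3 and are assumptions, not the paper's code.

```java
// Definition 8 applied to Dr. Mike: his TBM is the union of the rules obtained
// by converting every transition on the routes from s0 to his released services.
import java.util.*;

public class TbmDerivationExample {
    record TrustfulRule(String consumerId, String currentUri, String targetUri) {}

    static List<TrustfulRule> convert(String id, String uriFrom, String uriTo) {
        return List.of(new TrustfulRule(id, uriFrom, uriTo),
                       new TrustfulRule(id, uriFrom, uriFrom),
                       new TrustfulRule(id, uriTo, uriTo));
    }

    public static void main(String[] args) {
        Set<TrustfulRule> tbmMike = new LinkedHashSet<>();
        // route s0 -> x0 (X-Ray exploring): single transition (s0, x0)
        tbmMike.addAll(convert("Mike", "/SBA/0.jsp", "/SBA/X0.jsp"));
        // route s0 -> x1 (ECG exploring): transitions (s0, s1) and (s1, x1)
        tbmMike.addAll(convert("Mike", "/SBA/0.jsp", "/SBA/1.jsp"));
        tbmMike.addAll(convert("Mike", "/SBA/1.jsp", "/SBA/X1.jsp"));
        tbmMike.forEach(System.out::println);
    }
}
```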

3.4 Risk Measurement Model

In order to find and capture user’s malicious behaviors during run time, we need to evaluate the user’s behavior using quantitative risk measurement models. If a SC’s behavior risk has exceeded its threshold, then we think the SC’s behavior has malicious intentions. In this case, system will early terminate the SC’s access authorization in this session. The relative risk measurement models are defined as follows.

Definition 9

(Behavior Element): A behavior element of an SC is an environment variable collected by the check point embedded in a service interface. The set of behavior elements is denoted B = {b_1, b_2, …, b_n}.

Definition 10

(Behavior Risk): Given a behavior element b, the risk of behavior b is defined as a function:

(4)

Equation (4) states that the new behavior risk at the current time instant is determined by the existing risk at the previous time instant.

Definition 11

(Unauthorized Access Risk): Given a TBM and the behavior element set containing the SC's identity id, the URI u_c of the current service and the URI u_t of the target service. If there exists a trustful behavior rule in the TBM whose identity, current URI and target URI equal id, u_c and u_t respectively, then the SC's unauthorized access risk (UAR) is defined as follows.

(5)

Otherwise, it is defined as follows.

(6)

Here the existing unauthorized access risk is the UAR value at the previous time instant.

Definition 12

(Access Retention Risk): If we take the environment time into account, then the access retention risk (ARR) is defined as follows.

(7)

Here the existing access retention risk is the ARR value at the previous time instant.

Definition 13

(Access Frequency Risk): If we count the number of times the SC accesses services, then the access frequency risk (AFR) is defined as follows.

(8)

where the two terms are the number of times the SC has accessed services at the current time instant and at the previous time instant respectively.

Definition 14

(Risk Evaluation): Given a behavior element b and its behavior risk at the current time instant, the risk evaluation of b is defined as follows.

(9)

where the threshold in equation (9) is the risk threshold of behavior b.

Definition 15

(Risk Evidence): Given a behavior element b, its behavior risk at the current time instant and the risk evaluation of Definition 14, the risk evidence of b is defined as follows.

(10)

Semantically, if the accumulated risk has exceeded the risk threshold at the current time instant, the risk evidence is set to 1 (true); otherwise the risk is still acceptable and the evidence is set to 0 (false).
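Since the concrete formulas (4)-(10) are not reproduced above, the following Java sketch only illustrates one plausible reading of the risk bookkeeping: unauthorized requests accumulate into the UAR, accesses per minute form the basis of the AFR, and the risk evidence becomes 1 once a threshold is reached. The update steps, the per-minute window and the class name RiskState are assumptions for illustration.

```java
// Illustrative risk bookkeeping for one service consumer (not the paper's formulas).
public class RiskState {
    private long unauthorizedAccesses = 0;      // accumulated basis of the UAR
    private long accessesInCurrentMinute = 0;   // basis of the AFR
    private long currentMinute = -1;

    private final long uarThreshold;            // e.g. 1000 unauthorized requests
    private final long afrThreshold;            // e.g. 350 requests per minute

    public RiskState(long uarThreshold, long afrThreshold) {
        this.uarThreshold = uarThreshold;
        this.afrThreshold = afrThreshold;
    }

    /** Record one request; matchesTbm tells whether it satisfied a trustful behavior rule. */
    public void recordAccess(boolean matchesTbm, long timestampMillis) {
        long minute = timestampMillis / 60_000;
        if (minute != currentMinute) {          // new time window: reset the frequency counter
            currentMinute = minute;
            accessesInCurrentMinute = 0;
        }
        accessesInCurrentMinute++;
        if (!matchesTbm) {
            unauthorizedAccesses++;             // unauthorized access risk grows
        }
    }

    /** Risk evidence in the sense of Definition 15: true once the threshold is reached. */
    public boolean uarEvidence() { return unauthorizedAccesses >= uarThreshold; }
    public boolean afrEvidence() { return accessesInCurrentMinute >= afrThreshold; }

    public static void main(String[] args) {
        RiskState risk = new RiskState(1000, 350);
        risk.recordAccess(false, System.currentTimeMillis());
        System.out.println("UAR evidence: " + risk.uarEvidence() + ", AFR evidence: " + risk.afrEvidence());
    }
}
```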

3.5 Behavior-aware Access Control Algorithm

The behavior-aware access control algorithm is described by the flow chart shown in Figure 4. Following the primary goal of our mechanism, the algorithm implements two basic functions: the first is to supervise the SC's behaviors using the TBM, and the second is to evaluate the SC's behavior risks at run time and make the corresponding decisions.

Once the monitor session is activated by a user u, the algorithm first checks u's behavior elements (identity, current URI, target URI) against the TBM. If u's behavior elements do not match the TBM, the algorithm denies the request directly; otherwise, it calculates u's new behavior risks using the associated risk measurement models. Here we only consider two behavior risks, i.e., the unauthorized access risk (UAR) and the access frequency risk (AFR). If the UAR evidence and the AFR evidence both equal 1, then u is definitely a malicious user, because he or she is trying to access unauthorized services with an abnormal access frequency. In this case, the algorithm adds u to the blacklist and blocks u from then on. If both evidences equal 0, u's behaviors pose no risk to the SP, so the algorithm returns the target service's interface to u and updates the existing risks with the newly calculated values. Otherwise, u's behaviors still carry risk; the algorithm denies u's requests for the rest of this session, but u may still access the system in a new session.
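The following Java sketch condenses the decision flow just described. It is an interpretation of the algorithm, not the authors' code; the evidence values are assumed to come from bookkeeping such as the RiskState sketch above, and all names are illustrative.

```java
// Condensed sketch of the behavior-aware access control decision flow (Figure 4).
import java.util.HashSet;
import java.util.Set;

public class BehaviorAwareAccessControl {
    enum Decision { PERMIT, DENY_REQUEST, TERMINATE_SESSION, BLACKLIST }
    record TrustfulRule(String consumerId, String currentUri, String targetUri) {}

    private final Set<TrustfulRule> tbm;
    private final Set<String> blacklist = new HashSet<>();

    BehaviorAwareAccessControl(Set<TrustfulRule> tbm) { this.tbm = tbm; }

    /** One monitoring step for a single request. */
    Decision handle(String id, String currentUri, String targetUri,
                    boolean uarEvidence, boolean afrEvidence) {
        if (blacklist.contains(id)) {
            return Decision.DENY_REQUEST;              // blacklisted consumers are never served
        }
        boolean trusted = tbm.contains(new TrustfulRule(id, currentUri, targetUri));
        if (!trusted) {
            return Decision.DENY_REQUEST;              // behavior not covered by the TBM
        }
        if (uarEvidence && afrEvidence) {
            blacklist.add(id);                         // both risks triggered: treat as malicious
            return Decision.BLACKLIST;
        }
        if (uarEvidence || afrEvidence) {
            return Decision.TERMINATE_SESSION;         // one risk triggered: end this session early
        }
        return Decision.PERMIT;                        // trusted behavior, acceptable risk
    }

    public static void main(String[] args) {
        var acl = new BehaviorAwareAccessControl(Set.of(
            new TrustfulRule("Mike", "/SBA/0.jsp", "/SBA/X0.jsp")));
        System.out.println(acl.handle("Mike", "/SBA/0.jsp", "/SBA/X0.jsp", false, false)); // PERMIT
        System.out.println(acl.handle("Mike", "/SBA/0.jsp", "/SBA/X2.jsp", false, false)); // DENY_REQUEST
        System.out.println(acl.handle("Mike", "/SBA/0.jsp", "/SBA/X0.jsp", true, true));   // BLACKLIST
    }
}
```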

4 Experiment and Evaluation

In this section, we implement a prototype to validate the feasibility of the proposed mechanism. We first introduce the experimental system, and then evaluate the effectiveness and performance of the mechanism under different test scenarios.

4.1 Experimental System

To validate the feasibility of the proposed mechanism, we implement a prototype system using Eclipse Java EE Neon.3 and an Apache Tomcat 7.0 container. All experiments are conducted on a PC with an Intel Core i5 2.5 GHz processor running Windows 7. The SOA system of the prototype is shown in Figure 5; it has 3 system services and 6 sensitive services. Here we use Java Server Pages (JSP) to simulate the real service interfaces of the system. In each JSP we embed a section of check-point script code that collects the user's behavior elements at run time. We simulate the monitor with a Java session bean and implement the behavior-aware access control algorithm of Section 3.5 in Java.
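As an illustration, the Java below sketches what the check-point code embedded at the top of each JSP might do: read the behavior elements of Definition 9 from the servlet request and pass them to the monitor bean. Only the servlet API calls are standard; the CheckPoint and Monitor names, the session attribute and the use of the Referer header are assumptions.

```java
// Sketch of a check point invoked before a protected JSP renders its content.
import javax.servlet.http.HttpServletRequest;

public class CheckPoint {

    /** Interface of the monitor bean assumed by this sketch. */
    public interface Monitor {
        boolean supervise(String consumerId, String currentUri, String targetUri,
                          String clientIp, long now);
    }

    /** Returns true if the monitor permits the request, false if it must be denied. */
    public static boolean check(HttpServletRequest request, Monitor monitor, String targetUri) {
        String consumerId = (String) request.getSession().getAttribute("consumerId"); // set at login
        String currentUri = request.getHeader("Referer");   // page the request came from
        String clientIp   = request.getRemoteAddr();
        return monitor.supervise(consumerId, currentUri, targetUri, clientIp,
                                 System.currentTimeMillis());
    }
}
```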

To establish different test scenarios rapidly, we also developed a simple policy management toolkit in our prototype system; its screenshot is shown in Figure 6. With the toolkit, service providers can rapidly design different service releasing models (SRM) and trustful behavior models (TBM) for different service consumers. In total, 6 sensitive services (Service 1 to Service 6) can be chosen for release to an SC. Create refers to creating a completely new TBM file, and Append refers to inserting new trustful behavior rules into an existing TBM file.

4.2 Evaluation

We first evaluate user behavior supervision, i.e., once the monitor session is activated, only access requests that satisfy the TBM are responded to, while others are denied. Next, we validate whether our mechanism makes effective responses when the unauthorized access risk or the access frequency risk actually occurs. Finally, we evaluate the performance of the mechanism.

4.2.1 User Behavior Supervision

This evaluation validates that only access requests satisfying the TBM are responded to, while others are not. Specifically, we first design two test scenarios using the policy management toolkit shown in Figure 6. In the first scenario, only Service 1 and Service 5 are released to data users; in the second scenario, only Service 1, Service 3 and Service 4 are released to data users.

Figure 7: User behavior supervision evaluation under the first scenario.

Based on the SOA system model shown in Figure 5, the two scenarios are converted into their corresponding TBM models automatically by the policy management toolkit. Next, we randomly create 10000 access requests covering Service 1 to Service 6. We further arrange that whenever a service receives an access request it returns a response to the system, so that we can record the response times of all services that were accessed successfully. Figure 7 shows the execution result under the first scenario and Figure 8 shows the execution result under the second scenario. We can observe that under the first scenario only access requests sent to Service 1 and Service 5 were responded to, while the other requests were denied by the monitor. Taking Service 1 in Figure 7 as an example, it was accessed 1682 times in total, all accesses were responded to, and the average response time is 51 µs. Under the second scenario, only access requests for Service 1, Service 3 and Service 4 were responded to. Hence, the results confirm that our mechanism based on check points and the TBM is effective, i.e., only the trustful access requests defined by the TBM are permitted and responded to, while distrustful requests are denied automatically. Thus, by means of the mechanism, the system can effectively monitor SCs' access behaviors at run time.

Figure 8: User behavior supervision evaluation under the second scenario.

4.2.2 Dynamic Access Authorization

Figure 9: Dynamic access authorization evaluation under the first scenario with the UAR threshold set to 1000 times.

The dynamic access authorization evaluation validates whether our mechanism terminates an SC's access authorizations in time when the SC's unauthorized access risk (UAR) or access frequency risk (AFR) actually occurs. We first evaluate the mechanism when the UAR is triggered. We set the UAR threshold to 1000 times, which implies that the SC's access authorizations are terminated early in the current session once his or her unauthorized accesses exceed 1000 times. Based on this setting, we repeat the user behavior supervision experiments. The final results are shown in Figure 9 and Figure 10. We can observe that, compared with the earlier results in Figure 7 and Figure 8, the access counts in the new experiment drop dramatically. Taking Service 1 in Figure 9 as an example, it was accessed and responded to only 236 times, because the SC's access authorizations were terminated early by the monitor once the number of unauthorized accesses exceeded 1000.

Figure 10: Dynamic access authorization evaluation under the second scenario with the UAR threshold set to 1000 times.

Next, we evaluate whether our mechanism makes effective decisions when the AFR occurs. We set the AFR threshold to 350 times per minute, which implies that when an SC's access frequency exceeds 350 times per minute, his or her access authorizations are terminated early in the current session. Here the access frequency counts accesses to both authorized and unauthorized services. Specifically, we first randomly produce 20 groups of access frequencies. Then we access Service 1 at each of the generated frequencies and record the response time of Service 1 in each group. The final results are shown in Figure 11. We can observe that Service 1 is still responded to, and its response time can be recorded, in the first 6 groups, because the access frequency stays below 350 times per minute. In the 7th group the frequency rises to 388 times per minute, which exceeds the threshold, so the AFR is triggered. From the 7th group onward, no response time can be recorded for Service 1 because the SC's access authorizations have been terminated early in this session.

Figure 11: Dynamic access authorization evaluation with the AFR threshold set to 350 times per minute.

In conclusion, the results of the dynamic access authorization evaluation confirm that our mechanism is effective when the UAR or AFR occurs. By means of our mechanism, the system can effectively detect an SC's malicious behaviors at run time and respond to them effectively, i.e., by terminating the SC's access authorizations early in the current session.

4.2.3 Performance Evaluation

The performance evaluation validates whether our mechanism still works well as the scale of the TBM grows. The significance of this experiment is that when the SOA system model expands, the scale of the TBM expands with it, so we want to know whether the system still works well and whether its response time remains acceptable. Specifically, we carry out 10 groups of experiments, increasing the number of rules in the TBM from 100 to 1000 in steps of 100. In each group we record the response time of accessing Service 1, repeat the measurement 5 times and take the average response time as the final result. The results are plotted in Figure 12. We can observe that, as the number of rules in the TBM increases, the response time of accessing Service 1 increases nearly linearly with the number of trustful behavior rules, and all response times remain acceptable.

Figure 12: Performance evaluation of accessing Service 1.

5 Related Work

Surveying related work on security policy monitoring for SOA systems, we found that an important direction in this area is to monitor the business process model and notation (BPMN) BPMN1 of the SOA system. These methods BPMN2 ; BPMN3 ; BPMN4 ; BPMN5 ; BPMN6 are based on runtime monitoring of a service composition to ensure that the services behave in compliance with a pre-defined security policy; if policy violations are found, the monitor program sends alerts to the service provider or system manager. However, these methods do not consider the security requirements of different users. An ideal policy monitor might allow different users to apply their own security policies, enforced through a combination of design-time and run-time checking BPMN6 . Compared with these methods, our mechanism does not need to maintain a BPMN of the system, and the SRM defined by the service provider can be converted into different security policies (i.e., TBMs) for different service consumers, so that the monitor program can track each user's behavior using the TBM at run time.

Wetzstein et al. Branimir proposed a performance monitoring approach for WS-BPEL service compositions. Their study relies on business activity monitoring (BAM), which enables continuous, near real-time monitoring of processes based on process performance metrics (PPMs). Business people define PPMs based on business goals, and IT engineers then translate the PPMs into monitoring models. In case of severe deviations from the planned values, alerts are raised and notifications are sent to the responsible people. Similar BAM-based research is presented in Ou ; B1 ; B2 . By means of a process engine, these works can also find abnormal behaviors, but their monitoring target is a virtual business process model, so they can only provide near real-time monitoring results.

Zmuda et al. zuma proposed a dynamic monitoring framework for the SOA execution environment. The framework is based on a monitoring scenario and an interceptor mechanism. A monitoring scenario is a group of monitoring rules (an interceptor chain) defined by the service provider and published in the interceptor registry of the container. The interceptor socket is an independent service consisting of an observer, an interceptor chain and a complex event processing (CEP) engine. Whenever a service in the SOA system is invoked, the socket is activated at the same time; the service attributes collected by the observer are sent to the CEP engine for analysis. If abnormal behaviors are found, the socket notifies the service provider at run time. Similar research is presented in Z1 . Although these works can find users' abnormal behaviors at run time, compared with our mechanism they lack an associated dynamic response mechanism.

Chen et al. Chen proposed a framework-based runtime monitoring approach for service-oriented systems. The primary goal of the framework is to support software engineers by creating high-level views of how services interact, i.e., the runtime topology of the SOA system. Once the listener of a Web service receives an incoming request, it forwards the message to the proxy, which parses the request to obtain information about the invoking service and interface. The content of the request is then delivered to the target service. After the response is sent back, the proxy stores the addressing information in a logging system, from which the service provider can identify abnormal behaviors. Although this approach can establish a high-level runtime topology of the system, it cannot find users' abnormal behaviors automatically at run time; in some sense, it is similar to an offline monitoring mode.

6 Conclusion

In this paper, we propose a behavior-aware service access control mechanism using security policy monitoring for SOA systems. First, we present a formal system model for modeling an SOA system. Then, based on the system model, we present the proposed mechanism in detail: we introduce two important formal models, the service releasing model and the trustful behavior model; we present a group of quantitative risk measurement models for evaluating users' behavior risks in the system; and we give a detailed algorithmic description of the behavior-aware access control algorithm. To evaluate the feasibility of the proposed mechanism, we implement a proof-of-concept experimental system. The experimental results show that our mechanism can effectively monitor SCs' access behaviors at run time and remains effective when the UAR or AFR occurs. Thus, by means of our mechanism, the system can effectively detect an SC's malicious or misuse behaviors and respond to them dynamically at run time, i.e., by terminating the SC's access authorizations early in the current session. As the number of rules in the TBM increases, our mechanism still works well and its response time remains acceptable.

The present work assumes that one service provider owns the entire SOA system, i.e., all services in the SOA system are in one trust domain. In an open cloud computing environment, however, an SOA system often spans multiple trust domains, which implies that different services belong to different service providers and their security policies also differ. Hence, a runtime monitoring mechanism for multi-domain SOA systems is a more challenging problem and will be an important part of our ongoing work.

Acknowledgments

This work has been sponsored and supported by the National Natural Science Foundation of China (Grant No. 61772270) and partially supported by the National Natural Science Foundation of China (Grant No. 61602262).

References

  • (1) A. Arsanjani, L. J. Zhang, M. Ellis, A. Allam, K. Channabasavaiah, A service-oriented reference architecture, It Professional 9 (3) (2007) 10–17.
  • (2) C. Momm, M. Gebhart, S. Abeck, A model-driven approach for monitoring business performance in web service compositions, in: International Conference on Internet and Web Applications and Services, 2009.
  • (3) D. Guinard, V. Trifa, S. Karnouskos, P. Spiess, D. Savio, Interacting with the soa-based internet of things: Discovery, query, selection, and on-demand provisioning of web services, IEEE Transactions on Services Computing 3 (3) (2010) 223–235.
  • (4) I. R. Chen, J. Guo, F. Bao, Trust management for soa-based iot and its application to service composition, IEEE Transactions on Services Computing 9 (3) (2017) 482–495.
  • (5) X. Ning, J. Ma, S. Cong, Z. Tao, Decentralized information flow verification framework for the service chain composition in mobile computing environments, in: IEEE International Conference on Web Services, 2013.
  • (6) W. J. Tolone, G. J. Ahn, T. Pai, S. P. Hong, Access control in collaborative systems, Acm Computing Surveys 37 (1) (2005) 29–41.
  • (7) R. Wonohoesodo, Z. Tari, A role based access control for web services, in: IEEE International Conference on Services Computing, 2004.
  • (8) M. B. Salem, S. Hershkop, S. J. Stolfo, A Survey of Insider Attack Detection Research, 2008.
  • (9) T. S. Cook, D. Drusinksy, M. T. Shing, Specification, validation and run-time monitoring of soa based system-of-systems temporal behaviors, in: IEEE International Conference on System of Systems Engineering, 2007.
  • (10) D. Drusinsky, G. Watney, Applying run-time monitoring to the deep-impact fault protection engine, in: Software Engineering Workshop, 2003.
  • (11) Y. Meng, Z. Huang, Z. Yu, C. Ke, Privacy-aware cloud service selection approach based on p-spec policy models and privacy sensitivities, Future Generation Computer Systems 86 (2018) 1–11.
  • (12) OMG, Business process model and notation (bpmn) version 2.0 (2011).
    URL http://www.omg.org/spec/BPMN/2.0/
  • (13) T. Rademakers, R. V. Liempd, Activiti in action: Executable business processes in bpmn 2.0.
  • (14) L. Baresi, S. Guinea, O. Nano, G. Spanoudakis, Comprehensive monitoring of bpel processes, IEEE Internet Computing 14 (3) (2010) 50–57.
  • (15) H. Zhang, Z. Shao, Z. Hong, Runtime monitoring web services implemented in bpel, in: International Conference on Uncertainty Reasoning and Knowledge Engineering, 2011.
  • (16) G. Wu, J. Wei, T. Huang, Flexible pattern monitoring for ws-bpel through stateful aspect extension, in: IEEE International Conference on Web Services, 2008.
  • (17) M. Asim, A. Yautsiukhin, A. D. Brucker, B. Lempereur, Q. Shi, Security Policy Monitoring of Composite Services, 2014.
  • (18) B. Wetzstein, S. Strauch, F. Leymann, Measuring performance metrics of ws-bpel service compositions, in: International Conference on Networking and Services, 2009.
  • (19) T. Ou, S. Wei, C. Guo, L. Jing, Visualized monitoring of virtual business process for soa, in: IEEE International Conference on E-business Engineering, 2008.
  • (20) F. Barbon, P. Traverso, M. Pistore, M. Trainotti, Run-time monitoring of instances and classes of web service compositions, in: IEEE International Conference on Web Services, 2006.
  • (21) L. Baresi, S. Guinea, Dynamo: Dynamic monitoring of ws-bpel processes, in: International Conference on Service-oriented Computing, 2005.
  • (22) D. Zmuda, M. Psiuk, K. Zielinski, Dynamic monitoring framework for the soa execution environment, Procedia Computer Science 1 (1) (2010) 125–133.
  • (23) N. E. Ioini, A. Garibbo, A. Sillitti, G. Succi, An Open Source Monitoring Framework for Enterprise SOA, 2013.
  • (24) C. Chen, A. Zaidman, H. G. Gross, A framework-based runtime monitoring approach for service-oriented software systems, in: International Workshop on Quality Assurance for Service-based Applications, 2011.