Cyber Physical Systems: Prospects and Challenges

02/14/2018 ∙ by Walid Gomaa, et al. ∙ Egypt-Japan University of Science and Technology (E-JUST)

Cyber-physical systems (CPSs) embody the conception as well as the implementation of the integration of state-of-the-art technologies in sensing, communication, computing, and control. Such systems incorporate new trends such as cloud computing, mobile computing, mobile sensing, new modes of communication, wearables, etc. In this article we give an exposition of the architecture of a typical CPS and of the prospects of such systems in the development of the modern world. We illustrate three major challenges faced by CPSs: the need for rigorous numerical computation, the limitations of the current wireless communication bandwidth, and the computation/storage constraints imposed by mobility and energy consumption. We address each of these challenges and survey the current techniques devised to tackle them.


1 Introduction

The modern world is witnessing an explosion in computing and communication technology. Computing power and facilities are becoming more and more integrated into every aspect of our private and public lives. Computing devices can be found everywhere: from traditional computing machines such as workstations and laptops, to smartphones and smart home appliances, and finally to the current revolution in wearable devices such as headsets (for virtual and augmented reality, VR/AR), smart watches, etc. Communication networks, particularly wireless networks, have become commonplace and widespread throughout the globe. Internet connectivity is now popular and affordable for a large portion of the population. The raw material for computation and communication is data, which is typically gathered through sensing equipment now embedded in most popular devices and appliances. Such sensors include different varieties of cameras, accelerometers, gyroscopes, barometers, weather stations, VR/AR headsets, etc. In the other direction, the environment is affected via actuators operated by control systems, as in robotics and different varieties of mechatronic systems. The integration of all of these, sensing the physical environment, data communication, computation, and actuation, forms what is called a cyber-physical system (CPS). This terminology is typically used by computer and control scientists and engineers. Another term commonly used by communication scientists and engineers, though more limited in scope than CPS, is the Internet of Things (IoT). For the rest of this article we use 'cyber-physical system' for this conception.

Figure 1: A typical illustration of a CPS system

Fig. 1 illustrates the main components of a typical CPS. We can identify four major components: sensing, communication, computing, and actuation/control. We are currently witnessing a revolution in sensing technology. Sensors are embedded in almost every electronic device, including smart watches, home appliances, air conditioning, vehicle electronic systems, computers, wearables, smartphones, etc. Prices are getting cheaper, sizes are getting smaller, and more physical quantities and qualities are being measured by sensors. Sensors enable the CPS to perceive the environment. Of course, there must be circuitry designed to convert the electric signals measured by the sensor into the target property such as temperature, humidity, velocity, acceleration, etc. Communication provides the necessary interconnectivity among the different components and parts of the CPS. Communication technologies can vary from Wi-Fi to wired networks to communication through power lines, etc. The third main component is computation: the transformation of incoming data into another space that depends on the underlying process and/or application. Computing power today is shifting away from the traditional form of Moore's law. Moore's law states that the number of transistors on a microprocessor will double every two years or so, and computing performance will increase accordingly. However, we have now reached the nano-scale in transistor manufacturing and are almost at the end barrier, beyond which we hit the atomic scale, where quantum effects and uncertainties prevail and make things complicated and unreliable. So Moore's law has been reinterpreted to indicate the same goal of exponential performance increase, not through increasing the number of transistors, but through other, more abstract means. One advancement is to include more than one processor, called a core, on the chip; for example, four and eight cores are common in most of today's desktop computers, laptops, and even smartphones [19]. High performance is then achieved through parallelism. The final major component of a CPS is control/actuation. This can be considered the converse of sensing: through actuation we want to change the current state of the environment. Examples include the thermostat of an air-conditioning system changing the temperature of the surrounding environment, a robot arm moving an object to a new place, and the leg joints of a robot moving it to a new location.

Figure 2: A bundle of different sensors

In this article we discuss some essential challenges that face the design, implementation, and deployment of modern cyber-physical systems. The first challenge concerns the need for rigorous numerical computation, first to deal with the non-traditional computational requirements of such systems and second to obtain reliable computations and consequent outputs. The second major challenge is the limitation of the current technology of wireless communication: the current spectrum will not be enough to cover the prospective transmission demands of all kinds of devices, including wearables such as headsets for virtual and augmented reality, which are extremely bandwidth-hungry. The third major challenge that we illustrate in this paper is that of the computation and storage requirements of these new kinds of computation.

Section 1 is an introduction. Section 2 addresses the challenge of rigorous numerical computation. Section 3 addresses the current status of, and crisis in, communication and networking. Finally, Section 4 addresses the issues and new trends in the computation realm.

2 Non-Rigorous Computation

The first fundamental problem encountered in CPS systems is the discrete/continuous dilemma, or said another way, the digital/analog dilemma. Modern computer and communication systems are inherently digital in nature: every object over which computation is performed, and/or which needs to be transmitted over communication channels, is represented by a finite string. This string consists of letters from an alphabet, typically taken to be the binary alphabet {0, 1}. For example, if the object of computation is an integer, then the integer is represented in binary radix (the integer 5, say, is represented by the string 101). A rational number can be represented by a pair of such strings, one for the numerator and one for the denominator. The same goes for any object of a finitary nature: encode it with a finite binary string. Computation can then be performed exactly; there is no loss of information or accuracy. However, the problem arises when we deal with real physical and engineered systems, where such systems are typically designed, modeled, and analyzed using mathematical analysis tools whose underlying objects have an inherently infinitary nature, such as the real numbers, complex numbers, continuous functions, manifolds, etc. For example, weather prediction systems are modeled using a set of three differential equations, called the Lorenz equations (with variant extensions); mechanical systems are typically modeled by Lagrangian mechanics, etc. The variables and parameters of all these equations range over the real and complex numbers.
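
To make the contrast concrete, here is a minimal Python sketch (the specific values are illustrative only): objects of a finitary nature, such as rationals encoded by pairs of integers, can be computed with exactly, whereas the very same sum in binary floating point already loses accuracy.

```python
from fractions import Fraction

# Exact computation over finitary objects: a rational is encoded by two integers.
exact = sum(Fraction(1, 10) for _ in range(10))
print(exact)             # 1  -- no loss of information

# The same computation over binary floating point is only approximate.
approx = sum(0.1 for _ in range(10))
print(approx)            # 0.9999999999999999
print(approx == 1.0)     # False
```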

In order to appreciate the essential difficulty in representing such objects, take a very simple example: representing a real number such as √2. This number has no finite (or eventually repeating) pattern in its expansion, and so has no finite representation; it cannot be represented exactly on a digital computer, and there must be some loss of information and accuracy. The best feasible solution is to approximate it within a known finite precision. For example, the number 1.41421356 (a floating point or rational value as a computer data type) approximates √2 within an error of 10^-8, that is, |√2 − 1.41421356| < 10^-8. Though this approximation may be good enough, and the error controlled, at the current point of execution, the error can propagate and grow in unexpected ways under further operations, whether arithmetic or logical. Failure to address this discrepancy between the discrete and the continuous, that is, non-rigorous computation, can lead to unimaginable disasters. The short sketch below illustrates how quickly an innocent-looking computation can go wrong, and the Patriot missile incident that follows shows the real-world cost.
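
As a small, generic illustration (not taken from the paper) of how a perfectly controlled representation error can blow up under further operations, consider evaluating (1 − cos x)/x² near x = 0, whose true value tends to 1/2. In double precision the subtraction 1 − cos x cancels almost all significant digits, while an algebraically equivalent rewriting stays accurate.

```python
import math

def f_naive(x):
    # Mathematically, (1 - cos x) / x**2 -> 1/2 as x -> 0.
    return (1.0 - math.cos(x)) / (x * x)

def f_stable(x):
    # Rewriting via the half-angle identity 1 - cos x = 2 sin^2(x/2)
    # avoids the catastrophic cancellation.
    s = math.sin(x / 2.0)
    return 2.0 * s * s / (x * x)

for x in (1e-4, 1e-6, 1e-8):
    print(f"x={x:8.0e}  naive={f_naive(x):.12f}  stable={f_stable(x):.12f}")
```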

Patriot missile failure: During the Gulf War, on February 25th, 1991, an American Patriot missile battery in Dhahran, Saudi Arabia, failed to track and intercept an incoming Iraqi Scud missile; see Fig. 3. The Scud struck an army barracks, killing 28 soldiers and injuring around 100 other people (this single incident accounted for a large share of American casualties during the entire war). Investigation into the incident concluded that the cause was essentially a software failure, specifically an apparent case of non-rigorous computational numerics. The technical problem is described as follows. The time in tenths of a second, as measured by the system's internal clock, was multiplied by 1/10 to produce the time in seconds. This calculation was performed using a 24-bit fixed point register. The value 1/10 has an infinite binary expansion: 0.0001100110011001100110011001100.... The 24-bit register in the Patriot stored only 0.00011001100110011001100; that is, the number was truncated after the binary radix point, introducing an error of about 0.000000095 in decimal. This small truncation error, when multiplied by the large number giving the time in tenths of a second, led to a significant error. The Patriot battery had been up for around 100 hours, and multiplying by the number of tenths of a second in 100 hours magnifies the induced error to about 0.34 seconds. An attacking Scud missile travels at about 1,676 meters per second, and so travels more than half a kilometer in 0.34 seconds. This was far enough that the incoming Scud was outside the range gate that the Patriot tracked.
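
The arithmetic behind the failure can be reproduced in a few lines. The sketch below is only a back-of-the-envelope reconstruction using the figures quoted above (chopping 1/10 in the fixed-point register, roughly 100 hours of uptime, and a Scud speed of about 1,676 m/s).

```python
import math
from fractions import Fraction

BITS = 23   # binary digits of 1/10 kept after the radix point; this matches the
            # widely cited account of the 24-bit register and its ~9.5e-8 error

# Chop 1/10 to BITS binary digits after the radix point (truncation, not rounding).
stored = Fraction(math.floor(Fraction(1, 10) * 2**BITS), 2**BITS)
error_per_tick = Fraction(1, 10) - stored
print(float(error_per_tick))                    # ~9.5e-08, i.e. about 0.000000095

uptime_hours = 100
ticks = uptime_hours * 3600 * 10                # number of tenths of a second
clock_drift = float(error_per_tick) * ticks
print(f"clock drift after {uptime_hours} h: {clock_drift:.2f} s")     # ~0.34 s

scud_speed_mps = 1676.0                         # approximate Scud speed
print(f"tracking error: {scud_speed_mps * clock_drift:.0f} m")        # > 500 m
```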

Figure 3: Patriot missile

The key to the solution of this problem is rigorous computation. This is a very wide-spectrum area of research, ranging from the purely theoretical to the purely practical. On the theoretical side, the area of computable analysis investigates mathematical analysis from the computability perspective. It provides a foundational framework for the computability- and complexity-theoretic study of what can be done effectively and efficiently over continuous domains. An example of a result in this field is as follows. Assume a function f : ℝ → ℝ. Then f is computable if and only if f is effectively continuous and effectively approximable. Intuitively, this means that f is continuous, its continuity can be effectively computed, and the function values can be approximated by a computable function over discrete objects (the integers or the rationals). The books [14, 20] provide general and advanced introductions to the subject. Algebraic characterizations of computable analysis (an algebraic perspective on its computability- and complexity-theoretic conceptions) can be found in [7, 10, 2, 8, 9]. The next step down in abstraction is numerical analysis. This area provides the necessary algorithms for the numerical solution of the vast majority of problems in mathematical analysis, for example root finding, integration, solution of differential equations, and solution of systems of linear and non-linear equations. This plethora of algorithms comes with numerical analysis techniques and tools addressing factors such as convergence guarantees and rates, stability, and error bounds. For an introduction to the subject, see [3, 11]. The third step down the hierarchy of rigorous computation is providing the necessary data structures for the proper handling, from both the computability and complexity perspectives, of numerical computations. This is evident, for example, in the rigorous simulation of dynamical systems, which can also be used for computer-assisted proofs about such systems. A notable example of this kind of research is the framework of finite resolution dynamics introduced by Stefano Luzzatto et al. in [17]. They introduce a combinatorial representation of discrete-time continuous-space dynamical systems as a graph, and map mathematical properties of dynamical systems such as transitivity and mixing to graph-theoretic properties such as connectivity and aperiodicity. An improvement of this framework from the perspective of computational efficiency is introduced by I. Elshaarawy and W. Gomaa in [6, 5]. Fig. 4 shows an illustration of the combinatorial graph representation of the famous logistic map over the unit interval [0, 1].

(a) Discretized phase space: the dotted rectangle is the exact image of a box and the gray area is its ideal representation [6]
(b) Nodes that map to themselves do so due to the behavior of the original map; additional edges to neighboring nodes arise due to overlapping only.
Figure 4: Combinatorial representation of the logistic map
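
The combinatorial idea behind Fig. 4 can be sketched in a few lines. The code below is our own simplified rendering, not the implementation of [17] or [6]: it cuts the unit interval into boxes, adds an edge i → j whenever the range of the logistic map f(x) = λx(1 − x) over box i overlaps box j, and then checks strong connectivity of the resulting graph, the combinatorial counterpart of transitivity. Floating-point rounding is ignored here; a fully rigorous version would use outward rounding or interval arithmetic.

```python
# Simplified sketch of the finite-resolution / combinatorial representation idea.

def logistic_range(a, b, lam=4.0):
    """Range of f(x) = lam*x*(1-x) over [a, b] (f is a concave parabola)."""
    f = lambda x: lam * x * (1.0 - x)
    lo = min(f(a), f(b))
    hi = f(0.5) if a <= 0.5 <= b else max(f(a), f(b))
    return lo, hi

def build_graph(n_boxes=64, lam=4.0):
    w = 1.0 / n_boxes
    edges = {}
    for i in range(n_boxes):
        lo, hi = logistic_range(i * w, (i + 1) * w, lam)
        # every box j whose interval [j*w, (j+1)*w] meets [lo, hi] gets an edge i -> j
        j_min, j_max = max(0, int(lo / w)), min(n_boxes - 1, int(hi / w))
        edges[i] = set(range(j_min, j_max + 1))
    return edges

def reachable(edges, start):
    seen, stack = {start}, [start]
    while stack:
        for j in edges[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

g = build_graph()
rev = {i: {j for j in g if i in g[j]} for i in g}          # reversed graph
strongly_connected = len(reachable(g, 0)) == len(g) and len(reachable(rev, 0)) == len(g)
print("strongly connected (transitive at this resolution):", strongly_connected)
```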

3 Communication and Networking

An essential component of a CPS/IoT system is the communication network. One of the big hurdles and challenges in such systems is the data traffic jam. By the end of 2014, global mobile data traffic had reached about 2.5 exabytes/month (an exabyte is 10^18 bytes) according to Cisco Systems. Wearable devices around the world already contribute millions of gigabytes/month (a small fraction of the total mobile traffic), and the number of such devices is expected to increase five-fold by 2019 [1]. All these devices may consume and choke off the available bandwidth and hence degrade the communication performance of Internet traffic. There are several possibilities for tackling this problem.

The first intuitive solution is to free up extra bandwidth, essentially in the radio spectrum, that is currently used for other purposes such as the military. One implementation of this strategy was made by the United States government, which pledged in 2010 to free up an extra 500 MHz, roughly doubling the bandwidth available for mobile devices at that time [1]. However, this traditional solution is unlikely to be enough to cope with the huge proliferation of wearable devices, particularly headsets used for virtual and augmented reality (VR/AR) applications. Besides, this solution lacks standardization, as dealing with limited bandwidth is specific to each country according to its own regulation and governmental usage; India, for example, has access to only a fraction of the bandwidth available to people in the USA [1]. Of course, there are many other issues with current radio wireless communication, including energy efficiency, availability (such as in hospitals, airplanes, etc.), and security and privacy. So a more demanding solution is to use the available bandwidth more efficiently. One possibility is to create a hierarchy of networks. Wearables on a particular person should communicate with each other through a local network, coined a body-area network, which is designed to use a different part of the electromagnetic spectrum such as the millimetre wave (mmWave) band, ranging roughly from 30 GHz to 300 GHz. A body-area network consists essentially of the wearable devices in and around the human body. It is supposed to connect low-rate sensors such as accelerometers and pedometers as well as high-rate devices such as VR/AR headsets. A prominent researcher in mmWave networking is Robert Heath, director of the Wireless Networking and Communications Group and the Wireless Systems Innovations Lab at the University of Texas at Austin. His group has developed theoretical models for performance analysis of such wearable networks under different scenarios, including the effect of reflections off walls, ceilings, and floors, in addition to blocking by the pose and orientation of the wearer. Such networks might use wireless standards like IEEE 802.11ad or WirelessHD, for which commercial products are already available. One device can act as a hub in the body-area network, using the mmWave spectrum locally while using the congested conventional bands to communicate data to the Internet.
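
One quantitative reason mmWave frequencies suit such short-range, body-area links is their much higher free-space path loss, which (together with blockage by the body) confines them locally and lets the spectrum be reused densely. The comparison below uses the standard Friis free-space formula; the distance and frequencies are illustrative assumptions, not figures from the cited work.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

d = 2.0   # a plausible around-the-body link distance, in meters
for f in (2.4e9, 60e9):
    print(f"{f/1e9:5.1f} GHz over {d} m: {fspl_db(d, f):5.1f} dB")
# 60 GHz loses about 28 dB more than 2.4 GHz over the same distance
# (20*log10(60/2.4)), i.e. roughly a factor of 625 in received power.
```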

A rather radical approach to the communication bottleneck is to use the spectrum of visible light radiated by light-emitting diodes (LEDs). LEDs produce light and can also act as photoreceptors. Looking at the electromagnetic spectrum, shown in Fig. 5, gives a global perspective on the potential for wireless communication.

Figure 5: Electromagnetic spectrum

The gamma-ray band is dangerous to health. X-ray technology is in general not widespread outside hospitals and is expensive for large-scale use. Ultraviolet light can also be harmful to the human body. Infrared can only be used at low power for the safety of the eyes. The radio and microwave band is what current wireless communication technology uses. However, this technology faces several challenges, including:

  1. Scarcity of the available radio bandwidth.

  2. A large amount of energy is needed for radio communication.

  3. Due to safety and security concerns, radio Wi-Fi cannot be used in several situations such as hospitals, flights, etc.

  4. Radio transmissions have inherent security holes: they can penetrate walls and be intercepted.

On the other hand, communication via the visible light spectrum potentially provides solutions to all of these problems. The radio and microwave spectrum extends from a few kHz up to a few hundred GHz, whereas visible light occupies frequencies of hundreds of THz (roughly 430–770 THz), so the visible light band offers a spectrum that is orders of magnitude wider than the entire radio band, and billions of light sources are already installed in the infrastructure (compared with the current wireless facilities). Therefore, we have much more capacity than current radio wireless. LEDs are an efficient source of illumination from the perspective of energy consumption, and at the same time they can serve as data carriers, which greatly improves the energy budget relative to current radio transmission. Illumination is found everywhere, and LEDs can be installed to replace other kinds of illumination at relatively low prices; hence communication availability comes almost for free. Data transmission through illumination also has better security properties than radio transmission: on one hand, light does not penetrate walls; on the other hand, light sources can easily be pointed away from receptors so that they do not receive data.
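
A rough sense of the capacity argument can be obtained by comparing bandwidths directly. The band edges below are approximate assumptions (visible light roughly 430–770 THz, the whole radio/microwave band up to about 300 GHz, and a few GHz actually licensed for mobile use), so the ratios are only order-of-magnitude estimates.

```python
# Back-of-the-envelope bandwidth comparison (all figures approximate).
visible_light_bw = 770e12 - 430e12     # ~340 THz of visible-light spectrum
radio_band_bw    = 300e9               # entire radio/microwave band, ~300 GHz
licensed_mobile  = 5e9                 # order of magnitude licensed for mobile use

print(f"visible light vs whole radio band: ~{visible_light_bw / radio_band_bw:,.0f}x")
print(f"visible light vs licensed mobile : ~{visible_light_bw / licensed_mobile:,.0f}x")
# roughly a thousand times the whole radio band and tens of thousands of times
# the spectrum actually licensed for mobile communication.
```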

This light-wave wireless technology is envisioned to be used both for communication among wearables and for communication with the global Internet. A body-area network on a person can communicate through wearables that incorporate LEDs for this kind of communication. Furthermore, such information can be sent to light installations in the surrounding location (an office, for example), which are connected to the Internet through the power wiring. An important point is that the blinking of LEDs for data transmission is so fast that it is imperceptible to the human eye. One of the leading research groups in the world in this area is the mobile communications group at the University of Edinburgh, UK, led by Professor Harald Haas; they call this new technology Li-Fi. Haas planned to test a Li-Fi system in hospitals in 2016, where patients wear wristbands that monitor their temperature and transmit the data via LEDs on the wristbands to the hospital's lighting system [1].

Another way to overcome the wireless communication hurdle is the smart use of the current technology by reducing the amount of redundant data. The idea is similar in spirit to Li-Fi in that a multi-tiered network is used: a kind of local network is formed, for example, among people in the same area trying to access the same content from the Internet, such as a sports match or a musical concert, and one device acts as a hub seeding that content to all the other nearby devices through the local network. This is, in fact, one of the ideas behind fifth-generation (5G) mobile communication systems.

4 Computing/Storage Power

Generally, there are two trends regarding the processing of the huge amount of streaming data available today. The first is cloud computing, where the mobile (and/or fixed) devices, whatever they are, transmit their data to a cloud computing center, where all the software tools and processing reside, and the results are then relayed back to the mobile devices. The cloud has high-performance computing power as well as massive storage capabilities, and so relieves the mobile devices of the burden of computing and storage. However, this puts pressure on the limited communication resources, as discussed in the previous section. A notable example of the use of cloud computing is the area of cloud robotics. Unlike the traditional approach to robotic systems, where all sensing, computing, and memory are integrated into a single standalone system, computing and storage are offloaded to the cloud using the additional resource of communication networks. A preliminary conception of the idea can be found in [12, 18]. An active research group in this area is the Jouhou System Kougaku Lab at the University of Tokyo in Japan (http://www.jsk.t.u-tokyo.ac.jp/research/rbr/portablehumanoid.html), and a survey of current research in cloud robotics can be found in [13].

The second trend is to use on-device CPU and GPU capabilities (embedded CPU/GPU processors) in a smart way in order to perform quick and critical computations without the need for remote computing power that may be blocked by the limitations of the communication channel. This creates a new area of research and application concerned with the integration of mobile sensing and mobile computing. Much here is yet to be investigated, particularly the use of machine learning for object recognition, speech recognition, activity recognition, image classification, etc. Although mobile sensing/computing shares many of the data modeling challenges found in other arenas, the mobile case has unique characteristics, including the following:

  1. Measurements of mobile sensors can be greatly affected by the particular location, orientation, and context of the embedding mobile device; for example, a smartphone can be in a pocket, in a bag, or held during a conversation (a simple mitigation is sketched after this list).

  2. Mobile sensors are highly noisy, either by the inherent noise of the physical characteristics of the sensors, or by the background noise that can vary significantly according to the particular context and situation such as driving, resting, indoors, outdoors, etc. Modeling of such noise processes can be very complicated.

  3. Unlike fixed sensing devices, mobile sensors are by nature very personalized. Measurements can depend to a large extent on the particular user and her style of living.

  4. Power consumption should be an essential issue of any mobile application.
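
As a concrete (and deliberately simple) illustration of mitigating the orientation dependence in item 1, the sketch below uses synthetic accelerometer data, an assumption rather than real sensor traces, and extracts features from the magnitude of the acceleration vector, which is invariant to how the phone is oriented.

```python
import numpy as np

# Synthetic 3-axis accelerometer window (assumed 50 Hz, 2 seconds, gravity on z).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 100)
walk = np.stack([np.sin(2 * np.pi * 2 * t),
                 np.zeros_like(t),
                 9.81 + 0.5 * np.cos(2 * np.pi * 2 * t)], axis=1)
walk += 0.1 * rng.standard_normal(walk.shape)

# The same motion as seen by a phone held at an arbitrary orientation.
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
walk_rotated = walk @ R.T

def magnitude_features(window):
    """Orientation-invariant features: statistics of the acceleration magnitude."""
    mag = np.linalg.norm(window, axis=1)
    return np.array([mag.mean(), mag.std(), mag.max() - mag.min()])

print(magnitude_features(walk))
print(magnitude_features(walk_rotated))   # essentially identical despite the rotation
```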

All of the above challenges can be addressed by machine learning. Deep learning in particular has been investigated for mobile sensing/computing. On one hand, this is mainly because this paradigm has achieved state-of-the-art accuracy in many tasks related to speech recognition and computer vision, such as object detection, recognition, and segmentation, which are the main ingredients of higher-level, more complicated tasks such as activity, behavior, and emotion recognition, and speaker recognition. On the other hand, mobile computing architectures have advanced greatly in recent years, alleviating many of the computational constraints of previous generations of mobile devices; for example, the iPhone 6 offers roughly a 10x computational improvement over the earlier iPhone 3GS [16].

The main challenge with deep networks is the tremendous amount of computational and storage resources required for training and inference. For example, AlexNet [15], a moderately sized deep convolutional network, has about 61 million weights, making up roughly 240 MB of storage at 32 bits per weight, and correspondingly requires substantial processing power. The large number of weights and channels requires substantial data movement, which consumes a great deal of energy [4]. These issues become more pressing when moving to mobile computing. Some recent work has addressed deep learning on embedded devices; however, there is still much to do. In the following we highlight two prominent research efforts in this direction.
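
Before turning to those, the AlexNet size figure quoted above can be checked in a couple of lines. The sketch assumes the torchvision package is available and simply counts the parameters of its stock AlexNet definition (about 61 million, i.e. roughly 240 MB at 32 bits per weight).

```python
# Quick sanity check of AlexNet's size (assumes torchvision >= 0.13 is installed;
# older versions take pretrained=False instead of weights=None).
from torchvision.models import alexnet

model = alexnet(weights=None)          # architecture only, no pretrained weights
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params:,}")                      # ~61 million
print(f"fp32 size : {n_params * 4 / 2**20:.0f} MiB")    # ~233 MiB
```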

The authors in [16] design and implement a low-power deep neural network inference engine that uses both the CPU and the DSP (digital signal processor) of commodity mobile devices such as the Samsung Galaxy S5 and the Nexus 6. Their prototype consists of two phases: an offline training phase, performed with deep neural networks on conventional computing infrastructure, and an online real-time inference phase that runs on the mobile device. They test their framework against other common modeling techniques, such as decision trees and support vector machines, on datasets for common behavioral inference tasks, including several modes of activity recognition, emotion recognition, and speaker identification. They achieve better or comparable accuracy while using far fewer resources (lower energy and runtime overhead) than conventional Gaussian mixture models, and resources comparable with decision tree models.
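
The offline/online split the authors describe can be sketched generically. The code below is a hypothetical stand-in, not their engine: the weights are assumed to have been produced by an offline training phase, and the online phase is a plain NumPy forward pass, the kind of lightweight computation that fits a phone's CPU or DSP.

```python
import numpy as np

# Hypothetical weights exported by the offline training phase (shapes illustrative:
# e.g. 64 sensor features -> 32 hidden units -> 5 activity classes).
rng = np.random.default_rng(0)
params = {
    "W1": rng.standard_normal((64, 32)).astype(np.float32),
    "b1": np.zeros(32, dtype=np.float32),
    "W2": rng.standard_normal((32, 5)).astype(np.float32),
    "b2": np.zeros(5, dtype=np.float32),
}

def infer(x, p):
    """Online phase: a single cheap forward pass suitable for on-device execution."""
    h = np.maximum(x @ p["W1"] + p["b1"], 0.0)     # ReLU hidden layer
    logits = h @ p["W2"] + p["b2"]
    e = np.exp(logits - logits.max())              # numerically stable softmax
    return e / e.sum()

x = rng.standard_normal(64).astype(np.float32)     # one window of sensor features
print(infer(x, params))                            # class probabilities
```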

The authors in [16] used a relatively simple class of deep neural networks. However, in order to achieve state-of-the-art accuracy for typical vision tasks such as object detection, recognition, and segmentation, convolutional neural networks (CNNs) should be adopted. CNNs are increasingly being used; the main problem with them is the huge amount of computational resources required by an operational CNN (such as the AlexNet network mentioned above). Y.-H. Chen, T. Krishna, J. Emer, and V. Sze [4] have developed a hardware solution to this problem by designing and implementing an accelerator for CNNs that maintains state-of-the-art accuracy while minimizing the energy consumed by the system in real time. Their design is based on two key methods: (1) an efficient dataflow (which in general accounts for a great deal of the consumed energy), along with the supporting hardware, including a spatial array of processing elements, a memory hierarchy, and an on-chip network; the aim is to minimize data movement through data reuse and to support the different shapes of the different layers of the network; and (2) exploitation of data statistics in order to minimize energy consumption by avoiding unnecessary reads and computations. They implemented a test chip running the AlexNet convolutional neural network and characterize their design and implementation in detail, including the energy consumption of every layer of the network and every component of the chip. Briefly, their chip processes the convolutional layers at a frame rate of about 35 fps at a power of under 300 mW at nominal voltage, and the frame rate can be increased to about 45 fps at the expense of a higher supply voltage. The authors give well-detailed charts of the relationship between performance, measured in frames per second, and the corresponding energy efficiency. They also provide a performance chart of the five convolutional layers of AlexNet in terms of the reduction of data movement (DRAM accesses).

References

  • [1] K. Austen. What could derail the wearables revolution? Nature, 525, September 2015.
  • [2] O. Bournez, W. Gomaa, and E. Hainry. Algebraic Characterizations of Complexity-Theoretic Classes of Real Functions. International Journal of Unconventional Computing, 7(5):331–351, November 2011.
  • [3] R. L. Burden, J. D. Faires, and A. M. Burden. Numerical Analysis. Brooks Cole, Boston, MA, 10th edition, Jan. 2015.
  • [4] Y.-H. Chen, T. Krishna, J. Emer, and V. Sze. Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks. http://dspace.mit.edu/handle/1721.1/101151, Feb. 2016.
  • [5] I. Elshaarawy and W. Gomaa. An Efficient Computational Framework for Studying Dynamical Systems. In Proc. of the 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC 2013), Timisoara, Romania, September 2013.
  • [6] I. Elshaarawy and W. Gomaa. Ideal Quantification of Chaos at Finite Resolution. In Proc. of the 14th International Conference on Computational Science and Its Applications (ICCSA 2014), University of Minho, Guimaraes, Portugal, June 30 - July 3, 2014.
  • [7] H. Férée, W. Gomaa, and M. Hoyrup. Analytical Properties of Resource-Bounded Real Functionals. Journal of Complexity, 30(5):647–671, October 2014.
  • [8] W. Gomaa. Algebraic Characterizations of Computable Analysis Real Functions. International Journal of Unconventional Computing, 7(4):245–272, 2011.
  • [9] W. Gomaa. A Survey of Recursive Analysis and Moore’s Notion of Real Computation. Journal of Natural Computing, 11(1):37–49, March 2012.
  • [10] W. Gomaa. Rational vs. Real Computation. International Journal of Software and Informatics, 7(4):629–654, 2013.
  • [11] R. W. Hamming. Numerical Methods for Scientists and Engineers. Dover Publications, New York, 2nd revised edition, Mar. 1987.
  • [12] M. Inaba. Remote-brained robots. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI-97), pages 1593–1606, 1997.
  • [13] B. Kehoe, S. Patil, P. Abbeel, and K. Y. Goldberg. A survey of research on cloud robotics and automation. IEEE Transactions on Automation Science and Engineering, 12(2):1–12, April 2015.
  • [14] K.-I. Ko. Complexity Theory of Real Functions. Birkhäuser, 1991.
  • [15] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.
  • [16] N. D. Lane and P. Georgiev. Can Deep Learning Revolutionize Mobile Sensing? In Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications, HotMobile ’15, pages 117–122, New York, NY, USA, 2015. ACM.
  • [17] S. Luzzatto and P. Pilarczyk. Finite Resolution Dynamics. Foundations of Computational Mathematics, 11:211–239, 2011.
  • [18] M. Inaba, S. Kagami, F. Kanehiro, Y. Hoshino, and H. Inoue. A platform for robotics research based on the remote-brained robot approach. International Journal of Robotics Research, 19(10):933–954, 2000.
  • [19] M. M. Waldrop. The chips are down for Moore’s law. Nature, 530(7589):144–147, Feb. 2016.
  • [20] K. Weihrauch. Computable Analysis: An Introduction. Springer, 1 edition, 2000.