SolarGest: Ubiquitous and Battery-free Gesture Recognition using Solar Cells

12/05/2018 ∙ by Dong Ma, et al. ∙ Alexandria University ∙ UNSW

We design a system, SolarGest, which can recognize hand gestures near a solar-powered device by analyzing the patterns of the photocurrent. SolarGest is based on the observation that each gesture interferes with incident light rays on the solar panel in a unique way, leaving a distinguishable signature in the harvested photocurrent. Using solar energy harvesting laws, we develop a model to optimize the design and usage of SolarGest. To further improve the robustness of SolarGest under non-deterministic operating conditions, we combine dynamic time warping with Z-score transformation in a signal processing pipeline to pre-process each gesture waveform before it is analyzed for classification. We evaluate SolarGest with both conventional opaque solar cells and emerging see-through transparent cells. Our experiments with 6,960 gesture samples for 6 different gestures reveal that even with transparent cells, SolarGest can detect gestures with 96% accuracy, while consuming less power compared to light-sensor-based systems.


1. Introduction

1.1. Motivation

Figure 1. Illustration of a transparent solar powered smartwatch with solar-based gesture recognition.

As all types of devices around us become smart and capable of taking input from us, we need to explore more natural ways to interact with them. There is a growing trend to integrate gesture recognition into consumer electronics (chaudhary2013intelligent, ; ren2011robust, ), because it is one of the most natural ways for humans to communicate with anyone or anything. Given the diversity of devices, many of which would be powered by small batteries, we need gesture systems that work with any device and consume zero energy in addition to normal device operation. By using solar panels, we can achieve these two goals simultaneously, i.e., any device fitted with solar panels for energy harvesting can also recognize gestures. Since solar energy harvesting responds to any form of light, SolarGest can find applications both indoors and outdoors. For example, users can purchase from solar-powered vending machines, configure solar-powered garden lights, or operate solar-powered calculators simply by using gestures.

There is a new development in solar technology, transparent solar cells (wang2008transparent, ; traverse2017emergence, ), which makes solar panels more attractive for mobile devices. Made from novel organic materials, transparent cells absorb and harvest energy from infrared and ultraviolet light, but let visible light pass through, so we can see through the solar panel like clear glass. With the advent of transparent cells, solar panels can now be fitted over the entire device body, including on top of the screen, to harvest more energy. Figure 1 illustrates how a transparent solar cell fitted on the screen of a smart watch can serve the dual purpose of energy harvesting and gesture recognition.

1.2. Limitations of Existing Work

There is a growing trend in exploring gesture systems for consumer devices using a variety of sensors and modalities, such as WiFi (electromagnetic) (abdelnasser2015wigest, ; sbirlea2013automatic, ; pu2013whole, ), camera (image) (izadi2011kinectfusion, ; howe2000bayesian, ), microphone (acoustic) (gupta2012soundwave, ; pittman2016multiwave, ), accelerometer (motion) (ruiz2011user, ; xu2012taplogger, ), and light sensor (ambient light) (venkatnarayan2018gesture, ; kaholokula2016reusing, ; li2017reconstructing, ; li2015human, ; li2016practical, ). Some of them, such as WiFi and accelerometer, are more ubiquitous than others, and most of them can achieve high gesture accuracies of up to 98%. However, none of them harvests energy; all of them consume it. Work on solar-based gesture recognition is rare, with the exception of a recent work by Varshney et al. (varshney2017battery, ) that experimentally demonstrated the feasibility of gesture recognition with a specific silicon-based opaque solar cell. The findings of the preliminary work in (varshney2017battery, ) open the door for ubiquitous solar-based gesture recognition for the future Internet of Things (IoT), but it is limited in the following ways.

First, (varshney2017battery, ) differentiates only three gestures based on the number of times the user repeats a basic hand movement, which is essentially recognition of one gesture with different counts. Although this method can be easily implemented using a simple threshold-based algorithm with a counter, it requires the user to remember the hand movement counts to ensure the correct gesture is communicated.

The second limitation is lack of a theoretical model to simulate solar gesture recognition under different parameters. For example, we do not know how to analyze gesture recognition performance of solar cells as a function of lighting condition, efficiency and form factor of the solar cell, user hand size, and proximity of hand to the solar panel. Without a simulation model, design optimization of user-friendly solar gesture systems can be exhausting as one has to experimentally estimate the performance of the system for a large combination of parameter values. For example, how transparency of a solar cell may affect gesture recognition performance cannot be studied without first acquiring a series of transparent solar cells of specific properties, which can be very expensive, limiting the possibility of future research in the area to explore new algorithms.

1.3. Proposed Methodology

We propose SolarGest, which detects user-friendly gestures of arbitrary design. It is based on the observation that any hand gesture interferes with incident light rays on the solar panel in a unique way, leaving its distinguishable signature in harvested photocurrent’s time series data. By delegating the learning and detection responsibilities to machine learning, we can focus on designing user-friendly gestures beyond the simple counting-based gestures.

We observe that any influence of a hand gesture on the photocurrent is governed by solar energy harvesting laws, which provide a quantitative estimate of the generated photocurrent given the intensities and incident angles of ambient light, the form factor of the solar panel, and its energy harvesting density. Hand gestures change the volumes and angles of incident light in a specific pattern, which can be explained using basic geometry. Combining solar energy harvesting laws with geometry, we propose a model to simulate photocurrent waveforms produced by arbitrary hand gestures. A key utility of the model is that future designers of SolarGest systems can estimate the gesture recognition performance of different types of solar cells for arbitrary gestures under different lighting environments before committing to costly experiments. Actual experiments can then be done sparingly, only for fine-tuning the system.

1.4. Contributions

In realizing SolarGest, we faced several challenges. Our model revealed that both duration and amplitude of the waveform can vary significantly from sample to sample for the same gesture due to variations in a number of hardware, environmental and user parameters such as solar cell form factor, intensity of ambient light, hand size, hand angle, speed of hand motion, and proximity of hand to the solar panel. These variations make it challenging to train classifiers for accurate detection of specific gestures. We designed a combination of dynamic time warping (DTW) and Z-score transformation to pre-process all gesture waveforms to reduce these variations before they are used by the classifier. As a result of this, we were able to achieve high gesture recognition accuracy even with basic machine learning. Validating the model was another challenge, especially for the emerging transparent solar cells, which are currently not available off-the-shelf.

Figure 2. (a) Illustration of incident angle. (b) 3D geometric model of SolarGest. (c) 2D geometric analysis of vertical movement. (d) 2D geometric analysis of horizontal movement.

Key contributions of this paper can be summarized as follows:

  • Using solar energy harvesting laws, we develop a model to simulate photocurrent waveforms produced by arbitrary hand gestures in both vertical and horizontal planes relative to the solar panel. The model allows us to analyze gesture recognition performance of solar cells as a function of important parameters such as lighting condition, efficiency and form factor of the solar cell, user hand size, and proximity of hand to the solar panel. Using practical examples, we illustrate how the model can be used to optimize design and usage of SolarGest (Section 2).

  • We propose a general machine learning framework to detect any type of gestures. By combining discrete wavelet transform (DWT), dynamic time warping (DTW), and Z-score transformation, we design an end-to-end signal processing pipeline to protect SolarGest performance against variations in operating conditions (Section 3).

  • Using organic material, we developed two transparent solar cells of different levels of transparency and energy harvesting density in our photovoltaic lab. We conduct real experiments with both silicon-based opaque solar panels as well as see-through organic solar cells. With 6,960 gesture samples collected for six user-friendly gestures under different light conditions, we validate our model and demonstrate that even for transparent cells, SolarGest can detect gestures with an accuracy of 96%, which is comparable to that achieved with light sensors (Section 4).

  • Finally, we experimentally demonstrate that SolarGest consumes 44% less power compared to systems that detect gestures using light sensors (Section 5).

2. Solar Gesture Simulator

Using fundamentals of solar energy harvesting and simple geometrical arguments, we derive a model to simulate photocurrent waveforms produced by hand gestures containing arbitrary hand movements in both vertical and horizontal planes relative to the solar panel. The model allows us to study the impact of different system parameters such as lighting condition, efficiency and form factor of the solar cell, etc., on the photocurrent waveform. Using numerical experiments, we illustrate the utility of the model in terms of predicting gesture recognition accuracy and optimizing the design and usage of solar-based gesture recognition systems.

2.1. Modeling Solar Gestures

Due to the photovoltaic effect (parida2011review, ), solar cells convert incident light energy into electrical current (photocurrent). The amount of photocurrent generated is a function of the form factor of the solar cell and its current density, i.e., the amount of photocurrent generated per unit area (e.g., in mA/cm²), which is a measure of solar energy harvesting efficiency and depends on the light intensity of the operating environment. To fairly compare the efficiency of different solar cells, current density is typically reported under a standard lighting condition, named the Global Standard Spectrum (AM1.5g) (smestad2008reporting, ; riordan1990air, ). The standard current density is then obtained as (wright2012organic, ):

J_0 = \frac{q}{hc} \int \lambda \, \eta(\lambda) \, P_0(\lambda) \, d\lambda    (1)

where q is the elementary charge, c is the speed of light in free space, and h is Planck's constant. The symbol λ refers to the wavelength of incident light. η(λ) and P_0(λ) represent the solar cell absorption efficiency and light intensity at wavelength λ, respectively. Due to the linear relationship between current density and light intensity (cai2011effect, ), one can calculate the current density J at any light intensity P by,

J = J_0 \frac{P}{P_0}    (2)

where P_0 is the light radiance power under the Global Standard Spectrum (AM1.5g). Then, using Lambert's cosine law (weik2000lambert, ), the generated photocurrent I is obtained as

I = J A \cos\theta    (3)

where A is the form factor of the solar cell and θ is the incident angle, i.e., the angle between the light beam and the surface normal (see Figure 2(a)). Light from different sources, such as the sun, fluorescent lamps, and LEDs, can have different spectral irradiance profiles, resulting in different amounts of photocurrent even under the same light intensity. Although we derive the model based on AM1.5g, which is specific to sunlight, it is applicable to other irradiance spectra because gestures are differentiated by their unique patterns rather than absolute values. This will be further validated in Figure 13, which confirms that gesture patterns collected under fluorescent light are consistent with their modeled counterparts.
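The relationships in Eqs. (2) and (3) reduce to a few lines of code. The sketch below is illustrative only: the function names and the numeric values in the example are our own, not from the paper, and the current-density scaling assumes the linear relationship with light intensity stated above.

```python
import math

def current_density(j_std, p, p_std):
    """Eq. (2): linearly scale the standard current density j_std
    (measured at light intensity p_std under AM1.5g) to intensity p."""
    return j_std * p / p_std

def photocurrent(j, area, theta):
    """Eq. (3), Lambert's cosine law: photocurrent from a cell of the
    given area for light at incident angle theta (radians from the
    surface normal)."""
    return j * area * math.cos(theta)

# Illustrative numbers: halving the light intensity halves the density.
j = current_density(j_std=10.0, p=500.0, p_std=1000.0)  # -> 5.0
i_normal = photocurrent(j, area=1.0, theta=0.0)         # normal incidence
i_slant = photocurrent(j, area=1.0, theta=math.pi / 3)  # 60 degrees off normal
```

As expected from the cosine law, the photocurrent at 60° incidence is half that at normal incidence for the same cell and light intensity.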

Figure 3. (a) Simulated recognition accuracy versus light intensity for different energy harvesting efficiency. (b) Simulated recognition accuracy versus proximity for different hand size.

To model solar photocurrent under hand gestures, we present a 3D geometric model, as shown in Figure 2(b), in which the human hand and the solar cell are modeled as round surfaces with radii R and r, respectively. As many IoT devices have small form factors (blaauw2014iot, ), in this paper we consider the case where the solar cell is smaller than the hand (e.g., Lunar Watch (lunar, )), i.e., r < R (note that the model can be easily extended to the opposite case: the inner part of the solar cell, a circle with radius R, will be affected by hand movement, while the residual area will generate steady photocurrent during a gesture, so the total photocurrent is obtained as the sum of the currents from the two parts). The solar cell is assumed to be placed on a horizontal surface, and a hand performs different gestures in a parallel plane above it. During a gesture, we define the minimum distance between the solar cell and the hand as proximity, and the vertical movement space as displacement. Since r < R, only light rays with incident angles larger than certain thresholds can hit the solar cell.

Figure 2(c) and (d) show the longitudinal section of the 3D model, in which the solar cell and the hand are represented by two line segments of lengths 2r and 2R, respectively. The green area indicates the angular space in which light can be absorbed by the solar cell, while light in the gray area is blocked. In effect, a gesture comprises a time series of hand positions. Given the initial hand position, the moving direction, and the speed of hand movement, one can calculate the hand position at any successive point in time. Taking the Up gesture as an example, if the initial distance (at time zero) between the hand and the solar cell is d_0 and the hand moves at a constant speed v, then at time t the distance between hand and solar cell becomes d(t) = d_0 + v t. Thus, the corresponding threshold angles θ_1(t) and θ_2(t) for the two absorption angular spaces are

\theta_1(t) = \arctan\frac{R - r}{d(t)}, \qquad \theta_2(t) = \arctan\frac{R + r}{d(t)}    (4)

Since only light beams from the two green areas can be absorbed, the photocurrent can be calculated as

I(t) = J A \left( \int_{\theta_1(t)}^{\pi/2} \cos\theta \, d\theta + \int_{\theta_2(t)}^{\pi/2} \cos\theta \, d\theta \right) = J A \left( 2 - \sin\theta_1(t) - \sin\theta_2(t) \right)    (5)

From Eq. 5, the complete gesture waveform can be obtained by generating photocurrent values at successive points in time, i.e., {I(t_s), …, I(t_e)}, where t_s and t_e represent the start and end of the gesture, respectively. Finally, the presence of noise can be easily modeled by adding a noise term n(t) to each sample, as \tilde{I}(t) = I(t) + n(t).
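Putting the geometry and the harvesting law together, the waveform of an Up gesture can be simulated as below. This is a minimal sketch under our reading of the model: the photocurrent is taken proportional to the integral of cos θ over the two unblocked angular ranges, and all parameter values are illustrative rather than the paper's defaults.

```python
import math

def thresholds(R, r, d):
    # Threshold angles of the two absorption angular spaces (as
    # reconstructed here) for hand radius R, cell radius r, and
    # hand-to-cell distance d.
    return math.atan((R - r) / d), math.atan((R + r) / d)

def photocurrent_sample(J, A, R, r, d):
    # Integrate cos(theta) over the two unblocked angular ranges
    # [theta_i, pi/2]; the integral of cos from t to pi/2 is 1 - sin(t).
    t1, t2 = thresholds(R, r, d)
    return J * A * (2 - math.sin(t1) - math.sin(t2))

def simulate_up(J, A, R, r, d0, v, duration, n):
    # Up gesture: the hand rises from distance d0 at constant speed v;
    # sample the photocurrent waveform at n points over the duration.
    return [photocurrent_sample(J, A, R, r, d0 + v * k * duration / (n - 1))
            for k in range(n)]

wave = simulate_up(J=1.0, A=1.0, R=0.05, r=0.01, d0=0.05, v=0.2,
                   duration=1.0, n=100)
```

As the hand moves away, less light is blocked, so a simulated Up waveform rises monotonically; per-sample noise can be added on top as described in the text.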

2.2. Estimating Recognition Performance

Figure 4. SolarGest System Architecture.

Using the equations derived in Section 2.1, our model can simulate gestures under various conditions, such as varying light intensity, proximity, and energy harvesting efficiency. For the same gesture, we can also generate many synthetic samples by simulating human imperfection, such as slight variations in speed, proximity, displacement, etc., or the hand size variations of a family of users. These simulated gesture samples can, on the one hand, be used by future researchers to explore and compare different gesture recognition algorithms. On the other hand, solar-powered IoT designers can use them to analyze and optimize various design tradeoffs. Next, we simulate 5 different gestures, Up, Down, UpDown, DownUp, and LeftRight (see Figure 13), and utilize the gesture recognition framework proposed in Section 3 to illustrate such trade-off analysis using numerical experiments.

For all numerical experiments, we use a fixed set of default values for the hand radius, solar cell radius, and displacement. The solar cell current density under the Global Standard Spectrum (AM1.5g) and the average gesture speed are likewise fixed at default values, and the light intensity is set to 5000 lux. By randomly varying the speed of hand motion and the proximity, we generate 100 random samples for each of the five gestures.

Figure 3 plots simulated recognition accuracy as a function of light intensity, for three different solar energy harvesting efficiencies, or equivalently, different transparency levels. Here T1 and T2 denote transparent solar cells and S1 denotes an opaque cell (see details of our solar cell prototyping in Section 4.1). We observe that solar cells with different transparency levels require different lighting conditions to achieve a target gesture recognition accuracy. For example, to achieve 90% accuracy in a relatively dim lighting environment (50 lux), we must use a less transparent cell (e.g., T1), but a highly transparent cell (e.g., T2) can be used for improved visibility if the typical operating environment is well lit (200 lux).

Another interesting observation from Figure 3 is that, according to our model, transparent solar cells (both T1 and T2) are predicted to recognize hand gestures with very high accuracy (close to 100%) just like the opaque cells (S1) as long as the light intensity is higher than 400 lux. Given that many indoor and even very cloudy outdoor environments enjoy light intensities above 400 lux, this numerical experiment suggests that use of transparent solar cells will not have any negative effect on gesture recognition for typical use scenarios of future transparent solar-powered IoTs. Indeed, our practical experiments in Section 4 with both opaque and transparent solar cells will validate this finding.

For different hand sizes, Figure 3 plots simulated recognition accuracy as a function of gesture proximity. The results indicate that, to achieve a certain recognition accuracy, users with smaller hand size should perform gestures closer to the solar panel, compared to those with larger hand size. Such numerical experiments can be used by solar-powered IoT manufacturers to release gesture guidelines for different hand sizes, which would help all users of the product to enjoy high gesture recognition accuracies.

3. Recognition Framework

Figure 4 illustrates the system architecture and workflow of SolarGest. During a hand gesture near it, a solar-powered device captures a time series of photocurrent and delivers the data, using low-power communications such as backscatter or BLE, to a gesture recognition system, which could be located on an edge device such as a smartphone, laptop, or home hub (such edge-based processing keeps latency minimal). The gesture recognition system detects the gesture and either sends that information back to the originating device, if local control of the device is involved, or communicates with other IoT devices based on the desired action. Recognition accuracy is the key performance measure for the gesture recognition system. We propose a machine-learning-based gesture recognition framework that trains a classifier with specific features extracted from the photocurrent time series of the gesture. Before extracting features, we pass the signal through a processing pipeline to deal with a number of issues that may cause high classification errors. Signal processing and classification details of our proposed gesture recognition framework are presented next.

3.1. Signal Processing

The signal processing pipeline deals with three specific issues. First, it removes noise contained in photocurrent signal using discrete wavelet transform (DWT). Then, the boundaries of the gesture are detected using a segmentation algorithm. Finally, a signal alignment module applies a combination of dynamic time warping (DTW) and Z-score transformation on the segmented signal to address specific alignment issues that are caused by variations in operating conditions, such as hand motion speed, lighting conditions etc.

3.1.1. Denoising

Raw photocurrent signals are noisy, as shown in the bottom row of Figure 13. The fast Fourier transform (FFT) graphs in Figure 5 reveal that there is a 50 Hz noise component when the signal is collected indoors under a ceiling light powered by 50 Hz AC current, but such noise is absent when measured outdoors under the sun. In addition, due to minor imperfections in the Arduino micro-controller, Gaussian noise also exists in the photocurrent signal. Earlier work (venkatnarayan2018gesture, ) has found that the discrete wavelet transform (DWT) can effectively capture both temporal and frequency information, thereby filtering noise in both the time and frequency domains (lang1996noise, ). Unlike the FFT, which decomposes a signal with equal resolution over the whole frequency span, DWT resolves a signal at various resolutions over different frequency ranges. More specifically, DWT hierarchically decomposes a signal and provides detail coefficients and approximation coefficients at each level. The key insight for denoising with DWT is to modify the detail coefficients based on thresholding strategies. Specifically, we divide the denoising procedure into three steps.

Figure 5. (a) FFT analysis of photocurrent signals collected under indoor fluorescent light (up) and outdoor natural light (bottom), (b) CDFs of gesture duration.

First, SolarGest decomposes the photocurrent signal to level 5. The intuition for choosing level 5 is based on the sampling rate. Since we sample data at 500 Hz, the highest frequency contained in the signal is 250 Hz, by the Nyquist theorem. As observed from Figure 5, the gesture frequency is actually below roughly 5 Hz. During DWT decomposition, the frequency span at each level is half of that at the level before it. Thus, at level 5, the frequency range is [0, 250/2^5] Hz, i.e., [0, 7.8] Hz, which covers the gesture frequency. Second, a soft thresholding scheme is applied to the detail coefficients at level 5, which shrinks both positive and negative coefficients towards zero. The threshold is adaptively computed using the principle of Stein's Unbiased Risk Estimate (SURE) (gradolewski2013use, ). Finally, the inverse DWT is applied to the altered detail coefficients and the unmodified approximation coefficients to reconstruct the denoised signal. Due to space limitations, the theory of DWT is not provided; readers can refer to (burrus1998introduction, ; lang1996noise, ) for more details.
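The three steps above can be sketched with a hand-rolled Haar DWT. This is only an illustration: the paper also evaluates Daubechies and Coiflet wavelets and computes the threshold adaptively via SURE, whereas a fixed threshold value is used here to keep the sketch self-contained.

```python
import numpy as np

def haar_step(x):
    # one Haar DWT level: approximation and detail coefficients
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inverse(a, d):
    # invert one Haar level: interleave reconstructed even/odd samples
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, thr):
    # soft thresholding: shrink coefficients towards zero
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def denoise(signal, level=5, thr=0.5):
    # Step 1: decompose to `level`. Step 2: soft-threshold the detail
    # coefficients of the deepest level (a fixed `thr` stands in for
    # the paper's SURE-based adaptive threshold). Step 3: inverse DWT
    # with the remaining coefficients unmodified.
    a, details = np.asarray(signal, float), []
    for _ in range(level):
        a, d = haar_step(a)
        details.append(d)
    details[-1] = soft(details[-1], thr)
    for d in reversed(details):
        a = haar_inverse(a, d)
    return a
```

With the threshold set to zero the pipeline reconstructs the input exactly (Haar is orthonormal); with a positive threshold the output energy can only decrease.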

3.1.2. Gesture Segmentation

After denoising, the next step is to segment exact gestures from the signal time series. To help detect the start and stop of a gesture, like many other gesture recognition systems (virmani2017position, ; venkatnarayan2018gesture, ; kaholokula2016reusing, ; abdelnasser2015wigest, ), SolarGest requires users to take a short pause before and after a gesture. To detect the start and end of a gesture, previous works use either a preamble scheme (abdelnasser2015wigest, ) or a threshold-based method (i.e., a start is detected once the value rises above a predefined threshold and an end is detected when the values fall below it) (venkatnarayan2018gesture, ; kaholokula2016reusing, ). However, the former requires users to perform an additional gesture every time, which is not user-friendly, and the threshold-based method does not work if the amplitude before and after a gesture differs (e.g., Up and Down shown in Figure 11). Thus, we propose a new segmentation algorithm, which can accurately detect the plateau periods (i.e., pauses) before and after a gesture.

Figure 6. Segmentation performance. The green dots represent the detected start points and the red squares represent the detected end points.
Figure 7. The impact of different parameters on gesture profile. In each graph, only a specific parameter varies and the rest are in default value.
Figure 8. Illustration of signal alignment using Z-score transformation and DTW.

Specifically, we apply a sliding temporal window to the denoised signal. A gesture start is detected if the following two conditions hold: (1) the standard deviation of the samples in the current window is lower than a pre-defined threshold stdThr; (2) the difference between the last sample in the current window and the mean of all the samples in the window is higher than a threshold meanThr. The first condition ensures that the current window is in a plateau, while the second condition indicates that a gesture starts right after a pause. Thus, the last sample in the current window is regarded as the gesture start. The same principle is used to detect the end of a gesture, and the consecutive samples between start and end are extracted as a gesture.
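The start-detection rule can be sketched as follows. The thresholds meanThr = 0.5 and stdThr = 0.25 are the values reported later in this section; the window length is our own illustrative choice.

```python
import numpy as np

def detect_start(x, win=50, std_thr=0.25, mean_thr=0.5):
    # Slide a window over the denoised signal; a gesture start is the
    # last sample of the first window that (1) sits on a plateau (low
    # standard deviation) and (2) whose last sample jumps away from
    # the window mean by more than mean_thr.
    x = np.asarray(x, float)
    for i in range(len(x) - win + 1):
        w = x[i:i + win]
        if np.std(w) < std_thr and abs(w[-1] - w.mean()) > mean_thr:
            return i + win - 1
    return None  # no gesture start found

# a flat plateau followed by a step: the start is the first raised sample
sig = np.concatenate([np.zeros(100), np.ones(100)])
start = detect_start(sig)  # -> 100
```

The gesture end is detected symmetrically, and the samples between start and end form the candidate gesture.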

To minimize the probability of falsely detecting a gesture that did not occur, we further apply a gesture length constraint based on our experimental data. Figure 5 presents the CDFs of gesture durations when three subjects perform 6 different gestures. We can observe that around 90% of gestures are completed within 1 s. Therefore, we apply a length constraint that discards gestures shorter than 0.2 s or longer than 1.4 s. meanThr and stdThr are optimized through a trial-and-error procedure; the values used in our work are 0.5 and 0.25, respectively.

Figure 6 illustrates the gesture segmentation result, where the green dots represent the start points and red squares represent the end points. Note that during a data collection session, the user always keeps his/her hand within the operating region thus avoiding any transition effects, i.e., a slightly descending/ascending signal caused by entering/leaving the operating region. With the proposed segmentation algorithm, SolarGest successfully identifies 96% of gestures in our dataset while incurring no false positives.

3.1.3. Signal Alignment

Using our simulator presented in Section 2, we have identified specific alignment issues for gesture waveforms. We first explain these issues, followed by the techniques we have used to address them.

Figure 7 studies the impact on gesture profiles of 8 practical parameters: device parameters such as solar cell form factor and efficiency, environment and user parameters such as light intensity and hand size, and gesture parameters such as speed, hand angle, proximity, and displacement. In each graph, only one parameter varies and the rest are set to the default values presented in Section 2.2. It can be observed that each parameter indeed affects the gesture waveform, and the impact can be categorized into temporal variation (variation in waveform duration) and amplitude variation. Specifically, different gesture speeds and displacements lead to varying gesture durations, making the same gestures mismatched in the time dimension. Other parameters, such as light intensity and hand angle (the angle between the hand and the horizontal plane, as shown in Figure 2(c)), result in amplitude shifts.

We apply Z-score transformation to align gesture amplitudes and dynamic time warping (DTW) to align gestures in the time dimension. We illustrate the alignment process in Figure 8, which plots two detected signals of the LeftRight gesture. We can see that there is an amplitude shift between the two signals, as well as a mismatch in their peak-to-peak time difference. These mismatches may stem from variations in gesture proximity, light intensity, speed of hand motion, and so on.

We first apply the Z-score transformation, which is known to be effective at making multiple signals with different amplitudes comparable (cheadle2003analysis, ). After the transformation, the signal follows a standard normal distribution (i.e., mean 0 and standard deviation 1). Figure 8(b) illustrates the waveforms after the Z-score transformation: their amplitudes are converted to the same scale, roughly [-2, 2], and look very similar. After Z-score transformation, we can still observe the temporal misalignment issue. DTW has been successfully applied in various applications, such as speech recognition (myers1980performance, ) and activity recognition (sempena2011human, ), to cope with such temporal mismatch. Thus, we apply DTW to the gesture signals after Z-score transformation. The result is shown in Figure 8(c), in which the two signals almost overlap. With signal alignment, SolarGest minimizes the impact of parameters that cannot be controlled in practical use due to human imperfection.

3.2. Feature Selection and Classification

After signal processing, SolarGest extracts features from each detected gesture window and uses them as input for classification. In our system, two feature sets are considered and compared:

  • Statistical features: 22 typical time- and frequency-domain features, such as MEAN, STD, IQR, SKEW, KURT, Q2, and DominantFrequency, that have been widely used in human-related sensing (lara2013survey, ).

  • DWT coefficients: As mentioned before, DWT decomposes a signal and provides detail coefficients at multiple levels. Using these coefficients, DWT can perfectly reconstruct the original signal. Employing such DWT detail coefficients as features in classification has been extensively demonstrated in a wide range of human sensing applications (virmani2017position, ; abdelnasser2015wigest, ; wang2015understanding, ; tzanetakis2001audio, ). Motivated by these applications, we extract DWT detail coefficients as another feature set.

Before applying DWT for feature extraction, we use spline interpolation to ensure that each detected gesture window has the same length of 512. The reasons for harmonizing the window lengths are threefold. First, classifiers require the same number of features for each gesture during training and classification; to obtain the same number of detail coefficients for each gesture, DWT requires the detected gesture windows to have identical lengths. Second, as DWT halves the signal length at each level, it is computed efficiently when the length of each gesture signal is a power of 2. Third, we measured the duration of the six gestures and found that 90% of them are completed within 1 s; given the sampling rate of 500 Hz, we therefore interpolate each gesture to 512 samples.
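Length harmonization takes only a couple of lines. The paper uses spline interpolation; the sketch below substitutes linear interpolation to stay dependency-free.

```python
import numpy as np

def resample_to(x, n=512):
    # map the gesture onto n evenly spaced points (linear interpolation
    # here; the paper uses spline interpolation)
    x = np.asarray(x, float)
    return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(x)), x)

gesture = np.sin(np.linspace(0, np.pi, 337))  # a gesture of arbitrary length
g512 = resample_to(gesture)                   # power of 2: ready for 5-level DWT
```

The endpoints of the gesture are preserved exactly, and the 512-sample output decimates cleanly through five DWT levels.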

Then, we perform a 5-level DWT decomposition on each gesture and extract the detail coefficients at the 5th level as features. The reason for choosing level 5 is explained in Section 3.1.1. Regarding the selection of the wavelet, five different wavelets are considered: Haar (haar), Daubechies1 (db1), Daubechies2 (db2), Daubechies4 (db4), and Coiflet2 (coif2). Although Daubechies4 and Haar have shown good performance for Wi-Fi (virmani2017position, ) and light-sensor-based (venkatnarayan2018gesture, ) systems, we investigate the effect of wavelet selection on solar-based gesture recognition in the evaluation section.
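With every gesture resampled to 512 samples, the level-5 detail coefficients fall out of five decomposition passes. This sketch uses the Haar wavelet, one of the five candidates considered in the text.

```python
import numpy as np

def dwt_level5_features(x):
    # five Haar decomposition passes; the detail coefficients produced
    # by the final pass (level 5) form the feature vector
    a = np.asarray(x, float)
    for _ in range(5):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)  # detail at this level
        a = (a[0::2] + a[1::2]) / np.sqrt(2)  # approximation, carried on
    return d

features = dwt_level5_features(np.sin(np.linspace(0, np.pi, 512)))
# 512 / 2**5 = 16 coefficients per gesture
```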

Figure 9. (a) Effect of placing the two transparent solar cells T1 and T2 on an iPhone 7 screen that displays the text 'Hello World'. (b) The silicon-based solar cell S1.

After feature extraction, machine learning classifiers are trained to recognize different gestures. In our system, we implemented four typical classifiers that are widely used for gesture recognition including: SVM, KNN, Decision Tree (DT), and Random Forest (RF). For DT, the confidence factor (C) and minimum number of instances (M) are set to 35% and 2, respectively. For KNN, the number of nearest neighbors is set to 10 and the distance is weighted. For RF, the number of iterations (I) is set to 100. For SVM, we choose the cubic kernel. The performance comparison between different classifiers will be given in the following section.
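The four classifiers map naturally onto scikit-learn, with the caveat that the decision-tree settings quoted above are Weka-style parameters (confidence factor C, minimum instances M) with no exact scikit-learn equivalent; `min_samples_leaf` below is our approximation. The feature matrix is a synthetic stand-in for per-gesture feature vectors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# two well-separated synthetic "gestures", 40 samples x 16 features each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (40, 16)),
               rng.normal(2.0, 0.3, (40, 16))])
y = np.array([0] * 40 + [1] * 40)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=10, weights="distance"),
    "SVM": SVC(kernel="poly", degree=3),  # cubic kernel
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "DT": DecisionTreeClassifier(min_samples_leaf=2, random_state=0),
}
scores = {name: clf.fit(X, y).score(X, y) for name, clf in classifiers.items()}
```

In practice, the features would come from the signal processing pipeline above, and accuracy would be measured on held-out gesture samples rather than the training set.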

4. Performance Evaluation

We use real solar panels, both opaque and transparent, to evaluate our solar-based gesture recognition framework as well as to qualitatively validate the simulation model derived in Section 2.

4.1. Solar Cell Prototype

As shown in Figure 9, in our photovoltaic laboratory we prototyped three different solar cells: a 10x5cm silicon-based opaque solar cell (S1) and two 1x1cm transparent solar cells (referred to as T1 and T2), which were made from the same organic material (PBDB-T:ITIC) but with different transparencies (20.2% for T1 and 35.3% for T2) and thicknesses (143nm for T1 and 53nm for T2). To demonstrate the ‘see-through’ property of the transparent solar cells, we placed them on the screen of an iPhone 7. As shown in Figure 9(a), the displayed ‘Hello World’ text is clearly visible through both T1 and T2; T2 provides better ‘see-through’ performance as its higher transparency allows more visible light to pass through. In terms of energy harvesting efficiency, T1 and T2 provide different current densities; more details of our transparent solar cell prototypes, including the measured current densities, are available in (upama2017high1, ).

In Figure 10, we plot the absorption efficiency of the three solar cells in the visible light band. The opaque solar cell S1 achieves nearly 100% absorption efficiency over the entire wavelength range, whereas the absorption rates of T1 and T2 average only about 50% and 30%, respectively. As discussed in our theoretical model in Section 2.2, and as verified in the following evaluation, the energy harvesting efficiency (i.e., the transparency) affects recognition performance.

Figure 10. Absorption spectra of the three solar cells S1, T1, and T2 within visible light band.
Figure 11. Illustrations of the 6 hand gestures conducted over the solar cells. The figures in the second row show the photocurrent profile collected under the 6 gestures.
Figure 12. Data collection setup.

4.2. Gesture Data Collection

Using our prototype solar cells, we collected a comprehensive gesture dataset for the performance evaluation of our proposed solar gesture recognition framework. (Ethical approval for carrying out this experiment has been granted by the corresponding organization.) For data collection, we connect the solar cells to an Arduino Uno board as shown in Figure 12. The output of the solar cell is sampled by the Arduino via the onboard ADC at 500Hz and saved to a microSD card. For comparison purposes, we also collected the photocurrent signal from two different light sensors, TI OPT101 and Honeywell SD3410, which are widely used in ambient light based gesture recognition systems (li2016practical, ; li2017reconstructing, ; an2015visible, ; li2015human, ). Figure 12 illustrates our data collection setup in an indoor environment; the experiment is conducted in our photovoltaic research lab due to the special (and bulky) tools required to connect the transparent cell output to the Arduino.

During data collection, we considered several different settings: (1) three solar cells with different energy harvesting efficiencies/transparencies; (2) five light intensity levels across indoor and outdoor conditions (800 lux and 2600 lux for the transparent solar cells, tested only in the indoor lab environment; 10 lux, 50 lux, 800 lux, 2600 lux, and 70k lux for the opaque solar cell under different scenarios, including indoor and sunny outdoor); (3) the six hand gestures introduced in Figure 11; (4) three subjects performing the gestures; and (5) scenarios with/without human interference to investigate the robustness of SolarGest against interference (data collected using the two transparent solar cells only). Specifically, human interference is introduced by asking one subject to walk around in a half circle with a radius of 30cm while another subject is performing gestures. As suggested by (li2018self, ), light incident angles have little impact on gesture recognition accuracy; thus, we consider the case where the light source is located directly above the solar cell.

Table 1 summarizes the considered experiment settings. In total, our data collection includes thirteen sessions (i.e., 2 transparent solar cells × 2 light intensities × 2 interference conditions + 1 opaque solar cell × 5 light intensities). In each session, subjects were asked to perform each of the 6 gestures 40 times. To avoid fatigue, there was a two-minute break between sessions. The entire dataset was collected over five days and consists of 29 × 6 × 40 = 6960 gestures (29 subject-session combinations, 6 gestures, 40 repetitions each).

Parameter        Options  Values
Solar cell       3        Transparent: T1, T2; Opaque: S1
Light intensity  5        10 lux, 50 lux, 800 lux, 2600 lux, 70k lux
Interference     2        with, without
Gesture          6        Down, DownUp, FlipPalm, LeftRight, Up, UpDown
Subject          3        1 male, 2 female
Photodiode       2        TI OPT101, Honeywell SD3410
Table 1. Experiment settings.
Figure 13. Comparison of simulated gesture signal with the signal generated by solar cells.

4.3. Simulated vs. Real Waveforms

Figure 13 compares simulated gesture waveforms (top row) against actual waveforms (bottom row) collected from the prototype transparent solar cells for 5 different gestures (note that FlipPalm is not captured in our model). Even though we model the hand and solar cell as circles, the gesture signals simulated by our model are very similar to those generated by real solar cells in terms of signal features and patterns. This demonstrates that our model can serve as an effective tool to study the gesture recognition performance of next-generation solar cells under a variety of scenarios.
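The simulated-vs-real similarity can also be quantified with the DTW distance already used in the pre-processing pipeline. Below is a straightforward textbook DTW implementation, not the paper's code; a small distance indicates that a simulated waveform aligns closely with a measured one even if their timing differs.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D waveforms.

    Classic O(len(a) * len(b)) dynamic program: each cell holds the
    minimum cumulative |a[i] - b[j]| cost over all monotone alignments
    ending at (i, j), so time-stretched versions of the same gesture
    score near zero.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

For example, [1, 2, 3] and its time-stretched copy [1, 2, 2, 3] have DTW distance zero.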

4.4. Gesture Recognition Performance

In this subsection, we evaluate the performance of our gesture recognition framework with respect to different system design choices and practical environmental factors. In addition, we compare the performance of SolarGest with that of light sensor based approaches. We use recognition accuracy, the percentage of gestures that are correctly recognized by the classifier, as the metric. For each individual test, we perform 10-fold cross-validation and report the average recognition accuracy. Training and classification are implemented in Matlab.

Sampling Rate:

First, we investigate the minimum required sampling rate for SolarGest, as it directly affects the system power consumption. To do so, we down-sample the original 500Hz data to different sampling rates and apply the entire signal processing and gesture recognition pipeline at each rate. From Figure 14 we can observe that although both segmentation and recognition accuracies improve with increasing sampling rate at first, performance stabilizes at 50Hz. Therefore, we use a sampling rate of 50Hz in the subsequent analyses.
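The down-sampling step can be emulated as below. Block averaging is our assumption for how the decimation handles aliasing; the paper does not state its exact method.

```python
def downsample(signal, orig_rate=500, target_rate=50):
    """Decimate a trace to a lower sampling rate by block averaging.

    Averaging each block of orig_rate // target_rate samples acts as a
    simple anti-aliasing filter before keeping one value per block.
    """
    factor = orig_rate // target_rate
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal) - factor + 1, factor)]
```

A one-second 500Hz trace becomes 50 samples, each the mean of a 20ms block.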

Figure 14. (a) Recognition accuracy given different sampling rates; confusion matrices of transparent solar cells (b) T1 and (c) T2.

Solar Cell   KNN     Decision Tree   SVM     Random Forest
T1           96.1%   94.1%           95.1%   95.9%
T2           95.6%   93.0%           94.5%   95.2%
Table 2. Recognition accuracy given different transparencies and classifiers.
Figure 15. Recognition accuracy given (a) different wavelets; (b) different feature sets; and (c) light intensity.

Performance of transparent solar cells:

Using DWT coefficients as features, Table 2 presents the average classification accuracies of the two transparent solar cells for the dataset obtained from all three subjects under 800 lux and 2600 lux indoor lighting, with and without human interference. Despite being transparent, with limited energy harvesting capacity compared to existing opaque cells, both the T1 and T2 prototypes achieve very high accuracy under all four classifiers. This finding directly validates our earlier simulation-based prediction in Section 2.2 that transparency does not reduce the gesture recognition capability of solar cells in environments illuminated above 400 lux. Figures 14(b) and 14(c) show the confusion matrices of solar cells T1 and T2, respectively, which suggest that some gestures, e.g., FlipPalm and UpDown, are still occasionally confused with others because their patterns look very similar.

Effect of Features:

Figure 15(b) compares accuracies when KNN is trained with different feature sets. As shown, DWT features achieve approximately 98% accuracy, compared to only 87% for the statistical feature set. A more detailed wavelet analysis in Figure 15(a) reveals that Daubechies2 gives the highest accuracy for our solar cell based gesture recognition system, although Haar was reported to be the best wavelet for light sensor based gesture recognition (venkatnarayan2018gesture, ).

Effect of environment factors:

First, we test five light intensity levels that correspond to common conditions: 10 lux (dark room), 50 lux (living room), 800 lux (office), 2600 lux (cloudy), and 70k lux (sunny). The transparent solar cells are tested under 800 lux and 2600 lux only, due to their sensitivity to the environment (e.g., humidity), while the opaque solar cell is tested under all five conditions. Figure 15(c) presents the recognition accuracy of the three solar cells under the five intensity levels. For the same solar cell, higher light intensity yields higher recognition accuracy, although the improvement is minor. This indicates that in common environments, SolarGest achieves consistently high performance. S1 obtains higher accuracy than T1 and T2 at 800 lux and 2600 lux because both the energy harvesting efficiency and the form factor of S1 are larger than those of T1 and T2.

To assess the limit of SolarGest, we create an extremely dark environment (i.e., 10 lux) by turning off all lights in a dark room except a laptop screen. We find that, with our current prototype (Arduino Uno), the collected signal always remains zero, making it impossible to detect any gesture, because the resolution of the Arduino ADC (10-bit) is not sufficient to capture the minor changes in photocurrent. However, the dark-environment problem can be solved by either using a higher-resolution ADC (e.g., 16-bit) or amplifying the current. We implemented an amplification circuit and tested two amplification factors, 32 and 64. With both amplification levels, gesture accuracy reaches around 94%.
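The ADC resolution effect can be illustrated with an ideal-quantizer sketch. The 5V reference (typical for an Arduino Uno) and the sub-millivolt ripple amplitude are assumed values for illustration, not measurements from the paper.

```python
def adc_counts(voltages, bits, vref=5.0):
    """Quantize voltages with an ideal ADC of the given resolution.

    Assumes a 5 V reference, as on an Arduino Uno; one LSB is then
    vref / 2^bits (about 4.9 mV at 10 bits, about 76 uV at 16 bits).
    """
    lsb = vref / (2 ** bits)
    return [int(v / lsb) for v in voltages]

# A sub-millivolt photocurrent ripple (assumed here to model a 10 lux
# room) falls below the ~4.9 mV LSB of a 10-bit ADC, so every sample
# quantizes to zero and the gesture is invisible...
tiny = [0.0005, 0.001, 0.0008, 0.0002]
print(adc_counts(tiny, bits=10))   # all zeros
# ...while a 16-bit ADC still resolves the waveform shape.
print(adc_counts(tiny, bits=16))
```

Amplifying the current by 32 or 64 before sampling moves the ripple above the 10-bit LSB, which is the alternative fix described above.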

Second, we investigate the robustness of SolarGest against ambient human interference in Figure 16. As Figure 16(a) shows, a human walking near the solar panel introduces some fluctuations in the signal. To investigate the impact of such interference on gesture accuracy, Figure 16(b) plots accuracy with and without human interference; interference reduced accuracy by only 1.5%, and SolarGest still achieved 96% recognition accuracy.

Third, we investigate gesture recognition performance under global light intensity changes (e.g., walking from indoor to outdoor, or sunlight being blocked by a cloud during a gesture). We conduct this experiment using the simulator presented in Section 2. Specifically, we train the model on gestures simulated under stable light intensity and test it only on distorted gestures (simulated by switching the light intensity at different levels and frequencies). Our results indicate that when light intensity changes very fast (e.g., >50Hz), accuracy is not affected, whereas almost half of the distorted gestures are wrongly recognized when the intensity switching rate is low (e.g., 2Hz). However, as suggested in (li2018self, ), such slowly varying global light changes can be effectively filtered out by subtracting the baseline.
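A moving-average baseline subtraction of the kind suggested in (li2018self, ) might look like the sketch below. The 25-sample window is our choice, not the paper's: at 50Hz it spans 0.5s, so drifts much slower than the gesture are removed; in practice the window must be tuned so that gesture energy itself survives.

```python
def remove_slow_baseline(signal, window=25):
    """Subtract a centered moving-average baseline from a signal.

    Suppresses slow global light changes (e.g. a passing cloud) while
    leaving faster gesture-induced fluctuations largely intact. Window
    length is an assumed parameter, sized relative to the 50 Hz rate.
    """
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        baseline = sum(signal[lo:hi]) / (hi - lo)
        out.append(signal[i] - baseline)
    return out
```

Applied to a constant (or slowly drifting) trace, the residual is near zero, confirming that the global component is removed.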

Performance under unseen scenarios:

We consider two unseen cases. First, we train the classifier on data collected under one light intensity and test it on data collected under another; the performance of training and testing under the same light intensity, i.e., the seen scenario, is also reported. From Figure 17(a), we can see that SolarGest still achieves 88% accuracy even in the unseen lighting case. Second, we train the classifier on data collected from two subjects and test it on the remaining one. The results in Figure 17(b) indicate that SolarGest is robust to subject differences: although accuracy drops noticeably when training on subjects 1 and 2 and testing on subject 3, SolarGest achieves 93% accuracy on average for unseen users. As a result, training on a large number of subjects may not be necessary.

Comparison with light sensor based systems:

Figure 18 compares signal traces from solar cell T1 and the two light sensors, collected at the same time; the traces from the light sensors are visibly noisier. As a result, using the solar cell signal, the system detects all ten gestures, whereas each of the two light sensors detects only eight of the ten. Table 3 compares the overall performance of light sensors and transparent solar cells in terms of both segmentation and recognition accuracy. Solar cells achieve 12% to 26% higher segmentation accuracy and at least 5% better recognition accuracy than the light sensors. These results demonstrate that, for gesture recognition, even transparent solar cells are no worse than light sensors.

5. Power Measurements

In this section, we investigate the power saving advantage of SolarGest over conventional light sensor based systems. As shown in Figure 4, the power consumption of SolarGest comes from two parts: MCU sampling and data transmission. In contrast, light sensor based systems consume additional energy to power the light sensors. In the following, we perform a conservative comparison that assumes only one sensor is required for light sensor based systems (existing works require an array of sensors (kaholokula2016reusing, ; venkatnarayan2018gesture, )).

Figure 16. (a) Comparison of raw signals with and without interference. (b) Impact of interference on recognition accuracy.
Metric                  OPT101   SD3410   SolarGest (T1)   SolarGest (T2)
Segmentation Accuracy   70.2%    84.3%    96.2%            96.1%
Recognition Accuracy    55.2%    89.9%    96.1%            95.6%
Table 3. Performance comparison between light sensors and solar cells.

MCU Power Measurement: since both the solar cell and the light sensor are sampled by an analog-to-digital converter (ADC), we conducted an experiment to measure the power consumed by ADC sampling. We selected the Texas Instruments SensorTag as the target device, which is equipped with an ultra-low-power ARM Cortex-M3 MCU and runs the Contiki operating system. As shown in Figure 14, a sampling rate of 50Hz is required for SolarGest to achieve over 95% accuracy. Thus, we duty-cycled the MCU to sample at 50Hz and used an oscilloscope to measure the average power consumption of the SensorTag during sampling. According to our measurement, the system consumes 20.28µW when sampling the signal at 50Hz.

Figure 17. Recognition accuracy on (a) unseen lighting environment, (b) unseen user.
Figure 18. Segmentation performance comparison using signals from (a) solar cell T1, (b) photodiode OPT101, and (c) SD3410, under gesture FlipPalm. The green dots represent the detected start points and the red squares represent the detected end points.

Light sensor Power Measurement: in addition, we measure the power consumed by the light sensor itself. We consider two light sensors, the TI OPT101 and Honeywell SD3410, that are widely used in the literature (li2015human, ; an2015visible, ; li2017reconstructing, ; li2016practical, ). In particular, we measured the power consumption of the sensors under different light intensities (reflecting normal operating scenarios), as the datasheet only gives the power consumption when the sensor operates in a dark environment. Figure 19 illustrates the measurement setup. To minimize the effect of ambient light, we conduct the experiment in a box with one side open. A smartphone is placed on top of the box and its flash is used as the light source. We create an aperture with a radius of 1cm on the top of the box and place the light sensor right below the aperture to control the light incident angle. The light sensor is powered by a 3V battery and a multimeter is used to measure the current. Figure 20 presents the power consumption of the two light sensors under different light intensities. We can observe that the power consumption is not constant: when the light intensity is lower than 100 lux, the power consumption increases linearly with light intensity, and above 100 lux it becomes stable. Since the light intensity of a normal environment is usually higher than 100 lux (e.g., 200-800 lux in an office), this means that, without duty-cycling (sensor always on), the OPT101 and SD3410 consume around 650µW and 730µW, respectively. With a 50Hz duty cycle, the power consumption reduces to 39.78µW and 42.18µW, respectively. In addition, our results are consistent with the datasheet at zero light intensity (OPT101datesheet, ). In contrast, the solar cell is passive and does not require any external power.

Figure 19. Light sensor power measurement setup.

Overall System Power Saving: now, we analyze the overall system power consumption. Considering a 50Hz sampling rate and a duty-cycled system, Table 4 compares the power consumption of SolarGest and a light sensor (i.e., photodiode) based system. Note that the photodiodes are assumed to operate in photoconductive mode, which requires an external power supply, in order to provide a faster response (chen2015high, ). Recent advances in Wi-Fi backscattering have demonstrated that a 1 Mbps data rate can be achieved with only 14.5µW power consumption (kellogg2016passive, ). Given a sampling frequency of 50Hz, SolarGest has 100 Bytes of data (2 Bytes for each 12-bit ADC reading) to transmit per second, so backscattering-based data transmission requires about 0.023µW. Overall, the power consumption of SolarGest is around 20.3µW, while that of a light sensor based system is about 60.1µW; SolarGest thus saves over 66% of the energy. In the more general case where BLE is used for communication, 31.11µW is required for the data transmission (100 Bytes per second) based on our measurement, using the TI SensorTag as the target platform. In this case, the overall system power consumption for SolarGest and the two light sensor based systems increases to 51.39µW, 91.17µW, and 93.57µW, respectively, but SolarGest still saves at least 44% of the energy compared to light sensor based systems. Furthermore, current light sensor based systems implement an array of light sensors (e.g., 9 in (kaholokula2016reusing, ) and 36 in (venkatnarayan2018gesture, )), which incurs much higher power consumption.
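As a sanity check, the savings in Table 4 can be recomputed from the per-component figures (all values in microwatts, taken from the measurements above):

```python
# Per-component power budget from the measurements (microwatts).
MCU, BACKSCATTER, BLE = 20.28, 0.023, 31.11
SENSOR = {"Solar Cell": 0.0, "OPT101": 39.78, "SD3410": 42.18}

def totals(radio_uw):
    """Total system power for each sensing front-end with a given radio."""
    return {name: MCU + s + radio_uw for name, s in SENSOR.items()}

for radio, p in (("Backscatter", BACKSCATTER), ("BLE", BLE)):
    t = totals(p)
    solar = t["Solar Cell"]
    for name in ("OPT101", "SD3410"):
        saving = 100.0 * (1 - solar / t[name])
        print(f"{radio} vs {name}: {saving:.1f}% saving")
```

This reproduces the 66.2%/67.5% (backscatter) and 43.6%/45.1% (BLE) savings reported in Table 4.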

Figure 20. Power consumption of light sensors with different light intensities.
             Power Consumption (µW)                        Savings
Sensor       MCU     Sensor   Backscatter   BLE           (Backscatter / BLE)
Solar Cell   20.28   0        0.023         31.11         -
OPT101       20.28   39.78    0.023         31.11         66.2% / 43.6%
SD3410       20.28   42.18    0.023         31.11         67.5% / 45.1%
Table 4. Power consumption comparison.

6. Related work

6.1. Gesture Recognition

Gesture recognition has been extensively investigated in the literature. Vision based gesture recognition leverages a camera or depth sensor to detect gestures (izadi2011kinectfusion, ; howe2000bayesian, ). Motion sensor based approaches exploit accelerometers and gyroscopes to track human body/hand movement (ruiz2011user, ; xu2012taplogger, ). RF signal based gesture recognition systems utilize information extracted from the RF signal, such as RSS (abdelnasser2015wigest, ; 8519328, ), CSI (sbirlea2013automatic, ), and Doppler shift (pu2013whole, ), to recognize different gestures. Acoustic signal based methods leverage the Doppler shift of reflected sound waves caused by gestures (gupta2012soundwave, ; pittman2016multiwave, ). The underlying principle of light sensor based approaches is that different gestures leave distinct shadows that can be captured by an array of photodiodes (venkatnarayan2018gesture, ; kaholokula2016reusing, ; li2017reconstructing, ; li2015human, ; li2016practical, ; li2018self, ).

Although such systems achieve excellent gesture recognition accuracy, they suffer from some limitations. For example, vision based systems usually incur heavy computation costs and raise privacy concerns arising from sensitive camera data (sbirlea2013automatic, ; weinberg2011still, ). Moreover, these systems suffer from high energy consumption due to the use of various sensors, such as accelerometers, depth sensors, and microphones, which impedes the goal of ubiquitous and perpetual gesture recognition. In contrast, SolarGest utilizes the energy harvesting signal itself for gesture recognition, which not only eliminates sensor energy consumption but also provides an inexhaustible power supply to the IoT device.

6.2. Solar Energy Harvesting based Sensing

Based on the principle that the solar energy harvesting signal reflects the environmental light intensity, researchers have also utilized the solar cell as a light indicator to perform indoor positioning (randall2007luxtrace, ). In addition, coarse-grained asset localization has been achieved by analyzing the harvested energy patterns of a solar panel (chen2016sunspot, ). A recent work (li2018self, ) employed an array of photodiodes around a smartwatch for the dual use of solar energy harvesting and gesture recognition. Compared to (li2018self, ), a key advantage of SolarGest is that it can be seamlessly integrated into the smartwatch screen without impacting its appearance.

In terms of solar cell based gesture recognition, the most relevant work is (varshney2017battery, ), in which the authors utilized an opaque solar cell to identify three hand gestures. However, SolarGest differs in three aspects. First, (varshney2017battery, ) differentiates three gestures, Swipe, Two Taps, and Four Taps, based on repetitions of a basic gesture, while SolarGest recognizes gestures based on their unique patterns. Second, since transparent solar cells have much lower energy harvesting efficiency than their opaque counterparts, their gesture recognition capability was hitherto untested; we demonstrated their potential by prototyping transparent cells and performing practical experiments with them. Third, we developed a theoretical model to investigate the effect of different practical parameters in a solar based gesture recognition system and conducted a comprehensive experimental study to evaluate gesture recognition performance under different solar cell transparencies and light intensities.

7. Conclusion

We have proposed SolarGest, a solar-based gesture recognition system for ubiquitous solar-powered IoT devices. Using solar energy harvesting fundamentals and geometric analysis, we derived a model that accurately simulates arbitrary hand gestures and enables estimation of gesture recognition performance under various conditions. Employing real solar cells, both opaque and transparent, we demonstrated that our system can detect six gestures with 96% accuracy under typical use scenarios while consuming 44% less power than light sensor based approaches. Although we motivated the use case of transparent solar cells on the screens of mobile devices, we have not analyzed the impact of backlight on gesture recognition. Transparent solar cells are still at an early stage of research, and we do not yet have access to commercially available cells complete with development kits for integration with IoT platforms. However, such experiments can be conducted as soon as transparent cells become commercially available at low cost. When that opportunity arrives, we intend to extend our simulator with the capability to analyze the impact of incident light from both sides of a transparent solar cell.

References

  • [1] Ankit Chaudhary, Jagdish Lal Raheja, Karen Das, and Sonia Raheja. Intelligent approaches to interact with machines using hand gesture recognition in natural way: a survey. arXiv preprint:1303.2292, 2013.
  • [2] Zhou Ren, Jingjing Meng, Junsong Yuan, and Zhengyou Zhang. Robust hand gesture recognition with kinect sensor. In Proceedings of the 19th ACM international conference on Multimedia, pages 759–760, 2011.
  • [3] Xuan Wang, Linjie Zhi, and Klaus Müllen. Transparent, conductive graphene electrodes for dye-sensitized solar cells. Nano letters, 8(1):323–327, 2008.
  • [4] Christopher J Traverse, Richa Pandey, Miles C Barr, and Richard R Lunt. Emergence of highly transparent photovoltaics for distributed applications. Nature Energy, 2(11):849, 2017.
  • [5] Heba Abdelnasser, Moustafa Youssef, and Khaled A Harras. Wigest: A ubiquitous wifi-based gesture recognition system. In Computer Communications (INFOCOM), 2015 IEEE Conference on, pages 1472–1480. IEEE, 2015.
  • [6] Dragos Sbîrlea, Michael G Burke, Salvatore Guarnieri, Marco Pistoia, and Vivek Sarkar. Automatic detection of inter-application permission leaks in android applications. IBM Journal of Research and Development, 57(6):10–1, 2013.
  • [7] Qifan Pu, Sidhant Gupta, Shyamnath Gollakota, and Shwetak Patel. Whole-home gesture recognition using wireless signals. In Proceedings of the 19th annual international conference on Mobile computing & networking, pages 27–38. ACM, 2013.
  • [8] Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, et al. Kinectfusion: real-time 3d reconstruction and interaction using a moving depth camera. In Proceedings of the 24th annual ACM symposium on User interface software and technology, pages 559–568. ACM, 2011.
  • [9] Nicholas R Howe, Michael E Leventon, and William T Freeman. Bayesian reconstruction of 3d human motion from single-camera video. In Advances in neural information processing systems, pages 820–826, 2000.
  • [10] Sidhant Gupta, Daniel Morris, Shwetak Patel, and Desney Tan. Soundwave: using the doppler effect to sense gestures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1911–1914. ACM, 2012.
  • [11] Corey Pittman, Pamela Wisniewski, Conner Brooks, and Joseph J LaViola Jr. Multiwave: Doppler effect based gesture recognition in multiple dimensions. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pages 1729–1736, 2016.
  • [12] Jaime Ruiz, Yang Li, and Edward Lank. User-defined motion gestures for mobile interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 197–206. ACM, 2011.
  • [13] Zhi Xu, Kun Bai, and Sencun Zhu. Taplogger: Inferring user inputs on smartphone touchscreens using on-board motion sensors. In Proceedings of the fifth ACM conference on Security and Privacy in Wireless and Mobile Networks, pages 113–124. ACM, 2012.
  • [14] Raghav H Venkatnarayan and Muhammad Shahzad. Gesture recognition using ambient light. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(1):40, 2018.
  • [15] M Kaholokula. Reusing ambient light to recognize hand gestures. Dartmouth college, 2016.
  • [16] Tianxing Li, Xi Xiong, Yifei Xie, George Hito, Xing-Dong Yang, and Xia Zhou. Reconstructing hand poses using visible light. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(3):71, 2017.
  • [17] Tianxing Li, Chuankai An, Zhao Tian, Andrew T Campbell, and Xia Zhou. Human sensing using visible light communication. In Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, pages 331–344. ACM, 2015.
  • [18] Tianxing Li, Qiang Liu, and Xia Zhou. Practical human sensing in the light. In Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services, pages 71–84. ACM, 2016.
  • [19] Ambuj Varshney, Andreas Soleiman, Luca Mottola, and Thiemo Voigt. Battery-free visible light sensing. In Proceedings of the 4th ACM Workshop on Visible Light Communication Systems, pages 3–8. ACM, 2017.
  • [20] Bhubaneswari Parida, S. Iniyan, and Ranko Goic. A review of solar photovoltaic technologies. Renewable and sustainable energy reviews, 15(3):1625–1636, 2011.
  • [21] Greg P Smestad, Frederik C Krebs, Carl M Lampert, Claes G Granqvist, KL Chopra, Xavier Mathew, and Hideyuki Takakura. Reporting solar cell efficiencies in solar energy materials and solar cells, 2008.
  • [22] C Riordan and R Hulstron. What is an air mass 1.5 spectrum?(solar cell performance calculations). In Photovoltaic Specialists Conference, 1990., Conference Record of the Twenty First IEEE, pages 1085–1088. IEEE, 1990.
  • [23] Matthew Wright and Ashraf Uddin. Organic inorganic hybrid solar cells: A comparative review. Solar energy materials and solar cells, 107:87–111, 2012.
  • [24] Xiaomei Cai, Shengwei Zeng, Xin Li, Jiangyong Zhang, Shuo Lin, Ankai Lin, and Baoping Zhang. Effect of light intensity and temperature on the performance of gan-based pin solar cells. In Electrical and Control Engineering (ICECE), 2011 International Conference on, pages 1535–1537. IEEE, 2011.
  • [25] Martin H Weik. Lambert’s cosine law. In Computer Science and Communications Dictionary, pages 868–868. Springer, 2000.
  • [26] D Blaauw, D Sylvester, P Dutta, Y Lee, I Lee, S Bang, Y Kim, G Kim, P Pannuto, YS Kuo, et al. Iot design space challenges: Circuits and systems. In Symp. VLSI Circuits Dig. Tech. Papers, pages 1–2, 2014.
  • [27] Lunar watch. https://lunar-smartwatch.com/.
  • [28] Markus Lang, Haitao Guo, Jan E Odegard, C Sidney Burrus, and Raymond O Wells. Noise reduction using an undecimated discrete wavelet transform. IEEE Signal Processing Letters, 3(1):10–12, 1996.
  • [29] Dawid Gradolewski and Grzegorz Redlarski. The use of wavelet analysis to denoising of electrocardiography signal. In XV International PhD Workshop OWD, volume 19, 2013.
  • [30] C Sidney Burrus, Ramesh A Gopinath, Haitao Guo, Jan E Odegard, and Ivan W Selesnick. Introduction to wavelets and wavelet transforms: a primer, volume 1. Prentice hall New Jersey, 1998.
  • [31] Aditya Virmani and Muhammad Shahzad. Position and orientation agnostic gesture recognition using wifi. In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, pages 252–264. ACM, 2017.
  • [32] Chris Cheadle, Marquis P Vawter, William J Freed, and Kevin G Becker. Analysis of microarray data using z score transformation. The Journal of molecular diagnostics, 5(2):73–81, 2003.
  • [33] Cory Myers, Lawrence Rabiner, and Aaron Rosenberg. Performance tradeoffs in dynamic time warping algorithms for isolated word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 28(6):623–635, 1980.
  • [34] Samsu Sempena, Nur Ulfa Maulidevi, and Peb Ruswono Aryan. Human action recognition using dynamic time warping. In Electrical Engineering and Informatics (ICEEI), 2011 International Conference on, pages 1–5. IEEE, 2011.
  • [35] Oscar D Lara, Miguel A Labrador, et al. A survey on human activity recognition using wearable sensors. IEEE Communications Surveys and Tutorials, 15(3):1192–1209, 2013.
  • [36] Wei Wang, Alex X Liu, Muhammad Shahzad, Kang Ling, and Sanglu Lu. Understanding and modeling of wifi signal based human activity recognition. In Proceedings of the 21st annual international conference on mobile computing and networking, pages 65–76. ACM, 2015.
  • [37] George Tzanetakis, Georg Essl, and Perry Cook. Audio analysis using the discrete wavelet transform. In Proc. Conf. in Acoustics and Music Theory Applications, volume 66, 2001.
  • [38] Mushfika Baishakhi Upama, Matthew Wright, Naveen Kumar Elumalai, Md Arafat Mahmud, Dian Wang, Cheng Xu, and Ashraf Uddin. High-efficiency semitransparent organic solar cells with non-fullerene acceptor for window application. ACS Photonics, 4(9):2327–2334, 2017.
  • [39] Chuankai An, Tianxing Li, Zhao Tian, Andrew T Campbell, and Xia Zhou. Visible light knows who you are. In Proceedings of the 2nd International Workshop on Visible Light Communications Systems, pages 39–44. ACM, 2015.
  • [40] Yichen Li, Tianxing Li, Ruchir A Patel, Xing-Dong Yang, and Xia Zhou. Self-powered gesture recognition with ambient light. In The 31st Annual ACM Symposium on User Interface Software and Technology, pages 595–608. ACM, 2018.
  • [41] OPT101 datasheet. http://www.ti.com/lit/ds/symlink/opt101.pdf.
  • [42] Zefeng Chen, Zhenzhou Cheng, Jiaqi Wang, Xi Wan, Chester Shu, Hon Ki Tsang, Ho Pui Ho, and Jian-Bin Xu. High responsivity, broadband, and fast graphene/silicon photodetector in photoconductor mode. Advanced Optical Materials, 3(9):1207–1214, 2015.
  • [43] Bryce Kellogg, Vamsi Talla, Shyamnath Gollakota, and Joshua R Smith. Passive wi-fi: Bringing low power to wi-fi transmissions. In NSDI, volume 16, pages 151–164, 2016.
  • [44] H. Abdelnasser, K. A. Harras, and M. Youssef. A ubiquitous wifi-based fine-grained gesture recognition system. IEEE Transactions on Mobile Computing, 2018.
  • [45] Zachary Weinberg, Eric Y Chen, Pavithra Ramesh Jayaraman, and Collin Jackson. I still know what you visited last summer: Leaking browsing history via user interaction and side channel attacks. In Security and Privacy (SP), 2011 IEEE Symposium on, pages 147–161. IEEE, 2011.
  • [46] Julian Randall, Oliver Amft, Jürgen Bohn, and Martin Burri. Luxtrace: indoor positioning using building illumination. Personal and ubiquitous computing, 11(6):417–428, 2007.
  • [47] Dong Chen, Srinivasan Iyengar, David Irwin, and Prashant Shenoy. Sunspot: Exposing the location of anonymous solar-powered homes. In Proceedings of the 3rd ACM International Conference on Systems for Energy-Efficient Built Environments, pages 85–94. ACM, 2016.