I Introduction
One of the important modules in the reliable recovery of data sent over a communication channel is the detection algorithm, where the transmitted signal is estimated from a noisy and corrupted version observed at the receiver. The design and analysis of this module has traditionally relied on mathematical models that describe the transmission process, signal propagation, receiver noise, and many other components of the system that affect the end-to-end signal transmission and reception. Most communication systems today convey data by embedding it into electromagnetic (EM) signals, which lend themselves to tractable channel models based on a simplification of Maxwell's equations. However, there are cases where tractable mathematical descriptions of the channel are elusive, either because the EM signal propagation is very complicated or because it is poorly understood. In addition, there are communication systems that do not use EM wave signalling, and the corresponding communication channel models may be unknown or mathematically intractable. Some examples of the latter are underwater communication using acoustic signals [1] as well as molecular communication, which relies on chemical signals to interconnect tiny devices with sub-millimeter dimensions in environments such as inside the human body [2, 3, 4, 5].
Even when the underlying channel models are known, the channel conditions may change with time, so many model-based detection algorithms rely on estimating the instantaneous channel state information (CSI) (i.e., the channel model parameters) for detection. Typically, this is achieved by transmitting and receiving a pre-designed pilot sequence, which is known by the receiver, for estimating the CSI. However, this estimation process entails overhead that decreases the data transmission rate. Moreover, the accuracy of the estimation may also affect the performance of the detection algorithm.
In this paper, we investigate how different techniques from artificial intelligence and deep learning
[6, 7, 8] can be used to design detection algorithms for communication systems that learn directly from data. We show that these algorithms are robust enough to perform detection under changing channel conditions, without knowing the underlying channel models or the CSI. This approach is particularly effective in emerging communication technologies, such as molecular communication, where accurate models may not exist or are difficult to derive analytically. For example, tractable analytical channel models for signal propagation in molecular communication channels with multiple reactive chemicals have been elusive [9, 10, 11].

Some examples of machine learning tools applied to design problems in communication systems include multi-user detection in code-division multiple-access (CDMA) systems [12, 13, 14, 15], decoding of linear codes [16], design of new modulation and demodulation schemes [17, 18], detection and channel decoding [19, 20, 21, 22, 23, 24], and estimation of channel model parameters [25, 26]. A recent survey of machine learning techniques applied to communication systems can be found in [27]. The approach taken in most of these previous works was to use machine learning to improve one component of the communication system based on the knowledge of the underlying channel models.
Our approach is different from prior works since we assume that the mathematical models for the communication channel are completely unknown. This is motivated by the recent success in using deep neural networks (NNs) for end-to-end system design in applications such as image classification [28, 29], speech recognition [30, 31, 32], machine translation [33, 34], and bioinformatics [35]. For example, Figure 1 highlights some of the similarities between speech recognition, where deep NNs have been very successful at improving the detector's performance, and digital communication systems for wireless and molecular channels. As indicated in the figure, for speech processing, the transmitter is the speaker, the transmission symbols are words, and the carrier signal is acoustic waves. At the receiver, the goal of the detection algorithm is to recover the sequence of transmitted words from the acoustic signals that are received by the microphone. Similarly, in communication systems, such as wireless or molecular communications, the transmitted symbols are bits and the carrier signals are EM waves or chemical signals. At the receiver, the goal of the detection algorithm is to detect the transmitted bits from the received signal. One important difference between communication systems and speech recognition is the size of the transmission symbol set, which is significantly larger for speech.
Motivated by this similarity, in this work we investigate how techniques from deep learning can be used to train a detection algorithm from samples of transmitted and received signals. We demonstrate that, using known NN architectures such as a recurrent neural network (RNN), it is possible to train a detector without any knowledge of the underlying system model. In this approach, the receiver goes through a training phase where a NN detector is trained using known transmission signals. We also propose a real-time NN sequence detector, which we call the sliding bidirectional RNN (SBRNN) detector, that detects the symbols corresponding to a data stream as they arrive at the destination. We demonstrate that if the SBRNN detector or the other NN detectors considered in this work are trained using a diverse dataset that contains sequences transmitted under different channel conditions, the detectors will be robust to changing channel conditions, eliminating the need for instantaneous CSI estimation for the specific channels considered in this work.
At first glance, the training phase in this approach may seem like extra overhead. However, if the underlying channel models are known, then the models can be used offline to generate training data under a diverse set of channel conditions. We demonstrate that using this approach, it is possible to train our SBRNN algorithm such that it does not require any instantaneous CSI. Another important benefit of NN detectors in general is that they return likelihoods for each symbol. These likelihoods can be fed directly from the detector into a soft decoding algorithm, such as the belief propagation algorithm, without requiring a dedicated module to convert the detected symbols into likelihoods.
To evaluate the performance of NN detectors, we first use the Poisson channel model, a common model for optical channels and molecular communication channels [36, 37, 38, 39, 40, 41]. We use this model to compare the performance of the NN detectors to the Viterbi detector (VD). We show that for channels with long memories, the SBRNN detection algorithm is computationally more efficient than the VD. Moreover, the VD requires CSI estimation, and its performance can degrade if this estimate is not accurate, while the SBRNN detector can perform detection without the CSI, even in a channel with changing conditions. We show that the bit error rate (BER) performance of the proposed SBRNN is better than that of the VD with CSI estimation error, and that it outperforms other well-known NN detectors such as the RNN detector. As another performance measure, we use the experimental data collected by the molecular communication platform presented in [42]. The mathematical models underlying this experimental platform are currently unknown. We demonstrate that the proposed SBRNN algorithm can be used to train a sequence detector directly from limited measurement data. We also demonstrate that this approach performs significantly better than the detector used in previous experimental demonstrations [43, 44], as well as other NN detectors.
The rest of the paper is organized as follows. In Section II we present the problem statement. Then, in Section III, detection algorithms based on NNs are introduced, including the newly proposed SBRNN algorithm. The Poisson channel model and the VD are introduced in Section IV. The performance of the NN detection algorithms is evaluated using this channel model and compared against the VD in Section V. In Section VI, the performance of the NN detection algorithms is evaluated using a small data set that is collected via an experimental platform. Concluding remarks are provided in Section VII.
II Problem Statement
In a digital communication system, data is converted into a sequence of symbols for transmission over the channel. This process is typically carried out in two steps: in the first step, source coding is used to compress or represent the data using symbols or bits; in the second step, channel coding is used to introduce extra redundant symbols to mitigate the errors that may be introduced as part of the transmission and reception of the data [45]. Let $\mathcal{X}$ be the finite set of symbols that could be sent by the transmitter, and $x_k$ be the $k$-th symbol that is transmitted. The channel coding can be designed such that the individual symbols in a long sequence are drawn according to the probability mass function (PMF) $P_X(x)$. The signal that is observed at the destination is noisy and corrupted due to the perturbations introduced as part of the transmission, propagation, and reception processes. We refer to these three processes collectively as the communication channel, or simply the channel. Let the random vector $\mathbf{y}_k$ of length $\ell$ be the observed signal at the destination during the $k$-th transmission. Note that the observed signal is typically a vector while the transmitted symbol is typically a scalar. A detection algorithm is then used to estimate the transmitted symbols from the observed signal at the receiver. Let $\hat{x}_k$ be the symbol that is estimated for the $k$-th transmitted symbol $x_k$. After detection, the estimated symbols are passed to a channel decoder to correct some of the errors in detection, and then to a source decoder to recover the data. All the components of a communication system, shown in Figure 2, are designed to ensure reliable data transfer.

Typically, to design these modules, mathematical channel models are required, which describe the relationship between the transmitted symbols and the observed signal through
(1)  $P_{\text{model}}(\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_k \mid x_1, x_2, \ldots, x_k; \boldsymbol{\Theta}),$
where $\boldsymbol{\Theta}$ are the model parameters. Some of these parameters can be static (constants that do not change with channel conditions) and some of them can dynamically change with channel conditions over time. In this work, model parameters are considered to be the parameters that change with time. Hence, we use the terms model parameters and instantaneous CSI interchangeably. Using this model, the detection can be performed through symbol-by-symbol detection, where $\hat{x}_k$ is estimated from $\mathbf{y}_k$, or using sequence detection, where the sequence $\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_k$ is estimated from the sequence $\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_k$. (Note that the sequence of $k$ symbols can also be estimated from more than $k$ observed signals; however, to keep the notation simpler, without loss of generality, we assume both sequences have the same length.) As an example, for a simple channel with no intersymbol interference (ISI), given by the channel model $P_{\text{model}}(\mathbf{y}_k \mid x_k; \boldsymbol{\Theta})$, and a known PMF for the transmission symbols $P_X(x)$, a maximum a posteriori (MAP) estimator can be devised as
(2)  $\hat{x}_k = \arg\max_{x \in \mathcal{X}} P_{\text{model}}(\mathbf{y}_k \mid x; \boldsymbol{\Theta}) \, P_X(x).$
Therefore, for detection, both the channel model and the model parameters $\boldsymbol{\Theta}$, which may change with time, are required. For this reason, many detection algorithms periodically estimate the model parameters (i.e., the CSI) by transmitting known symbols and then using the observed signals at the receiver for CSI estimation [46]. This extra overhead leads to a decrease in the data rate. One way to avoid CSI estimation is by using blind detectors. These detectors typically assume a particular probability distribution over $\boldsymbol{\Theta}$, and perform the detection without estimating the instantaneous CSI, at the cost of a higher probability of error. However, estimating the joint distribution over all model parameters can also be difficult, requiring a large amount of measurement data under various channel conditions. One of the problems we consider in this work is whether NN detectors can learn this distribution during training, or learn to simultaneously estimate the CSI and detect the symbols. This approach results in a robust detection algorithm that performs well under different and changing channel conditions without any knowledge of the channel models or their parameters.

When the underlying channel models do not lend themselves to computationally efficient detection algorithms, or are partly or completely unknown, the best approach to designing detection algorithms is unclear. For example, in communication channels with memory, the complexity of the optimal VD increases exponentially with the memory length, and quickly becomes infeasible for systems with long memory. Note that the VD also relies on the knowledge of the channel model in terms of its input-output transition probability. As another example, tractable channel models for molecular communication channels with multiple reactive chemicals are unknown [9, 10, 11]. We propose that in these scenarios, a data-driven approach using deep learning is an effective way to train detectors to determine the transmitted symbols directly using known transmission sequences.
III Detection Using Deep Learning
Estimating the transmitted symbol from the received signals can be performed using NN architectures through supervised learning. This is achieved in two phases. First, a training dataset is used to train the NN offline. Once the network is trained, it can be deployed and used for detection. Note that the training phase is performed once offline, and therefore, it is not part of the detection process after deployment. We start this section by describing the training process.
III-A Training the Detector
Let $m = |\mathcal{X}|$ be the cardinality of the symbol set $\mathcal{X} = \{s_1, s_2, \ldots, s_m\}$, and let $\mathbf{x}_k$ be the one-hot representation of the symbol transmitted during the $k$-th transmission, given by
(3)  $\mathbf{x}_k = \big[\mathbb{1}(x_k = s_1),\ \mathbb{1}(x_k = s_2),\ \ldots,\ \mathbb{1}(x_k = s_m)\big]^{\intercal},$
where $\mathbb{1}(\cdot)$ is the indicator function. Therefore, the element corresponding to the symbol that is transmitted is 1, and all other elements of $\mathbf{x}_k$ are 0. Note that this is also the PMF of the transmitted symbol during the $k$-th transmission, where, at the transmitter, with probability 1, one of the symbols is transmitted. Also note that the length of the vector $\mathbf{x}_k$ is $m$, which may be different from the length $\ell$ of the observation vector $\mathbf{y}_k$ at the destination.
The detection algorithm goes through two phases. In the first phase, known sequences of symbols from $\mathcal{X}$ are transmitted repeatedly and received by the system to create a set of training data. The training data can be generated by selecting the transmitted symbols randomly according to a PMF, and generating the corresponding received signal using mathematical models, simulations, experimental measurements, or field measurements. Let $\mathbf{X}^{(i)}$ be a sequence of consecutively transmitted symbols (in the one-hot representation), and $\mathbf{Y}^{(i)}$ the corresponding sequence of observed signals at the destination. Then, the training dataset is represented by
(4)  $\mathcal{D} = \big\{ (\mathbf{X}^{(1)}, \mathbf{Y}^{(1)}),\ (\mathbf{X}^{(2)}, \mathbf{Y}^{(2)}),\ \ldots,\ (\mathbf{X}^{(n)}, \mathbf{Y}^{(n)}) \big\},$
which consists of $n$ training samples, where the $i$-th sample contains a sequence of consecutive transmissions.
This dataset is then used to train a deep NN classifier that maps the received signal to one of the transmission symbols in $\mathcal{X}$. The input to the NN can be the raw observed signals $\mathbf{y}_k$, or a set of features extracted from the received signals. The NN outputs are the vectors $\hat{\mathbf{x}}_k$, computed using the parameters $\boldsymbol{\theta}$ of the NN. Using the above interpretation of $\mathbf{x}_k$ as a probability vector, the entries of $\hat{\mathbf{x}}_k$ are the estimates of the probability of each symbol given the observations and the parameters of the NN. Note that this output is also useful for soft decision channel decoders (i.e., decoders where the decoder inputs are PMFs), which are typically the next module after detection, as shown in Figure 2. If channel coding is not used, the symbol is estimated as the one corresponding to the largest entry of $\hat{\mathbf{x}}_k$.

During the training, known transmission sequences of symbols are used to find the optimal set of parameters $\boldsymbol{\theta}^{*}$ for the NN such that
(5)  $\boldsymbol{\theta}^{*} = \arg\min_{\boldsymbol{\theta}} \mathcal{L}(\mathbf{x}_k, \hat{\mathbf{x}}_k),$
where $\mathcal{L}$ is the loss function. This optimization problem is typically solved using the training data, variants of stochastic gradient descent, and backpropagation [7]. Since the output of the NN is a PMF, the cross-entropy loss function can be used for this optimization [7]:

(6)  $\mathcal{L}_{\text{CE}} = H(\mathbf{x}_k, \hat{\mathbf{x}}_k) = H(\mathbf{x}_k) + D_{\text{KL}}(\mathbf{x}_k \,\|\, \hat{\mathbf{x}}_k),$

where $H(\mathbf{x}_k, \hat{\mathbf{x}}_k)$ is the cross-entropy between the correct PMF and the estimated PMF, and $D_{\text{KL}}(\cdot \,\|\, \cdot)$ is the Kullback-Leibler divergence
[47]. Note that minimizing the loss is equivalent to minimizing the cross-entropy, or equivalently the Kullback-Leibler divergence, between the true PMF and the one estimated by the NN. It is also equivalent to maximizing the log-likelihoods. Therefore, during the training, known transmission data are used to train a detector that maximizes the log-likelihoods. Using Bayes' theorem, it is easy to show that minimizing the loss is equivalent to maximizing (2). We now discuss how several well-known NN architectures can be used for symbol-by-symbol detection and for sequence detection.

III-B Symbol-by-Symbol Detectors
The most basic NN architecture that can be employed for detection uses several fully connected NN layers followed by a final softmax layer [6, 7]. The input to the first layer is the observed signal $\mathbf{y}_k$, or a feature vector that is selectively extracted from the observed signal through preprocessing. The output of the final layer is of length $m$ (i.e., the cardinality of the symbol set), and the activation function for the final layer is the softmax activation. This ensures that the output of the layer is a PMF. Figure 3(a) shows the structure of this NN.

A more sophisticated class of NNs that is used in processing complex signals such as images is the convolutional neural network (CNN) [48, 49, 6]. Essentially, the CNN is a set of filters that are trained to extract the most relevant features for detection from the received signal. The final layer in the CNN detector is a dense layer with output of length $m$, and a softmax activation function. This results in an estimate based on the set of features that are extracted by the convolutional layers in the CNN. Figure 3(b) shows the structure of this NN.

For symbol-by-symbol detection, the estimated PMF is given by
(7)  $\hat{\mathbf{x}}_k = \big[ \hat{P}(x_k = s_1 \mid \mathbf{y}_k),\ \hat{P}(x_k = s_2 \mid \mathbf{y}_k),\ \ldots,\ \hat{P}(x_k = s_m \mid \mathbf{y}_k) \big]^{\intercal},$
where $\hat{P}$ is the probability the NN assigns to each symbol. The better the structure of the NN is at capturing the physical channel characteristics in (1), the better this estimate and the resulting detection performance.
III-C Sequence Detectors
The symbol-by-symbol detector cannot take into account the effects of ISI between symbols. (It is, however, possible to use the received signal from multiple symbols as input to a CNN for detection in the presence of ISI.) In this case, sequence detection can be performed using recurrent neural networks (RNNs) [6, 7], which are well established for sequence estimation in different problems such as neural machine translation [33], speech recognition [30], and bioinformatics [35]. The estimated PMF in this case is given by

(8)  $\hat{\mathbf{x}}_k = \big[ \hat{P}(x_k = s_j \mid \mathbf{y}_k, \mathbf{y}_{k-1}, \ldots, \mathbf{y}_1) \big]_{j=1}^{m},$
where $\hat{P}$ is the probability the NN assigns to each symbol. In this work, we use long short-term memory (LSTM) networks [50], which have been extensively used in many applications.

Figure 3(c) shows the RNN structure. One of the main benefits of this detector is that, after training, similar to a symbol-by-symbol detector, it can perform detection on any data stream as it arrives at the receiver. This is because the observations from previous symbols are summarized in the state of the RNN. Note that the signal observed during the $j$-th transmission slot, where $j > k$, may carry information about the $k$-th symbol due to delays in signal arrival, which result in ISI. However, since RNNs are feed-forward only in time, the observation signal $\mathbf{y}_j$ is not considered during the estimation of $\hat{\mathbf{x}}_k$.
One way to overcome this limitation is by using bidirectional RNNs (BRNNs), where the sequence of received signals is fed once in the forward direction into one RNN cell and once in the backward direction into another RNN cell [51]. The two outputs are then concatenated and may be passed to more bidirectional layers. Figure 3(d) shows the BRNN structure. For a sequence of length $K$, the estimated PMF for the BRNN is given by

(9)  $\hat{\mathbf{x}}_k = \big[ \hat{P}(x_k = s_j \mid \mathbf{y}_K, \mathbf{y}_{K-1}, \ldots, \mathbf{y}_1) \big]_{j=1}^{m},$

where $1 \le k \le K$. In this work we use bidirectional LSTM (BLSTM) networks [52].
The BRNN architecture ensures that the estimation of each symbol takes future signal observations into account, thereby overcoming the limitation of RNNs. The main trade-off is that, as signals from a data stream arrive at the destination, the block length increases, and the whole block must be re-estimated for each new data symbol that is received. This quickly becomes infeasible for long data streams, as the length of the data stream can be on the order of tens of thousands to millions of symbols. In the next section we present a new technique to solve this issue.
III-D Sliding BRNN Detector
Since the data stream that arrives at the receiver can have any arbitrary length, it is not desirable to detect the whole sequence for each new symbol that arrives, as the sequence length could grow arbitrarily large. Therefore, we fix the maximum length of the BRNN. Ideally, this length should be at least the memory length of the channel. However, if the memory length is not known in advance, the BRNN length can be treated as a hyperparameter to be tuned during training. Let $L$ be the maximum length of the BRNN. Then during training, blocks of at most $L$ consecutive transmissions are used; sequences of different lengths can be used during training as long as all sequence lengths are smaller than or equal to $L$. After training, the simplest scheme would be to detect the stream of incoming data in fixed blocks of length $L$, as shown in the top portion of Figure 4. The main drawback here is that the symbols at the end of each block may affect the symbols in the next block, and this relation is not captured in this scheme. Another issue is that $L$ consecutive symbols must be received before detection can be performed. The top portion of Figure 4 shows this scheme for $L = 3$.

To overcome these limitations, inspired by some of the techniques used in speech recognition [53], we propose a dynamic programming scheme we call the sliding BRNN (SBRNN) detector. In this scheme, the first $L$ symbols are detected using the BRNN. Then, as each new symbol arrives at the destination, the position of the BRNN slides ahead by one symbol. Let $\mathcal{J}_k$ be the set of all valid starting positions for a BRNN detector of length $L$, such that the detector overlaps with the $k$-th symbol. For example, if $L = 3$ and $k = 4$, then the starting position 1 is not in the set, since a BRNN starting there overlaps with symbol positions 1, 2, and 3, but not with symbol position 4. Let $\hat{\mathbf{x}}_k^{(j)}$ be the estimated PMF for the $k$-th symbol when the start of the sliding BRNN is at position $j$. The final PMF corresponding to the $k$-th symbol is given by the weighted sum of the estimated PMFs for each of the relevant windows:
(10)  $\hat{\mathbf{x}}_k = \sum_{j \in \mathcal{J}_k} w_j \, \hat{\mathbf{x}}_k^{(j)}, \qquad \sum_{j \in \mathcal{J}_k} w_j = 1.$
One of the main benefits of this approach is that, after the first $L$ symbols are received and detected, as the signal corresponding to a new symbol arrives at the destination, the detector immediately estimates that symbol. The detector also dynamically updates its estimates for the previous symbols. Therefore, this algorithm is similar to a dynamic programming algorithm.
The bottom portion of Figure 4 illustrates the sliding BRNN detector. In this example, after the first 3 symbols arrive, the PMF for the first three symbols, $k \in \{1, 2, 3\}$, is given by $\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^{(1)}$. When the 4th symbol arrives, the estimate of the first symbol is unchanged, but the second and third symbol estimates are updated as $\hat{\mathbf{x}}_k = \tfrac{1}{2}\big(\hat{\mathbf{x}}_k^{(1)} + \hat{\mathbf{x}}_k^{(2)}\big)$ for $k \in \{2, 3\}$, and the 4th symbol is estimated by $\hat{\mathbf{x}}_4 = \hat{\mathbf{x}}_4^{(2)}$. Note that although in this paper we assume that all the weights are the same (i.e., $w_j = 1/|\mathcal{J}_k|$), the algorithm can use different weights. Moreover, the complexity of the SBRNN increases linearly with the length of the BRNN window, and hence with the memory length.
To evaluate the performance of all these NN detectors, we use both the Poisson channel model (a common model for optical and molecular communication systems) as well as an experimental platform for molecular communication where the underlying model is unknown [42]. The sequel discusses more details of the Poisson model and experimental platform, and how they were used for performance analysis of our proposed techniques.
IV The Poisson Channel Model
The Poisson channel has been used extensively to model different communication systems in optical and molecular communication [36, 37, 38, 39, 40, 41]. In these systems, information is encoded in the intensity of the photons or particles released by the transmitter and decoded from the intensity of photons or particles observed at the receiver. In the rest of this section, we refer to the photons, molecules, or particles simply as particles. We now describe this channel, and a VD for the channel.
In our model, it is assumed that the transmitter uses on-off keying (OOK) modulation, where the transmission symbol set is $\mathcal{X} = \{0, 1\}$, and the transmitter either transmits a pulse with a fixed intensity to represent the 1-bit or no pulse to represent the 0-bit. Note that OOK modulation has been considered in many previous works on optical and molecular communication and has been shown to be the optimal input distribution for a large class of Poisson channels [54, 55, 56]. Later, in Section V-D, we extend the results to larger symbol sets by considering general $M$-level pulse amplitude modulation (PAM), where information is encoded in the amplitudes of the pulse transmissions. Note that OOK is a special case of this modulation scheme with $M = 2$.
Let $\tau$ be the symbol interval, and $x_k$ the symbol corresponding to the $k$-th transmission. We assume that the receiver can measure the number of particles that arrive at a sampling rate of $\omega$ samples per second. Then the number of samples in a given symbol duration is given by $a = \omega\tau$, where we assume that $a$ is an integer. Let $\Lambda(t)$ be the system response to a transmission of the pulse corresponding to the 1-bit. For optical channels, the system response is proportional to the Gamma distribution, and given by [57, 58, 59]:

(11)  $\Lambda(t) = \kappa_{\mathrm{o}} \, \frac{t^{\alpha-1} e^{-t/\beta}}{\Gamma(\alpha)\,\beta^{\alpha}},$

where $\kappa_{\mathrm{o}}$ is the proportionality constant, and $\alpha$ and $\beta$ are parameters of the channel, which can change over time. For molecular channels, the system response is proportional to the inverse Gaussian distribution [60, 61, 39, 40] given by:

(12)  $\Lambda(t) = \kappa_{\mathrm{m}} \sqrt{\frac{\lambda}{2\pi t^{3}}} \exp\!\left( -\frac{\lambda (t-\mu)^{2}}{2\mu^{2} t} \right),$

where $\kappa_{\mathrm{m}}$ is the proportionality constant, and $\mu$ and $\lambda$ are parameters of the channel, which can change over time.
Since the receiver samples the data at a rate of $\omega$, for $1 \le i \le a$ and $j \ge 1$, let

(13)  $\Lambda_j[i] = \Lambda\!\big( (j-1)\tau + \tfrac{i}{\omega} \big)$

be the average intensity observed during the $i$-th sample of the $j$-th symbol interval in response to the transmission pulse corresponding to the 1-bit. Figure 5 shows the system response for both optical and molecular channels. Although for optical channels the symbol duration is many orders of magnitude smaller than for molecular channels, the system responses are very similar in shape. Some notable differences are a faster rise time for the optical channel, and a longer tail for the molecular channel.
The system responses are used to formulate the Poisson channel model. In particular, the intensity that is observed during the $i$-th sample of the $k$-th symbol is distributed according to

(14)  $y_k[i] \sim \mathscr{P}\!\Big( \eta + \sum_{j=0}^{k-1} x_{k-j}\, \Lambda_{j+1}[i] \Big),$

where $\mathscr{P}(\cdot)$ is the Poisson distribution, and $\eta$ is the mean of an independent additive Poisson noise due to background interference and/or the receiver noise. (Note that $\eta$ is the noise term that is typically used in the Poisson channel model. In the optical communication literature, this noise is also known as the dark current [36, 37, 38]. The noise is due to an imperfect receiver, or background noise from ambient optical noise or molecules that may exist in the environment.) Using this model, the signal that is observed by the receiver, for any sequence of bit transmissions, can be generated as illustrated in Figure 6. This signal has a similar structure to the signal observed using the experimental platform in [43, see Figure 13], although this analytically-modeled signal exhibits more noise.

The model parameters (i.e., the CSI) for the Poisson channel model are $\{\alpha, \beta, \eta\}$ and $\{\mu, \lambda, \eta\}$, respectively, for optical and molecular channels. In this work, we assume that the sampling rate $\omega$ and the proportionality constants $\kappa_{\mathrm{o}}$ and $\kappa_{\mathrm{m}}$ are fixed and are not part of the model parameters. Note that $\alpha$ and $\beta$ can change over time due to atmospheric turbulence or mobility. Similarly, $\mu$ and $\lambda$ are functions of the distance between the transmitter and the receiver, the flow velocity, and the diffusion coefficient, which may change over time, e.g., due to variations in temperature and pressure [5]. The background noise $\eta$ may also change with time. Note that although the symbol interval $\tau$ may be changed to increase or decrease the data rate, both the transmitter and receiver must agree on the value of $\tau$. Thus, we assume that the value of $\tau$ is always known at the receiver, and therefore, it is not part of the CSI. In the next subsection, we present the optimal VD, assuming that the receiver knows all the model parameters perfectly.
IV-A The Viterbi Detector
The VD assumes a certain memory length $M$, where the current observed signal is affected only by the current symbol and the $M-1$ most recent past transmitted symbols. In this case, (14) becomes

(15)  $y_k[i] \sim \mathscr{P}\!\Big( \eta + \sum_{j=0}^{M-1} x_{k-j}\, \Lambda_{j+1}[i] \Big),$

where $x_{k-j} = 0$ for $k - j < 1$.
Since the marginal distribution of the $i$-th sample of the $k$-th symbol is Poisson distributed according to (15), given the model parameters $\boldsymbol{\Theta}$, we have

(16)  $P(\mathbf{y}_k \mid x_k, x_{k-1}, \ldots, x_{k-M+1}; \boldsymbol{\Theta}) = \prod_{i=1}^{a} P\big( y_k[i] \mid x_k, x_{k-1}, \ldots, x_{k-M+1}; \boldsymbol{\Theta} \big).$

This is because, given the model parameters as well as the current symbol and the previous $M-1$ symbols, the samples within the current bit interval are generated independently and distributed according to (15). Note that (16) holds only if the memory length $M$ is known perfectly. If the estimate of $M$ is inaccurate, then (16) is also inaccurate.
Let $\mathcal{S} = \{s_0, s_1, \ldots, s_{2^{M-1}-1}\}$ be the set of states in the trellis of the VD, where the state $s_v$ corresponds to the $M-1$ previously transmitted bits forming the binary representation of $v$. Let $b_k$, $1 \le k \le K$, be the information bits to be estimated. Let $s_{v_k}$ be the state corresponding to the $k$-th symbol interval, where $v_k$ is the number whose binary representation is given by the previous $M-1$ bits. Let $\log \delta_k(v)$ denote the log-likelihood of the state $s_v$ during the $k$-th interval. For a state $s_v$, there are two states in the set $\mathcal{S}$ that can transition to $s_v$:

(17)  $u_1 = \lfloor v/2 \rfloor,$
(18)  $u_2 = \lfloor v/2 \rfloor + 2^{M-2},$

where $\lfloor \cdot \rfloor$ is the floor function. Let the binary vector $\mathbf{b}(v)$ be the binary representation of $v$, and similarly $\mathbf{b}(u)$ the binary representation of $u$. The log-likelihoods of each state in the next symbol slot are updated according to

(19)  $\log \delta_{k+1}(v) = \max_{u \in \{u_1, u_2\}} \big[ \log \delta_k(u) + \gamma_{u \to v}(k) \big],$
where $\gamma_{u \to v}(k)$ is the log-likelihood increment of transitioning from state $s_u$ to $s_v$ during the $k$-th interval. Let

(20)  $\lambda_{u \to v}[i] = \eta + b_v \Lambda_1[i] + \sum_{j=1}^{M-1} \mathbf{b}(u)_j \, \Lambda_{j+1}[i],$

where $b_v$ is the new bit shifted in during the transition to $s_v$, and $\mathbf{b}(u)_j$ is the $j$-th most recent bit in state $s_u$. Using the PMF of the Poisson distribution, (15), (16), and (20), we have

(21)  $\gamma_{u \to v}(k) = \sum_{i=1}^{a} \big( y_k[i] \log \lambda_{u \to v}[i] - \lambda_{u \to v}[i] \big),$

where the extra term $-\log(y_k[i]!)$ is dropped since it will be the same for both transitions from $s_{u_1}$ and $s_{u_2}$. Using these transition probabilities and setting $\log \delta_1(0) = 0$ and $\log \delta_1(v) = -\infty$ for $v \neq 0$, the most likely sequence $\hat{b}_k$, $1 \le k \le K$, can be estimated using the Viterbi algorithm [62]. When the memory length is long, it is not computationally feasible to consider all the states in the trellis, as they grow exponentially with the memory length. Therefore, in this work we implement the Viterbi beam search algorithm [63]. In this scheme, at each time slot, only the transitions from the previous $B$ states with the largest log-likelihoods are considered. When $B = 2^{M-1}$, the Viterbi beam search algorithm reduces to the traditional Viterbi algorithm.
We now evaluate the performance of NN detectors using the Poisson channel model.
V Evaluation Based on Poisson Channel
In this section, we evaluate the performance of the proposed SBRNN detector based on the Poisson channel model; in the next section, we use the experimental platform developed in [42] to demonstrate that the SBRNN detector can be implemented in practice to perform real-time detection. The rest of this section is organized as follows. First, we describe the training procedure and the simulation setup in Section V-A. Then, in Section V-B, we evaluate the effects of the sequence length, the symbol duration, and the noise on the BER performance. In particular, we demonstrate that SBRNN detection is resilient to changes in symbol duration and noise, and outperforms the VD with perfect CSI if the memory length is not estimated correctly. In Section V-C, the performance of the SBRNN detector and the VD are evaluated for different channel parameters. To show that the SBRNN algorithm works on larger symbol sets (i.e., higher-order modulations), in Section V-D we consider an optical channel that uses $M$-PAM with $M > 2$ instead of OOK (i.e., 2-PAM). We also demonstrate that although the training is performed on transmission sequences of length 100, the SBRNN can generalize to longer transmission sequences. The effect of the RNN cell type is also evaluated, and it is demonstrated that LSTM cells achieve the best BER performance. The performance of the SBRNN in rapidly changing channels is evaluated in Section V-E, and the complexity of this algorithm compared to the VD is discussed in Section V-F. Table I summarizes the results that will be presented in this section.
Sec. | Channel Types | Evaluates
B | Optical/Molecular (OOK) | sequence length, symbol duration, noise
C | Optical/Molecular (OOK) | channel parameters (i.e., impulse response)
D | Optical (PAM) | symbol size, transmission length, RNN type
E | Optical/Molecular (OOK) | rapidly changing channels
V-A Training and Simulation Procedure
For evaluating the performance of the SBRNN on the Poisson channel, we consider both the optical channel and the molecular channel. For the optical channel, the channel and noise parameter values are chosen such that the resulting system responses resemble the ones presented in [57, 58, 59]. For the molecular channel, the model parameter values are selected to resemble the system response in [43]. The received signal is sampled at a rate on the order of gigasamples per second (GS/s) for the optical channel and at a much lower rate (S/s) for the molecular channel.
For the VD algorithm we consider Viterbi with beam search, where only the top states with the largest log-likelihoods are kept in the trellis during each time slot. We also consider two different scenarios for CSI estimation. In the first scenario, we assume that the detector estimates the CSI perfectly, i.e., the values of the model parameters are known exactly at the receiver. In practice, it may not be possible to achieve perfect CSI estimation. In the second scenario, we consider the VD with CSI estimation error: the estimate of each model parameter is simulated by adding zero-mean Gaussian noise with a standard deviation equal to 2.5% or 5% of the parameter's true value. In the rest of this section, we refer to these cases as the VD with 2.5% and 5% error, and to the case with perfect CSI as the VD with 0% error. Table II shows the BER performance of the VD for different numbers of retained states. It can be seen that the value used in the rest of this section is sufficient to achieve good performance with the VD.

States kept | 10 | 100 | 200 | 500 | 1000
Opti. VD 0.0% error | 0.0466 | 0.03937 | 0.03972 | 0.03906 | 0.03972
Opti. VD 2.5% error | 0.226 | 0.175 | 0.17561 | 0.15889 | 0.1509
Opti. VD 5.0% error | 0.4036 | 0.385 | 0.38519 | 0.39538 | 0.36
Mole. VD 0.0% error | 0.00466 | 0.00398 | 0.00464 | 0.00448 | 0.00432
Mole. VD 2.5% error | 0.0066 | 0.0055 | 0.00524 | 0.0056 | 0.00582
Mole. VD 5.0% error | 0.41792 | 0.34667 | 0.30424 | 0.29314 | 0.30588
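The CSI estimation error described above can be simulated with a small helper; a sketch, where the dictionary-of-parameters form is an assumption for illustration:

```python
import random

def noisy_csi(theta, rel_err, rng=random):
    """Perturb a true channel parameter `theta` with zero-mean Gaussian
    noise whose standard deviation is a fraction `rel_err`
    (e.g. 0.025 or 0.05) of the parameter itself."""
    return theta + rng.gauss(0.0, rel_err * theta)

def noisy_csi_dict(params, rel_err, rng=random):
    """Apply the same perturbation to every entry of a (hypothetical)
    dictionary of CSI parameters."""
    return {name: noisy_csi(value, rel_err, rng) for name, value in params.items()}
```

Feeding the VD these perturbed values instead of the true ones reproduces the 2.5% and 5% error scenarios.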
Both the RNN and the SBRNN detectors use LSTM cells [50], unless specified otherwise. For the SBRNN, the size of the output is 80; for the RNN, since the SBRNN uses two RNNs, one for the forward direction and one for the backward direction, the size of the output is 160. This ensures that the SBRNN detector and the RNN detector have roughly the same number of parameters. The number of layers used for both detectors in this section is 3. The input to the NNs is a set of normalized features extracted from the received signal. The feature extraction algorithm is described in the appendix. This feature extraction step normalizes the input, which helps the NNs learn faster from the data [7]. To train the RNN and SBRNN detectors, transmitted bits are generated at random and the corresponding received signal is generated using the Poisson model in (14). In particular, the training data consists of many samples of sequences of 100 consecutively transmitted bits and the corresponding received signals. Since in this work we focus on uncoded communication, we assume the two bit values in the transmitted sequences are equiprobable. For each sequence, the CSI is selected at random. In particular, for the optical channel, for each 100-bit sequence, the channel and noise parameters are drawn from uniform distributions over the intervals given in (22). Similarly, for the molecular channel, the parameters are drawn uniformly over the intervals given in (23).
For the SBRNN training, each 100-bit sequence is randomly broken into shorter subsequences. For all training, the Adam optimization algorithm [64] is used with a batch size of 500. We train on 500k sequences of 100 bits.
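The random breaking of each training sequence into fixed-length subsequences can be sketched as follows; slicing consecutive windows from a random offset is one plausible reading of the procedure, and the per-symbol `features` argument is an assumption for illustration:

```python
import random

def random_subsequences(bits, features, L, rng=random):
    """Break one training sequence into length-L subsequences starting at a
    random offset, so the network sees windows with varying ISI context.
    `features[k]` is the feature vector extracted for the k-th symbol."""
    start = rng.randrange(L)  # random offset into the sequence
    chunks = []
    for i in range(start, len(bits) - L + 1, L):
        chunks.append((bits[i:i + L], features[i:i + L]))
    return chunks
```

Each `(label window, feature window)` pair then becomes one training sample for the bidirectional RNN.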
Over the next several subsections we evaluate the performance of the SBRNN detector and compare it to that of the VD.
V-B Effects of Sequence Length, Symbol Duration, and Noise
First, we evaluate the BER performance with respect to the memory length used in the VD and the sequence length used in the SBRNN. For all the BER plots in this section, 1000 sequences of 100 random bits are used to calculate the BER. Figure 7(a) shows the results for the optical (top plots) and the molecular (bottom plots) channels with the parameters described above. From the results it is clear that the performance of the VD relies heavily on estimating the memory length of the system correctly. We define the memory length as the number of symbol durations it takes for the impulse response to become sufficiently small that ISI is negligible or, equivalently, that increasing the memory length of the detector does not decrease the BER significantly. For example, for the optical channel in Figure 7(a), the time it takes for the impulse response to fall to 0.01% of its peak value determines the memory length at the symbol duration considered, and from the figure the BER of the VD with perfect CSI improves by only a negligible amount beyond this memory length. The molecular channel's impulse response has a much longer tail: it takes 382 symbol durations for the impulse response to fall to 0.1% of the peak value. This is evident in Figure 7(a), where the BER of the VD with perfect CSI always improves as the memory length increases.
Figure 7(a) also demonstrates that if the estimate of the memory length is inaccurate, the SBRNN algorithm outperforms the VD with perfect CSI. We also observe that the SBRNN achieves a better BER than the VD when there is a CSI estimation error of 2.5% or more. Note that the RNN detector does not have a parameter that depends on the memory length, but it has a significantly larger BER than the SBRNN. For the optical channel, the RNN detector outperforms the VD with 5% error in CSI estimation. Moreover, it can be seen that the optical channel has a shorter memory length than the molecular channel.
Remark 1: When the VD has perfect CSI, it can estimate the memory length correctly by using the system response. However, if there is CSI estimation error, the memory length may not be estimated correctly, and as can be seen in Figure 7(a), this can degrade the performance of the VD. In the rest of this section, for all other VD plots, we use a memory length of 99, i.e., the largest possible memory length for sequences of 100 bits. Although this does not capture the performance degradation that may result from errors in estimating the memory length, as we will show, the SBRNN still achieves a BER that is as good as or better than that of the VD with CSI estimation error under various channel conditions.
Next, we evaluate the BER for different symbol durations in Figure 7(b). Again, we observe that the SBRNN achieves a better BER when there is a CSI estimation error of 2.5% or more. The RNN detector outperforms the VD with 5% CSI estimation error for the optical channel, but does not perform well for the molecular channel. For the optical channel, at large enough symbol durations, all detectors achieve zero errors in decoding the 1000 sequences of 100 random bits used to calculate the BER. Similarly, for the molecular channel, at large enough symbol durations, all detectors except the RNN detector achieve zero errors.
Figure 7(c) evaluates the BER performance at various noise rates. The SBRNN achieves a BER close to that of the VD with perfect CSI across a wide range of noise values. For larger noise rates, i.e., low signal-to-noise ratio (SNR), both the RNN detector and the SBRNN detector outperform the VD with CSI estimation error.
V-C Effects of Channel Parameters
In this section we evaluate the performance with respect to the channel parameters that affect the system response. Recall that for the optical channel a single parameter affects the system response in (11) (the other parameter is assumed fixed here), while for the molecular channel two parameters affect the system response in (12). The range of values the optical channel parameter is assumed to take is given in (22), and the ranges for the molecular channel parameters are given in (23).
In Figure 8, we evaluate the performance of the detection algorithms with respect to these parameters. Note that in optical and molecular communication these parameters can change rapidly due to atmospheric turbulence, changes in temperature, or changes in the distance between the transmitter and the receiver. Therefore, estimating these parameters accurately can be challenging. Furthermore, since these parameters change the shape of the system responses they change the memory length as well.
Figure 9 shows the system response for the optical and molecular channels over the ranges of parameter values in (22) and (23). For a fixed symbol duration, the system response can have a considerable effect on the delay spread (i.e., memory length) of the system. From Figure 8, it can be seen that the SBRNN performs as well as or better than the VD with an estimation error of 2.5%. Moreover, for the optical channel, the RNN detector performs better than the VD with 5% estimation error. In all cases, the SBRNN learns to detect over the wide range of system responses shown in Figure 9.
V-D Effects of Symbol Set Size, Transmission Length, and RNN Cell Type
Modulation | SER (Perfect CSI) | SER (2.5% CSI Error)
OOK | |
4-PAM | |
8-PAM | |
In the previous sections we considered OOK modulation. However, it is not clear whether higher-order modulations can achieve better results. In this section we first evaluate the performance of OOK and higher-order PAM modulations using the VD. We demonstrate that, for the system parameters under consideration, 4-PAM achieves the best BER performance. Then we demonstrate that the SBRNN detector can be trained on modulations with larger symbol sets. In fact, for detection and estimation problems in speech and language processing, where RNNs are used extensively, the symbol set (i.e., the number of phonemes or the vocabulary size) can be on the order of hundreds to millions of symbols. We also consider the effect of different RNN cell types on the symbol error rate (SER) performance and demonstrate that the LSTM cell, which was used in the previous sections, achieves the best performance. Finally, the generalizability of the SBRNN detector to longer transmission sequences is evaluated, where we show that the SBRNN achieves the same or better SER performance on longer transmission sequences, despite being trained on sequences of length 100.
First we compare the performance of OOK, 4-PAM, and 8-PAM modulation, where 2, 4, or 8 amplitude levels are used for encoding 1, 2, or 3 bits of information during each symbol duration. We assume that the amplitudes are equally spaced and include the zero amplitude (i.e., sending no pulse). Because of space limitations, we focus only on the optical channel. The symbol durations and transmit amplitudes are chosen for each modulation so as to keep the average transmit power, the data rate, and the peak signal-to-noise ratio (SNR) the same for all modulations. We then evaluate the SER using the VD with perfect CSI and the VD with a CSI estimation error of 2.5%, using 500k symbols to estimate the SER. Table III shows the results. When perfect CSI is available at the receiver, 4-PAM achieves the best SER, while when there is an error in CSI estimation, OOK achieves the best SER. Note that since the number of bits represented by each symbol differs across modulation schemes, the SER is not the best performance measure. However, even if we assume that each symbol error is due to a single bit error, which yields the best possible BER for 8-PAM, we still observe that 4-PAM achieves the best BER performance when perfect CSI is available at the receiver, while OOK achieves the best BER performance when there is CSI estimation error.
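The equally spaced amplitude levels described above can be generated as a short sketch; the natural-binary bit-to-symbol mapping is an assumption for illustration (the text does not specify a mapping, and Gray mapping is also common):

```python
def pam_levels(M, peak):
    """Equally spaced M-PAM amplitude levels that include zero (no pulse)
    and reach `peak`; OOK is the special case M = 2."""
    return [peak * i / (M - 1) for i in range(M)]

def modulate(bits, M, peak):
    """Map groups of log2(M) bits to PAM amplitudes using a natural
    binary mapping (an illustrative assumption)."""
    k = M.bit_length() - 1  # log2(M) for M a power of two
    levels = pam_levels(M, peak)
    symbols = []
    for i in range(0, len(bits), k):
        idx = int("".join(map(str, bits[i:i + k])), 2)
        symbols.append(levels[idx])
    return symbols
```

For a fixed peak amplitude, packing more bits per symbol shrinks the spacing between levels, which is why the higher-order schemes become more sensitive to CSI estimation error.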
Since 4-PAM achieves the best BER performance, we trained a new SBRNN detector based on 4-PAM modulation. For training, the channel parameter and the noise parameter are each assumed to be uniformly random over a fixed interval. We trained three SBRNN detectors based on the LSTM cell, the GRU cell [65], and the vanilla RNN architecture [7]. Figures 10(a) and 10(b) show the results. As can be seen, the SBRNN with the LSTM cell achieves a better SER than with the GRU cell or the vanilla RNN cell. Compared with the VDs, we observe a trend similar to that of OOK modulation: the SBRNN outperforms the VD with CSI estimation error, while its performance comes close to that of the VD with perfect CSI. This demonstrates that the SBRNN algorithm can be extended to larger symbol sets.
Finally, we evaluate the performance of the SBRNN detector over longer transmission sequences for OOK and 4-PAM. In particular, for each modulation, two differently trained SBRNN networks are evaluated. The first set of networks are the same networks used to generate Figures 7, 8, and 10(a)-(b); these are trained on a data set containing sample transmissions under various channel conditions. The second set of networks are trained on sample received signals from one specific set of channel and noise parameters, namely, the same set of parameters used during testing. Note that all the SBRNN detectors are trained on transmission sequences of length 100.
Figure 10(c) shows the performance for transmission sequences of various lengths. Interestingly, we observe that the SER drops as the length of the transmission sequence increases. This is because the probability of error is higher for symbols at the beginning and end of the transmission sequence, as shown in Figure 11. The larger probability of error for the first few symbols is due to the signal rising rapidly at the start of the transmission, as was shown in Figure 6, which has a different structure than the signal corresponding to the rest of the symbols. This can be mitigated by using a separate neural network trained only on the signal corresponding to the initial symbols, or by using a sequence of random bits at the beginning of the transmission sequence as a guard interval. The error at the end of the transmission sequence can be mitigated by observing the received signal after the last symbol duration and using that signal as part of the detection.
V-E Effects of Rapidly Changing Channels
In this section we evaluate the performance of the SBRNN algorithm in rapidly changing channels. Due to lack of space we focus on the optical channel; we have observed similar performance results for the molecular channel. To model a rapidly changing channel, we assume that the channel parameter θ_k and the noise parameter η_k change from one symbol interval to the next. In particular, we assume these parameters evolve according to a diffusion model with drift:

θ_k = θ_{k-1} + σ w_k + v,   (24)
η_k = η_{k-1} + σ w'_k + v,   (25)

where θ_0 and η_0 are the channel and noise parameters at the beginning of the transmission sequence, σ and v control the diffusion and the drift velocities, and w_k and w'_k are zero-mean, unit-variance Gaussian random variables. The received signal is then given by the Poisson model in (14), with the time-varying parameters θ_k and η_k in place of the fixed channel and noise parameters. (26)

The parameter σ controls the degree of dispersion, while v controls how θ_k and η_k change on average over time. When v = 0, E[θ_k] = θ_0 and E[η_k] = η_0, and σ controls the deviation around this mean. When v > 0, the channel is degrading over time since θ_k and η_k increase on average, which results in larger ISI and noise components. Similarly, when v < 0, the channel is improving over time because the ISI and the noise components decrease on average.
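A minimal simulation of the bounded diffusion-with-drift model described above; the parameter names and the clipping-to-interval behavior follow the description in the text, and the function signature is illustrative:

```python
import random

def diffuse(theta0, sigma, drift, n, lo, hi, rng=random):
    """Evolve a channel (or noise) parameter over n symbol intervals via a
    bounded random walk with drift: each step adds sigma * w_k + drift,
    with w_k a standard Gaussian, and the value is clipped to [lo, hi]."""
    theta = theta0
    trajectory = [theta]
    for _ in range(n):
        theta += sigma * rng.gauss(0.0, 1.0) + drift
        theta = min(max(theta, lo), hi)  # keep the parameter in its bounded interval
        trajectory.append(theta)
    return trajectory
```

With `drift = 0` the trajectory fluctuates around its starting value; a positive drift steadily pushes the parameter upward, mimicking a degrading channel.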
To evaluate the resiliency of the SBRNN detector to rapid changes in the channel, we use the same trained networks that were used to generate Figures 7, 8, and 10(a)-(b). Note that although these networks are trained on a data set containing samples from various channel conditions, the channel parameters are fixed for the duration of each whole training sequence. However, the model used for testing is the one in (26), where the channel parameters change from one symbol to the next during a transmission sequence. Specifically, for testing, sequences of length 200 symbols are used, with fixed initial parameters and diffusion and drift rates, chosen separately for the OOK and 4-PAM modulations. The channel and noise parameters in (26) are assumed to diffuse according to (24) and (25) over bounded intervals.
Figure 12 shows the results. For the VD plots, we assume that the channel and noise parameters at the start of the sequence are known exactly, i.e., the receiver has perfect CSI at the beginning of the transmission sequence. If the diffusion rate is very small and there is no drift (i.e., the channel is not changing), the VD performs very well, as expected. However, if the channel is drifting over time, the performance of the VD degrades significantly. Although the SBRNN algorithm is trained on a data set where the channel does not change rapidly, it performs well under rapidly changing conditions. Also note that the training data set has sequences of 100 symbols, while the test data has sequences of length 200. These results demonstrate that the SBRNN can be very useful for detection over rapidly changing channels, where traditional detection algorithms that cannot adapt to the changing channel perform poorly.
V-F Computational Complexity
We conclude this section by comparing the computational complexity of the SBRNN detector, the RNN detector, and the VD. Let n be the length of the sequence to be decoded. Recall that L is the length of the sliding BRNN, M is the memory length of the channel, and B is the number of states with the highest log-likelihood values, among all the states of the trellis, that are kept at each time instance in the beam search Viterbi algorithm. Note that for the traditional Viterbi algorithm, B equals the total number of states, which grows exponentially with M. The computational complexity of the SBRNN is O(nL), while the computational complexity of the VD is O(nB). Therefore, for the traditional VD, the computational complexity grows exponentially with the memory length M; this is not the case for the SBRNN detector. The computational complexity of the RNN detector is O(n). Therefore, the RNN detector is the most efficient in terms of computational complexity, while the SBRNN detector and the beam search VD can have similar computational complexity. Finally, the traditional VD is impractical for the channels considered here due to its exponential computational complexity in the memory length M.
VI Evaluation Based on Experimental Platform
In this section, we use a molecular communication platform for evaluating the performance of the proposed SBRNN detector. Note that although the proposed techniques can be used with any communication system, applying them to molecular communication systems enables many interesting applications. For example, one particular area of interest is in-body communication, where biosensors, such as synthetic biological devices, constantly monitor the body for different biomarkers of disease [66]. Naturally, these biological sensors, which are adept at detecting biomarkers in vivo [67, 68, 69], need to convey their measurements to the outside world. Chemical signaling is a natural solution to this communication problem: the sensor nodes chemically send their measurements to each other or to other devices under/on the skin. The device on the skin is connected to the Internet through wireless technology and can therefore perform complex computations. Thus, the experimental platform we use in this work to validate NN algorithms for signal detection can directly support this important application.
We use the experimental platform in [42] to collect measurement data and create the data set that is used for training and testing the detection algorithms. In the platform, time-slotted communication is employed, where the transmitter modulates information onto acid and base signals by injecting these chemicals into the channel during each symbol duration. The receiver then uses a pH probe for detection. A binary modulation scheme is used in the platform: the 0-bit is transmitted by pumping acid into the environment for 30 ms at the beginning of the symbol interval, and the 1-bit by pumping base into the environment for 30 ms at the beginning of the symbol interval. The symbol interval consists of this 30 ms injection interval followed by a period of silence, which can also be considered a guard band between symbols. In particular, four different silence durations (guard bands) of 220 ms, 304 ms, 350 ms, and 470 ms are used in this work, corresponding to bit rates of 4, 3, 2.6, and 2 bps. This is similar to the OOK modulation used in the previous section for the Poisson channel model, except that distinct chemicals are released for the 1-bit and the 0-bit.
To synchronize the transmitter and the receiver, every message sequence starts with one initial injection of acid into the environment for 100 ms, followed by 900 ms of silence. The receiver detects the starting point of this pulse by employing an edge detection algorithm and uses it to synchronize with the transmitter. Since the received signal is corrupted and noisy, the detected start time has a random offset. However, since the NN detectors are trained directly on this data, as we will show, they learn to be resilient to this random offset.
The training and test data sets are generated as follows. For each symbol duration, random bit sequences of length 120 are transmitted 100 times, where each of the 100 transmissions is separated in time. Since we assume no channel coding is used, the bits are i.i.d. and equiprobable. This results in 12k bits per symbol duration that are used for training and testing. From this data, 84 transmissions per symbol duration (10,080 bits) are used for training and 16 transmissions (1,920 bits) are used for testing. Therefore, the total number of training bits is 40,320, and the total number of bits used for testing is 7,680.
Although we expect from the physics of the chemical propagation and chemical reaction that the channel should have memory, the channel model for this experimental platform is currently unknown. We therefore implement both symbol-by-symbol and sequence detectors based on NNs. Note that due to the lack of a channel model, we cannot use the VD for comparison, since it cannot be implemented without an underlying channel model. Instead, as a baseline detection algorithm, we use the slope detector that was used in previous work [43, 44, 42]. For all training of the NN detectors, the Adam optimization algorithm [64] is used. Unless specified otherwise, the number of epochs used during training is 200 and the batch size is 10. All the hyperparameters are tuned using grid search.
We consider two symbol-by-symbol NN detectors. The first detector uses three fully connected layers with 80 hidden nodes each and a final softmax layer for detection. Each fully connected layer uses the rectified linear unit (ReLU) activation function. The input to the network is a set of features extracted from the received signal, chosen based on performance and the characteristics of the physical channel, as explained in the appendix. We refer to this network as BaseNet. The second symbol-by-symbol detector uses 1-dimensional CNNs. The best network architecture that we found has the following layers: 1) 16 filters of length 2 with ReLU activation; 2) 16 filters of length 4 with ReLU activation; 3) a max pooling layer with pool size 2; 4) 16 filters of length 6 with ReLU activation; 5) 16 filters of length 8 with ReLU activation; 6) a max pooling layer with pool size 2; 7) a flatten layer followed by a softmax layer. The stride size for the filters is 1 in all layers. We refer to this network as CNNNet.

For sequence detection, we use three networks: two based on RNNs and one based on the SBRNN. The first network has 3 LSTM layers and a final softmax layer, where the length of the output of each LSTM layer is 40. Two different inputs are used with this network. In the first, the input is the same set of features as for BaseNet above; we refer to this network as LSTM3Net. In the second, the input is the pretrained CNNNet described above without the top softmax layer; here the CNNNet extracts the features directly from the received signal, and we refer to this network as CNNLSTM3Net. Finally, we consider three layers of bidirectional LSTM cells, where each cell's output length is 40, and a final softmax layer. The input to this network is the same set of features used for BaseNet and LSTM3Net. When this network is used, during testing we apply the SBRNN algorithm; we refer to this network as SBLSTM3Net. For all the sequence detection algorithms, during testing, sample data sequences of 120 bits are treated as an incoming data stream, and the detector estimates the bits one by one, simulating a real communication scenario. This demonstrates that these algorithms can work on a data stream of any length and can perform detection in real time as data arrives at the receiver.
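The sliding-window estimation used by the SBRNN at test time can be sketched as follows; `brnn(window)` is a stand-in callable for the trained bidirectional network, assumed to return one probability of the 1-bit per position in the window:

```python
def sbrnn_detect(features, brnn, L):
    """SBRNN-style detection sketch: run the (trained) BRNN on every
    length-L window of the feature stream and average, for each symbol,
    the probability estimates from all windows that cover it."""
    n = len(features)
    sums = [0.0] * n
    counts = [0] * n
    for start in range(max(n - L + 1, 1)):
        window = features[start:start + L]
        probs = brnn(window)  # one probability per position in the window
        for j, p in enumerate(probs):
            sums[start + j] += p
            counts[start + j] += 1
    # threshold the averaged per-symbol probabilities
    return [1 if sums[i] / counts[i] >= 0.5 else 0 for i in range(n)]
```

Because each symbol's decision averages the estimates from up to L overlapping windows, a symbol's decision can be refined as later samples arrive, which is what enables streaming operation.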
VI-A System's Memory and ISI
We first demonstrate that this communication system has a long memory. We use the RNN-based detection technique for this and train the LSTM3Net on sequences of 120 consecutive bits; the trained model is referred to as LSTM3Net120. We run the trained model on the test data twice: once resetting the input state of the LSTM cells after each bit detection, and once passing the state on as the input state for the next bit. The former ignores the memory of the system and the ISI, while the latter accounts for the memory. The bit error rate (BER) for the memoryless LSTM3Net120 detector is 0.1010 at 4 bps and 0.0167 at 2 bps, while for the LSTM3Net120 detector with memory the BERs are 0.0333 and 0.0005, respectively. This clearly demonstrates that the system has memory.
To evaluate the memory length, we train a length-10 SBLSTM3Net on all sequences of 10 consecutive bits in the training data. Then, on the test data, we evaluate the BER performance of the SBLSTM for lengths 2 to 10. Figure 13 shows the results for each symbol duration. The BER decreases as the length of the SBLSTM increases, again confirming that the system has memory. For example, for the 500 ms symbol duration, we conclude from the plot that the memory is longer than 4 symbols. Note that some of the missing points for the 500 ms and 380 ms symbol durations, which cause discontinuities in the plots, correspond to zero errors in the test data. Moreover, very small BER values are not very accurate here, since they correspond to fewer than 10 errors in the test data (in a typical BER plot the number of errors should be about 100). However, given enough test data, it would be possible to estimate the channel memory using the SBLSTM detector by finding the minimum length beyond which the BER does not improve.
VI-B Performance and Resiliency
Table IV summarizes the best BER performance obtained for all detection algorithms, including the baseline, with all hyperparameters tuned using grid search. The number at the end of each sequence detector's name indicates the training sequence length; for example, LSTM3Net120 is an LSTM3Net trained on 120-bit sequences. In general, the sequence detection algorithms perform significantly better than any symbol-by-symbol detection algorithm, including the baseline. This is partly due to the significant ISI present in the molecular communication platform. Overall, the proposed SBLSTM algorithm performs better than all the other NN detectors considered.
Another important issue for detection algorithms is resiliency to changing channel conditions. As the channel conditions worsen, the received signal is further degraded, which increases the BER. Although no channel coding is used in this work, one way to mitigate this problem is to use stronger channel codes that can correct some of the errors. However, given that the NN detectors rely on training data to tune the detector parameters, overfitting to the training channel conditions may be an issue. To evaluate the susceptibility of the NN detectors to this effect, we collect data with a pH probe whose response is degraded due to normal wear and tear.
We collect 20 samples of 120-bit sequence transmissions for each of the 250 ms and 500 ms symbol durations using this degraded pH probe. First, to demonstrate that the response of the probe is indeed degraded, we evaluate it using the baseline slope-based detection algorithm. The best BERs obtained with the baseline detector are 0.1583 and 0.0741 for symbol durations of 250 ms and 500 ms, respectively; these values are significantly larger than those in Table IV because of the degraded pH probe. We then apply the SBLSTM3Net10 and the LSTM3Net120, trained on the data from the healthy pH probe, to the test data from the degraded probe. For the SBLSTM3Net10, the BERs obtained are 0.0883 and 0.0142, and for the LSTM3Net120, the BERs are 0.1254 and 0.0504. These results again confirm that the proposed SBRNN algorithm is more resilient to changing channel conditions than the RNN.
Symb. Dur. | 250 ms | 334 ms | 380 ms | 500 ms
Baseline | 0.1297 | 0.0755 | 0.0797 | 0.0516
BaseNet | 0.1057 | 0.0245 | 0.0380 | 0.0115
CNNNet | 0.1068 | 0.0750 | 0.0589 | 0.0063
CNNLSTM3Net120 | 0.0677 | 0.0271 | 0.0026 | 0.0021
LSTM3Net120 | 0.0333 | 0.0417 | 0.0083 | 0.0005
SBLSTM3Net10 | 0.0406 | 0.0141 | 0.0005 | 0.0000
Feature/Parameter | b (bins) | γ | raw bins | slope vector d | first & last bins | mean & var of d | symbol duration
Sec. V: Optical Channel | 10 | 1 | No | Yes | Yes | Yes | Yes
Sec. V: Molecular Channel | 10 | 1000 | No | Yes | Yes | Yes | Yes
Sec. VI: BaseNet | 9 | 1 | No | Yes | Yes | Yes | Yes
Sec. VI: CNNNet | 30 | 1 | Yes | No | No | No | No
Sec. VI: CNNLSTM3Net120 | 30 | 1 | Yes | No | No | No | No
Sec. VI: LSTM3Net120 | 9 | 1 | No | Yes | Yes | Yes | Yes
Sec. VI: SBLSTM3Net10 | 9 | 1 | No | Yes | Yes | Yes | Yes
Finally, to demonstrate that the proposed SBRNN algorithm can be implemented as part of a realtime communication system, we use it to support a text messaging application built on top of the experimental platform. We demonstrate that using the SBRNN for detection at the receiver, we are able to reliably transmit and receive messages at 2 bps. This data rate is an order of magnitude higher than previous systems [43, 44].
VII Conclusions
This work considered a machine learning approach to the detection problem in communication systems. In this scheme, a neural network detector is trained directly using measurement data from experiments, data collected in the field, or data generated from channel models. Different NN architectures were considered for symbol-by-symbol and sequence detection. For channels with memory, which require sequence detection, the SBRNN detection algorithm was presented for real-time symbol detection in data streams. To evaluate the performance of the proposed algorithm, the Poisson channel model for molecular communication was considered, as well as the VD for this channel. It was shown that the proposed SBRNN algorithm can achieve a performance close to that of the VD with perfect CSI, and better than the RNN detector and the VD with CSI estimation error. Moreover, it was demonstrated that, using a rich training data set that contains sample transmission data under various channel conditions, the SBRNN detector can be trained to be resilient to changes in the channel, achieving a good BER performance over a wide range of channel conditions. To demonstrate that this algorithm can be implemented in practice, a molecular communication platform that uses multiple chemicals for signaling was used. Although the underlying channel model for this platform is unknown, it was demonstrated that NN detectors can be trained directly from experimental data. The SBRNN algorithm was shown to achieve the best BER performance among all the considered NN-based algorithms as well as the slope detector used in previous work. Finally, a text messaging application was implemented on the experimental platform, demonstrating that reliable communication at 2 bps is possible, which is an order of magnitude faster than the data rates reported in previous work on molecular communication channels.
As part of future work, we plan to investigate how techniques from reinforcement learning could be used to better respond to changing channel conditions. We would also like to study whether the evolution of the internal state of the SBRNN detector could help in developing channel models for systems where the underlying models are unknown.
Appendix
Feature Extraction
In this appendix we describe the set of features that are extracted from the received signal and used as the input to the different NN detectors considered in this work. The feature set extracted from the received signal during a channel use must preserve and summarize the important information-bearing components of the received signal. For the Poisson channel, since the information is encoded in the intensity of the signal, much of the information is contained in the rate of change of the intensity: the intensity increases in response to the transmission of the 1-bit, while it decreases or remains the same in response to the transmission of the 0-bit. Note that this is also true for the pH signal in the experimental platform used in Section VI. First, the symbol interval (i.e., the time between the green lines in Figure 6) is divided into a number of equal sub-intervals, or bins, and the received values inside each bin are averaged to represent the value of the corresponding bin. Let B be the number of bins and b = [b_1, b_2, ..., b_B] the corresponding bin values. We then capture the rate of change during a symbol duration by differencing the bin vector to obtain the vector d = [d_1, ..., d_{B-1}], where d_i = b_{i+1} - b_i. We refer to d as the slope vector and use it as part of the feature set extracted from the received signal.
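The binning and differencing steps above can be sketched as follows; the function and variable names are illustrative and not taken from the paper, and the sketch assumes the number of samples divides evenly into the bins.

```python
from statistics import fmean

def extract_slope_vector(samples, num_bins):
    """Average the received samples of one symbol interval into num_bins
    equal bins, then difference consecutive bin averages to obtain the
    slope vector d with d_i = b_{i+1} - b_i."""
    bin_size = len(samples) // num_bins  # assumes len(samples) % num_bins == 0
    # Average the samples that fall inside each bin.
    bins = [fmean(samples[i * bin_size:(i + 1) * bin_size])
            for i in range(num_bins)]
    # Slope vector captures the rate of change of the intensity.
    slope = [bins[i + 1] - bins[i] for i in range(num_bins - 1)]
    return bins, slope
```

For example, a monotonically increasing intensity (as expected after a 1-bit) yields a slope vector with positive entries, while a flat or decaying intensity yields non-positive entries.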
Other values that can be used to infer the rate of change are b_1 and b_B, the values of the first and last bins, as well as the mean and the variance of b. Since the intensity can grow large due to ISI, the bin vector b may be normalized; in that case the first and last entries of the normalized vector, together with its mean and variance, may be used in place of their unnormalized counterparts in the feature set. Finally, since the transmitter and the receiver have to agree on the symbol duration, the symbol duration is known at the receiver and can also be part of the feature set. Table V summarizes the set of features that are used as input to each of the NN detection algorithms in this paper.
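For concreteness, the full per-symbol feature set described above can be assembled as in the following sketch, which concatenates the slope vector, the first and last bin values, the mean and variance of the bin vector, and the symbol duration. The function name and layout are illustrative assumptions, not the paper's implementation:

```python
from statistics import fmean, pvariance

def build_feature_vector(bins, slope, symbol_duration):
    """Concatenate the slope vector with b_1 and b_B, the mean and
    (population) variance of the bin vector, and the symbol duration
    known to the receiver."""
    return (list(slope)
            + [bins[0], bins[-1], fmean(bins), pvariance(bins)]
            + [symbol_duration])
```

The resulting vector has length (B - 1) + 4 + 1 and serves as the input to the NN detectors.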
References
 [1] M. Stojanovic and J. Preisig, “Underwater acoustic communication channels: Propagation models and statistical characterization,” IEEE Communications Magazine, vol. 47, no. 1, pp. 84–89, 2009.
 [2] Y. Moritani et al., “Molecular communication for health care applications,” in Proc. of 4th Annual IEEE International Conference on Pervasive Computing and Communications Workshops, Pisa, Italy, 2006, p. 5.
 [3] I. F. Akyildiz et al., “Nanonetworks: A new communication paradigm,” Computer Networks, vol. 52, no. 12, pp. 2260–2279, August 2008.
 [4] T. Nakano et al., Molecular communication. Cambridge University Press, 2013.
 [5] N. Farsad et al., “A comprehensive survey of recent advancements in molecular communication,” IEEE Communications Surveys & Tutorials, vol. 18, no. 3, pp. 1887–1919, third quarter 2016.
 [6] Y. LeCun et al., “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015. [Online]. Available: http://dx.doi.org/10.1038/nature14539
 [7] I. Goodfellow et al., Deep Learning. MIT Press, Nov. 2016.
 [8] M. Ibnkahla, “Applications of neural networks to digital communications – a survey,” Signal Processing, vol. 80, no. 7, pp. 1185–1215, 2000.
 [9] N. Farsad and A. Goldsmith, “A molecular communication system using acids, bases and hydrogen ions,” in 2016 IEEE 17th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2016, pp. 1–6.
 [10] B. Grzybowski, Chemistry in Motion: Reaction-Diffusion Systems for Micro- and Nanotechnology. Wiley, 2009.
 [11] L. Debnath, Nonlinear Partial Differential Equations for Scientists and Engineers. Springer Science & Business Media, 2011.
 [12] B. Aazhang et al., “Neural networks for multiuser detection in code-division multiple-access communications,” IEEE Transactions on Communications, vol. 40, no. 7, pp. 1212–1222, Jul 1992.
 [13] U. Mitra and H. V. Poor, “Neural network techniques for adaptive multiuser demodulation,” IEEE Journal on Selected Areas in Communications, vol. 12, no. 9, pp. 1460–1470, Dec 1994.
 [14] J. J. Murillo-Fuentes et al., “Gaussian processes for multiuser detection in CDMA receivers,” in Advances in Neural Information Processing Systems 18, Y. Weiss et al., Eds. MIT Press, 2006, pp. 939–946.
 [15] Y. Işık and N. Taşpınar, “Multiuser detection with neural network and PIC in CDMA systems for AWGN and Rayleigh fading asynchronous channels,” Wireless Personal Communications, vol. 43, no. 4, pp. 1185–1194, 2007.
 [16] E. Nachmani et al., “Learning to decode linear codes using deep learning,” in 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Sept 2016.
 [17] S. Dörner et al., “Deep learning-based communication over the air,” arXiv preprint arXiv:1707.03384, 2017.
 [18] T. J. O’Shea et al., “Learning to communicate: Channel autoencoders, domain specific regularizers, and attention,” in 2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Dec 2016, pp. 223–228.
 [19] E. Nachmani et al., “RNN decoding of linear block codes,” arXiv preprint arXiv:1702.07560, 2017.
 [20] ——, “Deep learning methods for improved decoding of linear codes,” IEEE Journal of Selected Topics in Signal Processing, 2018.
 [21] F. Liang et al., “An iterative BP-CNN architecture for channel decoding,” IEEE Journal of Selected Topics in Signal Processing, 2018.
 [22] S. Cammerer et al., “Scaling deep learning-based decoding of polar codes via partitioning,” arXiv preprint arXiv:1702.06901, 2017.
 [23] S. Dörner et al., “Deep learning-based communication over the air,” IEEE Journal of Selected Topics in Signal Processing, 2017.
 [24] N. Samuel et al., “Deep MIMO detection,” arXiv preprint arXiv:1706.01151, 2017.
 [25] C. Lee et al., “Machine learning based channel modeling for molecular MIMO communications,” in IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2017.
 [26] T. J. O’Shea et al., “Learning approximate neural estimators for wireless channel state information,” arXiv preprint arXiv:1707.06260, 2017.
 [27] T. J. O’Shea and J. Hoydis, “An introduction to machine learning communications systems,” arXiv preprint arXiv:1702.00832, 2017.
 [28] A. Krizhevsky et al., “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
 [29] K. He et al., “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
 [30] G. Hinton et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
 [31] A. Graves and N. Jaitly, “Towards end-to-end speech recognition with recurrent neural networks,” in Proceedings of the 31st International Conference on Machine Learning (ICML-14), 2014, pp. 1764–1772.
 [32] D. Amodei et al., “Deep Speech 2: End-to-end speech recognition in English and Mandarin,” in International Conference on Machine Learning, 2016, pp. 173–182.
 [33] D. Bahdanau et al., “Neural Machine Translation by Jointly Learning to Align and Translate,” arXiv:1409.0473 [cs, stat], Sep. 2014.
 [34] K. Cho et al., “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.
 [35] Z. Li and Y. Yu, “Protein Secondary Structure Prediction Using Cascaded Convolutional and Recurrent Neural Networks,” arXiv:1604.07176, 2016.
 [36] Z. Ghassemlooy, W. Popoola, and S. Rajbhandari, Optical Wireless Communications: System and Channel Modelling with MATLAB®, 1st ed. CRC Press, 2012.
 [37] C. Gong and Z. Xu, “Channel estimation and signal detection for optical wireless scattering communication with intersymbol interference,” IEEE Transactions on Wireless Communications, vol. 14, no. 10, pp. 5326–5337, Oct 2015.
 [38] G. Aminian et al., “Capacity of diffusion-based molecular communication networks over LTI-Poisson channels,” IEEE Transactions on Molecular, Biological and Multi-Scale Communications, vol. 1, no. 2, pp. 188–201, June 2015.
 [39] V. Jamali et al., “Channel estimation for diffusive molecular communications,” IEEE Transactions on Communications, vol. 64, no. 10, pp. 4238–4252, Oct 2016.
 [40] ——, “SCW codes for optimal CSI-free detection in diffusive molecular communications,” in IEEE International Symposium on Information Theory (ISIT), June 2017, pp. 3190–3194.
 [41] ——, “Non-coherent detection for diffusive molecular communications,” arXiv preprint arXiv:1707.08926, 2017.
 [42] N. Farsad, D. Pan, and A. Goldsmith, “A novel experimental platform for in-vessel multi-chemical molecular communications,” in IEEE Global Communications Conference (GLOBECOM), 2017.
 [43] N. Farsad et al., “Tabletop molecular communication: Text messages through chemical signals,” PLOS ONE, vol. 8, no. 12, p. e82935, Dec 2013.
 [44] B. H. Koo et al., “Molecular MIMO: From theory to prototype,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 3, pp. 600–614, March 2016.
 [45] A. J. Viterbi and J. K. Omura, Principles of digital communication and coding. Courier Corporation, 2013.
 [46] E. Dahlman et al., 4G: LTE/LTE-Advanced for Mobile Broadband. Academic Press, 2013.
 [47] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Wiley-Interscience, 2006.
 [48] S. Lawrence et al., “Face recognition: A convolutional neural-network approach,” IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 98–113, 1997.
 [49] A. Krizhevsky et al., “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
 [50] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
 [51] M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673–2681, 1997.
 [52] A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional LSTM and other neural network architectures,” Neural Networks, vol. 18, no. 5, pp. 602–610, 2005.
 [53] A. Graves et al., “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in Proceedings of the 23rd International Conference on Machine Learning (ICML 2006), 2006, pp. 369–376.
 [54] S. Shamai, “Capacity of a pulse amplitude modulated direct detection photon channel,” IEE Proceedings I – Communications, Speech and Vision, vol. 137, no. 6, pp. 424–430, Dec 1990.
 [55] J. Cao et al., “Capacity-achieving distributions for the discrete-time Poisson channel–Part I: General properties and numerical techniques,” IEEE Transactions on Communications, vol. 62, no. 1, pp. 194–202, 2014.
 [56] N. Farsad et al., “Capacity of molecular channels with imperfect particle-intensity modulation and detection,” in IEEE International Symposium on Information Theory (ISIT), June 2017, pp. 2468–2472.
 [57] N. Hayasaka and T. Ito, “Channel modeling of non-directed wireless infrared indoor diffuse link,” Electronics and Communications in Japan (Part I: Communications), vol. 90, no. 6, pp. 9–19, 2007.
 [58] A. K. Majumdar et al., “Reconstruction of probability density function of intensity fluctuations relevant to free-space laser communications through atmospheric turbulence,” in Proc. SPIE, vol. 6709, 2007, p. 67090.
 [59] H. Ding et al., “Modeling of non-line-of-sight ultraviolet scattering channels for communication,” IEEE Journal on Selected Areas in Communications, vol. 27, no. 9, 2009.
 [60] K. V. Srinivas et al., “Molecular communication in fluid media: The additive inverse gaussian noise channel,” IEEE Transactions on Information Theory, vol. 58, no. 7, pp. 4678–4692, 2012.
 [61] A. Noel et al., “Optimal receiver design for diffusive molecular communication with flow and additive noise,” IEEE Transactions on NanoBioscience, vol. 13, no. 3, pp. 350–362, Sept 2014.
 [62] G. D. Forney, “The Viterbi algorithm,” Proceedings of the IEEE, vol. 61, no. 3, pp. 268–278, March 1973.
 [63] X. Lingyun and D. Limin, “Efficient Viterbi beam search algorithm using dynamic pruning,” in Proceedings of the 7th International Conference on Signal Processing, vol. 1, Aug 2004, pp. 699–702.
 [64] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
 [65] K. Cho et al., “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.
 [66] B. Atakan et al., “Body area nanonetworks with molecular communications in nanomedicine,” IEEE Communications Magazine, vol. 50, no. 1, pp. 28–34, 2012.
 [67] J. C. Anderson et al., “Environmentally controlled invasion of cancer cells by engineered bacteria,” Journal of Molecular Biology, vol. 355, no. 4, pp. 619–627, 2006.
 [68] T. Danino et al., “Programmable probiotics for detection of cancer in urine,” Science Translational Medicine, vol. 7, no. 289, pp. 289ra84–289ra84, 2015.
 [69] S. Slomovic et al., “Synthetic biology devices for in vitro and in vivo diagnostics,” Proceedings of the National Academy of Sciences, vol. 112, no. 47, pp. 14 429–14 435, Nov. 2015.