Evaluation of Surrogate Models for Multi-fin Flapping Propulsion Systems

31 October 2019 · Kamal Viswanath, et al.

The aim of this study is to develop surrogate models for quick, accurate prediction of the thrust forces generated through flapping fin propulsion for given operating conditions and fin geometries. Different network architectures and configurations are explored to model the training data separately for the lead fin and rear fin of a tandem fin setup. We progressively improve the data representation of the input parameter space for model predictions. The models are tested on three unseen fin geometries, and the predictions are validated with computational fluid dynamics (CFD) data. Finally, the orders-of-magnitude gains in computational performance of these surrogate models over experimental and CFD runs, and the accompanying tradeoff in accuracy, are discussed within the context of this tandem fin configuration.


I Introduction

To address the need for more effective and efficient maneuvering in marine environments, propulsion and control systems inspired by fish and other aquatic organisms are starting to provide viable alternatives to traditional vehicle thrusters and control surfaces in a range of underwater regimes. Research to identify the principles of fish locomotion and characterize the propulsive performance for a variety of artificial, bio-inspired underwater propulsion systems has steadily grown over the past few decades with the predominant focus on flapping fins or foils, in various configurations, to achieve thrust. However, while biologists and engineers have worked together to study and take inspiration from nature in the development of robotic fins, research seeking to understand and model the propulsive performance of multiple fins operating on a vehicle is limited.

Biologists have observed the coordinated body and fin motions exhibited by various fish species and studied the wake interactions between these moving surfaces [1, 2, 3]. Various research groups have studied a multitude of fin shapes, stroke parameters, and configurations, as well as different fin materials and surface curvature control techniques, for robotic underwater propulsion systems [4, 5, 6, 7, 8, 9, 10, 11], and some have derived reduced order models for individual fins to capture relevant features at significantly lower computational cost [12]. While there is a rich set of parametric studies of isolated flapping fins, research on the propulsive performance of a system of multiple fins is limited, including studies of interactions between dorsal and caudal fins [13, 14], tandem pectoral fins [15, 16], and pectoral and caudal fins [17]. Reduced order models of these multi-fin systems are even less common, including quasi-steady models of tandem flapping foils [18, 19]. To create an accurate and time-effective tool for the design, analysis, and control of vehicles propelled by artificial flapping fins, comprehensive studies that identify the effects of various fin parameters on multi-fin flow interactions must be used to develop lower-computational-cost surrogate models of forces and propulsive efficiencies.

To characterize the effects of flow interactions between oscillating propulsive surfaces caused by time-varying wake structures, we have studied a configuration of tandem, identical-geometry fins flapping perpendicular to the direction of flow [15, 16], similar to the lift-based pectoral fin motions of some fish species [20]. Using the results of these previous studies, the current research seeks to design and evaluate surrogate models of stroke-averaged and time-varying thrust for this multi-fin system. The goal is to develop models that give a quick, accurate force profile for a given fin configuration and set of kinematics, without explicitly embedding the governing mathematics within them. Beyond accuracy, we seek models that are data efficient to train and generalizable, so that we can be confident in tests of different fin configurations.

As shown in fig. 1, the propulsion system consists of two fins in tandem; on an underwater test vehicle, one such set on each side propels the vehicle. For this work we focus on predicting the thrust (forward force) generated by the tandem fins, both the cycle-averaged thrust and the time history of the force generation. The shape of the time history is important for understanding the nature of the flapping kinematics, and it also holds clues to fluid dynamic events, such as fluid vortex separation and recapture around the fins, that can significantly enhance or retard overall thrust generation. A major challenge in training the neural networks is the ability to learn these fluid dynamic events without any explicit programming.

Fig. 1: Tandem fin configuration.

In this work, all training set data are from two fin shapes, a rectangular fin (fig. 2(a)) and a bio-inspired fin, or bio-fin (fig. 2(b)). Tandem fin configuration experimental and CFD results for these fins have been published previously [16, 21] for various flow regimes, fin configurations, and driven kinematics. We test the accuracy of the surrogate models on new fin shapes (fig. 3) that lie between the bio-fin and the rectangular fin, evaluating their ability to predict fluid dynamic effects in the thrust profile and to compute the average thrust force for these previously unseen geometries.

(a) Rectangular fin
(b) Bio-inspired fin
Fig. 2: Fin geometries used in experimental and CFD data collection.
(a) Bio-fin 1
(b) Bio-fin 2
(c) Bio-fin 3
Fig. 3: Bio-inspired geometries that transition from bio-fin to rectangular fin as the test set.

Surrogate models are used extensively in engineering design because the prohibitive cost of high-fidelity simulations makes them impractical for effectively searching the design space. There are a number of surrogate modeling approaches, including reduced order models and lookup tables. However, machine learning approaches, in which the model learns from data as opposed to being explicitly programmed, have become increasingly popular due to the increasing amount of available data and the rapid theoretical advances in the field. Machine learning approaches to surrogate modeling include nonlinear regression, tree ensembles, and kernel-based interpolation methods such as kriging.

More recently, deep learning approaches have been introduced as a powerful modeling tool. Deep neural networks are hierarchical networks of computational units, or “neurons”, that are capable of approximating arbitrary functions [22]. Deep learning has had a transformative effect in fields such as computer vision [23] and natural language processing [24], and it has shown promising results in physical modeling and design [25, 26, 27]. Deep learning algorithms have several major strengths: they can represent arbitrarily complex functions; they can learn incrementally, reducing the memory cost of training and allowing for ongoing model refinement; and their flexible structure and training process allows for easier sensitivity testing and analysis.

In this paper, we investigate the effectiveness of several deep learning surrogate models for fin geometry and configuration in multi-fin flapping propulsion systems.

The major contributions of this research are

  1. a new surrogate model for predicting the force profiles of novel fin geometries in multi-fin flapping propulsion systems, and

  2. a demonstration of the potential of neural-network-based surrogate modeling for propulsion system design.

We hope that this work is a step towards a robust, fast surrogate model that is able to explore many dimensions of the design space.

II Methods

In addition to a nonlinear regression baseline, we test five increasingly complex neural networks to predict the thrust given the propulsion system kinematics and geometry. Initial models are trained only on the lead fin data and their predictive accuracy is used to refine the input parameters and hyperparameters for the various models. The performance of the lead fin models is used to inform the setup and training of the rear fin neural network model. The architectures, training setups, and evaluation criteria of each model are described in the following sections.

II-A Training Dataset

II-A1 Input Parameters

The first problem to tackle is choosing inputs that can effectively span the high-dimensional space of the problem, since the thrust is a function of many parameters. For the configuration shown in fig. 1, the input space consists of the variables described in table I.

Parameter                                Description
Geometry
  5 or more geometric chord lengths†     At equidistant points in the spanwise direction
  Fin leading edge length (m)            Reference length for normalization
  Lead fin to rear fin offset distance
Kinematics
  T (s)                                  Time period of a fin stroke (flap) cycle
  Stroke angle (rad)                     Time-varying history of the flapping angle over a single cycle
  Pitch angle (rad)                      Time-varying history of the pitching angle over a single cycle
  Forward speed (m/s)                    Incoming fluid flow velocity
  Fin tip average speed (m/s)            Average speed of the wing tip over a single cycle
  Stroke phase offset                    Phase offset of the rear fin flapping w.r.t. the lead fin
  Pitch phase offset                     Phase offset of the rear fin pitching w.r.t. the lead fin
Forces
  Force (N)                              Time-varying profile of the thrust over a single cycle
†As shown in fig. 4.
TABLE I: Training Parameters

The data from CFD and experimental runs are preprocessed to normalize input length scales, use a force coefficient representation, match timescales, and remove outliers. The forward speed is normalized by the average fin tip speed. The chord length data, normalized by the leading-edge length, are constant for each fin geometry. The stroke and pitch angles give the spatial orientation of the fin over a cycle and are time dependent.
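A minimal sketch of this normalization step is given below; the function, its arguments, and the reference constants (fluid density, planform area) are illustrative assumptions rather than the exact preprocessing code used in this study.

import numpy as np

def preprocess_run(t, stroke, pitch, force, chords_m, leading_edge_m,
                   forward_speed, tip_speed_avg, rho=1000.0, area_m2=75e-4):
    # Length scales normalized by the fin leading-edge length (table I).
    chords = np.asarray(chords_m) / leading_edge_m
    # Forward speed normalized by the average fin tip speed.
    u_star = forward_speed / tip_speed_avg
    # Match timescales: express one stroke cycle on [0, 1].
    t = np.asarray(t)
    t_star = (t - t[0]) / (t[-1] - t[0])
    # Force coefficient representation (reference quantities are assumed here).
    c_f = np.asarray(force) / (0.5 * rho * tip_speed_avg**2 * area_m2)
    # Crude outlier removal: clip samples beyond 4 standard deviations.
    c_f = np.clip(c_f, c_f.mean() - 4 * c_f.std(), c_f.mean() + 4 * c_f.std())
    return t_star, np.stack([stroke, pitch], axis=1), chords, u_star, c_f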

Fig. 4: Fin geometry with chord lengths at 5 equidistant points.

II-B Neural Network Architectures

We considered two major architectures during these experiments: densely-connected neural networks (DNNs) and 1D convolutional neural networks (CNNs).

Fig. 5: A basic densely-connected network. Each neuron is connected to all neurons in previous and subsequent layers.

A DNN is a feed-forward neural network in which each neuron is connected to all neurons in the previous and subsequent layers, as seen in fig. 5. Training a DNN means learning a set of weight matrices mapping each neuron in layer l to each neuron in layer l+1. We use this as a baseline neural model because the dense connections mean that DNNs treat all data equally: they make no assumptions about the underlying structure of the data. Case B uses this basic architecture.

Fig. 6: A 1D convolutional neural network (CNN) with pooling layers and dropout regularization. Instead of learning parameters relating all neurons in adjoining layers, a convolutional neural network learns a smaller kernel of shared parameters.

CNNs, in contrast, make assumptions about the data. Instead of learning full weight matrices, CNNs learn a smaller set of shared parameters, called a kernel, for each layer (fig. 6). This parameter sharing enforces translational invariance and restricts the solution space by reducing the number of trainable parameters. As a result, CNNs tend to be better than DNNs at capturing local structure.

In our convolutional models (cases C-E), we use the basic architecture shown in fig. 6: a seven-layer, feed-forward architecture consisting of an input layer, two 1D convolution layers, a max pooling layer, two more 1D convolution layers, a global average pooling layer, and finally a fully-connected output layer. Each convolutional layer uses a leaky ReLU activation function [28] with a small fixed gradient for inactive units. Desirable traits for the trained network include robustness over the range of variations of the input quantities and minimal overfitting to the training data spectrum. To this end, the network uses dropout regularization, which randomly turns off a fraction of the connections during training and steers the network away from learning an overly complex function; 20% of nodes are dropped per iteration [29]. All kernels are 3 channels deep, and the width varies per experiment.
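A Keras sketch of this architecture is shown below; the filter count, input shape, and output length are illustrative assumptions, and the leaky ReLU slope is left at the library default because the paper's value did not survive extraction.

from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(n_timesteps, n_features, n_outputs, n_filters=32):
    # Input: per-timestep channels (kinematics plus tiled geometry/flow features).
    inputs = keras.Input(shape=(n_timesteps, n_features))
    # Two 1D convolution layers with leaky ReLU activations.
    x = layers.Conv1D(n_filters, kernel_size=3, padding="same")(inputs)
    x = layers.LeakyReLU()(x)
    x = layers.Conv1D(n_filters, kernel_size=3, padding="same")(x)
    x = layers.LeakyReLU()(x)
    # Max pooling followed by dropout regularization (20% dropped per iteration).
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Dropout(0.2)(x)
    # Two more 1D convolution layers.
    x = layers.Conv1D(n_filters, kernel_size=3, padding="same")(x)
    x = layers.LeakyReLU()(x)
    x = layers.Conv1D(n_filters, kernel_size=3, padding="same")(x)
    x = layers.LeakyReLU()(x)
    # Global average pooling and a fully-connected output layer.
    x = layers.GlobalAveragePooling1D()(x)
    # Output: the thrust profile over one cycle (output layout is an assumption).
    outputs = layers.Dense(n_outputs)(x)
    return keras.Model(inputs, outputs)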

II-C Training Setup

The goal of training is to learn the mapping between the parameters of the flapping-fin configuration (see table I) and the resulting force profile by minimizing a loss function \mathcal{L}:

\theta^{*} = \arg\min_{\theta} \mathcal{L}\big(\hat{F}(\theta), F\big)   (1)

where \theta are the trainable parameters of the network, \hat{F} is the predicted force profile, and F is the true force profile for a given input. All runs used the mean squared error as the loss function unless noted (eq. 2):

\mathcal{L}_{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(\hat{F}_{i} - F_{i}\big)^{2}   (2)

Networks were tuned using a grid search (table II). The network architecture and optimization hyperparameters were varied along the following axes: number of layers, number of units per layer, activation function, loss function, learning rate, and data granularity. Due to the high cost of collecting and preparing training data, this last axis, which governs data efficiency, was of particular interest. All networks were trained for 1,500 epochs with a batch size of 500 examples. Networks were optimized with the Adam optimizer [30] using the learning rates listed in table II. All networks were trained using Keras with a TensorFlow [31] backend.
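A sketch of one training run is given below, reusing the build_cnn function sketched above; the array shapes are hypothetical placeholders, and the learning rate is one of the two grid values in table II.

import numpy as np
import tensorflow as tf

# Placeholder training data: 200 runs, 1000 samples per cycle, 8 input channels.
x_train = np.random.rand(200, 1000, 8).astype("float32")
y_train = np.random.rand(200, 1000).astype("float32")

model = build_cnn(n_timesteps=1000, n_features=8, n_outputs=1000)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
# 1,500 epochs with a batch size of 500 examples, as described above.
model.fit(x_train, y_train, epochs=1500, batch_size=500, verbose=0)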

Number of Layers: 2
Number of Units: 16, 32
Activation Function: TanH, Leaky ReLU
Loss Function: MSE
Learning Rate: 0.001, 0.003
Data Granularity: 1000 or more discrete points per cycle
TABLE II: Hyperparameters

The processed dataset was split randomly into training and test datasets of 85% and 15% of the data, respectively. For reproducibility and consistency between runs, the random seed was set to 8 for all runs.
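A minimal sketch of this split is shown below; whether the seed was applied through NumPy or another random number generator is an assumption, and the arrays are placeholders.

import numpy as np

# Placeholder arrays standing in for the processed dataset.
x_all = np.random.rand(1000, 8).astype("float32")
y_all = np.random.rand(1000).astype("float32")

rng = np.random.RandomState(8)            # fixed seed, per the text
idx = rng.permutation(len(x_all))
n_train = int(0.85 * len(x_all))          # 85% train / 15% test
x_train, y_train = x_all[idx[:n_train]], y_all[idx[:n_train]]
x_test, y_test = x_all[idx[n_train:]], y_all[idx[n_train:]]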

II-D Evaluation Criteria

The objective of our surrogate models is to quickly and accurately predict the thrust profile of a novel multi-fin configuration. To achieve this, we evaluate models on empirical error, generalizability, data efficiency, and computational performance.

Empirical error, or the correctness of our predictions, is our primary evaluation criterion. It is measured as the mean squared error of our prediction versus the expected value, as seen in eq. 2. We also verify whether the cycle-averaged thrust predictions lie within the experimental error bounds for the lead fin and the rear fin.
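These two checks can be expressed in a few lines; the sketch below assumes the predicted and CFD force profiles are sampled on the same time grid.

import numpy as np

def empirical_error(pred_force, cfd_force):
    # Mean squared error over the cycle (eq. 2).
    mse = float(np.mean((pred_force - cfd_force) ** 2))
    # Difference in cycle-averaged thrust, compared against the experimental bounds.
    cycle_avg_diff = float(abs(pred_force.mean() - cfd_force.mean()))
    return mse, cycle_avg_diff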

To measure generalizability, or the model’s ability to make accurate predictions for unfamiliar fin geometries, we validate the models’ performance on two biologically-inspired geometries that were not part of the training set.

Data efficiency, or the network’s ability to learn an accurate model given limited training data, is evaluated during the hyperparameter tuning process. Models are trained for different fractions of the training data, and the final selected models are able to produce accurate predictions given varying subsets of the full training set.

Finally, we measure computational performance by tracking the training time and benchmarking the prediction time for five fin stroke cycles. This is then compared against the time requirements of experimental trials and traditional CFD modeling.

III Results and Discussion

III-A Case A: Nonlinear Regression

An initial approach using nonlinear regression is implemented to model cycle-averaged thrust as a function of the pitch-stroke phase offset within individual fins and the stroke-stroke phase offset between fins. Using a cross-correlated harmonic function (eq. 3), predictions of stroke-averaged thrust are obtained for the lead and rear fins, where the average thrust for each fin ranges between -1.1 N and 1.1 N (fig. 7). While this method produced a reasonably good estimate for stroke-averaged thrust across a limited number of variables, understanding the effects of more variables on the time-varying thrust profiles would be prohibitively complex for this methodology.

(3)

where the fitted harmonic function depends on the stroke-stroke phase offset (between fins) and the pitch-stroke phase offset (single fin), with the harmonic coefficients being solved for.
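Since the exact form of eq. 3 did not survive extraction, the sketch below uses a placeholder cross-correlated harmonic form only to illustrate how such a fit can be performed; the model function, data arrays, and coefficient names are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def harmonic_model(X, a0, a1, a2, a3):
    # Placeholder cross-correlated harmonic form in the two phase offsets;
    # the paper's actual eq. 3 is not reproduced here.
    phi_ss, phi_ps = X        # stroke-stroke and pitch-stroke phase offsets
    return (a0
            + a1 * np.cos(phi_ss) * np.cos(phi_ps)
            + a2 * np.cos(phi_ss)
            + a3 * np.cos(phi_ps))

# Placeholder observations: phase offsets (2 x n_runs) and cycle-averaged thrust (N).
phase_offsets = np.random.uniform(0.0, 2.0 * np.pi, size=(2, 40))
mean_thrust = np.random.uniform(-1.1, 1.1, size=40)

coeffs, _ = curve_fit(harmonic_model, phase_offsets, mean_thrust)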

Fig. 7: Tandem pectoral fin thrust results as a function of the stroke phase offset between fins and the pitch-stroke reversal offset within an individual fin, fitted with cross-correlated harmonic functions.

III-B Case B: DNN Baseline

The goal of this first experiment is to establish a baseline for a neural network surrogate model mapping kinematics to the force profile. As a precursor, we explored what a DNN could learn from minimal input parameters, such as time, shape, spacing, and offset of the fins, by training a large number of models with varying hyperparameters (number of hidden layers, units, learning rate, and activation functions). The best training convergence was achieved with a three-layer densely connected network with the hyperparameters listed in table IV and the inputs listed in table III. The learning rate during training varied between 0.001 and 0.003, and the training data included the lead fin data of both fin shapes, bio-inspired and rectangular.

Input Parameters (Lead fin):
  Categorical shape: 0 (bio-fin) or 1 (rectangular fin)
  Stroke and pitch angle time history
  Forward speed
Output (Predictions): Force output
TABLE III: Case B: Parameters
Number of Layers: 3
Number of Units: 32
Activation Function: TanH
Loss Function: MSE
TABLE IV: Case B: Hyperparameters
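A sketch of this dense baseline is given below; the input dimensionality and whether the output layer counts toward the three layers are assumptions.

from tensorflow import keras
from tensorflow.keras import layers

def build_dnn(n_inputs):
    # Three densely connected hidden layers of 32 TanH units (table IV),
    # followed by a linear output for the force at the corresponding sample.
    model = keras.Sequential([
        keras.Input(shape=(n_inputs,)),
        layers.Dense(32, activation="tanh"),
        layers.Dense(32, activation="tanh"),
        layers.Dense(32, activation="tanh"),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model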

The model fits for both fin shapes are shown in fig. 8. The model is able to learn a general relationship between the fin and its force output. However, with the force range being approximately 2 N, the predictions are clearly off in magnitude and do not capture the local peaks well, even though subtle maxima are exhibited.

(a) Rectangular fin
(b) Bio-inspired fin
Fig. 8: DNN predictions.

III-C Case C: CNN with Spatial Inputs

In order to better capture local structure, reduce overfitting, and improve data efficiency, the next experiment uses a 1D CNN architecture. As generalizability is a primary goal in this work, there is a need for an improved representation of the model geometry. In addition, we have to improve the capture of local peaks, which may be driven by fluid dynamic events such as vortex shedding or recapture and which also depend on fin geometry. To do this, we refine the input space, given in table V, by providing five cross-sectional chord lengths, equally spaced along the fin span from the root to the tip, to better represent the fin shape. The chord length data, normalized by the leading-edge length, are constant for each fin geometry. The time-dependent stroke and pitch angle variations, phase matched with the output force during network training, impart an understanding of the spatial orientation of a given fin. Filters in the 1D CNN focus on a specific part of the sequence, so the network progressively reduces the size of the representation and the number of parameters, making it more data efficient. The hyperparameters for this network are listed in table VI.

Input Parameters (Lead fin):
  5 geometric chord lengths (as shown in fig. 4)
  Stroke and pitch angle time history
  Forward speed
Output (Predictions): Force output
TABLE V: Case C: Parameters
Initial Dense Layer: yes
Number of Layers: 2
Number of Units: 16
Activation Function: TanH
TABLE VI: Case C: Hyperparameters
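One way to arrange these inputs as per-timestep channels for the 1D CNN is sketched below; the channel ordering is an assumption.

import numpy as np

def assemble_cnn_inputs(stroke, pitch, chords, forward_speed):
    # Time-varying kinematics: stroke and pitch angle histories over one cycle.
    dynamic = np.stack([stroke, pitch], axis=1)
    # Geometry and flow features, constant per run, tiled along the time axis.
    static = np.concatenate([np.asarray(chords), [forward_speed]])
    static_tiled = np.tile(static, (len(stroke), 1))
    # Resulting shape: (n_timesteps, 2 + len(chords) + 1) channels per sample.
    return np.concatenate([dynamic, static_tiled], axis=1)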

To test generalizability, in this experiment the network is trained only on the bio-inspired fin data, and the trained network is used to predict the force output for the rectangular fin. Figure 9 shows that the network achieves a good fit for the bio-inspired fin it trained on, with the cycle-averaged thrust matching the CFD. The MSE implies that, over the thrust cycle, the prediction agrees with the CFD data to within 4% of the approximate peak-to-peak spread of 1.5 N. Interestingly, the network does seem able to predict events for the rectangular fin. The magnitudes of the force prediction on the rectangular fin show considerable discrepancies, but the prediction exhibits peaks comparable to the CFD data, with a phase delay during the cycle, as seen in fig. 9(b). The CFD results clearly show a local vortex recapture sequence at stroke reversal in addition to the bigger peaks from the up and down strokes, and the network weakly predicts these events. As expected, a single fin geometry is not sufficient training data for the network, and therefore, for all subsequent cases we train on both fin shapes.

(a) Bio-inspired fin
(b) Rectangular fin
Fig. 9: Predictions from network trained only on bio-inspired fin data.

III-D Case D: Generalization to Unseen Geometries

The main goal of the previous experiment, along with adding geometric data on the five chord lengths, was to enable the network's ability to generalize to a shape it has not seen. For a surrogate model to be used as a design tool, the performance of the model on unseen geometries is a critical performance metric, and we take another step towards this goal. This experiment tests the previous case's architecture on three new fin geometries that keep the area constant at 75 sq. cm and geometrically transition from the bio-fin to the rectangular fin. Figure 3 shows the geometry and nomenclature of these new fins.

Training is done using both fin shapes, and fig. 10 shows that the network is able to learn the relation between the force profile and the shapes it trained on, with the cycle-averaged thrust of both fins lying within the experimental bounds. Figure 11 shows the network's force predictions on the new test geometries. Bio-fins 1 (fig. 11(a)) and 3 (fig. 11(c)) exhibit the expected thrust profile, with peaks at stroke reversal and positive thrust during both the upstroke and downstroke phases. Bio-fin 2 shows the largest departure, with only one peak predicted, even though the predicted average thrust might seem reasonable. This outcome is not unexpected, since bio-fin 2 is the furthest in geometry from the two shapes the network trained on.

(a) Bio-inspired fin
(b) Rectangular fin
Fig. 10: 5-point geometry network fit, after training on both fins.
(a) Bio-fin 1 prediction
(b) Bio-fin 2 prediction
(c) Bio-fin 3 prediction
Fig. 11: 5-point geometry network predictions on different configurations.

III-E Case E: Refined Spatial Inputs

To mitigate this error, our intuition was that the network needs more spatial data to better understand the relationship between the predicted force and the shape of the fin. To implement this, the new network is trained with ten chord lengths instead of five for both fin shapes. The hyperparameters are shown in table VII, and the other parameters are the same as in the previous experiment. Qualitatively, the predictions of the new model look better than those of the model trained with just five chord lengths. For validation, CFD simulations are run for both the bio-fin 1 and bio-fin 2 shapes, and fig. 12 shows the comparison with both network predictions. The impact of the geometric refinement is clearly visible, with the 10-point geometry producing a better fit (fig. 12(b) compared with fig. 12(a)). Though the predicted average thrust is lower (the MSE corresponds to a fit within 13% of the peak-to-peak spread of 1.5 N), the ten-chord-length model seems to exhibit a greater understanding of the relationship between a fin's shape and the force produced. This is further validated by the excellent fit (within 6% of the peak-to-peak spread) of the bio-fin 1 prediction with the CFD result.

Initial Dense Layer: yes
Number of Layers: 2
Number of Units: 32
Activation Function: Leaky ReLU
TABLE VII: Case E: Hyperparameters
(a) 5-point geometry comparison of bio-fin 2
(b) 10-point geometry comparison of bio-fin 2
(c) 10-point geometry comparison of bio-fin 1
Fig. 12: Comparison of two geometry-based models, plotted against CFD data.

III-F Case F: Extension to the Rear Fin

Finally, the above model is extended to a multi-fin configuration. This experiment uses two independently trained models: a lead fin model that is identical to Case E, and a rear fin model that has additional inputs. The force production of the rear fin is more dependent on the fluid dynamics, as a result of the lead fin wake structures being incident on it, and thus requires more inputs as well. The spacing and phase offset between the two fins and the force output of the lead fin are added, since they directly affect the influence of the lead fin wake on the rear fin, and hence its force output. Table VIII shows the inputs for the rear fin network, with the first group of inputs being specific to the single rear fin and the second group defining its dependence on the spatial configuration and output of the lead fin. The hyperparameters for the rear fin network are the same as for the lead fin, but without the initial dense layer.

Input Parameters:
  Rear fin: 10 geometric chord lengths (as shown in fig. 4); stroke and pitch angle time history; forward speed
  Data from lead fin: X offset (spatial) from lead fin; stroke phase offset from lead fin; lead fin force output
Output (Predictions): Force output
TABLE VIII: Case F: Parameters
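A sketch of the resulting two-stage prediction is shown below; the feature layouts, model handles, and the assumption that the lead and rear fin profiles share a time grid are all illustrative.

import numpy as np

def predict_tandem_thrust(lead_model, rear_model, lead_feats, rear_feats,
                          x_offset, stroke_phase_offset):
    # Stage 1: evaluate the lead fin model (Case E) on its own feature stack.
    lead_force = lead_model.predict(lead_feats[np.newaxis, ...]).ravel()
    # Stage 2: append the lead fin force profile, fin spacing, and stroke phase
    # offset to the rear fin features, then evaluate the rear fin model.
    n = rear_feats.shape[0]
    extras = np.column_stack([lead_force,
                              np.full(n, x_offset),
                              np.full(n, stroke_phase_offset)])
    rear_inputs = np.concatenate([rear_feats, extras], axis=1)
    rear_force = rear_model.predict(rear_inputs[np.newaxis, ...]).ravel()
    return lead_force, rear_force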

After training, we again validate the predictions against simulation data. In figures 13(a) and 13(b), we see that qualitatively the predictions closely track the CFD data. For the bio-fin 2 run, there is a dip down to 0 N between the up and down strokes that the network does not predict. Quantitatively, the network overpredicts the average rear fin thrust in both cases, with bio-fin 1 falling just within the rear fin experimental bounds and bio-fin 2 having a larger deviation from the CFD results.

(a) 10-point geometry comparison of bio-fin 1 rear
(b) 10-point geometry comparison of bio-fin 2 rear
Fig. 13: Comparison between prediction and CFD data for rear fin configurations.

III-G Computational Performance

As previously mentioned, the main objective of a surrogate model is to provide a fast approximation of some variables of interest given the same kind of inputs as an experimental or CFD run. All cases tested showed a significant speedup over both experimental trials and CFD simulation.

The time required to collect data from experimental trials depends primarily on fin fabrication and measurement apparatus setup and calibration, which together take on the order of days. Once the experiment is set up, for our cases, collection of approximately 1000 stroke cycles takes about one hour, and post-processing the data to obtain forces takes a few more hours after that. A CFD simulation of a new geometry is significantly faster, but the process still involves grid generation, case setup, high performance computing resources, and post-processing, which together take on the order of hours to days. In contrast, predicting the force curve for a new fin configuration takes approximately 400 ms, which is four orders of magnitude faster than traditional data collection and five orders of magnitude faster than a full experimental trial. Training a neural network from scratch on our dataset (Cases C-F) takes several hours, depending on the specific architecture and input data format.
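The prediction-time benchmark amounts to timing a single forward pass; a minimal sketch, assuming a trained model and a prepared five-cycle input batch, is shown below.

import time

def benchmark_prediction(model, five_cycle_inputs):
    # Time one surrogate evaluation over five fin stroke cycles.
    start = time.perf_counter()
    model.predict(five_cycle_inputs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"Surrogate prediction time: {elapsed_ms:.1f} ms")
    return elapsed_ms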

IV Conclusion

We demonstrate the development of neural networks, trained on existing experimental and CFD data, as surrogate reduced order models for fast, accurate prediction of the thrust forces generated through flapping fin propulsion. The results from multiple network architectures and different input frameworks indicate that neural networks are a viable option for handling the large input parameter space spanning the kinematics, fin shapes, and fluid dynamic effects reflected in the output thrust forces.

The validation with CFD results shows that the network predictions are within experimental bounds for the cycle-averaged thrust and have low MSEs over a single cycle, giving an accurate time history of the thrust profile for those cases that are closer to the training data in the parametric space. In the above cases, the bio-fin 2 geometry has the biggest departure in shape and is thus the least accurate of the thrust profile predictions. While still insightful, this accuracy needs to be further improved to enable a useful design and control tool. With this in mind, future steps will explore alternate loss functions during training and combined training for the lead and rear fins. Further preprocessing of the training data and improvements to the input parameter space will also be investigated to improve the robustness of the trained networks. Given the outstanding performance gains of the above surrogate models, the tradeoff with accuracy, in our estimation, favors these neural network models.

Acknowledgment

This research has been sponsored by the Naval Research Laboratory (NRL) 6.2 Base Program. Computational and experimental resources were provided by the Laboratories for Computational Physics and Fluid Dynamics at NRL.

References

  • [1] J. Hove, L. O’Bryan, M. Gordon, P. Webb, and D. Weihs, “Boxfishes (Teleostei: Ostraciidae) as a model system for fishes swimming with many fins: kinematics,” Journal of Experimental Biology, vol. 204, no. 8, p. 1459, Apr. 2001. [Online]. Available: http://jeb.biologists.org/content/204/8/1459.abstract
  • [2] G. V. Lauder and P. G. A. Madden, “Learning from fish: Kinematics and experimental hydrodynamics for roboticists,” International Journal of Automation and Computing, vol. 3, no. 4, pp. 325–335, Oct. 2006. [Online]. Available: http://link.springer.com/10.1007/s11633-006-0325-0
  • [3] B. E. Flammang, G. V. Lauder, D. R. Troolin, and T. E. Strand, “Volumetric imaging of fish locomotion,” Biology Letters, vol. 7, no. 5, pp. 695–698, Oct. 2011. [Online]. Available: https://royalsocietypublishing.org/doi/10.1098/rsbl.2011.0282
  • [4] D. S. Barrett, M. S. Triantafyllou, D. K. P. Yue, M. A. Grosenbaugh, and M. J. Wolfgang, “Drag reduction in fish-like locomotion,” Journal of Fluid Mechanics, vol. 392, pp. 183–212, Aug. 1999.
  • [5] S. Licht, V. Polidoro, M. Flores, F. Hover, and M. Triantafyllou, “Design and Projected Performance of a Flapping Foil AUV,” IEEE Journal of Oceanic Engineering, vol. 29, no. 3, pp. 786–794, Jul. 2004. [Online]. Available: http://ieeexplore.ieee.org/document/1353431/
  • [6] C. Zhou, L. Wang, Z. Cao, S. Wang, and M. Tan, “Design and Control of Biomimetic Robot Fish FAC-I,” in Bio-mechanisms of Swimming and Flying, N. Kato and S. Kamimura, Eds.   Tokyo: Springer Japan, 2008, pp. 247–258.
  • [7] P. E. Sitorus, Y. Y. Nazaruddin, E. Leksono, and A. Budiyono, “Design and Implementation of Paired Pectoral Fins Locomotion of Labriform Fish Applied to a Fish Robot,” Journal of Bionic Engineering, vol. 6, no. 1, pp. 37–45, Mar. 2009. [Online]. Available: http://link.springer.com/10.1016/S1672-6529(08)60100-6
  • [8] N. Kato, Y. Ando, A. Tomokazu, H. Suzuki, K. Suzumori, T. Kanda, and S. Endo, “Elastic Pectoral Fin Actuators for Biomimetic Underwater Vehicles,” in Bio-mechanisms of Swimming and Flying, N. Kato and S. Kamimura, Eds.   Tokyo: Springer Japan, 2008, pp. 271–282.
  • [9] J. S. Palmisano, J. D. Geder, R. Ramamurti, W. C. Sandberg, and R. Banahalli, “Robotic pectoral fin thrust vectoring using weighted gait combinations,” Applied Bionics and Biomechanics, no. 3, pp. 333–345, 2012.
  • [10] C. J. Esposito, J. L. Tangorra, B. E. Flammang, and G. V. Lauder, “A robotic fish caudal fin: effects of stiffness and motor program on locomotor performance,” Journal of Experimental Biology, vol. 215, no. 1, pp. 56–67, Jan. 2012. [Online]. Available: http://jeb.biologists.org/cgi/doi/10.1242/jeb.062711
  • [11] K. W. Moored, W. Smith, J. Hester, W. Chang, and H. Bart-Smith, “Investigating the Thrust Production of a Myliobatoid-Inspired Oscillating Wing,” Advances in Science and Technology, vol. 58, pp. 25–30, Sep. 2008. [Online]. Available: https://www.scientific.net/AST.58.25
  • [12] M. Bozkurttas, J. Tangorra, G. Lauder, and R. Mittal, “Understanding the Hydrodynamics of Swimming: From Fish Fins to Flexible Propulsors for Autonomous Underwater Vehicles,” Advances in Science and Technology, vol. 58, pp. 193–202, Sep. 2008. [Online]. Available: https://www.scientific.net/AST.58.193
  • [13] I. Akhtar, R. Mittal, G. V. Lauder, and E. Drucker, “Hydrodynamics of a biologically inspired tandem flapping foil configuration,” Theoretical and Computational Fluid Dynamics, vol. 21, no. 3, pp. 155–170, Apr. 2007. [Online]. Available: http://link.springer.com/10.1007/s00162-007-0045-2
  • [14] A. Mignano, S. Kadapa, J. Tangorra, and G. Lauder, “Passing the Wake: Using Multiple Fins to Shape Forces for Swimming,” Biomimetics, vol. 4, no. 1, p. 23, Mar. 2019. [Online]. Available: https://www.mdpi.com/2313-7673/4/1/23
  • [15] R. Ramamurti, J. Geder, K. Viswanath, and M. Pruessner, “Computational Fluid Dynamics Study of the Propulsion Characteristics of Tandem Flapping Fins,” in 2018 AIAA Aerospace Sciences Meeting.   Kissimmee, Florida: American Institute of Aeronautics and Astronautics, Jan. 2018. [Online]. Available: https://arc.aiaa.org/doi/10.2514/6.2018-0039
  • [16] J. D. Geder, R. Ramamurti, K. Viswanath, and M. Pruessner, “Underwater thrust performance of tandem flapping fins: Effects of stroke phasing and fin spacing,” in MTS/IEEE OCEANS ’17 Anchorage.   Anchorage, Alaska: Institute of Electrical and Electronics Engineers, Sep. 2017, oCLC: 1020270128. [Online]. Available: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8190426
  • [17] R. Ramamurti, J. Geder, K. Viswanath, and M. Pruessner, “Propulsion Characteristics of Flapping Caudal Fins and its Upstream Interaction with Pectoral Fins,” in AIAA Scitech 2019 Forum.   San Diego, California: American Institute of Aeronautics and Astronautics, Jan. 2019. [Online]. Available: https://arc.aiaa.org/doi/10.2514/6.2019-1618
  • [18] J. D. Geder, R. Ramamurti, J. Palmisano, M. Pruessner, B. Ratna, and W. C. Sandberg, “Four-Fin Bio-Inspired UUV: Modeling and Control Solutions,” in Volume 2: Biomedical and Biotechnology Engineering; Nanoengineering for Medicine and Biology.   Denver, Colorado, USA: ASMEDC, Jan. 2011, pp. 799–808.
  • [19] L. E. Muscutt, G. D. Weymouth, and B. Ganapathisubramani, “Performance augmentation mechanism of in-line tandem flapping foils,” Journal of Fluid Mechanics, vol. 827, pp. 484–505, Sep. 2017.
  • [20] F. Fish, “Diversity, mechanics and performance of natural aquatic propulsors,” in WIT Transactions on State of the Art in Science and Engineering, 1st ed., R. Liebe, Ed.   WIT Press, Nov. 2006, vol. 1, pp. 57–87. [Online]. Available: http://library.witpress.com/viewpaper.asp?pcode=1845640012-201-1
  • [21] J. D. Geder, R. Ramamurti, K. Viswanath, M. Pruessner, and R. Koehler, “Effect of Flow Interaction between Median Paired and Caudal Fins on Propulsion,” in MTS/IEEE OCEANS Conference ’18, Charleston, SC, Oct. 2018.
  • [22] K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359–366, Jan. 1989.
  • [23] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  • [24] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in Neural Information Processing Systems, 2013, pp. 3111–3119.
  • [25] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, Feb. 2019.
  • [26] R. Tripathy and I. Bilionis, “Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification,” Journal of Computational Physics, vol. 375, pp. 565–588, Dec. 2018.
  • [27] Y. Zhu and N. Zabaras, “Bayesian Deep Convolutional Encoder-Decoder Networks for Surrogate Modeling and Uncertainty Quantification,” Journal of Computational Physics, vol. 366, pp. 415–447, Aug. 2018.
  • [28] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier Nonlinearities Improve Neural Network Acoustic Models,” in International Conference on Machine Learning, 2013, p. 6.
  • [29] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” Journal of Machine Learning Research, p. 30, 2014.
  • [30] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations, 2015.
  • [31] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Yangqing Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015.