1 Introduction
Modern massively parallel computational resources have revolutionized the ways in which engineers and scientists simulate and study complex physical phenomena. The capability of partial differential equation solvers has exploded, enabling the study of detailed three-dimensional systems with ever more complicated setups. Not only are simulations growing in size, but smaller problems are growing in quantity, with rapid access to data and near real-time performance enabling massive ensembles of low-fidelity simulations. However, the proliferation of high-performance simulation capabilities has outpaced the storage capabilities required to house the massive data output, creating a bottleneck for practitioners needing to gain insight from their simulations. Saving all analysis for a post-processing procedure becomes progressively more expensive and time intensive as the size and quantity of simulations increase. State-of-the-art visualization systems combat this data bloat by incorporating visualization and data extraction pipelines into simulation runtimes.
With in situ visualization systems in place, the next significant bottleneck for simulation practitioners is the setup and initialization process. Modifying and exploring simulation input parameters and their effects on simulation output is difficult and non-intuitive even with rapid in situ visualization systems, owing to the difficulty of setting up sophisticated massively parallel simulations. This problem is exacerbated when the geometry of the simulation domain is the variable parameter of interest. Typically, modifying the geometry of a finite-element domain requires expensive remeshing and repartitioning. Studying the dynamics of a system with respect to geometry is tedious at best and typically prohibitive. A powerful strategy to mitigate this growing difficulty is to develop new in situ simulation modification tools that enable practitioners to manipulate and explore their simulation's geometric parameter space without stopping and restarting the simulation.
To motivate this work, we turn to the field of cardiovascular systems research, where integrated simulation pipelines are beginning to show their immense utility. For example, the open-source software SimVascular [48] enables a robust pipeline for studying cardiovascular flows, and the growing community adoption of such tools [31, 35, 40, 49] highlights the need for tightly integrated simulation workflows. However, a lasting impact could be made here by tools that enable rapid visual iteration of complex cardiovascular flows. Figure 1 depicts a cardiovascular stenosis bypass. The physical lifespan of this implant is determined largely by the flow characteristics at the outlet of the bypass, which are deeply dependent on the geometry of the bypass itself. The bypass geometry is characterized by, among other considerations, the diameters of the inlet and outlet, the inlet angle with respect to the artery, the outlet angle, the total length of the bypass, and the locations of the inlet and outlet along the length of the artery. Generating a new mesh for every geometric consideration would be completely prohibitive for this problem. As such, we are motivated in this current work to demonstrate a new workflow and set of tools that enable a practitioner to modify and manipulate a simulation domain while the simulation is ongoing.

In this paper, we demonstrate a new workflow for steering simulations where the parameter of interest is the physical geometry defining the simulation domain. This workflow takes the form of a modular pipeline system interfacing with existing high-performance mesh data structures, and it is constructed to be agnostic of the simulation system in use. For our demonstration, we wrap our pipeline around the CFD program Parallel Hierarchic Adaptive Stabilized Transient Analysis (PHASTA) to demonstrate the geometry steering procedure with a well-established high-performance simulation system [54, 39, 20]. Within the pipeline, we employ a server-client architecture. The server manages volumetric mesh deformation and communication with the CFD solver.
User interaction takes place through a client application which passes data to ParaView, where custom plugins enable modification of the surface geometry. To maintain performance of the client application, the server extracts a surface mesh representation of the simulation domain's boundary and passes only that relatively small mesh to the client. By separating the system into two loops managed by distinct applications, we can employ high-performance computing resources to simulate complex systems while allowing the user to steer the simulation from the comfort of their own workstation. This workflow is the first of its kind to target large high-performance systems, enabling scientists and engineers to interact more deeply with and explore their ever-growing scientific simulations.
2 Background
Computational steering takes many forms and varies significantly in its level of interactivity across the spectrum of applications. The domain spans applications from monitoring tools with basic means of changing simulation parameters to fully immersive virtual reality environments with complex controls for real-time interaction with the simulated domain. The following sections provide context for the work presented in this article.
2.1 Notification and Monitoring Systems
One of the key design goals of computational steering systems is to be as minimally invasive to the simulation as possible. To minimally probe typical 'black box' simulations, efforts have been made to employ monitoring and notification systems which query data generated by simulation systems during runtime [41, 46, 38]. These monitoring systems can provide basic readouts and inform a user of undesirable simulation behavior such as solver divergence, but they do not enable live configuration of the simulation while it is ongoing.
2.2 Visual Steering
It is becoming common practice to render video of system solutions on the compute resource or a separate visualization resource. Usually, this is coupled with basic controls to modify solver parameters or simple boundary conditions to ensure the simulation is running according to plan. In the simplest case, an off switch is exposed so the user may kill a simulation if they see in the visualization environment that it is heading in the wrong direction. This system is typified by ParaView Catalyst [3]. Related systems have enabled in situ visualization, such as [22], which employed LibSim in the visualization software VisIt, and [28], which employed ADIOS, also in VisIt. Custom solutions also exist, such as for combustion dynamics [57].
2.3 Ensemble Steering
Expanding computational resources have also enabled practitioners to spin up many iterations of their problem at once in a process known as ensemble simulation. This technique allows the practitioner to quickly probe a parameter space, but it also generates increasingly massive amounts of data. As such, steering techniques for ensemble simulations are attractive. Ensemble steering is the practice of spinning up many simulations at a time, varying parameters across the ensemble, and then providing interactivity for a user to quickly glean insights about how the system solution depends on the parameter space being explored. This also provides the capability to spin up new simulations fast enough to be interactive [8], making the design space interactive rather than the simulations themselves. Visualizing the parameter space of an ensemble is a difficult problem, as complex parameter spaces can be high-dimensional and nonlinear, and exploring such a space requires complex tools for analyzing its cross-sections. Doing this interactively injects intuition into the problem [33]. Ensemble steering is a powerful tool for gaining insights from high-throughput, low-fidelity simulations, but for moderate- to high-fidelity simulations, this technique is prohibitive in terms of data storage capacity.
2.4 Steering with Parameters in PHASTA
Computational steering in the high-performance computational fluid dynamics software PHASTA has been explored in a limited sense. In [56], the goal was to modify simulation parameters such as solver coefficients and time step size while a simulation was running, informed by in situ visualization provided by ParaView. The ability to adjust simulation parameters on the fly allowed the authors to start up a large simulation without needing to know all the parameters up front, then avoid repeatedly restarting the simulation (and the associated long queue times on high-performance compute resources) by simply modifying parameters that led to better time step convergence. This capability is convenient for very large simulations requiring tens or hundreds of thousands of compute cores. However, this previous work targeted a single flow configuration, did not allow significant changes to the problem definition, and did not aim for a truly interactive experience. Further, the system lacks any capacity for modifying the geometric domain of the simulation.
2.5 Interactivity
Recently, multiple papers have been published on interactive fluid dynamics simulation which employ a compute resource to handle the fluid simulation while a user operates interactively from their workstation [26, 17, 16, 14, 51]. Most of the interactivity was attained through custom solutions. One group went so far as to use the video game engine Unity as their interactivity driver [51]. The choice of a video game engine gave them significantly more flexibility to develop custom rendering techniques; they cited ParaView as their foil, where custom rendering techniques are more difficult to implement. In [51], a communication layer was in place: a server ran a 3D lattice-Boltzmann simulation, and a client pinged the server for a volume fraction representing the solution variable for rendering in Unity. They were then able to 'manipulate' their simulation domain by employing Unity features for moving geometry around. The only significant example provided was moving a simple dam wall (represented by a rectangle) up and down. These interactive simulation environments are limited to low-order lattice-Boltzmann solvers because of their speed and efficient implementation on GPGPU systems. The approaches taken in the works cited in this section target real-time simulation, which necessitates low-fidelity simulations, fast solvers, and low-order methods, leaving much to be desired in the physical accuracy of the fluid state. In contrast, our interactive system targets large, high-fidelity simulations running on high-performance computing clusters where scientific analysis is the goal.
2.6 Existing Geometric Manipulation Schemes
Interacting with the simulation domain is a key component of the work presented in this paper. Domain modification takes two main forms in the literature: rigid body motion of objects, and deformation of the computational mesh.
2.6.1 Moving Colliders
Interactive simulations targeting realtime performance typically employ simple controls for moving around objects within a domain. These systems rely on particle methods where colliders can be moved without mapping or modifying a solution field defined over the domain [26, 17, 16, 14, 51]. This methodology has the advantage of being computationally inexpensive, and when coupled with inexpensive particle based fluid simulations, this can lead to intuitive and interesting user interaction with the simulated fluid domain.
2.6.2 Moving Mesh
Significant research efforts have been made to create immersive, interactive biomedical simulations for use as teaching tools and virtual surgical planning and training systems. These systems employ finite element soft-body simulations of biological tissues to model the tissue's response to user interaction. These interactions range from dynamically modifying state variables and simulation parameters [27] to complex interactive systems employing virtual reality headsets and interactive controllers that approximate the user's touch [32, 21]. In the context of surgical planning and training, these systems typically employ linear elasticity models for soft tissue, trading physical accuracy for computational performance.
3 Geometry Deformation Steering System
Our geometry deformation steering system is architected as a server-client system. The server application is responsible for managing the full computational domain and triggering the physical simulation to step in time. The client application runs on a user's workstation and is responsible for handling user interaction with the simulation domain. Communication between the server and the client is performed via a low-level transmission control protocol (TCP) connection directly between the two applications. Visualization and interaction take place in ParaView, where we have developed a suite of custom plugins to perform mesh deformation actions on the user's workstation.
The flow of information through the system is illustrated in Figure 2. First, the server application loads each part of the pre-partitioned mesh onto processes in the server's message passing interface (MPI) communicator. We employ the PUMI mesh infrastructure [19] to handle the mesh data. Then, the server extracts the surface mesh from the volume and gathers that surface mesh onto process zero. Next, the server waits for communication with a client application. Once a connection is established, the server sends the surface mesh to the client application. The client application then packages the surface mesh into a Visualization Toolkit (VTK) file [42], which is then loaded into ParaView. From ParaView, the user can manipulate and deform the mesh using a stack of custom mesh deformation plugins. Upon completing the deformation, ParaView exports the deformation field as a file, which the client application parses and then transmits to the server. Upon receiving the surface deformation field, the server linearly interpolates the surface deformation in order to apply the deformation of the computational volume over a series of steps. Linear interpolation is problematic in general [50], but for the well-behaved deformations we are targeting it poses no problems, and it is by far the cheapest option computationally. Now ready with a series of surface deformations, the server can sequence through the series of deformations while running the simulation for a prescribed number of time steps between deformations. Generally, the goal is to run the unsteady simulation to a steady-state configuration between deformation steps. Upon completing the full series of deformation steps, the server can accept a new deformation order from the client application, ensuring the server and client stay synced without transmitting mesh data unnecessarily.

For our demonstration, we use PHASTA [54] as a high-performance incompressible fluid dynamics simulation system. As PHASTA has been demonstrated to scale to very large computational resources, its capability is representative of the computational scale we wish to achieve.
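The interpolation step amounts to dividing the total surface displacement into equal increments that are applied one at a time. A minimal sketch in C++ (the function name and flattened-array layout are illustrative, not the server's actual API):

```cpp
#include <cstddef>
#include <vector>

// Split a total surface displacement field (flattened per-node components)
// into `nsteps` equal increments; applying the increment `nsteps` times
// reproduces the full displacement.
std::vector<double> deformationIncrement(const std::vector<double>& totalDisp,
                                         std::size_t nsteps) {
  std::vector<double> inc(totalDisp.size());
  for (std::size_t i = 0; i < totalDisp.size(); ++i)
    inc[i] = totalDisp[i] / static_cast<double>(nsteps);
  return inc;
}
```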
3.1 Surface Mesh Extraction
After loading the parallel partitioned mesh data, the server application extracts a surface mesh from the volume. The PUMI mesh infrastructure makes this a straightforward task of looping over every element face and adding faces to the surface mesh if they lie on the boundary of the volume mesh. We employ standard C++ library data structures for the surface mesh representation, as simple data structures ease the process of serializing the data and passing it between processes on the server and over the TCP connection between the server and client applications. During this surface extraction process, elements in the surface mesh are tagged by their corresponding geometric feature. These tags are generally dictated by features of the CAD model progenitor of the mesh. The feature tags are used by the interactive surface mesh deformation system to represent geometric handles which the user may interact with. In Figure 3, we see a sparse set of individual features called out, despite the surface being made up of 33,530 elements.
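Conceptually, a face lies on the boundary when it is adjacent to exactly one volume element. PUMI provides this adjacency query directly; the sketch below shows the underlying idea on a raw tetrahedral connectivity array (an illustration, not the PUMI API):

```cpp
#include <algorithm>
#include <array>
#include <map>
#include <vector>

using Face = std::array<int, 3>;

// A face (vertex triple) shared by two tetrahedra is interior; a face owned
// by exactly one tetrahedron lies on the boundary of the volume mesh.
std::vector<Face> extractBoundaryFaces(const std::vector<std::array<int, 4>>& tets) {
  std::map<Face, int> count;
  for (const auto& t : tets) {
    // the four faces of a tetrahedron
    const Face faces[4] = {{t[0], t[1], t[2]}, {t[0], t[1], t[3]},
                           {t[0], t[2], t[3]}, {t[1], t[2], t[3]}};
    for (Face f : faces) {
      std::sort(f.begin(), f.end()); // canonical vertex ordering as a map key
      ++count[f];
    }
  }
  std::vector<Face> boundary;
  for (const auto& [f, c] : count)
    if (c == 1) boundary.push_back(f);
  return boundary;
}
```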
3.2 Surface Mesh Deformation in ParaView
Upon receiving the surface mesh, the client application immediately passes the mesh onward to ParaView. In ParaView, we leverage the custom plugin system to provide a suite of geometry modifications which can be stacked to build up a desired mesh deformation. These plugins utilize a biharmonic deformation field algorithm based on Laplacian surface editing [43] and differ only in the particular deformation applied.
Computing a biharmonic deformation field over the triangulated surface requires first constructing the discrete Laplace-Beltrami operator $L$. To do so, we employ a cotangent formula which takes the form

$$L_{ij} = \begin{cases} \frac{1}{2}\left(\cot\alpha_{ij} + \cot\beta_{ij}\right) & j \in N(i), \\ -\sum_{k \in N(i)} L_{ik} & j = i, \\ 0 & \text{otherwise}, \end{cases} \quad (1)$$

where $N(i)$ is the list of vertices in the mesh which are connected via a single edge to vertex $i$, and the terms $\alpha_{ij}$ and $\beta_{ij}$ are the angles within the two triangles opposite to the edge $(i, j)$.
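The per-edge cotangent weights above are computed from vertex coordinates. Below is a small C++ sketch (a hypothetical helper, not the plugin code) computing the cotangent of a triangle's interior angle, i.e. the angle opposite a given edge:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Cotangent of the interior angle at vertex `a` of triangle (a, b, c),
// i.e. the angle opposite edge (b, c). cot = (u . v) / |u x v|.
// Assumes a non-degenerate triangle (|u x v| > 0).
double cotAngleAt(const Vec3& a, const Vec3& b, const Vec3& c) {
  Vec3 u{b[0] - a[0], b[1] - a[1], b[2] - a[2]};
  Vec3 v{c[0] - a[0], c[1] - a[1], c[2] - a[2]};
  double dot = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
  // |u x v| equals twice the triangle's area
  Vec3 cr{u[1] * v[2] - u[2] * v[1],
          u[2] * v[0] - u[0] * v[2],
          u[0] * v[1] - u[1] * v[0]};
  double norm = std::sqrt(cr[0] * cr[0] + cr[1] * cr[1] + cr[2] * cr[2]);
  return dot / norm;
}
```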
Next we require a mass matrix $M$. We opt for a diagonal mass matrix utilizing the Voronoi area of each node as the diagonal entry of the matrix at row $i$ [34]:

$$M_{ii} = \mathrm{Area}_{\mathrm{Voronoi}}(x_i), \quad (2)$$

where $x_i$ is the location of node $i$ in physical space.
With these matrices, we can produce the biharmonic operator

$$K = L M^{-1} L. \quad (3)$$

We can then solve for the biharmonic deformation field $d$ by solving the matrix system

$$K d = 0. \quad (4)$$
Here, we modify the left-hand side to account for boundary conditions generated by the user-supplied feature selection and modification action. These plugins use Eigen [15] sparse linear algebra solvers to compute the biharmonic deformation field.

We use a biharmonic field here in order to maintain tangency constraints at the boundary of fixed and moved features. Each plugin works by allowing the user to select movable handle features and fixed features. Features on a mesh are specified by a scalar field in the VTK data labeled 'features.' These features are either assigned from the initial CAD description of the geometry used to generate the mesh, as described previously, or can be assigned automatically via feature detection. Our suite includes a ParaView plugin which implements the CGAL shape detection system [37] for automatically assigning features based on sharp edges in the mesh.
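The left-hand-side modification for the Dirichlet constraints amounts to replacing each constrained row with an identity row and placing the prescribed displacement in the right-hand side. A minimal dense sketch of this technique (the plugins perform the analogous operation on Eigen sparse matrices):

```cpp
#include <vector>

// Impose a Dirichlet constraint u[i] = value on a dense system A u = b by
// zeroing row i, setting the diagonal to one, and writing the prescribed
// value into the right-hand side.
void applyDirichletRow(std::vector<std::vector<double>>& A,
                       std::vector<double>& b, int i, double value) {
  for (double& a : A[i]) a = 0.0;
  A[i][i] = 1.0;
  b[i] = value;
}
```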
Currently, we have two mesh deformation actions implemented in this ParaView plugin series. First, a feature translation plugin allows the user to perform arbitrary translation of a feature. Figure 4 depicts the end of a pipe translated upward while the back half of the pipe is held fixed, demonstrating how this tool can be used to explore the routing of pipes and ducts, or of more organic geometries such as segments of cardiovascular networks. Second, a feature scaling plugin enables scaling of a feature. Figure 5 depicts a scale deformation operating on a single feature on the lower face of the duct. In this example, the top faces of the duct and both ends are held fixed, and the biharmonic deformation field is computed over the unprescribed faces. The deformation field computed by a plugin's operation is passed through the plugin so that operations may be stacked. Upon finishing the desired deformation procedure, the user can activate an export plugin to pass the computed displacement field from ParaView back to the client application by writing to a plain-text file. The advantage of this method is that the prescribed displacement field is saved and can be loaded for subsequent simulations without needing to repeat interactions with ParaView.
3.3 Volume Mesh Deformation
A vibrant ecosystem of volume mesh deformation techniques pervades the literature. Typically, these techniques fall into one of several camps: physically inspired methods based on material deformation [12, 47, 44, 45], more exotic methods based on intrinsic discrete differential properties of the mesh [52, 18], and methods based on interpolation schemes [10, 29]. Nonlinear deformation methods such as [12] typically provide superior deformation quality, especially compared to linear elasticity and interpolation methods, though at a significant computational overhead for the requisite nonlinear equation solve.
For large deformations, it is often advantageous to adapt the mesh via local refinement or coarsening along with an expensive interpolation procedure which maps solution fields from the old mesh to the adapted mesh [1, 4, 55]. However, for large meshes, this becomes a significant computational cost, and as such, we avoid this process in this work.
In this work, we implement two linear methods for volume deformation: a linear elasticity method with Jacobian-based stiffening, detailed in Section 3.3.1, and a method based on discrete harmonic maps, detailed in Section 3.3.2. A comparison of the two methods is given in Section 3.3.3. These methods were selected primarily for their low computational cost; we desire this workflow to be significantly cheaper computationally than the targeted fluid dynamics simulation in order to be unobtrusive to the user. In this workflow, upon receiving the displacement field of the surface, the server resource computes the deformation of the fluid domain volume using the surface displacement field as a boundary condition. In each case, we employ the high-performance linear algebra library PETSc to solve the resulting matrix systems [5, 6]. For both volume deformation strategies, the system is assembled and solved in parallel, utilizing the same mesh partitioning used by the CFD solver.
3.3.1 Volume Deformation via Linear Elasticity
First, we employ a linear elasticity based deformation scheme with Jacobian-based stiffening [47, 44, 45], where element stiffness is controlled by the Jacobian determinant of the element. This has the effect of making smaller elements stiffer, thus reducing the warping of small elements. This generally leads to better deformation of boundary layer meshes, where very small elements with large aspect ratios are used to resolve near-wall fluid structures. We start formulating this method by defining our domain $\Omega$ with boundary $\Gamma$. We define the Cauchy stress tensor $\sigma$ and strain tensor $\epsilon$ in terms of the displacement field $u$. The strain tensor is defined as

$$\epsilon(u) = \frac{1}{2}\left(\nabla u + (\nabla u)^T\right), \quad (5)$$

and the Cauchy stress tensor is defined as

$$\sigma(u) = 2\mu\,\epsilon(u) + \lambda\,\mathrm{tr}(\epsilon(u))\,I, \quad (6)$$

where $\lambda$ and $\mu$ are Lamé parameters, $\mathrm{tr}$ is the trace operator, and $I$ is the identity tensor.
From here, we can state the linear elastic equation governing the deformation of the interior of $\Omega$. We simplify the classical statement by omitting body forces and Neumann boundary conditions, as our mesh deformation scheme fully determines every boundary node via Dirichlet conditions. As such, our governing equation states: find $u$ on $\Omega$ such that

$$\nabla \cdot \sigma(u) = 0 \text{ on } \Omega,$$
$$u = g \text{ on } \Gamma,$$

where $g$ is the prescribed Dirichlet boundary condition data. To solve this partial differential equation, we turn to Galerkin's method and define a pair of finite element spaces over which our solution will be cast:

$$S^h = \{u^h \in [H^1(\Omega)]^3 \mid u^h|_{\Omega^e} \in P^1(\Omega^e),\; u^h|_\Gamma = g\},$$
$$V^h = \{w^h \in [H^1(\Omega)]^3 \mid w^h|_{\Omega^e} \in P^1(\Omega^e),\; w^h|_\Gamma = 0\},$$

where $P^1(\Omega^e)$ is the finite element space of linear polynomials defined over tetrahedral element $\Omega^e$. Armed with our finite element spaces, we can pose the weak formulation of the linear elasticity problem as: find $u^h \in S^h$ such that
$$\int_\Omega \nabla w^h : \sigma(u^h)\, d\Omega = 0 \quad \forall\, w^h \in V^h. \quad (7)$$
This integral is carried out via a pull-back to the parent element on each element and a summation over all elements in the domain as

$$\int_\Omega \nabla w^h : \sigma(u^h)\, d\Omega = \sum_e \int_{\hat{\Omega}} \nabla w^h : \sigma(u^h)\, \det(J^e)\, d\hat{\Omega}, \quad (8)$$

where $\det(J^e)$ is the Jacobian determinant of the mapping from the parent element $\hat{\Omega}$ to physical element $e$. We can exploit this mapping via a process known as Jacobian-based stiffening, scaling the pull-back by a factor of $\det(J^e)^{-\chi}$ for a stiffening exponent $\chi > 0$. This has the effect of stiffening smaller elements more than larger elements. For our work, $\chi$ is held at a fixed value.
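The effect of the stiffening can be sketched numerically: the element integral's pull-back factor $\det(J)$ is multiplied by $\det(J)^{-\chi}$, so elements with small Jacobians contribute proportionally stiffer terms. The exponent values below are illustrative only, not the paper's setting:

```cpp
#include <cmath>

// Jacobian-based stiffening: scale the pull-back factor det(J) by
// det(J)^(-chi). Small elements (small det(J)) then receive relatively
// larger stiffness per unit volume than large elements.
double stiffenedPullbackWeight(double detJ, double chi) {
  return detJ * std::pow(detJ, -chi);
}
```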
3.3.2 Volume Deformation via Harmonic Maps
Second, we implement a method of generating harmonic maps between a mesh and a deformed mesh. This method has its origin in volumetric feature identification and registration [52]. To formulate this method, we follow the derivation of [9] toward a mimetic definition of the Laplacian operator. We start with our tetrahedral complex $M$ with vertex set $V$ and edge set $E$. We can define a discrete Laplacian operator for an edge $(i, j) \in E$ as

$$L_{ij} = \frac{1}{6} \sum_{T \ni (i,j)} \ell_{kl}(T)\, \cot \theta_{kl}(T). \quad (9)$$

In Equation (9), the sum is taken over all tetrahedra $T$ whose vertices contain edge $(i, j)$. The term $\ell_{kl}(T)$ is the length of the edge $(k, l)$ of $T$ opposite to $(i, j)$, and $\theta_{kl}(T)$ is the dihedral angle at edge $(k, l)$ of the tetrahedron $T$.
With our Laplacian operator defined, we can generate harmonic maps $h : M \to M'$, where $M'$ is a deformed mesh. As in [52], we define $h$ as a piecewise harmonic function whose components are decoupled: $h = (h_x, h_y, h_z)$. Using the displacement field received from the client, we write three vectors $b_x$, $b_y$, and $b_z$ encoding the displacements of the boundary nodes. Now, we can set up the matrix system

$$L\, h_d = b_d, \quad d \in \{x, y, z\}. \quad (10)$$

Here, we modify the system to account for Dirichlet boundary conditions. In order to solve Equation (10), we employ the PETSc GMRES iterative solver as a powerful choice for solving these three distributed matrix equations. Upon coming to a solution, the vectors $h_x$, $h_y$, and $h_z$ represent the $x$, $y$, and $z$ displacements of mesh $M$ to $M'$.
3.3.3 Comparison
For comparison of these two volume deformation techniques, we perform the same deformation to the channel geometry shown in Figure 7, once using the Jacobian-scaled linear elasticity technique and again using the harmonic map technique. The channel mesh consists of 256,961 tetrahedral elements, partitioned into four parts. Figure 6 shows a cross-section of the deformed mesh at a location 0.05 m from the center of the channel. Both techniques produce acceptable deformation of this mesh. Typically, the harmonic map technique will be computationally cheaper for simple deformation operations which deform the mesh in one cardinal direction, as the solver need not compute deformation in the directions with no deformation. The linear elasticity solver is designed to be robust for deformations of anisotropic meshes, and thus will outperform the harmonic map technique when deforming meshes with high-aspect-ratio boundary layer elements.
3.4 Communication Protocol
Communicating data between the server and the client is a critical component in ensuring smooth synchronous operation of the interaction loop and the simulation loop. To serve this workflow, we developed a network layer API built on low-level TCP networking protocols. This API is constructed for communication between a single server and a single client application, and was developed specifically for Unix-based operating systems. For reference, this system targeted a client application running on macOS with the server application running on Linux.
We utilize C++ standard library vector data structures to package data for transfer between the client and server applications. This choice ensures portability of the API by minimizing external dependencies. Also, via templating, this API can transmit any set of serialized data stored in standard vector data structures.
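A templated serializer of this kind can be sketched as a length-prefixed byte buffer, which is the framing a TCP stream needs to delimit messages. This is an illustration under assumptions; the API's actual wire format may differ:

```cpp
#include <cstdint>
#include <cstring>
#include <type_traits>
#include <vector>

// Serialize a vector of trivially copyable elements into a byte buffer
// headed by a 64-bit element count ("length then payload" framing).
template <typename T>
std::vector<std::uint8_t> serialize(const std::vector<T>& data) {
  static_assert(std::is_trivially_copyable_v<T>, "raw memcpy requires POD-like T");
  std::uint64_t n = data.size();
  std::vector<std::uint8_t> buf(sizeof(n) + n * sizeof(T));
  std::memcpy(buf.data(), &n, sizeof(n));
  std::memcpy(buf.data() + sizeof(n), data.data(), n * sizeof(T));
  return buf;
}

// Inverse operation: read the element count, then copy out the payload.
template <typename T>
std::vector<T> deserialize(const std::vector<std::uint8_t>& buf) {
  std::uint64_t n = 0;
  std::memcpy(&n, buf.data(), sizeof(n));
  std::vector<T> data(n);
  std::memcpy(data.data(), buf.data() + sizeof(n), n * sizeof(T));
  return data;
}
```

Note that this assumes both endpoints share endianness and type sizes, which holds for the x86-64 Linux/macOS pairing described above.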
In order to reduce the complexity of the client application's communication management, the communication API is designed to send and receive data from a single process. As such, the distributed server application must gather all data set for transmission to the client onto a single process, which then communicates data to and from the client. This gather operation need only occur once at the beginning of the workflow, as the surface mesh topology will not change throughout the lifetime of the simulation. The communicator process must scatter data received from the client back to the rest of the MPI communicator any time the client sends a message. As this happens infrequently compared to the total computation on the server, it adds little internal overhead to the server application. Currently, the communicator process on the server is the zeroth process in the global communicator used by the simulation. This system will scale well to fairly large problems, as message data size is only on the order of the size of the surface mesh; for extreme-scale problems, data reduction techniques such as surface mesh decimation and simplification [24, 25, 13] may need to be employed. In future iterations of this system, we would like to separate the server's communication process from the simulation processes as a separate MPI communicator so that the server may operate in a truly asynchronous fashion.
3.5 Visualization System
ParaView was chosen for the surface mesh deformation system precisely for its robust visualization infrastructure. ParaView already has a tight link with PHASTA as its preferred visualization software for high-performance CFD computations, and we leverage that relationship in this work. As our system time-evolves the PHASTA flow simulation, visualization data is written out at a user-specified cadence. This can then be readily loaded into ParaView for visualization while the simulation is ongoing. For the following demonstration, the problem size was small enough to permit writing many time steps' worth of data to file for visualization post-simulation. However, with the goal of targeting significantly larger simulations, the pairing of PHASTA and ParaView Catalyst has already demonstrated in situ visualization of simulations on millions of MPI processes [39], where visualization data is streamed to the user in a condensed fashion, immediately reducing the data bloat associated with large-scale simulations. In the ParaView Catalyst workflow, the user may link their workstation's instance of ParaView to the Catalyst system so that visualization of the simulation occurs in the same application as user interaction. This streamlined workflow then becomes an immersive computational steering system.
3.6 Deformation Scheduling
Here we note that the deformation of the geometry is not a physical phenomenon related to processes of the ongoing simulation. The mesh deformation is a non-physical process and is thus likely to garner a response from the fluid simulation which is non-physical. As the fluid simulation is a transient process, a short non-physical transient response should relax to a physically realizable flow state. This fluid simulation behavior is common, for instance, when starting a simulation from a zero-flow initial condition: the discontinuity between the initial condition within the volume of the fluid and a nonzero inlet velocity can produce non-physical transient behavior. With that preface, the question remains: how should a sequence of deformations making up a significant geometric modification be scheduled with respect to the fluid simulation? Taking larger deformations per step, one should expect a lengthier non-physical transient in the fluid simulation, while smaller deformations will perturb the fluid flow to a smaller degree. Our system allows for any user-defined deformation scheduling strategy. Section 4 demonstrates three such strategies.
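A deformation schedule can be represented as a list of (cumulative deformation fraction, solver time steps to run afterwards) pairs. The sketch below builds a uniform schedule; the helper is hypothetical and only illustrates the user-defined scheduling hook, where a 1-step instantaneous deformation corresponds to `makeSchedule(1, ...)`:

```cpp
#include <utility>
#include <vector>

// Build a uniform deformation schedule: `nsteps` equal fractions of the
// total surface displacement, each followed by `stepsBetween` solver
// time steps to let the non-physical transient relax.
std::vector<std::pair<double, int>> makeSchedule(int nsteps, int stepsBetween) {
  std::vector<std::pair<double, int>> schedule;
  for (int i = 1; i <= nsteps; ++i)
    schedule.push_back({static_cast<double>(i) / nsteps, stepsBetween});
  return schedule;
}
```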
4 Demonstration: Flow in a Constricting Nozzle
To demonstrate our workflow, we are interested in how controlling the geometry of a diffuser can control or mitigate flow separation at the outlet. Flow separation is a common problem in many engineering disciplines, and the mitigation of separation is an active area of research across fluids-centric engineering disciplines [30, 2, 53]. Nozzles are common flow control devices with a wide array of applications. In the design of divergent nozzles (those with wider outlets than inlets), it is critical to avoid separation of the flow from the walls of the nozzle, as the resulting vortices sap energy from the flow and lead to poor overall performance. For this demonstration, the simulation and server application operated on a small compute node running the Debian 3.16 operating system with 40 total cores over dual-socket Intel Xeon E5-2650 CPUs and 256 GB of available RAM. The client application and ParaView operated on a 2018 Apple MacBook Pro running macOS 10.15 with a quad-core Intel i5 CPU and 8 GB of RAM.
4.1 Setting Up the Domain
The initial flow configuration for this demonstration is illustrated in Figure 7. The domain is set up to be a rectilinear channel m. Incompressible flow is driven by an inlet velocity at the m face of m/s, and an outflow condition is set at the m face with outflow pressure Pa. All other faces are set as no-slip, no-passthrough walls. This flow is driven to steady state before any deformation is applied. The domain is meshed isotropically into 256,961 linear tetrahedral elements. Boundary layer meshing was not employed, as the near-wall behavior of the fluid flow was not of interest, and the extra resolution would have been an excessive computational expense. A fixed time step size of 0.001 s was chosen. An initial condition of was applied.
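As a sanity check on step-size choices like the fixed 0.001 s used here, one can estimate the advective CFL number. The velocity and element-size values below are placeholders for illustration only, since the demonstration's exact values appear in the setup above:

```python
def cfl_number(velocity, dt, h):
    """Advective CFL number u * dt / h for flow speed u, time step dt,
    and characteristic element size h (all in consistent units)."""
    return velocity * dt / h

# Hypothetical values: 1 m/s characteristic speed, the 0.001 s step used
# in this demonstration, and a 5 mm isotropic element size.
cfl = cfl_number(velocity=1.0, dt=0.001, h=0.005)
```

A CFL number near or below one suggests the fixed step resolves advection through a single element; implicit solvers such as PHASTA's tolerate larger values, but the estimate remains a useful check.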
4.2 Steering the Computation
Once the surface mesh is presented in the client application, the user reduces the throat region of the channel to 50% of its thickness. We rely on the biharmonic surface deformation solver to produce smooth transitions from the wide region of the channel to the narrow region. This deformation field is then immediately passed back to the server. We investigate three deformation schedules:

1. a 1-step instantaneous deformation at fluid simulation time s,
2. a 1-step instantaneous deformation at fluid simulation time s,
3. a 10-step deformation at fluid simulation time s, with s simulated between steps.
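The schedules above can be expressed as simple plans: each entry pairs a boundary-displacement increment with a number of fluid solver time steps to simulate before the next increment. The sketch below is a hypothetical illustration, not our server's actual API; the displacement vector and `settle_steps` values are placeholders:

```python
def incremental_schedule(total_disp, n_steps, settle_steps):
    """Split a full boundary displacement into n_steps equal increments,
    each followed by settle_steps fluid solver time steps before the
    next increment is applied. Returns a list of
    (increment, solver_steps_after) pairs."""
    inc = tuple(d / n_steps for d in total_disp)
    return [(inc, settle_steps) for _ in range(n_steps)]

# Schedules 1 and 2: a single instantaneous deformation; the two differ
# only in the simulation time at which the deformation is triggered.
one_shot = incremental_schedule((0.0, -0.5, 0.0), n_steps=1, settle_steps=0)

# Schedule 3: the same total deformation eased over ten increments, with
# solver steps simulated between consecutive increments.
ten_step = incremental_schedule((0.0, -0.5, 0.0), n_steps=10, settle_steps=10)
```

Because the increments sum to the full displacement, every schedule reaches the same final geometry; they differ only in how abruptly the fluid is perturbed along the way.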
We selected the Jacobian-scaled linear elasticity volume deformation strategy for this demonstration, as we found that it deformed the near-wall region of the nozzle more desirably.
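The idea behind Jacobian scaling can be sketched as weighting each element's elastic stiffness by the inverse of its volume (a common proxy for the Jacobian determinant), so that small near-wall elements deform nearly rigidly while large interior elements absorb most of the motion. This is an illustrative sketch, not our solver's exact implementation; `E0` and `chi` are assumed parameters:

```python
def tet_volume(p0, p1, p2, p3):
    """Signed volume of a linear tetrahedron: det([p1-p0, p2-p0, p3-p0]) / 6."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
         - a[1] * (b[0] * c[2] - b[2] * c[0])
         + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det / 6.0

def jacobian_scaled_stiffness(elements, coords, E0=1.0, chi=1.0):
    """Per-element Young's modulus scaled by inverse element volume.
    Smaller elements receive larger stiffness and so resist distortion
    during the volume deformation solve."""
    stiffness = {}
    for e, conn in enumerate(elements):
        vol = abs(tet_volume(*(coords[n] for n in conn)))
        stiffness[e] = E0 * (1.0 / vol) ** chi
    return stiffness
```

The exponent `chi` controls how aggressively small elements are protected; `chi = 1` recovers plain inverse-volume weighting, in the spirit of the solid-extension mesh moving techniques of Stein et al. [44, 45].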
4.3 Results
As a basis for comparison, we first investigate how long the flow takes to reach steady state from a zero initial condition on the fully deformed mesh. In Figure 8, we note that the fluid behavior upstream of the nozzle rapidly attains the expected physical behavior. Downstream of the nozzle, due to the flow separation induced by the nozzle geometry, the dynamics exhibit unsteady swirling and mixing, as shown in Figure 9.
As shown in Figure 10, under schedules two and three, no spurious or unphysical behavior is observed upstream of the nozzle within 0.01 s of the deformation being completed. This suggests that, for this particular flow problem, the CFD solver is robust to rapid modification of the domain.
5 Conclusions
In this work, we designed a new computational steering workflow for leveraging high performance simulations to explore how geometry may affect the simulated dynamics of incompressible fluid flow. A client-server architecture was employed to provide accessible user interaction on a user’s workstation while also leveraging high performance computing resources, enabling exploration of detailed, highly resolved incompressible fluid dynamics simulations. A communication layer was developed to isolate the user-interaction loop occurring on the user’s workstation from the simulation loop occurring on a high performance compute resource. Custom interactivity plugins were written using ParaView’s plugin system to transform the visualization software into a computational geometry steering interface, tethering visualization and user interaction to a single platform.
The high performance CFD application PHASTA was utilized as the incompressible fluid flow simulation system in our workflow. PHASTA was chosen for its well-established track record as a scalable tool targeting large computing resources, coupled with its history of in situ visualization and computational steering. We demonstrated our workflow with PHASTA on a walled channel which the user can pinch into a converging-diverging nozzle. The deformation of the domain was transferred into the PHASTA runtime, which transitioned the flow from attached channel flow to detached, separated flow. The capacity to change the observed dynamics of a simulation in situ is a powerful tool for exploring the design space of a given system.
Based on the constricting nozzle demonstration, a rapid domain modification schedule is appropriate in some contexts. This result is encouraging for computational steering, where rapid response to user input is a desirable feature. However, for more complex flow scenarios, a schedule which eases the deformation over many simulation time steps could benefit numerical stability.
6 Future Work
Moving forward, several paths exist for extending this work. First, projecting simulation solutions onto the updated mesh is currently done using a naive approach. We desire to utilize an arbitrary Lagrangian-Eulerian (ALE) formulation [36, 11, 7, 23], as popularized in the fluid-structure interaction literature. Moving to this formulation, though intrusive to the simulation code (requiring modification of the actual fluid dynamics solver), would greatly relax our minimum time step restriction and dramatically reduce the non-physical transient in the flow solution post-deformation, which will likely be important for very complex flow scenarios. Second, modifying the mesh sequentially in time requires a linear system solve at every time step in which a deformation occurs. Though we mitigated the computational cost of this solve by selecting linear methods, removing the need for a linear system solve, or reducing it to a single solve per user interaction, would be a boon to computational efficiency. Third, the surface mesh modification tool is currently limited to simple feature translation and scaling. We desire to extend this to more complex modification tools, such as dragging features along user-defined paths or rotating features about arbitrary points in space. Finally, before making this system openly available to the community, we intend to incorporate some existing steering capabilities, such as in situ solver parameter modification.
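As a concrete stand-in for such a naive projection, a nearest-neighbor copy illustrates the kind of scheme involved. This is an assumption for illustration, not necessarily the exact method in our code, and it offers no conservation or accuracy guarantees:

```python
def transfer_solution(old_coords, old_values, new_coords):
    """Naive solution projection: copy the value of the nearest node of
    the old mesh onto each node of the updated mesh. Simple and robust,
    but neither conservative nor interpolatory, which motivates the ALE
    formulation discussed above."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    out = []
    for p in new_coords:
        j = min(range(len(old_coords)), key=lambda k: dist2(p, old_coords[k]))
        out.append(old_values[j])
    return out
```

A brute-force nearest-neighbor search is O(n_old * n_new); a production version would use a spatial search structure, but the transfer semantics are the same.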
7 Acknowledgements
The authors would like to thank Kenneth Jansen for useful conversations regarding the topic of this paper and vital assistance with PHASTA. The authors would also like to thank Cameron Smith for his continued assistance with SCOREC tools. This material is based upon work supported by the National Science Foundation under Grant Number 1740330.
References
 [1] F. Alauzet and A. Loseille. A decade of progress on anisotropic mesh adaptation for computational fluid dynamics. Computer-Aided Design, 72:13–39, 2016.
 [2] M. Amitay. Synthetic jets and their applications for fluid/thermal systems. In IUTAM Symposium on Flow Control and MEMS, pp. 77–93. Springer, 2008.
 [3] U. Ayachit, A. Bauer, B. Geveci, P. O’Leary, K. Moreland, N. Fabian, and J. Mauldin. ParaView Catalyst: Enabling in situ data analysis and visualization. In Proceedings of the First Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, pp. 25–29, 2015.
 [4] T. J. Baker. Mesh movement and metamorphosis. Engineering with Computers, 18(3):188–198, 2002.
 [5] S. Balay, S. Abhyankar, M. F. Adams, J. Brown, P. Brune, K. Buschelman, L. Dalcin, V. Eijkhout, W. D. Gropp, D. Karpeyev, D. Kaushik, M. G. Knepley, D. A. May, L. C. McInnes, R. T. Mills, T. Munson, K. Rupp, P. Sanan, B. F. Smith, S. Zampini, H. Zhang, and H. Zhang. PETSc users manual. Technical Report ANL-95/11, Revision 3.11, Argonne National Laboratory, 2019.
 [6] S. Balay, W. D. Gropp, L. C. McInnes, and B. F. Smith. Efficient management of parallelism in object oriented numerical software libraries. In E. Arge, A. M. Bruaset, and H. P. Langtangen, eds., Modern Software Tools in Scientific Computing, pp. 163–202. Birkhäuser Press, 1997.
 [7] Y. Bazilevs, V. M. Calo, T. J. Hughes, and Y. Zhang. Isogeometric fluid-structure interaction: theory, algorithms, and computations. Computational mechanics, 43(1):3–37, 2008.

 [8] B. Bush, N. Brunhart-Lupo, B. Bugbee, V. Krishnan, K. Potter, and K. Gruchalla. Coupling visualization, simulation, and deep learning for ensemble steering of complex energy models. In 2017 IEEE Workshop on Data Systems for Interactive Analysis (DSIA), pp. 1–5. IEEE, 2017.
 [9] K. Crane. The n-dimensional cotangent formula. Online note. URL: https://www.cs.cmu.edu/~kmcrane/Projects/Other/nDCotanFormula.pdf, p. 11, 2019.
 [10] A. De Boer, M. Van der Schoot, and H. Bijl. Mesh deformation based on radial basis function interpolation. Computers & structures, 85(11–14):784–795, 2007.
 [11] J. Donea, S. Giuliani, and J.-P. Halleux. An arbitrary Lagrangian-Eulerian finite element method for transient dynamic fluid-structure interactions. Computer methods in applied mechanics and engineering, 33(1–3):689–723, 1982.
 [12] B. Froehle and P.-O. Persson. Nonlinear elasticity for mesh deformation with high-order discontinuous Galerkin methods for the Navier-Stokes equations on deforming domains. In Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2014, pp. 73–85. Springer, 2015.
 [13] M. Garland and P. S. Heckbert. Surface simplification using quadric error metrics. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pp. 209–216, 1997.
 [14] J. H. Göbbert, T. Kreuzer, A. Grosch, A. Lintermann, and M. Riedel. Enabling interactive supercomputing at jsc lessons learned. In International Conference on High Performance Computing, pp. 669–677. Springer, 2018.
 [15] G. Guennebaud, B. Jacob, et al. Eigen v3. http://eigen.tuxfamily.org, 2010.
 [16] A. R. Harwood. GPU-powered, interactive flow simulation on a peer-to-peer group of mobile devices. Advances in Engineering Software, 133:39–51, 2019.
 [17] A. R. Harwood, P. Wenisch, and A. J. Revell. A real-time modelling and simulation platform for virtual engineering design and analysis. In Proceedings of 6th European Conference on Computational Mechanics (ECCM 6) and 7th European Conference on Computational Fluid Dynamics (ECFD 7), 11–15 June 2018, Glasgow, UK. ECCOMAS, 2018.
 [18] B. T. Helenbrook. Mesh deformation using the biharmonic operator. International journal for numerical methods in engineering, 56(7):1007–1021, 2003.
 [19] D. A. Ibanez, E. S. Seol, C. W. Smith, and M. S. Shephard. Pumi: Parallel unstructured mesh infrastructure. ACM Transactions on Mathematical Software (TOMS), 42(3):1–28, 2016.
 [20] K. E. Jansen, M. Rasquin, J. A. Evans, R. Balakrishnan, T. J. Williams, J. Brown, C. Carothers, M. S. Shephard, O. Sahni, C. W. Smith, et al. Extreme scale unstructured adaptive cfd: From multiphase flow to aerodynamic flow control. Technical report, Argonne National Lab.(ANL), Argonne, IL (United States), 2017.
 [21] K. Jayasudha and M. G. Kabadi. Soft tissues deformation and removal simulation modelling for virtual surgery. International Journal of Intelligence and Sustainable Computing, 1(1):83–100, 2020.
 [22] T. Kuhlen, R. Pajarola, and K. Zhou. Parallel in situ coupling of simulation with a fully featured visualization system. In Proceedings of the 11th Eurographics Conference on Parallel Graphics and Visualization (EGPGV), vol. 10, pp. 101–109. Eurographics Association AirelaVille, Switzerland, 2011.
 [23] M. Lesoinne and C. Farhat. Geometric conservation laws for flow problems with moving boundaries and deformable meshes, and their impact on aeroelastic computations. Computer methods in applied mechanics and engineering, 134(1–2):71–90, 1996.
 [24] P. Lindstrom and G. Turk. Fast and memory efficient polygonal simplification. In Proceedings Visualization’98 (Cat. No. 98CB36276), pp. 279–286. IEEE, 1998.
 [25] P. Lindstrom and G. Turk. Evaluation of memoryless simplification. IEEE Transactions on Visualization and Computer Graphics, 5(2):98–115, 1999.
 [26] J. Linxweiler, M. Krafczyk, and J. Tölke. Highly interactive computational steering for coupled 3D flow problems utilizing multiple GPUs. Computing and visualization in science, 13(7):299–314, 2010.
 [27] J. E. Lloyd, I. Stavness, and S. Fels. Artisynth: A fast interactive biomechanical modeling toolkit combining multibody and finite element simulation. In Soft tissue biomechanical modeling for computer assisted surgery, pp. 355–394. Springer, 2012.
 [28] J. F. Lofstead, S. Klasky, K. Schwan, N. Podhorszki, and C. Jin. Flexible io and integration for scientific codes through the adaptable io system (adios). In Proceedings of the 6th international workshop on Challenges of large applications in distributed environments, pp. 15–24, 2008.
 [29] E. Luke, E. Collins, and E. Blades. A fast mesh deformation method using explicit interpolation. Journal of Computational Physics, 231(2):586–601, 2012.
 [30] V. Maldonado, J. Farnsworth, W. Gressick, and M. Amitay. Active control of flow separation and structural vibrations of wind turbine blades. Wind Energy: An International Journal for Progress and Applications in Wind Power Conversion Technology, 13(2–3):221–237, 2010.
 [31] A. L. Marsden, I. E. Vignon-Clementel, F. P. Chan, J. A. Feinstein, and C. A. Taylor. Effects of exercise and respiration on hemodynamic efficiency in cfd simulations of the total cavopulmonary connection. Annals of biomedical engineering, 35(2):250–263, 2007.
 [32] P. Martins, S. Pinto, and A. André. Interactive demonstration of medical simulations using a virtual reality approach: Application to the male urinary system. In 2019 5th Experiment International Conference (exp. at’19), pp. 251–252. IEEE, 2019.
 [33] K. Matković, D. Gračanin, M. Jelović, and H. Hauser. Interactive visual analysis of large simulation ensembles. In 2015 Winter Simulation Conference (WSC), pp. 517–528. IEEE, 2015.
 [34] M. Meyer, M. Desbrun, P. Schröder, and A. H. Barr. Discrete differentialgeometry operators for triangulated 2manifolds. In Visualization and mathematics III, pp. 35–57. Springer, 2003.
 [35] M. E. Moghadam, Y. Bazilevs, T.-Y. Hsia, I. E. Vignon-Clementel, A. L. Marsden, et al. A comparison of outlet boundary treatments for prevention of backflow divergence with relevance to blood flow simulations. Computational Mechanics, 48(3):277–291, 2011.
 [36] F. Nobile and L. Formaggia. A stability analysis for the arbitrary Lagrangian Eulerian formulation with finite elements. East-West Journal of Numerical Mathematics, 7:105–132, 1999.
 [37] S. Oesau, Y. Verdie, C. Jamin, P. Alliez, F. Lafarge, S. Giraudot, T. Hoang, and D. Anisimov. Shape detection. In CGAL User and Reference Manual. CGAL Editorial Board, 5.0.2 ed., 2020.
 [38] J. T. Pintas, D. de Oliveira, K. A. Ocaña, E. Ogasawara, and M. Mattoso. Scilightning: a cloud provenancebased event notification for parallel workflows. In International Conference on ServiceOriented Computing, pp. 352–365. Springer, 2013.
 [39] M. Rasquin, C. Smith, K. Chitale, E. S. Seol, B. A. Matthews, J. L. Martin, O. Sahni, R. M. Loy, M. S. Shephard, and K. E. Jansen. Scalable implicit flow solver for realistic wing simulations with flow control. Computing in Science & Engineering, 16(6):13–21, 2014.
 [40] O. Sahni, J. Müller, K. E. Jansen, M. S. Shephard, and C. A. Taylor. Efficient anisotropic adaptive discretization of the cardiovascular system. Computer methods in applied mechanics and engineering, 195(41–43):5634–5655, 2006.
 [41] E. Santos, J. Tierny, A. Khan, B. Grimm, L. Lins, J. Freire, V. Pascucci, C. T. Silva, S. Klasky, R. Barreto, et al. Enabling advanced visualization tools in a web-based simulation monitoring system. In 2009 Fifth IEEE International Conference on eScience, pp. 358–365. IEEE, 2009.
 [42] W. J. Schroeder, B. Lorensen, and K. Martin. The visualization toolkit: an objectoriented approach to 3D graphics. Kitware, 2004.
 [43] O. Sorkine, D. CohenOr, Y. Lipman, M. Alexa, C. Rössl, and H.P. Seidel. Laplacian surface editing. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, pp. 175–184, 2004.
 [44] K. Stein, T. Tezduyar, and R. Benney. Mesh moving techniques for fluid-structure interactions with large displacements. J. Appl. Mech., 70(1):58–63, 2003.
 [45] K. Stein, T. E. Tezduyar, and R. Benney. Automatic mesh update with the solid-extension mesh moving technique. Computer Methods in Applied Mechanics and Engineering, 193(21–22):2019–2032, 2004.
 [46] R. Tchoua, S. Klasky, N. Podhorszki, B. Grimm, A. Khan, E. Santos, C. Silva, P. Mouallem, and M. Vouk. Collaborative monitoring and analysis for simulation scientists. In 2010 International Symposium on Collaborative Technologies and Systems, pp. 235–244. IEEE, 2010.
 [47] T. E. Tezduyar, M. Behr, S. Mittal, and A. Johnson. Computation of unsteady incompressible flows with the stabilized finite element methods: Spacetime formulations, iterative strategies and massively parallel implementations. ASME Pressure Vessels Piping Div Publ PVP., 246:7–24, 1992.
 [48] A. Updegrove, N. M. Wilson, J. Merkow, H. Lan, A. L. Marsden, and S. C. Shadden. Simvascular: an open source pipeline for cardiovascular simulation. Annals of biomedical engineering, 45(3):525–541, 2017.
 [49] I. E. Vignon-Clementel, C. A. Figueroa, K. E. Jansen, and C. A. Taylor. Outflow boundary conditions for three-dimensional finite element modeling of blood flow and pressure in arteries. Computer methods in applied mechanics and engineering, 195(29–32):3776–3796, 2006.
 [50] C. von Tycowicz, C. Schulz, H.-P. Seidel, and K. Hildebrandt. Real-time nonlinear shape interpolation. ACM Trans. Graph., 34(3), May 2015. doi: 10.1145/2729972
 [51] M. Wang, N. Ferey, P. Bourdot, and F. Magoulès. Interactive 3D fluid simulation: steering the simulation in progress using the lattice Boltzmann method. In 2019 18th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES), pp. 72–75. IEEE, 2019.
 [52] Y. Wang, X. Gu, S.T. Yau, et al. Volumetric harmonic map. Communications in Information & Systems, 3(3):191–202, 2003.
 [53] P. Wenisch, C. v. Treeck, A. Borrmann, E. Rank, and O. Wenisch. Computational steering on distributed systems: Indoor comfort simulations as a case study of interactive cfd on supercomputers. International Journal of Parallel, Emergent and Distributed Systems, 22(4):275–291, 2007.
 [54] C. H. Whiting and K. E. Jansen. A stabilized finite element method for the incompressible Navier–Stokes equations using a hierarchical basis. International Journal for Numerical Methods in Fluids, 35(1):93–116, 2001.
 [55] M. Yano and D. L. Darmofal. An optimizationbased framework for anisotropic simplex mesh adaptation. Journal of Computational Physics, 231(22):7626–7649, 2012.
 [56] H. Yi, M. Rasquin, J. Fang, and I. A. Bolotnov. In-situ visualization and computational steering for large-scale simulation of turbulent flows in complex geometries. In 2014 IEEE International Conference on Big Data (Big Data), pp. 567–572, Oct 2014. doi: 10.1109/BigData.2014.7004275
 [57] H. Yu, C. Wang, R. W. Grout, J. H. Chen, and K.L. Ma. In situ visualization for largescale combustion simulations. IEEE computer graphics and applications, 30(3):45–57, 2010.