1 Introduction
Topology optimization has become an effective and popular technique in structural and additive manufacturing, and has found uses in architecture, medicine and material science [Adam2019, Jang2008, Liu2018]. The objective is to find the optimal distribution of a fluid or material within a given domain that minimizes a problem-specific cost functional. In contrast to shape optimization, the topology of the structure does not need to be chosen a priori.
There are several mathematical formulations for topology optimization, including density approaches [Bendsoe1989, Bendsoe2004, Borrvall2003, Mlejnek1992], topological derivatives [Sokolowski1999], level set methods [Allaire2002, Allaire2004, Wang2003] and evolutionary methods [Xie1993]. We focus on the density approach. This introduces a function, denoted $\rho$, that represents the material distribution over the given domain. Ideally we would find an optimizing material distribution taking only the values zero and one, indicating presence or absence of material. However, this is numerically intractable in general and we therefore consider intermediate densities in order to exploit continuous optimization techniques. The model is then regularized to favor solutions where $\rho$ is close to zero or one.
Due to the nonlinear relation between $\rho$ and the solution of the underlying physical system, multiple local minima can occur even in problems with a linear governing partial differential equation (PDE). For example, minimizing the power dissipation of a fluid governed by the Stokes equations flowing through a pipe can give rise to distinct pipe configurations that locally minimize the power lost to dissipation [Borrvall2003, Sec. 4.5]. Currently, the main technique to address this is the use of continuation methods to promote convergence to better local minima. However, Stolpe and Svanberg [Stolpe2001] have provided elementary examples where these continuation methods fail; for example, the compliance minimization of a six-bar truss can be reduced to the optimization problem considered in [Stolpe2001, Sec. 3.1].
In that example, the continuation parameter interpolates the problem data, which depend on the Poisson ratio and the modulus of elasticity. For fixed values of these constants, a poor starting guess can converge to a local minimum, and even as the continuation parameter reaches its final value, the continuation method will always return this local minimum and will not converge to the true global solution.
The calculation of multiple stationary points is important because iterative methods typically offer no guarantee of whether the minimum they converge to is local or global. By finding multiple stationary points, one can choose the best available in a post-processing step. Furthermore, an iterative method may converge to a stationary point which is undesirable for manufacturing or aesthetic reasons; thus industrial applications can benefit from having a choice of multiple locally optimal configurations [Doubrovski2011].
In this paper we formulate an algorithm, which we call the deflated barrier method, for finding multiple stationary points of topology optimization problems, and present several large-scale numerical examples arising from the finite element discretization of PDEs. One example we consider is the minimization of the power dissipation of fluid flow governed by the incompressible Navier–Stokes equations on a rectangular domain with five small decagonal holes. We discover 42 stationary points of this optimization problem with the deflated barrier method. The material distributions of these solutions are shown in Fig. 1.
The deflated barrier method is a combination of deflation [Brown1971, Farrell2015, Farrell2019], barrier methods [Fiacco1990, Forsgren2002, Frisch1955, Schiela2007, Schiela2008, Ulbrich2009, Weiser2008], primal-dual active set solvers [Benson2003, HintermullerIto2003] and predictor-corrector methods [Seydel2010]. While the most popular approach for solving topology optimization problems is the method of moving asymptotes [Svanberg1987], barrier methods have also been successfully employed [Evgrafov2014, Hoppe2002, Maar2000, RojasLabanda2015]. The combination of primal-dual active set solvers, barrier and deflation methods in the manner proposed is novel, and it does not suffer the poor behavior that barrier methods traditionally exhibit as the barrier parameter approaches zero. In fact, in our numerical examples, the combination performs better than the optimize-then-discretize formulation of the primal-dual interior point method where Newton–Kantorovich iterates are used to solve the subproblems, either approximately or exactly. The predictor-corrector method is also adapted for use with box-constrained variables to ensure the predictor is feasible. The main contribution of this work is an algorithm to robustly determine multiple solutions of nonconvex, inequality- and box-constrained infinite-dimensional optimization problems starting from poor initial guesses.
Other approaches to computing multiple solutions of topology optimization problems are possible. Zhang and Norato [Zhang2018] apply the tunneling method [Levy1984] to these problems, adapting the method of moving asymptotes. Tunneling proceeds by finding a single minimum, then looking for other controls that yield the same functional value (attempting to tunnel into other basins) by solving an auxiliary equation. Deflation is used in the tunneling phase to ensure that the Gauss–Newton procedure applied to the tunneling functional does not converge to the current state.
The outline of the paper is as follows. In Section 2 we formulate some topology optimization problems for pipe design and structural compliance. The deflated barrier method is described in Section 3. Several examples of topology optimization problems are given in Section 4, where we discover multiple solutions for Navier–Stokes flow, Stokes flow, and structural compliance, and consider the performance of our algorithm. In Section 5 we outline our conclusions. A result concerning the equivalence of Hintermüller et al.'s primal-dual active set strategy [HintermullerIto2003] and Benson and Munson's reduced-space active-set strategy [Benson2003] is given in Appendix A. In Appendix B we describe our novel feasible tangent prediction method.
2 Topology optimization formulations
2.1 Topology optimization of Stokes flow
We consider the formulation of the topology optimization of fluids proposed in the pioneering work of Borrvall and Petersson [Borrvall2003]. They derive a 'generalized Stokes problem' incorporating a material distribution variable which takes the value one where fluid is present and zero where there is void. The derived optimization problem requires no further regularization for well-posedness, in contrast to structural topology optimization. The optimization problem admits local minima, which are not necessarily unique.
The topology optimization problem of Borrvall and Petersson is

(BP)   $\min_{(u,\rho)} \; J(u,\rho) \quad \text{subject to} \quad \mathrm{div}(u) = 0 \text{ in } \Omega, \quad 0 \le \rho \le 1 \text{ a.e.}, \quad \int_\Omega \rho \,\mathrm{d}x \le \gamma |\Omega|,$

where $u$ denotes the velocity of the fluid, $\rho$ is the material distribution of the fluid and

$J(u,\rho) = \frac{1}{2} \int_\Omega \alpha(\rho)\, |u|^2 \,\mathrm{d}x + \frac{\nu}{2} \int_\Omega |\nabla u|^2 \,\mathrm{d}x - \int_\Omega f \cdot u \,\mathrm{d}x.$

In this work, $H^1(\Omega)$ denotes the Sobolev space of square-integrable functions with square-integrable weak derivatives, and $L^\infty(\Omega)$
denotes the vector space of essentially bounded measurable functions equipped with the essential supremum norm. Furthermore,
$\Omega \subset \mathbb{R}^d$ is a Lipschitz domain with dimension $d = 2$ or $3$, $f$ is a body force, $\nu > 0$ is the (constant) viscosity and $\gamma \in (0, 1)$ is the prescribed volume fraction. Moreover, $u = g$ on $\partial\Omega$, with $g \not\equiv 0$, i.e. $g$ is nonzero on a subset of the boundary with nonzero Hausdorff measure. Here, $\alpha$ is the inverse permeability, modelling the influence of the material distribution on the flow. For values of $\rho$ close to one, $\alpha(\rho)$ is small, permitting fluid flow; for small values of $\rho$, $\alpha(\rho)$ is very large, restricting fluid flow. The function $\alpha$ satisfies the following properties:
(A1) $\alpha : [0, 1] \to [\underline{\alpha}, \overline{\alpha}]$ with $0 \le \underline{\alpha} < \overline{\alpha} < \infty$;

(A2) $\alpha$ is convex and monotonically decreasing;

(A3) $\alpha(0) = \overline{\alpha}$ and $\alpha(1) = \underline{\alpha}$,
generating a superposition operator also denoted $\alpha$. Typically, in the literature $\alpha$ takes the form [Borrvall2003, Evgrafov2014]

(1)   $\alpha_q(\rho) = \overline{\alpha} + (\underline{\alpha} - \overline{\alpha})\, \rho\, \frac{1+q}{\rho+q},$

where $q > 0$ is a penalty parameter, so that $\alpha_q(0) = \overline{\alpha}$ and $\alpha_q(1) = \underline{\alpha}$.
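For concreteness, an interpolation of this type can be evaluated as below. The specific values of $\underline{\alpha}$, $\overline{\alpha}$ and $q$ are illustrative assumptions, not the values used in the paper:

```python
def alpha(rho, q=0.1, alpha_min=2.5e-4, alpha_max=2.5e4):
    """Borrvall-Petersson-type inverse permeability,
    alpha_q(rho) = alpha_max + (alpha_min - alpha_max) * rho * (1 + q) / (rho + q):
    convex and monotonically decreasing, with alpha(0) = alpha_max (solid,
    flow restricted) and alpha(1) = alpha_min (fluid, flow permitted).
    All parameter defaults are illustrative."""
    return alpha_max + (alpha_min - alpha_max) * rho * (1 + q) / (rho + q)
```

The parameter `q` controls how strongly intermediate densities are penalized, consistent with its role as a penalty parameter in Eq. 1.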
Remark 1
The integral in Eq. BP is well defined. Indeed, since $\alpha$ is assumed to be convex, it is Borel measurable; also, since $\rho$ is Lebesgue measurable, the composition $\alpha \circ \rho$ is Lebesgue measurable.
Theorem 1
Due to the lack of strict convexity in Eq. BP, a minimizing pair $(u, \rho)$ is not necessarily unique.
2.2 Construction of the barrier functional
In this subsection we formulate a barrier functional with an enlarged feasible set that will be employed by our algorithm to find multiple solutions of the Borrvall–Petersson optimization problem.
We first consider the volume constraint. This constraint is typically modeled as an inequality constraint. However, as we show below, this constraint is active at an optimal solution, and so we may also apply it as an equality constraint. To the best of our knowledge, the following result is novel.
Proposition 1
If the pair $(u, \rho)$ is an isolated local or global minimizer of the functional defined in Eq. BP and $\gamma < 1$, then $\int_\Omega \rho \,\mathrm{d}x = \gamma|\Omega|$.
Proof (by contradiction)
Suppose there exists a pair $(u, \rho)$ that is an isolated local or global minimizer of Eq. BP such that $\int_\Omega \rho \,\mathrm{d}x < \gamma|\Omega|$. By the definition of an isolated local minimizer, there exists an $r > 0$ such that for any feasible $\tilde{\rho} \neq \rho$ that satisfies $\|\tilde{\rho} - \rho\| \le r$, then $J(u, \tilde{\rho}) > J(u, \rho)$. Then for any function $\eta$ such that

(2)   $0 \le \rho + \eta \le 1$ a.e. in $\Omega$,

(3)   $0 < \|\eta\| \le r$,

(4)   $\int_\Omega (\rho + \eta) \,\mathrm{d}x \le \gamma|\Omega|$,

we have that $\rho + \eta$ is feasible from Eq. 2 and Eq. 4 and lies in the neighborhood of $\rho$ from Eq. 3. Such an $\eta$ exists: for example, a suitably scaled multiple of $\min(1 - \rho, \delta)$ for small $\delta > 0$. We see that $\eta \not\equiv 0$, since $\int_\Omega \rho \,\mathrm{d}x < \gamma|\Omega| \le |\Omega|$ implies $\rho < 1$ on a set of positive measure, and $\eta$ satisfies Eq. 2–Eq. 4 by construction.
Since $\alpha$ is monotonically decreasing and $\eta$ is nonnegative and not equal to zero, $\alpha(\rho + \eta) \le \alpha(\rho)$ a.e. and hence $J(u, \rho + \eta) \le J(u, \rho)$, contradicting the assumption that $(u, \rho)$ is an isolated local minimizer.
Given we can tighten the inequality volume constraint to an equality volume constraint, we now define the Lagrangian and the enlarged feasible-set barrier functional, respectively, as:

(5)   $L(z) := J(u, \rho) - \int_\Omega p\, \mathrm{div}(u) \,\mathrm{d}x + \lambda_0 \left( \int_\Omega \rho \,\mathrm{d}x - \gamma|\Omega| \right) + \lambda_1 \int_\Omega p \,\mathrm{d}x,$

(6)   $L_\mu(z) := L(z) - \mu \int_\Omega \left[ \log(\rho + \epsilon) + \log(1 + \epsilon - \rho) \right] \mathrm{d}x,$

where $z = (u, p, \rho, \lambda_0, \lambda_1)$, $p$ denotes the pressure, $\lambda_0$ is the Lagrange multiplier for the volume constraint, $\lambda_1$ is the Lagrange multiplier to fix the integral of the pressure, $\epsilon > 0$ is a fixed enlargement of the feasible set, and $\mu \ge 0$ is the barrier parameter.
The classical barrier functional corresponds to taking $\epsilon = 0$. The role of $\epsilon > 0$ is to enlarge the feasible region permitted by the barrier terms. In the deflated barrier method we do not use the barrier terms to enforce the box constraints on $\rho$, but rather to perform continuation in the barrier parameter and follow a central path. This provides robust convergence and offers an opportunity to find other solutions of the optimization problem, as explained in Section 3.
We note that the Euler–Lagrange equation of the Lagrangian with respect to $u$ recovers the generalized Stokes momentum equation formulated by Borrvall and Petersson [Borrvall2003, Eq. 12]. Hence, we are only required to enforce the incompressibility and volume constraints explicitly. In the case where we wish to minimize the power dissipation of a fluid flow governed by a generalized Navier–Stokes momentum equation, we are required to introduce three extra Lagrange multipliers, as done in Section 4.4.
2.3 Topology optimization of the compliance of elastic structures
A significant portion of the topology optimization literature focuses on minimizing the compliance of a structure, such as a Messerschmitt–Bölkow–Blohm (MBB) beam or a cantilever. Compliance problems involve finding the optimal topology of a structure, obeying a volume constraint within a specified domain, that minimizes the work done on the structure by a body or boundary force. For simplicity we consider structures that obey linear elasticity. The optimization problem we consider is posed as follows,
(C) 
such that,
where $u$ denotes the displacement of the structure,

$\sigma(u) := 2\mu_l\, \varepsilon(u) + \lambda_l\, \mathrm{tr}(\varepsilon(u))\, I, \qquad \varepsilon(u) := \tfrac{1}{2}\left( \nabla u + \nabla u^\top \right),$

denotes the stress tensor, the traction $t$
is given, $\Gamma_D$ and $\Gamma_N$ are known boundaries on $\partial\Omega$ such that $\Gamma_D \cap \Gamma_N = \emptyset$, $\mu_l$ and $\lambda_l$ are the Lamé coefficients, $\mathrm{tr}$ is the matrix trace operator, $I$ is the identity matrix, $n$ is the outward normal, and the stiffness interpolation is $k(\rho) = \epsilon_{\mathrm{SIMP}} + (1 - \epsilon_{\mathrm{SIMP}})\, \rho^p$ with $p > 1$ and $0 < \epsilon_{\mathrm{SIMP}} \ll 1$. Unless stated otherwise, we fix the values of $p$ and $\epsilon_{\mathrm{SIMP}}$ throughout. The use of $k(\rho)$ is known as the Solid Isotropic Material with Penalization (SIMP) model. Bendsøe and Sigmund [Bendsoe2004, Ch. 1] provide a concise physical interpretation of the SIMP model. In essence, for $\rho$ close to one, $k(\rho)$ is close to one, indicating the presence of material, whereas where $\rho$ is close to zero, $k(\rho)$ approaches $\epsilon_{\mathrm{SIMP}}$, indicating void. Thus, $k(\rho)$ plays the reverse role of the inverse permeability $\alpha(\rho)$. It is typical to raise $\rho$ to the power $p$ in order to penalize intermediate values of $\rho$.
We introduce a Lagrange multiplier $v$ and reformulate Eq. C as finding the stationary points of

(7)

such that $0 \le \rho \le 1$ a.e. in $\Omega$, and $\int_\Omega \rho \,\mathrm{d}x \le \gamma|\Omega|$.
By deriving the Euler–Lagrange equations of Eq. 7, we see that the linear elasticity PDE constraint on $u$ must be satisfied. However, if we consider the adjoint equation involving $v$, it can be verified that $v$ coincides with $u$ up to sign. Substituting this relation into Eq. 7, we see that Eq. 7 is equivalent to finding the stationary points of

(8)

such that $0 \le \rho \le 1$ a.e. in $\Omega$, and $\int_\Omega \rho \,\mathrm{d}x \le \gamma|\Omega|$. The substitution is useful as it greatly reduces the size of the problem after discretization.
Unfortunately, the problem in general is ill-posed and does not have minimizers in the continuous setting. Naïve attempts at finding minimizers often yield checkerboard patterns in $\rho$. Although a different choice of finite element spaces may avoid the checkerboarding, the solutions will still be mesh-dependent. As the mesh is refined, the beams of the solutions become ever thinner, leading to non-physical solutions in the limit. There are several schemes employed by the topology optimization community to obtain physically reasonable solutions for $\rho$, known as restriction methods [Bendsoe2004]. We opt for the addition of a Ginzburg–Landau energy term,
with weighting $\gamma > 0$ and interface parameter $\epsilon > 0$, to the objective function. Physically, this term corresponds to penalizing fluctuations in the values of $\rho$. As $\epsilon \to 0$, it was shown by Modica [Modica1987] that the Ginzburg–Landau energy converges to the perimeter functional associated with restricting $\rho \in \{0, 1\}$, providing rigorous mathematical grounding for this choice of regularization. For sufficiently large values of the regularization weight, this introduces minima and removes the checkerboarding effect. Other restriction methods used by the topology optimization community include gradient control [Borrvall2001a], perimeter constraints [Borrvall2001a], sensitivity filtering [Bourdin2001, Sigmund1994], design filtering [Bruns2001, Lazarov2011] and regularized penalty [Borrvall2001a].
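One common way to write such a regularization, sketched here with a standard Modica–Mortola double-well (the constants are our illustrative choice, not necessarily the authors'), is

```latex
\mathcal{E}_{\gamma,\epsilon}(\rho)
  = \gamma \int_\Omega \left( \frac{\epsilon}{2}\,|\nabla \rho|^2
  + \frac{1}{2\epsilon}\,\rho\,(1-\rho) \right) \mathrm{d}x .
```

The gradient term penalizes rapid spatial oscillation (suppressing checkerboards), while the double-well term penalizes intermediate densities; as $\epsilon \to 0$ the functional converges, in the sense of $\Gamma$-convergence, to a multiple of the perimeter of the material region.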
After these manipulations, the Lagrangian is given by the reduced objective of Eq. 8 augmented with the Ginzburg–Landau term and the volume constraint term,
where $\lambda_0$ is the Lagrange multiplier for the equality volume constraint. We then define the enlarged feasible-set barrier functional as in Eq. 6.
We have formulated enlarged feasible-set barrier functionals for both the Borrvall–Petersson and structural compliance optimization problems. Finding stationary points of these barrier functionals is equivalent to computing minima, maxima and saddle points of the underlying optimization problems. In the next section we will introduce our algorithm and explain how we obtain multiple stationary points.
3 The deflated barrier method
In the following sections, we describe the components of the deflated barrier method. More specifically, we justify the usage of a barrier method where the subproblems are solved with a primal-dual active set solver to handle the effects of the barrier parameter in the Hessian. This is in contrast to a direct application of a discretize-then-optimize (DTO) primal-dual interior point method, which does not use the structure of the original infinite-dimensional optimization problem. In the context of PDE-constrained optimization, ignoring the problem structure often results in mesh-dependence of the solver. Mesh-dependence is the phenomenon whereby, with each refinement of the mesh, the number of iterations required by the optimization algorithm increases in an unbounded way [Schwedes2017].
3.1 Choosing a solver for the subproblems
Approximately solving the first-order conditions of $L_\mu$ as $\mu \to 0$ is the classical primal interior point approach to finding the minima of Eq. BP and Eq. C. Without additional care, a direct implementation results in the following poor numerical behavior:

(B1) The Hessian of $L_\mu$ has a condition number that grows unboundedly as $\mu \to 0$. Hence as $\mu$ decreases, the computed Newton updates may become inaccurate and require more solver time [Forsgren2002, Th. 4.2];

(B2) A strictly feasible initial guess for the subproblem is asymptotically infeasible if an exact full Newton update of the primal interior point method is used. More precisely, if $\delta\rho$ is the calculated Newton update for $\rho$ at the first iteration of the Newton solver at barrier parameter $\mu$, then as $\mu \to 0$, the bound constraints $0 \le \rho + \delta\rho \le 1$ a.e. fail to hold [Forsgren2002, Sec. 4.3.3].
Typically, to avoid the poor numerical behavior of (B1) and (B2), the DTO primal interior point method is reformulated as a primal-dual interior point method, eliminating the rational expressions. Since the problem is first discretized, the slack variables associated with the box constraints are coupled to the primal variable componentwise. This manifests as a block identity matrix within the full Hessian. The Hessian can then be reduced, and the primal-dual approach is reformulated into a condensed form.
It is well known that PDE-constrained optimization solvers suffer from mesh-dependence when they do not properly treat the structure of the underlying infinite-dimensional problem [Schwedes2017]. In order to obtain accurate solutions, where it is clear whether the material distribution indicates material or void, we may require several refinements of the mesh; in this context, it is clear that mesh-dependence would be particularly disadvantageous. The mesh-independence of our algorithm will be carefully studied in the subsequent numerical examples, and analyzed in future work.
In order to properly treat the structure of the underlying infinite-dimensional problem, we opt for an optimize-then-discretize (OTD) method. The full Hessian arising from an OTD primal-dual interior point method is no longer easily reduced, since the block associated with the slack variables is now a mass matrix, rather than the identity. To avoid solving uncondensed large systems involving three times the number of degrees of freedom of a primal approach, the goal is to develop an OTD barrier method that avoids the poor numerical behavior of (B1) and (B2). In a novel approach, we achieve this by solving the subproblems arising from the first-order conditions of the enlarged feasible-set barrier functional $L_\mu$, while still enforcing the true box constraints, $0 \le \rho \le 1$ a.e., with a primal-dual active set solver. Whereas in a standard barrier method the barrier terms act as a replacement for the box constraints on $\rho$, here we retain the box constraints to be handled by the primal-dual active set solver. The barrier terms are instead used for continuation of the problem, to aid global convergence and to search for other branches of solutions.

The two inner solvers we consider are Hintermüller et al.'s primal-dual active set strategy (HIK) [HintermullerIto2003] and Benson and Munson's reduced-space active-set strategy (BM) [Benson2003]. We briefly illustrate the basic approach taken to solve the individual subproblems using the log-barrier approach coupled with a primal-dual active set solver. Let $f$ be a twice continuously differentiable function and consider the following box-constrained nonlinear program:
(9)   $\min_{x \in \mathbb{R}^n} \; f(x) \quad \text{such that} \quad l \le x \le u.$

Here, we assume that $l, u \in \mathbb{R}^n$ are such that $l < u$ (in each component) and we understand the inequality constraints componentwise. Next, we formulate an 'outer approximation' of Eq. 9 using enlarged feasible-set log-barrier terms (for any $\mu > 0$):

$\min_{x \in \mathbb{R}^n} \; f(x) - \mu \sum_{i=1}^{n} \left[ \log(x_i - l_i + \epsilon) + \log(u_i + \epsilon - x_i) \right].$

We emphasize that there are two pairs of box constraints: the true box constraints $l \le x \le u$ and the enlarged feasible-set box constraints $l - \epsilon \le x \le u + \epsilon$, $\epsilon > 0$. For any fixed $\mu > 0$, the associated KKT system has the form
(10)  
(11)  
(12) 
where $z_l$ and $z_u$ are Lagrange multipliers associated with the true box constraints $l \le x \le u$, and

(13)

where the rational expressions are interpreted componentwise. Assuming we are given a strictly enlarged-set feasible iterate $x^k$, $l - \epsilon < x^k < u + \epsilon$, we linearize around the current iterate
using the associated Newton derivative and reduce the system based on the estimates of the active and inactive sets predicted by the semismooth Newton step.
In HIK, the linearized system for the update is given by
(14) 
where
(15) 
and
(16)  
(17)  
(18)  
(19) 
We define the active set $\mathcal{A}$ as the set of components predicted to attain the bounds, and the inactive set $\mathcal{I}$ as its complement. By substituting Eq. 16–Eq. 19 into Eq. 14 and removing the rows associated with the active set, we observe that
(20) 
We can therefore solve the reduced linear system Eq. 20 to find the remaining unknown components of the update.
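As a finite-dimensional illustration of this reduced solve, the following sketch applies a primal-dual active set iteration in the spirit of HIK to a box-constrained quadratic program. The function name, the choice $c = 1$ and the convergence test are our own assumptions, not the authors' implementation:

```python
import numpy as np

def hik_box_qp(A, b, l, u, c=1.0, maxit=50, tol=1e-10):
    """Primal-dual active set sketch for min 1/2 x'Ax - b'x s.t. l <= x <= u.
    Each sweep estimates the active sets from the primal-dual pair, fixes the
    active components at their bounds and solves the reduced system on the
    inactive set (cf. Eq. 20)."""
    n = len(b)
    x, lam = np.clip(np.zeros(n), l, u), np.zeros(n)
    for _ in range(maxit):
        act_u = lam + c * (x - u) > 0            # estimated upper active set
        act_l = lam + c * (x - l) < 0            # estimated lower active set
        inact = ~(act_u | act_l)
        x_new = np.where(act_u, u, np.where(act_l, l, 0.0))
        if inact.any():
            # reduced system: rows/columns of the active set are removed
            rhs = b[inact] - A[np.ix_(inact, ~inact)] @ x_new[~inact]
            x_new[inact] = np.linalg.solve(A[np.ix_(inact, inact)], rhs)
        lam_new = b - A @ x_new                  # multiplier from stationarity
        lam_new[inact] = 0.0
        if np.allclose(x_new, x, atol=tol) and np.allclose(lam_new, lam, atol=tol):
            break
        x, lam = x_new, lam_new
    return x_new, lam_new
```

For example, with `A` the identity and `b = (2, -2, 0.5)` on the box `[0, 1]`, the iteration identifies the upper bound as active in the first component and the lower bound in the second, returning the clipped unconstrained minimizer.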
In BM, given a feasible iterate with respect to the true box constraints, $l \le x^k \le u$, the active set is defined by

(21)

and the inactive set is given by its complement. The linearized system for the update takes the form
(22) 
The next iterate is then given by $x^{k+1} = P_{[l,u]}(x^k + \delta x)$, where $P_{[l,u]}$ is the componentwise projection onto the true box constraints, i.e.

(23)   $\left( P_{[l,u]}(y) \right)_i = \max\left( l_i, \min(y_i, u_i) \right).$
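The componentwise projection onto box constraints is cheap to apply; a minimal sketch (our naming, not code from the paper):

```python
import numpy as np

def bm_step(x, dx, l, u):
    """Apply a (possibly line-searched) update dx, then project componentwise
    onto the true box constraints: P_[l,u](y)_i = max(l_i, min(y_i, u_i))."""
    return np.minimum(np.maximum(x + dx, l), u)
```

Equivalently, `np.clip(x + dx, l, u)`; uniqueness and low cost of this projection are what make the final paragraph of this subsection's comparison with central-path projections favorable.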
The HIK solver is a well-established method and, under suitable assumptions, is equivalent to a semismooth Newton method [Qi1993, Qi1993b, Ulbrich2003] in both finite and infinite dimensions [HintermullerIto2003]. This equivalence ensures local superlinear convergence and, under further assumptions, guarantees mesh-independence [Hintermuller2004]. Until now, the BM solver had no supporting theoretical results, although it is conveniently included in PETSc [petsc]. Experimentally, we observe that the BM solver enjoys superlinear convergence. At first glance, the two solvers may appear quite different, but in Appendix A we prove that for a linear elliptic control problem, if the active and inactive sets coincide between the two algorithms, then the updates given by HIK and BM are identical.
One common critique of barrier methods is that the step size rules drive the updates of the distributed control to zero. We observe this in numerical examples if we use a Newton solver; however, this issue is averted when using HIK or BM. A step size of one is always taken for the update of the primal variable's active set, whereas a line search can be used for the update of the primal variable's inactive set. Hence, areas of the domain where the control attains the constraint do not influence the step sizes of the updates for sections of the control which are strictly feasible.
Both HIK and BM perform a pointwise projection on the iterates generated by the subproblems of the barrier functional. In the context of a classical OTD primal-dual interior point method applied to a PDE-constrained optimal control problem, under certain assumptions, Ulbrich and Ulbrich [Ulbrich2000, Ulbrich2009] prove that local superlinear convergence holds if the iterates of the control and its associated Lagrange multipliers are pointwise projected to a controlled neighborhood of the central path. Although not all their assumptions hold in our case (in particular, our problems are not convex), the combination of a primal-dual active set solver and a barrier method mimics computing a Newton step of a primal-dual approach and then performing a pointwise projection. An advantage of our method is that our pointwise projection is unique and cheap to compute.
Numerically, this method only requires solving linear systems that are less than or equal to the size of the linear systems in a standard barrier method. Moreover, in the BM solver, the constrained variables can never reach the bounds of the enlarged feasible set, ensuring the Hessian remains bounded. Furthermore, both the BM and HIK solvers remove the rows and columns in the Hessian associated with the active constraints. It is these active constraints which are the source of the unbounded eigenvalues that cause the ill-conditioning of the barrier method as $\mu$ approaches zero. In Fig. 7 we give an example demonstrating that the condition number is controlled by the elimination of the active set. Removing rows and columns associated with the active set mimics the principle of Nash et al.'s stabilized barrier method [Nash1994, Nash1993].

3.2 Deflation
Deflation is an algorithm for the calculation of multiple solutions of systems of nonlinear equations from the same initial guess. Suppose a system of PDEs, $F(u) = 0$, has multiple solutions $u_1, u_2, \ldots$ that we wish to find. We find the first solution $u_1$ by utilizing a Newton-like algorithm. Instead of then using a standard multistart approach, which may converge to the same solution, we introduce a modified system $G$ such that:

(i) $G(u) = 0$ if and only if $F(u) = 0$, for $u \neq u_1$;

(ii) a Newton-like solver applied to $G$, starting from any initial guess, will not converge to $u_1$.

This process is visualized in Fig. 2. In principle, one can use the same initial guess to converge to multiple solutions. The modified system is obtained by applying a deflation operator $M(u; u_1)$ to $F$, such that:

(D1) $M(u; u_1)$ is invertible for all $u$ in a neighborhood of $u_1$;

(D2) $\liminf_{u \to u_1} \| M(u; u_1)\, F(u) \| > 0$.

(D1) ensures that the resulting system has a solution wherever the original problem has an unknown solution, and (D2) ensures that a Newton-like method applied to the newly deflated system does not converge as $u \to u_1$. The conditioning of the Jacobian of the deflated system does not cause computational difficulty, since the Newton update of the deflated system can be expressed as a scaling of the Newton update of the original system via the Sherman–Morrison formula [Farrell2015, Sec. 3].
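A scalar sketch of the idea, using a shifted deflation operator of the form $M(x) = \sigma + \prod_r |x - r|^{-p}$ as in [Farrell2015]; the function names and the toy polynomial are illustrative:

```python
def deflated_newton(f, df, x0, known_roots, p=2, sigma=1.0, maxit=100, tol=1e-12):
    """Newton's method on the deflated residual G(x) = M(x) f(x), where
    M(x) = sigma + prod_r |x - r|^(-p).  With no known roots and sigma = 1
    this reduces to plain Newton.  Scalar illustration only."""
    x = float(x0)
    for _ in range(maxit):
        prod = 1.0
        for r in known_roots:
            prod *= abs(x - r) ** -p
        prod = prod if known_roots else 0.0
        M = sigma + prod
        # derivative of the deflation factor: prod * sum(-p / (x - r))
        dM = prod * sum(-p / (x - r) for r in known_roots)
        G, dG = M * f(x), M * df(x) + dM * f(x)
        x_new = x - G / dG
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For $f(x) = x^2 - 1$ starting from $x_0 = 2$, plain Newton finds the root $1$; rerunning from the same initial guess with that root deflated drives the iteration to the other root, $-1$, illustrating property (ii) above.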
Deflation was first introduced in the context of polynomials by Wilkinson [Wilkinson1963]. It was then extended to differentiable finite-dimensional maps by Brown and Gearhart [Brown1971]. More recently, Farrell et al. extended the original Brown and Gearhart technique to Fréchet-differentiable maps [Farrell2015]. Deflation has been used to discover multiple solutions of cholesteric liquid crystals, Bose–Einstein condensates, mechanical metamaterials, aircraft stiffeners, and other applications [charalampidis2016, Emerson2018, medina2019a, Robinson2017, xia2019a]. It has also been extended to semismooth mappings [Farrell2019], which is necessary in the current context of topology optimization.
3.3 Implementation of the deflated barrier method
The essential idea is to use deflation to attempt to find other branches during the continuation of the barrier parameter, as visualized in Fig. 3. As summarized in Fig. 4, the deflated barrier method is divided into three phases: prediction, continuation and deflation.
Prediction: Given a solution at the current barrier parameter $\mu_i$, the algorithm calculates an initial guess for the corresponding solution at $\mu_{i+1}$. This is done via a feasible tangent prediction method (as described in Appendix B), a classical tangent prediction method [Seydel2010, Sec. 4.4.1] or a secant prediction method [Seydel2010, Sec. 4.4.2]. A feasible tangent prediction method is identical to its classical counterpart but with box constraints on the predictor step to ensure the initial guess is feasible.

Continuation: Given an initial guess for each branch at the new barrier parameter $\mu_{i+1}$, the algorithm calculates the new solution along each branch with a primal-dual active set solver whilst deflating away all solutions already known at $\mu_{i+1}$.

Deflation: At some subset of the continuation steps, the algorithm searches for new branches at $\mu_{i+1}$, using the solutions on the other branches as initial guesses. The search terminates when all the initial guesses have been exhausted (reached a maximum number of iterations without converging) or when a certain number of branches have been found.
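The control flow of the three phases can be sketched as follows. Here `solve` and `predict` are stand-ins for the problem-specific primal-dual active set subproblem solver (returning `None` on failure) and the feasible predictor; this is a schematic of the loop structure under our own simplifying assumptions, not a reproduction of Algorithm 1:

```python
def deflated_barrier(solve, predict, branches, mu, mu_min=1e-6, rate=0.5):
    """Schematic outer loop: predict each branch to the next barrier
    parameter, re-solve it while deflating branches already recovered,
    then use the recovered branches to seed searches for new ones."""
    while mu > mu_min:
        mu_new = rate * mu                      # a priori update; could be adaptive
        updated = []
        for sol in branches:                    # prediction + continuation phases
            guess = predict(sol, mu, mu_new)
            new = solve(guess, mu_new, deflate=updated)
            if new is not None:
                updated.append(new)
        for sol in list(updated):               # deflation phase: seek new branches
            extra = solve(sol, mu_new, deflate=updated)
            if extra is not None:
                updated.append(extra)
        branches, mu = updated, mu_new
    return branches
```

In a real run, `solve` would wrap the semismooth Newton solver applied to the deflated residual, and the deflation phase is what allows new branches to appear partway along the continuation.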
We now explain the notation used in Algorithm 1. Let $z$ denote the tuple of unknowns, comprising the velocity, pressure, material distribution and Lagrange multipliers in the Borrvall–Petersson case, and the displacement, material distribution and volume Lagrange multiplier in the compliance case. The value of the barrier parameter at subproblem iteration $i$ is denoted $\mu_i$. The initial guesses for the density and for the volume constraint Lagrange multiplier are supplied to the algorithm. The generator for the next value of the barrier parameter can be adaptive or chosen a priori, provided it generates a strictly decreasing sequence. Under suitable conditions, the first-order conditions of $L_\mu$, together with the box constraints on $\rho$, can be reformulated into perturbed KKT conditions [Ulbrich2009, Rem. 3], which in turn can be reformulated as a semismooth system of partial differential equations, $F_\mu(z) = 0$. Let
(24) 
Let $F'_\mu$ denote the Fréchet derivative with respect to $z$. Let $S_i$ denote the set of solutions found at $\mu_i$. Let $M$ denote the deflation operator and $Z$ denote the function space of $z$.
4 Numerical results
In all examples the systems were discretized with the finite element method using FEniCS [fenics], and the arising linear systems were solved by a sparse LU factorization with MUMPS [mumps] and PETSc [petsc]. The meshes were either created in FEniCS or Gmsh [Geuzaine2009]. We present three different examples of the minimization of the power dissipation of a fluid constrained by the Stokes equations, one constrained by the Navier–Stokes equations, and two examples of the minimization of the compliance constrained by linear elasticity. Throughout the numerical examples, $h_{\min}$ denotes the minimum diameter of all simplices in the mesh, where the simplex diameter is defined as the maximum edge length. Similarly, $h_{\max}$ denotes the maximum diameter of all simplices in the mesh. All solutions depicted are presented as computed by the deflated barrier method, with no truncation or post-processing of the material distribution.
4.1 Borrvall–Petersson double-pipe
We consider the double-pipe problem with a prescribed volume fraction $\gamma$, two prescribed flow inputs and two prescribed outputs, and the boundary conditions as prescribed in Fig. 5. We use $\alpha$ as given in Eq. 1, with fixed values of $\overline{\alpha}$ and $q$. Here $q$ is a penalty parameter which controls the level of intermediate values (between zero and one) in the optimal design.
We use a Taylor–Hood finite element discretization for the velocity and pressure and a piecewise-polynomial discretization for the material distribution. For BM, we apply deflation immediately at the initial barrier parameter to find the second branch of solutions. For HIK, this strategy did not converge to the second branch, although the second branch is discovered at a smaller barrier parameter. In both cases tangent prediction is used, as well as a damped minimizing line search [Brune2015, Alg. 2]. Fig. 6 shows the minimizers of the double-pipe problem computed using the deflated barrier method.
In Table 1 we explore the mesh-independence of the primal-dual active set solver iterations. We observe that with each refinement of the mesh, the number of iterations stays roughly constant. In particular, we notice that the behavior is consistent for both HIK and BM. This is a recurring theme and holds in subsequent examples. To demonstrate that the mesh-independence is not an artifact of our choice of finite element spaces, we also display the results of a divergence-free Scott–Vogelius finite element discretization for the velocity and pressure, with the material distribution discretized as before. Stability of this discretization is ensured by using a barycentrically refined mesh [qin1994].
In Fig. 7 we plot the condition number of the Hessian as in a classical barrier method, and the condition number of the Hessian with the rows and columns associated with the active set removed. We observe that the condition number of the latter is significantly smaller, accounting for why our proposed methodology does not suffer from ill-conditioning.
BM solver (Taylor–Hood)                   Branch 0               Branch 1
$h_{\min}$   Dofs      Cont.  Defl.  Pred.    Cont.  Defl.  Pred.
0.0283       38,256    124    0      22       115    30     22
0.0177       97,206    123    0      22       109    30     22
0.0141       151,506   110    0      22       116    29     22

HIK solver (Taylor–Hood)                  Branch 0               Branch 1
$h_{\min}$   Dofs      Cont.  Defl.  Pred.    Cont.  Defl.  Pred.
0.0283       38,256    174    0      43       261    14     43
0.0177       97,206    189    0      43       223    13     43
0.0141       151,506   173    0      43       197    13     43

BM solver (Scott–Vogelius)                Branch 0               Branch 1
$h_{\min}/h_{\max}$   Dofs      Cont.  Defl.  Pred.    Cont.  Defl.  Pred.
0.0278/0.0501         58,685    155    0      22       139    29     22
0.0139/0.0250         234,005   124    0      22       120    29     22
4.2 Neumann-outlet double-pipe
One could argue that fixing the outlet flows is inherently non-physical and that a more realistic model would prescribe natural boundary conditions on the outlets (while keeping the Dirichlet boundary conditions on the inlets) [Deng2018]. In 2007, Limache et al. [Limache2007] observed that in order to avoid violating the principle of objectivity, the following natural boundary condition should be used,

(25)   $\left( 2\nu\, \varepsilon(u) - p I \right) n = 0 \quad \text{on } \Gamma_N,$

where $\varepsilon(u)$ denotes the symmetrized gradient, $I$ denotes the identity matrix and $\Gamma_N$ denotes the outlets. This natural boundary condition is achieved by altering the objective functional to one expressed in terms of the symmetrized gradient,

(26)

Since the velocity is divergence-free and satisfies Dirichlet conditions on the remainder of the boundary, the minimizers of Eq. 26 are the same as those of the original functional combined with the natural boundary conditions described in Eq. 25. The other alteration in the optimization problem is the removal of the Lagrange multiplier $\lambda_1$, since the absolute pressure level is set by the outflow boundary condition.
We employ the Taylor–Hood discretization and initialize the material distribution to a homogeneous density. Deflation finds the second, third and fourth branches during the continuation. On the coarsest mesh, deflation discovers branch 2, then branches 1 and 3, whereas for the other mesh sizes, deflation discovers the branches in ascending order.
The removal of an imposed outlet flow has an interesting effect: the global minimizer in the shape of a double-ended wrench is now a local minimizer, and two new symmetric global minimizers exist, as shown in Fig. 8. This is not entirely surprising. There is a cost associated with the pipe splitting, and if the optimization problem does not require the flow to leave through both outlets, then it is favorable for the flow to exit via one outlet rather than both. This is reflected in the resulting cost.
The mesh-independence of the algorithm is investigated in Table 2. As before, mesh-independence is observed.
BM solver                    Branch 0               Branch 1
$h_{\min}$   Dofs      Cont.  Defl.  Pred.    Cont.  Defl.  Pred.
0.0333       27,455    118    0      53       108    49     34
0.0250       48,605    136    0      37       107    34     37
0.0125       193,205   113    0      35       106    45     36

                             Branch 2               Branch 3
$h_{\min}$   Dofs      Cont.  Defl.  Pred.    Cont.  Defl.  Pred.
0.0333       27,455    166    199    55       166    149    55
0.0250       48,605    145    123    45       145    157    45
0.0125       193,205   128    151    46       128    146    46
4.3 Roller-type pump
In this example problem [Deng2018, Sec. 2.1.4.4], the boundary conditions model an inlet on the bottom of the domain and an outlet on the right of the domain, with a pump rotating at a constant velocity in the center of the domain, where the fluid experiences no-slip boundary conditions. We employ the Taylor–Hood discretization and initialize the material distribution to a homogeneous density. Deflation finds the second branch during the continuation.

A global and a local minimum of the problem are shown in Fig. 9(a). The local minimum avoids the pump in favor of taking the path with the shortest distance from the inlet to the outlet, while the global minimum exploits the rotation given by the pump. The local minimizer has areas where $\rho$ takes values strictly between zero and one, which have an ambiguous physical interpretation. In order to verify whether $\rho$ should be equal to zero or one in such areas, a mixture of grid-sequencing and continuation in the penalty parameter was performed, resulting in the solution shown in Fig. 9(b). The mesh-independence of the algorithm is verified in Table 3.
BM solver                          Branch 0               Branch 1
$h_{\min}/h_{\max}$   Dofs      Cont.  Defl.  Pred.    Cont.  Defl.  Pred.
0.0258/0.0509         7,388     260    0      55       118    80     35
0.0127/0.0255         29,174    186    0      51       75     117    25
0.0064/0.0127         113,096   177    0      46       83     99     29
4.4 Five-holes double-pipe with Navier–Stokes
We consider the original Borrvall–Petersson double-pipe problem with Dirichlet outflow conditions, but modify the domain to include five small decagonal holes with inscribed radius 0.05, positioned as shown in Fig. 10. We further show the flexibility of our method by considering fluid flow constrained by the incompressible Navier–Stokes equations. This is achieved by introducing additional Lagrange multipliers to enforce the Navier–Stokes equations. We then define the Lagrangian as

(27)

where the (constant) fluid density appears in the convective term. The remaining variables are equal to those in the original double-pipe problem. We employ the Taylor–Hood discretization and initialize