Academic/Computational Contributions

A computationally guided, agent-based approach to financial portfolio optimization that incorporates varying problem formulations, a variety of parameters, and complex business constraints.

Conceptualization of a computational model comprising various classes of solver agents that contribute optimal solutions to a bank of solutions, which in turn guides a super-agent solver towards obtaining optimal solutions for a global investment objective.

Development of a novel mechanism for neighbourhood generation in the Simulated Annealing algorithm using a solution-bank approach.

Conceptualization of a perturbation strategy using a decision matrix for controlled disturbances to decision variables during neighbourhood generation.
Contributions to SAS Research and Development

Portfolio Optimization Component of Asset Performance Management Solution

Development of Robust Portfolio Optimization Formulation

Development of Risk-based Asset Allocation Strategies


Platform for Innovation (SASFoundry) and Knowledge Management (SAS ToolPool)

Simulated Annealing for Global Optimization (Algorithm Development)

Ant Colony Optimization for Continuous Domains (Algorithm Development)

A Suite of Benchmark Functions for Global Optimization

1.1 Introduction, Background & Motivation
In the financial literature, a portfolio is an appropriate collection of investments held by an individual or a financial institution. These investments, or financial assets, include shares of a company (often referred to as equities), government bonds, fixed-income securities, commodities (such as gold, silver, etc.), derivatives (including options, futures and forwards), mutual funds, and various mathematically complex, business-driven financial instruments. The individual responsible for making investment decisions using the money (or capital) that individual investors or financial institutions have placed under his/her control is referred to as the portfolio manager. In principle, a portfolio manager holds responsibility for managing the asset and liability portfolios of a financial institution.
From a very simplistic viewpoint, suppose we have a capital of one lakh rupees to invest in equities. Further, suppose there exists a market with three equities: Infosys, Tata Steel and Reliance Industries. An investment perspective on these three equities raises the following fundamental questions:

In which of these would we invest?

How much would we invest in each of them?
The proportion of the entire investment amount divided amongst the three equities addresses both questions. That is, if an amount X1 were invested in Infosys, X2 in Tata Steel and X3 in Reliance Industries such that the entire one lakh rupees is completely invested, then we are enforcing a budget constraint such that X1 + X2 + X3 = 100% of the entire investment amount. Hence, if we can precisely determine the amounts to be invested in each of these, i.e. X1, X2 and X3, then we are determining an optimal set of weights corresponding to the equities we considered. Fundamentally, determining this optimal structure of weights is what the mathematical and financial literature calls the portfolio optimization problem. In principle, the portfolio may consist of any of the complex investment options available, and may carry a variety of realistic constraints.
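The budget constraint above can be made concrete with a small sketch. The rupee amounts assigned to each equity below are illustrative assumptions, not figures from the text:

```python
# Hypothetical allocation of Rs. 1,00,000 across the three equities named
# above; the individual amounts are illustrative assumptions.
capital = 100_000
amounts = {"Infosys": 50_000, "Tata Steel": 30_000, "Reliance Industries": 20_000}

# Budget constraint: the amounts must exhaust the capital completely.
assert sum(amounts.values()) == capital

# The corresponding weights X1, X2, X3 sum to 1 (i.e. 100% of the investment).
weights = {name: amt / capital for name, amt in amounts.items()}
print(weights)
```

Determining the best such set of weights, rather than fixing it by hand, is precisely the optimization problem discussed in the following chapters.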
Financial portfolio optimization provides a formal approach to making investment decisions:

for selection of investment portfolios containing the financial instruments,

to allocate a specified capital over a number of available assets,

to meet certain predefined objectives,

to mitigate financial risks and ensure better preparedness for uncertainties,

to establish mathematical and computational methods under realistic constraints,

to manage profit and loss of the portfolio (across long and short positions), and,

to provide stability across inter- and intraday market fluctuations, etc.
Banks, fund management firms, financial consulting institutions and large institutional investors are continually faced with the challenges of managing their funds, assets and stocks towards selecting, creating, balancing and evaluating optimal portfolios. Markowitz's Modern Portfolio Theory (MPT) provided a fundamental breakthrough by establishing the mean-variance analysis framework. Since then, modifications, extensions and alternatives to MPT have been worked out to simplify and prioritize the assumptions of the theory and to address the limitations of the framework. Parallel work incorporating various other challenges has been introduced consistently in numerous research papers.
Financial crises, economic imbalances, algorithmic trading and highly volatile movements of asset prices in recent times have raised serious concerns about the management of financial risks. The inclusion of risk measures in balancing optimal portfolios has become crucial. In recent times, varied mathematical models have emerged, leading towards practical risk-based asset allocation strategies. Undoubtedly, the incorporation of risk factors brings additional realistic constraints, with an increased number of parameters, towards the formulation of mathematically complex and higher-order objective functions that challenge the role of deterministic mathematical optimization routines.
In today's world, technological advances in computation speed, power and volume need minimal introduction. Powerful computing infrastructure accounting for high-performance computing, grid-based analytics and in-memory analysis of large volumes of data lays a strong basis for the role of computational modeling in the world of finance. Larger possibilities of uncertain events, a growing number of markets, and mathematically modeled complex financial instruments are some of the challenges that computational models and algorithms tend to address.
The diffusion of low-cost, high-performance computing capabilities has allowed for the broad use of numerical methods. While closed-form solutions and the consequent search for simpler models remain important, more complex models place a stronger emphasis on computationally intensive methods such as Monte Carlo simulation, numerical approximation of ordinary and stochastic differential equations, population-based approaches, and heuristic branching over uncertain possibilities in a search space; these provide just a gist of the variety of possibilities that the computational sciences offer. Building on the availability of such high-valued computing techniques, and to overcome the challenges faced by deterministic optimization methods, this work focuses on stochastic-search-based optimization routines for optimal asset allocation strategies.
Interestingly, a deeper understanding of the mechanisms of financial markets suggests that they tend to behave similarly to the systems, processes and phenomena that occur in the natural world around us. In the computational sciences, treatments of complex problems via robust algorithms have repeatedly taken metaphorical inspiration from nature-inspired methods of reaching optimal solutions, just as bio-organisms explore a vast number of possibilities to reach an optimal solution. Physical processes that occur in nature assist in providing solutions via repeated trials and facilitate reducing estimation errors for the decision variables of objective functions. And evolutionary approaches in nature strengthen the stability of the solution across trials from one generation to another.
In light of the above introduction and background, the next section explores further insights from the literature and then takes a fresh look at the problem of financial portfolio optimization.
1.2 Literature Survey
A formal approach towards making investment decisions for obtaining an optimal portfolio with a specific objective requires a mathematical formulation of the problem. Validating the models from a mathematical perspective is undoubtedly challenging and requires rigorous understanding. However, with increasing computational capabilities and growing volumes of data, computational models and algorithms are often developed to validate the mathematical models against empirically available data. Standing on the shoulders of giants, this section provides a literature survey of the theoretical developments and computational approaches to the problem.
The fundamental breakthrough in the problem of asset allocation and portfolio optimization is often dated to Markowitz's Modern Portfolio Theory (Markowitz, 1952). It considers rational investors and models the problem within the mean-variance analysis framework, where the variance of the portfolio is minimized for a fixed value of the expected return on the entire portfolio. The framework also assumes a market without any taxes or transaction costs, where short selling is disallowed but assets are infinitely divisible and can be traded in any non-negative fractions.
James Tobin's work (Hester and James, 1967) presents the inclusion of risk-free assets in the traditional Markowitz formulation through the development of the Separation Theorem, which states that in the presence of a risk-free asset, the optimal risky portfolio can be obtained without any knowledge of the investor's preferences. Sharpe's Capital Asset Pricing Model (CAPM) (Sharpe, 1964), on the other hand, takes into account an asset's sensitivity to non-diversifiable risk as it is added to an already well-diversified portfolio. It considers the importance of the covariance structure of the returns, the variance of the portfolio and the market premium (i.e. the difference between the expected return on the asset from the market and the risk-free rate of return on the asset). The model assumes that investors are rational and risk-averse, are broadly diversified across a range of investments, and cannot influence the prices of the assets. The assumptions of the traditional Markowitz framework regarding trade or transaction costs, short selling and trades in non-negative fractions continue to apply.
More recently, Ross's Arbitrage Pricing Theory (APT) (Ross, 1976) models the expected return of a financial asset as a linear function of various macroeconomic factors (where sensitivity to changes in each factor is represented by the factor-specific beta coefficients of the regression model), and Modigliani and Miller's Cost of Capital (Modigliani and Miller, 1958), or the Capital Structure Irrelevance Principle, forms the foundation for examining capital structure. Mathematically referred to as the Modigliani-Miller theorem (Wikipedia, 2012b), it states that under a certain market price process, the value of a firm is unaffected by its financing structure. The theorem assumes the absence of trade and transaction costs, taxes and information asymmetry, and the presence of an efficient market.
Mathematical and statistical perspectives on the problem of portfolio optimization range across various other schools of thought as well. Over the years, tremendous advances in the literature (SCI, 2009), (Balbás, 2007), (Sereda et al., 2010) and (Ortobelli et al., 2005) have been made with regard to exploring return and risk measures for quantifying the parameters of the portfolio optimization problem, including constant and time-varying higher moments of the returns. In order to characterize the uncertainties in parameter estimates, Michaud's resampled efficiency technique (Michaud and Michaud, 2008) lays emphasis on continuous resampling from empirical data. Approaches from mathematical physics, including the use of asset selection filters based on Random Matrix Theory as mentioned in (Sharifi et al., 2004), (Daly et al., 2008) and (Conlon et al., 2007), are being applied to the covariance structures of asset returns to stabilize the problem and provide mechanisms for the assessment of risks. Bootstrapping techniques are often considered in such approaches.

In studies of financial calculus, with equity markets in perspective, Fernholz's Stochastic Portfolio Theory (Fernholz, 2002) and (Karatzas and Fernholz, 2008) discusses a descriptive theory that provides a framework for analysing portfolio behaviour and equity market structure, with both theoretical and practical applications. It provides insights into questions of market behaviour and the arbitrage principle. It is used to construct portfolios with controlled behaviour under certain conditions and provides methods for managing the performance of the portfolio.
From a behavioural science and economic perspective, Von Neumann and Morgenstern's Expected Utility Hypothesis (Wikipedia, 2012a) plays a crucial role in examining an investor's betting preferences with regard to uncertain price movements, represented by a function of the payouts, the probabilities of occurrence, risk aversion, and the different utilities of the same payout to other investors with different assets or personal preferences. It provides an alternative to the expected-value approach of statistics by focusing on four fundamental axioms that characterize a rational decision maker: completeness, transitivity, independence and continuity. Various works, including (Neumann and Morgenstern, 1953), (Marschinski et al., 2007) and Markowitz (2010), discuss the inclusion of such a hypothesis in the portfolio optimization problem.

In recent years, with various social media information being generated on a continual basis, the Bayesian perspective on the problem has been gaining popularity. The Black-Litterman model (Black and Litterman, 1992) includes the views of investors as prior information in the estimates of the expected asset returns and the covariance structure. It provides a mechanism to model user-specified confidence levels that are based on the strong or weak opinions of investors and financial experts, either in totality or with partial knowledge, and spans arbitrary and/or overlapping sets of assets. Related works that discuss the Bayesian approach to the portfolio optimization problem include (Christodoulakis, 2002), (Hoffman et al., 2011), (He and Litterman, 1999), (Walters, 2009) and (Zhou, 2009).
The above portfolio optimization theories and various other theoretical advances have been proposed with the aid of mathematical and statistical modeling methods. For varied surveys on advances in literature, one is referred to (Nishimura, 1990), (Bolshakova et al., 2009), (Moore, 1972) and (FigueroaLopez, 2005).
Considering the wide range of advances in the field of computational sciences in terms of multiple paradigms for computation, the availability of high-performance computing infrastructure and methodical simulation strategies, the analysis and empirical validation of mathematical and statistical models increasingly exploits these enhanced computational capabilities. The further literature presented in this section intends to broadly highlight computational developments without presenting the mathematical variants of the portfolio optimization problem considered in them. The spectrum of such advancements ranges from conventional numerical methods of analytic and non-analytic approximation, to stochastic programming via simulations exploring multi-period scenario generation, stochastic optimization and stochastic-search-based optimization using local search procedures, evolutionary computation and swarm intelligence using population-based algorithmic developments, and machine-learning-based, computationally guided mechanisms.
The formulation of Markowitz's portfolio optimization problem can be viewed as a quadratic optimization problem. (Boyd and Vandenberghe, 2009) and (Nocedal and Wright, 1999) provide a comprehensive literature on convex and numerical optimization methods for solving such a formulation. A commonly known pitfall of numerical approximation on computational systems arises from finite-precision arithmetic, including both fixed-point errors and the numerical errors of floating-point arithmetic. (Antia, 1991) and (Press et al., 2007) provide a detailed overview of scientific computing methods, and (Fernando, 2000) considers a practical and numerical perspective on the portfolio optimization problem.

In the literature corresponding to stochastic programming and simulations, (Yu et al., 2003) provides a survey of stochastic programming models. (Parpas and Rustem, 2006) explores a global optimization approach to scenario generation and portfolio optimization, treating them as individual problems. (Geyer et al., 2009) proposes a stochastic programming approach for multi-period portfolio optimization. (Deniz, 2009) presents a multi-period scenario generation approach to support portfolio optimization, and (Guastaroba, 2010) discusses scenario generation, mathematical models and algorithms for the portfolio optimization problem. (Troha, 2011) poses the problem as stochastic programming with polynomial decision rules, presenting coherency and time-consistency as risk measures for the problem. And (Greyserman et al., 2006) explores portfolio selection using hierarchical Bayesian analysis and Markov Chain Monte Carlo (MCMC) methods.
Stochastic optimization methods refer to optimization models in which random variables appear in the problem formulation. These methods tend to appear as solution approaches to the portfolio optimization problem modeled with reference to the Stochastic Portfolio Theory mentioned above. Stochastic-search-based optimization using local search procedures introduces randomness into the search process to accelerate progress towards an optimal local solution. The main challenge in such a methodology lies in handling constraints.
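To make the idea of a randomized local search concrete, the following sketch runs a generic simulated-annealing style search on a toy one-dimensional objective. It is not tied to any of the cited works; the objective function, cooling schedule and all tuning constants are illustrative assumptions:

```python
import math
import random

random.seed(42)

# Toy multimodal objective; the function and the constants below are
# illustrative assumptions, not part of any cited formulation.
def f(x):
    return x * x + 10.0 * math.sin(x)

x = 5.0                          # current solution
best_x, best_f = x, f(x)
temperature = 1.0

for step in range(2000):
    # Randomized neighbourhood move: perturb the current solution.
    candidate = x + random.gauss(0.0, 0.5)
    delta = f(candidate) - f(x)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the temperature cools (the stochastic-search part).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    temperature *= 0.995         # geometric cooling schedule

print(best_x, best_f)
```

The acceptance of occasional uphill moves is what lets such a search escape local minima; in constrained portfolio settings, each candidate move must additionally be repaired or rejected to respect the constraints, which is the difficulty noted above.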
(Crama and M., 2003) provides a simulated annealing based approach to complex portfolio selection problems through the systematic inclusion of constraints. (Ardia et al., 2010) and (Krink and Paterlini, 2011) provide differential-evolution-based stochastic-search heuristic methods for the multi-objective portfolio optimization problem with realistic constraints.

Computational methods inspired by biological mechanisms of evolution, including reproduction, mutation, recombination and selection, are referred to as evolutionary computation. They use iterative progression amongst a population of individuals, select individuals for guided random search and process them in parallel. Evolutionary algorithms are population-based metaheuristic optimization methods, and (Hochreiter, 2008) performs evolutionary stochastic portfolio optimization using a genetic algorithm as the metaheuristic.
(Branke et al., 2009) discusses portfolio optimization with an envelope-based multi-objective evolutionary algorithm under a variety of non-convex constraints. (Roudier, 2007) designs a multi-factor objective function reflecting investment preferences and solves the portfolio optimization problem using a genetic algorithm. (Chan et al., 2002) applies genetic algorithms in a multi-stage portfolio optimization system, and (Lin et al., 2005) solves the problem with the same method but considers transaction costs and minimum transaction lot constraints.

Computational algorithms based on the collective behaviour of decentralized, self-organized systems are referred to as swarm intelligence based algorithms. (Doerner et al., 2001) considers the use of metaheuristics for performing ant colony optimization on the multi-objective portfolio selection problem, and (Thong, 2007) presents constrained Markowitz portfolio selection using ant colony optimization. (Mishra et al., 2009) considers a multi-objective particle swarm optimization approach to the portfolio optimization problem, and (Chen et al., 2006) considers particle swarm optimization for constrained portfolio selection problems. (Karaboga and Akay, 2009a) and (Karaboga and Akay, 2009b) provide a comprehensive survey of algorithms that simulate bee swarm intelligence and their applications. (Vassiliadis and Dounias, 2008) applies the artificial bee colony optimization algorithm to the constrained portfolio optimization problem. An experimental study of a hybrid nature-inspired algorithm combining ant colony optimization and the firefly algorithm is proposed in (Giannakouris et al., 2010) for financial portfolio optimization. Various other swarm intelligence based techniques include cuckoo search, stochastic diffusion search, artificial immune systems, charged system search, and many more.

Machine learning based methods, which refer to statistical learning from data, are widely applicable in computational finance. (Still and Kondor, 2010) provides a mechanism for regularizing portfolio optimization that uses the L2 norm of the weight vector as a regularizer, acting as a pressure towards portfolio diversification, and relates it to support vector regression. (Diagne, 2002) discusses financial risk management and portfolio optimization using artificial neural networks and extreme value theory.
(Zimmermann et al., 2001) performs active portfolio management based on error-correction neural networks, where the problem is modeled as a feed-forward neural network and the underlying expected return forecasts are based on error-correction neural networks. (Steiner and Wittkemper, 1997) considers portfolio optimization with a neural network implementation of the coherent market hypothesis. And (Zhang and Chau, 2008) presents portfolio optimization with predictive distributions using dynamic graphical models, including vector autoregressive models and Kalman filters, and proposes Sparse-Gaussian Kalman filters that regularize the inverse covariance of the state and observation noise to reduce the variance of model estimation and the risk associated with the resulting portfolios.
The theoretical advances and computational techniques that appear in the literature on financial portfolio optimization reflect the ongoing and progressive work in this field. The plethora of information available attests to the relevance of the asset allocation and portfolio optimization problem in financial studies. This survey cannot claim to be fully exhaustive, but it aims to have covered the broader spectrum of the available literature. Considering the challenging nature of market dynamics and growing complexities, this work aims to contribute to the pool of knowledge by looking at the problem from a computational perspective.
1.3 Problem Statement
Mathematical modeling of the portfolio optimization problem attempts to consider various parameters of the problem, addressing return measures and, more recently, risk measures as well. Considering the significance of the challenges faced in portfolio management, theoreticians and practitioners constantly revive and revise various methods of measuring the returns, risks and performance of a portfolio. Based on the literature survey discussed in the previous section, and considering the real-world complexities of working under financial limitations and policy considerations, models of portfolio optimization are worked out with a number of constraints and with varying objectives.
In practice, portfolio managers of financial institutions are constantly faced with the challenge of adapting previously formulated theoretical models to the dynamics of current scenarios. Each portfolio manager has a different outlook on the portfolio that he/she is responsible for managing, and, given the nature of empirical data, the need to relax a few assumptions of those models has gained greater importance. Some portfolio managers may allocate their entire wealth equally across all assets, whereas some might strictly adhere to the principles formulated by Markowitz and work towards minimizing the variance of the portfolio.
A portfolio manager working with a small number of assets might believe in diversifying wealth across all assets by maximizing the diversification ratio of the portfolio, whereas another portfolio manager who believes in reasoning and decision making under uncertainty might consider the uncertainties involved in estimating the parameters of the optimization problem. Some managers might even revisit the entire formulation by considering risk allocations across the portfolio, rather than allocations to assets, thereby ensuring that the contributions to risk by all assets are equalized. Further, each portfolio manager may also be interested in evaluating his/her portfolio's performance on a continual basis and may relate that performance to a relevant market-driven benchmark.
Studying the plethora of diverse mathematical formulations in theory, driven by a breadth of objective functions, a variety of realistic constraints and a varied set of parameters as described in the previous section, and considering the computational approaches that have been applied to the problem along with the strategic incorporation of challenges faced by portfolio managers in practice, the financial institution and/or individual investor is faced with the fundamental question of reaching an optimal weight structure for investments that aligns with the investor's global objective.
This work focuses on the problem by combining optimal solutions obtained from a variety of sources, including formulations, solvers, strategies and perspectives, to further align the investor's wealth allocation towards an optimal structure that facilitates informed decision making under the circumstances and challenges of financial portfolio optimization. Chapter 2 explains a few mathematical formulations of the problem, and Chapter 3 discusses the proposed solution approach using computational guidance from a set of optimal solutions. Chapter 4 presents empirical validation of the proposed work via computational experiments and discusses the results. Finally, Chapter 5 concludes the findings of this work and discusses the future course of work that may be taken up to strengthen the proposed solution.
2.1 Markowitz Modern Portfolio Theory (MMPT)
The fundamental breakthrough in managing financial investments was provided by Harry Markowitz, who constructed a portfolio optimization formulation within the mean-variance analysis framework. In statistical terms, investments are described by their expected long-term return rate and their expected short-term volatility. This formulation supports the concept of diversifying the entire investment across all assets. The theory focuses on minimizing the variance of the portfolio for a specified expected portfolio return, under the assumptions briefly mentioned in Section 1.2. In general, the mean-variance trade-off dictates that for the increased levels of risk an investor wishes to take, the investor expects to obtain increased returns; that is, a direct relationship between risk levels and expected returns.
Markowitz Formulation

    \min_{w} \; \sum_{i=1}^{n} \sum_{j=1}^{n} w_i \, w_j \, \sigma_i \sigma_j \rho_{ij}        objective function  (2.1)

    subject to    \sum_{i=1}^{n} \mu_i w_i = R                  return constraint
                  \sum_{i=1}^{n} w_i = 1                        budget constraint
                  w_i \ge 0, \quad i = 1, \ldots, n             long-only constraint

    where \rho_{ij} is the correlation coefficient between the returns of assets i and j, \sigma_i the standard deviation of the return of asset i, \mu_i its expected return, w_i its weight in the portfolio, and R the target portfolio return.
Markowitz Formulation (Linear Algebra Form)

    \min_{w} \; w^{\top} \Sigma \, w                            objective function  (2.2)

    subject to    \mu^{\top} w = R                              return constraint
                  \mathbf{1}^{\top} w = 1                       budget constraint
                  w \ge 0                                       long-only constraint

    where \Sigma = [\sigma_i \sigma_j \rho_{ij}] is the covariance matrix of asset returns, \mu the vector of expected returns, and \mathbf{1} the vector of ones.
In order to avoid the slow or infeasible convergence of the above optimization formulation, this work relaxes the return constraint from a hard constraint to a soft constraint.
Markowitz Formulation (Soft Return Constraints)

    \min_{w} \; w^{\top} \Sigma \, w                            objective function  (2.3)

    subject to    \mu^{\top} w \ge R                            return constraint (relaxed)
                  \mathbf{1}^{\top} w = 1                       budget constraint
                  w \ge 0                                       long-only constraint
The asset returns considered for the above formulations are absolute return measures paired with absolute risk measures: the formulation uses the arithmetic returns of the assets as the return measure and the standard deviation as the risk measure.
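A small numerical instance of formulation (2.2) can be solved directly from its first-order (KKT) conditions. The three-asset expected returns, covariance matrix and target return below are illustrative assumptions; for tractability the sketch keeps only the two equality constraints, although for this particular data the resulting weights happen to satisfy the long-only bound as well:

```python
import numpy as np

# Illustrative (assumed) inputs: expected returns and covariance for 3 assets.
mu = np.array([0.12, 0.10, 0.15])
Sigma = np.array([[0.0400, 0.0060, 0.0090],
                  [0.0060, 0.0250, 0.0045],
                  [0.0090, 0.0045, 0.0625]])
R = 0.11                       # target portfolio return
ones = np.ones(3)

# KKT system for  min w'Σw  s.t.  μ'w = R,  1'w = 1:
#   2Σw - γμ - λ1 = 0,   μ'w = R,   1'w = 1
KKT = np.zeros((5, 5))
KKT[:3, :3] = 2.0 * Sigma
KKT[:3, 3] = -mu
KKT[:3, 4] = -ones
KKT[3, :3] = mu
KKT[4, :3] = ones
rhs = np.array([0.0, 0.0, 0.0, R, 1.0])

w = np.linalg.solve(KKT, rhs)[:3]  # optimal weights (Lagrange multipliers dropped)
print(w, w @ mu, w @ Sigma @ w)
```

With inequality or bound constraints added, the same problem would instead be handed to a quadratic programming solver rather than a plain linear solve.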
Additionally, one could formulate the problem with other classes of parameters for estimating return measures that address critical aspects of the financial markets significant to large institutional investors and public/private banks. These may include the Average Profit and Loss, i.e. Average PnL (as an absolute value, as a percentage, per industrial sector, or per a pre-specified time interval), and the Compounded Annual Growth Rate (CAGR). Other critically relevant measures may include relative return measures (incl. the upward/downward movement capture ratio, or the upward/downward movement number/percentage ratio), absolute risk-adjusted return measures (incl. the Sharpe ratio, Calmar ratio, Sterling ratio or Sortino ratio), or relative risk-adjusted return measures (incl. annualized alpha, Jensen's alpha, the Treynor ratio, or the Information ratio).
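As a concrete illustration of two of the measures just named, the following sketch computes CAGR and the Sharpe ratio from a series of portfolio values. The yearly values and the risk-free rate are illustrative assumptions:

```python
import numpy as np

# Illustrative (assumed) year-end portfolio values and annual risk-free rate.
values = np.array([100.0, 112.0, 121.0, 140.0, 155.0])   # 4 years of growth
risk_free = 0.05

# Compounded Annual Growth Rate over n periods:
# (final / initial)^(1/n) - 1.
n_years = len(values) - 1
cagr = (values[-1] / values[0]) ** (1.0 / n_years) - 1.0

# Sharpe ratio: mean excess return over the standard deviation of excess returns.
returns = values[1:] / values[:-1] - 1.0
excess = returns - risk_free
sharpe = excess.mean() / excess.std(ddof=1)

print(round(cagr, 4), round(sharpe, 4))
```

In practice, such measures replace or supplement the arithmetic-return parameter of the formulations above without changing the structure of the optimization problem itself.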
In practice, economic policies and financial trades impose a variety of constraints on the traditional formulation. The relevance of the budget constraint and the return constraint is natural to the problem. The further imposition of constraints such as holding constraints, risk-factor (risk-fraction) constraints, trading (transaction size) constraints, cardinality constraints, round lot constraints, volatility constraints, closing position constraints, turnover (purchase/sale) constraints, tracking error constraints, etc. leads to multiple formulations of the portfolio optimization problem. For a detailed overview of the potential parameters and feasible constraints usually taken into consideration, refer to Appendices A and B.
2.2 Robust Formulation addressing Parameter Uncertainty
The statistical measures estimating the returns, the risk and the covariance matrix of asset returns, which are parameters of the mathematical model, are subject to uncertainty: the expected measure may differ from the actual one. Practical reasons for such uncertainty include minor fluctuations in the financial data, the lack of accurate information at very small time points, and the existence of uncertain and often unknown factors. In practice, sensitivity analysis addresses these issues to some extent but tends to be well suited only after an optimal solution has been obtained. Handling parameter uncertainty during the computation of optimality is significant from a mathematical perspective as well, because imperfect information in the model tends to threaten the relevance of the solution.
To address parameter uncertainty, methods in the computational statistics literature suggest formulating the problem as a stochastic process to inherently handle the stochastic behaviour of asset returns, or dealing with the stochastic uncertainty (due to randomness in the model) over multiple stages of the solution space search. The former is regarded as stochastic programming, whereas the latter is referred to as dynamic programming. However, obtaining the probability distributions of the uncertainties in the model poses practical difficulties for the stochastic programming approach, and the search for an optimal solution in the very large search space arising from the scenario generation methods of dynamic programming highlights the practical difficulties of computational expense, greater costs, and smaller returns on optimality.
An alternative method, attributed to (Soyster, 1973), models every uncertain parameter by its worst-case value within a set of feasible parameters. This method is inspired by the robust control engineering literature and is hence referred to as Robust Optimization. Such a methodology addresses the problem but tends to be too conservative. To work around this over-conservatism, the uncertain parameters are restricted to belong to an uncertainty set. Research in the 1990s by (Ben-Tal and Nemirovski, 1998), (Ben-Tal and Nemirovski, 1999), (Ben-Tal and Nemirovski, 2000), (El Ghaoui and Lebret, 1997) and (El Ghaoui et al., 1998) suggests the use of ellipsoidal uncertainty sets, whereas, more recently, (Bertsimas and Sim, 2003), (Bertsimas et al., 2004) and (Bertsimas and Sim, 2004) have proposed a robust optimization approach based on polyhedral uncertainty sets that preserves the class of the problem under analysis: a linear programming problem stays a linear programming problem, whereas with an ellipsoidal set the linear programming problem transforms into a second-order cone problem.
Robust Optimization

max_{w,z,p}   Σ_i μ_i w_i − Γz − Σ_i p_i    (2.4)
subject to    Σ_i w_i = 1                    (budget constraint)
              z + p_i ≥ δ_i w_i, i = 1,…,N   (uncertainty constraint)
              w_i ≥ 0                        (long-only constraint)
              z ≥ 0, p_i ≥ 0                 (non-negativity constraint)

where μ_i is the nominal expected return of asset i, δ_i the maximum deviation of the uncertain return of asset i, and Γ the budget of uncertainty.
Robust Optimization (Simplified)

max_w   Σ_i μ_i w_i − λ √(w'Σw)    (2.5)
subject to   Σ_i w_i = 1            (budget constraint)
             w_i ≥ 0                (long-only and non-negativity constraints)

where the uncertainty constraint is absorbed into the penalty term, with λ ≥ 0 determined by the size of the uncertainty set.
It can be seen that the above robust formulation of portfolio optimization resembles Markowitz's formulation, except that the portfolio risk (standard deviation) is penalized instead of the variance. Note, however, that this robust optimization technique for performing mathematical optimization under uncertainty considers a deterministic uncertainty model. Theoretically, different risk measures on the expected portfolio return can be defined for different types of uncertainty sets of the uncertain returns, so there is a natural link between the robust optimization methodology and mean-risk portfolio optimization.
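As an illustration, the simplified robust objective, the expected return penalized by λ times the portfolio standard deviation, can be evaluated as follows. All numerical values are invented for illustration; λ reflects the degree of conservatism.

```python
import numpy as np

def robust_objective(w, mu, Sigma, lam):
    """Expected portfolio return penalized by lam times the portfolio
    standard deviation (not the variance, as in Markowitz)."""
    return float(mu @ w - lam * np.sqrt(w @ Sigma @ w))

mu = np.array([0.08, 0.12, 0.10])        # nominal expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])   # covariance matrix of returns
w = np.array([0.4, 0.3, 0.3])            # a feasible long-only allocation
score = robust_objective(w, mu, Sigma, lam=1.0)
```

Larger values of λ penalize risk more heavily, reproducing the trade-off between conservatism and nominal performance discussed above.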
2.3 Risk-based Asset Allocation Strategies
Asset allocation strategies are broadly classified as strategic, tactical and dynamic. Strategic asset allocation methods aim to create an asset mix that provides an optimal balance between expected return and risk, and are often preferred for the long run. Tactical asset allocation takes a more active approach, positioning the portfolio in those assets, sectors or individual stocks that show the most potential for gains; it is preferred for short-term profits and is better suited to high-performance trading scenarios. Dynamic asset allocation focuses on maintaining a constant mix of assets as markets rise or fall and as the economy strengthens or weakens, taking short positions in declining assets and long positions in rising ones. Other asset allocation strategies include insured, integrated, etc.
In recent years, the global financial crisis has led to numerous formulations, with a growing body of literature on the effective management and optimization of financial portfolios via strategic asset allocation. (Lee, 2011) notes that many studies attribute better performance to strategic risk-based asset allocation techniques compared with conventional methods of superior diversification. However, as discussed in Section 1.3, given the absence of a well-defined objective function for these strategic approaches, as well as of metrics to evaluate portfolio performance, (Lee, 2011) places the problem in the traditional context of mean-variance efficiency in an attempt to understand its theoretical underpinnings. Some of these strategic risk-based asset allocation strategies are described briefly below.
Equally-Weighted (EW) Portfolio, a.k.a. 1/N Portfolio:
This strategy diversifies by allocating equal weights to all assets in the portfolio. If the number of assets is large, each asset receives a small weight; hence the strategy is the least concentrated in terms of weights. Such a portfolio management strategy ignores the characteristics of the assets, applies no constraint other than the budget constraint, and lacks an objective function.
w_i = 1/N,  i = 1,…,N    (equal weights)    (2.6)
Global Minimum Variance (GMV) Portfolio:
This strategy minimizes the variance of the portfolio without a return constraint. It is constructed by considering only the covariance matrix of the asset returns and imposing the budget constraint. It tends to allocate higher weights to low-volatility assets, which in turn makes it more sensitive to the parameter estimates of both the variances and the covariance matrix of returns. Because the portfolio variance is minimized without considering expected returns, the marginal risk contributions of all assets in the portfolio are identical and equal to the volatility of the portfolio, yet the resulting weights tend to be well concentrated.
min_w   Σ_i Σ_j w_i w_j σ_i σ_j ρ_ij    (2.7)
subject to   Σ_i w_i = 1                 (budget constraint)
             w_i ≥ 0                     (long-only constraint)
where ρ_ij is the correlation coefficient between the returns of assets i and j, and σ_i the volatility of asset i.
or, in linear algebra form, it can be written as:
min_w   w'Σw    (2.8)
subject to   1'w = 1    (budget constraint)
             w ≥ 0      (long-only constraint)
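For illustration, the GMV weights admit a well-known closed form when the long-only constraint is dropped; enforcing w ≥ 0 would require a quadratic programming solver instead. A minimal sketch under that assumption, with an invented covariance matrix:

```python
import numpy as np

def gmv_weights(Sigma):
    """Closed-form GMV weights w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    Note: this drops the long-only constraint; a QP solver is needed
    to enforce w >= 0 in general."""
    ones = np.ones(Sigma.shape[0])
    raw = np.linalg.solve(Sigma, ones)
    return raw / raw.sum()

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])   # illustrative covariance matrix
w = gmv_weights(Sigma)
```

At the optimum, Sigma @ w is a constant vector, which is exactly the statement that all assets have identical marginal risk contributions.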
Most Diversified Portfolio (MDP):
This strategy maximizes the overall diversification of wealth across all assets in the portfolio. In principle, maximum diversification can be achieved either by maximizing a diversification ratio, defined between two volatility measures of the same portfolio, or by minimizing the variance of the portfolio subject to a risk-weighted constraint that allows the risks involved in maximizing diversification to be budgeted. The diversification ratio approach is often referred to as the Maximum Sharpe Ratio (MSR) approach, because if the Sharpe ratio is the same for all assets, maximizing the diversification ratio is equivalent to maximizing the Sharpe ratio. Even though the ratio is maximized, the resulting portfolio is relatively concentrated with regard to both the weights and the risk contributions of the assets.
max_w   (Σ_i w_i σ_i) / √(Σ_i Σ_j w_i w_j σ_i σ_j ρ_ij)    (2.9)
subject to   Σ_i w_i = 1    (budget constraint)
             w_i ≥ 0        (long-only constraint)
where ρ_ij is the correlation coefficient between the returns of assets i and j.
or, in linear algebra form, it can be written as:
max_w   σ'w / √(w'Σw)    (2.10)
subject to   1'w = 1    (budget constraint)
             w ≥ 0      (long-only constraint)
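A small sketch of the diversification ratio itself (the quantity being maximized, not the maximization): with perfectly correlated assets the ratio equals one, and it rises as correlations fall, which is what the MDP exploits. The volatilities below are invented for illustration.

```python
import numpy as np

def diversification_ratio(w, sigma, Sigma):
    """DR = weighted-average asset volatility / portfolio volatility;
    the MDP maximizes this ratio over the feasible weights."""
    return float((w @ sigma) / np.sqrt(w @ Sigma @ w))

sigma = np.array([0.2, 0.3])   # asset volatilities (illustrative)
w = np.array([0.5, 0.5])
dr_perfect = diversification_ratio(w, sigma, np.outer(sigma, sigma))  # rho = 1
dr_uncorr = diversification_ratio(w, sigma, np.diag(sigma ** 2))      # rho = 0
```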
The risk-weighted approach is similar to the diversification ratio approach, except that the risk weights are modeled as a constraint and the budget constraint is relaxed. At optimality, the weights may add up to more than the entire wealth, so they are subsequently normalized to meet the financial budget constraint. It has been observed in practice that such risk-based indexation promises diversification and is independent of market capitalization.
min_w   Σ_i Σ_j w_i w_j σ_i σ_j ρ_ij    (2.11)
subject to   Σ_i w_i σ_i = 1             (risk-weight constraint)
             w_i ≥ 0                     (long-only constraint)
where ρ_ij is the correlation coefficient between the returns of assets i and j.
or, in linear algebra form, it can be written as:
min_w   w'Σw    (2.12)
subject to   σ'w = 1    (risk-weight constraint)
             w ≥ 0      (long-only constraint)
Equally-weighted Risk Contribution (ERC) Portfolio:
This asset allocation strategy minimizes the variance of the rescaled risk contributions. The risk contribution of an asset is the share of the total portfolio risk attributable to that asset; formally, it is the product of the asset's weight and its marginal contribution to risk, which is beta times the volatility of the underlying asset. Risk allocation and risk budgeting are the terms commonly used in practice for analysing a portfolio in terms of risk contributions, and when the risk contributions of all assets are equalized the strategy is also referred to as the Risk Parity approach. In principle, it constructs a highly diversified portfolio that is least sensitive to the covariance matrix of asset returns, with equal percentage risk contributions from all assets.
(Maillard et al., 2008) discuss the theoretical properties of the ERC portfolio construction strategy and present two mathematical formulations of the optimization problem: one based on the relative risk of each asset compared with the rest of the portfolio, and the other a portfolio variance minimization problem subject to a nonlinear inequality constraint that leads to sufficient diversification of weights. This work considers the first formulation. (Levell, 2010) and (Group, 2010) present the relevance of risk allocation in recent times of financial crisis.
min_w   Σ_i Σ_j (w_i (Σw)_i − w_j (Σw)_j)²    (2.13)
subject to   Σ_i w_i = 1    (budget constraint)
             w_i ≥ 0        (long-only constraint)
where the covariance matrix Σ is formed from the volatilities σ_i and the correlation coefficients ρ_ij, and the terms w_i (Σw)_i are the rescaled risk contributions being equalized.
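The risk contributions and the variance-of-contributions objective can be sketched as follows. The sanity check below uses the known property that, for uncorrelated assets, the ERC weights are proportional to the inverse volatilities; all numbers are illustrative.

```python
import numpy as np

def risk_contributions(w, Sigma):
    """Fraction of total portfolio risk attributable to each asset:
    RC_i = w_i (Sigma w)_i / (w' Sigma w); the fractions sum to 1."""
    return w * (Sigma @ w) / (w @ Sigma @ w)

def erc_objective(w, Sigma):
    """Sum of squared pairwise differences of the terms w_i (Sigma w)_i;
    it is zero exactly when all risk contributions are equal (risk parity)."""
    rc = w * (Sigma @ w)
    return float(np.sum((rc[:, None] - rc[None, :]) ** 2))

# for uncorrelated assets, ERC weights are proportional to 1/volatility
sigma = np.array([0.1, 0.2, 0.4])
Sigma = np.diag(sigma ** 2)
w = (1 / sigma) / (1 / sigma).sum()
```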
Given the practical relevance of risk-based asset allocation strategies, theoretical interest in these methods has been on the rise. Some approaches, such as (Lindberg, 2009), even model the returns on the assets as unobservable, independent Brownian motions with the same drift across all assets. These Brownian motions collectively represent the return characteristics, incorporating the volatilities and the correlations across assets, and equal weights are then allocated to the Brownian motions. In principle, theoretically driven, complex stochastic processes may serve as representatives of the return and risk measures of the assets. To move a little closer to reality, one might even identify the exact distribution of the asset returns, simulate a representative stochastic process for each asset on that basis, and then apply various tactical and strategic asset allocation mechanisms for the optimal allocation of weights.
Considering the brief exploration of various theoretical and practical problem formulations presented above, and keeping in perspective the problem statement described in Section 1.3, the next section focuses on addressing the financial portfolio optimization problem by combining various optimal solutions and driving a computationally guided approach towards an acceptable solution.
3.1 Conceptual Representation
Previous chapters have presented the challenges faced in the area of financial portfolio optimization. Theoreticians and practitioners have constantly worked around this problem, either by proposing various formulations with business and real-world constraints applicable to the portfolio of assets being managed, or by treating critical business factors as measures for parameters that help in better understanding the processes and realities of the financial scenarios.
Given the plethora of problem formulations and solutions for the portfolio optimization problem, identifying the exact formulation applicable to the empirical data, and then deciding to select any one or a combination of a few, is a fairly complex decision that the portfolio manager must make on a continual basis. Furthermore, when the number of constraints varies across formulations, identifying the set of constraints best suited to the nature of the empirical data and the behaviour of the financial assets makes the decision-making process even more challenging. Interestingly, it is often observed that when portfolio optimization formulations are not well aligned with the empirical data, human bias and expertise take control. Given such real-world challenges, ensuring that this human bias relates to the numerical solutions in a realistic way needs critical attention.
In light of the above challenges and complexities in the decision-making process of financial portfolio optimization, this work focuses on addressing the problem by combining a number of optimal solutions in a solution bank. It further investigates, analyses and optimizes the solution against a more realistic global objective, as directed by the individual investor or the financial institution, so that the portfolio management process is realized with computationally guided assistance towards informed decision making for financial investments.
This work suggests an agent-based framework comprising solver agents that contribute optimal solutions to a solution bank. Conceptually, these solver agents may represent various formulations of the problem, mathematically oriented deterministic solvers, information from published sources, human knowledge encoded in numerical form, a stochastic-search solver, or potentially any entity that can provide a numerical solution for the weights associated with the assets in the portfolio. In principle, the solution bank then comprises a variety of optimal solutions that meet various objective criteria under varying constraints, formulations and mechanisms for reaching optimality.
Despite the presence of various mathematical formulations, selecting a specific solution instance from the entire spectrum of solutions has remained a challenge in portfolio optimization. This work suggests formulating a global investment objective aligned with the goals of the institution, and then optimizing the allocation of weights with a superagent solver whose search for the optimal solution is guided in the direction indicated by the set of optimal solutions in the solution bank.
The term superagent solver is used loosely; the "super" refers to the enhanced capabilities of a search mechanism that uses the bank of optimal solutions. The superagent solver may be a stochastic-search solver or a deterministic solver with multiple starting points. In principle, as long as the search is guided by a set of optimal solutions, the superagent may be any of the solver agents, and may be interchanged with any solver agent as appropriate.
This chapter further discusses the classes of solver agents, the organization of the solution bank, and the computational methodology for reaching an optimal solution.
3.2 Classes of Solver Agents
Any entity that provides an optimal solution to any of the portfolio optimization formulations, by any solution strategy, or via knowledge from any internal or external source of information, is regarded as a potential agent providing a solution, and is hence referred to as a solver agent. Conceptually, based on the nature and formulation of the problem and on the mechanism used to obtain the solution, the solver agents are broadly classified into the following classes: human experience agents, domain knowledge agents, deterministic solver agents, and stochastic-search agents.
Human Experience Agents:
Solver agents that capture human experience of the financial markets, and those representing numerical solutions encoded via human knowledge and sentiment, are termed human experience agents. Portfolio optimization is, as is well known, a continual task that repeats over time, and this agent includes such periodic allocation information (from a previous point in time, say the previous month's or the previous week's allocation) as a potential solution in the solution bank; after all, making no changes while rebalancing the portfolio, or reverting to a previous portfolio allocation, may well be an optimal solution.
Human knowledge encoded from highly rated journals, trusted sources, syndicated services, or other published sources of information may, when quantified, also be considered a candidate optimal solution for the solution bank. Moreover, with the advent of the present-day World Wide Web and other electronic sources, human knowledge and experience are well represented in social media as user-generated content in the form of blogs, user groups, discussion forums, mailing lists, etc. Such information, quantified via Bayesian techniques or other statistical methods and compiled into numerical solutions, can likewise be regarded as a potential solution.
Domain Knowledge Agents:
Solver agents that represent knowledge from the financial domain through domain-specific mathematical formulations constitute the domain knowledge agents. These agents provide optimal solutions by focusing on varying investment objectives, and address business constraints from a more realistic point of view. In this work, the terms domain knowledge agents and domain expert agents are used interchangeably to denote the broad class of diverse mathematical formulations of the portfolio optimization problem found in the financial and statistical literature.
As discussed in the previous chapter, problem formulations that address uncertainty in parameter estimation via a robust investment objective, formulations that pursue strategic asset allocation via risk-based strategies with risk-aligned investment objectives, and other mathematically driven formulations of the problem may all be considered domain knowledge agents. The deterministic (or stochastic) solutions of all such agents drawn from the financial literature are optimal with respect to their own formulations, but viewed from the broader perspective of investment management, each provides significant insight into the constraints on which its formulation rests. The set of all such optimal solutions can be taken for inclusion in the bank of solutions.
Deterministic Solver Agents:
Solver agents that reach the solution in a deterministic manner, based on methods and algorithms developed with mathematical rigour and proofs, are broadly labelled deterministic solver agents. In principle, diverse formulations of the global investment objective may be cast as linear programming, quadratic programming, or more general nonlinear programming problems; with the inclusion of integer constraints they become mixed-integer nonlinear programming problems; they may also be solved via a sequential programming methodology, among many other deterministic methods of mathematical programming. In general, solutions obtained by such disciplined methods depend strongly on the initialization of the numerical algorithm, which may start from a single point or from multiple starting points.
Approaches spanning single and multiple starting points, together with algorithms providing numerical approximations to the deterministic methods, yield optimal solutions to the global investment objective formulation. Using multiple starting points is generally the better choice, but the number of such points is usually judged heuristically: a large number of starting points may span the entire search space but is computationally expensive, whereas a small number may not be adequate. In practice, a multi-start approach with a modest number of starting points, run across multiple iterations with a cyclicity check, is often found useful. The optimal solutions obtained by the single-start and multi-start variants of the deterministic methods are potential candidates for the solution bank, and are hence included.
Stochastic-search Agents:
Solver agents based on evolutionary computation techniques and swarm intelligence methods for optimization are categorized as stochastic-search agents. These techniques emphasize the use of random variables during the search for an optimal solution. Computational methods inspired by the biological mechanisms of evolution, including reproduction, mutation, recombination and selection, are considered potential solver agents. Such methods iterate over a population of individuals, selecting them for guided random search and processing them in parallel; they are referred to as population-based metaheuristic optimization methods.
Another class of algorithmic approaches, built on the collective behaviour of decentralized, self-organized systems, is referred to as swarm intelligence algorithms. Solutions obtained via numerical approximation from such nature-inspired algorithmic techniques are considered potential candidates for the solution bank, and are hence included.
As observed during the literature survey, a number of computational methods have been applied to the portfolio optimization problem, with the choice of algorithm typically driven by the formulation and the author's familiarity with the proposed methodology. The rationale for including solutions obtained via such computationally oriented stochastic-search methods is to gain deeper insight into the algorithm selection mechanism, especially when the problem formulations are complex and non-convex. The computationally guided mechanism of learning from a set of optimal solutions is intended to strengthen the reasoning behind the choice of a particular algorithm for a specific problem formulation.
3.3 Superagent Solver
The notion of a superagent solver is a loose conceptualization of any solver that guides the search mechanism by referring to the bank of optimal solutions. As mentioned earlier, it may be a mathematically oriented deterministic solver or a computationally oriented stochastic-search solver. This work does not pursue the deterministic option and focuses on the use of a stochastic-search mechanism as the superagent solver.
The field of metallurgy provides one of the fundamental motivations for an algorithmic implementation of a stochastic-search technique: the annealing process in metals. Annealing refers to heat treatment of a solid-state metal that changes its structural properties, followed by gradual cooling according to a specific schedule to obtain a desired crystallized form. The metal is heated to a very high temperature to ensure free, random movement of its atoms, and is then cooled to induce order among the atoms until thermal equilibrium is attained. Through such annealing, the atoms position themselves in the desired crystallized form, which occurs only when the minimum-energy configuration is reached; this corresponds to optimality of the solution.
The algorithmic analogue of this process is referred to as the Simulated Annealing methodology. It stochastically disturbs the current state of the solution to generate a neighbourhood state, whose acceptance as a potential solution is validated by an acceptance criterion function. This process of disturbing a state and accepting the result is simulated across a number of Monte Carlo iterations, and to keep the algorithm from getting stuck in a local optimum, solution acceptance is guided by a temperature parameter specified in the annealing schedule. The inclusion of this temperature value guides the algorithm towards the global optimum, which makes Simulated Annealing a suitable candidate for a superagent solver.
1. Start: assign an initial state via some heuristic
2. Evaluate the objective function for the initial state
3. Initialize:
   a. Max. temperature value to a very high temperature
   b. Max. no. of Monte Carlo iterations to a large number
4. While {current temperature value > frozen-state temperature} do
   a. While {no. of trials = 1 to Max. no. of Monte Carlo iterations} do
      i. Generate a neighbourhood state with stochastic perturbation
      ii. Evaluate the objective function for the neighbourhood state
      iii. Apply the Metropolis rule to accept/reject the generated neighbour
      iv. Increment the no. of trials iterator
   b. End of loop for Monte Carlo iterations/trials
   c. Decrease the temperature according to the geometric annealing schedule
5. End of loop for temperature changes (or annealing schedule)
6. End of SA algorithm
A detailed discussion of the Simulated Annealing (SA) process can be found in (Kirkpatrick et al., 1983). In essence, it is defined by the selection of the initial state, the neighbourhood generation mechanism, the acceptance criterion function, the annealing schedule and the stopping criterion. For the superagent, this work uses a random initialization of the initial state, the Metropolis rule of the Metropolis-Hastings algorithm as the function that accepts generated solutions, a geometric cooling schedule as the decreasing function for the annealing schedule, and a high initial temperature together with a maximum number of Monte Carlo iterations as the termination condition of the SA algorithm.
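The steps above (Metropolis acceptance, geometric cooling, a fixed number of Monte Carlo trials per temperature) can be sketched in Python as follows. All parameter values and the test function are illustrative assumptions, not the tuned settings used in this work.

```python
import numpy as np

def simulated_annealing(f, x0, t_max=10.0, t_min=1e-3, alpha=0.9,
                        n_trials=100, step=0.1, rng=None):
    """Minimal SA sketch: Metropolis acceptance, geometric cooling
    (t <- alpha * t), fixed Monte Carlo trials per temperature."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    t = t_max
    while t > t_min:                  # outer loop over the annealing schedule
        for _ in range(n_trials):     # inner Monte Carlo loop
            y = x + rng.uniform(-step, step, size=x.size)   # neighbour state
            fy = f(y)
            # Metropolis rule: always accept improvements; accept a worse
            # state with probability exp(-(fy - fx) / t)
            if fy < fx or rng.random() < np.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
        t *= alpha                    # geometric annealing schedule
    return best_x, best_f

# sanity check on a convex quadratic with minimum 0 at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_best, f_best = simulated_annealing(f, [0.0, 0.0], rng=0)
```

The neighbourhood generation shown here is the conventional uniform perturbation; the solution-bank-guided variant proposed below replaces that single line.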
Conventionally, a neighbourhood state is generated by perturbing the current state with a uniform random number, after which the objective function is evaluated. This work proposes a different methodology for generating the neighbourhood state. The use of a solution bank tends to increase confidence in the solution, as the search process is guided by the bank of optimal solutions, and it reduces the randomness otherwise prevalent in the search mechanism.
The approach, as shown in Figure 3.3, computes a statistical distance measure between each solution instance in the solution bank and the current state, and selects the solution instance at minimum distance from the current state; the Euclidean distance measure is used in this work. The perturbation of the current state is then based on the direction of further search, as given by the orientation of the current state towards this nearest neighbour among all solutions in the bank.
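A minimal sketch of this nearest-neighbour selection, assuming the bank is stored as a matrix whose rows are weight vectors; the example solutions are invented for illustration.

```python
import numpy as np

def nearest_bank_solution(current, bank):
    """Select the bank solution at minimum Euclidean distance from the
    current state, and return it together with the search direction
    pointing from the current state towards that neighbour."""
    bank = np.asarray(bank, dtype=float)
    distances = np.linalg.norm(bank - current, axis=1)
    nearest = bank[np.argmin(distances)]
    return nearest, nearest - current

# solution-bank rows are weight vectors (values invented for illustration)
bank = [[0.5, 0.3, 0.2],      # e.g. a GMV-style solution
        [1/3, 1/3, 1/3],      # the equally-weighted solution
        [0.2, 0.5, 0.3]]      # e.g. a robust-formulation solution
current = np.array([0.4, 0.35, 0.25])
nearest, direction = nearest_bank_solution(current, bank)
```

The returned `direction` is what the perturbation step below disturbs, rather than a purely random vector.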
The manner in which the current state is disturbed while generating a neighbour is responsible for the stochastic nature of the algorithm and determines which perturbed state becomes the candidate neighbourhood state. This stochastic perturbation of the parameters of the objective function to obtain a feasible neighbour state is a critical aspect of the SA technique. In traditional SA, the stochastic perturbation is applied to all decision variables of the objective function, whereas this work further proposes a perturbation strategy based on a decision matrix, as shown in Figure 3.4, that controls the perturbation of selected or all decision variables. The following perturbation strategies are proposed to maintain an optimal balance between the stochastic behaviours of the various parameters:
Strategy I: Random Similar Perturbations:
In this strategy, a decision vector of random zeros and ones is generated, based on a specific threshold that governs the uniform random number generator filling the vector. The same decision vector is applied repeatedly to all parameters of the objective function. Applying similar perturbations to all parameters reflects uniformity across parameters and controls the nature of the randomness when disturbing the current state.
Strategy II: Random Varying Perturbations:
In this strategy, the decision vector is replaced by a decision matrix in which each parameter receives a perturbation that varies from the others; the zeros and ones are generated in the same manner as in the previous strategy. Applying varying perturbations to each parameter reflects diversity across parameter perturbations and allows free random movement away from the current state.
Strategy III: Perturb All:
In this strategy, no decision vector or matrix is used; perturbations are applied to all parameters. It is the simplest strategy of all and reflects direct perturbation based on the neighbourhood generation mechanism with the solution bank.
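The three strategies can be sketched as 0/1 masks over the decision variables. The reading below, in which Strategy I reuses one decision vector across Monte Carlo trials while Strategy II draws a fresh random row per trial, is one plausible interpretation of the text, and the `threshold` parameter is an assumption.

```python
import numpy as np

def perturbation_mask(n_trials, n_vars, strategy, threshold=0.5, rng=None):
    """0/1 mask controlling which decision variables get perturbed in each
    Monte Carlo trial (one plausible reading of the three strategies):
    I   - one random decision vector, reused for every trial
    II  - a full decision matrix, a fresh random row per trial
    III - no masking: perturb every variable in every trial."""
    rng = np.random.default_rng(rng)
    if strategy == "I":
        row = (rng.uniform(size=n_vars) < threshold).astype(int)
        return np.tile(row, (n_trials, 1))
    if strategy == "II":
        return (rng.uniform(size=(n_trials, n_vars)) < threshold).astype(int)
    return np.ones((n_trials, n_vars), dtype=int)   # Strategy III

mask = perturbation_mask(5, 4, "I", rng=0)
# a masked neighbourhood step would be: w_new = w + mask[trial] * step_vector
```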
In principle, the construction of the decision matrix can be made much more rigorous, allowing for careful stochastic perturbation. Possible alternative mechanisms include: (i) considering the distribution properties of the underlying assets; (ii) generating the zeros and ones from a multivariate distribution; (iii) suitably representing the cardinality or holding constraints of the problem, if any; (iv) including the sectoral or risk-fraction constraints, if any, that govern the fractions of risk associated with the assets; or (v) including heuristic information by thresholding quantified information derived from the analysis of social media inputs for all assets in the portfolio.
This chapter has presented a computationally guided approach comprising a variety of solver agents that contribute their optimal solutions to a solution bank. It proposed the construction of a problem formulation with a global investment objective, discussed an enhanced Simulated Annealing algorithm as a superagent solver representing a stochastic-search mechanism guided by the set of optimal solutions, and proposed the use of a decision matrix for the selective perturbation of the parameters of the objective function. The next chapter presents the detailed structure of the computational experiment and discusses the nature of the empirical data, followed by an empirical validation of the proposed solution framework.
4.1 Experimental Setup
The computationally guided agents approach proposed in the previous chapter addresses many of the fundamental problems and concerns discussed in Chapters 1 and 2. The proposed framework has been validated across various computational experiments, discussed further in this chapter. The experimental setup fundamentally comprises a number of solver agents, a conceptual solution bank, a global investment objective formulation, and a superagent solver that is computationally guided by the solution bank during the stochastic search. This section provides detailed information on each of these components of the solution framework; the next section discusses the nature of the empirical data through preliminary exploratory analysis and problem scenarios.
Global Investment Objective:
The traditional Markowitz formulation with soft constraints on the returns, along with the budget constraint mentioned in Section 2.1, is considered as the global objective. Since this work uses a stochastic-search algorithm as the superagent, the Markowitz formulation is modified: the return and budget constraints are handled via the penalty functions approach, and various values of the penalty regularization parameters were experimented with.
Markowitz Formulation with Penalty Functions

min_w   Σ_i Σ_j w_i w_j σ_i σ_j ρ_ij + c₁ [min(0, Σ_i r_i w_i − r_p)]² + c₂ (Σ_i w_i − 1)²    (4.1)
subject to   w_i ≥ 0    (long-only constraint)
where ρ_ij is the correlation coefficient, r_p the target expected portfolio return, and c₁, c₂ the penalty regularization parameters.
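A minimal sketch of such a penalized objective, assuming quadratic penalties on the return shortfall and budget violation; the exact penalty form and the weights used in the thesis's Equation 4.1 may differ, and all numbers below are illustrative.

```python
import numpy as np

def penalized_objective(w, mu, Sigma, r_target, c_ret=100.0, c_bud=100.0):
    """Markowitz variance objective with the soft return constraint and the
    budget constraint folded in as quadratic penalty terms; only the
    long-only bound w >= 0 remains as an explicit constraint."""
    variance = w @ Sigma @ w
    ret_shortfall = min(0.0, float(mu @ w) - r_target)  # penalize shortfall only
    bud_violation = float(w.sum()) - 1.0
    return float(variance + c_ret * ret_shortfall ** 2 + c_bud * bud_violation ** 2)

mu = np.array([0.10, 0.20])
Sigma = np.diag([0.04, 0.09])
w = np.array([0.5, 0.5])    # feasible: weights sum to 1, return >= target
val = penalized_objective(w, mu, Sigma, r_target=0.10)
```

For feasible weights the penalties vanish and the value reduces to the portfolio variance, which is what allows an unconstrained stochastic-search solver to optimize it directly.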
Solver Agents:
The broad classes of solver agents comprise human experience agents, domain knowledge agents, deterministic solver agents, and stochastic-search agents. Brief information about the characteristics of the solver agents is given in Appendix C, with details summarized below.
In the class of human experience agents, obtaining numerical solutions for the selected set of assets was not feasible, so this class is not considered in the experimental setup.
In the class of domain knowledge agents, the robust formulation of the problem addressing parameter uncertainty, together with the various problem formulations for the risk-based asset allocation strategies discussed in Chapter 2, were considered, and deterministic solutions were obtained for each. These agents are shown in purple (Solver Agents A3 to A8) in the extended model in Figure 4.1. For the robust formulation, under polyhedral uncertainty sets with a deterministic uncertainty model, (Bertsimas and Thiele, 2006) suggest the inclusion of an uncertainty constraint and an additional parameter representing the budget of uncertainty. The uncertainty constraint controls the risk associated with the acceptable tolerance for constraint violation, reflected by the budget-of-uncertainty parameter in the objective function, which refers to the deviation of such violations.
The budget-of-uncertainty parameter is also related to the performance guarantee of the solution, since it accounts for the constraint violations admitted in order to achieve a suitable guarantee for the optimal solution: a higher performance guarantee requires a larger budget of uncertainty, whereas a lower budget of uncertainty yields a weaker guarantee. Identifying a single value for this parameter was not feasible; hence, solutions with varying performance guarantees were considered as candidates for the solution bank, maintaining a suitable trade-off between the protection level of the constraints (by accounting for constraint violations) and the degree of conservatism of the solution (by varying the uncertainties in the parameter estimates). The variations in the budget-of-uncertainty parameter were taken in fixed increments and scaled across the nominal, conservative and robust strategies for parameter uncertainty.
In the class of deterministic solver agents, the global investment objective function was evaluated with both the single-start and multi-start approaches of the deterministic solver, thereby solving the traditional Markowitz formulation as a Quadratic Programming problem. These agents are shown in orange (Solver Agents A1 and A2) in the extended model in Figure 4.1.
The Markowitz formulation requires a target rate of return (the expected portfolio return) as input to the return constraint, and determining a single suitable value was not feasible. In this work, a number of expected portfolio returns between the minimum and maximum returns of the assets in the portfolio were interpolated. The deterministic solver agent was then run once for each interpolated expected portfolio return, in both the single-start and multi-start setups, so that portfolios spanning the range of asset returns could be considered as potential solutions for the bank.
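The interpolation of target returns can be sketched as below; the 15-point grid mirrors the MMPT_NUM_POINTS setting of agents A1 and A2 in Appendix C, and the asset returns shown are illustrative, not BSE data:

```python
# Sketch: interpolate target (expected) portfolio returns between the
# minimum and maximum single-asset returns; one Markowitz QP is then
# solved per target (solver agents A1 and A2, MMPT_NUM_POINTS = 15).

def target_return_grid(asset_returns, num_points=15):
    lo, hi = min(asset_returns), max(asset_returns)
    step = (hi - lo) / (num_points - 1)
    return [lo + i * step for i in range(num_points)]

# hypothetical daily mean returns of four assets
targets = target_return_grid([-0.002, 0.0005, 0.0011, 0.003])
print(len(targets))  # 15 target returns, hence 15 QP solves
```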
In the class of stochastic-search agents, the Ant Colony Optimization (ACO) algorithm, as adapted to continuous domains by Socha and Dorigo (2008) and Liao (2011), was considered. The ACO agent is shown in green (Solver Agent A9) in the extended model in Figure 4.1. In the stochastic setup, the global investment objective function of the Markowitz formulation was modified to incorporate the return and budget constraints using the penalty-function approach; the modified formulation is presented in Equation 4.1. Various values of the penalty regularization parameters were tried, and heuristic knowledge was also incorporated. Further, rather than selecting a single optimal solution, and considering the stochastic nature of the search process, the various optimal weight structures attaining the optimal objective value were all included in the solution bank. The parameters of the ACO algorithm were fine-tuned and a suitable combination is reported.
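Equation 4.1 is not reproduced here, but the penalty-function idea can be sketched as follows; the quadratic penalty form and the regularization parameters `rho_b` and `rho_r` are illustrative assumptions, not the thesis's exact formulation:

```python
# Sketch of a penalised Markowitz objective for the stochastic agents:
# minimise portfolio variance plus penalties for violating the budget
# and return constraints. Penalty shapes and weights are illustrative.

def penalised_objective(w, cov, mu, target_return, rho_b=1e3, rho_r=1e3):
    n = len(w)
    variance = sum(w[i] * cov[i][j] * w[j]
                   for i in range(n) for j in range(n))
    budget_violation = (sum(w) - 1.0) ** 2              # weights sum to 1
    portfolio_return = sum(w[i] * mu[i] for i in range(n))
    return_violation = max(0.0, target_return - portfolio_return) ** 2
    return variance + rho_b * budget_violation + rho_r * return_violation

# For a feasible portfolio both penalties vanish and only variance remains.
val = penalised_objective([0.5, 0.5], [[0.04, 0.0], [0.0, 0.09]],
                          [0.01, 0.02], target_return=0.01)
```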
As mentioned earlier, many other solver agents within each class could potentially contribute optimal solutions to the solution bank. The underlying principle is that solutions that are optimal in any form, whether via a problem formulation, a solution strategy, human expertise, or numerical accuracy, can all become part of the solution bank, so that the search for the global investment objective is guided in a well-informed manner.
Solution Bank:
Considering the optimal solutions contributed by each solver agent as described above and in Appendix C, the solution bank is a conceptual placeholder for the set of optimal solutions. Each solution comprises as many decision variables as there are assets in the portfolio; the bank is hence a matrix of optimal solutions, where each row corresponds to an optimal solution and each column represents a decision variable, i.e. the weight of one asset in the portfolio.
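As a sketch, the bank can be represented as below; the class and method names are illustrative:

```python
# Sketch: the solution bank as a matrix of optimal weight vectors.
# Each row is one agent's optimal solution; each column is the weight
# of one asset in the portfolio.

class SolutionBank:
    def __init__(self, n_assets):
        self.n_assets = n_assets
        self.rows = []                      # list of (agent_id, weights)

    def add(self, agent_id, weights):
        assert len(weights) == self.n_assets
        assert abs(sum(weights) - 1.0) < 1e-6   # budget constraint holds
        self.rows.append((agent_id, list(weights)))

    def matrix(self):
        return [w for _, w in self.rows]

bank = SolutionBank(3)                      # toy 3-asset example
bank.add("A1", [0.2, 0.3, 0.5])
bank.add("A3", [0.4, 0.4, 0.2])
```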
Super-Agent Solver:
As discussed in the previous chapter, the super-agent considered for the experimental setup is the stochastic-search methodology of the Simulated Annealing (SA) algorithm with the solution bank approach. The decision matrix as a perturbation strategy is not considered as part of this empirical validation; hence, all decision variables are perturbed.
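One plausible reading of the solution-bank neighbourhood move is sketched below; the jump probability, blending scheme and step size are illustrative assumptions rather than the exact mechanism described in the previous chapter:

```python
import random

# Sketch: neighbourhood generation for the SA super-agent. With some
# probability the candidate is blended towards a banked optimal
# solution; otherwise all decision variables receive a random
# perturbation (no decision matrix is used in this validation).

def neighbour(current, bank_rows, p_bank=0.3, step=0.05):
    if bank_rows and random.random() < p_bank:
        guide = random.choice(bank_rows)     # pick a banked solution
        alpha = random.random()
        cand = [(1 - alpha) * c + alpha * g
                for c, g in zip(current, guide)]
    else:
        cand = [c + random.uniform(-step, step) for c in current]
    cand = [max(0.0, c) for c in cand]       # long-only: no negatives
    total = sum(cand) or 1.0
    return [c / total for c in cand]         # restore budget constraint
```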
4.2 Nature of the Data
The equities constituting the BSE Sensex as of April 2012 were chosen for the validation of the proposed solution framework. The daily close prices of the constituent assets were collected from the Bombay Stock Exchange (BSE) website. In the preliminary analysis phase, the behaviour of the BSE Sensex was studied for the past 12 years. An increasing trend was observed in the daily close prices, except for the duration representing the financial crisis of 2008.
The downfall-to-recovery period from 2008 to 2010, characterized by the decline of the Sensex during the financial year 2008-2009 followed by an upward trend during its recovery in the financial year 2009-2010, was then examined. This V-shaped pattern of downward and upward market movement was selected for validation of the proposed framework as representative of the bearish and bullish market scenarios respectively.
In general, the BSE Sensex is composed of 30 equities; however, for the selected period, sufficient data was available for only 29 equities. One equity, Coal India Ltd., was listed on the stock exchange only in the latter half of 2010, and hence daily close prices for it were not available.
For both market scenarios, arithmetic and log returns were computed on the daily and weekly close prices. Despite lacking the symmetry property, the arithmetic returns on daily close prices were considered realistic for the portfolio optimization problem and were hence adopted as the return measure. Likewise, the standard deviations of the daily and weekly arithmetic returns were computed, and the standard deviation of the daily arithmetic returns was adopted as the risk measure for the Markowitz formulation of the global investment objective function.
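The return and risk measures used here can be sketched as follows; the price series is illustrative, not BSE data:

```python
import math

# Sketch: arithmetic and log returns on a close-price series, and the
# standard deviation of daily arithmetic returns used as the risk
# measure in the Markowitz formulation.

def arithmetic_returns(prices):
    return [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]

def log_returns(prices):
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def std_dev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

prices = [100.0, 102.0, 99.0, 101.0]        # toy daily close prices
risk = std_dev(arithmetic_returns(prices))  # daily risk measure
```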
4.3 Scenario 1: Bearish Market
The BSE Sensex data corresponding to the financial year from 1st April 2008 to 31st March 2009 presents a declining trend, representing the bearish market driven by the financial crisis of 2008. As discussed in the previous section, the daily close prices of the 29 equities were considered. The return and risk measures, and the covariance matrix of asset returns, were estimated as inputs to all the solver agents.
Exploratory analysis of the risk-return measures, by means of a risk-return plot, shows that most of the assets in the portfolio provide a negative return. This behaviour is expected in a bearish market and is thus empirically confirmed. The correlation of returns across all assets was studied using a heat map visualization; most pairs of assets showed very high positive correlation. This is consistent with bearish market behaviour: when markets start falling, investors prefer to sell their equities as early as possible, thereby minimizing losses from any further downfall.
The expected behaviour of the empirical data in the bearish market scenario was thus validated by the exploratory analysis. The empirical data was then provided as input to the solver agents. The solver agents were run in series, one after the other and independently of each other, and their optimal solutions were compiled into the solution bank. The Simulated Annealing (SA) super-agent solver was then executed to perform stochastic optimization under computational guidance from the solution bank, so that the search direction was guided by the banked optimal solutions. It was observed that the search process was most frequently inclined towards the optimal solution obtained from the domain knowledge agent for the robust formulation representing parameter uncertainty with a budget of uncertainty of 0.5. Upon convergence, SA settled on the same solution, reinforcing the belief that this local optimum is the global optimum.
The SA algorithm further attempted to improve upon this banked optimal solution towards a finer optimum, i.e. a better risk-return combination than the one provided by the robust formulation. The weight structure suggests that SA allocated more weight to those industrial sectors that performed better during the bearish market scenario, namely FMCG, Health care, and Transport equipment. Certain industrial sectors are often considered the backbone of the country's fundamental infrastructure, and by increasing the allocation to those sectors, SA further emphasizes their importance, particularly during the financial crisis. It was also observed that, under the Markowitz notion of diversification, SA diversifies suitably across industrial sectors rather than across all assets in the portfolio, thereby producing a concentrated portfolio.
4.4 Scenario 2: Bullish Market
The BSE Sensex data corresponding to the financial year from 1st April 2009 to 31st March 2010 presents an upward trend, representing the bullish market driven by the recovery from the financial crisis of 2008. As discussed earlier, the daily close prices of the 29 equities were considered. The return and risk measures, and the covariance matrix of asset returns, were estimated as inputs to all the solver agents.
Exploratory analysis of the risk-return measures, by means of a risk-return plot, shows that most of the assets in the portfolio provide a positive return. This behaviour is expected in a bullish market and is thus empirically confirmed. The correlation of returns across all assets was studied using a heat map visualization; most pairs of assets showed a lower positive correlation than in the bearish market data. This is consistent with bullish market behaviour: when markets start rising, investors prefer to take long positions in equities but remain cautious and sceptical about any further rise.
The expected behaviour of the empirical data in the bullish market scenario was thus validated by the exploratory analysis. The empirical data was then provided as input to the solver agents. The solver agents were run in series, one after the other and independently of each other, and their optimal solutions were compiled into a second solution bank, independent of the one compiled for the bearish scenario. The Simulated Annealing (SA) super-agent solver was then executed to perform stochastic optimization under computational guidance from the solution bank, so that the search direction was guided by the banked optimal solutions. It was observed that the search process was most frequently inclined towards the optimal solution obtained from the domain knowledge agent for the robust formulation representing parameter uncertainty with a budget of uncertainty of 1. Upon convergence, however, SA settled on the robust formulation with a budget of uncertainty of 0.5. This behaviour reinforces the belief that the SA methodology does not get stuck in a local optimum but reaches out to the global optimum in the solution search space.
The SA algorithm further attempted to improve upon this banked optimal solution towards a finer optimum, i.e. a better risk-return combination than the one provided by the robust formulation. The weight structure suggests that SA allocated more weight to those industrial sectors that typically drive the economy in a bullish market scenario, namely Capital Goods, Finance, Metal, Metal products and Mining, and Transport equipment. By increasing the allocation to these sectors, SA further emphasizes their importance, particularly for the recovery from the financial crisis. Interestingly, it was again observed that, under the Markowitz notion of diversification, SA diversified across industrial sectors rather than across all assets, thereby preferring a concentrated portfolio over a well-diversified portfolio of all assets.
In this chapter, the proposed solution framework was validated across the bearish and bullish market scenarios. In both cases, the Simulated Annealing methodology, adapted with the solution bank approach for neighbourhood generation, was adequately validated: it reached an optimal solution under suitable guidance from a bank of optimal solutions while keeping intact the principles of stochastic-search optimization. The next chapter presents the conclusions drawn from this work and discusses future work and enhancements.
Appendix A Return and Risk Measures
This appendix highlights a non-exhaustive list of the absolute and relative return and risk measures that appear as parameters of the portfolio optimization problem.
A.1 Return Measures
A.1.1 Absolute Return Measures
Arithmetic Returns:
It is the return on an asset computed as the difference between the future price and the current price, divided by the current price.
Log Returns:
It is the return on an asset computed as the logarithm of the ratio of the future price to the current price.
Average Monthly Gain:
It is the arithmetic mean of the returns over the gain periods, calculated by summing the returns for the gain periods and dividing the total by the number of gain periods.
Average Monthly Loss:
It is the arithmetic mean of the returns over the loss periods, calculated by summing the returns for the loss periods and dividing the total by the number of loss periods.
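The two averages above can be sketched as follows, on an illustrative return series:

```python
# Sketch: average gain over gain periods and average loss over loss
# periods, per the definitions above.

def average_gain(returns):
    gains = [r for r in returns if r > 0]
    return sum(gains) / len(gains) if gains else 0.0

def average_loss(returns):
    losses = [r for r in returns if r < 0]
    return sum(losses) / len(losses) if losses else 0.0

monthly = [0.02, -0.01, 0.04, -0.03]   # toy monthly returns
gain, loss = average_gain(monthly), average_loss(monthly)
```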
A.1.2 Relative Return Measures
Up/Down Capture Ratio:
The Up Capture Ratio is measured as an asset’s compound return divided by the benchmark’s compound return over the periods when the benchmark was up; a higher value of this ratio is preferred. The Down Capture Ratio is measured as an asset’s compound return divided by the benchmark’s compound return over the periods when the benchmark was down; a smaller value of this ratio is preferred.
Up/Down Number Ratio:
The Up Number Ratio is measured as the number of periods in which an asset’s return increased when the benchmark increased, divided by the number of periods that the benchmark increased; a larger value of this ratio is preferred. The Down Number Ratio is measured as the number of periods in which an asset was down when the benchmark was down, divided by the number of periods that the benchmark was down; a smaller value of this ratio is preferred.
Up/Down Percentage Ratio:
Also referred to as the Proficiency Ratio. The Up Percentage Ratio is measured as the number of periods in which an asset outperformed the benchmark when the benchmark increased, divided by the number of periods that the benchmark increased. The Down Percentage Ratio is measured as the number of periods in which an asset outperformed the benchmark when the benchmark was down, divided by the number of periods that the benchmark was down. A larger value is preferred for both the Up and Down Percentage ratios.
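The capture ratios can be sketched as follows; the return series are illustrative:

```python
# Sketch: Up/Down Capture Ratio, i.e. the asset's compound return over
# the periods when the benchmark was up (or down) divided by the
# benchmark's compound return over those same periods.

def compound(returns):
    prod = 1.0
    for r in returns:
        prod *= (1.0 + r)
    return prod - 1.0

def capture_ratio(asset, benchmark, up=True):
    picked = [(a, b) for a, b in zip(asset, benchmark) if (b > 0) == up]
    return compound([a for a, _ in picked]) / compound([b for _, b in picked])

asset     = [0.03, -0.01, 0.02, -0.04]   # toy period returns
benchmark = [0.02, -0.02, 0.01, -0.03]
up_capture   = capture_ratio(asset, benchmark, up=True)
down_capture = capture_ratio(asset, benchmark, up=False)
```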
A.1.3 Absolute Risk-Adjusted Return Measures
Sharpe Ratio:
It is the measure of an asset’s return relative to its risk. The return is defined as the asset’s incremental average return over the risk-free rate. The risk is defined as the standard deviation of the asset’s returns.
Calmar Ratio:
It is a return-to-risk ratio. The return is defined as the compound annualized return over the previous years, and the risk is defined as the maximum drawdown (in absolute terms) over the previous years.
Sterling Ratio:
It is a return-to-risk ratio. The return is defined as the compound annualized return over the previous years, and the risk is defined as the average yearly maximum drawdown over the previous years, less an arbitrary 10%.
Sortino Ratio:
It is the return to risk ratio. The return is defined as the incremental compound average period return over a minimum acceptable return (MAR), and the risk is defined as the downside deviation below the MAR. It is a modification of the Sharpe ratio but penalizes only those returns that fall below a specified target, or required rate of return, while the Sharpe ratio penalizes both upside and downside volatility equally.
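As a sketch of the Sharpe and Sortino ratios defined above, with an illustrative risk-free rate and MAR of zero:

```python
# Sketch: Sharpe ratio (excess mean over total standard deviation) and
# Sortino ratio (excess mean over downside deviation below the MAR).

def sharpe_ratio(returns, risk_free=0.0):
    excess = [r - risk_free for r in returns]
    m = sum(excess) / len(excess)
    sd = (sum((x - m) ** 2 for x in excess) / len(excess)) ** 0.5
    return m / sd

def sortino_ratio(returns, mar=0.0):
    m = sum(returns) / len(returns) - mar
    downside = [min(0.0, r - mar) for r in returns]   # only sub-MAR returns
    dd = (sum(d * d for d in downside) / len(returns)) ** 0.5
    return m / dd
```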
A.1.4 Relative Risk-Adjusted Return Measures
Annualized Alpha:
It measures an asset’s value added relative to any benchmark. It is the intercept of the regression line.
Jensen Alpha:
It measures the extent to which an asset has added value relative to a benchmark. It is equal to an asset’s average return in excess of the risk-free rate of return, minus beta times the benchmark’s average return in excess of the risk-free rate.
Treynor Ratio:
It is similar to the Sharpe ratio, but it uses beta as the volatility measure rather than standard deviation. The return is defined as the incremental average return of an asset over the risk-free rate of return. The risk is defined as an asset’s beta relative to a benchmark. A larger value is preferred.
Information Ratio:
It measures an asset’s active premium divided by the asset’s tracking error. It relates the degree to which an asset beats the benchmark to the consistency by which the asset has beaten the benchmark.
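Beta and Jensen's alpha can both be read off the same regression, sketched here with ordinary least squares on illustrative return series:

```python
# Sketch: beta (slope) and Jensen's alpha (risk-adjusted intercept)
# of the regression of asset returns on benchmark returns.

def beta_alpha(asset, benchmark, risk_free=0.0):
    n = len(asset)
    ma, mb = sum(asset) / n, sum(benchmark) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(asset, benchmark)) / n
    var = sum((b - mb) ** 2 for b in benchmark) / n
    beta = cov / var
    alpha = (ma - risk_free) - beta * (mb - risk_free)  # Jensen's alpha
    return beta, alpha

# An asset moving exactly twice the benchmark has beta 2 and alpha 0.
b, a = beta_alpha([0.02, -0.04, 0.06], [0.01, -0.02, 0.03])
```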
A.2 Risk Measures
A.2.1 Absolute Risk Measures
Standard Deviation over weekly, monthly and yearly asset returns, Standard Deviation for gain and loss periods, the Gain-Loss Ratio (which measures an asset’s average gain in a gain period divided by the asset’s average loss in a loss period), Skewness, Kurtosis, and other higher moments are considered Absolute Risk Measures.
A.2.2 Relative Risk Measures
Beta:
It represents the slope of the regression line. It measures an asset’s risk relative to the market as a whole (where, the ’market’ is any index or investment). Beta describes the sensitivity of the asset to broad market movements.
A.2.3 Tail Risk Measures
Value at Risk (VaR):
It is the maximum loss that can be expected within a specified holding period and a specified confidence level. It assumes that the assets are normally distributed. In order to relax the normality assumption, the Modified Value at Risk is calculated in the same manner as VaR. The modified VaR ’corrects’ the VaR using the calculated skewness and kurtosis of the distribution of returns.
Expected Tail Loss (ETL):
Also referred to as the Conditional Value at Risk (CVaR). It is the average expected loss beyond VaR and is used for risk budgeting. It can be interpreted as the expected shortfall with VaR as the benchmark. It is a downside risk measure that can recognize diversification opportunities; being subadditive, it can be used to aggregate and decompose risk at various portfolio levels.
STARR Ratio:
Short for Stable Tail Adjusted Return Ratio. It is an alternative to the Sharpe Ratio for evaluating risk-adjusted performance, but STARR addresses the drawback of standard deviation as a risk measure, namely that it penalizes upside as well as downside potential. It employs the Expected Tail Loss of the asset returns for the performance adjustment.
Rachev Ratio:
It is a reward-to-risk measure, defined as the ratio of the expected tail loss of the negative of the excess return at one confidence level to the expected tail loss of the excess return at another confidence level; equivalently, it is the expected tail return (ETR) divided by the expected tail loss (ETL). Here, ETL is the average of the returns beyond the VaR threshold in the left tail, and ETR is the average of, say, the 5% of returns in the right tail at a 95% confidence level.
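As a sketch, historical (empirical) VaR and ETL can be computed as below; this sidesteps the normality assumption mentioned above, and the confidence level and return series are illustrative:

```python
# Sketch: historical Value at Risk and Expected Tail Loss. Losses are
# the negated returns; VaR is the loss at the chosen quantile and ETL
# is the average loss beyond it (so ETL >= VaR always holds).

def var_etl(returns, confidence=0.95):
    losses = sorted(-r for r in returns)       # losses, ascending
    k = min(int(confidence * len(losses)), len(losses) - 1)
    var = losses[k]
    tail = losses[k:]
    etl = sum(tail) / len(tail)
    return var, etl

daily = [-0.05, 0.01, 0.02, -0.01, 0.03,
         0.0, -0.02, 0.04, 0.01, -0.03]        # toy daily returns
var, etl = var_etl(daily, confidence=0.9)
```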
Appendix B Business & Technical Constraints
This appendix highlights a non-exhaustive list of business constraints that are crucial for financial portfolio optimization. In practice, many more constraints are applicable.
Budget Constraint:
It relates the combinations of assets that can be held in the portfolio to the amount of money available for investment. It enforces that the allocated weights always sum to 100% of the total investment.
Return Constraint:
It represents the expected return from the portfolio. It enforces that the return expected from the portfolio equals the expected return from a risk-free asset. Meeting this criterion with deterministic solvers may require a large number of iterations; to overcome convergence issues, it is often transformed into a soft constraint.
Long-only Constraint:
It allows only long positions while rebalancing the portfolio, ruling out short positions; i.e. assets shall not be short-sold while managing the portfolio. Mathematically, it disallows negative weights.
Turnover Constraint:
Also referred to as the Purchase and/or Sale Constraint. It represents the amount of money (including purchases and sales of assets) that can be traded. It imposes upper bounds on the variation of the asset holdings from one period to the next.
Holding Constraint:
Also referred to as the Cardinality Constraint. It limits the number of assets that can be held in the portfolio. It allows for reduced portfolio reweighting costs and efficient monitoring by dealing with a limited number of assets.
Trading Constraint:
It limits the number of assets in the portfolio that can be traded while managing the portfolio. It is usually enforced to reduce the burden of transaction costs, trading costs, and brokerages.
Risk Factor Constraint:
Also referred to as the Risk Fraction Constraint. It represents the fraction of portfolio risk that can be attributed to each asset. It is useful for creating risk parity portfolios and for efficient management of risk allocations in the portfolio.
Tracking Error Constraint:
Also referred to as the Benchmark Exposure Constraint. It represents the amount of deviation a portfolio may have from a standardized benchmark. It is often imposed by portfolio managers to compare the performance of their portfolio against the benchmark.
Round Lot Constraint:
Financial trading is mostly done in integer holdings of assets. When asset prices are quoted for lots, portfolio balancing must be done accordingly; positions with non-integer lot values are excluded under this constraint.
Volatility Constraint:
It represents the expected volatility of the portfolio. It enforces that the portfolio volatility lies within given bounds, thereby constraining the riskiness of the portfolio. In practice, multiple portfolio volatility checks are enforced.
Distance Constraint:
It limits the distance between the portfolio and some target portfolio, i.e. the amount of money required to trade from one portfolio to the other. In principle, portfolio distance can be computed in many ways; the sum of absolute differences in weights is the usual practice.
Closing Position Constraint:
A position is considered closed if an asset is in an existing portfolio but not in the revised portfolio. This constraint controls the number of positions (incl. long and short) that are closed.
Cost Constraint:
If transaction costs are applicable, then bounds on such costs are represented by the cost constraint.
Largest Weight Constraint:
It limits the sum of a fixed number of the largest weights, irrespective of the assets that have them. It is often confused with the constraint that controls the bounds on the asset holdings.
Sectoral Constraint:
It represents the control on the amount of investment in a particular industrial sector. Broadly, all constraints that can be enforced on individual assets can be applied to industrial sectors.
Forced Trade Constraint:
It forces trades of certain selected assets at a specified size. It differs from a threshold constraint, which limits only the trade size.
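Several of the constraints above can be sketched as a feasibility check on a weight vector; the cardinality limit and sector cap are illustrative thresholds:

```python
# Sketch: checking a candidate portfolio against the budget, long-only,
# holding (cardinality) and sectoral constraints described above.

def feasible(w, sectors, max_holdings=10, sector_cap=0.35, tol=1e-6):
    if abs(sum(w) - 1.0) > tol:                       # budget constraint
        return False
    if any(x < 0.0 for x in w):                       # long-only constraint
        return False
    if sum(1 for x in w if x > tol) > max_holdings:   # holding constraint
        return False
    exposure = {}
    for x, s in zip(w, sectors):                      # sectoral constraint
        exposure[s] = exposure.get(s, 0.0) + x
    return all(v <= sector_cap + tol for v in exposure.values())

# A 50% FMCG exposure breaches the illustrative 35% sector cap.
ok = feasible([0.3, 0.3, 0.2, 0.2], ["FMCG", "Finance", "Metal", "FMCG"])
```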
Appendix C Characteristics of Solver Agents
Solver Agent ID  A1 

Solver Agent Type  Deterministic Solver Agent 
Solver Agent Name  MARKOWITZ_SINGLE_START 
Solver Orientation  Deterministic Optimization 
Solver Method  QP Solver (Quadratic Programming) 
Solver Details  Infeasible Primal-Dual Predictor-Corrector Interior Point Algorithm
Solver References  Chapter 8: The Quadratic Programming Solver from SAS/OR 9.3 User’s Guide: Mathematical Programming 
No. of Solutions  15 
Dependent Parameters  MMPT_NUM_POINTS 
Remarks  The number of expected portfolio return values interpolated between the minimum and maximum return on any asset in the portfolio was set to 15 using the above parameter.
Solver Agent ID  A2 

Solver Agent Type  Deterministic Solver Agent 
Solver Agent Name  MARKOWITZ_MULTI_START 
Solver Orientation  Deterministic Optimization 
Solver Method  NLP Solver with MS (Nonlinear Programming Solver with Multi-start)
Solver Details  Infeasible Primal-Dual Interior Point Algorithm
Solver References  Chapter 7: Nonlinear Programming Solver from SAS/OR 9.3 User’s Guide: Mathematical Programming 
No. of Solutions  15 
Dependent Parameters  MMPT_NUM_POINTS 
Remarks  The number of expected portfolio return values interpolated between the minimum and maximum return on any asset in the portfolio was set to 15 using the above parameter.
Solver Agent ID  A3 

Solver Agent Type  Domain Knowledge Agent / Domain Expert Agent 
Solver Agent Name  ROBUST_DETERMINISTIC 
Solver Orientation  Deterministic Optimization 
Solver Method  OPTLP Solver (Linear Programming) 
Solver Details  Dual Simplex Solver (Two-Phase Simplex Algorithm)
Solver References  Chapter 10: The OPTLP Procedure from SAS/OR 9.3 User’s Guide: Mathematical Programming 
No. of Solutions  59 (0 to 29 with an increment of 0.5) 
Dependent Parameters  ROBUST_GAMMA_INCREMENT 
Remarks  The budget of uncertainty parameter is scaled from 0 to the number of assets constituting the portfolio, in fixed increments of 0.5.
Solver Agent ID  A4 

Solver Agent Type  Domain Knowledge Agent / Domain Expert Agent 
Solver Agent Name  RBAA_EW (Risk-based Asset Allocation: Equally Weighted Portfolio) 
Solver Orientation  None 
Solver Method  None 
Solver Details  None 
Solver References  None 
No. of Solutions  1 
Dependent Parameters  Number of Equities 
Remarks  Equal weights are assigned based on the number of equities. No numerical optimization is involved. 
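Since no optimization is involved, the strategy amounts to a one-liner; a minimal sketch:

```python
def equal_weights(num_equities):
    """RBAA_EW: assign weight 1/N to each of the N equities; no
    numerical optimization is involved."""
    return [1.0 / num_equities] * num_equities

w = equal_weights(4)   # [0.25, 0.25, 0.25, 0.25]
```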
Solver Agent ID  A5 

Solver Agent Type  Domain Knowledge Agent / Domain Expert Agent 
Solver Agent Name  RBAA_GMV (Risk-based Asset Allocation: Global Minimum Variance) 
Solver Orientation  Deterministic Optimization 
Solver Method  QP Solver (Quadratic Programming) 
Solver Details  Infeasible Primal-Dual Predictor-Corrector Interior Point Algorithm 
Solver References  Chapter 8: The Quadratic Programming Solver from SAS/OR 9.3 User’s Guide: Mathematical Programming 
No. of Solutions  1 
Dependent Parameters  None 
Remarks  None 
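For intuition, the unconstrained global minimum variance portfolio has the closed form w = Sigma^{-1} 1 / (1' Sigma^{-1} 1); the QP solver is needed once long-only and other business constraints are added. A minimal sketch of the unconstrained case:

```python
import numpy as np

def gmv_weights(cov):
    """Unconstrained global minimum variance weights:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)   # x = Sigma^{-1} 1
    return x / x.sum()

# two assets with variances 0.04 and 0.09, covariance 0.01
cov = np.array([[0.04, 0.01], [0.01, 0.09]])
w = gmv_weights(cov)   # analytically, w = [8/11, 3/11]
```

Note that GMV ignores expected returns entirely, which is precisely what makes it a "risk-based" strategy.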
Solver Agent ID  A6 

Solver Agent Type  Domain Knowledge Agent / Domain Expert Agent 
Solver Agent Name  RBAA_MDP_DR (Risk-based Asset Allocation: Most-Diversified Portfolio using Diversification Ratio) 
Solver Orientation  Deterministic Optimization 
Solver Method  NLP Solver (Nonlinear Programming) 
Solver Details  Infeasible Primal-Dual Interior Point Algorithm 
Solver References  Chapter 7: Nonlinear Programming Solver from SAS/OR 9.3 User’s Guide: Mathematical Programming 
No. of Solutions  1 
Dependent Parameters  None 
Remarks  None 
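The objective this agent maximizes, the diversification ratio DR(w) = (w' sigma) / sqrt(w' Sigma w), can be evaluated as follows (illustrative Python; the function name is ours; for the uncorrelated two-asset example with inverse-volatility weights the ratio works out to sqrt(2)):

```python
import numpy as np

def diversification_ratio(w, cov):
    """DR(w): weighted average of asset volatilities divided by the
    portfolio volatility; RBAA_MDP_DR maximizes this ratio."""
    sigma = np.sqrt(np.diag(cov))
    return (w @ sigma) / np.sqrt(w @ cov @ w)

# two uncorrelated assets; inverse-volatility weights maximize DR here
cov = np.array([[0.04, 0.0], [0.0, 0.09]])
w = np.array([0.6, 0.4])
dr = diversification_ratio(w, cov)   # sqrt(2) for this example
```

DR equals 1 for a single-asset portfolio and grows as imperfectly correlated risks are combined, so maximizing it pushes the portfolio toward the most diversified mix.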
Solver Agent ID  A7 

Solver Agent Type  Domain Knowledge Agent / Domain Expert Agent 
Solver Agent Name  RBAA_MDP_RW (Risk-based Asset Allocation: Most-Diversified Portfolio using Risk-Weighted constraint) 
Solver Orientation  Deterministic Optimization 
Solver Method  NLP Solver (Nonlinear Programming) 
Solver Details  Infeasible Primal-Dual Interior Point Algorithm 
Solver References  Chapter 7: Nonlinear Programming Solver from SAS/OR 9.3 User’s Guide: Mathematical Programming 
No. of Solutions  1 
Dependent Parameters  None 
Remarks  None 
Solver Agent ID  A8 

Solver Agent Type  Domain Knowledge Agent / Domain Expert Agent 
Solver Agent Name  RBAA_ERC (Risk-based Asset Allocation: Equally-Weighted Risk Contribution Strategy) 
Solver Orientation  Deterministic Optimization 
Solver Method  SQP Solver (Sequential Quadratic Programming) 
Solver Details  An iterative procedure that uses an augmented Lagrangian penalty function as the merit function: each iteration solves a QP subproblem to obtain a search direction and then applies a line-search algorithm along that direction. 
Solver References  Chapter 13: The Sequential Quadratic Programming Solver from SAS/OR 9.2 User’s Guide: Mathematical Programming 
No. of Solutions  1 
Dependent Parameters  None 
Remarks  None 
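The ERC condition can be checked by computing percentage risk contributions RC_i = w_i (Sigma w)_i / (w' Sigma w); equality of the RC_i is what the SQP solver enforces. A sketch (function name is ours), using the known fact that with uncorrelated assets ERC reduces to inverse-volatility weighting:

```python
import numpy as np

def risk_contributions(w, cov):
    """Percentage risk contributions RC_i = w_i (Sigma w)_i / (w' Sigma w);
    the ERC strategy seeks weights that make all RC_i equal (1/N each)."""
    marginal = cov @ w
    return w * marginal / (w @ marginal)

# with uncorrelated assets, ERC reduces to inverse-volatility weighting
cov = np.diag([0.04, 0.09])
inv_vol = 1.0 / np.sqrt(np.diag(cov))
w = inv_vol / inv_vol.sum()          # [0.6, 0.4]
rc = risk_contributions(w, cov)      # both contributions equal 0.5
```

With correlated assets no closed form exists, which is why this agent needs a numerical solver (cf. Maillard et al., 2008).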
Solver Agent ID  A9 

Solver Agent Type  Stochastic-search Agent 
Solver Agent Name  MARKOWITZ_PENALTY 
Solver Orientation  Stochastic Optimization 
Solver Method  ACOR Algorithm 
Solver Details  Ant Colony Optimization (ACO) adapted to Continuous Domains (R) 
Solver References  (Socha and Dorigo, 2008) and (Liao, 2011) 
No. of Solutions  10 
Dependent Parameters  BEST_ITERATION_ACO, NUM_SOLUTIONS_ACO 
Remarks  BEST_ITERATION_ACO identifies the best solution index across ACO iterations, and NUM_SOLUTIONS_ACO sets the number of best solutions contributed to the solution bank. 
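How the banked solutions might be selected can be sketched as follows (hypothetical logic; the parameter name NUM_SOLUTIONS_ACO is taken from the table, the helper function is ours):

```python
def select_for_bank(solutions, num_solutions_aco=10):
    """Keep the NUM_SOLUTIONS_ACO best solutions found across ACO
    iterations; each entry is (objective_value, weights), lower is better."""
    return sorted(solutions, key=lambda s: s[0])[:num_solutions_aco]

# toy history of (objective, solution-label) pairs across iterations
history = [(0.8, "w3"), (0.2, "w1"), (0.5, "w2")]
bank = select_for_bank(history, num_solutions_aco=2)   # keeps w1 and w2
```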
Bibliography
 Antia [1991] H. M. Antia. Numerical Methods for Scientists and Engineers. Tata McGraw-Hill, 2nd edition, 1991.
 Ardia et al. [2010] David Ardia, Kris Boudt, Peter Carl, Katharine M. Mullen, and Brian Peterson. Differential evolution (DEoptim) for non-convex portfolio optimization. MPRA Paper 22135, University Library of Munich, Germany, April 2010.
 Balbás [2007] Alejandro Balbás. Mathematical methods in modern risk measurement: A survey. Rev. R. Acad. Cien. Serie A. Mat. RACSAM, 101(2):205–219, 2007.
 Ben-Tal and Nemirovski [1998] A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769–805, 1998.
 Ben-Tal and Nemirovski [1999] A. Ben-Tal and A. Nemirovski. Robust solutions to uncertain programs. Operations Research Letters, 25, 1999.
 Ben-Tal and Nemirovski [2000] A. Ben-Tal and A. Nemirovski. Robust solutions of linear programming problems contaminated with uncertain data. Mathematical Programming, 88, 2000.
 Bertsimas and Sim [2003] D. Bertsimas and M. Sim. Robust discrete optimization and network flows. Mathematical Programming, 98:49–71, 2003.
 Bertsimas and Sim [2004] D. Bertsimas and M. Sim. The price of robustness. Operations Research, 52(1):35–53, 2004.
 Bertsimas and Thiele [2006] D. Bertsimas and A. Thiele. Robust and Data-Driven Optimization: Modern Decision-Making Under Uncertainty, volume Tutorials in Operations Research, chapter 4, pages 122–195. INFORMS, March 2006.
 Bertsimas et al. [2004] D. Bertsimas, D. Pachamanova, and M. Sim. Robust linear optimization under general norms. Operations Research Letters, 32(6):510–516, 2004.
 Black and Litterman [1992] Fischer Black and Robert Litterman. Global portfolio optimization. Financial Analysts Journal, 48(5):28–43, September–October 1992.
 Bolshakova et al. [2009] I. Bolshakova, E. Girlich, and M. Kovalev. Portfolio Optimization Problems: A Survey. Preprint: Faculty of Mathematics, Otto-von-Guericke University Magdeburg, June 2009.
 Boyd and Vandenberghe [2009] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 7th edition, 2009.
 Branke et al. [2009] J. Branke, B. Scheckenbach, M. Stein, K. Deb, and H. Schmeck. Portfolio optimization with an envelopebased multiobjective evolutionary algorithm. European Journal of Operational Research, 199(3):684–693, 2009.
 Chan et al. [2002] M. C. Chan, C. C. Wong, B. K. S. Cheung, and G. Y. N. Tang. Genetic algorithms in multistage portfolio optimization system. In Proceedings of the Eighth International Conference of the Society for Computational Economics, Computing in Economics and Finance, 2002.
 Chen et al. [2006] Wei Chen, RunTong Zhang, YongMing Cai, and FaSheng Xu. Particle swarm optimization for constrained portfolio selection problems. In International Conference on Machine Learning and Cybernetics, pages 2425–2429, August 2006.
 Christodoulakis [2002] George A. Christodoulakis. Bayesian optimal portfolio selection: the Black-Litterman approach. Notes for Quantitative Asset Pricing, MSc. Mathematical Trading and Finance, November 2002.
 Conlon et al. [2007] T. Conlon, H. J. Ruskin, and M. Crane. Random matrix theory and fund of funds portfolio optimisation. Physica A: Statistical Mechanics and its Applications, 382(2):565–576, 2007.
 Crama and Schyns [2003] Y. Crama and M. Schyns. Simulated annealing for complex portfolio selection problems. European Journal of Operational Research, 150(3):546–571, 2003.
 Daly et al. [2008] J. Daly, M. Crane, and H. J. Ruskin. Random matrix theory filters in portfolio optimisation: A stability and risk assessment. Physica A: Statistical Mechanics and its Applications, 387(16):4248–4260, 2008.
 Deniz [2009] Erhan Deniz. Multiperiod Scenario Generation to Support Portfolio Optimization. PhD thesis, Graduate School–New Brunswick, Rutgers, The State University of New Jersey, October 2009.
 Diagne [2002] Mabouba Diagne. Financial Risk Management and Portfolio Optimization Using Artificial Neural Networks and Extreme Value Theory. PhD thesis, Mathematics Department/Financial Mathematics, University of Kaiserslautern, October 2002.
 Doerner et al. [2001] Karl Doerner, Walter J. Gutjahr, Richard F. Hartl, and Christine Strauss. Ant colony optimization in multiobjective portfolio selection. In 4th Metaheuristics International Conference, pages 243–248, 2001.
 El Ghaoui and Lebret [1997] L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data. SIAM Journal on Matrix Analysis and Applications, 18:1035–1064, 1997.
 El Ghaoui et al. [1998] L. El Ghaoui, F. Oustry, and H. Lebret. Robust solutions to uncertain semidefinite programs. SIAM Journal on Optimization, 9:33–52, 1998.
 Fernando [2000] K. V. Fernando. Practical portfolio optimization. Technical report, The Numerical Algorithms Group, Technical Report (TR2/00): NP3484, 2000.
 Fernholz [2002] Robert E. Fernholz. Stochastic Portfolio Theory, volume 48. Springer Verlag, 2002.
 Figueroa-López [2005] José Enrique Figueroa-López. A selected survey of portfolio optimization problems. 2005.
 Geyer et al. [2009] Alois Geyer, Michael Hanke, and Alex Weissensteiner. A stochastic programming approach for multiperiod portfolio optimization. Computational Management Science, 6(2):187–208, May 2009.
 Giannakouris et al. [2010] Giorgos Giannakouris, Vassilios Vassiliadis, and George Dounias. Experimental study on a hybrid natureinspired algorithm for financial portfolio optimization. In Stasinos Konstantopoulos, Stavros Perantonis, Vangelis Karkaletsis, Constantine Spyropoulos, and George Vouros, editors, Artificial Intelligence: Theories, Models and Applications, volume 6040 of Lecture Notes in Computer Science, pages 101–111. Springer Berlin / Heidelberg, 2010.
 Greyserman et al. [2006] Alex Greyserman, Douglas H. Jones, and William E. Strawderman. Portfolio selection using hierarchical bayesian analysis and mcmc methods. Journal of Banking and Finance, 30(2):669–678, 2006.
 Meketa Investment Group [2010] Meketa Investment Group. Risk parity. Technical report, April 2010.
 Guastaroba [2010] Gianfranco Guastaroba. Portfolio Optimization: Scenario Generation, Models and Algorithms. PhD thesis, Università degli Studi di Bergamo, January 2010.
 He and Litterman [1999] Guangliang He and Robert Litterman. The intuition behind Black-Litterman model portfolios. Technical report, Investment Management Research, Goldman Sachs Quantitative Resources Group, December 1999.
 Hester and James [1967] Donald D. Hester and Tobin James. Risk Aversion and Portfolio Choice. John Wiley and Sons, Inc., 1967.
 Hochreiter [2008] Ronald Hochreiter. Evolutionary stochastic portfolio optimization. Natural Computing in Computational Finance, 1:67–87, 2008.
 Hoffman et al. [2011] Matt Hoffman, Eric Brochu, and Nando de Freitas. Portfolio allocation for bayesian optimization. In Uncertainty in Artificial Intelligence, pages 327–336, 2011.
 Karaboga and Akay [2009a] Dervis Karaboga and Bahriye Akay. A survey: Algorithms simulating bee swarm intelligence. Artificial Intelligence Review, 31:61–85, 2009a.
 Karaboga and Akay [2009b] Dervis Karaboga and Bahriye Akay. A comparative study of artificial bee colony algorithm. Applied Mathematics and Computation, 214(1):108–132, 2009b. ISSN 00963003.
 Karatzas and Fernholz [2008] Ioannis Karatzas and Robert E. Fernholz. Stochastic portfolio theory: An overview. Handbook of Numerical Analysis, 15:89–167, January 2008.
 Kirkpatrick et al. [1983] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, May 1983. DOI:10.1126/science.220.4598.671.
 Krink and Paterlini [2011] Thiemo Krink and Sandra Paterlini. Multiobjective optimization using differential evolution for realworld portfolio optimization. Computational Management Science, 8(1):157–179, 2011.
 Lee [2011] Wai Lee. Risk-based asset allocation: A new answer to an old question? IIJ: The Journal of Portfolio Management, 37(4):11–28, Summer 2011.
 Levell [2010] Christopher A. Levell. Risk parity: In the spotlight after 50 years. Technical report, NEPC, March 2010.
 Liao [2011] Tianjun Liao. Improved ant colony optimization algorithms for continuous and mixed discretecontinuous optimization problems. Master’s thesis, IRIDIA  Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle, 2011.
 Lin et al. [2005] Dan Lin, Xiaoming Li, and Minqiang Li. A genetic algorithm for solving portfolio optimization problems with transaction costs and minimum transaction lots. In Lipo Wang, Ke Chen, and Yew Ong, editors, Advances in Natural Computation, volume 3612 of Lecture Notes in Computer Science, pages 441–441. Springer Berlin / Heidelberg, 2005.
 Lindberg [2009] Carl Lindberg. Portfolio optimization when expected stock returns are determined by exposure to risk. Bernoulli, 15(2):464–474, 2009. URL http://arxiv.org/abs/0906.2271.
 Maillard et al. [2008] Sébastien Maillard, Thierry Roncalli, and Jérôme Teiletche. On the properties of equally-weighted risk contributions portfolios. SSRN, pages 1–23, September 2008. http://dx.doi.org/10.2139/ssrn.1271972.
 Markowitz [1952] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, March 1952.
 Markowitz [2010] Harry M. Markowitz. Portfolio theory: As i still see it. Annu. Rev. Financ. Econ., 2(1):1–23, 2010.
 Marschinski et al. [2007] R. Marschinski, P. Rossi, M. Tavoni, and F. Cocco. Portfolio selection with probabilistic utility. Annals of Operations Research, 151(1):223–239, 2007.
 Michaud and Michaud [2008] Richard O. Michaud and Robert O. Michaud. Efficient Asset Management: A Practical Guide to Stock Portfolio Optimization and Asset Allocation. Oxford: Oxford University Press, 2008.
 Mishra et al. [2009] Sudhansu Kumar Mishra, Ganpati Panda, and Sukadev Meher. Multiobjective particle swarm optimization approach to portfolio optimization. In World Congress on Nature and Biologically Inspired Computing, pages 1612–1615, December 2009.
 Modigliani and Miller [1958] Franco Modigliani and Merton H. Miller. The cost of capital, corporation finance and the theory of investment. The American Economic Review, 48(3):261–297, June 1958.
 Moore [1972] P. G. Moore. Mathematical models in portfolio selection. Journal of the Institute of Actuaries, 98:103–148, 1972.
 von Neumann and Morgenstern [1953] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 3rd edition, 1953.
 Nishimura [1990] Koichi Nishimura. On mathematical models of portfolio selection problem. Asia University, Management Review, 26(1):369–391, 1990.
 Nocedal and Wright [1999] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer Series in Operations Research. Springer, 2nd edition, 1999.
 Ortobelli et al. [2005] Sergio Ortobelli, Svetlozar T. Rachev, Stoyan V. Stoyanov, Frank J. Fabozzi, and Almira Biglova. The proper use of risk measures in portfolio theory. International Journal of Theoretical and Applied Finance, 8(8):1–27, December 2005.
 Parpas and Rustem [2006] P. Parpas and B. Rustem. Global optimization of the scenario generation and portfolio selection problems. Computational Science and Its Applications – ICCSA 2006, pages 908–917, 2006.
 Press et al. [2007] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes. Cambridge University Press, 3rd edition, 2007.
 Ross [1976] Stephen A. Ross. The arbitrage theory of capital asset pricing. Journal of Economic Theory, 13(3):341–360, December 1976.
 Roudier [2007] F. Roudier. Portfolio optimization and genetic algorithms. Master’s thesis, Swiss Federal Institute of Technology (ETH) Zurich, May 2007.
 SCI [2009] IIF SCI. Risk models and statistical measures of risk. Technical report, Institute of International Finance, December 2009.
 Sereda et al. [2010] Ekaterina N. Sereda, Efim M. Bronshtein, Svetlozar T. Rachev, Frank J. Fabozzi, Wei Sun, and Stoyan V. Stoyanov. Handbook of Portfolio Construction, volume 3 of Business and Economics, chapter Distortion Risk Measures in Portfolio Optimization, pages 649–673. Springer US, December 2010.
 Sharifi et al. [2004] Saba Sharifi, Martin Crane, Atid Shamaie, and Heather J. Ruskin. Random matrix theory for portfolio optimization: A stability approach. Physica A: Statistical Mechanics and its Applications, 335(3–4):629–643, 2004.
 Sharpe [1964] William F. Sharpe. Capital asset prices: A theory of market equilibrium under conditions of risk. The Journal of Finance, 19(3):425–442, September 1964.
 Socha and Dorigo [2008] Krzysztof Socha and Marco Dorigo. Ant colony optimization for continuous domains. European Journal of Operational Research, 185, 2008.
 Soyster [1973] A. Soyster. Convex programming with setinclusive constraints and applications to inexact linear programming. Operations Research, 21, 1973.
 Steiner and Wittkemper [1997] Manfred Steiner and HansGeorg Wittkemper. Portfolio optimization with a neural network implementation of the coherent market hypothesis. European Journal of Operational Research, 100(1):27–40, July 1997.
 Still and Kondor [2010] Susanne Still and Imre Kondor. Regularizing portfolio optimization. New Journal of Physics, 12:075034, 2010.
 Thong [2007] Veliana Thong. Constrained Markowitz Portfolio Selection Using Ant Colony Optimization. Erasmus Universiteit, 2007.
 Troha [2011] Miha Troha. Portfolio Optimization as Stochastic Programming with Polynomial Decision Rules. PhD thesis, Department of Computing, Imperial College London, September 2011.
 Vassiliadis and Dounias [2008] Vassilios Vassiliadis and Georgios Dounias. Nature inspired intelligence for the constrained portfolio optimization problem. In John Darzentas, George Vouros, Spyros Vosinakis, and Argyris Arnellos, editors, Artificial Intelligence: Theories, Models and Applications, volume 5138 of Lecture Notes in Computer Science, pages 431–436. Springer Berlin / Heidelberg, 2008.
 Walters [2009] J. Walters. The Black-Litterman model in detail. Harvard Management Company, (2):16–67, 2009.
 Wikipedia [2012a] Wikipedia. Von Neumann–Morgenstern utility theorem — Wikipedia, The Free Encyclopedia, 2012a. URL http://en.wikipedia.org/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&oldid=491822593. [Online; accessed 13-June-2012].
 Wikipedia [2012b] Wikipedia. Modigliani–Miller theorem — Wikipedia, The Free Encyclopedia, 2012b. URL http://en.wikipedia.org/w/index.php?title=Modigliani%E2%80%93Miller_theorem&oldid=495308328. [Online; accessed 13-June-2012].
 Yu et al. [2003] LiYong Yu, XiaoDong Ji, and ShouYang Wang. Stochastic programming models in financial optimization: A survey. Advanced Modeling and Optimization, 5(1), 2003.
 Zhang and Chau [2008] Yi Zhang and Duen Horng Chau. Portfolio optimization with predictive distribution: Autoregressive, kalman filters and sparsegaussian kalman filters. December 2008.
 Zhou [2009] G. Zhou. Beyond blacklitterman: Letting the data speak. The Journal of Portfolio Management, 36(1):36–45, 2009.
 Zimmermann et al. [2001] Hans Georg Zimmermann, Ralph Neuneier, and Ralph Grothmann. Active portfolio-management based on error correction neural networks. In Advances in Neural Information Processing Systems (NIPS 2001), 2001.