
Minimization of Value at Risk (VaR) as an Investment Strategy

Anonymous

The main objective of this report is to implement an algorithm and develop software that complements the methodologies used today for making investment decisions, based on the minimization of Value at Risk (VaR) through linear programming, which becomes feasible when the Conditional Value at Risk (CVaR) is used as the risk measure to optimize.


For its development, the specific objectives proposed were the acquisition and handling of financial information relevant to decision making, including the analysis of returns, risks and correlations of the selected shares, as well as the study and implementation of a criterion for modeling stock prices.

With regard to price forecasting, techniques such as the Wiener process, better known as Brownian motion, Monte Carlo simulations and matrix procedures such as Cholesky factorization were used to obtain correlated returns in the same way that they have been correlated in the past, generating results more consistent with reality, within the restrictions and difficulties that exist with respect to the modeling of stock fluctuations.

Finally, in this work an optimization algorithm developed by Uryasev and Rockafellar was implemented, whose methodology is not yet widely used in the national market. This algorithm results in an optimal investment portfolio based on the minimization of VaR, which quantifies the maximum expected loss for a portfolio with a certain level of confidence and a pre-established time horizon.

CHAPTER I INTRODUCTION

1.1 General Aspects of Portfolio Risk

In recent years, financial institutions have carried out numerous studies in the area of risk management, in order to obtain measures that efficiently manage the risks to which they are exposed.

The financial risks that affect entities today are the same ones that affected them in previous years; what has evolved over time are the techniques for measuring these risks, bringing us to the current concept of VaR (Value at Risk), which estimates the risk of investment portfolios on probabilistic bases.

Risk is understood as the existence of some probability of incurring losses, a loss being a return lower than expected. In this way, financial risk is reflected in the loss of expected economic value of assets as a result of the variability experienced by returns; thus the economic value of an investment portfolio is influenced by different risk factors such as interest rates, exchange rates and share prices, among others.

In this way, it is essential for an institution to identify, measure and manage the financial risks it faces. Some of the most common financial risks are described below:

a) Interest rate risk. This in turn is composed of different risks (see the cited literature for more detail).

a.1) Market Risk: The risk of capital losses in the market value of assets as a result of variations in the interest rate. How much asset prices vary when rates change depends on the characteristics of each asset.

a.2) Reinvestment Risk: This occurs when the reinvestment of the asset itself or its cash flows must be carried out at lower rates than expected.

a.3) Volatility Risk: Refers to assets that have certain embedded options and whose price depends, in addition to the level of interest rates, on factors that may influence the value of those options, such as the volatility of interest rates. Volatility risk derives from a change in volatility negatively affecting the price of the bond.

b) Credit Risk, also known as Insolvency Risk, is generated by the issuer's inability to fulfill its obligations. Within this type we find sovereign risk, which refers to a country's default on its obligations.

c) Illiquidity Risk: Indicates the inability to have the necessary cash flow to meet short-term obligations, or in other words, the lack of sufficient working capital. It is also understood as the inability to sell an asset at its original price.

d) Legal Risk: It refers to all regulatory aspects that may directly or indirectly influence the results of a company. Among these we find the tax risk which would be generated by the possibility that certain tax advantages disappear as a result of these legal risks.

Initially, the risk models were aimed at measuring the risk of the investment portfolios of financial institutions. These institutions, motivated by the incentive to reduce the capitalization requirements imposed by the regulatory authorities, have been the main promoters of the methodological framework for risk management.

The ability to have a system that evaluates the market risk of the investment portfolio has been a constant need for institutional investors. This is why tools to assess and manage the volatility faced by investment portfolios have flourished over time.

Thus, in the 1970s, Gap analysis was used to measure exposure to interest rate risk, determined by the difference between assets and liabilities for different maturity stages.

In the 80s, duration (fixed income) began to be used as a tool to measure exposure to interest rate risk. Duration measures the sensitivity or price elasticity of an instrument with respect to a change in the interest rate, that is, how much could be lost if rates rise by some percentage. This measure is somewhat better than the previous one, as it takes into account the specific maturity and coupon of each asset. Betas (equities), on the other hand, measure the sensitivity of a financial instrument to changes in the market as a whole, represented by an index.

1.2 Value at Risk (VaR)

In an innovative framework, the US bank JP Morgan disseminated in the 1990s a methodology based on Value at Risk (VaR) models, which estimate the risk of investment portfolios on probabilistic bases.

This "RiskMetrics" methodology was published in 1995 and generated a revolution in risk management, giving way to the well-known Value at Risk (VaR) and, in recent years, the Conditional Value at Risk (CVaR).

Since the Basel Committee announced in 1995 that the capital reserves of financial institutions were to be based on VaR methodologies, various studies and analyses of the wide variety of methodologies applicable in financial institutions have emerged.

In simple terms, VaR answers the need to quantify, with a certain level of confidence, the amount or percentage of loss that a portfolio will face in a defined period of time. In other words, it measures the maximum expected loss given a time horizon, under normal market conditions and at a given confidence level. More specifically, the VaR represents a quantile of the profit and loss distribution, commonly set at 95% or 99%.

The philosophy of VaR is to measure the relationship between return and risk in order to form the efficient portfolio, as introduced by Markowitz and Sharpe.

According to Garman and Blanco, the VaR of a portfolio is the maximum expected loss for a certain time horizon and a certain level of confidence, measured in a specific reference currency.

In general, the most widely used assumption is that of normality, which makes it possible to represent all the observations using the well-known Gaussian bell and apply its statistical properties.

Therefore, if we determine the VaR of a portfolio for a time horizon of one day and demand a significance level of 5%, this means that only 5% of the time, or 1 out of 20 times (that is, about once a month with daily data, or once every 5 months with weekly data), will the return of the portfolio fall by more than what the VaR indicates.

To obtain it, the standard deviation of the portfolio return is multiplied by 1.645 (for 95% confidence):

VaR_{95\%} = 1.645\,\sqrt{w^{T} \Sigma\, w}   (Eq. 1.1)

Where:

w: vector of non-negative weights that add up to one.

Σ: matrix of variances and covariances of the returns of the n assets.

wᵀ: the weight vector transposed.

Figure 1.1 Graphic representation of Value at Risk


Given the above, using the VaR methodology, JP Morgan began to calculate every day the maximum probable loss it would incur in the next 24 hours.

As a result of the popularity of VaR, in Chile the Superintendency of Securities and Insurance (SVS) established this indicator as a risk measure for banking regulation, and it was incorporated by the insurance companies and the Pension Fund Administrators (AFPs) as part of the institutional regulations.

For example, if the VaR of a portfolio is calculated at $3,518,033.25 pesos for one day, with a 95% confidence level, it does not mean that $3,518,033.25 pesos will necessarily be lost; rather, in the case of losses, the maximum that can be lost from today to tomorrow, with probability 0.95, is $3,518,033.25 pesos. In this way the necessary capital can be adjusted.
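As a small illustration of Eq. 1.1, the following is a minimal sketch in Python; the weights, covariance matrix and portfolio value are invented for the example and are not data from this report.

import numpy as np

w = np.array([0.5, 0.3, 0.2])            # non-negative weights that sum to one
cov = np.array([[0.040, 0.010, 0.000],
                [0.010, 0.090, 0.020],
                [0.000, 0.020, 0.160]])  # covariance matrix of the returns (assumed)
value = 100_000_000                      # portfolio value in pesos (assumed)

var_95 = 1.645 * np.sqrt(w @ cov @ w) * value   # parametric 95% VaR (Eq. 1.1)
print(f"95% one-period VaR: {var_95:,.0f} pesos")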

1.3 VaR Estimation Methodologies.

Basically the VaR can be calculated using two methodologies:

a) Parametric methodology, which estimates the VaR through the use of parameters such as the volatility and correlation of the risk vertices, assuming that the returns are normally distributed.

b) Non-parametric or simulation methodology, which is subdivided into:

b.1) Historical simulation. Based on historical asset price returns.

In general terms, this method attempts to quantify the hypothetical returns that would have been obtained in the past by having maintained the current investment portfolio. That is, it consists of applying the vector of current investment weights to a representative series of historical returns, in order to generate a sequence of historical portfolio values that can be represented by a histogram, and thus define a certain probability distribution.

Among the advantages of this method: it makes no assumptions about the correlations of the instruments, nor does it explicitly assume the shape of the probability distribution of instrument prices. On the other hand, by relying on historical information to estimate future losses, it can incorporate "wide tails" and "asymmetries," if the historical sample had such characteristics.

Among the disadvantages is the need to have a large amount of historical information for the series of instruments, because otherwise the calculations obtained could be unreliable.
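A minimal sketch of the historical-simulation method, assuming `returns` is a (T, n) array of historical asset returns and `w` the current weight vector (both assumed inputs, not data from the report):

import numpy as np

def historical_var(returns, w, alpha=0.95):
    # hypothetical past returns of today's portfolio (current weights applied)
    portfolio_returns = returns @ w
    # the alpha-VaR is the (1 - alpha) percentile of that histogram, read as a loss
    return -np.percentile(portfolio_returns, 100 * (1 - alpha))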

b.2) Monte Carlo simulation. Based on the simulation of returns using random numbers.

This technique consists of generating future scenarios based on the distribution function of the variables. It allows us to simulate all the possible scenarios for the values taken by the returns of the different risk vertices, based on their distribution function. For this, it is necessary to assume that the scenarios follow some particular distribution, be it normal, t-student or another, and in this way generate the returns through some variable-generating algorithm or some stochastic process.

For example, we can assume that the series are distributed following a stochastic Wiener process (see section 2.5 for more details of the process):

\frac{\Delta P}{P} = \mu\, \Delta t + \sigma\, \varepsilon\, \sqrt{\Delta t}   (Eq. 1.2)

Where:

ΔP/P: the return of the share (P is the price of the share) in the time interval Δt.

μ: the expected value of the returns.

σ: the stochastic component scale of the returns, i.e., their standard deviation.

ε: a random variable with Normal(0,1) distribution.

Among the advantages of this method: it is by far the most powerful way to calculate VaR. It can account for a wide range of risk exposures, including non-linear price risk, volatility risk, and even model risk. It is flexible enough to incorporate time variation in volatility, fat tails and extreme scenarios. These simulations can also be used to examine, for example, the expected loss beyond a particular VaR.

As a drawback, we find the need for great computing power. For example, if 1,000 sample paths are generated for a portfolio of 1,000 assets, the total number of valuations will be 1,000,000.

Given the above, this method is difficult to use for real-time valuation and requires pre-established models of asset price behavior. Also, while it should be more accurate, since it attempts to generate the entire probability distribution of the portfolio's value, it still relies on historical returns to determine volatilities and correlations.
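A minimal sketch of the Monte Carlo approach for a single asset with normally distributed returns (all parameter values are illustrative assumptions):

import numpy as np

mu, sigma = 0.0005, 0.012                 # daily return and volatility (assumed)
value = 1_000_000                         # position value (assumed)
sims = mu + sigma * np.random.standard_normal(100_000)  # simulated one-day returns
var_95 = -np.percentile(sims, 5) * value  # 95% one-day VaR read off the simulated tail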

1.4 Conditional Value at Risk (CVaR)

VaR, as a risk measure, is unstable and difficult to work with numerically when losses are not normally distributed, which in practice is the most frequent case, since loss distributions tend to have "wide tails." VaR has been shown to be consistent only when it is based on the standard deviation of normal distributions of the returns of the assets, since under a normal distribution the VaR is proportional to the standard deviation of the instruments' returns.

On the other hand, VaR has undesirable mathematical characteristics, such as lack of subadditivity and convexity (see the cited literature for more detail).

In this way, when the returns are not normally distributed, the lack of subadditivity causes the VaR associated with a portfolio that combines two instruments to be greater than the sum of the VaR risks of the individual portfolios.

The VaR function, which we will denote by ζ_α(x), is defined as the α-percentile of the loss distribution function by the formula:

\zeta_\alpha(x) = \min\{ \zeta \in \mathbb{R} : \Psi(x, \zeta) \ge \alpha \}   (Eq. 1.3)

Where Ψ(x, ζ) is the distribution function of the losses and x is the vector of positions or weights in the investment portfolio.

To understand the concept of subadditivity, consider the following case: let ρ(·) be the VaR measure associated with a portfolio; then we will say that ρ is subadditive if, given two portfolios x₁ and x₂, we have:

\rho(x_1 + x_2) \le \rho(x_1) + \rho(x_2)   (Eq. 1.4)

That is, the combination of two portfolios should be associated with lower risk, as a result of diversification: "not putting all your eggs in the same basket."

However, VaR does not satisfy this property, and as a result of this bad behavior as a risk measure it could lead us to subdivide the investments or the portfolio in order to reduce reported risk, strongly contradicting the theory of diversification.

On the other hand, since VaR is not convex, minimizing it does not ensure that we have obtained the optimal portfolio that minimizes the objective function (losses), since it may have multiple local extremes.

Finally, a very important deficiency of VaR is that it provides no indication of the magnitude of the losses that could be experienced beyond the amount indicated by the measure, since it is simply a lower limit for the losses in the tail of the distribution of returns.

In this context, an alternative measure has emerged that quantifies the losses found in the tail of the loss distribution, called Conditional Value at Risk (CVaR), which can be used as a tool within investment portfolio optimization models and which has properties superior to VaR in many respects.

CVaR maintains consistency with VaR in the limited setting where the calculation of the latter is tractable (when losses are normally distributed): there, working with CVaR, VaR or Markowitz minimum variance produces the same results, that is, they lead to the same optimal portfolio. Furthermore, in practice the minimization of CVaR produces an optimal portfolio close to that of minimizing VaR, since by definition the VaR is less than or equal to the CVaR.

This measure, for continuous distributions, is also known as Mean Excess Loss, Expected Shortfall or Tail VaR. However, for discrete distributions the CVaR may differ. By definition, for continuous distributions, the α-CVaR is the expected loss given that the loss exceeds the α-VaR; in other words, it is the mean value of the losses worse than the α-VaR. For α = 0.99, the CVaR averages the worst 1% of losses. In general, for loss distribution functions of any kind (including discrete distributions), the CVaR is defined as a weighted average of the VaR and the conditional expectation of the losses that exceed it.

CVaR, unlike VaR, has very good mathematical properties, which can be seen in greater depth in the cited literature.
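For a sample of scenario losses, both measures can be estimated directly. A minimal sketch follows (with the caveat that for discrete samples the weighted-average definition above is the precise one; this tail mean is the usual approximation):

import numpy as np

def empirical_var_cvar(losses, alpha=0.95):
    var = np.quantile(losses, alpha)        # alpha-quantile of the loss distribution
    cvar = losses[losses >= var].mean()     # mean of the losses at or beyond the VaR
    return var, cvar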

Our objective is to find the optimal portfolio, the one whose associated risk (VaR) is minimal. For this we will use the remarkable mathematical formulation developed by Rockafellar and Uryasev, implementing the algorithm that optimizes the CVaR, using data from the Bloomberg terminal provided by AGF. These data will be treated statistically in order to obtain the time series of the returns of the different shares that will make up the investment portfolio; through a Monte Carlo algorithm we will then generate the scenarios used in the general CVaR optimization problem, from which the vector of weights to invest in each share of the portfolio will be obtained, with minimal associated risk (VaR).

1.5 Analysis of the Historical Data of an Investment Portfolio.

The historical data of the shares will be obtained from Bloomberg, where daily data are available for a horizon T defined by us. For a "good" estimate, it is convenient to have a horizon of at least T = 10 years for the assets that will make up the portfolio. It is important to make clear that Bloomberg gives the option of downloading the prices with their respective adjustments, so that the information is as "real" as possible.

First of all, it is good to separate the shares by sector:

Table 1.1 Example of Chilean Shares Grouped by Sector

Source: Own elaboration

The shares that ultimately make up the portfolio must have a certain degree of diversification across different markets, such as retail, mining, transportation and electricity, among others. In simple words, diversification is, as we mentioned earlier, "not putting all your eggs in the same basket," and its main objective is to achieve the maximum profitability with the least possible risk, bringing the following benefits:

• Reduces the vulnerability of the portfolio to severe market variations.

• Reduces the volatility (risk) of the portfolio.

For example, if you have a portfolio with 2 assets:

Figure 1.2 Example of diversifying stocks (assets) in a portfolio


In figure 1.2 it is clearly seen that appropriate diversification of the portfolio reduces risk, that is, when assets that are not related are combined, a lower risk is achieved.

The risk that can eventually be eliminated through diversification is specific (idiosyncratic) risk. Specific risk results from the fact that many of the dangers facing a given company are particular to it and perhaps to its immediate competitors.

But there is also a risk that cannot be avoided and even if one diversifies it cannot be eliminated, this is known as market risk. In conclusion, although there are benefits of diversification, the risk of a portfolio cannot be totally eliminated but rather minimized.

Market risk derives from the fact that there are economy-wide dangers that threaten all businesses; this is the reason why investors are exposed to market uncertainty, such as inflation, regardless of the number of shares of different companies that the portfolio holds.

Graph 1.1 Example of diversifying by increasing the number of shares in the portfolio

Source: Own elaboration

In Graph 1.1, the effect of portfolio diversification can be clearly appreciated, where the risk represented by the standard deviation decreases as assets are added to the portfolio.

In addition to the diversification of the portfolio stocks, the following points should be analyzed: permanence over time, large stock market presence, liquidity and high market capitalization, all of which provide us with a great deal of information and a low level of noise when analyzing them.

With respect to the liquidity of a company, this refers to the relationship that, at a given time, exists between its liquid resources and the obligations that are due at that time.

Likewise, market capitalization means the value of the company in the market and is defined by multiplying the share price by the number of shares of the company.

1.6 Current Situation of AGF Cruz del Sur

Currently, the general fund manager Cruz del Sur uses various mechanisms to try to achieve the “ideal”, a good return with the least possible risk.

As an example, the way AGF operates today will be explained, since this is the company that is providing all the financial know-how and the necessary information.

The portfolio manager or equity trader, together with the investment manager, uses the "old" Markowitz model (this theory is widely explained in various economics books), which is based on the analysis of the "efficient frontier": the curve obtained by graphing profitability against risk. For this, the portfolio is diversified by taking assets that yield a lot but with high risk and combining them with other assets that yield less but are "safer," that is, less volatile. Although this strategy is not "bad" (after all, it has been used since the 1950s), it has the defect of fixing in advance the percentage to invest in each share, which in this report is precisely what we will seek optimally and which we call the "vector of investment weights."

Figure 1.3 Example of the Efficient Frontier

Source: Own elaboration

Figure 1.3 represents the efficient frontier that contains the portfolios composed of risky assets that dominate others whose risks are the same, but have lower profitability.

Once the efficient frontier is formed with the different investment percentages for each asset (their sum must be one, i.e., 100%), the profitability versus risk graph is constructed for the different investment percentages, and the choice of the final portfolio clearly depends on the type of investor the administrator is. At AGF, for example, the style is more conservative, and therefore the portfolio chosen is not so volatile. (In figure 1.5, the manager sees that as risk increases, portfolio profitability increases.)

The main use they give to the efficient frontier is determining the portfolios to recommend to clients, that is, the different types of portfolio that are periodically recommended in their conservative, moderate and aggressive variants.

The problem to be addressed is for AGF to change the "old" model and use this new investment strategy, which finds the optimal "weight vector" to invest in each asset that makes up the portfolio, with a minimum VaR (minimum risk) and an established expected return.

It must be made clear that this method is a support tool, as a complement to the decision maker, since he is the one with the financial know-how.

CHAPTER II THEORETICAL FRAMEWORK

2.1 Value at Risk, theoretical framework

VaR is a uniform risk measure that quantifies the amount or percentage of the potential loss in value of a portfolio as a result of changes in market factors within a specified time interval. This loss is valued with a certain level of confidence.

Let f(x, y) be a loss function that depends on the vector of weights x, belonging to the feasibility set X, and on a random vector y. The random vector y is assumed to be governed by a probability measure P that is independent of x. For each x, we denote by Ψ(x, ·) the distribution function resulting from the loss function, that is:

\Psi(x, \zeta) = P\{ y : f(x, y) \le \zeta \}   (Eq. 2.1)

Therefore, if it is assumed that the random vector y has a probability density function p(y), that is, it is a continuous random vector, then for a fixed x the cumulative distribution function of the loss associated with the vector x is given by:

\Psi(x, \zeta) = \int_{f(x,y) \le \zeta} p(y)\, dy   (Eq. 2.2)

The formulas (2.1) and (2.2) represent the probability that the loss function does not exceed the threshold ζ. In both cases, the VaR function, which we will denote by ζ_α(x), is defined as the α-percentile of the loss distribution function by the formula:

\zeta_\alpha(x) = \min\{ \zeta \in \mathbb{R} : \Psi(x, \zeta) \ge \alpha \}   (Eq. 2.3)

The optimization problem that will be studied in this report, associated with VaR is:

\min_{x \in X} \; \zeta_\alpha(x)   (Eq. 2.4)

Where the set X represents the conditions imposed on the weights or investment policies associated with the portfolio. For example, if nothing special is asked of the portfolio, then the set X is given by:

X = \left\{ x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i = 1,\; x_i \ge 0 \right\}   (Eq. 2.5)

However, if a certain level of diversification is demanded of the portfolio (see the cited literature for more detail), then the set X is defined by:

X = \left\{ x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i = 1,\; 0 \le x_i \le \nu_i \right\}   (Eq. 2.6)

Where ν_i represents the maximum investment weight for each of the portfolio assets; for example, ν_i = 0.3 for all i is interpreted as the prohibition of holding more than 30% of the entire investment in a single portfolio asset. If we also demand a minimum return on the portfolio, then X is given by:

X = \left\{ x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i = 1,\; 0 \le x_i \le \nu_i,\; \sum_{i=1}^{n} \mu_i x_i \ge R \right\}   (Eq. 2.7)

In which R corresponds to the minimum required return and the μ_i are the predicted returns of each asset over the predefined period of time.

Finally, it is important to note that the objective of this report is not to calculate the risk associated with an investment portfolio whose weights in each asset are predefined, but to find the investment policy or portfolio weights that make this risk minimal; in other words, to provide a tool that helps decide how much to invest in each of the assets of a given investment portfolio.

2.2 Conditional Value at Risk, theoretical framework

When a continuous distribution is considered, the CVaR is defined as the expected value of the losses conditional on their exceeding the VaR (denoted by ζ_α(x)). The CVaR function, which we will denote by φ_α(x), is defined as:

\phi_\alpha(x) = (1-\alpha)^{-1} \int_{f(x,y) \ge \zeta_\alpha(x)} f(x, y)\, p(y)\, dy   (Eq. 2.8)

Where p(y) is the density function associated with the probability measure P. In general, for distribution functions of any kind, including discrete distributions, the CVaR is defined as the weighted average of the VaR and the losses that strictly exceed it, the latter denoted by φ⁺_α(x) (the conditional expectation of the losses that strictly exceed the VaR). In this way, the CVaR is defined as follows:

\phi_\alpha(x) = \lambda_\alpha(x)\, \zeta_\alpha(x) + \left(1 - \lambda_\alpha(x)\right) \phi^{+}_\alpha(x)   (Eq. 2.9)

Such that:

\lambda_\alpha(x) = \frac{\Psi(x, \zeta_\alpha(x)) - \alpha}{1 - \alpha} \in [0, 1]   (Eq. 2.1.0)

In the case of a continuous distribution for the loss function, Ψ(x, ζ_α(x)) = α, so λ_α(x) = 0 and therefore φ_α(x) = φ⁺_α(x).

CVaR is a coherent measure of risk, in the sense defined in the literature, determined by means of a percentile and, unlike VaR, it has good mathematical properties, which can be seen in greater depth in the cited documents. In particular, the CVaR defined by (2.8) is an upper bound of the VaR, since:

\zeta_\alpha(x) \le \phi_\alpha(x)   (Eq. 2.1.1)

In general, the minimizations of CVaR and VaR are not equivalent. Since the definition of CVaR explicitly involves the VaR function ζ_α(x), it becomes very cumbersome to work with and optimize the CVaR directly. However, consider the following auxiliary function:

F_\alpha(x, \zeta) = \zeta + (1-\alpha)^{-1} \int_{y \in \mathbb{R}^m} [f(x, y) - \zeta]^{+}\, p(y)\, dy   (Eq. 2.1.2)

Alternatively, it can be written as follows:

F_\alpha(x, \zeta) = \zeta + (1-\alpha)^{-1}\, E\!\left( [f(x, y) - \zeta]^{+} \right)   (Eq. 2.1.3)

Where [t]⁺ = max{t, 0}. For a fixed x, it is useful to consider the following function of ζ:

g_x(\zeta) = F_\alpha(x, \zeta)   (Eq. 2.1.4)

This last function of ζ has the following properties, which are very useful when calculating VaR and CVaR:

a) g_x is a convex function of ζ.

b) The α-VaR, ζ_α(x), is a minimizer of g_x, that is, ζ_α(x) ∈ argmin_ζ F_α(x, ζ).

c) The minimum value of the function g_x is the α-CVaR, that is, φ_α(x) = min_ζ F_α(x, ζ).

As an immediate consequence of these properties, it can be inferred that the CVaR can be optimized by optimizing the auxiliary function with respect to x and ζ simultaneously:

\min_{x \in X} \phi_\alpha(x) = \min_{(x, \zeta) \in X \times \mathbb{R}} F_\alpha(x, \zeta)   (Eq. 2.1.5)

In this way, the CVaR can be optimized directly, without the need to calculate the VaR first. Furthermore, F_α(x, ζ) is convex in the portfolio variable x whenever the loss function f(x, y) is convex with respect to x. In this case, if the set X of feasible positions in the portfolio is also convex, then the optimization problem in equation (2.1.5) is a convex problem, which can be solved using well-known techniques for this type of problem.

Usually it is not possible to calculate or determine the density function p(y) of the random events in the proposed formulation; however, it is possible to have a number of scenarios y_1, …, y_J, which represent either historical values of the random events (the historical time series of the returns or prices of the portfolio assets) or values obtained via computer simulation, in this report through the stochastic Wiener process. In any case, an important part of this research is to study the different alternatives for obtaining the scenarios.

Subsequently, an approximation of the function F_α is obtained using the empirical distribution of the random events based on the available scenarios:

\tilde{F}_\alpha(x, \zeta) = \zeta + \frac{1}{J(1-\alpha)} \sum_{j=1}^{J} [f(x, y_j) - \zeta]^{+}   (Eq. 2.1.6)

In this way, the problem is approximated by replacing F_α by F̃_α in equation (2.1.5):

\min_{(x, \zeta) \in X \times \mathbb{R}} \; \zeta + \frac{1}{J(1-\alpha)} \sum_{j=1}^{J} [f(x, y_j) - \zeta]^{+}   (Eq. 2.1.7)

Now, if we introduce auxiliary variables z_j to replace the terms [f(x, y_j) − ζ]⁺, assigning the corresponding restrictions, we have the following optimization problem:

\min_{x, \zeta, z} \; \zeta + \frac{1}{J(1-\alpha)} \sum_{j=1}^{J} z_j   (Eq. 2.1.8)

s.t.:

z_j \ge f(x, y_j) - \zeta, \quad z_j \ge 0, \quad j = 1, \dots, J, \quad x \in X

Finally, it can be observed that if the loss function is linear with respect to x, then the optimization problem in equation (2.1.8) reduces to a linear programming problem. It must be made clear that its size depends on the number of scenarios generated, and therefore large-scale linear programming techniques must be used; in the literature a heuristic algorithm is proposed to solve this problem. An important part of this report is to implement the aforementioned algorithm and obtain a comparison of profitability versus VaR (in the same way as Markowitz does with the efficient frontier).
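To make the formulation concrete, the following is a minimal sketch of problem (2.1.8) in Python with scipy, assuming the linear loss f(x, y_j) = −y_jᵀx (the negative portfolio return). The thesis implementation used Matlab with TomLab/CPlex; this function and its names are illustrative, not the original code.

import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(scenarios, alpha=0.95, mu=None, R=None, nu=1.0):
    # scenarios: (J, n) matrix of simulated asset returns y_j
    J, n = scenarios.shape
    # decision vector: [x_1 .. x_n, zeta, z_1 .. z_J]
    c = np.concatenate([np.zeros(n), [1.0], np.full(J, 1.0 / (J * (1 - alpha)))])
    # z_j >= -y_j^T x - zeta  rewritten as  -y_j^T x - zeta - z_j <= 0
    A_ub = np.hstack([-scenarios, -np.ones((J, 1)), -np.eye(J)])
    b_ub = np.zeros(J)
    if mu is not None and R is not None:
        # minimum expected return constraint: mu^T x >= R (as in Eq. 2.7)
        A_ub = np.vstack([A_ub, np.concatenate([-mu, [0.0], np.zeros(J)])])
        b_ub = np.append(b_ub, -R)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(J)])[None, :]  # sum of x = 1
    b_eq = [1.0]
    bounds = [(0.0, nu)] * n + [(None, None)] + [(0.0, None)] * J
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    x, zeta = res.x[:n], res.x[n]
    return x, zeta, res.fun  # optimal weights, alpha-VaR estimate, alpha-CVaR

By property (b) above, the optimal ζ approximates the α-VaR of the optimal portfolio, and the optimal objective value is its α-CVaR.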

2.3 Returns Analysis

The first thing to do is analyze the returns on the shares that make up the investment portfolio, in order to observe their behavior over a time horizon of at least T = 10 years.

This information is of vital importance, since from it the bases are obtained both for the development of predictive models and for the minimization of VaR that will be the starting point in our research.

Once the portfolio has been defined, the next step corresponds to obtaining the price series for each of these companies (see chapter 1.5).

With these series of historical prices, the profitability will be calculated as follows:

r_t = \frac{P_t - P_{t-1}}{P_{t-1}}   (Eq. 2.1.9)

The objective will be to obtain their returns on an annual, monthly and daily basis, as well as their associated risks, which are expressed by means of the variance and standard deviation. Finally, the correlation matrix will also be obtained, which will give us an idea of the level of diversification of the chosen portfolio.

Regarding the ways of calculating returns, it can be said that there are various alternatives to perform them, some of which are more complex than others, but always having something in common: a projection of the price of the instrument for a desired investment horizon. With this, it can be said that both the returns calculated by simple means and a historical average, as well as calculations through time series, fulfill the purpose of showing the behavior of the returns for a defined time horizon.

A traditional projection used by many financial companies has been the historical average return, which is defined as follows:

\bar{r} = \frac{1}{T} \sum_{t=1}^{T} r_t   (Eq. 2.2.0)

Considering the phenomenon of mean reversion present in returns, this seems to be a good approximation; however, it is unrealistic insofar as it is a statistical result that does not incorporate the fact that the investment horizon is not T.

A second methodology that considers the trajectory of the returns is the estimation of ARIMA-type time series models (Autoregressive Integrated Moving Average), which will not be used in this report, since the historical average return will be assumed for the entire period T.

Once the historical average of the returns has been obtained over a horizon T, it is necessary to complement this measure, since by itself it is not sufficient for making a decision; that is why the portfolio risks will be analyzed as well.

Typically, financial institutions, such as banks or money desks, use the variance to measure the volatility of a stock, which is calculated as follows:

\sigma^2 = \frac{1}{T-1} \sum_{t=1}^{T} (r_t - \bar{r})^2   (Eq. 2.2.1)

If it is assumed that the possible returns of an asset are distributed according to a normal distribution (Gaussian curve), it can be said with 95% confidence that the future profitability of this asset will belong to the following interval:

\left[\, \bar{r} - 1.96\,\sigma,\; \bar{r} + 1.96\,\sigma \,\right]   (Eq. 2.2.2)

Under this assumption it is possible to quantify the width of the interval in which the future profitability will fall or also what will be the probability of obtaining a determined profitability.

Once the parameters corresponding to the returns and volatilities of each asset have been defined and calculated, the next step is to examine the relationship between the shares, for which the covariance and the correlation coefficient must be introduced.

The covariance indicates how an asset behaves when there is a change in the value of another asset, and is defined as follows:

\sigma_{ab} = \frac{1}{T} \sum_{t=1}^{T} (r_{a,t} - \bar{r}_a)(r_{b,t} - \bar{r}_b)   (Eq. 2.2.3)

Where r_{a,t} and r_{b,t} are the return values of assets a and b respectively.

The covariance indicates the extent to which one share varies with the other. If the covariance is positive, when one stock rises the other also tends to rise; if the covariance is negative, when "a" rises, "b" tends to fall; and if the covariance is close to zero, the two shares are not related.

A statistical parameter that also indicates the relationship between two shares, and that is easier to interpret, is the correlation coefficient. This coefficient is defined by the following equation:

\rho_{ab} = \frac{\sigma_{ab}}{\sigma_a\, \sigma_b}   (Eq. 2.2.4)

We have that:

-1 \le \rho_{ab} \le 1   (Eq. 2.2.5)

As with the interpretation of the covariance, the correlation coefficient will be positive if both shares move in the same direction and negative if they move in opposite directions. If the shares have no relation to each other, it will be around zero.

The advantage of this coefficient is that in addition to being able to interpret the direction in which both actions move, it provides us with information about the magnitude of this relationship, which is expressed as follows:

Close to 0: weak relation between the shares.

Around ±0.5: moderate relation between the shares.

Close to ±1: strong relation between the shares.
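As an illustration, both statistics can be obtained directly with numpy; the two return series below are invented for the example:

import numpy as np

ra = np.array([0.010, -0.020, 0.015, 0.030])  # returns of share "a" (illustrative)
rb = np.array([0.008, -0.010, 0.020, 0.025])  # returns of share "b" (illustrative)

cov_ab = np.cov(ra, rb)[0, 1]        # sample covariance, as in Eq. 2.2.3
rho_ab = np.corrcoef(ra, rb)[0, 1]   # correlation coefficient, as in Eq. 2.2.4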

Once the historical average profitability for the established horizon has been obtained, together with the variance, the covariance matrix and the correlations of the shares, a prediction of future prices is made.

In order to predict the future prices of the shares that make up the portfolio, it was decided to generate price forecast scenarios through Wiener processes, using matrix procedures to obtain correlated assets and Monte Carlo simulation techniques.

2.4 Selection of the shares that make up the portfolio of the report

First of all, through Bloomberg, we obtain daily closing prices for all IPSA shares from January 13, 1994 to August 10, 2007. The shares are then sorted by starting date in ascending order.

The portfolio selection criteria are as follows:

More than ten years of historical data on closing prices.

Stock market presence equal to 100%.

A large stock market presence ensures that the shares are highly liquid on the exchange.

Therefore, the shares that meet these requirements and that will be used in this report can be seen in Table 1.2; those highlighted in green are the ones selected. In this way, the shares studied in this report correspond to half of the IPSA, that is, a total of 20 shares, with historical data from 05-22-1997 to 08-10-2007, which provides more than 10 years of information with 2,667 samples for each company.

Table 1.2 Example of Chilean Shares Selected for the Report

Source: Own elaboration

With these price series, the objective will be to obtain their returns on an annual, weekly and daily basis, as well as their associated risks, which are expressed by means of the variance and standard deviation. Finally, the correlation matrix will also be obtained, which will give us an idea of the level of diversification of the chosen portfolio.

The historical data provided by Bloomberg runs from Monday to Sunday, repeating Friday's closing price over the weekend, which introduces an error if the database is not cleaned. The series are therefore cleaned using the SPSS software (Statistical Package for the Social Sciences, standard version 11.5): we create a variable "syd" that flags the day of the week and then filter out the weekend cases, in other words, eliminate the weekend. The syntax is as follows:

COMPUTE syd = XDATE.WKDAY(date).

VARIABLE LABELS syd 'Saturday and Sunday'.

EXECUTE.

USE ALL.

SELECT IF (syd ~= 1 & syd ~= 7).

EXECUTE.

Continuing with the analysis of the price series, the next step is to obtain the returns for each of the actions, for which the behavior of the price series will first be analyzed graphically. This analysis will be done using Microsoft Excel 2003 software.

The graphic results of the price series are shown below:

On the Y axis are the prices of the series, and the X axis corresponds to time: the abscissa shows the sample number, which is associated with the date. The series contain around 2,667 data points, representing about 10 years of information excluding non-business days (Saturday and Sunday).

Behavior of the Price Series Graphically

Graph 1.2 Price evolution for the selected shares (1997 to 2007)

Source: Own elaboration

Graph 1.2 shows that the prices of the shares, for the most part, exhibit exponential growth as the years go by; however, in the particular case of the Madeco share the opposite phenomenon occurs, that is, it moves inversely to the rest of the shares. This is due to the following:

Starting in 1999, the company faced a series of difficulties in its markets that had an unfavorable impact on its results. The Asian crisis, which began in 1998, caused a significant drop in the level of industrial activity in the markets served by Madeco, especially in the telecommunications and construction industries. In 1999, the devaluation of the Brazilian currency affected Ficap's competitive position, reducing its contribution to consolidated results. In recent years, as a consequence of the deterioration of the main regional economies in South America, there has been a reduction in investment levels in the industries that the company supplies, especially in the telecommunications area. This adverse situation intensified in 2001 and 2002, due to the economic crisis that occurred in Argentina (generating the closure of plants and recognition of provisions by Madeco). In 2003, the company began a process of restructuring its operations, aimed primarily at increasing the efficiency of its production processes in conjunction with a reduction in its expense structure and a strengthening of its commercial strategy. Although the level of sales decreased by 8% compared to 2002, the operating result increased by 84%, reflecting the operational adjustments made. As of September 2004, the strengthening of its commercial strategy together with the greater economic activity registered in its main markets (Brazil and Chile) resulted in a significant increase in its level of sales and capacity to generate cash flows.

This was reflected in the positive trend of the operating margin, which reached 8.2%, similar to that obtained before 1999. For 2005, the company expects that the consolidation of its operating structure will be reflected in the stabilization of its margins.

Then the next step was to calculate the historical returns, which were obtained through the formula of the historical average return (Eq. 2.2.0). The results of these are presented in Figures 1.8 and 1.9 based on daily data:

Based on the profitability of the period between the years 97-07, the expected returns can be obtained for the different required periods, such as the annual, weekly or daily expected returns.

In this way, the daily prices were converted to daily profitability through the formula (Eq. 2.1.9), to then make the transformation of daily, weekly and annual returns through the following equation:

r_f = (1 + r)^{f} - 1   (Eq. 2.2.6)

Where f corresponds to the frequency factor between the two periodicities, r is the return taken as data and r_f is the return standardized to the required frequency.

Example: if we had an annual return and wanted to decompose it on a monthly basis, then f = 1/12, since a year has 12 months. Conversely, if we had the average daily return and wanted to move to a monthly basis, then f = 21, since an average month has 21 business days with transactions.
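A minimal sketch of this conversion, assuming the compounding rule reconstructed in Eq. 2.2.6; the numeric inputs are illustrative:

def convert_return(r, f):
    # standardize a return r to another frequency via (1 + r)**f - 1 (Eq. 2.2.6)
    return (1.0 + r) ** f - 1.0

monthly_from_annual = convert_return(0.12, 1 / 12)  # 12% per year -> per month
monthly_from_daily = convert_return(0.0005, 21)     # 0.05% per day -> per month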

Table of Profitability of the shares that make up the portfolio

Table 1.3 Historical profitability of the period (1997 to 2007)

Source: Own elaboration

Table 1.4 Detail of profitability of the period (1997 to 2007)

Source: Own elaboration

Table 1.3 shows that both the daily and the weekly returns of the Madeco share are negative, that is, an investor who invested in this share would apparently lose money. This conclusion is misleading, since Table 1.4, with the detail of the annual returns of this share, shows an average gain of 45.8% per year. It should be noted that in this research weekly data (t = weeks) will be used as input to the Wiener process, which will be seen below.

CHAPTER III GENERATION OF SCENARIOS THROUGH THE WIENER PROCESS AND THE MONTE CARLO SIMULATION TECHNIQUE

The purpose of a scenario generator is to produce a set of values of the decision variables involved over a certain planning horizon; its output is a scenario, or a set of them, that reflects the historical behavior of the variables.

An alternative for the generation of future profitability scenarios is the use of Wiener processes, together with matrix procedures and Monte Carlo simulation techniques.

3.1 Introduction to a Stochastic Methodology

Any variable whose value changes in an uncertain way over time can be said to follow a stochastic process. These processes can be classified as discrete-time or continuous-time.

A discrete-time stochastic process is one where the value of the variable can change only at certain definite points in time. On the other hand, a continuous-time stochastic process is one in which changes can take place at any instant of time.

Stochastic processes can also be classified by continuous or discrete variables. In continuous-variable processes, the variable can take any value within a range, while in discrete-variable processes only a fixed set of possible values can be taken.

During this work, which in this part is oriented to stock price forecasts, continuous variable and continuous time processes will be developed. Knowledge of this type of process is essential for understanding the management of other derivatives such as options.

It should be said that in practice, stock prices do not literally follow continuous-variable, continuous-time processes, since prices are restricted to certain discrete values (for example, multiples of cents or pesos), and price changes occur only on the days when the exchanges are trading. However, continuous-variable, continuous-time processes have proven to be a very useful tool for this purpose.

3.2 Markov process

Markov processes are defined as a particular type of stochastic process, where only the present value of the variable is relevant for predicting the future. More generally, it can be said that both the history of the variable and the noise generated in the present by this variable will be irrelevant in predicting the future value. With respect to stock prices, it is worth mentioning that it is usually assumed that predictions can be made through Markov processes, with which the prediction of the future price of the share will not be affected by the prices of yesterday, last week or the last month.

This is consistent with the theory of market efficiency, which postulates that the present price of a share incorporates all past information.

Since future predictions are uncertain, they must be expressed in terms of probability distributions. In this regard, the Markov property implies that the probability distribution of the share price in the future will not depend on some pattern followed by the same action in the past, but only on its present state.

3.3 Wiener process

This process is a type of Markov stochastic process, also known as Brownian motion, whose mean change is 0 and whose variance per unit of time is equal to 1. This process is widely used in physics to describe the motion of particles subject to a large number of small random shocks.

Formally, a variable follows a Wiener process if it meets the following properties:

Property 1: The variation Δz over a short period of time Δt is:

\Delta z = \varepsilon \sqrt{\Delta t}   (Eq. 2.2.7)

Where ε is a random variable with standard normal distribution.

Property 2: The values of Δz for two different short time intervals are independent.

Continuing with what was stated in Property 1, Δz itself has a normal distribution with:

mean of Δz = 0 and standard deviation of Δz = √Δt

The second property implies that z follows a Markov process.

Considering the increase in the value of z over a relatively long period of time T, we can denote this increase by z(T) − z(0). It can also be viewed as the sum of the small increments of z in N small time intervals, where N = T/Δt. Thus,

z(T) - z(0) = \sum_{i=1}^{N} \varepsilon_i \sqrt{\Delta t}   (Eq. 2.2.8)

Where the ε_i are random variables with standard normal distribution. From the second property of the Wiener process, it follows that the ε_i are independent of each other. Then, continuing from (Eq. 2.2.8), z(T) − z(0) is normally distributed with:

mean = 0 and variance = NΔt = T, that is, standard deviation √T

Which is consistent with what was discussed at the beginning of this chapter.

Regarding the notation, recall that in calculus small changes are denoted by differentials, taking the limit as the variations approach zero; thus Δx becomes dx. With stochastic processes one proceeds in the same way, so the Wiener process described above for z is expressed in the limit as dz = ε√dt.

3.4 Generalized Wiener Process

The basic Wiener process has a rate of change of zero and a variance of 1. The rate of change equal to zero means that the expected value of z at any future instant will be equal to its present value. On the other hand, that the variance is equal to 1 means that the variance of the changes in z in a time interval T will be equal to T.

Generalizing the Wiener process for a variable x in terms of z we have:

dx = a\, dt + b\, dz   (Eq. 2.2.9)

Where a and b are constants.

To understand the above equation, it is useful to consider its two independent components. The term a dt implies that x has a rate of change of a per unit of time. Without the term represented by b, the equation reduces to dx = a dt, which, solving the differential equation, gives us:

x = x_0 + a\, t

where x_0 is the value of x at time 0. This implies that in each period of time t the value of x increases by a·t.

The term b dz can be considered as noise or variability added to the pattern followed by x. In this way, the amount of noise or variability in the equation is defined as b times the Wiener process.

As the Wiener process has a standard deviation of 1, b times a Wiener process has a standard deviation of b. With this, for small time intervals, the change in the value of x is given by equations (2.2.7 and 2.2.8) as:

\Delta x = a\, \Delta t + b\, \varepsilon \sqrt{\Delta t}

Where ε, as explained previously, corresponds to a random variable with a standard normal distribution. From this it follows that Δx has a normal distribution with mean aΔt and standard deviation b√Δt. By means of the same arguments that were presented for the Wiener process, the change in the value of x in any time interval T is normally distributed with:

Mean of change in x = aT

Variance of change in x = b²T

Thus, the generalized Wiener process given by equation 2.2.9 has an expected rate of change per unit of time equal to a and a variance per unit of time of b².

There are similar alternatives to the Wiener process, where variables a and b, instead of being constant, can be variable functions with respect to variables x and t, generating a more complex stochastic differential equation.
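As a quick illustration, the following minimal sketch simulates one path of the generalized Wiener process by its discrete form; the drift, noise scale and horizon are illustrative values, not parameters from the report:

import numpy as np

a, b = 0.1, 0.3                 # drift per unit time and noise scale (assumed)
T, n = 1.0, 252                 # horizon and number of steps (assumed)
dt = T / n
eps = np.random.standard_normal(n)            # epsilon ~ N(0, 1), one per step
dx = a * dt + b * eps * np.sqrt(dt)           # increments: drift plus b times Eq. 2.2.7
x = np.concatenate([[0.0], np.cumsum(dx)])    # simulated path starting at x0 = 0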

3.5 Forecast of Stock Prices

From now on we will focus on the stochastic processes used to determine share prices, without taking into account the dividend policies of companies.

It would be tempting to suggest that the price of a stock follows a generalized Wiener process, that is, that it has a constant drift rate and a constant variance. However, this model would fail to capture the most important characteristic of stock prices: the expected percentage return required by investors in a share is independent of the price of that share. Clearly, the assumption of a constant drift rate is inappropriate and must be replaced by the assumption that the expected return (the expected percentage change in the price) is constant.

In this way, if S is defined as the price of the stock at time t, the drift rate with respect to the price would be μS, with μ a constant parameter. Likewise, for a small time interval Δt, the expected increase in S will be given by μSΔt.

The parameter μ corresponds to the expected profitability of the share, expressed in decimal form.

Thus, if we suppose that the volatility of stock prices were always equal to zero, the model would be represented by:

dS = \mu S\, dt

Integrating this equation over the interval [0, T], we obtain:

S_T = S_0\, e^{\mu T}   (Eq. 2.3.0)

Where S_0 and S_T are the share prices at times zero and T respectively. Equation (2.3.0) shows that when the variance is zero, the stock price grows continuously at the rate μ per unit of time.

Assuming that the variation in stock prices shows no volatility is quite far from reality. It is therefore reasonable to assume that the variability of a share is represented as a percentage of its price and, like the returns, is independent of the price level.

Finally, the predictive model will be defined by:

\frac{\Delta S}{S} = \mu\, \Delta t + \sigma\, \tilde{\varepsilon}\, \sqrt{\Delta t}   (Eq. 2.3.1)

The above equation is one of the most used for modeling the behavior of stock prices, where σ corresponds to the volatility (standard deviation) of the stock, μ is its expected return and ε̃ is the correlated random term, obtained by multiplying the transposed Cholesky factor of the correlation matrix by a random vector drawn from a standard normal distribution (with mean zero and standard deviation 1).

3.6 Generalization of Price Forecasting

The behavior model of stock prices developed previously is known as Geometric Brownian Motion, and in its discrete form it is represented by means of:

\frac{\Delta S}{S} = \mu\, \Delta t + \sigma\, \varepsilon\, \sqrt{\Delta t}   (Eq. 2.3.2)

\frac{\Delta S}{S} \sim N\!\left(\mu\, \Delta t,\ \sigma \sqrt{\Delta t}\right)   (Eq. 2.3.3)

The variable ΔS represents the change in the share price S in a small time interval Δt, and ε is a random variable drawn from a standard normal distribution (with mean zero and standard deviation 1).

The left-hand side of equation (2.3.2) corresponds to the return of the share in the time interval Δt. The term μΔt corresponds to the expected value of that return, and σε√Δt represents its stochastic component.

Equation (2.3.2) therefore shows that ΔS/S is normally distributed with mean μΔt and standard deviation σ√Δt, which is precisely equation (2.3.3).

3.7 Predictive Model

Brownian motion processes will be used as the model for generating future returns. For this, normal random numbers will be generated, and the expected returns, standard deviations and historical correlations of the assets will be incorporated into them, based on the historical data, thus producing a forecast of future returns.

According to the above, future daily returns will be forecast as proposed in (2.3.1), with the exception that, when working with daily information and forecasting one day ahead, the time differential is equal to 1. In this way, for this particular case, the equation is defined by:

\frac{\Delta S}{S} = \mu + \sigma\, \tilde{\varepsilon}   (Eq. 2.3.4)

For each value of the random variable ε, with normal distribution, a future profitability scenario is generated for the next unit of time. This is repeated a large number of times, and all these scenarios are used to obtain measures such as the average return and the variance of the shares. This is what is known as a Monte Carlo simulation.
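A minimal single-share sketch of this procedure (the daily μ and σ are illustrative assumptions, not estimates from the report):

import numpy as np

mu, sigma = 0.0008, 0.015               # daily expected return and volatility (assumed)
eps = np.random.standard_normal(5000)   # one epsilon per scenario
r = mu + sigma * eps                    # one-day return scenarios (Eq. 2.3.4)
print(r.mean(), r.std())                # approximately recovers mu and sigma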

3.7.1 Monte Carlo simulations

Making decisions under conditions of uncertainty implies making efforts to project the future in order to foresee risk situations, prepare to face undesirable conditions, avoid wrong options and take advantage of favorable situations.

For this, Monte Carlo simulations are a very good scientifically-based tool, with which a series of situations or possible scenarios for an event can be predicted.

In this vein, in 1998 Nassir Sapag defined Monte Carlo processes as a technique for simulating uncertain scenarios that makes it possible to obtain expected values for uncontrollable variables through random selection, where the probability of choosing a result corresponds to that given by its distribution.

3.7.2 Correlation of returns

In the analysis of returns, it is very important to evaluate their correlation, since this indicator gives us an idea of the behavior of an asset when there is a variation in the value of another asset. In other words, the correlation coefficient tells us to what extent two shares move in the same direction.

When generating random numbers and obtaining the different scenarios of expected returns through equation (2.3.1), both the returns and the volatilities will correspond approximately to those obtained from the historical data (in theory they are the same), but the joint behavior of the shares will not be modeled. If correlations are not taken into account in the return modeling, the modeled returns will be totally independent of each other (correlation coefficients close to zero), which, when constructing portfolios, means obtaining forecasts quite far from reality. An alternative for modeling this uses the Cholesky decomposition, which is discussed in the next section.

One of the ways in which forecasts of correlated returns can be generated in the same way that they have been correlated in the past is through Cholesky decomposition or factorization.

In linear algebra, the Cholesky decomposition corresponds to a matrix decomposition, in which a positive definite symmetric matrix is ​​decomposed into the product of two matrices.

Theorem 1: A symmetric matrix A is positive definite if and only if there exists an upper triangular matrix S with strictly positive diagonal such that:

A = S^{T} S

This decomposition of the matrix A is known as its Cholesky factorization.

One of the most important applications of the triangular factorizations presented is that they allow us to solve a system as two triangular systems, that is, by means of two substitution procedures: one forwards and one backwards.

In the following, it will be demonstrated how by means of Cholesky decomposition it is possible to obtain correlated data series from data that were not correlated.

Let:

μ: the mean of the historical data.

Σ: its matrix of variances and covariances.

R: the correlation matrix of the historical data.

Then:

R = D\, \Sigma\, D   (Eq. 2.3.5)

Where D is a diagonal matrix with elements

d_{ii} = \frac{1}{\sigma_i}   (Eq. 2.3.6)

That is, D is the matrix that has the inverses of the standard deviations on the diagonal.

Let S be the Cholesky factor of the matrix Σ:

\Sigma = S^{T} S

Substituting this expression in (2.3.5), we obtain:

R = D\, S^{T} S\, D = (SD)^{T} (SD)

Which means that the matrix of the Cholesky factorization of R is:

C = SD

Next, it will be shown that from an independent vector ε ~ N(0, I), premultiplied by the transposed Cholesky factor of R, a normal vector ε̃ = Cᵀε is obtained that is correlated in the same way as the historical data; that is:

Cov(\tilde{\varepsilon}) = R

It is known that Cov(ε) = I and that Cov(Cᵀε) = Cᵀ Cov(ε) C. Substituting the former in the latter, we have Cov(ε̃) = CᵀC = R. Moreover, since R is a correlation matrix, its diagonal elements are equal to 1, so each component of ε̃ keeps mean 0 and variance 1. Therefore it is shown that ε̃ is standard normal with correlation matrix equal to that of the historical data.

3.7.3 Generation of the scenarios

Once the correlated random numbers are obtained, equation (2.3.3) is used to restore the historical mean and standard deviation of the data. In matrix form this can be expressed as:

r = μ + D^{-1} y,

where y = C^T z is the correlated standard normal vector of the previous section and D^{-1} is the diagonal matrix of standard deviations. In this way, a random vector is generated with mean and standard deviation equal to the historical values, which is shown as follows:

E(r) = μ + D^{-1} E(y).

Since E(y) = 0, it follows that:

E(r) = μ.

For the case of the variance:

Cov(r) = Cov(D^{-1} y) = D^{-1} Cov(y) D^{-1}.

But since Cov(y) = R, and since R = D Σ D by (Eq. 2.3.5), then:

Cov(r) = D^{-1} D Σ D D^{-1}.

Pre-multiplying and post-multiplying (2.3.5) by D^{-1}, we see that:

D^{-1} R D^{-1} = Σ.

Thus:

Cov(r) = Σ.

With this, it has been shown that through equation (2.3.2) and the Cholesky decomposition, scenarios can be generated with mean and variance-covariance matrix equal to the historical ones.
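As an illustration of this construction, the following sketch (in Python with NumPy; the original study implemented the procedure in Matlab 7.4) generates correlated weekly return scenarios from independent standard normals. The mean vector, volatilities and correlation matrix are placeholders, not the data of the study:

import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.002, 0.001, 0.0015])       # weekly mean returns (illustrative)
sigma = np.array([0.03, 0.04, 0.025])       # weekly volatilities (illustrative)
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.4],
              [0.3, 0.4, 1.0]])             # historical correlation matrix (illustrative)

J = 5000                                    # number of scenarios
Z = rng.standard_normal((3, J))             # independent N(0,1) draws, one row per stock

C = np.linalg.cholesky(R)                   # lower-triangular factor (the C^T of the text)
Y = C @ Z                                   # correlated standard normals, Corr(Y) = R

returns = mu[:, None] + sigma[:, None] * Y  # restore historical means and volatilities

print(np.round(np.corrcoef(returns), 2))    # sample correlation close to R

The sample correlation of the generated matrix recovers R, while each row keeps the mean and volatility of the corresponding stock.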

3.7.4 Implementation of the predictive model

The Monte Carlo method is an algorithm used to estimate the expected value of a random variable through the generation of scenarios, which provides a view of the behavior of the variables.

With the help of Matlab 7.4 and TomLab/CPlex (an optimization solver), the algorithm will be run on an Intel(R) Xeon(TM) computer with two 3.4 GHz processors and 2 GB of RAM, running Microsoft Windows Server 2003. A series of random numbers will be generated for each of the stocks in the portfolio, simulating a set of daily and weekly scenarios. A large number of scenarios will thus be obtained (between 2,000 and 5,000, following Johnson's recommendation), distributed normally with mean, standard deviation and correlation equal to those of the historical data (as explained in the previous section). The result is a matrix with a number of rows equal to the number of stocks handled and a number of columns equal to the number of scenarios defined in the simulation.

As noted, the quantity of random numbers generated depends on the number of assets managed in the portfolio, which the system recognizes through the dimension of the vector of expected returns. The number of weekly scenarios to model is entered manually, through a parameter called "sample".

Once the random numbers have been generated, the Cholesky decomposition makes it possible to obtain series correlated in the same way as the historical data, while maintaining the mean and standard deviation of the random numbers, that is, 0 and 1. The dimension of this new matrix is the same as that of the matrix of random numbers.

Once the data have been correlated, the next step is to obtain series with means and standard deviations equal to the historical ones, since, as has been seen, stocks have returns different from zero and volatilities different from 1.

The historical returns, volatilities and correlations of the series are incorporated through equation (2.3.4), which comes from the development of the Wiener process. In this way, a matrix is obtained that represents a series of possible return scenarios for each of the stocks in the portfolio, over a time horizon of one week.

The generation of random numbers, together with the procedures to obtain correlations, returns and standard deviations equal to the historical ones, is repeated for each week to be modeled, generating a 3-dimensional array (number of stocks × number of weekly scenarios to simulate × number of weekly horizons to forecast), as sketched below.
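A sketch of how this 3-dimensional arrangement can be assembled, reusing the one-week procedure above (illustrative data and a hypothetical helper simulate_week, not part of the original software):

import numpy as np

rng = np.random.default_rng(1)

def simulate_week(mu, sigma, R, J):
    # One week of correlated return scenarios (see the previous sketch)
    Z = rng.standard_normal((len(mu), J))
    return mu[:, None] + sigma[:, None] * (np.linalg.cholesky(R) @ Z)

mu = np.array([0.002, 0.001, 0.0015])
sigma = np.array([0.03, 0.04, 0.025])
R = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.4], [0.3, 0.4, 1.0]])

J, weeks = 5000, 24
cube = np.stack([simulate_week(mu, sigma, R, J) for _ in range(weeks)], axis=2)
print(cube.shape)  # (stocks, weekly scenarios, weekly horizons)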

The program offers two alternatives for generating scenarios: one, as already seen, using the historical mean obtained by Eq. 2.2.0, and the other using expert-judgment data, in our case provided by the "Bloomberg" software, which supplies the inputs of equation 2.3.6, known as the Capital Asset Pricing Model (CAPM), a model frequently used in financial economics. It states that the higher the risk of investing in an asset, the greater the return of that asset must be to compensate for this increased risk. Therefore we have:

E(Ri) = Rf + βi (Rm − Rf) (Eq. 2.3.6).

Where:

Rf: the risk-free rate; in Chile, 5-year indexed Central Bank bonds.

Rm: the market rate; in our case, the annual IPSA.

(Rm − Rf): the excess return of the market portfolio.

βi: the beta coefficient, used to measure non-diversifiable risk. It is an index of how responsive an asset is to a change in market performance. The beta coefficient that characterizes the market is 1; all other coefficients are judged relative to this value. Asset betas can take positive or negative values, although positive values are the norm, and most beta coefficients lie between 0.5 and 2 (expert judgment).
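By way of example, the sketch below evaluates Eq. 2.3.6 for hypothetical inputs. The rates and the beta are placeholders, and the de-annualization at the end is only one common form that the transformation of Eq. 2.3.7 could take; it is shown here as an assumption, since the text does not reproduce that equation:

rf = 0.03      # annual risk-free rate (e.g. 5-year indexed Central Bank bonds) -- placeholder
rm = 0.10      # annual market return (e.g. the IPSA) -- placeholder
beta = 1.2     # asset beta (expert judgment / Bloomberg) -- placeholder

mu_annual = rf + beta * (rm - rf)            # CAPM, Eq. 2.3.6
print(mu_annual)                             # 0.114

# Assumed geometric de-annualization to a weekly mean (one plausible form of Eq. 2.3.7):
mu_weekly = (1 + mu_annual) ** (1 / 52) - 1
print(mu_weekly)                             # roughly 0.00208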

Later, we transform the annual mean into a weekly mean by means of the following equation:

Eq. (2.3.7).

Applying Eq. 2.3.7 and using the β of each asset, the following is obtained:

Table 1.5 Example of obtaining the weekly average through CAPM and the average obtained through historical data

Source: Own elaboration

Table 1.5 shows the weekly means (u_weekly) obtained by means of the CAPM for the 20 assets, which will be substituted into equation 2.3.4, derived from the Wiener process; a matrix is thus obtained that represents a series of possible return scenarios for each stock in the portfolio over a one-week horizon. The difference between the means obtained by the CAPM and those from the historical data is clearly visible. This is because, over the last 10 years, the Santiago Stock Exchange has experienced a considerable rise in share prices, so the historical mean carries a great deal of "noise". Given the above, it is preferable to use the weekly mean obtained by the CAPM, since it is much more conservative.

CHAPTER IV OPTIMIZATION ALGORITHM FOR THE CALCULATION OF VaR

In this part, an algorithm for the minimization of the VaR is presented, assuming that all the conditions indicated in equations (2.4)-(2.9) hold.

4.1 Informal description of the algorithm

By definition, the α-VaR is the smallest value such that the probability that the loss is less than or equal to this value is greater than or equal to α. Based on the simulation of scenarios, the α-VaR of a portfolio is estimated as the loss in a scenario k such that the total probability of all scenarios with losses less than or equal to this one is at least α.

The general line of thought behind the heuristic algorithm considered in this paper is quite simple. It begins with an optimal portfolio obtained by minimizing the CVaR; the VaR of the portfolio is then systematically reduced by solving a series of CVaR problems using linear programming techniques. These CVaR problems are obtained by restricting and "discarding" the scenarios that show large losses.

The objective of the algorithm is to construct upper bounds for the VaR and then minimize these bounds. The first upper bound for the α-VaR is the α-CVaR, which is minimized. The scenarios in which the losses exceed the α-VaR are then split, and the upper portion of these scenarios is "discarded" (see Figure 2.2). The number of scenarios discarded is determined by the parameter ξ (e.g., if ξ equals 0.5, the upper half is discarded). Figure 1.4 shows the first step of the approach, in which high-loss scenarios are discarded and excluded (made "inactive"). A new confidence level α is then calculated in such a way that the CVaR at this new level is an upper bound for the VaR of the original problem. This α-CVaR is the expected loss of the active scenarios with losses that exceed the α-VaR, that is, the scenarios between the α-VaR and the dotted line in the figure. In this way, the upper bound is reduced to a minimum. In short, the procedure consists of constructing a series of upper bounds that are successively minimized until it is no longer possible to discard active scenarios. At the end of this procedure, the considered heuristic minimizes the loss in the scenario corresponding to the VaR estimate, while ensuring that the losses in the scenarios that exceed it remain above it. This approach requires solving a series of linear programming problems.

Figure 1.4 Graphic Example of the Implemented Algorithm.

Source:

In Figure 1.4 it can be seen that, in the second step of the algorithm, the scenarios showing the greatest losses are restricted and discarded (made inactive). A new CVaR is thus generated, such that this CVaR is an upper bound of the VaR.

In the next section, the algorithm will be explained in greater detail.

4.1.1 Algorithm

In this section a formal description of the previously introduced algorithm is given.

Step 0: Initialization.

Define the confidence level α, set the iteration counter i = 0, and let H0 be the set of all scenarios.

Step 1: Optimization sub-problem.

i) Minimize the CVaR over the active scenarios Hi. Note that the solution of this optimization sub-problem is given by the optimal portfolio x_i* and the auxiliary variable ζ_i.

ii) With respect to the value of the loss function, order the scenarios f(x_i*, y_k), k ∈ Hi, in ascending order, denoting the ordered scenarios by y_l1, y_l2, …

Step 2: Estimation of the VaR.

Calculate the VaR estimate as f(x_i*, y_l), where l is the smallest index such that the cumulative probability of the ordered scenarios up to y_l is greater than or equal to α.

Step 3: Algorithm stopping criterion.

If only one active scenario with a loss exceeding the VaR estimate remains, stop the algorithm; x_i* will be the optimal portfolio estimate and the VaR will be equal to f(x_i*, y_l).

Step 4: Reset.

i) Recalculate the confidence level α so that the new α-CVaR over the active scenarios is an upper bound of the original α-VaR.

ii) Exclude from Hi the upper ξ-fraction of the active scenarios whose losses exceed the VaR estimate, obtaining Hi+1, and

iii) increment the iteration counter, i = i + 1.

iv) Go to step 1.

In other words:

Step 1: Optimization Sub-Problem

With respect to the value of the loss function, order the scenarios in ascending order, denoting the ordered scenarios by y_l1, …, y_l5000:

f(x_i*, y_l1) ≤ f(x_i*, y_l2) ≤ … ≤ f(x_i*, y_l5000)

Step 2: Estimation of VaR.

l(0.95) = 0.95 × 5000 = 4750

α = 0.95; i = 0; H0 = {1, …, 5000}

Step 3: algorithm stop criteria

If the stopping criterion is met, stop the algorithm; x_i* will be the optimal portfolio estimate and the VaR will be equal to f(x_i*, y_l).

Therefore, at position 4486 the minimum risk (VaR) and its corresponding expected return are obtained.

Once the problem is formally defined, each of the previous steps is explained in more detail.

Step 0 initializes the algorithm by defining the confidence level α and setting the iteration counter to zero.

The scenarios included in the CVaR optimization sub-problem (equation 2.3.8) are defined as active. Initially all scenarios are active, which is denoted by the set H0 (what this set really contains is the set of indices of the active scenarios). In the following steps, as the CVaR optimization sub-problem is solved, only the set of active scenarios, denoted Hi, is considered (we emphasize that Hi is the set of indices of the scenarios active at step i). The so-called inactive scenarios are those that have been excluded in previous iterations. The parameter ξ defines the proportion of scenarios in the tail that are excluded in each iteration; for example, if ξ = 0.5, half of the tail is excluded in each iteration. Later, different values will be given to this parameter to see how these variations influence the algorithm.

Step 1 solves the optimization sub-problem of minimizing the α-CVaR, which is an upper bound on the α-VaR. The variable ζ is a free variable that ensures that the losses in the inactive scenarios exceed those corresponding to the active scenarios.
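To make this step concrete, the following sketch solves one CVaR sub-problem in the Rockafellar-Uryasev linear-programming form: minimize ζ + 1/((1−α)J) Σ z_k subject to z_k ≥ −y_k·x − ζ, z_k ≥ 0, Σ x = 1 and 0 ≤ x ≤ div. The original work solved these programs with TomLab/CPlex under Matlab; here scipy.optimize.linprog and randomly generated scenario returns are used purely for illustration:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, J, alpha, div = 5, 1000, 0.95, 0.3        # assets, scenarios, confidence, weight cap

Y = rng.normal(0.002, 0.03, size=(J, n))     # illustrative scenario returns, one row per scenario
# Loss of a portfolio x in scenario k: f(x, y_k) = -y_k @ x

# Decision vector v = [x_1 .. x_n, zeta, z_1 .. z_J]
c = np.concatenate([np.zeros(n), [1.0], np.full(J, 1.0 / ((1 - alpha) * J))])

# z_k >= -y_k @ x - zeta, rewritten as -y_k @ x - zeta - z_k <= 0
A_ub = np.hstack([-Y, -np.ones((J, 1)), -np.eye(J)])
b_ub = np.zeros(J)

A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(J)])[None, :]   # fully invested: sum(x) = 1
b_eq = [1.0]

bounds = [(0.0, div)] * n + [(None, None)] + [(0.0, None)] * J     # zeta is free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x_opt, zeta = res.x[:n], res.x[n]
print("weights:", np.round(x_opt, 3), "alpha-CVaR:", res.fun, "zeta (~alpha-VaR):", zeta)

At the optimum, ζ approximates the α-VaR and the objective value gives the α-CVaR, the upper bound that the algorithm then tightens by discarding tail scenarios.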

In step 2, the VaR is estimated as the loss in the scenario such that the cumulative probability of the scenarios with losses less than or equal to that of this scenario is greater than or equal to α.

In step 3, the algorithm stops when the optimization of the sub-problem has been carried out over only one of the active scenarios, that is, when the losses in the scenario corresponding to the α-VaR estimate have been minimized. The number of iterations performed before an optimal solution is obtained therefore depends on the magnitude of the following parameters:

J: number of samples or scenarios to model.

Alpha (α): confidence level (of the α-VaR).

Chi (ξ): proportion of scenarios in the tail that are excluded in each iteration.

In step 4, α is redefined in such a way that the α-CVaR, which is calculated only over the active scenarios, is an upper bound of the original α-VaR. Minimizing the α-CVaR over the active scenarios results in a minimization of the mean value of the active tail that exceeds the α-VaR. This situation is exemplified in Figure 2.2.

Furthermore, in this step the upper part of the active scenarios that exceed the α-VaR is excluded from the system of active scenarios Hi. For example, as illustrated in Figure 1.4, in the first iteration the tail is divided into two parts: the upper part becomes inactive, and the lower part corresponds to the set H1 of active scenarios. A sketch of this bookkeeping is given below.
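Steps 2 and 4 can be sketched as follows (the losses here are illustrative random numbers; in the algorithm they would be the losses f(x_i*, y_k) of the active scenarios after solving the sub-problem of step 1):

import numpy as np

rng = np.random.default_rng(0)
alpha, xi = 0.95, 0.5
losses = rng.normal(0.0, 1.0, size=5000)     # illustrative f(x_i*, y_k), one per active scenario

order = np.argsort(losses)                   # ascending order, as in step 1.ii
l = int(np.ceil(alpha * losses.size))        # e.g. 0.95 * 5000 = 4750
var_estimate = losses[order[l - 1]]          # loss of the l-th ordered scenario (step 2)

tail = order[l:]                             # scenarios with losses above the VaR estimate
n_discard = int(np.ceil(xi * tail.size))     # xi = 0.5 -> the upper half of the tail
inactive = order[-n_discard:]                # made inactive for the next iteration (step 4)
active = np.setdiff1d(np.arange(losses.size), inactive)
print(var_estimate, len(active))             # VaR estimate and remaining active scenarios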

4.2 Results of the Optimization Algorithm

In this part of the chapter, the results obtained by means of the optimization algorithm will be shown.

As a first step, only the variables related to the number of scenarios to model (J), the confidence level (α), which defines the α-VaR, and the proportion of scenarios in the tail to exclude in each iteration (ξ) are set, obtaining the behavior of the VaR in the selected portfolio under a diversification restriction of 30% and no requirement on returns. This calculation is carried out analogously for the two scenario-generation cases: the historical mean and the mean calculated using the CAPM (see section 3.7.4).

For the cases described above, the following values ​​are taken:

J = 5000 α = 0.95 ξ = 0.5

Table 1.6 Results of the Implemented Algorithm using the historical weekly mean and the mean obtained through the CAPM.

Source: Own elaboration

Table 1.6 shows that, under the same scenario-generation conditions, the return is more optimistic when the historical mean is used instead of the CAPM mean. This phenomenon was to be expected, since the sample of assets taken for the analysis considers only the last 10 years (1997-2007), precisely the period in which the stock market rose more than expected. The results of the algorithm using the mean obtained by the CAPM should therefore be preferred, since they are more conservative.

It should be noted that the algorithm itself has opted for assets with positive returns at the expense of assets with negative returns, which gives an idea of ​​how it is working.

It is also observed that, in both cases, as the horizon increases from 4 to 36 weeks the return increases, and with it the risk.

Graph 1.3 Graph comparing the two scenarios simulation alternatives

Source: Own elaboration

Analyzing Graph 1.3, we observe that the basic principle of finance is preserved: the higher the return, the greater the risk (VaR). This holds for both types of means.

As mentioned in the previous paragraph, when the historical mean is used, the return-versus-risk forecast is more optimistic than with the CAPM mean, since the latter is more conservative.

Continuing with our study, we now set a forecast horizon of 24 weeks (6 months), 5000 scenarios (J = 5000), a confidence level of 90% (α = 0.9) and ξ = 0.5 (a fixed parameter of the algorithm, indicating that half of the tail is excluded in each iteration). Varying the diversification restriction (div) and the required return, the results are the following:

Table 1.7 Data provided by the software (Thesis)

Source: Own elaboration

In Table 1.7 it can be seen that, under the same scenario (div = 0.3), if the algorithm is left to "work alone", that is, without demanding a certain return, it obtains a lower risk (VaR) than when a 5.5% return is required.

When the algorithm is required to make the investment portfolio return at least 6% with the same diversification of 30%, it does not find an optimal portfolio with the requested profitability, since in that period there are no more profitable stocks; the program therefore delivers an "Error" message: "try a lower profitability".

If, instead, we set the diversification equal to 1, that is, let the algorithm choose the most profitable stocks and invest freely, with no restriction on how much to invest in each one, and we require it to return at least 6%, the risk rises sharply as the higher return is demanded. This is to be expected, since one of the basic principles of finance is that the higher the risk of the portfolio, the higher the expected return.

Now analyzing other cases:

Scenarios: 5000, using the weekly CAPM mean

Table 1.8 Variation of the Confidence Interval for three Time Periods

Source: Own elaboration

In Table 1.8, the variation of the confidence level (90%, 95% and 99%) was analyzed for three time periods of 4, 12 and 20 weeks respectively, with the same level of diversification (20%) and without requiring a return. For the three periods, it can be seen that the lower the confidence level of the VaR, the lower the risk associated with the portfolio, and that as the confidence level increases, the associated risk increases considerably.

Table 1.9 Variation of the Level of Diversification for a Time Horizon of 8 Weeks and a Confidence Level of 95%.

Source: Own elaboration

In Table 1.9 it can be seen that, as the level of diversification increases, the expected return of the portfolio and its associated risk remain quite similar. This happens because this restriction only determines the maximum that can be invested in each asset. In practice, this restriction is used by the general fund administrators, since the SVS imposes it under rule No. 148.

Finally, for a time horizon of 12 weeks (3 months) and with a confidence level of 95% and also a portfolio diversification level of 30%, the following results are obtained:

Table 2.0 Variation of ξ in the Algorithm

Source: Own elaboration

In Table 2.0 it is observed that, as the chi (ξ) parameter of the algorithm increases, the expected return of the portfolio and its associated risk remain constant. These results are as expected, since this parameter only affects the time it takes the algorithm to converge to the solution, in other words, the number of iterations needed to reach the optimum.

4.3 Validation of the Optimization Algorithm

To verify that the algorithm effectively delivers the optimal vector of weights to invest in each stock with minimum VaR risk, the following was done:

The previous example was taken (J = 5000, α = 0.9, div = 0.3, horizon = 24 weeks, with the return left free). The software was run and the optimal vector X* obtained by the algorithm was perturbed as follows:

X1 = X* + e1 → VaR1, E(r)1

X2 = X* + e2 → VaR2, E(r)2

X3 = X* + e3 → VaR3, E(r)3

…

Xn = X* + en → VaRn, E(r)n

where the perturbation vectors e_i are constructed as follows:

That is, in the vector Xi the i-th component was perturbed by 1%, and this amount was subtracted in equal parts from the remaining components, where n is the number of components greater than 0.01 (so that no weight falls below 0 and the sum of the components of Xi remains equal to 1). Subsequently, the expected return and the VaR were calculated at the new point Xi, as sketched below.
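The perturbation procedure can be sketched as follows; the scenario matrix, the assumed optimum x_star and the 1% shift are illustrative placeholders, not the values of the study:

import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(0.002, 0.03, size=(5000, 5))        # illustrative scenario returns
x_star = np.array([0.30, 0.30, 0.20, 0.15, 0.05])  # assumed optimal weights
alpha = 0.9

def risk_return(x):
    losses = -Y @ x
    return np.quantile(losses, alpha), (Y @ x).mean()   # empirical alpha-VaR, E(r)

points = []
for i in range(len(x_star)):
    x = x_star.copy()
    big = (x > 0.01) & (np.arange(len(x)) != i)    # components that absorb the shift
    x[i] += 0.01
    x[big] -= 0.01 / big.sum()                     # keep sum(x) = 1, weights >= 0
    points.append(risk_return(x))

print("optimum:   ", risk_return(x_star))
print("perturbed: ", points)

Consistent with Figures 1.5 and 1.6, no perturbed portfolio should attain a higher return with equal or lower VaR than the optimum.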

In order to show that we are indeed in the presence of the optimum, the resulting graph must look like this:

Figure 1.5 Validating the Optimum provided by the Algorithm

Source: Own elaboration

This means (see Figure 1.5) that there cannot be any point in the second quadrant, since a point there would mean that, with the same risk (VaR) or less, a higher return is obtained, which contradicts financial theory.

When the validation was run with the 1% perturbations, the following results were obtained:

Figure 1.6 Results of the algorithm validation

Source: Own elaboration

Figure 1.7 Zoom of the perturbations in Figure 1.6

Source: Own elaboration

In Figure 1.6 it is observed that the algorithm indeed yields the optimal vector of weights to invest with a minimum associated risk, since the values resulting from perturbing the vector X* have a higher expected return and also a higher VaR.

Figure 1.7 is an enlargement of the previous figure, and shows that the perturbations form a curve, not a straight line as Figure 1.6 seemed to suggest.

CHAPTER V CONCLUSIONS AND RECOMMENDATIONS

In this report, the proposed objective was met: the computational implementation of an optimization algorithm, not previously available in the national market, that calculates the VaR by minimizing the CVaR.

Although this type of algorithm can be used for all kinds of financial transactions, in this work the implementation was carried out for equity investment portfolios, based on assets traded in the national market, but under a methodology that can be extrapolated to almost any market in the world.

It is important to highlight that the use of VaR as a risk measure has become widespread throughout the world. In Chile, it is currently a requirement of the Superintendency of Securities and Insurance (SVS) as a risk quantifier for some types of transactions. In this regard, it must be said that, at the national level, VaR estimates are obtained only by means of statistical methodologies, which are quite far from the optimization algorithm developed in this report.

In general, statistical VaR evaluations are used to quantify risks ex post, over already defined portfolios. They are therefore only used to get an idea of the level of risk taken, rather than as a tool for future decisions. The algorithm implemented here, by contrast, produces an optimal portfolio in terms of VaR; that is, it calculates the weights to invest in each asset while simultaneously obtaining the CVaR, the most desirable risk measure (due to its properties) and a more conservative one.

Regarding the obtaining and generation of financial information for the operation of the algorithm, it should be said that, although projections of stock prices are very difficult to model, owing to their great randomness, volatility, expectations and sudden market movements, the techniques used, such as Monte Carlo simulation, Cholesky factorization and Wiener processes, were of great help in obtaining forecasts of returns, volatilities and correlations similar to the historical ones exhibited by the original series.

In relation to the results obtained with the optimization algorithm, they are in line with financial theory regarding the relationship between portfolio risk (VaR), diversification, and the return required of the optimal portfolio determined by the algorithm.

In Chapter III, for the generation of the scenarios, we tried to simulate the behavior of the stocks as realistically as possible, replacing the historical mean of the returns with the CAPM mean, since the betas of each stock and the risk-free and market rates are obtained from worldwide expert judgment, so this view is usually closer to reality than a biased historical mean.

In Chapter IV, regarding the return required of the algorithm, it was observed that beyond a certain value the VaR grows substantially. A similar behavior can be seen in the analysis of free versus static diversification.

From the statistical point of view, distributions other than the normal could be incorporated into this work for the Wiener-process modeling; in particular, it would be desirable to consider heavier-tailed distributions that correspond more realistically to the behavior of stock prices, for example the Student's t or the logistic distribution.

Finally, another perspective for the development of this work could be the consideration of investment portfolios with other types of assets, such as bonds and options, as well as applications in the areas of insurance or bank lending.

BIBLIOGRAPHIC REFERENCES

Portillo, M.P., Sarto, J.L. (2001): "Financial management of interest risk", Ed. Pirámide, Madrid.

http://www.bde.es/informes/be/estfin/completo/estfin03_06.pdf

Jorion, Philippe (2000): "Value at Risk: The New Benchmark for Managing Financial Risk", 2nd edition, McGraw-Hill.

Sharpe, W. (1964): "Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk", Journal of Finance, No. 19, pp. 425-442.

Garman, M. and Blanco, C. (1998): "New Advances in the Methodology of Value at Risk: Concepts of VeRdelta and VeRbeta", Financial Analysis Magazine, No. 75, pp. 6-8.

Johnson, Christian A. (2000): "Risk assessment methods for investment portfolios", Working Papers, Central Bank of Chile, No. 67.

https://emportal.jpmorgan.com/JPMorganMexico/doc_jun2006/24.pdf

Romero, R., Laengle, S. (2005): "Implementation of the Conditional Value at Risk for Decision Making", mimeo, Universidad de Chile, Faculty of Economic and Administrative Sciences.

Rockafellar, R.T., Uryasev, S. (2002): "Conditional Value-at-Risk for general loss distributions", Journal of Banking & Finance 26, pp. 1443-1471.

Artzner, P., Delbaen, F., Eber, J.M., Heath, D. (1999): "Coherent Measures of Risk", Mathematical Finance 9, pp. 203-228.

Parisi F., Antonio (2006): "Diversification and Risk Management", Economy and Administration Magazine, University of Chile, pp. 70-71.

Markowitz, H. (1952): "Portfolio Selection", Journal of Finance, pp. 77-91.

Brealey, Myers, Allen (2006): "Principles of Corporate Finance", 8th ed., McGraw-Hill, pp. 161-187.

http://www.gacetafinanciera.com/PORTAF1.ppt

http://www.innovanet.com.ar/gis/TELEDETE/TELEDETE/bmatyest.htm

Palmquist, J., Uryasev, S. and Krokhmal, P. (1999): "Portfolio Optimization with Conditional Value at Risk Objective and Constraints", University of Florida, Department of Industrial and Systems Engineering.

Mausser, H. and Rosen, D. (1998): "Efficient Risk/Return Frontiers from Credit Risk", Algo Research Quarterly, Vol. 2, No. 2, pp. 5-20.

Rockafellar, R.T. and Uryasev, S. (2001): "Conditional Value at Risk for General Loss Distributions", Research Report 2001-5, ISE Dept., University of Florida.

Uryasev, S. (2000): "Conditional Value at Risk: Optimization Algorithms and Applications", Financial Engineering News, 14, pp. 1-6.

Hull, John C. (1999): "Options, Futures and Other Derivatives", Prentice Hall, 4th edition, New Jersey.

Duffie, D. and Pan, J. (1997): "An Overview of Value at Risk", Journal of Derivatives, 4, pp. 7-49.

Larsen, N., Mausser, H., Uryasev, S. (2001): "Algorithms for Optimization of Value at Risk", Research Report 2001-9, ISE Dept., University of Florida.
