Statistics and Actuarial Science
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9934
This is the collection for the University of Waterloo's Department of Statistics and Actuarial Science.
Research outputs are organized by type (e.g., Master's Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Browse
Browsing Statistics and Actuarial Science by Subject "Actuarial Science"
Now showing 1 - 20 of 42
Item: Actuarial Inference and Applications of Hidden Markov Models (University of Waterloo, 2011-08-17) by Till, Matthew Charles

Hidden Markov models have become a popular tool for modeling long-term investment guarantees. Many different variations of hidden Markov models have been proposed over the past decades for modeling indexes such as the S&P 500, and they capture the tail risk inherent in the market to varying degrees. However, goodness-of-fit testing, such as residual-based testing, for hidden Markov models is a relatively undeveloped area of research. This work focuses on hidden Markov model assessment, and develops a stochastic approach to deriving a residual set that is ideal for standard residual tests. This result allows hidden-state models to be tested for goodness-of-fit with the well-developed testing strategies for single-state models. This work also focuses on parameter uncertainty for the popular long-term equity hidden Markov models. There is a special focus on underlying states that represent lower returns and higher volatility in the market, as these states can have the largest impact on investment guarantee valuation. A Bayesian approach for the hidden Markov models is applied to address the issue of parameter uncertainty and the impact it can have on investment guarantee models. Also in this thesis, the areas of portfolio optimization and portfolio replication under a hidden Markov model setting are further developed. Different strategies for optimization and portfolio hedging under hidden Markov models are presented and compared using real-world data. The impact of parameter uncertainty, particularly for model parameters connected with higher market volatility, is once again a focus, and the effects of not taking parameter uncertainty into account when optimizing or hedging in a hidden Markov model are demonstrated.
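For readers unfamiliar with the machinery behind abstracts like the one above, the likelihood of a hidden Markov model for index returns is computed with the forward recursion. The following is a minimal sketch, assuming a two-state Gaussian HMM with illustrative (not fitted) parameters and simulated returns standing in for S&P 500 data:

```python
import numpy as np
from scipy.stats import norm

def hmm_loglik(returns, mu, sigma, P, pi0):
    """Scaled forward algorithm: log-likelihood of returns under a Gaussian HMM."""
    alpha = pi0 * norm.pdf(returns[0], mu, sigma)   # joint of state and first obs
    ll = 0.0
    for r in returns[1:]:
        c = alpha.sum()                             # normalize to avoid underflow
        ll += np.log(c)
        alpha = (alpha / c) @ P * norm.pdf(r, mu, sigma)
    return ll + np.log(alpha.sum())

# Illustrative two-state "calm/turbulent" parameters (hypothetical values)
mu    = np.array([0.008, -0.012])   # monthly mean log-return per state
sigma = np.array([0.035, 0.080])    # monthly volatility per state
P     = np.array([[0.96, 0.04],
                  [0.20, 0.80]])    # regime persistence
pi0   = np.array([0.5, 0.5])

rng = np.random.default_rng(0)
sample = rng.normal(0.005, 0.05, size=240)          # stand-in for index log-returns
print(hmm_loglik(sample, mu, sigma, P, pi0))
```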
Item: Actuarial Ratemaking in Agricultural Insurance (University of Waterloo, 2015-08-06) by Zhu, Wenjun

A scientific agricultural (re)insurance pricing approach is essential for maintaining sustainable and viable risk management solutions for different stakeholders, including farmers, governments, insurers, and reinsurers. The major objective of this thesis is to investigate high-dimensional solutions to refine agricultural insurance and reinsurance pricing. In doing so, this thesis develops and evaluates three high-dimensional approaches for constructing an actuarial ratemaking framework for agricultural insurance and reinsurance: a credibility approach, a high-dimensional copula approach, and a multivariate weighted distribution approach.

This thesis comprehensively examines the ratemaking process, including reviews of different detrending methods and the generating process of the historical loss cost ratios (LCRs, defined as the ratio of indemnities to liabilities). A modified credibility approach is developed based on the Erlang mixture distribution and the liability-weighted LCR. In the empirical analysis, a comprehensive data set representing the entire crop insurance sector in Canada is used to show that the Erlang mixture distribution captures the tails of the data more accurately than conventional distributions. Further, the heterogeneous credibility premium based on the liability-weighted LCRs is more conservative, and provides a more scientific approach to enhance reinsurance pricing.

The agriculture sector relies substantially on insurance and reinsurance as a mechanism to spread loss. Climate change may lead to an increase in the frequency and severity of spatially correlated weather events, which could lead to an increase in insurance costs, or even the unavailability of crop insurance in some situations. This could have a profound impact on crop output, prices, and ultimately the ability to feed the world's growing population into the future. This thesis proposes a new reinsurance pricing framework, including a new crop yield forecasting model that integrates weather and crop production information from geographically related regions, and closed-form reinsurance pricing formulas. The framework is empirically analyzed with an original weather index system we set up, and with algorithms that combine screening regression (SR), cross validation (CV) and principal component analysis (PCA) to achieve efficient dimension reduction and model selection. Empirical results show that the new forecasting model has improved both in-sample and out-of-sample forecasting abilities. Based on this framework, weather risk management strategies are provided for agricultural reinsurers.

Adverse weather-related risk is a main source of crop production loss, and in addition to farmers, this exposure is a major concern to insurers and reinsurers who act as weather risk underwriters. To date, weather hedging has had limited success, largely due to challenges regarding basis risk. Therefore, this thesis develops and compares different weather risk hedging strategies for agricultural insurers and reinsurers, by investigating the spatial dependence and aggregation level of systemic weather risks across a country. In order to reduce basis risk and improve the efficiency of weather hedging strategies, this thesis refines the weather variable modeling by proposing a flexible time series model that assumes a generalized hyperbolic (GH) family for the margins to capture the heavy-tail property of the data, together with the Lévy subordinated hierarchical Archimedean copula (LSHAC) model to overcome the challenge of high dimensionality in modeling the dependence of weather risk. Wavelet analysis is employed to study the detailed characteristics within the data on both time and frequency scales. Results show that capturing the appropriate dependence structure of weather risk is of great importance. Further, the results reveal significant geographical aggregation benefits in weather risk hedging, which means that more effective hedging may be achieved as the spatial aggregation level increases.

It is also necessary to integrate auxiliary variables such as weather, soil, and other information into the ratemaking system to refine the pricing framework. In order to investigate a scientific way to reweight historical loss data with auxiliary variables, this thesis proposes a new premium principle based on multivariate weighted distributions. Some desirable properties, such as linearity and stochastic order preservation, are derived for the newly proposed multivariate weighted premium principle. Empirical analysis using a unique data set of the reinsurance experience in Manitoba from 2001 to 2011 compares different premium principles and shows that integrating auxiliary variables such as liability and economic factors into the pricing framework redistributes premium rates by assigning higher loadings to riskier reinsurance contracts, and hence helps reinsurers achieve more sustainable profits in the long term.
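To make the central ratio concrete: the LCR is indemnities over liabilities, and a liability-weighted aggregate differs from the naive yearly average whenever insured exposure varies across years. A tiny illustration with made-up numbers (not the Canadian crop data used in the thesis):

```python
import numpy as np

# Hypothetical crop program history: indemnities paid and liabilities insured
indemnity = np.array([12.0, 40.0, 8.0, 95.0, 20.0])    # $ millions
liability = np.array([300.0, 450.0, 350.0, 800.0, 500.0])

lcr = indemnity / liability                       # loss cost ratio per year
simple_mean = lcr.mean()
weighted = np.average(lcr, weights=liability)     # liability-weighted LCR
                                                  # = total indemnity / total liability
print(np.round(lcr, 4), round(simple_mean, 4), round(weighted, 4))
```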
Item: Adaptive policies and drawdown problems in insurance risk models (University of Waterloo, 2015-08-31) by Li, Shu

Ruin theory studies an insurer's solvency risk, and to quantify such a risk, a stochastic process is used to model the insurer's surplus. Research on ruin theory dates back to the pioneering works of Lundberg (1903) and Cramér (1930), where the classical compound Poisson risk model (also known as the Cramér-Lundberg model) was first introduced. The research was later extended to the Sparre Andersen risk model, the Markov arrival risk model, the Lévy insurance risk model, and so on. However, most analyses of these risk models assume that the premium rate per unit time is constant, which does not always reflect the insurance environment accurately. To better reflect the surplus cash flows of an insurance portfolio, there have been studies (such as those related to dividend strategies and tax models) which allow the premium rate to take different values over time. Recently, Landriault et al. (2012) proposed the idea of an adaptive premium policy, where the premium rate charged is based on the behaviour of the surplus process itself. Motivated by their model, the first part of the thesis focuses on risk models that adjust the premium rate to reflect recent claim experience. In Chapter 2, we generalize the Gerber-Shiu analysis of the adaptive premium policy model of Landriault et al. (2012). Chapter 3 proposes an experience-based premium policy under compound Poisson dynamics, where premium rate changes are based on the increment between successive random review times. In Chapter 4, we examine a drawdown-based regime-switching Lévy insurance model, where the drawdown process is used to model an insurer's level of financial distress over time, and to trigger regime switching (or premium changes).

Similarly to ruin problems, which examine the first passage time of the risk process below a threshold level, drawdown problems relate to the first time that a drop in value from a historical peak exceeds a certain level (or, equivalently, the first passage time of the reflected process above a certain level). As such, drawdowns are fundamentally relevant from the viewpoint of risk management, as they are known to be useful to detect, measure and manage extreme risks, with applications in many research areas, for instance mathematical finance, applied probability and statistics. Among the common insurance surplus processes in ruin theory, drawdown episodes have been extensively studied in the class of spectrally negative Lévy processes, or more recently, its Markov additive generalization. However, far less attention has been paid to the Sparre Andersen risk model, where the claim arrival process is modelled by a renewal process; the difficulty lies in the fact that such a process does not possess the strong Markov property. Therefore, in the second part of the thesis (Chapter 5), we extend the two-sided exit and drawdown analyses to a renewal risk process. In conclusion, the general focus of this thesis is to derive and analyze ruin-related and drawdown-related quantities in insurance risk models with adaptive policies, and to assess their risk management impacts. Chapter 6 ends the thesis with some concluding remarks and directions for future research.
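The drawdown process referenced above has a direct empirical counterpart: for a surplus path, the drawdown is the gap between the running maximum and the current value, and a drawdown episode of size b begins the first time that gap exceeds b. A minimal sketch, using an illustrative Brownian-motion-with-drift surplus rather than the renewal model of the thesis:

```python
import numpy as np

def first_drawdown_time(path, b):
    """First index at which drawdown (running max minus current value) exceeds b."""
    drawdown = np.maximum.accumulate(path) - path
    hits = np.flatnonzero(drawdown > b)
    return hits[0] if hits.size else None

# Illustrative surplus path: premiums net of claims, approximated by drifted BM
rng = np.random.default_rng(1)
dt, n, drift, vol = 1 / 252, 252 * 10, 0.5, 1.0
path = 10 + np.cumsum(drift * dt + vol * np.sqrt(dt) * rng.standard_normal(n))

print(first_drawdown_time(path, b=2.0))   # None if no drawdown of size 2 occurs
```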
Item: Algorithmic Analysis of a General Class of Discrete-based Insurance Risk Models (University of Waterloo, 2013-08-28) by Singer, Basil Karim

The aim of this thesis is to develop algorithmic methods for computing particular performance measures of interest for a general class of discrete-based insurance risk models. We build upon and generalize the insurance risk models considered by Drekic and Mera (2011) and Alfa and Drekic (2007) by incorporating a threshold-based dividend system in which dividends are paid only if some period of good financial health is sustained above a pre-specified threshold level. We employ two fundamental methods for calculating the performance measures under the more general framework. The first method adopts the matrix-analytic approach originally used by Alfa and Drekic (2007) to calculate various ruin-related probabilities of interest, such as the trivariate distribution of the time of ruin, the surplus prior to ruin, and the deficit at ruin. Specifically, we begin by introducing a particular trivariate Markov process and expressing its transition probability matrix in block-matrix form. From this characterization, we identify an initial probability vector for the process, from which certain important conditional probability vectors are defined. For these vectors to be computed efficiently, we derive recursive expressions for each of them. Subsequently, using these probability vectors, we derive expressions which enable the calculation of conditional ruin probabilities, from which their unconditional counterparts naturally follow. The second method involves the first claim conditioning approach (i.e., conditioning on the time the first claim occurs and its size) employed in many ruin-theoretic articles, including Drekic and Mera (2011). We derive expressions for the finite-time ruin based Gerber-Shiu function as well as the moments of the total dividends paid by a finite time horizon or before ruin occurs, whichever happens first. It turns out that both functions can be expressed as elegant, albeit long, recursive formulas. With the algorithmic derivations obtained from the two fundamental methods, we focus on computational aspects of the model class by comparing six different types of models belonging to this class and providing numerical calculations for several parametric examples, highlighting the robustness and versatility of our model class. Finally, we identify several potential areas for future research and possible ways to optimize numerical calculations.

Item: Analysis of a Threshold Strategy in a Discrete-time Sparre Andersen Model (University of Waterloo, 2007-09-26) by Mera, Ana Maria

In this thesis, it is shown that the application of a threshold on the surplus level of a particular discrete-time delayed Sparre Andersen insurance risk model results in a process that can be analyzed as a doubly infinite Markov chain with finite blocks. Two fundamental cases, encompassing all possible values of the surplus level at the time of the first claim, are explored in detail. Matrix-analytic methods are employed to establish a computational algorithm for each case. The resulting procedures are then used to calculate the probability distributions associated with fundamental ruin-related quantities of interest, such as the time of ruin, the surplus immediately prior to ruin, and the deficit at ruin. The ordinary Sparre Andersen model, an important special case of the general model, with varying threshold levels is considered in a numerical illustration.
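Both of the preceding items work with discrete-time surplus chains and recursive computation. As a toy illustration of the flavour of such recursions (a plain discrete-time model with no threshold or dividend features, far simpler than the models above), the finite-time ruin probability can be computed by dynamic programming over integer surplus levels:

```python
from functools import lru_cache

# Toy discrete-time model: premium 1 per period; with probability q a claim
# occurs, with size uniform on {1,...,4}. Ruin = surplus dropping below zero.
q = 0.3
claim_sizes = [1, 2, 3, 4]

@lru_cache(maxsize=None)
def psi(u, n):
    """Probability of ruin within n periods, starting from integer surplus u."""
    if n == 0:
        return 0.0
    total = (1 - q) * psi(u + 1, n - 1)           # no claim this period
    for x in claim_sizes:                         # claim of size x
        s = u + 1 - x
        total += (q / len(claim_sizes)) * (1.0 if s < 0 else psi(s, n - 1))
    return total

print(psi(5, 100))   # finite-time ruin probability from surplus 5
```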
Item: Analysis of Financial Data using a Difference-Poisson Autoregressive Model (University of Waterloo, 2011-05-17) by Baroud, Hiba

Box and Jenkins methodologies have contributed massively to the analysis of time series data. However, the assumptions used in these methods impose constraints on the type of data, and difficulties arise when those tools are applied to more general types of data (e.g. count, categorical or integer-valued data) rather than the classical continuous, or more specifically Gaussian, type. Papers in the literature have proposed alternative methods to model discrete-valued time series data; among these methods is Pegram's operator (1980). We use this operator to build an AR(p) model for integer-valued time series (including both positive and negative integers). The innovations follow the difference-of-Poissons distribution, also known as the Skellam distribution. While the model includes the usual AR(p) correlation structure, it can be made more general: the operator can be extended so that some components contribute to positive correlation while others contribute to negative correlation. As an illustration, the process is used to model the change in a stock's price, in three variations: Variation I, Variation II and Variation III. The first model disregards outliers, while the second and third include large price changes associated with the effect of large-volume trades and market openings. Parameters of the model are estimated using maximum likelihood methods. We use several model selection criteria to select the best order for each variation of the model as well as to determine the best variation overall. The most adequate order for all variations of the model is AR(3). While the best fit for the data is Variation II, residual diagnostic plots suggest that Variation III represents a better correlation structure for the model.

Item: Analysis of Islamic Stock Indices (University of Waterloo, 2009-04-29) by Mohammed, Ansarullah Ridwan

In this thesis, an attempt is made to build on the quantitative research in the field of Islamic finance. Firstly, univariate modelling using special GARCH-type models is performed on both the FTSE All World and FTSE Shari'ah All World indices. The AR(1) + APARCH(1,1) model with standardized skewed Student-t innovations provided the best overall fit and was the most successful at VaR modelling for long and short trading positions. A risk assessment is done using the Conditional Tail Expectation (CTE) risk measure, which concluded that in short trading positions the FTSE Shari'ah All World index was riskier than the FTSE All World index, but in long trading positions the results were not conclusive as to which is riskier. Secondly, under the Markowitz model of risk and return, the performance of Islamic equity is compared to conventional equity using various Dow Jones indices. The results indicated that even though the Islamic portfolio is relatively less diversified than the conventional portfolio, due to several investment restrictions, the Shari'ah screening process excluded various industries whose absence resulted in risk reduction. As a result, the Islamic portfolio provided a basket of stocks with special and favourable risk characteristics. Lastly, copulas are used to model the dependency structure between the filtered returns of the FTSE All World and FTSE Shari'ah All World indices after fitting the AR(1) + APARCH(1,1) model with standardized skewed Student-t innovations. The t copula outperformed the others, and a demonstration of forecasting using the copula-extended model is given.
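For reference, the two tail measures used in the preceding abstract have simple empirical estimators on a sample of losses: VaR is a quantile, and CTE is the average loss beyond that quantile. A minimal sketch on simulated losses (illustrative Student-t returns, not the fitted APARCH output of the thesis):

```python
import numpy as np

def var_cte(losses, alpha=0.99):
    """Empirical Value-at-Risk and Conditional Tail Expectation at level alpha."""
    losses = np.sort(losses)
    var = np.quantile(losses, alpha)
    cte = losses[losses >= var].mean()   # average of losses in the alpha-tail
    return var, cte

rng = np.random.default_rng(2)
losses = -rng.standard_t(df=5, size=100_000) * 0.01   # losses = negated returns
print(var_cte(losses, alpha=0.99))
```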
Item: Analysis of some risk models involving dependence (University of Waterloo, 2010-08-12) by Cheung, Eric C.K.

The seminal paper by Gerber and Shiu (1998) gave a huge boost to the study of risk theory by not only unifying but also generalizing the treatment and analysis of various risk-related quantities in one single mathematical function: the Gerber-Shiu expected discounted penalty function, or Gerber-Shiu function in short. The Gerber-Shiu function is known to possess many nice properties, at least in the case of the classical compound Poisson risk model. For example, upon the introduction of a dividend barrier strategy, it was shown by Lin et al. (2003) and Gerber et al. (2006) that the Gerber-Shiu function with a barrier can be expressed in terms of the Gerber-Shiu function without a barrier and the expected value of discounted dividend payments. This result is the so-called dividends-penalty identity, and it holds true when the surplus process belongs to a class of Markov processes which are skip-free upwards. However, one stringent assumption of the model considered by the above authors is that all the interclaim times and the claim sizes are independent, which is in general not true in reality.

In this thesis, we propose to analyze the Gerber-Shiu function under various dependence structures. The main focus is the risk model where claims follow a Markovian arrival process (MAP) (see, e.g., Latouche and Ramaswami (1999) and Neuts (1979, 1989)), in which the interclaim times and the claim sizes form a chain of dependent variables. The first part of the thesis puts emphasis on certain dividend strategies. In Chapter 2, it is shown that a matrix form of the dividends-penalty identity holds true in a MAP risk model perturbed by diffusion, with the use of integro-differential equations and their solutions. Chapter 3 considers the dual MAP risk model, which is a reflection of the ordinary MAP model; a threshold dividend strategy is applied and various risk-related quantities are studied. Our methodology is based on an existing connection between the MAP risk model and a fluid queue (see, e.g., Asmussen et al. (2002), Badescu et al. (2005), Ramaswami (2006) and references therein). The use of fluid flow techniques to analyze risk processes opens the door for further research as to what types of risk models with dependence structure can be studied via probabilistic arguments. In Chapter 4, we propose to analyze the Gerber-Shiu function and some discounted joint densities in a risk model where each pair of the interclaim time and the resulting claim size is assumed to follow a bivariate phase-type distribution, with the pairs assumed to be independent and identically distributed (i.i.d.). To this end, a novel fluid flow process is constructed to ease the analysis.

In the classical Gerber-Shiu function introduced by Gerber and Shiu (1998), the random variables incorporated into the analysis include the time of ruin, the surplus prior to ruin and the deficit at ruin. The later part of this thesis focuses on generalizing the classical Gerber-Shiu function by incorporating more random variables into the so-called penalty function: the surplus level immediately after the second last claim before ruin, the minimum surplus level before ruin, and the maximum surplus level before ruin. In Chapter 5, the focus is on the generalized Gerber-Shiu function involving the first two new random variables in the context of a semi-Markovian risk model (see, e.g., Albrecher and Boxma (2005) and Janssen and Reinhard (1985)). It is shown that the generalized Gerber-Shiu function satisfies a matrix defective renewal equation, and some discounted joint densities involving the new variables are derived. Chapter 6 revisits the MAP risk model, in which the generalized Gerber-Shiu function involving the maximum surplus before ruin is examined. In this case, the Gerber-Shiu function no longer satisfies a defective renewal equation; instead, it can be expressed in terms of the classical Gerber-Shiu function and the Laplace transform of a first passage time, both of which are readily obtainable. In a MAP risk model, the interclaim time distribution must be phase-type. This leads us to propose a generalization of the MAP risk model allowing the interclaim time to have an arbitrary distribution, which is the subject matter of Chapter 7. Chapter 8 is concerned with the generalized Sparre Andersen risk model with surplus-dependent premium rate, where some ordering properties of certain ruin-related quantities are studied. Chapter 9 ends the thesis with some concluding remarks and directions for future research.
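The quantities entering the classical Gerber-Shiu penalty function, such as the time of ruin and the deficit at ruin, are easy to estimate by simulation in the classical compound Poisson model, which makes a useful sanity check against analytic results. A minimal sketch with exponential claims, where the infinite-time ruin probability has the well-known closed form psi(u) = exp(-theta*u/((1+theta)*mu))/(1+theta); the finite horizon below is a truncation used to approximate infinite time:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, mu, theta = 1.0, 1.0, 0.2            # claim rate, mean claim, premium loading
c = (1 + theta) * lam * mu                # premium rate
u, horizon, n_paths = 5.0, 300.0, 5_000

ruined, deficits = 0, []
for _ in range(n_paths):
    t, surplus = 0.0, u
    while True:
        w = rng.exponential(1 / lam)      # interclaim time
        t += w
        if t > horizon:
            break
        surplus += c * w - rng.exponential(mu)   # premiums earned minus claim
        if surplus < 0:
            ruined += 1
            deficits.append(-surplus)     # deficit at ruin
            break

print("simulated psi(u):", ruined / n_paths)
print("closed form     :", np.exp(-theta * u / ((1 + theta) * mu)) / (1 + theta))
print("mean deficit at ruin:", np.mean(deficits))
```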
Item: Coherent Distortion Risk Measures in Portfolio Selection (University of Waterloo, 2011-08-30) by Feng, Ming Bin

The theme of this thesis relates to solving optimal portfolio selection problems using linear programming, and it makes two key contributions. The first is to generalize the well-known linear optimization framework for Conditional Value-at-Risk (CVaR)-based portfolio selection problems (see Rockafellar and Uryasev (2000, 2002)) to more general risk-measure portfolio selection problems. In particular, the class of risk measures under consideration is the Coherent Distortion Risk Measure (CDRM) class, the intersection of two well-known classes of risk measures in the literature: Coherent Risk Measures (CRM) and Distortion Risk Measures (DRM). In addition to CVaR, risk measures which belong to CDRM include the Wang Transform (WT) measure, the Proportional Hazard (PH) transform measure, and the lookback (LB) distortion measure. Our generalization implies that portfolio selection problems can be solved very efficiently using the linear programming approach over a much wider class of risk measures. The second contribution is to establish the equivalence among four formulations of CDRM optimization problems: return maximization subject to a CDRM constraint, CDRM minimization subject to a return constraint, return-CDRM utility maximization, and CDRM-based Sharpe ratio maximization. Equivalence among these four formulations is established in the sense that they produce the same efficient frontier when the parameters of the corresponding problems are varied. We point out that the first three formulations have already been investigated in Krokhmal et al. (2002) under milder assumptions on risk measures (convex functionals of portfolio weights); here we apply their results to CDRM and establish the fourth equivalence. For each of these formulations, the relationship between its given parameter and the implied parameters of the other three formulations is explored. Such equivalences and relationships can help verify consistencies (or inconsistencies) of risk management exercises with different objectives and constraints, and are also helpful for uncovering the implied information of a decision-making process or of a given investment market. We conclude the thesis with two case studies that illustrate the methodologies and implementations of our linear optimization approach, verify the equivalences among the four problem formulations, and investigate the properties of different members of the CDRM class. In addition, the efficiency (or inefficiency) of the so-called 1/n portfolio strategy, in terms of the trade-off between portfolio return and portfolio CDRM, is investigated. The properties of optimal portfolios and their returns with respect to different CDRM minimization problems are compared through numerical results.
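The Rockafellar-Uryasev device cited above turns CVaR minimization over return scenarios into a plain linear program, which is the workhorse the thesis generalizes to CDRMs. A minimal sketch with scipy.optimize.linprog, assuming random scenario returns, long-only weights and a target mean return (all illustrative choices):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
S, n, alpha, target = 500, 4, 0.95, 0.005
R = rng.normal(0.006, 0.04, size=(S, n))        # scenario returns (illustrative)

# Variables: [w (n), z (1), u (S)]; minimize z + 1/((1-alpha)S) * sum(u),
# where z plays the role of VaR and u_s are scenario tail excesses.
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1 / ((1 - alpha) * S))])

# u_s >= -R_s.w - z   <=>   -R_s.w - z - u_s <= 0
A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)
# target mean return: -mean(R).w <= -target
A_ub = np.vstack([A_ub, np.concatenate([-R.mean(axis=0), [0.0], np.zeros(S)])])
b_ub = np.append(b_ub, -target)

A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)  # sum w = 1
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S             # z is free

res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
w, cvar = res.x[:n], res.fun
print("weights:", np.round(w, 3), " CVaR:", round(cvar, 4))
```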
Item: Contracting under Heterogeneous Beliefs (University of Waterloo, 2011-06-03) by Ghossoub, Mario

The main motivation behind this thesis is the lack of belief subjectivity in problems of contracting, and especially in problems of demand for insurance. The idea that an underlying uncertainty in contracting problems (e.g. an insurable loss in problems of insurance demand) is a given random variable on some exogenously determined probability space is so ingrained in the literature that one can easily forget that the notion of an objective uncertainty is only one possible approach to the formulation of uncertainty in economic theory. On the other hand, the subjectivist school led by de Finetti and Ramsey challenged the idea that uncertainty is totally objective, and advocated a personal view of probability (subjective probability). This ultimately led to Savage's approach to the theory of choice under uncertainty, where uncertainty is entirely subjective and it is only one's preferences that determine one's probabilistic assessment. It is the purpose of this thesis to revisit the "classical" insurance demand problem from a purely subjectivist perspective on uncertainty. To do so, we first examine a general problem of contracting under heterogeneous subjective beliefs and provide conditions under which we can show the existence of a solution and then characterize that solution. One such condition will be called "vigilance". We then specialize the study to the insurance framework, and characterize the solution in terms of what we call a "generalized deductible contract". Subsequently, we study some mathematical properties of collections of vigilant beliefs, in preparation for future work on the idea of vigilance. This and other envisaged future work is discussed in the concluding chapter of the thesis. In the chapter preceding the concluding chapter, we examine a model of contracting for innovation under heterogeneity and ambiguity, simply to demonstrate how the ideas and techniques developed in the first chapter can be used beyond problems of insurance demand.
Item: Convex duality in constrained mean-variance portfolio optimization under a regime-switching model (University of Waterloo, 2008-09-23) by Donnelly, Catherine

In this thesis, we solve a mean-variance portfolio optimization problem with portfolio constraints under a regime-switching model. Specifically, we seek a portfolio process which minimizes the variance of the terminal wealth, subject to a terminal wealth constraint and convex portfolio constraints. The regime switching is modeled using a continuous-time Markov chain with finite state space, and the market parameters are allowed to be random processes. The solution to this problem is of interest to investors in financial markets, such as pension funds, insurance companies and individuals. We establish the existence and characterization of the solution using a convex duality method. We encode the constraints of the given problem as static penalty functions in order to derive the primal problem. Next, we synthesize the dual problem from the primal problem using convex conjugate functions, and show that the solution to the dual problem exists. From the construction of the dual problem, we find a set of necessary and sufficient conditions for the primal and dual problems to each have a solution. Using these conditions, we can show the existence of the solution to the given problem and characterize it in terms of the market parameters and the solution to the dual problem. The results of the thesis lay the foundation for finding an actual solution to the given problem by looking at specific examples: if we can find the solution to the dual problem for a specific example, then, using the characterization of the solution, we may be able to find the actual solution to that example. In order to use the convex duality method, we have to prove a martingale representation theorem for processes which are locally square-integrable martingales with respect to the filtration generated by a Brownian motion and a continuous-time Markov chain with finite state space. This result may be of interest in problems involving regime-switching models which require a martingale representation theorem.

Item: Directional Control of Generating Brownian Path under Quasi Monte Carlo (University of Waterloo, 2012-09-10) by Liu, Kai

Quasi-Monte Carlo (QMC) methods are playing an increasingly important role in computational finance. This is attributed to the increased complexity of derivative securities and the sophistication of financial models. Simple closed-form solutions for finance applications typically do not exist, and hence numerical methods are needed to approximate their solutions; QMC has been proposed as an alternative to the Monte Carlo (MC) method to accomplish this objective. Unlike MC methods, the efficiency of QMC-based methods is highly dependent on the dimensionality of the problem. In particular, numerous studies have documented, under the Black-Scholes models, the critical role of the generating matrix for simulating the Brownian paths, and numerical results support the notion that a generating matrix which reduces the effective dimension of the underlying problem is able to increase the efficiency of QMC. Consequently, dimension reduction methods such as principal component analysis, the Brownian bridge, linear transformation and orthogonal transformation have been proposed to further enhance QMC. Motivated by these results, we first propose a new measure to quantify effective dimension. We then propose a new dimension reduction method, which we refer to as the directional method (DC). The proposed DC method has the advantage that it depends explicitly on the given function of interest. Furthermore, by assigning the direction of importance of the given function appropriately, the proposed method optimally determines the generating matrix used to simulate the Brownian paths. Because of its flexibility, many of the existing dimension reduction methods can be shown to be special cases of the proposed DC method. Finally, many numerical examples are provided to support the competitive efficiency of the proposed method.
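To make the "generating matrix" concrete: a discrete Brownian path on a time grid has covariance matrix C with entries min(t_i, t_j), and any matrix A with A A^T = C maps a vector of (quasi-)random normals to a path. The PCA construction takes A from the eigendecomposition of C, concentrating variance in the leading coordinates of the low-discrepancy point set. A minimal sketch using scipy's Sobol generator (this illustrates the standard PCA construction, not the directional method of the thesis):

```python
import numpy as np
from scipy.stats import norm, qmc

d, n, T = 32, 1024, 1.0                     # time steps, paths, horizon
t = np.linspace(T / d, T, d)
C = np.minimum.outer(t, t)                  # Cov(W_ti, W_tj) = min(ti, tj)

# PCA generating matrix: A = V sqrt(Lambda), eigenvalues sorted descending
lam, V = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]
A = V[:, order] * np.sqrt(lam[order])

sobol = qmc.Sobol(d=d, scramble=True, seed=5)
z = norm.ppf(sobol.random(n))               # map Sobol points to normals
paths = z @ A.T                             # each row is a Brownian path on the grid

print(paths.shape, paths[:, -1].std())      # std of W_T should be close to sqrt(T)
```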
Item: Economic Pricing of Mortality-Linked Securities (University of Waterloo, 2012-09-26) by Zhou, Rui

In previous research on pricing mortality-linked securities, the no-arbitrage approach is often used. However, this method, which takes market prices as given, is difficult to implement in today's embryonic market where there are few traded securities. In particular, with limited market price data, identifying a risk-neutral measure requires strong assumptions. In this thesis, we approach the pricing problem from a different angle by considering economic methods, proposing pricing approaches for both competitive and non-competitive markets. In the competitive market, we treat the pricing work as a Walrasian tâtonnement process, in which prices are determined through a gradual calibration of supply and demand. Such a pricing framework provides us with a pair of supply and demand curves. From these curves we can tell whether there will be any trade between the counterparties, and if so, at what price the mortality-linked security will be traded. This method does not require the market prices of other mortality-linked securities as input, which spares us the problems associated with the lack of market price data. We extend the pricing framework to incorporate population basis risk, which arises when a pension plan relies on standardized instruments to hedge its longevity risk exposure. This extension allows us to obtain the price and trading quantity of mortality-linked securities in the presence of population basis risk. The resulting supply and demand curves help us understand how population basis risk affects the behaviour of agents. We apply the method to a hypothetical longevity bond, using real mortality data from different populations. Our illustrations show that, interestingly, population basis risk can affect the price of a mortality-linked security in different directions, depending on the properties of the populations involved. We have also examined the impact of transitory mortality jumps on trading in a competitive market. Mortality dynamics are subject to jumps, due to events such as the Spanish flu of 1918. Such jumps can have a significant impact on the prices of mortality-linked securities, and therefore should be taken into account in modeling. Although several single-population mortality models with jump effects have been developed, they are not adequate for trades in which population basis risk exists. We therefore first develop a two-population mortality model with transitory jump effects, and then use the proposed mortality model to examine how mortality jumps may affect the supply and demand of mortality-linked securities. Finally, we model the pricing process in a non-competitive market as a bargaining game. Nash's bargaining solution is applied to obtain a unique trading contract. With no requirement of a competitive market, this approach is more appropriate for the current mortality-linked security market. We compare this approach with the other proposed pricing method, and find that both pricing methods lead to Pareto optimal outcomes.
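The distinction between transitory and permanent mortality jumps is easy to see in a toy random-walk-with-drift model of a period mortality index (in the spirit of Lee-Carter period effects; this is purely an illustration, not the two-population model developed in the thesis): a transitory jump raises mortality in the year of the event only, without shifting the long-run path.

```python
import numpy as np

rng = np.random.default_rng(6)
T, drift, sigma = 50, -0.5, 0.3      # years; downward drift = mortality improvement
p_jump, jump_mean = 0.03, 2.0        # rare adverse events (e.g. pandemics)

k = np.zeros(T)                      # latent mortality index without jump effects
for t in range(1, T):
    k[t] = k[t - 1] + drift + sigma * rng.standard_normal()

jumps = rng.binomial(1, p_jump, T) * rng.exponential(jump_mean, T)
k_observed = k + jumps               # transitory: jump affects the current year only

print("years with jumps:", np.flatnonzero(jumps > 0))
```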
Item: Efficient Procedure for Valuing American Lookback Put Options (University of Waterloo, 2007-05-22) by Wang, Xuyan

The lookback option is a well-known path-dependent option whose payoff depends on the historical extremum of prices. The thesis focuses on binomial pricing of the American floating-strike lookback put option, with payoff at time $t$ (if exercised) given by
\[ \max_{k=0, \ldots, t} S_k - S_t, \]
where $S_t$ denotes the price of the underlying stock at time $t$. Building upon the idea of Reiner, Babbs, Cheuk and Vorst (RBCV, 1992), who proposed a transformed binomial lattice model for efficient pricing of this class of option, this thesis extends and enhances their binomial recursive algorithm by exploiting additional combinatorial properties of the lattice structure. The proposed algorithm is not only computationally efficient but also significantly reduces the memory requirement. As a result, the proposed algorithm is more than 1000 times faster than the original RBCV algorithm and can compute a binomial lattice with one million time steps in less than two seconds. This algorithm enables us to extrapolate the limiting (American) option value to 4 or 5 decimal places of accuracy in real time.

Item: Estimation and allocation of insurance risk capital (University of Waterloo, 2007-05-15) by Kim, Hyun Tae

Estimating tail risk measures such as Value at Risk (VaR) and the Conditional Tail Expectation (CTE) is a vital component of financial and actuarial risk management. The CTE is a preferred risk measure, due to its coherence and widespread acceptance in the actuarial community. We focus in particular on the estimation of the CTE using both parametric and nonparametric approaches. In the parametric case, the conditional tail expectation and variance are analytically derived for the exponential distribution family and its transformed distributions. For small i.i.d. samples, the exact bootstrap (EB) and the influence function are used as nonparametric methods for estimating the bias and the variance of the empirical CTE. In particular, it is shown that the bias of the CTE estimator is corrected using the bootstrap. In variance estimation, the influence function of the bootstrapped quantile is derived, and can be used to estimate the variance of any bootstrapped L-estimator without simulation, including the VaR and the CTE, via the nonparametric delta method. An industry model is provided by applying the theoretical findings on the bias and the variance of the estimated CTE. Finally, a new capital allocation method is proposed. Inspired by the allocation of the solvency exchange option in Sherris (2006), this method resembles the CTE allocation in its form and properties, but has its own unique features, such as manager-based decomposition. Through a numerical example the proposed allocation is shown to fail the no-undercut axiom, but we argue that this axiom may not be aligned with economic reality.
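The small-sample bias of the empirical CTE, and its bootstrap correction, can be illustrated with an ordinary resampling bootstrap (note that the thesis uses the exact bootstrap, which evaluates the resampling expectation analytically rather than by simulation as below):

```python
import numpy as np

def cte(x, alpha=0.90):
    """Empirical CTE: mean of the losses at or above the alpha-quantile."""
    x = np.sort(x)
    return x[x >= np.quantile(x, alpha)].mean()

rng = np.random.default_rng(7)
sample = rng.exponential(1.0, size=50)          # small i.i.d. loss sample

# Bootstrap estimate of E[CTE_hat]; bias-corrected CTE = 2*CTE_hat - mean(boot)
boot = np.array([cte(rng.choice(sample, size=sample.size, replace=True))
                 for _ in range(5000)])
cte_hat = cte(sample)
print("empirical CTE :", round(cte_hat, 4))
print("bias-corrected:", round(2 * cte_hat - boot.mean(), 4))
```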
Item: Fee Structure and Surrender Incentives in Variable Annuities (University of Waterloo, 2014-08-05) by MacKay, Anne

Variable annuities (VAs) are investment products similar to mutual funds, but they also protect policyholders against poor market performance and other risks. They have become very popular in the past twenty years, and the guarantees they offer have grown increasingly complex. Variable annuities, also called segregated funds in Canada, can represent a challenge for insurers in terms of pricing, hedging and risk management. Simple financial guarantees expose the insurer to a variety of risks, ranging from poor market performance to changes in mortality rates and unexpected lapses. Most guarantees included in VA contracts are financed by a fixed fee, paid regularly as a fixed percentage of the value of the VA account. This fee structure is not ideal from a risk management perspective, since the resulting amount paid out of the fund increases just as most guarantees lose their value: when the account value increases, most financial guarantees fall out of the money, while the fixed percentage fee rate causes the fee amount to grow. The fixed fee rate can also become an incentive to surrender the variable annuity contract, since the policyholder pays more when the value of the guarantee is low. This incentive deserves attention because unexpected surrenders have been shown to be an important component of the risk faced by insurers that sell variable annuities (see Kling, Ruez and Russ (2014)). For this reason, it is important that surrender behaviour be taken into account when developing a risk management strategy for variable annuity contracts; however, this behaviour can be hard to model. In this thesis, we analyse the surrender incentive caused by the fixed percentage fee rate and explore different fee structures that reduce the incentive to optimally surrender variable annuity contracts. We introduce a "state-dependent" fee, paid only when the VA account value is below a certain threshold. Integral representations are presented for the price of different guarantees under the state-dependent fee structure, and partial differential equations are solved numerically to analyse the resulting impact on the surrender incentive. From a theoretical point of view, we study conditions that eliminate the incentive to surrender the VA contract optimally. We show that the fee structure can be modified to design contracts whose optimal hedging strategy is simpler and robust to different surrender behaviours. The last part of this thesis analyzes a different problem. Group self-annuitization schemes are similar to life annuities, but part, or all, of the investment and longevity risk is borne by the annuitant through periodic adjustments to annuity payments. While they may decrease the price of the annuity, these adjustments increase the volatility of the payment patterns, making the product risky for the annuitant. In the last chapter of this thesis, we analyse optimal investment strategies in the presence of group self-annuitization schemes, and show that the optimal strategies obtained by maximizing the utility of the retiree's consumption may not be optimal when analysed using different metrics.
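A state-dependent fee is easy to prototype by Monte Carlo: charge the fee only in periods where the account sits below the threshold, then compare the fee income collected against the maturity guarantee shortfall. The following is a rough sketch under Black-Scholes dynamics with illustrative parameters and no surrender; the thesis prices such contracts via integral representations and numerical PDEs, including the surrender option:

```python
import numpy as np

rng = np.random.default_rng(8)
S0 = G = 100.0                        # initial account and maturity guarantee
r, sigma, T, steps, n = 0.03, 0.2, 10, 120, 50_000
fee, threshold = 0.015, 120.0         # annual fee, charged only while F_t < threshold
dt = T / steps

F = np.full(n, S0)
fees_pv = np.zeros(n)
for k in range(steps):
    z = rng.standard_normal(n)
    F *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    charge = np.where(F < threshold, F * fee * dt, 0.0)   # state-dependent fee
    fees_pv += np.exp(-r * (k + 1) * dt) * charge
    F -= charge

shortfall_pv = np.exp(-r * T) * np.maximum(G - F, 0.0)
print("PV of fee income    :", fees_pv.mean().round(3))
print("PV of guarantee cost:", shortfall_pv.mean().round(3))
```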
Item: Financial Risk Management of Guaranteed Minimum Income Benefits Embedded in Variable Annuities (University of Waterloo, 2011-08-29) by Marshall, Claymore

A guaranteed minimum income benefit (GMIB) is a long-dated option that can be embedded in a deferred variable annuity. The GMIB is attractive because, for policyholders who plan to annuitize, it offers protection against poor market performance during the accumulation phase and adverse interest rate experience at annuitization. The GMIB also provides an upside equity guarantee that resembles the benefit provided by a lookback option. We price the GMIB and determine the fair fee rate that should be charged. Due to the long-dated nature of the option, conventional hedging methods, such as delta hedging, will be only partially successful. Therefore, we are motivated to find alternative hedging methods which are practicable for long-dated options. First, we measure the effectiveness of static hedging strategies for the GMIB. Static hedging portfolios are constructed by minimizing the Conditional Tail Expectation of the hedging loss distribution, or by minimizing the mean squared hedging loss. Next, we measure the performance of semi-static hedging strategies for the GMIB. We present a practical method for testing semi-static strategies applied to long-term options, which employs nested Monte Carlo simulations and standard optimization methods. The semi-static strategies involve periodically rebalancing the hedging portfolio at certain time intervals during the accumulation phase, such that, at the option maturity date, the hedging portfolio payoff is equal to or exceeds the option value, subject to an acceptable level of risk. While we focus on the GMIB as a case study, the methods we utilize are extendable to other types of long-dated options with similar features.

Item: Funding Liquidity and Limits to Arbitrage (University of Waterloo, 2012-06-14) by Aoun, Bassam

Arbitrageurs play an important role in keeping market prices close to their fundamental values by providing market liquidity. Most arbitrageurs, however, use leverage. When funding conditions worsen, they are forced to reduce their positions, and the resulting selling pressure depresses market prices and, in certain situations, pushes arbitrage spreads to levels exceeding many standard deviations. This phenomenon drove many century-old financial institutions into bankruptcy during the 2007-2009 financial crisis. In this thesis, we provide empirical evidence for, and demonstrate analytically, the effects of funding liquidity on arbitrage, and we discuss the implications for risk management. To conduct our empirical studies, we construct a novel Funding Liquidity Stress Index (FLSI) using principal component analysis. Its constituents are measures representing various funding channels. We study the relationship between the FLSI index and three different arbitrage strategies that we reproduce with real, daily transactional data, and show that the FLSI index has strong explanatory power for changes in arbitrage spreads and is an important source of contagion between various arbitrage strategies. In addition, we perform "event studies" surrounding events of changing margin requirements on futures contracts. The "event studies" provide empirical evidence supporting important assumptions and predictions of various theoretical work on market micro-structure. Next, we explain the mechanism through which funding liquidity affects arbitrage spreads. To do so, we study the liquidity risk premium in a market micro-structure framework where market prices are determined by the supply and demand of securities. We extend the model developed by Brunnermeier and Pedersen (2009) to multiple periods and generalize their work by considering all market participants to be risk-averse. We further decompose the liquidity risk premium into two components: 1) a fundamental risk premium and 2) a systemic risk premium. The fundamental risk premium compensates market participants for providing liquidity in a security whose fundamental value is volatile, while the systemic risk premium compensates them for taking positions in a market that is vulnerable to funding liquidity. The first component is therefore related to the nature of the security, while the second is related to the fragility of the market micro-structure (such as the leverage of market participants and margin-setting mechanisms).
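Index construction of the FLSI type can be sketched in a few lines: standardize the funding-channel series and take the first principal component as the common stress factor. The sketch below uses synthetic data in place of the actual funding measures used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(9)
T, k = 1000, 5
common = rng.standard_normal(T)                    # latent funding-stress factor
X = np.outer(common, rng.uniform(0.5, 1.0, k)) + 0.5 * rng.standard_normal((T, k))

Z = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize each measure
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
w = eigvec[:, -1]                                  # loadings of the first PC
w *= np.sign(w.sum())                              # PC sign is arbitrary; fix it so
flsi = Z @ w                                       # higher index = more stress

print("variance explained:", round(eigval[-1] / eigval.sum(), 3))
```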
Item: A Generalization of the Discounted Penalty Function in Ruin Theory (University of Waterloo, 2008-08-21) by Feng, Runhuan

As ruin theory has evolved in recent years, a variety of quantities pertaining to an insurer's bankruptcy have been at the centre of focus in the literature. Despite the fact that these quantities are distinct from each other, many solution methods apply to nearly all ruin-related quantities. Such a peculiar similarity among their solution methods inspired us to search for a general form that reconciles these seemingly different ruin-related quantities. The stochastic approach proposed in the thesis addresses such issues and contributes to the current literature in three major directions. (1) It provides a new function that unifies many existing ruin-related quantities and produces new quantities of potential use in both practice and academia. (2) It applies generally to a vast majority of risk processes and permits the consideration of combined effects of investment strategies, policy modifications, etc., which were either impossible or difficult tasks using traditional approaches. (3) It gives a shortcut to the derivation of intermediate solution equations. In addition to its efficiency, the new approach also leads to a standardized procedure for coping with various situations. The thesis covers a wide range of ruin-related and financial topics while developing the unifying stochastic approach. Not only does it attempt to provide insights into the unification of quantities in ruin theory, it also seeks to extend its applications to other related areas.

Item: Gerber-Shiu analysis in some dependent Sparre Andersen risk models (University of Waterloo, 2010-08-11) by Woo, Jae-Kyung

In this thesis, we consider a generalization of the classical Gerber-Shiu function in various risk models. The generalization involves the introduction of two new variables into the original penalty function, which already incorporates the surplus prior to ruin and the deficit at ruin. These new variables are the minimum surplus level before ruin and the surplus immediately after the second last claim before ruin. Although these quantities cannot be observed until ruin occurs, we can still identify their distributions in advance, because they do not depend functionally on the time of ruin, but only on known quantities, including the initial surplus allocated to the business. Therefore, ruin-related quantities obtained by incorporating the four variables in the generalized Gerber-Shiu function can aid our understanding of the analysis of the random walk and the resulting risk management. In Chapter 2, we demonstrate that the generalized Gerber-Shiu functions satisfy a defective renewal equation in terms of the compound geometric distribution in the ordinary (continuous-time) Sparre Andersen renewal risk model. As a result, forms of the joint and marginal distributions associated with the variables in the generalized penalty function are derived for an arbitrary distribution of interclaim/interarrival times. Because the identification of the compound geometric components is difficult without specific conditions on the interclaim times, in Chapter 3 we consider the special case where the interclaim time distribution is from the Coxian class, as well as the classical compound Poisson model. Note that the analysis of the generalized Gerber-Shiu function involving three variables (the classical two variables and the surplus after the second last claim) is sufficient for the study of the four-variable case. This remains true even when the interclaim time preceding the first event is assumed to differ from the subsequent interclaim times (i.e. delayed renewal risk models, Chapter 4), or when the counting (number of claims) process is defined in discrete time (i.e. discrete renewal risk models, Chapter 5). In Chapter 6, two-sided bounds for a renewal equation are studied. These results may be used in many cases related to the various ruin quantities arising from the generalized Gerber-Shiu function analyzed in the previous chapters. A larger number of iterations in computing the bound produces a result closer to the exact value. However, for the non-exponential bound, the form of the bound contains a convolution involving a usually heavy-tailed distribution (e.g. heavy-tailed claims, extreme events), so an alternative method is needed to reinforce the convolution computation in this case.
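The compound geometric structure underlying these defective renewal equations is directly computable: in the classical compound Poisson model, the ruin probability is the tail of a geometric sum of ladder heights, which can be evaluated with Panjer's recursion after discretizing the ladder-height distribution. A minimal sketch with exponential claims, for which the exact benchmark is psi(u) = phi * exp(-(1 - phi) * u / mu) with phi = 1/(1 + theta):

```python
import numpy as np

mu, theta, h, u_max = 1.0, 0.2, 0.01, 10.0
phi = 1 / (1 + theta)                 # probability of at least one ladder height

# Discretize the ladder-height (equilibrium) distribution; for exponential
# claims it is again exponential with mean mu. Rounding-type discretization:
grid = np.arange(0, int(u_max / h) + 1)
cdf = 1 - np.exp(-(grid + 0.5) * h / mu)
f = np.diff(np.concatenate([[0.0], cdf]))   # f[0]=F(h/2), f[j]=F((j+.5)h)-F((j-.5)h)

# Panjer recursion for the geometric compound distribution
g = np.zeros_like(f)
g[0] = (1 - phi) / (1 - phi * f[0])
for s in range(1, len(g)):
    g[s] = phi * np.dot(f[1:s + 1], g[s - 1::-1]) / (1 - phi * f[0])

u = 5.0
psi_panjer = 1 - g[: int(u / h) + 1].sum()   # ruin prob = compound geometric tail
psi_exact = phi * np.exp(-(1 - phi) * u / mu)
print(round(psi_panjer, 5), round(psi_exact, 5))
```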