University of Waterloo > Electronic Theses and Dissertations (UW)
Title: Numerical Methods for Pricing a Guaranteed Minimum Withdrawal Benefit (GMWB) as a Singular Control Problem
Authors: Huang, Yiqing
Keywords: singular stochastic control, HJB equation, GMWB, iterative methods, scaled direct control method, penalty method, fixed point policy iteration, policy iteration, jump diffusion, inexact arithmetic
Approved Date: 23-Aug-2011
Date Submitted: 2011
Abstract: Guaranteed Minimum Withdrawal Benefits (GMWB) have become popular riders on variable annuities. The pricing of a GMWB contract was originally formulated as a singular stochastic control problem, which results in a Hamilton-Jacobi-Bellman (HJB) Variational Inequality (VI). A penalty method can then be used to solve the HJB VI. We present a rigorous proof of convergence of the penalty method to the viscosity solution of the HJB VI, assuming the underlying asset follows a Geometric Brownian Motion. A direct control method is an alternative formulation for the HJB VI. We also extend the HJB VI to the case where the underlying asset follows a Poisson jump diffusion.
The HJB VI is normally solved numerically by an implicit method, which gives rise to highly nonlinear discretized algebraic equations. The classic policy iteration approach works well for the Geometric Brownian Motion case. However, it is not efficient in some circumstances, such as when the underlying asset follows a Poisson jump diffusion process. We develop a combined fixed point policy iteration scheme which significantly increases the efficiency of solving the discretized equations. Sufficient conditions to ensure the convergence of the combined fixed point policy iteration scheme are derived for both the penalty method and the direct control method.
The GMWB formulated as a singular control problem has a special structure which results in a block matrix fixed point policy iteration converging about one order of magnitude faster than a full matrix fixed point policy iteration. Sufficient conditions for convergence of the block matrix fixed point policy iteration are derived. Estimates for bounds on the penalty parameter (penalty method) and scaling parameter (direct control method) are obtained so that convergence of the iteration can be expected in the presence of round-off error.
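As an illustrative sketch only (not the thesis's discretization or its GMWB pricer), the penalty method combined with policy iteration described in the abstract can be demonstrated on a one-dimensional obstacle problem: the constrained problem min(Av - b, v - g) = 0 is approximated by adding a large penalty rho wherever the constraint v >= g is violated, and each policy iteration fixes the penalized set and solves one linear system. The function name, grid, and obstacle below are invented for the example.

```python
import numpy as np

def penalized_policy_iteration(A, b, g, rho=1e6, max_iter=100):
    """Approximately solve min(A v - b, v - g) = 0 via a penalty term.

    Each outer iteration fixes the set of penalized nodes (the 'policy')
    and solves one linear system; iteration stops once that set repeats.
    Illustrative sketch only; rho controls the penalty error O(1/rho).
    """
    n = len(b)
    v = np.linalg.solve(A, b)                 # unconstrained initial guess
    active = np.zeros(n, dtype=bool)
    for _ in range(max_iter):
        new_active = v < g                    # penalize where v violates the obstacle
        if np.array_equal(new_active, active):
            break                             # policy unchanged: converged
        active = new_active
        P = np.diag(rho * active.astype(float))
        v = np.linalg.solve(A + P, b + P @ g)
    return v

# Toy obstacle problem: min(-v'', v - g) = 0 on (0, 1), v(0) = v(1) = 0.
n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # standard -v'' finite difference stencil
b = np.zeros(n)
g = 0.4 - 4.0 * (x - 0.5)**2                 # parabolic obstacle, active mid-domain
v = penalized_policy_iteration(A, b, g)
```

Because the discrete operator A is an M-matrix here, the penalized iteration stabilizes on a fixed active set after finitely many solves; the solution satisfies the obstacle constraint up to an O(1/rho) penalty error, mirroring the role of the penalty-parameter bounds discussed in the abstract.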
Program: Computer Science
Department: School of Computer Science
Degree: Doctor of Philosophy
Appears in Collections: Electronic Theses and Dissertations (UW)
Faculty of Mathematics Theses and Dissertations
All items in UWSpace are protected by copyright, with all rights reserved.