Numerical Methods for Optimal Stochastic Control in Finance
In this thesis, we develop numerical methods based on partial differential equations (PDEs) to solve certain optimal stochastic control problems in finance. The value of a stochastic control problem is normally the viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation or of an HJB variational inequality: the HJB equation arises when the controls are bounded, while the HJB variational inequality arises in the unbounded control case. Consequently, the solution to the stochastic control problem can be computed by solving the corresponding HJB equation or variational inequality, provided convergence to the viscosity solution is guaranteed.

We develop a unified numerical scheme based on semi-Lagrangian timestepping that handles both the bounded and unbounded stochastic control problems, as well as the discrete cases where controls are allowed only at discrete times. Our scheme has the following useful properties: it is unconditionally stable; it can be rigorously shown to converge to the viscosity solution; it easily accommodates various stochastic models, such as jump-diffusion and regime-switching models; and it avoids the policy-type iteration at each mesh node at each timestep that standard implicit finite difference methods require.

We demonstrate the properties of our scheme by valuing natural gas storage facilities, a bounded stochastic control problem, and by pricing variable annuities with guaranteed minimum withdrawal benefits (GMWBs), an unbounded stochastic control problem. In particular, we use an impulse control formulation for the unbounded problem and show that it is more general than the singular control formulation previously used to price GMWB contracts.
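To make the idea of semi-Lagrangian timestepping concrete, the following is a minimal illustrative sketch (not the thesis code) for a simple one-dimensional HJB terminal-value problem V_t + max_c {c V_x} + (sigma^2/2) V_xx = 0 with a finite control set. The controlled drift term is treated by interpolating along characteristics, so the maximization at each node is a direct comparison over the discrete controls rather than a policy-type iteration, and the diffusion term is handled implicitly, which is what gives unconditional stability. All function names, the model, and the parameter values here are illustrative assumptions.

```python
import numpy as np

def solve_hjb_semi_lagrangian(g, controls, sigma, T, x, nt):
    """Backward-in-time semi-Lagrangian scheme for
    V_t + max_c { c V_x } + (sigma^2/2) V_xx = 0,  V(x, T) = g(x)."""
    dx = x[1] - x[0]
    dt = T / nt
    n = len(x)

    # Implicit diffusion operator (I - dt * (sigma^2/2) * D2),
    # where D2 is the standard second-difference matrix.
    A = np.eye(n)
    coef = dt * 0.5 * sigma**2 / dx**2
    for i in range(1, n - 1):
        A[i, i - 1] -= coef
        A[i, i] += 2 * coef
        A[i, i + 1] -= coef
    # First/last rows stay as identity: boundary values are taken
    # directly from the advected data (a crude closure for this sketch).

    V = g(x)  # terminal condition
    for _ in range(nt):
        # Semi-Lagrangian step: follow the characteristic x -> x + c*dt
        # for every control and keep the best interpolated value.
        W = np.max([np.interp(x + c * dt, x, V) for c in controls], axis=0)
        # Implicit (unconditionally stable) diffusion step.
        V = np.linalg.solve(A, W)
    return V

x = np.linspace(-2.0, 2.0, 401)
V0 = solve_hjb_semi_lagrangian(
    g=lambda x: x, controls=(-1.0, 0.0, 1.0),
    sigma=0.2, T=0.5, x=x, nt=50,
)
# With g(x) = x the optimal control is c = 1, and away from the
# boundary the exact value at t = 0 is V(x, 0) = x + T.
```

Note that the timestep size dt is not constrained by the mesh spacing dx; the implicit diffusion solve and the interpolation-based advection step keep the iteration stable for any dt, which is the practical payoff of this construction.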