Browsing by Author "Weng, Chengguo"

Now showing 1 - 10 of 10
  • Applications of Stochastic Control to Portfolio Selection Problems
    (University of Waterloo, 2018-10-16) Lin, Hongcan; Saunders, David; Weng, Chengguo
    Portfolio selection is an important problem both in academia and in practice. Due to its significance, it has received great attention and motivated a large amount of research. This thesis is devoted to structuring optimal portfolios using different criteria. Participating contracts are popular insurance policies, in which the payoff to a policyholder is linked to the performance of a portfolio managed by the insurer. In Chapter 2, we consider the portfolio selection problem of an insurer that offers participating contracts and has an S-shaped utility function. Applying the martingale approach, we obtain closed-form solutions. The resulting optimal strategies are compared with two portfolio insurance hedging strategies, namely the Constant Proportion Portfolio Insurance (CPPI) strategy and the Option-Based Portfolio Insurance (OBPI) strategy. We also study numerical solutions of the portfolio selection problem with constraints on the portfolio weights. In Chapter 3, we consider the portfolio selection problem of maximizing a performance measure in a continuous-time diffusion model. The performance measure is the ratio of the overperformance to the underperformance of a portfolio relative to a benchmark. Following a strategy from fractional programming, we analyze the problem by solving a family of related problems, where the objective functions are the numerator of the original problem minus the denominator multiplied by a penalty parameter. These auxiliary problems can be solved using the martingale method for stochastic control. The existence of a solution is discussed in a general setting, and explicit solutions are derived when both the reward and the penalty functions are power functions. In Chapter 4, we consider the mean-risk portfolio selection problem of optimizing the expectile risk measure in a continuous-time diffusion model. Due to the lack of an explicit form for expectiles and their close relationship with the Omega measure, we propose an alternative optimization problem with the Omega measure as an objective and show the equivalence between the two problems. After showing that the solution to the mean-expectile problem is not attainable even though the value function is finite, we modify the problem with an upper bound constraint imposed on the terminal wealth and obtain the solution via the Lagrangian duality method and a pointwise optimization technique. The global expectile-minimizing portfolio and the efficient frontier are also considered in our analysis. In Chapter 5, we consider the utility-based portfolio selection problem in a continuous-time setting. We assume the market price of risk depends on a stochastic factor that satisfies an affine-form, square-root, Markovian model. This financial market framework includes the classical geometric Brownian motion, the constant elasticity of variance (CEV) model, and Heston's model as special cases. Adopting the Backward Stochastic Differential Equation (BSDE) approach, we obtain closed-form solutions for power, logarithmic, and exponential utility functions. Concluding remarks and several potential topics for further research are presented in Chapter 6.
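    The fractional-programming device sketched for Chapter 3 can be written schematically as follows, using generic notation (X_T^pi for terminal wealth under strategy pi, B for the benchmark, u and l for the reward and penalty functions). This is an illustrative rendering only; the thesis's exact formulation and technical conditions may differ.

```latex
% Schematic only: generic notation, not necessarily the thesis's exact setup.
% Ratio objective (overperformance over underperformance relative to benchmark B):
\[
  \sup_{\pi}\;
  \frac{\mathbb{E}\!\left[\,u\big((X_T^{\pi}-B)^{+}\big)\right]}
       {\mathbb{E}\!\left[\,l\big((B-X_T^{\pi})^{+}\big)\right]} .
\]
% Auxiliary family indexed by a penalty parameter \lambda \ge 0
% (numerator minus \lambda times denominator):
\[
  F(\lambda) \;=\; \sup_{\pi}\;
  \Big\{ \mathbb{E}\!\left[\,u\big((X_T^{\pi}-B)^{+}\big)\right]
         \;-\; \lambda\,\mathbb{E}\!\left[\,l\big((B-X_T^{\pi})^{+}\big)\right] \Big\},
\]
% and, in a Dinkelbach-type scheme, the optimal ratio is the root \lambda^* of
% F(\lambda^*) = 0, with each auxiliary problem amenable to the martingale method.
```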
  • Climate Change Risk in Stock Markets
    (University of Waterloo, 2020-01-20) Jiang, Ruihong; Weng, Chengguo
    Climate change is becoming a common threat to the world and has been studied by scholars in various fields. In the field of finance, many papers have discussed financial market efficiency toward climate change in order to better manage the related risks. Our work focuses on the topic of climate change risk in the stock market. We use long-term trends of a newly released climate index, the Actuaries Climate Index (ACI), as proxies for climate change risk. As a type of production risk, ACI trends have an adverse impact on agricultural production and on the corporate profitability of agriculture-related companies. We find that climate change risk has significant predictive power for corporate profits. This motivates us to further test whether the ACI also predicts stock returns. We construct a stock trading strategy that adjusts to climate change risk. With a one-year holding period, our zero-initial-cost, non-overlapping strategy earns positive returns over a 26-year test period. The outperformance suggests that the ACI has predictive ability and creates potential arbitrage opportunities in the stock market. Thus, the stock market is believed to be inefficient toward climate change risk. We obtain similar results and conclusions for different versions and extensions of the non-overlapping strategy. However, these conclusions no longer hold when we look at strategy returns over shorter periods. From subsample tests, we find that our strategy performs remarkably well, earning abnormally positive returns, before 2015, but the predictability of stock returns degenerates quickly over a short period of time in 2017. This "overturn" of market inefficiency highlights the importance of follow-up studies, and we suggest that future research be devoted to uncovering further evidence on market efficiency and on how climate events affect investors' attention to climate change risk.
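    A minimal, hypothetical sketch of the kind of predictive regression the abstract alludes to: regressing next-period returns on a lagged climate-index trend and testing the slope. The data, variable names, and the way the ACI trend is built here are placeholders; the thesis constructs the index trends and trading strategy quite differently.

```python
# Hypothetical predictive regression: r_{t+1} = a + b * trend_t + e_{t+1}.
# All data below are simulated placeholders, not the ACI or actual stock returns.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 26 * 12                                        # monthly observations, 26 years
aci_trend = np.cumsum(rng.normal(0.01, 0.1, T))    # placeholder long-term ACI trend
returns = -0.02 * aci_trend + rng.normal(0, 0.05, T)  # placeholder return series

X = sm.add_constant(aci_trend[:-1])                # lagged trend as predictor
res = sm.OLS(returns[1:], X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
print(res.params, res.pvalues)                     # slope and its significance
```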
  • Estimation risk and optimal combined portfolio strategies
    (University of Waterloo, 2024-08-13) Huang, Zhenzhen; Weng, Chengguo; Wei, Pengyu
    The traditional Mean-Variance (MV) framework of Markowitz (1952) has been the foundation of numerous research works for many years, benefiting from its mathematical tractability and intuitive clarity for investors. However, a significant limitation of this framework is its dependence on the mean vector and covariance matrix of asset returns, which are generally unknown and have to be estimated using historical data. The resulting plug-in portfolio, which uses these estimates instead of the true parameter values, often exhibits poor out-of-sample performance due to estimation risk. A considerable amount of research proposes various sophisticated estimators for these two unknown parameters or introduces portfolio constraints and regularizations. In this thesis, however, we focus on an alternative approach to mitigate estimation risk by utilizing combined portfolios and directly optimizing the expected out-of-sample performance. We review the relevant literature and present essential preliminary discussions in Chapter 1. Building on this, we introduce three distinct perspectives in portfolio selection, each aimed at assessing the efficiency of combined portfolios in managing estimation risk. These perspectives guide the detailed examination of research projects presented in the subsequent three chapters of the thesis. Chapter 2 discusses the Tail Mean-Variance (TMV) portfolio selection with estimation risk. The TMV risk measure has emerged from the actuarial community as a criterion for risk management and portfolio selection, with a focus on extreme losses. The existing literature on portfolio optimization under the TMV criterion relies on the plug-in approach, which introduces estimation risk and leads to significant deterioration in the out-of-sample portfolio performance. To address this issue, we propose a combination of the plug-in and 1/N rules and optimize its expected out-of-sample performance. Our study is based on the Mean-Variance-Standard-Deviation (MVS) performance measure, which encompasses the TMV, classical MV, and Mean-Standard-Deviation (MStD) as special cases. The MStD criterion is particularly relevant to mean-risk portfolio selection when risk is assessed using quantile-based risk measures. Our proposed combined portfolio consistently outperforms the plug-in MVS and 1/N portfolios in both simulated and real-world datasets. Chapter 3 focuses on Environmental, Social, and Governance (ESG) investing with estimation risk taken into account. Recently, there has been a significant increase in the commitment of institutional investors to responsible investment. We explore an ESG-constrained framework that integrates the ESG criteria into decision-making processes, aiming to enhance risk-adjusted returns by ensuring that the total ESG score of the portfolio meets a specified target. The optimal ESG portfolio satisfies a three-fund separation. However, similar to the traditional MV portfolio, the practical application of the optimal ESG portfolio often encounters estimation risk. To mitigate estimation risk, we introduce a combined three-fund portfolio comprising components corresponding to the plug-in ESG portfolio, and we derive the optimal combination coefficients under the expected out-of-sample MV utility optimization, incorporating either an inequality or equality constraint on the expected total ESG score of the portfolio. Both simulation and empirical studies indicate that the implementable combined portfolio outperforms the plug-in ESG portfolio.
    Chapter 4 introduces a novel Winning Probability Weighted (WPW) framework for constructing combined portfolios from any pair of constituent portfolios. This framework is centered around the concept of winning probability, which evaluates the likelihood that one constituent portfolio will outperform another in terms of out-of-sample returns. To ensure comparability, the constituent portfolios are adjusted to align with their long-term risk profiles. We utilize machine learning techniques that incorporate financial market factors alongside historical asset returns to estimate the winning probabilities, which are then taken as the combination coefficients for the combined portfolio. Additionally, we optimize the expected out-of-sample MV utility of the combined portfolio to enhance its performance. Extensive empirical studies demonstrate the superiority of the proposed WPW approach over existing analytical methods in terms of certainty equivalent return across various scenarios. Finally, Chapter 5 summarizes the thesis and outlines potential directions for further research.
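    As a toy illustration of the combination idea in Chapters 2 and 3 (mixing a plug-in portfolio with the 1/N rule and choosing the mixing coefficient by expected out-of-sample mean-variance utility), the sketch below uses simulated data and a simple grid search. The thesis derives the optimal coefficients analytically; every name and number here is purely illustrative.

```python
# Stylized sketch: combine a plug-in mean-variance portfolio with 1/N and pick
# the combination coefficient that maximizes average out-of-sample MV utility
# over Monte Carlo estimation samples.  Not the thesis's analytical derivation.
import numpy as np

rng = np.random.default_rng(1)
p, n, gamma = 10, 120, 3.0                         # assets, sample size, risk aversion
mu_true = rng.normal(0.08, 0.03, p) / 12
Sigma_true = np.diag(rng.uniform(0.15, 0.35, p) ** 2) / 12

def mv_weights(mu, Sigma, gamma):
    return np.linalg.solve(Sigma, mu) / gamma      # unconstrained MV rule

def utility(w, mu, Sigma, gamma):
    return w @ mu - 0.5 * gamma * w @ Sigma @ w

w_ew = np.ones(p) / p
grid = np.linspace(0.0, 1.0, 21)
avg_u = np.zeros_like(grid)
for _ in range(500):                               # Monte Carlo over estimation samples
    R = rng.multivariate_normal(mu_true, Sigma_true, n)
    w_plug = mv_weights(R.mean(0), np.cov(R, rowvar=False), gamma)
    for j, c in enumerate(grid):
        w = c * w_plug + (1 - c) * w_ew            # combined portfolio
        avg_u[j] += utility(w, mu_true, Sigma_true, gamma) / 500

print("best combination coefficient:", grid[np.argmax(avg_u)])
```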
  • Mortality Prediction using Statistical Learning Approaches
    (University of Waterloo, 2022-11-21) Meng, Yechao; Weng, Chengguo; Diao, Liqun
    Longevity risk, as one of the major risks faced by insurers, has triggered a heated stream of research in mortality modeling among actuaries for the effective design, pricing, and risk management of insurance products. The idea of borrowing a "proper" amount of information from populations with similar structures, widely acknowledged as an effective strategy to enhance the accuracy of mortality prediction for a target population, has been explored and utilized by the actuarial community. However, determining a "proper" amount of information amounts to a trade-off between the gains from including relevant signals and the adverse impacts of bringing in irrelevant noise. Conventional solutions for determining a "proper" amount of information resort to multiple sources of exogenous data and involve substantial manual "feature engineering" work, without guaranteeing an improvement in prediction accuracy. Therefore, in this thesis, we design fully data-driven frameworks that, with the assistance of various statistical learning approaches, effectively screen out useful hidden information from different aspects to enhance the accuracy of mortality rate prediction. First, Chapter 2 sheds light on how to select a "proper" group of populations from a given pool so that a multi-population mortality model yields improved mortality prediction accuracy. We design a fully data-driven framework, based on a Deletion-Substitution-Addition algorithm, to automatically recommend a group of populations for joint modeling through a multi-population model in order to obtain enhanced prediction accuracy. The procedure avoids the excessive involvement of subjective decisions in the group selection. The superiority of the proposed framework in mortality prediction performance is evidenced by extensive numerical studies comparing it with several conventional strategies for the population selection problem. Chapter 3 also focuses on how to effectively borrow information from a given pool of populations to enhance mortality prediction accuracy in a computationally efficient manner. In this chapter, we propose a bivariate-model-based ensemble framework to aggregate predictions that use the joint information from each pair of populations in the given pool. In addition, we introduce a time-shift parameter into the base-learner mortality model for extra flexibility. This additional parameter characterizes the time by which one population is ahead of or behind the other in their mortality development stages and allows for borrowing information from populations at disparate mortality development stages. The results of the empirical studies confirm the effectiveness of the proposed framework. In Chapter 4, we extend the idea of borrowing information by changing the scope of consideration from populations to ages. We provide insights into detecting similarities in age-specific mortality patterns among ages and borrowing the information hidden in those similarities. We propose a novel prediction framework in which the overall prediction goal is decomposed into multiple individual tasks, each searching for an age-specific age band so that the mortality prediction for each target age benefits from borrowing information across ages to the largest extent.
    Extensive empirical studies with the Human Mortality Database confirm noticeable differences across target ages in how they borrow information from other ages. Those empirical studies also confirm an overall improvement in the prediction accuracy of the proposed framework for most ages, especially for the adult and retiree groups. In Chapter 5, information across different ages and different populations is considered simultaneously. We extend the idea of borrowing information among ages to multi-population cases and propose three different approaches: a distance-based approach, an ensemble-based approach, and an ACF model-based approach. Empirical studies with real mortality data are conducted to compare their prediction performance and to assess the significance of their accuracy improvements over several benchmark models. Additionally, several general stylized facts about how the distance-based method borrows from ages across multiple populations are provided. Finally, Chapter 6 briefly outlines some research directions worth further exploration, building on the momentum of each chapter, as well as some research ideas that are less closely related to the previous chapters.
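    A toy rendering of the age-band idea from Chapter 4, on synthetic data: for a target age, candidate bands of neighbouring ages are scored by validation forecast error of a deliberately simple drift model, and the best band half-width is kept. The base model, the data, and the selection criterion here are placeholders and much simpler than the thesis's framework.

```python
# Toy age-band selection on a synthetic log-mortality surface: pool annual
# improvements over a band of neighbouring ages, forecast the target age with
# a random-walk-with-drift, and keep the band with the lowest validation MSE.
import numpy as np

rng = np.random.default_rng(2)
ages, years = np.arange(40, 91), np.arange(1980, 2020)
A, T = len(ages), len(years)
log_m = (-9.0 + 0.09 * (ages[:, None] - 40)          # age level
         - 0.015 * (years[None, :] - 1980)            # common improvement trend
         + rng.normal(0, 0.03, (A, T)))               # noise

target = 25                                            # index of the target age
train, valid = slice(0, T - 5), slice(T - 5, T)

def forecast(half_width):
    lo, hi = max(0, target - half_width), min(A, target + half_width + 1)
    drift = np.diff(log_m[lo:hi, train], axis=1).mean()   # pooled annual drift
    last = log_m[target, train][-1]
    steps = np.arange(1, T - train.stop + 1)
    return last + drift * steps

errors = {h: np.mean((forecast(h) - log_m[target, valid]) ** 2) for h in range(11)}
print("selected band half-width:", min(errors, key=errors.get))
```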
  • Mortality prediction via age-specific band selection
    (Taylor & Francis, 2025) Meng, Yechao; Diao, Liqun; Weng, Chengguo
    A novel mortality prediction framework, age-specific band selection, is proposed to borrow information from "neighboring" ages and train prediction models tailored for each individual age in a mortality table. This framework is further extended to borrow information across multiple populations through two proposed approaches: a distance-based approach and an ACF model-based approach. Extensive empirical studies with the Human Mortality Database are conducted to illustrate the enhanced prediction accuracy achieved by these methods.
  • Numerical Solutions to Stochastic Control Problems: When Monte Carlo Simulation Meets Nonparametric Regression
    (University of Waterloo, 2019-07-30) Shen, Zhiyi; Weng, Chengguo
    The theme of this thesis is to develop theoretically sound as well as numerically efficient Least Squares Monte Carlo (LSMC) methods for solving discrete-time stochastic control problems motivated by applications in insurance and finance. Despite its popularity in solving optimal stopping problems, the application of the LSMC method to stochastic control problems is hampered by several challenges. Firstly, the simulation of the state process is intricate when the optimal control policy is not known a priori. Secondly, numerical methods only guarantee the approximation accuracy of the value function over a bounded domain, which is incompatible with the unbounded set in which the state variable dwells. Thirdly, given a considerable number of simulated paths, regression methods are computationally challenging. This thesis responds to the above problems. Chapter 2 develops a novel LSMC algorithm to solve discrete-time stochastic optimal control problems, referred to as the Backward Simulation and Backward Updating (BSBU) algorithm. The BSBU algorithm has three pillars: the construction of an auxiliary stochastic control model, an artificial simulation of the post-action value of the state process, and a shape-preserving sieve estimation method, which together equip the algorithm with a number of merits, including obviating forward simulation and control randomization, avoiding extrapolation of the value function, and alleviating the computational burden of tuning parameter selection. Chapter 3 proposes an alternative LSMC algorithm which directly approximates the optimal value function at each time step instead of the continuation function. This brings the benefits of a faster convergence rate and closed-form expressions for the value function compared with the previously developed BSBU algorithm. We also develop a general argument for constructing an auxiliary stochastic control problem which inherits the continuity, monotonicity, and concavity of the original problem. This argument allows the LSMC algorithm to circumvent extrapolating the value function in the backward recursion and adapts well to other numerical methods. Chapter 4 studies a complicated stochastic control problem: the no-arbitrage pricing of the "Polaris Choice IV" variable annuities issued by the American International Group. The Polaris allows the income base to lock in the high-water mark of the investment account over a certain monitoring period, which is related to the timing of the policyholder's first withdrawal. By prudently introducing certain auxiliary state and control variables, we formulate the pricing problem within a Markovian stochastic optimal control framework. With a slight modification to the fee structure, we prove the existence of a bang-bang solution to the stochastic control problem: the policyholder's optimal withdrawal strategy is limited to a few choices. Accordingly, the price of the modified contract can be computed by the BSBU algorithm. Finally, we prove that the price of the modified contract is an upper bound for that of the Polaris with the real fee structure. Numerical experiments show that this bound is fairly tight.
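    The regression-based backward recursion that LSMC methods build on is easiest to see in the classical Longstaff-Schwartz algorithm for optimal stopping. The sketch below prices a Bermudan put that way; it is not the BSBU algorithm of Chapter 2, only an illustration of the regress-then-step-backward structure the thesis extends to control problems.

```python
# Classical Longstaff-Schwartz least-squares Monte Carlo for a Bermudan put.
# Illustrative of the regression-based backward recursion only.
import numpy as np

rng = np.random.default_rng(3)
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.03, 0.2, 1.0, 50, 20000
dt = T / steps
disc = np.exp(-r * dt)

# simulate geometric Brownian motion paths
Z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
S = np.hstack([np.full((paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                         # value if held to maturity
for t in range(steps - 1, 0, -1):
    cash *= disc                                # discount one step back
    itm = payoff(S[:, t]) > 0
    if itm.any():
        # regress continuation values on a polynomial basis of the state
        X = np.vander(S[itm, t], 4)
        beta, *_ = np.linalg.lstsq(X, cash[itm], rcond=None)
        exercise = payoff(S[itm, t]) > X @ beta
        cash[itm] = np.where(exercise, payoff(S[itm, t]), cash[itm])

print("Bermudan put LSMC price:", disc * cash.mean())
```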
  • Risk Management with Basis Risk
    (University of Waterloo, 2018-06-19) Zhang, Jingong; Tan, Ken Seng; Weng, Chengguo
    Basis risk occurs naturally in a variety of financial and actuarial applications, and it introduces additional complexity to risk management problems. Current literature on quantifying and managing basis risk is still quite limited, and one class of important questions that remains open is how to conduct effective risk mitigation when basis risk is involved and perfect hedging is either impossible or too expensive. The theme of this thesis is to study risk management problems in the presence of basis risk under three settings: 1) hedging equity-linked financial derivatives; 2) hedging longevity risk; and 3) index insurance design. First, we consider the problem of hedging a vanilla European option using a liquidly traded asset which is not the underlying asset but is correlated with the underlying, and we investigate the optimal construction of a hedging portfolio involving such an asset. The mean-variance criterion is adopted to evaluate the hedging performance, and a subgame Nash equilibrium is used to define the optimal solution. The problem is solved by resorting to a dynamic programming procedure and a change-of-measure technique. A closed-form optimal control process is obtained under a general diffusion model. The solution we obtain is highly tractable and, to the best of our knowledge, this is the first analytical solution for dynamic hedging of general vanilla European options with basis risk under the mean-variance criterion. Examples of hedging European call options are presented to demonstrate the feasibility and importance of our optimal hedging strategy in the presence of basis risk. We then explore the problem of optimal dynamic longevity hedging. From a pension plan sponsor's perspective, we study dynamic hedging strategies for longevity risk using standardized securities in a discrete-time setting. The hedging securities are linked to a population which may differ from the underlying population of the pension plan, and thus basis risk arises. Drawing on the technique of dynamic programming, we develop a framework which allows us to obtain analytical optimal dynamic hedging strategies that achieve the minimum variance of the hedging error. For the first time in the literature, analytical optimal solutions are obtained for such a hedging problem. The most striking advantage of the method lies in its flexibility. While q-forwards are considered in the specific implementation, our method is readily applicable to other securities such as longevity swaps. Further, our method is implementable for a variety of longevity models, including Lee-Carter, Cairns-Blake-Dowd (CBD), and their variants. Extensive numerical experiments show that our hedging method significantly outperforms the standard "delta" hedging strategy commonly adopted in the literature. Lastly, we study the problem of optimal index insurance design under an expected utility maximization framework. For general utility functions, we formally prove the existence and uniqueness of the optimal contract and develop an effective numerical procedure to calculate the optimal solution. For exponential utility and quadratic utility functions, we obtain analytical expressions of the optimal indemnity function. Our results show that the indemnity can be a highly non-linear and even non-monotonic function of the index variable in order to align with the actuarial loss variable so as to achieve the best reduction in basis risk.
    Due to the generality of the model setup, our proposed method is readily applicable to a variety of insurance applications, including index-linked mortality securities, weather index agricultural insurance, and index-based catastrophe insurance. Our method is illustrated by a numerical example where a weather index insurance contract is designed for protection against adverse rice yields, using temperature and precipitation as the underlying indices. Numerical results show that our optimal index insurance significantly outperforms linear-type index insurance contracts in terms of reducing basis risk.
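    For intuition about why basis risk leaves residual, unhedgeable risk, the following one-period minimum-variance calculation (standard textbook material, not the dynamic solutions derived in this thesis) shows the variance-minimizing position in a correlated hedging instrument and the variance that remains.

```latex
% One-period analogue only: hedge an exposure X with h units of an instrument H
% that is imperfectly correlated with X (correlation \rho).
\[
  \min_{h}\;\operatorname{Var}(X - hH)
  \quad\Longrightarrow\quad
  h^{*} \;=\; \frac{\operatorname{Cov}(X,H)}{\operatorname{Var}(H)}
        \;=\; \rho\,\frac{\sigma_X}{\sigma_H},
\]
\[
  \operatorname{Var}\!\big(X - h^{*}H\big) \;=\; \sigma_X^{2}\,\bigl(1-\rho^{2}\bigr),
\]
% so with |\rho| < 1 (basis risk) a fraction (1-\rho^2) of the variance cannot be
% removed, no matter how the hedge position is chosen.
```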
  • Several Mathematical Problems in Investment Management
    (University of Waterloo, 2023-08-21) Jiang, Ruihong; Weng, Chengguo; Saunders, David
    This thesis studies four mathematical problems in investment management. All four problems arise from practical challenges and are data-driven. Chapter 2 investigates the Kelly portfolio strategy. The full Kelly strategy's deficiency in the face of estimation errors in practice can be mitigated by fractional or shrinkage Kelly strategies. This chapter provides an alternative, the RL Kelly strategy, based on a reinforcement learning (RL) framework. RL algorithms are developed for the practical implementation of the RL Kelly strategy. Extensive simulation studies are conducted, and the results confirm the superior performance of the RL Kelly strategies. In Chapter 3, we study the discrete-time mean-variance problem under an RL framework. The continuous-time problem has been studied theoretically in the existing literature but is subject to a discretization error in implementation. We compare our discrete-time model with the continuous-time model in terms of theoretical results and numerical performance. In a daily trading market setting, we find that both the discrete-time and continuous-time models achieve comparable performance. However, the discrete-time model outperforms the continuous-time model when trading is less frequent. Our discrete-time model is not subject to the discretization error. Chapter 4 explores the valuation problem of large variable annuity (VA) portfolios. A computationally appealing methodology for the valuation of large VA portfolios is a metamodelling framework that evaluates a small set of representative contracts, fits a predictive model based on these computed values, and then extrapolates the model to estimate the values of the remaining contracts. This chapter proposes a new two-phase procedure for selecting representative contracts. The representatives from the first phase are determined using contract attributes as in existing metamodelling approaches, but those in the second phase are chosen by utilizing the information contained in the values of the representatives from the first phase. Two numerical studies confirm that our two-phase selection procedure improves upon conventional approaches from the existing literature. Chapter 5 focuses on the capture ratio, a widely used investment performance measure. We study the statistical problem of estimating the capture ratio based on a finite number of observations of a fund's returns. We derive the asymptotic distribution of the estimator and use it to test whether one fund has a capture ratio that is statistically significantly higher than that of another. We also perform hypothesis tests with real-world hedge fund data. Our analysis raises concerns regarding the models and sample sizes used for estimating capture ratios in practice.
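    For readers unfamiliar with the capture ratio studied in Chapter 5, the sketch below computes it under one common convention (arithmetic average fund return over up-benchmark periods relative to the benchmark, divided by the analogous downside quantity). Conventions vary, the estimator and asymptotics analyzed in the thesis are specified there, and the data below are simulated placeholders.

```python
# Capture ratio under one common convention: upside capture / downside capture,
# each computed with arithmetic averages over up- and down-benchmark periods.
import numpy as np

def capture_ratio(fund: np.ndarray, bench: np.ndarray) -> float:
    up, down = bench > 0, bench < 0
    upside = fund[up].mean() / bench[up].mean()        # upside capture
    downside = fund[down].mean() / bench[down].mean()  # downside capture
    return upside / downside

rng = np.random.default_rng(4)
bench = rng.normal(0.005, 0.04, 120)                   # 10 years of monthly data
fund = 0.8 * bench + rng.normal(0.001, 0.02, 120)      # a fund loading on the benchmark
print(round(capture_ratio(fund, bench), 3))
```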
  • Sparse Models in High-Dimensional Dependence Modelling and Index Tracking
    (University of Waterloo, 2017-01-17) Han, Dezhao; Tan, Ken Seng; Weng, Chengguo
    This thesis is divided into two parts. The first part proposes parsimonious models for vine copulas. The second part is devoted to the index tracking problem. Vine copulas provide a flexible tool to capture asymmetry in modelling multivariate distributions. Nevertheless, the computational expense of this flexibility increases exponentially as the dimension of the joint distribution grows. To alleviate this issue, the simplifying assumption (SA) is commonly adopted in applications of vine copula models. In order to relax the SA, Chapter 2 proposes generalized linear models (GLMs) to model parameters in conditional bivariate copulas. In the spirit of the principle of parsimony, a regularization methodology is developed to control the number of parameters, leading to sparse vine copula models. The conventional vine copula with the SA, the proposed GLM-based vine copula, and the sparse vine copula are applied to several financial datasets. Empirical results show that the proposed models in this chapter significantly outperform the one with the SA in terms of the Bayesian information criterion. Index tracking is a dominant method among passive investment strategies. It attempts to reproduce the returns of stock-market indices. Chapter 3 focuses on selecting stocks to construct tracking portfolios. To do so, principal component analysis (PCA) is applied via a two-step procedure. In the first step, the index return is expressed as a function of the principal components (PCs) of stock returns, and a subset of PCs is selected according to Sobol's total sensitivity index. In the second step, a subset of stocks that is most "similar" to the selected PCs is identified. This similarity is measured by Yanai's generalized coefficient of determination, the distance correlation, or Heller-Heller-Gorfine test statistics. Given the selected stocks, their weights in the tracking portfolio can be determined by minimizing a specific tracking error. Compared with existing methods, constructing tracking portfolios based on stocks selected by this PCA-based method is more computationally efficient and comparably effective at minimizing the tracking error. When the number of index components is large, it is too computationally demanding to apply the methods of Chapter 3 or most existing methods, such as those relying on mixed-integer quadratic programming. In Chapter 4, factor models are used to describe stock returns. Under this assumption, the tracking error is partitioned into two parts: one depends on common economic factors, and the other depends on idiosyncratic risks. Based on this partition, a two-stage method is introduced to construct tracking portfolios by minimizing the tracking error. Stage 1 relies on a mixed-integer linear program to identify stocks that are able to reduce the factors' impacts on the tracking error, and Stage 2 determines the weights of the identified stocks by minimizing the tracking error. This two-stage method efficiently constructs tracking portfolios benchmarked to indices with thousands of components, and it reduces out-of-sample tracking errors significantly. In Chapter 5, the index tracking problem is solved by repeatedly solving one-period tracking problems. Each one-period tracking strategy is determined by a quadratic optimization with L1 regularization on the asset weights. This formulation accommodates transaction costs and other practical constraints.
    Since the true joint distribution of financial returns is usually unknown, we solve one-period tracking problems under empirical distributions. With L1 regularization on the asset weights, our one-period tracking strategy enjoys persistence properties in the high-dimensional setting; more specifically, the number of variables may grow as d = d(n) = O(n^α), where n is the sample size and α > 1. Simulation studies are carried out to support our one-period tracking strategy's performance with finite samples. Applications to real financial data provide evidence that, for one-period tracking, this strategy outperforms the Lq-penalty tracking method in terms of both tracking performance and computational efficiency. For multi-period tracking, the proposed method outperforms the full-replication strategy.
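    A bare-bones stand-in for the L1-regularized one-period tracking step in Chapter 5, using scikit-learn's Lasso with a no-short-sale constraint on simulated data. The thesis's formulation additionally handles transaction costs and other practical constraints, so this is an assumption-laden sketch rather than the method itself.

```python
# Sparse index tracking as a lasso regression of index returns on constituent
# returns with nonnegative weights.  All numbers are toy placeholders.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n_obs, n_assets = 250, 200
R = rng.normal(0.04, 1.0, (n_obs, n_assets))     # constituent returns, in percent
true_w = rng.dirichlet(np.ones(n_assets))        # index weights
r_index = R @ true_w                             # index return series

# larger alpha -> fewer stocks held; 0.002 is an arbitrary illustrative choice
lasso = Lasso(alpha=0.002, positive=True, fit_intercept=False, max_iter=100000)
lasso.fit(R, r_index)
w = lasso.coef_ / lasso.coef_.sum()              # renormalize to full investment
print("stocks held:", int((w > 0).sum()))
print("in-sample tracking error (%):", round(float(np.std(R @ w - r_index)), 4))
```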
  • A Statistical Response to Challenges in Vast Portfolio Selection
    (University of Waterloo, 2019-07-04) Guo, Danqiao; Weng, Chengguo; Wirjanto, Tony
    This thesis is written in response to emerging issues brought about by the increasing number of assets allocated in a portfolio and seeks answers to puzzling empirical findings in the portfolio management area. Over the years, researchers and practitioners working in the portfolio optimization area have been concerned with estimation errors in the first two moments of asset returns. The thesis comprises several related chapters on our statistical inquiry into this subject. Chapter 1 of the thesis contains an introduction to what will be reported in the remaining chapters. A few well-known covariance matrix estimation methods in the literature involve adjustment of sample eigenvalues. Chapter 2 of the thesis examines the effects of sample eigenvalue adjustment on the out-of-sample performance of a portfolio constructed from the sample covariance matrix. We identify a few sample eigenvalue adjustment patterns that lead to a definite improvement in the out-of-sample portfolio Sharpe ratio when the true covariance matrix admits a high-dimensional factor model. Chapter 3 shows that even when the covariance matrix is poorly estimated, it is still possible to obtain a robust maximum Sharpe ratio (MSR) portfolio by exploiting the uneven distribution of estimation errors across principal components. This is accomplished by approximating the vector of expected future asset returns using a few relatively accurate sample principal components. We discuss two approximation methods. The first method leads to a subtle connection to existing approaches in the literature, while the second one, named the "spectral selection method", is novel and able to address the main shortcomings of existing methods in the literature. A few academic studies report an unsatisfactory performance of optimized portfolios relative to that of the 1/N portfolio. Chapter 4 of the thesis reports an in-depth investigation into the reasons behind the reported superior performance of the 1/N portfolio. Both theoretical and empirical evidence support the view that the success of the 1/N portfolio is by no means due to a failure of portfolio optimization theory. Instead, a major reason behind the superiority of the 1/N portfolio is its proximity to the mean-variance optimal portfolio. Chapter 5 examines the performance of randomized 1/N stock portfolios over time. During the last four decades, these portfolios outperformed the market. The construction of these portfolios implies that their constituent stocks are in general older than those in the market as a whole. We show that the differential performance can be explained by the relation between stock returns and firm age. We document a significant relation between age and returns in the US stock market. Since 1977, stock returns have been an increasing function of age, apart from the oldest ages. For this period, the age effect completely dominates the size effect.
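    A small simulation in the spirit of the question examined in Chapters 2 to 4: how a plug-in maximum-Sharpe-ratio portfolio, built from estimated moments, compares out of sample with the 1/N rule. The data-generating process, parameter values, and normalization below are arbitrary illustrative choices, not those used in the thesis.

```python
# Out-of-sample Sharpe ratio of the plug-in MSR portfolio versus 1/N under a
# simple constant-correlation data-generating process.  Illustrative only.
import numpy as np

rng = np.random.default_rng(6)
p, n_est, n_oos = 50, 120, 120
mu = rng.normal(0.006, 0.002, p)
Sigma = 0.0016 * (0.3 * np.ones((p, p)) + 0.7 * np.eye(p))   # constant correlation 0.3

def sharpe(returns):
    return returns.mean() / returns.std()

sr_msr, sr_ew = [], []
for _ in range(200):
    R_est = rng.multivariate_normal(mu, Sigma, n_est)         # estimation window
    R_oos = rng.multivariate_normal(mu, Sigma, n_oos)         # evaluation window
    w_msr = np.linalg.solve(np.cov(R_est, rowvar=False), R_est.mean(0))
    w_msr /= np.abs(w_msr).sum()                              # scale by gross exposure
    w_ew = np.ones(p) / p
    sr_msr.append(sharpe(R_oos @ w_msr))
    sr_ew.append(sharpe(R_oos @ w_ew))

print("plug-in MSR out-of-sample Sharpe:", round(float(np.mean(sr_msr)), 3))
print("1/N out-of-sample Sharpe:        ", round(float(np.mean(sr_ew)), 3))
```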
