Mathematics (Faculty of)
http://hdl.handle.net/10012/9924
2024-07-17T10:56:48Z
http://hdl.handle.net/10012/20718
Optimization, model uncertainty, and testing in risk and insurance
Jiao, Zhanyi
This thesis focuses on three important topics in quantitative risk management and actuarial science: risk optimization, risk sharing, and statistical hypothesis testing in risk.
For risk optimization, we concentrate on settings with model uncertainty, where only partial information about the underlying distribution is available. One key highlight, detailed in Chapter 2, is the development of a novel formula named the reverse Expected Shortfall (ES) optimization formula. This formula is derived to facilitate the calculation of the worst-case mean excess loss under two commonly used model uncertainty sets: moment-based and distance-based (Wasserstein) uncertainty sets. Further exploration reveals that the reverse ES optimization formula is closely related to the Fenchel-Legendre transform, and our formulas generalize from ES to optimized certainty equivalents, a popular class of convex risk measures. Chapter 3 takes a different approach, deriving the closed-form worst-case target semi-variance by incorporating distributional shape information that is crucial in finance (symmetry) and insurance (non-negativity) applications. We demonstrate that all results apply to robust portfolio selection, where the closed-form formulas greatly simplify the computation of optimal robust portfolios, either through explicit forms or via easily solvable optimization problems.
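For context, Expected Shortfall at level α is the average loss in the worst (1 − α) fraction of outcomes, and the mean excess loss E[(X − t)+] is the quantity whose worst case the reverse ES formula targets. A minimal empirical sketch (function names and the plug-in estimators are ours, not the thesis's formulas):

```python
def expected_shortfall(losses, alpha=0.95):
    """Empirical Expected Shortfall: the average of the worst
    (1 - alpha) fraction of losses (larger numbers = worse)."""
    xs = sorted(losses)
    k = int(len(xs) * alpha)      # index of the empirical alpha-quantile
    tail = xs[k:]                 # worst (1 - alpha) share of outcomes
    return sum(tail) / len(tail)

def mean_excess(losses, t):
    """Empirical mean excess loss beyond threshold t: E[(X - t)+]."""
    return sum(max(x - t, 0.0) for x in losses) / len(losses)
```

Under a moment-based or Wasserstein uncertainty set, one would maximize such quantities over all distributions in the set rather than plug in a single sample, which is where the closed-form results of Chapters 2 and 3 come in.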
Risk sharing concerns the redistribution of total risk among agents in a specified way. In contrast to traditional risk sharing rules, Chapter 4 introduces a new framework, anonymized risk sharing, which requires no information on preferences, identities, private operations, or realized losses of the individual agents. We establish an axiomatic theory based on four axioms of fairness and anonymity within the context of anonymized risk sharing. This theory provides a solid foundation for further exploration of the decentralized and digital economy, including peer-to-peer (P2P) insurance, revenue sharing of digital content, and blockchain mining pools.
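To give the flavor of sharing rules that use only pooled, anonymous information, here is a toy proportional split in the spirit of mining-pool revenue sharing. This is purely an illustration of the setting, not the rule the thesis axiomatizes:

```python
def proportional_shares(total_payout, contributions):
    """Toy anonymized sharing rule: split a pooled payout in proportion
    to each agent's observable contribution (e.g., hash power submitted
    to a mining pool). No identities, preferences, or realized
    individual losses are used -- only the anonymous contribution vector."""
    s = sum(contributions)
    return [total_payout * c / s for c in contributions]
```

The axiomatic question addressed in Chapter 4 is which such rules are singled out by fairness and anonymity requirements.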
Hypothesis testing plays a vital role not only in statistical inference but also in risk management, particularly in the backtesting of risk measures. In Chapter 5, we address the problem of testing the conditional mean and conditional variance of non-stationary data using the emerging concept of e-statistics. We build e-values and p-values for four types of non-parametric composite hypotheses with specified mean and variance, as well as other conditions on the shape of the data-generating distribution; these shape conditions include symmetry, unimodality, and their combination. Using the obtained e-values and p-values, we construct tests via e-processes, also known as testing by betting, together with some tests based on combining p-values for comparison. To demonstrate the practical application of these methodologies, we conduct empirical studies on financial data under several settings.
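The testing-by-betting idea can be sketched in a few lines. For observations bounded below by −1 and a null hypothesis under which each observation has conditional mean at most 0, the bettor's wealth process below is a nonnegative supermartingale, so by Ville's inequality rejecting once it reaches 1/α gives a level-α sequential test. The fixed bet fraction `lam` is our choice for illustration; the thesis constructs e-values for richer composite hypotheses:

```python
def e_process(xs, lam=0.25):
    """Testing by betting: wealth E_t = prod_{i<=t} (1 + lam * x_i).
    For x_i >= -1 and 0 <= lam <= 1, this is a nonnegative supermartingale
    under any null with conditional mean <= 0."""
    wealth, path = 1.0, []
    for x in xs:
        wealth *= 1.0 + lam * x
        path.append(wealth)
    return path

def rejects(path, alpha=0.05):
    """Reject the null at level alpha once wealth crosses 1/alpha
    (anytime-valid by Ville's inequality)."""
    return any(w >= 1.0 / alpha for w in path)
```

Evidence against the null accumulates multiplicatively, which is what makes e-processes convenient for sequential backtesting.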
2024-07-11T00:00:00Z
http://hdl.handle.net/10012/20717
Technology Design Recommendations Informed by Observations of Videos of Popular Musicians Teaching and Learning Songs by Ear
Liscio, Christopher
Instrumentalists who play popular music often learn songs by ear, using recordings in lieu of sheet music or tablature. This practice was made possible by technology that allows musicians to control playback events. Until now, researchers have not studied the human-recording interactions of musicians attempting to learn pop songs by ear. Through a pair of studies analyzing the content of online videos from YouTube, we generate hypotheses and seek a better understanding of by-ear learning from a recording. Combined with results from neuroscience studies of tonal working memory and aural imagery, our findings reveal a model of by-ear learning that highlights note-finding as a core activity. Using what we learned, we discuss opportunities for designers to create a set of novel human-recording interactions, and to provide assistive technology for those who lack the baseline skills to engage in the foundational note-finding activity.
2024-07-11T00:00:00Z
http://hdl.handle.net/10012/20714
Design with Sampling Distribution Segments
Hagar, Luke
In most settings where data-driven decisions are made, these decisions are informed by two-group comparisons. Characteristics – such as median survival times for two cancer treatments, defect rates for two assembly lines, or average satisfaction scores for two consumer products – quantify the impact of each choice available to decision makers. Given estimates for these two characteristics, such comparisons are often made via hypothesis tests. This thesis focuses on sample size determination for hypothesis tests with interval hypotheses, including standard one-sided hypothesis tests, equivalence tests, and noninferiority tests in both frequentist and Bayesian settings. To choose sample sizes for nonstandard hypothesis tests, simulation is used to estimate sampling distributions of, e.g., test statistics or posterior summaries corresponding to various sample sizes. These sampling distributions provide context as to which estimated values for the two characteristics are plausible. By considering quantiles of these distributions, one can determine whether a particular sample size satisfies criteria for the operating characteristics of the hypothesis test: power and the type I error rate. It is standard practice to estimate entire sampling distributions for each sample size considered. The computational cost of doing so impedes the adoption of non-simplistic designs. However, only quantiles of the sampling distributions must be estimated to assess operating characteristics. To improve the scalability of simulation-based design, we could focus only on exploring the segments of the sampling distributions near the relevant quantiles. This thesis proposes methods to explore sampling distribution segments for various designs. These methods are used to determine sample sizes and decision criteria for hypothesis tests with orders of magnitude fewer simulation repetitions.
Importantly, this reduction in computational complexity is achieved without compromising the consistency of the simulation results that is guaranteed when estimating entire sampling distributions. In parametric frequentist hypothesis tests, test statistics are often constructed from exact pivotal quantities. To improve sample size determination in the absence of exact pivotal quantities, we first propose a simulation-based method for power curve approximation with such hypothesis tests. This method leverages low-discrepancy sequences of sufficient statistics and root-finding algorithms to produce unbiased sample size recommendations using sampling distribution segments. We also propose a framework for power curve approximation with Bayesian hypothesis tests. The corresponding methods leverage low-discrepancy sequences of maximum likelihood estimates, normal approximations to the posterior, and root-finding algorithms to explore segments of sampling distributions of posterior probabilities. The resulting sample size recommendations are consistent in that they are suitable when the normal approximations to the posterior and sampling distribution of the maximum likelihood estimator are appropriate. When designing Bayesian hypothesis tests, practitioners may need to specify various prior distributions to generate and analyze data for the sample size calculation. Specifying dependence structures for these priors in multivariate settings is particularly difficult. The challenges with specifying such dependence structures have been exacerbated by recommendations made alongside recent advances with copula-based priors. We prove theoretical results that can be used to help select prior dependence structures that align with one's objectives for posterior analysis. We lastly propose a comprehensive method for sample size determination with Bayesian hypothesis tests that considers our recommendations for prior specification.
Unlike our framework for power curve approximation, this method recommends probabilistic cutoffs that facilitate decision making while controlling both power and the type I error rate. This scalable approach obtains consistent sample size recommendations by estimating segments of two sampling distributions: one for each operating characteristic. We also extend our design framework to accommodate more complex two-group comparisons that account for additional covariates.
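The baseline that segment-based methods improve on can be sketched directly: for each candidate sample size, simulate the full sampling distribution of a test statistic, estimate power as the fraction of simulated statistics clearing the critical value, and take the smallest sample size meeting the power target. Everything below (the one-sided two-sample z-test with known unit variance, the candidate grid, the effect size) is an illustrative assumption, not the thesis's design:

```python
import random
import statistics

def power_at(n, effect=0.5, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo power of a one-sided two-sample z-test with known
    sd = 1: the fraction of simulated datasets whose z-statistic clears
    the standard-normal critical value."""
    rng = random.Random(seed)
    z_crit = 1.6449  # standard normal 95% quantile (alpha = 0.05)
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect, 1.0) for _ in range(n)]
        # sd of the difference in sample means is sqrt(1/n + 1/n)
        z = (statistics.fmean(b) - statistics.fmean(a)) / (2.0 / n) ** 0.5
        hits += z >= z_crit
    return hits / reps

def smallest_n(target=0.8, candidates=(10, 20, 40, 60, 80, 100)):
    """Naive search: estimate the *entire* sampling distribution at
    every candidate n. The thesis's segment-based methods avoid this
    cost by exploring only the region near the relevant quantile."""
    for n in candidates:
        if power_at(n) >= target:
            return n
    return None
```

Because every candidate sample size triggers a full simulation, the cost grows quickly with the design's complexity, which is the motivation for estimating only sampling distribution segments near the quantiles that determine power and the type I error rate.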
2024-07-09T00:00:00Z
http://hdl.handle.net/10012/20712
Quantum Query Complexity of Hypergraph Search Problems
Yu, Zhiying
In the study of quantum query complexity, it is natural to study the problems of finding triangles and spanning trees in a simple graph. Over the past decades, many techniques have been developed for finding the upper and lower quantum query bounds of these graph problems. We can generalize these problems to detecting certain properties of higher-rank hypergraphs and ask whether these techniques still apply. In this thesis, we will see that as the rank increases, complexity bounds still hold for some problems, although less effectively; for some other problems, the nontrivial complexity bounds vanish. Moreover, we focus on using the generalized adversary and learning graph techniques to find nontrivial quantum query bounds for different hypergraph search problems. The following results are presented.
• Discover a general quantum query lower bound for subhypergraph-closed properties and monotone properties over r-partite r-uniform hypergraphs.
• Provide tight quantum query bounds for the connectivity and acyclicity problems over r-uniform hypergraphs.
• Present a nontrivial learning graph algorithm for the 3-simplex finding problem.
• Formulate the nested quantum walk in the adaptive learning context and use it to present a nontrivial quantum query algorithm for the 4-simplex finding problem.
• Present a natural relationship between lower bounds for simplex finding of different ranks.
• Use the learning graph formalization of the tetrahedron certificate structure to find a nontrivial quantum query lower bound for the 3-simplex sum problem.
2024-07-09T00:00:00Z