Abstract:
High-dimensional statistics poses significant computational challenges for efficient estimation, and these challenges are exacerbated in today's big data era, where datasets are often vast in both sample size and dimension. We collectively refer to such settings as high-dimensional problems. They arise in a wide range of applications, including portfolio optimization, genetic association analysis, and signal processing. To mitigate the increased computational burden, stochastic optimization methods have become mainstream thanks to their low per-iteration cost and excellent generalization performance.
This presentation will cover a range of stochastic optimization algorithms for solving high-dimensional problems, progressing from basic to more advanced material and explaining their working mechanisms intuitively. The algorithms covered include popular choices such as proximal methods, mirror descent, and dual averaging, and their statistical properties, including sample complexity and minimax optimality, will be discussed. Finally, we will place high-dimensional problems in more modern settings where robustness, differential privacy, or consensus reaching must be taken into account, and show how to design efficient algorithms that accommodate these additional requirements.
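To give a flavor of the proximal methods mentioned above, below is a minimal sketch, not part of the talk itself, of stochastic proximal gradient descent applied to the lasso, a canonical high-dimensional estimation problem. The function names, step size, and problem sizes are illustrative assumptions chosen for demonstration only.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (closed-form soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_lasso(X, y, lam=0.1, step=0.05, epochs=50, batch=32, seed=0):
    """Stochastic proximal gradient descent for the lasso:
        min_w (1/2n) ||Xw - y||^2 + lam * ||w||_1
    Each iteration takes a minibatch gradient step on the smooth
    least-squares term, then applies the l1 proximal (shrinkage) step.
    Illustrative sketch; hyperparameters are not tuned.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # One pass over the data in shuffled minibatches.
        for idx in np.array_split(rng.permutation(n), max(n // batch, 1)):
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)          # minibatch gradient
            w = soft_threshold(w - step * grad, step * lam)  # proximal step
    return w

# Toy usage: sparse ground truth with d >> n, a typical high-dimensional regime.
rng = np.random.default_rng(1)
n, d = 100, 500
w_true = np.zeros(d)
w_true[:5] = 3.0
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)
w_hat = prox_sgd_lasso(X, y)
print("estimated nonzeros:", np.count_nonzero(np.abs(w_hat) > 0.5))
```

The key design point illustrated here is that the l1 penalty is handled exactly through its proximal operator rather than subgradients, which is what lets proximal methods produce genuinely sparse iterates at low per-iteration cost.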