Improved Convergence Rate of Nested Simulation with LSE on Sieve

1 Division of Emerging Interdisciplinary Areas, Academy of Interdisciplinary Studies, The Hong Kong University of Science and Technology, Hong Kong 999077, China
2 School of Data Science, Fudan University, Shanghai 200437, China
3 Data Science and Analytics Thrust, Information Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, Guangdong, China
4 School of Management, Shenzhen Polytechnic University, Shenzhen 518055, Guangdong, China

Received date: 2023-10-19

Revised date: 2025-04-15

Online published: 2025-09-16

Supported by

This work is partly supported by the National Natural Science Foundation of China (No.72301076, No.12101149) and the Science and Technology Commission of Shanghai Municipality (No.23PJ1400800).

Abstract

Nested simulation refers to estimating, via simulation, functionals of conditional expectations. In this paper, we treat the conditional expectation as a function of the multidimensional conditioning variable and provide asymptotic analyses of general nonparametric least squares estimators on sieves, without imposing specific assumptions on the function's form. We identify scenarios in which the convergence rate surpasses both that of the standard nested Monte Carlo method and that of a recently proposed estimator based on kernel ridge regression. We use kernel ridge regression with inducing points and neural networks as examples to illustrate our theorems, and we conduct numerical experiments to support our findings.
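To make the setup concrete, the following is a minimal, self-contained sketch of the two estimation strategies the abstract contrasts: standard nested Monte Carlo versus a least squares fit on a finite-dimensional sieve. It is not the paper's actual estimator or experiment; the toy model (`g`, `f`, the noise level, the polynomial sieve, and all sample sizes) is an illustrative assumption chosen so that the target value E[f(E[Y|X])] = E[X^4] = 0.2 is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: conditioning variable X ~ U(0, 1); the inner simulation returns
# Y | X = g(X) + noise, with g(x) = x**2, so E[Y|X] = X**2 is known exactly.
# The target functional is rho = E[f(E[Y|X])] with f(m) = m**2, whose true
# value is E[X**4] = 0.2.
g = lambda x: x ** 2
f = lambda m: m ** 2
noise_sd = 0.1

def nested_mc(n_outer=200, m_inner=100):
    """Standard nested Monte Carlo: for each outer scenario, average
    m_inner inner samples, apply f, then average over scenarios."""
    x = rng.uniform(size=n_outer)
    y = g(x)[:, None] + noise_sd * rng.standard_normal((n_outer, m_inner))
    return f(y.mean(axis=1)).mean()

def sieve_lse(n_outer=2000, degree=3):
    """Sieve least squares: a single inner sample per scenario, then a
    least squares regression of y on a polynomial basis of x (a simple
    finite-dimensional sieve) estimates the conditional expectation."""
    x = rng.uniform(size=n_outer)
    y = g(x) + noise_sd * rng.standard_normal(n_outer)
    coefs = np.polyfit(x, y, degree)    # least squares fit on the sieve
    m_hat = np.polyval(coefs, x)        # fitted conditional expectation
    return f(m_hat).mean()

print(nested_mc(), sieve_lse())  # both should be close to 0.2
```

The sieve estimator spends the entire budget on outer scenarios (one inner sample each) and lets the regression smooth out the inner noise, which is the mechanism behind the improved convergence rates analyzed in the paper.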

Cite this article

Ruo-xue Liu, Liang Ding, Wen-jia Wang, Lu Zou. Improved Convergence Rate of Nested Simulation with LSE on Sieve[J]. Journal of the Operations Research Society of China, 2025, 13(3): 837-872. DOI: 10.1007/s40305-025-00609-9
