Journal of the Operations Research Society of China, 2025, Vol. 13, Issue 3: 688-722. doi: 10.1007/s40305-025-00599-8
Wei-Wei Fan, L. Jeff Hong, Guang-Xin Jiang, Jun Luo
Received: 2024-03-24
Revised: 2025-03-08
Online: 2025-09-30
Published: 2025-09-16
Contact: L. Jeff Hong (E-mail: lhong@umn.edu)
Wei-Wei Fan, L. Jeff Hong, Guang-Xin Jiang, Jun Luo. Review of Large-Scale Simulation Optimization[J]. Journal of the Operations Research Society of China, 2025, 13(3): 688-722.