This book offers a pioneering exploration of classification-based derivative-free optimization (DFO), providing researchers and professionals in artificial intelligence, machine learning, AutoML, and optimization with a robust framework for addressing complex, large-scale problems where gradients are unavailable. By bridging theoretical foundations with practical implementations, it fills critical gaps in the field, making it an indispensable resource for both academic and industrial audiences.
The book introduces innovative frameworks such as sampling-and-classification (SAC) and sampling-and-learning (SAL), which underpin cutting-edge algorithms like Racos and SRacos. These methods are designed to excel in challenging optimization scenarios, including high-dimensional search spaces, noisy environments, and parallel computing. A dedicated section on the ZOOpt toolbox provides practical tools for implementing these algorithms effectively. The book’s structure moves from foundational principles and algorithmic development to advanced topics and real-world applications, such as hyperparameter tuning, neural architecture search, and algorithm selection in AutoML.
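To give a concrete flavor of the toolbox, the following is a minimal sketch of derivative-free minimization with ZOOpt's public Python interface. The sphere objective, dimensionality, and budget here are illustrative assumptions, not examples taken from the book.

# A minimal sketch of derivative-free minimization with the ZOOpt toolbox.
# Assumes `pip install zoopt`; the sphere objective, dimensionality, and
# budget below are illustrative choices, not taken from the book.
from zoopt import Dimension, Objective, Parameter, Opt

def sphere(solution):
    # ZOOpt passes a Solution object; get_x() returns the candidate vector.
    x = solution.get_x()
    return sum(v * v for v in x)

dim_size = 100  # number of decision variables
dim = Dimension(
    dim_size,
    [[-1, 1]] * dim_size,  # search region of each variable
    [True] * dim_size,     # True marks a continuous variable
)
objective = Objective(sphere, dim)

# The budget is the number of objective evaluations the optimizer may spend.
solution = Opt.min(objective, Parameter(budget=100 * dim_size))
print(solution.get_x(), solution.get_value())

A call like Opt.min dispatches to the Racos-family, classification-based solvers the book develops; the chapters on sequential, high-dimensional, noisy, and parallel optimization explain that machinery in detail.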
Readers will benefit from a comprehensive yet concise presentation of modern DFO methods, gaining theoretical insights and practical tools to enhance their research and problem-solving capabilities. A foundational understanding of machine learning, probability theory, and algorithms is recommended for readers to fully engage with the material.
Table of contents
Introduction. Preliminaries. Framework. Theoretical Foundation. Basic Algorithm. Optimization in Sequential Mode. Optimization in High-Dimensional Search Space. Optimization under Noise. Optimization with Parallel Computing.
Biographical note
Yang Yu is a professor at Nanjing University, specializing in artificial intelligence, machine learning, and optimization. His research focuses on derivative-free optimization, AutoML, and reinforcement learning. Prof. Yu has an extensive publication record in leading journals and conferences, including Artificial Intelligence, IEEE Transactions on Pattern Analysis and Machine Intelligence, ICML, NeurIPS, IJCAI, and AAAI. He is a co-author of the book Evolutionary Learning: Advances in Theories and Algorithms (Springer, 2019). His work introduced foundational frameworks and algorithms in classification-based optimization, notably Racos and SRacos, and contributed to the development of the ZOOpt optimization toolbox, which is widely used in academic and industrial research.
Hong Qian is an associate professor at East China Normal University, with expertise in optimization algorithms, machine learning, and computational intelligence. His research focuses on developing scalable derivative-free optimization techniques with theoretical guarantees for high-dimensional problems, as well as large language models for optimization. Dr. Qian has published extensively in prominent venues such as ICML, NeurIPS, AAAI, and IEEE Transactions on Evolutionary Computation, and has contributed to advancements in sampling-and-classification frameworks and their applications in machine learning and optimization tasks.
Yi-Qi Hu is an AI technical expert at Huawei Co., Ltd., with expertise in machine learning, optimization algorithms, and on-device large language models. His work focuses on developing machine learning systems that employ derivative-free optimization techniques. Dr. Hu has published extensively in prominent venues such as AAAI and IJCAI and has contributed to advancements in AutoML systems based on derivative-free optimization.