Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. This book provides practical modeling methods for many real-world problems with high dimensionality or complexity which have not hitherto been treatable with Markov decision processes. In addition to providing numerous specific algorithms, coverage includes both illustrative numerical examples and rigorous theoretical convergence results. The algorithms developed and analyzed differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning and will complement work in those areas. In addition, the book shows how to combine the various algorithms introduced with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality.
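To illustrate the simulation-based idea the synopsis refers to, here is a toy sketch (not an example from the book): value iteration on a hypothetical two-state MDP in which each Bellman backup is estimated by Monte Carlo sampling of transitions rather than by computing exact expectations. All states, actions, transition probabilities, and rewards below are invented for illustration.

```python
import random

# A hypothetical two-state MDP, purely for illustration:
# states 0 and 1, actions "stay"/"move", discount factor gamma.
GAMMA = 0.9
STATES = [0, 1]
ACTIONS = ["stay", "move"]

def step(state, action, rng):
    """Sample a next state and reward from the (invented) model."""
    if action == "move":
        # "move" switches state with probability 0.8.
        next_state = 1 - state if rng.random() < 0.8 else state
    else:
        next_state = state
    # Reward 1 whenever the sampled next state is state 1.
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

def sampled_value_iteration(num_iters=200, num_samples=50, seed=0):
    """Value iteration where each Bellman backup is a sample average
    over simulated transitions -- the basic simulation-based idea,
    useful when transition probabilities are only available through
    a simulator rather than in closed form."""
    rng = random.Random(seed)
    V = {s: 0.0 for s in STATES}
    for _ in range(num_iters):
        new_V = {}
        for s in STATES:
            q_estimates = []
            for a in ACTIONS:
                total = 0.0
                for _ in range(num_samples):
                    s2, r = step(s, a, rng)
                    total += r + GAMMA * V[s2]
                q_estimates.append(total / num_samples)
            new_V[s] = max(q_estimates)  # greedy backup over actions
        V = new_V
    return V

V = sampled_value_iteration()
```

In this toy model, staying in state 1 earns reward 1 per step, so V[1] converges to 1/(1 - 0.9) = 10, while V[0] settles slightly lower because reaching state 1 succeeds only with probability 0.8 per attempt.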
Steven I. Marcus received his Ph.D. and S.M. from the Massachusetts Institute of Technology in 1975 and 1972, respectively. He received a B.A. from Rice University in 1971. From 1975 to 1991, he was with the Department of Electrical and Computer Engineering at the University of Texas at Austin, where he was the L.B. (Preach) Meaders Professor in Engineering. He was Associate Chairman of the Department during the period 1984-89. In 1991, he joined the University of Maryland, College Park, where he was Director of the Institute for Systems Research until 1996. He is currently a Professor in the Electrical Engineering Department and the Institute for Systems Research.
Steven Marcus is a Fellow of IEEE, and a member of SIAM, AMS, and the Operations Research Society of America. He is an Editor of the SIAM Journal on Control and Optimization, and Associate Editor of Mathematics of Control, Signals, and Systems, Journal on Discrete Event Dynamic Systems, and Acta Applicandae Mathematicae. He has authored or co-authored more than 100 articles, conference proceedings, and book chapters.
Dr. Marcus's research interests lie in the areas of control and systems engineering, analysis and control of stochastic systems, Markov decision processes, stochastic and adaptive control, learning, fault detection, and discrete event systems, with applications in manufacturing, acoustics, and communication networks.
Dr. Fu received his Ph.D. and M.S. degrees in applied mathematics from Harvard University in 1989 and 1986, respectively. He received S.B. and S.M. degrees in electrical engineering and an S.B. degree in mathematics from the Massachusetts Institute of Technology in 1985. Since 1989, he has been at the University of Maryland, College Park, in the College of Business and Management.
Dr. Fu is a member of IEEE and the Institute for Operations Research and the Management Sciences (INFORMS). He is the Simulation Area Editor for Operations Research and an Associate Editor for Management Science, and has served on the editorial boards of the INFORMS Journal on Computing, Production and Operations Management, and IIE Transactions. He was on the program committee for the Spring 1996 INFORMS National Meeting, in charge of contributed papers. In 1995, he received the Maryland Business School's annual Allen J. Krowe Award for Teaching Excellence. He is the co-author (with Jian-Qiang Hu) of the book Conditional Monte Carlo: Gradient Estimation and Optimization Applications (ISBN 0-7923-9873-4, 1997), which received the 1998 INFORMS College on Simulation Outstanding Publication Award. Other awards include the 1999 IIE Operations Research Division Award and a 1998 IIE Transactions Best Paper Award. In 2002, he received ISR's Outstanding Systems Engineering Faculty Award.
Dr. Fu's research interests lie in the areas of stochastic derivative estimation and simulation optimization of discrete-event systems, particularly with applications towards manufacturing systems, inventory control, and the pricing of financial derivatives.
Publication details: Simulation-based Algorithms for Markov Decision Processes, by Hyeong Soo Chang et al. Paperback, Communications and Control Engineering series, Springer, 2010, 208 pp. EAN 9781849966436.