
Reinforcement Learning for Optimal Feedback Control: A Lyapunov-based Approach - Hardcover

 
9783319783833: Reinforcement Learning for Optimal Feedback Control: A Lyapunov-based Approach

Synopsis

Reinforcement Learning for Optimal Feedback Control develops model-based and data-driven reinforcement learning methods for solving optimal control problems in nonlinear deterministic dynamical systems. In order to achieve learning under uncertainty, data-driven methods for identifying system models in real-time are also developed. The book illustrates the advantages gained from the use of a model and the use of previous experience in the form of recorded data through simulations and experiments. The book’s focus on deterministic systems allows for an in-depth Lyapunov-based analysis of the performance of the methods described during the learning phase and during execution.

To yield an approximate optimal controller, the authors focus on theories and methods that fall under the umbrella of actor–critic methods for machine learning. They concentrate on establishing stability during both the learning and the execution phases, and on adaptive model-based and data-driven reinforcement learning, to guide readers through a learning process that typically relies on instantaneous input-output measurements.

This monograph provides academic researchers with backgrounds in diverse disciplines, from aerospace engineering to computer science, who are interested in optimal reinforcement learning, functional analysis, and function-approximation theory, with a good introduction to the use of model-based methods. The thorough treatment of this advanced approach to control will also interest practitioners working in the chemical-process and power-supply industries.
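
As an illustrative sketch only (not reproduced from the book), the standard infinite-horizon setting behind the actor-critic methods described above can be written in generic notation; the dynamics f, g, the cost weights Q, R, the basis functions \sigma, and the weight estimates \hat{W}_c, \hat{W}_a below are assumed here for illustration:

\[
\dot{x} = f(x) + g(x)\,u, \qquad
J(x_0, u) = \int_0^\infty \big( Q(x(t)) + u(t)^\top R\, u(t) \big)\, dt .
\]

The optimal value function V^* satisfies the Hamilton-Jacobi-Bellman equation

\[
0 = \min_{u} \Big[ Q(x) + u^\top R\, u + \nabla V^*(x)^\top \big( f(x) + g(x)\,u \big) \Big],
\qquad
u^*(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V^*(x) .
\]

Actor-critic schemes approximate the critic as V^*(x) \approx \hat{W}_c^\top \sigma(x) and the actor as u(x) \approx -\tfrac{1}{2} R^{-1} g(x)^\top \nabla\sigma(x)^\top \hat{W}_a, adapting \hat{W}_c and \hat{W}_a online to drive the Bellman error

\[
\delta = Q(x) + \hat{u}(x)^\top R\, \hat{u}(x) + \hat{W}_c^\top \nabla\sigma(x) \big( f(x) + g(x)\,\hat{u}(x) \big)
\]

toward zero, with Lyapunov-based arguments supplying stability guarantees during the learning transient.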

The information provided in the "Synopsis" section may refer to a different edition of this title.

About the Authors

Rushikesh Kamalapurkar received his M.S. and Ph.D. degrees in 2011 and 2014, respectively, from the Mechanical and Aerospace Engineering Department at the University of Florida. After working for a year as a postdoctoral research fellow with Dr. Warren E. Dixon, he was selected as the 2015-16 MAE postdoctoral teaching fellow. In 2016 he joined the School of Mechanical and Aerospace Engineering at Oklahoma State University as an assistant professor. His primary research interest has been intelligent, learning-based optimal control of uncertain nonlinear dynamical systems. He has published 3 book chapters, 18 peer-reviewed journal papers, and 21 peer-reviewed conference papers. His work has been recognized by the 2015 University of Florida Department of Mechanical and Aerospace Engineering Best Dissertation Award and the 2014 University of Florida Department of Mechanical and Aerospace Engineering Outstanding Graduate Research Award.
Dr. Joel Rosenfeld is a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at Vanderbilt University in the VeriVital Laboratory. He received his PhD in Mathematics at the University of Florida in 2013 under the direction of Dr. Michael T. Jury. His doctoral work concerned densely defined operators over reproducing kernel Hilbert spaces (RKHS), where he established characterizations of densely defined multiplication operators for several RKHSs. Dr. Rosenfeld then spent four years as a postdoctoral researcher in the Nonlinear Controls and Robotics Laboratory under Dr. Warren E. Dixon where he worked on problems in Numerical Analysis and Optimal Control Theory. Working together with Dr. Dixon and Dr. Kamalapurkar, he developed the numerical approach represented by the state following (StaF) method, which enables the implementation of online optimal control methods that were previously intractable.
Prof. Warren Dixon received his Ph.D. in 2000 from the Department of Electrical and Computer Engineering at Clemson University. He worked as a research staff member and Eugene P. Wigner Fellow at Oak Ridge National Laboratory (ORNL) until 2004, when he joined the Mechanical and Aerospace Engineering Department at the University of Florida. His main research interest has been the development and application of Lyapunov-based control techniques for uncertain nonlinear systems. He has published 3 books, an edited collection, 13 chapters, and over 130 journal and 240 conference papers. His work has been recognized by the 2015 & 2009 American Automatic Control Council (AACC) O. Hugo Schuck (Best Paper) Award, the 2013 Fred Ellersick Award for Best Overall MILCOM Paper, a 2012-2013 University of Florida College of Engineering Doctoral Dissertation Mentoring Award, the 2011 American Society of Mechanical Engineers (ASME) Dynamic Systems and Control Division Outstanding Young Investigator Award, the 2006 IEEE Robotics and Automation Society (RAS) Early Academic Career Award, an NSF CAREER Award (2006-2011), the 2004 Department of Energy Outstanding Mentor Award, and the 2001 ORNL Early Career Award for Engineering Achievement. He is an ASME and IEEE Fellow, an IEEE Control Systems Society (CSS) Distinguished Lecturer, and has served as the Director of Operations for the Executive Committee of the IEEE CSS Board of Governors (2012-2015). He was awarded the Air Force Commander's Public Service Award (2016) for his contributions to the U.S. Air Force Science Advisory Board. He is currently or formerly an associate editor for the ASME Journal of Dynamic Systems, Measurement, and Control, Automatica, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, and the International Journal of Robust and Nonlinear Control.

The information provided in the "About the Book" section may refer to a different edition of this title.

  • Publisher: Springer International Publishing AG
  • Publication date: 2018
  • ISBN 10: 3319783831
  • ISBN 13: 9783319783833
  • Binding: Hardcover
  • Language: English
  • Edition number: 1
  • Number of pages: 293
  • Manufacturer contact information: not available

Buy Used

Condition: Excellent | Pages: ...
EUR 48.67
EUR 3.90 shipping from Germany to France

Buy New

EUR 146.12
EUR 9.70 shipping from Germany to France

Other popular editions of the same title

9783030086893: Reinforcement Learning for Optimal Feedback Control: A Lyapunov-Based Approach

Featured edition

ISBN 10: 3030086895  ISBN 13: 9783030086893
Publisher: Springer, 2019
Softcover

Search results for Reinforcement Learning for Optimal Feedback Control:...

Rushikesh Kamalapurkar, Warren Dixon, Joel Rosenfeld, Patrick Walters
ISBN 10: 3319783831  ISBN 13: 9783319783833
Used. Hardcover.

Seller: Buchpark, Trebbin, Germany
Seller rating: 5 out of 5 stars

Condition: Excellent | Pages: 312 | Language: English | Product type: Books. Seller reference no. 29487795/1

Buy Used
EUR 48.67
Shipping: EUR 3.90 from Germany to France
Quantity available: 1

Kamalapurkar, Rushikesh; Walters, Patrick; Rosenfeld, Joel; Dixon, Warren
Published by Springer, 2018
ISBN 10: 3319783831  ISBN 13: 9783319783833
Used. Hardcover. First edition.

Seller: SpringBooks, Berlin, Germany
Seller rating: 5 out of 5 stars

Hardcover. Condition: As New. 1st edition. Unread, like new; will be dispatched immediately. Seller reference no. CE-2310C-TEPPICHMIRE-15-1000XS

Buy Used
EUR 61.71
Shipping: EUR 11.90 from Germany to France
Quantity available: 1

Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
ISBN 10: 3319783831  ISBN 13: 9783319783833
New. Hardcover. Print on demand.

Seller: moluna, Greven, Germany
Seller rating: 5 out of 5 stars

Hardcover. Condition: New. This is a print-on-demand item and will be printed for you after you order. Illustrates the effectiveness of the developed methods with comparative simulations against leading off-line numerical methods; presents theoretical development through engineering examples and hardware implementations. Seller reference no. 218770942

Buy New
EUR 146.12
Shipping: EUR 9.70 from Germany to France
Quantity available: More than 20

Rushikesh Kamalapurkar
ISBN 10: 3319783831  ISBN 13: 9783319783833
New. Hardcover.

Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Seller rating: 5 out of 5 stars

Book. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 312 pp. English. Seller reference no. 9783319783833

Buy New
EUR 160.49
Shipping: EUR 15 from Germany to France
Quantity available: 2

Rushikesh Kamalapurkar
ISBN 10: 3319783831  ISBN 13: 9783319783833
New. Hardcover.

Seller: AHA-BUCH GmbH, Einbeck, Germany
Seller rating: 5 out of 5 stars

Book. Condition: New. Print on demand; printed after ordering. Seller reference no. 9783319783833

Buy New
EUR 171.19
Shipping: EUR 10.99 from Germany to France
Quantity available: 1

Rushikesh Kamalapurkar
ISBN 10: 3319783831  ISBN 13: 9783319783833
New. Hardcover. Print on demand.

Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Seller rating: 5 out of 5 stars

Book. Condition: New. This item is printed on demand; it takes 3-4 days longer. 312 pp. English. Seller reference no. 9783319783833

Buy New
EUR 171.19
Shipping: EUR 11 from Germany to France
Quantity available: 2

Kamalapurkar, Rushikesh; Walters, Patrick; Rosenfeld, Joel; Dixon, Warren
Published by Springer, 2018
ISBN 10: 3319783831  ISBN 13: 9783319783833
New. Hardcover.

Seller: Revaluation Books, Exeter, United Kingdom
Seller rating: 5 out of 5 stars

Hardcover. Condition: Brand New. 293 pages. 9.25 x 6.10 x 0.79 inches. In stock. Seller reference no. zk3319783831

Buy New
EUR 244.33
Shipping: EUR 11.92 from the United Kingdom to France
Quantity available: 1

Kamalapurkar, Rushikesh; Walters, Patrick; Rosenfeld, Joel; Dixon, Warren
Published by Springer, 2018
ISBN 10: 3319783831  ISBN 13: 9783319783833
New. Hardcover.

Seller: Books Puddle, New York, NY, United States
Seller rating: 4 out of 5 stars

Condition: New. Seller reference no. 26376478399

Buy New
EUR 251.08
Shipping: EUR 7.94 from the United States to France
Quantity available: 4

Kamalapurkar, Rushikesh; Walters, Patrick; Rosenfeld, Joel; Dixon, Warren
Published by Springer, 2018
ISBN 10: 3319783831  ISBN 13: 9783319783833
New. Hardcover. Print on demand.

Seller: Biblios, Frankfurt am Main, Hesse, Germany
Seller rating: 5 out of 5 stars

Condition: New. Print on demand. Seller reference no. 18376478389

Buy New
EUR 264.81
Shipping: EUR 7.95 from Germany to France
Quantity available: 4

Kamalapurkar, Rushikesh; Walters, Patrick; Rosenfeld, Joel; Dixon, Warren
Published by Springer, 2018
ISBN 10: 3319783831  ISBN 13: 9783319783833
New. Hardcover. Print on demand.

Seller: Majestic Books, Hounslow, United Kingdom
Seller rating: 5 out of 5 stars

Condition: New. Print on demand. Seller reference no. 369567072

Buy New
EUR 263.39
Shipping: EUR 10.55 from the United Kingdom to France
Quantity available: 4

There is 1 more copy of this book available.

View all results for this book