This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, in which the task of computing gradients is divided across several worker nodes. The author then discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration, and shows that each of these strategies for reducing communication or synchronization delays faces a fundamental trade-off between error and runtime.
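The synchronous SGD scheme the synopsis describes is easy to sketch. Below is a minimal, illustrative Python simulation, not taken from the book: a least-squares objective, with each iteration's gradient computation divided across several simulated worker nodes whose stochastic gradients are averaged before the model update. The objective, worker count, batch size, and step size are all hypothetical choices made for this sketch.

```python
# Minimal sketch of synchronous SGD on a synthetic least-squares problem.
# Illustrative only: the data, number of workers, batch size, and learning
# rate are arbitrary choices, not taken from the book.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X @ w_true + noise.
n_samples, dim = 1000, 10
X = rng.standard_normal((n_samples, dim))
w_true = rng.standard_normal(dim)
y = X @ w_true + 0.1 * rng.standard_normal(n_samples)

num_workers = 4     # M worker nodes sharing the gradient computation
batch_size = 32     # mini-batch size per worker
lr = 0.05           # step size
w = np.zeros(dim)   # model parameters held by the central server

def worker_gradient(w):
    """One worker's stochastic gradient on a random mini-batch."""
    idx = rng.integers(0, n_samples, size=batch_size)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size

for it in range(200):
    # Each worker computes a gradient in parallel (simulated sequentially
    # here); the server averages them and takes one SGD step.
    grads = [worker_gradient(w) for _ in range(num_workers)]
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```

Averaging over more workers lowers the variance of each update, but every iteration must wait for the slowest worker to finish; the variants listed above (asynchronous, local-update, quantized/sparsified, and decentralized SGD) shorten or remove that wait at the cost of extra gradient noise or staleness, which is precisely the error-runtime trade-off the book analyzes.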
The information in the "Synopsis" section may refer to a different edition of this title.
Gauri Joshi, Ph.D., is an Associate Professor in the ECE department at Carnegie Mellon University. Dr. Joshi completed her Ph.D. in EECS at MIT. Her current research focuses on designing algorithms for federated learning, distributed optimization, and parallel computing. Her awards and honors include being named one of MIT Technology Review's 35 Innovators Under 35 (2022), the NSF CAREER Award (2021), the ACM SIGMETRICS Best Paper Award (2020), the Best Thesis Prize in Computer Science at MIT (2012), and the Institute Gold Medal of IIT Bombay (2010).
The information in the "About the Book" section may refer to a different edition of this title.
EUR 9.90 shipping from Germany to France
EUR 9.70 shipping from Germany to France
Seller: Buchpark, Trebbin, Germany
Condition: Excellent | Pages: 144 | Language: English | Product type: Books. Seller reference no. 40959855/1
Quantity available: 3
Seller: moluna, Greven, Germany
Hardcover. Condition: New. This is a print-on-demand item and is printed for you after you place your order. Seller reference no. 706732221
Quantity available: more than 20
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Print on demand; new stock printed after ordering. Seller reference no. 9783031190667
Quantity available: 1
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Book. Condition: New. This item is printed on demand and takes 3-4 days longer. New stock. 144 pp. English. Seller reference no. 9783031190667
Quantity available: 2
Seller: buchversandmimpf2000, Emtmannsberg, Bavaria, Germany
Book. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English. Seller reference no. 9783031190667
Quantity available: 2
Seller: Books Puddle, New York, NY, United States
Condition: New. 1st ed., 2023 edition. Seller reference no. 26396295639
Quantity available: 4
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand; this item is printed to order. Seller reference no. 401162760
Quantity available: 4
Seller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Print on demand. Seller reference no. 18396295645
Quantity available: 4