Published by Springer International Publishing, 2022
ISBN 10: 3031190661 / ISBN 13: 9783031190667
Language: English
Seller: Buchpark, Trebbin, Germany
Quantity available: 3
Condition: Excellent | Pages: 144 | Language: English | Product type: Books
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 48.60
Quantity available: More than 20
Condition: New. In.
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 54.86
Quantity available: More than 20
Condition: New. In.
Seller: Chiron Media, Wallingford, United Kingdom
EUR 48.06
Quantity available: 10
PF. Condition: New.
Published by Springer International Publishing, Springer Nature Switzerland, 2023
ISBN 10: 3031190696 / ISBN 13: 9783031190698
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 48.14
Quantity available: 1
Paperback. Condition: New. New stock, printed after ordering. This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies to reduce communication or synchronization delays encounters a fundamental trade-off between error and runtime.
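The description above sketches how synchronous SGD parallelizes gradient computation across worker nodes. As a minimal illustrative sketch (not code from the book; the least-squares model, shard layout, and all parameter values are invented for this example), the following Python simulates K workers that each compute a stochastic gradient on their own data shard, followed by the synchronous averaging step:

    import numpy as np

    # Minimal sketch of synchronous SGD on a least-squares problem,
    # simulating K worker nodes that each hold one shard of the data.
    # All names and values are illustrative, not taken from the book.

    rng = np.random.default_rng(0)
    n, d, K = 1024, 10, 4                     # samples, features, workers
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    shards = np.array_split(np.arange(n), K)  # divide data across workers
    w = np.zeros(d)
    lr = 0.1

    def worker_gradient(w, idx, batch=32):
        """One worker's stochastic gradient on a mini-batch from its shard."""
        b = rng.choice(idx, size=batch, replace=False)
        Xb, yb = X[b], y[b]
        return Xb.T @ (Xb @ w - yb) / batch

    for t in range(200):
        # Synchronous step: wait for all K worker gradients, then average.
        # In a real system this barrier is what stragglers can stall, and
        # it is what asynchronous and local-update SGD try to relax.
        grads = [worker_gradient(w, idx) for idx in shards]
        w -= lr * np.mean(grads, axis=0)

    print("final parameter error:", np.linalg.norm(w - w_true))

Averaging K gradients per step reduces the variance of each update, but the synchronization barrier means every step runs at the pace of the slowest worker; the error-versus-runtime trade-off mentioned in the description comes from relaxing that barrier.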
Published by Springer International Publishing, Springer Nature Switzerland, 2022
ISBN 10: 3031190661 / ISBN 13: 9783031190667
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 48.14
Quantity available: 1
Hardcover. Condition: New. New stock, printed after ordering.
Seller: California Books, Miami, FL, United States
EUR 55.41
Quantity available: More than 20
Condition: New.
Published by Springer International Publishing, Springer Nature Switzerland, Nov 2023
ISBN 10: 3031190696 / ISBN 13: 9783031190698
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 48.14
Quantity available: 2
Paperback. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English.
Published by Springer International Publishing, Springer Nature Switzerland, Nov 2022
ISBN 10: 3031190661 / ISBN 13: 9783031190667
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 48.14
Quantity available: 2
Hardcover. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English.
Seller: Books Puddle, New York, NY, United States
EUR 70.21
Quantity available: 4
Condition: New. 1st ed., 2023 edition.
Published by Springer-Nature New York Inc, 2023
ISBN 10: 3031190696 / ISBN 13: 9783031190698
Language: English
Seller: Revaluation Books, Exeter, United Kingdom
EUR 70.52
Quantity available: 2
Paperback. Condition: Brand New. 140 pages. 9.45 x 6.61 x 0.33 inches. In stock.
Published by Springer, Berlin | Springer International Publishing | Springer, 2023
ISBN 10: 3031190696 / ISBN 13: 9783031190698
Language: English
Seller: moluna, Greven, Germany
EUR 42.96
Quantity available: More than 20
Paperback. Condition: New. This is a print-on-demand item and will be printed for you after ordering.
Seller: PBShop.store US, Wood Dale, IL, United States
EUR 58.57
Quantity available: More than 20
PAP. Condition: New. New Book. Shipped from UK. THIS BOOK IS PRINTED ON DEMAND. Established seller since 2000.
Published by Springer International Publishing, Springer Nature Switzerland, Nov 2023
ISBN 10: 3031190696 / ISBN 13: 9783031190698
Language: English
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 48.14
Quantity available: 2
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 144 pp. English.
Published by Springer International Publishing, Springer Nature Switzerland, Nov 2022
ISBN 10: 3031190661 / ISBN 13: 9783031190667
Language: English
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 48.14
Quantity available: 2
Hardcover. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 144 pp. English.
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
EUR 55.80
Quantity available: More than 20
PAP. Condition: New. New Book. Delivered from our UK warehouse in 4 to 14 business days. THIS BOOK IS PRINTED ON DEMAND. Established seller since 2000.
Seller: Majestic Books, Hounslow, United Kingdom
EUR 69.76
Quantity available: 4
Condition: New. This item is printed on demand.
Seller: Biblios, Frankfurt am Main, HESSE, Germany
EUR 73.44
Quantity available: 4
Condition: New. PRINT ON DEMAND.