Seller: California Books, Miami, FL, United States
EUR 54,38
Quantity available: More than 20
Condition: New.
Seller: Chiron Media, Wallingford, United Kingdom
EUR 49,55
Quantity available: 10
PF. Condition: New.
Seller: Books Puddle, New York, NY, United States
EUR 63,66
Quantity available: 4
Condition: New. 1st ed. 2023 edition NO-PA16APR2015-KAP.
EUR 66,29
Quantity available: 4
Condition: New. pp. 144.
Language: English
Published by Springer-Nature New York Inc, 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Seller: Revaluation Books, Exeter, United Kingdom
EUR 68,75
Quantity available: 2
Paperback. Condition: Brand New. 140 pages. 9.45x6.61x0.33 inches. In stock.
Language: English
Published by Springer International Publishing, Springer Nature Switzerland Nov 2023, 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 48,14
Quantity available: 2
Paperback. Condition: New. New stock - This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies to reduce communication or synchronization delays encounters a fundamental trade-off between error and runtime. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English.
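The description above outlines synchronous SGD: each worker computes a stochastic gradient on its own data shard, and the server averages those gradients before taking one descent step. A minimal toy sketch (my own illustration, not code from the book; the least-squares model and all variable names are assumptions):

```python
import numpy as np

# Toy synchronous SGD: 4 workers, each holding a shard of a
# 1-D least-squares problem y = w_true * x + noise.
rng = np.random.default_rng(0)
w_true = 3.0
x = rng.normal(size=1000)
y = w_true * x + 0.1 * rng.normal(size=1000)

num_workers = 4
shards = list(zip(np.array_split(x, num_workers),
                  np.array_split(y, num_workers)))

def worker_gradient(w, shard, batch=32):
    """One worker's stochastic gradient of 0.5*(x*w - y)^2 on its shard."""
    xs, ys = shard
    idx = rng.integers(0, len(xs), size=batch)
    return np.mean((xs[idx] * w - ys[idx]) * xs[idx])

w, lr = 0.0, 0.1
for step in range(200):
    grads = [worker_gradient(w, s) for s in shards]  # parallel in practice
    w -= lr * np.mean(grads)  # synchronous aggregation: average, then step
```

After 200 iterations `w` is close to `w_true`; the synchronization barrier (waiting for all four gradients each step) is exactly the runtime cost the book's asynchronous and local-update variants try to reduce.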
Language: English
Published by Springer International Publishing, Springer Nature Switzerland Nov 2022, 2022
ISBN 10: 3031190661 ISBN 13: 9783031190667
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 48,14
Quantity available: 2
Hardcover. Condition: New. New stock (same publisher description as above). Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 144 pp. English.
Language: English
Published by Springer International Publishing, Springer Nature Switzerland, 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 48,14
Quantity available: 1
Paperback. Condition: New. New stock, printed after ordering (same publisher description as above).
Language: English
Published by Springer International Publishing, Springer Nature Switzerland, 2022
ISBN 10: 3031190661 ISBN 13: 9783031190667
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 48,14
Quantity available: 1
Hardcover. Condition: New. New stock, printed after ordering (same publisher description as above).
EUR 44,75
Quantity available: 5
Paperback. Condition: New. Optimization Algorithms for Distributed Machine Learning | Gauri Joshi | Paperback | xiii | English | 2023 | Springer | EAN 9783031190698 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Supplier: preigu.
EUR 31,32
Quantity available: 5
Condition: Excellent | Language: English | Product type: Books. (Same publisher description as above.)
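Among the communication-efficient variants the publisher description names, local-update SGD lets each worker run several local steps between synchronizations, trading per-iteration communication for some model drift. A toy sketch (my own illustration, not code from the book; the least-squares model and all names are assumptions):

```python
import numpy as np

# Toy local-update SGD: each worker takes `tau` local SGD steps on its
# shard, then the worker models are averaged into a new global model.
rng = np.random.default_rng(1)
w_true = 2.0
x = rng.normal(size=1000)
y = w_true * x + 0.1 * rng.normal(size=1000)

num_workers, tau, lr = 4, 5, 0.05
shards = list(zip(np.array_split(x, num_workers),
                  np.array_split(y, num_workers)))

w_global = 0.0
for comm_round in range(40):
    local_models = []
    for xs, ys in shards:
        w = w_global
        for _ in range(tau):  # tau local steps, no communication
            idx = rng.integers(0, len(xs), size=32)
            g = np.mean((xs[idx] * w - ys[idx]) * xs[idx])
            w -= lr * g
        local_models.append(w)
    w_global = float(np.mean(local_models))  # one sync per round
```

Here communication happens once every `tau` steps instead of every step; the book's analysis quantifies how the drift between local models affects the error-versus-runtime trade-off.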
Seller: PBShop.store US, Wood Dale, IL, United States
EUR 55,52
Quantity available: More than 20
PAP. Condition: New. New book. Shipped from UK. This book is printed on demand. Established seller since 2000.
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
EUR 53,31
Quantity available: More than 20
PAP. Condition: New. New book. Delivered from our UK warehouse in 4 to 14 business days. This book is printed on demand. Established seller since 2000.
Seller: Majestic Books, Hounslow, United Kingdom
EUR 64,75
Quantity available: 4
Condition: New. This item is printed on demand.
Language: English
Published by Springer International Publishing, Springer Nature Switzerland Nov 2023, 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 48,14
Quantity available: 2
Paperback. Condition: New. Printed on demand; takes 3-4 days longer. New stock (same publisher description as above). 144 pp. English.
Language: English
Published by Springer International Publishing, Springer Nature Switzerland Nov 2022, 2022
ISBN 10: 3031190661 ISBN 13: 9783031190667
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 48,14
Quantity available: 2
Hardcover. Condition: New. Printed on demand; takes 3-4 days longer. New stock (same publisher description as above). 144 pp. English.
Seller: Majestic Books, Hounslow, United Kingdom
EUR 67,69
Quantity available: 4
Condition: New. Print on demand. pp. 144.
Seller: Biblios, Frankfurt am Main, HESSE, Germany
EUR 66,89
Quantity available: 4
Condition: New. Print on demand.
Seller: Biblios, Frankfurt am Main, HESSE, Germany
EUR 69,88
Quantity available: 4
Condition: New. Print on demand. pp. 144.
Language: English
Published by Springer, Berlin|Springer International Publishing|Springer, 2023
ISBN 10: 3031190696 ISBN 13: 9783031190698
Seller: moluna, Greven, Germany
EUR 42,96
Quantity available: More than 20
Softcover. Condition: New. This is a print-on-demand item and will be printed for you after ordering. (Same publisher description as above.)
Seller: preigu, Osnabrück, Germany
EUR 44,75
Quantity available: 5
Hardcover. Condition: New. Optimization Algorithms for Distributed Machine Learning | Gauri Joshi | Hardcover | xiii | English | 2022 | Springer | EAN 9783031190667 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Supplier: preigu. Print on demand.