This course aims to provide a mathematical perspective on some key elements of so-called deep neural networks (DNNs). Much of the interest in deep learning has focused on the implementation of DNN-based algorithms. Our hope is that this compact textbook will offer a complementary point of view that emphasizes the underlying mathematical ideas. We believe that a more foundational perspective will help to answer important questions that have so far received only empirical answers.
Our goal is to introduce basic concepts from deep learning in a rigorous mathematical fashion, e.g., to give mathematical definitions of deep neural networks (DNNs), loss functions, the backpropagation algorithm, and so on.
For each concept, we attempt to identify the simplest setting that minimizes technicalities but still contains the key mathematics.
The book focuses on deep learning techniques and introduces them almost immediately. Other techniques, such as regression and SVMs, are briefly introduced and used as stepping stones for explaining the basic ideas of deep learning.
Throughout these notes, rigorous definitions and statements are supplemented by heuristic explanations and figures. The book is organized so that each chapter introduces a key concept. When teaching this course, some chapters could be presented as part of a single lecture, whereas others contain more material and would take several lectures.
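The objects named above can be made concrete in a few lines. The following sketch (illustrative only; the notation and layer sizes are not taken from the book) builds a two-layer network as a composition of affine maps and a ReLU nonlinearity, evaluates a quadratic loss, and computes its gradients by applying the chain rule layer by layer, which is the essence of backpropagation.

```python
import numpy as np

# Two-layer network f(x) = W2 * relu(W1 x + b1) + b2,
# quadratic loss L = 0.5 * ||f(x) - y||^2.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    z1 = W1 @ x + b1          # hidden-layer pre-activation
    a1 = relu(z1)             # hidden-layer output
    out = W2 @ a1 + b2        # network output
    return z1, a1, out

def loss_and_grads(x, y):
    z1, a1, out = forward(x)
    loss = 0.5 * np.sum((out - y) ** 2)
    # Backpropagation: chain rule applied from the output back to the input.
    d_out = out - y                  # dL/d(out)
    dW2 = np.outer(d_out, a1)        # dL/dW2
    db2 = d_out                      # dL/db2
    d_a1 = W2.T @ d_out              # propagate the error to the hidden layer
    d_z1 = d_a1 * (z1 > 0)           # multiply by the ReLU derivative
    dW1 = np.outer(d_z1, x)          # dL/dW1
    db1 = d_z1                       # dL/db1
    return loss, (dW1, db1, dW2, db2)

x, y = np.array([1.0, -2.0]), np.array([0.5])
loss, grads = loss_and_grads(x, y)
```

A gradient-descent training step would then simply update each parameter in the direction of its negative gradient.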
The information provided in the "Synopsis" section may refer to another edition of this title.
Leonid Berlyand received his Ph.D. in 1985 from Kharkiv University (Ukraine). He joined the Pennsylvania State University (PSU) in 1991, and he is currently a Professor of Mathematics and a member of the Materials Research Institute at PSU. He is a founding co-director of the PSU Centers for Interdisciplinary Mathematics and for Mathematics of Living and Mimetic Matter. He is known for his work at the interface between mathematics and other disciplines such as physics, materials science, the life sciences, and, most recently, computer science. He has co-authored three books and more than 100 publications. His interdisciplinary work has received research awards from leading research agencies in the USA, such as the NSF, the US Department of Energy, and the National Institutes of Health, as well as internationally (the Bi-National Science Foundation and NATO). Most recently, his work was recognized with the 2021 Humboldt Research Award. His teaching excellence was recognized with the C.I. Noll Award for Excellence in Teaching from the Eberly College of Science at Penn State.
Pierre-Emmanuel Jabin has been a distinguished professor at the Pennsylvania State University since August 2020. He was a student of the École Normale Supérieure from 1995 to 1999; he earned his Ph.D. in 2000 and his HDR in 2003, both at Université Pierre et Marie Curie (Paris VI). Before that, he was a professor at the University of Maryland from 2011 to 2020, where he was also director of the Center for Scientific Computation and Mathematical Modeling from 2016 to 2020. Jabin's work in applied mathematics is internationally recognized, and he has made seminal contributions to the theory and applications of many-particle/multi-agent systems, together with advection and transport phenomena. Jabin was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro in 2018.
The information provided in the "About the book" section may refer to another edition of this title.
Seller: GreatBookPrices, Columbia, MD, USA
Condition: New. Seller reference 52142859-n
Quantity available: 2
Seller: Grand Eagle Retail, Bensenville, IL, USA
Paperback. Condition: new. This item is printed on demand. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Seller reference 9783119144117
Quantity available: 1
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
Paperback. Condition: New. New Book. Shipped from the UK. Established seller since 2000. Seller reference DB-9783119144117
Quantity available: 2
Seller: GreatBookPrices, Columbia, MD, USA
Condition: As New. Unread book in perfect condition. Seller reference 52142859
Quantity available: 2
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Seller reference 52142859-n
Quantity available: 2
Seller: Brook Bookstore On Demand, Napoli, NA, Italy
Condition: new. Seller reference ZVJCDLQ02P
Quantity available: more than 20
Seller: Revaluation Books, Exeter, United Kingdom
Condition: Brand New. 158 pages. 6.69 x 2.00 x 9.45 inches. In stock. Seller reference __3119144118
Quantity available: 2
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: As New. Unread book in perfect condition. Seller reference 52142859
Quantity available: 2
Seller: AussieBookSeller, Truganina, VIC, Australia
Paperback. Condition: new. This item is printed on demand. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability. Seller reference 9783119144117
Quantity available: 1
Seller: Rheinberg-Buch Andreas Meier eK, Bergisch Gladbach, Germany
Paperback. Condition: New. New stock. 150 pp. English. Seller reference 9783119144117
Quantity available: 1