This book provides a deep but still self-contained introduction to Support Vector Machines (SVMs) and their role in statistical problems.
The information provided in the "Synopsis" section may refer to a different edition of this title.
Ingo Steinwart is a researcher in the machine learning group at the Los Alamos National Laboratory. He works on support vector machines and related methods.
Andreas Christmann is Professor of Stochastics in the Department of Mathematics at the University of Bayreuth. He works in particular on support vector machines and robust statistics.
The information provided in the "About the Book" section may refer to a different edition of this title.
EUR 9.70 shipping from Germany to France
Seller: moluna, Greven, Germany
Hardcover. Condition: New. This is a print-on-demand item and will be printed for you after your order. Explains the principles that make support vector machines a successful modelling and prediction tool for a variety of applications. Rigorous treatment of state-of-the-art results on support vector machines. Suitable for both graduate students a… Seller ref. 5911114
Quantity available: more than 20
Seller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Seller ref. ria9780387772417_new
Quantity available: more than 20
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Book. Condition: New. This item is printed on demand and takes 3-4 days longer. New stock. "Every mathematical discipline goes through three periods of development: the naive, the formal, and the critical." (David Hilbert) The goal of this book is to explain the principles that made support vector machines (SVMs) a successful modeling and prediction tool for a variety of applications. We try to achieve this by presenting the basic ideas of SVMs together with the latest developments and current research questions in a unified style. In a nutshell, we identify at least three reasons for the success of SVMs: their ability to learn well with only a very small number of free parameters, their robustness against several types of model violations and outliers, and last but not least their computational efficiency compared with several other methods. Although there are several roots and precursors of SVMs, these methods gained particular momentum during the last 15 years since Vapnik (1995, 1998) published his well-known textbooks on statistical learning theory with a special emphasis on support vector machines. Since then, the field of machine learning has witnessed intense activity in the study of SVMs, which has spread more and more to other disciplines such as statistics and mathematics. Thus it seems fair to say that several communities are currently working on support vector machines and on related kernel-based methods. Although there are many interactions between these communities, we think that there is still room for additional fruitful interaction and would be glad if this textbook were found helpful in stimulating further research. Many of the results presented in this book have previously been scattered in the journal literature or are still under review. As a consequence, these results have been accessible only to a relatively small number of specialists, sometimes probably only to people from one community but not the others. 620 pp. English. Seller ref. 9780387772417
Quantity available: 2
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Book. Condition: New. New stock. Same publisher's description as above. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 620 pp. English. Seller ref. 9780387772417
Quantity available: 2
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Print on demand; printed after ordering. Same publisher's description as above. Seller ref. 9780387772417
Quantity available: 1
Seller: Lucky's Textbooks, Dallas, TX, United States
Condition: New. Seller ref. ABLIING23Feb2215580173147
Quantity available: more than 20
Seller: Revaluation Books, Exeter, United Kingdom
Hardcover. Condition: Brand New. 1st edition. 602 pages. 9.50 x 6.50 x 1.25 inches. In stock. Seller ref. x-0387772413
Quantity available: 2