This book introduces the gamma-divergence, a measure of distance between probability distributions proposed by Fujisawa and Eguchi in 2008. The gamma-divergence has been extensively explored as a tool for robust estimation when the power index γ is positive, but it can also be defined for negative γ, as long as an integrability condition is satisfied. The authors therefore consider the gamma-divergence on a set of discrete distributions, where the arithmetic, geometric, and harmonic means of distribution ratios are closely connected with the gamma-divergence for negative γ. In particular, the authors call the gamma-divergence with γ equal to -1 the geometric-mean (GM) divergence.
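For reference, the log-form definition from Fujisawa and Eguchi (2008) reads, for discrete distributions g and f (sums become integrals in the continuous case; the book's normalization conventions may differ slightly):

    D_\gamma(g, f) = \frac{1}{\gamma(1+\gamma)} \log \sum_x g(x)^{1+\gamma}
                     - \frac{1}{\gamma} \log \sum_x g(x) f(x)^{\gamma}
                     + \frac{1}{1+\gamma} \log \sum_x f(x)^{1+\gamma}.

For positive γ this is nonnegative and vanishes exactly when g = f. For negative γ the middle sum involves negative powers of f, which is where the integrability condition enters; on a finite sample space with strictly positive probabilities it holds automatically.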
The book begins by providing an overview of the gamma-divergence and its properties. It then discusses applications of the gamma-divergence in various areas, including machine learning, statistics, and ecology. The Bernoulli, categorical, Poisson, negative binomial, and Boltzmann distributions are discussed as typical examples. Furthermore, regression models that explicitly or implicitly assume these distributions for the dependent variable, as in generalized linear models, are discussed, and the minimum gamma-divergence method is applied to them; a toy sketch of that method follows below.
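As a toy illustration (not taken from the book), here is a minimal Python sketch of minimum gamma-divergence estimation for a Poisson model, the covariate-free special case of a Poisson generalized linear model. It minimizes the empirical γ-cross-entropy of Fujisawa and Eguchi (2008) over the rate λ; the truncation at k_max and all variable names are implustration choices made here, not the book's code:

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    def gamma_cross_entropy(lam, x, gamma, k_max=200):
        # Empirical gamma-cross-entropy of a Poisson(lam) model:
        #   -(1/gamma) * log( mean_i f(x_i)^gamma )
        #   + (1/(1+gamma)) * log( sum_k f(k)^(1+gamma) )
        # The infinite sum over the support is truncated at k_max.
        pmf_x = poisson.pmf(x, lam)
        support = np.arange(k_max + 1)
        norm = np.sum(poisson.pmf(support, lam) ** (1.0 + gamma))
        return -np.log(np.mean(pmf_x ** gamma)) / gamma + np.log(norm) / (1.0 + gamma)

    rng = np.random.default_rng(0)
    x = rng.poisson(5.0, size=200)
    x[:10] = 50  # inject gross outliers to contaminate the sample

    fit = minimize_scalar(gamma_cross_entropy, bounds=(0.1, 60.0),
                          args=(x, 0.5), method="bounded")
    print("MLE (sample mean):", x.mean())             # pulled toward the outliers
    print("min gamma-divergence, gamma=0.5:", fit.x)  # stays near the true rate 5

With positive γ the fitted rate stays near 5 despite the contamination, while the maximum-likelihood estimate (the sample mean) is dragged upward, which is the robustness property mentioned above.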
In ensemble learning, AdaBoost is derived from the exponential loss function in the weighted-majority-vote framework, and it is pointed out that the exponential loss is deeply connected to the GM divergence. For the Boltzmann machine, maximum-likelihood estimation has to rely on approximation methods such as the mean-field approximation because computing the partition function is intractable. However, it is shown that estimation based on the GM divergence and the exponential loss does not require the partition function and can be carried out without variational inference.
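The basic reason ratio-based criteria can sidestep the partition function is elementary: a Boltzmann machine assigns p(x) = exp(-E(x)) / Z, so any ratio of probabilities is a ratio of unnormalized weights and Z cancels. The short Python check below verifies this cancellation numerically on a model small enough for Z to be computed by brute force; it only illustrates the cancellation, not the book's GM-divergence estimator:

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(4, 4))
    W = (W + W.T) / 2.0          # symmetric coupling matrix
    np.fill_diagonal(W, 0.0)     # no self-couplings

    def energy(x, W):
        # Boltzmann machine energy; p(x) is proportional to exp(-energy(x, W))
        return -0.5 * x @ W @ x

    states = [np.array(s) for s in itertools.product([0, 1], repeat=4)]
    Z = sum(np.exp(-energy(s, W)) for s in states)  # brute-force partition function

    x, y = states[3], states[9]
    ratio_normalized = (np.exp(-energy(x, W)) / Z) / (np.exp(-energy(y, W)) / Z)
    ratio_unnormalized = np.exp(-energy(x, W) + energy(y, W))  # Z has cancelled
    print(np.isclose(ratio_normalized, ratio_unnormalized))    # True

Because the GM divergence is built from means of distribution ratios, criteria based on it inherit this cancellation, which is why the partition function never needs to be evaluated.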
Shinto Eguchi received his master's degree from Osaka University in 1979 and a Ph.D. from Hiroshima University, Japan, in 1984. His career began as an Assistant Professor at Hiroshima University in 1984; he then became an Associate Professor at Shimane University in 1986 and was a Professor at The Institute of Statistical Mathematics from 1995 to 2020. He is currently Emeritus Professor at the Institute of Statistical Mathematics and the Graduate University for Advanced Studies. His research interests are primarily in statistics, including statistical machine learning, bioinformatics, information geometry, statistical ecology, parametric/semiparametric inference, and robust statistics.
His recent publications include:
- A generalized quasi-linear mixed-effects model. Y. Saigusa, S. Eguchi, O. Komori. Statistical Methods in Medical Research, 31 (7), 1280-1291, 2022.
- Robust self-tuning semiparametric PCA for contaminated elliptical distribution. H. Hung, S. Y. Huang, S. Eguchi. IEEE Transactions on Signal Processing, 70, 5885-5897, 2022.
- Minimum information divergence of Q-functions for dynamic treatment regimes. S. Eguchi. Information Geometry, 1-21, 2022.