Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 61,47
Quantity available: more than 20 available
Condition: New. In.
Published by London; Berlin; Tokyo; Heidelberg; New York; Barcelona; Hong Kong; Milan; Paris; Santa Clara; Singapore: Springer, 1999
ISBN 10 : 1852330953 ISBN 13 : 9781852330958
Language: English
Seller: Roland Antiquariat UG haftungsbeschränkt, Weinheim, Germany
EUR 56
Quantity available: 1 available
Softcover. XXIII, 275 pp.: graphical illustrations; 24 cm. Like new. Unread book. 9781852330958. Language: German. Weight in grams: 467. Softcover reprint of the original 1st ed. 1999.
Seller: Chiron Media, Wallingford, United Kingdom
EUR 58,37
Quantity available: 10 available
Paperback. Condition: New.
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 59,97
Quantity available: 1 available
Paperback. Condition: New. Print on demand, new stock - printed after ordering - Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the 'targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution. Chapter 1 demonstrates that even for a deterministic system and 'benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5.
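To make the idea in the description concrete, the following is a minimal, illustrative sketch (not taken from the book) of a mixture density network in PyTorch: instead of predicting a single value, the network outputs the parameters of a Gaussian mixture for p(y|x) and is trained by maximum likelihood, i.e. by minimising the negative log-likelihood, so it can represent skewed or multimodal conditional distributions. All layer sizes, names, and the toy data below are hypothetical choices.

import torch
import torch.nn as nn

class MDN(nn.Module):
    # Minimal mixture density network: maps x to a Gaussian-mixture density over y.
    def __init__(self, n_in=1, n_hidden=32, n_components=3):
        super().__init__()
        # Two hidden layers, in line with the description's remark that a
        # universal approximator for conditional densities needs at least two.
        self.body = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
        )
        self.logits = nn.Linear(n_hidden, n_components)   # mixture weights (before softmax)
        self.means = nn.Linear(n_hidden, n_components)    # component means
        self.log_std = nn.Linear(n_hidden, n_components)  # log of component standard deviations

    def neg_log_likelihood(self, x, y):
        h = self.body(x)
        log_w = torch.log_softmax(self.logits(h), dim=-1)
        comp = torch.distributions.Normal(self.means(h), self.log_std(h).exp())
        # log p(y|x) = logsumexp_k [ log w_k(x) + log N(y | mu_k(x), sigma_k(x)) ]
        log_p = torch.logsumexp(log_w + comp.log_prob(y), dim=-1)
        return -log_p.mean()

# Toy usage: minimise the negative log-likelihood instead of the squared error,
# so the model can represent the whole conditional distribution rather than only its mean.
x = torch.randn(256, 1)
y = torch.sin(3.0 * x) + 0.1 * torch.randn(256, 1)
model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = model.neg_log_likelihood(x, y)
    loss.backward()
    opt.step()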
Seller: California Books, Miami, FL, United States
EUR 65,98
Quantity available: more than 20 available
Condition: New.
Seller: Books Puddle, New York, NY, United States
EUR 81,12
Quantity available: 4 available
Condition: New. pp. 302.
Seller: Revaluation Books, Exeter, United Kingdom
EUR 79,42
Quantity available: 2 available
Paperback. Condition: Brand New. 275 pages. 9.50 x 6.25 x 0.75 inches. In stock.
Seller: Solr Books, Lincolnwood, IL, United States
EUR 41,67
Quantity available: 1 available
Condition: Very Good. This book is in very good condition. There may be a few flaws, such as shelf wear and some light wear.
Seller: moluna, Greven, Germany
EUR 48,37
Quantity available: more than 20 available
Condition: New. This is a print-on-demand item and will be printed for you after you order. Provides unique, comprehensive coverage of generalisation and regularisation: provides the first real-world test results for recent theoretical findings on the generalisation performance of committees. Conventional applications of neural networks usually ...
Published by Springer London, Feb 1999
ISBN 10 : 1852330953 ISBN 13 : 9781852330958
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 53,49
Quantity available: 1 available
Paperback. Condition: New. This item is printed on demand - print-on-demand title. New stock - Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the 'targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution. Chapter 1 demonstrates that even for a deterministic system and 'benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 300 pp. English.
Seller: THE SAINT BOOKSTORE, Southport, United Kingdom
EUR 67,85
Quantity available: more than 20 available
Paperback / softback. Condition: New. This item is printed on demand. New copy - usually dispatched within 5-9 working days. 497.
Seller: Majestic Books, Hounslow, United Kingdom
EUR 81,12
Quantity available: 4 available
Condition: New. Print on demand. pp. 302. 49: B&W 6.14 x 9.21 in or 234 x 156 mm (Royal 8vo), perfect bound on white with gloss lamination.
Seller: Biblios, Frankfurt am Main, HESSE, Germany
EUR 85,38
Quantity available: 4 available
Condition: New. Print on demand. pp. 302.
Published by Springer London, Feb 1999
ISBN 10 : 1852330953 ISBN 13 : 9781852330958
Language: English
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 85,55
Quantity available: 2 available
Paperback. Condition: New. This item is printed on demand - it takes 3-4 days longer - new stock - Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the 'targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution. Chapter 1 demonstrates that even for a deterministic system and 'benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5. 300 pp. English.