Multi-modality sensors collect data to detect and characterize the behavior of entities and events in a given situation. Transforming this multi-modality sensor data into useful, actionable information requires a robust data fusion model. Such a model should be able to acquire data from multi-agent sensors and exploit their spatio-temporal characteristics to improve situational awareness, in particular by supporting soft fusion of multi-threaded information from a variety of sensors under task uncertainty. This book presents a novel image-based model for multi-modality data fusion. The fusion model is biologically inspired by the human brain's energy-perception model: just as the brain has designated regions that map immediate sensory experiences and fuses these heterogeneous perceptions into a situational understanding for decision-making, the proposed image-based fusion model follows an analogous data-to-information fusion scheme for actionable decision-making in intelligent surveillance systems.
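The synopsis describes mapping heterogeneous sensor perceptions onto a common image-like representation before fusing them. As a rough illustration of that general idea only (not the book's actual model), the following Python sketch projects readings from hypothetical sensors onto a shared 2D grid of "energy" values and fuses the grids with a simple confidence-weighted average; the grid size, sensor names, weights, and fusion rule are all assumptions made for demonstration.

    # Illustrative sketch only: hypothetical sensors and a simple assumed
    # fusion rule, not a reproduction of the model presented in the book.
    import numpy as np

    GRID = (32, 32)  # shared spatial grid covering the surveyed area (assumed size)

    def to_energy_map(detections, grid=GRID):
        """Project one sensor's detections onto a 2D 'energy' image.

        detections: list of (x, y, intensity), with x and y normalized to [0, 1).
        """
        img = np.zeros(grid)
        for x, y, intensity in detections:
            r = int(y * grid[0])
            c = int(x * grid[1])
            img[r, c] += intensity
        return img

    def fuse(maps, weights):
        """Confidence-weighted average of per-sensor energy maps (assumed rule)."""
        stacked = np.stack(maps)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return np.tensordot(w, stacked, axes=1)

    # Hypothetical readings from two modalities (e.g., acoustic and infrared).
    acoustic = to_energy_map([(0.21, 0.40, 0.9), (0.70, 0.72, 0.4)])
    infrared = to_energy_map([(0.22, 0.41, 0.8)])

    fused = fuse([acoustic, infrared], weights=[0.5, 0.5])
    hotspot = np.unravel_index(np.argmax(fused), fused.shape)
    print("strongest fused response at grid cell", hotspot)

In this toy setup, detections that several modalities place at nearby grid cells reinforce each other in the fused map, which is the intuition behind combining sensors in a common image-based representation.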
The information provided in the "Synopsis" section may refer to another edition of this title.
Dr Aaron Rasheed Rababaah is an Associate Professor of Computer Science at the American University of Kuwait. He holds a BSc in Industrial Engineering, an MSc in Computer Science, and a PhD in Computer Systems Engineering. He has eight years of teaching experience at four universities. His research interests include intelligent systems, machine vision, and robotics.
The information provided in the "About the Book" section may refer to another edition of this title.
EUR 9.70 shipping from Germany to France
Seller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and is printed for you after you place your order. Author: Rababaah Aaron. Seller reference no. 151238823
Quantity available: More than 20 available
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand; printed after ordering. Seller reference no. 9783330651531
Quantity available: 1 available
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand and takes 3-4 days longer. 240 pp. English. Seller reference no. 9783330651531
Quantity available: 2 available
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. VDM Verlag, Dudweiler Landstraße 99, 66123 Saarbrücken. 240 pp. English. Seller reference no. 9783330651531
Quantity available: 2 available
Seller: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 240 pages. 8.66 x 5.91 x 0.55 inches. In stock. Seller reference no. 3330651539
Quantity available: 1 available