This book presents an in-depth exploration of multimodal learning toward recommendation, along with a comprehensive survey of the most important research topics and state-of-the-art methods in this area.
First, it presents a semantic-guided feature distillation method that employs a teacher-student framework to robustly extract effective recommendation-oriented features from generic multimodal features. Next, it introduces a novel multimodal attentive metric learning method to model users' diverse preferences for various items. It then proposes a disentangled multimodal representation learning recommendation model, which can capture users' fine-grained attention to different modalities on each factor in user preference modeling. Furthermore, a meta-learning-based multimodal fusion framework is developed to model the various relationships among multimodal information. Building on the success of disentangled representation learning, it further proposes an attribute-driven disentangled representation learning method, which uses attributes to guide the disentanglement process in order to improve the interpretability and controllability of conventional recommendation methods. Finally, the book concludes with future research directions in multimodal learning toward recommendation.
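To give a flavor of the teacher-student idea behind feature distillation, the following is a minimal, generic sketch (not the book's exact method; all names, dimensions, and the plain MSE objective are illustrative assumptions): a frozen teacher maps generic multimodal features to a target representation, and a linear student is trained by gradient descent to mimic it.

```python
import numpy as np

rng = np.random.default_rng(0)

def distillation_loss(teacher_feats, student_feats):
    """Mean squared error between teacher and student feature vectors."""
    return float(np.mean((teacher_feats - student_feats) ** 2))

def student_step(W, x, teacher_out, lr=0.02):
    """One gradient-descent step of the linear student W on input x."""
    pred = W @ x                                        # student forward pass
    grad = np.outer(pred - teacher_out, x) * (2.0 / teacher_out.size)
    return W - lr * grad                                # gradient update

x = rng.normal(size=8)            # a generic multimodal feature (dim 8)
teacher = rng.normal(size=(4, 8)) # frozen teacher projection (dim 8 -> 4)
W = rng.normal(size=(4, 8))       # student weights, randomly initialized

before = distillation_loss(teacher @ x, W @ x)
for _ in range(300):
    W = student_step(W, x, teacher @ x)
after = distillation_loss(teacher @ x, W @ x)
# After training, the student's output closely matches the teacher's.
```

In practice the teacher would be a large pretrained multimodal encoder and the student a compact network trained on many samples, but the loop above captures the core mechanic: the student minimizes a discrepancy loss against the teacher's features.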
The book is suitable for graduate students and researchers interested in multimodal learning and recommender systems. The multimodal learning methods presented are also applicable to other retrieval- or ranking-related research areas, such as image retrieval, moment localization, and visual question answering.
The information provided in the "Synopsis" section may refer to another edition of this title.
Fan Liu is a Research Fellow with the School of Computing, National University of Singapore (NUS). His research interests lie primarily in multimedia computing and information retrieval. His work has been published in a set of top forums, including ACM SIGIR, MM, WWW, TKDE, TOIS, TMM, and TCSVT. He is an area chair of ACM MM and a senior PC member of CIKM.
Zhenyang Li is a postdoctoral researcher with the Hong Kong Generative AI Research and Development Center Limited. His research interests are primarily in recommendation and visual question answering. His work has been published in a set of top forums, including ACM MM, TIP, and TMM.
Liqiang Nie is Professor at and Dean of the School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen). His research interests are primarily in multimedia computing and information retrieval. He has co-authored more than 200 articles and four books. He is a regular area chair of ACM MM, NeurIPS, IJCAI, and AAAI, and a member of the ICME steering committee. He has received many awards, like the ACM MM and SIGIR best paper honorable mention in 2019, SIGMM rising star in 2020, TR35 China 2020, DAMO Academy Young Fellow in 2020, and SIGIR best student paper in 2021.
The information provided in the "About the book" section may refer to another edition of this title.
Seller: GreatBookPrices, Columbia, MD, United States
Condition: New. Seller ref. 49563644-n
Quantity available: 1
Seller: Grand Eagle Retail, Bensenville, IL, United States
Paperback. Condition: new. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Seller ref. 9783031831874
Quantity available: 1
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; delivery takes 3-4 days longer. New stock. 152 pp. English. Seller ref. 9783031831874
Quantity available: 2
Seller: GreatBookPrices, Columbia, MD, United States
Condition: As New. Unread book in perfect condition. Seller ref. 49563644
Quantity available: 1
Seller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after your order. Seller ref. 2054249771
Quantity available: More than 20
Seller: Books Puddle, New York, NY, United States
Condition: New. Seller ref. 26403601002
Quantity available: 4
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand. Seller ref. 410634677
Quantity available: 4
Seller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Print on demand. Seller ref. 18403600992
Quantity available: 4
Seller: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 169 pages. 9.25x6.10x9.25 inches. In stock. Seller ref. x-303183187X
Quantity available: 1
Seller: preigu, Osnabrück, Germany
Paperback. Condition: New. Multimodal Learning toward Recommendation | Fan Liu (et al.) | Paperback | xvii | English | 2025 | Springer Nature Switzerland | EAN 9783031831874 | Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg, juergen[dot]hartmann[at]springer[dot]com | Seller: preigu. Seller ref. 130977204
Quantity available: 5