Published by Mercury Learning and Information, 2024
ISBN 10: 1501523562, ISBN 13: 9781501523564
Language: English
Seller: Books From California, Simi Valley, CA, United States
EUR 40,75
Quantity available: 1
Paperback. Condition: Very Good.
Published by Mercury Learning and Information, 2025
ISBN 10: 1501523562, ISBN 13: 9781501523564
Language: English
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 49,79
Quantity available: More than 20
Condition: New.
Published by Mercury Learning and Information, 1/1/2025
ISBN 10: 1501523562, ISBN 13: 9781501523564
Language: English
Seller: BargainBookStores, Grand Rapids, MI, United States
EUR 65,15
Quantity available: 5
Paperback or Softback. Condition: New. Large Language Models for Developers: A Prompt-Based Exploration of LLMs. Book.
EUR 65,24
Quantity available: More than 20
Paperback. Condition: New. This book offers a thorough exploration of Large Language Models (LLMs), guiding developers through the evolving landscape of generative AI and equipping them with the skills to use LLMs in practical applications. Designed for developers with a foundational understanding of machine learning, it covers essential topics such as prompt engineering techniques, fine-tuning methods, attention mechanisms, and quantization strategies for optimizing and deploying LLMs. Beginning with an introduction to generative AI, the book explains the distinctions between conversational AI and generative models like GPT-4 and BERT, laying the groundwork for prompt engineering (Chapters 2 and 3). The LLMs used to generate completions for the prompts include Llama-3.1 405B, Llama 3, GPT-4o, Claude 3, Google Gemini, and Meta AI. Readers learn the art of creating effective prompts, covering advanced methods such as Chain of Thought (CoT) and Tree of Thought prompting. The book then details fine-tuning techniques (Chapters 5 and 6), demonstrating how to customize LLMs for specific tasks through methods like LoRA and QLoRA, with Python code samples for hands-on learning. Readers are also introduced to the transformer architecture's attention mechanism (Chapter 8), with step-by-step guidance on implementing self-attention layers. For developers aiming to optimize LLM performance, the book concludes with quantization techniques (Chapters 9 and 10), exploring strategies like dynamic quantization and probabilistic quantization, which reduce model size without sacrificing performance.
FEATURES:
- Covers the full lifecycle of working with LLMs, from model selection to deployment.
- Includes practical Python code samples for implementing prompt engineering, fine-tuning, and quantization.
- Teaches readers to enhance model efficiency with advanced optimization techniques.
- Includes companion files with code and images, available from the publisher.
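To make the description's technical claims concrete: a self-attention layer of the kind Chapter 8 reportedly walks through can be written in a few lines of PyTorch. This is a minimal illustrative sketch, not the author's code; the class name, dimensions, and single-head simplification are assumptions.

```python
# Minimal single-head self-attention (illustrative sketch, not the book's code).
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        # Learned projections for queries, keys, and values.
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, seq_len, embed_dim).
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        weights = torch.softmax(scores, dim=-1)
        return weights @ v

attn = SelfAttention(embed_dim=64)
out = attn(torch.randn(2, 10, 64))  # -> shape (2, 10, 64)
```

Similarly, the dynamic quantization the description mentions can be tried with PyTorch's built-in API; the book may use a different workflow, so treat this as one possible approach rather than the book's method.

```python
# Dynamic quantization of Linear layers to int8 (one possible approach).
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)
# Weights are stored as int8; activations are quantized on the fly at inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized)
```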
EUR 92,26
Quantity available: More than 20
Paperback. Condition: New.
Published by Mercury Learning & Information, 2025
ISBN 10: 1501523562, ISBN 13: 9781501523564
Language: English
Seller: Revaluation Books, Exeter, United Kingdom
EUR 77,66
Quantity available: 2
Paperback. Condition: Brand New. 1012 pages. 6.00 x 1.90 x 9.00 inches. In Stock.
EUR 67,16
Quantity available: More than 20
Paperback. Condition: New.
Seller: preigu, Osnabrück, Germany
EUR 53,65
Quantity available: 5
Paperback. Condition: New. Large Language Models for Developers | A Prompt-based Exploration of LLMs | Oswald Campesato | Paperback | 1012 pp. | English | 2025 | De Gruyter | EAN 9781501523564 | Responsible person for the EU: Walter de Gruyter GmbH, De Gruyter GmbH, Genthiner Str. 13, 10785 Berlin, productsafety[at]degruyterbrill[dot]com | Seller: preigu.
EUR 83,52
Quantity available: More than 20
Paperback. Condition: New.
Seller: Grand Eagle Retail, Bensenville, IL, United States
EUR 46,55
Quantity available: 1
Paperback. Condition: New. This item is printed on demand. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
EUR 51,24
Quantity available: More than 20
Paperback. Condition: New. New Book. Delivered from our UK warehouse in 4 to 14 business days. THIS BOOK IS PRINTED ON DEMAND. Established seller since 2000.
Published by Mercury Learning and Information, De Gruyter, Jan 2025
ISBN 10: 1501523562, ISBN 13: 9781501523564
Language: English
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 58,95
Quantity available: 1
Paperback. Condition: New. This item is printed on demand, so it takes 3-4 days longer. New stock. 1046 pp. English.
Seller: CitiRetail, Stevenage, United Kingdom
EUR 56,69
Quantity available: 1
Paperback. Condition: New. This item is printed on demand. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability.
Seller: moluna, Greven, Germany
EUR 51,60
Quantity available: More than 20
Condition: New. This is a print-on-demand item and will be printed for you after you place your order. Oswald Campesato (San Francisco, CA) specializes in Deep Learning, Python, Data Science, and Generative AI. He is the author/co-author of over forty-five books including Google Gemini for Python, Large Language Models, and GPT-4 for Developers (all Mercury ...
Published by Mercury Learning and Information, De Gruyter, Jan 2025
ISBN 10: 1501523562, ISBN 13: 9781501523564
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 58,95
Quantity available: 1
Paperback. Condition: New. This item is printed on demand. New stock. Walter de Gruyter, Genthiner Straße 13, 10785 Berlin. 1046 pp. English.
Published by Mercury Learning and Information, De Gruyter Akademie Forschung, 2025
ISBN 10: 1501523562, ISBN 13: 9781501523564
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 65,89
Quantity available: 1
Paperback. Condition: New. Printed after ordering (new stock).