What You Will Learn in This Book
Master the Mathematical Foundations: Go beyond theory to implement the core mathematical operations of linear algebra, calculus, and probability that form the bedrock of all modern neural networks, using Python and NumPy.
Build a Neural Network From Scratch: Gain an intuitive understanding of how models learn by constructing a simple neural network from first principles, giving you a solid grasp of concepts like activation functions, loss, and backpropagation (a toy NumPy sketch of such a network follows this list).
Engineer a Complete Data Pipeline: Learn the critical and often overlooked steps of sourcing, cleaning, and pre-processing the massive text datasets that fuel LLMs, while navigating the ethical considerations of bias and fairness.
Implement a Subword Tokenizer: Solve the "vocabulary problem" by building a Byte-Pair Encoding (BPE) tokenizer from scratch, learning precisely how raw text is converted into a format that models can understand (the core merge loop is sketched after this list).
Construct a Transformer Block, Piece by Piece: Deconstruct the "black box" of the Transformer by implementing its core components in code. You will build the scaled dot-product attention mechanism, expand it to multi-head attention, and assemble a complete, functional Transformer block (an attention sketch follows the list).
Differentiate and Understand Key Architectures: Clearly grasp the differences among the foundational LLM designs and when to use each, including encoder-only (like BERT), decoder-only (like GPT), and encoder-decoder models (like T5).
Write a Full Pre-training Loop: Move from theory to practice by writing the complete code to pre-train a small-scale GPT-style model from scratch, including setting up the language modeling objective and monitoring loss curves (a skeleton of such a loop is sketched after the list).
Understand the Economics and Scale of Training: Learn the "scaling laws" that govern the relationship between model size, dataset size, and performance, and understand the hardware and distributed computing strategies (e.g., model parallelism, ZeRO) required for training at scale (a back-of-envelope scaling calculation follows the list).
Adapt Pre-trained Models with Fine-Tuning: Learn to take a powerful, general-purpose LLM and adapt it for specific, real-world tasks using techniques like instruction tuning and standard fine-tuning.
Grasp Advanced Alignment and Evaluation Techniques: Gain a conceptual understanding of how Reinforcement Learning from Human Feedback (RLHF) aligns models with human intent, and learn how to properly evaluate model quality using benchmarks like MMLU and SuperGLUE.
Explore State-of-the-Art and Future Architectures: Survey the cutting edge of LLM research, including methods for model efficiency (quantization, Mixture of Experts), the shift to multimodality (incorporating images and audio), and the rise of agentic AI systems (a quantization sketch closes out the examples after this list).
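To give a flavor of the from-scratch approach, here is a minimal NumPy sketch of a one-hidden-layer network learning XOR. It is not the book's own code; the layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

# A minimal one-hidden-layer network trained on XOR with plain NumPy.
# Sizes, learning rate, and iteration count are illustrative assumptions.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: linear -> sigmoid -> linear -> sigmoid.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = np.mean((p - y) ** 2)             # mean squared error

    # Backward pass: the chain rule applied layer by layer.
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)                   # derivative through sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)
    dW1 = X.T @ dz1; db1 = dz1.sum(0)

    # Plain gradient-descent update.
    lr = 1.0
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```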
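The heart of BPE training is a loop that repeatedly merges the most frequent adjacent symbol pair. A toy sketch of that loop follows, with an assumed five-word corpus and merge budget:

```python
from collections import Counter

# Toy sketch of the core BPE training loop: repeatedly merge the most
# frequent adjacent symbol pair. Corpus and merge count are assumptions.
corpus = ["low", "lower", "lowest", "newest", "widest"]
words = [list(w) + ["</w>"] for w in corpus]   # </w> marks word endings

def most_frequent_pair(words):
    pairs = Counter()
    for w in words:
        for a, b in zip(w, w[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0] if pairs else None

merges = []
for _ in range(6):
    pair = most_frequent_pair(words)
    if pair is None:
        break
    merges.append(pair)
    merged = pair[0] + pair[1]
    # Replace every occurrence of the pair with the merged symbol.
    new_words = []
    for w in words:
        out, i = [], 0
        while i < len(w):
            if i + 1 < len(w) and (w[i], w[i + 1]) == pair:
                out.append(merged); i += 2
            else:
                out.append(w[i]); i += 1
        new_words.append(out)
    words = new_words

print(merges)  # learned merge rules, e.g. ('l', 'o'), ('lo', 'w'), ...
```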
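Scaled dot-product attention itself is compact enough to sketch directly: softmax(QK^T / sqrt(d_k))V. The shapes and the causal mask below are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)    # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block masked positions
    return softmax(scores) @ V

# Illustrative shapes: one sequence of 4 tokens, head dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
causal = np.tril(np.ones((4, 4), dtype=bool))  # decoder-style causal mask
print(scaled_dot_product_attention(Q, K, V, mask=causal).shape)  # (4, 8)
```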
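A pre-training loop has a simple skeleton: sample token batches, shift them by one position to form next-token targets, compute cross-entropy, and step the optimizer. The sketch below assumes PyTorch and substitutes a toy embedding-plus-linear stand-in for the real Transformer; the vocabulary size, batch shape, and hyperparameters are placeholders, not the book's values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Skeleton of a next-token pre-training loop. The real model would be a
# Transformer; a toy embedding+linear stand-in keeps the sketch short.
VOCAB, CTX, BATCH = 256, 32, 16

model = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Linear(64, VOCAB))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(200):
    # Stand-in data: random token ids; real training streams tokenized text.
    tokens = torch.randint(0, VOCAB, (BATCH, CTX + 1))
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # shift by one position

    logits = model(inputs)                            # (BATCH, CTX, VOCAB)
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), targets.reshape(-1))

    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step:4d}  loss {loss.item():.3f}")  # monitor the curve
```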
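As one concrete example of a scaling law, the Chinchilla fit of Hoffmann et al. (2022) models loss as L(N, D) = E + A/N^alpha + B/D^beta in parameters N and training tokens D. The constants below are the published fitted values, and the ~6ND FLOPs rule is the standard compute approximation:

```python
# Back-of-envelope use of the Chinchilla scaling-law fit
# (Hoffmann et al., 2022): L(N, D) = E + A / N**alpha + B / D**beta.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens   # standard ~6ND approximation

# Example: a 1B-parameter model on 20B tokens, roughly the
# Chinchilla-recommended ~20 tokens per parameter.
N, D = 1e9, 20e9
print(f"predicted loss ~ {predicted_loss(N, D):.2f}")
print(f"training compute ~ {training_flops(N, D):.1e} FLOPs")
```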
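Of the efficiency techniques surveyed, quantization is the easiest to illustrate. Here is a minimal sketch of symmetric per-tensor int8 weight quantization, one simple variant among many:

```python
import numpy as np

# Minimal sketch of symmetric per-tensor int8 weight quantization.
def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0          # map the max magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"max round-trip error: {err:.4f}")    # small relative to weight scale
```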