Modern LLMs forget by default. This book shows you how to make them remember. LLM Memory Engineering is your practical blueprint for building AI agents and applications with long-term, persistent memory—so your models can recall, reason, and evolve across time.
Written by a seasoned AI engineer with hands-on experience in building memory-augmented agents, this book goes far beyond theoretical explanations. You’ll get field-tested patterns, production-grade implementations, and insights drawn from real-world LLM deployments using LangChain, LlamaIndex, Chroma, Weaviate, and Pinecone.
About the Technology:
Today’s large language models, while powerful, are inherently stateless. They respond based on the current prompt alone—unless you equip them with external memory systems. This book bridges that critical gap. Whether you're building an AI assistant, a personalized tutor, or a long-running agent system, memory matters. Technologies like vector databases, embedding retrievers, summarizers, and LangChain's modular memory interfaces are the foundation of truly intelligent LLM workflows.
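To make that idea concrete, here is a minimal, dependency-free sketch of the pattern just described: past exchanges are embedded, stored as episodic memories, and the most relevant ones are retrieved and injected into the next prompt. The `embed` stub, the `EpisodicMemory` class, and all other names are illustrative placeholders under assumed behavior, not the book's code or any specific library API.

```python
import math

def embed(text: str) -> list[float]:
    # Placeholder: a real system would call an embedding model here.
    # We just hash characters into a tiny fixed-size vector for illustration.
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class EpisodicMemory:
    """Toy long-term memory: store past exchanges, retrieve by similarity."""

    def __init__(self):
        self.entries: list[tuple[list[float], str]] = []

    def remember(self, text: str) -> None:
        self.entries.append((embed(text), text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]

# Prompt injection: prepend recalled memories to the otherwise stateless prompt.
memory = EpisodicMemory()
memory.remember("User prefers concise answers with code samples.")
memory.remember("User is building a customer-support bot in Python.")

question = "How should I format my bot's replies?"
context = "\n".join(memory.recall(question))
prompt = f"Relevant memories:\n{context}\n\nUser: {question}"
print(prompt)
```

In production the same shape holds, but the in-memory list is replaced by a vector database and the stub embedding by a real model.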
What’s Inside:
You’ll learn how to design memory pipelines, store episodic and semantic memory, customize prompt injection, and scale memory systems for production. From chunking and retrieval to compression and auditing, the book guides you through every layer of memory engineering:
- Vector search with FAISS, Chroma, Pinecone, and Weaviate
- Retrieval-Augmented Generation (RAG) with persistent memory
- LangChain and LlamaIndex memory modules, explained and applied
- Memory summarization, relevance scoring, and garbage collection (see the sketch after this list)
- Compliance, data minimization, and secure memory design
- Real-world use cases: research agents, companions, support bots, and more
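As one illustration of the summarization and garbage-collection items above, the sketch below scores stored memories by a blend of recency and usage, evicts the weakest entries once a capacity budget is exceeded, and compresses the evicted ones into a single summary entry. The scoring weights, half-life, and every name here are assumptions for illustration, not the book's implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    created_at: float = field(default_factory=time.time)
    hits: int = 0  # how often this memory was retrieved

def relevance_score(item: MemoryItem, now: float, half_life_s: float = 3600.0) -> float:
    # Hedged heuristic: exponential recency decay plus a small usage bonus.
    age = now - item.created_at
    recency = 0.5 ** (age / half_life_s)
    return recency + 0.1 * item.hits

def garbage_collect(items: list[MemoryItem], capacity: int) -> list[MemoryItem]:
    """Keep only the `capacity` highest-scoring memories."""
    now = time.time()
    ranked = sorted(items, key=lambda m: relevance_score(m, now), reverse=True)
    return ranked[:capacity]

def summarize(evicted: list[MemoryItem]) -> MemoryItem:
    # Placeholder summary: a real pipeline would make an LLM call that
    # compresses the evicted memories into one condensed entry.
    joined = "; ".join(m.text for m in evicted)
    return MemoryItem(text=f"Summary of older memories: {joined[:200]}")

store = [MemoryItem(f"fact {i}") for i in range(10)]
kept = garbage_collect(store, capacity=5)
evicted = [m for m in store if m not in kept]
if evicted:
    kept.append(summarize(evicted))
print(len(kept), "memories retained")
```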
Who This Book Is For:
Whether you're an AI engineer, backend developer, data scientist, or part of a product team working on intelligent applications, this book is written for you. No deep ML background is required; only curiosity and the desire to build smarter, context-aware systems.

LLMs are evolving rapidly, and so are expectations for what they can remember. If you're not already working with memory-augmented architectures, you're behind. Don't let your AI product stall at session-level intelligence. Start building agents that remember, adapt, and improve.

This isn't just a book; it's a complete engineering guide. You're not only learning how memory works, you're getting battle-tested techniques, best practices, reusable prompts, deployment patterns, and architectural blueprints that can save you months of R&D. If you're serious about building AI that goes beyond the prompt, AI that remembers, reasons, and grows with its users, then this book is your toolkit. Start mastering LLM memory engineering today and shape the future of intelligent agents.