LLM Inference in C++: Building High-Throughput Engines with PagedAttention and CUDA Kernels - Softcover

Book 7 of 11: High-Performance C++ Engineering

S. Lightner, Billie

 
9798259069299: LLM Inference in C++: Building High-Throughput Engines with PagedAttention and CUDA Kernels

Synopsis

Stop Wasting GPU Compute. Build the High-Throughput, Low-Latency AI Infrastructure of 2026.

The "VRAM Wall" is the biggest bottleneck in modern AI. Standard Python wrappers and out-of-the-box runtimes are fine for prototyping, but at scale, memory fragmentation and Global Interpreter Lock (GIL) overhead will destroy your throughput. LLM Inference in C++ is the definitive engineering manual for bypassing Python entirely and building custom, bare-metal inference engines that maximize hardware utilization.

Focusing on the cutting-edge 2026 landscape, this book bridges the gap between high-level AI concepts and low-level GPU execution. You will learn how to implement enterprise-grade features like PagedAttention, FlashAttention-3, and Continuous Batching directly in C++ and CUDA, unlocking massive performance gains for large-scale language models.
Inside, you will discover:

  • Hardware-Aware Memory Management: Eliminate memory waste by implementing PagedAttention logic and custom allocators to bypass std::malloc overhead.
  • Accelerated Tensor Algebra: Master C++23's std::mdspan and write fused AVX-512 SIMD kernels for CPU-side tensor work, minimizing costly host-device round trips.
  • Custom CUDA Kernels: Write high-speed FlashAttention-3, LayerNorm, and RMSNorm kernels while managing CUDA streams for maximum GPU occupancy.
  • The Cost Killer (Quantization): Slash VRAM requirements with bit-level manipulation for 4-bit (AWQ) and 8-bit (FP8) inference using NVIDIA Tensor Cores.
  • Distributed & Speculative Execution: Scale across clusters using zero-copy NCCL/RDMA interconnects and implement Draft Models to accelerate massive architectures.
  • The Production Serving Layer: Build lock-free C++ request queues for continuous batching and track P99 "Time to First Token" (TTFT) at the systems level.
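To give a flavor of the first bullet, here is a minimal sketch of a PagedAttention-style block table, the core idea behind the book's memory-management chapters. The class and member names (`BlockAllocator`, `SequenceKV`, `kBlockSize`) are illustrative assumptions, not the book's actual API: the KV cache is carved into fixed-size blocks, each sequence maps logical block indices to physical block ids, and memory is claimed on demand so fragmentation is bounded by at most one partially filled block per sequence.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Pool of fixed-size KV-cache blocks. Physical block ids are recycled
// through a simple free list instead of going through std::malloc.
class BlockAllocator {
public:
    explicit BlockAllocator(int num_blocks) {
        for (int i = num_blocks - 1; i >= 0; --i) free_blocks_.push_back(i);
    }
    // Returns a physical block id, or -1 if the pool is exhausted.
    int allocate() {
        if (free_blocks_.empty()) return -1;
        int id = free_blocks_.back();
        free_blocks_.pop_back();
        return id;
    }
    void release(int id) { free_blocks_.push_back(id); }
    std::size_t free_count() const { return free_blocks_.size(); }

private:
    std::vector<int> free_blocks_;
};

// Per-sequence block table: appending a token grabs a new physical
// block only when the current block fills up.
struct SequenceKV {
    static constexpr int kBlockSize = 16;  // tokens per block (illustrative)
    std::vector<int> block_table;          // logical index -> physical id
    int num_tokens = 0;

    bool append_token(BlockAllocator& alloc) {
        if (num_tokens % kBlockSize == 0) {  // first token, or block just filled
            int id = alloc.allocate();
            if (id < 0) return false;        // KV cache exhausted: caller must preempt
            block_table.push_back(id);
        }
        ++num_tokens;
        return true;
    }
    void free_all(BlockAllocator& alloc) {
        for (int id : block_table) alloc.release(id);
        block_table.clear();
        num_tokens = 0;
    }
};
```

With a 16-token block size, a sequence of 17 tokens holds exactly two physical blocks, and releasing the sequence returns both to the pool for the next request in the batch.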
THE IMPLEMENTATION VAULT (Appendix)

Built for the infrastructure engineer in the trenches, the Appendix provides immediate, battle-tested utility:
  • The 15-Point Production-Ready Checklist: Your mandatory safety and performance audit before deploying any custom engine.
  • Latency vs. Throughput Reference Table: The ultimate cheat sheet for balancing batch sizes against user wait times.
  • Troubleshooting Guide: Direct solutions for the top 10 most common and devastating CUDA and C++ memory errors.
Don't let inefficient software architecture throttle your hardware. Master C++ LLM inference and build the fastest, most cost-effective AI engines in the industry.

The information provided in the "Synopsis" section may refer to another edition of this title.