Council
AI Glossary

What is Mixed Precision Training?

Training neural networks using a mix of 16-bit and 32-bit floating-point numbers to save memory and increase speed.

By Council Research Team · Updated: Jan 27, 2026

Definition

Mixed precision training uses lower-precision numerical formats (FP16 or BF16) for most computations while maintaining FP32 for critical operations like loss accumulation and gradient updates. Since most neural network operations tolerate reduced precision, this approach nearly halves memory usage and can roughly double throughput on modern GPUs with tensor cores (which accelerate low-precision matrix operations). Training in FP16 requires loss scaling to prevent gradient underflow in its narrow dynamic range. BF16 (bfloat16) has become the preferred format for training because its wider exponent range avoids most overflow and underflow issues. Nearly all frontier model training now uses mixed or lower precision.
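The division of labor described above can be sketched numerically. This is a minimal NumPy illustration, not a real training framework: it assumes a toy one-parameter-vector "model", a squared-output loss, and plain SGD, but it shows the core pattern of FP32 master weights with half-precision compute.

```python
import numpy as np

rng = np.random.default_rng(0)
w32 = rng.standard_normal(4).astype(np.float32)   # FP32 "master" weights
x = rng.standard_normal(4).astype(np.float16)     # input held in half precision

for _ in range(3):
    w16 = w32.astype(np.float16)             # half-precision copy for compute
    pred = np.dot(w16, x)                    # forward pass runs in FP16
    grad16 = np.float16(2.0) * pred * x      # gradient of loss = pred**2, in FP16
    w32 -= 0.01 * grad16.astype(np.float32)  # upcast, then FP32 optimizer step
```

Real frameworks automate this casting (for example, PyTorch's `torch.autocast` context together with a gradient scaler) and keep optimizer state such as momentum in FP32 as well.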

Examples

1. Training a model with BF16 forward and backward passes but FP32 optimizer states
2. NVIDIA tensor cores providing 2x throughput for FP16 matrix multiplications
3. Loss scaling multiplying the loss by 1024 to prevent small gradients from becoming zero in FP16
4. FP8 training emerging on H100 GPUs for even greater efficiency gains
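The loss scaling example above can be made concrete. The snippet below is a sketch using NumPy's `float16` type: a gradient value below FP16's smallest subnormal (about 6e-8) underflows to zero when cast directly, but survives when scaled by 1024 first and unscaled back in FP32 before the weight update.

```python
import numpy as np

tiny_grad = 1e-8                      # below FP16's smallest subnormal (~6e-8)
assert np.float16(tiny_grad) == 0.0   # direct FP16 cast underflows to zero

scale = 1024.0
scaled = np.float16(tiny_grad * scale)  # ~1.02e-5: representable in FP16
recovered = np.float32(scaled) / scale  # unscale in FP32 before the update
assert recovered > 0.0                  # the gradient signal is preserved
```

In practice, frameworks adjust the scale factor dynamically: it is increased while gradients stay finite and reduced when an overflow (inf/NaN) is detected.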

Why It Matters

Mixed precision training is a key reason modern AI models can be trained at practical cost: without it, the memory and compute requirements of frontier-scale training would be prohibitive. It is a foundational efficiency technique behind every major AI model.

Related Terms

GPU Compute

Using graphics processing units for parallel mathematical operations that power AI training and inference.

Data Parallelism

Distributing training data across multiple GPUs that each hold a copy of the model, then synchronizing gradients.

Gradient Descent

The core optimization algorithm that adjusts neural network weights by following the slope of the loss function downward.

Backpropagation

The algorithm that computes how much each weight contributed to the error, enabling gradient descent to update them.

Common Questions

What does Mixed Precision Training mean in simple terms?

Training neural networks using a mix of 16-bit and 32-bit floating-point numbers to save memory and increase speed.

Why is Mixed Precision Training important for AI users?

Mixed precision training is a key reason modern AI models can be trained at practical cost: without it, the memory and compute requirements of frontier-scale training would be prohibitive. It is a foundational efficiency technique behind every major AI model.

How does Mixed Precision Training relate to AI chatbots like ChatGPT?

Mixed precision training is a fundamental concept in how AI assistants like ChatGPT, Claude, and Gemini work. For example, a model can be trained with BF16 forward and backward passes but FP32 optimizer states. Understanding this helps you use AI tools more effectively.

Related Use Cases

Best AI for Coding

Best AI for Writing

AI Models Using This Concept

Claude, ChatGPT, Gemini

See Mixed Precision Training in Action

Council lets you compare responses from multiple AI models side-by-side. Experience different approaches to the same prompt instantly.

Browse AI Glossary

Large Language Model (LLM), Prompt Engineering, AI Hallucination, Context Window, Token (AI), RAG (Retrieval-Augmented Generation), Fine-Tuning, Temperature (AI), Multimodal AI, AI Agent