
What is LoRA (Low-Rank Adaptation)?

A parameter-efficient fine-tuning method that trains small adapter matrices instead of modifying the full model.

By Council Research Team · Updated: Jan 27, 2026

Definition

LoRA (Low-Rank Adaptation) is a technique for fine-tuning large language models efficiently by freezing the original model weights and injecting small trainable matrices into each transformer layer. Instead of updating billions of parameters, LoRA trains only millions of additional parameters (typically 0.1-1% of the original model size) that represent low-rank decompositions of the weight updates. This dramatically reduces memory requirements and training time while achieving performance comparable to full fine-tuning. LoRA adapters can be swapped, merged, or stacked, enabling flexible model customization.
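The idea can be sketched in a few lines: the frozen weight matrix W is augmented by a trainable low-rank product B·A, scaled by alpha/r. This is a minimal NumPy illustration (dimensions, rank, and scaling are example values, not a real model config):

```python
import numpy as np

# Minimal LoRA sketch: frozen weight W plus a trainable low-rank
# update B @ A, scaled by alpha / r (values here are illustrative).
d_out, d_in, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, rank r
B = np.zeros((d_out, r))                   # trainable, zero-initialized

def lora_forward(x):
    # Output = W x + (alpha / r) * B A x. Because B starts at zero,
    # the adapted layer initially matches the frozen base layer exactly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identity at initialization

# Merging: fold the adapter into W for zero inference overhead.
W_merged = W + (alpha / r) * (B @ A)
```

Only A and B are updated during training; merging B·A back into W is what makes adapters swappable with no extra inference cost.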

Examples

1. Fine-tuning a 70B model on a single GPU by training only 50M LoRA parameters
2. Creating a medical-specialized adapter that can be applied to any base model
3. Stacking a coding LoRA and a writing LoRA for combined capabilities
4. QLoRA combining 4-bit quantization with LoRA for even more memory efficiency
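The arithmetic behind the first example is easy to check. A LoRA adapter on one weight matrix of shape (d_out, d_in) adds r·(d_in + d_out) parameters; summed over the adapted matrices in every layer, that stays a tiny fraction of the base model. The hidden size, rank, and layer counts below are assumed round numbers for illustration, not an exact 70B config:

```python
# Rough parameter arithmetic for a 70B-scale model (illustrative
# numbers). One square projection matrix of size d_model x d_model
# with a rank-r adapter adds r * (d_model + d_model) parameters.
d_model, r = 8192, 16                  # assumed hidden size and LoRA rank
per_matrix = r * (d_model + d_model)   # params added per adapted matrix
n_layers, matrices_per_layer = 80, 4   # e.g. q, k, v, o projections

lora_params = n_layers * matrices_per_layer * per_matrix
base_params = 70e9

print(f"LoRA params: {lora_params / 1e6:.0f}M")            # tens of millions
print(f"Fraction of base: {lora_params / base_params:.2%}")  # well under 1%
```

With these assumed values the adapter lands in the tens of millions of parameters, roughly 0.1% of the base model, consistent with the 0.1-1% range cited above.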

Why It Matters

LoRA makes AI customization accessible to individuals and small teams. It is the reason the open-source AI community can create thousands of specialized models without needing massive GPU clusters.

Related Terms

Prefix Tuning

A fine-tuning method that prepends learnable virtual tokens to the input without modifying model weights.

Instruction Tuning

Fine-tuning a language model on instruction-response pairs so it follows human directions reliably.

Model Merging

Combining weights from multiple fine-tuned models into a single model that inherits capabilities from each.

Mixed Precision Training

Training neural networks using a mix of 16-bit and 32-bit floating-point numbers to save memory and increase speed.

Common Questions

What does LoRA (Low-Rank Adaptation) mean in simple terms?

A parameter-efficient fine-tuning method that trains small adapter matrices instead of modifying the full model.

Why is LoRA (Low-Rank Adaptation) important for AI users?

LoRA makes AI customization accessible to individuals and small teams. It is the reason the open-source AI community can create thousands of specialized models without needing massive GPU clusters.

How does LoRA (Low-Rank Adaptation) relate to AI chatbots like ChatGPT?

LoRA (Low-Rank Adaptation) is a fundamental concept behind how AI assistants like ChatGPT, Claude, and Gemini are customized. For example, a 70B model can be fine-tuned on a single GPU by training only about 50M LoRA parameters. Understanding this helps you use AI tools more effectively.

Related Use Cases

Best AI for Coding

Best AI for Writing

AI Models Using This Concept

Claude · ChatGPT · Gemini

See LoRA (Low-Rank Adaptation) in Action

Council lets you compare responses from multiple AI models side-by-side. Experience different approaches to the same prompt instantly.

Browse AI Glossary

Large Language Model (LLM) · Prompt Engineering · AI Hallucination · Context Window · Token (AI) · RAG (Retrieval-Augmented Generation) · Fine-Tuning · Temperature (AI) · Multimodal AI · AI Agent