Domain adaptation bridges the gap between training data and real-world deployment data when the two come from different distributions. A model trained on daytime driving images will struggle at night. A detector trained on studio product photos will underperform on factory-floor images with different lighting and backgrounds. This distribution shift, known as the domain gap, is one of the most common reasons vision models fail in production.
Adaptation techniques operate at different levels of the pipeline. Feature alignment methods (DANN, CORAL, MMD) train the feature extractor to produce representations that look similar regardless of whether the input comes from the source or target domain, typically using an adversarial discriminator or statistical matching of feature distributions. Image-level adaptation uses style transfer or CycleGAN to translate source images into the visual style of the target domain before training. Self-training approaches generate pseudo-labels on unlabeled target data and iteratively refine the model on them. Test-time adaptation (Tent, MEMO) adjusts batch normalization statistics or model parameters on the fly during inference.
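As a concrete illustration of statistical matching, the CORAL objective penalizes the distance between the second-order statistics (feature covariances) of the two domains. Below is a minimal NumPy sketch of that loss; the function name and the synthetic data are illustrative, not taken from any particular library:

```python
import numpy as np

def coral_loss(source, target):
    """CORAL loss: normalized squared Frobenius distance between the
    feature covariance matrices of the source and target domains.

    source, target: (n_samples, d) feature matrices from the two domains.
    """
    d = source.shape[1]
    # Covariance of each domain's features (d x d matrices).
    c_s = np.cov(source, rowvar=False)
    c_t = np.cov(target, rowvar=False)
    # Squared Frobenius norm of the difference, normalized by 4 d^2.
    return np.sum((c_s - c_t) ** 2) / (4 * d * d)

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 8))
tgt = rng.normal(loc=0.5, scale=2.0, size=(200, 8))
print(coral_loss(src, src))  # 0.0 — identical distributions
print(coral_loss(src, tgt))  # larger — mismatched covariances
```

In a deep-learning setup this loss would be computed on a mini-batch of intermediate features and added to the task loss, so the extractor learns to align the two domains while still solving the original task.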
The related problem of domain generalization aims to train models that hold up on unseen domains without any target data at all, using techniques like domain randomization and multi-source training. In practice, though, even small amounts of target-domain data combined with adaptation can dramatically close the performance gap.
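The self-training approach mentioned earlier can be sketched as a confidence-thresholded pseudo-labeling loop. The toy version below uses a nearest-centroid classifier on synthetic shifted data; the function name, the confidence score, and the threshold are all illustrative choices, not a standard recipe:

```python
import numpy as np

def self_train(source_x, source_y, target_x, rounds=3, threshold=0.8):
    """Self-training sketch: pseudo-label confident target samples, then
    refit a nearest-centroid classifier on source + pseudo-labeled data."""
    classes = np.unique(source_y)
    x, y = source_x, source_y
    for _ in range(rounds):
        centroids = np.stack([x[y == c].mean(axis=0) for c in classes])
        # Distance of each target sample to each class centroid.
        dists = np.linalg.norm(target_x[:, None, :] - centroids[None], axis=2)
        # Softmax over negative distances as a crude confidence score.
        probs = np.exp(-dists) / np.exp(-dists).sum(axis=1, keepdims=True)
        conf = probs.max(axis=1)
        pseudo = classes[probs.argmax(axis=1)]
        keep = conf >= threshold
        # Refit on source data plus confidently pseudo-labeled target data.
        x = np.concatenate([source_x, target_x[keep]])
        y = np.concatenate([source_y, pseudo[keep]])
    return np.stack([x[y == c].mean(axis=0) for c in classes])

rng = np.random.default_rng(1)
src_x = np.concatenate([rng.normal([0, 0], 0.3, (100, 2)),
                        rng.normal([4, 0], 0.3, (100, 2))])
src_y = np.array([0] * 100 + [1] * 100)
tgt_x = src_x + np.array([0.0, 2.0])  # target domain: shifted features
adapted = self_train(src_x, src_y, tgt_x)
```

Each round, the class centroids drift toward the (unlabeled) target distribution, which is exactly the iterative refinement the text describes; real systems replace the centroid model with a deep network and the softmax-over-distances heuristic with the network's own predicted probabilities.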

