Research Project

Drift-Aware Adaptive Aggregation for Federated Learning

A federated learning project that studies client update drift under heterogeneous data and introduces an adaptive aggregation rule that reweights client contributions according to their deviation from the consensus update direction.

Research Theme

Federated Optimization

Studies how heterogeneous client data affects stability, aggregation quality, and the behavior of decentralized training.

Core Insight

Drift Reveals Instability

Divergence between client updates and the mean update direction provides a useful signal for understanding training instability under non-IID conditions.

Method Contribution

DAA

A lightweight server-side weighting rule that adaptively reduces the influence of highly drifted client updates during aggregation.

Overview

This project examines a central challenge in federated learning: how to aggregate client updates robustly when client data are strongly heterogeneous. Standard methods such as Federated Averaging treat client updates uniformly, but uniform weighting can become unstable when local update directions diverge significantly.
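For reference, the Federated Averaging baseline can be stated compactly. The notation below is the standard form of the rule and is not specific to this project:

```latex
% Federated Averaging: the server forms a data-size-weighted average of
% the K participating clients' locally trained models.
% n_k = number of samples on client k,  n = \sum_k n_k.
w_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n} \, w_{t+1}^{k}
```

Every client enters this average with a fixed weight determined only by its data size, regardless of where its update points.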

Problem Motivation

In realistic federated settings, clients often have different data distributions shaped by local usage patterns, context, or environment. Under these conditions, local updates may point in conflicting directions, and uniform aggregation can amplify instability instead of reducing it.

This project asks whether the deviation of individual client updates from the consensus direction can be used as a reliable signal for more informed server-side aggregation.

Technical Idea

The project defines client update drift as the distance between a client’s local update and the mean update across participating clients. This produces an interpretable geometry-based signal of how far each update deviates from the consensus direction in a given communication round.
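Written out, with notation introduced here for illustration (the summary above does not fix symbols, and the Euclidean norm is an assumed choice):

```latex
% Mean (consensus) update over the K clients participating in round t:
\bar{\Delta}_t = \frac{1}{K} \sum_{k=1}^{K} \Delta_t^{k}

% Drift of client k: its distance from the consensus direction.
% The Euclidean norm is an illustrative choice here.
d_t^{k} = \left\lVert \Delta_t^{k} - \bar{\Delta}_t \right\rVert_2
```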

Building on this idea, the work introduces Drift-Aware Adaptive Aggregation (DAA), a lightweight aggregation rule that uses drift-sensitive weights to attenuate highly divergent updates while preserving the standard federated training pipeline.
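A minimal NumPy sketch of what such a rule could look like. The exponential down-weighting and the temperature parameter tau are illustrative assumptions rather than the project's published rule; the point is only that weights fall with drift while the rest of the pipeline stays unchanged:

```python
import numpy as np

def drift_aware_aggregate(updates, tau=1.0):
    """Aggregate client updates, attenuating high-drift clients.

    updates : list of 1-D arrays, one flattened update per client.
    tau     : temperature controlling how sharply drifted updates
              are down-weighted (illustrative hyperparameter).
    """
    U = np.stack(updates)                             # shape (K, d)
    mean_update = U.mean(axis=0)                      # consensus direction
    drifts = np.linalg.norm(U - mean_update, axis=1)  # per-client drift

    # Drift-sensitive weights: clients far from the consensus contribute
    # less. exp(-d / tau) is one simple monotone choice (assumption).
    weights = np.exp(-drifts / tau)
    weights /= weights.sum()                          # convex combination

    return weights @ U                                # aggregated update, shape (d,)

# Example: three clients, one a clear outlier.
rng = np.random.default_rng(0)
consensus = rng.normal(size=10)
clients = [consensus + 0.01 * rng.normal(size=10) for _ in range(2)]
clients.append(consensus + 5.0 * rng.normal(size=10))  # heavily drifted client
agg = drift_aware_aggregate(clients, tau=0.5)
```

Normalizing the weights keeps the aggregate a convex combination of client updates, so when all drifts are similar the rule reduces to plain uniform averaging.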

What This Project Shows

  • Update drift grows substantially as client heterogeneity becomes stronger.
  • Drift provides an interpretable signal of instability during federated training.
  • Adaptive weighting based on drift exposes useful structure in client updates that uniform averaging ignores.
  • Simple geometry-aware aggregation rules can make federated optimization behavior easier to analyze and reason about.

Why It Matters

Robust federated learning depends on more than privacy and communication efficiency. It also depends on the server's ability to recognize when client updates are structurally inconsistent.

This project shows that even a simple notion of drift can reveal meaningful information about the dynamics of decentralized optimization, offering a useful direction for designing more reliable aggregation strategies.

Research Value

This work reflects my interest in identifying concrete failure modes in federated learning and translating them into simple, interpretable, and technically grounded methods. It combines optimization thinking, systems awareness, and empirical analysis in a way that supports both practical relevance and research clarity.