Research Project
Client-Specific Personalization Depth in Federated Learning
A personalized federated learning study asking how much of a shared model each client should adapt after federated training, and whether personalization depth should be treated as a client-specific decision rather than a fixed global policy.
Research Theme
Personalized Federated Learning
Studies how clients should adapt a shared federated model under heterogeneous local data distributions.
Core Question
How Much Should Each Client Personalize?
Treats adaptation depth as a client-specific decision rather than assuming one fixed personalization policy works for everyone.
Evaluation Focus
Oracle Routing Headroom
Quantifies unrealized performance from fixed policies by comparing client-wise oracle routing against standard adaptation choices.
Overview
This project studies personalization depth in federated learning: after a shared model has been trained, how much of the model should each client adapt locally? Instead of assuming that every client should use the same post-hoc adaptation policy, the work treats personalization depth as a client-dependent decision problem.
Problem Motivation
Personalized federated learning is motivated by the fact that clients often have different local distributions. Yet many personalization workflows still apply the same adaptation strategy to all clients, such as fine-tuning only the head, partially adapting the network, or fully fine-tuning the local model.
This project asks whether that assumption is too restrictive: within the same federated run, different clients may benefit from different levels of personalization depth.
Technical Idea
The study compares multiple post-hoc personalization policies, including head-only fine-tuning, partial fine-tuning, and full fine-tuning. It evaluates these choices across five benchmark datasets and multiple Dirichlet non-IID regimes to understand how personalization depth interacts with client heterogeneity.
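The three depth policies can be read as a choice of which parameter groups a client unfreezes after global training. A minimal sketch (the layer names are hypothetical; the repository's actual architectures may differ):

```python
# Sketch of the three post-hoc personalization depth policies as a
# choice of which layers a client fine-tunes. Layer names are
# hypothetical placeholders, not the repository's real modules.
LAYERS = ["conv1", "conv2", "conv3", "fc1", "head"]

def trainable_layers(policy: str) -> list[str]:
    """Return the layers a client adapts locally under a given policy."""
    if policy == "head_only":   # fine-tune only the classifier head
        return LAYERS[-1:]
    if policy == "partial":     # fine-tune the last few layers
        return LAYERS[-2:]
    if policy == "full":        # fine-tune the entire local model
        return list(LAYERS)
    raise ValueError(f"unknown policy: {policy}")

for p in ("head_only", "partial", "full"):
    print(p, "->", trainable_layers(p))
```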
The analysis then measures oracle client-wise routing headroom: how much performance is left on the table when all clients are forced to use the strongest fixed personalization policy instead of selecting the best depth per client.
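The headroom measurement itself reduces to a small computation over a policies-by-clients accuracy table. A sketch with fabricated accuracy numbers, purely for illustration:

```python
# Oracle routing headroom: compare the best single fixed policy
# against an oracle that picks the best policy per client.
# acc[policy][client] -- fabricated post-personalization accuracies.
acc = {
    "head_only": [0.81, 0.74, 0.90, 0.62],
    "partial":   [0.79, 0.80, 0.88, 0.70],
    "full":      [0.75, 0.83, 0.85, 0.78],
}
n_clients = 4

# Best fixed policy: one adaptation depth applied to every client.
fixed = {p: sum(a) / n_clients for p, a in acc.items()}
best_fixed_policy = max(fixed, key=fixed.get)

# Oracle routing: each client independently takes its best policy.
oracle = sum(max(acc[p][c] for p in acc) for c in range(n_clients)) / n_clients

# Headroom: performance left on the table by the strongest fixed policy.
headroom = oracle - fixed[best_fixed_policy]
print(f"best fixed: {best_fixed_policy} ({fixed[best_fixed_policy]:.4f})")
print(f"oracle:     {oracle:.4f}  headroom: {headroom:.4f}")
```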
What This Project Shows
- No single personalization depth is optimal across all clients; different clients can prefer different adaptation depths.
- Fixed personalization policies can leave measurable performance unrealized under heterogeneous federated settings.
- Oracle routing provides a useful upper bound for understanding the value of client-specific adaptation decisions.
- Building lightweight automatic selectors is nontrivial: recovering oracle-level gains requires more than a simple heuristic.
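To make the last point concrete, here is a hypothetical example of the kind of lightweight heuristic selector that, per the finding above, falls short of oracle routing. The thresholds and policy names are illustrative assumptions, not the project's method:

```python
# Hypothetical lightweight selector: choose a client's personalization
# depth from its local sample count alone. Illustrative only -- the
# project's finding is that such simple heuristics do not recover
# oracle-level gains.

def select_depth(n_local_samples: int) -> str:
    """Map a client's local dataset size to an adaptation depth."""
    if n_local_samples < 100:    # little data: adapt only the head
        return "head_only"
    if n_local_samples < 1000:   # moderate data: partial fine-tuning
        return "partial"
    return "full"                # ample data: full fine-tuning

print([select_depth(n) for n in (50, 500, 5000)])
```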
Experimental Scope
The repository includes implementation code, configurations, stored results, figures, and analysis artifacts. Experiments cover CIFAR-10, CIFAR-100, SVHN, FashionMNIST, and EMNIST-Balanced under multiple Dirichlet non-IID client partitions and random seeds.
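A Dirichlet non-IID partition of the kind these experiments use can be sketched with the common recipe of drawing per-class client proportions from Dir(alpha). This is an assumption about the setup (the repository's exact partitioning may differ), and the helper names are illustrative:

```python
# Sketch of a Dirichlet non-IID label partition (standard recipe,
# assumed here; smaller alpha means more skewed client distributions).
import random

def dirichlet(alpha: float, n: int, rng: random.Random) -> list[float]:
    """Sample one symmetric Dirichlet(alpha) vector via Gamma draws."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(n)]
    s = sum(g)
    return [x / s for x in g]

def partition_labels(labels, n_clients, alpha, seed=0):
    """Assign sample indices to clients, splitting each class
    according to a fresh Dirichlet draw over clients."""
    rng = random.Random(seed)
    clients = [[] for _ in range(n_clients)]
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    for idxs in by_class.values():
        rng.shuffle(idxs)
        props = dirichlet(alpha, n_clients, rng)
        start, cum = 0, 0.0
        for c, p in enumerate(props):
            cum += p
            end = len(idxs) if c == n_clients - 1 else int(cum * len(idxs))
            clients[c].extend(idxs[start:end])
            start = end
    return clients

labels = [i % 10 for i in range(1000)]   # toy 10-class label stream
parts = partition_labels(labels, n_clients=5, alpha=0.3)
print([len(p) for p in parts])           # uneven sizes under low alpha
```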
Why It Matters
This project reframes personalization in federated learning as a finer-grained client-level decision problem. Instead of asking only whether personalization helps, it asks which type of personalization helps each client.
That perspective is important for real federated systems, where clients differ not only in their data distributions but also in how much local adaptation they need after global training.
Research Value
This work strengthens my broader research direction around federated learning under heterogeneity. It shows an ability to identify a precise research gap, formulate a measurable decision problem, design systematic experiments, and package the work in a reproducible research repository.