Dr Victor Obarafor
Privacy-Preserving Machine Learning Researcher
Research Focus
Research spanning federated learning, decentralized optimization, privacy-aware training, and robust learning under client data heterogeneity.
Method Design
Designing adaptive aggregation mechanisms that use update structure, drift, and geometry signals to improve robustness in challenging federated settings.
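As a toy illustration of this idea (the weighting rule below is hypothetical, not the actual method): an aggregator can measure each client's drift from the consensus update direction and down-weight outliers before averaging.

```python
import numpy as np

def drift_aware_aggregate(client_updates, temperature=1.0):
    """Aggregate client updates, down-weighting high-drift clients.

    client_updates: list of 1-D numpy arrays (flattened model deltas).
    Drift is measured as each client's distance from the unweighted mean
    update; weights are a softmax over negative drift (illustrative rule).
    """
    updates = np.stack(client_updates)           # (n_clients, dim)
    mean_update = updates.mean(axis=0)
    drift = np.linalg.norm(updates - mean_update, axis=1)
    weights = np.exp(-drift / temperature)
    weights /= weights.sum()
    return weights @ updates                     # drift-weighted average

# Example: two roughly aligned clients and one outlier.
# The outlier dominates a plain mean but gets a small weight here.
agg = drift_aware_aggregate([
    np.array([1.0, 0.0]),
    np.array([0.9, 0.1]),
    np.array([-5.0, 4.0]),
])
```

The design choice this sketch highlights is that robustness comes from the weighting signal (here geometric drift), not from discarding clients outright, so well-behaved heterogeneous clients still contribute.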
Engineering Strength
Building clean, modular, and publication-grade machine learning infrastructure for rigorous experimentation, benchmarking, and technical delivery.
My work sits at the intersection of federated learning, privacy-preserving machine learning, robust optimization, and reproducible research engineering. I’m particularly interested in designing methods and systems that remain stable, interpretable, and technically rigorous in real-world decentralized settings.
Selected Work
Investigating low-rank adaptation geometry and instability signals in heterogeneous federated learning.
Studying aggregation strategies that respond dynamically to client drift and instability.
Analyzing client-specific personalization depth and oracle routing headroom in heterogeneous federated environments.
Engineering publication-grade ML systems for scalable experimentation and research reproducibility.
Publications & Output
Investigating whether early-round training dynamics can predict final convergence behavior and instability in heterogeneous federated learning environments.
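A minimal sketch of one such early-round signal (the coefficient-of-variation statistic here is an illustrative proxy, not the actual predictor studied):

```python
import numpy as np

def early_round_instability(round_updates):
    """Summarize early-round dynamics as a scalar instability signal.

    round_updates: list over rounds, each a list of per-client flattened
    update vectors (numpy arrays). The signal is the mean, across rounds,
    of the coefficient of variation of client update norms -- a simple
    proxy for cross-client disagreement under heterogeneity.
    """
    scores = []
    for updates in round_updates:
        norms = np.array([np.linalg.norm(u) for u in updates])
        scores.append(norms.std() / (norms.mean() + 1e-12))
    return float(np.mean(scores))

# Homogeneous clients yield a low signal; heterogeneous clients a higher one.
rng = np.random.default_rng(0)
homog = [[rng.normal(0, 1, 10) for _ in range(5)] for _ in range(3)]
heterog = [[rng.normal(0, s, 10) for s in (0.1, 0.1, 5.0, 5.0, 10.0)]
           for _ in range(3)]
```

A signal of this shape can be computed after only a few rounds, which is what makes it a candidate predictor of final convergence behavior.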
Studying personalization depth as a client-level decision problem and evaluating oracle routing headroom under heterogeneous federated distributions.
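A small sketch of what oracle routing headroom can mean operationally (the loss matrix and formulation below are synthetic and illustrative):

```python
import numpy as np

def oracle_routing_headroom(losses, default_depth=0):
    """Gap between a fixed personalization depth and an oracle router.

    losses: array of shape (n_clients, n_depths), where losses[i, d] is
    client i's evaluation loss when personalized to depth d. The oracle
    picks the best depth per client; headroom is the average loss
    reduction it achieves over giving every client the same default depth.
    """
    losses = np.asarray(losses, dtype=float)
    fixed = losses[:, default_depth].mean()
    oracle = losses.min(axis=1).mean()
    return fixed - oracle

# Synthetic example: clients 0 and 2 prefer deeper personalization,
# client 1 is best served by the shallow default.
losses = [[0.9, 0.5, 0.4],
          [0.3, 0.6, 0.7],
          [0.8, 0.7, 0.2]]
headroom = oracle_routing_headroom(losses, default_depth=0)
```

The headroom quantifies how much a perfect per-client routing policy could gain; a learned router can then be benchmarked against that upper bound.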
Active work includes aggregation geometry, federated LoRA, adaptive optimization, personalization strategies, and publication-grade ML experimentation systems.