Hadrien Hendrikx
Briefly
I am a researcher (Chargé de Recherche) in the Thoth team at Inria Grenoble, a French public research institute. From October 2021 to December 2022, I was a post-doc in the MLO team at EPFL, working with Martin Jaggi. Before that (2018-2021), I was a Ph.D. student in the SIERRA and DYOGENE (now ARGO) teams at Inria Paris, which are also part of the Computer Science Department of Ecole Normale Supérieure, and I was part of the MSR-INRIA joint centre. I worked on decentralized optimization under the supervision of Francis Bach and Laurent Massoulié.
Prior to that, I graduated from Ecole Polytechnique in 2016 and obtained a master's degree in Computer Science (Master en Informatique) from EPFL in 2018. During my master's, I had the chance to work as a Research Assistant in the DCL lab under the supervision of Rachid Guerraoui, in close collaboration with Aurélien Bellet.
Contact
Physical address: Inria, 655 Av. de l'Europe, 38330 Montbonnot-Saint-Martin
E-mail: hadrien [dot] hendrikx [at] inria [dot] fr
Openings
I am currently looking for a PhD student (an internship beforehand is also possible) to work on distributing dimensionality reduction methods to accelerate large-scale physics simulations, together with Thomas Moreau.
Please reach out by email if you're interested in collaboration!
Research interests
I am broadly interested in optimization for machine learning, regardless of the flavor: stochastic, accelerated, non-Euclidean… My PhD mainly focused on decentralized methods for distributed optimization, and in particular on how to efficiently leverage acceleration and variance reduction in a decentralized setting. My work relies on principled reformulation-based approaches that obtain decentralized algorithms (with guarantees) by applying standard (single-machine) optimization theory to well-chosen problems. Efficient algorithms can then be derived by going back and forth between the reformulations and the optimization tools.
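To make the decentralized setting concrete, here is a minimal, self-contained sketch (not taken from any of the papers below; the ring network, mixing matrix, step size and iteration count are arbitrary illustrative choices): each agent holds local least-squares data, takes a local gradient step, and then gossip-averages its iterate with its neighbors.

```python
import numpy as np

# Toy decentralized least-squares: n agents, each holding local data (A_i, b_i),
# approximately minimize sum_i ||A_i x - b_i||^2 without any central coordinator.
# With a fixed step size, plain decentralized gradient descent only reaches a
# neighborhood of the optimum; accelerated / variance-reduced variants improve on this.
rng = np.random.default_rng(0)
n_agents, dim = 8, 5
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(20) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a ring graph: each agent averages
# its iterate with its two neighbors at every communication round.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))   # one local iterate per agent
step = 5e-3
for _ in range(500):
    grads = np.stack([2 * A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)])
    x = W @ (x - step * grads)  # local gradient step followed by gossip averaging

print(f"consensus gap after 500 rounds: {np.linalg.norm(x - x.mean(axis=0)):.2e}")
```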
I am more generally open to any problem related to making many entities work together efficiently, potentially without a central authority and hopefully with some guarantees for the participants. This leads me to read about differential privacy in machine learning and about reinforcement learning theory.
I am currently interested in projects at the interface between machine learning (and digital tools more generally) and agriculture, and I hope to add more content on this topic to this page in the near future. Please contact me for discussions and potential collaborations!
Teaching
2018 - 2019: Teaching assistant, Advanced Algorithms (L3 Informatique) and Logic (L1 Informatique), University Paris Descartes
2019 - 2020: Teaching assistant, Advanced Algorithms (L3 Informatique), University Paris Descartes
2023 - 2024: Teacher, Generalization Properties of Machine Learning Algorithms, M2 Mathématiques de l'aléatoire, Orsay.
2023 - 2024: Teacher, Numerical Optimization, M1 Applied Mathematics, Université Grenoble Alpes
2024 - 2025: Teacher, Generalization Properties of Machine Learning Algorithms, M2 Mathématiques de l'aléatoire, Orsay.
2024 - 2025: Teacher, Numerical Optimization, M1 Applied Mathematics, Université Grenoble Alpes
Reviewing
Conferences: ICML 2019 (Top 5%), NeurIPS 2019 (Top 400), ICML 2020 (Top 33%), NeurIPS 2020 (Top 10%), AISTATS 2021, ICML 2021, NeurIPS 2021, AISTATS 2022, ICML 2022 (Top 10%), NeurIPS 2022, AISTATS 2023 (Top 10%), ICML 2023, AISTATS 2024, ICML 2024, NeurIPS 2024.
Journals: Mathematical Programming, IEEE Transactions on Signal Processing, Automatica, SIOPT, JMLR
Supervision
Current:
Former:
Mohamed Bacar Abdoulandhum: Master Student, co-supervised with Léa Lugassy (PADV)
Daniel Morales-Brotons: full-time Master intern.
Abdellah El Mrini: Master Student at Ecole Polytechnique (Paris)
Rustem Islamov: Master Student at Ecole Polytechnique (Paris)
Mathieu Even: PhD student at INRIA Paris with Laurent Massoulié
Publications and preprints
H. Hendrikx, Investigating Variance Definitions for Stochastic Mirror Descent with Relative Smoothness. [arXiv:2404.12213], arXiv preprint, 2024.
R. Gaucher, A. Dieuleveut, H. Hendrikx, Byzantine-Robust Gossip: Insights from a Dual Approach. [arXiv:2405.03449], arXiv preprint, 2024.
D. Morales-Brotons, T. Vogels, H. Hendrikx, Exponential Moving Average of Weights in Deep Learning: Dynamics and Benefits, [arXiv:], Transactions on Machine Learning Research (TMLR), 2024.
H. Hendrikx, P. Mangold, A. Bellet. The Relative Gaussian Mechanism and its Application to Private Gradient Descent. [arXiv:2308.15250], International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
A. Koloskova*, H. Hendrikx*, S. Stich. Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees. [arXiv:2305.01588], International Conference on Machine Learning (ICML), 2023.
T. Vogels*, H. Hendrikx*, M. Jaggi. Beyond spectral gap (extended): the role of the topology in decentralized learning. [arXiv:2301.02151], Journal of Machine Learning Research (JMLR), 2023.
H. Hendrikx. A principled framework for the design and analysis of token algorithms. [arXiv:2205.15015, video], International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
T. Vogels*, H. Hendrikx*, M. Jaggi. Beyond spectral gap: the role of the topology in decentralized learning. [arXiv:2206.03093, video], Advances in Neural Information Processing Systems (NeurIPS), 2022.
H. Hendrikx. Accelerated Methods for Distributed Optimization. [HAL link], PhD Thesis, PSL Research University, 2021.
M. Even, H. Hendrikx, L. Massoulié. Decentralized Optimization with Heterogeneous Delays: a Continuous-Time Approach. [arXiv:2106.03585], arXiv preprint, 2021.
M. Even, R. Berthier, F. Bach, N. Flammarion, P. Gaillard, H. Hendrikx, L. Massoulié, A. Taylor. A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip. [arXiv:2106.07644, video], Advances in Neural Information Processing Systems (NeurIPS), 2021. Oral Presentation. Outstanding paper award.
H. Hendrikx, F. Bach, L. Massoulié. An Optimal Algorithm for Decentralized Finite Sum Optimization. [arXiv:2005.10675], SIAM Journal on Optimization, 2021.
R. Dragomir*, M. Even*, H. Hendrikx*. Fast Stochastic Bregman Gradient Methods: Sharp Analysis and Variance Reduction. [arXiv:2104.09813], International Conference on Machine Learning (ICML), 2021.
H. Hendrikx, F. Bach, L. Massoulié. Dual-Free Stochastic Decentralized Optimization with Variance Reduction. [arXiv:2006.14384], Advances in Neural Information Processing Systems (NeurIPS), 2020.
H. Hendrikx, L. Xiao, S. Bubeck, F. Bach, L. Massoulié. Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization. [arXiv:2002.10726, video], International Conference on Machine Learning (ICML), 2020.
A. Bellet, R. Guerraoui, H. Hendrikx. Who started this rumor? Quantifying the natural differential privacy guarantees of gossip protocols. [arXiv:1902.07138], International Symposium on DIStributed Computing (DISC), 2020.
H. Hendrikx, F. Bach, L. Massoulié. An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums. [arXiv:1905.11394], Advances in Neural Information Processing Systems (NeurIPS), 2019.
H. Hendrikx, F. Bach, L. Massoulié. Accelerated Decentralized Optimization with Local Updates for Smooth and Strongly Convex Objectives. [arXiv:1810.02660], International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
E. M. El Mhamdi, R. Guerraoui, H. Hendrikx, A. Maurer. Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning. [arXiv:1704.02882], Advances in Neural Information Processing Systems (NIPS), 2017. Spotlight presentation.
H. Hendrikx, M. Nuñez del Prado Cortez. Towards a route detection method based on detail call records. [IEEE Xplore 7885725], IEEE Latin American Conference on Computational Intelligence (LA-CCI), pp. 1-6, 2016.
* denotes equal contribution