Hadrien Hendrikx

Briefly

Since September 2018, I have been a Ph.D. student in the SIERRA and DYOGENE teams, which are part of the Computer Science Department of Ecole Normale Supérieure and are joint teams between CNRS and INRIA. I am also a member of the MSR-INRIA joint centre. I work on decentralized optimization under the supervision of Francis Bach and Laurent Massoulié.

Prior to that, I graduated from Ecole Polytechnique in 2016 and received a master's degree in Computer Science (Master en Informatique) from EPFL in 2018. During my master's, I had the chance to work as a Research Assistant in the DCL lab, under the supervision of Rachid Guerraoui and in close collaboration with Aurélien Bellet.

Research interests

I am broadly interested in optimization for machine learning, whatever the flavor: stochastic, accelerated, non-Euclidean… My Ph.D. mainly focuses on decentralized methods for distributed optimization, and in particular on how to efficiently leverage acceleration and variance reduction in a decentralized setting.

More generally, I am open to any problem related to making many entities work together efficiently, potentially without a central authority, and hopefully with some guarantees for the participants. This leads me to read about differential privacy in machine learning and about reinforcement learning theory.
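To give a concrete flavor of the decentralized setting, here is a minimal, purely illustrative Python sketch of gossip averaging, the communication primitive underlying many decentralized methods (it is not code from any of the papers below; all names and parameters are illustrative assumptions): each node repeatedly averages its local value with those of its neighbors, and, on a connected graph, every local value converges to the global mean without any central coordinator.

    import numpy as np

    def gossip_averaging(values, adjacency, n_steps=50):
        # values: (n,) array of local values, one per node (illustrative).
        # adjacency: (n, n) symmetric 0/1 matrix of the communication graph.
        # Build a symmetric, doubly stochastic mixing matrix W with uniform
        # weights (one standard choice among many).
        degrees = adjacency.sum(axis=1)
        W = adjacency / (degrees.max() + 1.0)
        W += np.diag(1.0 - W.sum(axis=1))  # self-weights: rows sum to 1
        x = np.asarray(values, dtype=float)
        for _ in range(n_steps):
            x = W @ x  # one round: every node mixes with its neighbors
        return x

    # Example: a 5-node ring; all local values converge to the mean, 3.0.
    ring = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
    print(gossip_averaging(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), ring))

Decentralized optimization methods roughly interleave such mixing rounds with local computation steps; techniques such as acceleration and variance reduction then aim at reducing how many of each are needed.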

Teaching

  • 2018 - 2019: Teaching assistant, Advanced Algorithms (L3 Informatique) and Logic (L1 Informatique), University Paris Descartes

  • 2019 - 2020: Teaching assistant, Advanced Algorithms (L3 Informatique), University Paris Descartes

Reviewing

  • Conferences: ICML 2019 (Top 5%), NeurIPS 2019 (Top 400), ICML 2020 (Top 33%), NeurIPS 2020

  • Journals: Mathematical Programming, IEEE Transactions on Signal Processing, Automatica, SIAM Journal on Optimization (SIOPT)

Publications and preprints

  • H. Hendrikx, F. Bach, L. Massoulié. Dual-Free Stochastic Decentralized Optimization with Variance Reduction.
    [arXiv:2006.14384], arXiv preprint, 2020.

  • H. Hendrikx, F. Bach, L. Massoulié. An Optimal Algorithm for Decentralized Finite Sum Optimization.
    [arXiv:2005.10675], arXiv preprint, 2020.

  • H. Hendrikx, L. Xiao, S. Bubeck, F. Bach, L. Massoulié. Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization.
    [arXiv:2002.10726], International Conference on Machine Learning (ICML), 2020.

  • A. Bellet, R. Guerraoui, H. Hendrikx. Who started this rumor? Quantifying the natural differential privacy guarantees of gossip protocols.
    [arXiv:1902.07138], International Symposium on Distributed Computing (DISC), 2020.

  • H. Hendrikx, F. Bach, L. Massoulié. An Accelerated Decentralized Stochastic Proximal Algorithm for Finite Sums.
    [arXiv:1905.11394], Advances in Neural Information Processing Systems (NeurIPS), 2019.

  • H. Hendrikx, F. Bach, L. Massoulié. Accelerated Decentralized Optimization with Local Updates for Smooth and Strongly Convex Objectives.
    [arXiv:1810.02660], International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.

  • E. M. El Mhamdi, R. Guerraoui, H. Hendrikx, A. Maurer. Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning.
    [arXiv:1704.02882], Advances in Neural Information Processing Systems (NIPS), 2017.

  • H. Hendrikx, M. Nuñez del Prado Cortez. Towards a route detection method based on detail call records.
    [IEEE Xplore 7885725], IEEE Latin American Conference on Computational Intelligence (LA-CCI), pp. 1-6, 2016.