Welcome
I am a postdoctoral fellow at the ETH AI Center, working under the supervision of Prof. Martin Vechev and Prof. Fanny Yang.
My research interests lie in the area of trustworthy machine learning, with a focus on robustness and fairness.
Before ETH, I was a PhD student at IST Austria, working in the group of Prof. Christoph Lampert. I was also part of the ELLIS PhD Program.
Links: Google Scholar profile, LinkedIn, and a full CV.
Papers
Authors are ordered by contribution, unless specified otherwise.
Preprints
Eugenia Iofinova*, Nikola Konstantinov*, Christoph H. Lampert
FLEA: Provably Fair Multisource Learning from Unreliable Training Data
Under submission, 2022
* Denotes equal contribution
Nikola Konstantinov, Christoph H. Lampert
Fairness Through Regularization for Learning to Rank
Preprint, 2021
Publications
Nikola Konstantinov, Christoph H. Lampert
Fairness-Aware PAC Learning from Corrupted Data
Accepted to: Journal of Machine Learning Research (JMLR), 2022
Nikola Konstantinov, Christoph H. Lampert
On the Impossibility of Fairness-Aware Learning from Corrupted Data
Contributed talk and proceedings paper at the AFCR workshop at NeurIPS, 2021
Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph H. Lampert
On the Sample Complexity of Adversarial Multi-Source PAC Learning
In: International Conference on Machine Learning (ICML), 2020
Nikola Konstantinov, Christoph H. Lampert
Robust Learning from Untrusted Sources
In: International Conference on Machine Learning (ICML), 2019; Long Talk
Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov*, Sarit Khirirat, Cedric Renggli
The Convergence of Sparsified Gradient Methods
In: Conference on Neural Information Processing Systems (NeurIPS), 2018
* Authors' order is alphabetical.
Dan Alistarh, Chris De Sa, Nikola Konstantinov*
The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory
In: ACM Symposium on Principles of Distributed Computing (PODC), 2018
* Authors' order is alphabetical.