Welcome
I am a postdoctoral fellow at the ETH AI Center, working under the supervision of Prof. Martin Vechev and Prof. Fanny Yang.
My research interests lie in the area of trustworthy machine learning, with a focus on robustness and fairness. In particular, I am interested in providing provable guarantees for machine learning algorithms, especially in the context of collaborative and federated learning.
Before joining ETH, I was a PhD student at IST Austria, working in the group of Prof. Christoph Lampert. I was also part of the ELLIS PhD Program.
Links: Google Scholar profile, LinkedIn, a full CV.
Papers
Authors are ordered by contribution, unless specified otherwise.
Preprints
Nikita Tsoy, Nikola Konstantinov
Strategic Data Sharing between Competitors
In preparation, 2023; Draft available on request
Florian E. Dorner, Nikola Konstantinov, Giorgi Pashaliev, Martin Vechev
Incentivizing Honesty in Federated Learning under Competition
In preparation, 2023; Draft available on request
Nikola Konstantinov, Christoph H. Lampert
Fairness Through Regularization for Learning to Rank
Preprint, 2021
Publications
Florian E. Dorner, Momchil Peychev, Nikola Konstantinov, Naman Goel, Elliott Ash, Martin Vechev
Human-Guided Fair Classification for Natural Language Processing
To appear in: International Conference on Learning Representations (ICLR), Spotlight, 2023
Short version presented at: TSRML@NeurIPS, 2022
Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev
Data Leakage in Federated Averaging
To appear in: Transactions on Machine Learning Research (TMLR), 2022
Eugenia Iofinova*, Nikola Konstantinov*, Christoph H. Lampert
FLEA: Provably Fair Multisource Learning from Unreliable Training Data
To appear in: Transactions on Machine Learning Research (TMLR), 2022
* Denotes equal contribution
Nikola Konstantinov, Christoph H. Lampert
Fairness-Aware PAC Learning from Corrupted Data
In: Journal of Machine Learning Research (JMLR), 2022
Nikola Konstantinov, Christoph H. Lampert
On the Impossibility of Fairness-Aware Learning from Corrupted Data
Contributed talk; in proceedings of AFCR@NeurIPS, 2021
Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph H. Lampert
On the Sample Complexity of Adversarial Multi-Source PAC Learning
In: International Conference on Machine Learning (ICML), 2020
Nikola Konstantinov, Christoph H. Lampert
Robust Learning from Untrusted Sources
In: International Conference on Machine Learning (ICML), 2019; Long Talk
Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov*, Sarit Khirirat, Cedric Renggli
The Convergence of Sparsified Gradient Methods
In: Conference on Neural Information Processing Systems (NeurIPS), 2018
* Authors' order is alphabetical.
Dan Alistarh, Chris De Sa, Nikola Konstantinov*
The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory
In: ACM Symposium on Principles of Distributed Computing (PODC), 2018
* Authors' order is alphabetical.