Scalable optimization for ML
I have developed methods that scale up optimization for multi-task learning in the distributed/federated setting [3]. I have also worked on developing and analyzing the theoretical properties of gradient-based optimization algorithms for multi-task learning [4], distributed optimization [2], and matrix factorization [1].
Related Publications
[1] Revisiting the Landscape of Matrix Factorization. Paper | Video
Hossein Valavi, Sulin Liu, Peter J. Ramadge. International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
[2] Communication-Efficient Distributed Primal-Dual Algorithm for Saddle Point Problems. Paper
Yaodong Yu*, Sulin Liu* (*equal contribution), Sinno Jialin Pan. Conference on Uncertainty in Artificial Intelligence (UAI), 2017.
[3] Distributed Multi-Task Relationship Learning. Paper | Video
Sulin Liu, Sinno Jialin Pan, Qirong Ho. SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2017.
[4] Adaptive Group Sparse Multi-task Learning via Trace Lasso. Paper
Sulin Liu, Sinno Jialin Pan. International Joint Conference on Artificial Intelligence (IJCAI), 2017.