Aryan Mokhtari Receives NSF Grant to Research Optimization Algorithms for Large-Scale Learning

Published: September 29, 2020

WNCG professor Aryan Mokhtari has received a grant from the National Science Foundation (NSF) to study "Computationally Efficient Second-Order Optimization Algorithms for Large-Scale Learning." The project "lays out an agenda to develop a class of memory efficient, computationally affordable, and distributed friendly second-order methods for solving modern machine learning problems."

“Current optimization algorithms for large-scale machine learning are inefficient at times since these methods operate using only first-order information (gradient) of the objective function. This project aims to develop a class of fast and efficient second-order methods that exploit the curvature information of the objective function to accelerate convergence in ill-conditioned settings. The research encompasses three different thrusts: (I) Developing memory efficient incremental quasi-Newton methods with provably fast convergence guarantees; (II) Improving the computational complexity of second-order adaptive sample size algorithms by leveraging quasi-Newton approximation techniques; and (III) Designing distributed second-order methods that outperform first-order algorithms both in terms of overall complexity (in convex settings) and in terms of quality of solution (in non-convex settings).”
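To give a concrete sense of what a quasi-Newton method does, here is a minimal sketch of the classic BFGS update applied to an ill-conditioned quadratic. This is a generic textbook illustration, not the project's own algorithms; the objective, the function names (`bfgs_quadratic`, `f_grad`), and the problem dimensions are made up for demonstration.

```python
import numpy as np

def f_grad(x, A, b):
    """Gradient of the quadratic f(x) = 0.5 x^T A x - b^T x."""
    return A @ x - b

def bfgs_quadratic(A, b, x0, iters=50):
    """Textbook BFGS with exact line search on a quadratic objective."""
    n = len(x0)
    x = x0.copy()
    H = np.eye(n)                      # inverse-Hessian approximation
    g = f_grad(x, A, b)
    for _ in range(iters):
        p = -H @ g                     # quasi-Newton search direction
        # Exact step size for a quadratic: alpha = -(g^T p) / (p^T A p)
        alpha = -(g @ p) / (p @ (A @ p))
        s = alpha * p                  # step taken: x_{k+1} - x_k
        x_new = x + s
        g_new = f_grad(x_new, A, b)
        y = g_new - g                  # change in the gradient
        sy = s @ y
        if sy > 1e-12:                 # curvature condition; skip update otherwise
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse-Hessian update
        x, g = x_new, g_new
    return x

# Ill-conditioned test problem: eigenvalues of A span 1 to 1e4.
rng = np.random.default_rng(0)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, 4, n)) @ Q.T
b = rng.standard_normal(n)

x_star = np.linalg.solve(A, b)
x_bfgs = bfgs_quadratic(A, b, np.zeros(n))
print("BFGS error:", np.linalg.norm(x_bfgs - x_star))
```

On a problem this poorly conditioned, plain gradient descent needs iterations on the order of the condition number (here about 10^4) to converge, whereas the quasi-Newton direction corrects for curvature and reaches the solution in far fewer steps. The memory and per-iteration cost of maintaining the curvature approximation is exactly the overhead the project's quoted thrusts aim to reduce.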

Aryan Mokhtari is an Assistant Professor in the Department of Electrical and Computer Engineering at UT Austin and a member of the Wireless Networking and Communications Group. His research interests span optimization, machine learning, and artificial intelligence. His current research focuses on the theory and applications of convex and non-convex optimization in large-scale machine learning and data science problems.