gradient methods
Unveiling Insights from "Gradient Descent Converges Linearly for Logistic Regression on Separable Data"
Bang An, Ph.D. Student, Applied Mathematics and Computational Sciences
Jan 17, 10:00 - 11:00
B1 L0 R0118
gradient methods
Regression Models
Abstract: In this presentation, I will share a paper titled "Gradient Descent Converges Linearly for Logistic Regression on Separable Data", a work closely related to my ongoing research. I will explore its relevance to my current research topic and discuss the inspiration it provides for our future work. Abstract of the paper: We show that running gradient descent with a variable learning rate guarantees a loss f(x) \leq 1.1 f(x^*) + \epsilon for the logistic regression objective, where the error \epsilon decays exponentially with the number of iterations and polynomially with the magnitude of the entries of an …
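The setting the paper analyzes can be illustrated with a minimal sketch: plain gradient descent on the logistic regression objective over linearly separable data. This is an assumed illustration, not code from the talk or the paper; in particular, the paper's variable learning-rate schedule is replaced here by a fixed step size, and the toy data, function names, and hyperparameters are made up.

```python
import numpy as np

def logistic_loss(w, X, y):
    # Average logistic loss f(w) = mean(log(1 + exp(-y * Xw))) for labels y in {-1, +1}.
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def gradient_descent(X, y, steps=1000, lr=0.5):
    # Plain gradient descent on the logistic regression objective.
    # The paper studies a variable learning-rate schedule; a constant
    # step size is used here purely for illustration.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        # Gradient of the average logistic loss.
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
        w -= lr * grad
    return w

# Toy linearly separable data: on such data the minimum is not attained at any
# finite w, which is the regime the paper's guarantee addresses.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2.0, 1.0, size=(50, 2)),
               rng.normal(-2.0, 1.0, size=(50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])
w = gradient_descent(X, y)
print("final logistic loss:", logistic_loss(w, X, y))
```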
On the Natural Gradient Descent
Prof. Levon Nurbekyan
Jun 11, 16:00 - 17:00
KAUST
gradient methods
Abstract: Numerous problems in scientific computing can be formulated as optimization problems of suitable parametric models over parameter spaces. Neural network and deep learning methods provide unique capabilities for building and optimizing such models, especially in high-dimensional settings. Nevertheless, neural networks and deep learning techniques are often opaque and resistant to precise control of their mathematical properties in terms of architectures, hyperparameters, etc. Consequently, optimizing neural network models can result in a laborious hyperparameter tuning process that …
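As a rough illustration of the natural gradient idea named in the talk's title, the sketch below preconditions the ordinary gradient by the inverse Fisher information matrix on a toy 1-D Gaussian maximum-likelihood problem, where the Fisher matrix is known in closed form. The model, step size, and function names are assumptions for illustration, not material from the talk.

```python
import numpy as np

def natural_gradient_gaussian(data, steps=200, lr=0.1):
    # Fit a 1-D Gaussian N(mu, sigma^2) to data by minimizing the average negative
    # log-likelihood with natural gradient descent:
    #   theta <- theta - lr * F(theta)^{-1} grad,
    # where F is the Fisher information matrix of the model.
    mu, log_sigma = 0.0, 0.0
    for _ in range(steps):
        sigma2 = np.exp(2.0 * log_sigma)
        resid = data - mu
        # Gradient of the average negative log-likelihood w.r.t. (mu, log_sigma).
        g_mu = -np.mean(resid) / sigma2
        g_ls = 1.0 - np.mean(resid**2) / sigma2
        grad = np.array([g_mu, g_ls])
        # Exact Fisher information in the (mu, log_sigma) parameterization:
        # F = diag(1/sigma^2, 2).
        fisher = np.diag([1.0 / sigma2, 2.0])
        # Natural gradient step: precondition by the inverse Fisher matrix.
        step = np.linalg.solve(fisher, grad)
        mu -= lr * step[0]
        log_sigma -= lr * step[1]
    return mu, np.exp(log_sigma)

rng = np.random.default_rng(0)
data = rng.normal(3.0, 2.0, size=1000)
print(natural_gradient_gaussian(data))  # should approach (3.0, 2.0)
```

The point of the preconditioning is that the step is measured in the geometry induced by the model's distribution rather than in raw parameter coordinates, which is what makes natural gradient updates invariant to smooth reparameterizations.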