Publications
Below is a list of my publications, organized by venue, followed by preprints currently under review.
* denotes equal contribution
Thesis
- Randomized Numerical Linear Algebra for Large-Scale Optimization
Zachary Frangella
Stanford University Thesis, 2025
Journal Publications
On the Linear Convergence of Generalized Newton Inexact ADMM
Zachary Frangella, Theo Diamandis, Bartolomeo Stellato, Madeleine Udell
Accepted, Transactions on Machine Learning Research, 2025
[code]
GeNIOS: an (almost) second-order operator-splitting solver for large-scale convex optimization
Theo Diamandis, Zachary Frangella, Shipu Zhao, Bartolomeo Stellato, Madeleine Udell
Accepted, Mathematical Programming Computation, 2025
[code]
Enhancing Physics-Informed Neural Networks Through Feature Engineering
Shaghayegh Fazliani, Zachary Frangella, Madeleine Udell
Transactions on Machine Learning Research, 2025
PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates
Zachary Frangella*, Pratik Rathore*, Shipu Zhao, Madeleine Udell
Journal of Machine Learning Research, 2024
[code]
SketchySGD: Reliable Stochastic Optimization via Randomized Curvature Estimates
Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell
SIAM Journal on Mathematics of Data Science, 2024
[code]
Randomized Nyström Preconditioning
Zachary Frangella, Joel A. Tropp, Madeleine Udell
SIAM Journal on Matrix Analysis and Applications, 2023
Conference Publications
Turbocharging Gaussian Process Inference with Approximate Sketch-and-Project
Pratik Rathore, Zachary Frangella, Sachin Garg, Shaghayegh Fazliani, Michał Dereziński, Madeleine Udell
Accepted, Advances in Neural Information Processing Systems, 2025
CRONOS: Enhancing Deep Learning with Scalable GPU Accelerated Convex Neural Networks
Miria Feng, Zachary Frangella, Mert Pilanci
Advances in Neural Information Processing Systems, 2024
[code]
Challenges in Training PINNs: A Loss Landscape Perspective
Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, Madeleine Udell
International Conference on Machine Learning, 2024, Oral (top 1.5% of submissions)
[code]
NysADMM: faster composite convex optimization via low-rank approximation
Shipu Zhao*, Zachary Frangella*, Madeleine Udell
International Conference on Machine Learning, 2022
Can we globally optimize cross-validation loss? Quasiconvexity in ridge regression
William T. Stephenson, Zachary Frangella, Madeleine Udell, Tamara Broderick
Advances in Neural Information Processing Systems, 2021
In the Pipeline
Robust, randomized preconditioning for kernel ridge regression
Mateo Diaz, Ethan N. Epperly, Zachary Frangella, Joel A. Tropp, Robert J. Webber
Submitted
[code]
Have ASkotch: A Neat Solution for Large-scale Kernel Ridge Regression
Pratik Rathore, Zachary Frangella, Jiaming Yang, Michał Dereziński, Madeleine Udell
Submitted
[code]
SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning
Jingruo Sun, Zachary Frangella, Madeleine Udell
Submitted
[code]
