Andre Wibisono
Assistant Professor at Yale University
Verified email at yale.edu
Title · Cited by · Year
A variational perspective on accelerated methods in optimization
A Wibisono, AC Wilson, MI Jordan
Proceedings of the National Academy of Sciences 113 (47), E7351-E7358, 2016
Cited by 547 · 2016
Optimal rates for zero-order convex optimization: The power of two function evaluations
JC Duchi, MI Jordan, MJ Wainwright, A Wibisono
IEEE Transactions on Information Theory 61 (5), 2788-2806, 2015
Cited by 515 · 2015
Streaming variational Bayes
T Broderick, N Boyd, A Wibisono, AC Wilson, MI Jordan
Advances in Neural Information Processing Systems 26, 2013
Cited by 395 · 2013
Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices
S Vempala, A Wibisono
Advances in Neural Information Processing Systems 32, 2019
Cited by 255 · 2019
Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem
A Wibisono
Conference on Learning Theory, 2093-3027, 2018
Cited by 177 · 2018
Last-iterate convergence rates for min-max optimization
J Abernethy, KA Lai, A Wibisono
arXiv preprint arXiv:1906.02027, 2019
Cited by 62 · 2019
Maximum entropy distributions on graphs
C Hillar, A Wibisono
arXiv preprint arXiv:1301.3321, 2013
Cited by 60 · 2013
Accelerating rescaled gradient descent: Fast optimization of smooth functions
AC Wilson, L Mackey, A Wibisono
Advances in Neural Information Processing Systems 32, 2019
Cited by 52 · 2019
Improved analysis for a proximal algorithm for sampling
Y Chen, S Chewi, A Salim, A Wibisono
Conference on Learning Theory, 2984-3014, 2022
Cited by 50 · 2022
Proximal Langevin algorithm: Rapid convergence under isoperimetry
A Wibisono
arXiv preprint arXiv:1911.01469, 2019
Cited by 46 · 2019
Finite sample convergence rates of zero-order stochastic optimization methods
A Wibisono, MJ Wainwright, M Jordan, JC Duchi
Advances in Neural Information Processing Systems 25, 2012
Cited by 36 · 2012
Last-iterate convergence rates for min-max optimization: Convergence of Hamiltonian gradient descent and consensus optimization
J Abernethy, KA Lai, A Wibisono
Algorithmic Learning Theory, 3-47, 2021
Cited by 35 · 2021
On accelerated methods in optimization
A Wibisono, AC Wilson
arXiv preprint arXiv:1509.03616, 2015
Cited by 33 · 2015
The mirror Langevin algorithm converges with vanishing bias
R Li, M Tao, SS Vempala, A Wibisono
International Conference on Algorithmic Learning Theory, 718-742, 2022
Cited by 29 · 2022
Minimax option pricing meets Black-Scholes in the limit
J Abernethy, RM Frongillo, A Wibisono
Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing …, 2012
Cited by 26 · 2012
Convergence of the Inexact Langevin Algorithm and Score-based Generative Models in KL Divergence
K Yingxi Yang, A Wibisono
arXiv preprint arXiv:2211.01512, 2022
Cited by 21* · 2022
Provable acceleration of heavy ball beyond quadratics for a class of Polyak-Łojasiewicz functions when the non-convexity is averaged-out
JK Wang, CH Lin, A Wibisono, B Hu
International Conference on Machine Learning, 22839-22864, 2022
Cited by 19 · 2022
Sufficient conditions for uniform stability of regularization algorithms
A Wibisono, L Rosasco, T Poggio
Computer Science and Artificial Intelligence Laboratory Technical Report …, 2009
Cited by 17 · 2009
How to hedge an option against an adversary: Black-Scholes pricing is minimax optimal
J Abernethy, PL Bartlett, R Frongillo, A Wibisono
Advances in Neural Information Processing Systems 26, 2013
Cited by 15 · 2013
Information and estimation in Fokker-Planck channels
A Wibisono, V Jog, PL Loh
2017 IEEE International Symposium on Information Theory (ISIT), 2673-2677, 2017
Cited by 14 · 2017