Ohad Shamir
Verified email at weizmann.ac.il - Homepage
Title · Cited by · Year
The power of depth for feedforward neural networks
R Eldan, O Shamir
Conference on learning theory, 907-940, 2016
931 · 2016
Learnability, stability and uniform convergence
S Shalev-Shwartz, O Shamir, N Srebro, K Sridharan
The Journal of Machine Learning Research 11, 2635-2670, 2010
842* · 2010
Making gradient descent optimal for strongly convex stochastic optimization
A Rakhlin, O Shamir, K Sridharan
arXiv preprint arXiv:1109.5647, 2011
768 · 2011
Optimal distributed online prediction using mini-batches
O Dekel, R Gilad-Bachrach, O Shamir, L Xiao
Journal of Machine Learning Research 13 (1), 2012
766 · 2012
Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes
O Shamir, T Zhang
International conference on machine learning, 71-79, 2013
632 · 2013
Communication-efficient distributed optimization using an approximate Newton-type method
O Shamir, N Srebro, T Zhang
International conference on machine learning, 1000-1008, 2014
614 · 2014
On the computational efficiency of training neural networks
R Livni, S Shalev-Shwartz, O Shamir
Advances in neural information processing systems 27, 2014
582 · 2014
Size-independent sample complexity of neural networks
N Golowich, A Rakhlin, O Shamir
Conference On Learning Theory, 297-299, 2018
542 · 2018
Better mini-batch algorithms via accelerated gradient methods
A Cotter, O Shamir, N Srebro, K Sridharan
Advances in neural information processing systems 24, 2011
379 · 2011
Adaptively learning the crowd kernel
O Tamuz, C Liu, S Belongie, O Shamir, AT Kalai
arXiv preprint arXiv:1105.1033, 2011
314 · 2011
Nonstochastic multi-armed bandits with graph-structured feedback
N Alon, N Cesa-Bianchi, C Gentile, S Mannor, Y Mansour, O Shamir
SIAM Journal on Computing 46 (6), 1785-1826, 2017
296* · 2017
Spurious local minima are common in two-layer ReLU neural networks
I Safran, O Shamir
International conference on machine learning, 4433-4441, 2018
285 · 2018
Proving the lottery ticket hypothesis: Pruning is all you need
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
International Conference on Machine Learning, 6682-6691, 2020
264 · 2020
Is local SGD better than minibatch SGD?
B Woodworth, KK Patel, S Stich, Z Dai, B Bullins, B McMahan, O Shamir, ...
International Conference on Machine Learning, 10334-10343, 2020
248 · 2020
An optimal algorithm for bandit and zero-order convex optimization with two-point feedback
O Shamir
Journal of Machine Learning Research 18 (52), 1-11, 2017
248 · 2017
Learning and generalization with the information bottleneck
O Shamir, S Sabato, N Tishby
Theoretical Computer Science 411 (29-30), 2696-2711, 2010
240 · 2010
Depth-width tradeoffs in approximating natural functions with neural networks
I Safran, O Shamir
International conference on machine learning, 2979-2987, 2017
217* · 2017
Communication complexity of distributed convex learning and optimization
Y Arjevani, O Shamir
Advances in neural information processing systems 28, 2015
215 · 2015
Failures of gradient-based deep learning
S Shalev-Shwartz, O Shamir, S Shammah
International Conference on Machine Learning, 3067-3075, 2017
213 · 2017
Learning to classify with missing and corrupted features
O Dekel, O Shamir
Proceedings of the 25th international conference on Machine learning, 216-223, 2008
213 · 2008