Publications and Preprints

The Journal of Physical Chemistry, 2015. pdf, Annie Marsden.

"Geometric median in nearly linear time." In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016.

Yang P. Liu, Aaron Sidford, Department of Mathematics. Email.

We organize regular talks; if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email).

Overview: This class will introduce the theoretical foundations of discrete mathematics and algorithms.

Before attending Stanford, I graduated from MIT in May 2018. In September 2018, I started a PhD at Stanford University in mathematics, and am advised by Aaron Sidford.

"I am excited to push the theory of optimization and algorithm design to new heights!" Assistant Professor Aaron Sidford speaks at ICME's Xpo event. He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Jonathan Kelner.

Some of these I am still actively improving, and all of them I am happy to continue polishing.

Research Interests: My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms.

475 Via Ortega. I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.

arXiv | conference pdf, Annie Marsden, Sergio Bacallado.

Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is space-efficient).

Anup B. Rao. Neural Information Processing Systems (NeurIPS), 2014.

The site facilitates research and collaboration in academic endeavors. In Sidford's dissertation, Iterative Methods, Combinatorial Optimization, and Linear Programming Beyond the Universal Barrier. Best Paper Award.

2022 - current: Assistant Professor, Georgia Institute of Technology (Georgia Tech). 2022: Visiting researcher, Max Planck Institute for Informatics.

With Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, and David P. Woodruff. Research interests: data streams, machine learning, numerical linear algebra, sketching, and sparse recovery.

With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.

With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Aaron Sidford.

ICML Workshop on Reinforcement Learning Theory, 2021. Variance Reduction for Matrix Games.

Stanford University. I am an assistant professor in the Department of Management Science and Engineering and the Department of Computer Science at Stanford University. I enjoy understanding the theoretical grounding of the algorithms I study. [pdf] [talk]

Russell Lyons and Yuval Peres.

With Yair Carmon, Arun Jambulapati, and Aaron Sidford.

Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems.
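To make the linear-system setting of that last title concrete, here is a minimal, non-accelerated coordinate descent sketch for a symmetric positive definite system. It is illustrative only; the function name and parameters below are mine, and it does not reproduce the accelerated, randomized methods the title refers to.

import numpy as np

def coordinate_descent_solve(A, b, sweeps=200):
    """Solve A x = b for symmetric positive definite A by cyclic coordinate
    descent on f(x) = 0.5 * x^T A x - b^T x, with exact minimization along
    each coordinate (a basic sketch, not an accelerated method)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            # Exact minimization over coordinate i: shift x_i by residual_i / A_ii.
            x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T + 5 * np.eye(5)      # symmetric positive definite test matrix
    b = rng.standard_normal(5)
    x = coordinate_descent_solve(A, b)
    print(np.allclose(A @ x, b, atol=1e-6))  # True once converged

Each inner step exactly minimizes the quadratic along one coordinate; acceleration and careful coordinate sampling are the improvements the cited line of work builds on top of this basic loop.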
Neural Information Processing Systems (NeurIPS, Oral), 2019.

A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions.

In Symposium on Foundations of Computer Science (FOCS 2017) (arXiv).

"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions. With Yair Carmon, John C. Duchi, and Oliver Hinder. In International Conference on Machine Learning (ICML 2017) (arXiv).

Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs. With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu. In Symposium on Theory of Computing (STOC 2017).

Subquadratic Submodular Function Minimization. With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong. In Symposium on Theory of Computing (STOC 2017) (arXiv).

Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu. In Symposium on Foundations of Computer Science (FOCS 2016) (arXiv).

With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki. In Symposium on Theory of Computing (STOC 2016) (arXiv).

With Alina Ene, Gary L. Miller, and Jakub Pachocki.

Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm. With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli. In Conference on Learning Theory (COLT 2016) (arXiv).

Principal Component Projection Without Principal Component Analysis. With Roy Frostig, Cameron Musco, and Christopher Musco. In International Conference on Machine Learning (ICML 2016) (arXiv).

Faster Eigenvector Computation via Shift-and-Invert Preconditioning. With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli.

Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis.

Prior to that, I received an MPhil in Scientific Computing at the University of Cambridge on a Churchill Scholarship, where I was advised by Sergio Bacallado.

Group Resources.

Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, Di Wang: Minimum Cost Flows, MDPs, and $\ell_1$-Regression in Nearly Linear Time for Dense Instances.

172 Gates Computer Science Building, 353 Jane Stanford Way, Stanford University.

I am fortunate to be advised by Aaron Sidford.

Yujia Jin. I develop new iterative methods and dynamic algorithms that complement each other, resulting in improved optimization algorithms.

Aaron's research interests lie in optimization, the theory of computation, and the design and analysis of algorithms.

Thesis, 2016. pdf.

International Conference on Machine Learning (ICML), 2021. Acceleration with a Ball Optimization Oracle, with Aaron Sidford.

This is the academic homepage of Yang Liu (I publish under Yang P. Liu). CV / code / contact.

My PhD dissertation, Algorithmic Approaches to Statistical Questions, 2012.

SODA 2023: 4667-4767.

Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)!

Given an independence oracle, we provide an exact $O(nr \log r \cdot T_{\mathrm{ind}})$-time algorithm.
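To illustrate the independence-oracle model referenced in that last sentence (a sketch of the query model only, not the $O(nr \log r \cdot T_{\mathrm{ind}})$ algorithm itself; the helper names below are hypothetical, and $T_{\mathrm{ind}}$ corresponds to the cost of one oracle call):

from typing import Callable, Iterable, List, Set

# Hypothetical interface: an independence oracle answers "is this set independent?"
IndependenceOracle = Callable[[Set[int]], bool]

def greedy_basis(ground_set: Iterable[int], is_independent: IndependenceOracle) -> List[int]:
    """Build a basis (maximal independent set) of a matroid using only
    independence queries: n oracle calls, hence O(n * T_ind) oracle time."""
    basis: List[int] = []
    for e in ground_set:
        if is_independent(set(basis) | {e}):  # one oracle call of cost T_ind
            basis.append(e)
    return basis

def uniform_matroid_oracle(r: int) -> IndependenceOracle:
    # Uniform matroid U(n, r): every set of size at most r is independent.
    return lambda s: len(s) <= r

if __name__ == "__main__":
    oracle = uniform_matroid_oracle(r=3)
    print(greedy_basis(range(10), oracle))  # -> [0, 1, 2]

The point of the model is that the algorithm never sees the matroid explicitly; its running time is measured in elementary operations plus the number of oracle calls, each charged $T_{\mathrm{ind}}$.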
In Innovations in Theoretical Computer Science (ITCS 2018) (arXiv), Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space.

I am broadly interested in mathematics and theoretical computer science. I graduated with a PhD from Princeton University in 2018. University of Cambridge MPhil.

In Symposium on Foundations of Computer Science (FOCS 2020); invited to the special issue (arXiv). With Hilal Asi, Yair Carmon, Arun Jambulapati, and Aaron Sidford. In each setting we provide faster exact and approximate algorithms.

Optimization and Algorithmic Paradigms (CS 261): Winter '23
Optimization Algorithms (CS 369O / CME 334 / MS&E 312): Fall '22
Discrete Mathematics and Algorithms (CME 305 / MS&E 315): Winter '22, '21, '20, '19, '18
Introduction to Optimization Theory (CS 269O / MS&E 213): Fall '20, '19, Spring '19, '18, '17
Almost Linear Time Graph Algorithms (CS 269G / MS&E 313): Fall '18, Winter '17
Navajo Math Circles Instructor.

Contact: dwoodruf (at) cs (dot) cmu (dot) edu or dpwoodru (at) gmail (dot) com. CV (updated July 2021).

With Vidya Muthukumar and Aaron Sidford.

Sequential Matrix Completion.

One research focus is dynamic algorithms, i.e., algorithms that maintain a solution while the input undergoes updates.

[pdf] 2013. pdf, Fourier Transformation at a Representation, Annie Marsden.

There will be a talk every day from 16:00-18:00 CEST from July 26 to August 13.

Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence, FOCS 2022.

With Yosheb Getachew, Yujia Jin, Aaron Sidford, and Kevin Tian (2023).

With Yair Carmon, John C. Duchi, and Oliver Hinder. We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in $\epsilon$ better than $\epsilon^{-8/5}$, which is within $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of the best known rate for such methods.
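Spelling out the gap stated in that sentence (pure arithmetic on the quoted exponents, with $\epsilon$ the target accuracy): multiplying the lower bound by the stated gap factor gives

$$\epsilon^{-8/5}\cdot\epsilon^{-1/15}\log\tfrac{1}{\epsilon} \;=\; \epsilon^{-\left(\frac{24}{15}+\frac{1}{15}\right)}\log\tfrac{1}{\epsilon} \;=\; \epsilon^{-5/3}\log\tfrac{1}{\epsilon},$$

so the $\epsilon^{-8/5}$ lower bound for deterministic first-order methods sits within a factor of $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of a best known rate of order $\epsilon^{-5/3}$ (up to the logarithmic factor).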
The Complexity of Infinite-Horizon General-Sum Stochastic Games. With Yujia Jin, Vidya Muthukumar, and Aaron Sidford. To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv).
Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. To appear in Advances in Neural Information Processing Systems (NeurIPS 2022) (arXiv).
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood. With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur.
Improved Lower Bounds for Submodular Function Minimization. With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang. In Symposium on Foundations of Computer Science (FOCS 2022) (arXiv).
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. With Yair Carmon, Arun Jambulapati, and Yujia Jin. International Conference on Machine Learning (ICML 2022) (arXiv).
Efficient Convex Optimization Requires Superlinear Memory. With Annie Marsden, Vatsal Sharan, and Gregory Valiant. Conference on Learning Theory (COLT 2022).
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method. Conference on Learning Theory (COLT 2022) (arXiv).
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan.
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching. With Arun Jambulapati, Yujia Jin, and Kevin Tian. International Colloquium on Automata, Languages and Programming (ICALP 2022) (arXiv).
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers. With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng. In Symposium on Theory of Computing (STOC 2022) (arXiv).
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space. With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian. In Symposium on Discrete Algorithms (SODA 2022) (arXiv).
Algorithmic trade-offs for girth approximation in undirected graphs. With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick. In Symposium on Discrete Algorithms (SODA 2022).
Computing Lewis Weights to High Precision. With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin. In Advances in Neural Information Processing Systems (NeurIPS 2021) (arXiv).
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. In Conference on Learning Theory (COLT 2021) (arXiv).
The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood. With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs. In International Conference on Machine Learning (ICML 2021) (arXiv).
Minimum cost flows, MDPs, and $\ell_1$-regression in nearly linear time for dense instances. With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Theory of Computing (STOC 2021) (arXiv).
Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers. In Symposium on Discrete Algorithms (SODA 2021) (arXiv).
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. In Innovations in Theoretical Computer Science (ITCS 2021) (arXiv).
Acceleration with a Ball Optimization Oracle. With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian. In Conference on Neural Information Processing Systems (NeurIPS 2020).
Instance Based Approximations to Profile Maximum Likelihood. In Conference on Neural Information Processing Systems (NeurIPS 2020) (arXiv).
Large-Scale Methods for Distributionally Robust Optimization. With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
High-precision Estimation of Random Walks in Small Space. With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan. In Symposium on Foundations of Computer Science (FOCS 2020) (arXiv).
Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs. With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Foundations of Computer Science (FOCS 2020).
With Yair Carmon, Yujia Jin, and Kevin Tian.
Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time. Invited to the special issue (arXiv before merge).
Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (arXiv).
Efficiently Solving MDPs with Stochastic Mirror Descent. In International Conference on Machine Learning (ICML 2020) (arXiv).
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond. With Oliver Hinder and Nimit Sharad Sohoni. In Conference on Learning Theory (COLT 2020) (arXiv).
Solving Tall Dense Linear Programs in Nearly Linear Time. With Jan van den Brand, Yin Tat Lee, and Zhao Song. In Symposium on Theory of Computing (STOC 2020).