
Sihan Liu's Personal Page (刘思涵)
I am a second-year PhD student in UC San Diego's theoretical computer science group, where I am fortunate to be advised by Daniel Kane.
I received my BS in Computer Science from the University of Wisconsin-Madison, where I was lucky enough to work with Ilias Diakonikolas and Christos Tzamos.
I am interested in algorithmic and statistical problems related to the foundations of machine learning.
- In theory, there is no difference between theory and practice. But in practice, there is.
My music collection
Research
- Efficient Testable Learning of Halfspaces with Adversarial Label Noise
- with Ilias Diakonikolas, Daniel Kane, Vasilis Kontonis, Nikos Zarifis
- Exponential Hardness of Reinforcement Learning with Linear Function Approximation
- with Daniel Kane, Shachar Lovett, Gaurav Mahajan, Csaba Szepesvári, Gellért Weisz
- Accepted to COLT 2023. Link
- Strengthens the computational lower bound for RL with linear function approximation under the Randomized Exponential Time Hypothesis.
- Sampling Equilibria: Fast No-Regret Learning in Structured Games
- with Daniel Beaglehole, Max Hopkins, Daniel Kane, Shachar Lovett
- Accepted to SODA 2023. Link
- Develops a general framework for sampling equilibria in games with large state spaces.
- Gives a fast algorithm for computing approximate equilibria of the Colonel Blotto game.
- Nearly-Tight Bounds for Testing Histogram Distributions
- with Clément Canonne, Ilias Diakonikolas, Daniel Kane
- Accepted to NeurIPS 2022. Link
- Obtains a nearly optimal algorithm for histogram testing. Story behind
- Derives a nearly matching sample complexity lower bound.
- Computational-Statistical Gaps in Reinforcement Learning
- with Daniel Kane, Shachar Lovett, Gaurav Mahajan
- 2021.10 - 2022.2
- Accepted to COLT 2022. Link
- Shows the first computational lower bound for RL with linear function approximation.
- Convergence and Sample Complexity of SGD in GANs
- with Christos Tzamos, Vasilis Kontonis
- 2020.1 - 2020.10
- Obtains theoretical guarantees of nearly optimal sample complexity and convergence rate for Generative Adversarial Networks learning one-layer non-linear neural networks.
- Selected to present at CS Student Research Symposium. Link to Manuscript
Other Experience
- Competitive Programming
- 2018 - Present (in reality, until 2021)
- Coach: Dieter van Melkebeek
- Achievements: 3rd place out of 183 teams at the 2019 ICPC North Central NA Regional Contest (Team NP-Easiness), Code Jam Round 2 top 3% (leo990629), Hacker Cup Round 2 top 10% (sihan.liu.3139).
- Software Engineer Intern at DataChat
- 2019.6 - 2021.5
- DataChat Inc.
- Constrained Natural Language (CNL) parsing: intent inference and entity retrieval from structured natural language.
- Autocomplete Engine: provides data-sensitive suggestions as the user types.
- Linter: semantic highlighting of user input.
- Skills used: React.js + Redux + Saga, Golang, Python, RedisStore, spaCy.
Education
- Bachelor's degrees in Computer Science and Mathematics
- GPA: 4.0/4.0; Dewitt Undergraduate Scholarship
- Coursework: Learning Theory, Advanced Algorithms, Algorithmic Game Theory, Mechanism Design, Quantum Computing
- University of Wisconsin-Madison
- 2017 - 2021
Class Projects
- Computational Learning Theory - Survey on learning linear threshold functions under the random, adversarial, and Massart noise models
- Mechanism Design - Survey on the sample complexity of mechanism design
- Robust Statistics - GANs and Robust Statistics