Sihan Liu's Personal Page (刘思涵)

I am a third-year PhD student in UC San Diego's theoretical computer science group, where I am fortunate to be advised by Daniel Kane.

I received my BS in Computer Science at the University of Wisconsin–Madison, where I was lucky enough to work with Ilias Diakonikolas and Christos Tzamos.

I am interested in algorithmic and statistical problems related to the foundations of machine learning.

- In theory, there is no difference between theory and practice. But in practice, there is.

My music collection

  1. Kamado Tanjiro no Uta - Demon Slayer OST
  2. Akuma no Ko - Attack on Titan Ending
  3. Rachmaninoff Liebesleid (Love's Sorrow)
  4. Sparkling Daydream - Love, Chunibyo & Other Delusions Opening


Publications

Super Non-singular Decompositions of Polynomials and their Application to Robustly Learning Low-degree PTFs
with Ilias Diakonikolas, Daniel Kane, Vasilis Kontonis, Nikos Zarifis
  • To appear in STOC 2024.
Testing Closeness of Multivariate Distributions via Ramsey Theory
with Ilias Diakonikolas, Daniel Kane
  • To appear in STOC 2024. Link
Online Robust Mean Estimation
with Daniel Kane, Ilias Diakonikolas, Hanshen Xiao
  • Accepted to SODA 2024. Link
  • Introduces a model for conducting a sequence of robust mean estimation tasks that share the same set of data providers.
Efficient Testable Learning of Halfspaces with Adversarial Label Noise
with Ilias Diakonikolas, Daniel Kane, Vasilis Kontonis, Nikos Zarifis
  • Accepted to NeurIPS 2023. Link
  • See the framework of Testable Learning here.
Exponential Hardness of Reinforcement Learning with Linear Function Approximation
with Daniel Kane, Shachar Lovett, Gaurav Mahajan, Csaba Szepesvári, Gellért Weisz
  • Accepted to COLT 2023. Link
  • Strengthens the computational lower bound for RL with linear function approximation under the Randomized Exponential Time Hypothesis.
Sampling Equilibria: Fast No-Regret Learning in Structured Games
with Daniel Beaglehole, Max Hopkins, Daniel Kane, Shachar Lovett
  • Accepted to SODA 2023. Link
  • Develops a general framework for sampling equilibria in games with large state spaces.
  • Gives a fast algorithm for computing approximate equilibria of the Colonel Blotto game.
Nearly-Tight Bounds for Testing Histogram Distributions
with Clément Canonne, Ilias Diakonikolas, Daniel Kane
  • Accepted to NeurIPS 2022. Link
  • Obtains a nearly optimal algorithm for histogram testing. Story behind
  • Derives a nearly matching sample complexity lower bound.
Computational-Statistical Gaps in Reinforcement Learning
with Daniel Kane, Shachar Lovett, Gaurav Mahajan
2021.10 - 2022.2
  • Accepted to COLT 2022. Link
  • Shows the first computational lower bound for RL with linear function approximation.
Convergence and Sample Complexity of SGD in GANs
with Christos Tzamos, Vasilis Kontonis
2020.1 - 2020.10
  • Obtains theoretical guarantees on the nearly optimal sample complexity and convergence rate of Generative Adversarial Networks for learning one-layer non-linear neural nets.
  • Selected to present at CS Student Research Symposium. Link to Manuscript

Other Experience

Competitive Programming
2018 - Present (in reality, 2021)
Dieter van Melkebeek
Achievements: 3rd place out of 183 teams at the 2019 ICPC North Central NA Regional Contest (Team NP-Easiness), Codejam Round 2 Top 3% (leo990629), Hackercup Round 2 Top 10% (sihan.liu.3139).
Software Engineer Intern at DataChat
2019.6 - 2021.5
DataChat Inc.
  • Constrained Natural Language (CNL) parsing: intention inference and entity retrieval from structured natural language.
  • Autocomplete Engine: provides data-aware suggestions as the user types.
  • Linter: semantic highlighting of user input.
  • Skills used: React.js + Redux + Saga, Golang, Python, RedisStore, spaCy.


Bachelor's degrees in Computer Science and Mathematics (GPA: 4.0/4.0)
Dewitt Undergraduate Scholarship
Coursework: Learning Theory, Advanced Algorithms, Algorithmic Game Theory, Mechanism Design, Quantum Computing
University of Wisconsin–Madison
2017 - 2021

Class Projects