# Theory@Nanjing 2019


## General Information

**Saturday, May 11, 2019: 9 am--5 pm.**

**Venue**: Department of Computer Science and Technology, Nanjing University (Changzhou Building, Xianlin Campus), Lecture Hall 111

## List of Speakers (in alphabetical order)

- Yu Cheng (Duke University)
- Lingxiao Huang (EPFL)
- Jingcheng Liu (UC Berkeley)
- Jiapeng Zhang (UCSD)

**Local speakers**

- Weiming Feng (Nanjing University)
- Penghui Yao (Nanjing University)

## Program

**9:00 am -- 9:50 am: Yu Cheng (程宇)**

Duke University

**Title**: Robustness and Strategic Concerns in Machine Learning

**Abstract**: Most people interact with machine learning systems on a daily basis. Such interactions often happen in strategic environments where people have incentives to manipulate the outcome of the learning algorithms. As machine learning plays a more prominent role in our society, it is important to understand how people's incentives can affect the learning problems, and to design reliable algorithms that perform well in these strategic environments.

In this talk, I will explore the challenges that arise at the interface of machine learning and game theory: selfish agents may interact with machine learning algorithms strategically rather than truthfully, and many existing algorithms are vulnerable to adversarial attacks. I will discuss two lines of my work that address these challenges: designing fast algorithms that are provably robust for several fundamental problems in machine learning and statistics, when the input data is modified by an adversary; and finding optimal policies for collecting data from selfish agents.

**Coffee Break**

**10:15 am -- 11:05 am: Jingcheng Liu (刘景铖)**

University of California, Berkeley

**Title**: Private Selection from Private Candidates

**Abstract**: Differentially private algorithms often need to select the best amongst many candidate options. Classical works on this selection problem require that the candidates' goodness, measured as a real-valued score function, does not change by much when one person's data changes. In many applications such as hyperparameter optimization, this stability assumption is much too strong. In this work, we consider the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are differentially private. Under this assumption, we present algorithms that are near-optimal along the three relevant dimensions: privacy, utility, and computational efficiency.

Our result can be seen as a generalization of the exponential mechanism and its existing generalizations. We also develop an online version of our algorithm, which can be seen as a generalization of the sparse vector technique to this weaker stability assumption. We show how our results imply better algorithms for hyperparameter selection in differentially private machine learning, as well as for adaptive data analysis.

Joint work with Kunal Talwar.
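For context, the classical exponential mechanism that this result generalizes can be sketched as follows. This is a minimal illustration of the baseline mechanism only, not the paper's algorithm; the low-sensitivity assumption it relies on is exactly what the talk relaxes, and the function name is mine.

```python
import math
import random

def exponential_mechanism(scores, epsilon, sensitivity, rng=random.Random(0)):
    """Classical exponential mechanism (McSherry-Talwar): select
    candidate i with probability proportional to
    exp(epsilon * score_i / (2 * sensitivity)).
    Returns (chosen index, selection probabilities)."""
    # Shift by the max score for numerical stability before exponentiating.
    m = max(scores)
    weights = [math.exp(epsilon * (s - m) / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Inverse-CDF sampling over the candidates.
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs
```

Higher-scoring candidates are exponentially more likely to be chosen, and the guarantee degrades as the sensitivity of the scores grows; the talk's setting instead assumes only that the score functions themselves are differentially private.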

**11:10 am -- 12:00 pm: Weiming Feng (凤维明)**

Nanjing University

**Title**: Dynamic Sampling from Graphical Models

**Abstract**: We study the problem of sampling from a graphical model when the model itself is changing dynamically with time. This problem derives its interest from a variety of inference, learning, and sampling settings in machine learning, computer vision, statistical physics, and theoretical computer science. While the problem of sampling from a static graphical model has received considerable attention, theoretical works for its dynamic variants have been largely lacking.

The main contribution of our work is an algorithm that can sample dynamically from a broad class of graphical models over discrete random variables. Our algorithm is parallel and Las Vegas: it knows when to stop and it outputs samples from the exact distribution. We also provide sufficient conditions under which this algorithm runs in time proportional to the size of the update, on general graphical models as well as on well-studied specific spin systems. Our dynamic sampling algorithm relies on a local resampling algorithm and a new equilibrium property that is shown to be satisfied by our algorithm at each step, which enables us to prove its correctness. This equilibrium property is robust enough to guarantee the correctness of our algorithm, helps us improve bounds on fast convergence on specific models, and should be of independent interest.

Joint work with Nisheeth Vishnoi and Yitong Yin.
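The abstract emphasizes the Las Vegas property: the output is exactly distributed, and only the running time is random. The toy rejection sampler below (my own illustration of that property on the simplest static case, not the paper's dynamic algorithm; `uniform_independent_set` is a name I made up) samples a uniformly random independent set of a graph:

```python
import random

def uniform_independent_set(n, edges, rng=random.Random(0)):
    """Las Vegas rejection sampler: draw a uniformly random subset of
    {0, ..., n-1} and accept it iff it is an independent set.
    The output is exact (uniform over all independent sets); only the
    number of rejected attempts is random."""
    while True:
        subset = {v for v in range(n) if rng.random() < 0.5}
        # Accept iff no edge has both endpoints inside the subset.
        if all(not (u in subset and v in subset) for u, v in edges):
            return subset
```

Like the paper's algorithm, this sampler "knows when to stop": it terminates exactly when it has produced a sample from the target distribution, with no approximation error.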

**Lunch Break (12:00 noon -- 1:30 pm)**

**1:30 pm -- 2:20 pm: Lingxiao Huang (黄棱潇)**

École polytechnique fédérale de Lausanne (EPFL)

**Title**: Fairness in Automated Decision-Making Tasks

**Abstract**: Automated decision-making algorithms are increasingly deployed and affect people's lives significantly. Recently, there has been growing concern that such algorithms may systematically discriminate against minority groups of individuals. Thus, developing algorithms that are "fair" with respect to sensitive attributes has become an important problem.

In this talk, I will first introduce the motivation for "fairness" in real-world applications and how to model "fairness" in theory. Then I will present recent progress in designing algorithms that maintain fairness requirements for automated decision-making tasks, including multiwinner voting, personalization, classification, and coreset construction.

**2:25 pm -- 3:15 pm: Jiapeng Zhang (张家鹏)**

University of California, San Diego

**Title**: Sunflowers: a bridge between complexity, combinatorics and pseudorandomness

**Abstract**: The Erdős-Rado sunflower conjecture is one of the tantalizing open problems in combinatorics. It has profound connections with theoretical computer science. In this talk, I will focus on the connection between the sunflower conjecture and DNF sparsification. Specifically, I will cover the following two parts:

- An improved lower approximation in the DNF sparsification lemma.
- A connection between upper approximation of DNF sparsification and sunflower structures.

These two results provide some clues toward improving the upper bound for sunflower structures.

The talk is based on joint work with Xin Li, Shachar Lovett, and Noam Solomon.
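A sunflower (in the Erdős-Rado sense) is a collection of sets whose pairwise intersections all equal a common "core"; the leftover parts, the "petals", are pairwise disjoint. A small brute-force checker makes the definition concrete (the function names are mine, and the search is exponential in the family size, for illustration only):

```python
from itertools import combinations

def is_sunflower(sets):
    """Sets S_1, ..., S_k form a sunflower if every pairwise
    intersection equals the common core (the intersection of all)."""
    core = set.intersection(*sets)
    return all((a & b) == core for a, b in combinations(sets, 2))

def find_sunflower(family, k):
    """Brute-force search for a sunflower with k petals in a family
    of sets; returns one if it exists, else None."""
    for combo in combinations(family, k):
        if is_sunflower(list(combo)):
            return combo
    return None
```

The conjecture asks how large a family of w-element sets must be before a sunflower with k petals is forced to exist; the DNF sparsification connection in the talk bounds this growth.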

**3:20 pm -- 4:10 pm: Penghui Yao (姚鹏晖)**

Nanjing University

**Title**: A doubly exponential upper bound on noisy EPR states for binary games

**Abstract**: We initiate the study of a class of entangled games, mono-state games: two-player one-round games in which the players are given a bipartite state and are only allowed to share arbitrarily many copies of that state. This paper provides a doubly exponential upper bound on the number of copies of the state that the players need in order to approximate the value of any mono-state binary game to an arbitrarily small constant precision, when the shared state is a noisy EPR state.

This work develops a series of new techniques for Fourier analysis on matrix spaces and proves a quantum invariance principle and a hypercontractive inequality for random operators. The structure of the proofs builds on the recent framework for the decidability of the non-interactive simulation of joint distributions, which is completely different from all previous optimization-based approaches and approaches based on Tsirelson's problem. This novel approach provides a new angle on the decidability of the complexity class MIP*, a longstanding open problem in quantum complexity theory.
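For readers unfamiliar with the object in the title: one standard formulation of a noisy EPR state is an EPR pair passed through a depolarizing channel. The paper's exact noise model may differ; this is only the common textbook form.

```latex
% EPR pair and a depolarized ("noisy") version of it, with noise rate
% \epsilon; I denotes the identity on the two-qubit space.
\[
  |\mathrm{EPR}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}},
  \qquad
  \rho_\epsilon = (1-\epsilon)\,|\mathrm{EPR}\rangle\langle\mathrm{EPR}|
                  + \epsilon\,\frac{I}{4}.
\]
```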

**Coffee Break**

**4:30 pm -- 5:00 pm: Panel Discussion**

## Short Bios

- Yu Cheng (程宇) will join the Department of Mathematics, Statistics, and Computer Science (MSCS) at the University of Illinois at Chicago as an Assistant Professor in Fall 2019. He is currently a postdoctoral researcher in the Department of Computer Science at Duke University, hosted by Vincent Conitzer, Rong Ge, Kamesh Munagala, and Debmalya Panigrahi. He obtained his Ph.D. in Computer Science from the University of Southern California in 2017, advised by Shang-Hua Teng. His research interests include machine learning, game theory, and optimization.

- Jingcheng Liu (刘景铖) is in the final year of his PhD in the CS theory group at UC Berkeley, with an anticipated completion of summer 2019, under the supervision of Professor Alistair Sinclair. He is broadly interested in theoretical computer science. His current research focuses on the interplay between phase transitions in statistical physics, locations of zeros of graph polynomials, and algorithmic questions such as the tractable boundaries of approximate counting, sampling and inference. Before attending UC Berkeley, he completed his undergraduate studies in the ACM Honor Class of 2010, at Shanghai Jiao Tong University.

- Weiming Feng (凤维明) is a third-year PhD student in the theoretical computer science group at Nanjing University, under the supervision of Professor Yitong Yin. His research mainly focuses on the theory of distributed computing and randomized algorithms. Currently, he is working on distributed sampling and dynamic sampling problems.

- Lingxiao Huang (黄棱潇) is a postdoctoral researcher in computer science at EPFL, where he is advised by Nisheeth Vishnoi. He joined EPFL in 2017, after receiving his Ph.D. from the IIIS, Tsinghua University. His current research interests are algorithm design and computational social choice. He is passionate about creating novel algorithms that are motivated by existing practical challenges.

- Jiapeng Zhang (张家鹏) is a PhD candidate at UC San Diego, under the supervision of Shachar Lovett. He works on Boolean function analysis, computational complexity, and foundations of cryptography.

- Penghui Yao (姚鹏晖) is an associate professor in the Department of Computer Science and Technology, Nanjing University. He obtained his doctoral degree from the Centre for Quantum Technologies, National University of Singapore. Prior to joining Nanjing University, he was a postdoctoral researcher at CWI (Netherlands), IQC (University of Waterloo), and QuICS (University of Maryland). His research mainly focuses on quantum information theory, communication complexity, and computational complexity.