Theory@Nanjing 2018


General Information

  • Friday, May 11, 2018: 9 am--5 pm.
  • Venue: Lecture Hall 111, Changzhou Building (Department of Computer Science and Technology), Xianlin Campus, Nanjing University.

List of Speakers (in alphabetical order)

  • Zhengfeng Ji (季铮锋), University of Technology Sydney
  • Bingkai Lin (林冰凯), National Institute of Informatics, Japan
  • Junxing Wang (王君行), Carnegie Mellon University
  • Huacheng Yu (俞华程), Harvard University
  • Shengyu Zhang (张胜誉), CUHK & Tencent
  • Yuan Zhou (周源), Indiana University at Bloomington

Guests

  • Students from the ACM Class at Shanghai Jiao Tong University, together with Dr. Chihao Zhang (张驰豪).
  • Contestants from the Jiangsu provincial youth olympiads in informatics and mathematics.

Program

Workshop Program
9:00 am -- 9:50 am Yuan Zhou (周源)

Indiana University at Bloomington

Title: Data Driven Algorithms for Assortment Planning
Abstract: Dynamic assortment planning concerns the optimal display strategy for maximizing total revenue over the selling season when there is no a priori information on consumers’ choice-model parameters. Combining combinatorial techniques with the powerful lower-upper confidence bound (LUCB) method, we develop data-driven algorithms that simultaneously learn the consumers’ choice model and optimize assortment selection decisions. Surprisingly, our algorithms’ performance guarantees do not depend on the number of candidate products, which is particularly useful in settings such as fast fashion retailing and online advertising.
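As a rough illustration of the learning-while-selling setup, the sketch below runs a generic upper-confidence-bound loop over candidate products; this is not the algorithm from the talk, and the revenue values, purchase probabilities, and assortment size are made-up parameters.

```python
# A rough, generic UCB exploration loop for assortment selection, not the
# algorithm from the talk; all numbers below are made-up illustrative values.
import math
import random

N_ITEMS = 20     # number of candidate products
K = 4            # assortment size displayed each round
T = 5000         # length of the selling season
revenue = [random.uniform(0.5, 1.5) for _ in range(N_ITEMS)]       # known per-item revenue
true_attract = [random.uniform(0.1, 0.9) for _ in range(N_ITEMS)]  # hidden purchase probabilities

counts = [0] * N_ITEMS    # how many times each item has been offered
means = [0.0] * N_ITEMS   # empirical purchase rate of each item

for t in range(1, T + 1):
    # Optimistic (upper confidence bound) estimate of each item's purchase probability.
    ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i]) if counts[i] else float("inf")
           for i in range(N_ITEMS)]
    # Display the K items with the largest optimistic expected revenue.
    assortment = sorted(range(N_ITEMS), key=lambda i: revenue[i] * ucb[i], reverse=True)[:K]
    # Observe simulated purchase feedback and update the estimates.
    for i in assortment:
        bought = 1.0 if random.random() < true_attract[i] else 0.0
        counts[i] += 1
        means[i] += (bought - means[i]) / counts[i]
```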
Coffee Break
10:15 am -- 11:05 am Huacheng Yu (俞华程)

Harvard University

Title: Cell-Probe Lower Bounds from Online Communication Complexity
Abstract: In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece by piece, our model presents one of the players, Bob, his input piece by piece, and has the players Alice and Bob cooperate to compute a result before it presents Bob with the next piece. This model corresponds more closely and naturally to dynamic data structures than the classic communication models do, and hence offers a new perspective on data structures.
We apply the online communication model to data structure lower bounds by studying the Group Range Problem and the dynamic connectivity problem. These problems admit [math]\displaystyle{ O(\log n) }[/math]-time worst-case data structures. Using online communication complexity, we prove a tight cell-probe lower bound: spending [math]\displaystyle{ o(\log n) }[/math] (even amortized) time per operation results in at best an [math]\displaystyle{ \exp(-\delta^2 n) }[/math] probability of correctly answering a [math]\displaystyle{ (1/2+\delta) }[/math]-fraction of the [math]\displaystyle{ n }[/math] queries.
In this talk, I will present a lower bound for the online set intersection problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a higher lower bound.
Joint work with Josh Alman and Joshua R. Wang.
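To illustrate the interaction pattern of the online model described above, here is a toy sketch loosely modeled on online set intersection: Bob receives his input piece by piece and must obtain an answer from Alice before the next piece arrives. The class names and the membership query are illustrative assumptions, not the protocol studied in the talk.

```python
# A toy rendering of the interaction pattern in the online communication model,
# loosely modeled on online set intersection; the class names and the membership
# query are illustrative assumptions, not the protocol from the talk.
class Alice:
    def __init__(self, x):
        self.x = set(x)               # Alice's entire input, known up front

    def respond(self, piece):
        # One round of communication: does Bob's current piece hit Alice's set?
        # The model charges for the bits exchanged here.
        return piece in self.x

class Bob:
    def __init__(self):
        self.answers = []

    def receive_piece(self, piece, alice):
        reply = alice.respond(piece)  # must be answered ...
        self.answers.append(reply)    # ... before the next piece arrives

alice = Alice({2, 3, 5, 7, 11})
bob = Bob()
for piece in [4, 5, 6, 7]:            # Bob's input arrives piece by piece
    bob.receive_piece(piece, alice)
print(bob.answers)                    # [False, True, False, True]
```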
11:10 am -- 12:00 pm Junxing Wang (王君行)

Carnegie Mellon University

Title: Graph Sketching Against Adaptive Adversaries Applied to the Minimum Degree Algorithm
Abstract: Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in [math]\displaystyle{ O(\mathrm{polylog}(n)) }[/math] time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees.
While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating the interleaved queries and updates to this randomized data structure may be of independent interest.
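For reference, the exact minimum degree heuristic that the approximate data structure speeds up can be sketched as follows; this is the standard baseline, assuming a plain adjacency-set representation, and not the paper's nearly-linear time algorithm.

```python
# A plain sketch of the exact minimum degree heuristic that the talk's data
# structure approximates: repeatedly eliminate a vertex of minimum degree in the
# current fill graph and connect its neighbors into a clique. This is the
# standard (super-linear) baseline, not the paper's nearly-linear algorithm.
def minimum_degree_ordering(adj):
    """adj: dict mapping each vertex to the set of its neighbors (undirected)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # vertex of minimum fill degree
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)                         # remove the eliminated vertex
        for u in nbrs:
            adj[u] |= nbrs - {u}                      # fill edges: neighbors form a clique
        order.append(v)
    return order

# Example usage on a small path graph 0-1-2-3.
print(minimum_degree_ordering({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))  # [0, 1, 2, 3]
```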
Lunch Break (12 noon -- 1:30 pm)
1:30 pm -- 2:20 pm Zhengfeng Ji (季铮锋)

University of Technology Sydney

Title: The Complexity-Theoretic Bell Inequality
Abstract: The Bell inequality is one of the most important and foundational results of quantum physics named after its inventor John Stewart Bell. The inequality proves Einstein wrong and provides a concrete method to distinguish quantum mechanics from classical theories. We study Bell inequalities from a complexity theory perspective and show that the classical and quantum violations of a Bell inequality are not only different in value but are also different in terms of their computational complexity.
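For concreteness (the talk may work with a different formulation), the most studied Bell inequality, the CHSH inequality, states that any classical local-hidden-variable strategy satisfies [math]\displaystyle{ |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2 }[/math], whereas an entangled quantum strategy can reach the larger value [math]\displaystyle{ 2\sqrt{2} }[/math] (Tsirelson's bound); the talk examines how such classical/quantum gaps look through the lens of computational complexity.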
2:25 pm -- 3:15 pm Bingkai Lin (林冰凯)

National Institute of Informatics, Japan

Title: One-sided gap of Biclique and its application in parameterized inapproximability
Abstract: Given an integer [math]\displaystyle{ k }[/math] and an [math]\displaystyle{ n }[/math]-vertex graph [math]\displaystyle{ G }[/math], the goal of the [math]\displaystyle{ k }[/math]-Biclique problem is to decide whether [math]\displaystyle{ G }[/math] contains a complete bipartite subgraph with [math]\displaystyle{ k }[/math] vertices on each side. Whether there is an [math]\displaystyle{ f(k)\cdot \mathrm{poly}(n) }[/math]-time algorithm solving [math]\displaystyle{ k }[/math]-Biclique for some computable function [math]\displaystyle{ f }[/math] was one of the “most infamous” open problems in parameterized complexity theory.
We show that [math]\displaystyle{ k }[/math]-Biclique is [math]\displaystyle{ W[1] }[/math]-hard, which implies that such an [math]\displaystyle{ f(k)\cdot \mathrm{poly}(n) }[/math]-time algorithm does not exist under the hypothesis [math]\displaystyle{ W[1]\neq \mathrm{FPT} }[/math] from parameterized complexity theory. To prove this result, we give a reduction which, for every [math]\displaystyle{ n }[/math]-vertex graph [math]\displaystyle{ G }[/math] and small integer [math]\displaystyle{ k }[/math], outputs a bipartite graph [math]\displaystyle{ H = (L, R, E) }[/math] in time polynomial in [math]\displaystyle{ n }[/math] such that:
  • if [math]\displaystyle{ G }[/math] contains a clique with [math]\displaystyle{ k }[/math] vertices, then there are [math]\displaystyle{ k(k - 1)/2 }[/math] vertices in [math]\displaystyle{ L }[/math] with many common neighbors,
  • otherwise any [math]\displaystyle{ k(k - 1)/2 }[/math] vertices in [math]\displaystyle{ L }[/math] have at most [math]\displaystyle{ (k + 1)! }[/math] common neighbors.
An important feature of this reduction is that it creates a gap on the right side of the biclique. We will present some of its applications in proving parameterized inapproximability.
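To make the quantity behind this one-sided gap concrete, the following small, hypothetical helper computes the common neighbors of a set of vertices from [math]\displaystyle{ L }[/math] in a bipartite graph; the graph encoding and function name are illustrative only.

```python
# A small hypothetical helper for the quantity the reduction controls: the number
# of common neighbors (in R) of a set of vertices from L in a bipartite graph H.
def common_neighbors(H, S):
    """H: dict mapping each vertex of L to its set of neighbors in R; S: vertices of L."""
    S = list(S)
    if not S:
        return set()
    result = set(H[S[0]])
    for v in S[1:]:
        result &= H[v]                # intersect the neighborhoods one by one
    return result

# Example: 'a' and 'b' have the two common neighbors {1, 2}.
H = {'a': {1, 2, 3}, 'b': {1, 2}, 'c': {3}}
print(common_neighbors(H, ['a', 'b']))   # {1, 2}
```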
3:20 pm -- 4:10 pm Shengyu Zhang (张胜誉)

CUHK & Tencent

Title: A Scalable and Robust Method to Learn Non-Convex Embedding Space
Abstract: Dimensionality reduction has been a fundamental problem in unsupervised learning: finding a low-dimensional representation of data by a nontrivial combination of input features. Linear methods such as PCA often fail when the data points reside in a highly nonlinear space, in which case manifold learning is often resorted to. Unfortunately, most of the proposed manifold learning methods have a prohibitively high time complexity for massive datasets. In this paper, we propose a novel method based on the Nyström approximation. We reinterpret an extrapolation formula as a mixture of weighted linear functions, where out-of-sample vectors are returned from a nonparametric model, and the newly defined formula is optimized for local isometry. Our method improves on previous ones in several directions. First, its time cost is merely [math]\displaystyle{ O(N) }[/math], significantly smaller than that of previous methods. Second, our method is able to capture fine structures and can handle non-convex datasets. Third, the method is robust to observation noise. Experiments were conducted on Swiss Roll and MNIST, and demonstrated the great advantages of our method over existing ones in both effectiveness and efficiency.
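As background, a generic Nyström kernel approximation, not the authors' specific extrapolation formula, already shows where a linear cost in the number of points can come from: all expensive computation happens on a small set of landmark points and is then extended to the rest of the data. The sketch below assumes an RBF kernel and random landmarks.

```python
# A generic Nystrom kernel-approximation sketch, not the authors' specific method:
# build an embedding of all N points from kernel evaluations against only m
# landmarks, so the cost grows linearly in N. The RBF kernel, landmark count,
# and other parameters are illustrative assumptions.
import numpy as np

def nystrom_embedding(X, m, dim, gamma=1.0, seed=0):
    """X: (n, d) data matrix; m: number of landmarks; dim: target dimension."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    landmarks = X[rng.choice(n, size=m, replace=False)]

    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    C = rbf(X, landmarks)                    # (n, m): all points vs. landmarks
    W = rbf(landmarks, landmarks)            # (m, m): landmarks vs. landmarks
    vals, vecs = np.linalg.eigh(W)           # eigendecompose the small matrix only
    top = np.argsort(vals)[::-1][:dim]
    vals, vecs = vals[top], vecs[:, top]
    # Nystrom extension: extrapolate the landmark eigenvectors to all n points.
    return C @ vecs / np.sqrt(np.maximum(vals, 1e-12))

emb = nystrom_embedding(np.random.randn(500, 10), m=50, dim=2)
print(emb.shape)   # (500, 2)
```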
Coffee Break
4:30 pm -- 5:00 pm

Panel Discussion

Short Bios

Yuan Zhou (周源) is currently an Assistant Professor at the Computer Science Department of Indiana University at Bloomington. Before joining Indiana University, he was an Applied Mathematics Instructor at the Mathematics Department of Massachusetts Institute of Technology. Prior to MIT, Yuan was the recipient of the Simons Graduate Fellowship and obtained his Ph.D. in Computer Science at Carnegie Mellon University.
Yuan’s research interests include stochastic and combinatorial optimization and their applications to operations management and machine learning. He is also interested in, and publishes on, the analysis of mathematical programming, approximation algorithms, and hardness of approximation.
  • IOI 2005 gold medal (first place)
  • ACM ICPC 2008 World Finals, 2nd place
Huacheng Yu (俞华程) is a postdoctoral researcher in the Theory of Computing group at Harvard University. He obtained his PhD from Stanford University in 2017 under the supervision of Ryan Williams and Omer Reingold. He also holds a Bachelor's degree from Tsinghua University (2012). His primary research interest is in data structure lower bounds; he also works on algorithm design and communication complexity.
  • IOI 2008 gold medal (first place)
  • ACM ICPC 2010 World Finals, 6th place
  • ICALP 2015 (Track A) Best Student Paper Award
Junxing Wang (王君行) is currently a 4th-year PhD student advised by Gary Miller in the Computer Science Department (CSD) at Carnegie Mellon University. He is broadly interested in many areas of theoretical computer science, including spectral graph theory, approximation algorithms, and machine learning theory.
  • EC 2014 Best Student Paper Award
Zhengfeng Ji (季铮锋) is currently a Professor at the Centre for Quantum Software and Information (QSI), Faculty of Engineering and Information Technology (FEIT), University of Technology Sydney. He received his BEng and PhD degrees from the Department of Computer Science and Technology, Tsinghua University, Beijing, China, in 2002 and 2007, respectively. His current research interests include quantum algorithms, quantum complexity theory, and (post-)quantum cryptography.
  • STOC 2010 Best Paper Award
Bingkai Lin (林冰凯) is currently a postdoctoral researcher at the National Institute of Informatics, Tokyo, Japan. He received his PhD from the Department of Computer Science at the University of Tokyo in 2016, and his master's and bachelor's degrees from the Department of Computer Science at Shanghai Jiao Tong University in 2013 and 2010, respectively. His research interests include parameterized complexity, graph algorithms, and hardness of approximation.
  • SODA 2016 Best Paper Award and Best Student Paper Award
  • WG 2017 Best Paper Award
Prof. Shengyu Zhang (张胜誉) received his bachelor's degree from the Department of Mathematics at Fudan University, his master's degree from the Department of Computer Science at Tsinghua University under Prof. Mingsheng Ying (应明生), and his PhD from the Department of Computer Science at Princeton University under Turing Award laureate Prof. Andrew Chi-Chih Yao (姚期智). He then carried out postdoctoral research at Caltech with Prof. John Preskill, Prof. Alexei Kitaev, and Prof. Leonard Schulman. During his time as an Associate Professor at The Chinese University of Hong Kong, his research covered quantum computing, algorithm design and computational complexity analysis, and foundations of artificial intelligence.
Shengyu Zhang has published numerous papers at the theoretical computer science conferences STOC, FOCS, SODA, and ICALP, the quantum information processing conference QIP, and the artificial intelligence conferences ICML, AAAI, and IJCAI, and he serves on the editorial boards of Theoretical Computer Science and the International Journal of Quantum Information. In January 2018 he joined Tencent as a Distinguished Scientist and head of its Quantum Lab.