Theory@Nanjing 2018


Revision as of 16:07, 3 May 2018

General Information

  • Friday, May 11, 2018: 9 am--5 pm.
  • Venue: Lecture Hall 111, Department of Computer Science and Technology, Nanjing University (Changzhou Building, Xianlin Campus). Online map

List of Speakers (in alphabetic order)

Guests

  • Students from the ACM Class at Shanghai Jiao Tong University, together with Dr. Chihao Zhang (张驰豪).
  • Selected participants of the Jiangsu Provincial Youth Olympiad in Informatics.

Program

Workshop Program
9:00 am -- 9:50 am Yuan Zhou (周源)

Indiana University at Bloomington

Title: Data Driven Algorithms for Assortment Planning
Abstract: Dynamic assortment planning concerns the optimal display strategy for maximizing total revenue over the selling season when no a priori information on consumers’ choice-model parameters is available. Combining combinatorial techniques with the powerful lower-upper confidence bound (LUCB) method, we develop data-driven algorithms that simultaneously learn the consumers’ choice model and optimize assortment selection decisions. Surprisingly, our algorithms’ performance guarantees do not depend on the number of candidate products, which is particularly useful in settings such as fast-fashion retailing and online advertising.
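To make the bandit-style learning concrete, here is a minimal sketch in the spirit of the abstract's confidence-bound approach. It uses plain UCB1 indices and a simplified model where each offered product yields independent noisy revenue; the function name, the noise model, and the per-product indices are illustrative assumptions, not the authors' algorithm (which handles consumer choice models and the LUCB refinement).

```python
import math
import random

def ucb_assortment(num_products, capacity, true_revenue, rounds, seed=0):
    """Offer a capacity-limited assortment each round, chosen by upper
    confidence bounds on each product's estimated revenue, then update the
    estimates from the (noisy) observed revenue."""
    rng = random.Random(seed)
    counts = [0] * num_products      # how often each product was offered
    means = [0.0] * num_products     # running mean of observed revenue
    for t in range(1, rounds + 1):
        def ucb(i):
            # untried products get an infinite index so each is explored once
            if counts[i] == 0:
                return float("inf")
            return means[i] + math.sqrt(2.0 * math.log(t) / counts[i])
        # offer the `capacity` products with the largest UCB indices
        offered = sorted(range(num_products), key=ucb, reverse=True)[:capacity]
        for i in offered:
            reward = true_revenue[i] + rng.gauss(0.0, 0.1)  # noisy observation
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]
    return means, counts
```

After enough rounds the exploration bonus shrinks and the assortment concentrates on the genuinely high-revenue products, which is the learning-while-earning trade-off the abstract refers to.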
Coffee Break
10:15 am -- 11:05 am Huacheng Yu (俞华程)

Harvard University

Title: Cell-Probe Lower Bounds from Online Communication Complexity
Abstract: In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece by piece, our model presents one of the players, Bob, his input piece by piece, and has the players Alice and Bob cooperate to compute a result before it presents Bob with the next piece. This model corresponds more closely and naturally to dynamic data structures than the classic communication models do, and hence presents a new perspective on data structures.
We apply the online communication model to data structure lower bounds by studying the Group Range Problem and the dynamic connectivity problem. These problems admit [math]\displaystyle{ O(\log n) }[/math]-time worst-case data structures. Using online communication complexity, we prove a tight cell-probe lower bound: spending [math]\displaystyle{ o(\log n) }[/math] (even amortized) time per operation results in at best an [math]\displaystyle{ \exp(-\delta^2 n) }[/math] probability of correctly answering a [math]\displaystyle{ (1/2+\delta) }[/math]-fraction of the [math]\displaystyle{ n }[/math] queries.
In this talk, I will present a lower bound for the online set intersection problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a higher lower bound.
Joint work with Josh Alman and Joshua R. Wang.
11:10 am -- 12:00 pm Zhengfeng Ji (季铮锋)

University of Technology Sydney

Title: The Complexity-Theoretic Bell Inequality
Abstract: The Bell inequality is one of the most important and foundational results of quantum physics, named after its discoverer John Stewart Bell. The inequality proves Einstein wrong and provides a concrete method to distinguish quantum mechanics from classical theories. We study Bell inequalities from a complexity-theoretic perspective and show that the classical and quantum violations of a Bell inequality differ not only in value but also in their computational complexity.
Lunch Break (12 noon -- 1:30 pm)
1:30 pm -- 2:20 pm Junxing Wang (王君行)

Carnegie Mellon University

Title: Graph Sketching Against Adaptive Adversaries Applied to the Minimum Degree Algorithm
Abstract: Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in [math]\displaystyle{ O(\mathrm{polylog}(n)) }[/math] time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees.
While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating the interleaved queries and updates to this randomized data structure may be of independent interest.
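The greedy elimination loop that the data structure accelerates can be sketched as follows. This is the textbook minimum degree heuristic with exact degrees, recomputed naively; the authors' contribution is maintaining (approximate) fill degrees in [math]\displaystyle{ O(\mathrm{polylog}(n)) }[/math] time per operation via sketching, which this toy version does not attempt.

```python
def minimum_degree_ordering(adj):
    """Greedy minimum degree elimination ordering of an undirected graph.
    adj: dict mapping each vertex to its set of neighbours (left unmodified)."""
    graph = {v: set(nbrs) for v, nbrs in adj.items()}  # working copy
    order = []
    while graph:
        # pick a vertex of minimum current (fill) degree
        v = min(graph, key=lambda u: len(graph[u]))
        nbrs = graph.pop(v)
        # eliminating v turns its neighbourhood into a clique (fill edges)
        for u in nbrs:
            graph[u].discard(v)
            graph[u] |= (nbrs - {u})
        order.append(v)
    return order
```

On a star graph, for example, the heuristic eliminates the leaves before the hub, avoiding the dense fill-in that eliminating the hub first would create.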
2:25 pm -- 3:15 pm Bingkai Lin (林冰凯)

National Institute of Informatics, Japan

Title: One-sided gap of Biclique and its application in parameterized inapproximability
Abstract: Given an integer [math]\displaystyle{ k }[/math] and an [math]\displaystyle{ n }[/math]-vertex graph [math]\displaystyle{ G }[/math], the goal of the [math]\displaystyle{ k }[/math]-BICLIQUE problem is to decide whether [math]\displaystyle{ G }[/math] contains a complete bipartite subgraph with [math]\displaystyle{ k }[/math] vertices on each side. Whether there is an [math]\displaystyle{ f(k)\cdot \mathrm{poly}(n) }[/math]-time algorithm solving [math]\displaystyle{ k }[/math]-BICLIQUE for some computable function [math]\displaystyle{ f }[/math] was one of the “most infamous” open problems in parameterized complexity theory.
We show that [math]\displaystyle{ k }[/math]-BICLIQUE is [math]\displaystyle{ W[1] }[/math]-hard, which implies that such an [math]\displaystyle{ f(k)\cdot \mathrm{poly}(n) }[/math]-time algorithm does not exist under the hypothesis [math]\displaystyle{ W[1]\neq \mathrm{FPT} }[/math] from parameterized complexity theory. To prove this result, we give a reduction which, for every [math]\displaystyle{ n }[/math]-vertex graph [math]\displaystyle{ G }[/math] and small integer [math]\displaystyle{ k }[/math], outputs a bipartite graph [math]\displaystyle{ H = (L, R, E) }[/math] in time polynomial in [math]\displaystyle{ n }[/math] such that:
  • if [math]\displaystyle{ G }[/math] contains a clique with [math]\displaystyle{ k }[/math] vertices, then there are [math]\displaystyle{ k(k - 1)/2 }[/math] vertices in [math]\displaystyle{ L }[/math] with common neighbors,
  • otherwise any [math]\displaystyle{ k(k - 1)/2 }[/math] vertices in [math]\displaystyle{ L }[/math] have at most [math]\displaystyle{ (k + 1)! }[/math] common neighbors.
An important feature of this reduction is that it creates a gap on the right side of the biclique. We will present some of its applications in proving parameterized inapproximability.
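On toy instances, the one-sided gap can be observed directly by brute force: over all subsets of [math]\displaystyle{ L }[/math] of size [math]\displaystyle{ s = k(k-1)/2 }[/math], count the largest set of common right-side neighbours. The helper below is an illustrative check (exponential in [math]\displaystyle{ s }[/math]), not part of the reduction.

```python
from itertools import combinations

def max_common_neighbours(left_adj, s):
    """Brute force over all size-s subsets of the left side; return the
    largest number of right-side vertices adjacent to every subset member.
    left_adj: dict mapping each left vertex to its set of right neighbours."""
    best = 0
    for subset in combinations(left_adj, s):
        common = set.intersection(*(left_adj[v] for v in subset))
        best = max(best, len(common))
    return best
```

For the graphs produced by the reduction, this quantity is either large (the "yes" case) or bounded by [math]\displaystyle{ (k+1)! }[/math] (the "no" case), which is exactly the gap exploited for inapproximability.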
3:20 pm -- 4:10 pm Shengyu Zhang (张胜誉)

CUHK & 腾讯公司

Title: A Scalable and Robust Method to Learn Non-Convex Embedding Space
Abstract: Dimensionality reduction is a fundamental problem in unsupervised learning: find a low-dimensional representation of data by a nontrivial combination of input features. Linear methods such as PCA often fail when the data points reside in a highly nonlinear space, in which case manifold learning methods are often resorted to. Unfortunately, most of the proposed manifold learning methods have a prohibitively high time complexity for massive datasets. In this paper, we propose a novel method based on the Nyström approximation. We reinterpret the extrapolation formula as a mixture of weighted linear functions, where out-of-sample vectors are returned from a nonparametric model, and the newly defined formula is optimized for local isometry. Our method improves on previous ones in several directions. First, its time cost is merely [math]\displaystyle{ O(N) }[/math], significantly smaller than previous ones. Second, our method is able to capture fine structures and can handle non-convex datasets. Third, the method is robust to observation noise. Experiments on Swiss Roll and MNIST demonstrate great advantages of our method over existing ones in both effectiveness and efficiency.
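As a rough illustration of the Nyström idea the abstract builds on: approximate the spectrum of a kernel matrix from a small set of landmark points, then extrapolate an embedding for every other point, at cost linear in the number of points for a fixed landmark count. The RBF kernel, the uniform landmark choice, and the function name are assumptions for this sketch; the authors' method additionally optimizes the extrapolation for local isometry and robustness.

```python
import numpy as np

def nystrom_embed(X, num_landmarks, gamma=1.0, dim=2, seed=0):
    """Generic Nystrom sketch: eigendecompose the kernel matrix of a few
    landmarks, then extrapolate a dim-dimensional embedding for all of X."""
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(len(X), num_landmarks, replace=False)]

    def rbf(A, B):
        # Gaussian (RBF) kernel between the rows of A and the rows of B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K_mm = rbf(landmarks, landmarks)       # m x m landmark kernel
    vals, vecs = np.linalg.eigh(K_mm)      # eigenvalues in ascending order
    top = slice(-dim, None)                # keep the `dim` largest
    # out-of-sample extrapolation: K_nm @ U @ diag(1/sqrt(lambda))
    K_nm = rbf(X, landmarks)               # n x m cross-kernel
    return K_nm @ vecs[:, top] / np.sqrt(vals[top])
```

Only the n-by-m cross-kernel touches all the data, so for a fixed number of landmarks the total cost grows linearly in the number of points, matching the [math]\displaystyle{ O(N) }[/math] scaling claimed in the abstract.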
Coffee Break
4:30 pm -- 5:00 pm

Panel Discussion