
Theory Seminar Spring 2012

Theory Seminar Schedule for Spring 2012

The weekly Theory seminar provides an opportunity to get exposed to cutting edge research in theoretical computer science, to learn to understand technical material from talks and to give technical presentations. Graduate students interested in doing research in algorithms, complexity theory, cryptography, quantum computing and related areas are strongly encouraged to attend.

Date Speaker Title
Jan 23 Piotr Berman PTAS for Geometric Hitting Set Problems via Local Search
Jan 30 Kashyap Dixit Extractors for circuit sources
Feb 6 Megan Heysham Practical Chosen Ciphertext Secure Encryption from Factoring
Feb 13 Ye Zhang Identity-Based Encryption Resilient to Continual Auxiliary Leakage
Feb 20 Piotr Berman Shortest Paths Between Shortest Paths and Independent Sets
Feb 27 Abhradeep Guha Thakurta Learnability, Stability and Uniform Convergence
Mar 11 Grigory Yaroslavtsev L-1 embeddings and algorithmic applications
Mar 19 Arnab Bhattacharyya Tight lower bounds for 2-query locally correctable codes over finite fields
Apr 16 Ali Kemal Sinop Lasserre Hierarchy, Higher Eigenvalues, and Graph Partitioning
Apr 23 Piotr Berman Node-weighted Network Design Problems in Planar Graphs
  • All talks will be at 2:30 pm on Mondays in 218 Earth and Engineering Sciences (unless they are part of the departmental colloquium).
  • If you would like to speak, please email us at: theory [at] cse [dot] psu [dot] edu.
  • If you want to receive announcements of upcoming talks, please join the theory mailing list.

PTAS for Geometric Hitting Set Problems via Local Search

paper link: Nabil H. Mustafa and Saurabh Ray PTAS for Geometric Hitting Set Problems via Local Search

Extractors for circuit sources

paper link: Emanuele Viola Extractors for circuit sources

The Group of Signed Quadratic Residues and Applications

paper link: Dennis Hofheinz and Eike Kiltz The Group of Signed Quadratic Residues and Applications

Identity-Based Encryption Resilient to Continual Auxiliary Leakage

We devise the first identity-based encryption (IBE) scheme that remains secure even when the adversary is equipped with auxiliary input (STOC '09): any computationally uninvertible function of the master secret key and the identity-based secret key. In particular, this is more general than the tolerance of Chow et al.'s IBE schemes (CCS '10) and Lewko et al.'s IBE schemes (TCC '11), in which the leakage is bounded by a pre-defined number of bits; yet our construction is also fully secure in the standard model based on only static assumptions, and can be easily extended to give the first hierarchical IBE with auxiliary input.

Furthermore, we propose the model of continual auxiliary leakage (CAL), which captures both memory leakage and continual leakage. The CAL model is particularly appealing since it not only gives a clean definition when there are multiple secret keys (the master secret key, the identity-based secret keys, and their refreshed versions), but also gives a generalized definition that does not assume secure erasure of secret keys after each key update. This is different from previous definitions of continual leakage (FOCS '10, TCC '11), in which the length-bounded leakage applies only to the secret key in the current time period. Finally, we devise an IBE scheme which is secure in this model. A major tool we use is the modified Goldreich-Levin theorem (TCC '10), which until now has only been applied in traditional public-key encryption with a single private key.

Shortest paths between shortest paths and independent sets

paper link: Marcin Kaminski, Paul Medvedev, and Martin Milanic Shortest paths between shortest paths and independent sets

Learnability, Stability and Uniform Convergence

One of the fundamental problems in statistical learning theory is to characterize the notion of learnability. For problems like supervised classification and regression, it is well known that learnability is equivalent to uniform convergence of empirical risks to population risks, and hence to learnability via the empirical risk minimizer (ERM). However, in the general learning setting (introduced by Vapnik), the situation is much more complicated: uniform convergence is a sufficient but not a necessary condition for learnability. Vapnik conjectured that the problems where uniform convergence is not necessary are in some sense “trivial”. The contribution of this paper is twofold. First, it shows that Vapnik’s idea of “trivial” learning problems is false, i.e., there exist non-trivial problems (of practical interest) where uniform convergence of ERM is not a necessary condition for learnability. Second, it provides a complete characterization of learnability via the notion of algorithmic stability of the asymptotic ERM (AERM): stability of the AERM is both necessary and sufficient for learnability.
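To make the basic notions concrete, here is a minimal Python sketch (using a toy threshold-classifier setup of my own choosing, not an example from the paper) of empirical risk minimization and of the uniform-convergence gap that supervised classification relies on:

```python
import random

# Toy setup (an assumption of this sketch): threshold classifiers
# h_t(x) = 1[x >= t] on [0, 1], labels from a true threshold 0.5 with 10% noise.
HYPOTHESES = [t / 20 for t in range(21)]

def sample(n, true_t=0.5, noise=0.1):
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x >= true_t)
        if random.random() < noise:
            y = 1 - y
        data.append((x, y))
    return data

def risk(t, data):
    """Average 0-1 loss of the threshold classifier t on a sample."""
    return sum(int(x >= t) != y for x, y in data) / len(data)

train = sample(200)
population = sample(20000)   # large sample as a stand-in for the population risk

# ERM: pick the hypothesis with the smallest empirical risk.
erm_t = min(HYPOTHESES, key=lambda t: risk(t, train))

# Uniform convergence: empirical risks are close to population risks
# simultaneously for every hypothesis in the class, which is what lets
# ERM succeed in this supervised setting.
gap = max(abs(risk(t, train) - risk(t, population)) for t in HYPOTHESES)
print("ERM threshold:", erm_t)
print("ERM risk (train / population): %.3f / %.3f"
      % (risk(erm_t, train), risk(erm_t, population)))
print("uniform convergence gap over the class: %.3f" % gap)
```

In the general learning setting the analogous gap need not vanish even for learnable problems, which is where the stability-based characterization of the paper takes over.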

In this talk, I will first introduce the basic concepts (learnability, ERM, uniform convergence, etc.) needed to understand the paper. The rest of the talk will be divided into two parts. In the first part, I will delve into the details of why Vapnik was wrong, concentrating on a step-by-step construction of an example which is “non-trivial” and yet not learnable via ERM. In the second part, I will introduce various notions of algorithmic stability (within the scope of this talk) and then show their relation to learnability.
paper link: Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro and Karthik Sridharan Learnability, Stability and Uniform Convergence

L-1 embeddings and algorithmic applications

We consider the problem of embedding a finite metric space into Euclidean space with the L1-norm. We will consider the relationship between L1-embeddings, embeddings into Euclidean space with general Lp norms, and embeddings into tree metrics. We will study the proof of the result of Aumann and Rabani and of Linial, London and Rabinovich, who showed that any finite metric on n points can be embedded, in polynomial time and with high probability, into O(log^2 n)-dimensional Euclidean space with the L1-norm with distortion O(log n).
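For a rough sense of how such randomized embeddings look, here is a Bourgain-style Python sketch under assumptions of my own (not the exact construction analyzed in the talk): each coordinate records the distance to a random subset, with subset sizes growing geometrically.

```python
import math
import random

def bourgain_style_embedding(points, dist, reps=None):
    """Map each point to a vector of distances to random subsets.

    Subsets of size 2^j are drawn for j = 1..ceil(log2 n), each size repeated
    about log n times, giving O(log^2 n) coordinates; distances between the
    resulting vectors are then measured in the L1 norm.
    """
    n = len(points)
    log_n = max(1, math.ceil(math.log2(n)))
    reps = reps or log_n
    subsets = []
    for j in range(1, log_n + 1):
        for _ in range(reps):
            subsets.append(random.sample(points, min(n, 2 ** j)))
    return {x: [min(dist(x, s) for s in S) for S in subsets] for x in points}

def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

# Example: shortest-path metric of a 5-cycle.
points = list(range(5))
def dist(u, v):
    return min((u - v) % 5, (v - u) % 5)

emb = bourgain_style_embedding(points, dist)
# Compare original distances with L1 distances to eyeball the distortion
# (a global rescaling is allowed when measuring distortion).
for u in points:
    for v in points:
        if u < v:
            print(u, v, "metric:", dist(u, v), "L1:", l1(emb[u], emb[v]))
```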

We will also show that any L1-embeddable metric can be represented as a convex combination of cut metrics. As an algorithmic application, we will consider an O(log k)-approximation algorithm for the Sparsest Cut problem, where k is the number of commodities (demand pairs).
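As a small illustration of the cut-metric fact (a standard observation, sketched here with a toy example rather than anything from the talk): a single L1 coordinate defines a line metric, and thresholding between consecutive values writes it as a weighted sum of cut metrics; summing over coordinates then handles a general L1 embedding.

```python
def cut_decomposition_1d(coords):
    """Write the line (1-dimensional L1) metric given by point -> coordinate
    as a nonnegative combination of cut metrics.

    Each threshold between consecutive coordinate values yields a cut; its
    weight is the gap between those two values.
    """
    values = sorted(set(coords.values()))
    decomposition = []
    for lo, hi in zip(values, values[1:]):
        cut = frozenset(p for p, c in coords.items() if c <= lo)
        decomposition.append((hi - lo, cut))
    return decomposition

def cut_metric(cut, u, v):
    # Distance 1 if the cut separates u and v, 0 otherwise.
    return int((u in cut) != (v in cut))

# Toy example: four points on a line.
coords = {"a": 0.0, "b": 1.0, "c": 3.0, "d": 6.0}
cuts = cut_decomposition_1d(coords)

for u in coords:
    for v in coords:
        via_cuts = sum(w * cut_metric(cut, u, v) for w, cut in cuts)
        assert abs(via_cuts - abs(coords[u] - coords[v])) < 1e-9
print("line metric written as", len(cuts), "weighted cut metrics")
```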

Tight lower bounds for 2-query locally correctable codes over finite fields

A locally correctable code (LCC) is an error-correcting code mapping d symbols to n symbols, such that for every codeword c and every received word r that is delta-close to c, we can recover any coordinate of c (with high probability) by making only a few queries to r. LCCs are a stronger form of locally decodable codes (LDCs), which have received a lot of attention recently due to their many applications and surprising constructions. A linear LCC over a finite field F_p is a code whose codewords form a linear subspace of F_p^n. The Hadamard code, whose codewords are the truth tables of linear functions evaluated on all of F_p^d, is an example of a linear LCC that uses only 2 queries. In this talk, we will show that in some sense, this is the only example of a linear 2-query LCC over F_p. Specifically, we prove a lower bound of the form p^(delta*d) on the length n of linear 2-query LCCs over F_p that encode messages of length d. This also gives a separation between 2-query LCCs and 2-query LDCs over finite fields of prime order. Constructions of such 2-query LCCs are intimately related to special configurations of lines and points in F_p^n with interesting incidence properties. The problem of ruling out constructions of small 2-query LCCs boils down to showing that certain configurations of points and lines do not exist. Our proof makes use of tools from additive combinatorics. We also obtain, as corollaries of our main theorem, new results in incidence geometry over finite fields, such as an improvement to the Sylvester-Gallai theorem over finite fields and a new analog of Beck’s theorem over finite fields.

Joint work with Zeev Dvir, Shubhangi Saraf, and Amir Shpilka.
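To see the 2-query correction behind the Hadamard example concretely, here is a small Python sketch over F_p with tiny parameters of my own choosing; repeated trials are used only to amplify the success probability, and each trial makes exactly two queries.

```python
import itertools
import random

P, D = 3, 2   # small field F_p and message length d, chosen for illustration

def encode(msg):
    """Hadamard codeword: evaluation table of x -> <msg, x> over F_p^d (length p^d)."""
    domain = itertools.product(range(P), repeat=D)
    return {x: sum(m * xi for m, xi in zip(msg, x)) % P for x in domain}

def correct(received, x, trials=50):
    """Recover c(x) from a mildly corrupted word using 2 queries per trial.

    By linearity, c(x) = c(x + y) - c(y); querying the received word at the
    two points x + y and y for a random shift y gives the right answer
    whenever neither query lands on a corrupted position.
    """
    votes = {}
    for _ in range(trials):
        y = tuple(random.randrange(P) for _ in range(D))
        x_plus_y = tuple((a + b) % P for a, b in zip(x, y))
        guess = (received[x_plus_y] - received[y]) % P
        votes[guess] = votes.get(guess, 0) + 1
    return max(votes, key=votes.get)

codeword = encode((1, 2))
received = dict(codeword)
for pos in random.sample(list(received), 1):       # corrupt one of the 9 positions
    received[pos] = (received[pos] + 1) % P

x = (2, 1)
print("true symbol:", codeword[x], "locally corrected symbol:", correct(received, x))
```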

Lasserre Hierarchy, Higher Eigenvalues, and Graph Partitioning

Partitioning the vertices of a graph into two (roughly) equal parts to minimize the weight of edges cut is a fundamental optimization problem, arising in diverse applications. Despite intense research, there remains a huge gap in our understanding of the approximability of these problems — the best algorithms achieve a factor (log n)^{\Omega(1)} approximation as a function of the number n of vertices, whereas even a factor 1.1 approximation is not known to be NP-hard.

We describe an approximation scheme for various graph partitioning problems such as sparsest cut, minimum bisection, and small set expansion. Specifically, we give an algorithm running in time n^{O_epsilon(r)} with approximation ratio (1+epsilon)/min(1,lambda_r), where lambda_r is the r’th smallest eigenvalue of the normalized graph Laplacian matrix. This perhaps indicates why even showing very weak hardness for these problems has been elusive, since the reduction must produce hard instances with slowly growing spectra.
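As a small worked illustration of the stated guarantee (a numpy sketch with an example graph of my own choosing, not an implementation of the algorithm): compute the normalized Laplacian spectrum and evaluate the ratio (1+epsilon)/min(1, lambda_r) for a few values of r.

```python
import numpy as np

def normalized_laplacian(adj):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric 0/1 adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt

# Example graph: two 4-cliques joined by a single edge, so lambda_2 is small
# (there is a sparse cut) but the spectrum grows quickly after that.
n = 8
adj = np.zeros((n, n))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                adj[i, j] = 1.0
adj[3, 4] = adj[4, 3] = 1.0

lam = np.sort(np.linalg.eigvalsh(normalized_laplacian(adj)))
eps = 0.1
for r in (2, 3, 4):
    lam_r = lam[r - 1]                    # r'th smallest eigenvalue
    bound = (1 + eps) / min(1.0, lam_r)
    print(f"r={r}: lambda_r={lam_r:.3f}, ratio bound (1+eps)/min(1,lambda_r)={bound:.2f}")
```

On such a graph the bound is weak for r=2 but improves rapidly as r grows, matching the intuition that instances with slowly growing spectra are the hard ones.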

Our algorithm is based on a rounding procedure for semidefinite programming relaxations from a strong class called the Lasserre hierarchy. The analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix.

Our methods apply more broadly to optimizing certain Quadratic Integer Programming problems with positive semidefinite objective functions and global linear constraints. This framework includes other notorious problems such as Unique Games, which we again show to be easy when the normalized Laplacian doesn’t have too many small eigenvalues.

Joint work with Venkatesan Guruswami.

Node-weighted Network Design Problems in Planar Graphs

In these problems we need to select a set of nodes in a given planar graph to either satisfy a connectivity property or hit (remove) a certain family of cycles. We show that a connectivity property satisfying certain conditions (described as a proper requirement function by Goemans and Williamson) is equivalent to hitting a family of cycles that satisfies the so-called crossing property (also described by Goemans and Williamson, in another paper), which in turn leads to an approximation algorithm for such problems with ratio 12/5.
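For a sense of what a proper requirement function is (the Goemans-Williamson conditions, illustrated on a hypothetical Steiner-forest style instance rather than anything specific to the talk): f maps vertex sets to {0, 1}, with f(V) = 0, symmetry f(S) = f(V \ S), and maximality f(A ∪ B) ≤ max(f(A), f(B)) for disjoint A and B. A quick brute-force check in Python:

```python
from itertools import combinations

V = frozenset(range(5))
# Hypothetical Steiner-forest style demands (not from the talk): connect the
# pairs (0, 1) and (2, 3); vertex 4 carries no requirement.
PAIRS = [(0, 1), (2, 3)]

def f(S):
    """Requirement function: f(S) = 1 iff the vertex set S separates a demand pair."""
    S = frozenset(S)
    return int(any((a in S) != (b in S) for a, b in PAIRS))

def subsets(ground):
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

# Brute-force check of the Goemans-Williamson "proper" conditions.
assert f(V) == 0                                                   # no requirement on V
assert all(f(S) == f(V - S) for S in subsets(V))                   # symmetry
assert all(f(A | B) <= max(f(A), f(B))                             # maximality on
           for A in subsets(V) for B in subsets(V) if not (A & B)) # disjoint sets
print("f is a proper requirement function on", sorted(V))
```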