First Workshop on Entropy and Information Inequalities
Apr 15-17, 2013
Room 1009, Mong Man Wai Engineering Building
The Chinese University of Hong Kong
Monday, Session Chair: Babak Hassibi

9:30-10:30
Raymond Yeung, CUHK
How did we get into this mess?
The discovery of the first non-Shannon-type inequalities in the late 1990s revealed the incompleteness of the polymatroidal axioms as constraints on the entropy function. In the past 15 years, connections between the entropy function and a number of fields in information science, mathematics, and physics have been established. These fields include probability theory, network coding, combinatorics, group theory, Kolmogorov complexity, matrix theory, and quantum mechanics. In this talk I will give an account of the development of the subject since its inception.
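For readers new to the topic, the two objects contrasted in this abstract can be stated compactly (my summary, using the forms standard in the literature). The polymatroidal axioms require, for the entropy function $h$ on subsets of random variables,
$$ h(\emptyset) = 0, \qquad h(A) \le h(B) \ \text{for } A \subseteq B, \qquad h(A) + h(B) \ge h(A \cup B) + h(A \cap B), $$
while the first unconstrained non-Shannon-type inequality, due to Zhang and Yeung (1998), is usually quoted as
$$ 2I(C;D) \;\le\; I(A;B) + I(A;C,D) + 3I(C;D \mid A) + I(C;D \mid B). $$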
Slides/Video link

11:00-12:00
František Matúš, Academy of Sciences of the Czech Republic
Entropy and matroids
Constraints on entropies are at the core of many models in multiuser information theory, multivariate statistics, networking, cryptography and the quantum sciences. The most degenerate situations sometimes turn out to be related to matroids, combinatorial structures studied for many decades. This tutorial will provide a gentle introduction to matroid theory with a view towards the Shannon entropy function, the entropy region and information-theoretic inequalities. Quantum counterparts of the models will be briefly mentioned as well.
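As a pointer to the tutorial's starting point (a standard definition, stated in my notation): a matroid on a ground set $N$ is given by a rank function $r$ satisfying the polymatroidal axioms above together with
$$ r(A) \in \mathbb{Z}, \qquad 0 \le r(A) \le |A|. $$
The entropy connection asks when a rank function is, up to scaling, entropic; for instance, if a matroid is represented by vectors $v_1,\dots,v_n$ over $GF(q)$ and $u$ is uniform on $GF(q)^k$, then $X_i = \langle u, v_i \rangle$ gives $H(X_A) = r(A)\log q$.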
Slides/Video link

2:00-3:00
Terence Chan, University of South Australia
Relations among information inequalities, entropies, codes and groups
Entropy functions and information inequalities are some of the most important tools in communications. They are crucial in deriving converse coding theorems in communications and compression problems, and are closely related to different mathematical areas including matroid theory, group theory, linear algebra, algorithmic complexity, determinantal inequalities and others. In this tutorial, we will examine some interesting properties about entropies and information inequalities. Our focus will be on the interplay among codes, entropies and groups.
First, we will explore the one-to-one correspondences among information inequalities, inequalities for groups and subspace rank inequalities. Then, we will examine the relations between entropies and codes. By treating a code (whether a classical error control code, a secret sharing code, or a network code) as a set of random variables, many properties of a code can be re-interpreted "information-theoretically". One example is the Generalised Greene Theorem, which says that the weight enumerating polynomial of a code is determined by the entropies of the codeword symbol random variables. By exploiting the link between codes and entropies, we have extended Delsarte's linear programming bound to a broader class of codes including secret sharing codes. Furthermore, by constructing random variables (and hence the corresponding code) from subgroups, we obtained a new group inequality.
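The interplay with groups alluded to above rests on the group characterization of entropy (a result of Chan and Yeung, stated here as background): for a finite group $G$ with subgroups $G_1,\dots,G_n$, the vector
$$ h(A) \;=\; \log\frac{|G|}{\left|\bigcap_{i\in A} G_i\right|}, \qquad \emptyset \ne A \subseteq \{1,\dots,n\}, $$
is always entropic, and conversely every entropic vector is a limit of scaled vectors of this form. This dictionary is what lets inequalities for entropies be translated into inequalities for subgroup indices and back.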
Slides/Video link

4:00-5:00
Randall Dougherty, IDA Center for Communications Research
Entropy inequalities and linear rank inequalities
This talk will describe the similarities and differences between entropy inequalities (inequalities between combinations of joint entropies, used, among other things, for bounding capacities for network coding) and linear rank inequalities (inequalities between combinations of joint subspace ranks, used, among other things, for bounding linear capacities for network coding). In addition to the direct relationship (every entropy inequality is a linear rank inequality, but not vice versa), it will cover the main known methods for generating these inequalities, the collections of such inequalities known at this point, and unresolved questions (such as whether the currently known methods are sufficient to generate all such inequalities).
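The canonical witness of the "not vice versa" direction is the Ingleton inequality, which all subspace ranks obey but which entropies can violate; written in information-theoretic notation for four variables, its standard form is
$$ I(A;B) \;\le\; I(A;B \mid C) + I(A;B \mid D) + I(C;D). $$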
Slides/Video link

Tuesday, Session Chair: Raymond Yeung

9:30-10:30
Babak Hassibi, Caltech
Gauss, Cayley and projective linear groups
I will talk about a collection of results related to the space of entropic vectors. I will revisit and reinterpret a result of Chan on the connection between the spaces of entropic and "differential" entropic vectors. Using this I will focus on entropic vectors obtained from Gaussian random variables and demonstrate how Cayley's hyperdeterminant plays a crucial role in their characterization. I will argue for the importance of studying all "pairwise" entropies of a collection of random variables and will introduce a family of Ingleton-bound-violating groups.
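As background for the Gaussian part of the talk (standard facts, not the speaker's slides): for jointly Gaussian variables with covariance matrix $\Sigma$, the joint entropies are determined by the principal minors,
$$ h(X_A) \;=\; \tfrac{1}{2}\log\left((2\pi e)^{|A|}\det\Sigma_A\right), $$
where $\Sigma_A$ is the principal submatrix indexed by $A$; for three variables, the principal minors of a symmetric matrix satisfy a relation expressed through Cayley's $2\times2\times2$ hyperdeterminant, which is how the hyperdeterminant enters the characterization.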
Slides [1][2] / Video link

11:00-12:00
John M. Walsh, Drexel University
Information geometry, polyhedral computation, and entropic vectors
This talk reviews some theoretical and computational results involving the characterization of the region of entropic vectors.
The computational part of the talk aims primarily to discuss software under development at the Adaptive Signal Processing and Information Theory research group at Drexel University for the purpose of doing efficient computations with the region of entropic vectors. At present, the best known methods for computing inner and outer bounds for the region of entropic vectors, as well as for utilizing them to calculate fundamental rate regions for network coding and distributed storage, can all be posed as instances of a single three-step polyhedral computation. This computation can be performed with a number of different algorithms, each with very different empirical complexity and suited to different types of problems. Additionally, the case is made that the polyhedra involved have an incredible amount of symmetry, and some recently developed methods from the literature that exploit this symmetry in the polyhedral computation are reviewed. We outline the purposes, features, and capabilities of the software we are developing, and aim to garner feedback from the entropy vectors community to guide further development.
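To give a flavour of the polyhedra involved, the following minimal Python sketch (an illustration only, not the Drexel software; the function name is my own) generates the elemental Shannon inequalities whose intersection is the Shannon outer bound $\Gamma_n$:

    from itertools import combinations

    def elemental_inequalities(n):
        """Elemental Shannon inequalities for n random variables,
        each returned as a sparse vector {subset: coeff} encoding
        sum_A coeff[A] * h(X_A) >= 0 over nonempty subsets A."""
        ground = frozenset(range(n))
        ineqs = []
        # H(X_i | all other variables) >= 0, one per variable.
        for i in range(n):
            v = {ground: 1}
            if ground - {i}:
                v[ground - {i}] = -1  # h(empty set) = 0 is dropped
            ineqs.append(v)
        # I(X_i; X_j | X_K) >= 0 for i < j, K a subset of the rest.
        for i, j in combinations(range(n), 2):
            rest = sorted(ground - {i, j})
            for r in range(len(rest) + 1):
                for K in combinations(rest, r):
                    K = frozenset(K)
                    v = {K | {i}: 1, K | {j}: 1, K | {i, j}: -1}
                    if K:
                        v[K] = -1
                    ineqs.append(v)
        return ineqs

    # Yeung's count n + C(n,2) * 2^(n-2): for n = 4 this gives 28.
    print(len(elemental_inequalities(4)))

Every Shannon-type inequality is a nonnegative combination of these, and the three-step polyhedral computations mentioned above operate on cones of exactly this kind.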
We also review, however, that these polyhedral computational techniques, no matter how efficiently they are implemented, have several fundamental weaknesses from the standpoint of completely characterizing the region of entropic vectors. Inspired by this fact, the theoretical part of the talk explores what insights an alternative analytical tool, information geometry, may provide to the study of the region of entropic vectors and information inequalities.
Information geometry is a branch of statistics that endows the manifold of probability distributions with a special differential geometric structure. This structure enables, among other things, the development of parameterizations for joint distributions which link affine sets to important statistical properties such as independence, conditional independence, and having a specified collection of marginal distributions.
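A small illustration of how affine sets arise (standard exponential-family background, in notation of my choosing, not necessarily the speaker's): every strictly positive joint distribution of $n$ binary variables can be written in log-linear form
$$ \log p(x_1,\dots,x_n) \;=\; \sum_{\emptyset \ne A \subseteq \{1,\dots,n\}} \theta_A \prod_{i\in A} x_i \;-\; \psi(\theta), \qquad x_i \in \{0,1\}, $$
and mutual independence of the $n$ variables corresponds exactly to the affine set $\{\theta : \theta_A = 0 \text{ for all } |A|\ge 2\}$ in these coordinates.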
To provide evidence that information geometry can be a useful tool for bounding and determining entropy geometry, we first give a simple information geometric interpretation of the Shannon-type inequalities. We then identify the information geometric origin of non-Shannon-type inequalities, and demonstrate these ideas in the context of the region of entropic vectors in four variables. In particular, the closure of the region of entropic vectors is an unknown 15-dimensional non-polyhedral cone, which may be expressed as a 14-dimensional polyhedron together with an upper and a lower bound for the remaining entropy, parameterized by the polyhedron. Computing these upper and lower bounds, and thus completely characterizing the region, is then placed in the information geometric framework. The talk concludes with a list of exciting directions for further study.
Slides/Video link

2:00-3:15
Chandra Nair, CUHK
Information inequalities in network information theoretic settings
Information inequalities have been useful in network information theory for proving converses to coding theorems and for computing certain achievable regions. The entropy power inequality and its non-trivial variants have long found application in network information theory, especially under the additive Gaussian noise model.
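For reference, the classical entropy power inequality mentioned above: for independent $\mathbb{R}^n$-valued random vectors $X$ and $Y$ with densities,
$$ e^{2h(X+Y)/n} \;\ge\; e^{2h(X)/n} + e^{2h(Y)/n}, $$
with equality when $X$ and $Y$ are Gaussian with proportional covariance matrices.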
Recently, new non-trivial information inequalities for discrete random variables have been discovered, motivated by multi-terminal information theory settings. One type of these new inequalities holds when one of the random variables is restricted to satisfy a certain cardinality constraint. Another type, called factorization inequalities, involves new functions that are not linear combinations of mutual information terms.
The factorization inequalities are particularly interesting as they are directly motivated by the proofs of certain converses in discrete settings. Indeed, there are also several conjectured inequalities, supported by numerical evidence, that would yield new capacity results and resolve problems that have been open for decades.
Remark: The talk will be self-contained, and some of the above inequalities (established and conjectured) will be presented along with their motivation.
The main content of this talk is drawn from the speaker's work in network information theory, done with several collaborators and students.
Slides/Video link

4:00-4:45
Amos Beimel, Ben-Gurion University
Secret sharing schemes: A survey
A secret-sharing scheme is a method by which a dealer distributes shares to parties such that only authorized subsets of parties can reconstruct the secret, while all other sets get no information about the secret. Secret-sharing schemes are an important tool in cryptography and are used as a building block in many secure protocols, e.g., general protocols for multiparty computation, Byzantine agreement, threshold cryptography, access control, attribute-based encryption, and generalized oblivious transfer.
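In the entropy language of this workshop, a scheme for an access structure $\Gamma$ with secret $S$ and shares $S_1,\dots,S_n$ is usually formalized by two conditions (standard in the secret-sharing literature):
$$ H(S \mid S_A) = 0 \ \text{ for authorized } A \in \Gamma, \qquad H(S \mid S_B) = H(S) \ \text{ for unauthorized } B \notin \Gamma, $$
where $S_A$ denotes the shares of the parties in $A$.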
In this talk, I will describe the most important constructions of secret-sharing schemes. In particular, I will describe linear secret-sharing schemes, in which the shares are computed by applying a linear mapping. Most known secret-sharing schemes are linear.
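A minimal sketch of the best-known linear scheme, Shamir's $t$-out-of-$n$ threshold scheme over a prime field (an illustration under my own choices of field and interface, not code from the talk; needs Python 3.8+ for the modular inverse):

    import random

    P = 2**31 - 1  # a prime; all arithmetic below is modulo P

    def share(secret, t, n):
        """Embed the secret as the constant term of a random
        degree-(t-1) polynomial; share i is the point (i, f(i))."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret from
        any t shares; fewer than t shares reveal nothing about it."""
        secret = 0
        for x_i, y_i in shares:
            num = den = 1
            for x_j, _ in shares:
                if x_j != x_i:
                    num = num * (-x_j) % P
                    den = den * (x_i - x_j) % P
            secret = (secret + y_i * num * pow(den, -1, P)) % P
        return secret

    shares = share(secret=42, t=3, n=5)
    print(reconstruct(shares[:3]))  # any 3 of the 5 shares give 42

The scheme is linear because each share is a fixed linear function of the secret and the random coefficients, which is exactly the structure the definition above refers to.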
I will then discuss the main problem with known secret-sharing schemes: the large share size, which is exponential in the number of parties. It is conjectured that this is unavoidable. I will present the known lower bounds on the share size for general and linear secret-sharing schemes. The lower bounds for general secret-sharing schemes are fairly weak, and there is a big gap between the lower and upper bounds. For linear secret-sharing schemes, super-polynomial lower bounds on the share size are known.
Slides/Video link

4:45-5:30
Ilan Orlov, Ben-Gurion University
Secret sharing and non-Shannon information inequalities
The known secret-sharing schemes for most access structures are not efficient; even for a one-bit secret the length of the shares in the schemes is $2^{O(n)}$, where $n$ is the number of participants in the access structure. It is a long-standing open problem to improve these schemes or prove that they cannot be improved. The best known lower bound is by Csirmaz (J. Cryptology 97), who proved that there exist access structures with $n$ participants such that the size of the share of at least one party is $n/\log n$ times the secret size. Csirmaz's proof uses Shannon information inequalities, which were the only information inequalities known when he published his result. On the negative side, Csirmaz proved that using only Shannon information inequalities one cannot prove a lower bound of $\omega(n)$ on the share size. In the last decade, a sequence of non-Shannon information inequalities was discovered. This raises the hope that these inequalities can help in improving the lower bounds beyond $n$. However, in this work we show that all the inequalities known to date cannot prove a lower bound of $\omega(n)$ on the share size. In addition, some recent results will be presented in this talk.
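The lower-bound technique at issue can be phrased as a linear program (my paraphrase of the standard framework): minimize $\max_i H(S_i)/H(S)$ over all vectors satisfying the chosen information inequalities together with the access-structure constraints $H(S \mid S_A) = 0$ for authorized $A$ and $H(S \mid S_B) = H(S)$ for unauthorized $B$. Csirmaz's positive result exhibits an access structure for which this program, using only Shannon inequalities, yields $n/\log n$; his negative result, and the extension described above, show that adding the known non-Shannon inequalities never pushes the optimum to $\omega(n)$.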
Joint work with Amos Beimel. Presented at TCC 2009.
Slides/Video link

Wednesday, Session Chair: Sidharth Jaggi

9:30-10:30
Mokshay Madiman, Univ. of Delaware and Yale University
Upper bounds for entropies of sums, and ramifications
Much effort has gone into obtaining lower bounds on entropies of sums (in both discrete and continuous settings, in each of which the behaviour is quite different). Our focus will be on upper bounds, where we find that we can give a unified analysis for both settings. These bounds imply, for instance, the (new and non-obvious) fact that for i.i.d. random variables X and X', the entropies of X+X' and X-X' strongly constrain each other, even though the difference of these two entropies can be arbitrarily large (as shown by Lapidoth and Pete). In the discrete setting, even more can be said by going beyond sums to the more general setting of partition-determined functions. If time permits, we may discuss an application to reverse entropy power inequalities. The work on the discrete setting is joint with Adam Marcus (Yale) and Prasad Tetali (Georgia Tech), and that on the continuous (and unified) settings is with Ioannis Kontoyiannis (AUEB).
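One concrete instance of this mutual constraint, as I understand the related Madiman-Kontoyiannis result (stated here as an assumption about which bound the talk refers to): for i.i.d. $X, X'$ with finite differential entropy,
$$ \frac{1}{2} \;\le\; \frac{h(X+X') - h(X)}{h(X-X') - h(X)} \;\le\; 2, $$
so each of the two entropy increments controls the other up to a factor of two, even though their difference is unbounded.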
Slides/Video link

10:30-11:00
Liyao Wang, Yale University
Rearrangement and entropy inequalities
A new lower bound on the entropy of the sum of independent random variables is demonstrated in terms of rearrangements. A number of applications will be given, including a new and independent proof of the entropy power inequality. Discrete versions of these results will be discussed briefly at the end. Joint work with Mokshay Madiman (University of Delaware and Yale University).
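The headline bound, as I understand the Wang-Madiman line of work (a hedged summary, not the talk's exact statement): if $X_1,\dots,X_n$ are independent with densities $f_1,\dots,f_n$, then
$$ h(X_1 + \cdots + X_n) \;\ge\; h(X_1^* + \cdots + X_n^*), $$
where $X_i^*$ is distributed according to the spherically symmetric decreasing rearrangement $f_i^*$ of $f_i$; the entropy power inequality can then be recovered from this reduction to rearranged densities.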
Slides/Video link

11:30-12:30
Tarik Kaced, CUHK
Essentially conditional inequalities
We study conditional linear information inequalities, i.e., linear inequalities for the Shannon entropy that hold for distributions whose entropies meet some linear constraints.
· We are interested in non-Shannon-type (conditional) inequalities.
· We discuss the (two) available proof techniques.
· We prove that some conditional inequalities cannot be extended to any unconditional linear inequalities (such inequalities are thus called essentially conditional).
· We further show that not all conditional inequalities hold for almost entropic points.
The proofs provide several Ingleton-violating examples that might be of independent interest.
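An example of the kind of statement at issue (the first conditional inequality of this type, due to Zhang and Yeung, and shown by Kaced and Romashchenko to be essentially conditional; quoted here from memory):
$$ \text{if } I(A;B) = I(A;B \mid C) = 0, \ \text{ then } \ I(C;D) \;\le\; I(C;D \mid A) + I(C;D \mid B). $$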
Slides/Video link

2:30-4:00
Søren Riis, Queen Mary, University of London
Graph guessing games and non-Shannon information inequalities
It can be shown that protocols for a multiple-unicast network can be directly converted into a strategy for a guessing game. The performance of the optimal strategy for a graph is measured by the guessing number, and this number can be bounded from above using information inequalities. Extensive computer calculations have shown that of the roughly 12 million graphs with 10 or fewer nodes, only one graph (up to isomorphism) has a guessing number which differs from the bound that can be derived from Shannon's information inequalities. The Shannon bound for this graph is 114/17 = 6.705..., while the bound from the Zhang-Yeung information inequality is slightly better, namely 1212/181 = 6.696... Combining the 214 non-Shannon inequalities on 4 variables published by Dougherty-Freiling-Zeger improves the bound only slightly, to 59767/8929 = 6.693... Our best lower bound is 20/3 = 6.666..., which matches the Ingleton bound.
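To make the guessing game concrete, here is a small brute-force Python sketch (my illustration, not the authors' code; the function name and interface are my own) that computes the guessing number of a digraph for a fixed alphabet size $s$ by maximizing, over all deterministic strategies, the number of assignments on which every player guesses their own value correctly:

    from itertools import product
    from math import log2

    def guessing_number(neighbors, s=2):
        """neighbors[i] lists the vertices whose values player i sees.
        Returns log_s of the maximum number of assignments on which
        all players simultaneously guess their own value."""
        n = len(neighbors)
        assignments = list(product(range(s), repeat=n))

        def strategies(i):
            # All deterministic guess tables for player i: one guess
            # per possible view of the visible coordinates.
            views = list(product(range(s), repeat=len(neighbors[i])))
            for table in product(range(s), repeat=len(views)):
                yield dict(zip(views, table))

        best = 0
        for joint in product(*(list(strategies(i)) for i in range(n))):
            wins = sum(
                all(joint[i][tuple(x[j] for j in neighbors[i])] == x[i]
                    for i in range(n))
                for x in assignments)
            best = max(best, wins)
        return log2(best) / log2(s)

    # Complete graph K3: each player sees the other two values.
    print(guessing_number([[1, 2], [0, 2], [0, 1]], s=2))  # -> 2.0

For $K_3$ the optimum is the "assume the sum is 0 mod 2" strategy with $2^2$ winning assignments, matching the known guessing number $n-1$ for complete graphs; the graphs in the computations above are far too large for brute force, which is where the inequality bounds come in.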
Using similar ideas it is possible to construct a multiple-unicast network where:
1. Shannon's bound on capacity is optimal in one direction, and
2. Zhang-Yeung's information inequality is insufficient to derive the optimal capacity bound in the opposite direction.
Slides/Video link

4:30-5:30
Panel Discussion: Whither from here
Terence Chan, Randall Dougherty, Babak Hassibi, František Matúš, Raymond Yeung