The urge to communicate, to speak and to be heard, is a fundamental human desire. Enabling this in the presence of random disturbances (noise introduced by the communication medium) is among the primary mandates of the field of Information Theory. Analogously, the field of Coding Theory analyzes communication in the presence of adversarial noise (active jamming by potentially malicious third parties who wish to disrupt communication).
Despite the similarity in their objectives, Information Theory and Coding Theory differ in the analytical techniques used, the types of results obtained, the "styles/philosophies", and even, to some extent, the research communities involved. This is due partly to fundamental differences in the underlying models, and partly to their historical roots. Broadly speaking, in the "Shannon world" noise is stochastic, and the goal is a high probability of successful communication. In contrast, in the "Hamming world" noise is worst-case but bounded in magnitude, and the goal is guaranteed successful communication.
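As a concrete illustration of this distinction (a standard comparison, not drawn from the tutorial itself): for bit-flip noise of rate p, the two worlds give markedly different answers.

```latex
% Shannon world: binary symmetric channel BSC(p), each bit flipped
% independently with probability p. Capacity (Shannon, 1948):
C_{\mathrm{BSC}}(p) = 1 - H(p), \qquad
H(p) = -p\log_2 p - (1-p)\log_2(1-p).

% Hamming world: an adversary flips up to pn bits of its choosing.
% Unique decoding then requires relative minimum distance > 2p, and
% the Plotkin bound forces the achievable rate to zero once p >= 1/4:
C_{\mathrm{adv}}(p) = 0 \quad \text{for } p \ge 1/4.
```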
While there is already significant cross-fertilization between Information Theory and Coding Theory, this tutorial focuses on recent results for particular models of communication that lie in between the Shannon and Hamming worlds. Specifically, the tutorial examines the effects of:
Jammer's "causality": In the Shannon world, the noise is usually independent of the transmission. In the Hamming world, the adversary usually knows the entire transmission before deciding how to jam. Here, the effect of revealing the transmission to the adversary only causally will be examined.
Jammer's "myopicity": Often the jammer does not hear the transmission exactly before having to decide on a jamming pattern. Here, the impact of the jammer's interference on channel capacity, as a function of how clearly the jammer hears the transmission, will be analyzed.
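The two restrictions above can be stated formally. The following is a sketch using standard notation (x for the transmitted length-n codeword, e for the error vector; the weight budget pn and the observation channel W are illustrative choices, not quantities fixed by the tutorial text):

```latex
% Causal jammer: the i-th error symbol may depend only on the
% transmission up to (and including) time i,
e_i = f_i(x_1, \ldots, x_i), \qquad i = 1, \ldots, n,
% subject to a weight/power budget, e.g.
\mathrm{wt}(e) \le pn.

% Myopic jammer: the jammer observes the transmission only through a
% noisy channel W (for instance, x passed through a BSC(q)), and must
% choose its error vector as a function of that observation z alone:
z \sim W(\cdot \mid x), \qquad e = f(z), \qquad \mathrm{wt}(e) \le pn.
```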
The tutorial will be self-contained, starting with a refresher on basic definitions and results in Information Theory (error metrics, random coding arguments) and Coding Theory (GV bound, Plotkin bound). This will be followed by a discussion of list-decoding as a tool of significant value in designing communication schemes. As an example of the utility of list-decoding, randomized capacities (when the encoder and decoder share common randomness unknown to the jammer) and capacities of channels with oblivious jammers will be derived. The focus will then shift to causal and/or myopic jammers, for which novel impossibility bounds (jamming attacks) and achievability schemes that (sometimes) meet them will be presented. Much of the tutorial will focus on specific adversarial channels (binary erasure or bit-flip channels, large-alphabet channels, quadratically constrained channels) as examples, with results for general channels (when available) presented as addenda. As a subsidiary goal, computationally efficient codes meeting capacity bounds will be highlighted whenever such constructions are known.
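For reference, the two classical Coding Theory bounds mentioned above take the following binary asymptotic forms (standard statements, reproduced here as a sketch):

```latex
% Gilbert–Varshamov (GV) bound (achievability): binary codes of
% relative distance \delta exist with rate at least
R_{\mathrm{GV}}(\delta) = 1 - H(\delta), \qquad 0 \le \delta < 1/2.

% Plotkin bound (impossibility): any binary code of relative
% distance \delta satisfies
R \le 1 - 2\delta + o(1) \quad \text{for } \delta < 1/2,
% and the rate vanishes entirely once \delta \ge 1/2:
R \to 0 \quad \text{for } \delta \ge 1/2.
```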