I have the distinct feeling that with some development, I might just be able to drive a symmetric channel to capacity. I thought somebody might be interested in coding theory, so here goes.
Forward error correction is always about averaging out errors over the long run. The problem is that the channel is binary, not continuous, so you cannot use the usual dispersive linear transforms from analysis, like Fourier or Walsh-Hadamard. That leads to tons of combinatorial trouble, which lies at the heart of coding theory. Computational complexity, the scarcity of algebraic structures neat enough for the job, extravagant decoding latency, and so on, all spring from this source. So what we'd really like is some means of bringing real linear-algebraic tools to bear on the discrete channel.
This sort of thing is actually done in signal processing. We want our discretized PCM channels to be as linear as possible, so we use dither. I'm thinking, perhaps we can lift dithering theory to this new setting and linearize the binary channel over the reals, on average. If we could achieve that without losing any information-carrying capacity in the process, it would then be possible to use some conventional linear transform to achieve the temporal spreading that is needed for the channel errors to even out. We could try this either with long enough blocks and a block transform, like Walsh-Hadamard, or by using the impulse response of a continuous-time LTI filter as the symbol prototype. The decoder would then essentially become the inverse linear transform followed by a quantizer. Redundancy would be inserted by oversampling the signal, or by embedding equidistant constant bits in suitable numbers.
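To make the spreading idea concrete, here's a toy numpy sketch of my own devising (not a construction from the literature): bits are mapped to ±1, smeared across a block with an orthonormal Walsh-Hadamard transform, hit with a localized burst on the channel, and recovered by the inverse transform plus a hard quantizer. Because each corrupted sample spreads its energy evenly over all decoded bits, a burst that would wipe out three raw samples perturbs every decoded bit by at most 3 × 1.5/8 = 0.5625 < 1 and flips nothing.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 64
H = hadamard(n) / np.sqrt(n)              # orthonormal, entries +/- 1/8

rng = np.random.default_rng(42)
bits = rng.integers(0, 2, n)
x = 2.0 * bits - 1.0                      # map bits to +/- 1

spread = H @ x                            # each bit smeared over all n samples

# Channel burst: three consecutive samples get hit hard.  Each corrupted
# sample contributes at most 1.5 * (1/8) to any one decoded bit, so the
# total per-bit disturbance stays below 1 and the hard decision holds.
noisy = spread.copy()
noisy[10:13] += 1.5

decoded = (H.T @ noisy > 0).astype(int)   # inverse transform + 1-bit quantizer
print(int((decoded == bits).sum()), "of", n, "bits recovered")
```

This is of course just the spreading half of the story; the interesting part, turning the hard-limited channel itself into something this linear picture applies to, is what the dither is for.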
The right tool for the job would be subtractive dither, because it adds no average noise power. Hence you should be able to subtractively dither a continuous signal into a binary sequence without losing the information it carries. The usual dithering results don't apply in this setting, though, because we're working with a two-level quantizer; but I think that by reinterpreting RPDF/TPDF dither so that wraparound over Z/2Z is allowed after summation, the tight first- and second-order statistical error bounds we're after can be regained.
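Here's a quick empirical check of that wraparound reinterpretation, in a toy mod-1 formulation of my own (so take the exact setup as an assumption, not a theorem): the signal lives on the unit circle, the shared RPDF dither is one quantizer step wide, the quantizer emits a single bit, and the receiver subtracts the dither modulo 1. The wrapped reconstruction error then comes out zero-mean, bounded by half a step, and uncorrelated with the signal, which is exactly the subtractive-dither behavior we want to recover.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.uniform(0.0, 1.0, N)      # "analog" signal on the unit circle [0, 1)
d = rng.uniform(-0.5, 0.5, N)     # shared RPDF dither, one quantizer step wide

b = np.floor(x + d + 0.5)         # two-level quantizer: b is 0 or 1
r = (b - d) % 1.0                 # receiver subtracts the dither, mod 1

# Wrapped error, folded into (-0.5, 0.5].  For fixed x, frac(x + d + 0.5)
# is uniform on [0, 1) regardless of x, so e is uniform and signal-independent.
e = (r - x + 0.5) % 1.0 - 0.5

print("mean error:", round(e.mean(), 4))
print("corr(signal, error):", round(np.corrcoef(x, e)[0, 1], 4))
```

The TPDF variant and the actual capacity question are where the real work starts, but at least the first- and second-order error statistics survive the wraparound in this little experiment.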
The prime payoff of this over LDPC and turbo codes would be lower decoding complexity and latency. In the streaming, or convolutional, case, it furthermore ought to be possible, by balancing the decay rate of the individual continuous-time symbols against the error exponent of the channel, to approach the minimum decoder latency achievable on the channel at any given residual error rate.
In the LTI case, all this ought to have a connection with inverse arithmetic coding as well.
Then again, all this is very much the work of an intuitive sort of person. Math isn't really made for my kind of sloppiness, so don't get your hopes up just yet.