Post by calabiyau on Feb 6, 2007 10:55:52 GMT -5
Hi all,
Theory says that the BER of TCM at a given Eb/No is always lower than that of uncoded QAM (it is said we can gain up to a few dB, the so-called asymptotic coding gain).
I've designed a TCM simulation and I notice that at high SNR, 128-TCM is indeed better than 64-QAM, as expected. However, at low SNR, where the BER is around 0.1, a lot of errors occur in the TCM.
To be more specific, I found:
Eb/No = 12 dB: TCM128 = 789/10,000, QAM64 = 550/10,000 (symbol errors / total symbols)
Eb/No = 14 dB: TCM128 = 9/10,000, QAM64 = 89/10,000
and from this Eb/No upwards the difference becomes even more pronounced.
I imagine this is due to path convergence in the trellis: whenever the decoder picks a wrong path, it produces a burst of errors.
I assume the probability of such a convergence error is much lower than that of a symbol error in uncoded QAM, but when it does happen the result can be disastrous.
Is this assumption correct?
But then, in practical terms, TCM would be worse than uncoded QAM at low SNR, even though none of the published curves shows it.
Is that really the case, or am I missing something in my simulation?
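For reference, this is roughly how I sanity-check the uncoded 64-QAM side of the comparison against the standard Gray-mapped square-QAM approximation. It is only an illustrative stand-alone sketch (the detector is a simple per-axis slicer and the parameters are placeholders), not my actual TCM simulation:

import numpy as np
from math import erfc, sqrt, log2

def qfunc(x):
    # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / sqrt(2.0))

def qam_ser_theory(M, ebno_db):
    # Approximate symbol error rate of Gray-mapped square M-QAM on AWGN
    k = log2(M)                                  # bits per symbol
    esno = 10 ** (ebno_db / 10.0) * k            # Es/N0, linear
    p_axis = 2 * (1 - 1 / np.sqrt(M)) * qfunc(np.sqrt(3 * esno / (M - 1)))
    return 1 - (1 - p_axis) ** 2

def qam_ser_montecarlo(M, ebno_db, n_sym=10_000, seed=0):
    # Monte Carlo check: random square M-QAM symbols + AWGN,
    # nearest-level (per-axis) detection, count symbol errors
    rng = np.random.default_rng(seed)
    m = int(np.sqrt(M))
    levels = 2 * np.arange(m) - (m - 1)          # -7,-5,...,5,7 for 64-QAM
    i = rng.integers(m, size=n_sym)
    q = rng.integers(m, size=n_sym)
    sym = levels[i] + 1j * levels[q]
    es = np.mean(np.abs(sym) ** 2)               # average symbol energy
    n0 = es / (10 ** (ebno_db / 10.0) * log2(M))
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_sym)
                               + 1j * rng.standard_normal(n_sym))
    r = sym + noise
    i_hat = np.clip(np.round((r.real + (m - 1)) / 2), 0, m - 1).astype(int)
    q_hat = np.clip(np.round((r.imag + (m - 1)) / 2), 0, m - 1).astype(int)
    return np.mean((i_hat != i) | (q_hat != q))

for ebno in (12, 14):
    print(ebno, qam_ser_theory(64, ebno), qam_ser_montecarlo(64, ebno))

If the Monte Carlo and theory numbers agree for plain 64-QAM, at least the uncoded baseline of the comparison can be trusted.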
Thanks a lot!
Post by tmcdavid on Feb 12, 2007 23:05:00 GMT -5
Calab:
Not having done a similar simulation myself, I cannot comment on the exact numbers, but your result is intuitively supportable.
Consider that the coding will give a correct output in spite of an occasional error in the received sequence; if this were not the case, you would not bother with coding. However, for a given code, there is a maximum number of errors that can exist within the span of the decoder before the correct decision can no longer be guaranteed. Since the expected number of errors within the decoder span is the single-bit error probability times the number of bits in that span, a threshold is reached where the number of errors sitting in the shift register is likely to be at that critical level, and then any additional input error propagates into several output errors until the oldest input error is shifted out of the chain without a new one shifting in. So the result becomes rapidly chaotic.
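To put a rough number on that threshold idea, here is a small sketch (purely illustrative; the window length and correctable-error count are made up and are not tied to your 128-TCM):

from math import comb

def prob_overwhelmed(p, span, t):
    # Probability that more than t channel errors fall inside a window of
    # `span` received bits (i.i.d. errors at rate p) -- a rough stand-in for
    # the condition under which the decoder picks a wrong path.
    return 1.0 - sum(comb(span, n) * p ** n * (1 - p) ** (span - n)
                     for n in range(t + 1))

span, t = 15, 2          # made-up decoder span and correctable-error count
for p in (0.001, 0.01, 0.05, 0.1):
    print(f"raw p = {p:<6} P(>{t} errors in {span} bits) = "
          f"{prob_overwhelmed(p, span, t):.4g}")

Once that probability stops being negligible, each such event costs several output bits at once, which is exactly the burst behaviour you describe.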
The reason you may not have seen a graph of this is probably that the cross-over point is highly dependent on the code design and the metrics used. I once saw a graph of a decoder's performance, and though it did not plot the comparison, it looked odd, so I plotted a few points of the simple uncoded error probability and was amazed to see that the encoder/decoder was inferior to uncoded decisions over much of the Eb/No range. It was only better at very poor Eb/No values, just the opposite of what you plotted! On asking what the designers were so proud of, given such apparently poor performance, it turned out the system was for deep-space communication, where the Eb/No is guaranteed to be extremely low after a bit of time into the mission.
So the lesson there is that the design of the coding method (and the evaluation of whether coding should even be used), along with the metrics in the decoder, needs to begin with an understanding of the noise environment expected in the application. You would probably make a totally different choice if the disturbance were non-Gaussian, such as occasional bursts from another signal source using the same frequency, rather than just thermal noise. A multipath environment also changes the game, since you can usually predict the maximum delay of the self-interference arrival.
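Just to illustrate that last point, a toy sketch (made-up parameters, not tied to any particular system) of how a bursty two-state channel differs from memoryless thermal noise even at a roughly comparable average error rate:

import numpy as np

def iid_errors(p, n, rng):
    # Memoryless (thermal-noise-like) error pattern at average rate p
    return rng.random(n) < p

def gilbert_errors(p_gb, p_bg, p_err_bad, n, rng):
    # Two-state Gilbert model: a "good" state with no errors and a "bad"
    # (burst) state where each bit is hit with probability p_err_bad
    errs = np.zeros(n, dtype=bool)
    bad = False
    for k in range(n):
        if bad:
            errs[k] = rng.random() < p_err_bad
            if rng.random() < p_bg:
                bad = False
        elif rng.random() < p_gb:
            bad = True
    return errs

def longest_run(errs):
    # Length of the longest consecutive run of errors
    best = cur = 0
    for e in errs:
        cur = cur + 1 if e else 0
        best = max(best, cur)
    return best

rng = np.random.default_rng(1)
n = 100_000
e_iid = iid_errors(0.01, n, rng)
e_burst = gilbert_errors(0.004, 0.2, 0.5, n, rng)
print("iid   : rate =", e_iid.mean(), " longest burst =", longest_run(e_iid))
print("bursty: rate =", e_burst.mean(), " longest burst =", longest_run(e_burst))

Roughly the same average error rate, very different demands on the code.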
Regards,
Terry