>I was there and Tom gave me a copy before the talk. It was very
>interesting, but I really hesitate to call anything less than 1 kHz
>wide "spread spectrum". A carefully designed signal, yes.
See my previous note. Our use of SSB-like bandwidths was a design constraint intended to facilitate a QRP EME demonstration with existing transceivers. We would definitely prefer wider bandwidths.
>When you read their numbers, you'll find that people are doing EME with
>not-too-different power, with human-ear signal processing, TODAY. And
>with a much less complex system. One of the problems with many
>"advanced systems" generally is that they have an extremely pronounced
>threshold; when they're good they are very good, but with tenths of a dB
>less signal, they're horrid. In commercial service, where the system is
>designed to ensure at least the required minimum signal, they're
>wonderful. Weak-sig hams always want to work a station that's just a
Your first sentence is flatly wrong when you look at the energy PER USER DATA BIT. Valid comparisons of modulation & coding methods for power-limited channels are always normalized to the same user data throughput. Anything else is apples vs oranges.
The "problem" you attribute to advanced systems is a feature, not a bug, and it's exactly what information theory predicts. A sharp performance threshold is a necessary (but not sufficient) characteristic of any power-efficient communication scheme. The ultimate limit, the Shannon bound, is a "brick wall" at Eb/N0 = -1.6 dB: as long as you operate even slightly above -1.6 dB, it is theoretically possible to build a modulation and coding scheme with arbitrarily low bit error rate. Below -1.6 dB, this is not possible, no matter what you do.
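That brick wall falls straight out of the Shannon capacity formula C = B*log2(1 + S/N): the minimum Eb/N0 at spectral efficiency eta bits/s/Hz works out to (2^eta - 1)/eta, which approaches ln 2 (about -1.59 dB) as eta goes to zero. A quick sketch (Python; the function name is mine, not from any reference):

```python
import math

def ebn0_limit_db(eta):
    """Minimum Eb/N0 in dB for error-free operation at spectral
    efficiency eta (bits/s/Hz), derived from C = B*log2(1 + S/N)."""
    return 10 * math.log10((2 ** eta - 1) / eta)

for eta in (2.0, 1.0, 0.1, 0.001):
    print(f"eta = {eta:6g}  ->  Eb/N0 >= {ebn0_limit_db(eta):6.2f} dB")

# As eta -> 0 the limit approaches 10*log10(ln 2) ~= -1.59 dB
print(10 * math.log10(math.log(2)))
```

Note that at eta = 1 bit/s/Hz the limit is exactly 0 dB; the famous -1.6 dB figure is the limiting case where you spend unbounded bandwidth per bit.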
The reason for the sharp threshold comes from the "noise averaging" inherent in codes that approach the Shannon bound. That is, a bunch of user data bits "pool" their resources (transmitter power, channel time) such that each channel symbol is a function of many user data bits.
This allows the law of large numbers to work in their favor, making the overall effect of noise on the block more predictable than it would be if each user bit had to go it alone. If the channel Eb/N0 is above the code's design point, all the "pooled" bits nearly always "win" (are decoded correctly). If the Eb/N0 is too low, they all "lose" (are decoded as garbage). The larger the block, the more predictable the noise, the lower the required Eb/N0 -- and the more pronounced the threshold.
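The averaging effect is easy to see numerically. A toy Monte Carlo sketch (Python; the names are mine) estimates the spread of the noise averaged across a block of n channel symbols -- it shrinks like 1/sqrt(n), which is why big blocks make the channel look predictable and the decoding threshold sharp:

```python
import random
import statistics

def avg_noise_sd(n_symbols, trials=2000, seed=1):
    """Standard deviation of unit-variance Gaussian noise averaged
    over a block of n_symbols samples; scales like 1/sqrt(n)."""
    rng = random.Random(seed)
    block_means = [
        sum(rng.gauss(0, 1) for _ in range(n_symbols)) / n_symbols
        for _ in range(trials)
    ]
    return statistics.stdev(block_means)

for n in (1, 16, 256):
    print(f"block of {n:3d} symbols: noise spread ~ {avg_noise_sd(n):.3f}")
```

A single symbol sees the full unit-variance noise; a 256-symbol block sees roughly 1/16 of it, so the block either decodes cleanly or fails together.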
Given two digital communication systems, one having an extremely sharp threshold at, say, Eb/N0 = 2.5 dB and another having a very shallow BER curve that doesn't reach acceptable levels until you're up to 20 dB or so, which is the better system? Apparently you would have us believe it's the latter.
Sure, the efficient system has a sharp threshold, but it fails long after the inefficient system has become totally unusable.
By the way, the DSS system uses fairly strong coding, so it too exhibits a pretty strong threshold. (Try moving your hand slowly in front of the feed -- the picture freezes pretty suddenly). Would you argue that a VSB-AM TV system is somehow "better" just because it doesn't exhibit this behavior?
Any communication system has to be properly designed for its link. And the way to deal with a link that can't be closed any other way is to lower the user data rate to provide more energy per bit. The last thing you want to do is to fall back to a less efficient coding scheme just because it has a less pronounced threshold.
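The rate/energy tradeoff here is just Eb = P/R: at fixed transmitter power, every halving of the user data rate buys about 3 dB of Eb/N0. A one-liner (hypothetical helper name) makes the point:

```python
import math

def ebn0_gain_db(rate_old, rate_new):
    """Since Eb = P/R at fixed transmit power, lowering the user data
    rate from rate_old to rate_new raises Eb/N0 by this many dB."""
    return 10 * math.log10(rate_old / rate_new)

# Halving the data rate from 1000 to 500 bits/s gains ~3.01 dB per bit
print(f"{ebn0_gain_db(1000, 500):.2f} dB")
```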
>The various designed-signal weak-sig schemes, such as the KA9Q / W3IWI
>talk at CSVHFS - use techniques, primarily redundancy, to overcome weak
>sig limitations. Wouldn't they have done nearly as well with multiple
>repetitions of the very constrained transmission content using almost
>any 500 baud transmission method? Followed (as they proposed) by a
>non-coherent, but time synchronized, summing technique with a one-minute
>or more summation time? I don't see how this 1 kHz wide signal compares,
>in impact on other users of a band, with a frequency hopping or DS SS
>signal many tens or hundreds of kHz wide. Apples vs. oranges.
While simple repetition is in fact a perfectly valid approach to the Rayleigh fading channel, you can always do better (though not always a lot better) with a code that is not constrained to simple repetition. One way to get the rate 1/10 code we recommend on top of the 64-ary FSK is to first encode with a rate 1/2 convolutional code and then repeat each symbol 5 times to make what is in effect a rate 1/10 convolutional code with only two distinct generator polynomials. But this code won't perform quite as well as a true rate 1/10 code where the polynomials are allowed to be all different. The rate 1/10 code formed by repeating the rate 1/2 symbols will have the same coding gain as the rate 1/2 code, but a true rate 1/10 code will have more gain. And of course if you were to simply repeat each user data bit 10 times, you'd get NO coding gain at all on a nonfading channel, though you'd still get the benefit of time diversity on a fading channel.
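Here is the construction sketched above in code form: a standard rate 1/2, K=3 convolutional encoder (generators 7 and 5 octal) followed by 5x symbol repetition, giving an effective rate 1/10 code. This is only an illustration of the structure, not our actual implementation; the function names are mine.

```python
def conv_encode_r12(bits, g1=0b111, g2=0b101):
    """Rate 1/2, K=3 convolutional encoder using the standard
    generator polynomials 7 and 5 (octal)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111  # shift in the new user bit
        out.append(bin(state & g1).count("1") % 2)  # parity over taps of g1
        out.append(bin(state & g2).count("1") % 2)  # parity over taps of g2
    return out

def repeat_symbols(symbols, n=5):
    """Repeat each channel symbol n times; with n=5 this turns the
    rate 1/2 code into an effective rate 1/10 code that still has
    only two distinct generator polynomials."""
    return [s for s in symbols for _ in range(n)]

data = [1, 0, 1, 1]
coded = repeat_symbols(conv_encode_r12(data))
print(len(data), "user bits ->", len(coded), "channel symbols")
```

The repetition buys time diversity on a fading channel but no extra coding gain; a true rate 1/10 code, with ten distinct polynomials, would buy both.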
According to my references, a true rate 1/3 code is about 1.5-2.0 dB better than a repeated rate 1/2 code on a Rayleigh fading channel at a 10^-5 BER, and code rates between 1/8 and 1/3 perform within about 1 dB of each other. Yet the gain of even the true rate 1/2 code over a simple 2:1 repetition time diversity scheme is truly dramatic: on the order of 20-25 dB at 10^-5 BER on a Rayleigh channel. Moral: a little coding, properly applied, goes a very long way.
------
Submissions: email@example.com
Subscription/removal requests: firstname.lastname@example.org
Human list administrator: email@example.com