On Fri, 2006-12-08 at 17:58 -0600, Merle Bone wrote:
> Jerry said:
> "While the effect on the output spectrum is that of a bandpass filter, I
> don't think NR works that way. I think it works on correlation of the
> pass band with a time delayed copy of the pass band."
> -------------------------------------------------------------------------------------------------
> Jerry, I am curious as to why you believe this? Are there other amateur
> transceivers - ICOM or Yaesu - where you have seen the technique you
> described used before?
> Thanks & 73,
> Merle - W0EWM
Primarily because the books I've read on noise reduction tend to emphasize autocorrelation as the technique, with the main variations being the choice of the time delay and how the results are judged in order to adjust that delay. There are several books on noise reduction in the ISU library, maybe a shelf full. I've not read them all, but I might if I get a start on my own receiver design, provided I believe noise reduction is something the DSP can do better for me than my own ears can do with practice. I know my Timewave DSP won't find a weaker signal than I can detect by ear, but it will improve the S/N of any signal I can detect, so copy takes less effort.
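For the curious, that delay-and-correlate scheme can be sketched in a few lines of Python (numpy assumed). This is the textbook adaptive line enhancer: the filter only ever sees a delayed copy of the input, so it can predict the periodic content that stays correlated across the delay, while the noise, which has decorrelated, is rejected. The delay, tap count, and step size below are illustrative guesses, not values from any particular rig's firmware.

import numpy as np

def adaptive_line_enhancer(x, delay=32, taps=64, mu=0.002):
    """Delay-and-correlate NR: predict x[n] from a delayed copy of x.

    Periodic signals stay correlated across the delay and get
    through; noise has decorrelated by then and is rejected.
    """
    w = np.zeros(taps)              # adaptive filter weights
    y = np.zeros_like(x)            # enhanced output
    for n in range(delay + taps, len(x)):
        ref = x[n - delay - taps:n - delay][::-1]  # delayed input, newest first
        y[n] = np.dot(w, ref)                      # filter prediction
        err = x[n] - y[n]                          # prediction error
        w += 2 * mu * err * ref                    # LMS weight update
    return y

# demo: a weak 700 Hz tone buried in noise
fs = 8000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
noisy = 0.2 * np.sin(2 * np.pi * 700 * t) + rng.normal(0, 1.0, t.size)
enhanced = adaptive_line_enhancer(noisy)

Judging the output to adjust the delay is the hard part; the sketch just fixes it.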
I propose a test to compare how a narrow filter might work against how autocorrelation might work. Look for some weak multiple-tone signals, perhaps a digital multi-tone signal or many weak signals in a DX pileup spread across the receiver bandwidth. Attenuate those signals down into the noise, then apply the NR. Autocorrelation has a chance of enhancing all the CW signals at once, while a single adaptive filter can only do ONE. Going the filter route takes as many parallel adaptive filters as there are separate signals, and I don't think there's enough DSP in your receiver to do that, while a single autocorrelation time delay can enhance all the tones. That delay may be long, because the tones are not coherent with each other and the delay has to amount to whole cycles for all the tones at the same time. This test might work better with two or three signals than with 50.
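The multi-tone end of the test is easy to mock up the same way; the three tone frequencies and the lag range below are just made-up test values. The autocorrelation of the noisy sum peaks at lags where all the tones come back near whole cycles, which is exactly the delay a single correlator would have to find:

import numpy as np

fs = 8000
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(1)

# three weak, mutually incoherent tones buried in noise
tones = [612.0, 887.0, 1409.0]
x = sum(0.1 * np.sin(2 * np.pi * f * t) for f in tones)
x = x + rng.normal(0, 1.0, t.size)

# autocorrelation estimate out to 100 ms of lag
max_lag = int(0.100 * fs)
r = np.array([np.dot(x[:-k], x[k:]) / (x.size - k) for k in range(1, max_lag)])

# strong lags are delays where the periodic content lines up
# near whole cycles; the noise has long since decorrelated
best = np.argsort(r)[-5:][::-1] + 1
print("strongest lags (samples):", best)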
Another adaptive filtering technique could be taking an FFT of the signal and reconstructing only the signal peaks in the output. I don't think a radio DSP chip has the compute power for that. And if the sample interval isn't long enough (latency) or the window function isn't well chosen, the FFT can introduce MORE noise than was in the original signal. There's the little detail that the math is based on a continuous signal from time = -infinity to +infinity, but the sample interval is much shorter, so the FFT effectively treats the record as repeating end to end. If the amplitude and slope at the beginning and end of the interval don't match, that's introducing a step function into the computation, which shows up as broadband noise. Many who apply the FFT to signal analysis miss that detail and get useless results.
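That step is easy to see numerically. A minimal sketch, assuming numpy, with an arbitrary tone chosen so the one-second record ends mid-cycle; the rectangular window leaves the step in, while a Hann window takes the ends to zero:

import numpy as np

fs = 1000
N = fs                              # one-second record, 1 Hz bins
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 123.4 * t)   # 123.4 cycles: ends mid-cycle

# rectangular window: the end-to-start step smears energy
# across the whole spectrum (leakage)
rect = 2 * np.abs(np.fft.rfft(x)) / N

# Hann window: amplitude (and nearly slope) forced to zero at
# both ends, so the step is gone at the cost of a wider peak
hann = 2 * np.abs(np.fft.rfft(x * np.hanning(N))) / N

# compare the leakage floor far from the tone, say at 400 Hz
print("leakage at 400 Hz  rect: %.1e  hann: %.1e" % (rect[400], hann[400]))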
If the FFT sample interval is exactly one second but there's a tone at 50.5 Hz, it will show up at a greatly reduced amplitude compared to a tone of the same input amplitude on exactly 50.00000 Hz, because 50.5 Hz falls halfway between two 1 Hz bins. The math is great on paper, but this is a real problem with real radio signal data.
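The half-bin case is easy to check; a quick sketch assuming numpy, using the 50.0 vs 50.5 Hz example above with a one-second record so the bins land on exact hertz:

import numpy as np

fs = 1000
N = fs                        # exactly one second of samples: 1 Hz bins
t = np.arange(N) / fs

def peak(f):
    """Largest FFT magnitude for a unit-amplitude tone at f Hz."""
    return (2 * np.abs(np.fft.rfft(np.sin(2 * np.pi * f * t))) / N).max()

print("peak at 50.0 Hz: %.3f" % peak(50.0))   # on a bin: ~1.000
print("peak at 50.5 Hz: %.3f" % peak(50.5))   # between bins: ~0.64, ~4 dB down

The missing energy isn't gone; it's spread into the neighboring bins, where it looks like noise.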
--
73, Jerry, K0CQ,
All content copyright Dr. Gerald N. Johnson, electrical engineer