-----Original Message-----
>From: Jay Terleski <jayt@arraysolutions.com>
>Sent: Jun 4, 2008 5:11 AM
>To: towertalk@contesting.com
>Subject: [TowerTalk] Analyzers
>
>Re: Array Solutions AIM 4170 Antenna Analyzer
>
>I was forwarded a link to a Towertalk post archived at
>http://lists.contesting.com/archives/html/Towertalk/2008-06/msg00022.html
>from someone who, in light of his experience, found N6RK's comments
>puzzling.
>
>When using the proper procedure, one obtains very different results from
>those N6RK obtained. Using the procedure contained in the documentation,
>one can make measurements with no loss of accuracy
>through an AM broadcast band filter. The procedure has been available for
>months now.
<snip>
>
>It is specifically designed for cases of high BCB RF existing on a large
>160m antenna. It works extremely well, completely canceling out the effect
>of a BCB filter, or any filter for that matter. In fact, the filter used
>does not even have to be a symmetrical filter. The procedure takes enough
>data points in a limited frequency range to completely cancel out any filter
>transfer function, whatever the filter's loss, using the short, open, load
>technique used in VNAs. The calibration function has been verified by
>several broadcast engineers and is well documented as very accurate. Since
>we have not heard from N6RK, we can suspect he did not
>use the tool correctly or his analyzer was malfunctioning. I invite him to
>call us, or send in the unit for testing if he feels it is malfunctioning.
I don't know that Rick said the AIM didn't work; he said that the accuracy of
the resulting measurement was degraded. I suspect that Rick is quite familiar
with the OSL calibration technique for VNAs as well (being in the RF metrology
business, he would need to be).
It's one thing to calibrate out a few degrees of phase or a few tenths of a dB
shift to compensate for the non-ideal directivity in a probe or the
loss/mismatch in a connector or test port cable. It's entirely another to
calibrate out the response of a reactive device with S11 and S21 varying by
orders of magnitude over the passband.
Think of it this way: the gain/loss through the filter at some frequency is
some constant "k", and the measured value (call it Y) is k times the actual
value (call it X). That is, Y = k*X. The calibration process has two parts:
measuring k, and turning measured values into calibrated values (i.e.
calculating X = Y/k).
The problem comes in because the measurement of k has some uncertainty, so the
result of the calculation inherits a related uncertainty.
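To see why, here's a quick numerical experiment (a Python sketch with
made-up noise figures, not anything taken from the AIM's firmware):

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x_true = 1.0   # the value we're trying to measure
noise = 1e-4   # additive noise on every raw reading (invented figure)

for k_true in (1.0, 0.01):  # no filter vs. a filter 40 dB down (voltage ratio)
    k_meas = k_true + rng.normal(0.0, noise, n)           # cal: measure k
    y_meas = k_true * x_true + rng.normal(0.0, noise, n)  # measure Y = k*X
    x_cal = y_meas / k_meas                               # calibrated result
    print(f"k = {k_true}: spread of calibrated X = {x_cal.std():.2%}")

The same hardware noise gives a spread of about 0.01% with no filter and
about 1.4% through the filter: a hundred times worse, purely because both
the calibration constant and the raw reading got small.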
It's like measuring a 1.5V dry cell with a 1000:1 HV probe and the meter set to
2 mV full scale. Not only are you trusting that the 1000:1 ratio isn't really
990:1, but you're also making a measurement closer to the noise floor. If the
probe factor is
measured with the same meter, it's even worse.
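To put rough (invented) numbers on it: the cell reads 1.5 mV at the probe
output. If the meter is good to +/-0.02 mV on its 2 mV range, that's +/-1.3%
right there; if the divider is really 990:1 while you assume 1000:1, that's
another 1%. You could easily be off by more than 2% on a measurement that
would be good to a fraction of a percent made directly.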
Likewise, you wouldn't expect a spectrum analyzer power measurement at
-150 dBm to have the same accuracy as a measurement at 0 dBm.
Basically, if you divide a not very accurate measurement by another not very
accurate measurement, the result isn't very accurate.
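First-order error propagation says the same thing as a formula: for X = Y/k
with uncorrelated errors,

    (sigma_X/X)^2 = (sigma_Y/Y)^2 + (sigma_k/k)^2

so the relative uncertainties of the raw measurement and the calibration
constant add in quadrature, and a lossy filter pushes both Y and k down
toward the noise floor, inflating both terms at once.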
--------
This kind of uncertainty analysis is what separates typical experimental gear
and bench experiments from "lab-grade" instruments like a VNA from Agilent. If
I make a measurement on an unknown device with a professional VNA, not only do
I get an estimate of what the unknown device's characteristics are, but I also
get an estimate of the uncertainty of that estimate.
For simple measurements (voltage) the uncertainty is easy to figure out. That
is, the calibration model is simple. For complex measurements (where you're
calibrating out adapters, fixtures, and filters, and directivity and isolation
are issues), the calibration model is substantially more complex. The hard
part of building a VNA isn't the hardware to make the measurement (although
that's challenging), it's implementing the 12- or 16-term calibration model
(i.e. using the open, short, and load measurements to back into the cal
parameters) and, even harder, being able to say what the uncertainty of the
resulting measurements is.
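For the one-port case the error-correction math is compact enough to show.
This is the textbook three-term model (directivity, source match, reflection
tracking) with assumed-ideal standards, not the AIM's actual code, and the
error-box numbers in the sanity check are invented:

def solve_osl(gm_open, gm_short, gm_load):
    """Back out the 3-term one-port error model from measurements of
    ideal standards: open (gamma = +1), short (-1), load (0)."""
    e00 = gm_load              # directivity: an ideal load reflects nothing
    a = gm_open - e00          # = e10e01 / (1 - e11)
    b = gm_short - e00         # = -e10e01 / (1 + e11)
    e11 = (a + b) / (a - b)    # source match
    e10e01 = a * (1 - e11)     # reflection tracking
    return e00, e11, e10e01

def correct(gm, e00, e11, e10e01):
    """Invert gm = e00 + e10e01*ga/(1 - e11*ga) to recover the actual ga."""
    return (gm - e00) / (e10e01 + e11 * (gm - e00))

# Sanity check against an invented error box:
e00, e11, e10e01 = 0.05 + 0.02j, 0.10 - 0.05j, 0.90 + 0.10j
raw = lambda ga: e00 + e10e01 * ga / (1 - e11 * ga)
cal = solve_osl(raw(1.0), raw(-1.0), raw(0.0))
print(correct(raw(0.3 + 0.2j), *cal))   # recovers ~ (0.3+0.2j)

Every one of those error terms is itself derived from noisy measurements of
the standards, and an error in any of them propagates into every corrected
reading; the full two-port 12-term model just multiplies the opportunities.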
So, can you point us to an analysis of the accuracy of the calibration method?
How much (in numbers) does the filter degrade the accuracy of the measurement
(or, more correctly, what's the uncertainty of the calibrated measurement
after applying the calibration)?
>With as powerful a device as the AIM 4170, it is more important than with
>simpler analyzers to delve into the documentation in order to use the device
>to its potential.
Unfortunately, the manual on 'BIG's website pointed to by your website doesn't
discuss the uncertainty of the *calibrated* measurements, or the impact of
measurement uncertainty during the calibration process. It just says that
there's a 12 bit A/D and that the measurement has 5% relative accuracy.
In fact, the App Note describing the 160m antenna measurements is quite
suspicious from a calibration standpoint, because you made use of
"smoothing". Those fluctuations with respect to frequency in the calibrated
response before smoothing are often indicative of the calibration constants
being derived from a noisy measurement of the standard (the whole dividing a
small noisy number by another small noisy number problem again).
This is NOT to denigrate the AIM analyzer or the software. It was clearly not
intended to be a replacement for a $100K PNA from Agilent. It's just that
Rick's comment about poor accuracy could be the result of his recognition of
the limitations of the measurement and calibration techniques being used.
The typical evaluation process used by, for example, ARRL labs, is to measure a
set of standards which have been measured against some other well calibrated
piece of gear. Those measurements will let you know that the calibration math
has been properly implemented and that the device can make some measurements.
However, a few point measurements are not sufficient to establish the
basic measurement uncertainty.
This may be viewed by some as splitting hairs, but the essence of good
measurements is understanding the limits of the measurement.
Jim, W6RMK
(Who also spends a fair amount of time dealing with metrology and calibration
issues at work. We obsess about hundredths of a dB, frequency errors of 1E-16,
etc., and a presentation of experimental data without the uncertainty analysis
in a review will result in being beaten soundly about the head and shoulders,
at least verbally.)
_______________________________________________
TowerTalk mailing list
TowerTalk@contesting.com
http://lists.contesting.com/mailman/listinfo/towertalk