We used an ARCluster v6 feed during the contest. We enabled the
"UNIQUE >2" feature, meaning that within the validation time window, at least
three skimmers have to spot the same callsign.
Results: 99.9% accuracy. Virtually no bogus callsigns. We were ready to
use N1MM's blacklist feature to eliminate bogus spots (LW3LPL, etc.) but
never needed to do so.
This feature is not available on RBN sources based on DXSpider software;
only on ARC6 nodes. The arcluster.reversebeacon.com source is ARC6.
You do have to learn the filter logic protocol, as it is totally different
from DXSpider's.
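
To make the rule concrete, here is a minimal Python sketch of what
"UNIQUE >2" validation boils down to. The ten-minute window, the names,
and the data structure are illustrative assumptions, not AR-Cluster's
actual internals:

    import time
    from collections import defaultdict

    MIN_SKIMMERS = 3          # "UNIQUE >2": more than two distinct skimmers
    WINDOW_SECONDS = 10 * 60  # assumed validation window

    reports = defaultdict(dict)  # callsign -> {skimmer: last-heard time}

    def spot_is_valid(callsign, skimmer, now=None):
        """Record one skimmer report; return True once enough distinct
        skimmers have heard the same callsign inside the window."""
        now = time.time() if now is None else now
        seen = reports[callsign]
        seen[skimmer] = now
        for s, t in list(seen.items()):  # age out stale reports
            if now - t > WINDOW_SECONDS:
                del seen[s]
        return len(seen) >= MIN_SKIMMERS

Raising MIN_SKIMMERS trades missed spots for fewer busted calls, which is
exactly the false-positive/false-negative trade-off John raises below.
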
Dave, N3RD
On 2 Dec 2011 at 7:10, jpescatore@aol.com wrote:
> Pete - first, thanks to you and all the others who did the heavy lifting
> to get the RBN up and running and who keep it going. This year I went
> assisted in both CQ WW CW and ARRL SS CW and had a blast using the RBN
> as my only spotting feed.
>
> It reminds me of the early years of PacketCluster, and I think there are
> a lot of lessons to be learned from what grew out of that. It would be
> great to try, early on, to make sure the RBN doesn't grow up to have the
> problems of today's global DX cluster. So, some suggestions along that
> vein:
>
> 1. Have some standards of operation for skimmers that connect and get
> aggregated. Right now the network is small enough and friendly enough
> that "rogue" skimmers are not really a problem, but that is certainly
> coming as it gets easier for more people to run more skimmers. If
> nothing else, give consumers of RBN feeds some way to know which
> skimmers *are* adhering to recommended practices, and let us set filters
> to consume spots only from those that do.
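
A minimal sketch of the consumer-side filter this suggests, assuming the
usual "DX de CALL-#:" prefix on skimmer spot lines; the whitelist itself
is hypothetical, not an existing RBN feature:

    TRUSTED = {"W3LPL", "K3LR", "N3RD"}  # hypothetical "known good" list

    def from_trusted_skimmer(spot_line):
        """True if a raw spot line came from a whitelisted skimmer."""
        if not spot_line.startswith("DX de "):
            return False
        spotter = spot_line[6:].split(":", 1)[0].strip()  # e.g. "W3LPL-#"
        return spotter.split("-")[0] in TRUSTED
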
>
> 2. I tend to be on N1EU's side that more filtering of obviously bogus
> spots is better, but I know there are probably just as many on the side
> of "just give me everything and I'll sort it out." I work in Internet
> security, where "false positives" are killers - I would rather have
> fewer false positives at the expense of an increase in "false negatives"
> (missed spots).
>
> 3. I think there may already be something like this, but there is a need
> for a "robot exclusion protocol" (such as exists on the WWW for web
> spiders/crawlers) by which we voluntarily agree not to generate spots on
> certain segments or frequencies - say, the JT65 frequencies or some
> emergency nets. I don't believe anyone *owns* any frequency, but as
> contesters it is better if we try to be semi-decent neighbors, and
> dumping a skimmer feeding frenzy on the QRPers is not all that
> friendly...
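
A sketch of how a skimmer might honor such an exclusion list; the
segments are invented for illustration, and a real version would read
them from a published file, robots.txt-style, rather than hard-coding
them:

    EXCLUDED_SEGMENTS_KHZ = [
        (14074.0, 14078.0),  # e.g. a digital-mode segment (illustrative)
        (14300.0, 14300.0),  # e.g. a single net frequency (illustrative)
    ]

    def spotting_allowed(freq_khz):
        """True unless the frequency falls inside a protected segment."""
        return not any(lo <= freq_khz <= hi
                       for lo, hi in EXCLUDED_SEGMENTS_KHZ)
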
>
> 4. Similarly, a "don't spot me" list might be a good thing. Per the
> comments on various forums, a lot of casual ops in rare locations are
> not wild about the chaos that results from a spot. If they would prefer
> not to be spotted and there were a mechanism to support that, so much
> the better.
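
The matching check for an opt-out list is a one-liner; the hard-coded
set and invented callsign stand in for the shared registry such a
mechanism would actually need:

    DO_NOT_SPOT = {"XX1XX"}  # invented entry; a real list would be shared

    def may_publish(dx_call):
        """True unless the spotted station has opted out."""
        return dx_call.upper() not in DO_NOT_SPOT
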
>
> All four of these suggestions basically address centralized filtering
> vs. end-user filtering. But imagine how much more useful the globally
> connected, human-driven spotting network would be today if there had
> been some more agreed-upon built-in controls. Back in the 1980s, when
> AK1A's software first came out and everyone set up these innocent little
> PacketClusters, no one had any idea of the horror that lay ahead! If not
> centralized filtering/controls, then at least features/processes that
> provide information so that those who want to do downstream filtering
> can do so more easily.
>
> Once again, thanks for all the effort.
>
> 73, John K3TN
_______________________________________________
CQ-Contest mailing list
CQ-Contest@contesting.com
http://lists.contesting.com/mailman/listinfo/cq-contest