Smarter SETI Strategy
"Wide-sky surveys," on the other hand, scan large areas of the celestial sphere and vast numbers of stars, though at lower sensitivity. This is the strategy of Projects BETA, SERENDIP (including SETI@home), Southern SERENDIP, and others.
Which strategy is best? Deep scrutiny of a few stars, or a shallow scan of many? Given our ignorance about alien civilizations and their technologies, the two approaches are often described as complementary and equally valid.
They are not. Recent work confirms long-standing suspicions that star-by-star targeting should be abandoned in favor of scanning the richest star fields to encompass very large numbers of stars, even if most of them are very far away.
To see why, we flash back 30 years to when Frank Drake did the basic mathematics that still governs the field. He showed that finding an ET signal is similar to certain problems in surveying natural radio sources. Some sources are intrinsically strong; a greater number are intrinsically weak. How steeply the population thins out toward higher power determines which category will dominate our sky. For example, many of the first sources found by early, primitive radio telescopes turned out to lie at extreme, cosmological distances. This is because inherently strong radio sources (such as quasars and radio galaxies) are powerful enough to more than make up for their scarcity compared with the abundant weak sources (such as the coronas of stars).
Similarly, it was clear that if even just a few rare, very distant alien radio beacons are very powerful, they will dominate the detectable population in our sky, and a wide-sky survey will succeed first. If, on the other hand, ET transmitters are common and all of them are relatively weak and similar to each other, a star-by-star targeted survey starting nearby will work best.
Recently we revisited this 30-year-old problem with the advantage of more sophisticated mathematical models (and computers capable of running them!) covering all reasonable scenarios. The outcome is clear, surprising, and overwhelming. Unless ETs truly infest the stars like flies (very unlikely), the first signals we detect will come from the very rare, very powerful transmitters very far away. The 1971 model, which lent too much weight to nearby stars, turns out to be a naive case, the best that could be calculated at the time.
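This population argument can be made concrete with a toy Monte Carlo sketch. This is our illustration, not the authors' 1971 or recent models: transmitters are scattered uniformly through a sphere, their intrinsic powers follow a power-law (Pareto) distribution, and a receiver detects anything above a fixed flux limit. Every number below (volume size, power-law index, flux threshold, the "100x" definition of powerful) is an assumption chosen purely for illustration.

```python
import math
import random

random.seed(42)

N = 200_000        # transmitters scattered uniformly through a sphere
R = 1_000.0        # radius of the surveyed volume (arbitrary units)
ALPHA = 0.5        # cumulative power-law index: N(>p) ~ p**(-ALPHA) (assumed)
P_MIN = 1.0        # weakest intrinsic transmitter power
STRONG = 100.0     # call a transmitter "powerful" above 100x the weakest
THRESH = 1e-5      # minimum flux the receiver can detect (assumed)

strong = weak = far = total = 0
for _ in range(N):
    r = R * random.random() ** (1 / 3)                 # uniform in volume
    p = P_MIN * (1.0 - random.random()) ** (-1 / ALPHA)  # Pareto-distributed power
    # Inverse-square law: detected if p / (4*pi*r^2) >= THRESH,
    # rearranged to avoid dividing by r = 0.
    if p >= THRESH * 4.0 * math.pi * r * r:
        total += 1
        if p >= STRONG:
            strong += 1
        else:
            weak += 1
        if r > R / 2:
            far += 1

print(f"detected: {total}")
print(f"from powerful transmitters: {strong} ({strong / total:.0%})")
print(f"from the outer half of the volume: {far} ({far / total:.0%})")
```

With a shallow power-law tail like this one, the detected population is dominated by the rare, powerful transmitters in the outer reaches of the volume; steepen the tail (raise ALPHA) and the nearby weak ones take over, which is exactly the fork in the road the two survey strategies bet on.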
In practical terms, this means that SETI searchers should use their limited resources to scan great numbers of stars first and worry about per-star sensitivity second.
Given real radio telescopes under the real sky, the best use of SETI time actually turns out to be a "hybrid," semi-targeted strategy: one that targets the richest star fields. These might include selected parts of the Milky Way's plane, certain star clusters, and even nearby galaxies. The idea is to fill the radio telescope's beam (listening area) with many stars, then dwell on this spot long enough to build up sensitivity.
With, say, just 100 carefully selected patches of sky on the list, millions of Milky Way stars and many billions in other galaxies can be scrutinized in significant depth. It makes no sense to dwell on nearby stars one by one when only sparse star fields lie behind them. We need to look deep and long and bet on the numbers.
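The arithmetic behind betting on rich fields can be sketched with illustrative numbers of our own choosing (the distances and star counts below are assumptions, not figures from the article). With equal dwell times, both pointings reach the same flux limit, so the distant field's inverse-square handicap trades off against its enormous head count:

```python
import math

# Illustrative comparison: one pointing at a single nearby star versus one
# equal-length pointing at a rich star field. All numbers are assumed.
NEARBY_DIST = 50.0        # light-years to a targeted nearby star
FIELD_DIST = 5_000.0      # light-years to a rich galactic-plane field
FIELD_STARS = 1_000_000   # stars filling the beam at that distance

# Equal dwell time means an equal flux limit, so a transmitter in the field
# must be (d_field / d_near)**2 times stronger to clear it (inverse square).
power_handicap = (FIELD_DIST / NEARBY_DIST) ** 2   # 10,000x
star_advantage = FIELD_STARS / 1                   # 1,000,000x

# If the cumulative luminosity function falls as N(>p) ~ p**(-alpha), the
# field pointing breaks even when power_handicap**(-alpha) equals
# 1 / star_advantage; it wins outright for any shallower falloff.
breakeven_alpha = math.log(star_advantage) / math.log(power_handicap)

print(f"power handicap of the field: {power_handicap:,.0f}x")
print(f"star advantage of the field: {star_advantage:,.0f}x")
print(f"field pointing wins for alpha < {breakeven_alpha:.1f}")
```

Under these toy numbers the field needs transmitters 10,000 times more powerful but surveys 1,000,000 times more stars, so it comes out ahead unless the population of powerful transmitters thins out faster than the breakeven slope.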
Thus it was heartening to hear SETI Institute chair Frank Drake say that such thinking should carry the day and that the strategy for the ATA should emphasize searches near the galactic plane.
Nathan Cohen and Robert Hohlfeld are professors at Boston University in telecommunications and computational science, respectively. Both have their roots in SETI at Cornell University during the era of Frank Drake and Carl Sagan, where they received their astronomy doctorates.