Reader surveys tell us that one of the main reasons you turn to Sky & Telescope each month is to learn about new astronomical products. Thus we would like to describe the mission of S&T Test Report and the ground rules we've adopted for reviewing equipment. Most of these policies and procedures have been in effect since our first product reviews appeared in 1987, but a few are more recent. While much of what follows applies to software reviews too, this article is designed mainly to offer insight into how we conduct reviews of telescopes and related astronomical gear.

The Basics

In S&T Test Report we aim to provide timely, informative, fair, and useful reviews of telescopes, accessories, and other equipment of interest to the astronomical community. To accomplish this, we incorporate the results of field and bench testing done by staff members and/or qualified outside reviewers. Products are chosen by our Test Report Committee of editors, who together have more than a century of experience in backyard astronomy.

Telescope Test

Sky & Telescope senior editor Dennis di Cicco works on two product reviews simultaneously by testing Telescope Engineering Company's TEC 140 apochromatic refractor (S&T: December 2003, page 55) on Software Bisque's Paramount ME robotic equatorial mount (S&T: May 2003, page 50).

Sky & Telescope photograph.

Telescopes, eyepieces, and other optical instruments are our principal subjects, but we also review electronic and other accessories. All items must be readily obtainable, especially in North America, where most of our readers live.

In an ideal world we'd anonymously purchase every product we test. But in some cases initially limited availability or high cost dictates otherwise, so occasionally we borrow products from manufacturers or distributors. Sometimes we both borrow and purchase a product: the former so we can get started testing a "hot" item before it is widely available, the latter so we can be sure our published measurements are made on the same equipment shipping to consumers. In any case, we always tell you how we obtained our test units. Although we always acquire the most current model available, manufacturers sometimes make changes between the time a review is written and when it is published.

You should learn enough from an S&T Test Report to decide on the suitability of a particular product for your needs. Our goal is to tell you how the equipment performs and whether this performance is consistent with the manufacturer's claims.

We provide the names and addresses of vendors, as well as current "street" prices in US dollars. Prices for equipment manufactured in the US are often much higher in other countries. Except where noted, stated prices don't include shipping fees, taxes, or duties. Of course, vendor contact information and prices may change after publication.

We note relevant problems discovered during testing and try to determine whether such shortcomings are peculiar to the test unit or endemic to the product. Sometimes we offer suggestions for the use, maintenance, or improvement of a product. When more than one item is reviewed, we may conduct comparative testing to acquaint readers with differences among products.

The opinions stated in all S&T Test Reports are those of the authors.

Field and Bench Testing

Unless we state otherwise, the numbers presented in S&T Test Report are determined by us and are not just a rehash of specifications provided by the manufacturer or distributor. This goes for everything from the size and weight of a telescope to the aperture, focal length, and central obstruction of the optical system.

Anyone can hold a ruler up to the primary mirror of a Newtonian reflector, but this may not give a true indication of the telescope's effective aperture. Measuring the true light-gathering power of a refractor or catadioptric system is even more problematic. In all cases we measure optical parameters using carefully controlled tests. We determine the diameter of the aperture that collects and focuses starlight to a point on the telescope's optical axis. And we measure a telescope's effective focal length from the image scale at the focal plane with the instrument focused at infinity.
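To make that focal-length measurement concrete, here is a minimal sketch of the arithmetic in Python. It assumes the standard small-angle plate-scale relation (about 206,265 arcseconds per radian); the function name and the sample numbers are ours for illustration and are not drawn from S&T's actual test procedure.

    ARCSEC_PER_RADIAN = 206265  # 3600 x 180 / pi, rounded

    def effective_focal_length_mm(star_sep_mm, star_sep_arcsec):
        # Image a star pair of known angular separation with the scope
        # focused at infinity, measure the pair's linear separation at
        # the focal plane, and invert the plate-scale relation.
        return star_sep_mm * ARCSEC_PER_RADIAN / star_sep_arcsec

    # Example: a pair 100 arcseconds apart whose images land 0.97 mm
    # apart at the focal plane implies a focal length of about 2,000 mm.
    print(round(effective_focal_length_mm(0.97, 100.0)))  # -> 2001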

Frozen Scope

Reviewing telescopes during New England winters can be an exercise in human endurance, as is apparent from this instrument just brought inside after a three-hour test of the scope's altazimuth-mode tracking accuracy.

Sky & Telescope photo by Dennis di Cicco.

Most experienced observers can develop an informed opinion of a telescope on their first night under the stars. This is certainly true of our reviewers, who have been amateur astronomers for an average of more than 30 years. But we go much further than one or two nights of field-testing. We typically spend four to eight weeks evaluating equipment before writing about it. We drag scopes in and out of garages and spare bedrooms, pack them in cars and drive them to star parties and dark-sky sites, and sometimes leave them covered outside for days at a time. We share views with family and friends as well as with other experienced observers. In short, we test telescopes under conditions very similar to those experienced by our readers.

This regimen lets us make an accurate assessment of how a telescope performs in everyday use, uncovering issues overlooked during the first night or two of testing and dismissing others that initially seemed like potential problems but proved not to be. We are particularly proud of this kind of field-testing, and we consider it at least as important as our bench testing.

Our Rating System

In the May 2004 issue of Sky & Telescope we introduced a five-star rating system that tells at a glance our opinion of a telescope's optical, mechanical, and overall performance. When you use this rating system to compare different telescopes, keep in mind that such comparisons are valid only for telescopes of similar design and aperture. The ratings alone will not tell you how much you can see using very different telescopes. But the combination of the ratings and an accompanying performance diagram will.

Theoretical Performance of Telescopes

This diagram from S&T Test Report shows the theoretical performance of telescopes of different apertures. The example indicates that with a top-quality 8-inch scope you can detect stars as faint as magnitude 14.5 and resolve details as small as 0.6 arcsecond if the air is very still.

Sky & Telescope illustration.

The diagram graphs the theoretical performance of a telescope as a function of aperture. We plot the test telescope's position on two curves: one giving the visual magnitude limit, and the other the visual resolution. The former indicates the faintest star visible in the telescope at a magnification of 150× under good sky conditions, and the latter gives the minimum resolvable separation for a pair of stars of equal brightness (the so-called Dawes limit). Each is dependent on aperture — bigger scopes generally let you see fainter stars and finer details. But quality matters too, and that's where the ratings come in.
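For readers who want to reproduce these curves, here is a rough sketch in Python. The resolution curve uses the standard Dawes formula, 4.56 arcseconds divided by the aperture in inches. The exact magnitude-limit curve plotted in the diagram isn't spelled out here, so the logarithmic approximation below uses a constant we've chosen simply to match the 8-inch example in the caption; treat it as illustrative.

    import math

    def dawes_limit_arcsec(aperture_inches):
        # Minimum resolvable separation for an equal-brightness pair.
        return 4.56 / aperture_inches

    def limiting_magnitude(aperture_inches):
        # Faintest star visible at roughly 150x under good skies.
        # The constant 10.0 is an assumption fitted to the 8-inch
        # example (magnitude 14.5), not a published S&T formula.
        return 10.0 + 5.0 * math.log10(aperture_inches)

    print(round(limiting_magnitude(8.0), 1))  # -> 14.5
    print(round(dawes_limit_arcsec(8.0), 2))  # -> 0.57, about 0.6 arcsecond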

So, for example, a mediocre 8-inch reflector with a two-star rating will run circles around a jewel-like 4-inch refractor with a five-star rating when it comes to viewing faint deep-sky objects, because the 8-inch gathers four times more light. But for high-resolution observations of bright targets like the Moon and planets, this refractor may beat the reflector, because exceptional optics can be used effectively at higher powers than poor ones.
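The light-gathering comparison in that example is simple arithmetic: light grasp scales with the square of the aperture. A quick check, with the function name ours:

    def light_grasp_ratio(d1, d2):
        # Ratio of aperture areas for two scopes whose diameters are
        # given in the same units; the pi/4 factors cancel.
        return (d1 / d2) ** 2

    print(light_grasp_ratio(8.0, 4.0))  # -> 4.0, four times the light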

The rating system is not an assessment of the merits of one optical design versus another. Those subtleties, which usually involve some consideration of the types of observations you might wish to make with a given telescope, will be dealt with in the text of the review. But for observations of the same targets with, say, two 6-inch reflectors, a four-star instrument will outperform a scope with a three-star rating.

Our rating system is a bit different from those often found in reviews of other products. Most significantly, our rating scale is nonlinear. A typical five-step scale might mean poor, fair, average, good, and excellent. But we generally review equipment intended for serious amateur astronomers, so our selection process tends to filter out instruments that might warrant a poor rating if we were reviewing the full spectrum of telescopes sold today. Nevertheless, we did reserve a one-star rating (defects so severe that the equipment is virtually unusable) just in case.

The remaining four steps in our rating system offer a precise assessment of the equipment we review. At the top is five stars for a telescope so perfect that we can see no room for meaningful improvement. This would be a rare instrument indeed, but since we are judging each telescope by its own set of standards (that is, by standards appropriate to its particular optical design and aperture), every scope we look at has the potential to achieve a five-star rating: reflector, refractor, or catadioptric; large or small aperture; mass produced or custom made.

A four-star telescope, while not perfect, is still a superb instrument. This rating means we can detect imperfections with our testing procedures (and we'll mention these shortcomings in the review), but they will go unnoticed in normal use. A good example is minor spherical aberration of the optics. Even tiny amounts of this common optical defect — the one made famous by the Hubble Space Telescope in 1990 — are revealed with a simple star test. But it takes more than the minimum detectable amount of spherical aberration to render fuzzy star images that don't "snap" into focus and to produce planetary views that lack contrast.

A three-star rating signifies a scope with defects that are visible to a trained eye during normal use but do not significantly degrade performance. An example would be a scope with enough spherical aberration to degrade low-contrast planetary features and show slightly "soft" in-focus stars.

A two-star rating signifies defects that compromise performance. In this case we'd be talking about enough spherical aberration to fuzz close double stars to the point where they are difficult to resolve in an instrument that would easily split them if the optics were better. Such degradation would also significantly impact views of the Moon and planets.

When it comes to telescopes, optics aren't the whole story. Even the best optics are ineffective if they're not supported by a steady mount. So we rate optics and mechanics — which include "fit and finish," the smoothness of motions, tracking accuracy (if applicable), and related features — separately. And because a telescope is more than the sum of its parts, we include a third, overall rating that incorporates such things as price, versatility, and other factors that we'll point out in the accompanying "bottom-line summary." This overall rating is not the average of the optical and mechanical ratings. Indeed, a telescope that gets three stars for optics and mechanics may nevertheless end up with a four-star overall rating — if, for example, it is priced extremely attractively and thus represents a terrific value for consumers.

Frequently Asked Questions

The Internet provides a global public forum where readers can communicate in an instant, not only with us, but also with each other. We receive many e-mail messages about S&T Test Report each month — so many that we can't answer them all — and our reviews are a frequent topic of discussion in newsgroups and other online venues. Certain questions come up repeatedly; we'll answer a few of them here.

. "What role, if any, does the manufacturer of a product have in its review?"

As already noted, if a new product is scarce or especially costly, we might borrow a sample from the manufacturer or distributor for S&T Test Report. Obviously this alerts the company that its product is being reviewed. Another way a vendor might learn about an upcoming review is if we call with questions that came up during our bench or field testing, or to verify current pricing and availability. But we never share the contents of a review with a vendor in advance of publication, and usually a company that knows about an upcoming review doesn't know which issue it will appear in.

Manufacturers sometimes encourage us to review their products — and sometimes they actually discourage us! But we decide what equipment to review based on reader interest and on our own perceptions of which new products are especially significant in today's astronomy marketplace.

. "But aren't you constrained in what you can write about products because so many manufacturers and dealers advertise in your magazine?"

Yes and no. We are constrained to get our facts straight. Both readers and advertisers look critically at what we write, so we take great care that every number, statement, and test result gets checked and double-checked for accuracy. But we are not constrained to say only positive things about the products we review, and we don't.

Many consumer publications — perhaps even most of them — are paid for mainly by ad revenue. But that's not the case with Sky & Telescope. In its early years the magazine carried very little advertising, mainly because there were hardly any commercial suppliers of telescopes and accessories. While there are many astronomy vendors and plenty of advertising in S&T today, most of our revenue still comes from subscriptions and newsstand sales. So if you believe that money talks, then the loudest voice is that of our readers, and it always has been.

We sometimes joke that we know we've written a fair, balanced, and accurate review when we hear equal amounts of praise and criticism from both readers and advertisers!

. "Why don't you ask for input from people who've actually bought and used the equipment?"

We do! We get lots of reader feedback on new products, and we read additional users' comments on the Internet in the course of preparing a review. But we believe you get a better value from S&T Test Report if we have our own experts do the tests and write up the results in a manner that's consistent from issue to issue.

There's nothing preventing a consumer from writing a fair review, but sorting the good ones from the poor ones is not easy for those not already familiar with the equipment in question. And it's usually impossible to tell whether the reviewer has a personal agenda, or whether he or she has experience with any comparable equipment.

As already explained, most of the time we anonymously purchase the equipment we test, and we test it the way consumers use it, over a period of months, rather than publishing first impressions based on a night or two of casual use.

. "Why do I sometimes find comments on the Internet from amateur astronomers who pan equipment that S&T has reviewed favorably?"

Many online reviews get written because the author is either very excited about or very disappointed in a piece of gear. Rarely do you see a review by someone who is simply satisfied. As a result, much of the writing revolves around supporting the idea that a given product is either absolutely amazing or utterly worthless. In our experience, most equipment falls somewhere in between.

When we search the Internet for comments on a product before publishing a review, we look for issues (both good and bad) that are mentioned consistently. It's a mistake to judge a product based on a composite picture assembled from random reports of problems by different people using different instruments.

It's certainly true that not every telescope comes out of the box in perfect condition. But unless there's an overall indication of shoddy quality — something we would definitely mention in a review — a random problem such as a stripped screw or a small ding in the paint usually isn't worth mentioning. We will definitely cover something major, such as dead electronics in a Go To telescope.

We constantly work at improving the quality of our product reviews through internal editorial assessment, input from optical designers and equipment manufacturers, and most importantly, feedback from readers. Our goal is to keep S&T Test Report second to none.
