
Conclusion.
-----------

When reporting scanner problems, we have tried to be as fair as
possible. First, we assume that AV producers do their best to support
their users with products of as good quality as they can achieve. Second,
though we have done our best to assure that our virus databases are
relevant and that our test procedures are fair, we cannot be absolutely
sure that we have not made any mistakes (no scientist is ever absolutely sure).

We are especially aware that our test has a *rather limited scope*, mainly
due to the following deliberate design decisions:

   1) We have tested ONLY on-demand scanners. Many products offer
      broader functionality, as they also include resident components,
      integrity checkers etc.

   2) We did NOT test the ability of scanners to detect viruses in memory,
      and we also did not test whether cleaning is done successfully.

   3) Concerning polymorphic viruses, this test contains a large number
      of static samples. Presently, we are preparing a dynamic test (where
      we generate multiple generations), but this method is not yet mature
      enough.

   4) Though we have tried our best to have only real viruses in the
      respective virus databases, we have no systematic proof mechanism in
      place to *guarantee* that all specimens are indeed viral.

   5) It is beyond our scope to evaluate user interfaces. Here, we regard
      users and - to some degree - well-qualified journalists as adequate
      testers. Moreover, we refrain from reporting timing behaviour, as our
      test procedure is rather untypical of user requirements (we sincerely
      hope that users will never have as many infected files as in our test!).

   6) We have deliberately concentrated on DOS/Windows 3.x based scanners.    


With such restrictions: what are these tests good for, then? Regardless
of the drawbacks mentioned above, we believe that our tests are of some value.

   A) Probably the most valuable part is the naming cross-reference. It
      can help the producers of the scanners to become compliant with the
      CARO virus naming scheme, and it can be used by users to figure out
      exactly which virus they have when their favourite scanner reports
      some name.

   B) The tests provide some general impression of how good a scanner
      is at detecting viruses. Moreover, a by-product of our "fairness"
      (having frozen the viral databases some 8 weeks ago) is some
      information about quality improvements of scanners for which several
      versions were made available during the test period.
   
   C) With the growing importance of other forms of malicious software, such
      as droppers, virus generators and trojan horses, our initial test
      of macro-related malware may hopefully convince AV producers to
      support users in detecting such malware as well. Our future tests will
      therefore also cover file- and boot-related malware.


Generally, we will be very interested to learn about any comments on our
approach and test methods, as this may help us to improve test procedures for
future tests and achieve a higher level of quality where possible. But any
critical remark should bear in mind that we are only able to test the
behaviour of any product based on the information made available by its
manufacturer. We have no insight into how the products work, and we have
not tried to reverse-engineer any product to understand the problems we
experienced. We therefore just report such problems and ask manufacturers to
analyse them themselves; sufficient information concerning the test protocols
has been made available by us (see SCAN-RES), but we are also prepared to
help AV producers upon special request where possible, so that they can
support their customers by improving the quality of their products.

Finally, we would like to express our hope that users will find
this document useful.

