


                                   
                                   
                                   
                                   


                                   
                                   
            ANTIVIRUS SCANNER ANALYSIS BASED ON JOE WELLS'
                 LIST OF PC VIRUSES IN THE WILD 3/1996
                                   
                            Marko Helenius
                                   
  Virus Research Unit, University of Tampere, Department of Computer
 Science, P.O.BOX 607, 33101 TAMPERE, FINLAND, Tel: +358 31 215 7139,
             Fax: +358 31 215 6070, E-mail: cshema@uta.fi,
  WWW: http://www.uta.fi/laitokset/virus, ftp: ftp.cs.uta.fi /pub/vru






This paper briefly introduces our methods of testing antivirus
products and the results of an antivirus scanner analysis carried out
at the Virus Research Unit in summer 1996. The analysis was performed
with the DOS, Windows 3.1, Windows 95 and memory resident versions of
the scanners. The test set was based completely on Joe Wells' list of
PC viruses in the wild 3/1996. I have also tried to point out what a
reader should be aware of when reading the results.


ACKNOWLEDGEMENTS

Permission is granted to distribute copies of this information,
provided that the contents of the files and the information are not
changed in any way and the source of the information is clearly
mentioned. For republication, permission must be obtained from the
Virus Research Unit. To avoid publishing misleading information, those
who wish to quote the results should discuss the matter with the Virus
Research Unit.

This analysis required a great deal of work, as well as close
co-operation with antivirus researchers. I would like to thank the
following people, who were of great help in carrying out the analysis.

Karsten Ahlbeck, Karahldata
Pavel Baudis, ALWIL Software
David Chess, IBM T.J.Watson Research Center
Amir Elbaz, Eliashim Microcomputers Ltd.
Dmitry Gryaznov, S&S International PLC.
Andy Hayter, IBM AntiVirus Worldwide Brand Team
Jan Hruska, Sophos Plc.
Mikko Hypponen, Data Fellows Ltd.
Eugene Kaspersky, KAMI Group
Jimmy Kuo, McAfee Associates
Cliff Livingstone, Look Software
Yury Lyashchenko, DialogueScience Inc.
Wolfgang Stiller, Stiller Research
Frans Veldman, ESaSS Ltd.
Rene Visser, Symantec Peter Norton's Group
Gerard Vuille, Metronet BBS

                                   
1. INTRODUCTION

It is too often unclear how antivirus testers work and what viruses
they use in their tests. This is not how it should be: the public
should know how antivirus testers work and which viruses their tests
use. I also believe that a tester should acknowledge the shortcomings
of his or her tests to avoid spreading misleading information. At the
very least it should be known what was actually tested. To give a more
exact view of our work, I briefly present our testing methods and some
facts that readers should be aware of. This paper discusses the
problems of collecting the "In the Wild" test set, how we carry out
the tests, the results of the analysis and what a reader should keep
in mind when reading them.

This report presents the results of an antivirus scanner analysis
carried out by the Virus Research Unit in summer 1996. The test set
consisted of the viruses in Joe Wells' list of PC viruses in the wild
3/1996. The analysis includes tests of DOS, memory resident, Windows
3.1 and Windows 95 scanners against a test set of file viruses found
in the field.

2. ANALYSING THE PRODUCTS

The following sections describe briefly how the analysis was carried
out.

2.1 EXCLUDING NON-VIRUSES

Trojan horses, joke programs, intended viruses, first generation
viruses, innocent files and other non-viruses should be excluded from
the test set. Otherwise products that are good at detecting true
viruses but "bad" at detecting non-viruses would score lower than they
deserve, while products that give false alarms could perform well.
After all, we should be analysing how well products detect viruses. In
this analysis considerable effort was spent on excluding Trojan
horses, joke programs, droppers, first generation viruses, innocent
files, intended viruses and especially damaged files from the test
set. The non-virus removal process was carried out with the help of a
system implemented at the Virus Research Unit. The system is called
the "Automatic and Controlled Virus Code Execution System" [Helenius]
and it automatically executes virus code in a controlled area and
saves the infected areas into a specific network directory. The
system's strength is that it can be left to work on its own: it
recovers automatically from the hangs, damage and CMOS memory failures
that the execution of malicious software may cause.

2.2 ANALYSING ON-LINE DOS-SCANNERS

The analysis of on-line DOS scanners was carried out against both file
and boot sector viruses. Detection of boot sector viruses was analysed
by writing diskette boot images onto diskettes one by one and then
scanning the diskettes. File virus detection capabilities were
analysed by executing the DOS scanners from batch files with the
switches presented in Table 1.

PRODUCT                 COMMAND LINE
Avast 7.50              LGUARD %1%2 /P /S /R*%2.rep
AVP 2.2 (3.6.1996)      AVP /W=AP%2.rep /S /Y /Q %1%2
Dr. Solomon 7.60        FINDVIRU %1%2 /REPORT=FV%2.REP /LOUD /VID
Dr.Web 3.11             drweb %1 /cl /rpdw%2.rep
F-PROT 2.23             f-prot %1%2 /nowrap /list /report=fp%2.rep
IBM Antivirus 2.4.1     ibmavsp -LOGIBMAVSP.LOG -PROGRAMS -VLOG -NB
                        -NREP -NWIPE -NFSCAN %1%2
Virus Alert 4.10        VASCAN %1 /P /C /S /A /E* /R*al%2.rep
Integrity Master 2.61b  IM /NOB /NE /VL /REPA /1 /RF=i:\IM\IM%3.rep
Microsoft Antivirus     MSAV %1%2 /P /R
(1.6.1996)
Norton Antivirus        Scan executed from graphic interface
(1.6.1996)
McAfee Scan 2.2.12      SCAN /REPORT S2%2.REP /RPTALL /NOMEM /SUB %1%2
Sweep 2.86              SWEEP -ALL -REC -NK -NAS -NB -P=SW%2.REP
Thunderbyte 7.01        TBSCAN %1%2 largedir expertlog noautohr batch
                        log logname=tb%2.rep
Virusafe 7.0            VREMOVE %1%2 /R /C /D
                    Table 1: Command line switches
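
The %1 and %2 markers in Table 1 are DOS batch-file parameters. The
paper does not state what they stood for; a plausible reading is that
%1 was a drive or path prefix and %2 a test-directory/run identifier
reused in the report file name. The following Python sketch
illustrates that assumed expansion only; the paths and run identifier
are invented for the example.

```python
# Hypothetical reconstruction of the batch parameter expansion used to
# drive the scanners. The meanings of %1 and %2 are assumptions: %1 a
# drive prefix, %2 a test-directory/run identifier.
COMMANDS = {
    "F-PROT 2.23": "f-prot %1%2 /nowrap /list /report=fp%2.rep",
    "Sweep 2.86": "SWEEP -ALL -REC -NK -NAS -NB -P=SW%2.REP",
}

def expand(template: str, drive: str, run: str) -> str:
    """Substitute the DOS batch parameters %1 and %2 in a command line."""
    return template.replace("%1", drive).replace("%2", run)

for product, template in COMMANDS.items():
    # e.g. "f-prot D:\001 /nowrap /list /report=fp001.rep"
    print(product, "->", expand(template, "D:\\", "001"))
```

With this reading, each run scans one directory of infected samples
and leaves behind a uniquely named report file for later merging.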

2.3 ANALYSING MEMORY RESIDENT SCANNERS

Memory resident scanners were analysed against file viruses by copying
files while the memory resident part of a product was active. In the
case of Virus Alert, Avast, Dr. Solomon's Antivirus Toolkit, IBM
Antivirus and McAfee Scan, the memory resident scanners were
additionally analysed with a file execution method, since these
products can detect more viruses when actual virus code is executed.
The automatic and controlled virus code execution system made even the
file execution method possible. It was assumed that the virus was
found if a product could both interrupt the use of the computer and
prevent the change from happening; otherwise it was assumed that the
product could not completely detect the virus. Boot sector virus tests
of memory resident scanners were carried out by accessing infected
diskettes with the "CHKDSK" command.

2.4 ANALYSING WINDOWS 3.1 AND WINDOWS 95 SCANNERS

Windows scanners were analysed only against file viruses found in the
field. Boot sector viruses were not included, because so far we do not
have methods for automating the analysis in these environments.

2.5 REPORT FILE CREATION

I believe that the public should know what was actually tested and how
the results were derived. This is why we have always prepared
cross-references, which clearly show which sample files were found by
which products. To implement the report file generation we first use
awk scripts to convert the report files into a uniform format. After
this, a specific tool merges the report files.
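
The merging step can be sketched as follows. This is an illustrative
Python reconstruction, not the actual awk scripts and merging tool
used at the Virus Research Unit; the normalized report format and the
sample data are invented for the example.

```python
# Illustrative sketch of cross-reference generation from normalized
# scanner reports (the actual tool chain used awk plus a custom
# merger; the one-entry-per-line format here is an assumption).
def parse_report(lines):
    """Parse a normalized report: one '<file> <virus name>' per line."""
    detections = {}
    for line in lines:
        path, _, name = line.strip().partition(" ")
        if path:
            detections[path] = name or "detected"
    return detections

def cross_reference(reports):
    """Build {sample file: {product: reported name}} over all products."""
    table = {}
    for product, lines in reports.items():
        for path, name in parse_report(lines).items():
            table.setdefault(path, {})[product] = name
    return table

reports = {
    "ScannerA": ["VIRUS1.COM One_Half.3544", "VIRUS2.EXE Natas.4744"],
    "ScannerB": ["VIRUS1.COM OneHalf"],
}
xref = cross_reference(reports)
print(xref["VIRUS1.COM"])
```

A table built this way shows at a glance both which products missed a
sample and under which name each product reported it.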

3. PROBLEMS WITH THE 'IN THE WILD' TEST SET

For a virus to be included in an "In the Wild" test set, it must have
been found in the field at least once. This is not, however, as
obvious as it sounds. How do we know that a virus has been found in
the field at least once? Someone must have reported to an antivirus
researcher that the virus was found in the field, but how do we know
that such a report was made? One solution is to use Joe Wells' list
[Wells], which includes viruses that have been reported as found in
the field by leading antivirus researchers. It does not, however,
contain all the viruses found in the field, because not all cases are
reported to Joe Wells. For example, we in Finland have viruses found
in the field which have been reported to antivirus researchers and/or
to the Central Criminal Police, but some of them are still not in Joe
Wells' list. I also have reports from other antivirus researchers of
viruses found in the field which are not in the list. Nevertheless,
the viruses mentioned in Joe Wells' list should at least be included
in the test set.

In some cases Joe Wells' list does not state exactly which variant of
a virus was found in the field. In most cases the exact variant can be
identified directly, but sometimes further examination was needed,
which causes problems when constructing the test set. Sometimes I
could obtain the original virus from antivirus researchers, but this
was not always possible. I had to compare several sources of
information with each other to determine which variant of the virus
was "In the Wild". In most cases this comparison was successful and I
could identify the correct variant with near certainty, but I still
cannot be absolutely certain that all variants were chosen correctly.

4. RESULTS OF THE ANALYSIS

The following sections present the results of the analysis. The
detection percentages were calculated so that an average detection
rate was computed for each virus; in other words, if a scanner could
detect only part of the sample files of a virus, a partial score was
counted for that virus. A drawback of this method is that it does not
fully reflect the trouble a partly detected virus may cause a user,
because undetected files may lead to reinfection. On the other hand,
even an unreliably detected virus does get caught, which slows down
its spread, and thus unreliable detection should be taken into
account. In the end, because estimating reliable detection would have
been too uncertain, I decided to use the averages.
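
As a worked illustration of this scoring (the sample counts below are
invented), a virus with 10 sample files of which a scanner detects 9
contributes 90 per cent for that virus, and the scanner's score is the
mean over all viruses. The combination columns in the following tables
also appear consistent with a weighted average over the 82 boot sector
and 111 file viruses, which is what the sketch assumes.

```python
# Sketch of the scoring described above. The per-virus averaging is
# from the text; weighting the combination by virus counts (82 boot,
# 111 file) is an assumption checked against the published tables.
def virus_average(detected, total):
    """Per-virus rate: fraction of one virus's sample files detected."""
    return 100.0 * detected / total

def scanner_score(per_virus):
    """Scanner's score: mean of its per-virus detection rates."""
    return sum(per_virus) / len(per_virus)

def combination(boot_pct, file_pct, n_boot=82, n_file=111):
    """Combined score, weighted by the number of viruses of each type."""
    return (n_boot * boot_pct + n_file * file_pct) / (n_boot + n_file)

# 9 of 10 samples of one virus, all 5 samples of another -> 95.0
print(scanner_score([virus_average(9, 10), virus_average(5, 5)]))
# Avast's 98.78% boot / 100% file scores combine to about 99.5
print(round(combination(98.78, 100.0), 1))
```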

4.1 DOS SCANNER ANALYSIS

The test set included 82 boot sector viruses and 111 file viruses, and
these viruses had infected 7711 target files. Table 2 presents the
results of the on-line DOS scanners. The "In the Wild" test included
viruses from the first part of Joe Wells' list [Wells], which contains
the viruses reported as being in the wild by at least two different
antivirus researchers. McAfee Associates' VShield was analysed with
the /POLY switch activated.

DOS-scanner            Boot sector   File viruses(%) Combination(%)
                       viruses(%)
Dr. Solomon 7.60         100.00          100.00          100.0
Avast 7.50                98.78          100.00          99.5
Virus Alert 4.10          98.78          100.00          99.5
Thunderbyte 7.01         100.00          99.01           99.4
Sweep 2.86                99.39          98.85           99.1
F-PROT 2.23               98.39          99.01           98.7
AVP 2.2 (3.6.1996)        96.95        100/99.01       98.7/98.1
IBM Antivirus 2.4.1       98.78          98.17           98.4
McAfee Scan 2.2.12        98.78          93.96           96.0
Norton Antivirus          95.93          95.81           95.9
(1.6.1996)
Virusafe 7.0              97.56          94.26           95.5
Integrity Master 2.61b    89.02          93.48           91.6
Microsoft Antivirus       84.15          62.22           71.5
(1.6.1996)
Dr.Web 3.11               55.49          64.66           60.8
                   Table 2: Results of DOS scanners
                                   
Most scanners seem to perform well. Clear exceptions were Microsoft
Antivirus and Dr.Web, which detected only about 60-70 per cent of the
viruses in the test bed. Detecting viruses found in the field is more
critical than detecting all viruses, and therefore producers of
antivirus products that cannot detect nearly 100 per cent of such
viruses should pay attention to detecting the viruses found in the
field.

By default AVP does not search for viruses in DOC files. Users must
define the extension before the WordMacro.Concept virus can be found.
This is the reason for including two different percentages for AVP.

4.2 MEMORY RESIDENT SCANNER ANALYSIS

In most cases memory resident scanners cannot detect as many viruses
as on-line scanners. Table 3 presents the results of the memory
resident scanner analysis. The memory resident scanners of IBM
Antivirus and Microsoft Antivirus detected less than half of the
viruses in the test bed.

Memory resident       Boot sector   File viruses(%)  Combination(%)
scanner               viruses(%)
Virus Alert 4.10         98.78           97.3            97.9
Avast 7.50               98.78           97.02           97.8
Dr. Solomon 7.60         100.00          94.78           97.0
Sweep 2.86               96.95           96.4            96.6
Virusafe 7.0             97.56           86.48           91.2
McAfee Scan 2.2.12       93.90           85.17           88.9
Norton Antivirus         96.34           81.85           88.0
(1.6.1996)
Thunderbyte 7.01         97.56           77.11           85.8
F-PROT 2.23              84.15           70.95           76.4
IBM Antivirus 2.4.1      65.85           46.78           54.9
Microsoft Antivirus      24.39           27.03           25.9
(1.6.1996)
             Table 3: Results of memory resident scanners

Sweep does not have a memory resident scanner of its own, but Sweep's
InterCheck can be used like one on a computer connected to a network
server. All unauthorized files and boot sectors are copied to the
network server, where the NetWare version of Sweep scans them. The
detection capabilities are therefore the same as for the network
version. A drawback of the method is the extra traffic that InterCheck
causes on the network when copying files and boot sectors to the
server.

4.3 WINDOWS 3.1 SCANNER ANALYSIS

Windows scanners were analysed only with the file viruses found in the
field. Boot sector viruses were not included in the test set, because
so far we do not have methods for automating the analysis task in
Windows.

             WINDOWS 3.1 SCANNER    FILE VIRUSES (%)
             Avast 7.50                  97.3
             AVP 2.2 (3.6.1996)        --------
             Dr. Solomon 7.60           100.00
             F-PROT 2.23                 98.98
             IBM Antivirus 2.4.1         98.17
             Integrity Master 2.61b    --------
             McAfee Scan 2.2.12          93.96
             Microsoft Antivirus      Not tested
             (1.6.1996)
             Norton Antivirus 3.0        95.81
             (1.6.1996)
             Sweep 2.86                --------
             Thunderbyte 7.01            99.01
             Virusafe 7.0             Not tested
             Table 4: Results of Windows 3.1 scanners

Almost every analysed Windows scanner could detect as many viruses as
the DOS version of the product. Microsoft Antivirus and Virusafe were
not included, because they offered no possibility to create a log
file.

4.4 WINDOWS 95 SCANNER ANALYSIS

The Windows 95 scanners, too, were executed only against the file
viruses found in the field, because so far we do not have an automatic
method for boot sector virus analysis. Table 5 presents the results of
the Windows 95 scanner analysis.

          WINDOWS 95 SCANNER        FILE VIRUSES (%)
             Avast 7.50                 100.00
             AVP 2.2 (3.6.1996)        --------
             Dr. Solomon 7.60           100.00
             F-PROT 2.23                 98.98
             IBM Antivirus 2.4.1         98.17
             Integrity Master 2.61b    --------
             McAfee Scan 2.2.12          96.26
             Microsoft Antivirus       --------
             (1.6.1996)
             Norton Antivirus            95.81
             (1.6.1996)
             Sweep 2.86                  97.45
             Thunderbyte 7.01            94.75
             Virusafe 7.0             Not tested
          Table 5: Results of Windows 95 scanners

In most cases the Windows 95 versions could detect the same number of
viruses as the DOS versions.

4.5 ANALYSIS OF MEMORY RESIDENT SCANNERS FOR WINDOWS

The analysis of memory resident scanners for Windows was carried out
with the file viruses found in the field. Boot sector viruses were not
included in the test set, because so far we do not have methods for
automating the analysis task.

          MEMORY RESIDENT SCANNER    FILE VIRUSES (%)
             Dr. Solomon 7.50            100.00
             F-PROT 2.23c                 99.01
             Norton Antivirus 3.0         95.81
  Table 6: Results of memory resident scanners working under Windows

All analysed scanners seemed to work well.

5. DISCUSSION AND CONCLUSIONS

A great deal of work went into carrying out everything as well as
possible. An antivirus scanner analysis requires much effort, and
still there is always something to improve. This analysis too has
drawbacks, which a reader of the results should be aware of while
examining them. First of all, the performance of checksum calculation
and active monitoring programs was not analysed, and the products'
disinfection capabilities were not examined. A thorough analysis
should also include a false alarm test, but because of the restricted
time there was none in this analysis. In addition, the tests were not
carried out while viruses were memory resident, although this is often
the case when a computer is infected. It should also be noted that not
all viruses found in the field were included, because only viruses in
the first part of Joe Wells' list were used, and that we may have made
mistakes when identifying the correct variants of the viruses in the
test set.

Furthermore, we did not try to measure how common each virus is, so
the percentages do not directly measure the actual risk of infection;
a percentage merely states how large a share of the viruses used in
this analysis a product can detect. A lot of work went into excluding
non-viruses, droppers and first generation viruses from the test set.
However, we may have made mistakes, so it is possible that some
non-viruses are included, although I believe such cases are few. It
should also be noted that we did not try to check whether a product
detects a virus reliably, i.e. we did not separately count cases where
a product did not detect all replicates of the same virus. Because of
these drawbacks the results give only an overall impression of the
performance of the tested products.

Regardless of the drawbacks, I believe this analysis has some
advantages. We succeeded in including memory resident, Windows 3.1 and
Windows 95 scanners in the analysis. In addition, the test set
includes a large set of files. The analysis also provides detailed
cross-references, which clearly show which viruses were detected by
which product and under which name.

REFERENCES

[Helenius]      Marko Helenius, "Automatic and Controlled Virus Code
                Execution System", in the proceedings of the EICAR
                1995 conference held in Zurich, Switzerland,
                27-29 November 1995. Available electronically via
                anonymous ftp as ftp.cs.uta.fi:
                /pub/vru/documents/automat.zip

[Wells]         Joe Wells, "PC Viruses in the Wild", Available
                electronically via anonymous ftp as ftp.cs.uta.fi:
                /pub/vru/wildlist/0396.zip


