CNAPS Harnesses Full Power of PCI for Affordable, Real-time Image
Processing

BEAVERTON, OREGON--SEPTEMBER 11, 1995-- Adaptive Solutions Inc.
(NASDAQ:ADSO) today announced two new PCI-based image processing
accelerator boards for OEMs and integrators. By combining the Adaptive
Solutions CNAPS parallel processor with the wide bandwidth of the PCI bus
format, designers now have the performance and throughput required for
real-time processing of images, forms, signals, optical character
recognition (OCR) and neural networks.

With the CNAPS/PCI or CNAPS/PCIx board format, PC users are now able to
take advantage of Adaptive's proven technology, which is currently
deployed in the company's PowerShop accelerator board for Adobe Photoshop.
The increased data transfer rate of the PCI bus provides data fast enough
to take full advantage of the high-speed processing power of all 64 or 128
CNAPS parallel processors.

"PCI has become the systems standard," said John Haynes, Director of New
Business Marketing for Adaptive Solutions. "By merging Adaptive's parallel
processing technology with a PCI bus format we're able to provide
real-time image processing and classification muscle to a greater number
of applications."

Two PCI Configurations, Up To 80 MB/Second Throughput

Adaptive Solutions offers two styles of PCI boards to fit a broad range of
applications. Both act as a co-processor to the system's main
processor (386 or better) and both are electrically compliant with the PCI
2.0 specification. The boards vary in physical dimensions, maximum number
of CNAPS processors, clock rate, number of SIMM sockets, and interconnect
characteristics.

The CNAPS/PCI parallel co-processor delivers up to 1.6 billion multiply-
accumulates per second from 64 CNAPS processors. It offers two SIMM
sockets for up to 64 MB of additional data storage local to the CNAPS
array of processors, and fits into one full-sized PCI slot (12.28" x
3.87").

The second board, the CNAPS/PCIx parallel co-processor, also electrically
supports the 32-bit version of the PCI 2.0 specification, but occupies a
PCI slot with full-size ISA dimensions (13.33" x 4.5"). The CNAPS/PCIx
supports either 64 or 128 CNAPS processors (up to eight CNAPS chips),
delivering up to 2.56 billion multiply-accumulates per second. Additional
on-board connectors accept mezzanine boards for custom high-speed off-bus
data I/O.
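As a hedged aside, the two boards' quoted MAC ratings are consistent with
their processor counts at implied clock rates, assuming one
multiply-accumulate per processor per cycle (an assumption; the release
quotes only aggregate rates and processor counts):

```python
# Implied CNAPS clock rates from the quoted aggregate MAC ratings.
# Assumption (not stated in the release): one multiply-accumulate
# per processor per clock cycle.
def implied_clock_mhz(macs_per_second, num_processors):
    """Clock rate (MHz) needed to reach the quoted aggregate MAC rate."""
    return macs_per_second / num_processors / 1e6

pci_clock = implied_clock_mhz(1.6e9, 64)     # CNAPS/PCI: 64 processors
pcix_clock = implied_clock_mhz(2.56e9, 128)  # CNAPS/PCIx: 128 processors
print(pci_clock, pcix_clock)  # → 25.0 20.0
```

This would put the fully populated CNAPS/PCIx at a somewhat lower clock
than the CNAPS/PCI, which matches the release's note that the boards vary
in clock rate.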

The CNAPS/PCIx supports both Direct I/O, which sustains 20 megabytes per
second of continuous data input and output, and Quick I/O, which delivers
data to 32 processors at a time for an aggregate bandwidth of up to 80
megabytes per second in and out.
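One plausible reading of the 80 MB/second Quick I/O figure, offered here
as an assumption rather than a fact from the release, is four parallel
20 MB/second streams, one per group of 32 processors on a fully populated
128-processor board:

```python
# Hypothetical breakdown of the Quick I/O bandwidth figure (assumption:
# the release does not state how the 80 MB/s aggregate is composed).
STREAM_MB_S = 20    # sustained rate per I/O stream, matching Direct I/O
PROCESSORS = 128    # fully populated CNAPS/PCIx
GROUP_SIZE = 32     # processors served per Quick I/O stream

streams = PROCESSORS // GROUP_SIZE
aggregate = streams * STREAM_MB_S
print(streams, aggregate)  # → 4 80
```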

The Best Choice for Embedded Applications

The CNAPS/PCI board is ideal for applications that will be deployed in PCs
from many different manufacturers, where strict adherence to the PCI
specification is mandatory. Compared with the CNAPS/PCIx board, the
CNAPS/PCI board offers more data storage memory (up to 64 MB vs. up to
32 MB) but fewer processors (64 vs. 64 or 128).

The CNAPS/PCIx board is the choice where maximum performance (128
processors) is required and the target PC can accommodate a PCI board as
large as an ISA board. The CNAPS/PCIx board also offers a wider variety of
data I/O capabilities; with the addition of a properly designed mezzanine
board, it provides the highest data bandwidth.

Unmatched Performance Comes from CNAPS Processor Array

The CNAPS array is used in compute-intensive applications where traditional
DSPs and other linear architectures prove inadequate. CNAPS is frequently
used for its ability to handle pattern recognition and classification in
real-time image processing applications. Recently announced applications
include the PowerShop accelerator for Adobe Photoshop, head-up displays
for aircraft landing systems, character recognition for advanced postal
sorting systems, and real-time target tracking.

The CNAPS processor array consists of one or more CNAPS chips, each
containing 16 individual processors, and 4 KB of on-chip memory per
processor. While up to eight chips can be installed on a CNAPS/PCIx board,
the programmer sees only a flat, one-dimensional array of processors,
without regard to individual chip boundaries. Adaptive Solutions provides
a complete development environment, including a C compiler, assembler,
debugger, and hundreds of prewritten image processing library functions.
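The flat-array view described above can be illustrated with a small
sketch. The mapping below is purely illustrative (the real CNAPS toolchain
hides chip boundaries from the programmer entirely); only the
16-processors-per-chip figure comes from the release:

```python
# Illustration of the flat, one-dimensional processor array: the programmer
# addresses processor N; hardware maps it to a chip and a processor within
# that chip. (Hypothetical mapping; not an actual CNAPS API.)
PES_PER_CHIP = 16  # 16 individual processors per CNAPS chip

def locate(pe_index):
    """Map a flat processor index to (chip, processor-within-chip)."""
    return divmod(pe_index, PES_PER_CHIP)

print(locate(0))    # → (0, 0): chip 0, processor 0
print(locate(100))  # → (6, 4): chip 6, processor 4
```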

Performance Proven in Toughest Applications

In March 1995, Motorola announced a formal agreement to work with Adaptive
Solutions on parallel processor requirements and technology with the aim
of developing future technologies and products. Adaptive Solutions boasts
additional CNAPS technology partnerships with Matsushita Electric
Industrial Company Limited, Cromemco and Siemens.

Price and Availability

The CNAPS/PCI and CNAPS/PCIx boards are available now, with pricing
beginning at $3,995. Volume OEM pricing and engineering design support
packages are also available.

Adaptive Solutions, headquartered in Beaverton, Oregon, pioneered parallel
processing for the desktop and embedded applications. Adaptive designs and
manufactures parallel processing products using the CNAPS architecture for
pattern recognition applications, including image processing and neural
networks.

For more information, contact:

Adaptive Solutions
1400 NW Compton Dr., Suite 340
Beaverton, OR 97006
Phone (800) 482-6277
Fax (503) 690-1249
 
 =========================================================
 From the 'New Product News' Electronic News Service on...
 AOL (Keyword = New Products) and Delphi (GO COMP PROD)
 =========================================================
