PC chipsets build a firm foundation for embedded applications
Embedded applications come in a variety of forms and levels of complexity. Although
some simple operations require only a processor and some support chips, complex systems
can encompass several processors that share resources at speeds unimaginable only a few
years ago. Between these two extremes, though, many middle-of-the-road embedded
applications only require moderate processor horsepower and simple video/graphics or other
I/O. To implement such a product, designers generally either build a custom processor
board or work with standard cards in formats such as STD, Multibus or VME. If cost,
size or other considerations dictate that you develop a custom board, using a PC chipset
as the core can be an ideal solution. Those devices reduce development time and unit cost, and
they provide flexibility in adapting, upgrading or otherwise modifying an application's design.
As shown in Fig 1, these chipsets implement a PC's innards in a small number of
devices. The functionality they provide includes bus switching between 8-, 16- and
32-bit memory accesses; dynamic memory refresh; DMA and interrupt processing; hardware
timers; a numeric coprocessor interface; wait-state generation; and I/O channel or local bus
interfaces. All you typically need to add are a few glue-logic chips---most of which you
can implement in PALs or FPGAs. Chipsets that handle XT, 286, 386 and
486 functionality are available from a number of vendors including Chips &
Technologies (San Jose, CA (415)434-0600), Headland Technology (Fremont, CA (415)683-6221),
National Semiconductor (Santa Clara, CA (408)721-5000) and Intel (Santa Clara, CA
(408)765-8080). The differences among these offerings lie not so much in price as in the
extra features the chips provide and in the level of support.
Why use a PC chipset?
The simple reason for building an application around a PC chipset is efficiency. It's
simply more productive to design an embedded application on a hardware platform that's
easily and readily understood by a large number of designers because you don't have to
educate new programmers on the nuances of a proprietary design. In addition, most software
design and testing can take place on a PC instead of the target system. You're also
working in the host system's native language with most of the same I/O devices as in the
actual application. This hardware and software commonality means that you spend less time
with emulators and more time coding and debugging. Hence, you become more productive,
projects take less time to complete and the product reaches market faster. Advantages
don't stop there---software maintenance costs also decrease for the same reasons.
If the preceding argument sounds a bit far-fetched, experience proves otherwise.
At my company, applications developed for embedded systems using proprietary designs take
much longer to develop compared to embedded designs based on PC chipsets. I haven't kept
rigorous data, but chipset designs seem to take roughly one-third the time to design and
develop. I attribute this improvement to the reduced emulation times (one less step in the
design process) and faster feedback to programmers during debugging sessions. The
increased productivity is based on emulating in software or hardware (but not with a
microprocessor emulator) the various I/O devices that make each embedded application unique.
Despite all the above advantages, this approach isn't a panacea. Indeed, if I said that
every embedded application should have a full-blown PC under the hood, I'd quickly be
looking for a new job. The vast majority of applications don't require all of a PC's
functionality, but reduced functional requirements shouldn't automatically stop you from
considering a chipset. There's more to a PC than just the big heavy box on your desk.
To effectively use these chipsets, determine which functions an application needs to
take full advantage of the chips' capabilities. Then either be creative with the
remaining functions or just ignore them. For example, in a medical instrument I designed,
I needed the horsepower of an 80286 processor, but my I/O requirements consisted of a
simple 8-bit I/O bus, a few interrupts and two DMA channels. With these requirements in
mind, I selected a chipset for an XT-class computer instead of the costlier AT set. I got
my horsepower at a reduced cost by only designing in what I needed for the
application---not what the textbook PC manuals said I needed with an 80286 processor.
Compatibility isn't vital
Before trying a design using these PC chipsets, it's also imperative that you
thoroughly understand how the innards of a PC operate. Only then can you know which
functions, clocks or I/O ports are mandatory and which ones you can ax from an
application. For example, I eliminated the 14.31818-MHz color-burst crystal used in
standard PCs for both time-of-day and dynamic refresh by tapping into the numeric
coprocessor's clock and adjusting my timings accordingly. Although my system isn't fully
PC-compatible, it never was to begin with, and the reduced parts count lowers the
instrument's cost and increases hardware reliability.
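The retiming that substitution requires is simple arithmetic. The PC's 8253/8254 timer is normally clocked at 14.31818 MHz / 12 (about 1.19318 MHz); if you drive it from another source, each channel's divisor must be recomputed to preserve the original tick rates. The sketch below shows the calculation; the 8-MHz "alternate clock" is a hypothetical example of mine, not a value from the instrument described above.

```c
/* Standard PC timer clock: 14.31818-MHz color-burst crystal / 12. */
#define PC_TIMER_CLK_HZ 1193182L

/* Divisor for a desired output frequency.  The 8253 treats a divisor
 * of 0 as 65536, so out-of-range requests saturate to 0 here. */
unsigned timer_divisor(long clk_hz, long out_hz)
{
    long d = (clk_hz + out_hz / 2) / out_hz;   /* round to nearest */
    return (d >= 65536L || d < 1) ? 0u : (unsigned)d;
}

/* Example: the DRAM-refresh channel normally runs at about 66287 Hz
 * (one refresh request every 15 us):
 *   timer_divisor(PC_TIMER_CLK_HZ, 66287L)  -> 18
 *   timer_divisor(8000000L, 66287L)         -> 121  (alternate clock)
 */
```

The same routine serves the time-of-day channel; a request slower than the 8253 can divide down to simply returns 0 (the hardware's 65536 case).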
In looking for information about PC hardware, the best references I've found are
Technical Reference books from IBM. Unfortunately, I understand they're no longer in
print, but you can still find them in libraries. They contain not only descriptions of
hardware but also include computer schematics and BIOS listings. Everything you'd
need to know about the inner workings of the PC is available in one book. This
knowledge is important because PC chipset vendors assume you know it to begin with.
This assumption became particularly evident as I considered different chipsets for my
medical instrument. What follows is my experience learning to design with parts from
Chips & Technologies and (I hope) doesn't reflect on all the vendors. The
documentation that firm supplied for designing with its parts was woefully lacking for an
embedded-application designer. The information I needed just wasn't in the documents, and
the information provided was riddled with errors. For instance, pins labeled No Connection
actually had to be connected, and diagrams showed inverted signals and mislabeled pin functions. To
top it off, before I received the documentation I had to sign a nondisclosure agreement
stating that I wouldn't divulge these "trade secrets." This was the first time
in all my years of designing embedded applications that I found that the chip design
information for an existing product wasn't available without nondisclosure. I still don't
know why I had to sign it.
The other main problem I had designing with these chips was C&T's lack of knowledge
about, and commitment to, using the parts in embedded applications. They swore that they
wanted to support the embedded market, yet their actions said otherwise. For example, the
firm never satisfactorily answered questions about how to initialize the chips on
power-up---they said that I had to disassemble their BIOS to figure out what was needed!
Even so, these parts' advantages still outweigh those problems, and the resulting
system is very reliable.
To BIOS or not to BIOS...
The subject of the PC's BIOS has arisen several times. It seems that everyone thinks
that if they use a PC chipset they must also use the BIOS. This belief is a fallacy. A
BIOS in an embedded application might be just what you need, but the majority of
applications require little of what a BIOS offers. Most probably need only its hardware
initialization and timer functions. In addition, you might
sometimes need display, RS232 and parallel printer functions, but is the Power-On Self
Test really of interest? Has your PC ever failed the CPU instruction test? As in the case
of the hardware mentioned earlier, only design into an application what it really needs.
A good way to learn about a BIOS is to buy a "roll-your-own" version from
Annabooks Inc. (San Diego, CA (619) 271-9526). That vendor sells the source code for a
BIOS in which most of the functions are in C. The code costs from $100 to $200 depending
on the BIOS (XT or AT), and having the source code allows you to configure the specific
application. If you're dead-set on having a full-blown BIOS, most PC chip vendors
also sell a matching BIOS (generally without source code). Usually they charge a one-time
licensing fee in the thousands of dollars and then a per-unit royalty fee that depends on
how many chipsets you buy.
For my design, after weighing the alternatives, I elected to forgo the BIOS and wrote
my own startup and hardware interface routines. To understand why I did so, consider that
the term BIOS means Basic Input Output System---it's a set of drivers that allows a program
to communicate with and control different types of hardware. In other words, it isolates the
reality of the hardware from the software, allowing the same software to run on many
different platforms. However, I had no desire to run other software on my platform, so why
have this formalized level of abstraction? I left the separation of hardware interfacing
to the driver modules in my application software. My design has a boot ROM that
initializes the hardware just enough to allow my program to load and execute.
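One way to realize that driver-module separation is to route every hardware access through a table of function pointers, then bind the table to real port I/O on the target or to a simulation on the development PC. The sketch below is my illustration of the idea; the names, the control-port value and the simulated device are assumptions, not details from the instrument described above.

```c
/* Driver table: application code calls only through these pointers. */
struct io_driver {
    unsigned char (*read_status)(void);
    void (*write_control)(unsigned char val);
};

/* Simulated device used when the code runs on the development PC. */
static unsigned char sim_reg;
static unsigned char sim_read_status(void)       { return sim_reg; }
static void sim_write_control(unsigned char val) { sim_reg = val; }

/* On the target, these two pointers would instead be bound to
 * routines doing real inp()/outp() port accesses. */
static struct io_driver drv = { sim_read_status, sim_write_control };

/* Application code sees only the driver table, so it runs unchanged
 * on host and target. */
void device_reset(void)
{
    drv.write_control(0x80);    /* hypothetical reset command */
}

unsigned char device_ready(void)
{
    return (unsigned char)(drv.read_status() == 0x80);
}
```

Swapping the two pointers (or the whole table) is the only change needed to move between the simulated and the real hardware.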
Just as you must decide how much (if any) of the BIOS to keep, you also must determine
where to keep the application program. Many solutions exist to this problem. If your
system has an integral disk drive, the problem's solved. For diskless systems, note that a
conventional BIOS automatically starts a ROM-based program if it has the correct header
information (Fig 2). You simply burn the application program into a PROM, insert it into
the board and it automatically runs on power-up.
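The "correct header information" a conventional BIOS looks for is small. During POST, the BIOS scans the expansion-ROM region in 2-KB steps for a 55h/AAh signature; if the ROM's bytes sum to zero mod 256, it far-calls offset 3. The field layout below is the standard one; the checksum-fixup helper is my illustration, assuming the conventional rule that the last byte is reserved to balance the sum.

```c
#include <stddef.h>

/* Standard PC expansion-ROM header, found at the start of the ROM. */
struct rom_header {
    unsigned char sig0;     /* 0x55 */
    unsigned char sig1;     /* 0xAA */
    unsigned char blocks;   /* ROM length in 512-byte units */
    unsigned char entry[3]; /* usually a JMP to the init routine */
};

/* Compute the value for the ROM's final byte so the whole image
 * sums to zero mod 256, as the POST checksum test requires. */
unsigned char rom_checksum_fixup(const unsigned char *rom, size_t len)
{
    unsigned char sum = 0;
    size_t i;
    for (i = 0; i + 1 < len; i++)   /* every byte but the last */
        sum = (unsigned char)(sum + rom[i]);
    return (unsigned char)(0u - sum);
}
```

A PROM-burning step that writes the fixup value into the image's last byte is all it takes to make the BIOS accept and launch the ROM.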
One problem with this scheme, though, is that a program running out of ROM is slower
than the same code in RAM. Getting a program out of ROM and into RAM is easy except for
memory size: if the ROM program is large, the system might not have enough RAM space to
accommodate it. But by writing a custom BIOS, you can configure the ROM for page-mode
access, freeing the address space and easily downloading the program to RAM. This
technique is extendible to storing program overlays in ROM. I've also used this technique
with flash memories in place of PROMs. The boot ROM not only loads RAM with the program,
it can also program the flash memories using a downloaded program.
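The download loop itself is short. The sketch below imitates a paged ROM on a host machine: buffers stand in for the fixed linker addresses a real boot ROM would use, the page-select routine is a stub, and the names and sizes are my assumptions.

```c
#include <string.h>

#define APP_SIZE   4096u
#define PAGE_SIZE  1024u   /* hypothetical ROM page-window size */

static unsigned char rom_image[APP_SIZE];   /* stands in for the PROM */
static unsigned char ram_image[APP_SIZE];   /* execution copy */

/* In real hardware this would write a page-select register. */
static void rom_select_page(unsigned page) { (void)page; }

/* Copy the application out of the paged ROM, one window at a time. */
void load_app_to_ram(void)
{
    unsigned page, pages = APP_SIZE / PAGE_SIZE;
    for (page = 0; page < pages; page++) {
        rom_select_page(page);
        memcpy(ram_image + page * PAGE_SIZE,
               rom_image + page * PAGE_SIZE, PAGE_SIZE);
    }
    /* A real loader would now jump to the entry point in RAM, e.g.
     * ((void (*)(void))ram_image)(); */
}
```

With flash memories in place of PROMs, the same loop runs in the other direction: a downloaded routine writes each page of the flash from a RAM buffer.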
While on the subject of application software, make sure that your development
environment lets you easily port code to a ROM-based environment. For example, most C
compilers don't easily support ROM systems. I use the Aztec C Commercial Developer's
Package from Manx Software Systems (Shrewsbury, NJ (800) 221-0440). Not only does it
include library source code, it also allows an easy transition to a ROM-based system. With
the Manx package I maintain one set of code that runs on both a standard PC and my target
application with just minor modifications. In order to use this technique, though, you
must know where all the DOS and BIOS calls are in the libraries. The source code makes it
infinitely easier to obtain this information.
Note that running the target-system software on the development PC causes a few
headaches. For instance, the PC might not have all the target system's I/O. A couple of
options can alleviate such problems. First, you can build a board that either allows the
development PC to access target I/O or simulates that I/O. An easy way to develop this
hardware is to purchase a prototype I/O channel card with built-in address decoding and
design enough of your circuits to simulate the target. Another solution is to build hooks
into your software that allow the I/O to work properly in the development PC. Because all
access to hardware goes through software drivers (you've written the code that way,
right?), a small change to the hardware driver allows the target code to work on the
development system or in the target system. PE&IN