The true test for a new technique is the real world -- not a development lab
It's easy for me to write that you ought to be using a new technology; I simply throw
out a couple of examples, and suddenly anyone should be able to take advantage of the
latest high-tech technique. However, life's seldom as easy as writing a column --
especially life in embedded development. Take, for example, my last column about single-chip
microcontrollers. Its message was that they're, as my father might say, "The greatest
thing since sliced bread." In many ways they are, but the appropriate testing ground
for a new technique is the real world, not a chip manufacturer's development lab. In this
vein, I'm going to give you a behind-the-scenes look at some of the trade-offs, and
even failures, of an embedded-system designer wrestling with new technology in a
real-world application.
My firm produces many designs for the medical field, especially invasive designs where
the sensors for our instruments end up in a patient's blood stream. In designing
instruments for this field, patient-safety issues dominate much of the electronics design.
Obviously you don't want to create something that can inadvertently kill a patient.
Although it typically takes tens of milliamps to kill someone, placing the point of
electrical contact in the bloodstream drops the lethal current to a few tens of microamps
(µA). Therefore, medical designs with intimate patient contact must hold leakage currents
to less than 10 µA.
In addition, these designs must also exhibit very high isolation voltages. Current
standards specify 2500V in the US and 3750V for most of the rest of the world. To achieve
this isolation, instrument designs typically use optoisolators to get signals across the
isolation barrier, but magnetic or capacitive coupling is also suitable. A more
significant problem is getting power across this barrier. It's not easy to move lots of
power across a barrier designed to block any power transfer. Therefore, circuitry on the
patient side of the interface must be low power.
Now that you know the basic safety principles of a simple invasive medical device,
let's outline an instrument requirement. This fictitious instrument must continuously
measure the hemoglobin concentration in the bloodstream using an electrical sensor. It
must complete its readings every 5 sec and report them to the operator as a concentration
in mg/dl. In addition, the unit must keep the sensor energized even when the main power
switch is off; otherwise the sensor's calibration becomes invalid. Likewise, disconnecting
the sensor from the instrument also invalidates the calibration. The unit must never use a
sensor with an invalid calibration.
The heart of the system is a computer -- called the master -- that handles all
calculations and user I/O functions. It also coordinates the basic operation and hides the
internal processing from the user (remember my articles on usability and intuitive
design?). For several reasons the patient isolation barrier must sit between the sensor
electronics and the master. Also note that the expected yearly volume for this instrument
is about 500 units.
Now it's time to move into the design stage. Obviously, the system needs an alternate
power source in the sensor electronics to power it while the rest of the instrument's off.
Because of shelf-life and intended applications for the instrument, the sensor electronics
uses a lithium button cell as the backup power. When you consider the required 3-year time
between battery changes, the estimated time the sensor requires backup power, and standard
lithium battery sizes, the sensor electronics can consume no more than 100 µA at 3V. This
value assumes that when the user disconnects the sensor, the sensor-powering circuitry
switches off the battery power.
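The arithmetic behind that 100-µA figure can be sketched in a few lines. The cell capacity and the fraction of time spent on backup power below are hypothetical stand-ins; only the 3-year service interval comes from the requirements above.

```python
# Back-of-envelope battery budget for the sensor electronics. The cell
# capacity (1000 mAh) and backup duty (38%) are assumed figures chosen
# only to illustrate how the 100-uA allowance falls out.
HOURS_PER_YEAR = 365 * 24

def max_average_current_ua(capacity_mah, service_years, backup_fraction):
    """Largest average drain (in uA) the cell supports over its service life."""
    backup_hours = service_years * HOURS_PER_YEAR * backup_fraction
    return capacity_mah / backup_hours * 1000.0   # mA -> uA

budget = max_average_current_ua(capacity_mah=1000.0,    # assumed cell size
                                service_years=3.0,
                                backup_fraction=0.38)   # assumed duty cycle
print(f"current budget: {budget:.0f} uA")   # close to 100 uA
```

Change any one of the three inputs and the allowable drain moves with it, which is exactly why battery size and service interval become marketing questions later on.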
Putting it all together
Talk about an ideal application for a low-cost microcontroller! Together with an
A/D and a couple of optoisolators, it can manage sensor energization, sensor disconnection,
signal conversion, and communications over the isolation barrier. This sensor-side micro --
the slave -- would send the converted sensor reading to the master serially, thereby
reducing to two the number of expensive, high-isolation-voltage optoisolators the system
needs. Because it can operate from a battery, the slave could still detect sensor
disconnection when the master is off. What a nice system -- too bad I couldn't make it work.
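For illustration, here's what such a slave-to-master serial link might carry. The frame format -- start byte, fixed-point reading, checksum -- is entirely hypothetical, not from the design; it only shows why a serial link needs just one optoisolator per direction.

```python
# Hypothetical frame for one sensor reading crossing the isolation
# barrier: start byte, 16-bit fixed-point reading (mg/dl x 10), and a
# simple 8-bit checksum. None of this comes from the actual instrument.
START = 0xA5

def encode_reading(mg_dl):
    raw = round(mg_dl * 10) & 0xFFFF           # 0.1 mg/dl resolution
    payload = [START, raw >> 8, raw & 0xFF]
    checksum = sum(payload) & 0xFF
    return bytes(payload + [checksum])

def decode_reading(frame):
    assert frame[0] == START, "bad start byte"
    assert sum(frame[:3]) & 0xFF == frame[3], "checksum mismatch"
    return ((frame[1] << 8) | frame[2]) / 10.0

frame = encode_reading(13.4)
print(decode_reading(frame))   # 13.4
```

A four-byte frame every 5 seconds is a trivially slow bit stream, well within what an optocoupled UART link can handle.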
Embedded design is more than just selecting a micro and a language; it's finding a
correct solution to the problem within system constraints. To review, the constraints for
this project are:
- An annual quantity of 500 units.
- The patient side of the instrument must operate both with line power and battery power.
- Under battery power, the patient side must use less than 100 µA at 3V, including sensor
electronics and the microcontroller.
- The microcontroller must be able to run at 3.0V nominal, 2.7V minimum.
On examining these constraints, you can see that the annual quantity quickly limits the
choice of processors. You can't use a masked ROM part because the quantity isn't high
enough to justify the setup costs. Typically, chip manufacturers expect minimum quantities
of 5000 to 10,000 pieces/yr. before they'll even talk to you. This limitation leaves the
options of using microcontrollers with OTP (one-time programmable) ROM or external program
memory.
Datasheets from Philips Corp (Sunnyvale, CA (408) 991-2000) for its 8051 derivatives
show that the firm sells low-power microcontrollers such as the 8XCL410 that work down to
1.8V. These chips come with either internal masked ROM or can use external memory.
Unfortunately, the firm's OTP microcontrollers are neither low-power nor low-voltage
devices. As previously noted, masked ROM is out so the only remaining alternative is
external memory. Wrong! UVEPROMs, parallel EEPROMs, and flash memory don't yet work at 3V.
Even if 5V were available, these memories' power hunger would exceed the power budget.
One potential way out of this problem is a technique I used years ago at my previous
company when we encountered a similar situation. We were designing a portable system for a
ship that carried containers filled with fruits and vegetables. The device, which we
colloquially referred to as the "Sniffer," contained 20 fire-extinguisher
bottles each containing a vacuum. The Sniffer's purpose was to see if the produce had
matured during the voyage. If it had, the wholesaler had to sell it quickly; otherwise, he
could put it into cold storage. The shipper programmed a panel through a set of thumbwheel
switches with the time during the sea voyage at which each bottle should open its valve
and suck in a sample of the ambient atmosphere. After capturing a sample, the valve
closed. At the end of the voyage, the wholesaler took all 20 bottles to a gas
chromatograph and tested for the presence of compounds that provided evidence of produce
maturation. The problem was that at that time (1981), affordable low-power CMOS UVEPROMs
didn't exist. Our solution was to use NMOS UVEPROMs, and on power up the system
transferred the program from ROM to CMOS RAM and then turned off the NMOS ROMs. The system
worked well, and the Sniffer happily traveled many sea miles until it met an untimely end
at the hands of 500 pounds of errant avocados. The point is, you can find many ways to
design low-power systems.
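The Sniffer's shadow-copy sequence can be sketched as a toy model. The program image and the current figures below are invented for illustration; the point is only the order of operations: copy while the ROM is powered, then cut the ROM off and run from RAM.

```python
# Toy model of the Sniffer's power-up trick: shadow the program from a
# power-hungry NMOS UVEPROM into CMOS RAM, then shut the ROM down and
# execute from RAM. All contents and current figures here are invented.
NMOS_ROM_UA = 80_000   # assumed NMOS EPROM active current, in uA
CMOS_RAM_UA = 50       # assumed CMOS RAM active current, in uA

rom = bytes(range(256))        # stands in for the EPROM's program image
ram = bytearray(len(rom))      # battery-friendly CMOS RAM

ram[:] = rom                   # 1. copy the program while the ROM is on
rom_powered = False            # 2. cut power to the ROM for good
running_current = NMOS_ROM_UA if rom_powered else CMOS_RAM_UA
print(f"steady-state memory current: {running_current} uA")
```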
Unfortunately, the same technique (transferring ROM to RAM) won't work for the medical
instrument because even though RAMs hold their information down to 2V, they don't operate
at 3V! Some readers are probably thinking, "Why not just use two lithium batteries in
series and boost the voltage to 6V?" Again, that level is outside the operating range
of typical ROM and RAM memories. You could use a voltage regulator to step the
level down to 5V, but a regulator also consumes power. The lowest quiescent current I found
for a regulator was in the 40-µA range. That value's low, but not low enough: even though
the system saves power once the user disconnects the sensor, the microcontroller still
needs its juice, and the regulator's added load would be too much for the battery to
support over its required lifetime.
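A quick arithmetic check makes the point. Only the 100-µA budget and the 40-µA regulator figure come from the text; the microcontroller and memory standby currents below are assumptions for illustration.

```python
# Why 40 uA of regulator quiescent current sinks the two-cell idea:
# the regulator draws that current whether or not the sensor is
# connected. Micro and memory standby figures are assumed, not measured.
BUDGET_UA = 100.0            # allowance from the battery-life budget
regulator_quiescent = 40.0   # best regulator quiescent current found (uA)
micro_standby = 50.0         # assumed 5V microcontroller standby draw (uA)
memory_standby = 30.0        # assumed ROM/RAM standby draw (uA)

active_total = regulator_quiescent + micro_standby + memory_standby
print(f"total standby draw: {active_total:.0f} uA")
print("within budget" if active_total <= BUDGET_UA else "over budget")
```

Even if the assumed micro and memory numbers were halved, the regulator alone would consume 40% of the entire budget before doing any useful work.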
In this application, even though a microcontroller at first glance looks like an ideal
solution for the sensor electronics, other constraints preclude its use. The remaining
solutions include discrete electronics (74HC series parts work down to 2V), adding more
battery power, or changing the acceptable lifetime. The last two choices are probably
marketing decisions because they affect battery size and weight, or customer acceptance of
more frequent service calls and downtime. In closing, one-chip microcontrollers are neat,
but the lack of appropriate peripheral chips still limits their use, at least in this
particular application. So don't discount them out of hand, but don't assume they're a
magic bullet either. PE&IN