## Monthly Archives: June 2016

## Sensor Fusion Works

How do you know where you are? What is real? That’s the core question sensor fusion is supposed to answer. Not in a philosophical way, but in the literal “Am I about to autonomously crash into the White House? Because I’ve been told not to do that” way that is built into the firmware of commercial quadcopters.

Sensors are far from perfect devices. Each has conditions that will send it haywire.

Inertial Measurement Units are a classic case: some IMU chips look superior on paper, but have gained a reputation for “locking up” in the field when subjected to certain conditions, such as flying at 60 km/h through the air on a quadcopter without rubber vibration mounts.

In these cases the IMU can be subject to vibrations (while still inside spec) that hit resonant harmonics of its micromechanical parts. The IMU may have been designed for use in a mobile phone alongside a small vibration motor, not next to multiple motors spinning at 20,000 RPM. Suddenly the robot thinks it’s flipping upside down (when it’s not) and rotates to compensate. Some pilots fly with data loggers, and have captured the moment when the IMU bursts with noise immediately before a spectacular crash.

So, how do we cope with imperfect sensors? We can add more, but aren’t we just compounding the problem?

Your blind spots get smaller the more sensors you have, but the math gets harder in order to cope with the resulting fuzziness. Modern sensor-fusion algorithms are “belief propagation” systems, with the Kalman filter being the classic example.

*Naze32 flight controller with onboard “sensor fusion” Inertial Measurement Unit. This one has flown many times.*

### The Kalman Filter

At its heart, the algorithm has a set of “belief” factors for each sensor. Each loop, data coming from the sensors is used to statistically improve the location guess, but the quality of the sensors is judged as well.

Robotic systems also include constraints that encode the real-world knowledge that physical objects move smoothly and continuously through space (often on wheels), rather than teleporting around like GPS coordinate samples might imply.

That means that if a sensor which has always given excellent, consistent values starts reporting unlikely and frankly impossible things (as GPS/radio systems do when you enter a tunnel), that sensor’s believability rating gets downgraded over a few millisecond-long iterations, until it starts talking sense again.

This is better than just averaging or voting because the Kalman filter can cope with the *majority* of its sensors going temporarily crazy, so long as one keeps making good sense. It becomes the lifeline that gets the robot through the dark times.

The Kalman filter is an application of the more general concepts of Markov Chains and Bayesian Inference, which are mathematical systems that iteratively refine their guesses using evidence. These are tools designed to help science itself test ideas (and are the basis of what we call “statistical significance”).
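To make the predict/update rhythm concrete, here is a heavily simplified one-dimensional sketch in Python. The function name and the variance values are invented for illustration; a real flight controller runs a multi-dimensional version with a belief term per sensor, but the loop structure is the same.

```python
def kalman_1d(measurements, meas_var, process_var=1e-3):
    """Fuse a stream of noisy scalar readings into a running
    estimate plus a variance that tracks how much we trust it."""
    estimate = measurements[0]
    est_var = meas_var
    for z in measurements[1:]:
        # Predict: real objects move smoothly, so uncertainty
        # only grows a little between readings.
        est_var += process_var
        # Update: blend prediction and measurement, weighted by
        # how believable each currently is (the Kalman gain).
        gain = est_var / (est_var + meas_var)
        estimate += gain * (z - estimate)
        est_var *= (1.0 - gain)
    return estimate, est_var

# Five noisy position readings scattered around 5.0:
pos, var = kalman_1d([5.1, 4.9, 5.0, 5.2, 4.8], meas_var=0.04)
# pos ends up close to 5.0, and var shrinks below the raw
# measurement variance as evidence accumulates.
```

Note how a wildly wrong reading only moves the estimate by `gain * (z - estimate)`: when the filter is already confident, the gain is small and the outlier is largely ignored.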

## Analysis of BJT Switch

The proliferation of the Arduino, the Raspberry Pi, the TI MSP430 LaunchPad, and various other embedded development platforms has led to a corresponding proliferation of a basic switch/driver circuit based on an NPN bipolar junction transistor. This configuration allows a microcontroller output pin to safely and conveniently control high-current loads. The following diagram depicts two standard applications—high-intensity illumination with an LED and relay control.

This circuit certainly has its advantages:

- It’s simple and uses readily available parts.
- It’s flexible—a wide variety of voltages and load currents can be accommodated by choosing an appropriate transistor.
- You can easily migrate to a galvanically isolated implementation by using an optocoupler instead of a BJT.

However, it also comes with a risk: complacency. It’s simple and widespread, and this might encourage us to simply drop in a circuit that we find online and assume that it will work.

As is usually the case in life, one size doesn’t fit all. The following are important quantities that you need to consider before you finalize your BJT switch/driver design:

- BJT base current (I_{B}), which is also the current sourced by the GPIO pin
- BJT active-region DC current gain (β)
- BJT collector current (I_{C}), which is also the load current

Here is a visual representation:

I_{B} should not exceed the maximum output current spec for the pin driving the base. To check this, assume a constant voltage drop of 0.7 V for the base-to-emitter junction. This gives you the following:

I_{B} = (V_{IO} − 0.7 V) / R_{B}

where V_{IO} is the voltage supply for the chip’s input/output circuitry (common values are 5 V and 3.3 V) and R_{B} is the base resistor.
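As a quick sanity-check sketch (the 1 kΩ resistor and 12 mA pin budget are example values, not from any particular datasheet):

```python
GPIO_MAX_SOURCE_A = 0.012   # e.g. 12 mA per pin; check your MCU's datasheet

def base_current(v_io, r_base, v_be=0.7):
    """Current sourced by the GPIO pin through the base resistor,
    assuming a fixed 0.7 V base-to-emitter drop."""
    return (v_io - v_be) / r_base

ib = base_current(3.3, 1000.0)   # 3.3 V pin, 1 kOhm base resistor: ~2.6 mA
assert ib <= GPIO_MAX_SOURCE_A   # comfortably within a 12 mA budget
```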

Next we need to confirm that the collector current is 1) high enough to properly drive the load and 2) not so high that it causes the load to malfunction. The first step is to calculate an approximate minimum collector current using the BJT’s minimum value for active-region current gain: I_{Cmin} = β_{min} × I_{B}.

If this is less than your minimum acceptable load current, you cannot be certain that the circuit will function properly. To remedy this, increase the base current by using a smaller base resistor or choose a transistor with higher β.

The next step is to calculate the approximate maximum collector current using the maximum value for β: I_{Cmax} = β_{max} × I_{B}. If I_{Cmax} is too high for your load, you need a resistor to limit the collector current.

Whenever you force I_{C} to be less than β × I_{B}, you move the BJT into the saturation region: the additional voltage drop created by the resistor lowers the collector voltage and causes the base-to-collector junction to become insufficiently reverse-biased for active-region operation. (In practice you cannot set the collector current using I_{C} = β × I_{B} anyway, because β is so variable; instead, you ensure that the transistor has more than enough current gain and then add resistance to limit I_{C}.)

When the transistor is in saturation, you assume a fixed voltage for the collector-to-emitter junction, referred to as V_{CEsat}; check the BJT’s datasheet, or use the common but imprecise value of 0.2 V. Then you use Ohm’s law in conjunction with V_{CC} and V_{CEsat} to calculate the collector current and confirm that it is in the acceptable range for your load.
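The whole collector-side check can be sketched in a few lines of Python. All numbers here are hypothetical examples (a 2.6 mA base current, a β spread of 100–300, a 47 Ω collector resistor), not values from a specific part:

```python
def ic_range(ib, beta_min, beta_max):
    """Active-region collector-current bounds given the
    datasheet spread in DC current gain."""
    return beta_min * ib, beta_max * ib

def ic_saturated(v_cc, r_c, v_ce_sat=0.2):
    """With a collector resistor forcing saturation, Ohm's law
    (minus the fixed V_CEsat drop) sets the collector current."""
    return (v_cc - v_ce_sat) / r_c

# Hypothetical design: I_B = 2.6 mA, beta between 100 and 300.
ic_min, ic_max = ic_range(0.0026, 100, 300)   # 0.26 A to 0.78 A
# If 0.78 A would damage the load, add a collector resistor and
# design for saturation; e.g. 5 V supply, 47 Ohm, V_CEsat = 0.2 V:
ic = ic_saturated(5.0, 47.0)                  # ~0.10 A, set by the resistor
```

The wide `ic_min`/`ic_max` spread is exactly why the article recommends designing for saturation rather than trusting β to set the load current.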

## MOSFET Differential Pair

### The Effect of Channel-Length Modulation

In the two previous articles, we introduced the actively loaded MOSFET differential pair and discussed two prominent advantages of this configuration—namely, improved biasing (compared to the use of drain resistors) and differential-to-single-ended conversion without loss of gain.

Now it’s time to analyze the differential gain of this circuit; before we can determine the gain, though, we need to understand the concept of small-signal output resistance and how we incorporate this resistance into our analysis. (If you’re not sure what I mean by “small-signal,” take a look at the “Two Outputs or One?” section in this article.)

The first thing to understand is that small-signal output resistance is not an inherent, precise property of a real MOSFET. Rather, it is a *model* that we use to account for the effect of channel-length modulation on a MOSFET’s small-signal behavior. Recall that MOSFETs used for linear amplification are typically biased in the saturation region, which corresponds to when the FET’s channel is “pinched off” at the drain end.

In a simplified analysis, we use the following equation for saturation-mode drain current:

I_{D} = ½ μ_{n}C_{ox}(W/L)(V_{GS} – V_{TH})²

This equation conveys the assumption that drain current is not affected by the drain-to-source voltage. The MOSFET acts like a dependent current source controlled by the overdrive voltage V_{OV}, where V_{OV} = V_{GS} – V_{TH}. This assumption is based on the idea that increasing the drain-to-source voltage does not alter the channel once it has become pinched off.

As you’ve probably noticed, though, the real world is not particularly conducive to idealized situations such as this.

The reality is that increasing the drain-to-source voltage does have a nontrivial effect on the channel: the pinch-off point is moved toward the source, and the result is more drain-to-source current as drain-to-source voltage increases. This means that we need an additional circuit element to account for this additional current, and by now you have probably guessed that the element we’re looking for is a resistor—namely, the small-signal output resistance r_{o}.

So now we have a MOSFET, which is still assumed to be immune to increasing drain-to-source voltage, in conjunction with an ordinary resistor, which (like any resistor) has a current flow equal to the voltage across the resistor divided by the resistance. As drain-to-source voltage increases, more current flows through the resistor, and this current compensates for the lack of change in the drain current of the idealized MOSFET. By combining these two currents—drain current of the idealized FET and current through the resistor—we can find the total drain current for a *real* MOSFET.

Ignoring channel-length modulation is equivalent to assuming that the small-signal output resistance of the FET is infinite. It follows, then, that higher output resistance is desirable if we want a MOSFET to behave more like the idealized component in which drain current is not influenced by drain-to-source voltage. As we will see later, small-signal output resistance is determined in part by the FET’s DC bias current, so we do have some ability to increase the output resistance of a given device.
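A small numeric sketch makes the relationship visible. It uses the common first-order model I_D = ½k′(W/L)V_OV²(1 + λV_DS) and the standard approximation r_o ≈ 1/(λI_D); the device values (k′(W/L) = 1 mA/V², V_OV = 0.4 V, λ = 0.02 V⁻¹) are invented for illustration:

```python
def drain_current(k_wl, v_ov, v_ds=0.0, lam=0.0):
    """Saturation-region drain current. lam is the channel-length-
    modulation parameter (lambda); lam=0 gives the idealized FET
    whose current ignores V_DS entirely."""
    return 0.5 * k_wl * v_ov**2 * (1.0 + lam * v_ds)

def output_resistance(lam, i_d):
    """Common approximation r_o ~ 1/(lambda * I_D): more DC bias
    current means lower small-signal output resistance."""
    return 1.0 / (lam * i_d)

# Invented device values: k'(W/L) = 1 mA/V^2, V_OV = 0.4 V.
i_ideal = drain_current(1e-3, 0.4)                      # 80 uA, any V_DS
i_real = drain_current(1e-3, 0.4, v_ds=2.0, lam=0.02)   # ~83.2 uA at V_DS = 2 V
r_o = output_resistance(0.02, i_ideal)                  # ~625 kOhm
```

The extra ~3 µA at V_DS = 2 V is exactly the current that the article attributes to r_o: the idealized FET supplies `i_ideal`, and the parallel resistor supplies the V_DS-dependent remainder.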

One last note before we move on: Output resistance is itself a simplification of real MOSFET behavior. The subatomic action taking place in a MOSFET’s channel is not exactly straightforward, and it comes as no surprise to me that the simple linear relationship represented by a drain-to-source resistor is not the whole story.