The Advantages of the Actively Loaded MOSFET Differential Pair

The previous article presented the “drain-resistor problem”: We need a larger drain resistor to achieve higher gain, but more drain resistance means a lower DC bias voltage at the output node.

This is a problem because the output voltage is also the MOSFET’s drain voltage, and a lower drain voltage corresponds to a higher risk of pushing the FET out of saturation and into the triode region. We suggested that a current source might resolve this problem by providing high gain without negatively affecting the bias conditions.

The following diagram gives you an idea of the improved biasing situation associated with the use of a current mirror instead of drain resistors.

We have not yet discussed the small-signal resistance of the active-load current mirror, so for now you’ll just have to believe me when I tell you that it is quite high. But as you can see in the circuit diagram, this large small-signal resistance does not apply to the biasing conditions: The bias voltage at the output node is determined by whatever gate-to-source voltage corresponds to Q3’s drain current.

If we consider that this drain current is not particularly large and that the magnitude of Q3’s threshold voltage is maybe 0.7 V, we can guess that the magnitude of VGS will be quite small, even though the current mirror’s large small-signal resistance provides high gain.

Let’s confirm this guess via simulation. Here is the LTspice circuit, with the bias voltage labeled:

(The SPICE models for the FETs can be downloaded here.) This simulation demonstrates that the current mirror’s large small-signal resistance (and thus high gain) does not require a large DC voltage drop. With this particular circuit, Q3 can generate current equal to IBIAS/2 (i.e., 250 µA) with a gate-to-source voltage of only –1.04 V, leading to an output-node bias voltage of (3.3 + VGS) = 2.26 V.

While we’re on this subject, I should point out that the bias voltage will be influenced by the width-to-length ratio of the current-mirror transistors. Recall that the saturation-mode relationship between gate-to-source voltage and drain current (if we ignore channel-length modulation) is the following:

$$I_D = \frac{1}{2}\,\mu C_{ox}\,\frac{W}{L}\left(V_{GS} - V_{TH}\right)^2$$
We can see that a lower width-to-length ratio will cause the FET to conduct less drain current for the same VGS. Likewise, if the drain current is held constant and the width-to-length ratio is reduced, the magnitude of VGS will have to increase. Theoretically, then, we could fine-tune the bias voltage by adjusting the width-to-length ratio of the current-mirror transistors. Consider the following simulation:

The output-voltage range is restricted by the positive supply voltage and the lowest voltage that allows Q2 to remain in saturation. The condition for saturation is VGD ≤ VTH, so this lower limit is equal to VTH volts below the DC offset of the voltage applied to the gate of Q2. In this circuit we have a DC offset of 0 V and VTH for the NMOS transistors is ~0.5 V, so the lower limit is around –0.5 V. Thus, by reducing the width to 10 µm, we have moved the output node’s bias voltage closer to the middle of the allowable range.
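As a rough numerical illustration of this W/L tuning, here is a small Python sketch based on the square-law equation above. The process transconductance and threshold voltage are placeholder values, not the parameters of the SPICE models used in the simulation, so the numbers only show the trend.

```python
# Square-law estimate of |VGS| (and thus the VOUT bias point) versus W/L.
# k_p and V_TH are assumed placeholder values, not the SPICE-model parameters.
k_p  = 100e-6    # PMOS process transconductance (mu_p * Cox), A/V^2
V_TH = -0.7      # PMOS threshold voltage, V
I_D  = 250e-6    # mirror output current, IBIAS / 2
VDD  = 3.3

for w_over_l in (50, 20, 10, 5):
    v_ov  = (2 * I_D / (k_p * w_over_l)) ** 0.5   # overdrive voltage magnitude
    v_gs  = -(abs(V_TH) + v_ov)                   # negative for a PMOS device
    v_out = VDD + v_gs                            # bias voltage at the output node
    print(f"W/L = {w_over_l:>2}:  VGS = {v_gs:5.2f} V,  VOUT bias = {v_out:4.2f} V")
```

Lowering W/L increases the magnitude of VGS, which pulls the output-node bias voltage down toward the middle of the allowable range.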

Before we move on, I should point out that in real life the biasing scheme shown above is not practical. The VOUT bias voltage is very important: If it’s too high (or too low), the output’s positive (or negative) signal swing will be restricted. The biasing accomplished in our simulations is reliable only because Q3 and Q4 are perfectly matched.

We can rely on IC manufacturing technology to achieve good matching, but not perfect matching, and any mismatches between the two transistors will lead to variations in the VOUT bias voltage. Thus, real-life implementations employ special biasing circuits that are not so sensitive to manufacturing imperfections.

Two Outputs or One?

You may have noticed that the left-hand VOUT disappeared when we switched from drain resistors to a current mirror. It turns out that an additional (and perhaps somewhat unexpected) benefit of active loading is that it converts the output signal from differential to single-ended without loss of gain.

If you have read The Basic MOSFET Differential Pair, you might remember that we analyzed the overall gain differentially, meaning the output signal was defined as VOUT1 – VOUT2. These two signals are 180° out of phase, so the amplitude of the resulting output signal is doubled.

Sensor Fusion Works

How do you know where you are? What is real? That’s the core question sensor fusion is supposed to answer. Not in a philosophical way, but the literal “Am I about to autonomously crash into the White House? Because I’ve been told not to do that” way that is built into the firmware of commercial quadcopters.

Sensors are far from perfect devices. Each has conditions that will send it crazy.

Inertial measurement units (IMUs) are a classic case: there are IMU chips that seem superior on paper but have gained a reputation for “locking up” in the field when subjected to certain conditions, like flying through the air at 60 km/h on a quadcopter without rubber cushions.

In these cases the IMU can be subject to vibrations that, while inside spec, match harmonics with its micromechanical parts. The IMU may have been designed for use in a mobile phone next to a tiny vibration motor (a “wobbler”), not next to multiple motors humming at 20,000 RPM. Suddenly the robot thinks it’s flipping upside down (when it’s not) and rotates to compensate. Some pilots fly with data loggers and have captured the moment when the IMU bursts with noise immediately before a spectacular crash.

So, how do we cope with imperfect sensors? We can add more, but aren’t we just compounding the problem?

Your blind spots get smaller the more sensors you have. But the math gets harder in order to deal with the resulting fuzziness. Modern algorithms for doing sensor fusion are “Belief Propagation” systems—the Kalman filter being the classic example.

Naze32 flight controller with onboard “sensor fusion” Inertial Measurement Unit. This one has flown many times.

The Kalman Filter

At its heart, the algorithm has a set of “belief” factors for each sensor. Each loop, data coming from the sensors is used to statistically improve the location guess, but the quality of the sensors is judged as well.

Robotic systems also include constraints that encode the real-world knowledge that physical objects move smoothly and continuously through space (often on wheels), rather than teleporting around like GPS coordinate samples might imply.

That means if one sensor that has always given excellent, consistent values starts telling you unlikely and frankly impossible things (such as a GPS/radio system when you go into a tunnel), that sensor’s believability rating gets downgraded over a few iterations (a matter of milliseconds) until it starts talking sense again.

This is better than just averaging or voting because the Kalman filter can cope with the majority of its sensors going temporarily crazy, so long as one keeps making good sense. It becomes the lifeline that gets the robot through the dark times.
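To make the “belief factor” idea concrete, here is a minimal one-dimensional Kalman filter sketch in Python. The state model, noise variances, and sensor data are all invented for illustration; a real flight controller fuses several sensors and states at once.

```python
import numpy as np

def kalman_update(x_est, p_est, z, q, r):
    """One predict/update cycle of a 1-D Kalman filter.

    x_est, p_est: previous estimate and its variance
    z: new sensor reading
    q: process-noise variance (how much the true state can drift per step)
    r: measurement-noise variance (how much we trust this sensor)
    """
    # Predict: the state is modeled as roughly constant, so only uncertainty grows.
    x_pred, p_pred = x_est, p_est + q

    # Update: blend prediction and measurement using the Kalman gain.
    k = p_pred / (p_pred + r)           # gain approaches 1 when the sensor is trusted
    x_new = x_pred + k * (z - x_pred)   # correct by the innovation, weighted by the gain
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Hypothetical example: a noisy altitude sensor around a true value of 10.0 m.
rng = np.random.default_rng(0)
x, p = 0.0, 1.0
for _ in range(50):
    z = 10.0 + rng.normal(scale=0.5)
    x, p = kalman_update(x, p, z, q=1e-3, r=0.25)
print(f"estimate = {x:.2f} m, variance = {p:.4f}")
```

A large r (a distrusted sensor) shrinks the gain, so wild readings barely move the estimate; that is the “downgraded believability” behavior described above.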

The Kalman filter is an application of the more general concepts of Markov Chains and Bayesian Inference, which are mathematical systems that iteratively refine their guesses using evidence. These are tools designed to help science itself test ideas (and are the basis of what we call “statistical significance”).

Analysis of BJT Switch

The proliferation of the Arduino, the Raspberry Pi, the TI MSP430 LaunchPad, and various other embedded development platforms has led to a corresponding proliferation of a basic switch/driver circuit based on an NPN bipolar junction transistor. This configuration allows a microcontroller output pin to safely and conveniently control high-current loads. The following diagram depicts two standard applications—high-intensity illumination with an LED and relay control.

This circuit certainly has its advantages:

  • It’s simple and uses readily available parts.
  • It’s flexible—a wide variety of voltages and load currents can be accommodated by choosing an appropriate transistor.
  • You can easily migrate to a galvanically isolated implementation by using an optocoupler instead of a BJT.

However, it also comes with a risk: complacency. It’s simple and widespread, and this might encourage us to simply drop in a circuit that we find online and assume that it will work.

As is usually the case in life, one size doesn’t fit all. The following are important quantities that you need to consider before you finalize your BJT switch/driver design:

  1. BJT base current (IB), which is also the current sourced by the GPIO pin
  2. BJT active-region DC current gain (β)
  3. BJT collector current (IC), which is also the load current

Here is a visual representation:

IB should not exceed the maximum output current spec for the pin driving the base. To check this, assume a constant voltage drop of 0.7 V for the base-to-emitter junction. This gives you the following:

$$I_B = \frac{V_{IO} - 0.7\ \text{V}}{R_B}$$

where VIO is the voltage supply for the chip’s input/output circuitry (common values are 5 V and 3.3 V) and RB is the base resistor.

Next we need to confirm that the collector current is 1) high enough to properly drive the load and 2) not so high that it causes the load to malfunction. The first step is to calculate an approximate minimum collector current using the BJT’s minimum value for active-region current gain:

$$I_{C,min} = \beta_{min} \times I_B$$

If this is less than your minimum acceptable load current, you cannot be certain that the circuit will function properly. To remedy this, increase the base current by using a smaller base resistor or choose a transistor with higher β.

The next step is to calculate the approximate maximum collector current using the maximum value for β. If ICmax is too high for your load, you need a resistor to limit the collector current. Whenever you force IC to be less than β × IB, you are moving the BJT into the saturation region: the additional voltage drop (created by the resistor) lowers the collector voltage and causes the base-to-collector junction to become insufficiently reverse-biased for active-region operation. (In practice, it is not advisable to set the collector current via IC = β × IB because β is so variable; instead, you ensure that the transistor has more than enough current gain and then add resistance to limit IC.)

When the BJT is in saturation, you assume a fixed voltage for the collector-to-emitter junction, referred to as VCEsat; check the BJT’s datasheet, or use the common but imprecise value of 0.2 V. Then you use Ohm’s law in conjunction with VCC and VCEsat to calculate the collector current and confirm that it is in the acceptable range for your load.
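As a quick sanity check of the three quantities listed above, here is a short Python sketch with placeholder component values (supply voltage, base resistor, and current-gain limits are all hypothetical); substitute the numbers from your own schematic and datasheet.

```python
# Hedged design check for the NPN switch/driver, using placeholder numbers.
V_IO   = 3.3        # GPIO supply voltage, V
V_BE   = 0.7        # assumed base-emitter drop, V
R_B    = 1_000      # base resistor, ohms (hypothetical)
BETA_MIN, BETA_MAX = 100, 300   # datasheet current-gain limits (hypothetical)
I_PIN_MAX  = 20e-3  # maximum GPIO output current, A
I_LOAD_MIN = 50e-3  # minimum current the load needs, A

I_B = (V_IO - V_BE) / R_B
assert I_B <= I_PIN_MAX, "base resistor too small for this GPIO pin"

I_C_min = BETA_MIN * I_B        # worst-case collector current in active mode
I_C_max = BETA_MAX * I_B        # best-case collector current in active mode

print(f"I_B = {I_B*1e3:.2f} mA, I_C between {I_C_min*1e3:.0f} and {I_C_max*1e3:.0f} mA")
if I_C_min < I_LOAD_MIN:
    print("-> lower R_B or pick a higher-beta transistor")
```

If ICmax turns out too high for the load, the next step, as described above, is to add a collector resistor and recompute IC from VCC, VCEsat, and Ohm’s law.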

Energy Harvesting for Low Power Applications

Energy harvesting is the capture and conversion of small amounts of readily available energy in the environment into usable electrical energy. The electrical energy is conditioned for either direct use or accumulated and stored for later use. This provides an alternative source of power for applications in locations where there is no grid power and it is inefficient to install wind turbines or solar panels.

Other than outdoor solar, no small energy sources provide a great deal of energy. However, the energy captured is adequate for most wireless applications, remote sensing, body implants, RFID, and other applications at the lower segments of the power spectrum. And even if the harvested energy is low and incapable of powering a device, it can still be used to extend the life of a battery.

Energy harvesting is also known as energy scavenging or micro energy harvesting.

Why Harvest Energy

Most low-power electronics, such as remote sensors and embedded devices, are powered by batteries. However, even long-lasting batteries have a limited lifespan and must be replaced every few years. The replacements become costly when there are hundreds of sensors in remote locations. Energy harvesting technologies, on the other hand, can provide a virtually unlimited operating life for low-power equipment and eliminate the need to replace batteries where doing so is costly, impractical, or dangerous.

Most energy harvesting applications are designed to be self-sustaining, cost-effective, and to require little or no servicing for many years. In addition, the power is used close to the source, eliminating transmission losses and long cables. If the harvested energy is sufficient to power the device directly, the application or device can operate batteryless.
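To make the batteryless case concrete, here is a rough Python sketch (all numbers are hypothetical) that compares the average power drawn by a duty-cycled sensor node with the power a harvester might deliver.

```python
# Back-of-the-envelope check with hypothetical numbers: can a harvester keep a
# duty-cycled wireless sensor node running without a battery?
harvested_power_uw = 150      # average power from the transducer, microwatts
sleep_current_ua   = 2        # sensor node sleep current, microamps
active_current_ma  = 12       # current during a measurement + radio burst, mA
active_time_ms     = 20       # length of each burst, ms
period_s           = 60       # one burst per minute
supply_v           = 3.0

# Average load power = sleep power + (burst energy / period)
sleep_p_uw  = sleep_current_ua * supply_v
burst_e_uj  = active_current_ma * 1e-3 * supply_v * active_time_ms * 1e-3 * 1e6
avg_load_uw = sleep_p_uw + burst_e_uj / period_s

print(f"average load: {avg_load_uw:.1f} uW vs harvested: {harvested_power_uw} uW")
print("self-sustaining" if harvested_power_uw > avg_load_uw
      else "needs a battery or a bigger harvester")
```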

The Building Blocks of an Energy Harvesting System

The process of energy harvesting takes different forms based on the source, amount, and type of energy being converted to electrical energy. In its simplest form, the energy harvesting system requires a source of energy such as heat, light, or vibration, and the following three key components.

  • Transducer/harvester: Collects energy from the source and converts it into electrical energy. Typical transducers include photovoltaic for light, thermoelectric for heat, inductive for magnetic fields, RF for radio frequency, and piezoelectric for vibrations/kinetic energy.
  • Energy storage: Such as a battery or super capacitor.
  • Power management: This conditions the electrical energy into a suitable form for the application. Typical conditioners include regulators and complex control circuits that can manage the power, based on power needs and the available power.


Common Sources of Energy

  • Light energy: From sunlight or artificial light.
  • Kinetic energy: From vibration, mechanical stress or strain.
  • Thermal energy: Waste energy from heaters, friction, engines, furnaces, etc.
  • RF energy: From RF signals.

Energy Harvesting Technologies

Harvesting electrical power from non-traditional power sources using thermoelectric generators, piezoelectric transducers, and solar cells still remains a challenge. Each of these requires a form of power conversion circuit to efficiently collect, manage, and convert the energy from these sources into usable electrical energy for microcontrollers, sensors, wireless devices, and other low-power circuits.

Harvesting Kinetic Energy

Piezoelectric transducers produce electricity when subjected to kinetic energy from vibrations, movements, and sounds (pressure waves), such as motor-bearing noise transmitted through aircraft wings and other structures. The transducer converts the kinetic energy from vibrations into an AC output voltage, which is then rectified, regulated, and stored in a thin-film battery or a supercapacitor.

Potential sources of kinetic energy include motion generated by humans, acoustic noise, and low-frequency vibrations. Some practical examples are:

  • A batteryless remote control unit: Power is harvested from the force that one uses in pressing the button. The harvested energy is enough to power the low-power circuit and transmit the infrared or wireless radio signal.
  • Pressure sensors for car tires: Piezoelectric energy harvesting sensors are put inside the car tire where they monitor pressure and transmit the information to the dashboard for the driver to see.
  • Piezoelectric floor tiles: Kinetic energy from people walking on the floor is converted to electrical power that can be used for essential services such as display systems, emergency lighting, powering ticket gates, and more.

What Is the Advantage of the Boolean Algebraic Simplification Technique?

Digital electronics deals with discrete-valued digital signals. In general, any electronic system based on digital logic uses binary notation (zeros and ones) to represent the states of the variables involved in it. Thus, Boolean algebraic simplification is an integral part of the design and analysis of a digital electronic system.

Although Boolean algebraic laws and DeMorgan’s theorems can be used to achieve the objective, the process becomes tedious and error-prone as the number of variables involved increases. This necessitates a suitable, relatively simple simplification technique like the Karnaugh map (K-map), introduced by Maurice Karnaugh in 1953.

A Typical K-Map

The K-map method of solving logical expressions is referred to as the graphical technique of simplifying Boolean expressions. K-maps are also referred to as 2D truth tables, as each K-map is nothing but a different format for representing the values present in a one-dimensional truth table.

K-maps basically deal with the technique of inserting the values of the output variable in cells within a rectangle or square grid according to a definite pattern. The number of cells in the K-map is determined by the number of input variables and is mathematically expressed as 2ⁿ, where n is the number of input variables.

Thus, to simplify a logical expression with two inputs, we require a K-map with 4 (= 2²) cells. A four-input logical expression would lead to a 16 (= 2⁴) cell K-map, and so on.

Gray Coding

Further, each cell within a K-map has a definite place-value which is obtained by using an encoding technique known as Gray code.

The specialty of this code is the fact that the adjacent code values differ only by a single bit. That is, if the given code-word is 01, then the previous and the next code-words can be 11 or 00, in any order, but cannot be 10 in any case.

In K-maps, the rows and the columns of the table use Gray-code labeling, which in turn represents the values of the corresponding input variables. This means that each K-map cell can be addressed using a unique Gray code word.
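For reference, here is a minimal Python sketch of the reflect-and-prefix method for generating an n-bit Gray code; the 2-bit sequence it produces is the one used to label K-map rows and columns.

```python
# Generate an n-bit Gray code by the reflect-and-prefix method.
def gray_code(n):
    codes = ["0", "1"]
    for _ in range(n - 1):
        # prefix the existing list with 0, then the reversed list with 1
        codes = ["0" + c for c in codes] + ["1" + c for c in reversed(codes)]
    return codes

print(gray_code(2))   # ['00', '01', '11', '10'] -> adjacent labels differ by one bit
```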

These concepts are further emphasized by the typical 16-cell K-map shown in Figure 1, which can be used to simplify a logical expression comprising four variables (A, B, C, and D, mentioned at its top-left corner).

Here the rows and the columns of the K-map are labeled using 2-bit Gray code, shown in the figure, which assigns a definite address for each of its cells.

For example, the grey colored cell of the K-map shown can be addressed using the code-word “0101” which is equivalent to 5 in decimal (shown as the green number in the figure) and corresponds to the input variable combination A̅BC̅D or A+B̅+C+D̅, depending on whether the input–output relationship is expressed in SOP (sum of products) form or POS (product of sums) form, respectively.

Similarly, AB̅CD or A̅+B+C̅+D̅ refers to the Gray code-word of “1011”, equivalent to 11 in decimal (again, shown in green in the figure), which in turn means that we are addressing the pink-colored K-map cell in the figure.

K-Map Simplification Technique

With this general idea of K-maps, let us now move on to the procedure employed in designing an optimal (in terms of the number of gates used to realize the logic) digital system. We’ll start with a given problem statement.

Example 1:

Design a digital system whose output is defined as logically low if the 4-bit input binary number is a multiple of 3; otherwise, the output will be logically high. The output is defined if and only if the input binary number is greater than 2.

Step 1: Truth Table / Canonical Expression Leading to Min- or Max-Terms

The first step in designing any digital system is to have a clear idea of the variables involved in the process, along with their state-values. Further, depending on the problem statement, we have to arrive at the number of output variables and their values for each and every combination of the input literals, which can be conveniently represented in the form of a truth table.

In the given example:

Number of input variables = 4, which we will call A, B, C and D.

Number of output variables = 1, which we will call Y


         Y = Don’t Care, if the input number is less than 3 (orange entries in the truth table)

         Y = 0, if the input number is an integral multiple of 3 (green entries in the truth table)

         Y = 1, if the input number is not an integral multiple of 3 (blue entries in the truth table)

Note that, in addition to the input and output columns, the truth table also has a column which gives the decimal equivalent of the input binary combination, which makes it easy for us to arrive at the minterm or maxterm expansion for the given problem. Thus for the given example,

Minterm expansion will be  ∑m(4,5,7,8,10,11,13,14) + ∑d (0,1,2)

Maxterm expansion will be ∏M(3,6,9,12,15) · ∏D (0,1,2)
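The same sets can be derived programmatically; the short Python sketch below simply walks the sixteen input combinations and applies the problem statement.

```python
# Derive the minterm / maxterm / don't-care sets for the "multiple of 3"
# example (4-bit input, output defined only for inputs greater than 2).
minterms, maxterms, dont_cares = [], [], []
for n in range(16):
    if n < 3:
        dont_cares.append(n)      # output undefined for inputs below 3
    elif n % 3 == 0:
        maxterms.append(n)        # Y = 0 for multiples of 3
    else:
        minterms.append(n)        # Y = 1 otherwise

print("minterms   :", minterms)     # [4, 5, 7, 8, 10, 11, 13, 14]
print("maxterms   :", maxterms)     # [3, 6, 9, 12, 15]
print("don't cares:", dont_cares)   # [0, 1, 2]
```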

However, sometimes the logical expression which is to be simplified might be directly given in terms of SOP or POS forms. In this case, the requirement for the truth table can be overlooked provided that we express the given expression in its canonical form, from which the corresponding minterms or maxterms can be obtained.

Step 2: Select and Populate K-Map

From Step 1, we know the number of input variables involved in the logical expression, which decides the size of the required K-map. We also know how many K-maps are needed to design the desired system, since that equals the number of output variables. This means that, for the example considered, we require a single K-map (one output variable) with 16 cells (four input variables).

Next, we have to fill the K-map cells with a one for each minterm, a zero for each maxterm, and an X for each don’t-care term. The procedure is repeated for every output variable. Hence, for this example, we get the K-map shown in Figure 2.
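As a sketch of this filling step, the snippet below populates the 16 cells programmatically, assuming (as in Figure 1) that rows are labeled with the Gray-coded values of AB and columns with those of CD.

```python
# Populate the 16-cell K-map: rows are AB and columns are CD, both in Gray
# order, with 1 for minterms, 0 for maxterms, and 'X' for don't-cares.
gray = ["00", "01", "11", "10"]
minterms  = {4, 5, 7, 8, 10, 11, 13, 14}
dont_care = {0, 1, 2}

for ab in gray:
    row = []
    for cd in gray:
        n = int(ab + cd, 2)               # ABCD interpreted as a 4-bit number
        row.append("X" if n in dont_care else "1" if n in minterms else "0")
    print(ab, " ".join(row))
```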

Window Parameters in FIR Filter Design

The previous article in this series showed that a tapered window, such as the Bartlett, can give a better peak side-lobe level (PSL) than a rectangular window, which has an abrupt variation in the time domain.

In this article, we will first review other popular windows. Then we will clarify the design procedure: given the filter specifications, we need to find the required ideal filter response, window type, and window length to design an FIR filter. The relation between these parameters is the subject of this article.

Other Popular Window Functions

Fortunately, the Bartlett and rectangular windows are not the only options in FIR filter design; many other windows have been developed.

Table I shows some of the most popular windows along with their important properties. In Table I, Bartlett, Hann, and Hamming have equal approximate main lobe width, but we can observe the general trade-off between the PSL and the main lobe width. The rectangular window has the smallest main lobe width and the largest PSL, whereas the Blackman has the widest main lobe and the smallest PSL.

The Fourier transforms of three of these windows, Bartlett, Hann, and Hamming, are plotted in Figure 1. The mentioned trade-off is observed in these three windows, too: as the PSL decreases, the main-lobe width increases.


In addition to PSL and approximate main-lobe width, Table I gives, for each window, the peak approximation error, which is the deviation from the ideal response (denoted here by δ) expressed in dB. This is an important parameter that allows us to choose an appropriate window based on the requirements of an application: the peak approximation error determines how much deviation from the ideal response we can expect for each window type. This is illustrated in Figure 2.

As will be discussed in the following section, the deviations from the ideal response in the pass-band and stop-band are approximately equal when using the window method to design FIR filters, i.e., δ_p ≈ δ_s. Therefore, we can select the suitable window based on how much ripple is allowed in the pass-band or how much attenuation is needed in the stop-band.
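If you want to verify the PSL trade-off numerically, the following Python sketch estimates the peak side-lobe level of each window using NumPy’s built-in window functions. The window length is chosen arbitrarily for illustration, and the side-lobe search is deliberately crude.

```python
import numpy as np

N = 51  # arbitrary window length, for illustration only
windows = {
    "rectangular": np.ones(N),
    "bartlett":    np.bartlett(N),
    "hann":        np.hanning(N),
    "hamming":     np.hamming(N),
    "blackman":    np.blackman(N),
}

for name, w in windows.items():
    W = np.abs(np.fft.fft(w, 8192))
    W /= W.max()                                # normalize the main-lobe peak to 0 dB
    half = W[:4096]
    # crude PSL estimate: largest value after the response first turns back up
    first_null = np.argmax(np.diff(half) > 0)
    psl_db = 20 * np.log10(half[first_null:].max())
    print(f"{name:12s} PSL ~ {psl_db:6.1f} dB")
```

The printed values should roughly reproduce the trend in Table I: the rectangular window has the worst (largest) PSL and the Blackman the best, at the cost of main-lobe width.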

How Speakers Work

An atom has a central nucleus made of neutral particles called neutrons and positively charged particles called protons. Moving around the nucleus are negatively charged particles called electrons.

Opposite charges attract, so electrons are attracted to the protons in the nucleus. At the same time, similar charges repel, so too many electrons in one area tend to push one or more electrons to leave.

Electrons are in constant motion around an atom.


When multiple atoms are brought near each other, the electrons can move between neighboring atoms. The electrons must follow paths that take into account their constant motion, the force of attraction to the protons, and the force of repulsion to fellow electrons. Balancing all those rules for many atoms brought together sometimes results in regular patterns or crystalline shapes.

Piezoelectric Effect

Certain materials will generate a measurable potential difference when they are made to expand or shrink in a particular direction.

Increasing or decreasing the space between the atoms by squeezing, hitting, or bending the crystal causes the electrons to redistribute themselves: some electrons leave the crystal, or room is created for electrons to enter it. A physical force on the crystal creates the electromotive force that moves charges around a circuit.

The opposite is true as well: Applying an electric field to a piezoelectric crystal leads to the addition or removal of electrons, and this in turn causes the crystal to deform and thereby generate a small physical force.

Representation of a compressed (left) and stretched (right) crystalline structure.

How Piezoelectric Speakers Move

The piezoelectric effect can be employed in the construction of thin-form-factor speakers that are valuable alternatives to traditional electrodynamic speakers in space-constrained applications. These devices are referred to as both piezoelectric speakers and ceramic speakers.

Apply an electric field to a piezoelectric material and it will change size. The piezoelectric material will shrink or grow as charges are introduced or removed, but the base material will not.

This causes elastic deformation of the material toward or away from a direction that is perpendicular to the surface of the speaker. As soon as the electric field is removed from the piezoelectric material, it will return to its original shape.

As the speaker flexes and strikes air molecules, it causes a chain reaction of collisions that eventually reaches your ear. If enough air molecules strike your ear, the nerve cells send a signal to your brain that you interpret as sound.

How Disturbances Travel

An unimaginable number of atoms and molecules surround us and are in constant motion. These particles move in straight lines until they hit other atoms and their direction changes. A single particle will never move far before a collision, but the effects of the collision can travel great distances as new particles collide with their neighbors.

Imagine adding a single drop of food coloring to the center of a swimming pool. The particles of food coloring might take minutes or hours to reach the edge, but the waves generated by the drop would reach the pool’s edge in seconds.

Air particles strike our bodies constantly and randomly all the time. When the collisions stop being constant and random and start being more regular and patterned, we are hit with more particles at specific times. Certain nerve cells in our ears can detect these increased, patterned collisions and send signals to our brains, and our brain interprets the pattern as sound.

UNO Hardware Design

Most articles explain the software side of Arduino. However, understanding the hardware design helps you take the next step in your Arduino journey. A good grasp of the electronic design of your Arduino hardware will help you learn how to embed an Arduino in the design of a final product, including what to keep and what to omit from your original design.

Components Overview

The PCB design of the Arduino UNO uses SMD (surface-mount device) components. I entered the SMD world years ago when I dug into Arduino PCB design as part of a team redesigning a DIY clone of the Arduino UNO.

Integrated circuits use standardized packages, and there are families of packages.

The dimensions of many SMD resistors, capacitors, and LEDs are indicated by package codes such as the following:

Most packages are generic and can be used for different parts with different functionality. The SOT-223 package, for example, can contain a transistor or a regulator.

In the table below, you can see a list of some components in the Arduino UNO with their respective package:

Arduino UNO System Overview

Before we can understand the UNO’s hardware, we must have a general overview of the system first.

After your code is compiled using the Arduino IDE, it is uploaded to the main microcontroller of the Arduino UNO over a USB connection. Because the main microcontroller doesn’t have a USB transceiver, you need a bridge to convert signals between the serial (UART) interface of the microcontroller and the host USB signals.

The bridge in the latest revision is the ATmega16U2, which has a USB transceiver and also a serial interface (UART interface).

To power your Arduino board, you can use the USB connection as a power source. Another option is to use the DC jack. You may ask, “If I connect both a DC adapter and USB, which will be the power source?” The answer is discussed in the “Power Part” section of this article.

To reset the board, you can use the push button on the board. The board is also reset every time you open the serial monitor from the Arduino IDE.

I reorganized the original Arduino UNO schematic to make it more readable; you can see it below. I advise you to download it and open the PCB and schematic in Eagle CAD while you read this article.

MOSFET Differential Pair

The effect of channel length modulation

In the two previous articles, we introduced the actively loaded MOSFET differential pair and discussed two prominent advantages of this configuration—namely, improved biasing (compared to the use of drain resistors) and differential-to-single-ended conversion without loss of gain.

Now it’s time to analyze the differential gain of this circuit; before we can determine the gain, though, we need to understand the concept of small-signal output resistance and how we incorporate this resistance into our analysis. (If you’re not sure what I mean by “small-signal,” take a look at the “Two Outputs or One?” section in this article.)

The first thing to understand is that small-signal output resistance is not an inherent, precise property of a real MOSFET. Rather, it is a model that we use to account for the effect of channel-length modulation on a MOSFET’s small-signal behavior. Recall that MOSFETs used for linear amplification are typically biased in the saturation region, which corresponds to when the FET’s channel is “pinched off” at the drain end.

In a simplified analysis, we use the following equation for saturation-mode drain current:

$$I_D = \frac{1}{2}\,\mu_n C_{ox}\,\frac{W}{L}\left(V_{GS} - V_{TH}\right)^2 = \frac{1}{2}\,\mu_n C_{ox}\,\frac{W}{L}\,V_{OV}^2$$
This equation conveys the assumption that drain current is not affected by the drain-to-source voltage. The MOSFET acts like a dependent current source controlled by the overdrive voltage VOV, where VOV = VGS – VTH. This assumption is based on the idea that increasing the drain-to-source voltage does not alter the channel once it has become pinched off.

As you’ve probably noticed, though, the real world is not particularly conducive to idealized situations such as this.

The reality is that increasing the drain-to-source voltage does have a nontrivial effect on the channel: the pinch-off point is moved toward the source, and the result is more drain-to-source current as drain-to-source voltage increases. This means that we need an additional circuit element to account for this additional current, and by now you have probably guessed that the element we’re looking for is a resistor—namely, the small-signal output resistance ro.

So now we have a MOSFET, which is still assumed to be immune to increasing drain-to-source voltage, in conjunction with an ordinary resistor, which (like any resistor) has a current flow equal to the voltage across the resistor divided by the resistance. As drain-to-source voltage increases, more current flows through the resistor, and this current compensates for the lack of change in the drain current of the idealized MOSFET. By combining these two currents—drain current of the idealized FET and current through the resistor—we can find the total drain current for a real MOSFET.
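Written as a formula (a small-signal approximation, where the first term is the square-law current of the idealized FET and $v_{DS}$ is the change in drain-to-source voltage):

$$i_D \;\approx\; I_{D,\text{ideal}} \;+\; \frac{v_{DS}}{r_o}$$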

Ignoring channel-length modulation is equivalent to assuming that the small-signal output resistance of the FET is infinite. It follows, then, that higher output resistance is desirable if we want a MOSFET to behave more like the idealized component in which drain current is not influenced by drain-to-source voltage. As we will see later, small-signal output resistance is determined in part by the FET’s DC bias current, so we do have some ability to increase the output resistance of a given device.
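As a rough illustration of that dependence, the common first-order model r_o ≈ V_A / I_D = 1/(λ·I_D) (where V_A and λ are device parameters not given in this article, so the value below is an assumption) predicts how the output resistance falls as the bias current rises.

```python
# First-order estimate: r_o ~ V_A / I_D (equivalently 1 / (lambda * I_D)).
# V_A is an assumed, device-dependent parameter; real values come from the model.
V_A = 20.0                              # assumed "Early-like" voltage, V
for i_d in (50e-6, 250e-6, 1e-3):       # example DC bias currents
    r_o = V_A / i_d
    print(f"I_D = {i_d*1e6:7.1f} uA  ->  r_o ~ {r_o/1e3:7.1f} kOhm")
```

The trend is the useful part: halving the bias current roughly doubles the output resistance, which is one knob we have for making a device behave more like the idealized FET.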

One last note before we move on: Output resistance is itself a simplification of real MOSFET behavior. The subatomic action taking place in a MOSFET’s channel is not exactly straightforward, and it comes as no surprise to me that the simple linear relationship represented by a drain-to-source resistor is not the whole story.


Antenna Basics

Over 2600 years ago (and likely well before that) the ancient Greeks discovered that a piece of amber rubbed on a piece of fur would attract lightweight objects like feathers. Around the same time, the ancients discovered lodestones, which are pieces of magnetized rock.

It took a few hundred years more to determine that there are two different properties of attraction and repulsion (magnetic and electric): likes repel and opposites attract. Another 2000 years passed before scientists first discovered that these two entirely different novelties of nature were inextricably linked.

In the early nineteenth century, Hans Christian Oersted placed a wire perpendicular to a compass needle and saw nothing. But when he rotated the wire parallel to the compass needle and passed a current through the wire, the needle deflected in one direction. When he passed the current through the wire in the opposite direction, the compass needle deflected in the opposite direction.

A current carrying wire perpendicular to a compass needle causes no movement.

This wire was the first antenna transmitter and the compass needle the first receiver. The scientists just did not know it at the time.

While not terribly elegant, it provided a clue about the way the universe worked—that charges moving through a wire create a magnetic field that is perpendicular to the wire.  (Scientists soon learned the field surrounding a wire is circular, not perpendicular.)

With this information, scientists were able to describe the ways in which electric and magnetic fields interact with electric charges, and this formed the basis of our understanding of electromagnetism.

Shortly after, Nikola Tesla wirelessly lit lamps in his workshop, demonstrated the first remote-control toy boat, and established the alternating-current system we use to transfer electricity throughout the world today.

Less than a full century after Oersted’s experiment, Guglielmo Marconi devised a way to send the first wireless telegraph signals across the Atlantic.

And here we stand, a full two centuries after that first compass experiment, able to capture images from distant planets and send them through the vastness of space to a device we can hold in the palm of our hand—all with antennas.