0 Preparatory knowledge

It is assumed that you already have some basic knowledge of electronic components, electronic circuit analysis and of mathematics. Nonetheless, a short recap is presented in this chapter. Also, there are multiple notations and symbols around for signals and components. This book sticks mainly to the European style. These notational things and the symbols are also summarized in this chapter.

0.1 Notation

This book uses a consistent notation for components and signals:

Notation   it is                                              expression
R          a resistor                                         R = v_R/i_R
C          a capacitor                                        C = Q_C/v_C
L          an inductor                                        L = Φ_L/i_L
Z          an impedance (any combination of R, L, C)
r          a resistance                                       r = v_R/i_R
c          a capacitance                                      c = Q_C/v_C
l          an inductance                                      l = Φ_L/i_L
v_X        the (total) voltage at node X                      v_X = V_X + v_x
V_X        the DC-voltage at node X
v_x        the voltage variation at node X
V_x        the amplitude of the voltage variation at node X
f          the signal frequency in [Hz]
ω          the angular signal frequency in [rad/s]            ω = 2π·f

0.2 Linear components

Simple electronic networks are composed of linear components; the element equations and impedances of these are listed below.

Component   Value       v,i-relation     Impedance         Unit
resistor    R = v/i     i = v/R          Z_R = R           Ω (Ohm)
capacitor   C = Q/v     i = C·∂v/∂t      Z_C = 1/(jωC)     F (Farad)
inductor    L = Φ/i     v = L·∂i/∂t      Z_L = jωL         H (Henry)
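As a quick numerical sanity check of the impedance column, the table can be sketched in Python using complex numbers; the component values and frequency below are arbitrary example values, not taken from the book.

```python
import cmath
import math

def impedances(omega, R, C, L):
    """Return (Z_R, Z_C, Z_L) at angular frequency omega [rad/s].

    Follows the table above: Z_R = R, Z_C = 1/(jwC), Z_L = jwL.
    """
    ZR = complex(R, 0)
    ZC = 1 / (1j * omega * C)
    ZL = 1j * omega * L
    return ZR, ZC, ZL

# Example values: R = 1 kOhm, C = 1 uF, L = 1 mH at f = 1 kHz
omega = 2 * math.pi * 1e3
ZR, ZC, ZL = impedances(omega, 1e3, 1e-6, 1e-3)

# The capacitor impedance points along -j (current leads the voltage),
# the inductor impedance along +j (current lags the voltage).
assert cmath.phase(ZC) < 0 < cmath.phase(ZL)
```

Note how the j's carry the 90° phase shifts automatically; this is exactly why complex impedances are convenient, as section 0.10 works out.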

 
The symbols for the components above, as used in this book, are presented in figures 0.1a to c. Often a general impedance is used, rather than an impedance specifically for capacitors, inductors or resistors. In that case, the symbol for a resistor is used with a notation that indicates that it is an impedance: ZC, ZR, ZL or Zx. In this reader the US-style wire wound resistor symbol (the zigzag one) is not used.


Figure 0.1: Linear components: a) a resistor or impedance, b) a capacitor, c) an inductor, d) a DC-voltage source, e) a voltage source and f) a current source.

0.3 Independent sources

There are two basic types of independent sources: an independent voltage source and an independent current source. Usually, the term “independent” is dropped for simplicity.

The voltage source forces a voltage difference across its terminals, independent of the current that will flow due to that voltage. Hence, a voltage source can either deliver or dissipate energy. In this book, we will encounter two different independent voltage sources: the DC-voltage source (shown in Figure 0.1d) and the general voltage source (figure 0.1e).

The current source, shown in Figure 0.1f, forces a current through its terminals. This current is independent of the voltage across its terminals. This book does not make any symbolic distinction between various current sources, DC, AC, independent or controlled.

0.4 Controlled or dependent sources

Circuits that have power gain are usually modelled using controlled voltage sources and/or controlled current sources, shown symbolically in figures 0.1e and f. The value of a source describes whether it is controlled or independent: e.g. a value IA corresponds to a DC-current source, while a value like gm vin corresponds to a (here voltage) controlled current source.

0.5 Kirchhoff’s current and voltage laws

Kirchhoff’s voltage law (KVL) and Kirchhoff’s current law (KCL), formulated in 1845 by Gustav Kirchhoff, give elementary relations for electronic circuits. Kirchhoff’s laws state that the total voltage drop in any mesh equals 0 V, and that no current can appear or disappear at nodes: ∑_mesh v_n = 0 and ∑_node i_n = 0.

In essence, the current and voltage laws are nothing more or less than the two most basic laws of (simple) physics: the laws of conservation of matter and conservation of energy.

As a short explanation: if you apply the law of conservation of matter to the particles we call electrons, you obtain Kirchhoff’s current law: electrons do not disappear or appear at random and hence the summed current into any node is zero. Furthermore, electrons have some level of energy, which is expressed in electronvolts [eV]. In electronics, we usually work with a large number of electrons (a Coulomb), which results in the unit of Volt [V]. Since electrons do not (dis)appear at random and energy does not either, the voltage drop in any mesh must equal 0 V.
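Both laws can be verified numerically on the simplest possible circuit. The sketch below uses a hypothetical example (a 10 V source driving two resistors in series); the values are arbitrary.

```python
# Hypothetical example: a 10 V source in series with R1 = 2 Ohm and R2 = 3 Ohm.
V = 10.0
R1, R2 = 2.0, 3.0

i = V / (R1 + R2)      # the single mesh current
v_R1 = i * R1          # voltage drop across R1
v_R2 = i * R2          # voltage drop across R2

# KVL: the summed voltage drop around the mesh equals 0 V
assert abs(V - v_R1 - v_R2) < 1e-12

# KCL: the current into the middle node equals the current out of it
i_in, i_out = i, i
assert abs(i_in - i_out) < 1e-12
```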

0.6 Superposition

In any circuit, the voltage at a node (or the current in a branch) results from the contributions of all sources in that circuit. However, calculating the voltage at some node in a circuit due to all sources simultaneously can be a lot of work.

With linear circuits, a voltage or current can be calculated much more easily by calculating the contribution of every source separately and finally summing all these contributions. This method is called superposition; it is one of the most powerful tools available for linear circuit analysis. The underlying idea is that a complex problem is separated into small problems in a very efficient way.

A good example of a circuit that can be easily analyzed using superposition, but quite difficult without superposition, is the R-2R-ladder circuit, shown in Figure 0.2.


Figure 0.2: An R-2R-ladder circuit: an example where superposition is extremely useful.

The output voltage as a function of the four independent sources is easily obtained if we calculate the separate contributions of all the independent sources. For the given circuit, we would have to do this four times. Calculating (only) the contribution of v1, the circuit in Figure 0.2 can be redrawn as shown below. For this circuit it can be derived that v_OUT(v1) = (1/16)·v1.


Figure 0.3: Equivalent circuit for the R-2R-ladder circuit to derive the contribution of (only) v1.

Calculating (only) the contribution of v2, the circuit in Figure 0.2 can be redrawn as shown in Figure 0.4. For this circuit it can be derived that v_OUT(v2) = (1/8)·v2. Similarly, the contribution of (only) v3 follows from Figure 0.5. In this figure, the original situation is shown on the left, while the simplified (redrawn) equivalent is depicted on the right hand side. For this circuit it can be derived that v_OUT(v3) = (1/4)·v3. Lastly, the contribution of (only) v4 to v_OUT can be derived from the actual and simplified equivalent circuits in Figure 0.6, yielding v_OUT(v4) = (1/2)·v4.


Figure 0.4: Equivalent circuit for the R-2R-ladder circuit to derive the contribution of (only) v2. Right hand side figure is the simplified version of the left hand side circuit.


Figure 0.5: Equivalent circuit for the R-2R-ladder circuit to derive the contribution of (only) v3. Right hand side figure is the simplified version of the left hand side circuit.


Figure 0.6: Equivalent circuit for the R-2R-ladder circuit to derive the contribution of (only) v4. Right hand side figure is the simplified version of the left hand side circuit.

Summarizing these findings yields

v_OUT = v4/2 + v3/4 + v2/8 + v1/16.

This type of circuit is sometimes used to convert digital signals — setting vn either to 0 or to a well-defined constant voltage depending on (here) 4-bit digital data — to analog signals.
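The weighted-sum result can also be checked without superposition, by solving the ladder's nodal equations directly. The sketch below assumes the usual R-2R ladder topology (each source in series with 2R to its node, R between neighbouring nodes, a 2R terminator to ground at the v1 side, output unloaded at the v4 side), which reproduces the contributions derived above; R is normalized to 1.

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def r2r_vout(v1, v2, v3, v4):
    """Nodal analysis of the assumed 4-bit R-2R ladder (R = 1, 2R = 2)."""
    A = [[2.0, -1.0,  0.0,  0.0],   # node n1: 2R to ground, 2R to v1, R to n2
         [-1.0, 2.5, -1.0,  0.0],   # node n2
         [0.0, -1.0,  2.5, -1.0],   # node n3
         [0.0,  0.0, -1.0,  1.5]]   # node n4 (= vOUT, unloaded)
    b = [v1 / 2, v2 / 2, v3 / 2, v4 / 2]
    return solve(A, b)[3]

# Superposition predicts vOUT = v4/2 + v3/4 + v2/8 + v1/16
assert abs(r2r_vout(1, 0, 0, 0) - 1 / 16) < 1e-12
assert abs(r2r_vout(0, 0, 0, 1) - 1 / 2) < 1e-12
assert abs(r2r_vout(1, 1, 1, 1) - 15 / 16) < 1e-12
```

Note how linearity shows up directly: the all-ones case is simply the sum of the four single-source cases.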

This example also shows that superposition, combined with simplifying circuits whenever possible, significantly reduces computational complexity: it leads to a “divide and conquer” strategy that enables many simplifications and thereby reduces the amount of cumbersome calculations.

0.7 Advanced superposition

In most textbooks, superposition is formulated only for independent sources and it may appear that it does not hold for dependent (controlled) sources or for circuits that contain dependent sources. This is wrong! In the analysis of circuits, you can calculate the contribution of any linearly dependent source exactly the same way you’d do it for an independent source. The trick is that at some stage — preferably at the end of the calculations to limit the amount of work for you — you have to define the linearly dependent voltage or current for the controlled sources. It does not matter at all whether this value is independent or linearly dependent.

0.8 Thévenin and Norton equivalents

The electrical behavior of every linear circuit can be modelled as a single source and a single impedance. This follows from the definition of linear circuits, stating that linear I-V behavior can be described by a linear relation. Furthermore, any linear function is uniquely defined by any two (non-coinciding) points satisfying that function. In linear circuits, it is convenient to choose the points where the load is Z = 0 Ω (short circuit) and Z → ∞ (open circuit).

From this, a simple equivalent model can be constructed with just one source and one impedance. If the equivalent uses a current source then we call it a Norton equivalent, while a model with a voltage source is called a Thévenin equivalent. Both are named after their discoverers, respectively in 1883 [1] and 1926 [2].


Figure 0.7: A random linear circuit with its Thévenin and Norton equivalents

The circuit in Figure 0.7a has its Thévenin and Norton equivalents shown in, respectively, Figures 0.7b and c. The open circuit voltage and short circuit current for this example are:

v_open = i·(Z4 ∥ (Z3 + Z1 ∥ Z2)) + v · (Z4/(Z3 + Z4)) · (Z2 ∥ (Z3 + Z4))/(Z1 + Z2 ∥ (Z3 + Z4))

i_shortcircuit = i + v · (1/Z3) · (Z3 ∥ Z2)/(Z1 + Z3 ∥ Z2)

where ∥ denotes the parallel combination of two impedances.

According to Ohm’s law, the following equivalent circuits hold:

v_EQU = v_open        i_EQU = i_shortcircuit        Z_EQU = v_open/i_shortcircuit
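These three relations are easily tried out on a small example. The sketch below uses a hypothetical voltage divider (a source v in series with R1, with the load connected across R2), not the circuit of Figure 0.7; the values are arbitrary.

```python
# Hypothetical example: Thevenin/Norton equivalent of a voltage divider,
# source v in series with R1, load taken across R2.
v, R1, R2 = 5.0, 1e3, 4e3

v_open = v * R2 / (R1 + R2)   # load Z -> infinity (open circuit)
i_short = v / R1              # load Z = 0 (short circuit, R2 is bypassed)
Z_equ = v_open / i_short      # equivalent impedance

# As expected, the equivalent impedance is R1 in parallel with R2:
R_parallel = R1 * R2 / (R1 + R2)
assert abs(Z_equ - R_parallel) < 1e-9
```

The equivalent impedance R1 ∥ R2 is what you would also find by setting the source v to zero and looking into the output terminals, which is the usual shortcut.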

0.9 Linear networks and signals

A linear network consists of linear components: resistors (with an instantaneous linear relation between voltage and current), capacitors and inductors (with an integral or differential relation between v and i). The input source can either be a current source or a voltage source.

One of the most useful characteristics of a linear circuit is the fact that the input signal emerges undistorted at the output. This might seem counter-intuitive: if we feed, for example, a square wave into an arbitrary linear circuit, the output signal is in general not a square wave; the circuit may appear to be distorting the shape of the input signal. The circuit, however, is linear and does not distort a specific class of signals, usually sine waves. The input signal can be viewed as a summation of sine waves that individually remain undistorted, but that individually may get a different phase or amplitude. After summing these individually undistorted sine waves, which have all experienced different phase shifts and gains, the sum may have a different shape than the input signal. Still, this is linear!

The types of signals where the output signal is a shifted and scaled version of the input signal s are those that satisfy the following mathematical relation:

∂s(t)/∂t ∝ s(t + τ)

Signals that satisfy this are sin(ωt + ϕ) and e^{(a+jb)t}: harmonic and exponential signals. Euler showed [3] that these two types of signals are closely related: e^{jbt} is a rotating unit vector in the complex plane with angle bt. The representation of this on the real axis is cos(bt), while the imaginary part is j·sin(bt). From this, it follows that:

e^{(a+jb)t} = e^{at}·(cos(bt) + j·sin(bt))

sin(ωt) = (e^{jωt} − e^{−jωt})/(2j)

cos(ωt) = (e^{jωt} + e^{−jωt})/2

In this book we use sine waves, as these allow easier interpretation, calculation, simulation and measurement of circuit behaviour in terms of gain and phase shift (per sine wave).
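Euler's relations above are easy to verify numerically with Python's `cmath` module; the values of a, b, ω and t below are arbitrary test values.

```python
import cmath
import math

a, b, t = -0.5, 3.0, 0.7   # arbitrary test values

# e^{(a+jb)t} = e^{at} * (cos(bt) + j*sin(bt))
lhs = cmath.exp((a + 1j * b) * t)
rhs = math.exp(a * t) * (math.cos(b * t) + 1j * math.sin(b * t))
assert abs(lhs - rhs) < 1e-12

# sin and cos as combinations of complex exponentials
w = 2.0
s = (cmath.exp(1j * w * t) - cmath.exp(-1j * w * t)) / 2j
c = (cmath.exp(1j * w * t) + cmath.exp(-1j * w * t)) / 2
assert abs(s - math.sin(w * t)) < 1e-12
assert abs(c - math.cos(w * t)) < 1e-12
```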

0.10 Complex impedances

Assuming a sinusoidal signal, it is now straightforward to derive the impedance of reactive elements. For a capacitor it follows (assuming a sinusoidal signal) that:

Z_C = v/i

i = C·∂v/∂t = C·∂(V_c·sin(ωt))/∂t = C·ω·V_c·cos(ωt) = C·ω·V_c·sin(ωt + 90°)

Z_C = sin(ωt)/(C·ω·sin(ωt + 90°)) = 1/(jωC)

Similarly, for an inductor:

Z_L = v/i

v = L·∂i/∂t = L·∂(I_l·sin(ωt))/∂t = L·ω·I_l·cos(ωt) = L·ω·I_l·sin(ωt + 90°)

Z_L = L·ω·sin(ωt + 90°)/sin(ωt) = jωL

It may appear weird that the ratio of sin(ωt) and a shifted sin(ωt + ϕ) is rewritten into a complex number; weird that e.g. sin(ωt + 90°)/sin(ωt) = j instead of being tan-related. The reason is that in the frequency domain it is all about the magnitude and the phase of harmonic signals: the phase of sin(ωt + 90°) leads that of sin(ωt) by 90° while the magnitudes are identical. A complex number that captures this is the number having modulus 1 and argument 90°, which is the complex number j.
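The phasor picture can be sketched numerically: represent v = V·sin(ωt) by the phasor V·e^{j0} and the capacitor current, which leads by 90°, by ωCV·e^{jπ/2}; their ratio should then equal 1/(jωC). The component values below are arbitrary assumptions for illustration.

```python
import cmath
import math

# Arbitrary example values
w, C, V = 1e4, 1e-7, 2.0

v_ph = V * cmath.exp(0j)                    # voltage phasor, reference phase 0
i_ph = w * C * V * cmath.exp(1j * math.pi / 2)  # current leads by 90 degrees

Z = v_ph / i_ph
# The ratio of the phasors is exactly the complex impedance 1/(jwC):
assert abs(Z - 1 / (1j * w * C)) < 1e-9
```

The modulus of Z is 1/(ωC) and its argument is −90°: the magnitude ratio and the phase lag in one number.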

0.11 Fourier transformations

The basic signals used to analyze linear circuits — the sine waves — have a close relation to Fourier analysis. Fourier stated [4] that every periodic signal f(x) can be written as an infinite sum of harmonic signals:

f(x) = a0 + a1 cos(x + ϕ1) + a2 cos(2x + ϕ2) + ...

Using a number of goniometric relations, the Fourier transformation of a signal is obtained. The relevant equations here are:

∫₀^{2π} sin(x) dx = 0  and  ∫₀^{2π} cos(x) dx = 0

a·cos(x) + b·sin(x) = √(a² + b²)·cos(x − atan(b/a))

sin(x)·sin(y) = ½·cos(x − y) − ½·cos(x + y)

The first two relations state that the average of a harmonic signal equals 0. The third relation states that the sum of a sine and a cosine with the same argument can be written as one harmonic function with that argument and a phase shift. The fourth relation is crucial: the product of two harmonics equals the sum of two harmonics, one with the difference between the arguments, the other with the sum of the arguments. From the first three relations, it immediately follows that if a periodic signal with angular frequency ω can be written as the sum of harmonics, then those harmonics must have angular frequencies which are an integer multiple of the angular frequency of the original signal. Now, a new relation can be written:

f(ωt) = a0 + a1 cos(ωt + ϕ1) + a2 cos(2ωt + ϕ2) + ...

Note that the a0-term corresponds to the 0th harmonic, or in fact the a0 cos(0) term. The above relation can already be used to perform Fourier transformations: all an terms and all ϕn factors would have to be determined. However, in general, determining the ϕn factors can be very difficult. Using the 3rd goniometric relation, this yields the most widely used Fourier formula:

f(ωt) = a0 + a1 cos(ωt) + b1 sin(ωt) + a2 cos(2ωt) + b2 sin(2ωt) + ...

From the fourth goniometric relation, together with the first two, the relation to determine an and bn can be derived quite easily:

∫₀^{2π} sin²(x) dx = ∫₀^{2π} cos²(x) dx = ½·2π = π

a_n = (1/π)·∫₀^{2π} f(x)·cos(nx) dx = (2/T)·∫₀^{T} f(ωt)·cos(nωt) dt

b_n = (1/π)·∫₀^{2π} f(x)·sin(nx) dx = (2/T)·∫₀^{T} f(ωt)·sin(nωt) dt

(with T = 2π/ω = 1/f)
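These coefficient formulas can be tried out numerically on a square wave, whose Fourier series is well known: b_n = 4/(nπ) for odd n and 0 for even n. The sketch below approximates the b_n integral with a simple Riemann sum; the sample count is an arbitrary choice.

```python
import math

def fourier_bn(f, n, samples=100_000):
    """Approximate b_n = (1/pi) * integral_0^{2pi} f(x)*sin(n*x) dx numerically."""
    dx = 2 * math.pi / samples
    return sum(f(k * dx) * math.sin(n * k * dx) for k in range(samples)) * dx / math.pi

def square(x):
    """Square wave with period 2*pi: +1 on (0, pi), -1 on (pi, 2*pi)."""
    return 1.0 if x < math.pi else -1.0

# Known result: b_n = 4/(n*pi) for odd n, 0 for even n
assert abs(fourier_bn(square, 1) - 4 / math.pi) < 1e-3
assert abs(fourier_bn(square, 2)) < 1e-3
assert abs(fourier_bn(square, 3) - 4 / (3 * math.pi)) < 1e-3
```

This also illustrates the remark above: only integer multiples of the fundamental frequency appear, and for this odd-symmetric signal only the sine terms survive.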

The Laplace transformation is closely related to the Fourier transformation: the most important “differences” include the use of e^{jx} instead of sin(x) and cos(x), and using s instead of jω. In this book, basic knowledge of Fourier series is assumed, as this is used implicitly in many chapters. Although the Laplace transform is very useful for e.g. stability analyses and calculating time responses, it is not used in this book.

0.12 Differential equations

It is convenient to analyze circuits in the frequency domain using complex impedances; this is however only allowed for linear circuits, because of the linear (Fourier) transform that underlies complex impedances. Typically, circuits that are sufficiently linear (i.e. have sufficiently low distortion for the signals used in the analysis) are analyzed in the frequency domain, and the resulting small inaccuracies are accepted. Moreover, to simplify the analyses, the circuits are usually modelled upfront as being linear. This approach will be followed later in this book to simplify analyses significantly.

From a fundamental point of view, non-linear circuits cannot be analyzed in the frequency domain. If a circuit is very non-linear or switching, linearisation is typically not allowed and any sine wave would be very much distorted in the circuit. Then complex impedances — which assume single sine waves in a circuit — cannot be used. The only resort is to use time-domain element equations, which require time-domain analyses, usually in the form of differential equations. Below is a short summary for 1st and 2nd order differential equations, here written as respectively

B·dx/dt + C·x = D    and    A·d²x/dt² + B·dx/dt + C·x = D.

It is evident from this that signal x(t) has a derivative with the same shape as the signal itself, i.e. either exponential or harmonic. Including the demand for an arbitrary phase shift in the shape, the only valid shape is the exponential one: the derivative of a sine wave has a fixed phase shift with respect to the sine wave itself. The easiest solving method is to substitute the most general form and solve the missing parameters for the homogeneous solution:

x(t) = X·e^{at}

a·B·X·e^{at} + C·X·e^{at} = 0  →  a = −C/B

a²·A·X·e^{at} + a·B·X·e^{at} + C·X·e^{at} = 0  →  a = (−B ± √(B² − 4AC))/(2A)

Clearly there is just one solution for first-order differential equations, and two for second-order differential equations (and yes, n for an nth order differential equation). These two solutions can be complex, in which case an (exponentially increasing or decreasing) harmonic solution results:

X·e^{(a+jb)t} + X·e^{(a−jb)t} = X·e^{at}·(e^{jbt} + e^{−jbt}) = 2·X·e^{at}·cos(bt)

When there are only real solutions, the output is the sum of two exponential functions. The particular solution, where D is also implemented, has to be solved next. This usually takes some tricks. From all initial conditions, the rest of the parameters can usually be determined.
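The exponential ansatz for the first-order homogeneous equation can be checked against a brute-force time-domain integration. The sketch below picks arbitrary values for B, C and the initial condition X, and compares x(t) = X·e^{−(C/B)t} with a simple forward-Euler integration of B·dx/dt + C·x = 0.

```python
import math

# Arbitrary example values for B*dx/dt + C*x = 0 with x(0) = X
B, C, X = 2.0, 5.0, 1.0
a = -C / B   # exponent from the ansatz x(t) = X*e^{at}

def x_analytic(t):
    return X * math.exp(a * t)

# Forward-Euler integration as an independent check
dt, t, x = 1e-5, 0.0, X
while t < 1.0:
    x += dt * (-C / B) * x   # dx/dt = -(C/B)*x
    t += dt

assert abs(x - x_analytic(t)) < 1e-3
```

The same check works for the second-order case, with two exponents from the quadratic formula above instead of one.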

0.13 Circuit analysis methods

A number of circuit analysis methods for (linear) electronic circuits are well known. The most common methods are nodal analysis and mesh analysis; in this book, we will mostly be using the brute-force approach. All these methods are very systematic; while the first two are very well suited for implementation in software, the third method gives more insight (although it is difficult to automate in software).

0.14 Transfer functions

In electronics, we often want to get relations between the input signal and something which is a consequence of that signal. Usually this consequence is an output signal, meaning that we often have to find a transfer function. Other meaningful relations include those for the input and output impedance of an electronic circuit:

H(jω) = signal_out/signal_in        Z_in = v_in/i_in        Z_out = v_out/i_out

To analyze, sketch or interpret these transfer functions or impedances it is usually convenient to rewrite the original function as (a product or sum of) standard forms. There are several standard forms; for a low-pass-like transfer function:

H(jω) = H(ω0)·1/(jω/ω0)

H(jω) = H(0)·1/(1 + jω/ω0) = H(0)·1/(1 + jωτ0)

H(jω) = H(0)·1/(1 + jω/(ω0·Q) + (jω)²/ω0²)

The first form corresponds to an integrator, which is just a limit case of the second form. The second and third forms are identical, and have a first-order characteristic; the fourth form has a second-order characteristic. High-pass characteristics can be obtained from low-pass functions, using:

(jω/ω0)_LP → (ω0/jω)_HP

From this it follows that:

H(jω) = H(ω0)·(jω/ω0)

H(jω) = H(∞)·(jω/ω0)/(1 + jω/ω0) = H(∞)·(jωτ0)/(1 + jωτ0)

H(jω) = H(∞)·((jω)²/ω0²)/(1 + jω/(ω0·Q) + (jω)²/ω0²)

The order of any transfer function is simply equal to the highest power of ω. Every normal transfer function, of arbitrary order, can be written as the product of first and second-order functions. Knowing the three basic standard forms for low-pass characteristics by heart, and being able to do some basic manipulations, pretty much covers everything you will ever need to visualize transfer functions or impedances as a function of frequency.
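The standard forms are easy to evaluate numerically. The sketch below evaluates the first-order low-pass form at its corner frequency, where the magnitude should be H(0)/√2 (the −3 dB point) and the phase −45°; the corner frequency is an arbitrary example value.

```python
import cmath
import math

def H_lowpass(w, H0, w0):
    """First-order low-pass standard form H(jw) = H0 / (1 + j*w/w0)."""
    return H0 / (1 + 1j * w / w0)

w0 = 2 * math.pi * 1e3          # arbitrary corner frequency (1 kHz)
H = H_lowpass(w0, 1.0, w0)      # evaluate at the corner frequency

assert abs(abs(H) - 1 / math.sqrt(2)) < 1e-12          # the -3 dB point
assert abs(math.degrees(cmath.phase(H)) + 45) < 1e-9   # -45 degrees of phase
```

Well below ω0 the magnitude approaches H(0); well above ω0 it falls off proportionally to ω0/ω, i.e. 20 dB per decade.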

0.15 Bode plots

A Bode plot is a convenient method for presenting the behaviour of a (linear) circuit; this is done by plotting the magnitude and phase shift of a transfer function as a function of the frequency. Here, the magnitude and frequency are plotted on a logarithmic scale, which proves to be very convenient. Before we dive into Bode diagrams, we first repeat a number of mathematical logarithmic rules:

log(x) + log(y) = log(x·y)        log(x^y) = y·log(x)        log(x + y)|_{x≪y} ≈ log(y)

In words: the logarithm of a product equals the sum of the logarithms; the logarithm of a power equals the exponent times the logarithm; and if one term of a sum dominates (x ≪ y), the logarithm of the sum is approximately the logarithm of the dominant term.

To calculate the argument of a (complex) transfer function, standard rules for complex numbers can be used. The most important one for Bode plots is that the angle (or argument, or phase) of the product of two complex numbers equals the sum of the angles of the individual numbers: ∠(A·B) = ∠A + ∠B.

Bode plot for first order transfer functions

For example, the standard form of a first-order low-pass transfer function is H(jω) = H(0)·1/(1 + jω/ω0). For simplicity, only H(0) > 0 is assumed here.

Using this, the Bode plot of a first-order low-pass transfer function H(jω) = H(0)·1/(1 + jω/ω0) is as shown in Figure 0.10. The corresponding curves for this low-pass transfer function are shown in red. The curves for a first-order high-pass transfer function H(jω) = H(0)·(jω/ω0)/(1 + jω/ω0) are shown in blue.


Figure 0.10: The Bode plot of a first-order low pass (red) and high pass (blue) transfer functions.

Bode plot for second order low pass transfer functions

A similar analysis/construction can be done to create a Bode plot for second order transfer functions. Below, this is done for low pass transfer functions, for a few different values for Q. Again a positive H(0) is assumed for simplicity reasons.

Using this, the Bode plots of second-order low-pass transfer functions with Q = 0.4 (blue curves), Q = 1 (green curves) and Q = 2 (purple curves) are shown in Figure 0.11.


Figure 0.11: The Bode plots of a few second-order low pass transfer functions.

Bode plots for other transfer functions

In most electronic systems, transfer functions are not first order or second order. A generic transfer function can be decomposed into the product of first-order (high-pass or low-pass) and second-order transfer functions (idem). Constructing a Bode plot of the product of (basic) transfer functions is quite simple, due to the fact that for two complex numbers c1 and c2,

|c1·c2| = |c1|·|c2|        ∠(c1·c2) = ∠c1 + ∠c2

Then, constructing a Bode plot of Htotal() = H1() H2() can easily be done.
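On a logarithmic magnitude scale this means that decibels simply add, and so do the phases. A small numerical sketch (with an arbitrary frequency and corner frequency) of a low-pass stage cascaded with a high-pass stage:

```python
import cmath
import math

def db(x):
    """Magnitude of a complex number in decibels."""
    return 20 * math.log10(abs(x))

w, w0 = 5e3, 1e3   # arbitrary evaluation and corner frequencies [rad/s]
H1 = 1 / (1 + 1j * w / w0)               # first-order low pass
H2 = (1j * w / w0) / (1 + 1j * w / w0)   # first-order high pass

# Magnitudes in dB add, phases add:
assert abs(db(H1 * H2) - (db(H1) + db(H2))) < 1e-9
assert abs(cmath.phase(H1 * H2) - (cmath.phase(H1) + cmath.phase(H2))) < 1e-9
```

This is exactly why the Bode plot of H_total(jω) = H1(jω)·H2(jω) is obtained by graphically adding the two individual plots.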


0.16 Calculations & mathematics

Calculations (or mathematics for more complicated calculations) are a necessity for describing something in an exact manner. Without calculations, there would only be vague statements like “if I change something here, then something changes over there” or “if I press here, it hurts there”. Those statements are completely useless! As in any sensible scientific field, in electronics we like to get sufficiently exact relations, described in an exact language: mathematical terms. To refresh some basic math knowledge, this section reviews some of the most basic math rules. They may appear trivial, but in the past years they did not prove to be so.

0.17 The basics

The basis of almost all math is the equation, or a “=” with something on the left hand side, and something else on the right hand side. What those somethings are exactly is not important, but the two somethings are equal to each other in some way and probably have a different form.

These days, in elementary school, students do math with apples, pears and pizzas:

½ pizza + ½ pizza = pizza
(0.1)

This is of course complete nonsense! Even if you would assume all pizzas to be of exactly the same size, shape and appearance (ingredients and their location), it would still depend on how you slice the pizza in half. It is possible to slice a pizza in half in, more or less, infinitely many different directions, and if the half pizzas in (0.1) are cut in two different directions from full pizzas, then there is no way the two of them will be one complete pizza again, although it is suggested by (0.1).

In electronics, our job is a little different: typically we use (integer) numbers of electrons, (a real number of) electrons per second or (real) energy per electron: with charge, or current and voltage. We might possibly add flux if we are talking about inductors, but the physics gets a bit more complicated since we would have to take relativity and Einstein into account. In general, we are dealing with matter that can easily be added, subtracted, divided and multiplied. The basics for doing math then is simply the equation:

something = something
(0.2)

often written in a somewhat different form:

somethingform1 = somethingform2
(0.3)

It clearly states that the part left of the “=”-symbol is equal to the part on the right. More specifically: its value is identical, not its form. Often, we would like to rewrite the equation to have something simple on the left hand side (we “read” from left to right) which is understandable (monthly pay, speed, impedance, ...) and a form on the right hand side that includes a bunch of variables. This is what is called an equation or relation: if you change something on the right hand side, something also changes on the left hand side, and vice versa. Such mathematical relations give the relation between different parameters and are very valuable in analyses and syntheses.

0.18 Basic rules

The most basic rules for relations are formulated below. They only ensure that during manipulation the “=”-symbol is not violated: you may add the same quantity to both sides of an equation, and you may multiply both sides by the same quantity. In other words, adding a well-chosen “something = something” or multiplying by a well-chosen “1” never changes a relation, only its appearance.

That’s it.

0.19 Basic math rules

In addition to the basic rules above, it is also assumed that the basic mathematical rules for exponential functions are known and can be applied by you. Also, the derivatives of some basic functions must be known by heart. If you remember how esomething and harmonic signals (sine and cosine) are related, then you have enough knowledge to start off in this book. If you have some skill in manipulations with equations, can work in a structured way, have some perseverance and some confidence in yourself, then you should be just fine!

0.20 Simplifying relations

In this book relations will be derived frequently: mostly for impedances and transfer functions. Relations will be derived, since these relations will help you to analyze, understand, optimize and synthesize things. When deriving these equations, some skills in simplifying equations would come in handy. Simplifying equations boils down to using the basic rules in the previous section.

The big challenge in the multiplication by 1 is in choosing the correct 1. For example, the voltage transfer of a voltage divider made of a capacitor and a resistor can be derived to be:

H(jω) = (1/(jωC)) / (1/(jωC) + R)

which is an ugly expression that becomes more readable after multiplication by 1. Note that the relation does not change; only the form or appearance does. If you choose the correct 1, the relation gets nicer to read...

H(jω) = (1/(jωC))/(1/(jωC) + R) · (jωC)/(jωC) = 1/(1 + jωRC)    (well-chosen 1)

H(jω) = (1/(jωC))/(1/(jωC) + R) · (1 − e^a)/(1 − e^a) = ((1 − e^a)/(jωC)) / ((1 − e^a)/(jωC) + R·(1 − e^a))    (not-so-well-chosen 1)
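That the well-chosen 1 only changes the appearance, not the relation, can be spot-checked numerically: both forms must give the same complex value at every frequency. The component values below are arbitrary.

```python
import math

# Arbitrary example values for the RC divider
R, C = 1e3, 1e-6

for f in (10.0, 1e3, 1e5):           # a few spot frequencies [Hz]
    w = 2 * math.pi * f
    Zc = 1 / (1j * w * C)
    H_ugly = Zc / (Zc + R)           # the original, "ugly" form
    H_nice = 1 / (1 + 1j * w * R * C)  # after multiplying by (jwC)/(jwC)
    assert abs(H_ugly - H_nice) < 1e-12
```

Exactly the same trick, multiplying numerator and denominator by the same factor, is what brings any derived transfer function into one of the standard forms of section 0.14.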

A parameter or signal may be a function of itself. For instance, the relation

y = a·x + b·y

looks nothing like a closed expression for y. The solution is obviously “separation of variables”, a trick which comes down to adding “something = something” to something. A well chosen “something = something” gives:

y − b·y = a·x + b·y − b·y  →  y·(1 − b) = a·x

Simplifying even further can be easily done by multiplying with a well chosen ”something=something”, like:

y·(1 − b)·(1/(1 − b)) = a·x·(1/(1 − b))  →  y = a·x/(1 − b)

Hence, in order to simplify a relation, it is really important to have mastered the multiplication table of 1 and to be able to use the equation 1 = 1. This seems easy, but it usually proves to be very difficult.

0.21 Impedance matching and maximum power transfer

The power dissipated in a load impedance Rload, driven by a source having a source impedance Rsource, can straightforwardly be calculated. In this calculation, the notation detailed in section 0.1 is used, where e.g. V_src denotes the amplitude of the voltage v_SRC. The maximum power in the load as a function of Rload can be obtained by differentiation:

P_load = I_load²·R_load = (V_src/(R_load + R_source))²·R_load

∂P_load/∂R_load = V_src²·(R_source − R_load)/(R_load + R_source)³

It follows that the power in the load is maximum if the load resistance is equal to the source resistance: R_load = R_source. In a similar way, the maximum power in the load for complex impedances is obtained for Z_load = Z_source* (the complex conjugate). A fairly simple result, but also a result that is only valid when designing the load impedance and without any limitation in the signal source. In this book, the focus is however on designing oscillators and amplifiers that drive a load...
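The R_load = R_source optimum can be found numerically by sweeping the load resistance for a fixed source; the source amplitude and impedance below are arbitrary example values.

```python
# Arbitrary example source: V_src = 10 V amplitude behind R_source = 50 Ohm
V_src, R_source = 10.0, 50.0

def p_load(R_load):
    """Load power for a given load resistance (amplitudes, so up to a factor 1/2)."""
    i = V_src / (R_load + R_source)
    return i * i * R_load

# Sweep R_load from 0.1 to 500 Ohm in 0.1 Ohm steps and locate the maximum
p_max, R_best = max((p_load(r * 0.1), r * 0.1) for r in range(1, 5001))

assert abs(R_best - R_source) < 0.1                     # maximum at R_load = R_source
assert abs(p_max - V_src**2 / (4 * R_source)) < 1e-6    # P_max = V_src^2 / (4*R_source)
```

The sweep confirms both the location of the optimum and the well-known maximum value V_src²/(4·R_source).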

Figure 0.12: An abstract signal source (in gray) and a load

The optimum above is always true, since it is a mathematical truth. However, it assumes a fixed source impedance, no voltage or current limitation of the signal source, and that the load impedance is the quantity being designed. When designing amplifiers that drive a load, the amplifier’s output voltage range, output current range and output impedance can be (and are) designed. Ensuring that the amplifier can supply a sufficient output voltage and output current, the maximum power into the load then follows from the following two (partial) derivatives:

∂P_load/∂R_source = −2·V_src²·R_load/(R_load + R_source)³

∂P_load/∂V_src = 2·V_src·R_load/(R_load + R_source)²

from which it follows that the load power increases monotonically with decreasing R_source and with increasing V_src: design the output impedance as low as possible and the output voltage amplitude as high as possible.

If these two conditions are satisfied, then clearly the load and source impedances are not matched to achieve maximum output power. The upper limit in maximum power into the load for a given load impedance is then simply limited by the maximum output voltage OR by the maximum output current that can be provided simultaneously by the amplifier. This is worked out in more detail in section 11.15.

0.22 Solving exercises

Most problems can be tackled in the same general way.

1.
Understand the question, then try to specify it. For example, if you are asked for an output impedance, start writing something like:
zout =?

or

z_out = v_OUT/i_OUT = v_out/i_out = ?

This gives you a clear direction for the derivation. You can also check afterwards whether you actually calculated what you wanted to know.

2.
Make a drawing/schematic where all relevant items for this specific problem are presented. Leave out everything that is not important. You might need multiple drawings / schematics to obtain a final version. Putting something together quickly usually yields incorrect results or causes unnecessarily complex calculations.
3.
Work in a structured manner towards the answer. This can be done in several ways, some of which will be presented in this book.
4.
Verify your answer:

In addition, there are a number of issues which are useful to keep in mind. The exercises of any course can be solved; you don’t have to worry whether or not you have enough parameters. There may even be too many parameters within one exercise, just to let students think more and learn more.

In more complicated assignments, it is not always evident that you have enough data to actually calculate something. Before you start calculating, it might be useful to validate whether or not you are actually capable of calculating something in the first place. One approach for this is to use the fact that you need n independent equations to solve for n variables. If you have fewer equations: be smart. If you have more equations or conditions: compromise!

Furthermore, it is always useful to work with variables instead of numbers. Firstly, you make fewer mistakes and it allows you to perform some basic sanity checks on your answer. Secondly, on tests, a minor mistake in a derivation with variables is a minor issue, while it would be a direct and complete fail for a numerical answer. Thirdly, relations can be (re)used and (fourthly) they allow for synthesis.

When you work with variables, preferably use a divide and conquer strategy. Divide your problem in subproblems, solve these individually and construct the overall solution from these. This significantly reduces the amount of work (calculations) compared to directly calculating properties for a bigger system.

This can also be shown scientifically for systematic methods like nodal analysis. If you use Gaussian elimination, it takes on the order of n³ (hence O(n³)) calculations to solve the system of equations. Separating the original problem into two smaller problems takes only about 2·(n/2)³ + 2³ manipulations. Subdividing the problem into subproblems of about 2 calculations or components each is optimal. This subdividing is inherently woven into the brute-force approach. If you wish to do as little work as possible, always divide the problem into smaller problems, which you solve independently. From there, you can easily construct the answer to the larger original problem again.
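The operation-count argument above can be made concrete with a couple of lines of arithmetic; the counts are rough order-of-magnitude estimates, not exact flop counts.

```python
def direct(n):
    """Rough cost of solving n equations directly with Gaussian elimination."""
    return n ** 3

def split_once(n):
    """Rough cost after splitting into two independent half-size problems
    plus a small recombination step (~2^3 manipulations)."""
    return 2 * (n // 2) ** 3 + 2 ** 3

# Splitting always wins for any reasonably sized problem:
for n in (8, 16, 64):
    assert split_once(n) < direct(n)

# e.g. n = 16: 16^3 = 4096 manipulations versus 2*8^3 + 8 = 1032
```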

If you follow points 1 to 3, you are able to solve just about any problem, in electronics or otherwise. Point 4 is to verify your answer. By now, you might be wondering why there is nothing stated here about verification using the answer manual.

0.23 Verification using the answer manual

An answer manual is for most students (obviously not for you) useless, since it is typically used the wrong way: the example derivation is read along with the exercise, which makes many students conclude that they would have been able to solve it themselves. However, actually being able to solve the problem and being able to understand the solution are two entirely different concepts.

This is the reason you never get the answers to your exam during the exam, with a sheet of questions with something like:
Assignment 1. Tick the correct answer:
I could have made this assignment myself
I could not have made this assignment myself

The correct way to handle an answer manual, is:

In essence, every answer manual is useless; Herman Finkers already stated “stories for in the fireplace” (freely translated) [5]. In the end, the only correct way of using an answer manual is not using it at all.

0.24 And finally...

Some useless knowledge always comes in handy. If you have ever wondered about e:

e = lim_{x→∞} (1 + 1/x)^x

More nonsense: for non-linear effects, you usually get terms like x^b and you have to deal with them in a meaningful way. For harmonic distortion calculations you would then need something like sin^n(x), which is not that easy to use. Luckily, Euler has told us many things, among which:

cos(x) = ½·(e^{jx} + e^{−jx})

Using the binomial:

(x + y)^n = ∑_{k=0}^{n} (n choose k)·x^{n−k}·y^k

This binomial is nothing more and nothing less than counting all possibilities to obtain a specific power term. For instance, (x + y)^4 is the same as (equals) (x + y)·(x + y)·(x + y)·(x + y), and there is only one way to get x^4: multiply all x’s within parentheses with each other. To get to x³y, there are 4 ways to change one x into a y: plain combinatorics. Consequently, cos^n(x) is:

cos^n(x) = (½·(e^{jx} + e^{−jx}))^n = (1/2^n)·(e^{jx} + e^{−jx})^n = (1/2^n)·∑_{k=0}^{n} (n choose k)·e^{jx(n−k)}·e^{−jkx}

Just using Euler, you can therefore rewrite any cos^n(x) in no time into a series of higher harmonic components. You just might need it some day.
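As a sanity check, the binomial expansion of cos^n(x) can be evaluated numerically and compared against cos(x)^n, and against the known harmonic decomposition cos³(x) = (3·cos(x) + cos(3x))/4. The test values of x are arbitrary.

```python
import cmath
import math

def cos_pow(x, n):
    """cos^n(x) via the binomial expansion of (1/2*(e^jx + e^-jx))^n."""
    total = sum(math.comb(n, k) * cmath.exp(1j * x * (n - k)) * cmath.exp(-1j * k * x)
                for k in range(n + 1))
    return (total / 2 ** n).real   # the imaginary parts cancel pairwise

for x in (0.3, 1.1, 2.5):
    # The expansion reproduces cos(x)^n ...
    assert abs(cos_pow(x, 3) - math.cos(x) ** 3) < 1e-12
    # ... and makes the harmonic content explicit: cos^3(x) = (3*cos(x) + cos(3x))/4
    assert abs(cos_pow(x, 3) - (3 * math.cos(x) + math.cos(3 * x)) / 4) < 1e-12
```

The exponents jx(n−k) and −jkx combine to jx(n−2k): the expansion directly lists which harmonics (n, n−2, n−4, ...) a cos^n non-linearity produces, which is exactly what a harmonic distortion calculation needs.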

More useless information, now that we are talking about the binomial: you can easily use it to see that the derivative (with respect to x) of a term a·x^p equals a·p·x^{p−1}:

∂(a·x^p)/∂x = lim_{δ→0} (a·(x + δ)^p − a·x^p)/δ
            = lim_{δ→0} (a·∑_{k=0}^{p} (p choose k)·x^{p−k}·δ^k − a·x^p)/δ
            = lim_{δ→0} a·∑_{k=1}^{p} (p choose k)·x^{p−k}·δ^k/δ
            = a·(p choose 1)·x^{p−1}
            = a·p·x^{p−1}    (and you only have to remember this)

Enough with this useless chatter, let us start with the actual topics of this book. Have fun!