Principle of Superposition, classic version
For all linear systems, the net response at a given place and time caused by two or more inputs (or stimuli) is the sum of the responses which each input (or stimulus) would have caused individually.
Then, if x, y are the inputs and a, b are two scalars, the corresponding output is:
f (a x + b y) = a f (x) + b f (y)
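As a quick sketch (an illustrative example of ours, not part of the classic statement), the defining property can be checked numerically on any linear operator, here multiplication by a fixed matrix M; a nonlinear map fails the same check:

```python
import numpy as np

# A minimal sketch: multiplication by a fixed matrix M is a linear system,
# so it must satisfy f(a x + b y) = a f(x) + b f(y).
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))          # the "system"
f = lambda v: M @ v                  # its response to an input vector v

x, y = rng.normal(size=3), rng.normal(size=3)
a, b = 2.0, -0.5

lhs = f(a * x + b * y)               # response to the combined input
rhs = a * f(x) + b * f(y)            # sum of the individual responses
assert np.allclose(lhs, rhs)

# A nonlinear system (element-wise squaring) violates the Principle:
g = lambda v: v ** 2
assert not np.allclose(g(a * x + b * y), a * g(x) + b * g(y))
```

The same check, applied to the physical systems discussed below, is exactly what breaks down once subsystems and the Environment enter the picture.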
All the Engineers engaged on a daily basis with maintenance operations on the machinery of Food and Beverage production lines know they continuously apply an idea named the Principle of Superposition.
As an example, we apply it during the most diverse activities, when:
- applying Root Cause Analysis, during problem solving;
- troubleshooting electronic, electrotechnical and electromechanical problems;
- designing simple or complex electronic circuits;
- imagining how an effect could be related to a hypothetical cause.
A canon some centuries old, always part of the design calculations of Electronics (Automation and Electronic Inspection fully included), Mathematics and Analytical Mechanics. The classic version of the Principle of Superposition is basically enunciated in the form visible at the right side.
All clockworks are systems of linear mechanical components, known for centuries to be extremely sensitive to the Environment. The whole of Physics was tentatively modelled as a clockwork world until the year 1899. In 1900 the Nobel laureate Max Planck made a decisive breakthrough which changed the course of Mankind’s history.
Constructive interference. Two Gaussian wave pulses with a minimal amplitude difference, travelling in opposite directions on a nondispersive string. Visibly, they pass through each other without being disturbed. The net displacement is the sum of the two individual displacements.
It is based on the assumption that the system:
- is linear, so that the function satisfies the conditions of additivity and homogeneity;
- is isolated from the Environment: no influences (inputs, or stimuli) act on the system except the inputs considered (x and y, in the example above at the right side);
- has no subsystems, hence: no correlation of subsystems’ states.
Considering that the state of a system is uniquely defined by the correlations of its subsystems, the third point above could surely have been a reasonable assumption centuries ago, when the Principle was born. But in the meantime Science and Technology proceeded, rapidly reducing measurement uncertainties. So rapidly that, already in the year 1900, Planck’s evaluation of the constant that has since carried his name implied an update of assumptions held valid for centuries. Without entering into fine details, we all know that even the simplest linear systems, when closely examined, are in reality extremely complex.
Known examples: inductors and capacitors connected to alternating-current generators, the mechanical resonance of metal bars, waves, etc. The very term complexity hints at the existence of subsystems, and subsystems’ states are correlated. In reality, the supposedly linear behaviour was, already centuries ago, the object of the doubts and investigations of many. Also the clockwork and the dials visible above and below are known to be complex, purely mechanical systems, where the Environment (e.g., the ambient temperature and atmospheric pressure) is decisive, and different times shall be displayed under different conditions. The Machine Vision systems inside Empty and Full Bottle Inspectors, Case or Crate Inspectors, show no lesser levels of complexity.
Case Study 1
Superposition in 1D of two waves having same wavelength
“...the sum of two waves can also be their mutual annihilation and disappearance”
The Principle of Superposition in its classic version, when applied to two (or more) waves with the same wavelength λ, superposed and propagating along the same direction, states that they can be considered as a single wave whose instantaneous amplitude is the sum V of the individual instantaneous amplitudes of the separate waves U and U′.
As an example, let us consider two monochromatic (isofrequential) waves U and U′ with the same amplitude a, differing only in their phase angles φ and φ′:
U = a cos (ωt − φ)
U' = a cos (ωt − φ′)
The instantaneous value of the combination V of U and U′ is:
V = U + U′ = a [cos(ωt − φ) + cos(ωt − φ′)] = 2a cos[(φ − φ′)/2] cos[ωt − (φ + φ′)/2]
The resulting intensity A (the square of the new amplitude), depending upon the difference of the optical paths x and x′ of each wave, expressed in units of the wavelength λ, results:
A = 4a^{2} cos^{2}[π (x − x′) / λ] = 2a^{2} {1 + cos[2π (x − x′) / λ]}   [1]
Thus, there can be both constructive and destructive interference between the two waves, and the resulting amplitude can be anything between 0 and 2a. An extremely important result, stating that the sum of two waves can also be their mutual annihilation and disappearance.
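A numerical sketch of this result (the amplitude and frequency values are illustrative choices of ours) confirms that the amplitude of V = U + U′ equals 2a|cos[(φ − φ′)/2]|, i.e. 2a for equal phases and 0 for opposite phases:

```python
import numpy as np

a = 1.0                      # common amplitude of U and U'
omega = 2 * np.pi            # angular frequency (arbitrary units)
t = np.linspace(0.0, 1.0, 1000)   # one full period

def combined_amplitude(phi, phi_prime):
    """Peak amplitude of V = U + U' with U = a cos(wt - phi), U' = a cos(wt - phi')."""
    U = a * np.cos(omega * t - phi)
    U_prime = a * np.cos(omega * t - phi_prime)
    return np.max(np.abs(U + U_prime))

# Constructive interference: equal phases give amplitude 2a
print(combined_amplitude(0.0, 0.0))       # ~2a
# Destructive interference: opposite phases give mutual annihilation
print(combined_amplitude(0.0, np.pi))     # ~0
# The general case matches 2a|cos((phi - phi')/2)|
phi, phi_p = 0.3, 1.1
expected = 2 * a * abs(np.cos((phi - phi_p) / 2))
assert abs(combined_amplitude(phi, phi_p) - expected) < 1e-3
```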
Case Study 2
Liquid waves in 4D
The example analytically developed above, however complex it may appear, treated a case where the entire dynamical phenomenon lies in a 1D, spatially limited volume. Also, no external Environment is affecting the evolution of the wave deriving from the superposition of the two waves. It is, literally, two infinite orders less complex than a true 4D (3+1-dimensional), only apparently “banal” liquid wave motion, grossly simplified by the movement of the liquid waves, themselves differentiable manifolds.
To understand how many ideas lie today behind the word Superposition, we suggest simply observing a liquid wave motion like the one in the video below. The wave motion of liquids, e.g. sea waves, is a case much more complex than the superposition of two waves in 1D. As an example, in the motion of real liquid waves, undercurrent, foam and spray are influenced by FLIP particle movements. A complexity made explicit when considering that each frame of the following video, in its native extremely high definition, took 8 hours of computation on a high-end personal computer. Meaning that months of computation were necessary to make the video below.
The Machine Vision systems inside Empty and Full Bottle Inspectors, Case or Crate Inspectors, share similar levels of computational complexity.
Case Study 3
“vibrations can occur simultaneously and independently of one another”
Normal modes of vibration in 1D
It was observed already twenty-six centuries ago, by the Greek mathematician Pythagoras, how the separation of stretched strings into two segments gave pleasant sounds when the ratio of those segments implied special numerical ratios. The phenomenon that we’ll briefly introduce in the following, manifested in a vibrating string, was first clearly discussed by the Swiss mathematician Daniel Bernoulli in 1753, and its analytical foundations were discovered by the French mathematician Joseph Fourier in 1807. Analytical foundations themselves based on the key concept of normal modes of vibration, also named permitted stationary vibrations or eigenmodes. Vibrations whose essential form of functional dependence was first discovered by the Italian mathematician and physicist Galileo Galilei nearly four centuries ago, and which we’ll explore below. A stretched string fixed at its end points is a 1D object. What follows can be generalised, by means of different functional relations, also to 2D objects like stretched drumheads struck by a point source.
Stretched drumheads are also exemplified by:
- metal caps, when closing a bottle;
- lids, when closing a can.
A stretched string with both ends fixed has a defined set of states of natural vibration.
The reason why they are named stationary vibrations, or normal modes n, can be visually recognised in the figures at the right side, and lies in the fact that:
- each point on the string vibrates transversely in simple harmonic motion with constant amplitude;
- the frequency of this vibration ν is one and the same for all portions of the string;
- there exist points whose amplitude always remains null (with the only exception of the lowest mode), named nodes;
- there exist points whose amplitude reaches a maximum, named antinodes.
Normal modes of vibration of a string, also named stationary waves, obtained by shining ultraviolet UV-A light, emitted by a Wood’s lamp, on a motor-driven string. Visible are the modes n = 1, 2, 4, 7 (abridged from image credit University of California at Los Angeles, 2014)
Frequencies of the permitted modes of vibration
Naming:
- L length of the string [meter];
- x spatial coordinate, with x = 0 and x = L the positions where the string’s ends are held fixed [meter];
- ν = ω/2π frequency [hertz];
- ω = 2πν angular frequency [radians/second];
- t time parameter [second];
- n mode of vibration, an integer number n = 1, 2, 3, …, ∞;
- v speed of the progressive waves travelling on a long string [meter/second];
- T tension of the string [newton];
- μ uniform linear density of the string [kg/meter];
the frequencies νn of the permitted modes of vibration n are then given by:
ν_n = n v / (2L) = [n / (2L)] (T/μ)^{1/2}
Simplifying the formalism to its physical meaning, the shape of a string:
- at any instant of time,
- in any particular mode of vibration n,
is the one that accommodates an integral number of half-sine curves.
The wavelength λ_n associated with the mode n of vibration is:
λ_n = (2L) / n
and implies the following series of identities:
ω / v = (n π / L) = (2π) / λ_n
Shape and equation of motion of a string vibrating in 1D
The shape of a string vibrating in the mode n derives from the equation:
f_n(x) = A_n sin[(2π x) / λ_n] = A_n sin[(n π x) / L]
and its equation of motion, expressed as a space-time function named displacement y, is:
y_n(x, t) = A_n sin[(2π x) / λ_n] cos(ω_n t)   [2]
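The relations above can be sketched numerically. The string parameters below (length, tension, linear density) are assumed values for illustration only, not data from the original:

```python
import numpy as np

# Assumed, illustrative string: L = 0.65 m, T = 60 N, mu = 0.001 kg/m
L_s = 0.65      # string length [m]
T = 60.0        # tension [N]
mu = 1.0e-3     # uniform linear density [kg/m]

v = np.sqrt(T / mu)        # speed of progressive waves on the string [m/s]

def nu(n):
    """Frequency of mode n: nu_n = [n / (2L)] (T/mu)^(1/2)."""
    return n * v / (2 * L_s)

def y(n, x, t, A=1.0):
    """Displacement of mode n: y_n = A sin(n pi x / L) cos(omega_n t)."""
    omega_n = 2 * np.pi * nu(n)
    return A * np.sin(n * np.pi * x / L_s) * np.cos(omega_n * t)

# The frequencies are integral multiples of the fundamental:
assert abs(nu(3) / nu(1) - 3.0) < 1e-12
# The fixed ends are nodes for every mode:
for n in (1, 2, 4, 7):
    assert abs(y(n, 0.0, 0.1)) < 1e-9 and abs(y(n, L_s, 0.1)) < 1e-9
```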
“Linearity-Superposition relation”
Examining this equation:
- its various individual solutions correspond to the various individual harmonics;
- the term T/μ, having the dimensions of the square of a speed [m^{2}/s^{2}], is the square of the speed v of the progressive waves travelling on a long string;
- the angular frequency ω_n associated with the mode n is:
ω_n = (n π / L) (T/μ)^{1/2} = n ω_1
and is an integral multiple n of the angular frequency ω_1 associated, by means of the relation ω_1 = 2π ν_1, with the state of lowest possible frequency ν_1, a state named the fundamental mode. It is a physically very important fact that the vibrations at the different frequencies ν_n can occur simultaneously and independently of one another. An experimental fact, already observed four centuries ago, whose modern interpretation we treat in another page.
Also, an experimentally well-confirmed fact, happening because the properties of the system are such that the basic dynamical equation is linear. Meaning that only the first power of the displacement y occurs anywhere, for all possible values in the domain of the spatial coordinate x, say in the range 0 → L. If the various individual solutions of equation [2] above are denoted y_1, y_2, y_3, …, y_∞, then their sum also satisfies the basic equation, and the motion described is resolved into these individual components.
Mutual independence of the superposed vibrations
“mutual independence of the superposed vibrations of a string”
To understand some of the most modern concepts of Physics presented in other pages of this website, it is necessary first to confront the mass of experimental and theoretical evidence accumulated before a critical turning point reached around one century ago.
For centuries, this evidence has been unequivocally showing the:
mutual independence of the superposed vibrations of a string.
However relevant it was considered, the physical meaning of this feature was overlooked until one century ago, when Quantum Mechanics was born.
Figures above show examples of such compound or superposed vibrations. The fact that the vibrations are mutually independent can be immediately seen after stopping the transversal motion of the string at a point which is a node for some harmonics, but not for others. In this case, those component vibrations for which the point is a node will continue to vibrate without feeling any effect of our action. On the contrary, the others will be quenched. Applying this to a practical case, imagine a piano string set sounding loudly by striking a key, the key being kept held down. Now touch the string at one third of the way along its length. You’ll discover that all component vibrations are stopped except the third, sixth, etc., all of them multiples of the third harmonic’s frequency.
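The piano-string experiment can be sketched in a few lines (an illustrative check of our own): a mode n survives the touch at x = L/3 only if it has a node there, i.e. only if sin(nπ/3) = 0, which happens exactly when n is a multiple of 3:

```python
import numpy as np

# Touching the string at x = L/3 quenches every harmonic that does NOT
# have a node there. Mode n has a node at x when sin(n pi x / L) == 0.

def has_node_at(n, fraction):
    """True if mode n has a node at x = fraction * L."""
    return abs(np.sin(n * np.pi * fraction)) < 1e-9

surviving = [n for n in range(1, 13) if has_node_at(n, 1.0 / 3.0)]
print(surviving)   # the harmonics with a node at L/3: [3, 6, 9, 12]
assert surviving == [3, 6, 9, 12]
```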
Case Study 4
A supposedly linear system: the RLC resonant series circuit
James Clerk Maxwell
An example of the strict correlation of the subsystems, and of the correlation between a system and its Environment, is known to Electronic Engineers operating in Food and Beverage Bottling Lines: the behaviour of the RLC resonant circuit when varying the frequency of the alternating-current signals applied. Inductors, resistors and capacitors are linear components, where linear means they are bipoles whose behaviour follows the classic version of the Principle of Superposition.
These circuits are defined by Maxwell’s differential equation [3], where:
- E potential [volt]
- L inductance [henry]
- C capacitance [farad]
- R resistance [ohm]
- i current [ampere]
- t time [second]
- ν frequency [hertz, i.e. 1/second]
- ω angular frequency ω = 2πν [radians/second]
L (di/dt) + R i + (1/C) ∫ i dt = E sin(ωt)   [3]
having as solution [4] for the current intensity i:
i = {E / [R^{2} + (ωL − 1/(ωC))^{2}]^{1/2}} sin(ωt − φ), with tan φ = [ωL − 1/(ωC)] / R   [4]
These relations between currents, voltages, resistance, inductance and capacitance are what we are expecting to see when applying the Principle of Superposition, assuming it fully applies to this kind of system and its subsystems.
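As a sketch of the ideal expectation (the component values below are assumed for illustration, not taken from the original), the current amplitude predicted by equation [4] can be evaluated from the complex impedance Z = R + j(ωL − 1/(ωC)); at resonance the reactances cancel and only R limits the current:

```python
import numpy as np

# Assumed, illustrative series RLC: R = 10 ohm, L = 1 mH, C = 100 nF, E = 1 V
R, L_h, C = 10.0, 1.0e-3, 100.0e-9
E = 1.0

def current_amplitude(f):
    """Peak current i = E / |Z| with Z = R + j(omega L - 1/(omega C))."""
    omega = 2 * np.pi * f
    Z = R + 1j * (omega * L_h - 1.0 / (omega * C))
    return E / abs(Z)

f0 = 1.0 / (2 * np.pi * np.sqrt(L_h * C))   # resonance frequency [Hz]
# At resonance the reactances cancel and only R limits the current:
assert abs(current_amplitude(f0) - E / R) < 1e-9
# Away from resonance the current is smaller:
assert current_amplitude(f0 / 10) < current_amplitude(f0)
assert current_amplitude(f0 * 10) < current_amplitude(f0)
```

This is the fiction described below: the ideal, linear expectation that the experiment then contradicts at high frequency.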
The fiction. The linear character of the inductor, resistor and capacitor is what we expect to see after connecting them to a Function or Signal Generator in an RLC series circuit, powered by a sinusoidal waveform. Linear means they are bipoles whose behaviour follows the classic version of the Principle of Superposition
The reality. R, L and C are correlated subsystems, each of them separately and differently exposed to an Environment whose relation with the correlated RLC is impossible to fully account for
Connect an RLC (Resistor-Inductor-Capacitor) series circuit to a Signal or Function Generator, creating what is visible at the right side, and power on, setting a sinusoidal waveform:
- ideally, to allow precise measurements, the behaviour of the circuit should be monitored by means of an instrument measuring the intensity of high-frequency alternating current, with ranges of 10 μA, 100 μA, 1 mA and 10 mA;
- if such an instrument is not available, to understand what happens it may be enough to have a visual display of the waveform and its features:
- directly in the Function Generator (as in the figure at the side), or:
- by means of an Oscilloscope connected to the same leads to which the Function Generator or Signal Generator is connected;
- start at a frequency of 50 Hz and increase the frequency by decadic steps to: 500 Hz, 5 kHz, 50 kHz, 500 kHz, 5 MHz, 50 MHz;
- after each increase of frequency:
- observe and take note of the intensity of the current (or evaluate the oscilloscope’s waveforms, if you have no microamperometer);
- move your hand to a distance of 300 mm from the circuit, and look for effects on the measurement readouts (or the waveforms’ shapes);
- for frequencies > 200 kHz, the mere act of oscillating the hand a few centimetres, at a distance of 300 mm, makes a circuit built with three surely linear components behave in an apparently nonlinear way. The hand, lying outside the RLC circuit visible at the side, is now part of an Environment undoubtedly correlated with the resonant circuit;
- staying far from the circuit and increasing the value of the frequency, we are spectators of the circuit’s increasingly complex behaviour, due to the superposition of the effects of distinct behaviours:
- RLC circuit resonance;
- correlations between RLC circuit's subsystems;
- correlation between the resistor R and Environment;
- correlation between the inductor L and Environment;
- correlation between the capacitor C and Environment;
- closing the entire circuit in a Faraday cage and increasing the value of the frequency, we are spectators of the circuit’s increasingly complex behaviour, due to the superposition of the effects of distinct behaviours:
- RLC circuit resonance;
- correlations between RLC circuit's subsystems;
- correlation between the resistor R and the Environment, where the superposed terms due to electromagnetic induction by the power network, motors, data, radio and TV are minimised, but several other contributions still remain;
- correlation between the inductor L and the Environment, where the superposed terms due to electromagnetic induction by the power network, motors, data, radio and TV are minimised, but several other contributions still remain;
- correlation between the capacitor C and the Environment, where the superposed terms due to electromagnetic induction by the power network, motors, data, radio and TV are minimised, but several other contributions still remain;
implying a complex behaviour which the definition of the Principle of Superposition we are adopting as our Polar Star does not appear fully capable of handling;
Nonlinear classic devices were born long before transistors. Legacy vacuum tubes are nonlinear devices, operating on the basis of electrostatics. A portion of their characteristic curve shows the equivalent of a negative differential resistance, which made of them the first signal amplifiers in history
- increasing the frequency of the signal from 500 kHz to 50 MHz, the circuit markedly shows a complex behaviour. The RLC circuit’s expected operation is progressively replaced by a vectorial superposition of different effects:
- Resistor R
- is progressively showing the properties we’d expect from a superposed inductor, because of the solenoidal geometry of its resistive material and also because the pair of terminals are metallic, hence themselves inductors;
- a superposed capacitor, because of the potential existing between its different sections;
- the skin effect is progressively increasing the impedance, reducing the cross-section of its terminals actually involved in the passage of electrons;
- the metal oxides of which the resistor is made behave in a different way, introducing unexpected superimposed effects;
- changes in the Environmental temperature around the resistor seem to intervene in the operation, introducing unexpected superimposed effects;
- there is an additional alternating current induced in the terminals and in the resistive metal-oxide materials lying in the em field created by the Inductor L;
- there is an additional alternating current induced in the terminals and in the resistive metal-oxide materials lying in the em field created by the terminals of the Capacitor C;
- there are additional alternating currents induced in the Resistor, originating from the fact that the Generator is not ideal (it radiates);
- there are additional alternating currents induced in the Resistor, due to sudden fluctuations in frequency, polarization and amplitude of the electromagnetic fields in the Environment (e.g., data, radio and TV transmissions, cables of the power network radiating at 50 or 60 Hz);
- there are additional alternating currents induced in the Resistor, impulses of brief duration due to cosmic rays;
- there are additional alternating currents induced in the Resistor, impulses of brief duration due to environmental radioactivity;
- the electromagnetic fields of the prior points also fluctuate because of sudden changes in the local value g of the Earth’s gravitational field, since no way exists to shield the RLC circuit gravitationally, and gravimeters’ measurement repetition rates < 10 Hz are always delayed with respect to the em-induced interfering signals we’d like to compensate;
- (…………..).
- Capacitor C
- shows a progressively marked behaviour equivalent to the one we’d expect from an inductor, because its terminals are metallic;
- an additional parallel capacitor, because of the potential existing between its terminals;
- the skin effect is progressively increasing the impedance, reducing the cross-section of its terminals actually involved in the passage of electrons;
- the dielectric behaves in a different way, introducing unexpected superimposed effects;
- changes in the Environmental temperature around the capacitor seem to intervene in the operation, introducing unexpected superimposed effects;
- there is an additional alternating current induced in the terminals and in the metal plates lying in the em field created by the Inductor L;
- there is an additional alternating current induced in the terminals and in the metal plates lying in the em field created by the passage of current through the terminals of the Resistor R;
- there are additional alternating currents induced in the Capacitor, originating from the fact that the Generator is not ideal (it radiates);
- there are additional alternating currents induced in the Capacitor, due to sudden fluctuations in frequency, polarization and amplitude of the electromagnetic fields in the Environment (e.g., data, radio and TV transmissions, cables of the power network radiating at 50 or 60 Hz);
- there are additional alternating currents induced in the Capacitor, impulses of brief duration due to environmental radioactivity;
- there are additional alternating currents induced in the Capacitor, impulses of brief duration due to cosmic rays;
- the electromagnetic fields of the prior points also fluctuate because of sudden changes in the local value g of the Earth’s gravitational field, since no way exists to shield the RLC circuit gravitationally, and gravimeters’ measurement repetition rates < 10 Hz are always delayed with respect to the em-induced interfering signals we’d like to compensate;
- (…………).
- Inductor L
- is progressively showing the properties we’d expect from a superposed capacitor, because its resistance is not zero and hence a potential exists between different sections of the solenoid;
- an additional parallel capacitor, because of the potential existing between its terminals;
- the skin effect is progressively increasing the impedance, reducing the cross-section actually involved in the passage of electrons;
- the air in which the winding lies is clearly intervening in the operation, following relatively small changes in the Environmental humidity, introducing unexpected superimposed effects;
- there is an additional alternating current self-induced in the terminals and in the solenoid, lying in the em field created by the same Inductor L;
- there is an additional alternating current induced in the terminals and in the solenoid lying in the em field created by the passage of current through the terminals of the Capacitor C;
- there is an additional alternating current induced in the terminals and in the solenoid lying in the em field created by the passage of current through the terminals of the Resistor R;
- there is an additional parasitic capacitance, originating from the fact that the windings have non-zero resistance and that there is a dielectric in between them;
- there are additional alternating currents induced in the Inductor, originating from the fact that the Generator is not ideal (it radiates);
- there are additional alternating currents induced in the Inductor, due to sudden fluctuations in frequency, polarization and amplitude of the electromagnetic fields in the Environment (e.g., data, radio and TV transmissions, cables of the power network radiating at 50 or 60 Hz);
- there are additional alternating currents induced in the Inductor, impulses of brief duration due to environmental radioactivity;
- there are additional alternating currents induced in the Inductor, impulses of brief duration due to cosmic rays;
- the electromagnetic fields of the prior points also fluctuate because of sudden changes in the local value g of the Earth’s gravitational field, since no way exists to shield the RLC circuit gravitationally, and gravimeters’ measurement repetition rates < 10 Hz are always delayed with respect to the em-induced interfering signals we’d like to compensate;
- (…………..).
The values we registered for the current i, so different from what was expected from equation [4], have shown that the impedance characteristics of the common circuit elements (resistors, capacitors, inductors) utilized in circuit theory are simply low-frequency asymptotes of the overall frequency responses of these components. Also, these tests show that each of the three components, even when taken separately from the others, violates the rules for a linear device, because it has internal subsystems and, worse, subsystems whose behaviour with respect to the frequency is differential.
This means that also the three circuits at the right side, equivalent to the Resistor, Capacitor and Inductor even when separately considered, are frequency-dependent. Their aspect and performance change with the frequency. Also, these equivalent circuits abstract from the existence of the Environment. Comparing the reality we discovered, with fourteen elementary components appearing below rather than three (R, L, C), it becomes easier to imagine why the observed behaviour is so complex and divergent from what the differential equation [3] defines.
Equivalent circuits of a Resistor, a Capacitor and an Inductor. The values we registered for the current i, so different from what was expected from equation [4], show that the impedance characteristics of the common linear circuit elements R, L, C are only low-frequency asymptotes of the overall frequency responses of these components
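A minimal sketch of why the low-frequency asymptote breaks down: model the resistor with two assumed parasitics, a series lead inductance Ls and a parallel stray capacitance Cp. The values are illustrative choices of ours, not measured ones from the original equivalent circuits:

```python
import numpy as np

# Hypothetical equivalent circuit of a real resistor:
# (nominal R in parallel with stray Cp), in series with lead inductance Ls.
R = 1.0e3        # nominal resistance [ohm]
Ls = 10.0e-9     # assumed series lead inductance [H]
Cp = 0.5e-12     # assumed parallel stray capacitance [F]

def impedance(f):
    """Z(f) of the parasitic model: (R parallel Cp) in series with Ls."""
    omega = 2 * np.pi * f
    Zc = 1.0 / (1j * omega * Cp)
    return (R * Zc) / (R + Zc) + 1j * omega * Ls

# At low frequency the model is asymptotic to the ideal resistor:
assert abs(abs(impedance(50.0)) - R) / R < 1e-6
# At hundreds of MHz the parasitics dominate and |Z| departs widely from R:
assert abs(abs(impedance(500.0e6)) - R) / R > 0.1
```

The same construction, repeated for the capacitor and the inductor, reproduces the frequency-dependent equivalent circuits discussed above.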
Up to 750 GHz
Now, imagine having the possibility to access a Signal Generator capable of reaching an even higher frequency, e.g. 750 GHz. Then, the only model explaining what is observed is the one where the amount of superposed and correlated effects diverges to infinity.
Connecting the module at the left side to the output of a Signal Generator like the one above, it becomes possible to extend to 750 GHz the frequency of the signals introduced into electrical and electronic circuits, thus allowing us to witness the divergence of the superposed correlated effects. An infinity converging to a limited, however mind-boggling, amount, due to the discretization of the frequencies readable in the Generator (abridged from images credited to Rohde & Schwarz® above, Virginia Diodes® at the side)
When confronted with the necessary answers to the following questions, we start to feel much more than the superposed effects of the multitude of particles building up an atom of Copper in a conductor:
- what part of the effects, and in what amount, is due respectively to the RLC circuit and to the Environment?
- of the space-time existing around the circuit, what section should be considered Environment?
- of all the objects (e.g., elementary particles, atoms, molecules, macroscopic bodies) existing in the section of space-time we have chosen to consider Environment, what, and to what extent, should be considered causally connected?
- for each infinitesimal frequency in such a wide range, we’d be forced to deduce the complete detailed behaviour from the measurement of an amount of properties which, having been forced to include the Environment, is no longer limited to three RLC components, but extends to a mind-boggling amount;
- the measurement instruments we are using are not ideal, and the amplitudes of their uncertainties, as we increase the signal frequency and extend the examination to the Environment, become of the same size as what we are looking at as signals. What should be considered Signal, and what a fluctuation due to the instrument’s uncertainty?
- what should happen when increasing the frequency toward infinity?
- how to treat the chained effects of a change in the temperature T, implying new values for the resistance R, inductance L and capacitance C? ...say, the partial derivative:
∂f(R, L, C) / ∂T
Common cables, linear when transferring DC power (not looking closely at what is meanwhile going on at the atomic scale), start to reveal their true nonlinear nature as we increase the frequency of the signals
We saw above only a small fraction of all the aspects that an allegedly simple RLC series circuit presents when closely examined by means of the Principle of Superposition, within its Environment and increasing the frequency of the alternating current i. Above, we considered a relatively complex case with three electronic components. May linearity be assured when considering simpler examples? Of all electrical components, none looks simpler than a cable. What was said above is true also for the least suspect of the linear devices, like the common copper cables and their connectors, visible in the figure at the right side.
Also cables, seemingly linear when transferring DC power (not looking closely at what is meanwhile going on at the atomic scale), start to reveal their true nonlinear nature as we increase the frequency of the signals. How true this statement is can be inferred from the graphics below.
A classically unexpected, manifestly nonlinear behaviour of the simplest electric components shows itself when increasing the signals’ frequency. In the example, referred to a cable, signal frequencies ranging over (27 - 34) GHz correspond to wide deviations from what is expected by classical Electrodynamics. Nature has an agenda different from our classic one, itself based on a completely different set of assumptions. In evidence is the fact that the standard deviation (“Std Dev”) of the timing is only 1.74587 ps (picosecond, i.e. 10^{-12} s). We are looking at the Events with improved precision, but still compelled to zoom an additional 29 orders of magnitude to see their natural Planck scale directly
Shown above is the response curve of a cable powered over a range of frequencies of (0 - 34) GHz. On the vertical axis, the relation between transmitted and incident signals, expressed in dB. The cable’s transmitted response (voltage transmitted / voltage incident) was quite finely evaluated, sampling the signals 80 billion times per second.
In the figure, evidenced by a red square, is the oscillating behaviour shown above signal frequencies of 27 GHz. A classically unexpected behaviour, manifestly complex and nonlinear. We leave the Reader to imagine what it signifies, in terms of deviations, to power that same cable at 750 GHz, considering that already at 30 GHz the frequency response is what is visible in the graphics below.
Case Study 5
Electromagnetic radiation emitted by a black body,
at a definite temperature
That the assumptions resumed in the Classic Principle of Superposition, underlying Mechanics and Thermodynamics, could be false was a personal experience of the Nobel laureate Max Planck in 1900, when studying black-body (a perfect absorber) radiation. Namely, when trying to establish the correct answer to the question:
Black-body curves for various temperatures, and their comparison with the classical theory of Rayleigh-Jeans. Radiation expressed in units of power in kilowatts, per steradian, per square meter of surface, per nanometer of wavelength (image credit D. Kule, 2010)
“How does the intensity of the electromagnetic radiation emitted by a black body depend on the frequency of the radiation and the temperature of the body?”.
The classic answer, based on the classic Principle of Superposition, assumes that Energy, space and time are continuous, hence Real numbers ℝ. An expected behaviour synthesized in the figure at the right side by the black curve, asymptotically reaching infinite values of radiation at short wavelengths. Infinite values of radiation never observed.
Real numbers ℝ also meaning that two Events, like a Cause and an Effect, are:
- separated by an infinity of infinitesimal spaces and times;
- such that the Energy exchanged at the Event Cause or at the Event Effect is a real number, like π (3.14159265358979323…..) or e (2.718281828459045…..).
But the experimental data, on the contrary, contradicted what was expected on the basis of the Classic Principle of Superposition. Experimental data, resumed in the figure above by the red, green and blue curves, coherently explained by a law of radiation assuming Energy discreteness, based on the discreteness of the radiation frequencies (and of the associated wavelengths), through a now-famous constant of proportionality.
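The divergence and its resolution can be sketched numerically by comparing the Rayleigh-Jeans and Planck radiation laws (standard formulas, with CODATA constants; the temperature and wavelengths are chosen by us for illustration):

```python
import numpy as np

# Spectral radiance per unit wavelength, SI units (CODATA constants).
h = 6.62607015e-34      # Planck constant [J s]
c = 2.99792458e8        # speed of light [m/s]
kB = 1.380649e-23       # Boltzmann constant [J/K]

def planck(lam, T):
    """Planck law: B = (2 h c^2 / lam^5) / (exp(h c / (lam kB T)) - 1)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

def rayleigh_jeans(lam, T):
    """Classical law: B = 2 c kB T / lam^4 -- diverges as lam -> 0."""
    return 2 * c * kB * T / lam**4

T = 5000.0               # temperature [K]
lam_long = 100e-6        # 100 micrometers: the two laws nearly agree
assert abs(planck(lam_long, T) / rayleigh_jeans(lam_long, T) - 1) < 0.02
# At short wavelengths the classical prediction diverges
# (the "ultraviolet catastrophe"), while Planck's law stays finite:
lam_short = 100e-9       # 100 nanometers
assert rayleigh_jeans(lam_short, T) / planck(lam_short, T) > 1e3
```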
Information Flow underlying measurements
As is well known, in Classical Mechanics the Lagrangian L of a system, i.e. a material body, is the difference between its kinetic and its potential energy.
Its analogue in Quantum Electrodynamics (QED) is the Lagrangian density ℒ, a quantity which can be integrated over all of spacetime to obtain the action S.
The Lagrangian density ℒ of Quantum Electrodynamics is a typical example for interacting systems:
Year 2000: in the absence of any Intelligence coordinating all of the particles of the Universe, the extreme relevance of the Information Flow for all physical phenomena started to be understood
The Italian physicist, mathematician and astronomer Giuseppe Lagrangia (who later changed his name to Joseph Lagrange) is also known for his discovery of what are today named the “Lagrangian Points”. The animation above shows the L1, L2, …, L5 Lagrangian points of the Earth-Moon physical system (image reproduced under CC 3.0)
Left to right, it contains three terms:
- the coupling between an ideal electron field and an ideal electromagnetic field, characterized in terms of the elementary charge e;
- an ideal electron field in isolation, characterized in terms of a mass parameter m;
- an ideal electromagnetic field in isolation.
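In standard notation (a reference form supplied here, with ψ the electron field, A_μ the electromagnetic four-potential, F_{μν} the field tensor and γ^μ the Dirac matrices), the three terms just listed read, left to right:

```latex
\mathcal{L}_{\mathrm{QED}}
  \;=\; -\,e\,\bar{\psi}\,\gamma^{\mu}A_{\mu}\,\psi
  \;+\; \bar{\psi}\left( i\gamma^{\mu}\partial_{\mu} - m \right)\psi
  \;-\; \tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu}
```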
These parameters pertain to a literally bare electron, stripped of all electromagnetic field, which is described separately. Shortly we'll see how deeply all this applies to the operation of industrial Machinery. Readers are invited to observe that the rationale used to evaluate the QED Lagrangian density ℒ passes through a first term giving due relevance to the coupling of two ideal fields: an electron field and an electromagnetic field. In other words, Quantum Electrodynamics recognized and calculated the fundamental action of a charged particle, like the electron, upon itself: since a long time, the recognized origin of a series of practical industrial applications of electromagnetic induction, like the motors. Something presented to high school students under the heavy, and today 150-year-old, Classic dress of reactance, impedance and reactive power.
After this brief account of the revolution started by Paul Dirac in 1938 when founding Quantum Field Theory, we ask the Readers to extend as far as they can, in space and time, the rationale of his epistemological strategy. What is the widest thinkable collection of particles? The Universe. How many couplings between particles? For more than seven decades it has been known that our measurement instruments (RLC circuits included) and Machinery are causally related, a near-synonym of coupled, with ~10^{80} other elementary particles: particles close enough to allow the establishment of a causal relation.
To coordinate the movements and energy levels of so many particles, spread over such an enormous volume, both of the following are vital:
- Intelligence;
- Information Flow.
To be convinced of this last statement, imagine you were given the task of moving over 10^{80} material particles, assuring their positions with accuracies of ~10^{-31} m and timings as precise as 10^{-43} s. Needing extremely high processing power and memory, you would surely ask for the intelligence of an impressive Supercomputer. Also, you would expect to have to transmit and receive an equally impressive amount of Signals: an extremely intense Information Flow.
Two fundamental questions
And now, try to answer the two fundamental questions below, whose answers allow attacking the source of all of the other problems:
1. Where is the Intelligence necessary to coordinate, over such distances and at such infinitesimal levels of precision, all these atoms?
- Never detected: it does not exist. The anthropic idea that natural facts correspond to the manifestation of a superior intelligence was abandoned centuries ago. Nature, then, has to apply a different mechanism to create that sensation of Physical Laws used, as an example, to design everything technological: a different definition of Superposition, one keeping the correct answer, the fitting eigenvalue, always ready, even in the absence of an intelligence to precalculate and transmit it. The next question 2. and its answer, strictly related to this one, concern what is considered the mechanism allowing the coordination.
2. Where is the Information Flow underlying such coordination ?
The “paradox” is only a conflict between reality and your feeling of what reality “ought to be”
Richard P. Feynman
To endorse the validity of the Classic version of the Principle of Superposition means that every time a particle decides what to do, it has to consult with all the others it has ever interacted with during the past 15 billion years: particles with which, to some degree, it had become entangled. These others in turn have to communicate with all the particles that they have ever interacted with, and so on, to coordinate their own new positions, momenta and energies as a consequence of the change that happened in one of the multitude. A sort of regressus ad infinitum: an impressive amount of behind-the-scenes messaging going on.
- Virtual Particles play the role. The photons are called virtual because their creation and annihilation in the interaction do not conserve energy and momentum. One electron creates some virtual photons, which are annihilated by the other. Thus the electron and electromagnetic fields form an integral system. Given enough energy, an electron can easily create photons. Conversely, an energetic photon can excite the electron field and create electron-positron pairs. A problem arises because an electron spontaneously creates photons: there is nothing to prevent the electron from absorbing a photon that it itself created. In fact, the electron cannot escape from interacting with the electromagnetic field it itself generates. The self-energy of this interaction gives rise to infinities. The solution to the problem of infinities is based on the insight that the bare electron, with which the theory starts, is fictitious: the electromagnetic field is not something external, but is generated by the electron. Thus the electron is always accompanied by some electromagnetic field excitation, or surrounded by a cloud of virtual photons. The electron dressed up in electromagnetic field excitations, not the bare electron, is the physically significant entity. Thus the mass parameter m and the charge parameter e, which pertain to bare electrons and appear initially in the equation, make no physical sense. They must be replaced, somewhere down the line, by the physically significant parameters pertaining to dressed-up electrons.
Heating Plutonium by means of Electromagnetic Induction, at the Los Alamos National Laboratory. Depicted is a furnace used to heat and bomb-reduce the Plutonium elements later used to fabricate the pits of nuclear weapons. The Plutonium lies in the centre of the cylindrical crucible and is heated by an electromagnetically induced field, originating in the thick copper coil visible all around it (image courtesy LANL, 2014)
In the following, an abridged list of cases and technological applications whose existence is assured by Virtual Particles, in the framework of Quantum Field Theory:
- Electromagnetic induction. This phenomenon, transferring energy to and from a magnetic coil via an electromagnetic field, is a near-field effect (see the last item of this list). It is the basis for power transfer in transformers, electric generators and motors, and for signal transfer in metal detectors.
- Coulomb force between electric charges. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space this exchange results in the inverse square law for electric force. Since the photon has no mass, the Coulomb potential has an infinite range.
- Magnetic field between magnetic dipoles. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space this exchange results in the inverse cube law for magnetic force. Since the photon has no mass, the magnetic potential has an infinite range.
- Hawking radiation, where the gravitational field is so strong that it causes the spontaneous production of photon pairs and particle pairs.
- Strong nuclear force between quarks, the result of the interaction of virtual gluons. The residual of this force outside quark triplets (neutrons and protons) holds neutrons and protons together in nuclei, and is due to virtual mesons such as the pi meson and the rho meson.
- Weak nuclear force. It is the result of exchange by virtual W and Z bosons.
- Decay of an excited atom, accompanied by the spontaneous emission of a photon. Such a decay is prohibited by ordinary Quantum Mechanics and requires the quantization of the electromagnetic field for its explanation.
- Casimir effect, where the ground state of the quantized electromagnetic field causes attraction between a pair of electrically neutral metal plates.
- Van der Waals force, which is partly due to the Casimir effect between two atoms.
- Lamb shift of positions of atomic levels.
- Vacuum polarization, which involves pair production or the decay of the vacuum, which is the spontaneous production of particle-antiparticle pairs.
- Electromagnetic near-field. Close to the source, the magnetic and electric effects of the changing current in the antenna wire, and the charge effects of the wire's capacitive charge, may be important contributors to the total electromagnetic field; both are dipole effects that decay with increasing distance from the antenna much more quickly than the influence of conventional electromagnetic waves far from the source. Far-field waves, for which the electric field intensity E is (in the limit of long distance) equal to cB, are composed of actual photons. Actual and virtual photons are mixed near an antenna, with the virtual photons responsible only for the extra magnetic-inductive and transient electric-dipole effects, which cause any imbalance between E and cB. As the distance from the antenna grows, the near-field effects (as dipole fields) damp much more rapidly, and only the radiative effects due to actual photons remain important. Naming r the radius measured from the source: the virtual effects extend to infinity, but their field amplitudes drop off as r^{-2}, whereas the field of the electromagnetic waves composed of actual photons drops off as r^{-1}. Correspondingly, the power of the virtual photons decreases as r^{-4}, while the power of the actual photons decreases as r^{-2}
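The different radial scalings quoted above can be sketched numerically; the unit amplitudes and the crossover at r = 1 below are illustrative normalizations, not data for any real antenna:

```python
# Toy comparison of near-field (virtual-photon, dipole-like) and far-field
# (radiative, actual-photon) contributions around a source.
# Amplitudes are normalized to 1 at r = 1; purely illustrative.

def near_field_amplitude(r):
    """Dipole-like near-field term: amplitude drops as r^-2, power as r^-4."""
    return 1.0 / r**2

def far_field_amplitude(r):
    """Radiative far-field term: amplitude drops as r^-1, power as r^-2."""
    return 1.0 / r

for r in (0.1, 1.0, 10.0, 100.0):
    ratio = near_field_amplitude(r) / far_field_amplitude(r)
    print(f"r = {r:6.1f}   near-field / far-field amplitude = {ratio:8.4g}")
```

Close to the source (r << 1, in these normalized units) the near-field term dominates; far from it, only the radiative term composed of actual photons remains important, exactly the behaviour described in the item above.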
Pair production: the creation of an elementary particle and its antiparticle. Here, a 2 MeV photon creates a positron-electron pair; a short time later the pair annihilates, leaving a 2 MeV photon, so that in the long term the total Energy is conserved. Positrons may be conceived as electrons moving toward the Past, or imagined as particles whose own clock, as seen from our point of view, rotates counterclockwise. At first sight something purely theoretical, positrons have for three decades been used in Positron Emission Tomography (PET) scanners, with improved non-invasive diagnostic capability in Medicine. Virtual Particles, introduced by the Nobel laureate Paul Dirac as consequences of his quantum relativistic theory of 1928, are a reality with practical effects
Why the classic version of the Principle of Superposition fails
What precedes demonstrates that the majority of the linear systems encountered are simplifications of a nonlinear reality, under some basic assumptions which, in the case of electronic circuits, can be synthesized as:
- no Environment;
- no high frequency Signals;
- no Virtual Particles;
- full linearity of any subsystems of the linear system, in our case the RLC circuitry;
- no correlations between the subsystems of the linear system, in our case the RLC circuitry.
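Under those assumptions the defining identity f(a x + b y) = a f(x) + b f(y) can be checked numerically. A minimal sketch follows; the resistor value and the diode-like characteristic are illustrative choices, not taken from the circuits discussed:

```python
import math

# Superposition check: f(a x + b y) == a f(x) + b f(y).
# Holds for a linear element (ideal resistor, V = R * I); fails for a
# nonlinear one (diode-like exponential I-V characteristic).

R = 100.0                                     # ohms, illustrative value
linear = lambda i: R * i                      # ideal resistor: V = R * I
nonlinear = lambda v: math.expm1(v / 0.025)   # diode-like I(V), illustrative

def satisfies_superposition(f, x, y, a=2.0, b=-3.0, rel_tol=1e-9):
    lhs = f(a * x + b * y)
    rhs = a * f(x) + b * f(y)
    return math.isclose(lhs, rhs, rel_tol=rel_tol, abs_tol=1e-9)

print(satisfies_superposition(linear, 0.5, 1.5))      # linear element: holds
print(satisfies_superposition(nonlinear, 0.5, 1.5))   # nonlinear element: fails
```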
Six stages in the history of a Virtual Particle and of its antiparticle counterpart traveling backward in time. The Classic Principle of Superposition, still taught today in high schools and colleges, was coined centuries ago, when the effects of the electromagnetic and coulombian forces were considered fully accounted for. Since 1928, the description agreeing with what pops out into existence in the experiments has been Paul Dirac's Virtual Particles
"No impedance, no reactive nor apparent powers due to the Inductor or the Capacitor or the inductance and capacity of a real Resistor, can be explained without Virtual Particles, say without the Quantum Field Theory (QFT)"
The intrinsic nonlinearity in these conditions is not perceptible in the measurements accessible to direct observation, because in them the nonlinear terms are quite negligible in comparison with the linear ones. That is why the Classic version of the Principle of Superposition is found to be confirmed. Surely, the Classic Principle of Superposition presents the advantage of requiring the solution of relatively simple algebraic or linear differential equations, rather than much more difficult nonlinear differential equations. But can this way of proceeding let our knowledge advance toward the solution of intrinsically nonlinear problems? Ignoring the existence of Virtual Particles, when these are what assure the electromagnetic and coulombian forces, is an ill-fated position. In the end, only the dissipative thermal effects of active power (watts) in the resistor R are truly accounted for by the Classic Principle of Superposition.
No impedance Z, no reactive Q, nor apparent powers A (measured in VAR and VA) due to the Inductor or the Capacitor or the inductance and capacity of a real Resistor, can be explained without Virtual Particles, say without the Quantum Field Theory (QFT).
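The quantities named in that statement can be sketched for a series RLC branch; a minimal numerical example, in which the component values, supply voltage and frequency are illustrative assumptions:

```python
import math

def series_rlc_impedance(R, L, C, f):
    """Complex impedance Z = R + j(wL - 1/(wC)) of a series RLC branch."""
    w = 2.0 * math.pi * f
    return complex(R, w * L - 1.0 / (w * C))

def powers(v_rms, Z):
    """Active P (W), reactive Q (var) and apparent S (VA) power from S = V I*."""
    i = v_rms / Z                  # phasor current, voltage taken as reference
    s = v_rms * i.conjugate()      # complex power
    return s.real, s.imag, abs(s)

# Illustrative values: 10 ohm, 50 mH, 100 uF branch on a 230 V / 50 Hz supply
Z = series_rlc_impedance(R=10.0, L=50e-3, C=100e-6, f=50.0)
P, Q, S = powers(230.0, Z)
print(f"Z = {Z:.2f} ohm   P = {P:.1f} W   Q = {Q:.1f} var   S = {S:.1f} VA")
```

With these values the branch is capacitive at 50 Hz (negative Q); only the active power P ends as heat in R, while Q and S exist because of the fields of the inductor and the capacitor, the fields the text traces back to virtual photons.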
The denomination “Virtual Particles”, comprehensible when it was coined several decades ago, appears today a bit inappropriate. A “virtual particle” is not a particle at all: it refers to a disturbance in a field, and a field is not really a particle. A particle is a regular ripple in a field, one that can travel through space. On the contrary, a virtual particle is generally a disturbance in a field that will never be found on its own: it is caused by the presence of other particles, and often of other fields.
After this brief account it is, however, necessary to make clear that, in terms of mere existence, “virtual particles” are no less real than all other particles. For decades, technological progress has been allowing us to observe them in experiments made over the widest range of ambient temperatures, between a little more than absolute zero and billions of kelvins. They were the new physics …seventy years ago!
Links to the pages:
Introduction. In the field of Industrial Automation and in that of the Electronic Inspectors (Bottling Controls), however named (i.e., “Bottle Presence”) and operatively disguised, Triggers play the role of the most…
Objects and measurements' hidden nature. A few definite steps allow to fully recognise the nature of the Events, and the action and function of the devices part of the industrial machinery and of the equipment used to count and relate them (e.g., Triggers, …
Trigger signals: sharp and unsharp. Triggered is said of Events with the most strict kind of correlation which may be imagined: the causative. Their effects are other Events. After the introduction given here…
Classic version. All the Engineers engaged on a daily basis with Maintenance operations on the Machinery of Food and Beverage production Lines know they continuously apply an idea named Principle of Superposition. …
The far-reaching consequences of a Ph.D. dissertation. When treating the revolutionary ideas of Relativity, we saw the Relativity Principle implying infinite 3-D spaces associated to each instant of time, w…
In synthesis. The guiding principles for today's understanding of the fine scales of all phenomena are: the linear superposition principle and the probabilistic interpretation of Quantum Mechanics. These two principles suggest that the …
We introduced in other pages the Modern version of the Principle of Superposition and the related Theory of Measurement, conceived in 1957. QUESTION: does the new version of the Principle of Superposition apply to Packaging Machinery Automations or to Binary Classifiers (Electronic Inspectors or, Bottling Controls)?