Introduction




Magnifying the orbit where, 110 years ago, the electron of the hydrogen atom was assumed to be revolving around a proton discloses an unexpected scenario: no orbit at all (image abridged from C. M. DeWitt, J. A. Wheeler, 1967)

We start this section by proposing that you imagine what you would see if you could magnify details as small as ~10^-35 m. As an example, take the orbits in which Rutherford, about 110 years ago, assumed the electron of the hydrogen atom to be revolving around a nucleus. The effects of the permanent, randomly oriented oscillation of the electron have been known since then. They are hinted at in the figure below, on the right side: no orbit at all.


The trajectory of a relativistic particle, traveling in one dimension, is a zigzag of straight segments whose slope is constant in magnitude; only the sign changes from zig to zag. In each corner the acceleration is infinite, implying that the concept of 'trajectory' of Classical Mechanics, however familiar it may be, does not apply (abridged from R. P. Feynman, A. R. Hibbs, 1965)



This later observation is one of the main reasons why Quantum Mechanics had to be created to replace Newtonian Mechanics. It was not an academic exercise: around the end of the nineteenth century, the progress of theories and their comparison with experimental and technological facts also implied a progressive divergence about what was considered to cause what.

The motion of a particle, even in the basic one-dimensional case above on the left side, is an example of this departure from classical concepts. The electron is affected by fluctuations of the electric field in vacuum (also named “ground state” or “zero-point” fluctuations), which create displacements with respect to the theoretical curve a true orbit would imply. These displacements are nil on average, but huge when considering their root mean square.

The electric field truly felt by the electron is therefore not the static one supposed around the (positive) proton. Zero-point field random oscillations permeate all of space: they are not a feature of matter, but of space itself. They are observed even after freezing an environment to a few thousandths of a kelvin above absolute zero, implying that their origin is non-thermal. Detecting them, and measuring what the animation above tries to show, does not require complex laboratory equipment. The following sections give two practical examples.




Zero-point field random oscillations permeate all space. They are observed even after freezing an environment to a few thousandths of a kelvin above absolute zero, implying that their origin is non-thermal. Cutting to 1 mm the metallic sensitive part of the probe of a common oscilloscope allows one to measure what the animation above tries to show (image credit W. Brown, 2012)

The vacuum is not empty at all. Its energy density peaks at values around eighty orders of magnitude beyond ordinary matter densities (tons per cubic centimetre)


Example 1  

Electromagnetic field fluctuations


All oscilloscopes are capable of detecting signals at frequencies of 1 MHz. They allow a direct sight of the discontinuity existing between the world of illusions derived from our sensations and reality (image credit Agilent Technologies®)






Imagine a common oscilloscope like the one above: 

  1. Expose the 10 mm long metallic tip of its probe (an example in the figure below) to the electromagnetic field of a distant source whose frequency is 1 MHz. Imagine a magnetic field B, measured where the oscilloscope’s probe emulates an antenna, amounting to 10^-8 gauss.  
  2. For a spatial domain of this size, oscillating at that frequency, the fluctuation of the em field amounts to only ~10^-17 gauss. That is, roughly a billion times smaller, thus definitely impossible to detect, lost in a sea of noise.

Repeat the test after having cut to < 1 mm the metallic exposed part of the probe: 

  1. The quantum fluctuations now dominate by an order of magnitude over the classically expected em field, amounting to ~10^-7 gauss. By cutting the probe’s length by a factor of ten, we reduced the amount of mass-energy and of space at the same time.
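These orders of magnitude can be cross-checked with a few lines of Python. The sketch below assumes the standard order-of-magnitude estimate ΔB ~ √(ħc)/L² (Gaussian units), where L is the scale over which the field is averaged; the function name and the exact prefactors are illustrative, not a measurement procedure.

```python
import math

HBAR = 1.055e-27   # reduced Planck constant, erg*s
C    = 3.0e10      # speed of light, cm/s

def vacuum_fluctuation_gauss(L_cm):
    """Zero-point field fluctuation (gauss) averaged over a region
    of size L (cm), via the estimate dB ~ sqrt(hbar*c) / L^2."""
    return math.sqrt(HBAR * C) / L_cm ** 2

# 10 mm probe at 1 MHz: the averaging scale is set by the reduced
# wavelength of the signal, far larger than the probe itself.
L_wave = C / (2 * math.pi * 1.0e6)         # ~5e3 cm
print(vacuum_fluctuation_gauss(L_wave))    # ~2e-16 gauss: lost in noise

# < 1 mm probe: the tip itself now sets the averaging scale.
print(vacuum_fluctuation_gauss(0.1))       # ~6e-7 gauss: dominant
```

With these inputs the estimate reproduces the two regimes described above: a fluctuation far below the classical 10^-8 gauss signal for the long probe, and one dominating it once the tip is cut below 1 mm.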


Reducing the length of the metal tip of this probe to 1 mm, you will start to witness and measure the electromagnetic counterparts of the natural quantum fluctuations of that limited space, appearing like an electrical noise originating from an em-induced signal whose amplitude is approximately one tenth of a millionth of a gauss. The space where Electronic Inspectors' measurements are carried out is a multiply connected foliated structure. Its energy density fluctuations reach the impressive value of ~10^83 ton/m³ (abridged from image credit Pico Technology®)


Carl Friedrich Gauss, astronomer, physicist and mathematician, after whom the magnetic field unit is named






Conclusions:

The spatial domain, even when emptied of mass-energy (e.g., the matter of the probe's metallic tip), has its own electromagnetic field, which now turns out to be ten times bigger than the one induced by a power transmitter. Space's true nature starts to show itself.





Example 2  

Geometric fluctuations




Until now we have frequently named concepts like: 

  • curvature of the geometry, 
  • curvature of the space, 
  • 3-dimensional geometry,
  • 4-dimensional spacetime.

The yield of these terms is frequently underestimated, but after 1915 they refer to quantitatively defined amounts: amounts everyone feels directly, like gravity, and can easily measure with small field devices named gravimeters. Gravimeters are instruments commonly used to measure variations in the Earth's gravitational field. The latter is a synonym for the tidal effects due to the curvature and other peculiarities of the geometry of space at the surface of the Earth. The field is not static, but varies continuously with time because of the movement of masses.


This device is the world's most sensitive and precise commercial gravimeter. Its accuracy amounts to an acceleration of 0.000 001 cm/s². That accuracy is however not enough to detect the fluctuations of acceleration originating from the changes of topology happening on the smallest scales. As a consequence, these do not influence at all the results of the measurements of the Electronic Inspectors, electromagnetic or mediated by electromagnetic forces. The reason, a cosmological one, started to be understood only recently, in 1996 (abridged from image credit Microg Lacoste®)


The masses being primary sources of gravity variation are:

  • tides,
  • hydrology,
  • land uplift/subsidence,
  • ocean tide loading,
  • atmospheric loading,
  • changes in the Earth’s polar ice caps,
  • the Sun,
  • the Moon,
  • the planets.


A number of different mechanical and optical schemes exist to measure this deflection, which in general is very small. One of these consists of a weight suspended from a spring; variations in gravity cause variations in the extension of the spring. Variations in the Earth's gravitational field as small as one part in 10 million can already be detected. There also exists a version capable of absolute measurements of gravity. In these models the measurement is directly tied to international standards, and this is what makes them absolute gravimeters. The measured accelerations can be converted into absolute measurements of the curvature of the local geometry by means of relativistic formulae. A professional model of these appears on the right side. Its performance specifications may be summed up in the impressively small value of its accuracy: 2 µGal. The Gal is a non-SI unit of acceleration used extensively in the science of gravimetry, defined as 1 centimetre per second squared (1 cm/s²).

For the same 1 cm domain (the initial length of the oscilloscope probe) we used in the preceding example, the fluctuations of the curvature are negligible (~10^-33 cm^-2): one hundred thousand times smaller than the curvature of space at the surface of the Earth (~10^-28 cm^-2) that an instrument like the gravimeter on the right side easily measures. This time, even reducing the spatial extent of the domain ten times, or a billion billion times, the fluctuations remain undetectable by the gravimeter.
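A rough numerical cross-check of these two curvatures, assuming the standard estimates ΔR ~ L_Planck/L³ for the quantum fluctuation over a domain of size L, and ~GM/(c²R³) for the tidal curvature at the Earth's surface (constant and function names are illustrative):

```python
L_PLANCK = 1.6e-33   # Planck length, cm
G        = 6.67e-8   # gravitational constant, cm^3 g^-1 s^-2
C        = 3.0e10    # speed of light, cm/s
M_EARTH  = 5.97e27   # mass of the Earth, g
R_EARTH  = 6.37e8    # radius of the Earth, cm

def curvature_fluctuation(L_cm):
    """Quantum curvature fluctuation (cm^-2) over a domain of size L (cm)."""
    return L_PLANCK / L_cm ** 3

# Tidal curvature of spacetime at the Earth's surface, ~GM/(c^2 R^3)
earth_curvature = G * M_EARTH / (C ** 2 * R_EARTH ** 3)

print(curvature_fluctuation(1.0))   # ~1.6e-33 cm^-2 for the 1 cm domain
print(earth_curvature)              # ~1.7e-27 cm^-2
```

The estimate lands within an order of magnitude of the ~10^-28 cm^-2 quoted above; the gap of roughly five to six orders of magnitude between the two numbers is what keeps the fluctuations invisible to the gravimeter.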

We are witnessing a real divide between:

  • electromagnetic forces, the object or medium of all of the Electronic Inspectors’ measurements;
  • gravitational forces, nonetheless present where our em measurements are carried out.


A divide whose origin started to be understood only recently, in 1996, and which is the object of deeper analysis in the last sections of these pages.











The large-scale aspect of the Earth's gravitational equipotential surfaces, forming the so-called geoid, from 2003 to 2011, derived from satellite gravimetric measurements. An outline of the changing shape of the geoid is given by the surface of the oceans, arising from tidal forces. Potentials are exaggerated to make them observable. The gravitational fluctuations we named in the text hint at spatial scales much smaller than these (Barthelmes F. et al., 2015)
































Final conclusions

The complexity of these effects hints at the fact that space, at small scales, is not as regular, simple or smooth as it was imagined, and still is supposed to be. In the real world of Quantum Physics one cannot give both a dynamic variable and its time rate of change, because this is forbidden by the Uncertainty Principle. This implies that there is no meaning for what each of us intuitively considers sensible: the history of the material and radiative content of space, evolving in time. Quoting Misner, Thorne and Wheeler's perspective, dated 1973: 

“The concepts of space and time are not primary but secondary ideas in the structure of physical theory. These concepts are valid in the classical approximation (…) There is no space-time, there is no time, there is no before, there is no after. The question of what happens next is without meaning”.



The deep Meaning of Sum over Histories


Richard P. Feynman understood in the last decades of his life that the multitudes of particles' paths he studied in 1948 were not alternatives, but multiple coexisting paths 




After 1948, the idea started to be adopted that an Event refers to the result just after a fundamental interaction took place between subatomic particles, occurring in a very short time span and in a well-localized region of space. But, because of the Uncertainty Principle, its signification is no longer uniquely defined; it is probabilistic. This point of view, named 'Path Integrals', is mainly due to the Nobel laureate Richard P. Feynman (see figure on the right side). He solved in modern times the paradox evident in an ancient experiment of Optics, named after Thomas Young. He proposed (see figure below) that all of the possible alternative paths allowed to particles, from a source to a detector, contribute to defining the probability amplitude of the Event we name 'detection'. As a matter of fact, if we still consider the dictionary definition of Trigger meaningful, the detector itself is a Trigger. In this new scenario, the probability amplitude for an Event is the sum of the amplitudes for the various alternative ways in which the Event can occur. In the figure below, there are several alternative paths which an electron may take to go from the source to either hole in the screen C.

The alternative paths then imagined were conceived as time-ordered (left to right) sequences [1]:

  • Source > Event 0 > Event 1 > Event 5 > Event 8 > Event 10 > Detection
  • Source > Event 0 > Event 2 > Event 6 > Event 9 > Event 11 > Detection               [1]
  • Source > Event 0 > Event 3 > Event 4 > Event 7 > Event 10 > Detection                      
  • ………

A simple glance at the system in the figure below allows the perception of the combinatorial character of this approach: different paths are different combinations of Events. Quoting Feynman's words conveys the reality of the electron passing through a barrier of potential:    

“The electron does anything it likes. It just goes in any direction at any speed, forward or backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function”. 

  “When several holes are drilled in the screens E and D placed between the source at screen A and the final position at screen B, several alternative routes are available for each electron.  For each of these routes there is an amplitude.  The result of any experiment in which all of the holes are open requires  the addition of all of these amplitudes, one for each possible path” (original quote of the diagram above in 'Quantum Mechanics and Path Integrals’, Richard P. Feynman)
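The rule "one amplitude per open route, then add and square" can be sketched in a few lines of Python. This is a toy illustration of the sum over amplitudes, not Feynman's full path integral: each open route is assumed to contribute a unit complex amplitude e^{iφ}, and the (unnormalized) detection probability is the squared modulus of their sum.

```python
import cmath

def detection_probability(path_phases):
    """|sum over open routes of e^{i*phase}|^2, unnormalized."""
    total = sum(cmath.exp(1j * phase) for phase in path_phases)
    return abs(total) ** 2

# Two routes in phase: constructive interference, four times one route
print(detection_probability([0.0, 0.0]))       # 4.0
# Two routes half a cycle apart: destructive, no detection at all
print(detection_probability([0.0, cmath.pi]))  # ~0.0
```

Opening a further hole adds one more term to the sum; the combinatorics of the Event sequences [1] enters only through the phase each route accumulates.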





"If you're doing an experiment, you should report everything that you think might make it invalid -not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked-  to make sure the other fellow can tell they have been eliminated

 Feynman about scientist's integrity (at Caltech, 1974)





The constructive and destructive superpositions of a multitude of terms [2] induce the familiar wave behaviour. This video, originating from macroscopic fluid parallel wave fronts advancing toward a couple of thin slits, illustrates with precision the behaviour of matter and radiation also at subatomic scales. It is a case of interference of the waves deriving from the perturbations induced by two sources. Today it is clear that only waves exist, and no "particles". The latter remain in common use as an approximation, valuable because it allows calculations to be simplified in particular cases

The key point is that the transition to Quantum Physics implied assigning a probability amplitude to each possible geometry, spread over all of the higher-dimensional space: an amplitude that is highest close to the classically forecast leaf of history, and falls off steeply outside a zone of finite thickness extending briefly on either side of the leaf.

Above, Feynman wrote about real electrons, not theoretical electrons. He described what the most common thinkable electrons do, say the electrons in the mains power socket, wherever and whenever in the world, Bottling Controls included. Dynamics starts to appear when and where sufficiently many such spread-out probability functions Ψ1, Ψ2, …, Ψi, …, ΨN are superposed, building up a localized wave packet, thus:

                   

[2]             Ψ  =  c1 Ψ1  +  c2 Ψ2  +  …  +  ci Ψi  +  …  +  cN ΨN

      

Constructive interference occurs where the phases of the several individual waves agree as they superpose, a behaviour visible in the video below. This video, filmed on a macroscopic fluid, displays accurately the general behaviour of matter, everywhere and whenever, on all scales of dimension above the Planck length, 10^-35 metres.
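The superposition [2] can be sketched numerically. The snippet below, a sketch with illustrative weights and wave numbers, superposes plane waves Ψi = e^{i·ki·x} with Gaussian coefficients ci, and shows the packet peaking where the phases agree and cancelling elsewhere.

```python
import numpy as np

x  = np.linspace(-20.0, 20.0, 2001)       # spatial grid
ks = np.linspace(0.0, 2.0, 401)           # wave numbers k_i
cs = np.exp(-((ks - 1.0) ** 2) / 0.18)    # coefficients c_i, peaked at k=1

# Psi = sum_i c_i * Psi_i, with Psi_i = exp(i * k_i * x)   (equation [2])
psi  = sum(c * np.exp(1j * k * x) for c, k in zip(cs, ks))
prob = np.abs(psi) ** 2

print(prob.argmax() == len(x) // 2)   # True: the packet peaks at x = 0,
                                      # where all phases agree
print(prob[0] < 1e-4 * prob.max())    # True: negligible far from x = 0,
                                      # where the phases cancel
```

Narrowing the spread of coefficients cs widens the packet, and vice versa, the numerical face of the Uncertainty Principle mentioned above.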




Refer to the figure on the right side, representing a multiplet with six optic elements in a smartphone. Feynman's point of view about the sum over histories finds here an immediate application when trying to imagine the total of all of the superpositions involved in the formation of what we name an image. Their number is mind-boggling even when trying to simplify the evaluation by imagining each lens to be of infinitesimal thickness, so as to abstract away the diffraction and refraction effects between consecutive atoms in a lens. Their total grows further once we consider that the majority of atoms have plenty of substructures (e.g., quarks) which can individually interfere. We are trying to make clear that when Nature is closely and thoroughly observed, it shows those numbers with hundreds of zeros which shall appear later in these pages.

A 6-lens optic multiplet in a common smartphone illustrates the complexity of superposition 




                                                                                                            Copyright Graphene Limited 2013-2019