Introduction

 Signature of John Von Neumann, who created much of the Quantum logic and terminology  




The methods and formalism of Quantum Mechanics, different from those of Relativity, establish that subsystems evolve separately, along a tree-like structure occupying a multitude of spaces.   This multitude of spaces is reminiscent of the (infinite) multitude of spaces that Relativity theory has attributed, since 1907, to each single instant of time.   The information about the entire set of historical, time-ordered ramifications is known only to the superposition of all of the subsystems and never fully known to the subsystems themselves.   This causes what, from the subsystems' point of view, are limits to knowledge, synthesized in Heisenberg’s Uncertainty Principle.   The Multiverse is an idea implicitly embedded, since the start, in the same wave function Ψ: the same function describing how, as an example, semiconductor-based transistors operate when grouped into the Integrated Circuits of the Bottling Controls (Electronic Inspectors) or of the PLCs’ processors.


John von Neumann: Logic and Space of Information

  Projection of a quantum state-vector | ψ⟩ into a vector subspace S by a projector P(S).  Shown here is the projection of | ψ⟩ onto a ray corresponding to | ψm⟩, with which it makes an angle θ.  The probability for this transition to occur is cos²θ, thus illustrating von Neumann’s concept of probabilities evaluated by the measurement of angles.  A von Neumann measurement corresponds to the set of all such possible projections onto a complete orthogonal set of rays of the Hilbert space being measured (abridged by Jaeger/2009)







Basic Quantum Terminology

Coherent, typically applied to a system, is in modern Physics a synonym of superposed.  In this sense, superposed (or, coherent) are the orthonormal eigenvectors constituting a basis for the wave function Ψ representing an object, however complex or massive it may be.   Then, decoherent means reduced by a measurement to one definite value (the eigenvalue of an eigenstate), the only one we perceive among the multitude of alternatives.

Eigen means “characteristic” or “intrinsic”.

How was it possible for the classic idea of superposition to generate the non-classic kind of logic of observables, input currents, output currents, valence bands and gaps, underlying all designs based on semiconductor junctions?    We’ll answer by quoting at length from an unpublished paper by the creator of Quantum Logic, the Hungarian physicist and mathematician John von Neumann, delivered as an address on September 2–9, 1954, in which he expressed the spirit of his creation.


The oscillation characteristic values (or, eigenvalues) correspond to modes of vibration, each one oscillating at its own frequency, directly visible in this image of a drumhead.  As an example of direct interest for Food and Beverage Packaging automated Quality Control, they are what is measured at acoustic and ultrasound frequencies by the acoustic transducer that is part of the Inner Pressure Inspection by Ultrasounds (  J. Kreso, 1998-2010)

John von Neumann (right side) and Robert Julius Oppenheimer (left side) in a photo shot in front of the many double triodes acting as twin 2-state switches in one of the first electronic calculators

Eigenstate is the measured state of some object possessing quantifiable properties like position, momentum, energy, etc.  The state being measured and described has to be observable (e.g., like the common electromagnetic measurements of Bottling Controls), and has to have a definite value, called an eigenvalue.

Eigenfunction of a linear operator, defined on some function space, is any non-zero function in that space that returns from the operator exactly as is, except for a multiplicative scaling factor. 

“If you take a classical mechanism of logics, and if you exclude all those traits of logics which are difficult and where all the deep questions of the foundations come in, so if you limit yourself to logics referred to a finite set, it is perfectly clear that logics in that range is equivalent to the theory of all sub-sets of that finite set, and that probability means that you have attributed weights to single points, that you can attribute a probability to each event, which means essentially that the logical treatment corresponds to set theory in that domain and that a probabilistic treatment corresponds to introducing measure.  I am, of course, taking both things now in the completely trivialized finite case.

But it is quite possible to extend this to the usual infinite sets.   And one also has this parallelism that logics corresponds to set theory and probability theory corresponds to measure theory and that given a system of logics, so given a system of sets, if all is right, you can introduce measures, you can introduce probability and you can always do it in very many different ways.

In the quantum mechanical machinery the situation is quite different.  Namely instead of the sets use the linear sub-sets of a suitable space, say of a Hilbert space.  The set theoretical situation of logics is replaced by the machinery of projective geometry, which in itself is quite simple.

However, all quantum mechanical probabilities are defined by inner products of vectors. Essentially if a state of a system is given by one vector, the transition probability in another state is the inner product of the two which is the square of the cosine of the angle between them. In other words, probability corresponds precisely to introducing the angles geometrically.   Furthermore, there is only one way to introduce it.   The more so because in the quantum mechanical machinery the negation of a statement, so the negation of a statement which is represented by a linear set of vectors, corresponds to the orthogonal complement of this linear space.

And therefore, as soon as you have introduced into the projective geometry the ordinary machinery of logics, you must have introduced the concept of orthogonality.  This actually is rigorously true and any axiomatic elaboration of the subject bears it out. So in order to have logics you need in this set of projective geometry with a concept of orthogonality in it.

In order to have probability all you need is a concept of all angles, I mean angles other than 90º.   Now it is perfectly quite true that in a geometry, as soon as you can define the right angle, you can define all angles.  Another way to put it is that if you take the case of an orthogonal space, those mappings of this space on itself, which leave orthogonality intact, leave all angles intact, in other words, in those systems which can be used as models of the logical background for quantum theory, it is true that as soon as all the ordinary concepts of logics are fixed under some isomorphic transformation, all of probability theory is already fixed.

What I now say is not more profound than saying that the concept of a priori probability in quantum mechanics is uniquely given from the start.  You can derive it by counting states and all the ambiguities which are attached to it in classical theories have disappeared.   This means, however, that one has a formal mechanism, in which logics and probability theory arise simultaneously and are derived simultaneously.  I think that it is quite important and will probably [shed] a great deal of new light on logics and probably alter the whole formal structure of logics considerably, if one succeeds in deriving this system from first principles, in other words from a suitable set of axioms.  All the existing axiomatisations of this system are unsatisfactory in this sense, that they bring in quite arbitrarily algebraical laws which are not clearly related to anything that one believes to be true or that one has observed in quantum theory to be true.   So, while one has very satisfactorily formalistic foundations of projective geometry of some infinite generalizations of it, of generalizations of it including orthogonality, including angles, none of them are derived from intuitively plausible first principles in the manner in which axiomatisations in other areas are.”


A fruitful spirit indeed: all the Computers and Smartphones used to read this text, and not only all of the Industrial Machinery and Equipment, exist and perform as well as we know they do because the old classic Logic was abandoned and John von Neumann's Logic embraced.



 John von Neumann fathered the idea of what is today named a RAM Memory



Eigenvectors are a special set of vectors associated with a linear system of equations that are sometimes also known as characteristic vectors, proper vectors, or latent vectors.

The determination of the eigenvectors and eigenvalues of a system is important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few.  Each eigenvector is paired with a corresponding so-called eigenvalue. 
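As a minimal numerical sketch (the symmetric 2×2 matrix below is arbitrary example data, not taken from any specific system), the pairing between eigenvectors and eigenvalues can be verified directly:

    # Sketch: eigenvectors and eigenvalues of a small linear system, of the kind used
    # in stability analysis and small-oscillation problems.  The matrix is example data.
    import numpy as np

    K = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])                 # arbitrary symmetric example matrix

    eigenvalues, eigenvectors = np.linalg.eigh(K)
    for lam, v in zip(eigenvalues, eigenvectors.T):
        # K v = λ v : each eigenvector returns from the operator scaled by its paired eigenvalue
        print(lam, np.allclose(K @ v, lam * v))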


The words written before could, at least at first sight, look theoretical and unrelated to the layman's reality of the facts.   But they are not.

An example of their explanatory power is hinted at by the figure above, depicting an extremely common RAM memory.  They are everywhere: from the gigantic assemblies of thousands of computers and servers in the Data Centres, down to the smallest and cheapest portable telephones or computers, like the quad core-equipped Raspberry® Pi™ (figure at right side).  We are so accustomed to using RAM memories that it is frequently overlooked who fathered their very basic concept, in an epoch when data were handled electromechanically by means of relays.  John von Neumann, among other things, fathered the idea of what is today named a RAM memory, and lived long enough to see an application of his idea in the US Army computers devoted to the modelling of the shockwaves arising from nuclear detonations.


In synthesis

The guiding principles for today's understanding of the fine scales of all phenomena are:

  1. linear superposition principle
  2. probabilistic interpretation of Quantum Mechanics. 

These two principles suggest that the state space is a linear space, thus accounting for the superposition principle, endowed with a scalar product used to calculate probability amplitudes.   Such a linear space endowed with a scalar product is named a Hilbert Space and abbreviated H.   Moreover, the Hilbert Space H is a linear space over the field ℂ of the Complex numbers.    This means that if a, b ∈ H, then:

  1.   a + b  ∈  H
  2.   c * a   ∈  H

where c is any complex number in ℂ.
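A minimal sketch in Python/NumPy (the vectors and the coefficient c below are arbitrary choices made only for this illustration) of the two closure properties, and of the scalar product used to calculate a probability amplitude:

    # Sketch: a finite-dimensional complex linear space with a scalar product.
    # Vectors a, b and the complex coefficient c are invented example data.
    import numpy as np

    a = np.array([1 + 2j, 0.5 - 1j])          # a ∈ H
    b = np.array([-1j, 2 + 0j])               # b ∈ H
    c = 0.3 - 0.7j                            # c ∈ ℂ

    s = a + b                                 # a + b ∈ H   (closure under addition)
    t = c * a                                 # c * a ∈ H   (closure under complex scaling)

    # Scalar product ⟨a|b⟩, the probability amplitude between the two states:
    amplitude = np.vdot(a, b)                 # vdot conjugates its first argument

    # For normalized states, |⟨a|b⟩|² is the transition probability (cos²θ between the rays):
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    print(s, t, amplitude, abs(np.vdot(a_n, b_n)) ** 2)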



Observable and Self-Adjoint Operators

An observable associated with a quantum system is: 

  1. exclusively represented by a unique self-adjoint operator A on its Hilbert space;
  2.   a dynamical variable.   Familiar examples of observables are energy, position, and momentum.   They should not be confused with their classical namesakes: the quantum momentum has a richer structure than the classical momentum.   There also exist observables, for example spin, without any classical counterpart.   An operator A is a linear transformation of the Hilbert space into itself;
  3. characterised by a spectrum of its representing operator comprising the set of all possible values obtainable in measurements of A.   A spectrum can be discrete, continuous, or a combination of both. 

Associated with many operators is the operator called its adjoint and denoted A†, which will be such that for all functions f and g in the Hilbert space,

                                                ⟨ f | A g ⟩  =  ⟨ A† f | g ⟩ 

A† is an operator that, when applied to the left member of any scalar product, produces the same result as is obtained if A is applied to the right member of the same scalar product.    The equation above is the defining equation for A†.   Depending on the specific operator A, and on the definitions in use of the Hilbert space and of the scalar product, A† may or may not be equal to A.    If A† = A, then A is referred to as self-adjoint or, Hermitian.   Self-adjoint operators distinguish themselves by having spectra that consist only of real numbers.   The observable A is normally supposed to have a pure spectrum, where the real numbers a are called the eigenvalues of A.   The eigenvalues can be the direct results of measurements or experiments.
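A short numerical sketch (the Hermitian matrix and the vectors f, g below are arbitrary example data) of the defining equation of the adjoint and of the real spectrum of a self-adjoint operator:

    # Sketch: the adjoint A† as conjugate transpose, the identity ⟨f|Ag⟩ = ⟨A†f|g⟩,
    # and the purely real eigenvalues of a self-adjoint (Hermitian) operator.
    import numpy as np

    A = np.array([[2.0, 1 - 1j],
                  [1 + 1j, 3.0]])             # example matrix, equal to its conjugate transpose
    A_dag = A.conj().T                         # the adjoint A†

    f = np.array([1 + 0.5j, -2j])
    g = np.array([0.3 + 0j, 1 + 1j])

    print(np.isclose(np.vdot(f, A @ g), np.vdot(A_dag @ f, g)))   # True: ⟨f|Ag⟩ = ⟨A†f|g⟩
    print(np.allclose(A, A_dag))                                  # True: A is self-adjoint
    print(np.linalg.eigvalsh(A))                                  # its eigenvalues are real numbers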

    Photodetectors.  Present in different guises in all common industrial photoelectric switches. Designed and operating on the basis of physical laws following John von Neumann's Logic of Quantum Mechanics 


Information about Observables in the Hilbert space  

Complex numbers do matter

Do you remember the handling given to √-1 when solving equations like:                                                                                                       

               a x² + b x + c  =  0


into the set ℝ of real numbers ?

Solutions are given by the relation:

x1,2  =  ( -b  ±  √( b² – 4ac ) ) / 2a


may imply square roots of negative numbers.  

An example in the case:

x²  +  1  =  0

                                                    

whose roots are:                                                                                      

                      x1,2  =  { -√-1 ; +√-1 }    


Square roots of minus one are considered roots without meaning in the set ℝ.  As a matter of fact, these same roots have, on the contrary, full meaning (also physical, e.g. when dealing with the concepts of reactive power in Electrotechnics and resonance in Electronics) when the solution of the equations is extended to the set ℂ of the complex numbers, of which the real numbers ℝ are just an infinitesimal subset.
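A tiny sketch (coefficients chosen only to reproduce the example above) showing how the same formula, evaluated over ℂ, returns the two imaginary roots without any special handling:

    # Sketch: solving a x² + b x + c = 0 over the complex numbers.
    # With a = 1, b = 0, c = 1 (i.e. x² + 1 = 0) the roots are ±√-1, meaningless in ℝ but valid in ℂ.
    import cmath

    def quadratic_roots(a, b, c):
        d = cmath.sqrt(b * b - 4 * a * c)      # complex square root of the discriminant
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    print(quadratic_roots(1, 0, 1))            # (1j, -1j), i.e. ±√-1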

The superposition principle requires a continuum.   As an example, the superposition of classical configurations such as the positions of N particles originates the idea of wave functions.   Wave functions evolve continuously in Time according to an equation named the Schrödinger equation, and keep evolving continuously until a measurement disturbs the system.   The Writer of these notes remembers having personally been taught Quantum Mechanics starting from the study of single-particle problems, then following the early Schrödinger equation.   This frequent recall of studies whose value is today almost only historical gives the impression that the wave function Ψ was a kind of spatial field.   And that’s why many Telecommunications or Chemical engineers or scientists still erroneously believe that a wave function exists for each electron in an atom.

On the contrary, according to Schrödinger’s quantization procedure or the superposition principle, wave functions are defined on Configuration Space.  And it is this that leads to the unavoidable entanglement of all systems existing in the Universe.   These discoveries of one century ago have to be seen from today's point of view, one based on fields rather than particles.   The effect of the entanglement of all systems existing in the Universe can be immediately perceived in the superposition of classical configurations.   If we superpose the amplitudes of certain fields (e.g., the electromagnetic field) rather than particles' positions, the wave function Ψ becomes an entangled wave functional for all of them.     And it is because of what was explained before that the modern Principle of Superposition is the basic reason why the methods and formalism of Quantum Mechanics, different from those of Relativity, establish that subsystems evolve separately, along a tree-like structure occupying a multitude of spaces.   A multitude of spaces reminiscent of the (infinite) multitude of spaces that Relativity Theory has attributed, since 1907, to each single instant of Time.

The information about the entire set of historical, time-ordered ramifications is known only to the superposition of all of the subsystems (named in different ways, e.g. Multiverse), and never fully known to the subsystems.   This causes what, from the subsystems' point of view, are limits to knowledge, synthesized in the Uncertainty Principle.  The Multiverse is an idea implicitly embedded, since the start, in the same wave function Ψ: the same function describing how, as an example, semiconductor-based transistors operate grouped into the Integrated Circuits of the Industrial Machinery Programmable Logic Controllers (PLCs) devoted to automation, or of the Bottling Controls (Electronic Inspectors).



Superposition of States


  Imagine a dot and let infinitely many radii spherically emanate from that dot: that is the dimension of a dot in the Hilbert space, where each radius identifies one of the properties




In 1935 Schroedinger conceived a case where a complex (biological) system lies in an enclosure ensuring total disconnection from the external world, the Environment.   He was the first to understand that an object lying in a truly closed enclosure, without any exchange (no matter, radiation nor information flow) with the external ambient, would be in a superposition of states, rather than in the single one we perceive directly or by means of measurement instruments.    In this case, the insulated system would have a multiple existence: different statuses of different instances of the system in the insulated enclosure.  The object would be:

  • out of the Environment;
  • into an infinite-dimensional space, where a multitude of instances of the system exists.

How this may be possible also depends on the fact that Ψ is defined in the Hilbert space, a vector space which is itself a generalization of the familiar Euclidean space ℝ³.   Hilbert space extends the methods of vector algebra and calculus from the three-dimensional space to spaces with any finite or infinite number of dimensions.   Here, we are using the term dimension in the usual way, as the quantities fully defining the position of a point in the space.   Imagine a dot and let infinitely many radii spherically emanate from that dot: that is the dimension of a dot in the Hilbert space, where each radius identifies one of the properties (e.g., position, time, momentum, polarization, curvature, action, energy, temperature, information, etc.) of that dot.



Dimensionality of a System in the Hilbert Space



As an example of the differences existing between the classic Euclidean and the modern Hilbert space, consider two classical independent systems for which we know:

  • the Euclidean position, in the 3-dimensional space (x, y, z);
  • their velocity components, along each one of these axes (vx, vy, vz).   


“The system derived by the superposition of the two original systems, each one described by 6 dimensions, is fully described in the Hilbert Space H  by 36 dimensions”

In the Euclidean space:

If we join these two systems and let them interact, we add the number of dimensions of each one of them.   The system derived by the superposition of the two original systems, each one described by 6 dimensions, is fully described by 12 dimensions.  

In the Hilbert space:

Repeating exactly the same steps in the Hilbert space yields a completely different result, because we now have to multiply the number of dimensions of the original systems.    Then, in this case, the dimensionality necessary to fully describe the superposed system increases to 36.
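A small numerical sketch (dimensions chosen to match the example of two 6-dimensional systems) of the two composition rules:

    # Sketch: composing two systems, each described by 6 dimensions.
    # Classically (Euclidean description) the dimensions add: 6 + 6 = 12.
    # In the Hilbert space they multiply (tensor product): 6 * 6 = 36.
    import numpy as np

    d1, d2 = 6, 6
    classical_dim = d1 + d2                    # 12

    psi1 = np.random.rand(d1) + 1j * np.random.rand(d1)   # example state of system 1
    psi2 = np.random.rand(d2) + 1j * np.random.rand(d2)   # example state of system 2
    psi_joint = np.kron(psi1, psi2)            # tensor product of the two state vectors
    hilbert_dim = psi_joint.size               # 36

    print(classical_dim, hilbert_dim)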



Noncommutativity in the Hilbert Space

 The iridescent reflection of light, visible in the diffraction grating of DVDs, is an example of decomposition of the frequencies superposed to form the incident solar light.  Examining individual photons, rather than the multitudes of classic Geometric Optics, reveals surprises about the photons’ nature

More, in the Hilbert space the superposition of two systems like that described before (a measurement) is noncommutative [6], meaning that the order of the operations implies different results.    A binary operation, indicated by the asterisk symbol  *  on a set S, is noncommutative if there exist x, y ∈ S such that:


            x * y  ≠  y * x               [6]  


This last point truly makes the difference with respect to a common sense still based on the classic Algebra of real numbers, functions from the set ℝ into the same set ℝ.   Our idea of commutativity is strictly related to the deeper idea of symmetry.   How can it be possible that a different order of the operations changes the result?    An idea like this, judged from the familiar point of view of the real numbers, seems unthinkable.   As an example, suggesting that:


                                              2  *  3    ≠   3  *   2


After the implications of noncommutative rules became evident, a first comfortable tentative explanation was coined, one trying to get around the new scenario it forced us to confront:

“No paradox exists because we are treating non-classic complex operators in the Hilbert space.”



But the Hilbert space is nonetheless the basis for all practical technological applications.  How could a useful fiction, a mathematical-only artifice, carry us such a bonanza of Wall Street-quoted technological and industrial applications?   As an example, no one doubts the reality of the switchings of the CPUs and of the logic gates in the Programmable Logic Controllers (PLCs) which let nearly all worldwide Machinery run in this moment.






“To reduce the observed value to a single point by means of a measurement does not mean that the others do not exist any more, or that they did not exist at all before the measurement”


No one doubts that what those CPUs and I/Os are performing are computations over a programmed algorithm.   A second attempt to get around the uncomfortable implications of the reality of the Quantum Field tried to limit the domain of application of noncommutative geometry rules to the atomic and subatomic scales.   But other examples of noncommutativity were already known also for objects of the spacetime macroscale of dimensions we directly perceive without any instrumental aid, for example the geometric rotations of the solid, massive 3-dimensional book in the figure below.   Then, noncommutativity represents rules of general application.    The key point is that the Hilbert space represents a system before the measurement action, when all of the states of the system exist in superposition.    The act of measuring a status for a variable (or Trigger, e.g. the time of passage of a container in front of a photoelectric sensor) really reduces the observed value to a single point.   To reduce the observed value to a single point by means of a measurement does not mean that the others do not exist any more, or that they did not exist at all before the measurement.
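As a check that the book-rotation example is generic, and not tied to quantum operators, a short sketch (the two 90º rotations are an arbitrary choice) compares the two orders of composition of three-dimensional rotations:

    # Sketch: rotations of a rigid body in three dimensions do not commute.
    # Rotating 90º about x and then 90º about y gives a different orientation
    # than performing the same two rotations in the opposite order.
    import numpy as np

    def rot_x(t):
        return np.array([[1, 0, 0],
                         [0, np.cos(t), -np.sin(t)],
                         [0, np.sin(t),  np.cos(t)]])

    def rot_y(t):
        return np.array([[ np.cos(t), 0, np.sin(t)],
                         [0, 1, 0],
                         [-np.sin(t), 0, np.cos(t)]])

    t = np.pi / 2
    print(np.allclose(rot_x(t) @ rot_y(t), rot_y(t) @ rot_x(t)))   # False: the order matters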





  The rotations in three dimensions of a solid are non commutative (abridged by   Jeff Atwood, 2011) 








Hilbert Space and Hypercomplex numbers







In the course of the last century it was discovered that the Hilbert Space H can be defined over different number fields.   These include the: 

  • Real numbers ℝ, numbers on a line, a 1-dimensional number system;
  • Complex numbers ℂ, a 2-dimensional number system; 
  • Quaternions ℍ, a 4-dimensional number system;

and, to some extent, also the Octonions, included in the figure below with the quaternions among the Hypercomplex numbers.   The Hypercomplex numbers are all those whose dimension is over 3.    The Octonions can be thought of as octets (or 8-tuples) of Real numbers, building up an 8-dimensional number system.


  The set of all numbers includes the subsets of the > 3-dimensional Hypercomplex numbers, visibly dropping the commutative property.   It includes the 2-dimensional set ℂ of the Complex numbers, of special interest for all dynamical systems, including the Industrial Machinery object of Root Cause Analysis (  stratocaster47, 2014)













An Octonion is a real linear combination of the unit octonions:

                         { e0, e1, e2, e3, e4, e5, e6, e7 }


where e0 is the scalar, or Real element, which can be identified with the Real number 1.  

That is, every Octonion x can be written in the form:

x  =  x0 e0 + x1 e1 + x2 e2 + x3 e3 + x4 e4 + x5 e5 + x6 e6 + x7 e7


with Real coefficients  { xi }.   
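A minimal sketch (purely illustrative: only the linear 8-tuple structure is shown, not the octonion multiplication table) of an octonion as a real linear combination of the unit octonions:

    # Sketch: an octonion as an 8-tuple of real coefficients over the unit octonions e0..e7.
    # Addition and real scaling act component-wise; the non-associative product is not reproduced here.
    import numpy as np

    e = np.eye(8)                               # rows play the role of the unit octonions e0..e7
    coeffs = np.array([1.0, 0.5, -2.0, 0.0, 3.0, 0.0, -1.0, 0.25])   # example x0..x7

    x = sum(c * e_i for c, e_i in zip(coeffs, e))     # x = Σ xi ei
    print(x)                                    # the same 8-tuple of real coefficients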

The reason why we pointed out the relation of the Hilbert Space H to the Octonions is that today's best mathematical instrument to scan it all is String Theory.  And the 10-dimensional version of String Theory, namely the version free of anomalies, is the one using the Octonions rather than other number systems.



The transition from classic to quantum


Classic Physics' state

 Relation between Hilbert space and Topological Space. Topological spaces are defined by the most basic Set Theory

In classical physics, the notion of the state of a physical system is intuitive.   One focuses on certain measurable quantities of interest, for example the position and momentum of a moving body, and subsequently assigns mathematical symbols to these quantities (such as “x” and “p”).   Then, the state of motion of a body is specified by assigning numerical values to these symbols. In other words, there exists a one-to-one correspondence between the physical properties of the object and their mathematical representation in the theory.   To be sure, we may certainly think of some cases in classical physics where this direct correspondence is not established as easily as in the example of Newtonian mechanics used here.   As an example, it is rather difficult to relate the formal definition of temperature in the theory of Thermodynamics to the underlying molecular processes leading to the physical notion of temperature.   However, reference to other physical quantities and phenomena usually allows one to resolve this identification problem, at least at some level.



Quantum Physics' state

The one-to-one correspondence between the physical world and its mathematical representation in the theory came to an end with quantum theory after 1926. Instead of describing the state of a physical system by means of intuitive symbols that correspond directly to the “objectively existing” physical properties of our experience, in quantum mechanics we have only an abstract quantum state, defined as a vector (or, more generally, as a ray) in a similarly abstract Hilbert vector space. The conceptual leap associated with this abstraction can hardly be overstated.  As a matter of fact, the discussions regarding the interpretation of quantum mechanics since the early years of quantum theory are to a large extent due to the question:

  ...how to relate the abstract quantum state to the physical reality out there ?  

The connection with the familiar physical quantities of our experience is only indirect, through measurements of physical quantities, that is, of the observables object of our measurements and everyday life, represented by (Hermitian) operators in a Hilbert space. To a certain extent, the measurement allows us to revert to a one-to-one correspondence between the mathematical formalism and the “objectively existing physical properties” of the system, that is, to the concept familiar from classical physics.  But, due to the fact that many observables are mutually incompatible (noncommutativity), a quantum state will in general be a simultaneous eigenstate of only a very small set of operator-observables. Accordingly, we may ascribe only a limited number of definite physical properties to a quantum system, and additional measurements will in general alter the state of the system unless we measure, by virtue of luck or prior knowledge, an operator-observable with an eigenstate that happens to coincide with the quantum state of the system before the measurement.    A consequence is that it is impossible to uniquely determine an unknown quantum state of an individual system by means of measurements performed on that system only.   This situation is in evident contrast with classical physics:

  • here we can enlarge our knowledge of the physical properties of the system by performing an arbitrary number of measurements of additional physical quantities.  For example, in an RLC series electric circuit, several measurements of the current intensity along the circuit and of the alternating voltages across the resistor, capacitor and inductor, while changing the frequency of the signal coming out of a generator;
  • many independent observers may carry out such measurements and agree on the results, without running into any risk of disturbing the state of the system, even though they may have been initially completely ignorant about this state.  

The last century of Physics also demonstrated that the idea of preexistence of the classical states is an illusion: a remnant of the limited knowledge we had along the past centuries.



Actual states   

Until some decades ago it was often argued that these states represent only potentialities for the various observed states.  That is, at the same time, the quantum state:

  • represents a complete description of a system, encapsulating all there is to say about its physical state;
  • does not tell us which particular outcome will be obtained in a measurement, but only the probabilities of the various possible outcomes. 

This probabilistic character of quantum mechanics, until thirty years ago taught as a canon, is an effect of our cultural perspective.    In an experimental situation, the probabilistic aspect is represented by the fact that, if we measure the same physical quantity on a collection of systems all prepared in exactly the same quantum state, we will in general obtain a set of different outcomes.   The purely probabilistic interpretation of the wave function, when closely examined, creates many kinds of paradoxes.   What has been fully understood along the past decades, marrying the best experimental techniques to the most powerful theoretical ideas, is that, on the contrary, all possible measurement values, hence all the physical states about which the measurement is providing us information, are actual and equally real.




Schroedinger.  Only waves

  Erwin Schroedinger, one of the founders of Quantum Mechanics. He was the first to understand that a multitude of simultaneous instances of the electron around the Hydrogen proton were modelled by his newly created Wave Function Ψ

Reading textbooks written sixty years ago, you’ll be surprised by the difficulty of figuring out the so-called wave-particle duality.   This idea stood among those which delayed many initial efforts to understand Quantum Mechanics.   The following decades demonstrated it was one more side-effect of the past cultural perspective.   Already Erwin Schroedinger, the physicist depicted in the figure here at the right side, in the early days of Quantum Mechanics, when attempting to identify narrow wave packets in real space with actual physical particles, observed two problems:

  • initially localized wave packets spread out very rapidly over large regions of space, a behaviour irreconcilable with the idea of particles, by definition localized in space;
  • the Wave Function Ψ describing the quantum state of N > 1 particles in the 3-dimensional space resides in a 3N-dimensional configuration space, no longer in the familiar 3-dimensional Euclidean space of our experience.

In 1952, during a conference at Dublin, Ireland, he first openly introduced the idea that what his own and Werner Heisenberg's matrix quantum formalisms are really suggesting is the simultaneous superposition of all the possible instances of the same physical object.    A physical object, from his point of view, reduced to a wave, and particles as mere idealisations for wave packets with relatively limited extension in space-time.   We explained above that the wave function Ψ, originally born only to define the probability to localize the electron of an atom, since the start spread the answer over the infinite range of the positions x.    Erwin Schroedinger, the Nobelist who fathered in 1926 the same wave function formalism on the basis of prior ideas of Louis de Broglie, saw that the wave function Ψ was describing a multitude of simultaneously coexisting electron positions in the space ℂ of the complex numbers:


                              {a + ib,    a, b ∈ ℝ} 


rather than a single position in the space ℝ whose elements are real numbers.   Schroedinger chose to treat the multitude of solutions in the particular case he studied [a multitude of positions of the electron in a Hydrogen atom] the way we treat square roots of minus one in ℝ.   Another thirty-one years were necessary before someone else (Hugh Everett III) recognised their meaning, yield and physical relevance, recognising that the wave function Ψ represents a physical field, and that each solution may differ from the others by a minimum amount of Information, an amount considered to be 1 bit.    We recall now the detailed example we considered elsewhere of an RLC series electric circuit, based on passive components.   A circuit which should have fully satisfied Maxwell’s equation [3] in the case of an ideal RLC series circuit (something which never existed nor shall exist) in an ideal Environment where:

  • temperature is kept constant with an infinite precision;
  • humidity is kept constant with an infinite precision; 
  • electromagnetic fields do not exist;
  • no gravitational fields exist, hence an RLC resonant circuit in a flat Euclidean space-time that never existed;
  • no induction of the electromagnetic field of the inductor in the resistor and capacitor, etc. 

In other terms: ideal RLC components part of an ideal Environment.  Conditions like these can only be encountered out of the Universe, because they imply no electromagnetic nor gravitational fields.   We remind the Reader that excellent (however not ideal) ways exist to shield an ambient from electromagnetic fields, but that no way exists still today to “shield” it from gravitation.   In practice, a system causally disconnected (hence, uncorrelated) from the Environment. 


“One of the applications of the quantum computers is the Binary Classification. 

All Electronic Inspectors, whatever their size, complexity or task, are Binary Classifiers”


The Past  





We prefer to start this section with a visit to one of Google, Inc.'s Data Centres.   Why this visit?    Because it shows the Past.    And, specifically, it shows one of the world’s greatest Classifiers, still close to the dawn of its design.   The aligned rows of thousands of Servers consume impressive amounts of active energy, the reason why Google privileges naturally cold ambients, like the Icelandic North Sea or the Swedish Baltic Sea, for its Data Centres, so as to have cheaper cooling of the CPUs.   Parallelizing Support Vector Machines on Distributed Computers is one of the main characteristics of these Data Centres.












In some way, the alignment of the Servers and of the many CPUs embedded there suggests one of those mirror-like effects where it is possible to see many instances of a single object.    It is important to start to understand that, living in a multiversal environment, all states are elements of a single Superposition.   A single CPU, in reality, has a multitude of counterparts, parallel processing data whose difference is limited to 1 bit. 


Binary Classification: Present and Future 

The video below synthesises in a few words a new paradigm. Binary Classification, the task of all of the Electronic Inspectors of the World, is an activity intrinsically suited to massively parallel computing.    Massively parallel computing is expensive and impractical if it arises from the ideas underlying all of today’s Data Centres, like the Google, Inc. one above.   A single Quantum Computer outperforms an entire Data Centre, at a fraction of its cost.
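A deliberately minimal sketch of what a Binary Classifier does at every container passage (the fill-level measurement, the limits and the sample values below are hypothetical, not taken from any real inspector):

    # Sketch: binary classification as performed by an electronic inspector.
    # One measurement per container, one pass/reject decision against programmed limits.
    LOW_LIMIT, HIGH_LIMIT = 330.0, 340.0        # hypothetical accepted fill-level band (ml)

    def classify(fill_level_ml):
        # Return True (pass) or False (reject): a binary classification.
        return LOW_LIMIT <= fill_level_ml <= HIGH_LIMIT

    for sample in (333.2, 329.1, 341.7, 335.0):          # invented sample measurements
        print(sample, "PASS" if classify(sample) else "REJECT")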



Rainier processors and Josephson-effect junctions operated inside 16 concentric levels of electromagnetic shielding, at temperatures close to -273 ºC. Special conditions to recreate small and brief-duration Hilbert spaces in our hot, decohering environment (courtesy D-Wave Systems, Inc./2014)     

Computing in the Hilbert space

The following video, originating from the world leader in quantum computing, D-Wave Systems, Inc., is meant as an introduction to commercial applications of the superposed structure named Multiverse.  One of the applications of quantum computers is Binary Classification, and the Electronic Inspectors in the Food and Beverage Bottling Lines are Binary Classifiers.  (In another page of this site we’ll explain why the ideal Bottling Control is a device calculating in a binary way as usual but in the Hilbert space.)    From Decoherence we learnt that the general mechanisms and phenomena arising from the interaction of a macroscopic quantum system with its Environment strictly depend on the strength of the coupling between the considered degree of freedom and the rest of the world.  And for this reason D-Wave's “Vesuvius” Central Processing Units are a system of 512 quantum bits (qubits).  

512 superpositions of the Ψ wave functions, preserved as long as possible in the Hilbert space: 




  • operating at temperatures of 0.02 K (-273.13 ºC), extremely close to absolute zero;
  • protected from the induction of external electromagnetic fields by fifteen levels of Faraday cages.

A mind-boggling amount of other branches of the Multiverse where the computer exists, each one processing the same original qubit with differences from one to another, in the variable processed, reduced to 1 bit.   Moreover they:

  • adopt Rainier processors and Josephson-effect junctions, rather than Silicon-based processors;  
  • adopt Josephson junctions, the most basic mesoscopic superconducting quantum devices;
  • handle qubits which can slowly be tuned (annealed) from their superposition state (say, where they are 0 and 1 at the same time) into a classical state where they are either 0 or 1.   When this is done in the presence of the programmed memory elements on the processor, the 0 and 1 states that the qubits end up settling into give the answer to a user-defined problem (a toy sketch of this settling follows below).
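A toy sketch (not a model of D-Wave's annealing hardware: just one simulated qubit measured according to the Born rule) of the step from a superposed 0-and-1 state to a classical 0 or 1:

    # Toy sketch: a single qubit in superposition α|0⟩ + β|1⟩ ends up, once measured,
    # in a classical state 0 or 1 with probabilities |α|² and |β|².
    import numpy as np

    alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)     # example amplitudes, |α|² + |β|² = 1
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2

    rng = np.random.default_rng(0)
    print(rng.choice([0, 1], size=10, p=[p0, p1]))    # each run settles into either 0 or 1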

  To create a Hilbert Space of reduced volume and duration here on the Earth's surface requires a technology completely different from that of today's Supercomputers. A technology which is the core of the new Quantum Computers. Nearly Science Fiction, as seen from the point of view of today's Industrial Automation.  One of the key points lies in the fact that superpositions of the Ψ wave functions are preserved as long as possible in the Hilbert space when:   







1) operating at temperatures of 0.02 K (-273.13 ºC), extremely close to absolute zero;    

2) protected from the induction of external electromagnetic fields by the shielding provided by fifteen levels of Faraday cages.  This image shows the extreme care taken with respect to both points:  a) the entire circuit lies in a cryogenic fridge;   b) EMI-induced parasitic currents flowing in the shields of the cables carrying signals are immediately discharged to Ground, limiting to < 250 mm their extension away from massive Ground discharge points whose impedance is extremely low. In other terms, also the EMI-induced parasitic currents are conceived within the framework of the wider circuit design (courtesy D-Wave Systems Inc./2014)







