Quantum in Brief







With reference to the figure at the side, a quantum system is specified by: 

  1. A Hilbert space H: a complex vector space endowed with a scalar product (and complete in the induced norm, hence also a Banach space), whose rays, i.e. its non-zero vectors taken up to normalization and phase, represent the possible states of a physical system.
  2. A dynamics for the rays in H: a set of unitary transformations, parameterised by time, evolving a state at one time into another state at another time.
  3. The quantum system's structure, provided either as operators on the Hilbert space or as preferred sets of basis vectors; alternatively, as a rule of decomposition of the system into subsystems, equivalent to a decomposition of the Hilbert space into quotient spaces.

The rationale for the last point 3 is the isomorphism of all Hilbert spaces of the same dimension. That is why the differences between quantum systems cannot really be understood without additional structure. An eigenvector in Hilbert space is a simple line; but the quantum system whose state it represents is typically a complex object, like a molecule, a machine or an entire planet.


Logic and Space of Information

 Projection of a quantum state-vector |ψ⟩ into a vector subspace S by a projector P(S). Shown here is the projection of |ψ⟩ onto the ray corresponding to |ψm⟩, with which it makes an angle θ. The probability for this transition to occur is cos²θ, thus illustrating von Neumann's concept of probabilities evaluated by the measurement of angles. A von Neumann measurement is the set of possible such projections onto a complete orthogonal set of rays of the Hilbert space being measured (abridged by Jaeger/2009)
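A minimal numerical sketch of the idea in the caption (not part of the original text; it assumes an illustrative two-dimensional, real Hilbert space and an angle of 30°): the transition probability onto the ray of |ψm⟩ is the squared magnitude of the inner product, which here equals cos²θ.

    import numpy as np

    # Illustrative 2D real Hilbert space: |psi> makes an angle theta with |psi_m>.
    theta = np.deg2rad(30.0)

    psi_m = np.array([1.0, 0.0])                       # target ray |psi_m>
    psi   = np.array([np.cos(theta), np.sin(theta)])   # state |psi>, already normalized

    # von Neumann projector onto the ray spanned by |psi_m>
    P = np.outer(psi_m, psi_m.conj())

    # Transition probability computed two equivalent ways
    prob_inner = abs(np.vdot(psi_m, psi)) ** 2         # |<psi_m|psi>|^2
    prob_proj  = np.vdot(psi, P @ psi).real            # <psi|P|psi>

    print(prob_inner, prob_proj, np.cos(theta) ** 2)   # all ~0.75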











 Signature of John von Neumann, to whom we owe much of the quantum logic and terminology  






















To specify a quantum system, we begin by giving some Hilbert space H, whose rays—that is, its non-zero vectors up to normalization and phase—are intended to represent the possible states of the system. We then provide a dynamics for rays in H: a set of time-indexed unitary transformations which take a state at one time into the states it evolves to at other times (often we specify these unitary transformations by the Hamiltonian, the self-adjoint operator which generates them). Finally, we provide some additional structure on the Hilbert space, sufficient to specify the particular system which we are studying.
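As a minimal sketch of the second ingredient (the two-level Hamiltonian below is an arbitrary illustrative choice, not taken from the text, and ħ is set to 1), the self-adjoint Hamiltonian generates the time-indexed family of unitaries U(t) = exp(−iHt), which preserve the norm of the state:

    import numpy as np
    from scipy.linalg import expm

    # Illustrative two-level Hamiltonian (self-adjoint), hbar = 1
    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]], dtype=complex)
    assert np.allclose(H, H.conj().T)            # self-adjointness

    def U(t):
        """Unitary evolution operator U(t) = exp(-i H t) generated by H."""
        return expm(-1j * H * t)

    psi0  = np.array([1.0, 0.0], dtype=complex)  # initial state (a ray representative)
    psi_t = U(0.7) @ psi0                        # the state it evolves to at a later time

    print(np.allclose(U(0.7).conj().T @ U(0.7), np.eye(2)))   # True: unitarity
    print(np.vdot(psi_t, psi_t).real)                         # 1.0: norm preserved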

This last requirement may look unfamiliar, and indeed inelegant (it is certainly not stated with the mathematical cleanness of the other two requirements).  Nonetheless it is essential. All Hilbert spaces of the same dimension are isomorphic, so the differences between quantum systems cannot really be understood without additional structure. And absent additional structure, a Hilbert-space ray is just a featureless, unstructured object, whereas the quantum state of a complex system is very richly structured.

In practice, we specify structure for quantum systems in two ways: by providing certain operators on the Hilbert space (or, equivalently, providing certain preferred sets of basis vectors); or by providing a certain decomposition of the Hilbert space into quotient spaces (i.e. by providing a particular decomposition of the system into subsystems).

An event can typically happen in several different ways. In quantum mechanics its probability is calculated as the squared absolute value of a superposition of contributions, one contribution from each alternative path. The probability that a particle will be found to have a path x(t) lying within a region of space-time is then the squared magnitude of a superposition of contributions from each path in the region. The contribution from an individual path is postulated to be an exponential whose complex phase is the action for the path in question. The total contribution from all actions along all paths reaching the point (x, t) from the past is named the wave function Ψ(x, t).

How was it possible for the classic idea of superposition to generate this non-classic kind of logic of observables, input currents, output currents, valence bands and gaps, underlying all designs based on semiconductor junctions? We will answer by quoting at length from the creator of quantum logic, the Hungarian mathematician and physicist John von Neumann: a text delivered as an address on September 2–9, 1954, in which he expressed the spirit of his creation.
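As a minimal, purely illustrative sketch of the rule just described (only two alternative paths, arbitrary action values, ħ = 1; none of these numbers come from the text), each path contributes a unit-modulus complex exponential of its action, and the probability is the squared magnitude of the summed contributions rather than a sum of separate probabilities:

    import numpy as np

    hbar = 1.0
    S1, S2 = 2.0, 2.9                 # illustrative actions of two alternative paths

    # One complex contribution exp(i S / hbar) per path
    a1 = np.exp(1j * S1 / hbar)
    a2 = np.exp(1j * S2 / hbar)

    amplitude   = a1 + a2             # superposition of the contributions
    probability = abs(amplitude) ** 2 # squared magnitude of the superposition

    classical_sum = abs(a1) ** 2 + abs(a2) ** 2   # adding probabilities instead (always 2 here)
    print(probability, classical_sum)             # interference makes these differ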


 The oscillation characteristic values (or eigenvalues) correspond to modes of vibration, each oscillating at its own frequency, directly visible in this image of a drumhead. As an example of direct interest for automated quality control in Food and Beverage Packaging, they are what is measured at acoustic and ultrasonic frequencies by the acoustic transducer that is part of the Inner Pressure Inspection by Ultrasounds (J. Kreso/1998-2010)


A fruitful spirit, considering that all the computers and smartphones used to read this text, and not only the industrial machinery and equipment, exist and perform as well as we know they do because they abandoned the old classical logic and embraced John von Neumann's logic:

“...If you take a classical mechanism of logics, and if you exclude all those traits of logics which are difficult and where all the deep questions of the foundations come in, so if you limit yourself to logics referred to a finite set, it is perfectly clear that logics in that range is equivalent to the theory of all sub-sets of that finite set, and that probability means that you have attributed weights to single points, that you can attribute a probability to each event, which means essentially that the logical treatment corresponds to set theory in that domain and that a probabilistic treatment corresponds to introducing measure. I am, of course, taking both things now in the completely trivialized finite case.

But it is quite possible to extend this to the usual infinite sets.  And one also has this parallelism that logics corresponds to set theory and probability theory corresponds to measure theory and that given a system of logics, so given a system of sets, if all is right, you can introduce measures, you can introduce probability and you can always do it in very many different ways.


















In the quantum mechanical machinery the situation is quite different.  Namely instead of the sets use the linear sub-sets of a suitable space, say of a Hilbert space.  The set theoretical situation of logics is replaced by the machinery of projective geometry, which in itself is quite simple.

However, all quantum mechanical probabilities are defined by inner products of vectors. Essentially if a state of a system is given by one vector, the transition probability in another state is the inner product of the two which is the square of the cosine of the angle between them. In other words, probability corresponds precisely to introducing the angles geometrically.   Furthermore, there is only one way to introduce it.   The more so because in the quantum mechanical machinery the negation of a statement, so the negation of a statement which is represented by a linear set of vectors, corresponds to the orthogonal complement of this linear space.

And therefore, as soon as you have introduced into the projective geometry the ordinary machinery of logics, you must have introduced the concept of orthogonality.  This actually is rigorously true and any axiomatic elaboration of the subject bears it out. So in order to have logics you need in this set of projective geometry with a concept of orthogonality in it.

In order to have probability all you need is a concept of all angles, I mean angles other than 90º.   Now it is perfectly quite true that in a geometry, as soon as you can define the right angle, you can define all angles.  Another way to put it is that if you take the case of an orthogonal space, those mappings of this space on itself, which leave orthogonality intact, leave all angles intact, in other words, in those systems which can be used as models of the logical background for quantum theory, it is true that as soon as all the ordinary concepts of logics are fixed under some isomorphic transformation, all of probability theory is already fixed.

What I now say is not more profound than saying that the concept of a priori probability in quantum mechanics is uniquely given from the start.  You can derive it by counting states and all the ambiguities which are attached to it in classical theories have disappeared.   This means, however, that one has a formal mechanism, in which logics and probability theory arise simultaneously and are derived simultaneously.  I think that it is quite important and will probably [shed] a great deal of new light on logics and probably alter the whole formal structure of logics considerably, if one succeeds in deriving this system from first principles, in other words from a suitable set of axioms.  All the existing axiomatisations of this system are unsatisfactory in this sense, that they bring in quite arbitrarily algebraical laws which are not clearly related to anything that one believes to be true or that one has observed in quantum theory to be true.   So, while one has very satisfactorily formalistic foundations of projective geometry of some infinite generalizations of it, of generalizations of it including orthogonality, including angles, none of them are derived from intuitively plausible first principles in the manner in which axiomatisations in other areas are.”

 John von Neumann (right side) and Robert Julius Oppenheimer in front of the thousands of double triodes acting as twin 2-state switches (2 bits each) in one of the first electronic calculators





Basic Quantum Terminology

Coherent, typically applied to a system, is in modern physics a synonym of superposed. In this sense, superposed (or coherent) are the orthonormal eigenvectors constituting a basis for the wave function Ψ, representing an object, however complex or massive it may be. Then, decoherent means reduced by a measurement to one definite value (the eigenvalue of the eigenstate), the only one we perceive among the multitude of alternatives.


Eigen means “characteristic”, “proper” or “intrinsic”.


Eigenstate is the measured state of some object possessing quantifiable properties like position, momentum, energy, etc. The state being measured and described has to be observable (e.g., like the common electromagnetic measurements of Bottling Controls) and has a definite value, called an eigenvalue.


Eigenfunction of a linear operator, defined on some function space, is any non-zero function in that space that is returned by the operator unchanged except for a multiplicative scaling factor. 


Eigenvectors are a special set of vectors associated with a linear system of equations that are sometimes also known as characteristic vectors.  The determination of the eigenvectors and eigenvalues of a system is important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few.  Each eigenvector is paired with a corresponding so-called eigenvalue. 
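A minimal numerical sketch of these definitions (the matrix is an arbitrary illustrative choice): NumPy returns the eigenvalues and eigenvectors of a small symmetric matrix, and each eigenvector satisfies A v = λ v, i.e. it comes back from the operator unchanged except for the scaling factor λ.

    import numpy as np

    # Illustrative symmetric (hence diagonalizable, real-spectrum) operator
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    eigenvalues, eigenvectors = np.linalg.eigh(A)    # eigh: for symmetric/Hermitian matrices

    for lam, v in zip(eigenvalues, eigenvectors.T):  # columns of `eigenvectors` are the eigenvectors
        print(np.allclose(A @ v, lam * v))           # True: A v = lambda v, only a scaling survives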






 John von Neumann fathered the idea of what is today named “RAM Memory” (   Kingston/2015)











 The applications of the many insights by the Hungarian physicist and mathematician John von Neumann are everywhere, including the smallest computer in the world, like this zoomed-in Raspberry™ Pi™ 2 Model B


The words written before could, at least at first sight, look theoretical and unrelated to the everyday reality of the facts. But they are not. An example of their explanatory power is hinted at by the figure before, depicting an extremely common RAM memory. RAM memories are everywhere: from the gigantic assemblies of thousands of computers and servers in data centres, down to the smallest and cheapest portable telephones or computers, like the quad-core-equipped Raspberry™ Pi™ (figure at the right side). We are so accustomed to using RAM memories that it is frequently overlooked who fathered their very basic concept, in an epoch when data were handled electromechanically by means of relays. It was John von Neumann who, among other things, fathered the idea of the RAM memory, and he lived long enough to see an application of his idea in the US Army computers devoted to modelling the shockwaves arising from nuclear detonations.


















































































 In Synthesis




















The guiding principles for today's understanding of the fine scales at which all phenomena act are:

  1. linear superposition principle
  2. probabilistic interpretation of Quantum Mechanics. 

These two principles suggest that the state space is a linear space (thus accounting for the superposition principle) endowed with a scalar product used to calculate probability amplitudes. Such a linear space endowed with a scalar product is named a Hilbert space and abbreviated H. Moreover, the Hilbert space H is a linear space over the field ℂ of the complex numbers. This means that if a, b ∈ H, then (as the short sketch after this list illustrates): 

  1.   a + b  ∈  H
  2.   c * a   ∈  H
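A minimal sketch of these two closure properties in a finite-dimensional stand-in for H (a 2-dimensional complex vector space; the particular vectors and the scalar c are illustrative only): sums and complex multiples of state vectors are again vectors of the same space, and the scalar product supplies the probability amplitudes.

    import numpy as np

    # Two vectors of a 2-dimensional complex Hilbert space (illustrative values)
    a = np.array([1.0 + 0.0j, 0.0 + 0.0j])
    b = np.array([0.0 + 0.0j, 1.0 + 0.0j])
    c = 0.5 - 0.5j                      # an arbitrary complex scalar

    superposition = a + b               # property 1: a + b is again in H
    scaled        = c * a               # property 2: c * a is again in H

    # The scalar product gives the probability amplitude between (normalized) states
    psi = superposition / np.linalg.norm(superposition)
    amplitude = np.vdot(a, psi)                 # <a|psi>
    print(amplitude, abs(amplitude) ** 2)       # amplitude and the associated probability (0.5)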

where c is any complex number in ℂ. Complex numbers were conceived nearly five centuries ago by the Italian mathematician Gerolamo Cardano, to make sense of the real solutions of cubic equations that appear as sums of square roots of negative quantities. That an equation may admit strange, common-sense-defying square roots of −1 had been observed already in antiquity. The introduction of complex numbers was slow and encountered opposition; a historical remnant of this is the fact that the square roots of negative quantities are still today called imaginary numbers. The term "complex" numbers refers to the fact that they are obtained as the sum of one real and one imaginary number. It is a symbolic sum, defying the layman's usual point of view, taught in the courses of elementary arithmetic in primary school (★ + ★ = 2★;  ★ + ☾ = ☾ + ★ ≠ 2★), where only homogeneous quantities can be summed. Complex numbers are two-component quantities, commonly written as c = a + i b. Another well-known two-component quantity is the plane vector, written as z = i x + j y, where i, j are the two unit vectors indicating the coordinate axes in a Cartesian representation. Complex numbers were the first representation of two-component quantities on the Gauss–Argand plane.   


Observable and Self-Adjoint Operators

An observable associated with a quantum system is: 
















  1. exclusively represented by a unique self-adjoint operator A on its Hilbert space;
  2. a dynamical variable. Familiar examples of observables are energy, position, and momentum. They should not be confused with their classical namesakes: the quantum momentum has a richer structure than the classical momentum. There also exist observables, for example spin, without any classical counterpart. An operator A is a linear transformation of the Hilbert space into itself;
  3. characterised by the spectrum of its representing operator, comprising the set of all possible values obtainable in measurements of A. A spectrum can be discrete, continuous, or a combination of both. 

Associated with many operators is the operator called its adjoint and denoted A†, which will be such that for all functions f and g in the Hilbert space:

                                                ⟨ f | A g ⟩  =  ⟨ A† f | g ⟩ 

A† is an operator that, when applied to the left member of any scalar product, produces the same result as is obtained if A is applied to the right member of the same scalar product. The equation above is the defining equation for A†. Depending on the specific operator A, and on the definitions in use of the Hilbert space and the scalar product, A† may or may not be equal to A. If A† = A, then A is referred to as self-adjoint or Hermitian. Self-adjoint operators distinguish themselves by having spectra that consist only of real numbers. The observable A is normally supposed to have a pure spectrum, where the real numbers aₙ are called the eigenvalues of A. The eigenvalues can be the direct results of measurements or experiments.    
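A minimal numerical sketch of the defining relation above (illustrative random 2×2 matrices and vectors, not from the text): the adjoint A† is the conjugate transpose, it satisfies ⟨f | A g⟩ = ⟨A† f | g⟩ for any f and g, and a self-adjoint operator has a purely real spectrum.

    import numpy as np

    rng = np.random.default_rng(0)

    # An arbitrary (generally non-Hermitian) operator and two arbitrary vectors
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    f = rng.normal(size=2) + 1j * rng.normal(size=2)
    g = rng.normal(size=2) + 1j * rng.normal(size=2)

    A_dag = A.conj().T                           # adjoint = conjugate transpose
    print(np.allclose(np.vdot(f, A @ g), np.vdot(A_dag @ f, g)))   # True: <f|Ag> = <A†f|g>

    # A self-adjoint (Hermitian) operator: A = A†, and its eigenvalues are real
    H = A + A_dag
    print(np.allclose(H, H.conj().T))                      # True
    print(np.allclose(np.linalg.eigvals(H).imag, 0.0))     # True: real spectrum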

    Photodetectors. Present in different guises in all common industrial photoelectric switches, designed and operating on the basis of physical laws following John von Neumann's logic of quantum mechanics outlined before 


Information about Observables 

Complex numbers do matter

Do you remember the handling given to √-1 when solving equations like:                                                                                                       

               a x² + b x + c = 0


in the set ℝ of real numbers?

Solutions are given by the relation:

x1,2  =  ( -b ± √( b² - 4ac ) ) / (2a)


may imply square roots of negative numbers.  

An example is the case:

x² + 1  =  0

                                                    

whose roots are:                                                                                      

                      x1,2  =  ± √−1  =  ± i    

Square roots of minus one are considered roots without meaning in the set ℝ. As a matter of fact, these same roots have full meaning (also physical, e.g. when dealing with the concepts of reactive power in electrotechnics and resonance in electronics) when the solution of the equations is extended to the set ℂ of the complex numbers, of which the real numbers ℝ are just a subset. 
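As a minimal sketch of this point (a hypothetical snippet, not taken from the text), Python's cmath module applies the same quadratic formula but returns the complex roots that ℝ alone cannot express:

    import cmath

    def quadratic_roots(a, b, c):
        """Roots of a*x**2 + b*x + c = 0, allowing complex results."""
        disc = cmath.sqrt(b * b - 4 * a * c)     # complex square root of the discriminant
        return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

    print(quadratic_roots(1, 0, 1))     # x**2 + 1 = 0  ->  (1j, -1j): purely imaginary roots
    print(quadratic_roots(1, 0, -1))    # x**2 - 1 = 0  ->  ((1+0j), (-1+0j)): the familiar real roots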

The superposition principle requires a continuum. As an example, the superposition of classical configurations, such as the positions of N particles, originates the idea of wave functions: wave functions evolving continuously in time according to an equation named the Schrödinger equation, and evolving continuously until a measurement disturbs the system. The writer of these notes remembers having been taught quantum mechanics starting from the study of single-particle problems, then following the early Schrödinger equation. This frequent recall of studies whose value is today almost only historical gives the impression that the wave function Ψ is a kind of spatial field. And that is why many telecommunications or chemical engineers and scientists still erroneously believe that there exists a wave function for each electron in an atom.

On the contrary, according to Schrödinger's quantization procedure and the superposition principle, wave functions are defined on configuration space. And it is this that leads to the unavoidable entanglement of all systems existing in the Universe. These discoveries of one century ago have to be seen from today's point of view, one based on fields rather than particles. The effect of the entanglement of all systems existing in the Universe can be immediately perceived in the superposition of classical configurations: if we superpose the amplitudes of certain fields (e.g., the electromagnetic field) rather than particles' positions, the wave function Ψ becomes an entangled wave functional for all of them. And it is because of what was explained before that the modern principle of superposition is the basic reason why the methods and formalism of quantum mechanics, different from those of relativity, establish that subsystems evolve separately: they evolve separately along a tree-like structure occupying a multitude of spaces, reminiscent of the (infinite) multitude of spaces attributed since 1907 by relativity theory to each instant of time.   
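A minimal numerical sketch of what "entangled" means for the simplest composite system (two two-level subsystems; the states are standard textbook examples, not taken from this text): a product state factorizes into subsystem states, while a Bell-type superposition does not, as the singular-value (Schmidt) decomposition of the reshaped state vector shows.

    import numpy as np

    def schmidt_rank(state_4d):
        """Number of non-negligible Schmidt coefficients of a two-qubit state vector."""
        coeffs = np.linalg.svd(state_4d.reshape(2, 2), compute_uv=False)
        return int(np.sum(coeffs > 1e-12))

    # Product state |0>|0>: factorizes, Schmidt rank 1 (not entangled)
    product = np.kron([1.0, 0.0], [1.0, 0.0])

    # Bell state (|00> + |11>)/sqrt(2): cannot be factorized, Schmidt rank 2 (entangled)
    bell = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

    print(schmidt_rank(product), schmidt_rank(bell))    # 1 2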

The information about the entire set of historical, time-ordered ramifications is known only to the superposition of all of the subsystems, and never fully known to the subsystems. This causes what, from the subsystems' point of view, is the existence of limits to knowledge, synthesized in the uncertainty principle. The multiverse is an idea implicitly embedded from the start in the same wave function Ψ: the same function describing how, as an example, semiconductor-based transistors operate grouped into the integrated circuits of the industrial machinery Programmable Logic Controllers (PLCs) devoted to automation, or of the Bottling Controls (Electronic Inspectors).      



Superposition of Different States


In 1935 Schrödinger conceived a case where a complex (biological) system lies in an ambient that assures no causal relation with the external world, the Environment. He was the first to understand that an object lying in a truly closed ambient, without any exchange (no matter, radiation nor information flow) with the external ambient, should be in a superposition of states, rather than in the single one we perceive directly or by means of measurement instruments. In this case, the insulated system should have a multiple existence: different states of different instances of the system in the insulated ambient. The object should be: 

  Imagine a dot and let all the infinite radii spherically emanate from that dot: that is the dimension of a dot in the Hilbert space, where each radius identifies one of its properties




  • out of the Environment;
  • into an infinite-dimensional space, where a multitude of instances of the system exists.

How this may be possible also depends on the fact that the wave function (or state function) Ψ is defined in the Hilbert space, a vector space that is itself a generalization of the familiar Euclidean space ℝ³. Hilbert space extends the methods of vector algebra and calculus from three-dimensional space to spaces with any finite or infinite number of dimensions. Here, we are using the term dimension in the usual way, as the number of quantities fully defining the position of a point in the space. Imagine a dot and let all the infinite radii spherically emanate from that dot: that is the dimension of a dot in the Hilbert space, where each radius identifies one of the properties (e.g., position, time, momentum, polarization, curvature, action, energy, temperature, information, etc.) of that dot.    



Dimension of a System in the Hilbert Space



As an example of the differences existing between the classic Euclidean space and the modern Hilbert space, consider two classical independent systems for which we know:

  • the Euclidean position, in the 3-dimensional space (x, y, z);
  • their velocity components along each of these axes (vx, vy, vz).   


“The system derived by the superposition of the two original systems, each one described by 6 dimensions, is fully described in the Hilbert Space H  by 36 dimensions”

In the Euclidean space:

If we join these two systems and let them interact, we add the number of dimensions of each one of them.  The system derived by the superposition of the two original systems, each one described by 6 dimensions, is fully described by 12 dimensions.  

In the Hilbert space:

Repeating exactly the same steps in the Hilbert space yields a completely different result, because we now have to multiply the numbers of dimensions of the original systems. In this case the dimensionality necessary to fully describe the superposed system increases to 36.   
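A minimal numerical sketch of the counting just described (the 6-dimensional spaces are generic placeholders, not tied to any particular system): joining the descriptions in the classical, Euclidean way concatenates coordinates (6 + 6 = 12), while composing Hilbert spaces takes the tensor (Kronecker) product of state vectors (6 × 6 = 36).

    import numpy as np

    dim = 6                                    # each subsystem described by 6 numbers

    # Classical/Euclidean picture: coordinates are concatenated, dimensions add
    state_a = np.zeros(dim)
    state_b = np.zeros(dim)
    joint_classical = np.concatenate([state_a, state_b])
    print(joint_classical.size)                # 12

    # Hilbert-space picture: state vectors combine by tensor product, dimensions multiply
    psi_a = np.zeros(dim, dtype=complex); psi_a[0] = 1.0
    psi_b = np.zeros(dim, dtype=complex); psi_b[1] = 1.0
    joint_quantum = np.kron(psi_a, psi_b)
    print(joint_quantum.size)                  # 36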




Noncommutativity in the Hilbert Space

Commutativity is an idea deeply ingrained in human beings, reflected also in the way vectors may be superposed in a 2D space. With reference to the figure at the side, the vectorial sum (or superposition) of the vectors a, b is commutative:

                                       a + b   =   b + a  


   Commutativity is an idea deeply ingrained in human beings, reflected also in the way vectors may be superposed in a 2D space


Commutativity has an associated topological meaning. To go from the vertex ⋁ at the left side, origin of a, b, to the successive vertex ⋀ at the right side, where a, b join, there exist two different, equivalent paths: 

                              

                                  (a + b),  (b + a)


But in the Hilbert space the superposition of two systems like those described before (i.e., a measurement) is their non-commutative multiplication [6], meaning that the order of the operations implies different results. A binary operation, indicated by the asterisk symbol * on a set S, is non-commutative if there exist a, b ∈ S such that:    


 The iridescent reflection of light, visible in the diffraction grating of DVDs, is an example of decomposition of the frequencies superimposed to form the incident solar light. Examining individual photons, rather than the multitudes of classic geometric optics, reveals surprises about the photons' nature

                                      a * b  ≠  b * a                    [6]  


This last point truly makes the difference with respect to a common sense still based on the classic algebra of real-valued functions, mapping the set ℝ into the same set ℝ. Our idea of commutativity is strictly related to the deeper idea of symmetry. How can it be possible that a different order of the operations changes the result? An idea like this, judged from the familiar point of view of the real numbers, seems unthinkable, as unthinkable as suggesting that:

                                                2 * 3  ≠  3 * 2
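A minimal numerical sketch of relation [6] for operators on a Hilbert space (the Pauli matrices are standard examples of non-commuting quantum observables; the text does not name them): the order of multiplication changes the result, and the commutator does not vanish.

    import numpy as np

    # Pauli matrices: standard examples of non-commuting quantum observables
    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

    ab = sigma_x @ sigma_z
    ba = sigma_z @ sigma_x

    print(np.allclose(ab, ba))      # False: a * b != b * a
    print(ab - ba)                  # the non-zero commutator [sigma_x, sigma_z]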


Considering now the Euclidean 3D space, the anticommutativity property acts in terms of the so-called volume product, so that the order in the multiplication of the vectors a, b, c determines different results.



After the implications of these and other rules became evident, a first comfortable, tentative explanation was coined, one trying to sidestep the new scenario they forced us to confront:  

“No paradox exists, because we are treating non-classic complex operators in the Hilbert space”. 

 



But the Hilbert space is already the base for all practical technological applications. How could it be possible that a useful fiction, a mathematical-only artifice, could carry us such a bonanza of Wall Street-quoted technological and industrial applications? As an example, no one doubts the reality of the switchings of the CPUs and of the logic gates in the Programmable Logic Controllers (PLCs) which let nearly all worldwide machinery run in this moment. No one doubts that what those CPUs, logic gates and I/Os are performing are computations over a programmed algorithm.  



“To reduce the observed value to a single point by means of a measurement does not mean that the others do not exist any more, or that they did not exist at all before the measurement”

A second attempt to sidestep the uncomfortable implications of the reality of the quantum field tried to limit the domain of application of the non-commutative geometry rules to the atomic and subatomic scales. But other examples of non-commutativity were already known also for objects at the macroscale of spacetime dimensions we directly perceive without any instrumental aid, for example the geometric rotations of a solid, massive 3-dimensional book, in the figure below. Non-commutativity thus represents rules of general application. The key point is that the Hilbert space represents a system before the measurement action, when all of the states of the system exist in superposition.      




  The rotations in 3D of a solid are non-commutative (   abridged by Jeff Atwood/2011) 
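A minimal numerical sketch of the macroscopic example in the figure (90° rotations about the x and y axes; the angles and the test point are illustrative): composing the same two rotations in different orders sends the same starting point of the solid to different places.

    import numpy as np

    def rot_x(angle):
        """Rotation matrix about the x axis."""
        c, s = np.cos(angle), np.sin(angle)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(angle):
        """Rotation matrix about the y axis."""
        c, s = np.cos(angle), np.sin(angle)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    a = np.deg2rad(90)
    v = np.array([0.0, 0.0, 1.0])        # a point of the rotated solid

    print(rot_x(a) @ rot_y(a) @ v)       # rotate about y first, then x  -> ~[1, 0, 0]
    print(rot_y(a) @ rot_x(a) @ v)       # rotate about x first, then y  -> ~[0, -1, 0]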






The act of measuring an eigenvalue of an eigenvector related to the eigenstate of a physical property (e.g., in a photoelectric switch, the obscuration caused by the passage of a container) reduces the observed value to a single point. To reduce the observed value to a single point by means of a measurement does not mean that the others do not exist any more, or that they did not exist at all before the measurement. Non-commutativity operates at all scales. As examples, we can consider three known vector fields, referred to: 

 Noncommutativity operates at all scales. Lie brackets st [X, Y] are a geometric description of the noncommutativity of two vectorial flows (abridged by Bertlmann/1996)

  • magnetic field lines, 
  • gravitational field lines, 
  • the movement of air on Earth, associating with every point on the surface of the Earth a vector with the wind speed and direction for that point.

The last two examples well represent the category of vector fields on manifolds. All vector fields on a manifold generate flows, and the change of a vector field along a flow finds its description in the Lie derivative. At the right side are depicted two flows, σt(x) generated by the vector field X and τs(x) generated by Y. Moving initially along σt and later along τs, and vice versa, the difference of the coordinates, indicated in red in the figure at the side, is the Lie bracket st [X, Y]. The Lie bracket is a geometric description of the non-commutativity of the two vectorial flows σt(x) and τs(x).
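A minimal symbolic sketch of the Lie bracket just described (the two planar vector fields X and Y are arbitrary illustrative choices, not taken from the figure): computing [X, Y]^i = X^j ∂_j Y^i − Y^j ∂_j X^i component by component shows that these two flows do not commute.

    import sympy as sp

    x, y = sp.symbols("x y")
    coords = [x, y]

    # Two illustrative vector fields on the plane, given by their components
    X = [y, 0]     # X = y * d/dx
    Y = [0, x]     # Y = x * d/dy

    def lie_bracket(X, Y, coords):
        """[X, Y]^i = X^j * dY^i/dx^j - Y^j * dX^i/dx^j."""
        return [sp.simplify(sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
                                for j in range(len(coords))))
                for i in range(len(coords))]

    print(lie_bracket(X, Y, coords))    # [-x, y] != [0, 0]: the two flows do not commute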



Hilbert Space and Hypercomplex Numbers







In the course of the last century it was discovered that the Hilbert space H can be defined over different number systems. These include the: 

  • Real numbers ℝ, numbers on a line, a 1D number system;
  • Complex numbers ℂ, a 2D number system; 
  • Quaternions ℍ, a 4D number system;

and, to some extent, also the Octonions, in the figure below included with the quaternions among the hypercomplex numbers. The hypercomplex numbers are all those whose dimension is ≥ 4. The octonions can be thought of as octets (or 8-tuples) of real numbers building up an 8D number system.  


 The set of all numbers includes the subsets of the >3D hypercomplex numbers, visibly dropping the commutative property. It also includes the 2D complex numbers ℂ, a set of special interest for all dynamical systems, including the industrial machinery object of Root Cause Analysis (abridged by stratocaster47/2014)








An Octonion is a real linear combination of the unit octonions:

                             { e0, e1, e2, e3, e4, e5, e6, e7 }


where e0 is the scalar, or Real element, which can be identified with the Real number 1.  That is, every Octonion x can be written in the form:


x  =  x0 e0 + x1 e1 + x2 e2 + x3 e3 + x4 e4 + x5 e5 + x6 e6 + x7 e7





with real coefficients { xi }. The reason why we pointed out the relation of the Hilbert space H to octonions is that today's best mathematical instrument to probe everything is String Theory, and the 10-dimensional version of String Theory, namely the version free of anomalies, is the one using the octonions rather than different number systems. 
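A minimal sketch of how a hypercomplex multiplication works, using the quaternions ℍ (the 4D case; an octonion version follows the same pattern with eight coefficients but is longer, and neither appears as code in the original text). The Hamilton product below encodes i² = j² = k² = −1 and ij = k, and it is visibly non-commutative:

    def quat_mul(p, q):
        """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return (pw*qw - px*qx - py*qy - pz*qz,
                pw*qx + px*qw + py*qz - pz*qy,
                pw*qy - px*qz + py*qw + pz*qx,
                pw*qz + px*qy - py*qx + pz*qw)

    i = (0, 1, 0, 0)
    j = (0, 0, 1, 0)

    print(quat_mul(i, j))    # (0, 0, 0, 1)  = +k
    print(quat_mul(j, i))    # (0, 0, 0, -1) = -k : quaternion multiplication is non-commutative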


The Transition from Classic to Quantum

Classic Physics' State

 Relation between Hilbert space and topological space. Topological spaces are defined by the most basic theory: Set Theory








































In classical physics, the notion of the state of a physical system is intuitive. We focus on certain measurable quantities of interest, for example the position and momentum of a moving body, and subsequently assign mathematical symbols to these quantities (such as position "x" and momentum "p"). Then the state of motion of a body is specified by assigning numerical values to these symbols. In other words, there exists a one-to-one correspondence between the physical properties of the object and their mathematical representation in the theory. To be sure, we may certainly think of some cases in classical physics where this direct correspondence is not established as easily as in the example of Newtonian mechanics used here. As an example, it is rather difficult to relate the formal definition of temperature in the theory of thermodynamics to the underlying molecular processes leading to the physical notion of temperature. However, reference to other physical quantities and phenomena usually allowed one to resolve this identification problem at least at some level.



Quantum Physics' State

The one-to-one correspondence between the physical world and its mathematical representation in the theory came to an end with quantum theory after 1926. Instead of describing the state of a physical system by means of intuitive symbols that correspond directly to the objectively existing physical properties of our experience, in quantum mechanics we have only an abstract quantum state, defined as a vector (or, more generally, as a ray) in a similarly abstract Hilbert vector space. The conceptual leap associated with this abstraction cannot be overestimated. As a matter of fact, the discussions regarding the interpretation of quantum mechanics since the early years of quantum theory are to a large extent due to the question:  

  ...how to relate the abstract quantum state to the physical reality out there ?  

The connection with the familiar physical quantities of our experience is only indirect, through measurements of physical quantities, that is, of the observables which are the object of our measurements and of everyday life, represented by (Hermitian) operators on a Hilbert space. To a certain extent, measurement allows us to revert to a one-to-one correspondence between the mathematical formalism and the “objectively existing physical properties” of the system, that is, to the concept familiar from classical physics.  But, because many observables are mutually incompatible (non-commutativity), a quantum state will in general be a simultaneous eigenstate of only a very small set of operator-observables (a minimal numerical illustration of this incompatibility is given below, after the next list). Accordingly, we may ascribe only a limited number of definite physical properties to a quantum system, and additional measurements will in general alter the state of the system unless, by virtue of luck or prior knowledge, we measure an operator-observable one of whose eigenstates happens to coincide with the quantum state of the system before the measurement.    A consequence is that it is impossible to uniquely determine an unknown quantum state of an individual system by means of measurements performed on that system alone.   This situation is in evident contrast with classical physics:

  • here we can enlarge our knowledge of the physical properties of the system by performing an arbitrary number of measurements of additional physical quantities.  For example, in a series RLC electric circuit, several measurements of the current along the circuit and of the alternating voltages across the resistor, capacitor and inductor, while changing the frequency of the signal delivered by a generator (a numerical sketch follows this list);
  • many independent observers may carry out such measurements and agree on the results, without running any risk of disturbing the state of the system, even though they may initially have been completely ignorant about this state.
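A minimal numerical sketch of that classical frequency sweep, assuming idealized components with arbitrary illustrative values: the series impedance is Z(ω) = R + j(ωL − 1/(ωC)), and the same current flows through all three components.

    import numpy as np

    # Arbitrary, purely illustrative component values and generator amplitude.
    R, L, C = 10.0, 1e-3, 1e-6           # ohm, henry, farad
    V_gen = 1.0                          # volt

    f = np.linspace(1e3, 20e3, 2000)     # swept generator frequency, Hz
    w = 2 * np.pi * f
    Z = R + 1j * (w * L - 1.0 / (w * C)) # series impedance Z(w) = R + j(wL - 1/(wC))
    I = V_gen / Z                        # one current, common to the whole series loop

    V_R, V_L, V_C = I * R, I * 1j * w * L, I / (1j * w * C)   # voltages across R, L, C

    f_res = 1.0 / (2 * np.pi * np.sqrt(L * C))
    print(f"resonance at about {f_res:.0f} Hz")
    print(f"largest current in the sweep: {abs(I).max():.4f} A at {f[abs(I).argmax()]:.0f} Hz")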

The last century of Physics also demonstrated that the idea of a preexistence of the classical states is an illusion: a remnant of the limited knowledge we had in past centuries.
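And here is the minimal numerical illustration of the incompatibility of observables announced above, using the standard single-qubit Pauli operators; the state chosen is just an example.

    import numpy as np

    # Two standard, mutually incompatible observables on a single qubit.
    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

    # Non-commutativity: the commutator is not the zero matrix (it equals -2i * sigma_y).
    print(sigma_x @ sigma_z - sigma_z @ sigma_x)

    # A state with a definite sigma_z value is not an eigenstate of sigma_x:
    psi = np.array([1, 0], dtype=complex)      # eigenstate of sigma_z, eigenvalue +1
    print(sigma_z @ psi)                       # proportional to psi
    print(sigma_x @ psi)                       # not proportional to psi: no definite x value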


Actual States   

Until some decades ago, it was often argued that these states represent only potentialities for the various observed states.  That is to say that, at the same time, the quantum state:

  • represents a complete description of a system, encapsulating all there is to say about its physical state;
  • does not tell us which particular outcome will be obtained in a measurement, but only the probabilities of the various possible outcomes.

This probabilistic character of quantum mechanics, taught as canon until thirty years ago, is an effect of our cultural perspective.    In an experimental situation, the probabilistic aspect is represented by the fact that, if we measure the same physical quantity on a collection of systems all prepared in exactly the same quantum state, we will in general obtain a set of different outcomes.   The purely probabilistic interpretation of the wave function, when closely examined, creates many kinds of paradoxes.   What has been fully understood in past decades, marrying the best experimental techniques to the most powerful theoretical ideas, is that, on the contrary, all possible measurement values, and hence all physical states about which the measurement provides us information, are actual and equally real.
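A minimal sketch of that experimental situation, with a made-up three-outcome state prepared identically on every run: the outcome probabilities are the squared magnitudes of the amplitudes, and repeated measurements scatter accordingly.

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up amplitudes of a quantum state written in the measured basis.
    amplitudes = np.array([0.8, 0.36 + 0.48j, 0.0])
    amplitudes = amplitudes / np.linalg.norm(amplitudes)   # normalize the state

    # Born rule: probability of outcome m is |<m|psi>|^2.
    probabilities = np.abs(amplitudes) ** 2

    # Measure the same quantity on 10,000 systems, all prepared in the same state.
    outcomes = rng.choice(len(amplitudes), size=10_000, p=probabilities)
    counts = np.bincount(outcomes, minlength=len(amplitudes))

    print("Born-rule probabilities:", np.round(probabilities, 3))
    print("observed frequencies:   ", np.round(counts / counts.sum(), 3))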



Schroedinger. Only Waves

  The Austrian physicist Erwin Schroedinger, one of the founders of Quantum Mechanics. He was the first to understand that a multitude of simultaneous instances of the electron bound to the Hydrogen proton was modelled by his newly created Wave Function Ψ



















Reading textbooks written sixty years ago, you will be surprised by how difficult it is to picture the so-called wave-particle duality.   This idea stood among those which delayed many initial efforts to understand Quantum Mechanics.   The following decades demonstrated it was one more side effect of the past cultural perspective.   Yet Erwin Schroedinger, the physicist depicted in the figure at the right, when attempting in the early days of Quantum Mechanics to identify narrow wave packets in real space with actual physical particles, observed two problems:

  • initially localized wave packets spread out very rapidly over large regions of space, a behaviour irreconcilable with the idea of particles, by definition localized in space (a numerical sketch follows this list);
  • the wavefunction Ψ describing the quantum state of N > 1 particles moving in 3-dimensional space is defined on a 3N-dimensional configuration space, no longer the familiar 3-dimensional Euclidean space of our experience.
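A minimal numerical sketch of the first problem, using the standard textbook result for a free Gaussian wave packet, whose width grows as σ(t) = σ₀ √(1 + (ħt / 2mσ₀²)²); the electron mass is the physical one, while the initial width is an arbitrary illustrative value.

    import numpy as np

    hbar = 1.054571817e-34      # J*s
    m = 9.1093837015e-31        # electron mass, kg
    sigma0 = 1e-10              # initial wave-packet width, ~1 angstrom (illustrative)

    def width(t):
        """Width of a free Gaussian wave packet after a time t (textbook result)."""
        return sigma0 * np.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2)) ** 2)

    for t in (0.0, 1e-16, 1e-15, 1e-12):
        print(f"t = {t:7.0e} s   sigma = {width(t):.2e} m")
    # Within a picosecond the packet is already thousands of times wider than at t = 0,
    # hard to reconcile with the picture of a particle localized in space.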

In 1952, during a conference in Dublin, Ireland, he first openly introduced the idea that what his own wave formalism and Werner Heisenberg's matrix quantum formalism really suggest is the simultaneous superposition of all the possible instances of the same physical object.    A physical object which, from his point of view, reduces to a wave, with particles as mere idealisations for wave packets of relatively limited extension in space-time.   We explained above that the wave function Ψ, originally created only to define the probability of localizing the electron of an atom, from the start spread the answer over the infinite range of positions x.    Erwin Schroedinger, the Nobel laureate who in 1926 fathered that same wave-function formalism on the basis of prior ideas of Louis de Broglie, saw that the wave function Ψ was describing a multitude of simultaneously coexisting electron positions in the space ℂ of the complex numbers:

                              { a + ib ;    a, b ∈ ℝ }

rather than a single position in the space ℝ whose elements are real numbers.   Schroedinger chose to treat the multitude of solutions in the particular case he studied [a multitude of electrons in a Hydrogen atom] the way we treat square roots of minus one in ℝ.   Another thirty-one years were needed before someone else (Hugh Everett III) recognised their meaning, yield and physical relevance: recognising that the wave function Ψ represents a physical field, and that each solution may differ from the others by a minimum amount of Information, an amount taken to be 1 bit.   We recall now the detailed example we considered elsewhere of a series RLC electric circuit, based on purely passive components.   A circuit which would have fully satisfied Maxwell's equations [3] if composed of ideal resistive, capacitive and inductive components in an ideal Environment where:

  • temperature is kept constant with infinite precision;
  • humidity is kept constant with infinite precision;
  • electromagnetic fields do not exist;
  • no gravitational fields exist, i.e. an RLC resonant circuit in a flat Euclidean space-time that has never existed;
  • no induction from the electromagnetic field of the inductor into the resistor and capacitor, etc.

In other terms: ideal RLC components forming part of an ideal Environment. Conditions like these can only be encountered outside the Universe, because they imply no electromagnetic nor gravitational fields.  We remind the Reader that excellent (though not ideal) ways exist to shield an ambient from electromagnetic fields, but that no way exists today to “shield” it from gravitation. In practice, this would be a system causally disconnected from (and therefore uncorrelated with) the Environment.


“One of the applications of the quantum computers is the Binary Classification. 

All Electronic Inspectors, whatever their size, complexity or task, are Binary Classifiers”


The Past  





We prefer to start this section with a visit to one of Google, Inc.'s Data Centres.  Why this visit?  Because it shows the Past. And, specifically, it shows the World's greatest network of Binary Classifiers approaching the dawn of their design. The aligned rows of thousands of Servers consume impressive amounts of active energy, which is why Google™ favours naturally cold ambients, such as the Icelandic North Sea or the Swedish Baltic Sea, for its Data Centres, so as to cool the CPUs more cheaply.  Parallelizing Support Vector Machines on Distributed Computers is one of the main characteristics of these Data Centres.

















In some way, the alignment of the Servers and of the many CPUs embedded in them suggests one of those mirror-like effects in which it is possible to see many instances of a single object.  It is important to start to understand that, living in a multiversal environment, all states are elements of a single Superposition. A single CPU, in reality, has a multitude of counterparts, processing data in parallel, whose difference is limited to 1 bit.




Binary Classification: Present and Future 

“Landauer was telling everyone that Computation Is Physics and that you can't understand the limits of computation without saying what the physical implementation is. He was a lone voice in the wilderness. No one really understood what he was talking about—and certainly not why”

David Deutsch to Julian Brown, circa 1999





The video below synthesises in a few words a new paradigm.  Binary Classification, which is also the task of all of the Electronic Inspectors of the World, is an activity intrinsically suited to massive parallel computing. Massive parallel computing is expensive and impractical if it arises from the ideas underlying all of today's Data Centres, like the Google, Inc. one above.  A single Quantum Computer outperforms an entire Data Centre, at a fraction of its cost.



Rainier processors and Josephson-effect junctions operated within 16 concentric levels of electromagnetic shielding, at temperatures close to −273 ºC. Special conditions to recreate small and brief-duration Hilbert spaces in our hot, decohering environment (D-Wave Systems, Inc./2014)

Computing in the Hilbert space

The following video, originating from the world leader in quantum computing, D-Wave Systems, Inc., is meant as an introduction to the commercial applications of the superposed structure named Multiverse.  One of the applications of quantum computers is Binary Classification.  Electronic Inspectors in the Food and Beverage Bottling Lines are Binary Classifiers. [In another page of this site we'll explain why the ideal Control equipment is a device calculating in the binary way as usual but in the Hilbert space.]  From Decoherence we learnt that the general mechanisms and phenomena arising from the interaction of a macroscopic quantum system with its Environment depend strictly on the strength of the coupling between the considered degree of freedom and the rest of the world.  For this reason D-Wave's “Vesuvius” Central Processing Units are a system of 512 quantum bits (qubits): 512 superpositions of Ψ wavefunctions, preserved for as long as possible in their pure state in the Hilbert space:

  • operating at temperatures of 0.02 K (−272.98 ºC), extremely close to absolute zero;
  • protected from induction by external electromagnetic fields by fifteen levels of Faraday cages.



512 qubits

If we consider a binary register of 512 qubits, we could obtain a superposition simultaneously representing:

 

      2⁵¹²  ≈  10¹⁵⁴





numbers. A mind-boggling amount of other branches of the Multiverse where the computer exists, although in a different superposition; each branch processes the same original qubit, with differences from one branch to another in the processed variable reduced to 1 bit.  In contrast, a Classical register (like the one which lets you read these notes) of 512 bits can represent any of the integers between 0 and ~10¹⁵⁴, but only one at a time.
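A quick check of that count with Python's arbitrary-precision integers; nothing beyond elementary arithmetic is involved.

    # Number of distinct bit patterns of a 512-bit register
    # (a classical register holds exactly one of them at a time).
    n_states = 2 ** 512
    print(n_states)             # the exact integer
    print(len(str(n_states)))   # 155 digits, i.e. of the order of 10**154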


 To pipeline a Hilbert space of reduced volume and duration here on the Earth's surface requires a technology completely different from that of today's Supercomputers.  This technology is the core of the new Quantum Computers.  Nearly science fiction, as seen from the point of view of today's Industrial Automation.  One of the key points lies in the fact that the superpositions of the wave functions Ψ are preserved for as long as possible in the Hilbert space when:

1) operating at temperatures of 0.02 K (−272.98 ºC), extremely close to absolute zero;

2) protected from induction by external electromagnetic fields through the shielding provided by fifteen levels of Faraday cages.  The image shows the extreme care taken with respect to both points: a) the entire circuit lies in a fridge; b) EMI-induced parasitic currents flowing in the shields of the cables carrying Signals are immediately discharged to Ground, limiting to < 250 mm their extension away from Ground discharge points, which are massive and of extremely low impedance. In other terms, EMI-induced parasitic currents too are conceived within the framework of the wider circuit design (D-Wave Systems/2014)


Moreover they:







 



  • adopt Rainier processors and Josephson-effect junctions, rather than Silicon-based processors;
  • adopt Josephson junctions, the most basic mesoscopic superconducting quantum devices;
  • handle qubits which can slowly be tuned (annealed) from their superposition state (where they are 0 and 1 at the same time) into a classical state where they are either 0 or 1.  When this is done in the presence of the programmed memory elements on the processor, the 0 and 1 states that the qubits end up settling into give the answer to a user-defined problem (a minimal sketch of this idea follows this list).
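A minimal sketch of that last point, under the common description of such machines as minimizers of an Ising energy E(s) = Σᵢ hᵢ sᵢ + Σᵢⱼ Jᵢⱼ sᵢ sⱼ over spins sᵢ = ±1, programmed through the biases h and the couplers J. Here the coefficients are made up, and a tiny instance is solved by brute force rather than by physical annealing.

    from itertools import product

    # Made-up biases h and couplings J of a tiny, 3-spin Ising problem.
    h = {0: 0.5, 1: -0.3, 2: 0.1}
    J = {(0, 1): -1.0, (1, 2): 0.4, (0, 2): 0.2}

    def energy(spins):
        """Ising energy E(s) = sum_i h_i s_i + sum_ij J_ij s_i s_j, spins in {-1, +1}."""
        e = sum(h[i] * s for i, s in enumerate(spins))
        e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
        return e

    # An annealer ideally settles into the lowest-energy configuration;
    # with 3 spins we can simply enumerate all 2**3 candidates.
    best = min(product((-1, +1), repeat=3), key=energy)
    bits = [(s + 1) // 2 for s in best]          # map spin -1/+1 to bit 0/1
    print("ground state:", best, "-> bits", bits, " energy", round(energy(best), 3))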


The Ongoing Computation

We close this introductory page on the Hilbert space with a D-Wave™ video, preceded by a phrase pronounced in 1982 by the computer scientist Tommaso Toffoli. We are convinced that many of the industrial, financial, medical, agricultural, chemical, commercial or military applications of the 22nd century will be driven, as something obvious, by the rationale of this phrase pronounced at the end of the 20th century:

“Nature has been continually computing the 'next state' of the universe for billions of years. 

All we have to do is 'hitch a ride' on this huge ongoing computation and try to discover which parts of it happen to go near to where we want”

Tommaso Toffoli, “Physics and Computation”, International Journal of Theoretical Physics 21 (1982)

Imagine a gigantic tree-like structure, of wavelike nature and mathematically expressed by a wave function Ψ: a tree-like structure where each branch extends itself in all directions, an extension reminiscent of what is quantitatively described by the Huygens-Fresnel-Kirchhoff diffraction integral, where the wavefront of a propagating wave of light at any instant is shaped by the envelope of the spherical wavelets emanating from every point of the wavefront at the prior instant. What from a differential geometry and topology perspective is a foliation evolves in all directions, thus including also those directions which, from our own viewpoint (economic, technological, medical, military, scientific, etc.), are close to, or directly within, our area of interest.  An example from Industrial Design: a smart industrial design differentiates itself from a rough design because it is closer to a perceived tendency of Nature.  It results unavoidably cheaper, less energy consuming, hyper-productive and longer lasting.  The value-added intelligence differentiating rough and smart designs is, in reality, an implicit synonym for the Designer's own perception: the perception of the “roadmap” joining his interests to Nature's next state.  In the end, could we forget that Physics, by its Greek etymology “Physis”, is …Nature?
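A minimal numerical sketch of the Huygens-Fresnel picture just mentioned: the field on a screen is built as the superposition of spherical wavelets e^(ikr)/r emitted by points across a single slit. Wavelength, slit width and distances are arbitrary illustrative values, and obliquity factors are neglected.

    import numpy as np

    wavelength = 500e-9                   # m (illustrative)
    k = 2 * np.pi / wavelength
    slit_width = 50e-6                    # m
    screen_distance = 0.5                 # m

    sources = np.linspace(-slit_width / 2, slit_width / 2, 2000)   # secondary sources
    screen = np.linspace(-0.02, 0.02, 1001)                        # observation points

    # Superpose a spherical wavelet exp(i k r) / r from every source at every screen point.
    r = np.sqrt(screen_distance**2 + (screen[:, None] - sources[None, :])**2)
    intensity = np.abs((np.exp(1j * k * r) / r).sum(axis=1)) ** 2
    intensity /= intensity.max()

    # The familiar single-slit diffraction pattern emerges from the wavelet sum:
    print("normalized intensity at the centre:", round(intensity[len(screen) // 2], 3))
    print("first minima expected near +/-", wavelength * screen_distance / slit_width, "m")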


Qubits and couplers in the D-Wave 2 device. The D-Wave 2 Vesuvius chip includes an 8 × 8 two-dimensional square lattice of 8-qubit unit cells, with open boundary conditions. In the figure at the side, the qubits are denoted by circles, connected by programmable inductive couplers shown as lines between the qubits.  In this particular example, only 503 of the device's 512 qubits, marked in green, together with the couplers connecting them, are fully functional (Albash T., Vinci W., et al./2015)






                                                                                                                                                                                                                                                                                                                                                                                                                                                         