Causal Connection of the Events

In the following we'll introduce the relevance of the history of Events: the long, time-ordered chaining of Events in the Past considered causes of a present Event. We'll consider in this section three cases at different levels of complexity. The Events shall be marked using various existing and equivalent notations, like P, Q, p, q, a, b, c, x, y or ei (i = 1, 2, 3, …, n).   All following diagrams are spatio-temporal, adopting the conventional notation of Physics where Time is oriented along the vertical ordinate axis: 

  1. Past Events, which may be causes of a Present Event at P; 
  2. Future Events, which may be caused by a Present Event at P; 
  3. Events happening Elsewhere, causally disconnected from P.


          Events’ causal connection and disconnection






The diagrams follow Minkowski’s formulation. Here, the parameter t represents the Time measured along the world-line crossing the Event P at the vertex of the bicone.  We’ll treat first the intuitive and unrealistic case where the Events lie along two parallel Time axes, in a Euclidean rectangular 3D spatial system, of which we’ll represent just two axes.  Also, we are assuming the clocks at rest at the points P and Q tick with the same rhythm.  P and Q lie on two distinct equitemp surfaces, perpendicularly crossed by the Time axis.  A single straight line joins the two equitemp surfaces.  For instance, the segment t1 - t0 is the Time difference between the Events P and Q.  In the equitemp surfaces the Time is labelled ti, synchronised by any proper procedure for all Events in the surface, thus having a univocally defined meaning for all of them.  
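As a minimal sketch of the three cases listed above (a hypothetical helper, assuming a flat Minkowski geometry and natural units where c = 1; not part of any specific library):

    from dataclasses import dataclass

    C = 1.0  # speed of light in natural units (assumption: c = 1)

    @dataclass
    class Event:
        t: float   # time coordinate
        x: float
        y: float
        z: float

    def causal_relation(p: Event, q: Event) -> str:
        """Classify Q with respect to the light cone of P (flat Minkowski space)."""
        dt = q.t - p.t
        dr2 = (q.x - p.x) ** 2 + (q.y - p.y) ** 2 + (q.z - p.z) ** 2
        interval2 = (C * dt) ** 2 - dr2        # signature (+, -, -, -)
        if interval2 < 0:
            return "Elsewhere: space-like separated, causally disconnected from P"
        if dt > 0:
            return "inside or on the Future light cone of P: P may cause Q"
        if dt < 0:
            return "inside or on the Past light cone of P: Q may be a cause of P"
        return "coincident Events"

    # Example: Q happens 2 time units after P, 1 spatial unit away -> P may cause Q
    print(causal_relation(Event(0, 0, 0, 0), Event(2, 1, 0, 0)))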


All that happened in the Past of the Event Q is causally related with Q since the origin of Time.  The entire superposition of Events contained in the Past cone of P causes P.   Some of the effects of P shall enter Q's Future, when and where they'll be concauses of a share of Q's observed Future Events.









We’ll be representing just one of the two Time axes for the sake of clarity of the graphic representation. The figure at the right side depicts two bicones representing an Event P at time t0 and an Event Q at time t1, where t0 < t1.  The 2+1-dimensional intersection volume (a 3D space-time) of the bicones in P's and Q's respective: 

  • Past, is a subset of all the Events which caused both P's and Q's effects; 
  • Future, is a subset of all the Events determined by P which will be concauses (shared causes) of what shall be observed in the Future of the Event Q.

A closer exam of the figure above at the right side makes it clear that this way to represent facts and their relations, based on Minkowski geometry in the Special Relativity perspective, is not dynamical.     

The entire system may be imagined as a multitude of frozen frames, created by a holographic camera capable of encoding 4D images in a 3D medium.  If this were the entire story, the complete sequence of causes and effects would be predetermined already at the start of Time, when the single clock, reproduced along infinite worldlines, started to tick: a cosmological concept for the origin of Time, in open contradiction with some basic ideas of Quantum Field Theory.  Furthermore, we all think that a choice here and now may later change the outcome there. Then, what came before is not the entire story.  The real geometry is not so simple and, most important, there is an initial condition which was given at the origin of Time.  The cones’ null-surfaces visible in the figure above are crossed by photons.  The geodesic curves of the massive particles, on the contrary, pass exclusively inside those cones, like in the figure at the left side. Also meaning that massive particles cannot propagate at the speed of the photons.  


 Massive particles' geodesics enveloped by the null-surface of the Past light-cone referred to the Event P (abridged by D. Hilbert/1927)



Cauchy Surfaces and Root Causes



















The key point we’ll be trying to illuminate in this section is that, over a century ago, a few famous mathematicians, like the great Frenchman Henri Poincaré to name just one, discovered the existence of a layer of knowledge with its own rules, interposed between pure Logic and Geometry: a branch of Mathematics named Topology.  The discovery is not trivial, because all technological applications (i.e., machinery, equipment, processes) are applications of Physics.  Other applications of technological interest, like those pertaining to chemical and biochemical processes, are themselves based on Physics.  Then, it is the entire technological world which is related to basic Logic by means of a topological interface. Could we root cause analyse what happens in a machine, equipment or productive process abiding purely by Logic?  Surely we could.  We’ll explore in the following examples: 

  • where this happens, 
  • what conclusions the analysis is carried toward, 
  • why technological applications behave in another way, their own way.  

Root Cause Analysis’ methods may be applied indifferently to malfunctions and inefficiencies affecting equipment and machinery already commissioned or not yet commissioned.  In the latter case, the initial information resides in the Project documentation and specifications.  The first case is however the most common and, root cause analysing, we are investigating what let a system diverge from its precedent history.  A history where it was performing within the Project specifications, summarised in an array of numbers, symbols and rules named Technological Guarantees.   


Example

Thermal Process Malfunction

We ask the Readers to imagine a Root Cause Analysis unknowingly accomplished in violation of one or more basic physical laws.  As an example, imagine an industrial boiler with two separate circuits (hot and cold fluids) which has been correctly heating water to 90 °C and keeping it at that temperature.  A process needing ~3 hours to reach that regime, starting from a water temperature of 20 °C.  Suddenly, due to an unknown reason or reasons, the boiler starts to need ~12 hours to reach the same temperature.  Clearly, to identify what, who, when and to what extent could have made the change causing the present malfunction, it is necessary to analyse the entire thermal process and its conditions before and after the problem started to exist.  A multitude of conditions, functions and operations, differentially related and some of them apparently unrelated.  At first sight, inputs and outputs of the Automation controlling system seem present and correctly exchanged.  Then, someone else remembers having observed, months before, some cuts in some Teflon O-rings and residuals in the piping.  This, added to the fact that O-rings’ lifetime is not eternal, lets the root cause Analyst focus on the boiler’s Heat Exchanger.  Focusing on the eventuality that a foreign massive body (i.e., a plastic foil, some broken O-rings, or something else) entered the hot fluid circuit, partially blocking the hot fluid inflow to the thermal exchanger.       
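As a rough, hedged sketch of why a partial blockage of the hot circuit is a plausible candidate (a first-order lumped heating model; the 95 °C hot-fluid temperature and the constant-UA hypothesis are illustrative assumptions, not data from the case):

    import math

    # First-order lumped model: m*c*dT/dt = U*A*(T_hot - T), giving a heat-up time
    # t = tau * ln((T_hot - T0)/(T_hot - T_target)) with tau = m*c/(U*A).
    T0, T_target, T_hot = 20.0, 90.0, 95.0   # deg C (T_hot is an assumed value)
    ln_factor = math.log((T_hot - T0) / (T_hot - T_target))

    tau_before = 3.0 / ln_factor    # hours, healthy boiler (~3 h heat-up)
    tau_after = 12.0 / ln_factor    # hours, faulty boiler (~12 h heat-up)

    # tau is inversely proportional to U*A, so this ratio estimates how much
    # effective heat-transfer capability was lost (e.g., by a partial clogging).
    print(f"U*A reduced roughly by a factor {tau_after / tau_before:.1f}")   # ~4.0

Under these assumptions, the boiler behaves as if roughly three quarters of its effective heat-transfer capability were lost, which is compatible with a partially obstructed hot fluid inflow but does not, by itself, identify the root cause.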



 Cauchy surfaces are crossed only once by the objects’ world-lines, thus preventing the otherwise unavoidable causality violations which could derive from worldlines developing along closed cycles.  In the case of the physical space they are spatial-only, with dimensionality ≥ 3.  In green and red, a causal and an acausal history





















“...root cause analysing what interrupted the normal execution of a physical process, we are always tracing back in time and all around in space who and/or what, and to what extent, changed its precedent state”



After a long, Production-consuming stop to empty the hot fluid circuit and thoroughly inspect all pipings, this eventuality is excluded as Root Cause.  Excluded because found inconsistent with the result of the inspection: no foreign objects.  A thorough control of the Maintenance Department's logs shows that all O-rings were renewed a short time (with respect to their standard lifetime specification) before the Event causing the industrial process downtime.  Meaning that the additional Production time lost during the useless piping inspection is a consequence of an error.  In what way does the eventuality that a foreign object clogged the piping relate to the facts written in the logbook, before losing additional time emptying the entire circuit to inspect the piping?   In brief, the sequential observations about the: 

  1. broken O-rings
  2. change of all O-rings with spares
  3. start of the problem

are three facts related to their own proper times.  Jointly they form a subset of a history.  The history which could have been a reasonable Root Cause if only point 1. existed, reveals itself an own goal once it is recalled that also point 2. existed before the point of time 3. when the problem became manifest.  The subspace of the eventual histories is composed of the combinations:

         (1., 2., 3.),   (1., 3., 2.),   (2., 1., 3.),   (2., 3., 1.),   (3., 1., 2.),   (3., 2., 1.)

As an example, the case (1., 3., 2.) represents a history which could be realized by the sequence of observations:

  1. broken O-rings
  2. start of the problem
  3. change of all O-rings with spares

But what about the sequences (2., 1., 3.) or (3., 2., 1.), representing unrealistic histories of observations (see the sketch after these lists)?

  1. change of all O-rings with spares
  2. broken O-rings
  3. start of the problem


  1. start of the problem
  2. change of all O-rings with spares
  3. broken O-rings
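
A minimal sketch of the reasoning above (the logbook timestamps are hypothetical placeholders; only the orderings consistent with the recorded proper times are realizable histories):

    from itertools import permutations

    # Hypothetical proper times (logbook timestamps, in days) of the three facts.
    events = {
        "broken O-rings observed": 0,     # fact 1
        "all O-rings replaced": 30,       # fact 2
        "problem starts": 210,            # fact 3
    }

    # Enumerate the 3! = 6 conceivable orderings; keep only those whose sequence
    # respects the recorded times, i.e. the realizable (causally consistent) histories.
    for ordering in permutations(events):
        times = [events[name] for name in ordering]
        realizable = all(t1 <= t2 for t1, t2 in zip(times, times[1:]))
        print("realizable " if realizable else "unrealistic", " -> ".join(ordering))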


 Causal chaining involves a multitude of different pathways open to let an Event cause effects elsewhere. But only some of those ways, those respecting a few basic logical and topological rules, participate in creating the realized histories









At our scales of mass-energy, spatial extension and time duration, all processes look Topology-compliant.  Physical processes are mappings from a hypersurface at time t0 to another at time t1, where t0 ≺ t1 (“≺” meaning “precedes”).  What prevents us from observing unrealistic histories is the presence of the Cauchy hypersurfaces.  All Information flowing from a Past Event to a Future Event is present on the Cauchy hypersurface, a surface crossed only once by the objects’ world-lines, thus preventing otherwise unavoidable causality violations: the incoherences deriving from world-lines developing along closed cycles.  Cauchy hypersurfaces can be idealised as the most basic directional filters, acting at all scales of dimension, time and mass-energy.  They are spatial-only, with dimensionality ≥ 3.  Many of the tentative root causes and interpretations failing to solve a Problem have hidden inside one or more erroneous assumptions or data.  A deeper exam should discover their flaws, because they imply causality violations.  
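A minimal sketch of the "directional filter" idea (the sampled worldlines are illustrative): a causal worldline crosses a constant-time slice at most once, which for a discretised worldline amounts to its time coordinate never turning back.

    def crossings(worldline, t_slice):
        """Count how many times a discretised worldline, given as (t, x) samples
        ordered along its own parameter, crosses the slice t = t_slice."""
        count = 0
        for (t1, _), (t2, _) in zip(worldline, worldline[1:]):
            if (t1 - t_slice) * (t2 - t_slice) < 0:   # the segment straddles the slice
                count += 1
        return count

    causal = [(0, 0), (1, 2), (2, 3), (3, 3)]    # time always increasing
    acausal = [(0, 0), (2, 1), (1, 2), (3, 0)]   # time turns back: a closed-cycle-like history

    print(crossings(causal, 1.5))    # 1 -> admissible history
    print(crossings(acausal, 1.5))   # 3 -> would violate the Cauchy-surface condition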



 A Cauchy hypersurface, marked as a bold curved line passing through the Event Q, translates the flat-space idealization lying behind the equitemp surfaces into the curved-space, general relativistic perspective













What would it mean to develop an RCA based on one or more erroneous assumptions implying causality violations?  A typical flawed logic has been known for two thousand five hundred years: the tautology.  A circular, disguised restatement of the basic assumption, always and falsely proving itself.  Cauchy hypersurfaces prevent matter and radiation from following topologies whose logic would be tautological.  It is easy to conceive this necessity by imagining an object pertaining to our scales of dimensions, times and mass-energy in a peculiar topology like in the figure below.  A 3+1-dimensional object, of which the figure forcedly shows just 2+1 dimensions.  The object proceeding along its inner surface encounters three different ways to cross the handle from one throat to the other, corresponding to different lengths for each path.  This, at first sight, does not seem an anomaly.  In the end, we all think that our free will allows us to choose between different paths when moving from one place to another.  But our choices happen at scales exempt from topological pathologies like this one.  Looking at the details, it is possible to see a closed cycle in the direction of the Time axis.  If a handle like that existed at our scales, we’d see:

 A handle is an immediate candidate for causality violations. Three non-equivalent ways joining two places ( abridged by F. Hellaby, in J. Plebanski, A. Krasinski/2006)












  • distinctly visible contradictory Events;
  • exchanges of the left and right sides of whatever crosses it;
  • creation of energy from…nothing;
  • a Time Machine.

The conclusion is that, root cause analysing what interrupted the normal execution of a physical process, whatever its complexity, we are always tracing back in time and all around in space who and/or what, and to what extent, changed its precedent state.  Physical systems falling under our experience are realized systems and not ideal systems.  Their operation is filtered at all scales by basic structures like the Cauchy hypersurfaces, preventing realized contradictions.  As a consequence, when root cause analysing, our line of reasoning cannot avoid being based on reliable, controlled data.  The alternatives are ill-fated analyses and consequent “pet theories” dressed up to look like Root Cause Analysis.  Flawed analyses just capable of changing the pseudo root cause of a real problem from an unknown attribution to an erroneous attribution.  


 Events’ causality at large scale. An Event at the point P along a worldline is causally related to all those in its past light cone.  At its right side, marked with a bold line, 3 different time-ordered positions, themselves Events, of another object.  An object interacting with and statistically dependent on P in its Past, later no longer interacting (i.e., disappeared) but still statistically dependent, finally reappearing in P’s Future when and where interaction and statistical dependence are re-established



Each Observed History is Realized History















The conclusion of what came before is that, once observed, the history containing a Problem becomes realized history.  The history of a physical process where the Information Flow, from the Past to the Future, is filtered by the Cauchy hypersurfaces.  Just an extremely small share of all of the realized histories lies in a permanent way in our own memory, logged by dataloggers, registered by Data Acquisition systems, etc.  We are capable of conceiving scenarios or histories pertaining to the wider state space.  Measurements and experiments show that also those multitudes of unrealistic histories are part of the state space.  But the stability of the majority of these configurations is minimal.  That’s why they are observed and registered only by instruments extending our biological detection capabilities.  These are not really modern ideas, i.e., ideas of Quantum Mechanics or of its actual grandchild Membrane Theory.  As a matter of fact, Mechanical and Chemical Engineers reading these notes remember that from Liouville's and Poincaré's recurrence theorems, inherent to Statistical Mechanics and Thermodynamics, a similar situation is derived.  Here, a system of two chambers is conceived, kept separated by a partition (i.e., a gate or door).  Initially, a gas is present just in one of them and the other is completely empty.   Then the gate is opened and we all know that the gas shall reach a temporary equilibrium state, where the same number of particles shall be divided between the chambers.  Liouville's and Poincaré's theorems state that, waiting enough, we’ll be spectators of a paradoxical state where ...all of the particles have returned to the original chamber!   And if we continue to wait, we’ll re-encounter all particles again distributed between the chambers.  Just one of the many paradoxical configurations and histories actually part of those existing in the state space.
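A minimal sketch of the two-chamber picture (an Ehrenfest-urn style toy model with a deliberately tiny number of particles, so that the recurrence becomes observable in a short run; all figures are illustrative):

    import random

    random.seed(1)
    N = 10                      # deliberately tiny number of particles
    left = N                    # all particles start in the left chamber
    steps_to_recur = None

    for step in range(1, 2_000_000):
        # Pick a particle at random and move it to the other chamber.
        if random.random() < left / N:
            left -= 1
        else:
            left += 1
        if step > 100 and left == N:     # everything back in the original chamber
            steps_to_recur = step
            break

    print("near-equilibrium share after mixing ~ N/2 =", N // 2)
    print("first full recurrence observed at step:", steps_to_recur)

With a realistic number of particles (a mole, ~6·10^23) the same recurrence is still guaranteed by the theorem, but the waiting time becomes astronomically long, which is why we never observe it.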



Introducing Alexandrov Intervals

The bicone here at the left side represents a second way to display the same 2+1-dimensional section of the 4D Minkowski geometry.  The plane surface is referred to Time t = 0; all the electromagnetic and/or gravitational signals coming out of the Event e1 pass through a circular, white-coloured surface S after a certain time. All Events lying in S are causally correlated with e1. An infinitely sensitive Detector or an Observer placed at the Event e0, simultaneous with the events in the surface S, however extremely close to them, has no idea of their existence.   

 An infinitely sensitive Detector or an Observer placed at the Event e0, simultaneous with the events in the surface S, however spatially extremely close to them, has no idea of their existence. Meaning that at the Time slice t = 0, the events e1 and e0 are causally disconnected or, space-like separated (abridged by J. A. Winnie/1977)



Meaning that at the Time slice (or leaf, or sheet) t = 0, the events e1 and e0 are causally disconnected or, space-like separated. On the opposite, the Event e2, along the same worldline and correlated with e0 in its Past, even if simultaneous with the Event e3, is causally related to e3 because it lies in the Future light cone of e1 no less than e3 does.   From the figure above it is finally possible to see that the Event e3 is correlated to all (in the Classic view, infinitely many) events in the circle S.  Because we are representing a 2+1-dimensional section of a 4D geometry impossible to depict, Readers understand that the nature of the circle S is in reality that of a 3D sphere.  
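A minimal sketch of the idea behind an Alexandrov interval, the set of Events lying both in the Future light cone of one Event and in the Past light cone of another (natural units with c = 1; the coordinates below are illustrative):

    def interval2(p, q):
        """Squared Minkowski interval between events p, q = (t, x, y), with c = 1."""
        dt, dx, dy = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        return dt**2 - dx**2 - dy**2

    def chronological_future(p, q):
        """True when q lies strictly inside the Future light cone of p."""
        return q[0] > p[0] and interval2(p, q) > 0

    def in_alexandrov_interval(e, e1, e3):
        """True when e lies in the Alexandrov interval I(e1, e3) = I+(e1) ∩ I-(e3)."""
        return chronological_future(e1, e) and chronological_future(e, e3)

    e1, e3 = (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)
    e2 = (2.0, 0.5, 0.0)          # inside both cones -> belongs to the interval
    e0 = (0.0, 3.0, 0.0)          # simultaneous with e1 but space-like separated

    print(in_alexandrov_interval(e2, e1, e3))   # True
    print(in_alexandrov_interval(e0, e1, e3))   # False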


In a modern quantum-mechanical perspective, a Detector placed at the Event e0, simultaneous with the events in the surface S and extremely close to them, is capable of detecting them as Noise pulses. Pulses stochastically occurring and of extremely brief duration, originated by quantum effects like tunnelling.








A sphere whose centre is the Event along the worldline joining e1 to e3, crossed by the equitemp surface xy where the Event e0 lies. The third case is represented by the figure below, showing a situation closer to the reality lived by Detectors, Machinery and Observers.  A complex sequence of bicones illustrating why Relativity considers all the spatial points around a single Event related to the Event.  Here, on the basis of the Minkowski geometry of 1907, an entire spatial volume lying out of the light cone should be causally disconnected from the Event a in its Past. In reality, it shall result still related, however slightly. We start observing the time-ordered sequences where:

  1. the Event a lies in the Past of the Events b and c;
  2. the Event a lies in the Past of the Events y0, x, s and y;
  3. S(a, b) is the hypersurface separating the Future of a from the Past of b;
  4. S(a, c) is the hypersurface separating the Future of a from the Past of c.



 A complex sequence of bicones illustrating why General Relativity considers the spatial points all around a single Event related to the Event.  The image shows a special case: here, on the basis of the Minkowski geometry of 1908, an entire spatial volume lying out of the light cone should be causally disconnected from the Event a in its Past.  In reality, it shall result still partially related.  Event a lies in the Past of the Events b and c.  Also, a lies in the Past of the Events y0, x, s and y.  S(a, b) is the hypersurface separating the Future of a from the Past of b.   S(a, c) is the hypersurface separating the Future of a from the Past of c.   Visibly, the Event s, in the Future of the Event x, lies on the external boundary of the Future of the Event a, part of the same hypersurface of constant Time of the Event b. For this reason, the pink-coloured hypersurface centered on the Event s results partitioned.  The inner portion hosts a 3D space-like volume causally related to the Event a much more strictly than the portion out of the external boundary of the hypersurface S(a, c) (abridged by H.-J. Borchers, R. N. Sen/2006)






The Event s, in the Future of the Event x, visibly lies on the external boundary of the Future of the Event a, part of the same hypersurface of constant Time of the Event b.  Due to this reason, the pink-coloured hypersurface centered on the Event s results partitioned.  The inner portion hosts a 3D space-like volume causally related to the Event a much more strictly than the portion lying out of the external boundary of the hypersurface S(a, c).



Phenomena Are Independent Of Transfer In Time



 Emmy Noether. The law of conservation of Energy manifests that phenomena are independent of transfer in Time







In 1915 the German mathematician Emmy Noether demonstrated that: 

  1. it is possible to obtain the law of conservation of energy in its ordinary mathematical form, recognising that phenomena are independent of transfer in time.     
  2. the law of conservation of momentum is due to the fact that a phenomenon is not dependent on the region of space in which it occurs. 
  3. the law of conservation of angular momentum follows from the isotropy of space.  

From these theorems it follows that a relationship between cause and effect under fixed conditions is essentially related to the laws of conservation of energy, momentum and angular momentum.  If one assumes that even one of these conservation laws is violated in some kind of process, this will unavoidably lead to a change in the form of manifestation of the necessity of causal relations.  For instance, to presume the law of conservation of energy is violated would force one to acknowledge that absolutely identical conditions realised at different times would be accompanied by different effects.  Time would then be capable of operating physically on processes, and then time would have to be included in the conditions (causes) of these processes.  The supposition that processes exist for which the law of conservation of momentum does not hold would compel acceptance of the fact that the same conditions realised in different regions of space would generate different effects.  This would mean that space exerts a physical effect on these processes.  Then space would have to be included in the conditions of these processes.  In the same way, violation of the law of conservation of angular momentum would mean that a rotation of certain phenomena through some angle in space could physically affect the behaviour of the phenomena.  If in the actual world there were entities that did not obey the laws of conservation of energy, momentum and angular momentum, then the idea of the necessity of causal relations determining the behaviour of these entities would have to be stated as: the behaviour of an entity is of necessity determined by its interactions with the environment and with space-time.   
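A compact way to restate point 1. of Noether's result above (a standard textbook sketch, assuming a Lagrangian L(q, q̇) with no explicit time dependence and using the Euler-Lagrange equations):

    % Energy function of a time-translation invariant Lagrangian, \partial L/\partial t = 0:
    E = \sum_i \dot q_i \frac{\partial L}{\partial \dot q_i} - L

    % Its total time derivative, once the Euler-Lagrange equations
    % d/dt (\partial L/\partial \dot q_i) = \partial L/\partial q_i are substituted, vanishes:
    \frac{dE}{dt} = \sum_i \dot q_i \left( \frac{d}{dt}\frac{\partial L}{\partial \dot q_i}
                    - \frac{\partial L}{\partial q_i} \right) - \frac{\partial L}{\partial t} = 0

So invariance of the phenomenon under a shift in time is what forces the conserved quantity we call energy; the analogous statements for points 2. and 3. follow from shifts and rotations in space.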


Predictability

Quantum versus Classic Causality 

















This relation to the environment, as part of the experimental configuration, is exactly what started to be hinted at by theory and experiments around ninety years ago. Marking a transition from the Classic to the Quantum (or, modern) point of view, downplaying the relevance of Time.  This may be better perceived by the analysis of the graphics below.  All Classic points of view about causation consider true only the graphic at the left side, corresponding to what a physical object experiences and what can be said about its Past and Future, at two consecutive times t1, t2 where t2 > t1.  Depicted with red colour are two different histories. With reference to the history of the object at the left side, before the time t1 we: 

  • have full historical knowledge of its Past, 
  • may imagine several “possible” Future states.  

With reference to the history of the object below at right side, at the time t1 it undergoes a positional change until the time t2, thus: 

  • excluding a vast amount of (left-way oriented) Events, which were considered possible at its Past time t1,
  • there continue to exist possible Events in the respective futures of the histories (or, branches) that the object’s memory at time t2 considers not to have happened.

The point of view below at the right side is the modern one, where an already-happened measurement: 

  • does not reduce the space of the Future eventualities. Eventualities which continue to remain potential,
  • excludes the vast majority of the Past potential histories. 


   What can be said about Past and Future of a physical object, at two consecutive times t1, t2 with t2 > t1.  Depicted with red colour the same history of an object seen as it evolves in the Classic and in the Quantum view.  At the left side the Classic view, also the layman's view.  Here, before the time t1 we have full records about its Past and may imagine several “possible Future” states.  At the right side, the Quantum view of the object’s history.  At the time t1 the object undergoes a positional change until the time t2, thus excluding a vast amount of left-way oriented Events which were considered “possible Futures” at the object's Past time t1.    But, visibly, at time t2 there continue to exist possible Events also in the respective futures of the histories (or, branches) that the object’s memory at time t2 considers not happened (abridged by G. R. F. Ellis, in eds. A. Ashtekar, V. Petkov/2014)









   Yet our everyday-life open systems fail to follow, in their Future observed states, what is expected on the basis of Present data, even when these are known with ideally infinite precision. Then, the reason cannot originate just from uncertainties in the knowledge of initial conditions (abridged by G. R. F. Ellis, in A. Ashtekar, V. Petkov eds./2014)













In an open system, like all those part of our everyday life, quantum effects introduce a perceptible unpredictability. Perceptible by the fact that Classic Laws fail to predict, on the basis of Past conditions, the Future expected outcomes.  As an example (see figure below at the right side), divergences between the expected values of physical properties calculated on the basis of Present information, and the values observed later in the Future.  A situation mirrored by Statistical Mechanics. Readers agree that the idea is still widespread that, at least in principle, a macroscopic system is on the microscopic level fully and correctly described by deterministic, time-reversible Physical Laws. If such a microscopic description were complete, then it would also include all the physical properties measured in Thermodynamic systems, i.e., those consisting of a mole of particles. As a consequence, the famously irreversible character of the Second Law of Thermodynamics is typically ascribed to our inability to obtain infinitely precise knowledge of the microscopic state of the system, superposed with special initial conditions for the macroscopic, observable quantities of the system.  Namely, the so-called Ignorance Interpretation of Probability Theory.  Our inability to know the microstate of the system, and hence to predict its Future evolution, is aggravated by the fact that no system is fully isolated from the rest of the world.  Meaning that the Future evolution of a system is influenced by its environment. In order to see the deterministic character of a system’s Time evolution, one would need to know the state of its environment, which in turn depends on still another environment, ad infinitum. Since it is impossible to include in our calculations such a wider environment, which might consist of the entire Universe, it is frequently argued that we have no choice but to describe our system using the concept of Probabilities.  The Ignorance Interpretation of Probability Theory, born and developed in the Classic framework of Statistical Mechanics, is unavoidably missing the Quantum Mechanics revolution: the body of experimental and theoretical discoveries made after the Interpretation was conceived.  The Author of these notes openly challenges the basic assumption, contained in several textbooks, that Classical or Quantum Mechanics can provide an accurate microscopic description of a Thermodynamic system. Nowadays’ idea is that Quantum effects intervening in the system after an Event include also the: 

  1. Future interactions of the open system with all other systems. Interactions starting to happen after the Present t0 and before the Future Time t1;
  2. Entanglement, existing since the start of Time between each of the elementary particles in the system and the grand superposition (Multiverse);
  3. Entanglement, existing since the start of Time between each of the elementary particles in the system.

The Future interactions, originating in the so-named “Everywhere” region all around a bicone, are impossible to account for when trying to calculate what eigenvalue an observable property shall acquire at a Future Time.  Entanglement refers to a generalized relation of each element with the grand Superposition, with minimal possibility to acquire reliable data about this last, banally because the element is by definition part of that Superposition.  A problem of wrong perspective, whose solution is impossible.  The couple of viewpoints, Classic and Quantum, shown before encounter a quite similar interpretation when comparing, in the figure below, Classic and Relativistic predictability for the evolution of a closed system.  “Closed” are the systems ideally insulated, not electromagnetically nor gravitationally interacting with others.  


 Two closed physical systems whose evolution is predictable in one case and unpredictable in the other.  Classic and Quantum viewpoints, shown before in the case of closed systems, encounter a similar interpretation in terms of predictability when comparing Classic and Relativistic. Closed are the systems ideally insulated, not interacting with others.  The Classic view implies a predictable path for an object.  The Relativistic view, on the opposite, is unpredictable starting from the first bifurcation. An unpredictability which may be understood in terms of coexistence of the future branches (abridged by F. De Felice, C.J.S. Clarke/1990)

















Three examples: 

  1. quantum bits' (qbits) circuitry operates in an Environment engineered to emulate closed systems, at least with respect to electromagnetic interactions;
  2. A-class fridges emulate a closed system more closely than C-class fridges do;
  3. the Universe, the only system ideally closed with respect to all kinds of “external” influences simply because, by definition, no external environment exists.

The Classic Mechanics’ view, once the initial conditions are known, implies a predictable path for an object.  The modern relativistic view, on the opposite, is unpredictable starting from the first foliation, in the figure above represented as a bifurcation, a near synonym of superposition of linear functions.  In the modern Quantum view, the eigenfunction predicting the state of a physical system after an interaction typically includes a multiplicity of eigenvalues, later observed associated to different weights, finally translated into the layman's term of probabilities.  This implies an intrinsic unpredictability. Unpredictability unavoidable when modeling the system as a superposition of coexisting future branches.  Branches in which the Observer, Detector or measurement equipment shall later encounter itself.  For example, the predictability for the outcomes when tossing two ideal dice is a state vector, whose values are the multiplicity of outcomes visible at the right side.  Tossing two (ideal) fair dice we’ll have established a Sample Space comprising 36 Events.  Here an Event is a combination of the outcomes of the tossing of the couple of dice.  There do not exist “fair dice”, nor does there exist a way to manufacture them in that ideal way, perfectly symmetric, perfectly balanced over all their sides.   
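A minimal sketch of the 36-Event Sample Space and of the different weights acquired by the sums (ideal fair dice assumed, purely for illustration):

    from collections import Counter
    from itertools import product

    # The 36 elementary Events: ordered pairs of faces of two (ideal) dice.
    sample_space = list(product(range(1, 7), repeat=2))
    print(len(sample_space))                    # 36

    # Weight of each possible sum: some sums are realized by many more Events.
    weights = Counter(a + b for a, b in sample_space)
    for total in sorted(weights):
        print(total, weights[total], "/ 36")    # e.g. 2 -> 1/36, 7 -> 6/36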


 Tossing 2 fair dice establishes a state vector composed of the 36 combinations of 2 elements.  If by “outcome” we mean the superposition of these 2 elements’ outcomes, as is typically the case for real dice, then we discover that the weight of certain sums is much higher than that of others








The true number of Events results enormously higher, coherently with the (potentially) infinite dimensionality of the State Space.  In other words, in the case of a State Space of infinite dimension, we can know the state only by means of an infinite number of observations in infinite Sample Spaces, along an infinite Time. This viewpoint has itself been revised after 1990, to take into due account the impressive experimental results (for example, by Alain Aspect's team at Sorbonne University, France, and Akira Tonomura's at Toshiba Research Laboratories, Tokyo, Japan) massively accumulated worldwide along the decade 1980-1990.  When these notes are written, it resembles the one described in the page “Quantum Causal Theory”.  



Examples of Events’ Causal Connections 


Triggering Containers by Laser Light










Events’ causal connection, in its classic pre-2000 version, encounters an important field of application with respect to the exchange of light signals.  This, in the Electronic Inspectors, is commonly associated to the Triggering of typically accelerated containers.  In the following, we’ll outline the true rigorous scenario around a Triggering.  


 To use an incremental Encoder, like this optical model, in the perspective of modern Physics means to coarse-grain a container position.   It is a coarse-graining operation because the Shifting-Register cell typically has an extension, measured along the container's direction of movement, bigger than a container diameter and much bigger than the Encoder's resolution (abridged by U.S. Digital/2015)


This is useful to understand why triggering, an action strictly related to containers' kinematics, results on the opposite always referred to an external speed reference, since decades an Encoder.  This, when it is well known that only direct measurements provide absolute values, those whose relative errors are minimised.  The choice toward external speed references is forced by the complexity of internal (direct) measurements.  To use an Encoder, in the perspective of modern Physics, means to coarse-grain a container position into a Shifting-Register cell.  It is a coarse-graining operation because the Shifting-Register cell typically has an extension, measured along the container's direction of movement, bigger than a container diameter.   Coarse-graining, by definition, is clearly not a synonym of increased precision, just the opposite, and in our case kinematic uncertainty means False Rejects.  Curvature effects described below are, in the Shifting-Register, emulated and widely reinforced by accelerations and decelerations of Conveyors and by the contacts of the container with the lateral guides, causing container sliding.  What would happen if we measured containers' speed in a direct, rigorous way?
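A minimal sketch of the coarse-graining step (the pulses-per-millimetre figure and the 120 mm cell pitch are hypothetical values, only meant to show the loss of kinematic resolution):

    # Hypothetical figures: an encoder resolving 10 pulses/mm of conveyor travel,
    # and a Shifting-Register whose cells span 120 mm (more than a container diameter).
    PULSES_PER_MM = 10.0
    CELL_PITCH_MM = 120.0

    def container_position_mm(encoder_pulses: int) -> float:
        """Fine-grained position estimate derived from the external speed reference."""
        return encoder_pulses / PULSES_PER_MM

    def shifting_register_cell(encoder_pulses: int) -> int:
        """Coarse-grained position: a whole 120 mm cell replaces the mm-level reading."""
        return int(container_position_mm(encoder_pulses) // CELL_PITCH_MM)

    # A 2537-pulse reading (253.7 mm of travel) collapses into cell index 2:
    print(container_position_mm(2537), shifting_register_cell(2537))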

