Root Cause Analysis studies the relation between observed negative effects and their causes.  The wide spectrum of answers to the apparently trivial question “what is the difference between a Cause and its Effect?” shows that everyone has a different idea.  

The ideas of causation find direct application everywhere, including in Problem Solving and in the Root Cause Analysis of technological issues.  Simply put, there is a big difference between the Logic underlying a dialectic debate between humans and Nature’s own Logic, manifest in the physical laws underlying all technological applications. 

The causal-connection ideas of the last centuries hint at the validity of conjectures elevated thousands of years ago to the rank of principles, namely:

  1. the time-ordering of the Events;
  2. someone or something here acts upon someone or something there.

The subject is examined below from different points of view:

  • Quantum causal theory and quantum theory, following studies by Richard Dedekind, Felix Hausdorff, Alfred A. Robb, Helmut Hasse, Henri Gustav Vogt, Raphael Bousso, Abhay Ashtekar, George Ellis et al.; 
  • General Relativity, originated by Albert Einstein and his followers Peter Bergmann, John A. Wheeler, Tullio Regge, Stephen Hawking, Roger Penrose et al.;
  • Classical Physics, due also to the mathematicians Henri Poincaré and Hermann Minkowski;
  • Epistemological, originated by studies started thousands of years ago and more recently fathered, among many others, by Pierre Simon de Laplace, Leibniz, Hume, Hobbes and Ernst Mach.



Quantum Causal Theory 




“Discrete space” means that lengths in our ordinary 3D space are made of a finite number of elementary lengths.


How many points lie between two points?  Twenty-five centuries ago the Greek philosopher Zeno first observed the kinematical paradoxes deriving from the idea that a line joining two points may be infinitely divided.  For ninety years that infinity has also been hitting against the Quantum paradigm, itself derived from the most objective facts.  Infinities and infinitesimals, so frequently invoked in Classical Physics and Engineering computations (e.g., Fourier, Taylor or Maclaurin series), are implicit statements about the nature of space and time: statements of space-time smoothness.  A smoothness contradicted by the experimental and technological results of the last ninety years.  Could space be discrete?  A space where everything exists and moves exclusively in discrete amounts, thus relegating infinities and infinitesimals to the rank of useful mathematical abstractions?  The most modern concept of what a Causal Relation is derives quite recently from the intersection of two branches of Mathematics with the branch of Physics dominant today: 

  • Order Theory,
  • Minkowskian Geometry, 
  • Quantum Mechanics.   

The newly proposed principle is the Causal Metric Hypothesis: 

  • characterizing the observed properties of the physical universe as manifestations of causal structure,
  • where the metric properties of classical spacetime arise from a binary relation on a set, representing direct influences between pairs of events.

Rephrasing the above, the Causal Metric Hypothesis assumes that the structure of spacetime is not continuous, but rather discrete and structured.  A point of view completely different from what we implicitly assume when, e.g., solving the basic kinematical problems constantly presented by industrial machinery and equipment.  “Discrete space” means that in our ordinary 3D space lengths are made of a finite number of elementary lengths.  

  Several ways exist to go from a point P to a point Q in its Future.  The green and orange coloured paths respect all four Partially Ordered Sets’ rules, discovered 90 years ago and detailed below.  The orange coloured path passes through an intermediate point R.  Three of the paths outfeeding R, in red colour, correspond to Past-oriented histories prone to become causally pathological closed cycles or chains, violating the Causal Metric rules.  The Time axis hints at the existence of an initial condition


Elementary amounts representing the: 

  • smallest possible length le,
  • flow of time, occurring in a series of elementary ticks of duration te, representing the shortest time interval. 
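
Whether the elementary length le and the elementary tick te coincide with the Planck length and the Planck time is an assumption made here only for illustration; the argument above requires only that some finite le and te exist.  A minimal Python sketch of the natural cutoffs such elementary amounts would imply (these cutoffs are used again in items 1. and 3. of the list further below):

    # Minimal sketch. Assumption: le and te are taken equal to the Planck
    # length and the Planck time; the argument only needs *some* finite values.
    import math

    h    = 6.62607015e-34      # Planck constant, J*s
    hbar = h / (2 * math.pi)
    G    = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
    c    = 2.99792458e8        # speed of light, m/s

    l_e = math.sqrt(hbar * G / c**3)   # candidate elementary length, ~1.6e-35 m
    t_e = l_e / c                      # candidate elementary tick,   ~5.4e-44 s

    nu_max     = 1.0 / t_e             # maximum allowed photon frequency, Hz
    lambda_min = l_e                   # minimum allowed wavelength, m
    E_max      = h * nu_max            # energy of the highest-frequency photon, J

    print(f"l_e = {l_e:.3e} m,  t_e = {t_e:.3e} s")
    print(f"nu_max = {nu_max:.3e} Hz,  E_max = {E_max:.3e} J")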

The Causal Metric Hypothesis, new paradigm for causation, rejects the:

  • notion that spacetime is a manifold, 
  • existence of static background structure in the universe, 
  • symmetry interpretation of the relativistic covariance, 
  • several other assumptions. 

The field is based on studies started one century ago, and today there exist two Causal Metric Hypotheses:

  1. Quantum Causal Metric Hypothesis, states that the phases associated with directed paths in causal Configuration space, under Feynman’s Sum-Over-Histories approach to Quantum theory, are determined by the causal structures of their constituent Universes, themselves part of the wider closed quantum system named Multiverse;
  2. Classic Causal Metric Hypothesis, states that the metric properties of spacetime arise from a binary relation on a set, representing direct influences between pairs of Events.  


Why Causal Sets


Nearly everyone has already observed on his or her own that all movies, photographic or digital, present a continuous, fluid evolution of the Events even if intimately built by assembling a great number of independent frames: a collection of frames, each of them closely recalling the 3D hypersurfaces, associated with each instant of Time, named Events by Relativity.  In this banal, widespread observation there is much more truth than expected. Causal Set Theory proposes itself as the more fundamental theory, probably the only one truly capable of solving all five problems listed below:

  1. Electrodynamics.  The electromagnetic (abbreviated em) spectrum, visible in the figure below, represents the range of frequencies and associated wavelengths of the electromagnetic interaction: an interaction transferring quantised (discrete) packets of energy named photons.  The double-sided (left and right) arrow-like shape of the diagram has a meaning: it is a continuous spectrum of infinite extension.  The present Electrodynamics Theory predicts that the allowed frequencies of photons extend continuously from zero to infinity.  Frequency zero (right side below) implies a wavelength tending to infinity: an effectively unreachable limit.  But what about the left side, corresponding to extremely short wavelengths?  Planck’s law E = hν implies that a photon of arbitrarily large frequency ν has arbitrarily large energy E, in open contradiction with the principle of local conservation of energy, which suggests that no infinite-energy photon may exist.  Then there ought to be a natural cutoff of the electromagnetic spectrum in correspondence to a maximum allowed frequency νmax, a cutoff to date not conceived by Electrodynamics Theory, mainly based on Maxwell’s studies of nearly two centuries ago.  Since the frequency of a photon is the inverse of its period, we see how a discrete temporal structure provides a natural cutoff: in fact, the minimum time interval te implies a maximum frequency νmax = 1/te.

  The electromagnetic spectrum is based on the energies involved in Planck’s fundamental formula E = hν, where ν = c/λ; c is the speed of propagation of the em waves in vacuum, λ the wavelength, ν the frequency, h Planck's constant and E the energy of the photon (CC 3.0)


  2. General Relativity.  It postulates the existence of hyperdimensional foliated spatial surfaces, where Time plays the role of a parameter whose meaning is the identification of the different leaves.  With Time and space both real numbers, it becomes unavoidable for General Relativity to present frequent divergent solutions of its set of equations.  As an example, consider the gravitational potential F = -kM/R caused by a homogeneous mass M with spherical symmetry, measured at a radius R (k universal gravitational constant).  Visibly, for R → 0 at the centre of the sphere we’d have a gravitational potential diverging to infinity, an infinity whose effects, on the contrary, no one detects.  To solve this and other problems, new arguments in favour of a discretization of space and time would be welcome.  


  3. De Broglie relation p = h/λ.  Its validity is confirmed by several experiments accomplished during the past decades.  But when applied to wavelengths arbitrarily close to zero, the photons’ momentum becomes visibly divergent, potentially infinite: an infinity in open contradiction with the principle of local conservation of momentum, suggesting that photons of infinite momentum cannot exist.  Then we see how the minimum length le, implied by a discrete space-time structure, provides a natural cutoff at the wavelength λmin = le.


  4. Quantum Electrodynamics.  Its infinite perturbation series require the existence of all the photons in the electromagnetic spectrum and, as a consequence, Quantum Electrodynamics (QED) actually predicts the existence of these (never observed) photons of infinite energy-momentum.  It is believed that the perturbation series is divergent.  A divergence generally overlooked because QED is not a complete theory accounting for all elementary interactions: a pseudo-explanation merely hoping that another theory fills QED's logical loopholes.  Again, we see the tension existing between fundamental quantum formulæ, like those of Planck or de Broglie, and the observed reality in the limit of the higher frequencies (shorter wavelengths). 


  5. Astronomical Observations.  During the past fifteen years the bulk of data registered by humanity's best eyes, the orbiting telescopes, has been reduced.  The impressive result, one that no one had imagined before, is that if all the basic assumptions about space, time and the Doppler-effect meaning of the redshift of the extragalactic objects are correct, then the observations of the past few decades imply that the Universe has accelerated its expansion.  Readers understand that it seems unthinkable that our generation was randomly born in the exact moment when the entire Universe, 15 billion years after it was born, ...decided to change kinematics.  What could be accelerating the expansion has been christened Dark Energy.  Also in this case, Causal Sets Theory could be the solution.  The cosmological model requiring Dark Energy to explain the observational data uses a cosmological constant along the entire evolution.  On the contrary, Causal Sets Theory implies fluctuating values for that parameter, and the simulations agree with the observations.  Then Dark Energy is not necessary to explain the observations.

It is incoherent to continue assuming continuous and infinitely differentiable space and time well over one century after Max Planck changed the course of human history by means of his formula E = hν. 


Partially Ordered Sets 


 Different ways to go from a point in the Past to several different effects in its Future.  “Discrete space” means that in our ordinary 3D space lengths are made of a finite number of elementary lengths


Quoting J. Neggers and H. Kim (1996): 

“Partially ordered sets (posets) have a long history beginning with the first recognition of ordering in the integers.  In the early nineteenth century, properties of the ordering of the subsets of a set were investigated by De Morgan, while in the late nineteenth century partial ordering by divisibility was investigated by Dedekind.  Although Hausdorff did not originate the idea of a partially ordered set, the first general theory of posets was developed by him in his 1914 book 'Grundzüge der Mengenlehre'.  

It remained until the 1930's for the subset of lattice theory to blossom as an independent entity with Birkhoff's justly famous text on the subject first published in 1940.  It has only been in the last three decades that posets and their relationships to applied areas such as computer science, engineering and the social sciences have been extensively investigated.”  


Causal relations are partial-order relations, and the spaces they form have a structure defined solely by causal relations.  The causality relations are not necessarily tied to the notion of a smooth space-time manifold.  In 1914, when few had truly understood the Special Theory of Relativity and General Relativity was not yet published, the relevance of the Events' causal ordering was considered fundamental by Alfred A. Robb in his Ph.D. dissertation.    



 Lateral view of a Hasse diagram.  Hasse diagrams are Alexandrov Intervals, or causal diamonds, filled with partially ordered, causally connected Events.  1200 Events have been randomly sprinkled following a Poisson distribution in this (1+1)-dimensional Minkowski spacetime, embedded in a random causal ordering.  The blue-coloured zig-zag line is just one of the many ways an Event at p can determine an Event at q.  A visibly tree-like, multiply-connected spacetime structure, showing a convergence of independent studies in different disciplines of Science (abridged by R. Salgado/2008)


The figure at the side is a modern view, integrating the relativistic perspective of Alexandrov's Intervals with the partially-ordered-sets ideas of Helmut Hasse and Henri Gustav Vogt, conceived as early as 1895.  Here it is visible a:

  • (1 + 1)-dimensional Minkowski space-time: 2D projection of the Alexandrov's Interval,
  • partially ordered set composed of 1200 elements.  


Basic Rules of the Partially Ordered Sets

The Events visible therein, part of the huge tree-like, branched, multiply-connected spacetime structure, have been randomly embedded (“sprinkled”) following a Poisson distribution: a stochastic distribution, as much as the observed outcomes of physical measurements.  Randomly distributed, but respecting a relation of partial order based on the following four properties (the logic symbol “≺“ reads “precedes”):

  1.  transitivity               If x ≺ y  and  y ≺ z,  then x ≺ z;
  2.  non-circularity        If x ≺ y  and  y ≺ x  then x = y (no closed-timelike-curve is admitted);
  3.  finitarity                  The number of Events lying between any two fixed Events is finite;
  4.  reflexivity                x ≺ x  for any Event in the causal set.
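
A minimal Python sketch of such a causal set: a fixed number of Events is sprinkled uniformly into a (1+1)-dimensional causal diamond (a fixed-count stand-in for the Poisson sprinkling of the figure), the relation ≺ is taken as light-cone precedence, and the four rules above are verified by brute force.  Event count and random seed are illustrative assumptions.

    # Minimal sketch: sprinkling in a (1+1)-D Minkowski causal diamond and
    # brute-force check of the four partial-order rules.  Numbers illustrative.
    import random

    random.seed(1)
    N = 60                                     # kept small: the checks are O(N^3)
    # Light-cone coordinates u, v in [0, 1]:  t = (u+v)/2,  x = (v-u)/2.
    events = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(N)]

    def precedes(a, b):
        """a ≺ b  iff  b lies in (or on) the future light cone of a."""
        return a[0] <= b[0] and a[1] <= b[1]

    # 1. transitivity
    assert all(precedes(x, z)
               for x in events for y in events for z in events
               if precedes(x, y) and precedes(y, z))

    # 2. non-circularity: x ≺ y and y ≺ x only when x = y
    assert all(x == y for x in events for y in events
               if precedes(x, y) and precedes(y, x))

    # 3. finitarity: the interval between any two Events holds finitely many Events
    def interval(p, q):
        return [e for e in events if precedes(p, e) and precedes(e, q)]
    print("largest interval:", max(len(interval(p, q)) for p in events for q in events))

    # 4. reflexivity
    assert all(precedes(x, x) for x in events)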


What is detailed in the pages devoted to the meaning of Root Cause according to General Relativity (hypersurfaces, world points, Events, Past and Future light-cones, equitemp lines and surfaces, etc.) can be resumed, in the figure below, by the depiction at the left side.  In General Relativity it is the spacetime geometry which governs the scope of causal influence: a geometry of curved hypersurfaces, each of them associated with a certain value of a parameter named Time.  But under the (Classical) Causal Metric Hypothesis, depicted below at the right side, the geometry of spacetime is reduced to just one of the many possible descriptive ways open to the propagation of Information.  As an example, to transmit Information from the point p in the Past to the point q lying in p's Future, the path passing through the sequence of points x → y offers a strictly causal way.  However, it is just one of the many ways open to Information propagation.

 The Classic Causal Metric perspective has an explanatory power superior to all other existing theories, General Relativity included.  A happy marriage of the micro and macro scales with the Quantum paradigm.  Most important: finally a deep idea of what a cause is, what an effect is, and of their relation, exempt from circularities and from assumptions contradicted by the experimental evidence (abridged by B.F. Dribus, in eds. A. Aguirre, Z. Merali, B. Foster/2015)


As an example, the point q is also receiving Information from two white coloured points (at the left of x) lying in the Future of the point p, out of the light bicone centred on the point x.  This does not impede the point q from representing a configuration accounting for a Past history that also includes one of those white coloured dots not causally related to the Events in q’s strictly causal Past.  The distance between the elements accounts for the proved discreteness of the Environment we inhabit.  We intentionally avoided naming it with the relativistic term “spacetime”, because the extremely modern Quantum Causal Theory shows spacetime is an obsolete illusion dated 1908.  In the figure below, the fine details of the way the four basic rules of the Partially Ordered Sets apply to reality. 


Several ways exist to go from a point P to the point Q in its Future.  The green and orange coloured paths respect all four Partially Ordered Sets’ rules.  The orange coloured path passes through an intermediate Event R.  Three of the paths outfeeding R, in red colour, correspond to Past-oriented histories prone to become causally pathological closed cycles or chains.  The Time axis marked at the right side makes sense of the existence of an initial condition: the arrow of Time.



Partially Ordered Sets and Measurements


Implicitly, the Reader may now also understand some other far-reaching consequences of the new paradigm.  Some of them the Reader has already personally observed when, as an example, performing analytic measurements in a Laboratory or by means of automated equipment.  The same is observed when measuring whatever: weighing, counting Gamma- or X-rays, measuring diffraction when estimating the amount of sugar in a beverage, etc.  Measurements always provide visibly fluctuating results.  The Root Cause of those fluctuations is not “errors”; rather, they are intrinsic.  In our standard conditions (of temperature, gravity and pressure) they are caused nearly only by decoherence, and not by assumed “limits” of the measurement equipment or its Detectors.  


 Three of the paths joining the points P and Q in spacetime.  Visibly, an identical spatial segment Ω corresponds to three different times, here marked with red, blue and green colours.  Causal diamonds, in their modern interpretation as posets’ external envelopes, make full sense of this classic observation at all scales


Two examples, the:

  • Brownian movements of Statistical Mechanics, e.g., the pattern observed on hot liquid surfaces, 
  • radioactive decays, e.g., the events counted by all Gamma-Ray Fill Level Inspections.  

This is what is expected if the matter-energy measured in space along a certain time is made up of discrete, quantised amounts stochastically interacting with the Environment, much more than with the analytic measurement equipment or its Detectors.
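
A minimal Python sketch of that intrinsic scatter, using radioactive-decay counting as the example; the mean count rate and the number of repeated measurements are illustrative assumptions, not data from any real Fill Level Inspection.

    # Minimal sketch: fluctuations of a counting measurement with an ideal detector.
    # Mean rate and number of repetitions are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    mean_counts = 400        # assumed mean counts per counting gate
    repeats = 1000           # identical, repeated measurements

    counts = rng.poisson(mean_counts, size=repeats)

    print("mean    :", counts.mean())
    print("std dev :", counts.std())                  # ~ sqrt(400) = 20
    print("relative:", counts.std() / counts.mean())  # ~ 5 %, detector assumed perfect

The ~√N scatter appears even though the simulated detector is perfect: the fluctuation belongs to the discrete, stochastic events being counted, not to the equipment.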


Quantum Causal Theory and Root Causes 

Resuming what precedes in our root cause analytic perspective, the modern and still developing Quantum Causal Theory is saying that a new status observed now for a physical system, whatever its size, mass-energy or duration, has its Root Cause:

  • in the superposition of a multitude of Causes,
  • more or less strictly related to the observed Effect, depending on the kinematics followed by the Information along its transfer from the Causes to the Effect,
  • in the selection of some of the subspaces of the Hilbert space onto which the eigenvectors, representing each of the properties causing an effect, have projections. 


Horizons

 Polymer-like excitations puncturing the external bulk geometry of a massive material body, following Abhay Ashtekar









We conclude, within 3rd Millennium Physics, this update about what, naive interpretations apart, a Root Cause really is.  The figure represents a massive material body when considering its geometry down to the quantum level.  The filiform curves piercing the object's external surface are polymer-like excitations of the bulk geometry.  The amount of punctures is related to the mass of the material body, proportional to all the possible combinations of the material particles composing the object and to their spin.  We think that this image, better than many others, hints at what a Root Cause really is: a relation between material objects, existing down to their smallest particles, and between material objects and fields, like the electromagnetic or the gravitational.  These connections make full sense of a causal relation between Events, down to the smallest dimension and duration.




Causal Connection of the Events

In the following we'll introduce the relevance of the history of the Events: the long, time-ordered chaining of Events in the Past considered causes for a present Event.  We’ll consider in this section three cases at different levels of complexity.  The Events shall be marked using their various existing and equivalent notations, like P, Q, p, q, a, b, c, x, y or ei (i = 1, 2, 3, …, n).  All the following diagrams are spatio-temporal, adopting the conventional notation of Physics where Time is oriented along the vertical ordinate axis: 

  1. Past Events, which may be causes of a Present Event at P; 
  2. Future Events, which may be caused by a Present Event at P; 
  3. Events happening Elsewhere, causally disconnected from P.


         Events’ causal connection and disconnection


The diagrams follow Minkowski’s formulation.  Here, the parameter t represents the Time measured along the world-line crossing the Event P at the vertex of the bicone.  We’ll treat first the intuitive and unrealistic case where the Events lie along two parallel Time axes, in a Euclidean rectangular 3D spatial system, of which we’ll represent just two axes.  Also, we are assuming that the clocks at rest at the points P and Q tick with the same rhythm.  P and Q lie on two distinct equitemp surfaces, perpendicularly crossed by the Time axis.  A single straight equitemp line joins the surfaces.  For instance, the segment t1 - t0 is the Time difference between the Events P and Q.  In the equitemp surfaces the Time is labelled ti, synchronised by any proper procedure for all Events in the surface, thus having a univocally defined meaning for all of them.  
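
A minimal Python sketch of how two such Events are classified as causally connectable or not, using the Minkowski interval; coordinates, units and the sample Events are illustrative assumptions.

    # Minimal sketch: causal classification of two Events P and Q from the
    # Minkowski interval s^2 = c^2 (t1 - t0)^2 - |x1 - x0|^2  (signature +,-,-,-).
    c = 299_792_458.0                       # m/s

    def classify(P, Q):
        """P, Q = (t, x, y, z) in seconds and metres (illustrative units)."""
        dt = Q[0] - P[0]
        dx2 = sum((q - p) ** 2 for p, q in zip(P[1:], Q[1:]))
        s2 = (c * dt) ** 2 - dx2
        if s2 > 0:
            where = "Q in P's Future" if dt > 0 else "Q in P's Past"
            return f"timelike ({where}): causal connection possible"
        if s2 == 0:
            return "lightlike: connectable only by signals travelling at c"
        return "spacelike (Elsewhere): causally disconnected"

    P = (0.0, 0.0, 0.0, 0.0)
    Q = (1.0, 100_000.0, 0.0, 0.0)          # 1 s later, 100 km away
    print(classify(P, Q))                   # timelike (Q in P's Future): ...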


All that happened in the Past of the Event Q is causally related with Q since the origin of Time: the entire superposition of Events contained in its Past cone causes Q.  Some of the effects of P shall enter Q’s Future, when and where they'll be concauses of a share of the Events observed in Q’s Future


We’ll represent just one of the two Time axes for the sake of clarity of the graphic representation.  The figure at the right side depicts two bicones representing an Event P at time t0 and an Event Q at time t1, where t0 < t1.  The 2+1-dimensional intersection volume (a 3D space-time) of the bicones in P’s and Q’s respective: 

  • Past, is a subset of all the Events which caused both P and Q as effects; 
  • Future, is a subset of all the Events determined by P, which will be concauses (shared causes) of what shall be observed in the Future of the Event Q.

All that happened in the Past of the Event Q is causally related with Q since the origin of Time: the entire superposition of Events contained in the Past cone of P causes P.  A closer exam of the figure above at the right side makes it clear that this way of representing facts and their relations, based on Minkowski geometry in the Special Relativity perspective, is not dynamical.     

The entire system may be imagined like a multitude of frozen frames, created by a holographic camera capable of encoding 4D images in a 3D medium.  If this were the entire story, the complete sequence of causes and effects would be predetermined already at the start of Time, when the single clock, reproduced along infinite worldlines, started to tick.  A cosmologic concept for the origin of Time, in open contradiction with some basic ideas of Quantum Field Theory.  Furthermore, we all think that a choice here and now may later change the outcome there.  Then, what precedes is not the entire story.  The real geometry is not so simple and, most important, there is an initial condition which was given at the origin of Time.  The cones’ null-surfaces visible in the figure above are crossed by photons.  The geodetic curves of the massive particles, on the contrary, pass exclusively inside those cones, like in the figure at the left side.  This also means that massive particles cannot propagate at the speed of the photons.  


 Massive particles' geodetics enveloped by the null-surface of the Past light-cone referred to the Event (abridged by D. Hilbert/1927)



Cauchy Surfaces and Root Causes


The key point we’ll try to illuminate in this section is that, over one century ago, a few famous mathematicians, like the great Frenchman Henri Poincaré to name just one, discovered the existence of a layer of knowledge with its own rules, interposed between pure Logic and Geometry: a branch of Mathematics named Topology.  The discovery is not trivial, because all technological applications (i.e., machinery, equipment, processes) are applications of Physics.  Other applications of technological interest, like those pertaining to chemical and biochemical processes, are themselves based on Physics.  Then, it is the entire technological world which is related to basic Logic by means of a topological interface.  Could we root cause analyse what happens in a machine, equipment or productive process abiding only by Logic?  Surely we could.  We’ll explore in the following examples: 

  • where this happens, 
  • toward what conclusions it carries the analysis, 
  • why technological applications behave in another way, their own way.  

Root Cause Analysis methods may be indifferently applied to malfunctions and inefficiencies affecting equipment and machinery, whether already commissioned or not yet commissioned.  In the latter case, the initial information resides in the Project documentation and specifications.  The first case is however the most common and, root cause analysing, we are investigating what let a system diverge out of its precedent history.  A history where it was performing within the Project specifications, resumed in an array of numbers, symbols and rules named Technological Guarantees.   


Example

Thermal Process Malfunction

We ask the Readers to imagine a Root Cause Analysis unknowingly accomplished in violation of one or more basic physical laws.  As an example, imagine an industrial boiler with two separate circuits (hot and cold fluids) which has been correctly heating water to 90 ºC and keeping it at that temperature.  A process needing ~3 hours to reach that regime starting from a water temperature of 20 ºC.  Suddenly, due to an unknown reason or reasons, the boiler starts to need ~12 hours to reach the same temperature.  Clearly, to identify what, who, when and to what extent could have made the change causing the present malfunction, it is necessary to analyse the entire thermal process and its conditions before and after the problem started to exist.  A multitude of conditions, functions and operations, differently related and some of them apparently unrelated.  At first sight, the inputs and outputs of the Automation controlling system seem present and correctly exchanged.  Then, someone else remembers having observed, months before, some cuts in some Teflon O-rings and residuals in the piping.  This, added to the fact that O-rings’ lifetime is not eternal, lets the root cause Analyst focus on the boiler’s Heat Exchanger, on the eventuality that a foreign massive body (e.g., a plastic foil, some broken O-rings, or something else) entered the hot fluid circuit, partially blocking the hot fluid inflow to the thermal exchanger.       
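
A minimal Python sketch of the order-of-magnitude reasoning behind that suspicion, under strongly simplifying assumptions (heat delivered per unit time proportional to the hot-fluid flow, losses ignored); all figures are illustrative, not taken from a real boiler.

    # Minimal sketch, strongly simplified: if the heat delivered per unit time
    # scales with the hot-fluid flow, the heat-up time scales with its inverse.
    nominal_heatup_h  = 3.0      # hours, per the Technological Guarantees
    observed_heatup_h = 12.0     # hours, after the malfunction appeared

    # t_observed / t_nominal  ~  flow_nominal / flow_observed
    estimated_flow_fraction = nominal_heatup_h / observed_heatup_h
    print(f"effective hot-fluid flow ~ {estimated_flow_fraction:.0%} of nominal")
    # -> ~25 %: compatible with a partially clogged Heat Exchanger inlet,
    #    which is why the Analyst focuses there first.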



 Cauchy surfaces are crossed only once by the objects’ world-lines, thus preventing the otherwise unavoidable causality violations which could derive from worldlines developing along closed cycles.  In the case of the physical space they are spatial-only, with dimensionality ≥ 3.  In green and red colours, a causal and an acausal history





“...root cause analysing what interrupted the normal execution of a physical process, we are always tracing back in time, and all around in space, who and/or what, and to what extent, changed its precedent state”


After a long, Production-consuming stop to empty the hot fluid circuit and thoroughly inspect all the piping, this eventuality is excluded as Root Cause.  Excluded because found inconsistent with the result of the inspection: no foreign objects.  A thorough control of the Maintenance Department's logs shows that all O-rings were purportedly renewed a short time (with respect to their standard lifetime specification) before the Event causing the industrial process downtime.  Meaning that the additional Production time lost during the useless piping inspection is the consequence of an error.  In what way did the eventuality that a foreign object clogged the piping relate to the facts written in the logbook, before losing additional time emptying the entire circuit to inspect the piping?  In brief, the sequential observations about the: 

  1. broken O-rings
  2. change of all O-rings with spares
  3. start of the problem

are three facts related to their own proper times.  Jointly they form a subset of a history.  The history which could have been a reasonable Root Cause if only point 1. existed reveals itself an own goal, once it is noticed that point 2. also existed before the point of time 3. when the problem became manifest.  

The subspace of the eventual histories is composed of the combinations:

                              (1., 2., 3.),   (1., 3., 2.),   (2., 1., 3.),   (2., 3., 1.),   (3., 1., 2.),   (3., 2., 1.)

As an example, the case (1., 3., 2.) represents a history which could be realized by the sequence of observations:

  1. broken O-rings
  2. start of the problem
  3. change of all O-rings with spares

But what about the sequences (2., 1., 3.) or (3., 2., 1.), representing unrealistic histories of observations?

  1. change of all O-rings with spares
  2. broken O-rings
  3. start of the problem


  1. start of the problem
  2. change of all O-rings with spares
  3. broken O-rings
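
A minimal Python sketch enumerating the 3! = 6 candidate histories and marking which of them are consistent with the precedence actually documented in the logbook (broken O-rings observed before the spares were fitted, spares fitted before the problem started); the numeric labels are just shorthand for the three observations above.

    # Minimal sketch: the six candidate histories of the three observations,
    # filtered by the precedence constraints documented in the logbook.
    from itertools import permutations

    observations = {1: "broken O-rings observed",
                    2: "all O-rings changed with spares",
                    3: "start of the problem"}

    constraints = [(1, 2), (2, 3)]        # read from the logbook: 1 ≺ 2 and 2 ≺ 3

    def consistent(history):
        pos = {event: i for i, event in enumerate(history)}
        return all(pos[a] < pos[b] for a, b in constraints)

    for history in permutations(observations):
        print(history, "->", "realizable" if consistent(history) else "unrealistic")
    # Only (1, 2, 3) survives: the realized history is the one selected by the
    # causal ordering, exactly as the Cauchy-surface argument below states.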


 Causal chaining involves a multitude of different pathways open to let an Event cause effects elsewhere.  But only some of the ways, those respecting a few basic logical and topological rules, participate in creating the realized histories


At our scales of mass-energy, spatial extension and time duration, all processes look Topology-compliant.  Physical processes are mappings from a hypersurface at time t0 to another at time t1, where t0 ≺ t1 (“≺“ meaning “precedes”).  What prevents us from observing unrealistic histories is the presence of the Cauchy hypersurfaces.  All Information from a Past Event to a Future Event is present on the Cauchy hypersurface, a surface crossed only once by the objects’ world-lines, thus preventing otherwise unavoidable causality violations: the incoherences deriving from world-lines developing along closed cycles.  Cauchy hypersurfaces can be idealised as the most basic directional filters, acting at all scales of dimension, time and mass-energy.  They are spatial-only, with dimensionality ≥ 3.  Many of the tentative root causes and interpretations failing to solve a Problem have hidden inside one or more erroneous assumptions or data.  A deeper exam would discover their flaws, because they imply causality violations.  
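
A minimal Python sketch of the “crossed only once” criterion: a recorded history is admissible only if its world-line intersects each constant-time (Cauchy-like) surface exactly once, i.e., its time coordinate strictly increases along the record; the sample histories are invented for illustration.

    # Minimal sketch: a world-line given as (t, state) samples crosses every
    # constant-time surface only once iff its t coordinate strictly increases.
    def crosses_each_surface_once(worldline):
        times = [t for t, _state in worldline]
        return all(t0 < t1 for t0, t1 in zip(times, times[1:]))

    causal_history  = [(0, "broken O-rings seen"), (1, "O-rings changed"), (2, "problem starts")]
    acausal_history = [(0, "problem starts"), (2, "O-rings changed"), (1, "broken O-rings seen")]

    print(crosses_each_surface_once(causal_history))    # True  -> admissible
    print(crosses_each_surface_once(acausal_history))   # False -> closed-cycle-like, rejected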




 A Cauchy hypersurface, marked as a bold curved line passing through the Event Q, translates the flat-space idealization lying behind the equitemp surfaces into the curved-space general relativistic perspective














What would it mean to be developing an RCA based on one or more erroneous assumptions implying causality violations?  For two thousand five hundred years a typical flawed logic has been known: the tautology.  A circular, disguised restatement of the basic assumption, always and falsely proving itself.  Cauchy hypersurfaces prevent matter and radiation from following topologies whose logic would be tautologic.  It is easy to conceive this necessity by imagining an object pertaining to our scales of dimensions, times and mass-energy in a peculiar topology like the one in the figure below.  A 3+1-dimensional object of which the figure forcedly shows just 2+1 dimensions.  

The object proceeding along its inner surface encounters three different ways to cross the handle from one throat to the other, corresponding to different lengths for each path.  This, at first sight, does not seem an anomaly.  In the end, we all think that our free will allows us to choose between different paths when moving from one place to another.  But our choices happen at scales exempt from topologic pathologies like this.  Looking at the details, it is possible to see a closed cycle in the direction of the Time axis.  If a handle like that existed at our scales, we’d see:

 A handle is an immediate candidate for causality violations.  Three non-equivalent ways joining two places (abridged by F. Hellaby, in J. Plebanski, A. Krasinski/2006)












  • distinctly visible contradictory Events;
  • exchanges of the left and right sides of whatever crosses it;
  • creation of energy from…nothing;
  • a Time Machine.

The conclusion is that, root cause analysing what interrupted the normal execution of a physical process, whatever its complexity, we are always tracing back in time, and all around in space, who and/or what, and to what extent, changed its precedent state.  Physical systems falling under our experience are realized systems and not ideal systems.  Their operation is filtered at all scales by basic structures like the Cauchy hypersurfaces, preventing realized contradictions.  As a consequence, when root cause analysing, our line of reasoning has to be based on reliable, controlled data.  The alternatives are ill-fated analyses and consequent “pet theories” dressed up to look like Root Cause Analysis: flawed analyses just capable of changing the pseudo root cause of a real problem from an unknown attribution to an erroneous attribution.  


 Events’ causality at large scale.  An Event at the point P along a worldline is causally related to all those in its past light cone.  At its right side, marked with a bold line, 3 different time-ordered positions, themselves Events, of another object.  An object interacting and statistically dependent with P in its Past, later no longer interacting (e.g., disappeared) but still statistically dependent, finally reappearing in P’s Future when and where interaction and statistical dependence are re-established



Each Observed History is Realized History
















The conclusion of what precedes is that, once observed, the history containing a Problem becomes realized history: the history of a physical process where the Information flow, from the Past to the Future, is filtered by the Cauchy hypersurfaces.  Just an extremely small share of all the realized histories lies in a permanent way in our own memory, logged by Dataloggers, registered by Data Acquisition systems, etc.  We are capable of conceiving scenarios or histories pertaining to the wider state space.  Measurements and experiments show that those multitudes of unrealistic histories are also part of the state space.  But the stability of the majority of these configurations is minimal.  That’s why they are observed and registered only by instruments extending our biological detection capabilities.  These are not really modern ideas, i.e., ideas of Quantum Mechanics or of its actual grandchild Membrane Theory.  

As a matter of fact, the Mechanical and Chemical Engineers reading these notes remember that a similar situation is derived from Liouville's and Poincaré's recurrence theorems, inherent to subjects of Statistical Mechanics and Thermodynamics.  Here, a system of two chambers is conceived, kept separated by a partition (e.g., a gate or door).  Initially, a gas is present in just one of them and the other is completely empty.  Then the gate is opened and, as we all know, the gas shall reach a temporary equilibrium state where the same number of particles is divided between the chambers.  Liouville's and Poincaré's theorems state that, waiting long enough, we’ll be spectators of a paradoxical state where ...all of the particles have returned to the original chamber!  And if we continue to wait, we’ll re-encounter all particles again distributed between the chambers.  Just one of the many paradoxical configurations and histories actually part of those existing in the state space.
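
A minimal Python sketch of the two-chamber thought experiment, using an Ehrenfest-urn-style model as a stand-in for the real dynamics; particle number and step count are illustrative assumptions.  With a handful of particles the “paradoxical” return happens quickly; with ~10²³ particles the expected waiting time (~2^N steps) exceeds any physical age, which is why it is never observed in practice.

    # Minimal sketch: Ehrenfest-style two-chamber model.  At each step one
    # randomly chosen particle jumps to the other chamber.
    import random

    random.seed(2)
    N = 10                     # illustrative particle count (tiny on purpose)
    steps = 200_000
    in_left = N                # all particles start in the left chamber

    returns = 0
    for _ in range(steps):
        if random.random() < in_left / N:
            in_left -= 1       # a left-chamber particle jumps right
        else:
            in_left += 1       # a right-chamber particle jumps left
        if in_left == N:
            returns += 1       # Poincaré-style return to the initial state

    print(f"returns to the initial state in {steps} steps: {returns}")
    print(f"expected waiting time between returns ~ 2**N = {2**N} steps")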


Introducing Alexandrov Intervals

The bicone here at the left side represents a second way to display the same 2+1-dimensional section of the 4D Minkowski geometry.  To the Time t = 0 is referred the plane surface; all the electromagnetic and/or gravitational signals outgoing from the Event e1 pass through a circular white coloured surface S after a certain time.  All Events lying in S are causally correlated with e1.  An infinitely sensitive Detector or an Observer placed at the Event e0, simultaneous with the events in the surface S yet extremely close to them, has no idea of their existence.   

 An infinitely sensitive Detector or an Observer placed at the Event e0, simultaneous with the events in the surface S, however spatially extremely close to them, has no idea of their existence.  Meaning that at the Time slice t = 0, the events e1 and e0 are causally disconnected or, space-like separated (abridged by J. A. Winnie/1977)



Meaning that at the Time slice (or leaf, or sheet) t = 0, the events e1 and e0 are causally disconnected or, space-like separated.  On the contrary, the Event e2, along the same worldline and correlated with e0 in its Past, even if simultaneous with the Event e3, is causally related to e3, because it lies in the Future lightcone of e1 no less than e3.  From the figure above it is finally possible to see that the Event e3 is correlated to all (in the Classic view, infinitely many) events in the circle S.  Because we are representing a 2+1-dimensional section of a 4D geometry impossible to depict, Readers understand that the nature of the circle S is in reality that of a 3D sphere.  


In a modern quantum-mechanical perspective, a Detector placed at the Event e0, simultaneous with the events in the surface S and extremely close to them, is capable of detecting them as Noise pulses: pulses stochastically occurring and of extremely brief duration, originated by quantum effects like tunnelling 













A sphere whose centre is the Event along the worldline joining e1 to e3, crossed by the equitemp surface xy where the Event e0 lies.  The third case is represented by the figure below, showing a situation closer to the reality lived by Detectors, Machinery and Observers: a complex sequence of bicones illustrating why Relativity considers all the spatial dots around a single Event related to the Event.  Here, on the basis of the Minkowski geometry of 1907, an entire spatial volume lying out of the light cone should be causally disconnected from the Event a in its Past.  In reality, it shall result still related, however slightly.  We start observing the time-ordered sequences where:

  1. the Event a lies in the Past of the Events b and c;
  2. the Event a lies in the Past of the Events y0, x, s and y;
  3. S(a, b) is the hypersurface separating the Future of a from the Past of b;
  4. S(a, c) is the hypersurface separating the Future of a from the Past of c.



 A complex sequence of bicones illustrating why General Relativity considers the spatial dots all around a single Event related to the Event.  The image shows a special case: here, on the basis of the Minkowski geometry of 1908, an entire spatial volume lying out of the light cone should be causally disconnected from the Event a in its Past.  In reality, it shall result still partially related.  Event a lies in the Past of the Events b and c.  Also, a lies in the Past of the Events y0, x, s and y.  S(a, b) is the hypersurface separating the Future of a from the Past of b.  S(a, c) is the hypersurface separating the Future of a from the Past of c.  Visibly, the Event s, in the Future of the Event x, lies on the external boundary of the Future of the Event a, part of the same hypersurface of constant Time as the Event b.  For this reason, the pink coloured hypersurface centred on the Event s results partitioned, the inner portion hosting a 3D space-like volume causally related to the Event a much more strictly than the portion out of the external boundary of the hypersurface S(a, c) (abridged by H.-J. Borchers, R. N. Sen/2006)








The Event s, in the Future of the Event x, visibly lies on the external boundary of the Future of the Event a, part of the same hypersurface of constant Time as the Event b.  For this reason the pink coloured hypersurface centred on the Event s results partitioned, the inner portion hosting a 3D space-like volume causally related to the Event a much more strictly than the portion lying out of the external boundary of the hypersurface S(a, c).











The definitions of “Events” (whatever kind of Event, triggerings, measurements and “observations” included) and of the derived concept of causal relation between Events are part of the investigational fields of Theoretical Physics, Quantum Physics and Relativity.  Why?  Because the majority of the evidence hints at a common origin, one characterised by extremely high values of energy density (then, high temperature) and of geometrical curvature for the greater Environment where everything, measurement instruments and Machinery included, operates.  With reference to the figure below, several worldlines a, b, c, d intersecting today a 3-dimensional hypersurface (e.g., the worldline a at the point Q) are a System.


























 The worldlines of all elementary constituents of all Triggers, measurement instruments, electronic inspectors and Observers were joined in an extremely small volume when the initial condition acted.  In the figure are shown four worldlines a, b, c, d originated by the same Event O in the Past, as seen in the classic perspective: four distinct histories.  All technological and analytical applications happen in the slices existing between two Events, e.g. P and Q.  Each slice or leaf is 3-dimensional.  Time indicates the location of a chosen 3-dimensional space, a hypersurface in the infinitely wider 4-dimensional space (abridged by J. A. Wheeler, K. Thorne, C. W. Misner/1973)


It is frequently overlooked that all the worldlines were joined in a single dot, back at the Event O, origin of Time at t = 0.  The nearly spherical surface in the past, crossed by the worldline a at P, represents a phase of the historical evolution when space-time was not locally curved as it is today. 

This classic point of view, mainly derived from the ideas of Einstein, Minkowski, Lemaître and Gamow, once backed by Hubble’s and Eddington’s discoveries, started to hold as a paradigm.  And it still holds today, at least in part.  To reduce all this to mere theory, in opposition to facts, is not possible: so many are today its direct applications in our life and Machinery.  As an example, all smartphones’ GPS location devices integrate routines with formulas of General Relativity to be as precise as they are, evidencing the correctness of the Theory.   

How many 3-dimensional surfaces (or leaves, or sheets) does the 4-dimensional solid above contain?  The answer is based on counting:

  • ∞³ space points;

and, in a coordinate system making the metric diagonal:

  • 3 diagonal components of the metric specifiable per space point;

say, 3 choices of the metric per space point. 

Therefore, a total amount of possible 3-dimensional leaves equal to: 

                                           ∞^(3·∞³)

An amount later heavily reduced by the added dynamical condition of constructive interference, but still a huge one.  These numbers are not shown as a sterile exercise of infinitesimal calculus, rather to stress that for at least one century Nature has been answering humanity's continued questions by inviting us to replace infinities with Numbers so big as to be easily confused with infinities.  
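
A minimal Python sketch of the same counting on a finite-lattice stand-in, an assumption made purely for illustration (N lattice points per side instead of infinitely many, k allowed values per metric component instead of a continuum), showing how quickly the count becomes a Number easily confused with infinity.

    # Minimal sketch: finite-lattice stand-in for the counting above.
    # N and k are illustrative assumptions replacing the infinities of the text.
    import math

    N = 100          # lattice points per spatial side  -> N**3 space points
    k = 10           # allowed values for each of the 3 diagonal metric components

    log10_total = 3 * N**3 * math.log10(k)        # log10 of k**(3 * N**3)

    print(f"space points             : {N**3:,}")
    print(f"metric choices per point : {k**3}")
    print(f"possible 3-geometries    : 10^{log10_total:,.0f}")   # 10^3,000,000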

  For thousands of years physicists and philosophers have interrogated themselves about the “unreasonable success” of Mathematics in our everyday life. A success directly felt by all the industrial applications


Relevance of the history of the Events

Events strictly or slightly related

















Classical version pre-2000









What is a Trigger?

From the Theory of Information point of view:

"a device to label the status of physical or logical entities"






We’ll start listing, as seen from the point of view considered standard until 2000, what is and what is not possible in terms of the Events' causal connection.  No interaction, gravitational nor electromagnetic, is possible with particles lying in the space out of the bicones: all that lies in that space is causally disconnected with respect to the object placed at the Triggered Event.  In this classic approximation, only what lies in the Past light cone is causally connected with what lies at the Event position.  The Future light cone traces the paths of light rays emitted in every possible direction from the Event at the origin, and the Past light cone indicates the paths of light rays arriving at the Event's location at the Present moment.  In this classic view the Event has a purely geometric meaning: a dot of space-time.  It is not necessarily the place where an interaction also happens.  The rules about Events considered valid by Special Relativity, between 1905 and 1915, were:

  • light beams emitted from the Event location travel exclusively along the Future light cone.  (On the contrary, as we’ll see later in more detail, in the modern Quantum Physics view trajectories do not exist and particles are emitted in both directions, Future and Past);
  • arriving at the Event's location at the Present, all fall within the Past light cone;
  • launched from the Event's location, all travel on trajectories bounded by the Future light cone;
  • two Events cannot exist superposed at the same worldpoint: all Events are unique identifiers;
  • other Events, at different spatial locations, have Past and Future light cones which are offset from those of the Event we have arbitrarily designated as being at the origin;
  • the areas where the Past and Future light cones of two Events overlap, are in their common Past and Future respectively, but each Event will have regions of spacetime from which they can receive and send Information that are not shared with the others.

 

 Qualitative evolution of the Hubble horizon (dashed line) and of the scale factor (solid curve). The time coordinate is on the vertical axis, while the horizontal axes are space coordinates spanning a two-dimensional spatial section of the cosmological manifold. The inflationary phase extends from ti to tf , the standard cosmological phase from tf to the present time tc. The shaded areas represent causally connected regions at different epochs. At the beginning of the standard evolution the size of the currently observed Universe is larger than the corresponding Hubble radius. All of its parts however, emerge from a spatial region that was causally connected at the beginning of inflation (abridged by   Gasperini/2007)


































After 1915, with the advent of General Relativity, the point of view changed considerably.  Every worldpoint is still the origin of the bicone of the active Future and the passive Past, but:

  • the two zones are no longer separated by an intervening region of causally disconnected Events;
  • it is possible for the cone of the active Future to overlap with that of the passive Past;
  • it is possible to experience Events now that will in part be an effect of our future Events or decisions (Trigger and Measurement Events included).

Adding to General Relativity the Principle of General Covariance, Einstein made a precise statement about the fact that global evolution does not exist.  From his point of view, the time t is just a label we assign to one of the coordinate axes.  The figure presented in the precedent section titled "Events’ causal connection.  Introduction” was published in 1973, before the discovery of the Inflationary mechanism by Starobinsky, Linde and Guth.  The figure above, on the contrary, includes also this phase and is the model considered standard from 1982 until ~1993.  Here, the qualitative evolution of the Hubble horizon is shown by the pair of dashed lines, and that of the scale factor by the external solid curve.  The time coordinate is on the vertical axis, while the horizontal axes are space coordinates spanning a two-dimensional spatial section of the cosmological manifold.  The inflationary phase extends from ti to tf, the standard cosmological phase from tf to the present time tc.  The shaded areas represent causally connected regions at different epochs.  At the beginning of the standard evolution the size of the currently observed Universe was larger than the corresponding Hubble radius.  All of its parts emerged from a spatial region which was causally connected at the beginning of inflation, implying that the causal connection is maintained also after the inflation.  It is important to understand that this initial causal connection is what still today lets a motor in the Blowformer Machine run following the same Physical Laws as the motor in the Palletiser, 150 m away.  The figure above also shows that there is a wide all-around sector which was causally connected but which receded so fast that it has long since moved out of our horizon: therefore, disconnected also with respect to actual events.   



Modern view

Degrees of freedom

In the year 2000 Tom Banks and Willy Fischler discovered that the mass density parameter Ω is determined as the inverse of the number N of degrees of freedom.  Refer to the figures below: the curvature parameter Ω0 encodes the system density, a relation between mass + energy and volume.  But the volume strictly depends on the number of degrees of freedom.  And that’s why the number of degrees of freedom has since then been considered an input parameter at the most fundamental level of physics.    


Curvature determines the sum of the inner angles of a triangle, which in the three cases here depicted shall be:

   Ω0  > 1,     Inner total angle  > 180˚

   Ω0  < 1,     Inner total angle  < 180˚

   Ω0  = 1,     Inner total angle  = 180˚

Matter plus energy density parameter Ω0 is translated into a spatial curvature, a curvature depending on the number of degrees of freedom of the system.  From this discovery of the year 2000 it has been possible to derive a new definition of Causal Connection between Events, close to everyday measurements and referred to our macroscopic scale.  Causal connection exists only between points in a small subspace of the bicone.  This, in turn, implies that the Environment, much smaller than expected, cannot be ignored 
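
A minimal Python sketch of the angle-sum criterion for the positively curved (Ω0 > 1) case: on a unit sphere the inner angles of a geodesic triangle sum to more than 180˚, while in flat space they sum to exactly 180˚ and on a hyperbolic surface to less.  The vertices below are an illustrative choice.

    # Minimal sketch: angle sum of a geodesic triangle on the unit sphere
    # (the positively curved, Omega_0 > 1 case).  Vertices are unit vectors.
    import numpy as np

    def angle_at(P, Q, R):
        # Angle at vertex P between the great-circle arcs P->Q and P->R,
        # measured in the plane tangent to the sphere at P.
        tq = Q - np.dot(Q, P) * P
        tr = R - np.dot(R, P) * P
        cosang = np.dot(tq, tr) / (np.linalg.norm(tq) * np.linalg.norm(tr))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    def angle_sum(A, B, C):
        return angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)

    # Octant triangle: three mutually orthogonal vertices, three 90-degree angles.
    A, B, C = np.eye(3)
    print(angle_sum(A, B, C))      # ~270.0 degrees > 180, as expected for Omega_0 > 1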








Shortly after, Raphael Bousso, Oliver DeWolfe and Robert C. Myers were able to derive a new definition, today standard, of causal connection between Events.  The perspective establishes the new concept of the Causal Diamond, an example of which is in the figure below, carrying into the scenario of everyday practical measurements (or Triggerings) evaluations of causal relations which until a few years ago were extremely theoretical.  It is an approach looking at macroscopic (and not mesoscopic or microscopic) objects in the relativistic frame in which, as an example, all smartphones’ GPS also operate.  All that we’ll state for a Trigger is also true for measurement instruments like all the electronic inspections, whose triggering is nearly exclusively electromagnetic.  

De Sitter and hyperbolic spaces

  The curvature parameter Ω0 encodes the system density, i.e., the relation between (mass + energy) and the volume they occupy.  Today the experimental evidence favours the idea that the curvature Ω0 is that of a hyperbolic hypersurface, implying that all Signals, electromagnetic and gravitational, propagate through a 3+1D spacetime immersed in the de Sitter spacetime.  De Sitter is a manifold reproducing, after a renormalization, a 5D Minkowski spacetime in the limit of zero curvature.  The green-coloured 4D surface in the single hyperbolic space is a projection mapped by the corresponding points in the 5D de Sitter space.  With reference to the point P(t0) at the vertex, everything that happened in its past has been causally related to it since t = 0, the origin of Time.  The graphic shows that today's idea of a causal relation between Events considers Signals travelling in a five-dimensional space composed of an infinity of four-dimensional spaces 
















Imagine that p and q are two points on a Trigger's world line, with q later than p.   One can think of p as the beginning and q as the end of the Triggering (or measurement).    Then, it is observed that the total Entropy is bounded by the inverse of the mass density and that it cannot be observed at all outside a small subset of the much wider bicone, under two assumptions:

  • Entropy increases (the second Law of Thermodynamics holds);
  • only the Trigger's (or measurement instrument's) causal Past and causal Future are considered, ignoring everything else.

The latter introduces sensible restrictions: 

  1. at the point q, the endpoint of the Triggering (or measurement), the Trigger can only have received signals from the Past of q.   The rest of spacetime has not yet been seen; for the purposes of the measurement in question, its Entropy is operationally meaningless and can be ignored;
  2. the Trigger's Past is bounded by the Past light-cone of the point q, and Signals within the Trigger's Past must pass through this cone.  

Thus, if one wishes to bound the observable Entropy, it is sufficient to bound the Entropy on the Past light-cone of the endpoint q.   It is not enough for Entropy (or Information) to lie in the Trigger's Past: to be observed, it actually has to reach the Trigger, or at least a region that can be probed by the Trigger.   But a measurement that starts at p can only detect what lies in the causal Future of p.     



 Two-dimensional lateral cut of the Causal Diamond. After 1990 the Multiverse replaced the classic concept of the Universe: a multitude of coexisting, non-communicating bicones, and new Events deriving from each single Event.  Each of the infinite yellow dots in the image is part of the paths followed by the object to move from the point p to q in the Future of p (image abridged from R. Bousso, 2000) 








In this modern perspective it is implicit that p and q are joined by at least one world-line, and this holds in both points of view, the classic Hamiltonian and the modern Quantum Mechanical.  To go from a point in the Past to a point in the Future, the different paths are (a toy numerical sketch follows this list):

  • possible, in the Hamiltonian perspective;
  • actual, in the Quantum Mechanical perspective (Quantum democracy principle).
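To make the "all paths contribute" idea concrete, here is a deliberately small numerical toy, entirely of our own construction: worldlines joining p and q are discretised on a coarse grid, each path contributes a phase built from the simplest free-particle action, and the amplitude coupling p to q is the sum over every path.  All names, units and grid values are hypothetical.

    import itertools, cmath

    # Toy "sum over histories" between two events p and q (illustrative only).
    # A path is a sequence of intermediate spatial positions chosen from a small
    # grid; each path contributes a phase exp(i*S/hbar) with a free-particle action.

    HBAR = 1.0                 # natural units (hypothetical choice)
    MASS = 1.0
    DT = 1.0                   # time step between successive events on a path
    N_STEPS = 4                # number of hops between p and q
    GRID = [-1.0, 0.0, 1.0]    # allowed intermediate positions (coarse toy grid)
    X_P, X_Q = 0.0, 0.0        # spatial positions of the endpoints p and q

    def action(path):
        """Free-particle action summed over the segments of one discrete path."""
        s = 0.0
        for x0, x1 in zip(path[:-1], path[1:]):
            v = (x1 - x0) / DT
            s += 0.5 * MASS * v * v * DT
        return s

    amplitude = 0 + 0j
    for middle in itertools.product(GRID, repeat=N_STEPS - 1):
        path = (X_P,) + middle + (X_Q,)
        amplitude += cmath.exp(1j * action(path) / HBAR)

    # Every path contributes and none is privileged, echoing the
    # "quantum democracy" wording used in the text.
    print(f"paths summed: {len(GRID) ** (N_STEPS - 1)}, amplitude = {amplitude:.3f}")

Refining the grid and the number of steps only multiplies the paths; the point of the toy is merely that the coupling is a sum over histories, not the choice of a single privileged one.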



Causal Diamonds and histories

The descriptions visible in the figures above, below, and on right side are named Causal Diamonds:

  • above, what happens in the 2-D yellow region (a lateral cut of the 3-D Causal Diamond) of the form C(p,q) for some pair of points (p,q);  
  • at right side, their derivation from the wider Minkowski causal bicone, in a 3-D geometry;
  • below, a single Causal Diamond in a 3-D geometry, not forgetting that they are objects whose dimension is ≥ 4.

  3-dimensional view of a Causal Diamond, with two space dimensions plus time.  In the example, Past and Future light cones (light violet) are superimposed around a 3-dimensional sphere, depicted above as a dark violet circle because we cannot show the third spatial dimension. After 2000 the light violet volume named Causal Diamond became a portion smaller than the entire bicone that in 1907 was imagined causally related.  The oblique lines represent three of the infinite paths joining p and q, each path a different history in the sense attributed by Feynman (abridged from R. Bousso, 2000)

In the figure at right side we show three of the infinite and actual worldlines conceived by Quantum Mechanics.  There are innumerable curves joining two Events, and none is privileged.   If q is in the Future of p, several world lines will connect them; otherwise the entire region C(p,q) will be empty or degenerate.  To evaluate the phase difference and the coupling between two Events, we have to account for the contribution from all paths.  Colloquially, we can say that everything between two Events contributes to their coupling.  The pervasiveness of the agents of interaction further affirms the demise of empty space; what may now appear obscure shall become clearer in the following sections.   Causal Diamonds are bounded by a:

  • top cone, a portion of the Past light-cone of q;
  • bottom cone, a portion of the Future light-cone of p;

occupying in this way the entire subset of points inside the (pink coloured) bicones in the couple of figures above.   The cones usually intersect at a two-dimensional spatial surface A (see the upper figure above), the edge of the Causal Diamond. In any case, the Entropy in the Causal Diamond must pass through the top cone and, we repeat, all Signals must have entered through the bottom cone.   The net positive result of this line of reasoning is that the:

  • Entropy within a Causal Diamond is under strict theoretical control (a quantitative note follows this list); 
  • 4-dimensional volume is much smaller than what was conceived in 1907. Being much smaller, causal-relation inferences become closer to the scales of dimensions and durations of our everyday measurements.
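The "strict theoretical control" of the first bullet is usually made quantitative by the covariant entropy bound, also due to Bousso; we quote it here only as a reminder, in our own notation and in units with k_B = 1:

    \[
    S \;\le\; \frac{A}{4\,\ell_P^{2}},
    \qquad
    \ell_P^{2} \;=\; \frac{G\hbar}{c^{3}},
    \]

where A is the area of the two-dimensional edge surface A of the Causal Diamond mentioned above and ℓ_P is the Planck length.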


 





 3-dimensional view of the Causal Diamond. These spaces have dimension ≥ 4.  In evidence, the worldline γ passing through the point p and proceeding toward the future at q. The 3-dimensional volume inside a Causal Diamond is nonetheless huge, considering that the wave packets of light and of the curvature of the spatial geometry (gravitons) had (15-18) billion years available to propagate (abridged from Wolfram Research)







Example of Events’ causal connection 

Triggering containers by means of Laser light













 Exchange of light beams between Trigger and container, to determine container's kinematics




             Also the measurement of a frequency shift of the exchanged photons, through the application of the Doppler formula, fails to provide an alternative way to measure containers’ velocity

Events’ causal connection, in its classic version pre-2000, encounters an important field of application with respect to the exchange of light signals.  This, in the Electronic Inspectors is commonly associated to Triggering of typically accelerated and fast moving containers.  In the following, we’ll outline the true rigorous scenario around a Triggering.  This is useful to understand why Triggering, an action strictly related to the kinematic of a container, results on the opposite always referred to an external speed reference, since decades an Encoder.  This, when it is well known that only direct measurements provide absolute values, those whose relative errors are  minimised.  The choice toward external speed references is forced by the complexity of internal (direct) measurements.   To use an Encoder, in the perspective of modern Physics, means to coarse-grain a container position into a Shifting-Register cell.    It is a coarse-graining operation because the Shifting-Register cell has typically an extension, measured along the container's direction of movement, bigger than a container diameter.  Coarse-graining, for definition, is clearly not a synonimous of increase on precision, just the opposite, and in our case kinematic uncertainty means False Rejects.  Curvature effects described below are in the Shifting-Register emulated and widely reinforced by accelerations and decelerations of Conveyors and by the contacts of the container with the lateral guides causing container sliding.  What should happen if we’d be measuring containers' speed in a direct rigouros way ?



Example 1   

Exchanging light signals

Triggers, in the most general and real case, operate in the Factory's accelerated reference frame, meaning they operate along worldlines which are neither geodesics nor orbits.   For this reason the Trigger has to be considered in motion along a curve γ (see figure at right side) with tangent vector u and proper time s as parameter.   What follows refers to direct-reflection Trigger photosensors, but is also valid for the widespread thru-beam Trigger photosensors.   This last case carries similar results, because the interruption of the light beam, too, is not instantaneously received by the Trigger u on the worldline γ.   

By means of the Trigger, the CPU processing the information encoded in the electric signals fed in by the Trigger can, on the basis of a dedicated software routine, only deduce the spatial velocity of a passing container relative to its own local rest frame.  

Meaning, by the exchange of light signals:   

  • at the event A on γ the Trigger sends a light signal to the container, which receives it at the event P on γ′; 
  • at P the light signal is reflected back to the Trigger which receives it at the Event B on γ.   

The line from P to A0 is curved to indicate that spacetime itself is curved.  Now denote by Υ and Υ′ the null geodesics connecting A to P and P to B respectively.  Let A0 be the event on γ, subsequent to A and antecedent to B, which is simultaneous with P with respect to the Trigger u and such that the space-like geodesic ζ(P→A0) joining P to A0 is extremal with respect to γ.   Repeated reading of the times of: 

  • emission of light signals at A;
  • recording of the reflected echo at B;

allows one to determine the length of the space-like geodesic segment connecting P to A0, which represents, by definition, the instantaneous spatial distance of the container at P from the Trigger on γ, as in the figure at right side.  The relative velocity of the container with respect to the Trigger u is then deduced by differentiating the above spatial distance with respect to the Trigger's proper time.  The relative velocity so determined is along the Trigger's local line of sight, so it is a radial velocity, i.e., a velocity either of recession or of approach.  The measurement process involving the events A, P and B is non-local insofar as the measurement domain is finite.  
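In flat spacetime, and only as a caricature of the A → P → B procedure just described (precisely the curvature-free idealisation the rest of this section warns about), the distance at the mid-event A0 is half the round-trip light time, and the radial velocity follows by differencing two such distances with respect to the Trigger's proper time.  Every timestamp below is invented for illustration:

    # Radar-style distance and radial velocity from exchanged light signals.
    # Flat-spacetime caricature of the A -> P -> B procedure; times in microseconds.

    C_MM_PER_US = 299_792.458            # speed of light, mm per microsecond

    def radar_distance_mm(t_emit_us: float, t_echo_us: float) -> float:
        """Distance at the mid-event A0: half the round-trip light time."""
        return 0.5 * C_MM_PER_US * (t_echo_us - t_emit_us)

    def radial_velocity_mm_per_us(readings):
        """Finite-difference estimate of d(distance)/d(proper time).

        `readings` is a list of (t_emit, t_echo) pairs, in microseconds of the
        Trigger's proper time; each velocity is referred to successive mid-times.
        """
        mids = [0.5 * (te + tr) for te, tr in readings]
        dists = [radar_distance_mm(te, tr) for te, tr in readings]
        return [(d1 - d0) / (m1 - m0)
                for d0, d1, m0, m1 in zip(dists, dists[1:], mids, mids[1:])]

    # Two successive emission/echo pairs (hypothetical numbers): the container recedes.
    readings = [(0.0, 0.00080), (100.0, 100.00090)]
    print([f"{v:+.6f} mm/us" for v in radial_velocity_mm_per_us(readings)])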





Example 2   

Doppler shift

An alternative way to measure the container's velocity is based on measuring the frequency shift of the exchanged photons and applying the Doppler formula. The image at right side hints at the process.  But the velocity so determined is only an equivalent velocity, because the frequency shift can also be caused by geometry perturbations, that is, by something unrelated to the container's motion.   

As already stated, curvature effects are in general entangled with inertial terms resulting from the choice of the reference frame, so we shall just term as curvature any possible combination of them.  The measurement of a relative velocity is the result of a local measurement, which does not contain curvature terms, and a nonlocal one, which depends explicitly on the curvature.   
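A hedged sketch of the Doppler route, using the first-order, non-relativistic two-way formula (our simplification, not a full relativistic treatment); the carrier frequency and the shift are illustrative values only:

    # "Equivalent velocity" from a two-way Doppler shift (reflection geometry).
    # First-order approximation: f_received ~ f_emitted * (1 + 2*v/c) for a target
    # approaching at radial speed v, with v << c assumed throughout.

    C = 299_792_458.0                      # speed of light, m/s

    def equivalent_velocity(f_emitted_hz: float, f_received_hz: float) -> float:
        """Radial velocity inferred from the two-way shift (v > 0: approaching)."""
        return 0.5 * C * (f_received_hz - f_emitted_hz) / f_emitted_hz

    # Hypothetical laser Trigger: 474 THz carrier, echo shifted by +3.16 kHz,
    # corresponding to roughly 1 m/s of approach.
    f0 = 474e12
    print(f"{equivalent_velocity(f0, f0 + 3.16e3):.3f} m/s")

As the text stresses, nothing in this number distinguishes a genuine container velocity from a frequency shift of geometric (curvature or reference-frame) origin: it is an equivalent velocity only.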






Phenomena Are Independent Of Transfer In Time


 Emmy Noether. The law of conservation of Energy manifests that phenomena are independent of transfer in Time


In 1915 the German mathematician Emmy Noether demonstrated that: 

  1. it is possible to obtain the law of conservation of energy in its ordinary mathematical form, recognising that phenomena are independent of transfer in time (a short sketch follows this list).     
  2. the law of conservation of momentum is due to the fact that a phenomenon is not dependent on the region of space in which it occurs. 
  3. the law of conservation of angular momentum follows from the isotropy of space.  
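A compressed, textbook-style sketch of point 1 (time-translation invariance implying energy conservation), written in our own notation and not quoted from Noether's paper:

    % Assume a Lagrangian L = L(q, \dot q) with no explicit time dependence and
    % the Euler--Lagrange equation  \partial L/\partial q = d/dt(\partial L/\partial \dot q).
    \[
    \frac{\mathrm{d}L}{\mathrm{d}t}
       = \frac{\partial L}{\partial q}\,\dot q
       + \frac{\partial L}{\partial \dot q}\,\ddot q
       = \frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot q}\,\dot q\right)
    \qquad\Longrightarrow\qquad
    \frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot q}\,\dot q - L\right)
       = \frac{\mathrm{d}E}{\mathrm{d}t} = 0 .
    \]

Points 2 and 3 follow the same pattern, with spatial translations and rotations in place of time translations.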















From these theorems it follows that a relationship between cause and effect under fixed conditions is essentially related to the laws of conservation of energy, momentum and angular momentum.  If one assumes that even one of these conservation laws is violated in some kind of process, this will unavoidably lead to a change in the form in which the necessity of causal relations manifests itself.  For instance, to presume that the law of conservation of energy is violated would force one to acknowledge that absolutely identical conditions realised at different times would be accompanied by different effects.  Time would then be capable of acting physically on processes, and time would then have to be included in the conditions (causes) of these processes.  

The supposition that processes exist for which the law of conservation of momentum does not hold would compel acceptance of the fact that the same conditions realised in different regions of space would generate different effects.  This would mean that space exerts a physical effect on these processes, and space would then have to be included in the conditions of these processes.  In the same way, violation of the law of conservation of angular momentum would mean that a rotation of certain phenomena through some angle in space could physically affect the behaviour of the phenomena.  If in the actual world there were entities that did not obey the laws of conservation of energy, momentum and angular momentum, then the idea of the necessity of causal relations determining the behaviour of these entities would have to be stated as: the behaviour of an entity is of necessity determined by its interactions with the environment and with space-time.   


Predictability

Quantum versus Classic Causality 




















This relation to the environment as part of the experimental configuration is exactly what theory and experiments started to hint at around ninety years ago, marking a transition from the Classic to the Quantum (or modern) point of view and downplaying the relevance of Time.  This may be better perceived through the analysis of the graphics below.  All Classic points of view about causation consider true only the graphic at left side, corresponding to what a physical object experiences and what can be said about its Past and Future at two consecutive times t1, t2 where t2 > t1.  Two different histories are depicted in red. With reference to the history of the object at left side, before the time t1 we: 

  • have full historical knowledge of its Past, 
  • may imagine several “possible” Future states.  

With reference to the history of the object below at right side, at the time t1 it undergoes a positional change until the time t2, thus: 

  • excluding a vast amount of (left-oriented) Events which were considered possible at its Past time t1,
  • leaving in existence possible Events in the respective futures of the histories (or branches) that the object's memory at time t2 considers not to have happened.

The point of view below at right side is the modern one, where a measurement that has happened (a toy counting sketch follows this list): 

  • does not reduce the space of Future eventualities, which continue to remain potential,
  • excludes the vast majority of the Past potential histories. 
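A deliberately crude counting toy, built on assumptions of our own (binary "events" and a record that is definite only up to the present tick), to illustrate how a record fixed up to t2 excludes most previously possible pasts while leaving the futures open:

    from itertools import product

    # Toy histories: each history is a tuple of binary "events" over N_STEPS ticks.
    # Entirely illustrative; nothing here models real quantum dynamics.

    N_STEPS = 6          # total ticks considered
    NOW = 3              # index of the present time t2: record fixed up to here
    RECORD = (0, 1, 1)   # what the object's "memory" holds about ticks 0..NOW-1

    all_histories = list(product((0, 1), repeat=N_STEPS))
    compatible = [h for h in all_histories if h[:NOW] == RECORD]

    pasts_excluded = len(all_histories) - len(compatible)
    futures_open = len({h[NOW:] for h in compatible})

    print(f"histories considered:         {len(all_histories)}")   # 64
    print(f"pasts excluded by the record: {pasts_excluded}")        # 56
    print(f"futures still open:           {futures_open}")          # 8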


  What can be said about the Past and Future of a physical object at two consecutive times t1, t2 with t2 > t1.  Depicted in red is the same history of an object seen as it evolves in the Classic and in the Quantum view.  At left side the Classic view, also the layman's view: here, before the time t1 we have full records about its Past and may imagine several "possible Future" states.  At right side, the Quantum view of the object's history: at the time t1 the object undergoes a positional change until the time t2, thus excluding a vast amount of left-oriented Events which were considered "possible Futures" at the object's Past time t1.    But, visibly, at time t2 possible Events continue to exist also in the respective futures of the histories (or branches) that the object's memory at time t2 considers not to have happened (abridged from G. R. F. Ellis, in A. Ashtekar, V. Petkov eds., 2014)












In an open system, like all those of our everyday life, quantum effects introduce a perceptible unpredictability, perceptible in the fact that Classic Laws fail to predict, on the basis of Past conditions, the expected Future outcomes.  As an example (see figure below at right side), divergences arise between the expected values of physical properties calculated on the basis of Present information and the values observed later in the Future.  The situation is mirrored by Statistical Mechanics. The idea is still widespread that, at least in principle, a macroscopic system is, at the microscopic level, fully and correctly described by deterministic, time-reversible Physical Laws. If such a microscopic description were complete, then it should also include all the physical properties measured in Thermodynamic systems, i.e., those consisting of a mole of particles. As a consequence, the famously irreversible character of the Second Law of Thermodynamics is typically ascribed to our inability to obtain infinitely precise knowledge of the microscopic state of the system, superposed with special initial conditions for the macroscopic, observable quantities of the system: the so-called Ignorance Interpretation of Probability Theory.  



 Yet our everyday-life open systems fail to follow, in their observed Future states, what is expected on the basis of Present data, even when these data were known with ideally infinite precision. The reason, then, cannot originate from mere uncertainties in the knowledge of initial conditions (abridged from G. R. F. Ellis, in A. Ashtekar, V. Petkov eds., 2014)
























Our inability to know the microstate of the system and hence to predict its Future evolution is aggravated by the fact that no system is fully isolated from the rest of the world, meaning that the Future evolution of a system is influenced by its environment. In order to see the deterministic character of a system's Time evolution, one would need to know the state of its environment, which in turn depends on still another environment, ad infinitum. Since it is impossible to include in our calculations such a wider environment, which might consist of the entire Universe, it is frequently argued that we have no choice but to describe our system using the concept of Probabilities.  

The Ignorance Interpretation of Probability Theory, born and developed in the Classic framework of Statistical Mechanics, unavoidably misses the Quantum Mechanics revolution, the body of experimental and theoretical discoveries made after the Interpretation was conceived.  The Author of these notes openly challenges the basic assumption, contained in several textbooks, that Classical or Quantum Mechanics can provide an accurate microscopic description of a Thermodynamic system. Today's idea is that the Quantum effects intervening in the system after an Event also include the: 

  1. Future interactions of the open system with all other systems. Interactions starting to happen after the Present t0 and before the Future Time t1;
  2. Entanglement, existing since the start of Time between each of the elementary particles in the system and the grand superposition (Multiverse);
  3. Entanglement, existing since the start of Time, among the elementary particles within the system themselves.

The Future interactions, originating from the so-called "Everywhere" region all around a bicone, are impossible to account for when trying to calculate what eigenvalue an observable property shall acquire at a Future time.  Entanglement refers to a generalized relation of each element with the grand Superposition, with minimal possibility of acquiring reliable data about the latter, simply because the element is by definition part of that Superposition: a problem of wrong perspective, whose solution is impossible.  The couple of viewpoints, Classic and Quantum, shown before encounter a quite similar interpretation when comparing, in the figure below, Classic and Relativistic predictability for the evolution of a closed system.  "Closed" are the systems ideally insulated, interacting neither electromagnetically nor gravitationally with others.  


 Two closed physical systems whose evolution is predictable in one case and unpredictable in the other.  The Classic and Quantum viewpoints shown before encounter a similar interpretation in terms of predictability when comparing the Classic and the Relativistic case. Closed are the systems ideally insulated, not interacting with others.  The Classic view implies a predictable path for an object; the Relativistic view, on the contrary, is unpredictable starting from the first bifurcation, an unpredictability which may be understood in terms of coexistence of the future branches (abridged from F. De Felice, C.J.S. Clarke, 1990)


Three examples: 

















  1. quantum bits' (qubits) circuitry operates in an Environment engineered to emulate a closed system, at least as far as electromagnetic interactions are concerned;
  2. an A-class fridge emulates a closed system more closely than a C-class fridge does;
  3. the Universe is the only system ideally closed with respect to all kinds of "external" influences, simply because, by definition, no external environment exists.

The Classic Mechanics’ view, known the initial conditions, implies a predictable path for an object.  The Modern relativistic view, on the opposite, is unpredictable yet starting by the first foliation, in the figure above represented as a bifurcation, a near synonimous of superposition of linear functions.  In the modern Quantum view, the eigenfunction predicting the state of a physical system after an interaction, includes typically a multiplicity of eigenvalues, later observed associated to different weighs finally translated in the laymen term of probabilities.  What implies an intrinsic unpredictability.  Unpredictability unavoidable when modeling the system as a superposition of coexisting future branches.  Branches in which the Observer, Detector or measurement equipment shall later encounter itself.  For example, the predictability for the outcomes when tossing two ideal dices, is a state vector, whose values are the multiplicity of outcomes visible at right side.  Tossing two (ideal) fair dices we’ll have established a Sample Space comprising 36 Events.  Here Event is a combination of the outcomes of the tossing of the couple of dices.  They do not exist “fair dices” nor it exists a way to manufacture them that ideal way, perfectly symmetric, perfectly balanced over all their sides.   



 Tossing 2 fair dice establishes a state vector composed of 36 combinations of 2 elements.  If by "outcome" we mean the superposition of these 2 elements' outcomes, as is typically the case for real dice, then we discover that the weight of certain sums is much higher than that of others
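The 36-element Sample Space and the uneven weights of the sums can be checked in a few lines, a toy count for ideal dice which, as noted above, real dice only approximate:

    from collections import Counter
    from itertools import product

    # All 36 ordered outcomes of two ideal six-sided dice, and the weight of each sum.
    sample_space = list(product(range(1, 7), repeat=2))
    weights = Counter(a + b for a, b in sample_space)

    print(len(sample_space))                 # 36 elementary Events
    for total in sorted(weights):
        print(total, weights[total], weights[total] / len(sample_space))
    # The sum 7 carries weight 6/36, while 2 and 12 carry only 1/36 each.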








The true number of Events is enormously higher, coherently with the (potentially) infinite dimensionality of the State Space.  In other words, in the case of an infinite-dimensional State Space we can know the state only by means of an infinite number of observations in infinite Sample Spaces, along an infinite Time. This viewpoint has itself been revised after 1990, to take into due account the impressive experimental results (for example, those of Alain Aspect's team at the Institut d'Optique, France, and of Akira Tonomura's at the Hitachi research laboratories, Tokyo, Japan) massively accumulated worldwide along the decade 1980-1990.  When these notes are written, the revised viewpoint resembles the one described in the page "Quantum Causal Theory".  



Examples of Events’ Causal Connections 


Triggering Containers by Laser Light


Events’ causal connection, in its classic version pre-2000, encounters an important field of application with respect to the exchange of light signals.  This, in the Electronic Inspectors is commonly associated to Triggering of typically accelerated containers.  In the following, we’ll outline the true rigorous scenario around a Triggering.  


 To use an incremental Encoder, like this optical model, in the perspective of modern Physics means to coarse-grain a container position.   It is a coarse-graining operation because the Shifting-Register cell has typically an extension, measured along the container's direction of movement, bigger than a container diameter and much bigger than the Encoder's resolution ( abridged by U.S. Digital/2015)












This is useful to understand why triggering, an action strictly related to containers' kinematic, results on the opposite always referred to an external speed reference, since decades an Encoder.  This, when it is well known that only direct measurements provide absolute values, those whose relative errors are  minimised.  The choice toward external speed references is forced by the complexity of internal (direct) measurements.  To use an Encoder, in the perspective of modern Physics,means to coarse-grain a container position into a Shifting-Register cell.  

It is a coarse-graining operation because the Shifting-Register cell has typically an extension, measured along the container's direction of movement, bigger than a container diameter.   Coarse-graining, for definition, is clearly not a synonimous of increase on precision, just the opposite, and in our case kinematic uncertainty means False Rejects.  Curvature effects described below are in the Shifting-Register emulated and widely reinforced by accelerations and decelerations of Conveyors and by the contacts of the container with the lateral guides causing container sliding.  What should happen if we’d be measuring containers' speed in a direct rigouros way?


















