Supersymmetry finally found ‘using’ neural networks, and surprisingly, not at the LHC [Breaking News]

God Bennett
3 min readMay 3, 2019

--

I apologize for what looks like clickbait: this is probably not the type of discovery you anticipated if you’re reading as a physicist. It concerns using supersymmetry to build a somewhat novel type of artificial neural network, which I call the “Supersymmetric Artificial Neural Network” (or the “Edward Witten/String theory powered supersymmetric artificial neural network”). I plan to discuss it at a string theory conference in Europe to which I was accepted after application, namely the Gordon Research String Theory Conference.


Why bother trying to use supersymmetry, especially given its failure to be detected at the LHC?

Pertinently, the “Edward Witten/String theory powered supersymmetric artificial neural network” is one wherein supersymmetric weights are sought.

Many machine learning algorithms have not been empirically shown to be biologically plausible; deep neural network algorithms, for example, have not been observed to occur in the brain. Regardless, such algorithms work well in practice.

Likewise, regardless of supersymmetry’s status as one of mankind’s best candidate theories for describing the cosmos at bottom, and despite its elusiveness at the LHC, it may be quite feasible to borrow formal methods from physics even when the corresponding physical phenomena have yet to be observed. It may therefore be pertinent to try to construct a model that learns supersymmetric weights, as I proposed throughout my paper, following the progression of solution geometries from SO(n) to SU(n) and onwards to SU(m|n).
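To make the first two steps of that progression concrete, here is a small sketch of my own (not taken from the paper): weights can be constrained to a symmetry group via the matrix exponential, since exponentiating a skew-symmetric matrix yields an SO(n) element, and exponentiating a skew-Hermitian matrix yields a unitary matrix. The function names and parameterization are illustrative assumptions, not an existing API.

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_weight(params, n):
    """Map n*(n-1)/2 free parameters to an SO(n) weight matrix."""
    A = np.zeros((n, n))
    A[np.triu_indices(n, k=1)] = params
    A = A - A.T            # skew-symmetric: A.T == -A
    return expm(A)         # orthogonal: W.T @ W == I, det(W) == 1

def unitary_weight(params_real, params_imag, n):
    """Analogous unitary weight: expm of a skew-Hermitian matrix.

    (A purely imaginary diagonal would be needed for all of u(n);
    it is omitted here for brevity.)
    """
    H = np.zeros((n, n), dtype=complex)
    H[np.triu_indices(n, k=1)] = (np.asarray(params_real)
                                  + 1j * np.asarray(params_imag))
    H = H - H.conj().T     # skew-Hermitian: H.conj().T == -H
    return expm(H)         # unitary: U.conj().T @ U == I
```

Optimizing over `params` then keeps the weight matrix on the group manifold by construction, rather than enforcing the constraint with a penalty term.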

Artificial neural network/symmetry group landscape visualization:

Surprisingly (although not entirely so to some machine learning researchers), machine learning benefits from the use of symmetry group structure. Picture the U(1) symmetry group, the structure of the unit circle (see page 2 of this text). It has already been applied to machine learning. If you zoom out, as in the diagram snippet above from my paper, there is a progression of better and better machine learning algorithms going from SO(n) to U(n), the same types of symmetry group structures rife in physics, all of which concern normal space-time.

I propose yet another use of symmetry group structure, namely SU(m|n) from string theory, which concerns super-Poincaré space, where supersymmetry algebraically combines normal space-time coordinates with anti-commuting coordinates, or features (see page 7 of this text). Yes, the same supersymmetry that is yet to be found experimentally at the LHC. Fortunately, machine learning research has tended to set aside the fact that its algorithms may not be precisely found in nature, while still working well. (Another way to view this: people have successfully taken math and formal methods from various places in physics and applied them to build machine learning models, regardless of the original authors’ intentions.)
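For readers unfamiliar with anti-commuting coordinates: the Grassmann generators of supersymmetry satisfy θᵢθⱼ = −θⱼθᵢ and θᵢ² = 0, and they can be represented concretely as ordinary matrices. The following is my own illustrative sketch (a Jordan–Wigner-style construction, not something from the paper):

```python
import numpy as np

sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent 2x2 block
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

theta1 = np.kron(sm, I2)   # first Grassmann generator (4x4)
theta2 = np.kron(sz, sm)   # second Grassmann generator (4x4)

# Grassmann relations: theta_i @ theta_j == -theta_j @ theta_i
# and theta_i @ theta_i == 0
anticommutes = np.allclose(theta1 @ theta2, -(theta2 @ theta1))
nilpotent = (np.allclose(theta1 @ theta1, 0)
             and np.allclose(theta2 @ theta2, 0))
```

This is only meant to show that "anti-commuting features" are a well-defined algebraic object one could compute with, not a proposal for how the supersymmetric network's weights are implemented.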

A parallel here, perhaps: despite machine learning’s past ignorance of experimental neuroscience, the field increasingly seeks to follow neuroscience results. Likewise, physicists continue to seek supersymmetry’s sparticles at the LHC and through other experimental means.

For a clear overview of the supersymmetric artificial neural network, see the article below:

Article: My “Supersymmetric Artificial Neural Network” in layman’s terms

Author:

I am an atheist, casual bodybuilder, and software engineer.


God Bennett

Lecturer in Artificial Intelligence and inventor of “Supersymmetric Deep Learning” → Github/Supersymmetric-artificial-neural-network