Diagram snippet from my paper. See large version. The numbers on the axes are arbitrary; they indicate increasing preservation depth (e.g., orthogonality at one point, and orthogonality plus added components at a higher point) plotted against the increasing representation power of the models. (See these academic sources describing representation power with respect to neural networks: source-1, source-2, source-3.)
The diagram covers a range of neural networks, from older Artificial Neural Network types to my Supersymmetric Artificial Neural Network, as seen on page 5 of the paper: