I know little about orthogonally decomposable (odeco) tensors, but at a glance they seem to generalise familiar linear algebra in a way that is useful for statistical inference in mixture models, while remaining more computationally tractable than garden-variety tensor decompositions, which would be handy if it is indeed so.
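To make the tractability claim concrete, here is a minimal sketch of the tensor power method (as surveyed by Anandkumar et al. below) applied to a synthetic odeco tensor. Everything here is an illustrative assumption on my part: I build a symmetric order-3 tensor from orthonormal components, then recover them by power iteration with deflation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3

# Orthonormal components v_1..v_k (columns of V), via QR of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
V = Q[:, :k]
lam = np.array([3.0, 2.0, 1.0])  # positive weights

# Odeco tensor: T = sum_i lam_i * v_i (x) v_i (x) v_i
T = np.einsum('i,ai,bi,ci->abc', lam, V, V, V)

def tensor_apply(T, u):
    """Contract T along two modes: T(I, u, u)."""
    return np.einsum('abc,b,c->a', T, u, u)

def power_iteration(T, iters=100):
    """Tensor power iteration: u <- T(I, u, u), normalised each step."""
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = tensor_apply(T, u)
        u /= np.linalg.norm(u)
    lam_hat = tensor_apply(T, u) @ u  # T(u, u, u), the recovered weight
    return lam_hat, u

# Recover all k components by repeated power iteration plus deflation.
T_res = T.copy()
recovered = []
for _ in range(k):
    lam_hat, u = power_iteration(T_res)
    recovered.append((lam_hat, u))
    T_res = T_res - lam_hat * np.einsum('a,b,c->abc', u, u, u)
```

The point of the odeco structure is that each power iteration converges (quadratically, for exact odeco input) to one of the orthonormal components, so a handful of matrix-sized operations suffice where general tensor rank decomposition is NP-hard.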
- Anandkumar, A., Ge, R., Hsu, D., Kakade, S. M., & Telgarsky, M. (2015) Tensor Decompositions for Learning Latent Variable Models (A Survey for ALT). In K. Chaudhuri, C. Gentile, & S. Zilles (Eds.), Algorithmic Learning Theory (pp. 19–38). Springer International Publishing.
- Belkin, M., Rademacher, L., & Voss, J. (2016) Basis Learning as an Algorithmic Primitive. Presented at the 29th Annual Conference on Learning Theory (pp. 446–487).
- Rabusseau, G., & Denis, F. (2014) Learning Negative Mixture Models by Tensor Decompositions. arXiv:1403.4224 [cs].
- Robeva, E. (2016) Orthogonal Decomposition of Symmetric Tensors. SIAM Journal on Matrix Analysis and Applications, 37(1), 86–102. DOI.
- Robeva, E., & Seigal, A. (2016) Singular Vectors of Orthogonally Decomposable Tensors. arXiv:1603.09004 [math].