Articles
- Neural Networks, Manifolds, and Topology, C. Olah (2014), blog post explaining graphically how Neural Net layers separate inputs
- Universal Function Approximation by Deep Neural Networks with Bounded Width and ReLU Activations, B. Hanin (2017), explains how piecewise linear (PL) maps can be represented exactly by NNs with ReLU activations, and provides upper bounds on the minimum number of layers needed for a given NN hidden-layer width. Earlier work by Cybenko and Hornik-Stinchcombe-White showed that continuous functions can be approximated by NNs with a single hidden layer if the width of the NN is allowed to be arbitrarily large.
- Topological Deep Learning: Classification Neural Networks, M. Hajij, K. Istvan (2020) - very basic explanation of how NNs are function approximators, with a good bibliography
- Topology Applied to Machine Learning: From Global to Local, H. Adams, M. Moy (2021)
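The Hanin entry above turns on the fact that ReLU networks compute piecewise linear maps; conversely, simple PL functions can be written exactly as shallow ReLU nets. A minimal sketch (the function names are my own, for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# |x| is piecewise linear and equals relu(x) + relu(-x):
# a single-hidden-layer ReLU network with two units.
def abs_via_relu(x):
    return relu(x) + relu(-x)

# The "hat" function max(0, 1 - |x|) needs three ReLU units:
# relu(x+1) - 2*relu(x) + relu(x-1) reproduces it exactly on all of R.
def hat_via_relu(x):
    return relu(x + 1) - 2.0 * relu(x) + relu(x - 1)

xs = np.linspace(-2.0, 2.0, 9)
assert np.allclose(abs_via_relu(xs), np.abs(xs))
assert np.allclose(hat_via_relu(xs), np.maximum(0.0, 1.0 - np.abs(xs)))
```

Each hidden ReLU unit contributes one possible "kink", which is why the number of linear pieces a PL target has drives the width/depth trade-offs that Hanin bounds.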
Brain Networks
- Reviews: Topological Distances and Losses for Brain Networks, M. K. Chung, A. Smith, G. Shiu (2021). This is a survey of shape-approximating methods used in brain imaging, showing how persistent homology and Morse theory can be used to compute topological features of point clouds
- Since this brings up Morse Theory, here are John Milnor’s very nice lecture notes from 1963: Morse Theory (last accessed Feb 2020)
Posts
- Wikipedia: Persistent homology. Gives the definition of persistent homology on a filtered simplicial complex \(\emptyset = K_0 \subseteq K_1 \subseteq ... \subseteq K_n = X\) as the images of the simplicial homology maps \(H_p(K_i) \rightarrow H_p(K_j)\), and lists software packages that compute persistent homology.
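For the degree-0 case the definition above reduces to tracking connected components through the filtration, which a union-find over distance-sorted edges computes directly. A minimal, self-contained sketch for the Vietoris-Rips filtration of a point cloud (function name is my own; real packages such as those listed on the Wikipedia page handle higher degrees):

```python
import itertools
import math

def h0_persistence(points):
    """0-dimensional persistence pairs (birth, death) for the
    Vietoris-Rips filtration of a finite point cloud.
    Every point (component) is born at filtration value 0; a component
    dies at the edge length that merges it into an older component
    (the elder rule). One pair survives forever: death = inf."""
    n = len(points)
    # Edges of the filtration, sorted by the scale at which they appear.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    pairs = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri          # merge: one component dies at scale d
            pairs.append((0.0, d))
    pairs.append((0.0, math.inf))    # the last surviving component
    return pairs

# Two nearby points and one far away: the close pair merges at distance 1,
# the far point joins at distance 5, and one component persists forever.
print(h0_persistence([(0, 0), (0, 1), (5, 0)]))
# -> [(0.0, 1.0), (0.0, 5.0), (0.0, inf)]
```

Long-lived pairs (large death minus birth) correspond to persistent topological features; short-lived ones are usually treated as noise.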
Other