# Detection of phase transition via convolutional neural networks

###### Abstract

A convolutional neural network (CNN) is designed to study the correlation between the temperature and the spin configuration of the two-dimensional Ising model. Our CNN is able to find the characteristic feature of the phase transition without prior knowledge. We also introduce a novel order parameter on the basis of the CNN to identify the location of the critical temperature; the result is found to be consistent with the exact value.

Studies of phase transitions connect various areas of theoretical and experimental physics [1, 2, 3, 4, 5, 6, 7]. Calculating order parameters is one of the conventional ways to define phases and phase transitions. However, some phases, like topological phases [8], do not have any clear order parameter. Even when certain theoretical order parameters exist, like entanglement entropy [9, 10], they are difficult to measure in experiments.

Machine learning (ML) techniques are useful for resolving this undesirable situation. In fact, ML techniques have already been applied to various problems in theoretical physics: finding approximate potential-energy surfaces [11], studying transitions in glassy liquids [12], solving mean-field equations [14] and quantum many-body systems [13, 15], and studying topological phases [16].

In particular, ML techniques based on convolutional neural networks (CNNs) have been developing rapidly since the groundbreaking record [17] in the ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) [18], and they have been applied to investigate phases of matter with great success in classifying phases of 2D systems [19, 20, 21] and 3D systems [22, 23]. It is even possible to draw phase diagrams [23].

In these previous works, however, one needs some information about the answer to the problem a priori. For example, to classify the phases of a system, the training process requires the values of critical temperatures or the locations of phase boundaries. This fact has so far prevented applications of ML techniques to unknown systems.

A learning process without any answers is called unsupervised learning. Indeed, there are known results on detecting phase transitions based on a typical unsupervised learning architecture called the autoencoder, which is equivalent to principal component analysis [32], and on its variant called the variational autoencoder [wetzel2017unsupervised]. These architectures encode the information of given samples into lower-dimensional vectors, and it has been pointed out that such an encoding process is similar to encoding physical state information into order parameters of the system. However, it is not evident whether the latent variables provide the critical temperature.

We propose a novel but simple prescription to estimate the critical temperature of a system via a neural network (NN) based on ML techniques, without a priori knowledge of the order parameter. Throughout this letter, we focus on the ferromagnetic 2D Ising model on a square lattice, mainly for the following three reasons. First, this system is one of the simplest solvable systems [24, 25] with a phase transition, so it is easy to check the validity of our method. Second, this system can be generalized to other classical spin systems like the Potts model, the XY model, and the Heisenberg model, so our method can be applied to these systems straightforwardly. Third, this model is a good benchmark for new computational methods [26, 27].

Using TensorFlow (r0.9) [28], we have implemented a neural network to study the correlation between spin configurations and discretized inverse temperatures. We find that NNs are able to capture features of the phase transition in their weights, even for a simpler fully connected (FC) NN, without any information about order parameters or the critical temperature. Fig. 1 reminds us of the discovery of the "cat cell" in the literature [29], in which the model recognizes images of cats without having explicitly learned what a cat is.

We examine the boundary structure in Fig. 1 by defining an order parameter, and we estimate the inverse critical temperature by fitting the distribution of the order parameter (Table 1).

Table 1: Estimated inverse critical temperatures $\beta_c$ for each system size; the last row is the exact value.

| System size | $\beta_c$ (CNN) | $\beta_c$ (FC) |
|---|---|---|
| 8×8 | 0.478915 | 0.462494 |
| 16×16 | 0.448562 | 0.433915 |
| 32×32 | 0.451887 | 0.415596 |
| exact | 0.440686 | |
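For reference, the exact inverse critical temperature of the 2D Ising model quoted in Table 1 follows from Onsager's solution and can be computed directly:

```python
import math

# Exact Onsager value for the square-lattice Ising model (J = 1):
# beta_c = ln(1 + sqrt(2)) / 2
beta_c = math.log(1.0 + math.sqrt(2.0)) / 2.0
# beta_c ≈ 0.440687
```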

First, we explain the details of our NNs. If the reader is not familiar with machine learning based on NNs, we suggest reading review literature, e.g. [33, 34]. Our NN model is designed to solve classification problems. It is constructed from a convolution layer and a fully connected layer, so it can be regarded as the simplest model of a CNN (Fig. 2). The whole definition is as follows:

$$\sigma \;\longmapsto\; y = f_{\mathrm{FC}} \circ f_{\mathrm{conv}}(\sigma), \qquad (1)$$

where $\sigma$ denotes a spin configuration and $y$ the output vector.

The first transformation in (1) is defined by a convolution including training parameters called filters, a rectified linear activation (ReLU), and a flattening procedure:

$$f_{\mathrm{conv}}:\ \sigma \;\mapsto\; u = \mathrm{flatten}\bigl(\mathrm{ReLU}(F * \sigma)\bigr), \qquad (2)$$

where $F$ denotes the filters and $*$ the convolution.
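As a hedged illustration of this first transformation, the following NumPy sketch applies a single filter (stride 1, no padding; the function name is ours), then ReLU, then flattening. The actual implementation in the letter uses TensorFlow.

```python
import numpy as np

def conv_relu_flatten(x, f):
    # single-channel 2D convolution (stride 1, no padding),
    # followed by ReLU and flattening, as in Eq. (2)
    H, W = x.shape
    k = f.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * f)
    return np.maximum(out, 0.0).ravel()  # ReLU, then flatten
```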

The second transformation is defined by a fully connected layer including training parameters called weights, and a softmax activation:

$$f_{\mathrm{FC}}:\ u \;\mapsto\; y_k = \frac{\exp\bigl(\sum_j W_{kj} u_j\bigr)}{\sum_{k'} \exp\bigl(\sum_j W_{k'j} u_j\bigr)}, \qquad (3)$$

where $W$ denotes the weight matrix.

We classify configurations into classes labeled by $k$ in the final step. This label is related to the inverse temperature through (7). Because $y_k \geq 0$ and $\sum_k y_k = 1$, we can interpret $y_k$ as the probability of classifying a given state into the $k$-th class of a classification problem. In total, we have two types of parameters:

$$\theta = \{F, W\}. \qquad (4)$$
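The probabilistic reading of the softmax output (3) can be checked numerically with a minimal, numerically stable sketch (the function name is ours):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; the outputs are
    # non-negative and sum to one, so they can be read as probabilities
    e = np.exp(z - z.max())
    return e / e.sum()
```

For example, `softmax(np.array([1.0, 2.0, 3.0]))` assigns the largest probability to the last class.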

In the later experiments, these parameters will be updated. In the first trial, we take an NN without convolution (12). In this case, the parameters are not the filters in (4) but only the weights.

We need a training set to optimize the above parameters (4) in the CNN. We call it $\mathcal{D}_L$, where $L$ indicates the linear size of the square lattice. The definition is

$$\mathcal{D}_L = \bigl\{ (\sigma^{(n)}, \bar\beta^{(n)}) \bigr\}_{n}, \qquad (5)$$

where $\sigma^{(n)}$ is a configuration generated under the Ising Hamiltonian on the square lattice,

$$H = -J \sum_{\langle i,j \rangle} \sigma_i \sigma_j - h \sum_i \sigma_i, \qquad (6)$$

at inverse temperature $\beta^{(n)}$, using the Metropolis method.

$T_{\min}$ ($T_{\max}$) is the minimum (maximum) temperature for the target Ising system. The temperature resolution is defined by $\Delta T = (T_{\max} - T_{\min})/N_s$, where $N_s$ is the number of samples. $\bar\beta$ is the discretized inverse temperature defined by

$$[\bar\beta]_k = \begin{cases} 1 & \text{if } \beta \text{ falls in the } k\text{-th bin}, \\ 0 & \text{otherwise.} \end{cases} \qquad (7)$$

This is called the one-hot representation, and it enables us to implement the inverse temperature in the NN directly. We use the index $k$ or $k'$ to represent a component of this vector, as already used in $y_k$.
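A minimal sketch of this one-hot discretization (the binning convention and helper name are our assumptions):

```python
import numpy as np

def one_hot_beta(beta, beta_min, beta_max, n_classes):
    """Map an inverse temperature to a one-hot vector of length n_classes."""
    # locate the bin, clipping to the sampled range
    idx = int((beta - beta_min) / (beta_max - beta_min) * n_classes)
    idx = min(max(idx, 0), n_classes - 1)
    vec = np.zeros(n_classes)
    vec[idx] = 1.0
    return vec
```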

Now let us denote our CNN, explained in (1), as $y(\sigma)$. We need an error function that measures the difference between the output of the CNN and the correct discretized inverse temperature. In our case, the task is classification, so we take the cross entropy as the error function:

$$E(\theta) = -\sum_{n} \sum_{k} [\bar\beta^{(n)}]_k \log y_k(\sigma^{(n)}), \qquad (8)$$

where $\sigma^{(n)}$ runs over the chosen training samples and $\bar\beta^{(n)}$ is the corresponding one-hot inverse temperature.
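Because the target is one-hot, only the correct class contributes to the cross entropy; a small sketch (names ours):

```python
import numpy as np

def cross_entropy(t, y):
    # cross entropy between a one-hot target t and a softmax output y;
    # only the class with t_k = 1 contributes to the sum
    return -np.sum(t * np.log(y))

t = np.array([0.0, 1.0, 0.0])
y = np.array([0.1, 0.8, 0.1])
loss = cross_entropy(t, y)  # = -log(0.8)
```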

Roughly speaking, the parameters are updated via $\theta \leftarrow \theta - \varepsilon\,\partial E/\partial\theta$ with a small parameter $\varepsilon$. More precisely, we adopt a sophisticated version of this method called Adam [30], as implemented in TensorFlow (r0.9) [28], to achieve better convergence.
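The plain update rule mentioned above can be sketched as follows; Adam [30] refines it with per-parameter adaptive moments, which we do not reproduce here (the function name and numbers are ours):

```python
import numpy as np

def sgd_step(theta, grad, eps=0.01):
    # one plain gradient-descent update: theta -> theta - eps * grad
    return theta - eps * grad

theta = np.array([1.0, -2.0])
theta = sgd_step(theta, np.array([0.5, -0.5]), eps=0.1)
# theta is now [0.95, -1.95]
```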

Our neural network learns the optimal parameters $F$ and $W$ in (4) by iterating the optimization of the cross entropy (8) between the answer $\bar\beta^{(n)}$ and the output constructed from stochastically chosen data:

$$\theta^{(t+1)} = \theta^{(t)} - \varepsilon\,\frac{\partial E}{\partial \theta}\bigg|_{\theta = \theta^{(t)}}, \qquad t = 0, 1, 2, \ldots \qquad (9)$$

As we show later, the weight matrix $W$ encodes a well-approximated critical temperature after 10,000 iterations of (9).

Here, we prepare the training set (5) by using the Metropolis method with the parameters

(10) |

The maximum and minimum values of $T$ fix the range of the inverse temperature $\beta = 1/T$. Note that the known value for the phase transition is $\beta_c = \ln(1+\sqrt{2})/2 \approx 0.4407$, or $T_c \approx 2.269$. This means that the configurations in our training data extend from the ordered phase to the disordered phase. In all, we prepare three types of training sets,

$$\mathcal{D}_8, \quad \mathcal{D}_{16}, \quad \mathcal{D}_{32}, \qquad (11)$$

corresponding to $L = 8$, $16$, and $32$.

We apply a weak negative magnetic field to realize a unique ground state at zero temperature. As a result, almost all configurations at low temperature (large $\beta$) are ordered with $\sigma_i = -1$.
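A hedged sketch of generating such configurations with single-spin-flip Metropolis updates under Hamiltonian (6), including a weak negative field; the field strength and function name are our illustrative choices:

```python
import numpy as np

def metropolis_sweep(spins, beta, J=1.0, h=-0.01, rng=None):
    # one sweep of single-spin-flip Metropolis on an L x L lattice
    # with periodic boundaries; h < 0 is the weak negative field
    rng = rng if rng is not None else np.random.default_rng()
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # energy cost of flipping spin (i, j) under Eq. (6)
        dE = 2.0 * spins[i, j] * (J * nn + h)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins
```

At large $\beta$ a lattice started in the all-down state stays almost fully ordered, as expected in the ordered phase.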

Before showing the CNN result, let us try a somewhat primitive experiment: training an NN without the convolution layer, i.e. a fully connected NN:

$$\sigma \;\longmapsto\; y = \mathrm{softmax}\bigl(W\,\mathrm{flatten}(\sigma)\bigr). \qquad (12)$$

We retain the same error function and optimizer, i.e. the cross entropy (8) and Adam. The heat map for the weight trained in this way is shown on the right side of Fig. 1. After 10,000 iterations, the NN does detect the phase transition. This NN is therefore sufficient for detecting the Ising phase transition, but it cannot tell us why it captures the transition. To answer that, we turn to our main target, the CNN, below.

Next, we train the CNN on the training set. Once we increase the number of iterations to 10,000, we find two possible ordered outcomes, which we denote case (A) and case (B), as shown in Fig. 3.

In case (A), we can observe two qualitatively different regions in the heat map of the weight $W$: a black-colored region and a gray-colored region. The boundary is close to the critical temperature. In case (B), the values in the heat map of $W$ lie in the gray-colored region and are almost homogeneous. We discuss later why only case (A) displays the phase transition.

We now turn to the multi-filter case. The results for all heat maps after 10,000 iterations are shown in Fig. 4. The stripe structure in the heat map of $W$ corresponds to its values connecting to the convolved and flattened nodes via five filters, of types (A), (A), (B), (B), and (B), respectively. Empirically, the number of filters should be large in order to detect the phase transition, because the probability of the appearance of type (A) increases with increased statistics.

From these experiments, we know that our model (1) seems to discover the phase transition in the given training data for the Ising model. To verify this statement, we would like to extract the critical temperature from our CNN after training. As a trial, we fix the architecture of the CNN: the number of filters, the number of channels, and the stride. The number of classes is taken as in the previous section.

First, we plot heat maps for the weight matrix $W$ in the CNN trained on $\mathcal{D}_L$. For every lattice size, we observe a domain-like structure with a boundary around the critical temperature. However, the heat map does not give the location of the boundary quantitatively. We therefore propose an order parameter based on the weight matrix $W$:

(13) |

and estimate the critical temperature from it. The result is shown in Fig. 5.

To quantify the boundary in the heat map of $W$, we define the critical temperature extracted by the CNN, $\beta_c^{\mathrm{CNN}}$, by fitting the order parameter with a step-like function whose fitting variables include the location of the jump. This function is motivated by the magnetization of the Ising model in the mean-field approximation. Table 1 shows the fit results for both the CNN and the FC NN. Our results show that $\beta_c^{\mathrm{CNN}}$ matches the exact critical temperature to 2 – 8 % accuracy. In comparison, the FC result is less accurate.
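Since the exact fitting function of the letter is not reproduced here, the following sketch illustrates the idea with an illustrative tanh step and a crude steepest-slope estimate of the jump location (all names and values are ours; a proper least-squares fit would refine this):

```python
import numpy as np

def step(beta, a, c, beta_c):
    # illustrative step-like function with a jump of height c at beta_c
    return c * (1.0 + np.tanh(a * (beta - beta_c))) / 2.0

betas = np.linspace(0.1, 0.9, 81)
w = step(betas, 20.0, 1.0, 0.44)  # synthetic "order parameter" data
# estimate the jump location as the steepest-slope point
beta_c_est = betas[np.argmax(np.diff(w))]
```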

Let us conclude this letter. We have designed simple neural networks to study the correlation between configurations of the 2D Ising model and the inverse temperature, and we have trained them with a stochastic gradient method implemented in TensorFlow. We have found that the weights of the neural networks capture a feature of the phase transition of the 2D Ising model. We have also defined a new order parameter (13) via the trained neural networks and found that it provides the value of the inverse critical temperature.

Why are our neural networks able to find a feature of the phase transition? The CNN experiments suggest an intuitive explanation. The filters in case (A) are close to uniform averaging filters; convolution with such a filter is equivalent to a real-space renormalization-group transformation, and the filters then reflect the local magnetization, which is related to the typical order parameter. This enables the CNN to detect the phase transition. By analogy, the FC NN might realize a real-space renormalization-group transformation internally.

Our NN model has the potential to investigate other statistical models. For example, it has been reported that CNNs can distinguish phases of matter: topological phases in gauge theories [19], phases in the Hubbard model [22], and phases in the Potts model [31]. It would be interesting to apply our design of neural networks to these problems and see whether the NN can discover nontrivial phases automatically, as it did in this letter.

We would like to thank K. Doya, K. Hashimoto, T. Hatsuda, Y. Hidaka, M. Hongo, B. H. Kim, J. Miller, S. Nagataki, N. Ogawa, M. Taki and Y. Yokokura for constructive comments and warm encouragement. We also thank D. Zaslavsky for carefully reading this manuscript. The work of Akinori Tanaka was supported in part by the RIKEN iTHES Project. The work of Akio Tomiya was supported in part by NSFC under grant no. 11535012.

## References

- [1] Kenneth G Wilson and John Kogut. The renormalization group and the expansion. Physics Reports, Vol. 12, No. 2, pp. 75–199, 1974.
- [2] Joseph Polchinski. Effective field theory and the fermi surface. arXiv preprint hep-th/9210046, 1992.
- [3] K Intriligator and N Seiberg. Lectures on supersymmetric gauge theories and electric-magnetic duality. Nuclear Physics B-Proceedings Supplements, Vol. 45, No. 2-3, pp. 1–28, 1996.
- [4] Roman Pasechnik and Michal Šumbera. Phenomenological review on quark-gluon plasma: concepts vs observations. arXiv preprint arXiv:1611.01533, 2016.
- [5] PC Hohenberg and AP Krekhov. An introduction to the ginzburg–landau theory of phase transitions and nonequilibrium patterns. Physics Reports, Vol. 572, pp. 1–42, 2015.
- [6] Qijin Chen, Jelena Stajic, Shina Tan, and Kathryn Levin. Bcs–bec crossover: From high temperature superconductors to ultracold superfluids. Physics Reports, Vol. 412, No. 1, pp. 1–88, 2005.
- [7] Bernhard Kramer, Tomi Ohtsuki, and Stefan Kettemann. Random network models and quantum phase transitions in two dimensions. Physics reports, Vol. 417, No. 5, pp. 211–342, 2005.
- [8] Xiao-Gang Wen. Topological orders in rigid states. International Journal of Modern Physics B, Vol. 4, No. 02, pp. 239–271, 1990.
- [9] Alexei Kitaev and John Preskill. Topological entanglement entropy. Physical review letters, Vol. 96, No. 11, p. 110404, 2006.
- [10] Michael Levin and Xiao-Gang Wen. Detecting topological order in a ground state wave function. Physical review letters, Vol. 96, No. 11, p. 110405, 2006.
- [11] Jörg Behler and Michele Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Physical review letters, Vol. 98, No. 14, p. 146401, 2007.
- [12] Samuel S Schoenholz, Ekin D Cubuk, Daniel M Sussman, Efthimios Kaxiras, and Andrea J Liu. A structural approach to relaxation in glassy liquids. Nature Physics, 2016.
- [13] Louis-François Arsenault, Alejandro Lopez-Bezanilla, O Anatole von Lilienfeld, and Andrew J Millis. Machine learning for many-body physics: The case of the anderson impurity model. Physical Review B, Vol. 90, No. 15, p. 155136, 2014.
- [14] Louis-François Arsenault, O Anatole von Lilienfeld, and Andrew J Millis. Machine learning for many-body physics: efficient solution of dynamical mean-field theory. arXiv preprint arXiv:1506.08858, 2015.
- [15] Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. arXiv preprint arXiv:1606.02318, 2016.
- [16] Dong-Ling Deng, Xiaopeng Li, and S Das Sarma. Exact machine learning topological states. arXiv preprint arXiv:1609.09060, 2016.
- [17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
- [18] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), Vol. 115, No. 3, pp. 211–252, 2015.
- [19] Juan Carrasquilla and Roger G Melko. Machine learning phases of matter. arXiv preprint arXiv:1605.01735, 2016.
- [20] Peter Broecker, Juan Carrasquilla, Roger G Melko, and Simon Trebst. Machine learning quantum phases of matter beyond the fermion sign problem. arXiv preprint arXiv:1608.07848, 2016.
- [21] Tomoki Ohtsuki and Tomi Ohtsuki. Deep learning the quantum phase transitions in random two-dimensional electron systems. Journal of the Physical Society of Japan, Vol. 85, No. 12, p. 123706, 2016.
- [22] Kelvin Ch’ng, Juan Carrasquilla, Roger G Melko, and Ehsan Khatami. Machine learning phases of strongly correlated fermions. arXiv preprint arXiv:1609.02552, 2016.
- [23] Tomi Ohtsuki and Tomoki Ohtsuki. Deep learning the quantum phase transitions in random electron systems: Applications to three dimensions. arXiv preprint arXiv:1612.04909, 2016.
- [24] Lars Onsager. Crystal statistics. i. a two-dimensional model with an order-disorder transition. Physical Review, Vol. 65, No. 3-4, p. 117, 1944.
- [25] Yôichirô Nambu. A note on the eigenvalue problem in crystal statistics. Broken Symmetry: Selected Papers of Y Nambu, Vol. 13, No. 1, p. 1, 1995.
- [26] Pankaj Mehta and David J Schwab. An exact mapping between the variational renormalization group and deep learning. arXiv preprint arXiv:1410.3831, 2014.
- [27] Ken-Ichi Aoki, Tamao Kobayashi, and Hiroshi Tomita. Domain wall renormalization group analysis of two-dimensional ising model. International Journal of Modern Physics B, Vol. 23, No. 18, pp. 3739–3751, 2009.
- [28] Martın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
- [29] Quoc V Le. Building high-level features using large scale unsupervised learning. In 2013 IEEE international conference on acoustics, speech and signal processing, pp. 8595–8598. IEEE, 2013.
- [30] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- [31] Chian-De Li, Deng-Ruei Tan, and Fu-Jiun Jiang. Applications of neural networks to the studies of phase transitions of two-dimensional potts models. arXiv preprint arXiv:1703.02369, 2017.
- [32] Lei Wang. Discovering phase transitions with unsupervised learning. arXiv preprint arXiv:1606.00318, 2016.
- [33] Hermann Kolanoski. Application of artificial neural networks in particle physics. In International Conference on Artificial Neural Networks, pp. 1–14. Springer, 1996.
- [34] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, Vol. 521, No. 7553, pp. 436–444, 2015.