Materials scientists can spend years probing just a handful of materials to assess properties like strength, flexibility, radiation resistance, electrical conductivity and magnetism. The problem, however, is that there are thousands of untested metals, ceramics, alloys and other substances out there that could change the world—but not enough lab time to probe them all.
Consequently, scientists have recently turned to machine learning algorithms to speed up the process of identifying the properties of novel materials. However, for one class called polycrystalline materials, which are composed of microscopic grains of different crystal orientations all stuck together, current machine learning techniques and computer processors can't quite cope with the complexity.
That’s why University of Wisconsin-Madison engineering and computer sciences researchers teamed up to apply a new type of machine learning approach to the task of predicting the properties of polycrystalline materials. Led by Jiamian Hu, an assistant professor of materials science and engineering, and Yingyu Liang, an assistant professor of computer sciences, their research appears in the July 9, 2021, issue of the journal npj Computational Materials.
Much of the world around us, including common metals, some ceramics, rock and ice, is composed of polycrystalline materials. In this type of structure, microscopic grains mix together, forming complex microstructures. The size of these grains, their orientation and interactions with neighboring grains determine the properties of the material.
The state-of-the-art machine learning tool, the convolutional neural network, cannot directly capture the adjacency relationships of the grains in polycrystalline materials. In other words, that type of modeling leaves out an extremely important element of the microstructures.
Accordingly, the team chose to apply an emerging type of machine learning method to the problem. “Convolutional neural networks also don’t work since they would use an impractically large amount of computing resources when applied to practical-sized microstructures with billions of voxels per image,” says Hu. “To overcome this challenge, we needed a fundamentally different solution. We found a recently popular method called a graph neural network. There is only a one-word difference in the name, but the concept is totally different.”
In the graph neural network approach, the input information is presented as graphs, or a data structure consisting of nodes and edges. The technique is especially good at modeling systems of relations and interactions, making it ideal for representing the interactions between grains in polycrystalline microstructures.
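The node-and-edge idea can be sketched in a few lines. This is a purely illustrative toy, not the paper's encoding: the grain features and adjacency list below are made up for demonstration.

```python
# Hypothetical sketch: a polycrystalline microstructure as a graph.
# Each node is a grain; each edge joins two grains that share a boundary.
# Feature names and values here are illustrative assumptions.

grains = {
    # grain id -> illustrative features (size in voxels, orientation angle)
    0: {"size": 120, "orientation": 0.3},
    1: {"size": 85,  "orientation": 1.1},
    2: {"size": 200, "orientation": 2.4},
}

# Edges connect grains that are adjacent in the microstructure.
edges = [(0, 1), (1, 2)]

def neighbors(g):
    """Return the grains adjacent to grain g."""
    return [b if a == g else a for a, b in edges if g in (a, b)]

print(neighbors(1))  # -> [0, 2]
```

Because the graph records only which grains touch which, the same representation works whether a microstructure has a dozen grains or thousands.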
To demonstrate the concept, materials science and engineering PhD student Minyi Dai and computer science PhD student Mehmet Furkan Demirel, co-authors of the paper, came up with a method to convert 3D polycrystalline microstructures into graphs. Next, the team produced a dataset of roughly 500 polycrystalline microstructures ranging from 12 grains to 297 grains and their corresponding properties, and used it to train a graph neural network model for predicting links between microstructure and properties. The researchers show that the error rate of property prediction remains below 10% even when the model is trained on smaller datasets of about 100 microstructures. This is particularly valuable given that acquiring microstructure datasets experimentally is usually slow and expensive.
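At the heart of a graph neural network is a message-passing step: each node updates its features using those of its neighbors, so information about a grain's surroundings flows through the graph. The snippet below is a minimal toy version with one scalar feature per grain and simple mean aggregation; it is a sketch of the general technique, not the architecture used in the paper.

```python
# Toy message-passing step of a graph neural network.
# Each grain (node) updates its feature by averaging over its neighborhood.
# Features, edges, and the aggregation rule are illustrative assumptions.

features = {0: 1.0, 1: 2.0, 2: 4.0}   # one scalar feature per grain
edges = [(0, 1), (1, 2)]              # shared grain boundaries

def message_pass(feats, edges):
    """One round of neighborhood averaging (node included in its own mean)."""
    adj = {n: [] for n in feats}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    return {
        n: sum([feats[n]] + [feats[m] for m in adj[n]]) / (1 + len(adj[n]))
        for n in feats
    }

updated = message_pass(features, edges)
print(updated[1])  # grain 1 averages itself with grains 0 and 2
```

Stacking several such rounds, with learned weights in place of the plain average, lets each grain's representation reflect progressively larger neighborhoods, which is what makes the approach a good fit for grain-interaction effects.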
The team also demonstrates that their graph neural network model eclipses three different state-of-the-art convolutional neural network models in computational efficiency, and remarkably, their model can be extended to model a 5,000-grain microstructure using just one high-end graphics processing unit, which makes modeling polycrystalline materials accessible to the wider research community. "This was never possible before," says Hu. "This shows the computational efficiency of our model."
Over time, as the graph neural network analyzes more data, its error rate will continue to drop, and it will better reveal the secrets of polycrystalline microstructures—for example, which grain feature (e.g., size, shape, crystal orientation) is more important for learning certain properties of materials. The dataset produced by the project, available on GitHub, will also serve as a valuable resource for other scientists seeking to understand microstructures and benchmark their own machine learning models.
While the work itself is important, Hu says he’s equally excited by the collaboration between two areas: materials science and computer science. “Perhaps what is particularly noteworthy for this work is the interdisciplinary, inter-college collaboration between Liang’s group and mine,” he says. “They know the cutting-edge machine learning models, and we know what problems are important in our field. Together, we were able to solve an important problem in materials science and engineering.”
Jiamian Hu is a Grainger Institute for Engineering Fellow.
This work was supported by the American Chemical Society Petroleum Research Fund under award PRF 61594-DN19, the National Science Foundation under awards 1740707 and IIS-2008559, and the Air Force Office of Scientific Research under award FA9550-18-1-0166.
Author: Jason Daley