Finite element methods (FEM) play an important role in the digital product development process of many industries, including aerospace and automotive. FEM make it possible to complement or replace physical experiments in the development process with numerical simulations. To this end, the design’s smooth geometry is discretised into a mesh: a finite number of polygons, called elements, that connect points sampled from the geometry, called nodes.
Most commercially successful mesh generation is based on variants of the decompositional medial axis algorithm or the recursive advancing front algorithm, tailored to industrial applications over years of experience. There is a significant body of literature on mesh generation. For a general overview, we refer to Owen (2000), and to Bommes et al. (2013) for a recent literature survey on mesh generation with quadrilateral elements. Notable approaches that work globally rather than decompositionally or recursively are Kälberer et al. (2007) and Prestifilippo and Sprave (1997).
An important aspect of the practical application of FEM is the quality of meshes with respect to their fitness to enable accurate simulations. Considerable effort is put into formalising quality criteria to guide mesh generation algorithms towards high-quality meshes, albeit with mixed success in the presence of complex and sometimes conflicting requirements from different simulation disciplines. (The advantage of being able to work on a mesh that is shared among different disciplines justifies the effort required to generate a joint mesh.) Hence, it is left to experienced engineers to review meshes and rework individual elements or adjacent sets of elements (i.e., subareas) of meshes in manual, time-consuming work. The cognitive process underlying this highly subjective activity has yet to be formalised into a set of requirements or heuristics for mesh generation.
We propose an alternative, data-driven approach to evaluating the quality of meshes: training a machine learning model on data from expert evaluations such that it generalises to unseen data. An advantage of this approach is that machine learning algorithms can abstract from the low-level features represented as input data to higher-level concepts, e.g., represented by hidden layers in an artificial neural network (Rumelhart et al., 1986). A machine learning model with good fit and generalisation can be applied to review and rework unseen, future meshes. Where the model predicts unsatisfactory quality, undesirable mesh structures may be generated anew and re-evaluated, through a fully automated routine, until a fixed point is reached.
However, the application of machine learning to mesh evaluation is non-trivial. For one, the explicit graph structure of meshes makes their evaluation different from traditional machine learning tasks. For instance, the quality of a subarea of a mesh is often interdependent with the structure and quality of the remaining mesh, whilst traditional machine learning tasks consider independent observations. For another, meshes are generally unstructured, i.e., the number of elements sharing a node is not constant. This rules out the direct, constant-size, vector-like representations that are required for most off-the-shelf machine learning algorithms. There are many different avenues to represent the task of mesh evaluation as a machine learning problem, including projection to 2D images or neighbourhood aggregation. This paper follows the latter approach. The contributions of this paper are three-fold.
First, we characterise the problem of evaluating mesh quality as a classification problem for machine learning. For this purpose, we introduce the element neighbourhood graph induced by a mesh, where each element is represented by a vertex, and edges between vertices represent adjacent elements. Then, the machine learning task is to predict whether an element belongs to a subarea that requires rework in order to achieve acceptable mesh quality.
Second, as most off-the-shelf machine learning methods work on vector-like representations rather than graph structures, we propose to extract simple but domain-specific, statistical features from the neighbourhood of each element. We will rely on machine learning to abstract from our low-level features to higher-level concepts.
Third, we conducted an empirical study of our approach that includes meshes from parts of a real-world passenger vehicle. Experimental results demonstrate applicability but also some limitations of our approach.
The remainder of this paper is organised as follows: Section 2 introduces necessary background. Section 3 introduces how we capture the neighbourhood of an element on a mesh, and in Section 4 we show how we represent the task of evaluating a mesh for supervised machine learning. We demonstrate practical applicability in Section 5. Then, we outline and discuss some alternative machine learning approaches to mesh evaluation in Section 6. Finally, the paper is concluded in Section 7.
2 Background

Machine learning studies computer algorithms that optimise the parameters of a (mathematical) model to fit data with respect to some cost function. This process is also called learning or training a machine learning model. Supervised (machine) learning is the task of optimising the parameters of a function that maps input data points, described by the values of the attributes of a feature vector, to an output, called label, based on sample pairs of input and label. If the model is able to generalise to unseen pairs, i.e., succeeds in predicting the true label of validation data with little error, it may be applied to make predictions about unseen inputs. When labels represent group membership, the task of identifying the label for any given input is called a classification problem. Many types of models that are suitable for supervised learning on classification problems are known in the literature, including classification trees and feedforward neural networks.
A classification tree (Breiman et al., 1984) is a directed tree-like model that can be constructed from data samples by recursively selecting an attribute from the feature vector and partitioning the samples into subsets according to the (discretised) values taken for that attribute. Hence, the parameters of a classification tree model include the attribute selections and the discretisation of attribute values for each partitioning. Extremely randomised trees (Geurts et al., 2006), for instance, strongly randomise both attribute selection and attribute discretisation. A classification tree model is applied to unseen input by recursively following the partitionings (branches) that match the values of the input’s attributes. The (predicted) label of the input data is the majority label of the partition where the recursion terminates. In order to control predictive performance, the folklore of machine learning practice also considers thresholds on the probability estimate of class membership as an alternative to the majority label. Better generalisation to unseen data may be achieved by combining multiple models (Zhou, 2012).
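To make the thresholding idea concrete, the following sketch trains an ensemble of extremely randomised trees with scikit-learn and applies a threshold to the averaged probability estimates instead of taking the majority label. All data and the threshold value are synthetic illustrations, not the paper's experimental setup.

```python
# Hypothetical sketch: thresholding the averaged class-membership probability
# of an extremely randomised trees ensemble instead of the majority label.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                         # synthetic feature vectors
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)  # synthetic labels

model = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)

# predict_proba averages the probability estimates of the individual trees;
# a threshold below 0.5 trades precision for recall.
proba = model.predict_proba(X)[:, 1]
threshold = 0.3                                       # illustrative value
predictions = (proba >= threshold).astype(int)
```

Lowering the threshold can only increase the number of inputs predicted positive, which is exactly the precision/recall tradeoff exploited later in the experiments.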
A feedforward (artificial) neural network is a directed acyclic graph-like model that consists of multiple layers of (computational) units, where each unit of one layer has connections only to units of the subsequent layer. Connections represent a flow of information: the output of a unit is input to another unit whenever the former is connected to the latter. The first layer contains precisely one unit per attribute of the feature vector; their output is given by the respective values of the input data. The output of each unit in all other layers is computed by a linear combination of its input values that is passed through a non-linear activation function. The outputs of the units in the last layer represent the label associated with the input data. The parameters of a feedforward neural network include all coefficients of the linear combination computed by each unit. A feedforward neural network can be trained, e.g., by first defining its topology (or architecture), initialising the parameters with (random) values, and then adjusting them based on sample input in a process called backpropagation (Rumelhart et al., 1986).
An important aspect of machine learning is the design of the feature vector. Vector- or grid-like data, where each data point (e.g., a row in a data table) takes values for a known, fixed set of attributes, is referred to as structured data. Structured data can easily be exploited for machine learning because a feature vector can be modelled directly after the available data columns. A much bigger challenge is encoding unstructured data, like graphs, into a feature vector for machine learning. A common limitation is that the structure of any feature vector is defined a priori and, in particular, cannot accommodate data of arbitrary size. However, the number of vertices that share an edge with any given vertex in a graph can, in general, be arbitrary.
This paper applies the usual basic graph theory definitions and notations, where a finite and undirected graph $G = (V, E)$ with no loops or multiple edges is defined by a finite set of vertices $V$ and a set of edges $E$, each of which is a set of two vertices from $V$. Instead of $V$ and $E$ we also use the notation $V(G)$ and $E(G)$. For any vertex $v \in V$, the vertex degree of $v$ is $\deg(v) = |\{u \in V : \{u, v\} \in E\}|$, i.e., the number of vertices that share an edge with $v$. A graph $G$ is called regular if the vertex degree of every vertex in $G$ is the same, and irregular otherwise.
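These notions translate directly into a few lines of code; the tiny graph below (a 4-cycle, chosen purely for illustration) is regular because every vertex has degree two.

```python
# Vertex degree and (ir)regularity for a graph given as a vertex set V and a
# set E of 2-element edge sets, following the definitions above.
V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4}), frozenset({4, 1})}

def degree(v):
    # number of edges incident to v, i.e. vertices sharing an edge with v
    return sum(1 for e in E if v in e)

# a graph is regular iff all vertex degrees coincide
is_regular = len({degree(v) for v in V}) == 1
```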
Meshes encode unstructured data closely related to graphs. In the context of structural mechanic simulations, meshes consist of elements that connect only few nodes (for performance reasons). Whilst the smallest possible shape for this purpose is a triangle, quadrilaterals are preferred over triangles for numerical reasons that go beyond the scope of this paper. Hence, in the following, we consider quad-dominant meshes, i.e., meshes that consist of mostly quadrilateral elements and few triangles. Given a finite set of points $P$ sampled from some geometry, an element is either a triangle from $\binom{P}{3}$ or a quadrilateral from $\binom{P}{4}$. Then, a (quad-dominant) mesh is a finite set $M \subseteq \binom{P}{3} \cup \binom{P}{4}$.
The choice of mesh to approximate the geometry of a design directly affects the accuracy and stability of subsequent FEM. In some of our application scenarios, the ideal mesh consists of coplanar square elements. Since it is unrealistic to approximate most relevant geometries with such an ideal mesh, triangles are introduced where quadrilaterals would be overly distorted.
Various metrics to measure element quality and mesh quality are employed to guide mesh generation towards favourable results. Aspect ratio, skewness, and warpage are examples of element quality metrics from mesh generation folklore. The aspect ratio of an element is the aspect ratio of a minimum rectangle containing the element. Skewness is the angular difference of the medians of the opposing edges in a quadrilateral. Warpage quantifies how much the nodes of a quadrilateral divert from being coplanar. Examples of mesh quality metrics are the minimum edge length of any element, or the fraction of triangles in a quad-dominant mesh.
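As an illustration of one such metric, the sketch below computes a simple warpage measure: the perpendicular distance of a quadrilateral's fourth node from the plane spanned by its first three nodes. This is one possible formalisation; industrial tools use varying (often normalised) definitions.

```python
# Hedged sketch of a warpage measure for a quadrilateral given as four
# 3D points: distance of the fourth node from the plane of the other three.
import numpy as np

def warpage(quad):
    a, b, c, d = (np.asarray(p, dtype=float) for p in quad)
    normal = np.cross(b - a, c - a)          # plane normal through a, b, c
    normal /= np.linalg.norm(normal)
    return abs(np.dot(d - a, normal))        # out-of-plane deviation of d

flat = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]      # coplanar quad
warped = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.2)]  # last node lifted
```

A perfectly coplanar quadrilateral has zero warpage; lifting one node out of the plane yields exactly the lifted distance here.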
Whilst mesh generation optimises such quality metrics, industry practice knows many additional, often conflicting requirements on mesh quality from different FEM disciplines. Hence, every relevant mesh undergoes a structured review process, where experienced engineers identify and label sets of elements that require rework in order to achieve a mesh that enables accurate structural mechanic simulations. An example is provided in Figure 1. Rework considers the modification of elements, including the generation of an alternative mesh, until a result is achieved that is deemed satisfactory by the reviewer.
3 Element Neighbourhood Graph
When classifying an element or a set of elements in any given mesh, expert reviewers naturally consider their fit within the remaining mesh, and nearby elements in particular. In order to capture adjacency of elements, we introduce the (element) neighbourhood graph induced by a mesh, where each element is identified by a vertex, and an edge connects two vertices whenever the respective elements share a node. Formally, the element neighbourhood graph of a mesh $M$ is the (undirected) graph defined by

$$N(M) = \left(M, \{\{e_1, e_2\} \subseteq M : e_1 \neq e_2, \, e_1 \cap e_2 \neq \emptyset\}\right).$$
For any element $e \in M$, the set of elements that consists of $e$ and all elements that share a node with $e$ is called the (1-ring) neighbourhood of $e$. The neighbourhood of $e$ can be expanded to its 2-ring neighbourhood by recursively including the neighbourhood of each member. Formally, for any graph $G = (V, E)$, the $k$-ring neighbourhood of a vertex $v \in V$ for any integer $k \geq 1$ is defined by

$$N_1(v) = \{v\} \cup \{u \in V : \{u, v\} \in E\}, \qquad N_k(v) = \bigcup_{u \in N_{k-1}(v)} N_1(u) \quad \text{for } k > 1.$$
In order to access vertices in the $k$-ring neighbourhood of $v$ that are not included in its $(k-1)$-ring neighbourhood, we define the $k$-ring neighbourhood frontier of $v$ for any integer $k \geq 1$ by

$$F_k(v) = N_k(v) \setminus N_{k-1}(v), \qquad \text{where } N_0(v) = \{v\}.$$
Accordingly, the $k$-ring (element) neighbourhood of an element $e \in M$ is given by $N_k(e)$ taken in the neighbourhood graph $N(M)$, and the $k$-ring (element) neighbourhood frontier of $e$ is $F_k(e)$. Examples are provided in Figure 2.
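The recursive definitions above can be computed directly by iterated set expansion over an adjacency mapping; the small path graph below is purely illustrative.

```python
# k-ring neighbourhood and frontier on a neighbourhood graph given as an
# adjacency mapping (vertex -> set of adjacent vertices).
def k_ring(adj, v, k):
    ring = {v}                               # N_0(v) = {v}
    for _ in range(k):                       # each step expands by one ring
        ring = ring | {u for w in ring for u in adj[w]}
    return ring

def k_ring_frontier(adj, v, k):
    # F_k(v) = N_k(v) \ N_{k-1}(v)
    return k_ring(adj, v, k) - k_ring(adj, v, k - 1)

# Path graph 0 - 1 - 2 - 3 - 4
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

For the centre vertex 2, the 1-ring neighbourhood is {1, 2, 3} and the 2-ring frontier is {0, 4}, i.e., exactly the vertices reached at distance two.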
4 Element Classification
In order to automate the task of reviewing a mesh for necessary rework, we formulate a classification problem for supervised learning. Our formulation exploits the reasonable assumption that the (binary) label of each individual element (i.e., rework or not) in a mesh can be determined by selected properties and adjacency structures. Whilst the precise relationship remains unclear, we apply machine learning to fit a model on historical data from expert evaluations.
To begin with, we associate the label of an element with properties of the element that can be used to guide its classification, i.e., we define the attributes of a feature vector. An obvious choice are properties of the element itself, including information that is known for most elements, such as aspect ratio and skewness. The label of an element might also depend on features of other elements in the mesh. (This contrasts element classification with traditional machine learning tasks, where every observation is considered independent of the others.) In order to capture the neighbourhood of an element, we want to include additional features based on adjacency as captured by the neighbourhood graph. Since a node in a mesh can be shared by an arbitrary number of elements, the neighbourhood graph of a mesh is, in general, irregular. This poses a challenge for the direct application of many machine learning techniques, which naturally handle feature vectors of a fixed size.
A straightforward approach to represent irregular graphs for the application of machine learning is to collect information from neighbouring vertices and aggregate the values of each relevant feature. Following this idea, we characterise the neighbourhood of an element by collecting and aggregating the values of quality metrics from the elements of its $k$-ring neighbourhood frontiers at varying distances $k \leq K$ for some limit $K$.
Let $p_i$ for $i = 1, \dots, n$ be a family of real-valued properties associated with each element in $M$, e.g., element quality metrics like warpage or aspect ratio, and let $a_j$ for $j = 1, \dots, m$ be a family of aggregation functions, e.g., minimum or mean. For any $K \geq 1$, we define the (constant-size) feature tensor $T(e) \in \mathbb{R}^{n \times m \times K}$ associated with an element $e \in M$ by

$$T(e)_{i,j,k} = a_j\left(\{p_i(f) : f \in F_k(e)\}\right).$$

The concrete choice of properties $p_i$, aggregation functions $a_j$, and limit $K$ represent hyperparameters of the classification problem. Having three axes, the feature tensor can be reshaped to a feature vector of size $n \cdot m \cdot K$.
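The feature tensor amounts to three nested loops over properties, aggregation functions, and frontiers; the toy values below (two properties, three aggregations, two frontiers) are purely illustrative.

```python
# Sketch of the feature tensor: aggregate each low-level property over each
# k-ring neighbourhood frontier with each aggregation function.
import numpy as np

def feature_tensor(frontiers, properties, aggregations):
    # frontiers: list of K collections of neighbouring elements
    return np.array([[[agg([prop(f) for f in frontier])
                       for frontier in frontiers]     # axis k: rings 1..K
                      for agg in aggregations]        # axis j: aggregations
                     for prop in properties])         # axis i: properties

props = [lambda e: e["aspect"], lambda e: e["skew"]]  # toy properties
aggs = [min, max, lambda xs: sum(xs) / len(xs)]       # min, max, mean
frontiers = [[{"aspect": 1.2, "skew": 0.1}, {"aspect": 2.0, "skew": 0.3}],
             [{"aspect": 1.5, "skew": 0.2}]]          # frontiers F_1, F_2

T = feature_tensor(frontiers, props, aggs)            # shape (2, 3, 2)
```

Reshaping `T` with `T.reshape(-1)` yields the flat feature vector of size $n \cdot m \cdot K$ described above.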
Note that our feature vector includes only simple, low-level properties of elements and, in particular, does not cover properties defined between adjacent elements (e.g., the dihedral angle) or more complex properties defined on sets of neighbouring elements. In our approach, however, we rely on machine learning to abstract from the low-level features to higher-level concepts, e.g., represented by hidden layers in a feedforward neural network or branches in classification trees.
5 Experimental Results
In this section, we report the results of extensive experiments to demonstrate the applicability and effectiveness of our approach on industry-standard meshes generated from a real-world design. The objects considered in our analysis include 317 parts of the body-in-white (vehicle body without engine, chassis subframe, electronics, interior fittings, etc.) of a recent Mercedes-Benz passenger car. We have limited our study to shell meshes, sometimes called surface meshes. Shell meshes are of particular importance to industry practice because they are used to simulate the behaviour of sheet metal, plastic, and composite parts based on the theory of thin shells. We refer to Olszak (1980) for further reading on this topic. Each mesh was generated to industry standard by a commercial application without human interaction, and evaluated by a domain expert. In turn, the expert marked sets of elements for rework whenever their quality (i.e., fitness to enable accurate simulations) was considered unsatisfactory.
The expert evaluation serves as a class label for each element, i.e., marked elements are labelled rework, and the remaining elements are labelled as passed. Note that there is an obvious caveat: The semantics of an element being marked by an expert is not necessarily that of a lack of individual (element) quality, but rather that of belonging to a set of elements that, in combination, might require rework. Adding to the label noise is that, sometimes, domain experts have marked a small set of adjacent elements even though a single element was intended, and very often, rows of otherwise inconspicuous elements were marked to connect low quality elements.
Yet, only a small fraction of elements have been considered for rework by domain experts, as industry-standard mesh generation has been tailored with years of experience. In fact, we observe an extreme class imbalance: only of the elements in our study () have been labelled for rework. The “worst” mesh consists of elements of which are labelled for rework, whilst two meshes contain only elements labelled as passed.
We have extracted seven low-level properties from elements. Five of these are continuous attributes, namely skewness, aspect ratio, warpage, area, and the angle between surface normals (through the centre of mass of an element and other elements from its neighbourhood) as a measure of curvature. The two remaining attributes are boolean, indicating triangle and border elements (elements that have at least one edge that is not shared with another element). We have fixed the aggregate statistics to minimum, maximum, and mean, and the limit to $K = 4$ as the other hyperparameters for the construction of feature tensors. Initial tests showed that increasing $K$ did not improve predictive performance, presumably because little useful information can be aggregated from $k$-ring neighbourhood frontiers with increasing $k$, as such sets generally include an increasing number of elements at increasing distance and from opposing directions. Altogether, the feature tensor consists of $7 \cdot 3 \cdot 4 = 84$ attributes.
Our experiments considered extremely randomised trees (ExtraTrees) and a feedforward neural network (FNN). The ExtraTrees setting considers an ensemble of extremely randomised trees, each with access to only a random selection of the attributes for partitioning. The predicted label for an input was determined by averaging the trees’ probability estimates of the input belonging to the rework class, and applying a threshold. The FNN setting considers a feedforward neural network with three hidden layers, ReLU activation on all units in the hidden layers, and sigmoid activation on the single unit in the output layer. Batch normalisation was applied after the first and second hidden layers. This architecture was selected via limited experimentation. Network optimisation was performed using the Adam optimiser.
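A forward pass through this kind of architecture can be sketched in plain NumPy. The hidden layer sizes below are assumptions for illustration (the concrete sizes are not reproduced here); only the input size of 84 follows from the feature tensor, and biases are omitted for brevity.

```python
# Architecture sketch: three ReLU hidden layers (hypothetical sizes), batch
# normalisation after the first two, and a single sigmoid output unit.
import numpy as np

rng = np.random.default_rng(0)
sizes = [84, 64, 32, 16, 1]   # input = 84 features; hidden sizes are assumed
Ws = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def batch_norm(h, eps=1e-5):
    # normalise each unit's activations over the batch axis
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

def forward(X):
    h = batch_norm(np.maximum(X @ Ws[0], 0))   # hidden layer 1 + batch norm
    h = batch_norm(np.maximum(h @ Ws[1], 0))   # hidden layer 2 + batch norm
    h = np.maximum(h @ Ws[2], 0)               # hidden layer 3
    return 1.0 / (1.0 + np.exp(-(h @ Ws[3])))  # sigmoid output in (0, 1)

p = forward(rng.normal(size=(8, 84)))   # 8 feature vectors -> rework scores
```

The sigmoid output plays the role of the probability estimate that is later thresholded, analogously to the ExtraTrees setting.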
In order to assess how accurately our approach performs as a predictive model in practice, i.e., how well it generalises to unseen meshes, we have implemented our experiments using k-fold cross-validation, a commonly applied technique to evaluate machine learning models on limited sample sizes. To begin with, the 317 meshes were randomly partitioned into roughly equal-sized subsets. Then, the label of each element from the meshes included in each subset was predicted by a machine learning model trained only on data from the remaining subsets. Note that we have decided to partition element data by mesh in order to avoid information leakage, since the labels of elements in the same mesh are generally not independent. As a result, the number of elements in each cross-validation subset can vary strongly with the size of the meshes included in the subset. In fact, the smallest mesh considered in our experiments consists of elements and the largest mesh consists of elements. The median mesh consists of elements. Therefore, we report predictive performance by collecting individual predictions from all cross-validation subsets (as if predictions stemmed from a single experiment) instead of averaging performance metrics across cross-validation subsets.
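Mesh-wise partitioning of this kind is readily expressed with grouped cross-validation, e.g., scikit-learn's `GroupKFold`; the synthetic data below (100 elements from 10 hypothetical meshes) only illustrates the mechanics.

```python
# Sketch of mesh-wise cross-validation splits: elements are grouped by the
# mesh they belong to, so no mesh contributes to both training and
# validation data (avoiding information leakage between dependent labels).
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 84))           # element feature vectors (synthetic)
y = rng.integers(0, 2, size=100)         # rework / passed labels (synthetic)
mesh_id = np.repeat(np.arange(10), 10)   # which mesh each element came from

splits = list(GroupKFold(n_splits=5).split(X, y, groups=mesh_id))
```

Each split keeps whole meshes on one side of the train/validation boundary, which is what makes the per-subset element counts vary with mesh sizes.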
A summary of our experimental results is given in Table 1. In order to control predictive performance and to demonstrate the available tradeoff between different metrics, we have included varying thresholds for the output of ExtraTrees and FNN, respectively, into our experiments.
Given the stark class imbalance in our experimental data, accuracy can be a misleading metric. Any trivial model for our application, e.g., one that predicts passed for every input, achieves an accuracy equal to the fraction of elements labelled passed. In fact, both ExtraTrees and FNN perform only slightly below this value. Hence, we turn our attention to the precision and recall statistics. Precision and recall are of particular practical relevance to our application domain, where recall measures the probability of detecting an element that expert review marks for rework, and precision measures the probability that an element predicted to require rework is also marked by expert review. (Note that the aforementioned trivial model achieves zero recall.) With the threshold set to, ExtraTrees and FNN correctly identify approximately of the elements that were marked for rework. Whilst demonstrating some success, these are unfavourable results: both models misclassify nearly of the elements that require rework, yielding meshes that might hinder accurate simulations. Lowering the threshold hyperparameter to, however, lifts recall to approximately and, respectively. This comes at the price of reduced precision, from approximately down to and down to, respectively, indicating a rise in false positives. A more detailed view of the available tradeoff between precision and recall is provided by the diagram in Figure 3. Depending on the cost associated with false positives, our results demonstrate practical applicability.
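The tradeoff discussed here can be reproduced on any scored predictions; the sketch below uses small synthetic scores and two illustrative thresholds to show recall rising and precision falling as the threshold is lowered.

```python
# Precision/recall under a varying decision threshold (synthetic scores).
import numpy as np

def precision_recall(y_true, scores, threshold):
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1))               # true positives
    precision = tp / max(pred.sum(), 1)             # of predicted rework, correct
    recall = tp / max((y_true == 1).sum(), 1)       # of actual rework, found
    return precision, recall

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
scores = np.array([0.1, 0.2, 0.32, 0.4, 0.35, 0.8, 0.6, 0.55, 0.3, 0.05])

p_hi, r_hi = precision_recall(y_true, scores, 0.5)  # stricter threshold
p_lo, r_lo = precision_recall(y_true, scores, 0.3)  # looser threshold
```

Lowering the threshold admits more predictions, so recall can only grow while extra false positives drag precision down; sweeping the threshold traces out a curve like the one in Figure 3.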
We believe that, in large part, measurable performance is limited by the vague semantics of an element being marked for rework by expert review, as per our previous reservations about label noise. Visual inspection of the predicted labels reveals, however, that both models roughly identify many subareas of a mesh that require rework, to a degree that warrants practical applicability. An example is provided in Figure 4. We conclude that an element-wise evaluation of predictive performance provides structurally unfavourable results, and its conclusiveness with respect to the original objective of identifying subareas that require rework is limited.
Hence, in a separate experiment, we have presented the results of our approach to two groups of engineers: experienced engineers who have participated in the review process for our previous experiments, and novice engineers from a support team that is not specialised in this particular kind of mesh. Although we cannot publish a detailed analysis, we can report that the more experienced engineers find false positives hindering efficient review. On the other hand, the novices accept and work along the predictions made by the machine learning model, delivering mesh qualities accepted by the remitting departments.
6 Discussion and Future Work
Our work demonstrates a data-driven approach to evaluating mesh quality for FEM. In principle, it can be integrated into a fully automated routine that incrementally re-generates low-quality subsets of a mesh and re-evaluates the result, until a fixed point is reached that represents a mesh fit to enable accurate simulations. Whilst experimental results show practical applicability, the imprecise semantics of expert evaluations hinders a more conclusive evaluation. The current practice of mesh review leaves unclear, e.g., whether an individual element that was marked for rework was also central to the decision about mesh quality, or whether it was merely included in a set of elements that requires re-meshing as a consequence of other (e.g., more distant) elements rendering the mesh undesirable. The immediate ramifications are two-fold: For one, the resulting label noise can misguide the training of machine learning models to pay undue attention to inconspicuous elements that are labelled for rework and that, because of similar properties reflected in the feature vector, can easily be confused with elements that have passed expert evaluation. This can drive false positives. For another, the imprecise semantics limits the applicability of element-level performance metrics. In particular, inconspicuous elements might rightly be predicted as passed, but their ground-truth label can include them in the rework of a much larger undesirable structure. This can drive false negatives.
Given this, we have argued that our experiments return unfavourable results that may not accurately reflect the goal of identifying subareas of a mesh that need to be re-generated for better quality whilst, at the same time, not drawing attention to inconspicuous subareas. Work in progress considers the development of a domain-specific cost function and performance metric that reflects this goal for guiding the optimisation of a machine learning model. In order to address the problem of subjective labelling, we also contemplate changing the process of mesh reviews, including automating the labelling process by comparing meshes before and after correction.
Whilst a more domain-specific metric or clearer labels might improve model optimisation and the interpretability of results, we also expect potential improvements from the inclusion of additional low-level features, and from alternative machine learning techniques that might better abstract to higher-level representations. For instance, some additional low-level features could relate an element to the underlying geometry of the original design. The original geometry was not available for our experiments, but might have influenced the review decisions by expert engineers. Of particular relevance are so-called feature lines in the geometry, i.e., lines which define the structure of a part, like bends from manufacturing processes or important design features. Promising approaches that might capture dependencies across larger neighbourhoods, i.e., beyond the scope of $k$-ring neighbourhoods for small $k$, include convolutional neural networks and graph neural networks.
Convolutional neural networks (CNNs, LeCun et al. (1999)) aim at learning high-level representations from low-level features through kernels that slide along a feature tensor, and have revolutionised the field of computer vision with their ability to learn to recognise structures within pixel data. Research related to our work has applied CNNs to the problem of mesh classification, i.e., the tasks of identifying the object represented by a mesh or assigning each mesh element to the object part that it belongs to. The work presented in Guo et al. (2016) organises low-level features of a mesh in a two-dimensional grid and reports impact from the application of CNNs. We have tested the application of CNNs on our feature tensor but have not found them to improve performance over FNNs. Spin images (Johnson and Hebert, 1999) provide an alternative to aggregating data from $k$-ring neighbourhood frontiers that, in contrast to our approach, accumulates neighbourhood data by rotating a rectangular sheet along the surface normal through the centre of mass of an element. We have also tested the application of CNNs with spin images used to aggregate neighbourhood statistics, but have not found them to improve performance over the feature vector presented in this paper. Another, more naïve avenue to making CNNs applicable to element classification is to generate image data via mesh rendering, and to use a CNN to solve an image segmentation problem, i.e., to partition the resulting image into segments, e.g., ones that might require rework and ones that pass. However, mesh rendering can incur a significant loss of information because there exists no two-dimensional projection of three-dimensional surfaces that retains all relevant properties, such as lengths, angles, and areas. In fact, we have tested this approach at an earlier stage of our research with very limited success.
Recent advances in graph neural networks (GNNs, Gori et al. (2005)), including graph convolutional networks (Kipf and Welling, 2017), graph attention networks (Veličković et al., 2018), and gated graph neural networks (Li et al., 2016), have led to ground-breaking results in learning tasks that require dealing with graph data. Unlike our approach, i.e., aggregating features extracted from each element neighbourhood to allow the application of standard neural networks, a GNN can represent information from a neighbourhood of arbitrary depth. We believe that the latter is required to capture undesirable mesh structures that are beyond the reach of our current representation, e.g., stretches of quadrilaterals that are enclosed by an opening and a closing triangle. To our knowledge, classification models for graph data are mainly studied in semi-supervised learning tasks that naturally arise in online social network domains (Aggarwal, 2011). In semi-supervised learning, some of the data is labelled, and the task is to find labels for the unlabelled data, e.g., by propagating structural information and attributes through a graph to determine the labels of unlabelled vertices from labelled ones. This contrasts with our application scenario, i.e., a supervised learning task, where no labels are available for the elements of unseen meshes. Hence, our work contributes a novel, real-world application scenario and benchmark domain to the study of GNNs. Another, more indirect and speculative alternative is the hybridisation of our approach by considering only a conservative selection of elements for rework, e.g., through higher thresholds, and treating the classification of the remaining elements of a mesh as a semi-supervised task for GNNs. This is left to future work.
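To make the contrast with our fixed-size aggregation concrete, the following speculative sketch shows a single generic GNN layer: each vertex updates its representation from the mean of its neighbours' features, and stacking such layers propagates information over neighbourhoods of growing depth. The graph, features, and weights are all hypothetical; this is not any of the cited architectures.

```python
# Speculative sketch of one neighbourhood-aggregation GNN layer.
import numpy as np

def gnn_layer(A, H, W_self, W_neigh):
    # A: adjacency matrix; H: one feature row per vertex
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)   # avoid division by 0
    neigh_mean = (A @ H) / deg                          # mean over neighbours
    return np.maximum(H @ W_self + neigh_mean @ W_neigh, 0)  # ReLU update

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
H = np.eye(3)                                                 # toy features
rng = np.random.default_rng(0)
H1 = gnn_layer(A, H, rng.normal(size=(3, 4)), rng.normal(size=(3, 4)))
```

Applying the layer $k$ times lets each vertex see its $k$-ring neighbourhood, without ever fixing $K$ in the feature representation.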
7 Conclusion

We have put forward a novel, automated method for postprocessing finite element meshes, a laborious and time-consuming task that, to date, is typically performed by engineering experts. A central problem in this task is the evaluation of mesh quality, i.e., the fitness of a mesh to enable accurate simulations, because many complex, sometimes subjective, and conflicting requirements from different simulation disciplines are yet to be formalised. We address this issue by applying machine learning to abstract from expert evaluations. Our method is entirely data-driven, i.e., it relies on the ability of machine learning techniques to learn a representation of high-level concepts that capture mesh quality from low-level features.
We have explained why standard machine learning models like tree-based models and standard neural networks cannot handle mesh input directly. To allow their application, we have proposed the element-wise extraction of a set of domain-specific low-level properties by means of the neighbourhood graph induced by a mesh. Experimental results from evaluating the quality of finite element shell meshes for the purpose of structural mechanic simulations demonstrate practical applicability using off-the-shelf machine learning techniques, including extremely randomised trees and feedforward neural networks, although an objective evaluation proves difficult in this subjective problem domain.
Potential improvements include the development of a domain-specific cost function and performance metric to better guide the training of machine learning models, the inclusion of geometry-related features, and alternative machine learning methods that better abstract to high-level concepts of mesh quality. Future work considers potential benefits from the application of graph neural networks, a field that has recently seen considerable advances. We believe that element classification contributes a novel, challenging, and practical application scenario to this research, which has previously focussed primarily on social graphs.
This research was supported by the German Federal Ministry of Education and Research (BMBF) via the project AIAx – Machine Learning-driven Engineering (no. 01IS18048).
References

- Aggarwal (2011). Social network data analytics. Springer.
- Bommes et al. (2013). Quad-mesh generation and processing: a survey. Computer Graphics Forum 32(6), pp. 51–76.
- Breiman et al. (1984). Classification and regression trees. Wadsworth and Brooks.
- Geurts et al. (2006). Extremely randomized trees. Machine Learning 63(1), pp. 3–42.
- Gori et al. (2005). A new model for learning in graph domains. In IEEE International Joint Conference on Neural Networks.
- Guo et al. (2016). 3D mesh labeling via deep convolutional neural networks. ACM Transactions on Graphics 35(1).
- Johnson and Hebert (1999). Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence 21(5), pp. 433–449.
- Kälberer et al. (2007). QuadCover – surface parameterization using branched coverings. Computer Graphics Forum 26, pp. 375–384.
- Kipf and Welling (2017). Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations (ICLR).
- LeCun et al. (1999). Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision.
- Li et al. (2016). Gated graph sequence neural networks. In 4th International Conference on Learning Representations (ICLR).
- Olszak (1980). Thin shell theory: new trends and applications. Springer.
- Owen (2000). A survey of unstructured mesh generation technology. In 7th International Meshing Roundtable.
- Prestifilippo and Sprave (1997). Optimal triangulation by means of evolutionary algorithms. In 2nd International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications, pp. 492–497.
- Rumelhart et al. (1986). Learning representations by back-propagating errors. Nature 323(6088), pp. 533–536.
- Veličković et al. (2018). Graph attention networks. In 6th International Conference on Learning Representations (ICLR).
- Zhou (2012). Ensemble methods: foundations and algorithms. Chapman & Hall/CRC.