FAIR principles for AI models, with a practical application for accelerated high energy diffraction microscopy

07/01/2022
by Nikil Ravi, et al.

A concise and measurable set of FAIR (Findable, Accessible, Interoperable and Reusable) principles for scientific data is transforming the state-of-practice for data management and stewardship, supporting and enabling discovery and innovation. Learning from this initiative, and acknowledging the impact of artificial intelligence (AI) in the practice of science and engineering, we introduce a set of practical, concise and measurable FAIR principles for AI models. We showcase how to create and share FAIR data and AI models within a unified computational framework combining the following elements: the Advanced Photon Source at Argonne National Laboratory, the Materials Data Facility, the Data and Learning Hub for Science, funcX, and the Argonne Leadership Computing Facility (ALCF), in particular the ThetaGPU supercomputer and the SambaNova DataScale system at the ALCF AI-Testbed. We describe how this domain-agnostic computational framework may be harnessed to enable autonomous AI-driven discovery.



Introduction

Artificial intelligence (AI) and innovative computing are powering breakthroughs in science, engineering and industry [2022arXiv220203555B, DeepLearning, hep_kyle, Nat_Rev_2019_Huerta, Narita2020ArtificialIP, gw_nat_ast, ai_agriculture, Uddin2019ArtificialIF, fair_hbb_dataset, Huerta:2021ybd]. Advances in AI enable the study of complex phenomena by combining disparate datasets encompassing image, video, text and sound, as well as combinations of them [2022arXiv220203555B, DeepLearning]. AI is complementing and boosting mathematicians' intuition to pose new conjectures and prove theorems [ai_math]. Given the proven potential of AI to accelerate discovery, it is timely and relevant to define best AI practices that facilitate cross-pollination of expertise, reduce time-to-insight, increase reusability of scientific data and AI models by humans and machines, and reduce duplication of effort. To realize these goals, researchers are trying to understand how to adapt the FAIR guiding principles [fairguiding, fairmetrics]—originally developed in the context of digital assets, such as data and the tools, algorithms and workflows that produce such data—to streamline the development and adoption of AI methodologies. These activities encompass the creation of FAIR and AI-ready scientific datasets. This transdisciplinary work aims to accelerate innovation and scientific discovery by defining and implementing best practices for data stewardship and governance that enable the automation of data management. Recent accomplishments [fair_hbb_dataset, Sinaci2020FromRD, hpcfair, deagen_nat_sci_dat] of the FAIR for scientific data program provide a glimpse of its transformational impact in science and engineering. Learning from these activities, and given the growth and impact of AI for Science programs, it is critical to define at a practical level what FAIR means for AI models—the theme of this article. The expected outcome of this work is the development of domain-agnostic computational frameworks to create, evaluate and share FAIR AI models that enable researchers to get a falsifiable understanding of the state-of-practice of AI to address contemporary scientific grand challenges. Understanding needs and gaps in our AI portfolio will catalyze the sharing of knowledge and expertise at an accelerated pace and scale, thereby enabling focused R&D in areas where AI capabilities are currently lacking.

Here we introduce a unified computational framework to create FAIR and AI-ready datasets, following accepted best practices [fairguiding, fairmetrics, fair_hbb_dataset], and then describe how to use these datasets to create FAIR AI models following a set of practical FAIR principles that we propose in this article. To quantify the FAIRness of our AI models, we harness advanced scientific data infrastructure and modern computing environments to conduct automated, accelerated and reproducible AI inference. Our approach guides researchers in the process of harnessing FAIR AI models published at the Data and Learning Hub for Science (DLHub) [dlhub, blaiszik_foster_2019], and FAIR and AI-ready datasets at the Materials Data Facility (MDF) [mdf_article]. We then describe how we leverage funcX [chard2020funcx], a distributed Function as a Service (FaaS) platform, to connect DLHub with the ThetaGPU supercomputer at the Argonne Leadership Computing Facility (ALCF) to conduct AI-driven inference. We also describe how to integrate uncertainty quantification metrics in AI models to quantify the reliability of their predictions. Throughout our discussion, we leverage disparate hardware architectures, ranging from GPUs to the SambaNova Reconfigurable Dataflow Unit™ (RDU) at the ALCF AI-Testbed, to ensure that our ideas provide useful, easy-to-follow guidance to a diverse ecosystem of researchers and AI practitioners.

We selected high-energy diffraction microscopy as a science driver for this work [Liu:fs5198]. This technique is used to characterize 3D information about the structure of polycrystalline materials through the identification of Bragg diffraction peaks. The data used for the identification of Bragg peaks with our AI models was produced at the Advanced Photon Source at Argonne National Laboratory. While we guide the discussion of our methods with this application, the definitions, approaches and computational framework introduced in this article are domain-agnostic and may be harnessed for any other scientific application.

We release our FAIR AI datasets and models, as well as scientific software, to enable other researchers to reproduce our work and to engage in meaningful interactions that contribute towards an agreed-upon, community-wide definition of what FAIR means in the context of AI models, with an emphasis on practical applications. Figure 1 summarizes our proposed approach to create FAIR AI models and experimental datasets. This approach has three main pillars, namely: (i) the creation and sharing of FAIR and AI-ready datasets; (ii) the combination of FAIR and AI-ready datasets with modern computing environments to streamline and automate the design, training, validation, testing and publishing of FAIR AI models; and (iii) the combination of FAIR datasets and AI models with scientific data infrastructure and advanced computing resources to automate accelerated AI inference with well-defined uncertainty quantification metrics that measure the reliability, reproducibility and statistical robustness of AI predictions.

Figure 1: FAIR AI models and data. Proposed approach to combine FAIR and AI-ready experimental data, FAIR AI models, and scientific data and computing cyberinfrastructure to accelerate and automate scientific discovery.

Results

We present two main results:

  • FAIR and AI-ready experimental datasets We published experimental datasets used to train, validate and test our AI models at the MDF, and created Big Data Bags (BDBags) [chard16togo] with associated Minimal Viable Identifiers (minids) [chard16togo] to specify, describe, and reference each of these datasets. We also published Jupyter notebooks in the MDF that describe these datasets and illustrate how to explore and use them to train, validate and test AI models. We FAIRified these datasets following practical guidelines [fairguiding, fairmetrics, fair_hbb_dataset].

  • Definition of FAIR for AI models and practical examples We use the aforementioned FAIR and AI-ready datasets to create three types of AI models: a baseline PyTorch model, an optimized AI model for accelerated inference that is constructed by porting the baseline PyTorch model into an NVIDIA TensorRT engine, and a model created on the SambaNova DataScale® system at the ALCF AI-Testbed. We use the ThetaGPU supercomputer at ALCF to create the first two models. We use these three models to showcase how our proposed definitions for FAIR AI models may be quantified by creating a framework that brings together DLHub, funcX, the MDF and disparate hardware architectures at ALCF.

FAIR and AI-ready Datasets

We FAIRified a high-energy diffraction microscopy dataset produced at the Advanced Photon Source at Argonne National Laboratory. This set consists of a training dataset that we used to create the AI models described below, and a validation dataset that we used to compute relevant metrics pertaining to the performance of our AI models for regression analyses. We published these datasets in the MDF and provide the following information:

  1. Author list, and points of contact including email addresses

  2. Unique Digital Object Identifier (DOI) for Training_Set [braggnn-training-set] and Validation_Set [braggnn-validation-set]

  3. Rich and descriptive metadata

  4. Detailed description of the datasets, including data type and shape, and the data split convention used to train, validate and test AI models published in DLHub

  5. Jupyter notebooks to explore and visualize the datasets; to train AI models with scientific software released in GitHub [zhengchun_repo]; and to validate and reproduce predictions of AI models published in DLHub

We have also created BDBags and minids for each of these datasets to ensure that the creation, assembly, consumption, identification and exchange of these datasets can be easily integrated into a user's workflow. A BDBag is a mechanism for defining a dataset and its contents by enumerating its elements, regardless of their location. The BDBag has a data/ directory containing (meta)data files, along with a checksum for each file. A minid for each of these BDBags provides a lightweight persistent identifier for unambiguously identifying the dataset, regardless of its location. Computing a checksum of BDBag contents allows others to validate that they have the correct dataset, and that there is no loss of data. In short, these tools enable us to package and describe our datasets in a common manner, and to unambiguously refer to the datasets, thereby enabling efficient management and exchange of data (see the sketch after the following list). In summary, we provide:

  1. BDBag for the training set and its associated minid:olgmRyIu8Am7

  2. BDBag for the validation set and its associated minid:16RmizZ1miAau
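As a concrete illustration of this packaging step, the following sketch uses the open-source bdbag Python API to create, validate and archive a BDBag; the directory name and metadata values are hypothetical, and the associated minid is minted separately with the minid client.

```python
# Minimal sketch, assuming a local directory "braggnn_training_set/" holding the
# HDF5 files and the bdbag package (https://github.com/fair-research/bdbag).
from bdbag import bdbag_api

# Turn the data directory into a BDBag: files move under data/ and per-file
# checksums are written into manifest-md5.txt / manifest-sha256.txt.
bdbag_api.make_bag(
    "braggnn_training_set",
    algs=["md5", "sha256"],
    metadata={"External-Description": "BraggNN FAIR training set"},  # illustrative
)

# Validate that the bag's contents match its checksums (no loss or corruption).
bdbag_api.validate_bag("braggnn_training_set", fast=False)

# Archive the bag into a single immutable artifact for which a minid can be minted.
bdbag_api.archive_bag("braggnn_training_set", bag_archiver="zip")
```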

In summary, key features of our FAIRified datasets include:

  • DOIs for the Training_Set [braggnn-training-set] and Validation_Set [braggnn-validation-set], provided through the MDF

  • Unique, unambiguous minimal identifiers, minid:olgmRyIu8Am7 and minid:16RmizZ1miAau, and associated landing pages for the training dataset and validation dataset, respectively

  • Metadata uses machine-readable keywords

  • Metadata contains resource identifier

  • Metadata contains qualified references to related data and publications [BraggNN-IUCrJ]

  • (Meta)data are indexed in a searchable resource

  • Metadata follows standards for X-ray spectroscopy, and uses controlled vocabularies, ontologies and a good data model

  • Uses FAIR vocabulary following the ten rules provided in Ref. [10rules]

  • Includes a license for the dataset

  • Includes a description of how the dataset is produced and displays this information in a machine-readable metadata format

  • Jupyter notebooks that provide user friendly, step-by-step guidance to explore and visualize datasets; and to harness these datasets to train, validate and test AI models

We have also ensured that our FAIR and AI-ready datasets meet the detailed metrics outlined in Ref. [fair_hbb_dataset]. These datasets are ready-to-use for autonomous, AI-driven discovery.
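As a hedged illustration of how such a dataset might be explored, the sketch below opens one of the released HDF5 files with h5py; the file name and internal dataset keys are placeholders, since the authoritative layout is documented in the MDF landing pages and the accompanying Jupyter notebooks.

```python
# Minimal exploration sketch; "braggnn_training_set.h5", "peaks" and "coords"
# are illustrative names, not the verbatim layout of the released files.
import h5py
import numpy as np

with h5py.File("braggnn_training_set.h5", "r") as f:
    f.visit(print)                    # list every group/dataset in the file
    patches = np.asarray(f["peaks"])  # e.g., (N, 11, 11) diffraction patches
    labels = np.asarray(f["coords"])  # e.g., (N, 2) peak-center coordinates

print(patches.shape, labels.shape)
```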

FAIR AI models

The understanding of FAIR principles in the creation and sharing of AI models is an active area of research. Here we aim to contribute to this community-wide, interdisciplinary effort by showcasing how we may create FAIR AI models. We propose that each of these metrics be quantified as Pass or Fail.

Findable Proposition An AI model is findable when a DOI may direct a human or machine to a digital resource that contains all the required information to uniquely define an AI model, i.e., descriptive and rich metadata that provides the title of the model, its authors, free text description, a minimal test set to evaluate the model’s performance; and the actual AI model in a user/machine friendly format, e.g., a Jupyter notebook, a container, etc.

This work We have published our AI models in DLHub, and assigned DOIs to each model. These three types of models include: (i) a traditional PyTorch model; (ii) an inference optimized version with NVIDIA TensorRT; and (iii) a model trained with the SambaNova DataScale® system. All three are readily available at DLHub, and may be found via the following references:

  1. PyTorch Model [pt_bnn_model]

  2. TensorRT Model [trt_bnn_model]

  3. SambaNova Model [sn_trt_model]

Accessible Proposition An AI model is accessible when a human or machine may download it, invoke it, or otherwise interact with it to conduct any task that it was designed to accomplish.

This work Our models are accessible through DLHub. They may be downloaded or directly used for AI inference. To do the latter, we have deployed funcX endpoints at the ThetaGPU supercomputer which we use to conduct AI inference by invoking the AI models and datasets we have published at DLHub and the MDF, respectively.
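A minimal sketch of this interaction pattern with the DLHub Python SDK is shown below; the servable name and the input payload are placeholders, as each model's actual identifier and invocation examples are listed on its DLHub landing page.

```python
# Minimal sketch, assuming the dlhub_sdk client; "aps/braggnn_pytorch" is a
# hypothetical servable name standing in for the published model identifiers.
import numpy as np
from dlhub_sdk.client import DLHubClient

dl = DLHubClient()  # triggers a one-time Globus Auth login

# A batch of 11x11 Bragg-peak patches, serialized as nested lists for transport.
batch = np.random.rand(4, 1, 11, 11).astype(np.float32)
predictions = dl.run("aps/braggnn_pytorch", batch.tolist())
print(predictions)  # expected: a (4, 2) array of predicted (x, y) peak centers
```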

Interoperable Proposition An AI model is interoperable when it may be readily harnessed by machines to conduct AI-driven inference across disparate hardware architectures. This may be realized by containerizing AI models, and developing the required infrastructure that enables AI models to process data in disparate hardware architectures. Furthermore, the AI model’s metadata should use a formal, accessible, and broadly used format, such as JSON or HTML.

This work We have quantified this property by computing the performance of our AI models across disparate hardware architectures. For instance, we have evaluated our three AI models using RDUs, CPUs and GPUs available at ALCF HPC systems. Furthermore, as we describe below, these models were published in DLHub with metadata that are available as JSON and HTML.
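One hedged way to see how a common intermediate format aids interoperability is the sketch below, which runs the exported model with onnxruntime and falls back from GPU to CPU execution automatically; the file name assumes the ONNX export described in the Methods section.

```python
# Minimal sketch of hardware-portable inference via ONNX; "braggnn.onnx" is
# assumed to come from the PyTorch-to-ONNX export described in Methods.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "braggnn.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],  # GPU, else CPU
)
input_name = session.get_inputs()[0].name
batch = np.random.rand(4, 1, 11, 11).astype(np.float32)
(peaks,) = session.run(None, {input_name: batch})
print(peaks.shape)  # (4, 2): one (x, y) peak-center prediction per patch
```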

Reusable Proposition An AI model is reusable when it may be harnessed by humans or machines to reproduce its putative capabilities for AI-driven analyses, and when it contains quantifiable metrics that inform users whether it may be used to process datasets that differ from those originally used to create it. This can be accomplished by providing information about the input data type and shape, the output data type and shape, examples that show how to invoke the model, and a control dataset and uncertainty quantification metrics that indicate the realm of usability of the AI model. These reusability metrics may also be used to identify when a model is no longer trustworthy and active, transfer, or other learning methods are needed to fine-tune the AI model so that it may provide trustworthy predictions.

This work Our models published in DLHub include examples that describe the input data type and shape, the output data type and shape, and a test set to quantify their performance. We have also provided domain-informed metrics to ascertain when the predictions of our AI models are trustworthy. In this study we use the L2 norm, or Euclidean distance, since it is a well-known metric for quantifying the performance of AI models for regression.
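The sketch below illustrates how such a metric can be computed with NumPy; the sample coordinates and the one-pixel reporting threshold are illustrative, not the values used in our analysis.

```python
# Minimal sketch of the reusability/uncertainty metric described above: the
# Euclidean (L2) distance between predicted and ground-truth peak centers.
import numpy as np

def euclidean_errors(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """Per-sample L2 distance; pred and truth have shape (N, 2)."""
    return np.linalg.norm(pred - truth, axis=1)

pred = np.array([[5.1, 5.4], [4.9, 5.2]])   # illustrative predicted centers
truth = np.array([[5.0, 5.5], [5.0, 5.0]])  # illustrative ground-truth centers

errors = euclidean_errors(pred, truth)
print(f"mean={errors.mean():.4f} px, std={errors.std():.4f} px")
print(f"fraction within 1 px: {(errors <= 1.0).mean():.2%}")  # assumed threshold
```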

Expected outcome of proposed FAIR principles for AI models.

The FAIR principles for AI models that we have proposed above have the ultimate goal of automating scientific discovery. To realize that goal, we have created a framework that harnesses scientific data infrastructure (DLHub, funcX and Globus) and modern computing environments (ALCF), and leverages available FAIR and AI-ready datasets (MDF). This ready-to-use framework addresses common challenges in the adoption and use of AI methodologies: it provides examples that illustrate how to use AI models and their expected input and output data, and it enables users to readily use, download or further develop existing AI models. We have pursued this line of work because, as AI practitioners, we are acutely aware that a common roadblock to using existing AI models is deploying a model on a computing platform only to find numerous library incompatibilities that slow down progress, lead to duplication of effort, or discourage researchers from investing limited time and resources in the adoption of AI methodologies. Our proposed framework addresses these limitations, and demonstrates how to combine computing platforms, container technologies, and tools that accelerate AI inference, such as TensorRT. Our proposed framework also highlights the importance of including uncertainty quantification metrics in AI models to inform users of the reliability and realm of applicability of AI predictions.

Methods

Dataset Description

The sample dataset we used to train and evaluate BraggNN [BraggNN-IUCrJ] was collected using an undeformed bi-crystal gold sample [ShadeGold], with 1440 frames acquired in 0.25° steps over 360°, yielding a set of valid Bragg peaks. Each Bragg-peak image is 11×11 pixels. We used 80% of these peaks as our training set, 9% as our validation set for early stopping [goodfellow2016deep], and the remaining 11% as a test set. We also created a smaller validation dataset consisting of 13,799 samples taken from the training dataset, which we used to compute and report relevant metrics.

AI Models Description

We present three different AI models, namely, (i) a traditional PyTorch model; (ii) an NVIDIA TensorRT engine that optimizes our traditional PyTorch model for accelerated inference; and (iii) a model trained with the SambaNova DataScale® system at the ALCF AI-Testbed.

PyTorch Model Our base AI model is implemented using the PyTorch framework. We trained the model for 500 epochs with a mini-batch size of 512, and used validation-based early stopping to avoid overfitting. Training takes around two hours using an NVIDIA V100 GPU.
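The following sketch outlines validation-based early stopping under the settings reported above (500 epochs, mini-batch size 512); the optimizer, learning rate, loss function and patience value are illustrative assumptions, and the BraggNN model and data loaders come from the released GitHub code.

```python
# Minimal training sketch with validation-based early stopping; hyperparameters
# other than epochs and batch size are assumptions, not the published recipe.
import copy
import torch

def train(model, train_loader, val_loader, epochs=500, patience=20, device="cuda"):
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed settings
    loss_fn = torch.nn.MSELoss()                               # assumed loss
    best_val, best_state, stale = float("inf"), None, 0

    for epoch in range(epochs):
        model.train()
        for patches, coords in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(patches.to(device)), coords.to(device))
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():  # average validation loss over the epoch
            val = sum(loss_fn(model(p.to(device)), c.to(device)).item()
                      for p, c in val_loader) / len(val_loader)

        if val < best_val:   # improvement: checkpoint the weights
            best_val, stale = val, 0
            best_state = copy.deepcopy(model.state_dict())
        else:                # no improvement: count toward early stop
            stale += 1
            if stale >= patience:
                break

    model.load_state_dict(best_state)  # restore the best-performing weights
    return model
```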

Model Conversion to TensorRT We use the built-in Open Neural Network Exchange (ONNX) converter within PyTorch to convert our PyTorch BraggNN model into the ONNX format. We then use TensorRT, by invoking our Singularity container [singularity_osti] that includes TensorRT, ONNX, PyTorch, PyCUDA and other common deep learning libraries, to build a TensorRT engine and save it in a .plan format. We used the following parameters when building our engine: the maximum amount of memory that can be allocated by the engine, set to 32 GB; a flag that allows the TensorRT graph optimizer to opportunistically make use of half-precision (FP16) computation when feasible; the input and output dimensions of the model, including the batch size; and a flag that serializes the built engine so that it does not have to be reinitialized in subsequent runs. TensorRT applies a series of optimizations to the model: it runs a GPU profiler to find the best GPU kernels to use for various neural network computations, applies graph optimization techniques such as layer fusion to reduce the number of nodes and edges in the model, and applies quantization where appropriate. Finally, we create an inference script using PyCUDA that allocates memory for the model and data on the GPU and makes the appropriate memory copies between the CPU and GPU in order to perform accelerated inference with our TensorRT engine.
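A condensed sketch of this conversion pipeline, assuming the TensorRT 8.x Python API, is shown below; the fixed batch size is a stand-in for the value we used, and `model` denotes the trained PyTorch BraggNN from the previous step.

```python
# Minimal sketch of the PyTorch -> ONNX -> TensorRT pipeline; assumes TensorRT 8.x
# and that `model` is the trained BraggNN instance from the training step above.
import torch
import tensorrt as trt

# 1) Export the trained PyTorch model to ONNX with a fixed input shape.
model.eval()
dummy = torch.randn(1024, 1, 11, 11)  # batch size 1024 is illustrative
torch.onnx.export(model, dummy, "braggnn.onnx",
                  input_names=["patches"], output_names=["peaks"])

# 2) Parse the ONNX graph and build a serialized TensorRT engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("braggnn.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 32 << 30)  # 32 GB cap
config.set_flag(trt.BuilderFlag.FP16)  # allow opportunistic half precision

engine_bytes = builder.build_serialized_network(network, config)
with open("braggnn.plan", "wb") as f:  # serialize so later runs skip rebuilding
    f.write(engine_bytes)
```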

SambaNova Model The SambaNova DataScale® system at the ALCF AI-Testbed uses SambaFlow™ software, which has been integrated with popular open-source APIs such as PyTorch and TensorFlow. Leveraging these tools, we used SambaFlow to automatically extract, optimize, and execute our original PyTorch BraggNN model on SambaNova's RDUs [liu2021bridge]. We find that the predictions of our SambaNova BraggNN model are consistent with those obtained with the PyTorch and TensorRT models.

Figure 2: (a) Bragg peak reconstruction. Top panels: inference results for the identification of Bragg peak locations in an undeformed bi-crystal gold sample. From left to right, we show results for three AI models, namely, a PyTorch baseline model; an inference-optimized TensorRT model; and a model trained with the SambaNova DataScale® system. In the panels, Truth stands for the ground-truth location of Bragg peaks; PT, TRT and SN represent the predictions of our baseline PyTorch, TensorRT and SambaNova models, respectively. We produced these results by running these models directly on the ThetaGPU supercomputer, and found that the predicted peak locations in the test set closely track the actual peak locations. (b) FAIR AI Approach. Bottom panels: AI inference results obtained by combining DLHub, funcX and the ThetaGPU supercomputer. Our three AI models, PT, TRT and SN, are hosted at DLHub. funcX manages the entire workflow by invoking AI models, launching workers on ThetaGPU and performing AI inference on a test set. This workflow also includes post-processing scripts to quantify the L2 norm, which provides a measure of the reliability of our AI-driven regression analysis.

Benchmark results on the ThetaGPU supercomputer We used these three models to conduct AI inference on the ThetaGPU supercomputer using the validation dataset described above, i.e., 13,799 Bragg peaks. We quantified the consistency of their predictions using the Euclidean distance between the predicted peak locations and the ground-truth peak locations. We found that for all three models, the vast majority of the predicted peak locations in our test set lie within a small Euclidean distance of the actual peak locations, and that the average Euclidean error and its standard deviation are both small. For reference, the input images used for this study are 11×11 pixels. These results show that our three models produce accurate and consistent predictions, even though they are trained using different optimization schemes and hardware architectures. We show a sample of these results in the top three panels of Figure 2.

AI models in DLHub AI models published in DLHub are containerized using Docker, and include instructions for running the models with a sample test set. The models include uncertainty quantification metrics to inform users about their expected performance and realm of applicability. All trained AI models are assigned a DOI, and include descriptive metadata (following the DataCite metadata standard) such as the title, authors and a free-text description, as well as the input type and shape (e.g., [11,11] image maps), the output type and shape (e.g., [1,2] list of predicted Bragg peak positions), and examples of how to invoke each individual model. These metadata are available as JSON-formatted responses through a REST API or Python SDK, or as HTML through a searchable web interface.
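For illustration only, the following Python dictionary mimics the kind of DataCite-style record attached to each servable; all field values, including the DOI, are placeholders rather than the verbatim DLHub metadata.

```python
# Illustrative sketch (not the verbatim DLHub record) of DataCite-style model
# metadata; every value here is a placeholder.
servable_metadata = {
    "datacite": {
        "titles": [{"title": "BraggNN (PyTorch baseline)"}],
        "creators": [{"creatorName": "Ravi, Nikil"}],  # abbreviated author list
        "descriptions": [{
            "description": "Fast localization of Bragg diffraction peaks.",
            "descriptionType": "Abstract",
        }],
        "identifier": {"identifier": "10.xxxx/dlhub.braggnn",  # placeholder DOI
                       "identifierType": "DOI"},
    },
    "servable": {
        "methods": {"run": {
            "input": {"type": "ndarray", "shape": [11, 11]},  # one peak patch
            "output": {"type": "ndarray", "shape": [1, 2]},   # predicted (x, y)
        }},
    },
}
print(servable_metadata["servable"]["methods"]["run"]["input"])
```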

We find that running our AI models via DLHub yields inference results that are consistent with those obtained when the models are run natively on ThetaGPU.

DLHub, funcX and ThetaGPU DLHub is configured to perform on-demand inference in Docker containers on a Kubernetes cluster hosted at the University of Chicago. The DLHub execution model leverages funcX [chard2020funcx], a federated function-as-a-service (FaaS) platform that enables fire-and-forget remote execution. In this work, we extended that model to use a funcX endpoint deployed on ThetaGPU and configured to dynamically provision resources from ThetaGPU. ThetaGPU is an extension of the Theta supercomputer and consists of 24 NVIDIA DGX A100 nodes. Each DGX A100 node has eight NVIDIA A100 Tensor Core GPUs and two AMD Rome CPUs; 22 of the nodes provide 320 GB of GPU memory and the other two provide 640 GB. Since access to ThetaGPU is restricted to authorized ALCF users, we configured the funcX endpoint to provide access to members of a Globus Group [chard2016globus]. Users can request access to this group and, following a manual review process, may be granted access to run the models on ThetaGPU.
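The sketch below shows the fire-and-forget execution pattern with the funcX SDK; the endpoint UUID, the helper module inside the function body, and the input payload are all placeholders, and the import path reflects the funcX releases current at the time of writing.

```python
# Minimal sketch of remote execution with funcX; the endpoint UUID and the
# braggnn_runtime helper are hypothetical placeholders.
from funcx.sdk.client import FuncXClient

def infer(batch):
    # Executes remotely on the ThetaGPU endpoint, inside the registered
    # Apptainer container; run_braggnn is a hypothetical helper.
    from braggnn_runtime import run_braggnn
    return run_braggnn(batch)

fxc = FuncXClient()
func_id = fxc.register_function(infer)

batch = [[[0.0] * 11 for _ in range(11)]]  # one 11x11 patch (illustrative payload)
task_id = fxc.run(batch, endpoint_id="THETAGPU-ENDPOINT-UUID", function_id=func_id)
result = fxc.get_result(task_id)  # polls; raises while the task is still pending
```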

Finally, ALCF does not support Docker containers as Docker requires root privileges. As a result, we first had to transform DLHub servable containers into the Apptainer (previously Singularity) containers supported by ALCF. Apptainer provides an effective security model whereby users cannot gain additional privileges on the host system, making them suitable for deployment on high performance computing resources. After creating Apptainer containers for each AI model, we registered each with funcX along with the associated DLHub invocation function, enabling on-demand inference of the models using ThetaGPU.

Reproducibility of AI models The computational framework—DLHub, funcX and ALCF—provides a ready-to-use, user-friendly solution to harness AI models, FAIR datasets, and available computing resources to enable AI-driven discovery. We have tested the reliability of this FAIR framework for AI-driven discovery by processing a test set with each AI model, finding that results across models are consistent, and that these results are the same as those obtained by running the AI models directly on the ThetaGPU supercomputer. The uncertainty quantification metrics we have included in our AI models also guide researchers in the use and interpretation of these AI predictions. A sample of these results is shown in the bottom panels of Figure 2.
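A minimal sketch of this consistency check, with assumed file names for the two sets of saved predictions, might look as follows.

```python
# Minimal sketch: predictions obtained through DLHub should match those from
# running the model natively on ThetaGPU; file names are assumptions.
import numpy as np

native = np.load("preds_native_thetagpu.npy")
remote = np.load("preds_via_dlhub.npy")
assert np.allclose(native, remote, atol=1e-5), "DLHub and native results diverge"
```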

Discussion

We have showcased how to FAIRify experimental datasets by harnessing the MDF and using BDBags and minids. Using these FAIR and AI-ready datasets, we described how to create and share FAIR AI models using a new set of practical definitions that we introduced in this article. Throughout this analysis, we have showcased how to harness existing data facilities, FAIR tools, modern computing environments and scientific data infrastructure to create a computational framework that is conducive to autonomous AI-driven discovery. To realize that goal, our FAIR and AI-ready datasets are published together with Jupyter notebooks that provide key information regarding data type, shape and size, and show how these datasets may be readily used for the creation, validation and testing of AI models. Complementing these approaches, our AI models published in DLHub include examples that illustrate how to load FAIR and AI-ready datasets, and how to process them. The models also include uncertainty quantification metrics to ascertain their validity, reliability, reproducibility and statistical robustness.

Figure 3: Autonomous AI-driven discovery. Vision for the integration of FAIR & AI-ready datasets with FAIR AI models and modern computing environments to enable autonomous AI-driven discovery. This approach will also catalyze the development of next-generation AI methods, and the creation of a rigorous approach that identifies foundational connections between data, models and innovative computing.

The FAIR for AI data and models approach that we describe in this article focuses on the creation of a ready-to-use, user-friendly computational framework in which data, AI, and computing are indistinguishable components of the same fabric. To realize this vision, we have brought together DLHub, MDF, funcX, and ALCF resources to enable autonomous AI-driven discovery. Our ultimate goal is a platform that may be readily harnessed by researchers to power advances in science, engineering and industry, as illustrated in Figure 3. We expect the scientific software, data, and AI models released with this manuscript to be used, tested, and further developed by scientists who are eager to explore and incorporate FAIR best practices in their research programs.

Data Availability

Our FAIRified datasets are published at the Materials Data Facility: Training_Set [braggnn-training-set] and Validation_Set [braggnn-validation-set].

Data Formatting

All FAIRified BraggNN datasets are released in HDF5 format.

Code Availability

The three AI models introduced in this article, along with the Jupyter notebooks and scientific software needed to reproduce our results, have been released through DLHub [pt_bnn_model, trt_bnn_model, sn_trt_model]. Scientific software to train these models may be found on GitHub.

Acknowledgements

This work was supported by the FAIR Data program and the Braid project within the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357. This work used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

Dataset Publication:

This work was performed under financial assistance award 70NANB14H012 from U.S. Department of Commerce, National Institute of Standards and Technology as part of the Center for Hierarchical Material Design (CHiMaD). This work was supported by the National Science Foundation under NSF Award Number: 1931306 "Collaborative Research: Framework: Machine Learning Materials Innovation Infrastructure".

Model Publication: This material is based upon work supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357.

We also thank SambaNova Systems, Inc. for engineering support to make our BraggNN AI models work efficiently on their system.

Author contributions statement

E.A.H. conceived and led this work. N.R. and Z.L. developed PyTorch BraggNN models. N.R. ported PyTorch models into TensorRT engines. P.C. containerized both PyTorch and TensorRT models and ran computing tests on the ThetaGPU supercomputer. Z.L. developed a BraggNN SambaNova model and ran tests on that system. N.R. and B.B. curated and published FAIR and AI-ready datasets in the Materials Data Facility. N.R. produced BDBags and minids for these datasets. K.C. provided guidance on the use of BDBags and minids and funcX. B.B., R.C., A.S. and K.J.S. deployed all AI models in DLHub and conducted tests connecting DLHub to ThetaGPU by using funcX. I.F. guided the creation of FAIR datasets and models. All authors contributed ideas and participated in the writing and review of this manuscript.

Competing interests

The authors declare no competing financial and/or non-financial interests in relation to the work described in this manuscript.