Speeding HEP Analysis with ROOT Bulk I/O

06/11/2019 ∙ by Brian Bockelman, et al. ∙ Morgridge Institute for Research

Distinct HEP workflows have distinct I/O needs; while ROOT I/O excels at serializing the complex C++ objects common to reconstruction, analysis workflows typically have simpler objects and can sustain higher event rates. To meet these workflows, we have developed a "bulk I/O" interface, allowing multiple events' data to be returned per library call. This reduces ROOT-related overheads and increases event rates - orders-of-magnitude improvements are shown in microbenchmarks. Unfortunately, this bulk interface is difficult to use directly: it requires users to identify when it is applicable, and users still "think" in terms of events, not arrays of data. We have therefore integrated the bulk I/O interface into the new RDataFrame analysis framework inside ROOT. As RDataFrame's interface can provide improved type information, the framework itself can determine what data is readable via bulk I/O and automatically switch between interfaces. We demonstrate how this improves event rates when reading analysis data formats, such as CMS's NanoAOD.


1 Introduction

LHC experiment event data models are complex and comparatively slow to read. For reconstruction workflows this is acceptable: input I/O time is small compared to the reconstruction process itself. Data volume, on the other hand, matters greatly to the experiments, since they must keep enormous datasets on expensive disks.

For the analysis case, the situation is different: the data model is often more straightforward, the data volume is smaller, and the data are frequently read from fast storage such as NVMe SSDs. Per-event CPU costs are minimal, which allows analysts to iterate over the events many times, quickly.

ROOT I/O is an incredibly flexible format: it can easily store the complex objects that make up an experiment's data. At the same time, ROOT has high per-event overheads when serializing simple objects.

2 Bulk IO

The typical mechanism for iterating through data in a TTree is a handwritten for-loop. ROOT uses the API shown below to read objects from a branch (a TTree is a structure that contains one or more TBranches). The function proceeds in two steps. First, it searches the underlying storage medium for the basket containing the requested event and reads that basket into a memory buffer; the TBasket is the data structure that represents this in-memory buffer. ROOT decompresses the buffer and keeps the uncompressed data in a library-managed, so-called "kernel-space" buffer. Second, once the basket is in memory, GetEntry deserializes the requested event from the kernel-space buffer and copies it into the user-space buffer.

  Int_t TBranch::GetEntry(Long64_t entry)
When the user application is computationally expensive, the cost of library calls, repeated object deserialization, and copying data between memory buffers is amortized to effectively nothing; for lightweight analysis loops, however, these overheads dominate. To overcome them, we introduce a new interface for ROOT that copies all events in an on-disk TBasket directly into a user-provided memory buffer. For the simplest cases - primitives and C-style arrays of primitives - deserialization can be done without a separate buffer or "fixing up" pointer contents. The user can request either the serialized or the deserialized data to be delivered to the user buffer. By requesting the serialized data directly and deserializing in the event loop, the user can avoid an additional expensive pass over main memory.
Pragmatically, the user will not implement the deserialization code themselves: rather, we provide a header-only C++ facade around the data, allowing the user to work with a proxy object. This allows the compiler to inline the deserialization code at the point of use.

3 Implementation

The Bulk IO interface is a set of APIs built on top of the existing ROOT I/O framework; the user can choose between the regular APIs and the Bulk IO APIs. We implement Bulk IO for three common use cases: TBranch, TTreeReader, and RDataFrame. We discuss our interface design and integration in this section.

3.1 Bulk IO in TBranch

The Bulk IO API in TBranch, shown below, takes two input arguments: entry and user_buf. entry is the index of the event the function is going to read; user_buf is a reference to a user-space TBuffer structure. At the end of the function call, user_buf contains the whole basket of data holding the requested event.

Int_t TBranch::GetBulkEntries(Long64_t entry, TBuffer &user_buf)
It is worth mentioning that GetBulkEntries deserializes events on the fly as the data are read into user_buf, so no further manipulation is required by the user application. The user can later access an event in the basket using the code snippet below, where T is the object type and idx is the event index within user_buf.
  reinterpret_cast<T*>(user_buf.GetCurrent())[idx]

3.2 Bulk IO in TTreeReader

  TTreeReader myReader("T", hfile);
  TTreeReaderValue<float> myF(myReader, "myFloat");
  Float_t sum = 0;
  while (myReader.Next()) {
     sum += *myF;
  }
TTreeReader is an interface for users to access simple objects (primitives, arrays, etc.) in a ROOT file. TTreeReaderValue is the interface for primitives, and TTreeReaderArray is the interface for arrays (each event is either a fixed-size or variable-size array). The code above uses TTreeReaderValue to read float events from a file. Internally, TTreeReader relies on GetEntry to access events.
  Int_t TBranch::GetEntriesSerialized(Long64_t entry, TBuffer &user_buf)
We design a bulk API in TBranch, GetEntriesSerialized, shown above, and introduce a new interface, TTreeReaderFast, that uses GetEntriesSerialized while functioning like TTreeReader. Unlike GetBulkEntries, GetEntriesSerialized does not deserialize events while reading the basket into user_buf. Instead, deserialization is deferred until the user evaluates *myF: the dereference operator invokes the appropriate deserialization code.

3.3 Bulk IO in RDataFrame

We have also integrated Bulk IO into RDataFrame [2], a data-analysis framework for ROOT users similar in spirit to Python's pandas [4]. RDataFrame provides a proxy interface, RDataSource (RDS), which allows RDF to read arbitrary data formats such as TTree, CSV, etc.
  Int_t TBranch::GetEntriesSerialized(Long64_t entry, TBuffer &user_buf, TBuffer *count_buf)
We define an overload of GetEntriesSerialized, shown above. The only difference from the two-argument version is the additional count_buf argument; the two-argument form simply calls this one with count_buf set to nullptr. count_buf stores the array-length information needed when the events in RDataFrame are arrays: variable-size arrays require this information to deserialize the buffer into individual per-event arrays.

4 Evaluation

4.1 Experiments

All tests were conducted on a desktop with a 4-core Intel i5 @ 3.2 GHz. A TTree with 100 million float values is read with the different APIs. We tested three use cases: GetBulkEntries, TTreeReaderFast, and RDataSource.

4.2 Results

Figure 1 shows the time spent iterating over all events in the TTree with GetEntry and GetBulkEntries, and Figure 2 compares the read time of TTreeReader and TTreeReaderFast. As the figures show, Bulk IO takes more than ten times less time than GetEntry and TTreeReader, and performs similarly in both use cases. The TTreeReader interface takes more than three times as long as GetEntry because of the overheads of TTreeReader itself (TTreeReader internally calls GetEntry).

Figure 1: Performance between GetEntry and GetBulkEntries.
Figure 2: Performance between TTreeReader and TTreeReaderFast.
Figure 3: Performance improvements on RDataFrame with Bulk IO.

Figure 3 shows the results for Bulk IO in RDataFrame. In the figure, "standard RDF" shows the performance of regular RDataFrame function calls, while "Bulk RDF" and "Bulk RDS" show the results of the bulk APIs; the difference is that the Bulk RDS test detaches RDataSource from the RDataFrame stack and runs directly through RDS function calls. As Figure 3 shows, Bulk RDS outperforms standard RDF by more than a factor of two. RDataFrame adds overhead on top of RDataSource (RDataFrame internally relies on RDataSource), so Bulk RDF runs slower than Bulk RDS, but it still outperforms standard RDF.

Acknowledgments

This work was supported by the National Science Foundation under Grant ACI-1450323. This research was done using resources provided by the Holland Computing Center of the University of Nebraska.

References

  • [1] Brun R and Rademakers F “ROOT - An object oriented data analysis framework”, Nucl. Instr. Meth. Phys. Res. 389 (1997) 81-86
  • [2] Guiraud E, Naumann A and Piparo D “RDataFrame: functional chains for ROOT data analyses”, (2017) doi: 10.5281/zenodo.260230. url: https://doi.org/10.5281/zenodo.260230.
  • [3] Bockelman B, Zhang Z and Pivarski J “Optimizing ROOT IO For Analysis”, J. Phys.: Conf. Ser., 1085 (2018) 032012
  • [4] Mckinney W “pandas: a Foundational Python Library for Data Analysis and Statistics”, PyHPC 2011 : Python for High Performance and Scientific Computing, (2011)