Scope is all you need: Transforming LLMs for HPC Code

08/18/2023
by Tal Kadosh, et al.

With easier access to powerful compute resources, there is a growing trend in AI for software development toward building ever larger language models (LLMs) to address a variety of programming tasks. Even LLMs applied to tasks from the high-performance computing (HPC) domain are huge in size (e.g., billions of parameters) and demand expensive compute resources for training. We find this design choice questionable: why do we need large LLMs trained on natural languages and programming languages unrelated to HPC in order to solve HPC-specific tasks? In this line of work, we question the design choices made by existing LLMs by developing smaller models tailored to specific domains, which we call domain-specific LLMs. Specifically, we start with HPC as a domain and propose Tokompiler, a novel tokenizer designed for preprocessing code in HPC and compilation-centric tasks. Tokompiler leverages knowledge of language primitives to generate language-oriented tokens, providing a context-aware understanding of code structure while completely avoiding the human semantics attributed to code structures (e.g., variable and function names). We applied Tokompiler to pre-train two state-of-the-art models, SPT-Code and PolyCoder, on a Fortran code corpus mined from GitHub, and evaluated them against the conventional LLMs. Results demonstrate that Tokompiler significantly improves code completion accuracy and semantic understanding compared to traditional tokenizers in normalized-perplexity tests, reaching a perplexity score of approximately 1. This research opens avenues for further advances in domain-specific LLMs that cater to the unique demands of HPC and compilation tasks.
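To illustrate the core idea of stripping human-chosen naming semantics while keeping language structure, the minimal Python sketch below replaces user identifiers in a Fortran snippet with numbered placeholder tokens. The keyword list, placeholder scheme, and regex-based scanning are assumptions made purely for illustration; this is not the actual Tokompiler implementation.

    # Illustrative sketch only: anonymize user identifiers so a model sees
    # code structure and language primitives rather than human naming.
    import re

    FORTRAN_KEYWORDS = {
        "program", "end", "integer", "real", "do", "if", "then", "else",
        "call", "subroutine", "function", "return", "implicit", "none",
    }

    def anonymize(source: str) -> str:
        """Replace user-chosen identifiers with numbered placeholders."""
        mapping = {}

        def repl(match):
            word = match.group(0)
            if word.lower() in FORTRAN_KEYWORDS:
                return word                      # keep language primitives
            if word not in mapping:
                mapping[word] = f"var_{len(mapping)}"
            return mapping[word]                 # same name -> same token

        return re.sub(r"[A-Za-z_]\w*", repl, source)

    code = """program axpy
      integer :: i
      real :: x(100), y(100), a
      do i = 1, 100
        y(i) = a * x(i) + y(i)
      end do
    end program axpy"""

    print(anonymize(code))
    # Identifiers such as axpy, i, x, y, a become var_0, var_1, ...
    # while keywords and the loop/array structure are preserved.

In this hypothetical preprocessing step, two snippets that differ only in variable naming map to the same token sequence, which is one plausible way a tokenizer can focus the model on code structure rather than naming conventions.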


