Benchmarking quantized LLaMa-based models on the Brazilian Secondary School Exam

09/21/2023
by Matheus L. O. Santos, et al.

Although Large Language Models (LLMs) represent a revolution in the way we interact with computers, allowing the construction of complex questions and reasoning over a sequence of statements, their use is restricted by the need for dedicated hardware to run them. In this study, we evaluate the performance of LLMs based on the 7- and 13-billion-parameter LLaMA models, subjected to a quantization process and run on home hardware. The models considered were Alpaca, Koala, and Vicuna. To evaluate their effectiveness, we developed a database containing 1,006 questions from the ENEM (Brazilian National Secondary School Exam). Our analysis revealed that the best-performing models achieved an accuracy of approximately 46% on the original Portuguese questions and 49% on their English translations. We also evaluated the computational efficiency of the models by measuring the time required for execution. On average, the 7- and 13-billion-parameter LLMs took approximately 20 and 50 seconds, respectively, to process the queries on a machine equipped with an AMD Ryzen 5 3600x processor.
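The evaluation loop described above can be approximated with a short script. The sketch below is not the authors' harness: it assumes a locally quantized LLaMA checkpoint served through the llama-cpp-python bindings, a placeholder MODEL_PATH, and a hypothetical list of (question_text, correct_letter) pairs named enem_questions; accuracy and per-query latency are computed in the same spirit as the benchmark.

```python
# Hedged sketch of a quantized-LLaMA ENEM evaluation loop (not the paper's code).
# Assumptions: llama-cpp-python is installed, MODEL_PATH points to a quantized
# checkpoint, and `enem_questions` is a list of (question_text, correct_letter).
import re
import time

from llama_cpp import Llama  # pip install llama-cpp-python

MODEL_PATH = "llama-7b-q4.gguf"  # placeholder path to a quantized model

llm = Llama(model_path=MODEL_PATH, n_ctx=2048)


def ask(question_text: str) -> tuple[str, float]:
    """Prompt the model for a single multiple-choice answer and time the call."""
    prompt = (
        "Answer the following ENEM question with a single letter (A-E).\n\n"
        f"{question_text}\n\nAnswer:"
    )
    start = time.perf_counter()
    out = llm(prompt, max_tokens=4, temperature=0.0)
    elapsed = time.perf_counter() - start
    text = out["choices"][0]["text"]
    match = re.search(r"[A-E]", text.upper())
    return (match.group(0) if match else "", elapsed)


def evaluate(enem_questions):
    """Return overall accuracy and mean latency over (question, answer) pairs."""
    correct, total_time = 0, 0.0
    for question_text, correct_letter in enem_questions:
        predicted, elapsed = ask(question_text)
        correct += int(predicted == correct_letter)
        total_time += elapsed
    n = len(enem_questions)
    return correct / n, total_time / n
```

Note that this measures only generation latency per query on CPU; prompt formatting, answer extraction, and the quantization format are illustrative choices, not details taken from the paper.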
