Wider and Deeper LLM Networks are Fairer LLM Evaluators

08/03/2023
by Xinghua Zhang, et al.

Measuring the quality of responses generated by LLMs is a challenging task, particularly when it comes to evaluating whether a response aligns with human preference. A recent approach uses the LLM itself as the evaluator and stabilizes the results through multiple independent evaluations, which can be viewed as a single-layer, narrow LLM network: the network consists of a fixed number of neurons, each of which is the same LLM. In this paper, we draw on the extensive research on deep neural networks to explore whether deeper and wider networks can lead to fairer evaluations. Specifically, inspired by the observation that different neurons in a neural network detect different concepts, we first adaptively generate as many neuron roles (evaluation perspectives) as possible for each evaluation sample; each role corresponds to a specific LLM neuron in the first layer. In subsequent layers, following the idea that higher layers in deep networks are responsible for more comprehensive features, each layer receives representations from all neurons in the previous layer and integrates the locally learned evaluation information to obtain a more comprehensive evaluation result. Interestingly, this network design resembles the process of academic paper reviewing. To validate the effectiveness of our method, named WideDeep, we construct LLMEval^2, the largest and most diverse English evaluation benchmark for LLM evaluators, comprising 15 tasks, 8 abilities, and 2,553 samples. Experimental results demonstrate that a wider network (involving more reviewers) with 2 layers (one round of discussion) performs best, improving the kappa correlation coefficient from 0.28 to 0.34. We also leverage WideDeep to aid in the assessment of Chinese LLMs, accelerating the evaluation by 4.6 times and achieving a 60% agreement level among humans.
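To make the layered design concrete, below is a minimal Python sketch of such a wider-and-deeper evaluator network. The `call_llm` helper, the prompt wording, and the role-generation step are illustrative assumptions for the sketch, not the paper's actual implementation.

```python
# A minimal sketch of a wider-and-deeper LLM evaluator network.
# Assumption: call_llm(prompt) -> str wraps any chat-completion API;
# the prompts below are simplified placeholders, not the paper's own.
from typing import Callable, List

LLM = Callable[[str], str]

def generate_roles(call_llm: LLM, question: str, n_roles: int) -> List[str]:
    """Adaptively propose evaluator roles (perspectives) for this sample."""
    prompt = (f"List {n_roles} distinct expert roles suited to judging answers "
              f"to the question below, one per line.\nQuestion: {question}")
    lines = [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]
    return lines[:n_roles]

def first_layer(call_llm: LLM, question: str, answer_a: str, answer_b: str,
                roles: List[str]) -> List[str]:
    """Layer 1 (width): each neuron is the same LLM playing a different role."""
    return [call_llm(f"You are {role}. Question: {question}\n"
                     f"Answer A: {answer_a}\nAnswer B: {answer_b}\n"
                     f"From your perspective, which answer is better and why?")
            for role in roles]

def discussion_layer(call_llm: LLM, question: str, roles: List[str],
                     prev: List[str]) -> List[str]:
    """Layer 2+ (depth): every neuron sees all previous-layer evaluations,
    integrating local judgments into a more comprehensive one."""
    reviews = "\n---\n".join(prev)
    return [call_llm(f"You are {role}. Other reviewers said:\n{reviews}\n"
                     f"Update your judgment on the question: {question}")
            for role in roles]

def final_verdict(call_llm: LLM, question: str, evals: List[str]) -> str:
    """Aggregate the last layer into one verdict, like a meta-reviewer."""
    reviews = "\n---\n".join(evals)
    return call_llm(f"Given these reviews:\n{reviews}\n"
                    f"Give the final verdict (A, B, or tie) for: {question}")

def wide_deep_eval(call_llm: LLM, question: str, a: str, b: str,
                   width: int = 5, depth: int = 2) -> str:
    roles = generate_roles(call_llm, question, width)
    evals = first_layer(call_llm, question, a, b, roles)
    for _ in range(depth - 1):          # depth=2 means one discussion round
        evals = discussion_layer(call_llm, question, roles, evals)
    return final_verdict(call_llm, question, evals)
```

With `width` roles and `depth=2`, the sketch mirrors the configuration the abstract reports as best performing: many reviewers plus one round of discussion.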
