Zero-th Order Algorithm for Softmax Attention Optimization

07/17/2023
by Yichuan Deng et al.

Large language models (LLMs) have brought about significant transformations in human society. Among the crucial computations in LLMs, the softmax unit holds great importance. It helps the model generate a probability distribution over potential subsequent words or phrases, given a sequence of input words. Using this distribution, the model selects the most probable next word or phrase. The softmax unit also plays a vital role in LLM training, as it enables learning from data through the adjustment of neural network weights and biases. As LLMs grow in size, computing the gradient becomes expensive. However, zeroth-order methods can approximate the gradient using only forward passes. In this paper, we present a zeroth-order algorithm specifically tailored to softmax optimization. We prove the convergence of our algorithm, highlighting its effectiveness in efficiently approximating gradients for large-scale LLMs. By leveraging the zeroth-order method, our work contributes to the advancement of optimization techniques for complex language models.
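To make the core idea concrete, below is a minimal sketch of a two-point zeroth-order gradient estimator applied to a softmax regression objective. The quadratic objective ||softmax(Ax) - b||^2, the function names, and all hyperparameters here are illustrative assumptions for exposition, not the paper's exact algorithm; the point is only that the gradient is approximated from forward evaluations alone, with no backpropagation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def softmax_loss(x, A, b):
    # Illustrative softmax regression objective: ||softmax(Ax) - b||_2^2.
    # (A common formulation in this line of work; the paper's exact
    # objective may differ.)
    return np.linalg.norm(softmax(A @ x) - b) ** 2

def zeroth_order_grad(f, x, mu=1e-4, num_samples=20, rng=None):
    # Two-point zeroth-order estimate along random Gaussian directions:
    #   g ~= (1/n) * sum_i [(f(x + mu*u_i) - f(x)) / mu] * u_i
    # Only forward evaluations of f are needed -- no gradients.
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    fx = f(x)
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_samples

# Toy usage: zeroth-order gradient descent on the softmax loss.
rng = np.random.default_rng(0)
d, n = 8, 5
A = rng.standard_normal((n, d))
b = softmax(rng.standard_normal(n))  # a valid target distribution
x = np.zeros(d)
lr = 0.5
for step in range(200):
    g = zeroth_order_grad(lambda v: softmax_loss(v, A, b), x, rng=rng)
    x -= lr * g
print("final loss:", softmax_loss(x, A, b))
```

The number of random directions trades estimator variance against the number of forward passes per step, which is the cost that matters when each forward pass is an expensive LLM evaluation.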

Related research

04/20/2023
Attention Scheme Inspired Softmax Regression
Large language models (LLMs) have made transformative changes for human soc...

08/23/2023
How to Protect Copyright Data in Optimization of Large Language Models?
Large language models (LLMs) and generative AI have played a transformat...

03/22/2018
Unbiased scalable softmax optimization
Recent neural network and language models rely on softmax distributions ...

09/26/2016
Pointer Sentinel Mixture Models
Recent neural network sequence models with softmax classifiers have achi...

07/05/2023
In-Context Learning for Attention Scheme: from Single Softmax Regression to Multiple Softmax Regression via a Tensor Trick
Large language models (LLMs) have brought significant and transformative...

12/02/2022
Nonparametric Masked Language Modeling
Existing language models (LMs) predict tokens with a softmax over a fini...

11/23/2020
Effectiveness of MPC-friendly Softmax Replacement
Softmax is widely used in deep learning to map some representation to a ...
