This notebook has been automatically translated to make it accessible to more people; please let me know if you see any typos.
To understand how LoRA works, we first have to remember what happens when we train a model. Let's go back to the most basic part of deep learning: a dense layer of a neural network, which is defined as:
y = Wx + b
Where W is the weights matrix and b is the bias vector.
For the sake of simplicity we will assume that there is no bias, so it would look like this
y = Wx
Suppose that for an input x we want the model to produce an output ŷ.
- First, we calculate the output we get with our current value of the weights W, i.e. we obtain the value y.
- Next, we calculate the error between the value of y we have obtained and the value ŷ we wanted to obtain. We call this error loss, and we calculate it with some mathematical function; for now it doesn't matter which one.
- We compute the gradient (the derivative) of the error loss with respect to the weights matrix W, i.e. ΔW = dloss/dW.
- We update the weights W by subtracting from them the gradient multiplied by a learning rate α, i.e. W = W - α·ΔW.
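As a minimal sketch of these four steps (a PyTorch illustration with made-up sizes, not part of the training code used later), it could look like this:
import torch
# Made-up sizes: a single dense layer y = Wx with no bias
W = torch.randn(20, 10, requires_grad=True)  # weights matrix
x = torch.randn(10)                          # input
y_hat = torch.randn(20)                      # desired output ŷ
alpha = 0.01                                 # learning rate α
# 1. Compute the output with the current weights
y = W @ x
# 2. Compute the error (loss), here a simple mean squared error
loss = ((y - y_hat) ** 2).mean()
# 3. Compute the gradient of the loss with respect to W
loss.backward()  # fills W.grad with dloss/dW
# 4. Update the weights: W = W - α·ΔW
with torch.no_grad():
    W -= alpha * W.grad
    W.grad.zero_()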
The authors of LoRA propose that the fine-tuned weights matrix can be decomposed as
W' = W + ΔW
where W is the original pre-trained weights matrix and ΔW is the update learned during fine-tuning.
So, by freezing the W matrix and training only the ΔW matrix, it is possible to obtain a model that fits new data without having to retrain the whole model.
But you may think that ΔW is a matrix the same size as W, so nothing has been gained. Here the authors rely on Aghajanyan et al. (2020), a paper in which they showed that although language models are large and their parameters are matrices with very large dimensions, adapting them to new tasks does not require changing all the values of the matrices; an update of low rank is enough, which in technical terms is called low-rank adaptation. Hence the name LoRA (Low Rank Adaptation).
We have frozen the model and now we want to train the ΔW matrix. Let's assume that both W and ΔW are matrices of size 20×10, so we have 200 trainable parameters.
Now suppose that the matrix ΔW can be decomposed into the product of two matrices A and B, i.e.
ΔW = A · B
For this multiplication to be possible, the sizes of the matrices A and B have to be 20×n and n×10 respectively. Suppose n=5, so A would be of size 20×5, i.e. 100 parameters, and B of size 5×10, i.e. 50 parameters, so we would have 100+50=150 trainable parameters. We already have fewer trainable parameters than before.
Now let's suppose that W is actually a matrix of size 10,000×10,000, so we would have 100,000,000 trainable parameters. If we decompose ΔW into A and B with n=5, we would have one matrix of size 10,000×5 and another of size 5×10,000, i.e. 50,000 parameters each, 100,000 trainable parameters in total. In other words, we have reduced the number of parameters by a factor of 1,000.
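As an illustrative sketch (not the actual PEFT implementation), we can write a minimal LoRA-style layer that keeps W frozen and trains only A and B, and check the parameter counts for the 10,000×10,000 case:
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style layer: y = Wx + (A·B)x, with W frozen."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.W = nn.Linear(in_features, out_features, bias=False)
        self.W.weight.requires_grad = False  # freeze the original weights
        self.A = nn.Parameter(torch.randn(out_features, rank) * 0.01)  # 10,000×5
        self.B = nn.Parameter(torch.zeros(rank, in_features))          # 5×10,000, starts at zero
    def forward(self, x):
        return self.W(x) + x @ (self.A @ self.B).T

layer = LoRALinear(10_000, 10_000, rank=5)
frozen = sum(p.numel() for p in layer.parameters() if not p.requires_grad)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"Frozen parameters:    {frozen:,}")     # 100,000,000
print(f"Trainable parameters: {trainable:,}")  # 100,000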
You can already see the power of LoRA: when you have very large models, the number of trainable parameters can be greatly reduced.
If we look again at the image of the LoRA architecture, this will be easier to understand.
The savings in the number of trainable parameters are even clearer in this image.
QLoRA is performed in two steps: the first consists of quantizing the model and the second of applying LoRA to the quantized model.
QLoRA quantization is based on three concepts: 4-bit quantization of the model in the normal float 4 (NF4) format, double quantization and paged optimizers. Together they make it possible to save a lot of memory when fine-tuning language models, so let's see what each one consists of.
In QLoRA, the weights are quantized to the normal float 4 (NF4) format, a 4-bit quantization designed for data that follow a normal distribution, i.e. a Gaussian bell curve. To make the values follow this distribution, the FP16 weight values are divided into quantiles, so that each quantile contains the same number of values. Once we have the quantiles, a 4-bit value is assigned to each quantile.
To perform this quantization it uses the SRAM quantiles algorithm, a very fast quantile quantization algorithm, but one with a large error for values that lie far out in the tails of the Gaussian distribution, i.e. outliers.
Since the weights of a neural network usually follow a normal distribution (i.e. a Gaussian bell curve) centered at zero with a standard deviation σ, what is done is to normalize them to the range [-1, 1] and then quantize them in NF4 format.
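A rough sketch of this idea (a toy illustration, not the actual bitsandbytes implementation): normalize the weights to the range [-1, 1], build 16 quantile levels and map each weight to its nearest level.
import torch

# Toy FP16 weights following a normal distribution (illustrative only)
weights = torch.randn(4096, dtype=torch.float16)

# Normalize to the range [-1, 1]; the normalization constant has to be kept
# to dequantize later (this is what double quantization compresses)
absmax = weights.abs().max().float()
normalized = weights.float() / absmax

# Build 16 levels (4 bits) from the quantiles of the values, so that each
# quantile contains roughly the same number of values
levels = torch.quantile(normalized, torch.linspace(0, 1, steps=16))

# Assign to each weight the index of its nearest level (the 4-bit code)
codes = torch.argmin((normalized.unsqueeze(1) - levels.unsqueeze(0)).abs(), dim=1)

# Dequantize: look up the level of each code and undo the normalization
dequantized = levels[codes] * absmax
print(f"Mean absolute error: {(weights.float() - dequantized).abs().mean():.5f}")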
As we have mentioned, when quantizing the network parameters we normalize them to the range [-1, 1] and then quantize them in NF4 format. So we also have to store some extra parameters: the normalization constants, i.e. the values by which the data is divided to bring it into that range. These constants are stored in FP32 format, so the authors of the paper propose to quantize them to FP8 format.
Although this may not seem to save much memory, the authors estimate that it saves about 0.373 bits per parameter. For a model of 8B parameters, which is not an excessively large model today, that amounts to roughly 0.37 GB of memory, and for a 70B parameter model about 3.3 GB, which is not bad.
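A quick back-of-the-envelope calculation of those savings (assuming ~0.373 bits saved per parameter):
# Estimated savings from double quantization: ~0.373 bits per parameter
bits_saved_per_param = 0.373
for n_params in (8e9, 70e9):
    saved_gb = n_params * bits_saved_per_param / 8 / 1e9  # bits -> bytes -> GB
    print(f"{n_params/1e9:.0f}B parameters -> ~{saved_gb:.2f} GB saved")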
Nvidia GPUs have the option to share GPU and CPU RAM, so what is done is to store the optimizer states in CPU RAM and access them when needed. That way they don't have to be kept in GPU RAM and we save GPU memory.
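In practice, with the transformers Trainer, a paged optimizer can be selected through the optim argument of TrainingArguments (shown here only as an illustrative snippet with a hypothetical output directory; the training further down uses the default optimizer):
from transformers import TrainingArguments

# Illustrative only: use a paged 32-bit AdamW so that optimizer states can be
# paged between GPU and CPU memory (requires bitsandbytes)
paged_args = TrainingArguments(
    output_dir="qlora-paged-optimizer-example",  # hypothetical output directory
    optim="paged_adamw_32bit",
)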
Once we have quantized the model, we can fine-tune the quantized model just as in LoRA.
Now that we have explained QLoRA, let’s see an example of how to fine tune a model using QLoRA.
First we log in to upload the trained model to the Hub.
from huggingface_hub import notebook_login
notebook_login()
We download the dataset we are going to use, which is a dataset of reviews from Amazon
from datasets import load_dataset
dataset = load_dataset("mteb/amazon_reviews_multi", "en")
dataset
We create a subset in case you want to test the code with a smaller dataset. In my case I will use 100% of the dataset
percentage = 1
subset_dataset_train = dataset['train'].select(range(int(len(dataset['train']) * percentage)))
subset_dataset_validation = dataset['validation'].select(range(int(len(dataset['validation']) * percentage)))
subset_dataset_test = dataset['test'].select(range(int(len(dataset['test']) * percentage)))
subset_dataset_train, subset_dataset_validation, subset_dataset_test
We see a sample
from random import randint
idx = randint(0, len(subset_dataset_train) - 1)  # randint is inclusive on both ends
subset_dataset_train[idx]
We obtain the number of classes. To obtain the number of classes we use dataset['train'] and not subset_dataset_train, because if the subset is too small it may not contain examples of all the classes in the original dataset.
num_classes = len(dataset['train'].unique('label'))
num_classes
We create a function to create the labels field in the dataset. The downloaded dataset has a label field, but the transformers library needs the field to be called labels and not label.
def set_labels(example):
example['labels'] = example['label']
return example
We apply the function to the dataset
subset_dataset_train = subset_dataset_train.map(set_labels)
subset_dataset_validation = subset_dataset_validation.map(set_labels)
subset_dataset_test = subset_dataset_test.map(set_labels)
subset_dataset_train, subset_dataset_validation, subset_dataset_test
Here is a sample again
subset_dataset_train[idx]
We instantiate the tokenizer. To avoid errors, we assign the end-of-string token to the padding token.
from transformers import AutoTokenizer
checkpoint = "openai-community/gpt2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token
We create a function for tokenizing the dataset
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=768, return_tensors="pt")
We apply the function to the dataset and remove the columns that we do not need
subset_dataset_train = subset_dataset_train.map(tokenize_function, batched=True, remove_columns=['text', 'label', 'id', 'label_text'])
subset_dataset_validation = subset_dataset_validation.map(tokenize_function, batched=True, remove_columns=['text', 'label', 'id', 'label_text'])
subset_dataset_test = subset_dataset_test.map(tokenize_function, batched=True, remove_columns=['text', 'label', 'id', 'label_text'])
subset_dataset_train, subset_dataset_validation, subset_dataset_test
We look at a sample again, but in this case we only see the keys.
subset_dataset_train[idx].keys()
We first download the unquantized model
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=num_classes)
model.config.pad_token_id = model.config.eos_token_id
We see the memory occupied by the model.
model_memory = model.get_memory_footprint()/(1024**3)
print(f"Model memory: {model_memory:.2f} GB")
We convert the model to FP16 and look again at the memory it occupies.
model = model.half()
model_memory = model.get_memory_footprint()/(1024**3)
print(f"Model memory: {model_memory:.2f} GB")
We see the architecture of the model before quantization
model
To quantize the model we first have to create the quantization configuration. For this we use the bitsandbytes library; if you don't have it installed you can install it with
pip install bitsandbytes
First we check whether our GPU architecture supports the BF16 format; if not, we will use FP16. Then we create the quantization configuration: with load_in_4bit=True we indicate that it quantizes to 4 bits, with bnb_4bit_quant_type="nf4" that it does so in NF4 format, with bnb_4bit_use_double_quant=True that it applies double quantization, and with bnb_4bit_compute_dtype=compute_dtype we indicate which data type to use for computation, which can be FP16 or BF16.
from transformers import BitsAndBytesConfig
import torch
compute_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=compute_dtype,
)
And now we quantize the model
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=num_classes, quantization_config=bnb_config)
Let’s look again at the memory it occupies now that we have quantized it
model_memory = model.get_memory_footprint()/(1024**3)
print(f"Model memory: {model_memory:.2f} GB")
We see that the size of the model has been reduced.
We return to the architecture of the model once it has been quantized
model
We see that the architecture has changed: the Conv1D layers have been replaced by Linear4bit layers.
Before applying LoRA, we have to prepare the model to be trained in 4 bits.
from peft import prepare_model_for_kbit_training
model = prepare_model_for_kbit_training(model)
Let’s see if the size of the model has changed.
model_memory = model.get_memory_footprint()/(1024**3)
print(f"Model memory: {model_memory:.2f} GB")
The memory has increased, so we look at the model architecture again.
model
The architecture remains the same, so the memory increase is due to the preparation step, which casts some layers (such as the normalization layers) to 32 bits to make training in 4 bits more stable.
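If we want to check this, we can count the data types of the model parameters; a quick, illustrative way to do it:
from collections import Counter

# Count how many parameters are stored in each dtype after the preparation step
Counter(str(p.dtype) for p in model.parameters())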
We create a LoRA configuration, but unlike the LoRA post, in which we only set the score layer in target_modules, now we are also going to add the c_attn, c_proj and c_fc layers, since they are now of type Linear4bit and not Conv1D.
from peft import LoraConfig, TaskType
config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.1,
task_type=TaskType.SEQ_CLS,
target_modules=['c_attn', 'c_fc', 'c_proj', 'score'],
bias="none",
)
from peft import get_peft_model
model = get_peft_model(model, config)
model.print_trainable_parameters()
While in the LoRA post we had about 12,000 trainable parameters, we now have about 2 million, as we have now also added the c_attn, c_proj and c_fc layers.
Once the quantized model has been instantiated and LoRA has been applied, i.e., once we have done QLoRA, we are going to train it as usual
from transformers import TrainingArguments
metric_name = "accuracy"
model_name = "GPT2-small-QLoRA-finetuned-amazon-reviews-en-classification"
LR = 2e-5
BS_TRAIN = 224
BS_EVAL = 224
EPOCHS = 3
WEIGHT_DECAY = 0.01
training_args = TrainingArguments(
model_name,
eval_strategy="epoch",
save_strategy="epoch",
learning_rate=LR,
per_device_train_batch_size=BS_TRAIN,
per_device_eval_batch_size=BS_EVAL,
num_train_epochs=EPOCHS,
weight_decay=WEIGHT_DECAY,
lr_scheduler_type="cosine",
warmup_ratio = 0.1,
fp16=True,
load_best_model_at_end=True,
metric_for_best_model=metric_name,
push_to_hub=True,
logging_dir="./runs",
)
In the Fine tuning SMLs post we had to set a train batch size of 28, and in the LoRA post, by putting the low-rank matrices in the linear layers, we were able to set a batch size of 400. Now, since when quantizing the model some more layers have been converted to Linear4bit and LoRA is applied to them as well, we cannot set such a big batch size and we have to set it to 224.
import numpy as np
from evaluate import load
metric = load("accuracy")
def compute_metrics(eval_pred):
print(eval_pred)
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return metric.compute(predictions=predictions, references=labels)
from transformers import Trainer
trainer = Trainer(
model,
training_args,
train_dataset=subset_dataset_train,
eval_dataset=subset_dataset_validation,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
trainer.train()
Once trained we evaluate on the test dataset
trainer.evaluate(eval_dataset=subset_dataset_test)
We create a model card
trainer.create_model_card()
We publish it
trainer.push_to_hub()
Let's test the model.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "GPT2-small-QLoRA-finetuned-amazon-reviews-en-classification"
user = "maximofn"
checkpoint = f"{user}/{model_name}"
num_classes = 5
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=num_classes).half().eval().to("cuda")
tokens = tokenizer.encode("I love this product", return_tensors="pt").to(model.device)
with torch.no_grad():
output = model(tokens)
logits = output.logits
labels = torch.softmax(logits, dim=1).cpu().numpy().tolist()
labels[0]