---
library_name: transformers
tags:
- function calling
- laser
license: apache-2.0
datasets:
- jtatman/glaive_function_calling_v2_filtered_10k
---
# Model Card
This is a laser fine-tune of Aloobun's [great 1.8B-parameter Reyna Mini model](https://huggingface.co/aloobun/Reyna-Mini-1.8B-v0.2).
### Model Description
This model is quite conversational, even a bit more so after laser tuning, despite only using PEFT. Function calling is mediocre for now, but will be improved in future versions.
## Uses
As Aloobun's model performs well and is impressive on its own, I decided to add some function calling while practicing the LaserRMT technique.
### Direct Use
- Chat
- Conversational
- Text Generation
- Function Calling
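Below is a minimal usage sketch with the standard `transformers` API. The repository id is a placeholder (this card does not state the final Hub id), and the sketch assumes the tokenizer inherits a chat template from the base model; `device_map="auto"` additionally requires `accelerate`.

```python
# Minimal chat sketch; replace the placeholder repo id with this model's actual Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-model"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs accelerate
)

messages = [{"role": "user", "content": "What's the weather like in Berlin today?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```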
## Bias, Risks, and Limitations
If you use it for nefarious purposes, this model will take over your house, borrow your car, talk badly to your family, and generally make everything incrementally worse.
### Recommendations
Use at your own risk. It's a great small model, owing largely to the strength of the base model it was tuned from.
## Training Details
### Training Data
Training used the [jtatman/glaive_function_calling_v2_filtered_10k](https://huggingface.co/datasets/jtatman/glaive_function_calling_v2_filtered_10k) dataset. Final training and evaluation metrics:
- "eval/loss": 2.1797242164611816,
- "_timestamp": 1708624900.2239263,
- "_runtime": 20945.370138406754,
- "train/train_loss": 2.515587423102269,
- "train/global_step": 918,
- "train/train_steps_per_second": 0.044,
- "train/loss": 2.2062,
- "train/learning_rate": 0,
- "train/train_samples_per_second": 1.403,
- "train/train_runtime": 20945.6359,
- "eval/steps_per_second": 4.867,
- "eval/samples_per_second": 4.867,
- "_step": 923,
- "train/epoch": 2.98,
- "eval/runtime": 41.0972,
- "train/grad_norm": 0.2638521194458008,
- "train/total_flos": 141790931224363000
### Training Procedure
[LaserRMT](https://github.com/cognitivecomputations/laserRMT) was used to refine the weights, targeting the 16 weight matrices scored highest by signal-to-noise ratio (SNR) analysis.
This technique avoids spending training on unnecessarily low-performing weights that can degrade into noise, and pruning those weights decreases the model size slightly.
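For intuition only, the sketch below shows one way a per-matrix signal-to-noise score could be computed using a Marchenko-Pastur cutoff on singular values. It is a simplification, not the laserRMT implementation; the noise-scale estimate and the helper names are assumptions.

```python
# Conceptual SNR scoring in the spirit of laserRMT (not the actual repo code).
import math
import torch

def mp_upper_edge(sigma: float, n_rows: int, n_cols: int) -> float:
    """Largest singular value expected from a pure-noise matrix with entry std `sigma`
    (Marchenko-Pastur bulk edge)."""
    return sigma * (math.sqrt(n_rows) + math.sqrt(n_cols))

def snr_score(weight: torch.Tensor) -> float:
    """Crude signal-to-noise score: energy of singular values above the noise edge
    divided by energy at or below it."""
    s = torch.linalg.svdvals(weight.float())
    # Rough noise-scale estimate from the median singular value (an assumption,
    # not how laserRMT estimates it).
    sigma = s.median().item() / math.sqrt(min(weight.shape))
    cutoff = mp_upper_edge(sigma, *weight.shape)
    signal = s[s > cutoff].sum()
    noise = s[s <= cutoff].sum().clamp_min(1e-8)
    return (signal / noise).item()

# Rank all linear layers and keep the 16 with the highest scores:
# scores = {n: snr_score(m.weight) for n, m in model.named_modules()
#           if isinstance(m, torch.nn.Linear)}
# top_16 = sorted(scores, key=scores.get, reverse=True)[:16]
```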
![axolotl](https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/image/axolotl-badge-web.png?raw=true)
Axolotl was used for training and dataset tokenization.
#### Preprocessing
The dataset was converted to ShareGPT conversational format for use with Axolotl; an illustrative record is shown below.
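For illustration only, a record in that format looks roughly like the following (the field values and the function-call markup are invented for this example, not taken from the dataset):

```python
# Illustrative ShareGPT-style record; Axolotl's sharegpt conversation type
# expects a list of turns under a "conversations" key.
example_record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant with access to a get_weather function."},
        {"from": "human", "value": "What's the weather in Berlin right now?"},
        {"from": "gpt", "value": 'FUNCTION CALL: get_weather(city="Berlin")'},  # markup invented for this example
    ]
}
```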
#### Training Hyperparameters
- lora_r: 64
- lora_alpha: 16
- lora_dropout: 0.05
- gradient_accumulation_steps: 4
- micro_batch_size: 1
- num_epochs: 3
- optimizer: adamw_bnb_8bit
- lr_scheduler: cosine
- learning_rate: 0.00025
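With `micro_batch_size: 1` and `gradient_accumulation_steps: 4`, the effective batch size is 4 per device. For reference, a roughly equivalent PEFT adapter configuration is sketched below; it is not the exact config used for training, and `target_modules` are not listed on this card, so they are left unset (PEFT would then fall back to its defaults for the architecture).

```python
# Approximate PEFT LoRA config matching the hyperparameters above (a sketch only;
# target_modules are intentionally left unset).
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,               # lora_r
    lora_alpha=16,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
)
```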