
# Fine-Tuned BART-Large for Sentence Compression

## Model Overview

This model is a fine-tuned version of `facebook/bart-large` trained on the `sentence-transformers/sentence-compression` dataset. It generates compressed versions of input sentences while preserving fluency and meaning.
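A quick way to try the model is the Transformers `summarization` pipeline. The sketch below is minimal; the repo id is an assumption inferred from this card's file names and should be replaced with the model's actual Hub id.

```python
from transformers import pipeline

# NOTE: placeholder repo id (assumed from this card's file names);
# replace with the actual Hub id of this model.
compressor = pipeline(
    "summarization",
    model="shahin-as/bart-large-sentence-compression",
)

sentence = (
    "The quick brown fox, which had been lurking near the henhouse "
    "all morning, finally jumped over the lazy dog."
)

# Generate a compressed version of the input sentence.
print(compressor(sentence, max_length=32, min_length=5)[0]["summary_text"])
```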

## Training Details

The model was fine-tuned with the following configuration (see the `Seq2SeqTrainingArguments` sketch after the list):

- **Base Model:** `facebook/bart-large`
- **Dataset:** `sentence-transformers/sentence-compression`
- **Batch Size:** 8
- **Epochs:** 5
- **Learning Rate:** 2e-5
- **Weight Decay:** 0.01
- **Best-Model Selection Metric:** SARI Penalized
- **Precision:** FP16 (mixed-precision training)
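For reference, these hyperparameters map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows. This is a sketch, not the exact training script; `output_dir` and the `sari_penalized` metric key (which presumes a custom `compute_metrics` reporting penalized SARI) are illustrative assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-sentence-compression",  # assumed output path
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    learning_rate=2e-5,
    weight_decay=0.01,
    fp16=True,                    # mixed-precision training
    predict_with_generate=True,   # decode with generate() during evaluation
    eval_strategy="epoch",        # named evaluation_strategy in older transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="sari_penalized",  # assumes a custom compute_metrics with this key
    greater_is_better=True,
)
```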

## Evaluation Results

| Metric         | Validation | Test  |
|----------------|-----------:|------:|
| SARI           | 89.68      | 89.76 |
| SARI Penalized | 88.42      | 88.32 |
| ROUGE-1        | 93.05      | 93.14 |
| ROUGE-2        | 88.47      | 88.65 |
| ROUGE-L        | 92.98      | 93.07 |
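The standard SARI and ROUGE scores can be computed with the `evaluate` library as sketched below; the penalized SARI variant reported above is not part of `evaluate` and would need a custom implementation. The example strings are illustrative only.

```python
import evaluate  # pip install evaluate rouge_score

sari = evaluate.load("sari")
rouge = evaluate.load("rouge")

# Illustrative example; in practice these come from the validation/test split.
sources = ["The quick brown fox finally jumped over the very lazy dog."]
predictions = ["The fox jumped over the dog."]
references = [["The fox jumped over the dog."]]  # SARI expects a list of reference lists

print(sari.compute(sources=sources, predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=references))
```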

## Training Loss Curve

Training and evaluation loss over training steps are plotted in `bart-large-sentence-compression_loss.eps`.