---
license: other
license_url: https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
tags:
  - llama3
  - instruction-tuning
  - summarization
  - fine-tuned
  - merged
---

# 🧠 FlamingNeuron / llama381binstruct_summarize_short_merged

This is a merged model based on NousResearch/Meta-Llama-3.1-8B-Instruct, fine-tuned using LoRA adapters for legal-domain summarization. The LoRA weights have been merged with the base model for standalone use.
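For reference, a merge like this is typically produced with the `peft` library's `merge_and_unload()`. The sketch below assumes a hypothetical adapter repo id (`FlamingNeuron/llama381binstruct_summarize_short` is an illustration, not confirmed by this card):

```python
# Minimal sketch of merging LoRA adapters into the base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("NousResearch/Meta-Llama-3.1-8B-Instruct")
# Hypothetical adapter repo id; substitute the actual LoRA checkpoint path.
lora = PeftModel.from_pretrained(base, "FlamingNeuron/llama381binstruct_summarize_short")
merged = lora.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("llama381binstruct_summarize_short_merged")
```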

## 🔍 Task

This model converts legalese into short, human-readable summaries. It was fine-tuned on data from the legal_summarization project.

## 💡 Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "FlamingNeuron/llama381binstruct_summarize_short_merged", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("FlamingNeuron/llama381binstruct_summarize_short_merged")

# Prompt written directly in the Llama 3.1 Instruct chat format.
prompt = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Please convert the following legal content into a short human-readable summary<|eot_id|><|start_header_id|>user<|end_header_id|>

[LEGAL_DOC]by using our services you agree to these terms...[END_LEGAL_DOC]<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""

# Move inputs to whichever device the model landed on (device_map="auto" may
# place it on GPU or CPU) instead of hardcoding "cuda".
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
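
Equivalently, since Llama 3.1 Instruct tokenizers ship with a chat template, the prompt above can be built with `apply_chat_template` instead of hand-writing the special tokens (a sketch, assuming the merged repo keeps the base tokenizer's template):

```python
# Same request expressed as chat messages; apply_chat_template inserts the
# <|start_header_id|>/<|eot_id|> tokens shown above automatically.
messages = [
    {"role": "system", "content": "Please convert the following legal content into a short human-readable summary"},
    {"role": "user", "content": "[LEGAL_DOC]by using our services you agree to these terms...[END_LEGAL_DOC]"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```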