Tags: Question Answering · Transformers · PyTorch · TensorBoard · English · llama · text-generation · language-agent · web-agent · maths · reasoning · planning · text-generation-inference
Instructions to use ai2lumos/lumos_unified_plan_iterative with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ai2lumos/lumos_unified_plan_iterative with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ai2lumos/lumos_unified_plan_iterative")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ai2lumos/lumos_unified_plan_iterative")
model = AutoModelForCausalLM.from_pretrained("ai2lumos/lumos_unified_plan_iterative")
```

- Notebooks
- Google Colab
- Kaggle
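Once the tokenizer and model are loaded as shown above, generating a plan is a standard causal-LM call. The sketch below is illustrative only: the prompt wording and the `build_prompt`/`generate_plan` helpers are assumptions, not the model's documented input format, and the generation parameters are defaults you would tune.

```python
# Minimal sketch of querying the Lumos planning module, assuming the
# Transformers snippet above. Prompt wording is illustrative, not the
# model's documented format.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "ai2lumos/lumos_unified_plan_iterative"


def build_prompt(task: str) -> str:
    # Hypothetical instruction-style prompt for the planning module.
    return (
        "Please provide a reasonable subgoal-based plan to solve the given task.\n"
        f"Task: {task}"
    )


def generate_plan(task: str, max_new_tokens: int = 128) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(task), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Downloads the model weights on first run.
    print(generate_plan("What is 12 * 7 + 5?"))
```

The model weights are downloaded on first use; for repeated calls, load the tokenizer and model once and reuse them rather than reloading inside the function as this sketch does.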