Text Generation
PEFT
Safetensors
GGUF
English
chain-of-thought
step-by-step-reasoning
systematic-research-planning
academic-assistant
thesis-planning
dissertation-planning
research-question-formulation
literature-review-planning
methodology-design
experimental-design
hypothesis-generation
research-proposal-helper
cross-disciplinary-research
student-research-assistant
phd-support
research-gap-analysis
literature-analysis
research-summarization
structured-output
systematic-analysis
problem-decomposition
actionable-planning
scientific-research
social-science-research
engineering-research
humanities-research
ai-research-assistant
research-automation
Research-Reasoner-7B-v0.3
Research-Reasoner-7B
Research-Reasoner
academic-research
research-methodology
research-design
thesis-assistant
dissertation-helper
academic-writing
research-planning
scholarly-research
graduate-student-tool
postgraduate-research
academic-planning
research-framework
study-design
research-strategy
academic-productivity
research-workflow
thesis-development
proposal-writing
research-organization
conversational
mistral
mistral-7b
7b
fine-tuned
llama-cpp
quantized
lora
reasoning-model
education
academic-tool
research-methods
grant-writing
project-management
literature-search
citation-analysis
qualitative-research
quantitative-research
mixed-methods
data-analysis-planning
medical-research
clinical-research
```python
from llama_cpp import Llama

# Insert your research topic here
RESEARCH_TOPIC = """
"""

model_path = "./"  # Replace with the path to your GGUF model weight file

llm = Llama(
    model_path=model_path,
    n_gpu_layers=33,  # Layers to offload to the GPU (set to 0 for CPU-only)
    n_ctx=2048,       # Context window size in tokens
    n_threads=4       # CPU threads used for inference
)

topic = RESEARCH_TOPIC.strip()
prompt = f"USER: Research Topic: \"{topic}\"\nLet's think step by step:\nASSISTANT:"

output = llm(
    prompt,
    max_tokens=2500,     # Upper bound on generated tokens
    temperature=0.7,     # Sampling temperature
    top_p=0.9,           # Nucleus sampling cutoff
    repeat_penalty=1.1   # Penalty applied to repeated tokens
)

result = output.get("choices", [{}])[0].get("text", "").strip()
print(result)
```
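If you run the model on several topics, the prompt construction above can be factored into a small helper. This is a sketch, assuming the same `USER`/`ASSISTANT` step-by-step template shown in the snippet; the function name is illustrative, not part of the model's API:

```python
def build_prompt(topic: str) -> str:
    """Wrap a research topic in the step-by-step prompt template used above."""
    topic = topic.strip()  # Tolerate surrounding whitespace in user input
    return (
        f'USER: Research Topic: "{topic}"\n'
        "Let's think step by step:\n"
        "ASSISTANT:"
    )

# Example: the resulting string can be passed directly as `prompt` to llm(...)
print(build_prompt("  Graph neural networks for drug discovery  "))
```

Keeping the template in one place makes it easy to batch over a list of topics without re-typing the exact formatting the model was fine-tuned on.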