Safetensors
GGUF
English
chain-of-thought
cot-reasoning
step-by-step-reasoning
systematic-research-planning
academic-assistant
academic-planning
thesis-planning
dissertation-planning
research-question-formulation
literature-review-planning
methodology-design
experimental-design
qualitative-research-planning
quantitative-research-planning
mixed-methods-planning
student-research-assistant
phd-support
postgraduate-tool
early-career-researcher
grant-writing-assistant
research-proposal-helper
cross-disciplinary-research
interdisciplinary-methodology
academic-mentorship-tool
research-evaluation-assistant
independent-researcher-tool
r-and-d-assistant
reasoning-model
structured-output
systematic-analysis
problem-decomposition
research-breakdown
actionable-planning
scientific-research
social-science-research
humanities-research
medical-research-planning
engineering-research
business-research
mistral-based
mistral-fine-tune
lora-adaptation
foundation-model
instruction-tuned
7b-parameters
ai-research-assistant
research-automation
sota-research-planning
hypothesis-generation
experiment-design-assistant
literature-analysis
paper-outline-generator
structured-output-generation
systematic-reasoning
detailed-planning
zero-shot-planning
research-summarization
biomedical-research-assistant
clinical-trial-planning
tech-r-and-d
materials-science
computational-research
data-science-assistant
literature-synthesis
meta-analysis-helper
best-research-assistant-model
top-research-planning-model
research-ai-assistant
ai-research-mentor
academic-planning-ai
research-workflow-automation
quantum-computing-research
ai-ml-research-planning
cybersecurity-research
neuroscience-research-planning
genomics-research
robotics-research-planning
climate-science-research
behavioral-economics-research
educational-technology-research
research-plan-generator
methodology-recommendation
data-collection-planning
analysis-strategy-development
implementation-planning
evaluation-framework-design
challenge-identification
resource-requirement-analysis
technical-limitation-assessment
research-gap-analysis
knowledge-synthesis
practical-research-tools
affordable-research-assistant
systematic-planning-tool
comprehensive-research-framework
research-project-management
researcher-productivity-tool
text-to-research-plan
dual-output-model
think-answer-format
evidence-based-research-planning
research-mentoring
science-domains-expert
engineering-domains-expert
social-science-domains-expert
multidisciplinary-research
structured-research-planning
hierarchical-plan-generator
convergent-thinking
divergent-thinking
research-ideation
experimental-protocol-design
mistral-research-assistant
focused-research-scope
quantitative-analysis-planning
portable-research-assistant
education-research-tool
Research-Reasoner-7B-v0.3
Research-Reasoner-7B
Research-Reasoner
conversational
Raymond-dev-546730 committed on
Commit 6fecb13 · verified · 1 Parent(s): f4095ee

Upload 2 files

Scripts/Inference_llama.cpp.py ADDED
@@ -0,0 +1,33 @@
+ from llama_cpp import Llama
+
+ # Insert your research topic here
+ RESEARCH_TOPIC = """
+
+ """
+
+ model_path = "./"  # Path to your GGUF model file (llama.cpp loads a single .gguf file, not a directory)
+
+ # Load the model; n_gpu_layers=33 offloads every layer of a 7B model to the GPU
+ llm = Llama(
+     model_path=model_path,
+     n_gpu_layers=33,
+     n_ctx=2048,
+     n_threads=4
+ )
+
+ topic = RESEARCH_TOPIC.strip()
+ prompt = f"USER: Research Topic: \"{topic}\"\nLet's think step by step:\nASSISTANT:"
+
+ # Generate up to 2000 tokens with moderate sampling
+ output = llm(
+     prompt,
+     max_tokens=2000,
+     temperature=0.7,
+     top_p=0.9,
+     repeat_penalty=1.1
+ )
+
+ # Pull the generated text out of the completion dict
+ result = output.get("choices", [{}])[0].get("text", "").strip()
+
+ print(result)
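The script above blocks until the whole plan has been generated. llama-cpp-python also supports token streaming: passing stream=True makes the call return an iterator of completion chunks rather than a single dict. A minimal sketch, assuming the same llm and prompt objects as in the script above:

    # Streaming sketch: print tokens as they arrive instead of waiting for the full plan
    for chunk in llm(
        prompt,
        max_tokens=2000,
        temperature=0.7,
        top_p=0.9,
        repeat_penalty=1.1,
        stream=True
    ):
        print(chunk["choices"][0]["text"], end="", flush=True)
    print()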
Scripts/Inference_safetensors.py ADDED
@@ -0,0 +1,50 @@
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Insert your research topic here
+ RESEARCH_TOPIC = """
+
+ """
+
+ def load_model(model_path):
+     # Load weights in float16; device_map="auto" places them on available devices
+     model = AutoModelForCausalLM.from_pretrained(
+         model_path,
+         torch_dtype=torch.float16,
+         device_map="auto"
+     )
+
+     tokenizer = AutoTokenizer.from_pretrained(model_path)
+
+     return model, tokenizer
+
+ def generate_response(model, tokenizer, topic):
+     topic = topic.strip()
+
+     # Build the prompt in the USER/ASSISTANT format the model expects
+     prompt = f"USER: Research Topic: \"{topic}\"\nLet's think step by step:\nASSISTANT:"
+     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+     outputs = model.generate(
+         **inputs,
+         max_new_tokens=2000,
+         temperature=0.7,
+         top_p=0.9,
+         repetition_penalty=1.1,
+         do_sample=True
+     )
+
+     # Decode the full sequence, then keep only the text after the ASSISTANT tag
+     response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+     return response.split("ASSISTANT:")[-1].strip()
+
+ def main():
+     model_path = "./"  # Path to the directory containing your model weight files
+
+     model, tokenizer = load_model(model_path)
+
+     result = generate_response(model, tokenizer, RESEARCH_TOPIC)
+     print(result)
+
+ if __name__ == "__main__":
+     main()
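Because do_sample=True with temperature=0.7 is used above, each run produces a different plan. If you need reproducible output, greedy decoding is a small change; a minimal sketch reusing load_model and the same prompt format (the topic string here is illustrative, not from the repo):

    # Greedy (deterministic) decoding sketch: no sampling, same prompt format
    model, tokenizer = load_model("./")  # point this at your local model directory
    prompt = "USER: Research Topic: \"Your topic here\"\nLet's think step by step:\nASSISTANT:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=2000, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("ASSISTANT:")[-1].strip())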