---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- Reasoning
- React
- COT
- MachineLearning
- DeepLearning
- FineTuning
- NLP
- AIResearch
---

# Think-and-Code-React

## Table of Contents
1. [Introduction](#introduction)
2. [Problem Statement](#problem-statement)
3. [Solution](#solution)
4. [How It Works](#how-it-works)
5. [How to Use This Model](#how-to-use-this-model)
6. [Future Developments](#future-developments)
7. [License](#license)
8. [Model Card Contact](#model-card-contact)

## Introduction

This is a fine-tuned Qwen model designed to provide frontend development solutions with enhanced reasoning capabilities for ReactJS. It reasons about the task first, then writes the code, and finally adds a set of best practices to the answer.

## Problem Statement

Coding is a challenging task for small models: they are often not capable of writing code with high accuracy while also reasoning about it. React is a widely used JavaScript library, yet small LLMs are frequently not specialized enough for this kind of programming task.

## Solution

We train the LLM on a React-specific dataset and enable reasoning. This gives us a cold-start, React-focused LLM that already understands many React concepts. For each request the model does the following (a sketch for post-processing these tags follows the list):

1. Understands the user's query
2. Reasons through everything inside a `<think>` tag
3. Provides the answer inside an `<answer>` tag
4. Additionally provides best practices inside a `<verifier_answer>` tag
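
A minimal sketch of how this tagged output could be post-processed, assuming the model emits matching closing tags such as `</think>` (an assumption; the card does not guarantee this):

```python
import re

def split_sections(completion: str) -> dict:
    """Extract the <think>, <answer>, and <verifier_answer> blocks
    from a raw model completion; missing tags map to None."""
    sections = {}
    for tag in ("think", "answer", "verifier_answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", completion, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else None
    return sections
```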


## How It Works

1. **Data Collection**: The model is trained on outputs from Microsoft's Phi-4 covering thousands of React-specific scenarios. This gives us a cold start with good reasoning capabilities.

2. **Reinforcement Learning**: The cold-start model is then scaled up with RL to reach a higher level of accuracy and better reasoning output.

3. **Generalization**: Training focuses on learning high-quality, React-specific code, and the approach can be expanded to other frameworks.

## How to Use This Model

### Prerequisites

- Python 3.7 or higher
- Required libraries (install via pip):
  ```bash
  pip install torch transformers
  ```

### Installation

1. Clone this model repository (the path below is a placeholder; substitute this model's actual Hugging Face repo):
   ```bash
   # Placeholder repo path; replace with the actual repository for Think-and-Code-React
   git clone https://huggingface.co/<username>/<model-repo>
   cd <model-repo>
   ```

2. Model weights: the checkpoint is loaded directly with `transformers`' `from_pretrained` (see the usage code below), so no separate model files need to be downloaded by hand.

### Usage

1. Import the necessary libraries:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
```

2. Load the model and tokenizer:

```python
# Path to the local folder containing the fine-tuned checkpoint
model_path = "./Path-to-llm-folder"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Run on GPU when available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
```
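
Optionally, on memory-constrained GPUs the checkpoint can be loaded in half precision instead; this is a minimal variation under that assumption, not a requirement stated by this card:

```python
# Optional: load the checkpoint in float16 to reduce GPU memory usage
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
)
model.to(device)
```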

3. Define the text-generation function:

```python
def generate_text(prompt, max_length=2000):
    # Tokenize the prompt and move it to the same device as the model
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_length,  # cap the length of the generated completion
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

4. Run the LLM:

```python
prompt = "Write code in React that calls an API at https://example.com/test"
generated_text = generate_text(prompt)

print(generated_text)
```
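
Since the base model is Qwen, the fine-tune may expect chat-formatted prompts. A minimal sketch, assuming the tokenizer ships a chat template (this is an assumption, not stated by the card):

```python
# Optional: wrap the prompt with the tokenizer's chat template
# (assumes the fine-tune was trained on chat-formatted data).
messages = [{"role": "user", "content": prompt}]
chat_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(generate_text(chat_prompt))
```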

## Future Developments

This is a cold-start LLM whose capabilities can be further enhanced with RL so that it performs even better.
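
## License

This model is released under the Apache-2.0 license, as declared in the model card metadata above.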

## Model Card Contact

For inquiries and contributions, please contact us at [email protected].

```bibtex
@misc{thinkandcodereact2025,
    author = {Nehul Agrawal and Priyal Mata and Ayush Panday},
    title  = {Think and Code in React},
    year   = {2025}
}
```