---
license: apache-2.0
pipeline_tag: text-classification
---

model_overview:
  model_name: "Quantum-Neural Hybrid (Q-NH) Model"
  description: >
    A cutting-edge model that combines the power of quantum computing with
    neural networks for advanced language understanding and sentiment analysis.

components:
  - quantum_module:
      num_qubits: 5
      depth: 3
      num_shots: 1024
      description: "Parameterized quantum circuit with single- and two-qubit errors, designed for language processing tasks (sketched below)."

  - neural_network:
      architecture:
        - Linear: 2048 neurons
        - ReLU activation
        - LSTM: 2048 neurons, 2 layers, 20% dropout
        - Multihead Attention: 64 heads, key and value dimensions of 2048
        - Linear: output layer with 3 classes, followed by Sigmoid activation
      optimizer: Adam with learning rate 0.001
      loss_function: CrossEntropyLoss
      description: "Neural network integrating LSTM, Multihead Attention, and classical layers for comprehensive language analysis (sketched below)."

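The card fixes only the qubit count, circuit depth, and shot count of the quantum module. A minimal Qiskit sketch consistent with those numbers might look like the following; the alternating rotation/entanglement layer structure is an illustrative assumption, not something the card specifies.

```python
# Minimal sketch of the quantum module; only num_qubits=5, depth=3, and
# num_shots=1024 come from the card, the layer structure is assumed.
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

num_qubits, depth, num_shots = 5, 3, 1024
theta = ParameterVector("theta", num_qubits * depth)

qc = QuantumCircuit(num_qubits)
for d in range(depth):
    for q in range(num_qubits):
        qc.ry(theta[d * num_qubits + q], q)  # trainable single-qubit rotation
    for q in range(num_qubits - 1):
        qc.cx(q, q + 1)                      # linear entangling layer
qc.measure_all()
```

The neural network's layer list can likewise be read as the PyTorch stack below. The wiring between stages, the input width, and the choice to classify from the final time step are assumptions; note that `nn.CrossEntropyLoss` normally expects raw logits, so the Sigmoid is kept here only because the card lists it.

```python
import torch
import torch.nn as nn

class QNHNetwork(nn.Module):
    """One plausible wiring of the layer list above; input_dim is assumed."""

    def __init__(self, input_dim: int = 768, hidden: int = 2048, num_classes: int = 3):
        super().__init__()
        self.proj = nn.Linear(input_dim, hidden)             # Linear: 2048 neurons
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2,
                            dropout=0.2, batch_first=True)   # 2 layers, 20% dropout
        self.attn = nn.MultiheadAttention(hidden, num_heads=64,
                                          batch_first=True)  # 64 heads, dim 2048
        self.out = nn.Linear(hidden, num_classes)            # 3 output classes
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.proj(x))              # Linear + ReLU
        h, _ = self.lstm(h)                       # LSTM over the sequence
        h, _ = self.attn(h, h, h)                 # self-attention
        return self.sigmoid(self.out(h[:, -1]))  # classify from last time step

model = QNHNetwork()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()  # note: normally applied to raw logits
```
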
training_pipeline:
  - QNALS-Transformer Integration:
      - The quantum module pre-processes the input into quantum features.
      - A Transformer model (BERT) processes the tokenized input sequences.
      - Outputs from both components are concatenated and passed through a classifier (see the sketch below).
  - Hyperparameters:
      - Batch size: 32
      - Learning rate: 0.0001 (AdamW optimizer)
      - Training epochs: 10 (with checkpointing and learning rate scheduling)

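A sketch of that integration step follows. It assumes the quantum features arrive as a fixed-size vector per example (the card does not give its width) and that the BERT checkpoint is `bert-base-uncased`; the scheduler settings are placeholders, since the card only says a scheduler is used.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class FinalModel(nn.Module):
    """Sketch of the QNALS-BERT integration; quantum_dim is an assumption."""

    def __init__(self, quantum_dim: int = 32, num_classes: int = 3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.bert.config.hidden_size + quantum_dim,
                                    num_classes)

    def forward(self, input_ids, attention_mask, quantum_features):
        pooled = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        combined = torch.cat([pooled, quantum_features], dim=-1)  # concat both
        return self.classifier(combined)

final_model = FinalModel()
optimizer = torch.optim.AdamW(final_model.parameters(), lr=1e-4)  # per the card
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.5)
```
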
dataset:
  - Source: "jovianzm/no_robots"
  - Labels: "Classify", "Positive", "Negative"

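A loading sketch for the dataset named above; the label-to-id mapping shown is only the obvious ordering of the three labels, and any split or column names would be assumptions about the dataset's layout.

```python
from datasets import load_dataset

# Dataset id is taken from the card; the label mapping is assumed.
dataset = load_dataset("jovianzm/no_robots")
label2id = {"Classify": 0, "Positive": 1, "Negative": 2}
```
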
external_libraries:
  - PyTorch: Deep learning framework
  - Qiskit: Quantum computing framework
  - Transformers: State-of-the-art natural language processing models
  - Matplotlib: Visualization of training progress

custom_utilities:
  - NoiseModel: Custom quantum noise model with amplitude damping and depolarizing errors (sketched below).
  - QNALS: Quantum-Neural Adaptive Learning System, integrating the quantum circuit and the neural network.
  - FinalModel: Custom PyTorch model combining QNALS and BERT for end-to-end language analysis.

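The NoiseModel utility is described only by its two error channels. A Qiskit Aer sketch that attaches both channels to the circuit's gates could look like this; the error probabilities are illustrative assumptions, as the card does not state them.

```python
from qiskit_aer.noise import (NoiseModel, amplitude_damping_error,
                              depolarizing_error)

noise_model = NoiseModel()

# Error rates below are illustrative placeholders.
single_qubit_error = amplitude_damping_error(0.05).compose(
    depolarizing_error(0.01, 1))
two_qubit_error = depolarizing_error(0.02, 2)

noise_model.add_all_qubit_quantum_error(single_qubit_error, ["ry"])  # 1-qubit gates
noise_model.add_all_qubit_quantum_error(two_qubit_error, ["cx"])     # 2-qubit gates
```
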
training_progress:
  - Epochs: 10
  - Visualization: Training loss and accuracy plotted for each epoch (see the helper below).

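The per-epoch visualization can be reproduced with a small Matplotlib helper such as this one; it assumes the loss and accuracy values are collected into lists during the training loop.

```python
import matplotlib.pyplot as plt

def plot_history(losses, accuracies):
    """Plot training loss and accuracy, one point per epoch."""
    epochs = range(1, len(losses) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(epochs, losses)
    ax1.set(xlabel="Epoch", ylabel="Training loss")
    ax2.plot(epochs, accuracies)
    ax2.set(xlabel="Epoch", ylabel="Accuracy")
    fig.tight_layout()
    plt.show()
```
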
future_work:
  - Extended Training:
      - Additional training epochs for the QNALS component.
  - Model Saving:
      - Checkpoints and weights saved for both QNALS and the final integrated model (see the sketch below).
      - Entire model architecture and optimizer state saved for future use.

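A checkpointing sketch consistent with the saving plan above; the variable names refer back to the earlier sketches and the file names are placeholders.

```python
import torch

# model, final_model, optimizer, and epoch refer to the earlier sketches.
torch.save({
    "epoch": epoch,
    "qnals_state_dict": model.state_dict(),
    "final_model_state_dict": final_model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}, "qnh_checkpoint.pt")

# Saving the entire model object (architecture included) for future use:
torch.save(final_model, "qnh_full_model.pt")
```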