---
license: wtfpl
datasets:
- cakiki/rosetta-code
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- code
- programming-language
- code-classification
base_model: huggingface/CodeBERTa-small-v1
---
|
This model is a fine-tuned version of *huggingface/CodeBERTa-small-v1* on the *cakiki/rosetta-code* dataset, trained to classify code snippets among the 25 programming languages listed below.
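A quick way to try the model is the `transformers` text-classification pipeline. A minimal sketch follows; the repo id is a placeholder for this model's actual Hub path:

```python
from transformers import pipeline

# Placeholder repo id; substitute this model's actual Hub path.
classifier = pipeline("text-classification", model="<this-model-repo-id>")

snippet = 'printf("Hello, World!\\n");'
print(classifier(snippet))  # e.g. [{'label': 'C', 'score': 0.99}]
```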
|
## Training Details
|
The model was trained for 25 epochs on Azure on nearly 26,000 data points covering the 25 programming languages mentioned above, extracted from a dataset containing 1,006 programming languages in total.
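For reference, a minimal sketch of how such a fine-tune could be reproduced with `datasets` and `transformers`. The column names (`language_name`, `code`), the five-language subset shown, and all hyperparameters except the epoch count are assumptions, not the card's actual settings:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical subset of the 25 target languages; see the full list in this card.
TARGET = ["C", "C++", "Go", "Java", "Python"]
label2id = {lang: i for i, lang in enumerate(TARGET)}

# Column names "language_name" and "code" are assumptions; check the dataset card.
ds = load_dataset("cakiki/rosetta-code", split="train")
ds = ds.filter(lambda r: r["language_name"] in label2id)
ds = ds.map(lambda r: {"label": label2id[r["language_name"]]})

tok = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
ds = ds.map(lambda b: tok(b["code"], truncation=True, max_length=512), batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "huggingface/CodeBERTa-small-v1",
    num_labels=len(TARGET),
    id2label={i: l for l, i in label2id.items()},
    label2id=label2id,
)

# Only the epoch count comes from the card; other settings are library defaults.
args = TrainingArguments(output_dir="codeberta-lang-clf", num_train_epochs=25)
Trainer(model=model, args=args, train_dataset=ds, tokenizer=tok).train()
```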
|
Below are the training results over the 25 epochs.
|
 |