Update README.md
README.md
@@ -100,7 +100,8 @@ The model was pre-trained continuously on a single A10G GPU in an AWS instance f
 <br> Our models seem to do better than others with an accuracy of 0.58 on validation but,
 <br> There could be two reasons for this:

-- There is still room for improving the quality of the data.
+- There is still room for improving the quality of the data. (test with HLP)
+<br>Try below, if HLP >> 0.58
 - We still do not have enough data for generalization as Transformer models only perform well with large amounts of pre-trained data compared with Classical Sequential Models.

 #### Authors:
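The added lines tie the next step to Human-Level Performance (HLP): if humans score far above the model's 0.58 validation accuracy, the task is learnable from the labels and the data-quantity bullet is the one to pursue. Below is a minimal sketch of that check, assuming HLP is estimated as agreement between two human labelers; the function names, the 0.10 margin, and the toy labels are illustrative assumptions, not part of the repository.

```python
# Hypothetical sketch of the "test with HLP" step suggested in the diff.
# Everything here is illustrative; only the 0.58 validation accuracy comes
# from the README.

def estimate_hlp(labeler_a, labeler_b):
    """Estimate Human-Level Performance as inter-labeler agreement."""
    agree = sum(a == b for a, b in zip(labeler_a, labeler_b))
    return agree / len(labeler_a)

def next_step(val_accuracy, hlp, margin=0.10):
    # If humans far outperform the model, the labels are likely fine and the
    # model needs more (or better) pre-training data; if the model is already
    # near HLP, label quality is the more probable bottleneck.
    if hlp - val_accuracy > margin:
        return "HLP >> validation accuracy: collect more pre-training data"
    return "model is near HLP: audit label quality before adding data"

# Toy example using the validation accuracy reported in the README (0.58).
labels_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(next_step(val_accuracy=0.58, hlp=estimate_hlp(labels_a, labels_b)))
```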