256 Dimension updated

Files changed:
- 2_Dense/model.safetensors +1 -1
- README.md +45 -45
- model.safetensors +1 -1
2_Dense/model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d0fae3f9a09ca238049f5af1b058df30df52a7d29669b1b46037926ecf90eb2c
 size 1049760
README.md
CHANGED
@@ -4,35 +4,35 @@ tags:
 - sentence-similarity
 - feature-extraction
 - generated_from_trainer
-- dataset_size:
+- dataset_size:72454
 - loss:MultipleNegativesRankingLoss
 base_model: BAAI/bge-large-en-v1.5
 widget:
-- source_sentence:
+- source_sentence: mahathir bin mohamad
   sentences:
-  -
-  -
-  -
+  - mahathir mohamad
+  - mahathir bin ismail
+  - nazhan hafiz rahmat
-- source_sentence:
+- source_sentence: siow yin heng
   sentences:
-  -
-  -
-  -
+  - siu xin loh daniel
+  - yin heng siow
+  - siow heng yin
-- source_sentence:
+- source_sentence: fadzil bin othman
   sentences:
-  -
-  -
-  -
+  - izzah auni binti zulkifli
+  - fadzil othman
+  - mariam binti hassan
-- source_sentence:
+- source_sentence: raja muhammad syamil bin raja ishak
   sentences:
-  -
-  -
-  -
+  - ridzuan bin hashim
+  - meng leong fang
+  - raja muhd syamil bin raja ishak
-- source_sentence:
+- source_sentence: felix koh bing sheng
   sentences:
-  -
-  -
-  -
+  - olivia sinnathamby
+  - bing koh sheng
+  - koh bing sheng
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
 ---
@@ -87,9 +87,9 @@ from sentence_transformers import SentenceTransformer
 model = SentenceTransformer("foochun/bge-large-finetuned")
 # Run inference
 sentences = [
-    '
-    '
-    '
+    'felix koh bing sheng',
+    'koh bing sheng',
+    'bing koh sheng',
 ]
 embeddings = model.encode(sentences)
 print(embeddings.shape)
@@ -143,19 +143,19 @@ You can finetune this model on your own dataset.
 
 #### Unnamed Dataset
 
-* Size:
+* Size: 72,454 training samples
 * Columns: <code>query</code>, <code>pos</code>, and <code>neg</code>
 * Approximate statistics based on the first 1000 samples:
   |         | query  | pos    | neg    |
   |:--------|:-------|:-------|:-------|
   | type    | string | string | string |
-  | details | <ul><li>min: 4 tokens</li><li>mean: 8.
+  | details | <ul><li>min: 4 tokens</li><li>mean: 8.06 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.59 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.76 tokens</li><li>max: 16 tokens</li></ul> |
 * Samples:
-  | query
-  |
-  | <code>
-  | <code>
-  | <code>
+  | query                                | pos                                  | neg                              |
+  |:-------------------------------------|:-------------------------------------|:---------------------------------|
+  | <code>ahmad faisal bin zainal</code> | <code>ahmad faisal bin zainal</code> | <code>sakinah binti jamil</code> |
+  | <code>daniel lim ling ee</code>      | <code>lim ling ee daniel</code>      | <code>ee ling lim</code>         |
+  | <code>lau sze sheng</code>           | <code>sheng lau sze</code>           | <code>lau sz sheng</code>        |
 * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
   ```json
   {
@@ -168,19 +168,19 @@ You can finetune this model on your own dataset.
 
 #### Unnamed Dataset
 
-* Size:
+* Size: 10,350 evaluation samples
 * Columns: <code>query</code>, <code>pos</code>, and <code>neg</code>
 * Approximate statistics based on the first 1000 samples:
   |         | query  | pos    | neg    |
   |:--------|:-------|:-------|:-------|
   | type    | string | string | string |
-  | details | <ul><li>min: 4 tokens</li><li>mean:
+  | details | <ul><li>min: 4 tokens</li><li>mean: 8.06 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.59 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.81 tokens</li><li>max: 16 tokens</li></ul> |
 * Samples:
-  | query
-  |
-  | <code>
-  | <code>
-  | <code>
+  | query                                            | pos                                              | neg                                       |
+  |:-------------------------------------------------|:-------------------------------------------------|:------------------------------------------|
+  | <code>xavier loh ling sheng</code>               | <code>loh ling sheng xavier</code>               | <code>loh sheng ling xavier</code>        |
+  | <code>chan poh king</code>                       | <code>chan poh king</code>                       | <code>chan king poh</code>                |
+  | <code>siti suzelita sazrikin binti hassan</code> | <code>siti suzelita sazrikin binti hassan</code> | <code>roslilawati binti hj mukhtar</code> |
 * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
   ```json
   {
@@ -325,12 +325,12 @@ You can finetune this model on your own dataset.
 ### Training Logs
 | Epoch      | Step     | Training Loss | Validation Loss |
 |:----------:|:--------:|:-------------:|:---------------:|
-| 0.
-| 0.
-| 1.
-| 1.
-| 2.
-| **2.
+| 0.4413     | 500      | 0.1568        | 0.0153          |
+| 0.8826     | 1000     | 0.0155        | 0.0073          |
+| 1.3239     | 1500     | 0.0086        | 0.0064          |
+| 1.7652     | 2000     | 0.0067        | 0.0054          |
+| 2.2065     | 2500     | 0.0059        | 0.0050          |
+| **2.6478** | **3000** | **0.0052**    | **0.0049**      |
 
 * The bold row denotes the saved checkpoint.
 
@@ -340,7 +340,7 @@ You can finetune this model on your own dataset.
 - Transformers: 4.51.3
 - PyTorch: 2.6.0+cu124
 - Accelerate: 1.6.0
-- Datasets: 3.
+- Datasets: 3.6.0
 - Tokenizers: 0.21.1
 
 ## Citation
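The card trains with `MultipleNegativesRankingLoss`: each query's positive competes against every other in-batch candidate under a softmax over scaled cosine similarities. A minimal NumPy sketch of that computation (illustrative only, not the sentence-transformers implementation; the `scale=20.0` default is taken from the library's documentation):

```python
import numpy as np

def mnr_loss(queries, candidates, scale=20.0):
    """Toy MultipleNegativesRankingLoss: candidates[i] is the positive for
    queries[i]; every other candidate row acts as an in-batch negative."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    scores = scale * (q @ c.T)                   # (B, B) scaled cosine sims
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # cross-entropy with label i for row i (the matching positive)
    return -np.mean(np.diag(log_probs))

# When each query equals its positive and the negatives are orthogonal,
# the loss is near zero.
print(mnr_loss(np.eye(3), np.eye(3)))
```

This is why the dataset's `neg` column matters: near-miss names like `lau sz sheng` are much harder negatives than the random in-batch ones, which is what the low final losses in the training log reflect.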
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:8ed937208fea5f17d65ecb178dd0a6fd0db166daaeae588de942ebe415b59216
 size 1340612432
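Both weight files appear in the diff as Git LFS pointers rather than binary blobs: three `key value` lines, where `oid` names the content-addressed object in LFS storage and `size` is the byte length of the real file. A small sketch parsing the pointer shown above:

```python
# Parse the three-line Git LFS pointer format from the diff above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:8ed937208fea5f17d65ecb178dd0a6fd0db166daaeae588de942ebe415b59216\n"
    "size 1340612432\n"
)
# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
print(fields["size"])  # the ~1.34 GB of actual weights live in LFS storage,
                       # not in the git object itself
```

This is why the diff for each `.safetensors` file is just a one-line `oid` change: only the pointer is versioned in git, and a new checkpoint swaps in a new content hash.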