---
datasets:
  - niltheory/ExistenceTypes
language:
  - en
---

# Existence Analysis Model

Created for: Compendium Terminum, IP.

Iteration #1: Fine-tuned a base DistilBERT model on a training set of 96 examples.

Iteration #2: Increased the training set from 96 to 296 examples, which greatly improved the accuracy scores, although some edge cases remain.

Iteration #3: Switched from bert-base-uncased to bert-large-cased-whole-word-masking, which has greater contextual sensitivity and is slightly more accurate than Iteration #2. Overall it is more nuanced and sensitive.
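A minimal inference sketch for a sequence-classification checkpoint built on the Iteration #3 base model, using the Hugging Face `transformers` library. This loads the public `bert-large-cased-whole-word-masking` base weights for illustration only; the fine-tuned ExistenceTypes checkpoint name is not given in this card, so substitute your own repo id, and note that an un-fine-tuned head will produce arbitrary labels.

```python
# Illustrative only: loads the base model from Iteration #3, not the
# fine-tuned ExistenceTypes checkpoint (its repo id is not stated here).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "bert-large-cased-whole-word-masking"  # base model, Iteration #3

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def classify(text: str) -> int:
    """Return the predicted label index for a piece of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

label = classify("An example sentence for existence analysis.")
```

With a real fine-tuned checkpoint, `label` indexes into the model's `id2label` mapping configured during training.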