---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
pipeline_tag: text-generation
tags:
- smiles
- chemistry
- reasoning
---

# ether0
ether0 is a 24B language model trained to reason in English and output molecular structures as SMILES. It was created by fine-tuning and reinforcement learning on top of Mistral-Small-24B-Instruct-2501. Ask questions in English; they may also include molecules specified as SMILES. The SMILES do not need to be canonical and may contain stereochemistry information. ether0 has limited support for IUPAC names.

## Usage

This model is trained to reason in English and output a molecule. It is NOT a general-purpose chat model. It has been trained specifically for these tasks:
- IUPAC names to structures
- formulas to structures
- modifying solubility by a specific LogS
- constrained edits (e.g., do not affect group X or do not affect scaffold)
- pKa
- smell/scent
- human cell receptor binding + mode (e.g., agonist)
- ADME properties (e.g., MDCK efflux ratio, LD50)
- GHS classifications (as words like "carcinogen", not codes)
- some electronic properties
- 1-step retrosynthesis
- reaction outcome prediction
- natural language caption to molecule
- natural product elucidation (formula + organism to SMILES)
- blood-brain barrier permeability
For example, you can ask "Propose a molecule with a pKa of 9.2" or "Modify CCCCC(=O)O to increase its pKa by about 1 unit." You cannot ask it "What is the pKa of CCCCC(=O)O?" If you ask questions that lie significantly beyond these tasks, it may fail. You can combine properties, although we have not benchmarked this extensively.
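
Below is a minimal usage sketch with the Hugging Face transformers library, assuming a standard chat template; the repository id is an assumption and should be replaced with this model's actual hub path.

```python
# Minimal sketch (assumed repository id and standard chat-template usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "futurehouse/ether0"  # assumption: replace with the actual hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Ask in English; molecules can be included as (non-canonical) SMILES.
messages = [
    {"role": "user", "content": "Modify CCCCC(=O)O to increase its pKa by about 1 unit."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning traces can be long, so allow plenty of new tokens.
output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```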

## Limitations

The model does not know general synonyms and has poor textbook knowledge (e.g., it does not perform especially well on ChemBench). For best results, input molecules as SMILES: if you refer to molecules by their common names, the model may reason over the wrong SMILES and give poor results. For example, we have observed that the model often confuses lysine and glutamic acid when asked about them by name, but it reasons correctly about their chemistry when their structures are provided as SMILES.
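
As an illustration of this advice (not part of the model's interface), you can resolve the structure yourself and embed the SMILES in the prompt; the sketch below uses RDKit only to validate the string before prompting.

```python
# Illustrative sketch: refer to molecules by SMILES, not by common name.
# RDKit is used here only to sanity-check the SMILES; canonical form is not required.
from rdkit import Chem

lysine_smiles = "NCCCCC(N)C(=O)O"  # lysine, written without stereochemistry
mol = Chem.MolFromSmiles(lysine_smiles)
assert mol is not None, "invalid SMILES"

prompt = f"Modify {Chem.MolToSmiles(mol)} to increase its LogS by about 1 unit."
print(prompt)
```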

## Training details

We first trained Mistral-Small-24B-Instruct-2501 on (mostly incorrect) reasoning traces from DeepSeek-R1 to elicit reasoning and adherence to the new tokens/templates. Next, we ran independent rounds of GRPO with verifiable rewards to train specialist models, each on one of the tasks above. We then aggregated and filtered reasoning traces (correct answers with their reasoning) from the specialists and used them to fine-tune Mistral-Small-24B-Instruct-2501 again. Finally, we ran GRPO over all tasks, and the resulting model went through safety post-training.
See our preprint for details on the data and training process.
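
To make "verifiable rewards" concrete, here is a sketch of one possible reward function for the formula-to-structure task; it is an illustration built on RDKit, not the reward code used in training.

```python
# Illustrative verifiable reward (not the authors' code): score a completion for
# the "formula to structure" task by parsing the SMILES and comparing formulas.
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

def formula_reward(completion_smiles: str, target_formula: str) -> float:
    """Return 1.0 if the proposed SMILES parses and matches the target formula."""
    mol = Chem.MolFromSmiles(completion_smiles)
    if mol is None:
        return 0.0  # unparseable SMILES earns no reward
    return 1.0 if CalcMolFormula(mol) == target_formula else 0.0

print(formula_reward("CCO", "C2H6O"))  # ethanol matches its formula -> 1.0
```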

## Safety

We performed refusal post-training for compounds listed on OPCW Schedules 1 and 2. We also post-trained ether0 to refuse questions about standard malicious topics such as making explosives or poisons. Because the model knows pharmacokinetics, it can modulate toxicity; however, the structures of toxic or narcotic compounds are generally already known, so we do not consider this a safety risk. The model provides no uplift on "tacit knowledge" tasks like purification, scale-up, or processing beyond what a web search or a similarly sized language model offers.

## Citation

@article{narayanan2025training,
title={Training a Scientific Reasoning Model for Chemistry},
author={Narayanan, Siddharth M. and Braza, James D. and Griffiths, Ryan-Rhys and Bou, Albert and Wellawatte, Geemi P. and Ramos, Mayk Caldas and Mitchener, Ludovico and Rodriques, Samuel G. and White, Andrew D.},
journal={arXiv preprint arXiv:XXXX.XXXXX},
year={2025}
}

## License

Open-weights (Apache 2.0)