WAON: Large-Scale and High-Quality Japanese Image-Text Pair Dataset for Vision-Language Models
- Paper • 2510.22276 • Published • 3
- llm-jp/WAON-Bench: Viewer • Updated • 1.87k • 394
- llm-jp/waon-siglip2-base-patch16-256: Zero-Shot Image Classification • 0.4B • Updated • 21 • 1
- llm-jp/WAON: Updated • 117 • 7
Organization Card
LLM-jp consists of over 1,000 participants, including researchers and engineers in natural language processing and computer systems from universities and corporations, organized under the auspices of the National Institute of Informatics (NII) in Tokyo, Japan. Its main goals are to collaboratively build open-source LLMs that are proficient in Japanese, to share information on LLM research and development, to promote cross-organizational collaboration among researchers, and to release models, tools, and technical materials to the public. For more details, please refer to our website and paper.
Links
- Website: https://llm-jp.nii.ac.jp/en/
- Model: See the Collections section below.
- Corpus: See https://gitlab.llm-jp.nii.ac.jp/datasets.
- LLM-jp Chatbot Arena: Visit https://chatbot-arena.apps.llmc.nii.ac.jp/ to try our latest models.
Llama-Mimi: Speech Language Models with Interleaved Semantic and Acoustic Tokens
Models (247)
- llm-jp/waon-siglip2-base-patch16-256: Zero-Shot Image Classification • 0.4B • Updated • 21 • 1
- llm-jp/Llama-Mimi-1.3B: Audio-to-Audio • 1B • Updated • 167 • 9
- llm-jp/Llama-Mimi-8B: Audio-to-Audio • 8B • Updated • 17 • 9
- llm-jp/optimal-sparsity-math-d2048-E128-k16-52.2B-A7.1B: Text Generation • 52B • Updated • 8
- llm-jp/optimal-sparsity-math-d2048-E64-k16-26.4B-A7.1B: Text Generation • 26B • Updated • 5
- llm-jp/optimal-sparsity-math-d2048-E32-k16-13.6B-A7.1B: Text Generation • 14B • Updated • 5
- llm-jp/optimal-sparsity-math-d2048-E16-k16-7.1B-A7.1B: Text Generation • 7B • Updated • 49
- llm-jp/optimal-sparsity-math-d1024-E256-k16-26.0B-A1.9B: Text Generation • 26B • Updated • 4
- llm-jp/optimal-sparsity-math-d1024-E128-k16-13.2B-A1.9B: Text Generation • 13B • Updated • 7
- llm-jp/optimal-sparsity-math-d1024-E64-k16-6.7B-A1.9B: Text Generation • 7B • Updated • 7
Datasets (35)
- llm-jp/leaderboard-contents-v2: Viewer • Updated • 4 • 1.31k • 1
- llm-jp/leaderboard-requests-v2: Viewer • Updated • 5 • 1.45k
- llm-jp/leaderboard-results-v2: Updated • 57
- llm-jp/AnswerCarefully: Preview • Updated • 476 • 47
- llm-jp/JSSODa: Viewer • Updated • 20.2k • 147 • 1
- llm-jp/JSSODa-test: Viewer • Updated • 2.26k • 45 • 1
- llm-jp/WAON-Bench: Viewer • Updated • 1.87k • 394
- llm-jp/WAON: Updated • 117 • 7
- llm-jp/leaderboard-requests: Viewer • Updated • 3 • 13.2k • 2
- llm-jp/leaderboard-contents: Viewer • Updated • 862 • 8.96k • 1