warshanks committed
Commit 47f824b · verified · 1 parent: 3ead37d

Add files using upload-large-folder tool

Files changed (1): README.md (+8 −7)

README.md CHANGED

```diff
@@ -2,8 +2,8 @@
 license: other
 license_name: health-ai-developer-foundations
 license_link: https://developers.google.com/health-ai-developer-foundations/terms
-library_name: transformers
-pipeline_tag: image-text-to-text
+library_name: mlx
+pipeline_tag: text-generation
 extra_gated_heading: Access MedGemma on Hugging Face
 extra_gated_prompt: To access MedGemma on Hugging Face, you're required to review
   and agree to [Health AI Developer Foundation's terms of use](https://developers.google.com/health-ai-developer-foundations/terms).
@@ -16,12 +16,13 @@ tags:
 - clinical-reasoning
 - thinking
 - mlx
-- mlx-my-repo
 ---
 
 # mlx-community/medgemma-27b-text-it-8bit
 
-The Model [mlx-community/medgemma-27b-text-it-8bit](https://huggingface.co/mlx-community/medgemma-27b-text-it-8bit) was converted to MLX format from [google/medgemma-27b-text-it](https://huggingface.co/google/medgemma-27b-text-it) using mlx-lm version **0.24.1**.
+This model [mlx-community/medgemma-27b-text-it-8bit](https://huggingface.co/mlx-community/medgemma-27b-text-it-8bit) was
+converted to MLX format from [google/medgemma-27b-text-it](https://huggingface.co/google/medgemma-27b-text-it)
+using mlx-lm version **0.24.1**.
 
 ## Use with mlx
 
@@ -34,12 +35,12 @@ from mlx_lm import load, generate
 
 model, tokenizer = load("mlx-community/medgemma-27b-text-it-8bit")
 
-prompt="hello"
+prompt = "hello"
 
-if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
+if tokenizer.chat_template is not None:
     messages = [{"role": "user", "content": prompt}]
     prompt = tokenizer.apply_chat_template(
-        messages, add_generation_prompt=True
+        messages, add_generation_prompt=True
     )
 
 response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
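The guard in the README snippet only wraps the prompt in chat-message form when the tokenizer actually ships a chat template. The control flow can be sketched without downloading the 27B model by using a stand-in tokenizer; note that `FakeTokenizer` and its toy template string are illustrative assumptions here, not mlx-lm or MedGemma API:

```python
# Stand-in tokenizer that mimics the chat-template guard from the README
# snippet. FakeTokenizer and its template are assumptions for illustration;
# the real tokenizer comes from mlx_lm.load().
class FakeTokenizer:
    chat_template = "<start_of_turn>{role}\n{content}<end_of_turn>\n"

    def apply_chat_template(self, messages, add_generation_prompt=False):
        # Render each message with the toy template above.
        text = "".join(
            self.chat_template.format(role=m["role"], content=m["content"])
            for m in messages
        )
        if add_generation_prompt:
            # Cue the model to produce the next (assistant) turn.
            text += "<start_of_turn>model\n"
        return text


tokenizer = FakeTokenizer()
prompt = "hello"

# Same guard as the README: only reformat when a chat template exists,
# otherwise the raw string is passed to generate() unchanged.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(prompt)
```

With the real tokenizer the rendered prompt is what `generate(model, tokenizer, prompt=prompt, ...)` receives, which is why the guard runs before the `generate` call rather than after.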