Upload README.md with huggingface_hub
README.md ADDED

---
license: apache-2.0
datasets:
- open-r1/codeforces-cots
language:
- en
base_model: open-r1/OlympicCoder-32B
pipeline_tag: text-generation
tags:
- mlx
---

# alexgusevski/OlympicCoder-32B-mlx-4Bit

The model [alexgusevski/OlympicCoder-32B-mlx-4Bit](https://huggingface.co/alexgusevski/OlympicCoder-32B-mlx-4Bit) was converted to MLX format from [open-r1/OlympicCoder-32B](https://huggingface.co/open-r1/OlympicCoder-32B) using mlx-lm version **0.21.5**.

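If you want to reproduce a 4-bit conversion like this one, recent mlx-lm releases expose the converter from Python as well as on the command line. The snippet below is a minimal sketch, not the exact command used for this repo: the `mlx_path` value is a placeholder, and keyword names may vary slightly between mlx-lm versions.

```python
# Sketch: re-create a 4-bit MLX conversion with mlx-lm.
# The mlx_path below is an illustrative placeholder, not the path used for this repo.
from mlx_lm import convert

convert(
    "open-r1/OlympicCoder-32B",             # source Hugging Face repo
    mlx_path="OlympicCoder-32B-mlx-4Bit",   # local output directory (placeholder)
    quantize=True,                          # enable quantization
    q_bits=4,                               # 4-bit weights, matching this repo
)
```
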
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit quantized model and its tokenizer.
model, tokenizer = load("alexgusevski/OlympicCoder-32B-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
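
For interactive use you may prefer to stream tokens as they are produced rather than waiting for the full completion. This is a minimal sketch using `stream_generate`, which recent mlx-lm versions provide alongside `generate`; it assumes the `model`, `tokenizer`, and chat-formatted `prompt` from the block above, and the `max_tokens` value is just an example.

```python
from mlx_lm import stream_generate

# Stream the completion chunk by chunk instead of waiting for the full response.
# Assumes model, tokenizer, and prompt are set up as in the previous block.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```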
