Update README.md
README.md CHANGED

@@ -136,7 +136,7 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated sys
 
 You can run the model with the HuggingFace Transformers library on 2 or more 80GB GPUs (NVIDIA Ampere or newer), with at least 150GB of free disk space to accommodate the download.
 
-This code has been tested on Transformers v4.45.0, torch v2.3.0a0+40ec155e58.nv24.3 and 2
+This code has been tested on Transformers v4.45.0, torch v2.3.0a0+40ec155e58.nv24.3 and 2 H100 80GB GPUs, but any setup that supports meta-llama/Llama-3.1-70B-Instruct should support this model as well. If you run into problems, consider running `pip install -U transformers`.
 
 ```python
 import torch
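The README's Python example is cut off in this hunk (only `import torch` is visible). For context, below is a minimal sketch of what loading a Llama-3.1-70B-class model across two or more 80GB GPUs with Transformers typically looks like; it is not the README's actual snippet. The repository id, prompt, and generation settings are placeholders, and `device_map="auto"` assumes the `accelerate` package is installed.

```python
# Hypothetical sketch, not the README's snippet: load a ~70B model sharded
# across all visible GPUs and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-name"  # placeholder; substitute the actual model repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 weights for a 70B model take roughly 140GB, hence 2x 80GB GPUs
    device_map="auto",           # shard layers across available GPUs (requires accelerate)
)

prompt = "Briefly explain what a GPU is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```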