rajabmondal committed · verified
Commit 292c9bf · Parent: 7e47307

Update README.md

Files changed (1): README.md (+0 −11)
README.md CHANGED
@@ -237,17 +237,6 @@ output = llm(
 )
 ```
 
-#### Simple example code to load one of these GGUF models
-
-```python
-from ctransformers import AutoModelForCausalLM
-
-# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("infosys/NT-Java-1.1B-GGUF", model_file="NT-Java-1.1B_Q4_K_M.gguf", model_type="gpt_bigcode", gpu_layers=50)
-
-print(llm("public class HelloWorld {\n public static void main(String[] args) {"))
-```
-
 ## How to use with LangChain
 
 Here are guides on using llama-cpp-python and ctransformers with LangChain:
 