sapromak committed · verified · Commit 50474d3 · Parent: 65b6ff2

Update README.md

Files changed (1):
  1. README.md +20 -2
README.md CHANGED
@@ -29,13 +29,31 @@ tags:
 
 This model is derived from [OpenCoder-1.5B-Base](https://huggingface.co/infly/OpenCoder-1.5B-Base) by applying additional context-extension fine-tuning with an Adjustment of the Base Frequency parameter of RoPE from 10,000 to 500,000. Fine-tuning ran for 512 optimization steps with a batch size of 128 on sequences of length 16,384. The repository context is composed using the _Path Distance_ heuristic; more details on it and on other aspects, including all the code used, can be found on the project's [Home Page](https://github.com/sapromak/adaptive-code-completion). Note that this model was created to answer specific research questions, __not__ to achieve the maximum possible performance in the repository-level code completion setup; consider it more of a baseline.
 
+The associated research was initiated and conducted by [JetBrains Research](https://huggingface.co/JetBrains-Research).
+
 <div align="center">
-  <img src="https://github.com/sapromak/adaptive-code-completion/blob/main/paper/figures/compilation/beyond-training-window/beyond-training-window-inproject.svg?raw=true" width="70%" alt="Performance" />
+  <img src="https://github.com/sapromak/adaptive-code-completion/blob/main/paper/figures/compilation/beyond-training-window/beyond-training-window-inproject.svg?raw=true" width="100%" alt="Performance" />
   <p>Exact Match on the <em>inproject</em> lines of the <em>large-context</em> subset of the <a href="https://huggingface.co/datasets/JetBrains-Research/lca-project-level-code-completion">Project-Level Code Completion task</a> from the <a href="https://arxiv.org/abs/2406.11612">Long Code Arena benchmark</a>. This checkpoint (solid orange curve) demonstrates its best performance at a context length of 32,768. "1K" refers to 1,024 tokens. The star markers denote the context length used during the repository-level pre-training stage.</p>
 </div>
 
 ## Quickstart
 
 ```python
-# TODO
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_name = "sapromak/OpenCoder-1.5B-Base-32K-via-16K"
+tokenizer_name = "infly/OpenCoder-1.5B-Base"
+
+model = AutoModelForCausalLM.from_pretrained(model_name,
+                                             torch_dtype=torch.bfloat16,
+                                             device_map="auto",
+                                             trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, trust_remote_code=True)
+
+inputs = tokenizer("# write a quick sort algorithm", return_tensors="pt")
+outputs = model.generate(**inputs.to(model.device), max_new_tokens=256)
+
+result = tokenizer.decode(outputs[0], skip_special_tokens=True)
+print(result)
 ```
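
To reproduce the context-extension step itself (rather than just run the checkpoint), the recipe described in the README amounts to overriding the RoPE base frequency of the base model before fine-tuning. Below is a minimal sketch using `transformers`, assuming a Llama-style configuration that exposes the base frequency as `rope_theta`; the released checkpoint already ships with the adjusted value, so none of this is needed for inference:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

base_name = "infly/OpenCoder-1.5B-Base"

# Load the original configuration and raise the RoPE base frequency
# from 10,000 to 500,000, as described in the README.
config = AutoConfig.from_pretrained(base_name, trust_remote_code=True)
config.rope_theta = 500_000.0  # assumes a Llama-style `rope_theta` config field

# The weights are untouched; only the positional encoding is re-parameterized.
# Context-extension fine-tuning (512 steps, batch size 128, 16,384-token
# sequences per the README) would start from this model.
model = AutoModelForCausalLM.from_pretrained(
    base_name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
```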
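
The README points to the project's home page for the actual definition of the _Path Distance_ heuristic, so the sketch below is only one plausible reading of the idea, not the project's exact algorithm: rank repository files by the number of directory hops separating them from the file being completed, and place the closest ones nearest the completion point. The names `path_distance` and `compose_context` are hypothetical.

```python
from pathlib import PurePosixPath

def path_distance(a: str, b: str) -> int:
    # Directory hops from `a`'s directory up to the common ancestor,
    # then down to `b`'s directory.
    pa = PurePosixPath(a).parent.parts
    pb = PurePosixPath(b).parent.parts
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return (len(pa) - common) + (len(pb) - common)

def compose_context(target: str, repo_files: dict[str, str]) -> str:
    # Farthest files first, so the closest-by-path ones land immediately
    # before the completion point.
    ranked = sorted(
        (path for path in repo_files if path != target),
        key=lambda path: path_distance(path, target),
        reverse=True,
    )
    return "\n\n".join(repo_files[path] for path in ranked)

# Example: files under src/app/ are placed closest to src/app/main.py.
context = compose_context("src/app/main.py", {
    "src/app/main.py": "...",
    "src/app/utils.py": "...",
    "tests/test_app.py": "...",
})
```

In practice the concatenated context would also be truncated to the model's window (16,384 tokens during fine-tuning here), which this sketch omits.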