---
datasets:
- natural_instructions
- the_pile
- cot
- Muennighoff/P3
inference:
parameters:
max_new_tokens: 5
temperature: 1.0
top_k: 1
language:
- en
pipeline_tag: text-generation
widget:
-
example_title: "Sentiment Analysis"
text: |-
The task is to label the post's emotion as sadness, joy, love, anger, fear, or surprise.
Input: I'm feeling quite sad and sorry for myself but ill snap out of it soon.
Output: sadness
Input: I am just feeling cranky and blue.
Output: anger
Input: I can have for a treat or if i am feeling festive.
Output:
-
example_title: "Country Currency"
text: |-
Return the currency of the given country.
Input: Switzerland
Output: Swiss Franc
Input: India
Output:
-
example_title: "Tweet Eval Hate"
text: |-
Label whether the following tweet contains hate speech against either immigrants or women. Hate Speech (HS) is commonly defined as any communication that disparages a person or a group on the basis of some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics.
Possible labels:
1. hate speech
2. not hate speech
Tweet: HOW REFRESHING! In South Korea, there is no such thing as 'political correctness" when it comes to dealing with Muslim refugee wannabes via @user
Label: hate speech
Tweet: New to Twitter-- any men on here know what the process is to get #verified?
Label: not hate speech
Tweet: Dont worry @user you are and will always be the most hysterical woman.
Label:
-
example_title: "Entity Recognition"
text: |-
Extract all the names of people, places, and organizations from the following sentences.
Sentence: Satya Nadella, the CEO of Microsoft, was visiting the Bahamas last May.
Entities: Satya Nadella, Microsoft, Bahamas
Sentence: Pacific Northwest cities include Seattle and Portland, which I have visited with Vikash.
Entities:
-
example_title: "Data Clearning"
text: |-
Format the data into a CSV file:
Input: Jane Doe [email protected] (520) 382 2435
Output: Jane Doe,[email protected],520-382-2435
Input: Peter Lee (510) 333-2429 email: [email protected]
Output:
---
<h1 style="font-size: 42px">GPT-JT</h1>
# Model Summary
We present GPT-JT, a fork of GPT-J (6B) fine-tuned on 3.53 billion tokens, which outperforms many 100B+ parameter models on classification benchmarks.
GPT-JT was trained with a new decentralized algorithm on computers networked with 1Gbps interconnect, in contrast with typical 100Gbps-1.6Tbps data center networks.
Unlike a standard causal GPT model, GPT-JT processes the prompt with bidirectional (prefix) attention to fully leverage the context information, and uses causal attention only for token generation.
***Please try out our [Online Demo](https://huggingface.co/spaces/togethercomputer/GPT-JT)!***
# Quick Start
```python
from transformers import pipeline
pipe = pipeline(model='togethercomputer/GPT-JT-6B-v1')
pipe('''"I love this!" Is it positive? A:''')
```
or
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")
```
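Once the model is loaded, a completion can be generated along the lines of the sketch below. The prompt reuses the "Country Currency" widget example from this card, and the decoding settings mirror the inference parameters in the metadata (`max_new_tokens: 5`, greedy decoding); adjust them as needed.
```python
# Minimal generation sketch; the prompt is the "Country Currency" widget example above.
prompt = """Return the currency of the given country.
Input: Switzerland
Output: Swiss Franc
Input: India
Output:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,  # matches the inference settings in the card metadata
    do_sample=False,   # greedy decoding (equivalent to top_k=1)
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```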
# Training Details
## UL2 Training Objective
We train GPT-JT with the UL2 training objective [1][2].
A standard GPT model, including GPT-J, uses the causal mask shown below on the left for autoregressive generation, so each token can only attend to the context before itself.
To fully leverage the context information, we continue training with the UL2 objective and use the prefix causal mask shown below on the right: bidirectional attention over the prompt and causal attention for token generation.
$$
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1
\end{bmatrix}
$$
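The sketch below (not the actual training code) shows one way to build such a prefix mask in PyTorch, where the first `prefix_len` positions (the prompt) attend bidirectionally and the remaining positions attend causally:
```python
import torch

def prefix_lm_mask(seq_len: int, prefix_len: int) -> torch.Tensor:
    """Build the prefix causal mask (right matrix above); True means 'may attend'.

    The first `prefix_len` tokens (the prompt) attend to each other
    bidirectionally; later tokens attend causally.
    """
    # Standard causal (lower-triangular) mask, as in the left matrix.
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.long)).bool()
    # Let every position see the full prefix, as in the right matrix.
    mask[:, :prefix_len] = True
    return mask

print(prefix_lm_mask(5, 3).int())
# tensor([[1, 1, 1, 0, 0],
#         [1, 1, 1, 0, 0],
#         [1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 0],
#         [1, 1, 1, 1, 1]])
```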
## Data
We fine-tune [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) on a mixture of NI, P3, COT, and the Pile:
- [Natural-Instructions](https://github.com/allenai/natural-instructions)
- [P3](https://huggingface.co/datasets/Muennighoff/P3)
- [MMLU-COT](https://github.com/jasonwei20/flan-2/blob/main/mmlu-cot.json)
- [the pile](https://huggingface.co/datasets/the_pile)
We first conduct training for 2.62 billion tokens using the UL2 loss on the Pile, followed by 0.92 billion tokens with a mixture of the above datasets: 5% of COT, 20% of P3, 20% of NI, and 55% of the Pile.
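To illustrate these ratios, one could sample a source dataset for each training example as in the sketch below; the sampling helper is hypothetical and stands in for the actual training pipeline, which is not part of this card.
```python
import random

# Mixture ratios from the paragraph above; the actual dataset iterators are omitted.
mixture = {
    "cot":  0.05,
    "p3":   0.20,
    "ni":   0.20,
    "pile": 0.55,
}

def sample_source(rng: random.Random) -> str:
    """Pick a dataset name with probability proportional to its mixture weight."""
    names, weights = zip(*mixture.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {name: 0 for name in mixture}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # roughly 5% / 20% / 20% / 55%
```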
## Hyperparameters
We used AdamW with a learning rate of 1e-5 and global batch size of 64 (16 for each data parallel worker).
We used mixed-precision training, where activations are kept in FP16 while the optimizer states are kept in FP32.
We use both data parallelism and pipeline parallelism to conduct training.
During training, we truncate input sequences to 2048 tokens; sequences shorter than 2048 tokens are concatenated with others into one long sequence to improve data efficiency.
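A minimal sketch of this packing step, assuming the examples are already tokenized into lists of token ids (the helper below is illustrative, not the actual training code):
```python
from typing import Iterable, Iterator, List

MAX_LEN = 2048  # sequence length used during training

def pack_sequences(token_seqs: Iterable[List[int]], max_len: int = MAX_LEN) -> Iterator[List[int]]:
    """Concatenate tokenized examples into fixed-length chunks of `max_len` tokens.

    Long examples are truncated at chunk boundaries; short ones are packed
    together to avoid wasting compute on padding.
    """
    buffer: List[int] = []
    for seq in token_seqs:
        buffer.extend(seq)
        while len(buffer) >= max_len:
            yield buffer[:max_len]
            buffer = buffer[max_len:]
    if buffer:  # trailing partial chunk, if any
        yield buffer

# Example: three short "sequences" packed into chunks of length 8.
for chunk in pack_sequences([[1, 2, 3], [4, 5, 6, 7], [8, 9]], max_len=8):
    print(chunk)
# [1, 2, 3, 4, 5, 6, 7, 8]
# [9]
```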
## Infrastructure
We used [the Together Research Computer](https://together.xyz/) to conduct training.
# References
[1]: Tay, Yi, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. "Unifying Language Learning Paradigms." arXiv preprint arXiv:2205.05131 (2022).
[2]: Tay, Yi, Jason Wei, Hyung Won Chung, Vinh Q. Tran, David R. So, Siamak Shakeri, Xavier Garcia et al. "Transcending scaling laws with 0.1% extra compute." arXiv preprint arXiv:2210.11399 (2022). |