CITATION — REFERENCE ENTRY
How to generate text: using different decoding methods for language generation with Transformers — Hugging Face
- Key
- huggingface-how-to-generate-2020
- Authors
- von Platen, Patrick
- Issued
- 2020-3-6
- Type
- webpage
- Publisher
- Hugging Face
Raw CSL JSON
{
  "URL": "https://huggingface.co/blog/how-to-generate",
  "note": "Updated July 2023. Defines token sampling as randomly picking the next word according to its conditional probability distribution.",
  "type": "webpage",
  "title": "How to generate text: using different decoding methods for language generation with Transformers",
  "author": [
    {
      "given": "Patrick",
      "family": "von Platen"
    }
  ],
  "issued": {
    "date-parts": [
      [
        2020,
        3,
        6
      ]
    ]
  },
  "publisher": "Hugging Face"
}
Claims
-
In its most basic form, sampling in language model generation means randomly picking the next token according to its conditional probability distribution.
"In its most basic form, sampling means randomly picking the next word w_t according to its conditional probability distribution p(w | w_{1:t-1})."
Available in