CITATION — REFERENCE ENTRY

How to generate text: using different decoding methods for language generation with Transformers — Hugging Face

Revision ec1c1930-eebe-4991-bd56-7013f5c15a3c · 2/25/2026, 1:16:16 PM UTC
Key
huggingface-how-to-generate-2020
Authors
von Platen, Patrick
Issued
2020-03-06
Type
webpage
Publisher
Hugging Face
Raw CSL JSON
{
  "URL": "https://huggingface.co/blog/how-to-generate",
  "note": "Updated July 2023. Defines token sampling as randomly picking the next word according to its conditional probability distribution.",
  "type": "webpage",
  "title": "How to generate text: using different decoding methods for language generation with Transformers",
  "author": [
    {
      "given": "Patrick",
      "family": "von Platen"
    }
  ],
  "issued": {
    "date-parts": [
      [
        2020,
        3,
        6
      ]
    ]
  },
  "publisher": "Hugging Face"
}

Claims

  1. In its most basic form, sampling in language model generation means randomly picking the next token according to its conditional probability distribution.
    "In its most basic form, sampling means randomly picking the next word w_t according to its conditional probability distribution p(w | w_{1:t-1})."
    Quote language: en
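The quoted definition can be illustrated with a minimal sketch: turn a model's next-token logits into a conditional probability distribution with softmax, then draw one token at random according to those probabilities. The token names and logit values below are hypothetical, not from the cited post.

```python
import math
import random
from collections import Counter

def sample_next_token(logits, rng=random):
    """Randomly pick the next token according to its conditional
    probability distribution, as in the quoted definition of sampling."""
    # Softmax over logits (shifted by the max for numerical stability).
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    # Weighted random draw: each token is chosen with its probability.
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical logits for the next token after some prefix.
logits = {"nice": 2.0, "dog": 1.0, "car": 0.1}
random.seed(0)
counts = Counter(sample_next_token(logits) for _ in range(2000))
print(counts.most_common())
```

Because the draw is weighted, higher-probability tokens appear more often across repeated samples, but any token with nonzero probability can be chosen — the property that distinguishes sampling from greedy decoding.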
Available in