• 0 Posts
  • 6 Comments
Joined 5 months ago
Cake day: October 16th, 2025

  • LLMs work by picking the next word* that is most likely given the model’s training and the context so far. Sometimes the model gets into a state where its view of the context doesn’t change after a word is picked, so the next word comes out the same, then the same thing happens again, and around we go. There are fail-safe mechanisms (such as repetition penalties) that try to prevent this, but they don’t work perfectly.

    *Token
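The loop described above can be sketched with a toy next-token picker. This is not a real LLM; the probability table and the `penalty` value are made up for illustration. Greedy picking falls into a cycle because the last token fully determines the next one, while a simple repetition penalty (downweighting tokens already in the output) breaks it:

```python
# Made-up "model": last token -> scored candidates for the next token.
probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.5, "the": 0.3},
    "sat": {"on": 0.7, "down": 0.3},
    "on": {"the": 0.9, "a": 0.1},
}

def greedy_next(token, history):
    """Always pick the single most likely next token."""
    candidates = probs.get(token, {"the": 1.0})
    return max(candidates, key=candidates.get)

def penalized_next(token, history, penalty=0.5):
    """Fail-safe sketch: downweight tokens that already appeared."""
    candidates = dict(probs.get(token, {"the": 1.0}))
    for t in set(history):
        if t in candidates:
            candidates[t] *= penalty
    return max(candidates, key=candidates.get)

def generate(start, steps, pick):
    out = [start]
    for _ in range(steps):
        out.append(pick(out[-1], out))
    return out

print(" ".join(generate("the", 8, greedy_next)))
# the cat sat on the cat sat on the   <- stuck repeating a 4-token cycle
print(" ".join(generate("the", 8, penalized_next)))
```

With the penalty applied, the argmax shifts to a different candidate once the loop’s tokens have been seen, so the cycle breaks; real samplers use the same idea over much larger vocabularies, and it still fails sometimes.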