✨ 3D Artist and trainee psychiatrist. Imagination unleashed. 🧠

🎨 No AI, now or ever. All work is my own creation using Blender.

💖 DM me for commissions.

🌐 Explore my other socials, buy me a coffee or purchase prints: https://linktr.ee/ThisLucidLens

  • 0 Posts
  • 3 Comments
Joined 3 years ago
Cake day: June 21st, 2023


  • Is this necessarily true? I remember seeing an article a while back suggesting that prompting “do not hallucinate” is enough to meaningfully reduce the risk of hallucinations in the output.

    From my fairly superficial understanding of how LLMs work, “don’t do X” maps to a completely different vector in the model’s semantic space than “do X” does, so the two prompts aren’t just opposites along one dimension. This is different from telling a human, for example, not to think about elephants (congratulations, you’re now thinking about elephants. Aren’t they cute? Look at that little trunk and smiley mouth).