• 0_o7@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    4 days ago

    When they steal: Innovative approach to knowledge acquisition

    When others steal: A threat to free market by IP violation

  • Tar_Alcaran@sh.itjust.works
    link
    fedilink
    arrow-up
    1
    ·
    6 days ago

    Also pictured here: Anthropic stating out loud their models will just give out all the “secret” and “secured” internal data to anyone who asks.

    Of course, that’s by design. LLMs can’t have any barrier between data and instructions, so they can never be secure.