• Pennomi@lemmy.world · 2 days ago

    I disagree. Parsing through buckets of text (not one paragraph, but a user’s full comment history) is literally the one thing LLMs ARE good at. This is not a logic problem, nor is it something that requires 100% accuracy.

    It doesn’t matter whether someone is just weird or actually malicious; either way, I don’t want to engage in dialogue with someone who is unlikely to respect my time or words.

  • new_world_odor@lemmy.world · 2 days ago

      Right, if we’re talking about “good” with regard to speed, then you’re correct. But if we’re talking about discerning intent? Seriously? I find it hard to believe you’re speaking in good faith and without bias yourself here. Disguising intent is the leading method of ‘jailbreaking’ an LLM, and at least half the time, trolls are attempting to disguise their intent (with varying degrees of success). So such a system would be a solid failure at worst, and would miss swaths of trolls at best.

      I don’t want to engage with someone like that either, but I care about not skipping over the people on the fringes of behavior, people who don’t just regurgitate an echo chamber. This task might not require 100% accuracy but I personally wouldn’t be satisfied with anything less than 99.9%.

      I think using something like what we’ve been talking about is very, very far off in the future for me, if I were ever to do so at all. This conversation has made me realize that.

    • Pennomi@lemmy.world · 2 days ago
        Fair enough! It was really a hypothetical anyway. No such system has been built to my knowledge.