• nonentity@sh.itjust.works · +63/-1 · 6 days ago
    1. No it won’t.
    2. Anyone who frames LLMs as ‘intelligence’ is betraying that they don’t understand what they’re talking about.
    3. Any work an LLM can perform effectively is work no human should be performing.
    • pkjqpg1h@lemmy.zip · +4 · 6 days ago

      Could you explain a little bit more?

      Any work an LLM can perform effectively is work no human should be performing.

      • nonentity@sh.itjust.works · +23 · 6 days ago

        LLMs are a tool with vanishingly narrow legitimate and justifiable use cases. If they can prove to be truly effective and defensible in an application, I’m OK with them being used in targeted ways much like any other specialised tool in a kit.

        That said, I’ve yet to identify any use of LLMs today that clears my technical and ethical barriers to justify their use.

        My experience to date is that the majority of ‘AI’ advocates are functionally slopvangelical LLM thumpers, and should be afforded the same respect and deference as anyone who adheres to a faith I don’t share.

        • pkjqpg1h@lemmy.zip · +3/-1 · 5 days ago

          What do you think about these:

          Translation
          Grammar
          Text editing
          Categorization
          Summarization
          OCR
          
          • Catoblepas@piefed.blahaj.zone · +11 · 5 days ago

            OCR isn’t a large language model. That’s why poor-quality scans or damaged text sometimes give you garbled nonsense: it isn’t determining the statistically most likely next word, it’s matching the input against possible individual characters.
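
            A toy illustration of that distinction, using made-up 5x5 bitmaps rather than any real OCR engine’s internals: each glyph is scored against candidate characters in isolation, with no language model predicting a likely next word, so a damaged scan collapses into the nearest-looking character instead of a plausible word. A minimal Python sketch:

              import numpy as np

              # Hypothetical 5x5 reference bitmaps (1 = ink, 0 = blank).
              REFERENCES = {
                  "I": np.array([[0, 0, 1, 0, 0]] * 5),
                  "L": np.array([[1, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]]),
                  "T": np.array([[1, 1, 1, 1, 1]] + [[0, 0, 1, 0, 0]] * 4),
              }

              def recognise_glyph(glyph):
                  # Pick the reference character whose pixels overlap the glyph best;
                  # there is no context from neighbouring letters or words.
                  scores = {ch: int(np.sum(glyph == ref)) for ch, ref in REFERENCES.items()}
                  return max(scores, key=scores.get)

              # An "L" with one damaged pixel still matches "L"; heavier damage just
              # returns whichever character happens to look closest, i.e. garbage.
              noisy_L = np.array([[1, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 0, 1]])
              print(recognise_glyph(noisy_L))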

          • nonentity@sh.itjust.works · +8 · 5 days ago

            LLMs can’t genuinely perform any of those functions, and the output from tools that are infected with them and claim to can intrinsically only ever be imprecise, and should never be trusted.

          • Anna@lemmy.ml · +7 · 5 days ago

            Translation isn’t as easy as just taking a word and replacing it with a word from another language that has the same definition. Yes, a technical document or something similar can be translated word for word. But jokes, songs and a lot of other things differ from culture to culture. Sometimes an author chooses a specific word in a certain language, rooted in a certain culture, precisely because it can be interpreted in multiple ways to reveal a hidden meaning to readers.

            And sometimes, to convey the same emotion to a reader from a different language and culture, we need to change the text heavily.

            • GraniteM@lemmy.world · +2 · 5 days ago

              I remember the Babelizer from the early internet, where you would input a piece of text and it would run it through five or six layers of translation, like English to Chinese to Portuguese to Russian to Japanese and back to English again, and the results were always hilarious nonsense that only vaguely resembled the original text.

              One of the first things I did with an LLM was replicate this process, and if I’m being honest, it does a much better job of pushing the text through those multiple layers and coming out with something that’s still fairly reasonable at the far end. I certainly wouldn’t use it for important legal documents, geopolitical diplomacy, or translating works of poetry or literature, but it does have uses in cases where the stakes aren’t too high.
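
              For what it’s worth, the round-trip experiment is trivial to reproduce. Here is a minimal sketch where llm_translate is a hypothetical stand-in for whatever LLM or translation service you actually call (it is not a real library function):

                def llm_translate(text, source, target):
                    # Hypothetical stand-in: plug in your LLM or MT service here.
                    raise NotImplementedError

                def babelize(text, chain=("en", "zh", "pt", "ru", "ja", "en")):
                    # Push the text through each adjacent language pair in the chain
                    # and return whatever survives the round trip.
                    for src, dst in zip(chain, chain[1:]):
                        text = llm_translate(text, source=src, target=dst)
                    return text

                # babelize("The spirit is willing, but the flesh is weak.")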

          • FrowingFostek@lemmy.world · +2 · 5 days ago

            Not OP. I wouldn’t call myself tech savvy, but having it suggest a categorization for the files on my computer sounds kinda nice. I just can’t trust these clowns to keep all my data local.

        • hector@lemmy.today · +1 · 5 days ago

          I mean, I think one legitimate use is sifting through massive tranches of information and pulling out everything on a subject. Like if you have the Epstein files, or whatever isn’t redacted in the half of the pages they actually released, and you want to pull out all mentions of, say, the boss of the company that ultimately owns the company you work for, or the president.

          ProPublica uses it for something of that sort anyway; they explained how they used it to sift through tranches of information in an article I read a couple of years ago. That seemed like a rare case where this technology could actually be useful.
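
          A rough sketch of that kind of sifting pipeline, assuming a hypothetical ask_llm client (illustrative only, not ProPublica’s actual setup): chunk the long documents and collect every passage the model flags as mentioning the subject.

            def ask_llm(prompt):
                # Hypothetical stand-in for whatever model/client you use.
                raise NotImplementedError

            def chunks(text, size=4000):
                # Split a long document into model-sized pieces.
                for i in range(0, len(text), size):
                    yield text[i:i + size]

            def find_mentions(documents, subject):
                # Collect every passage the model flags as mentioning the subject.
                hits = []
                for doc in documents:
                    for piece in chunks(doc):
                        answer = ask_llm(
                            "Quote every passage below that mentions "
                            f"{subject}, or reply NONE.\n\n{piece}"
                        )
                        if answer.strip() != "NONE":
                            hits.append(answer)
                return hits  # every hit still needs human verification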