• Gamingdexter@lemmy.ml
    1 year ago

    Could be interesting, but it depends on how it’s trained. I know there are hundreds of people who spend hours just chatting with GPT; if trained correctly, it could create some very interesting main/side-quest characters. We’ll just have to wait and see. Not a bad thing, since it seems AI is “the thing” at the moment.

    • Traister101@lemmy.today
      1 year ago

      “Chatting”. LLMs don’t have any idea what words mean; they’re kind of like a really fancy autocorrect, generating output based on what’s most likely to occur next in the current context.
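
      A minimal sketch of that “most likely to occur next” idea, using a toy bigram counter over a made-up corpus (the corpus, the helper name, and the greedy pick are illustrative assumptions, not how GPT is actually built):

      ```python
      # Toy illustration of "pick whatever is most likely to come next".
      # Real LLMs use learned weights over subword tokens, not word counts.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat because the cat was tired".split()

      # Count how often each word follows each other word (a bigram table).
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def most_likely_next(word: str) -> str:
          """Return the statistically most frequent follower of `word`."""
          return following[word].most_common(1)[0][0]

      print(most_likely_next("the"))  # 'cat' (seen twice after 'the')
      print(most_likely_next("cat"))  # 'sat' (ties break by insertion order)
      ```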

      • rigatti@lemmy.world
        1 year ago

        If they put together the right words, does it matter if they know what they’re saying?

        • Traister101@lemmy.today
          1 year ago

          I mean, plain old autocorrect does a surprisingly good job. Here’s a quick example where I only tap the middle suggested word: “I will be there for you to grasp since you think your instance is screwy.” I think everybody can agree that sentence is a bit weird, but an LLM has about as much understanding of its output as the autocorrect/word suggestions did.
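
          Roughly what “only tapping the middle suggested word” amounts to, sketched with the same kind of toy model (the corpus and seed word are invented for the example; a phone keyboard or an LLM uses far richer statistics, but neither consults meaning, only likelihood):

          ```python
          # Sketch of chaining "top suggestion" picks into a whole sentence.
          from collections import Counter, defaultdict

          corpus = ("i will be there for you i will be happy to help "
                    "you think your instance is screwy i think so too").split()

          following = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              following[prev][nxt] += 1

          def ramble(seed: str, length: int = 8) -> str:
              """Greedily append the most common follower, word by word."""
              words = [seed]
              for _ in range(length):
                  options = following[words[-1]]
                  if not options:  # dead end: no observed follower
                      break
                  words.append(options.most_common(1)[0][0])
              return " ".join(words)

          print(ramble("i"))  # "i will be there for you i will be": fluent-ish, but nobody meant it
          ```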

          A conversation is by definition at least two-sided. You can’t have a conversation with a tree or a brick, but you could have one with another person. An LLM is not capable of thought. It “converses” via a more advanced version of what your phone’s autocorrect does when it gives you a suggested word. If you think of that as conversation, I find that an extremely lonely definition of the word.

          So to me, yes, it does matter.

          • rigatti@lemmy.world
            1 year ago

            I think you’re kind of underselling how good current LLMs are at mimicking human speech. I can foresee them being fairly hard to detect in the near future.

            • Traister101@lemmy.today
              1 year ago

              That wasn’t my intention with the wonky autocorrect sentence. The point was that LLMs and my autocorrect equally have no idea what words mean.

              • rigatti@lemmy.world
                1 year ago

                Yes, and my point is that it doesn’t matter whether they know what they mean, just that they appear to know what they mean.

              • FooBarrington@lemmy.world
                1 year ago

                What does it mean to “have an idea what words mean”?

                LLMs clearly have some associations between words: they can use synonyms, explain words, and use words correctly. How do you determine from the outside whether they “understand” something?

                • Traister101@lemmy.today
                  1 year ago

                  We understand a tree to be a growing, living thing; an LLM understands a tree as a collection of symbols. When they create output, they don’t decide that one synonym is more appropriate than another; it’s chosen by whichever collection of symbols is more statistically likely.

                  Take, for example, attempting to correct GPT: it will often admit fault yet not “learn” from it. Why not? If it understands words, it should be able to, at least in that context, no longer output the incorrect information, yet it still does. It doesn’t learn from it because it can’t. It doesn’t know what words mean. It knows that when it sees the symbols representing “You got {thing} wrong”, the most likely symbols to follow represent “You are right, I apologize”.

                  That’s all LLMs like GPT do currently. They analyze a collection of symbols (not actual text) and then output what they determine to be most likely to follow. That causes very interesting behavior: you can talk to it and it will respond as if you are having a conversation.
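
                  To make the “collection of symbols (not actual text)” point concrete, here is a small sketch assuming the tiktoken package (pip install tiktoken) and its cl100k_base encoding, the one used by recent GPT models; the only point is that the model receives integer token IDs rather than words:

                  ```python
                  # What GPT-style models actually receive: integer token IDs, not words.
                  import tiktoken

                  enc = tiktoken.get_encoding("cl100k_base")

                  ids = enc.encode("A tree is a growing living thing.")
                  print(ids)              # a short list of integers; these are the model's "symbols"
                  print(enc.decode(ids))  # round-trips back to the original text
                  ```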

                  • FooBarrington@lemmy.world
                    1 year ago

                    We understand a tree to be a growing, living thing; an LLM understands a tree as a collection of symbols.

                    No, LLMs understand a tree to be a complex relationship of many, many individual numbers. Can you clearly define how our understanding is based on something different?

                    When they create output, they don’t decide that one synonym is more appropriate than another; it’s chosen by whichever collection of symbols is more statistically likely.

                    What is the difference between “appropriate” and “likely”? I know people who use words to sound smart without understanding them. Do they decide which words are appropriate, or which ones are likely? Where is the line?

                    Take, for example, attempting to correct GPT: it will often admit fault yet not “learn” from it. Why not? If it understands words, it should be able to, at least in that context, no longer output the incorrect information, yet it still does. It doesn’t learn from it because it can’t.

                    This is wrong. If you ask it something, it replies, and you then correct it, it will absolutely “learn” from that for the rest of the session, because the correction stays in its context. That’s due to the architecture, but it refutes your point.
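
                    Roughly why that works, sketched with a made-up generate() standing in for the real model: the model itself is stateless, but every turn the whole chat history, including your correction, is fed back in as context, so within a session the correction is always in front of it; start a fresh session and it is gone.

                    ```python
                    # Illustrative only: `generate` is a stand-in for a real LLM call.
                    # "Learning within a session" is just the correction riding along in the prompt.
                    from typing import Dict, List

                    def generate(context: List[Dict[str, str]]) -> str:
                        """Placeholder model: it sees only `context`, nothing else."""
                        return f"(reply conditioned on {len(context)} prior messages)"

                    session: List[Dict[str, str]] = []

                    def chat(user_message: str) -> str:
                        session.append({"role": "user", "content": user_message})
                        reply = generate(session)  # the full history goes in on every turn
                        session.append({"role": "assistant", "content": reply})
                        return reply

                    chat("Who wrote Dune?")
                    chat("No, that is wrong: it was Frank Herbert.")  # the correction now lives in `session`
                    chat("So, who wrote Dune?")                       # the model sees the correction in its context

                    session.clear()  # a brand-new session: the "learned" correction is gone
                    ```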

                    It doesn’t know what words mean. It knows that when it sees the symbols representing “You got {thing} wrong”, the most likely symbols to follow represent “You are right, I apologize”.

                    So why can it often output correct information after it has been corrected? This should be impossible according to you.

                    That’s all LLMs like GPT do currently. They analyze a collection of symbols (not actual text) and then output what they determine to be most likely to follow. That causes very interesting behavior: you can talk to it and it will respond as if you are having a conversation.

                    Aaah, the old “stochastic parrot” argument. Can you clearly show that humans don’t analyse inputs and then output what they determine to be most likely to follow?

                    If you’d like, we can move away from the purely philosophical questions and go to a simple practical one: given some system (an LLM, an animal, a human), how do I figure out whether the system understands? Can you give me concrete steps I can take to figure out whether it’s “true understanding” or “LLM-level understanding”? Your earlier approach (tell it when it’s incorrect) was wrong. Do you have an alternative? If not, how is this not a “god of the gaps” argument?