Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds::ChatGPT went from answering a simple math problem correctly 98% of the time to just 2% over the course of a few months.

  • Veraticus@lib.lgbt
    1 year ago

    LLMs work nothing like our brains; the “neural network” in the name is a loose mathematical analogy, not a claim of brain-like function. And they aren’t trained on facts.

    LLMs are essentially complicated mathematical functions that answer one question: “what is the most likely next word after this text?” Think of the autosuggest on your phone taken to its extreme.
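
    A minimal sketch of that idea, with invented words and probabilities (a real model scores hundreds of thousands of tokens using billions of learned weights, not a hand-written table):

    ```python
    # Toy "next word" predictor: all numbers here are made up for illustration.
    # The point is the mechanism: rank candidate continuations by probability
    # and emit the winner -- autosuggest taken to the extreme.

    next_word_probs = {
        "blue": 0.62,     # after "the sky is ..."
        "clear": 0.21,
        "falling": 0.04,
        "green": 0.01,
    }

    def predict_next(probs: dict[str, float]) -> str:
        # Greedy decoding: always pick the highest-probability continuation.
        return max(probs, key=probs.get)

    print(predict_next(next_word_probs))  # -> "blue"
    ```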

    They do not think in any sense and have no knowledge or facts internal to themselves. All they do is compose words together.

    And this is also why they’re garbage at math (and why they frequently make things up, and why they can’t “remember” anything). They are simply stringing words together based on their model, not actually thinking. If their model shows that the next word after “one plus two equals” is more likely to be four than three, they will simply answer four.
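
    To make that failure mode concrete, here is the same toy sketch applied to arithmetic. The numbers are invented; the point is that the model ranks words, it doesn’t compute:

    ```python
    # Hypothetical distribution: suppose the training data happened to make
    # "four" score slightly higher than "three" after "one plus two equals".
    # The model has no arithmetic to fall back on -- it just takes the argmax.

    continuations = {"three": 0.48, "four": 0.52, "five": 0.00}
    answer = max(continuations, key=continuations.get)
    print(f"one plus two equals {answer}")  # -> "one plus two equals four"
    ```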