Why do people keep asking language models to do math?
As a biological language model I’m not very proficient at math.
The fact that a souped-up autocomplete system can solve basic algebra is already impressive, IMO. But somehow people think it works like Skynet and keep asking it to solve their calculus homework.
Looking for emergent intelligence. These models aren’t designed to do maths, but if they become able to reason mathematically as a by-product of learning to converse with a human, that’s a sign they’re developing more than just imitation abilities.
Not really understanding the distinction at first, I asked ChatGPT a lot about math: gave it formulas to solve, asked it questions about very large numbers, and things like that. I then did my due diligence on its answers, which overwhelmingly turned out to be spot on.
Then, as is being reported elsewhere, it got progressively worse at math and its results seemed to become randomized. I could ask it the very same math questions as in the beginning, but now the answers were garbage. It has 100% been ‘smoothed out’ in the thinking department, likely because people were finding ways to monetize it that its creators hadn’t even thought of, so they backpedaled.
Because it’s something completely new that they don’t fully understand yet. Computers have been good at math from the very beginning; everything else was built on top of that, and people are used to it.
Now all of a sudden, the infinitely precise and accurate calculating machine is just pulling answers out of its ass and presenting them as fact. That’s not easy to grasp.
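A minimal sketch of that contrast, assuming the usual picture of an LLM sampling its next token from a learned probability distribution. The `toy_language_model` below is purely illustrative, with made-up candidate answers and weights; it is not how any real model works, just a way to show why the same question can get different answers:

```python
import random

# Deterministic computation: the same input always yields the same output.
def calculator(a: int, b: int) -> int:
    return a * b

# Toy stand-in for a language model: it doesn't compute the product,
# it samples a plausible-looking completion from a probability
# distribution. The candidates and weights here are invented.
def toy_language_model(prompt: str, temperature: float = 1.0) -> str:
    candidates = {"56088": 0.55, "56078": 0.20, "55988": 0.15, "56888": 0.10}
    tokens = list(candidates)
    # Crude temperature reshaping: lower temperature sharpens the
    # distribution toward the most likely answer, higher flattens it.
    weights = [w ** (1.0 / temperature) for w in candidates.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

print(calculator(123, 456))                    # always 56088
for _ in range(3):
    print(toy_language_model("123 * 456 ="))   # may differ run to run
```

The calculator is exact every time; the sampler is confidently wrong some fraction of the time, which is exactly the behavior people find so disorienting.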
They think that’s what “smart” means.