That’s kind of my point. AIs have no concept of time and even less of death, and they “think” (in a metaphorical sense) very differently from us. I think it’s good to be aware of that, and since this is a meme page, I don’t even need a real point. I think it’s funny even though I understand the underlying process (not in detail, obviously, but you get what I mean)
Interestingly, GPT-4 gives a slightly different response:
I’m sorry, but as of my last update in September 2021, Michael Jackson passed away in June 2009. I am unable to provide any recent updates or information beyond that date. If there have been any posthumous releases or tributes, I would recommend checking the latest news sources for the most up-to-date information.
Honestly not sure if it’s better or worse. It could be better in that it recognizes the absurdity of asking what he’s up to and assumes you must be talking about posthumous music releases instead.
It could be worse in that it should know we’re obviously talking about the person, not his music, and shouldn’t jump to that conclusion.
But that’s honestly a tough call even for a human. If someone asks a strange question like that, do you assume they mean exactly what they said, or do you assume they meant something else and adjust your answer to fit what they probably meant?
The rest of the answer’s structure probably just comes from the system prompt telling it to remind the user of its cutoff date.
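For context, a rough sketch of what that mechanism looks like. The actual system prompt isn’t public, so the wording here is invented; only the shape (instructions silently prepended to the conversation) is the real mechanism:

```python
# Hypothetical chat payload. The real system prompt is not public;
# this only illustrates how cutoff reminders get injected upstream
# of the user's question.
messages = [
    {"role": "system",
     "content": "You are a helpful assistant. Knowledge cutoff: 2021-09. "
                "When asked about events after the cutoff, remind the "
                "user of this limitation."},
    {"role": "user",
     "content": "What is Michael Jackson up to these days?"},
]
```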
I’m not trying to argue and I’m not saying you’re wrong or right, I’m just kind of thinking out loud.
But as it is a meme, yeah gpt what the hell, who knows lol
Interesting! I recently heard the framing that AIs aren’t intelligent and it’s better to think of them as applied statistics. The model doesn’t “understand” the question, it just calculates the most probable answer. That’s why they sometimes suck at leading questions.
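A toy illustration of that “applied statistics” view (all tokens and scores below are made up): generating an answer is just repeatedly sampling from a probability distribution over possible next tokens, with no understanding anywhere in the loop.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Made-up scores for the token after "Michael Jackson is ..."
logits = {"working": 1.2, "touring": 0.8, "releasing": 0.5, "dead": 0.1}
probs = softmax(logits)

# "Answering" is just sampling from that distribution.
tokens = list(probs)
next_token = random.choices(tokens, weights=[probs[t] for t in tokens])[0]
print(probs, "->", next_token)
```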
And there is a second AI at play that’s important here. The main AI just knows stuff but doesn’t reflect on whether the answer is appropriate. The second is trained on feedback from real people who interact with ChatGPT and rate its output.
So it mentions the cutoff not because it’s self-reflective, but because the second AI learned not to be too confident about real people’s latest developments and never learned to differentiate between the living and the dead. I think this is a good illustration of that process.
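A made-up sketch of how that second model (the reward model in RLHF-style training) could produce exactly this behavior: it scores candidate answers, and hedged answers that mention the cutoff score higher regardless of whether the subject is alive or dead. The scoring rules below are pure invention for illustration:

```python
def toy_reward(answer: str) -> float:
    # Invented reward model: pretend human raters rewarded mentioning
    # the knowledge cutoff and penalized overconfident claims about
    # recent events. Note it never checks if the person is alive.
    score = 0.0
    if "as of my last update" in answer.lower():
        score += 1.0  # hedging about the cutoff scores well
    if "definitely" in answer.lower():
        score -= 1.0  # overconfidence scores badly
    return score

candidates = [
    "He is definitely working on a new album right now.",
    "As of my last update, he passed away in June 2009.",
]
print(max(candidates, key=toy_reward))
```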
It’s very clever predictive text :)
Bean
Literally 1845