There was more drama? I didn’t even notice. They’re always doing drama.
lemmy.world is a reddit clone with reddit-tier politics
lemmygrad.ml and hexbear.net are extremely communist (hexbear.net consists mostly of trans people)
lemmy.ml is mostly communist but gets a wider range of people than the two above
lemmy.dbzer0.com is focused on piracy
I actually wasn’t but that’s a different story
Right that’s my point, but what kind
How about something like aliexpress.com/item/1005007010043560.html
If they say “the goose doesn’t like his ball” that is misgendering.
I’ve had people flip out at me on Lemmy for ‘misgendering’. It’s so ridiculous.
Like we don’t know if it’s a man or woman. It’s the internet, yknow?
I love goblins and lizardmen
Fine 😔
and, conversely, posting things you have “verified”
“You’re wrong! I was able to prove it with a quick Google!”
Your knowledge coming from a ‘quick google’ isn’t the flex you think it is. Most things that can be proven with a quick google are false.
I’ve heard the Irish are rather decent bomb makers and have a history of repelling invading cunts.
This will serve you well as a chat-up line.
Nothing makes me angrier than the right appropriating the symbols of anti-imperialist struggle.
These guys will wave signs of Patrick Pearse as a right-wing “Ireland for the Irish” nationalist statement. It’s a total dishonour to the memory of the Fenian movement, who didn’t promote anything like that. (Pearse was the son of an immigrant.)
The anti-immigrant movement is tied to the imperial core. It’s a fundamentally imperialist movement, not anti-imperialist.
when you input something into an LLM and regenerate the responses a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning
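To be concrete, by “regenerate” I just mean sampling the model again at a temperature above zero, so each run can land on a different (sometimes contradictory) answer. A minimal sketch, assuming a Hugging Face text-generation pipeline (the model and prompt here are just placeholders):

```python
# Minimal sketch: "regenerating" = sampling again with temperature > 0,
# so repeated runs on the same prompt can produce conflicting answers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

prompt = "Q: Is the Great Wall of China visible from space? A:"
for i in range(3):
    out = generator(prompt, max_new_tokens=30, do_sample=True, temperature=1.0)
    print(f"run {i + 1}:", out[0]["generated_text"][len(prompt):].strip())
```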
Can you paste an example of this error?
Right. Like if I were talking to someone in total delirium and their responses were random and not a good fit for the question.
LLMs are not like that.
But it’s inherently impossible to “show” anything except inputs & outputs (including for a biological system).
What are you using the word “real” to mean, and is it aloof from the measurable behaviour of the system?
You seem to be using a mental model that there’s
A: the measurable inputs & outputs of the system
B: the “real understanding”, which is separate
How can you prove B exists if it’s not measurable? You say there is an “onus” to do so. I don’t agree that such an onus exists.
This is exactly the Chinese Room paper. ‘Understand’ is usually understood in a functionalist way.
They don’t understand, though. A lot of AI evangelists seem to smooth over that detail; it is an LLM, not anything that “understands” language, video, or images.
We’re into the Chinese Room problem. “Understand” is not a well-defined or measurable thing. I don’t see how it could be measured except by looking at inputs & outputs.
You’re against computers being able to understand language, video, and images?
lol, that’s obviously a massive typo… I’m not sure if they were trying to say 167 or 170?