To unpack the post a bit:

So my understanding is that Yud is convinced that the inscrutable matrices (note: just inscrutable to him) in his LLM have achieved sentience. In his near-future world where AI can exert itself in the physical world at will and, in particular, transfer data into your body, what possible use does it have for a bitcoin? What possible benefit would come from reprogramming human DNA beyond the intellectual challenge? I’ve recently been thinking about how Yud is supposedly the canonical AI-doomer, but his (and the TESCREAL community in general’s) AI ideation is rarely more than third-rate, first-thought-worst-thought sci-fi.
also:
"people keep on talking about… the near-term dangers of AI but they never come up with any[thing] really interesting"
Given the current public discourse on AI and how it might be exploited to make the working class redundant, this is just Yud telling on himself for the gazillionth time.
also a later tweet:
"right that’s the danger of LLMs. they don’t reason by analogy. they don’t reason at all. you just put a computer virus in one end and a DNA virus comes out the other"
Well, consider my priors adjusted: Yud correctly identifies that LLMs don’t reason. Good job, my guy. Yet somehow he believes that today’s LLMs can still spit out viable genetic viruses. Last I checked, no one on Stack Overflow has cracked that one yet.
Actually, if one of us wrote that up as a Stack Overflow question, maybe we could spook Yud. That would be fun.