Their minds are open to all ideas, so long as the idea is a closed form solution that looks edgy.
I kind of wonder if this whole movement of rationalists believing they can “just” make things better than the people already in a field comes from a creeping sense that being rich and having an expensive educational background may, in the future, matter less than having background experience and situational context, two things they loathe?
It’s… it’s almost as if the law about shareholder value was intended as a metaphor for accountability, not a literal, reductive claim that results in an ouroboros. Almost like our economic system is supposed to be a means, not an end in and of itself?
No. Definitely can’t be that.
If they squeeze this rock hard enough, maybe it’ll bleed.
Procreate is an example of what good AI deployment looks like. They do use technology, and even machine learning, but they do it in obviously constructive scopes, in between where the artist’s attention is focused. And they’re committed to that because… there’s no value for them in being a thin wrapper around an already completely commoditized technology that’s on its way to the courtroom to be challenged by landmark rulings, with no more ceiling to grow into whooooooops.
Stranger things have happened. But in either case, we should commit to supporting every effort. If one punch doesn’t work, throw another. Death by a million cuts.
Maybe I’m old-fashioned, but
I still start by asking someone who knows about the thing what books they might recommend. And I know mushrooms are especially problematic, so I go look for, um, active communities of people who aren’t dead from eating the wrong mushrooms.
Is it possible that we’re looking too far away from accountable sources when we route our knowledge searches through noisy corporate slop?
Like, it seems to me that there’s a notable asymmetry here!
I think that’s a great framing here.
Q*
My understanding is that it was renamed or rebranded to Strawberry, which is itself nebulous marketing: maybe it’s the new larger model, or maybe it’s GPT-5, or maybe…
it’s all smoke and mirrors. I think my point is, they made some cost optimizations and mostly moved around things that existed, and they’ll keep doing that.
No joke but actually yes?
Meta: was that his plan all along? Maybe a few well-placed sneers are what you need to save America.
LLM, tell me the most obviously persuasive sort of science, devoid of context. Historically, that’s been super helpful, so let’s do more of that.
It can be both. Like, OpenAI is probably kind of hoping that this story spreads widely and is taken seriously, and has no problem suggesting, implicitly and explicitly, that their employees’ stock value is tied to how scared everyone is.
Remember when Altman almost got ousted and people got pressured not to walk? That their options were at risk?
Strange hysteria like this doesn’t need just one reason. It just needs an input dependency and ambiguity; the rest takes care of itself.
Short story: it’s smoke and mirrors.
Longer story: this is how software releases work now, I guess. A lot is riding on OpenAI’s anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there’s no more training data. So the next trick is to claim that their next batch of models has “solved” various problems that people say you can’t solve with LLMs, and that they’ll be massively better without needing more data.
But, speaking as someone with insider info: it’s all smoke and mirrors.
The model that “solved” structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling for multiple responses until the parser validates on the other end (so it’s basically a price optimization, afaik).
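A minimal sketch of what “poll until the parser validates” could look like in practice, assuming a hypothetical call_model stub (this is my guess at the shape of the trick, not anything from their codebase):

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError

def structured_output(prompt: str, max_attempts: int = 5) -> dict:
    # No new capability here: just re-sample the model and pay for more
    # inference until a response happens to parse as valid JSON.
    for _ in range(max_attempts):
        raw = call_model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # invalid output; buy another sample
    raise ValueError("no parseable output after max_attempts samples")
```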
The next large model, launching with the new Q* change tomorrow, is “approaching AGI because it can now reliably count letters,” but actually it’s still just agents (Q* looks to be just a cost optimization of agents on the backend; that’s basically it), because the only way it can count letters is by invoking agents and tool use to write a Python program and feed the text into that. Basically, it’s all the things that already exist independently, wrapped up together. Interestingly, they’re so confident in this model that they don’t run the resulting Python themselves. It’s still up to you, or one of those LLM wrapper companies, to execute the occasionally broken code to, um… checks notes, count the number of letters in a sentence.
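If the letter-counting really is agents plus tool use under the hood, the pipeline is roughly this shape (again a hedged sketch with the same hypothetical call_model stub, not their actual plumbing):

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError

def count_letters(sentence: str, letter: str) -> str:
    # The model doesn't count anything itself: it writes a small Python
    # program that would do the counting, and returns that source code.
    code = call_model(
        f"Write a Python snippet that counts occurrences of "
        f"{letter!r} in {sentence!r} and prints the result."
    )
    # Execution is left to the caller (you, or an LLM wrapper company).
    # If the generated code is broken from time to time, that's on you.
    return code
```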
But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their minds.
Expect more of this around GPT-5, which they promise “is so scary they can’t release it until after the elections.” My guess? It’s nothing different, but they have to create a story so that true believers will see it as something different.
The weird thing, from my perspective, is that nearly every weird, cringy, niche internet addiction I’ve ever seen or partaken in myself has produced two kinds of people: people who live through it and their perspective widens, and people who don’t.
Like, I look back at my days of spending two days at a time binge-playing World of Warcraft with a deep sense of cringe, but also a smirk, because I survived, I self-regulated, and honestly, I made a couple of lifetime friends. Like, whatever response we have to anime waifus, I hope we still recognize the humanity in being a thing that wants to be entertained or satisfied.
Watching this election has been amazing! LIKE WOAH, what a fucking obviously self-destructive end to delusion. Can I be optimistic and hope that, with EA leaning explicitly harder into the hard-right Trump position, when it collapses and Harris takes it, maybe some of them will self-reflect on what the hell they think “Effective” means anyway?
Audacious and Absurd Defender of Humanity
Your honor, I’d rather plead guilty than abide by my audacious counsel.
I’m OK with this because every time Nick Bostrom’s name is used publicly to defend anything, and I then show people what Nick Bostrom believes and writes, I robustly get a “What the fuck is this shit? And these people are associated with him? Fuck that.”
It can’t stop the usage, but it can raise the cost of doing so by bringing legal risk to operations that act in a public way. It can create precedent that can be built upon by other parties.
Politics and law move slower than, and behind, the things they attempt to regulate, by design. Which is good; the alternative is a surveillance state! But they can definitely arrange themselves to punish, or raise the risk profile of, doing something in a certain patterned way.
Yeah, that’s totally fair; I was just tailgating the sneer, I guess.
That’s a good point, and I think it speaks well to their savior complex. They want above all to push the guilt and discomfort of social issues away so they don’t have to live in the discomfort of reality. Dogma does this, and it really doesn’t matter if you have the veneer of science or the mythology.