  • like fuck, all you or I want out of these wandering AI jackasses is something vaguely resembling a technical problem statement or the faintest outline of an algorithm. normal engineering shit.

    but nah, every time they just bullshit and say shit that doesn’t mean a damn thing, as if we can’t tell. and when they get called out, every time it’s the “well you ¡haters! just don’t understand LLMs” line, as if we weren’t expecting a technical answer that just never came (because all of them are just cosplaying as technically skilled people and it fucking shows)



  • We’d be better off not trying to censor it

    this claim keeps getting brought up, and every time it doesn’t seem to mean a damn thing, particularly since no, censoring the output of an LLM doesn’t do anything to its ability to predict text. censoring its training set would, but seeing as the topic of this thread is a “fact” an LLM fabricated purely by being a dumb text predictor, there’s no real way to censor the training set to prevent this; LLMs are just shitty.

    I summarize all of that by saying AI is a useful tool

    trying to find a use case for this horseshit has broken your brain into thinking these worthless tools would have value if only they weren’t “being censored” or whatever cope you gleaned from the twitter e/accs




  • lisp machines but networked

    urbit’s even stupider than this, cause lisp machines were infamously network-reliant (MIT, symbolics, and LMI machines wouldn’t even boot properly without a particular set of delicately-configured early network services, though they had the core of their OS on local storage), so yarvin’s brain took that and went “what if all I/O was treated like a network connection”, a decision that causes endless problems of its own

    speaking of, one day soon I should release my code that sets up a proper network environment for an MIT cadr machine (which mostly relies on a PDP-10 emulator running one of the AI lab archive images) and a complete Symbolics Virtual Lisp Machine environment (which needs a fuckton of brittle old Unix services, including a particular version of an old pre-NTP time daemon (somehow crucial for booting the lisp machine) and NFSv1 (which drags in a port mapper dependency and requires utterly insecure permissions)), so there’s at least a nice way to experience firsthand some of this history that people keep stealing from. a sketch of the time daemon piece is below.
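    for context, my best guess at what the “pre-NTP time daemon” speaks: the ancient RFC 868 TIME protocol, where a client hits port 37 and gets back a raw 4-byte big-endian count of seconds since 1900-01-01. a minimal stand-in might look like this (a sketch only, assuming the daemon really is a TIME server and TCP is enough; it doesn’t replicate whatever version quirks the VLM actually depends on):

    ```python
    # minimal RFC 868 TIME server sketch -- a stand-in for the old pre-NTP
    # time daemon mentioned above, under the assumption that what the lisp
    # machine wants is the classic TIME protocol on port 37.
    import socket
    import struct
    import time

    # seconds between the TIME protocol epoch (1900-01-01) and the Unix epoch
    EPOCH_1900_OFFSET = 2_208_988_800

    def rfc868_now() -> bytes:
        """Current time as a 4-byte big-endian count of seconds since 1900."""
        return struct.pack("!I", (int(time.time()) + EPOCH_1900_OFFSET) & 0xFFFFFFFF)

    def serve(port: int = 37) -> None:
        # binding port 37 needs root; redirect to a high port for testing
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", port))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                with conn:
                    conn.sendall(rfc868_now())  # send the timestamp and hang up

    if __name__ == "__main__":
        serve()
    ```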


  • Also, I’m shockingly infuriated that the tech workers that would end up being the ones replaced the soonest are so busy licking boots rather than throwing their shoes into the machinery.

    so much of our industry is dedicated to ensuring that tech workers, most of whom consider themselves experts on complex systems, never analyze or try to influence the social systems surrounding and influencing their labor. these are the same loud voices that insist tech isn’t political, while turning important parts of our public and open source tech infrastructure into a Nazi bar.



  • I agree; LLMs and generative AI are inextricably a product of capitalism: they can’t exist without widespread theft, exploitation of labor, massive concentrations of capital, and a willingness to destroy the environment. they are the stupidest use of technology I’ve ever seen, and after cryptocurrencies the bar for stupid was pretty fucking high. that the products themselves obscure the theft and exploitation that went into training them is a feature for the corporations developing this horseshit, not a bug.

    and that’s why it’s notable that the self-described AI researchers behind these garbage products can’t even do basic shit like have the LLM not call a journalist a pedophile without resorting to an absolute hack that won’t scale. there’s no fixing LLMs; systemically, they are what they are. and now this absolute horseshit is a component of what’s unfortunately still the dominant desktop operating system.




  • Copilot then listed a string of crimes Bernklau had supposedly committed — saying that he was an abusive undertaker exploiting widows, a child abuser, an escaped criminal mental patient. [SWR, in German]

    These were stories Bernklau had written about. Copilot produced text as if he was the subject. Then Copilot returned Bernklau’s phone number and address!

    and there’s fucking nothing in place to prevent this utterly obvious failure case, other than, if you complain, Microsoft lazily regexing for your name in the result and refusing to return anything if it appears (see the sketch below)
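    for a sense of how shallow that fix is, it probably amounts to something with the shape of this (a guess for illustration; the blocklist entry and the canned refusal are made up, not Microsoft’s actual code):

    ```python
    # hypothetical reconstruction of the "lazy regex" hack: scan the finished
    # model output for a complainant's name and bail if it appears. nothing
    # about the underlying model changes; the same fabricated text is still
    # one rephrased prompt away.
    import re

    # illustrative blocklist -- in reality, whatever names legal has been
    # forced to add so far
    BLOCKED_NAMES = [r"\bbernklau\b"]

    def filter_response(text: str) -> str:
        for pattern in BLOCKED_NAMES:
            if re.search(pattern, text, re.IGNORECASE):
                return "Sorry, I can't help with that."  # canned refusal
        return text
    ```

    note that a slight misspelling of the name sails right past the filter, which is exactly why it won’t scale.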



  • Most shockingly to Urbit devotees and outsiders alike, the board welcomed back Curtis Yarvin, the project’s founder, who left in 2019.

    nothing has ever shocked me less than Yarvin “returning” to urbit

    In 2015, a technical conference rescinded his speaking invitation. The following year, another tech conference lost sponsors and was almost canceled because it allowed him to speak, over objections that this verbose, bespectacled engineer would make attendees feel somehow “unsafe.” (Perhaps some feared he would bore everyone to death by reading his posts aloud, or torture them with his poetry.)

    […]

    While the cancel culture of the 2010s and early 2020s may be subsiding, bringing Yarvin back remains a calculated risk for Urbit, William Ball, the board member, said on the developer call.

    these fuckers are still fucking seething over their adult baby godking being asked not to come to a functional programming conference because he publicly advocates for a fascist takeover of the United States, receives funding from fascists, does press interviews promoting the fascist influencer circles he hangs out in, and is the computer science equivalent of a flat earther. how will our industry ever recover from the absolutely nothing of value that was lost by disinviting him?



  • I need to look into this more, but I’ve got the sinking suspicion that the technofascists have come to a realization: ultra-obscure non-functioning systems like urbit play well if your techfash inroad is forming an influential technocratic thinktank (and there’s plenty of precedent for exactly this type of shit gaining lasting political influence), but they’re utterly worthless if your path to fascist takeover is through, say, the defense industry. urbit is useless for reliably launching or controlling a missile; urbit can’t even do normal desktop shit right, and unlike a lot of defense contracting failures, this is utterly obvious and can’t be papered over.

    NixOS is too heavy to run on a missile (it might have a place onboard a drone, maybe), but Nix can easily be (and has been) sold as a massive boon to missile firmware development, and a way to modernize a number of launch and control systems external to a missile. that’s why Nix was a good fucking get for the fascists — it’s working, unique technology none of them were smart enough to come up with, its creators are too socially immature and hateful to know what happens when they become a nazi bar, and Nix itself is still obscure and impenetrable enough (and the techfash element of the community has absolutely ensured this has gotten worse) that having a monopoly on software engineering contractors with Nix expertise and clearance can still be used as a wedge to establish an unassailable position with a high level of political control.

    Kinode can’t be used in a missile or a drone, but it’s definitely an adaptation of the non-language parts of urbit to something that wants to look like a more typical cloud deployment. I wish I could analyze what Kinode’s political inroad is, but all the docs on their terrible website 404, so I should dig in and see if they’re still active, or if their funders have decided there’s a more promising inroad elsewhere.