Who could have predicted that a first-principles, ground-up new Internet protocol based on monarchism would be a difficult sell.
*I mean, I think that’s what Urbit is. I’ve read multiple pieces describing it and I’m still not really clear.
Forgot to say: yes, AI-generated slop is one key example, but often I’m also thinking of other tasks that are presumed to be basic because humans can be trained to perform them with barely any conscious effort. Things like self-driving vehicles, production line work, call centre work etc. Like the fact that full self-driving requires supervision: often what happens with tech automation is that it creates things that de-skill the role, or perhaps speed it up, but still require humans in the middle to do things that are simple for us but difficult to replicate computationally. Humans become the glue, slotted into all the points of friction and technical inadequacy, to keep the whole process running smoothly.
Unfortunately this usually leads to downward pressure on the wages of the humans, and the expectation that they match the theoretical speed of the automation rather than recognition that the human is the actual pace-setter, because without them the pace would be 0.
Based on my avid following of the Trashfuture podcast, I can authoritatively say that the “Hoon” programming language relies primarily on Australians doing sick burns and popping tyres in their Holden Commodores.
Funnily enough that was the bit I wrote last just before hitting post on Substack. A kind of “what am I actually trying to say here?” moment. Sometimes I have to switch off the academic bit of my brain and just let myself say what I think to get to clarity. Glad it hit home.
Thanks for the link. I’m going to read that piece and have a look through the ensuing discussion.
Oh god it’s real? I saw pictures and there was a lot of “it’s AI” claims which I kind of hoped were true.
There’s definitely something to this narrowing-of-opportunities idea. To frame it in a really bare-bones way: it’s people who frame the world in simplistic terms and then assume that their framing is the complete picture (because they’re super clever, of course). Then if they try to address the problem with a “solution”, they simply address their abstraction of it and, if successful in the market, actually make the abstraction the dominant form of it. However, all the things they disregarded are either lost, or still there and undermining their solution.
It’s like taking a 3D problem, only seeing in 2D, implementing a 2D solution and then being surprised that it doesn’t seem to do what it should, or being confused by all these unexpected effects that are coming from the 3rd dimension.
Your comment about giving more grace also reminds me of work out there from legal scholars who argue that algorithmically implemented law doesn’t work, because the law itself is designed with a degree of interpretation and slack that rarely translates well to an “if x then y” model.
Oh no, the dangers of having people read your work!
It is coming, potentially in the next week. I was on leave for a couple of weeks and since back I’ve been finishing up a paper with my colleague on Neoreaction and ideological alignment between disparate groups. We should be submitting to the journal very soon so then I can get back to finishing off this series.
… Nope. In fact one of my in-laws said that they’d buy us an air fryer for Christmas once the sales came. Everyone forgot about it shortly after and I don’t care one bit.
I feel like generative AI is an indicator of a broader pattern of stagnation in innovation (shower thoughts here, I’m not bringing sources to this game).
Just a little while ago I was wondering whether there’s an argument to be made that the innovations of the post-war period were far more radically and beneficially transformative for most people. Stuff like accessible dishwashers, home tools, better home refrigeration etc. I feel like now tech is just here to make things worse. I can’t think of any upcoming or recent home tech product that I’m remotely excited about.
I’m banking on the primary use case being “getting Elon sued into oblivion by Disney” .
Fascinating to see that the politics of the old crypto hype train have carried over to the new hype train.
Ah OK, so it’s sending the email draft in progress, not sending off the content of incoming messages or your final sent messages. Now I understand. Also, that’s still bad…
I don’t really understand how it’s possible to both not store data in plaintext and also be able to siphon off some of it in plaintext. Is this technically possible in the way they suggest? “We shoot off the plaintext before it gets to our storage servers”?
Because at some point that means the communication is not encrypted, right? But if you’re using HTTPS and all the normal security standards, that should never be the case from the moment it departs your terminal?
I have a small amount of knowledge about this but it’s the dangerously small type so any illumination would be appreciated.
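To sketch out my dangerously-small understanding of how the claim could be technically coherent: TLS/HTTPS only protects data *in transit*, and the server terminates TLS before any at-rest encryption happens, so there’s a window where the plaintext sits in server memory. All the names below are hypothetical, and the XOR “cipher” is a toy stand-in for real at-rest encryption like AES-GCM — treat this as an assumption about how such a service might work, not a description of any real one.

```python
# Toy sketch: "we don't store plaintext" can coexist with siphoning
# plaintext off, because application code runs *after* TLS termination
# and *before* at-rest encryption.

siphoned = []  # hypothetical stand-in for an analytics / AI pipeline
storage = []   # hypothetical stand-in for the "encrypted at rest" database

KEY = 0x5A  # toy key; real at-rest encryption would use e.g. AES-GCM

def encrypt_at_rest(data: bytes) -> bytes:
    """Toy XOR cipher standing in for real at-rest encryption."""
    return bytes(b ^ KEY for b in data)

def handle_request(body: bytes) -> None:
    # By the time application code runs, the web server has already
    # terminated TLS: `body` is plaintext sitting in server memory.
    siphoned.append(body)                  # plaintext copied out here...
    storage.append(encrypt_at_rest(body))  # ...before it is encrypted

handle_request(b"draft email: dear boss, I quit")
```

If that sketch is roughly right, then the honest-sounding claim only covers the storage step, which would presumably be the trick.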
Dear CHATGPT how do we get our company more money?
“That’s a great question. To get more money simply bring up the console and enter ‘rosebud!;! ;! ;! ;!’”
If this includes their journals then I guess my stuff is off to the big LLM melting pot to be regurgitated wrongly without context or attribution.
I think it’s definitely worth distinguishing between different classes of workers in Silicon Valley. It’s hard to talk about tech ideology in a fully encompassing way because there are for sure dissenting voices. I think to some degree you can say it is the intersection of tech and wealth ideologies but there’s definitely people that aren’t wealthy that also espouse similar thinking so… tricky!
I adopt the handy framing of Silicon Valley as a mindset rather than a place to help with this. There’s a great photography book called Seeing Silicon Valley by Mary Beth Meehan that is all photos and stories of the precarious workers who don’t necessarily work in tech. I keep it out in my office to remind me that Silicon Valley is not just the rich assholes.
Absolutely. In fact, in one major survey of the values of the counterculture conducted back in the 1960s, Ayn Rand was listed as one of people’s major influences. There were different strands to the counterculture: one communitarian, the other about self-actualisation and the individual. Both positioned themselves in opposition to the state, but differed significantly in what kind of future they wanted.
John Ganz gave good coverage of the ideological side of tech, particularly using Herf’s book Reactionary Modernism, which looks at the role of engineers in building Nazi ideology.
You can read Reactionary Modernism for free on the Internet Archive
Thanks for the positive feedback! I have a tendency to over explain things (so much cut text already) partly because I’m never too sure of how far down the rabbit hole I’ve gotten and if a general audience would be lost without it. Glad I was able to pull it together with a flourish though!
My most charitable interpretation of this is that he, like a lot of people, doesn’t understand AI in the slightest. He treated it like Google, asked for some of the most negative quotes from movie critics for past Coppola films and the AI hallucinated some for him.
If true, it’s a great example of why AI is actually worse for information retrieval than a basic vector-based search engine.
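To illustrate what I mean by that, here’s a toy retrieval sketch (made-up reviews, crude term-overlap scoring standing in for a real TF-IDF/cosine engine): it can only ever return text that actually exists in its index, so the worst case is an irrelevant result, never an invented quote.

```python
# Toy vector-space search: score documents against a query by term
# overlap (a crude stand-in for TF-IDF + cosine similarity). The key
# property: it retrieves, it doesn't generate.
import math
from collections import Counter

# Hypothetical corpus of movie reviews
docs = {
    "review-1": "a sprawling ambitious film that never coheres",
    "review-2": "coppola returns to form with a tender family story",
    "review-3": "the pacing drags but the performances are strong",
}

def vectorise(text: str) -> Counter:
    """Represent text as a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str) -> str:
    """Return the ID of the best-matching existing document."""
    qv = vectorise(query)
    return max(docs, key=lambda d: cosine(qv, vectorise(docs[d])))
```

A generative model, by contrast, composes new text token by token, so a plausible-sounding but non-existent quote is always on the menu.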