• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: September 3rd, 2023


  • My own speculations started with the kinds of perfectly reasonable, cynical things a perfectly reasonable, cynical board might concern themselves with before throwing the CEO to the wolves (they’re lighting money on fire much faster than money is coming in; one of the lawsuits isn’t looking good; there’s been a horrific security incident, etc.), and ended with “but of course, this is 2023 big tech, it might also be robot cult schismatics.”

    -sigh- But here we are, I guess.

    Get well soon!




  • This is the point where the anxiety patient has to make a rambling reality check.

    It’s obvious they want mass-layoff-as-a-service. They openly say so themselves. But it’s less obvious, at least at this point in time, that generative AI (at least in models like the current ones) actually can create that. I’m worried because I extrapolate from current trends - but my worries are pretty likely to be wrong. I’m too good at worrying for my own good, and at this point, mass layoffs and immiseration are still involuntary speculative fiction. In general, when transformative technologies have come along, people have worried about all the wrong things - people worried that computers would make people debilitatingly bad at math, not that computers would eventually enable surveillance capitalism.

    We’re currently in the middle of an AI bubble. There are companies that have enormous valuations despite not even having a product, and enormous amounts of resources are being poured into systems that nobody at present knows how to make money from. The legal standing of the major industry players is still unestablished, and world-leading experts disagree about what these models can realistically be expected to do and what they can’t. The hype itself is almost certainly part of a deliberate strategy: When ChatGPT landed a year ago, OpenAI had already finished training GPT-4 (which had begun long before). When they released that, it looked like they leapt from GPT-3 to GPT-4 in a few months. The image input capability that came out a few months ago was in the original GPT-4 model (according to their publication at the time); they just disabled it until recently. All of this has been very good at keeping the hype bubble inflated, which has had the dual effect of getting investors (and other tech companies) to pour money into the project and of making a lot of people really worried for their livelihoods. I freak out whenever I see a flashy demo showing that an LLM can solve some problem that no developer actually needs to use their brain for solving, because freaking out is unfortunately what comes naturally to me when the stakes are high.

    I don’t think this is like the crypto bubble. Unlike crypto, people are using LLMs and diffusion models to produce things, ranging from sometimes-useful code and “good enough” illustrations for websites, to spam, homework assignments and cover letters, to nonconsensual deepfake porn and phishing. We now have an infinite bullshit machine, and lots of what people do at work involves producing and managing bullshit. But it’s not all bullshit. A couple of months ago, the “jagged frontier” paper gave some examples of tasks for management consultants, with and without LLM assistance. Unsurprisingly, writing fluffy and eloquent memos was much more productive with an LLM in tow, but on complex analytical tasks some of the consultants actually became less productive than the control group. In my own attempts to use them in programming, my tentative conclusion is that at the moment they help to some extent when the stumbling block is about knowledge, but not really much when it’s about reasoning or skill. And more crucially, it seems that an LLM without a human holding its hand isn’t very good at programming (see the abysmal issue resolution rate for GitHub issues in the SWE-Bench paper). At the moment, they’re code generators rather than automatic programmers, and no programmer I know works as a code generator. Crucially, not a single one of them (who doesn’t also struggle with anxiety) worries about losing their jobs to LLMs - especially the ones who regularly use them.

    A while ago, I read a blog post by Laurence Tratt, in which he mentions that he gets lots of productivity out of LLMs when he needs a quick piece of Javascript for some web work (something he doesn’t work with daily), but very little for his day job in programming language implementation. This, it seems to me, likely isn’t because programming language implementation is harder than web dev or because there’s not enough programming language implementation literature in the training set (there’s a lot of it, judging by how much PLT trivia even small models can spit out) - it’s because someone like him has high ambitions when working with programming language implementation, and he knows so much about it that the things he doesn’t know are things the LLM also doesn’t know.

    I don’t know if my worries are reasonable. I’m the sort of person who often worries unreasonably, and I’ve never felt as uncertain about the future of my field as I do at the moment. The one thing I’m absolutely sure of is that there’s no future in which I write code for the US military, though.



  • I live in a North European country. My salary is considerably lower than what American devs make, but I live a comfortably middle-class life. My expenses are also lower: There are a lot of things I don’t need to worry about (eg. healthcare), and I live in a walkable city area where pretty much everything I need day-to-day is at most a 15-minute walk away - except my workplace, which is a comfortable 20-minute bus trip. I don’t have a car and never really miss having one. The great thing about this is that I could take a substantial pay cut and still live reasonably comfortably - I wouldn’t lose healthcare, and if I had any kids I wouldn’t have to worry about their education. But this all works because my country spent the last few decades using social policy to establish a comparatively large “knowledge sector” - which is exactly the thing that AI companies want to move fast and break. If all the middle-class jobs implode and all the major domestic businesses are destroyed by Silicon Valley tech giants, then we’re screwed. The ubiquitous answer of “UBI” won’t work when the economy is being drained from the outside - who’s going to pay for the UBI then?

    I don’t know how quickly contemporary AI will be able to replace people to a significant extent (or whether more fundamental breakthroughs are necessary before that sort of labour disruption happens; historically, there have been a lot of times when it turned out we had all dramatically underestimated the real world). There are a lot of impressive demos, and they do allow rapid code generation (for some well-established tasks) and rapid generation of eloquent natural language - but they suffer from the “last 10%” problem, and the last 10% is already what most professionals seem to be spending all their time on. In programming, this is pretty much by definition: If Copilot is doing it for you, you’re not spending significant time on it - and even the most avid LLM users I know aren’t anywhere near running out of work to do.

    We live in a globalized economy, and that means that some whims of tech oligarchs will fly everywhere, and some won’t. Uber never took off here - their excuse that their drivers were “independent contractors” and not employees didn’t fly, and they ended up legally being considered a taxi company rather than an app company, meaning that all the labour protections that apply to taxi drivers would also apply to Uber drivers. That made the whole thing untenable, so they gave up. But that’s different, because Uber can’t teleport a car here from the US. If a Silicon Valley company actually becomes able to deliver fully autonomous digital knowledge worker replacements (which is what they want to do, but not where they currently are), there’s nothing stopping executives here from using them just like executives in Silicon Valley.



  • I’m afraid this is going to be a bit of a rambling answer.

    Some context: Many devices in the industrial embedded sector have extreme environmental requirements. Some of them have to keep functioning if they’re being blasted with a snowstorm or if they’re right next to horrible exhaust heat. The processors that can handle that sort of abuse are often a lot less powerful than desktop or even mobile consumer processors, and storage is terribly expensive. At the same time, a lot of the software that developers and users reasonably expect to be present has grown awfully large and resource-hungry. A system crash can be very, very unpleasant - and as every dev knows, more code means more potential for bugs.

    What all of this means, taken together, is that we’re all very, very happy when we manage to come up with something that contributes a large negative number of lines of code to the platform. If we figure out something that allows us to make a lot of other code redundant so we can throw it away, everyone is happy. This is the opposite of what tools that enable very rapid generation of repetitive code help with - we spend more time trying to come up with smart ways to avoid ending up with more code. Don’t get me wrong - we use generated code for a lot of tasks where that makes sense, but the part we seem to be spending all our time on at my job, LLMs don’t help very much at present.

    The cheery part: I’ve mentioned elsewhere that one of the problems mentioned in the article wasn’t “tricky”, but rather it was just tedious. These sorts of tasks don’t really require deep reasoning or creativity - they just require a lot of trivia, and they’re things that have been done a billion times already, but the languages in common use don’t necessarily have mechanisms that make them easily abstractable. There’s probably a lot of software that doesn’t get written simply because people can’t be arsed to do the boring part. 90% of that currently-unwritten software is going to be crap because 90% of everything is crap, but if LLMs help get that last 10% off the floor, then that’s great.

    Historically, whenever software has gotten significantly easier and cheaper to make, we’ve ended up discovering that there’s a lot more things we can do with software we hadn’t done before because it’d be too expensive or bothersome, and this has usually meant that demand for software has gone up. A current-day web dev can whip something up in a couple of days that would have been a major team undertaking in 2010, and completely technically infeasible in 1998. If you showed a modern web framework to a late-1990s web developer, they’d see a tool that had automated their entire job away - but there’s a lot more web developers today than there were in 1998.

    The dark part: We’re discussing a “programmers are over” article. There have been a lot of them in the media in the last year, and while I don’t think that’s an accurate description of the world I actually see around me, this is not at all a fun time to have an anxiety disorder. I’ve spent most of my life filing away the more obviously neurodivergent bits of my personality, and I worked as a teacher for a while - but I am what I am, and “soft skills” will never be my strength.

    There’s not a billion-dollar industry in “better autocomplete”, but there would be one in “mass layoff as a service”, and that’s what many of the big players in the field are pumping enormous amounts of money into trying to achieve.




  • I mean, we’ve been commoditizing our own skills for the entire duration of our profession: Libraries, higher-level languages, open-source. This is the nature of programming, really; we’d be bad at our jobs if we didn’t do that. Today’s afternoon hack would have taken an entire team several months of work a few decades ago, and many of the projects teams start today were unthinkable a few decades ago. This isn’t because we’re a ton better, it’s because a lot of the tough work has already been done.

    Historically, every major increase in programmer productivity has led to demand for software rising faster than the even-more-productive programmers could keep up with, though.


  • I’m not personally concerned that any currently-existing ML system can take my job. The state-of-the-art ones can barely help me in my job, let alone do it for me. The things I’ve found them to be good at are things I spend very little time at work actually doing.

    But they’re vaguely shaped like something that can take our jobs, and I don’t know if they’ll turn into that. So I worry - in part also for the purely personal reason that I’m a disabled, middle-aged guy who’s seen better days; a hypothetical future labour market that has no need for programmer-shaped brains anymore is one that people like me would probably do very poorly in.


  • Well, some analyses are decidable, anyway. ;-)

    But you’re right, of course. The only real data poisoning you could do with code is sharing deliberately bad code … but then you’re also not sharing useful open source code with your fellow humans; you’re just spamming.

    At any rate, I’m not sure that future major gains in LLM coding ability are going to come from simply shoving more code in. The ones we have today have already ingested a substantial chunk of all the open-source code that exists on the public web, and (as the SWE-Bench example I’ve shared elsewhere shows), they still struggle if they aren’t substantially guided by a human.


  • It’s not tricky at all, but it is tedious. It’s tedious precisely because it isn’t tricky. There’s little essential complexity in the task (so it isn’t fun to solve unless you’re a beginner), but it’s buried in a lot of incidental complexity.

    The thing I’ve personally gotten most actual real-world utility out of LLMs for is … writing VimL scripts, believe it or not. VimL is a language that’s almost entirely made out of incidental complexity, and the main source of friction (at least to me) is that while I use Vim all the time, I rarely write new VimL scripts, so I forget (repress?) all the VimL trivia that aren’t just simple compositions of my day-to-day commands. This is exactly what you’d expect LLMs to be good at: The stakes are low, the result is easy to test, the scripts I need are slight variations over various boring things we’ve already done a ton of times, and writing them requires zero reasoning ability, just a large pile of trivia. I’d prefer it if Vim had a nicer scripting language, but here we are.

    They still screw it up, of course, but given that I never want a VimL script to be very large anyway, that’s easy to fix.


  • I’m not going to claim to be an LLM expert; I’ve used them a bit to try to figure out which of my tasks they can and can’t help with. I don’t like them, so I don’t usually use them recreationally.

    I’ll put my stakes on the table too. I’ve been programming for very close to my entire life; my mum taught me to code on a Commodore 64 when I was a tiny kid. Now I’m middle-aged, and I’ve spent my entire professional life either making software or teaching software development and/or software-adjacent areas (maths, security, etc.). I’ve always preferred to call myself a “programmer” rather than a “software engineer” or the like - I do have a degree, but I’ve always considered myself a programmer first, and a teacher/researcher/whatever second.

    I think the point made in the article we’re talking about is both too soon and too late. It’s too soon because - for all my worries about what LLMs and other AI might eventually be, at the current moment they’re definitely not AutoDeveloper 3000. I’ve mentioned my personal experiences. Here is a benchmark of LLM performance on actual, real-world GitHub issues - they don’t do very well on those at all, at least for the time being. All professional programmers I personally know still program, and when they do use LLMs, they use them to generate example code rather than to write their production code for them, basically like Stack Overflow, except one you can trust even less than actual Stack Overflow. None of them use the generated code directly - just as you wouldn’t with Stack Overflow. At the moment, they’re tools only; they don’t do well autonomously.

    But the article is also too late, because the kind of programming I got hooked on and that became a lifelong passion isn’t really what professional development is like anymore, and hasn’t been for a long time, long before LLMs. I spend much more time maintaining crusty old code than writing novel, neat, greenfield code - and the kind of detective work that goes into maintaining a large codebase is often one that LLMs are of little use in. Sure, they can explain code - but I don’t need a tool to explain what code does (I can read), I need to know why the code is there. The answer to this question is rarely directly related to anything else in the code; it’s often due to a real-world consideration, an organizational factor, a weird interaction with hardware, or a workaround for an odd quirk of some other piece of software. I don’t spend my time coming up with elegant, neat algorithms and doing all the cool shit I dreamt of as a kid and learnt about at university - I spend most of my time doing code detective work, fighting idiosyncratic build systems, and dealing with all the infuriating edge cases the real world seems to have an infinite supply of (and that ML-based tools tend to struggle with). Also, I go to lots of meetings - many of which aren’t just the dumb corporate rituals we all love to hate, but a bunch of professionals getting together to discuss the best way to solve a problem none of us know exactly how to solve. The kind of programming I fell in love with isn’t something anyone would pay a professional to do anymore, and hasn’t been for a very long time.

    I haven’t been in web dev for over a decade. Most active web devs I know say that the impressive demos of GPT-4 making an HTML page from a napkin sketch would have been career-ending 15 years ago, but don’t even resemble what they spend all their time doing at work now: They tear their hair out over infuriating edge cases, they try to figure out why other people wrote specific bits of code, they fight uncooperative tooling and frameworks, they try to decipher vague and contradictory requirements, and they maintain large and complex applications written in an uncooperative language.

    The biggest direct influence LLMs have so far had on me is to completely destroy my enthusiasm for publishing my own (non-professional) code or articles about code on the web.


  • The power loom analogy works very well, actually. Their spot in history is, in part, because of who got to write the history books.

    The inventors and entrepreneurs who developed them spent lots of time spying on weavers - who understandably weren’t cooperative, when they found out what the machines were intended to do. The quality of their products was so shoddy that the weavers’ first attempt at a legal challenge actually tried to have them defined as fraudulent, because they figured the poor-quality fabric would ruin the reputation of the English textile industry. In the early days, they actually did require frequent fix-up jobs.

    Not all of the entrepreneurs who built factories were monstrous assholes; some of them were quite considerate people who paid professional weavers a decent wage to work for them (these weavers still often hated their new working conditions). Some did this out of legitimate concern for their communities (it was a smaller world, and many of them personally knew the very people whose jobs they were degrading), and some did so because they were afraid that Luddites would break into their factories and destroy all the expensive machines. Most of them were put out of business; they were easy to undercut by owners who instead used indentured children taken from orphanages.

    They did drive the price of clothing down, but unfortunately that didn’t directly translate to all-around increased economic prosperity immediately: Aside from all the weavers being put out of business, entire communities suffered economic collapse because they were built around those weavers’ income.

    You’re right that programmers often have little class consciousness. I’m a union member myself (and so are most of my programmer friends and colleagues) - but unfortunately, I’m not sure how much some unions in a tiny country can do against the economic might of Silicon Valley.


  • This response is going to be rambling.

    For the example problem: If the dictionary file comfortably fits in memory and this was just a one-off hack, I probably wouldn’t even have to think about the solution; it’s a bash one-liner (or a couple of lines of Python) and I can almost certainly write it faster than I could prompt an LLM for it. If I’m reading the file on a Raspberry Pi or the file is enormous, I’d use one of the reservoir sampling algorithms. If performance isn’t all that important I’d just do the naive one (which I could probably hack up in a couple of minutes); if I needed an optimal one I’d have to look at some of my old code (or search the internet). An LLM could probably do the optimal version faster than I could (if prompted specifically to do so) … but obviously I’d have to check if it got it right, anyway, so I’m not sure where the final time would land.
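    For reference, the “naive one” here presumably means the classic single-pass reservoir sampler (Vitter’s Algorithm R); a minimal sketch in Python (the function name and the k parameter are my own, not from the original discussion):

    ```python
    import random

    def reservoir_sample(iterable, k=1):
        """Algorithm R: pick k items uniformly at random from a stream
        of unknown length, using O(k) memory and one pass."""
        reservoir = []
        for i, item in enumerate(iterable):
            if i < k:
                # Fill the reservoir with the first k items.
                reservoir.append(item)
            else:
                # Replace a stored item with probability k / (i + 1).
                j = random.randrange(i + 1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    # Usage for the dictionary-file example: pass the file object directly,
    # so the whole file never has to fit in memory.
    # with open("/usr/share/dict/words") as f:
    #     word = reservoir_sample(f, k=1)[0].strip()
    ```

    The “optimal” variants (e.g. Vitter’s Algorithm L) avoid drawing a random number per item by computing how many items to skip, which matters for very large streams.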

    I am sure, however, that it’d be less enjoyable. And this (which I think is what the author is trying to express) is saddening. It’s neat that the hardware guy in the story could also solve a software problem, but a bit sad that he can do it without actually learning anything, just by prompting a machine built out of appropriated labour - I imagine this is what artists and illustrators feel about the image generators. It feels like skills it took a long time to build up are being devalued, and the future the AI boosters are selling - one where our role is reduced to quality controlling AI-generated barf, if there’s a role left for us at all - is a bleak one. I don’t know how well-founded this feeling actually is: In a world that has internet connections, Stack Overflow, search engines and libraries for most of the classic algorithms, the value of being able to blam out a reservoir sampling algorithm from memory was very close to zero anyway.

    It sure wasn’t that ability I got hired for: I’ve mentioned before that I’ve not had much luck trying to use LLMs for things that resemble my work. I help maintain an open-source OS for industrial embedded applications. The nice thing about open source is that whenever we need to solve some problem someone else already solved and put under an appropriate license, we can just use their solution directly without dragging anything through an LLM. But this also definitionally means that we spend pretty much all our time on problems that haven’t been solved publicly (and that LLMs haven’t seen examples of). For us, at the moment, LLMs don’t help with any of the tasks we actually could use help with. Neither does Stack Overflow.

    But the explicit purpose of generative AI is the devaluation of intellectual and creative labour, and right now, a lot of money is being spent on an attempt to make people like me redundant. Perhaps this is just my anxiety speaking, but it makes me terribly uneasy.


  • I haven’t really followed Klein for a while, but at least what he wrote in the beginning of the generative AI gold rush was closer to what one might call “social doomerism” than Yudkowskianism: Less “the AI is going to go foom and kill us all with digital brain-magic”, and more “AI is going to cause devastating social disruptions, destroy the livelihoods of millions, enable mass manipulation, and concentrate enormous power into the hands of AI owners”.

    Has he pivoted into “classic sneer territory” since then?


  • One of the reasons I dislike this technology so much is that some of the ridiculous tricks actually (sometimes, sort of) work. But they don’t work for the reasons the interface invites the user to think they do, and they don’t work reproducibly or consistently, so the line between “getting large neural networks to behave requires strange tricks” and pure cargo-cult thinking is blurred.

    I have no idea what exactly went into the training sets of Midjourney (or DALL-E), except that it’s probably safe to assume it’s a set of (image, text) pairs like the open source image generators. The easy thing to put in the text component is the caption, any accessibility alt-text the image might have, and whatever a computer vision system decides to classify the image as. When the scrapers appropriate images from artists’ forums, personal webpages and social media accounts, they could then also scrape any comments present, process them and put some of them into the text component as well. So, it’s entirely possible that 1. some of the images the generator saw during training had “masterpiece”, “great work” etc. in the text component, and 2. there is a statistically significant correlation between those words being present in the text, and the image being something people like looking at. So, when the generator is trying to pull images out of Gaussian noise, it’ll be trying to spot patterns that match “masterpiece-ness” if prompted with “masterpiece”. Clearly this doesn’t work consistently - eg. if the generator has never seen a masterpiece-tagged painting of a snake, it’s not at all obvious that its model of “masterpiece-ness” can be applied to snakes at all. Neural networks infamously tend to learn shortcuts rather than what their builders want them to learn.

    Even then, most of it still looks like the result of a mugging in the Uncanny Alley. There’s almost always something “off” about it, even when it is technically impressive. Details that make no sense, weird lighting, shadows and textures, and a feeling of “eeriness” that I’d probably have the vocabulary to describe if I were a visual artist.

    (PS: Does the idea of using well-intentioned accessibility features and kind words to artists to create a machine intended to destroy their livelihood make you feel a bit iffy? Congratulations, you are probably not a sociopath.)