I think you are vastly overestimating the uniqueness of most of what we do, and that alone is probably an adequate rebuttal, but for the sake of gratuitous verbosity, let’s say it weren’t: the hypothetical ‘thing’ to which you refer will almost always be made of many pieces that have been made a million times before. And just as we can break a problem down to solve it and effectively produce something we consider novel, so too can an LLM - especially with a bit of expert guidance.
If a conventional expert can delegate pieces of their puzzle to ‘an LLM’ and achieve near-instantaneous results comparable in quality to what they might hope to eventually get from a team of motivated but less experienced folks, why wouldn’t they - and how does this not portend the obsolescence of human expertise as we know it? If that seems absurd, consider how expertise is gained: by doing exactly the kind of junior-level work that is now being delegated to the machine.
More directly, but not aimed at you: I am confident that anyone who shares your sentiment either has not spent any meaningful time working with GPT-4, or lacks the competencies necessary to meaningfully assess and recognize the unmistakable glints of something new; of a higher level of comprehension and ability.
It worries me, seeing so many intelligent people so wilfully unprepared for what is coming, oblivious to the fact that what we are arguing about today is already irrelevant. Things have been changing at a literal-fuck-you rate, technologies are converging, and progress is accelerating in ways that are necessarily incomprehensible even to the folks ostensibly driving it.
We should already be way past this point, collectively. It isn’t going to take more than a couple of quick iterations to leave the naysayers in the same pool as DT supporters in terms of the sheer force of cognitive dissonance required to get through a day.
It is ok that we aren’t special, but failing to come to terms with this reality … probably won’t bode well for us.
I have messed around with generative AI, and that is what led me to the conclusion that it’s just derivative replication of things humans have already done. Trying to direct the AI to create specific visions or wholly original things feels like trying to herd cats; it’s just not very good at it.
While there are obvious applications for AI even if it is only useful for replicating things, the whole thing is starting to feel like smoke and mirrors in terms of how much AI is actually capable of. And they just keep saying “think of how good it will be in the future,” which makes it seem even more like the next crypto/NFT bubble - especially when AI companies are burning through money so fast that they’re bound to try to get industries dependent on their tech before squeezing and enshittifying them for all they’re worth.
The vast majority of things humans do (and receive monetary compensation for) are things humans have already done; the result of countless generations of failure-driven iteration.
If you’re interested in this you might enjoy exploring the ideas around consciousness as an emergent property, and the work of Douglas Hofstadter.
…and try GPT-4 before you write it off.
For my job, I use Copilot, which is built on GPT-4, and I have zero concern that it’s going to replace me.
It’s very useful, don’t get me wrong. It makes generating new code in applications that already exist a breeze a lot of the time (minus hallucinations and other mistakes, of course). But it simply can’t create whole new applications of any complexity from scratch, and it requires actual developers to check the code it does create. It doesn’t actually know what you want; it’s just auto-completing based on what its model predicts you want.
Again, it’s very good at that. But it’s not so good that you can replace a team of developers with just one… Or worse yet, with an MBA who thinks he can figure it out without paying anyone.
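To make the “needs a reviewer” point concrete, here is a contrived Python sketch - hypothetical code I wrote for illustration, not actual Copilot output - of the kind of plausible-looking completion that runs without error and still hides a bug only a reviewing developer would catch:

```python
# A contrived illustration (hypothetical, not real Copilot output) of why
# generated code still needs human review: both functions run without error,
# but only one is correct.

def median_autocompleted(values: list[float]) -> float:
    """What an autocomplete might plausibly produce: reads fine, runs fine."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    # Subtle bug: for even-length input this returns the upper-middle element
    # alone instead of averaging the two middle elements.
    return ordered[mid]

def median_reviewed(values: list[float]) -> float:
    """The version a reviewing developer would insist on."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

if __name__ == "__main__":
    sample = [1.0, 2.0, 3.0, 4.0]
    print(median_autocompleted(sample))  # 3.0 - plausible, but wrong
    print(median_reviewed(sample))       # 2.5 - correct
```

Nothing in the first function crashes, the types check, and a quick skim reads as correct - which is exactly why “it wrote the code” and “the code is right” are two different claims.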
I call it the “magical meat fallacy”.