This post from @davidcrawshaw hits pretty close to home. My internal repos have exploded almost 800% since GPT-3.5, for the same reason: it's much easier to test hypotheses, build new applications, and run with ideas now. It's not that LLMs are better than humans at code - they're just built different in a very, VERY useful way.
If you try to exploit the differences, you'll have a much better time.
LLMs have no long-term memory - humans do, and it's very hard to get a human to look at a problem with genuinely fresh eyes.
LLMs have broader knowledge than any one human, and not every human with the specific in-depth knowledge you need is accessible at any given time.
LLMs have no problem doing repeated work. They are a practically renewable resource (like solar), unlike humans at the same level.
Modern software dev is designed for humans (incremental updates on increasingly large codebases). Rearchitecting this for LLMs means more tests on smaller packages - minirepos - that can be worked on independently. It means a lot more throwaway versions before you get to the final product. For example, I'll take an idea, build multiple small TypeScript scripts to test viability, add tests, make a quick CLI to test, launch it and give it to some friends, turn it into a GUI on @vercel for more testing, and rewrite the tests, all before I start extracting the core logic to *completely rewrite* the whole thing for the actual intended purpose.
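To make the minirepo idea concrete, here's a minimal sketch of what one of those throwaway TypeScript scripts might look like: one file that holds the hypothesis, a test, and a quick CLI. The `slugify` function is a stand-in example of my own, not something from the post - the point is the shape (small, disposable, independently testable), not the specific logic.

```typescript
// slugify.ts - a hypothetical one-file minirepo: one idea, tested, with a CLI.
// The "hypothesis" here is just a URL-slug routine; swap in whatever you're testing.
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into a dash
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}

// Quick CLI for poking at it by hand, e.g.: npx tsx slugify.ts "Some Title!"
if (process.argv[2]) {
  console.log(slugify(process.argv[2]));
}
```

When the script proves the idea out, it graduates: the function gets extracted, the file gets thrown away.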
I'll also use multiple LLMs and cross-post the outputs between them to get fewer holes in an analysis (e.g. youtube.com/watch?v=p948WOthRyg)
x.com/davidcrawshaw/status/1876407248500793710