AI Coding is useful but in its infancy, giving superpowers to non-developers but only marginally enhancing experienced developers.
Adding useful abstractions has always been the trend: time spent writing implementation details has steadily declined thanks to higher-level languages, compilers, and other layers of abstraction.
Because of LLMs, the developer role is expanding toward using natural language to create software: from writing code to defining behavior in ‘English’ (specs), then verifying and refining what the ‘compiler’ produces.
From a business perspective it also makes sense: specs are widely understandable and enable broader participation in software. On paper, that's a plus.
With enough tenacity, a non-coder can write a document listing everything they want their software to do. But English is messy and, by nature, ambiguous and subject to interpretation. Not always the best abstraction for defining behavior.
tl;dr: you think AI coding is good because compilers, languages, and libraries are bad.
I think the best path forward lies in combining the strengths of traditional coding with the accessibility of specs written in English.
The next major software tool will smooth the transition between natural language and code. Just as LSPs highlight errors on the fly, a spec-to-code compiler could flag which sentences are ambiguous or which edits conflict with existing logic. It could trace sentences directly to the code they generate and show how changing the text would ripple through the implementation.
The mapping could work both ways: editing the code could suggest updates to the English spec, keeping the two always in sync. That kind of feedback loop would make natural-language programming far more robust and usable.
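To make the idea concrete, here is a minimal sketch of such a bidirectional trace map in Python. Everything in it is hypothetical: the class names, the vague-word lint, and the sentence-to-symbol links are illustrations of the concept, not an existing tool's API. A real spec-to-code compiler would do semantic analysis, not keyword matching.

```python
# Hypothetical sketch: bidirectional spec <-> code traceability
# plus a naive ambiguity lint for spec sentences.
from dataclasses import dataclass, field

# Toy stand-in for real ambiguity detection.
VAGUE_TERMS = {"fast", "user-friendly", "appropriate", "etc", "reasonable"}

@dataclass
class SpecSentence:
    sid: int
    text: str

    def ambiguity_flags(self) -> list:
        # Flag vague words a real tool would analyze semantically.
        words = {w.strip(".,").lower() for w in self.text.split()}
        return sorted(words & VAGUE_TERMS)

@dataclass
class TraceMap:
    # sentence id -> code symbols it generated, and the reverse index.
    spec_to_code: dict = field(default_factory=dict)
    code_to_spec: dict = field(default_factory=dict)

    def link(self, sid: int, symbol: str) -> None:
        self.spec_to_code.setdefault(sid, set()).add(symbol)
        self.code_to_spec.setdefault(symbol, set()).add(sid)

    def impact_of_edit(self, sid: int) -> set:
        # Which code would a change to this sentence ripple into?
        return self.spec_to_code.get(sid, set())

    def sentences_behind(self, symbol: str) -> set:
        # Editing this code suggests revisiting these spec sentences.
        return self.code_to_spec.get(symbol, set())

s1 = SpecSentence(1, "Reject passwords shorter than 12 characters.")
s2 = SpecSentence(2, "The login page should feel fast and user-friendly.")

tm = TraceMap()
tm.link(1, "validate_password")
tm.link(2, "LoginPage.render")

print(s2.ambiguity_flags())                  # vague terms to flag
print(tm.impact_of_edit(1))                  # code behind sentence 1
print(tm.sentences_behind("LoginPage.render"))
```

The two dictionaries are the whole trick: one index answers "what code does this sentence own?" and the other answers "which sentences does this code implement?", which is exactly the sync an LSP-style feedback loop would need.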
There are many difficulties and open questions in making something like this useful in practice. I'm sure someone is already working on it. If not, maybe I should?