AI, Coding, and S-Curves
Balancing the benefits and drawbacks of AI tools while staying aware of the likely trajectory of AI's innovation life cycle.
I've been using GitHub Codespaces a lot lately. It's a clean Linux environment without any preinstalled problems or OS weirdness. I don't have Copilot or anything in there. Feels much better and more engaging.
I really love Copilot's autocomplete because it speeds up typing, but without it I find myself thinking things through more carefully and perhaps typing less, writing more concisely and deliberately.
Copilot and other AI tools definitely help with context switching between unfamiliar libraries, languages, etc., and can put something "on paper," helping with programming writer's block, but I think it converts your brain from problem-solving to code review. Rather than spending the time to think through how you want to do something, you instantly see a suggestion that you have to read, understand, and choose to accept. With AI, you're not coding; you're guiding an enthusiastic junior coder and doing code reviews in a one-sided pair programming session.
I'm not entirely sure how I feel about that. If AI continues to accelerate at the same rate, we'll likely see AI that can solve problems and coach itself, not needing humans for 99% of jobs (white-, blue-, and no-collar alike), but we're not there yet. There's an idea in technology strategy that the life cycle of an innovation follows an S-curve. The growth rate varies between applications, but the curve is almost always present. We can represent it with a sigmoid function.
You've probably inferred this already, but the key phases in capability development are:
- Initial slow growth: Basic research and foundation building
- Rapid acceleration: Breakthroughs and compounding improvements
- Plateau: Approaching theoretical or practical limits
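The three phases above can be sketched with a logistic (sigmoid) function. This is a minimal illustration, not a model of any real AI benchmark; the parameter names `L`, `k`, and `t0` are my own labels for the standard logistic parameters:

```python
import math

def sigmoid(t, L=1.0, k=1.0, t0=0.0):
    """Logistic function: capability level at time t.
    L  = the plateau (theoretical or practical limit)
    k  = growth rate
    t0 = midpoint, where growth is fastest
    """
    return L / (1.0 + math.exp(-k * (t - t0)))

# Growth per unit of time at three points along the curve:
early  = sigmoid(-5) - sigmoid(-6)    # initial slow growth
middle = sigmoid(0.5) - sigmoid(-0.5) # rapid acceleration
late   = sigmoid(6) - sigmoid(5)      # plateau

print(f"early: {early:.4f}, middle: {middle:.4f}, late: {late:.4f}")
```

The same unit of time yields tiny gains at the start, large gains in the middle, and tiny gains again near the plateau, which is exactly the three-phase shape described above.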
Read *Crossing the Chasm* for more on this from a marketing viewpoint.
This pattern is so common that the question is not whether a sigmoid function can represent AI's life cycle but where on the curve we are and what the growth rate is.
None of this is to say that AI progress will eventually stop for good. Breakthroughs, either in the technology itself or in adjacent limiting technologies, can restart the curve, but the pattern is something to keep in mind as we weigh hyperbolic doomer statements.
We've seen huge progress in the AI space over the past several years, and it would be reasonable to expect significant progress to continue, at least in the short term. I wonder whether we're early on our climb up the curve or simply watching the diffusion of innovations, where broader adoption, rather than actual exponential capability growth, is driving the perceived advancement. The answer will be obvious in hindsight but is presently unclear.
What is the takeaway? Use whatever tools produce the highest-quality output for the situation at hand. Being an anti-AI Luddite is foolish, but so is getting brain rot from doom-scrolling and pressing tab all day.