Since ChatGPT entered my life in December 2022, I've been obsessed with two thoughts:
- What does the imminent[^1] arrival of AI that can code mean for me as a tech PM (i.e. one who leads deliveries with developer teams)?
- Will AI that can code bring on a second coming of value-driven Rapid Application Development (RAD)? Or something else entirely?
This post explores the latter thought: can AI (LLMs that can code) bring back delivery cycles that are truly 'rapid'? If so, what does that enable?
As usual when I start writing about a topic I've been dwelling on for a long time, I like to start with questions and hypotheses. I find questions more interesting than answers, and the AI/LLM tech I'm thinking about here has been developing awkwardly enough that any conclusions have to come bundled with sweeping caveats and healthy skepticism. That makes this a difficult topic to write about.
In upcoming posts, I want to share how I've been experimenting with my thoughts around coding with AI/LLMs,[^2] at home and (as much as I'm allowed to share) at work:
- My own experiences with AI-driven RAD, successful and unsuccessful.
- Practices for writing high-quality software with AI.
- Where I imagine potential success with AI-driven development today, and where I see it failing.
- What needs to happen for AI-driven development to be achievable (and desirable) in practice.
Note 1: my domain is business software, that is, software made in and for mid-to-large enterprises, private and public. If something I write seems off, it might be that we are not thinking about the same context. I will point this out where I think it needs to be stated explicitly.
Note 2: This is a huge topic, and this post is me only beginning to write down my thoughts on it. Any paragraph here could probably be a chapter in a book. For my own sanity in particular I am allowing myself to fudge and oversimplify as needed.
My RADical fantasy
In my fantasy, (business) software is not made in long, obscenely expensive cycles of requirements gathering followed by arduous development, in which the actual users are rarely involved until the software hits their devices.
I dream of - and do everything in my power to enable[^3] - situations where software is made in tight cycles of close collaboration with its users, leveraging continuous deployment so that working software quickly gets into the hands of the users for feedback and further incremental improvement. In practice, I find that our current set of technologies hinders as much as it helps us to reach and stay at this ideal state.
For me, Rapid Application Development (RAD), reinforced with good Agile/Lean and design-thinking practice, is about using tools that collapse the time it takes to deliver working software, to create as-rapid-as-possible cycles of value creation and learning. When we learn that a more complex solution is needed than RAD tooling supports, we go and build it using different tools.
Especially in the context of business software, it seems radical to imagine that valuable products could ever be made quickly, cheaply, collaboratively, and without sacrificing quality or customization. Working so much with consumer AI tooling over the past two years has convinced me that an era of (positive) disruption is nigh, and we can already take advantage of the tools available today, rough-around-the-edges as they still are.
RAD had its heyday in the 90s, with millions of developers using toolkits like Visual Basic to produce working software that anyone could quickly slap together and put in the hands of its users.
Visual Basic provided the developer with a graphical environment, where they could drag-and-drop controls into a frame and then customize the controls with attributes and event-based actions in simple VB code. The original conception[^4] was as a 'shell creation kit' in Windows, for business users (or, analysts + users) to build their desired applications themselves, graphically.
I never got to use VB, but I have used its spiritual successor WinForms, and have still never experienced a toolkit that let me build software so productively.[^5]
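For readers who never saw this style, here is a minimal sketch of the control-plus-event-handler pattern that VB and WinForms made so productive, written in Python's built-in tkinter since I can't reasonably embed VB here. The form, the control names, and the handler are all illustrative stand-ins - and of course VB let you lay this out graphically rather than in code.

```python
import tkinter as tk

# Illustrative stand-in for a VB/WinForms form: place a few controls,
# wire a button to an event handler, and you have a working app.
root = tk.Tk()
root.title("Order Lookup")

tk.Label(root, text="Order ID:").grid(row=0, column=0, padx=8, pady=8)

order_id = tk.Entry(root)
order_id.grid(row=0, column=1, padx=8, pady=8)

result = tk.Label(root, text="")
result.grid(row=1, column=0, columnspan=3, padx=8, pady=8)

def on_lookup():
    # A real app would query a data source here; we just echo the input.
    result.config(text=f"Looking up order {order_id.get()}...")

tk.Button(root, text="Look up", command=on_lookup).grid(row=0, column=2, padx=8, pady=8)

root.mainloop()
```

The point isn't the toolkit; it's that the distance from intent to running software is a few dozen lines and zero plumbing.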
Admission: I have a shallow grounding in the theory & processes of 'classical' RAD. I didn't get to see it applied in the 90s, and I expect that in many, many cases it wasn't done in a way I, an XP/Lean/Agile punk, would find palatable. I am allergic to complicated processes, and for this reason am purposely choosing to couch my understanding of RAD in a simplified, modern context where I think it can shine when used with the AI technologies on the market today.
My conception of the RAD cycle falls roughly where prototyping does in the design-thinking paradigm: once a problem space is explored and solutions need validating, we can whip up a prototype and get it in the hands of actual users for testing. Again, I am conceiving of RAD as collapsing the space between high-fidelity prototype and working software. If we can demonstrate value creation and adoption potential in working software, why shouldn't we? What are we afraid of? My gut tells me that the reason we don't expect this today is that our technologies don't allow us to conceive of it.[^6] What if it were now possible to rapidly prototype working software with high-quality UI/UX?
The new software productivity paradox
Since the 2000s, the rise of Web 2.0 and the mobile phone has reduced the software my teams & I produce to two deployment targets: web apps and mobile apps.[^7] This has happened for good reasons, but there is no magic shortcut to producing valuable software in these paradigms, and recent advancements in frameworks and tooling seem to have resulted in ever more complexity and friction, not less. In my own experience[^8] over the past ~15 years, there has been a significant increase in the boilerplate and plumbing needed to get anything usable up and running in production,[^9] to say nothing of the level of specialization needed to actually build, deploy, and maintain these increasingly complex applications on increasingly complex infrastructure - and good luck if integrations to enterprise data sources / ERP are required!
An obvious downside to RAD is maintainability & scalability: proper system design & architecture usually take a back seat to raw productivity. Taking off my rose-tinted glasses, I'm not naive enough to believe it's a silver bullet for all the problems I see in my line of work (more on those below). The supposed upside to modern development practices is that they support maintainable and scalable software - now wouldn't that be nice, if it were true in reality?
The main problems I see with the technologies we use to make business web & mobile apps today are:
- It costs too much damn money to get something useful built and deployed.
- Long development & ramp-up times mean users wait too damn long to see working software that meets their needs, and in the waiting time often fall out of the design loop, if they were ever in it in the first place.
- It is too damn hard to write high-quality software, and often quality gets down-prioritized. Once this tradeoff is made, quality is effectively non-recoverable.
- Development lifecycles that are too damn long and too damn expensive create pressure to build software that is too damn big and too damn complex.
I think AI tooling will increasingly be leveraged to break down each of these pain points, either as a direct result of its use, or as an indirect consequence of where AI produces the best results.
My dream is this: a small team of AI-empowered development & design professionals, using AI to quickly generate the tedious parts of software development and narrow in on the tasks that truly matter: creative collaboration with users and rapid, incremental delivery of high-quality software.
Software written this way can offer us the best of both worlds: delivering rapidly with modern frameworks that provide high testability, scalability, and maintainability, without the monumental effort & discipline required to achieve all of that through painstaking manual work alone.
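As a toy illustration of the division of labor I mean, here is a minimal sketch using the official openai Python package; the model name, prompt, and spec are my own assumptions for illustration, not a prescription of tooling or process.

```python
from openai import OpenAI

# A minimal sketch: delegate tedious scaffolding to an LLM, keep the human
# in the loop for review. Model choice and prompt are illustrative only.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

SPEC = """
Generate a TypeScript React component for an order-entry form with fields
for customer, product, and quantity, client-side validation, and a submit
handler that POSTs JSON to /api/orders.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable code model would do here
    messages=[
        {"role": "system", "content": "You are a senior front-end developer."},
        {"role": "user", "content": SPEC},
    ],
)

# The output is a draft for the team to review, test, and refine with the
# users - not a drop-in artifact.
print(response.choices[0].message.content)
```

The interesting work stays human: deciding what the form should do, sitting with the users, and reviewing what comes back.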
The very cool thing about AI-driven development is that, by the nature of the LLM technology that enables it, it works best with the technologies already in the most widespread use today. I'm looking forward to the posts to come, and hopefully I can convince potential skeptics and naysayers (including the one in the back of my head!) to dream along with me.
Caveats & stray thoughts
Before closing off, I want to quickly address some objections that I imagine a hypothetical critical reader would raise:
No-code/low-code tooling has a similar pitch: IT professionals - proficient business users even! - can rapidly build apps on top of their existing systems using graphical drag-and-drop tools. In practice I see show-stopping drawbacks to this technology: apps that are hard to test, hard to scale, perform poorly, are highly vendor-dependent, and come with UI/UX that typically lacks polish, customization options, and user-friendliness (assuming the users were ever involved at all). I have seen a handful of apps built on no/low-code in different companies, and without exception have come away thinking that it all kind of sucked and fell on the wrong side of the value<>cost equation.
In the world of business software, there are many impediments that no development practice or single technology can overcome, including off the top of my head:
- Bad organizational practices where IT and Business are kept at arm's length from each other, and collaboration is disincentivized either implicitly or explicitly.
- Bad management where user adoption is either not considered or not properly designed for.
- Bad change management & org architecture where excessive friction keeps software from being releasable, quickly or ever.
- Bad data landscapes where the right data is not made accessible or available.[^10]
[^1]: Alright, so maybe not 'imminent', but coming for us some day 'soon', and like Roy Scheider I want to be damn ready for that shark when it does come to get us!

[^2]: In 2024, replace AI with LLM. I use AI in this list because it reads better.

[^3]: Perhaps ironically, regardless of what I write in these posts on AI tech, I believe the main impediments are and will always be human & organizational, and my purpose in life is to understand these impediments so I can break them. In this regard, AI that can code is just another tool, but a potentially very useful and disruptive one.

[^4]: Me paraphrasing Alan Cooper, based on his accounts (1) (2)

[^5]: Ruby on Rails comes close. I love Rails. I wish it were viable in enterprise.

[^6]: Caveat: change & risk management need to be taken into account. I am comfortable pushing these to their limits, but I know it can be counter-productive to run faster than the organization can keep up.

[^7]: Sometimes the line between these blurs. That doesn't make things simpler.

[^8]: Citation needed, I know - it is easy to find studies on productivity up until about 2010, but hard to find them for the last 10 years. I hope this statement isn't controversial enough to warrant a rundown of supporting studies.

[^9]: The specific context here is business/enterprise software. I am of course aware that it is possible for a small team or even an individual with the right skillset and determination to quickly spin up a web or mobile application and get it onto the market using modern services like Supabase or a productivity-oriented framework like Rails. But as a technology historian (BSc (Hons)), I know this has been true to some extent in every decade since the dawn of consumer software in the 1970s, so I don't think it fully negates my point.

[^10]: LLMs tend to complicate this, as few businesses are ready or willing to have their data processed by third-party AIs (with good reason), and building in-house AI tech is hard and costly.