A Brief History of DevOps
To understand the future of DevOps, it’s worth understanding its past, which I can recall from first-hand experience. In the late ’90s, I was a DSDM (Dynamic Systems Development Method) trainer. DSDM was a precursor to agile, a response to the slow, rigid structures of waterfall methodologies. With waterfall, the process was painstakingly slow: requirements took months, design took weeks, coding seemed endless, and then came testing, validation, and user acceptance, all highly formalized.
While such structure was seen as necessary to avoid mistakes, by the time development was halfway done, the world had often moved on and the requirements had changed. I remember building bespoke systems, only for a new product to launch with graphics libraries that made our custom work obsolete. One graphics tool, ILOG (later bought by IBM), replaced an entire development effort on its own. This exemplified the need for a faster, more adaptive approach.
New methodologies emerged to break the slow pace. In the early ’90s, rapid application development and the spiral model, where you’d build and refine repeated prototypes, became popular. These approaches eventually led to methodologies like DSDM, built around principles such as time-boxing and cross-functional teams, with an unspoken principle of camaraderie: hard work balanced with hard play.
Others were developing similar approaches in different organizations, such as the Select Perspective developed by my old company, Select Software Tools (notable for its use of the Unified Modeling Language and its integration of business process modelling). All of these efforts paved the way for concepts that eventually inspired Gene Kim et al.’s The Phoenix Project, which paid homage to Eli Goldratt’s The Goal. The Phoenix Project tackled efficiency and the need to keep pace with customer needs before they evolved past the original specifications.
In parallel, object-oriented languages were added to the mix, helping by building applications around entities that stayed relatively stable even if requirements shifted (hat tip to James Rumbaugh). So, in an insurance application, you’d have objects like policies, claims, and customers. Even as features evolved, the core structure of the application stayed intact, speeding things up without needing to rebuild from scratch.
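To make the point concrete, here is a minimal sketch, in Python and with purely illustrative names, of the kind of stable core an insurance application might keep even as features around it change:

```python
# Illustrative only: core insurance entities that stay stable while
# features evolve around them.
from dataclasses import dataclass, field


@dataclass
class Customer:
    customer_id: str
    name: str


@dataclass
class Claim:
    claim_id: str
    amount: float
    status: str = "open"


@dataclass
class Policy:
    policy_id: str
    holder: Customer
    claims: list[Claim] = field(default_factory=list)

    def file_claim(self, claim: Claim) -> None:
        # New behavior (fraud checks, notifications, and so on) can be
        # layered on without reshaping these core entities.
        self.claims.append(claim)
```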
Meanwhile, along came Kent Beck and extreme programming (XP), which placed developers squarely at the heart of development. XP promoted an anti-methodology stance, urging developers to throw out burdensome, restrictive approaches and instead focus on user-driven design, collaborative programming, and quick iterations. This fast-and-loose style had a maverick, frontier spirit to it. I remember meeting Kent for lunch once; great guy.
The term “DevOps” entered the software world in the late 2000s, just as new ideas like service-oriented architectures (SOA) were taking shape. Development had evolved from object-oriented to component-based, then to SOA, which aligned with the growing dominance of the internet and the rise of web services. Accessing parts of applications via web protocols, in turn, brought about RESTful architectures.
The irony is that as agile matured further, formality snuck back in with methodologies like the Scaled Agile Framework (SAFe) formalizing agile processes. The goal remained to build quickly but within structured, governed processes, a balancing act between speed and stability that has defined much of software’s recent history.
The Transformative Effect of Cloud
Then, of course, came the cloud, which transformed everything again. Computers, at their core, are entirely virtual environments. They’re built on semiconductors, dealing in zeros and ones—transistors that can be on or off, creating logic gates that, with the addition of a clock, allow for logic-driven processing. From basic input-output systems (BIOS) all the way up to user interfaces, everything in computing is essentially imagined.
It’s all a simulation of reality, giving us something to click on—like a mobile phone, for instance. These aren’t real buttons, just images on a screen. When we press them, it sends a signal, and the phone’s computer, through layers of silicon and transistors, interprets it. Everything we see and interact with is virtual, and it has been for a long time.
Back in the late ’90s and early 2000s, general-use computers advanced from running a single workload on each machine to managing multiple “workloads” at once. Mainframes could do this decades earlier—you could allocate a slice of the system’s architecture, create a “virtual machine” on that slice, and install an operating system to run as if it were a standalone computer.
Meanwhile, other types of computers also emerged—like the minicomputers from manufacturers such as Tandem and Sperry Univac. Most have since faded away or been absorbed by companies like IBM (which still operates mainframes today). Fast forward about 25 years, and we saw Intel-based or x86 architectures first become the “industry standard” and then develop to the point where affordable machines could handle similarly virtualized setups.
This advancement sparked the rise of companies like VMware, which provided a way to manage multiple virtual machines on a single hardware setup. It created a layer between the virtual machine and the physical hardware—though, of course, everything above the transistor level is still virtual. Suddenly, we could run two, four, eight, 16, or more virtual machines on a single server.
The virtual machine model eventually laid the groundwork for the cloud. With cloud computing, providers could spin up virtual machines on demand for their customers in robust, built-for-purpose data centers.
However, there was a downside: applications now had to run on top of a full operating system and hypervisor layer for each virtual machine, which added significant overhead. Having five virtual machines meant running five operating systems—essentially a waste of processing power.
The Rise of Microservices Architectures
Then, around the mid-2010s, containers emerged. Docker, in particular, introduced a way to run application components within lightweight containers, communicating with each other through networking protocols. Containers added efficiency and flexibility. Docker’s own Docker Swarm and, later, Google’s Kubernetes helped orchestrate and distribute these containerized applications, making deployment easier and leading to today’s microservices architectures. Virtual machines still play a role, but container-based architectures have become more prominent. A quick nod, too, to other models such as serverless, in which you can execute code at scale without worrying about the underlying infrastructure: it’s like a giant interpreter in the cloud.
All such innovations gave rise to terms like “cloud-native,” referring to applications built specifically for the cloud. These are often microservices-based, using containers and developed with fast, agile methods. But despite these advancements, older systems still exist: mainframe applications, monolithic systems running directly on hardware, and virtualized environments. Not every use case is suited to agile methodologies; certain systems, like medical devices, require careful, precise development, not quick fixes. Google’s term, “continuous beta,” would be the last thing you’d want in a critical health system.
Meanwhile, we aren’t necessarily that good at sustaining the constant dynamism that agile methodologies demand. Constant change can be exhausting, like running a “supermarket sweep” every day, and repeatedly shifting priorities is hard on people. That’s where I talk about the “guru’s dilemma”: agile experts can guide an organization, but sustaining the change is tough. This is where DevOps often falls short in practice. Many organizations adopt it partially or poorly, leaving the same old problems unsolved, with operations still feeling the brunt of last-minute development hand-offs. Ask any tester.
The Software Development Singularity
And that brings us to today, where things get interesting with AI entering the scene. I’m not talking about a total AI takeover, the “singularity” described by Ray Kurzweil and his peers, where we’re all just talking to super-intelligent entities. Two decades ago, that was 20 years away, and it still is. I’m talking about the practical use of large language models (LLMs). Application creation is rooted in language: natural language defines requirements and user stories, the structured language of code implements them, and “everything else,” from test scripts to bills of materials, surrounds them. LLMs are therefore a natural fit for software development.
Last week, however, at GitHub Universe in San Francisco, I saw what’s likely the dawn of a “software development singularity”: with tools like GitHub Spark, we can type a prompt for a specific application, and it gets built. Currently, GitHub Spark is at an early stage; it can create simpler applications from straightforward prompts. But this will change quickly. First, it will evolve to build more complex applications from better prompts. Many applications share common needs, such as user login, CRUD operations (create, read, update, delete), and workflow management. While specific functions may differ, applications often follow predictable patterns. So the catalog of applications that can be AI-generated will grow, as will their stability and reliability.
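To illustrate just how predictable those patterns are, here is a minimal, hypothetical sketch in Python of the generic CRUD shape that such generators can reproduce for almost any entity; none of these names come from GitHub Spark itself:

```python
# A hypothetical, generic CRUD store: give a generator the entity name
# and fields, and this shape barely changes from application to application.
from typing import Any


class CrudStore:
    def __init__(self) -> None:
        self._items: dict[int, dict[str, Any]] = {}
        self._next_id = 1

    def create(self, data: dict[str, Any]) -> int:
        item_id = self._next_id
        self._items[item_id] = dict(data)
        self._next_id += 1
        return item_id

    def read(self, item_id: int) -> dict[str, Any] | None:
        return self._items.get(item_id)

    def update(self, item_id: int, data: dict[str, Any]) -> bool:
        if item_id not in self._items:
            return False
        self._items[item_id].update(data)
        return True

    def delete(self, item_id: int) -> bool:
        return self._items.pop(item_id, None) is not None
```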
That’s the big bang news: it’s clear we’re at a pivotal point in how we view software development. As we know, however, there’s more to developing software than writing code. LLMs are being applied in support of activities across the development lifecycle, from requirements gathering to software delivery:
- On the requirements front, LLMs can help generate user stories and identify key application needs, sparking conversations with end-users or stakeholders. Even if high-level application goals are the same, each organization has unique priorities, so AI helps tailor these requirements efficiently. This means fewer revisions, whilst supporting a more collaborative development approach.
- AI also enables teams to move seamlessly from requirements to prototypes. With tools such as GitHub Spark, developers can easily create wireframes or initial versions, getting feedback sooner and helping ensure the final product aligns with user needs.
- LLMs also support testing and code analysis, a labor-intensive and burdensome part of software development. For instance, AI can suggest comprehensive test coverage, create test environments, handle much of the test creation, generate relevant test data, and even help decide when testing has been sufficient, reducing the costs of test execution (see the sketch after this list).
- LLMs and machine learning have also started supporting fault analysis and security analytics, helping developers code more securely by design. AI can recommend architectures, models and libraries that offer lower risk, or fit with compliance requirements from the outset.
- LLMs are reshaping how we approach software documentation, which is often a time-consuming and dull part of the process. By generating accurate documentation from a codebase, LLMs can reduce the manual burden whilst ensuring that information is up-to-date and accessible. They can summarize what the code does, highlighting unclear areas that might need a closer look.
- One of AI’s most transformative impacts lies in its ability to understand, document, and migrate code. LLMs can analyze codebases, from COBOL on mainframes to database stored procedures, helping organizations understand what’s vital, versus what’s outdated or redundant. In line with Alan Turing’s foundational principles, AI can convert code from one language to another by interpreting rules and logic.
- For project leaders, AI-based tools can analyze developer activity and provide readable recommendations and insights to increase productivity across the team.
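To give a flavor of the testing point above, here is a minimal sketch of asking an LLM to draft unit tests, using the OpenAI Python SDK. The model name and the module under test are assumptions for illustration, and a human still reviews whatever comes back:

```python
# Sketch: ask an LLM to draft pytest tests for an existing module.
# Assumptions: OPENAI_API_KEY is set in the environment, "gpt-4o" is an
# available model, and pricing.py is a hypothetical module under test.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("pricing.py") as f:
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You write thorough pytest unit tests."},
        {"role": "user", "content": f"Write pytest tests for this module:\n{source}"},
    ],
)

# Generated tests are a starting point, not a finished artifact:
# review them before committing.
print(response.choices[0].message.content)
```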
AI is becoming more than a helper—it’s enabling faster, more iterative development cycles. With LLMs able to shoulder many responsibilities, development teams can allocate resources more effectively, moving from monotonous tasks to more strategic areas of development.
AI as a Development Accelerator
As this (incomplete) list suggests, there’s still plenty to be done beyond code creation, with activities supported and augmented by LLMs. LLMs can automate repetitive tasks and enable efficiency in ways we haven’t seen before. However, complexities in software architecture, integration, and compliance still require human oversight and problem-solving.
Not least because AI-generated code and recommendations aren’t without limitations. For example, while experimenting with LLM-generated code, I found ChatGPT recommending a library with function calls that didn’t exist. At least, when I told it about its hallucination, it apologized! Of course, this will improve, but human expertise will be essential to ensure outputs align with intended functionality and quality standards.
Other challenges stem from the very ease of creation. Each piece of new code will require configuration management, security management, quality management and so on. Just as with virtual machines before, we have a very real risk of auto-created application sprawl. The biggest obstacles in development—integrating complex systems, or minimizing scope creep—are challenges that AI is not yet fully equipped to solve.
Nonetheless, the gamut of LLM-based tools stands to augment how development teams and their ultimate customers, the end-users, interact. It raises the question, “Whither DevOps?”, keeping in mind that agile methodologies emerged because their waterfall-based forebears were too slow to keep up. I believe such methodologies will evolve, augmented by AI-driven tools that guide workflows without needing extensive project management overhead.
This shift enables quicker, more structured delivery of user-aligned products, maintaining secure and compliant standards without compromising speed or quality. We can expect a return to waterfall-like approaches, albeit ones where the entire cycle takes a matter of weeks or even days.
In this new landscape, developers evolve from purist coders to facilitators, orchestrating activities from concept to delivery. Within this, AI might speed up processes and reduce risks, but developers will still face many engineering challenges—governance, system integration, and maintenance of legacy systems, to name a few. Technical expertise will remain essential for bridging gaps AI cannot yet cover, such as interfacing with legacy code, or handling nuanced, highly specialized scenarios.
LLMs are far from replacing developers. In fact, given the growing skills shortage in development, they will quickly become a necessary tool, enabling more junior staff to tackle more complex problems with reduced risk. In this changing world, building an application is the one thing keeping us from building the next one. LLMs create an opportunity to accelerate not just pipeline activity but entire software lifecycles. We might, and in my opinion should, see a shift from pull requests to story points as the measure of success.
The Net-Net for Developers and Organizations
For development teams, the best way to prepare is to start using LLMs—experiment, build sample applications, and explore beyond the immediate scope of coding. Software development is about more than writing loops; it’s about problem-solving, architecting solutions, and understanding user needs.
Ultimately, by focusing on what matters, developers can rapidly iterate on version updates or build new solutions to tackle the endless demand for software. So, if you’re a developer, embrace LLMs with a broad perspective. LLMs can free you from the drudgery, but the short-term challenge will be working out how to integrate them into your workflows.
Or, you can stay old school and stick with a world of hard coding and command lines. There will be a place for that for a few years yet. Just don’t think you are doing yourself or your organization any favors – application creation has always been about using software-based tools to get things done, and LLMs are no exception.
Rest assured, we will always need engineers and problem solvers, even if the problems change. LLMs will continue to evolve; my money is on multiple LLM-based agents being put in sequence to check each other’s work, test the outputs, or create contention by offering alternative approaches to a scenario.
The future of software development promises to be faster-paced, more collaborative, and more innovative than ever. It will be fascinating, and our organizations will need help making the most of it all.