The SDLC and AI

The introduction of AI tools into products and workflows is often framed as a disruptive force that requires throwing away established processes for building software. In my experience, however, a deeper examination reveals that working with AI is fundamentally a change in implementation, not a change in mindset from the standard software development lifecycle (SDLC). The core challenges and principles that govern traditional software development remain entirely relevant in the AI era.

One of the most persistent challenges in any complex software project is the presence of unknowns. Whether I’m integrating a new third-party library or migrating a legacy system, I must manage uncertainty.

  • In Traditional SDLC: Unknowns manifest as ambiguities in requirements, performance under load, or compatibility issues within a vast code base. The process demands meticulous analysis, spikes, and prototyping to add clarity and reduce risk.

  • In Development with AI models: Unknowns appear in data quality, model explainability, and real-world performance. As users of models, developers can wrangle these through tools like DSPy, techniques like prompt optimization, and processes like test suites with judges, restoring some clarity and trust.
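A "test suite with a judge" can be as simple as scoring model outputs against a rubric and asserting on the aggregate pass rate. This is a minimal sketch: `call_model` is a hypothetical stub (in practice an LLM judge would often replace the rubric function), so the names and cases here are illustrative assumptions, not a real API.

```python
# Sketch of a judged test suite. call_model is a hypothetical stand-in
# for a real model call; judge is a deterministic rubric used in place
# of an LLM judge to keep the example self-contained.

def call_model(prompt: str) -> str:
    # Hypothetical stub; a real implementation would call a model API.
    return "Paris is the capital of France."

def judge(answer: str, must_contain: list[str]) -> bool:
    # Minimal rubric-based judge: pass if every required fact appears.
    return all(fact.lower() in answer.lower() for fact in must_contain)

cases = [
    ("What is the capital of France?", ["paris"]),
]

results = [judge(call_model(prompt), facts) for prompt, facts in cases]
pass_rate = sum(results) / len(results)
print(f"pass rate: {pass_rate:.0%}")
```

Reporting a pass rate rather than a single boolean reflects the statistical nature of model behavior: the suite tracks a score over time instead of a binary green/red.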

In both contexts, the development mindset is the same: to systematically identify, investigate, and resolve ambiguities to produce a reliable, functional system.

Managing Dynamic Dependencies and Risks

All software exists within an ecosystem of changing dependencies. For traditional software, this includes operating system updates, evolving libraries, and external service APIs. In AI, this challenge is simply shifted:

  • Evolving Codebases: Traditional modern software has a large dependency chain with potential unknown issues, which we manage through vulnerability scanners, code review, and dependency audits. AI models are essentially code whose behavior is the product of billions of learned parameters; the potential "issues" are unexpected outputs, biases, or performance degradation when given an unexpected input.

  • Data as a Dependency: Data is a critical, ever-changing dependency for AI. Controlling the inputs to your model and locking down model versions is the AI equivalent of reproducible builds in the traditional development ecosystem.
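The "reproducible builds" analogy can be made concrete by treating the model version and the evaluation data as pinned entries in a lockfile. A minimal sketch, assuming an illustrative model identifier and in-memory records (both hypothetical):

```python
# Sketch: pin the model version and fingerprint the data, analogous to
# a dependency lockfile. MODEL_ID and the records are illustrative.
import hashlib
import json

MODEL_ID = "example-model-2024-06-01"  # pinned, like a library version

def dataset_fingerprint(records: list[dict]) -> str:
    # Hash a canonical serialization so any change to the data,
    # however small, produces a different fingerprint.
    blob = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

records = [{"input": "2+2", "expected": "4"}]
lock = {"model": MODEL_ID, "data_sha256": dataset_fingerprint(records)}
print(lock)
```

If either the pinned model or the data fingerprint changes, the "build" is no longer reproducible and the change should be reviewed, just as a changed lockfile would be.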

The process for managing these risks is rooted in the SDLC principle of continuous monitoring, maintenance, and adaptation to external change.

Validation: The Universal Requirement


The most compelling similarity lies in the need for rigorous validation and testing.


In the traditional SDLC, we rely on a suite of tools and practices: unit tests, integration tests, end-to-end tests, and user acceptance testing. These are the mechanisms we use to validate that the code functions as intended and meets the requirements.


When using models in a project, the tools are slightly different, but the intent is identical:

  • Testing Tools: Instead of only looking at code logic, validation includes statistical tools like model metrics (accuracy, precision, recall), fairness metrics, and adversarial testing.

  • Validation Mindset: The goal remains to use tools to validate that the system (the AI model) is robust, fair, and performs reliably in its intended environment. It’s still a process of defining desired outcomes and using a battery of tests to confirm those outcomes are met.
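The statistical metrics above can be computed directly from raw predictions; this sketch (with made-up labels) shows accuracy, precision, and recall as the model-world analogue of a unit-test pass/fail report:

```python
# Sketch: accuracy, precision, and recall from binary predictions.
# The labels below are illustrative, not from a real dataset.

def classification_metrics(y_true: list[int], y_pred: list[int]) -> dict:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        "accuracy": correct / len(y_true),
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }

m = classification_metrics([1, 0, 1, 1], [1, 0, 0, 1])
print(m)  # one false negative drags recall below precision
```

The point is the mindset: these numbers play the same role as a test report, and a drop in any of them is treated as a failing build.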

Ultimately, the usage of AI models requires developers to learn new frameworks, adopt new tools, and understand data science concepts; these are changes in implementation. However, the core developer mindset of the methodical approach to problem-solving, managing uncertainty, mitigating risk, and validating results is a constant that binds traditional SDLC to the era of AI.


Mapping SDLC → AI processes

The SDLC still applies to AI — just map these concepts:

  • Unit tests → Eval suites

  • Dependency management → Model + data versioning

  • Monitoring → Drift + regression testing

  • UAT → Human review / real-world feedback loops
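The "Monitoring → Drift + regression testing" mapping can be sketched as a gate that compares the current evaluation score against a recorded baseline. The baseline value and tolerance here are illustrative assumptions:

```python
# Sketch of a drift/regression gate. The baseline and tolerance are
# made-up numbers; in practice they come from the last approved run.

BASELINE_ACCURACY = 0.90   # recorded from the last approved version
TOLERANCE = 0.02           # allowed regression before the gate fails

def regression_gate(current_accuracy: float) -> bool:
    # Fail if accuracy drops more than TOLERANCE below the baseline.
    return current_accuracy >= BASELINE_ACCURACY - TOLERANCE

print(regression_gate(0.89))  # small dip within tolerance
print(regression_gate(0.85))  # genuine regression
```

Run as part of CI against a pinned eval set, this is the direct analogue of a failing unit test blocking a merge.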


LinkedIn Posts (generated)

The SDLC & AI: A Mindset, Not a Methodology Shift

A lot of the conversation around AI in software is framed like it demands a total reset — new rules, new process, new mindset.

But after working with it hands-on, I’ve come to believe the opposite:

AI changes implementation — not the fundamentals of how we build reliable systems.

The SDLC has always been about managing uncertainty, controlling dependencies, and validating outcomes. AI just shifts where those challenges show up.

Unknowns still exist
Traditional SDLC: unclear requirements, performance edge cases, system compatibility
AI development: data quality, explainability, real-world behavior
Either way, the job is the same: identify unknowns early, reduce risk through experiments, and build confidence iteratively.

Dependencies still shift
Libraries and APIs evolve. Models evolve too.
Data becomes another moving part — like any dependency — requiring versioning, reproducibility, and monitoring.

Validation is still non-negotiable
Instead of only unit tests and integration tests, we add model metrics, fairness checks, and adversarial testing.
But the mindset is unchanged: prove the system works reliably in the real world.

AI tools and frameworks are new — but the core software development principles still apply.

AI isn’t rewriting the SDLC. It’s expanding it.

#AI #SoftwareDevelopment #SDLC #Engineering #TechLeadership #ProductDevelopment #MachineLearning


AI doesn’t eliminate uncertainty in software development.

It relocates it.

In traditional SDLC, the unknowns show up as:

  • unclear requirements

  • integration risks

  • performance under load

  • edge cases in complex codebases

With AI systems, the unknowns shift to:

  • data quality and representativeness

  • model explainability

  • unpredictable behavior at the edges

  • performance in real-world environments

But the mindset is unchanged…

Identify the unknowns early. Run spikes. Prototype. Test assumptions. Reduce risk systematically.

Same engineering discipline. Different tools.


The biggest difference with AI isn’t the need for testing.

It’s the shape of testing.

In traditional SDLC we validate systems with:

  • unit tests

  • integration tests

  • end-to-end tests

  • UAT

With AI systems, validation expands to include:

  • model metrics (accuracy, precision, recall)

  • fairness + bias testing

  • adversarial and red-team testing

  • evaluation suites for regression

  • human review and feedback loops

But the intent stays exactly the same:

Define what “good” looks like. Prove the system meets it. Continuously validate after release.

Validation isn’t optional in AI. It’s the difference between demos… and dependable systems.



AI development feels “new,” so it’s tempting to assume it needs a totally new approach.

But most engineering problems haven’t changed.

We’re still solving for:

  • reliability

  • safety

  • scalability

  • maintainability

  • user trust

AI adds new tools and new sources of uncertainty.

But the SDLC principles still apply:

  • iterate

  • reduce risk early

  • manage dependencies

  • test relentlessly

  • monitor continuously

The real shift isn’t methodology.

It’s maturity.

The teams that build great AI systems aren’t reinventing engineering. They’re extending it.
