Beyond Buy vs. Build, Your AI Sourcing Strategy Needs a Third Option


The “should we build or buy?” question has haunted technology leaders for decades. But in the age of AI, that binary framing is dangerously inadequate. With organisations now pursuing an average of ten AI initiatives simultaneously, and 43% running between five and ten at any given time, the stakes of getting your sourcing strategy wrong have never been higher.

Having spent years as a solution architect working across core banking platforms, cloud-native infrastructure, and now portfolio architecture, I’ve watched organisations struggle with this exact tension. The reality is that most enterprises don’t need to pick a lane. They need a spectrum. The “buy, build, and blend” framework goes beyond the old dichotomy and offers the clearest mental model I’ve seen for how architects should be thinking about this problem.

The traditional buy-vs-build framing implies a clean boundary: either you purchase commercial off-the-shelf (COTS) software, or you write it yourself. In practice, that boundary barely exists anymore. Between pure configuration of a COTS product and building a custom application from scratch, there’s an entire spectrum of options: extending vendor products via marketplace add-ons, building custom integrations with low-code or pro-code tooling, creating automations and connectors between related apps, and, increasingly, building AI agents and custom UIs on top of purchased platforms. This “blend zone” is where most real enterprise work happens today, and it maps neatly onto a spectrum from undifferentiated to differentiated business capabilities. The architectural implication is straightforward: reserve your engineering capacity for capabilities that genuinely set you apart, and lean on vendor ecosystems for everything else.

Before diving into AI-specific considerations, it’s worth anchoring on five factors that should drive all application sourcing decisions, because they apply with even greater force in the AI context. The first is criticality and business value: is the technology your core value proposition, or is it a tool to solve a business problem? If AI is central to your product offering, the calculus shifts heavily toward build. If it’s an operational improvement, buying or blending likely makes more sense. The second is risk and internal competencies, which forces an honest assessment of your IP exposure and vendor lock-in risk alongside a candid look at whether your organisation actually has the skills to build and maintain what it’s contemplating. In AI, this question cuts especially deep: the talent market is ferociously competitive, and the gap between having a few data scientists and having production-grade ML engineering capability is vast.

The third factor, total cost of ownership, is where organisations most frequently deceive themselves. TCO for any application stretches across four phases: go-live costs (design, development, testing, initial licences), current annual costs (both recurring operations/support and nonrecurring maintenance), future costs (operating cost variations, predictable upgrades, potential enhancements), and decommissioning costs (particularly data retention). I’ve seen too many build decisions justified by comparing initial development cost against multi-year licence fees, while conveniently ignoring the ongoing support, adaptive maintenance, and eventual decommissioning burden. The fourth factor is partners’ abilities: their capacity to execute and the completeness of their vision, which matters enormously in a market where AI vendors range from research-stage startups to hyperscaler platforms. The fifth is opportunities: whether you’re deploying your internal capacity on the highest-value work, or burning cycles on problems that vendors have already solved at scale.
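The four-phase TCO structure above lends itself to a simple model. The sketch below is illustrative only: the class name, field names, and all figures are invented to show how the phases combine over a planning horizon, not drawn from any real comparison.

```python
from dataclasses import dataclass

@dataclass
class TCOModel:
    """Hypothetical four-phase TCO model; every figure is an illustrative input."""
    go_live: float              # design, development, testing, initial licences
    annual_recurring: float     # operations and support, per year
    annual_nonrecurring: float  # averaged maintenance, per year
    future_costs: float         # predictable upgrades and enhancements
    decommissioning: float      # shutdown and data-retention costs
    years: int = 5              # planning horizon

    def total(self) -> float:
        # Sum all four phases over the horizon, not just go-live vs licences.
        return (self.go_live
                + self.years * (self.annual_recurring + self.annual_nonrecurring)
                + self.future_costs
                + self.decommissioning)

# A build option with high upfront and maintenance costs...
build = TCOModel(go_live=800_000, annual_recurring=150_000,
                 annual_nonrecurring=100_000, future_costs=200_000,
                 decommissioning=120_000)
# ...versus a buy option with low go-live costs but heavier recurring fees.
buy = TCOModel(go_live=150_000, annual_recurring=300_000,
               annual_nonrecurring=20_000, future_costs=0,
               decommissioning=50_000)
```

The point of the exercise is that comparing `go_live` alone (where buy wins easily) or licence fees alone (where build looks cheap) gives the wrong answer; only `total()` over the full horizon is a fair comparison.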

If the framework tilts toward buying for undifferentiated capabilities, it’s worth understanding why organisations still hesitate. Survey data on COTS challenges tells a revealing story. The top three concerns, vendor lock-in (15%), integration issues (14%), and limited customisation (13%), are fundamentally about control and flexibility. The next tier, hidden costs, lack of control over updates, and security concerns, reinforces the same theme. For architects, this is a familiar pattern. The promise of COTS is speed and reduced engineering burden. The reality is that you’re trading one set of problems (building and maintaining software) for another (integration complexity, vendor dependency, and reduced agility). The question isn’t which set of problems is smaller; it’s which set your organisation is better equipped to manage.

When it comes to AI specifically, the decision framework gets richer. There are nine influential factors that CIOs must weigh when choosing between buy, blend, and build. The first three are strategic: external differentiation (will this AI capability set you apart from competitors?), compliance (can the solution meet regulatory requirements?), and security (what are the risk implications?). The next three are ecosystem-related: vendor ecosystem maturity, data origin and its influence on model accuracy, and available skills. The final three are economic and operational: short-term implementation costs, long-term maintenance costs, and impact on workers. What makes this framework powerful is the recognition that these factors carry different weight depending on your strategic intent, and this is where the defend/extend/upend categorisation becomes genuinely useful.
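One way to make the "different weight depending on strategic intent" idea concrete is a weighted scoring helper. The factor names below come from the framework described above, but the weight profiles and the scoring scale are entirely invented for illustration; a real exercise would calibrate them to the organisation.

```python
# The nine factors from the framework, grouped strategic / ecosystem / economic.
FACTORS = ["differentiation", "compliance", "security",
           "ecosystem_maturity", "data_origin", "skills",
           "short_term_cost", "long_term_cost", "worker_impact"]

# Illustrative weight profiles (1-3): e.g. a defend initiative might emphasise
# cost and ecosystem maturity, an upend initiative differentiation and data.
WEIGHTS = {
    "defend": dict(zip(FACTORS, [1, 2, 2, 3, 1, 1, 3, 3, 2])),
    "extend": dict(zip(FACTORS, [3, 2, 2, 2, 2, 3, 2, 2, 2])),
    "upend":  dict(zip(FACTORS, [3, 3, 3, 1, 3, 3, 1, 2, 1])),
}

def weighted_score(intent: str, scores: dict) -> float:
    """Score a candidate sourcing option (1-5 per factor) under an intent's weights."""
    w = WEIGHTS[intent]
    return sum(w[f] * scores.get(f, 0) for f in FACTORS) / sum(w.values())
```

Running the same factor scores through different intent profiles makes the framework's point visible: the option that wins for a defend initiative can lose for an upend one, purely because the weights shift.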

The most actionable insight from this framework is the three-way categorisation of AI use cases by strategic intent. “Defend” use cases aim to maintain competitive parity: think augmenting individual productivity with tools your competitors are also adopting. For these, the bias should be heavily toward buying from incumbent vendors with embedded AI, because the goal is commodity capability delivered quickly and reliably. The decision here is essentially a series of “yes/no” questions about whether your incumbent vendor can handle the use case, and in most cases, the answer should favour the incumbent. Minimise customisation, leverage existing vendor relationships, and move on to higher-value problems.

“Extend” use cases aim to differentiate, transforming processes or teams to create competitive advantage. Here, the buy/blend/build decision gets genuinely complex. Questions like “does this offer more than minor differentiation?” and “is the upfront cost of building justified by freedom from future vendor price hikes?” don’t have easy answers. The presence of “yes, but…” responses throughout the framework is telling. It acknowledges that extend decisions are inherently contextual and require careful judgment rather than formulaic answers.

“Upend” use cases aim to disrupt, creating new propositions, products, or markets. For these, the framework tilts toward blend or build, but with important caveats: speed to market may still justify buying as a temporary measure; vendor access to data you can’t replicate may make blending essential; and compliance and security requirements in unfamiliar geographies may demand vendor partnerships. The key insight is that even for disruptive AI initiatives, pure build is rarely optimal.
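The defend/extend/upend defaults and their caveats can be captured in a few lines. This is a sketch of the decision logic as described above; the function and flag names are hypothetical, and real decisions (especially for extend) involve far more context than boolean flags can carry.

```python
def default_bias(intent: str,
                 minor_differentiation_only: bool = False,  # extend caveat
                 needs_vendor_data: bool = False,           # upend caveat
                 unfamiliar_geography: bool = False) -> str: # upend caveat
    """Return the default sourcing bias (buy / blend / build) for an AI initiative."""
    if intent == "defend":
        return "buy"    # commodity capability: favour incumbent vendors
    if intent == "extend":
        # 'Yes, but...' territory: if the differentiation is minor, buying
        # still wins; otherwise blend vendor platforms with proprietary logic.
        return "buy" if minor_differentiation_only else "blend"
    if intent == "upend":
        # Even disruptive initiatives rarely justify pure build when vendor
        # data access or compliance in unfamiliar geographies is in play.
        if needs_vendor_data or unfamiliar_geography:
            return "blend"
        return "build"
    raise ValueError(f"unknown intent: {intent}")
```

The value of encoding this, even informally, is that it forces the caveats to be stated as explicit questions rather than discovered mid-procurement.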

One of the most compelling ways to visualise the enterprise AI stack is as a layered sandwich. The bottom layers, your centralised data and custom-built AI, are within your control. The top layers, external data and embedded AI from vendor products, are outside your control. Trust, risk, and security management and bring-your-own-AI sit in the middle, mediating between what you own and what you consume. For enterprise architects, this translates into a practical design principle: invest in strong foundations (data infrastructure and governance), build protective layers, and be intentional about which AI capabilities you build versus consume. The organisations that get the layering right will be the ones that can absorb the rapid pace of AI innovation without constantly re-architecting their stack.

There’s an interesting maturity dimension to all of this. High-maturity organisations are three times more likely than low-maturity ones to adopt a hybrid vendor management strategy (33% vs 11%). Low-maturity organisations overwhelmingly default to centralised approaches (61%), while high-maturity ones are more evenly distributed across centralised (41%), decentralised (26%), and hybrid (33%) models. This suggests that as organisations mature in AI, they naturally evolve away from one-size-fits-all vendor strategies toward more nuanced, context-dependent approaches. This tracks with what I’ve observed in practice: early AI adoption benefits from central coordination, but scaling AI across the enterprise requires giving business units more autonomy while maintaining guardrails.

There’s also a procurement reality that architects need to confront. Ninety percent of recent software purchases included GenAI capabilities, but only 25% of respondents felt they achieved high-quality deals on those purchases. This gap reveals the current market dynamic: GenAI is being bundled into almost everything, but buyers are struggling to assess value and negotiate effectively. For architects advising on procurement, this means applying extra scrutiny to the GenAI components of vendor pitches. Are the AI features genuinely useful for your use cases, or are they checkbox additions designed to justify premium pricing? Does the vendor’s AI actually leverage your data to deliver differentiated outcomes, or is it generic capability that any competitor could also access?

If I were to distil all of this into practical guidance for fellow architects, it would be this. Start by classifying every AI initiative as defend, extend, or upend. This single step will dramatically simplify your sourcing discussions by establishing the right default bias for each initiative. For defend initiatives, fight the urge to over-engineer. Your incumbent vendors will almost certainly add the capability you need, and the integration cost of a new vendor rarely justifies the marginal improvement. For extend initiatives, invest in the “blend” capabilities (low-code customisation, API integration, and connector architecture) that let you combine vendor platforms with proprietary logic. This is where your architecture practice adds the most value. For upend initiatives, be ruthlessly honest about your organisation’s readiness. The data, skills, compliance, and security requirements for disruptive AI are substantial, and underestimating them is the fastest path to an expensive failure. And for all three categories, model the full TCO, including decommissioning costs and data retention, before committing to a path. The most expensive decision is the one you have to reverse.

Tags: AI, architecture, sourcing, enterprise strategy