
Most enterprises are spending more on AI than ever, and getting less clarity in return. The budgets are approved, the vendors are circling, and the executive decks are full of the word “transformation.” But beneath the surface, the same uncomfortable reality keeps surfacing: the data isn’t ready, the platforms aren’t ready, and the use cases aren’t mature enough to scale.
AI readiness depends on three pillars that most organisations are still building simultaneously: clean, governed, AI-ready data; platforms that can orchestrate agents across ecosystems; and use cases that have graduated beyond experimentation into repeatable business outcomes. If you’re honest with yourself, your organisation is probably still working on all three. So what should an Enterprise Architect actually do about it?
The single most important shift I’ve seen in mature AI programmes is the refusal to treat “AI” as a single budget line item. Instead, every AI initiative gets classified against one of three strategic postures.

Defend means embedding AI into existing applications to maintain competitive parity: think copilots bolted onto your CRM, or ML models running fraud detection on your existing transaction platform. The adoption model here is simple: consume an application, embed model APIs. Your sourcing rigour should be proportionate: medium diligence on talent and advisory, high on accelerators and IP, low on industry-specific expertise, because the vendor is doing the heavy lifting.

Extend means optimising specific workflows with custom agents, retrieval-augmented generation, or fine-tuned models. This is where most ambitious enterprises sit today: building domain-specific intelligence on top of foundation models. The evaluation profile shifts dramatically: you need strong partnership ecosystems and data engineering depth, and vendor copyright and compliance scrutiny moves to medium.

Upend means building entirely new business models or products powered by AI: custom model development, novel data strategies, proprietary training pipelines. Everything is high on the evaluation scale: talent, ethics, industry experience, commercials. If you’re truly trying to upend your market, you should be treating AI sourcing with the same rigour you’d apply to an M&A deal.
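A minimal sketch of how this classification could be encoded in a portfolio tool, assuming evaluation dimensions and level labels drawn from the description above (the key names are my own, not a standard taxonomy):

```python
from enum import Enum

class Posture(Enum):
    DEFEND = "defend"   # embed AI into existing apps for competitive parity
    EXTEND = "extend"   # optimise specific workflows with custom agents / RAG
    UPEND = "upend"     # build new AI-powered business models or products

# Sourcing-rigour profile each initiative inherits from its posture.
# Dimension names and levels are illustrative, taken from the prose above.
SOURCING_RIGOUR = {
    Posture.DEFEND: {"talent_advisory": "medium", "accelerators_ip": "high",
                     "industry_expertise": "low"},
    Posture.EXTEND: {"partnership_ecosystem": "high", "data_engineering": "high",
                     "copyright_compliance": "medium"},
    Posture.UPEND:  {"talent": "high", "ethics": "high",
                     "industry_experience": "high", "commercials": "high"},
}

def rigour_for(posture: Posture) -> dict:
    """Return the evaluation profile an initiative inherits from its posture."""
    return SOURCING_RIGOUR[posture]
```

The point of making this explicit, even in something this simple, is that governance and contract templates can then be selected mechanically from the posture rather than renegotiated per initiative.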
The point is that the level of governance, the vendor you hire, the contract you write, and how you measure success should all flow from which posture you’re in. Conflating “Defend” with “Upend” is how you end up overspending on commoditised capabilities or under-governing a genuinely transformational bet.
We’re watching three traditionally separate disciplines collide: Data & Analytics, Software Engineering & Infrastructure, and Business Process Design. At the intersection sits what’s increasingly being called “AI Services”, but I’d reframe it for architects as the new integration layer you need to own. The emerging competencies in this convergence zone (Insight Engineering, AI Integration, and Work Orchestration) don’t belong neatly to any one team. Your data scientists won’t build the agent orchestration. Your platform engineers won’t design the business workflows. And your business analysts won’t understand the infrastructure constraints of running inference at scale. As an Enterprise Architect, this convergence is your territory. You need to be the one defining the reference architecture that connects these domains, establishing the shared services layer, and ensuring the platform strategy doesn’t fragment into shadow AI projects.
The two-tiered model for AI use-case management is worth studying here. It separates the “why” (a management team of senior marketing leaders meeting monthly to set strategic direction and approve priorities) from the “how” (use-case leaders from each subfunction meeting weekly to plan, adapt, and track execution). The critical insight: having a dedicated subfunctional head for each use case who assesses their team’s capacity to take on AI projects is what prevents the all-too-common pattern of AI initiatives being layered on top of already-overloaded teams. Capacity assessment isn’t glamorous, but it’s the difference between a use case that ships and one that dies in a shared backlog.
One of the most underappreciated failure modes in AI programmes is misalignment across the C-suite. Each leader brings fundamentally different concerns to the table. Business leaders want to define strategic ambition, but worry about losing human control. IT and Data leaders need interoperability with existing systems, but struggle with integration complexity. Legal and Compliance care about IP protection and regulatory adherence, but can’t always ensure fairness and accountability at model level. And Finance and Procurement want ROI and cost transparency, but often lack the frameworks to measure AI-specific value. The architect’s role is to make these concerns legible to each other. A gated approach with human decision points at each stage addresses the business leader’s concern. Validation checkpoints at every integration gate satisfy IT. Contractual IP coverage handles Legal. And FinOps rigour, which I’ll get to, gives Finance what they need.
Here’s the contract model shift that too few organisations have made: if your vendor uses AI to code 50% faster but you’re still paying by the hour, you lose. The productivity gains from AI tooling flow entirely to the vendor’s margin. The move is from buying hours to buying outcomes. The contract model spectrum runs from traditional time-and-materials and staffing deals (labour-intensive, “as-is” performance) through to outcome-based and shared-risk models that tie payment to actual business results. For AI-heavy engagements, you should be demanding outcome-based or value-based pricing instead of FTE blocks, shared-risk models for innovation and proof-of-concept work, AI-Augmented Capacity (AI PODs) rather than pure staff augmentation, and Agent Efficiency Metrics written directly into the contract. The new KPIs for measuring these “digital employees” include metrics like Agent Efficiency Index (how efficiently does the agent complete tasks versus the optimal workflow?), Autonomy Utilisation Ratio (what percentage of tasks complete without human intervention?), and Decision Accuracy. If you’re not measuring these, you’re flying blind on whether your AI investment is actually performing.
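As a sketch of what writing those KPIs into a contract might look like operationally, assuming formula definitions the text doesn’t spell out (it names the metrics, not how to compute them):

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    steps_taken: int        # steps the agent actually executed
    optimal_steps: int      # steps in the ideal workflow for this task
    needed_human: bool      # did a human have to intervene?
    decision_correct: bool  # was the agent's final decision right?

def agent_kpis(tasks: list[TaskRecord]) -> dict:
    """Compute the three contract KPIs from per-task telemetry.
    The exact formulas here are illustrative assumptions."""
    n = len(tasks)
    return {
        # Agent Efficiency Index: how close each run is to the optimal
        # workflow, averaged (1.0 = every task ran optimally).
        "agent_efficiency_index":
            sum(t.optimal_steps / t.steps_taken for t in tasks) / n,
        # Autonomy Utilisation Ratio: share of tasks completed with
        # no human intervention.
        "autonomy_utilisation_ratio":
            sum(not t.needed_human for t in tasks) / n,
        # Decision Accuracy: share of correct final decisions.
        "decision_accuracy":
            sum(t.decision_correct for t in tasks) / n,
    }
```

Whatever formulas you settle on, the contractual point stands: the definitions, the telemetry feed, and the thresholds all need to be agreed in the SOW, or the metrics are unenforceable.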
The risk landscape for AI is genuinely complex. There are at least 18 interconnected risks across four categories: behavioural (accuracy, bias, scope violations), security (sensitive data leakage, hacker abuse, vendor copyright issues), transparency (failure to disclose AI involvement, explainability gaps), and a catch-all of operational risks (energy waste, HR dependency, multi-agency complexity). This is why the framework of Trust, Risk & Security Management matters for architects. The five mandates are worth internalising: a Governance Framework that integrates AI into your existing enterprise risk taxonomy; Compliance and Accountability mechanisms with continuous monitoring and defined vendor responsibility; Human Oversight and Transparency requirements where vendors must disclose AI use on client data and high-risk outputs need human review; Cross-Functional Collaboration through fusion teams that include Legal, Risk, IT/Data, and Business; and Capability and Training Transfer, meaning your vendors should be contractually obligated to support change management and AI literacy, not just deliver code.
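The four categories above can be held as a simple machine-readable register, which makes it straightforward to require a mitigation owner per risk in vendor assessments. The lists here contain only the examples named in the text, not the full set of 18:

```python
# Risk register keyed by the four categories described above.
# Entries are the example risks from the text; a real register
# would extend each list and attach owners and mitigations.
AI_RISK_TAXONOMY = {
    "behavioural": ["accuracy", "bias", "scope_violations"],
    "security": ["sensitive_data_leakage", "hacker_abuse", "vendor_copyright"],
    "transparency": ["undisclosed_ai_involvement", "explainability_gaps"],
    "operational": ["energy_waste", "hr_dependency", "multi_agency_complexity"],
}

def unowned_risks(ownership: dict[str, str]) -> list[str]:
    """Return every risk in the taxonomy that has no assigned owner,
    i.e. a gap a governance review should flag."""
    return [risk
            for risks in AI_RISK_TAXONOMY.values()
            for risk in risks
            if risk not in ownership]
```

A register like this is trivially diffable across vendor assessments, which is precisely what integrating AI into an existing enterprise risk taxonomy requires.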
One case study is instructive here: a GenAI Centre of Excellence that delivers structured training covering everything from foundational model concepts through to the cost implications of token size and prompting decisions. The CIO’s point is sharp: anyone can download GenAI training materials, but leaders need to understand the cost impact of their AI decisions. That’s a FinOps discipline, not just a technology one.
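A back-of-envelope sketch of that FinOps point. The prices here are hypothetical placeholders, not any vendor’s actual rates; what matters is how prompt length and call volume multiply into a monthly bill:

```python
def monthly_token_cost(calls_per_day: int,
                       prompt_tokens: int,
                       completion_tokens: int,
                       price_in_per_m: float,
                       price_out_per_m: float,
                       days: int = 30) -> float:
    """Estimate monthly model spend from call volume, average token
    counts, and per-million-token prices (hypothetical values)."""
    tokens_in = calls_per_day * prompt_tokens * days
    tokens_out = calls_per_day * completion_tokens * days
    return (tokens_in / 1e6) * price_in_per_m + (tokens_out / 1e6) * price_out_per_m
```

Run the numbers once and the CIO’s point lands: halving an average prompt from 2,000 to 1,000 tokens halves the input-side bill with zero infrastructure work, which is why prompting decisions belong in leadership training.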
The final architectural shift is perhaps the most fundamental. The traditional shared services model, centralised, factory-like, building everything in-house, doesn’t scale for AI. You can’t be the bottleneck. Instead, shared services should provide the AI platform and the guardrails, then let the business build safely within those boundaries. The accountability model flips: “You build it, you pay for it.” Technical debt gets tied back to the owner, not absorbed centrally. This also means getting serious about shared decision rights and shared costs. Decisions about what to share versus what to keep sovereign need to be driven by compliance and data sovereignty requirements. Variable costs, which are inherent to consumption-based AI pricing, need FinOps discipline to manage, with a focus on recovering investment with demonstrable ROI.
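A minimal sketch of the “you build it, you pay for it” principle: metered consumption from the shared AI platform is attributed back to the owning team rather than absorbed centrally. The event shape here is my own assumption about what a platform metering feed might emit:

```python
from collections import defaultdict

def chargeback(usage_events: list[tuple[str, float]]) -> dict[str, float]:
    """Aggregate metered platform costs per owning team.

    usage_events: (owner_team, cost) pairs from the shared AI
    platform's metering.  Every unit of consumption lands on the
    team that built the workload, never on a central bucket.
    """
    bill: dict[str, float] = defaultdict(float)
    for owner, cost in usage_events:
        bill[owner] += cost
    return dict(bill)
```

Even this trivial aggregation changes behaviour: once teams see their own line item, technical debt and runaway inference costs have an owner with an incentive to fix them.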
When you do go to market, the procurement process itself needs to evolve. The traditional RFP-then-negotiate cycle is too slow and too rigid for AI partnerships. A sprint-based competitive co-design approach works better. Start with a long list and an initial RFS to short-list and onboard candidates. Then run a co-creation sprint with three to six suppliers to shape the deal collaboratively. Narrow the field to two for detailed co-creation, due diligence, and SOW development. The final sprint is competitive negotiation and contract signature. This is agile, structured, outcome-driven, and, critically, competitive throughout. You’re not just evaluating proposals on paper; you’re seeing how vendors actually work with your team before you commit. The RFP questions themselves should be outcome-driven: How does the business case demonstrate ROI? How is pricing structured across renewals, scaling, and managed services? How will governance manage risk? What frameworks exist for prioritising use cases? Can you show industry-specific examples with measurable outcomes?
One final caution: AI services costs are cumulative and easy to underestimate. Model development, data management, licensing, infrastructure, integration, and ongoing support all compound. The critical contract terms to watch are pricing mechanics and data/IP terms (high risk), XLAs/SLAs/KPIs and liability allocation (high risk), and exit and continuity clauses (moderate risk). The AI services market is projected to reach $1.11 trillion by 2029, with indirect services growing at a 49% CAGR. Application implementation alone is forecast at $350 billion (combining $160B direct and $190B indirect). The money is flowing, the question is whether it’s flowing towards outcomes or just towards activity.
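To make the compounding concrete, here is a simple total-cost-of-ownership sketch. The cost categories and growth rate are illustrative, not figures from the text; the structural point is that recurring, consumption-based costs grow on top of the one-off build:

```python
def ai_tco(one_off: dict[str, float],
           recurring_annual: dict[str, float],
           years: int,
           growth: float = 0.0) -> float:
    """Cumulative AI services cost over a horizon.

    one_off:          build-phase costs (e.g. model development, integration)
    recurring_annual: yearly costs (e.g. licensing, infrastructure, support)
    growth:           annual growth rate of recurring spend, since
                      consumption-based pricing rarely stays flat
    """
    total = sum(one_off.values())
    annual = sum(recurring_annual.values())
    for _ in range(years):
        total += annual
        annual *= 1 + growth  # consumption grows year on year
    return total
```

Under even modest growth assumptions the recurring tail overtakes the build cost within a few years, which is why the exit and continuity clauses flagged above deserve more attention than they usually get.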
If I had to distil this into a single actionable framework for fellow architects, it would be this. Source for value: categorise every AI initiative as Defend, Extend, or Upend, right-size your sourcing rigour accordingly, and stop buying AI generically. Govern for safety: operationalise the trust, risk and security framework, mandate cross-functional fusion teams, and require vendors to support change management and AI literacy as part of every engagement. Capture the AI dividend: shift from T&M to outcome-based contracts, demand agent efficiency metrics, and establish FinOps discipline before consumption-based costs spiral. The organisations that get this right won’t just be adopting AI. They’ll be architecting a fundamentally different relationship between technology, business outcomes, and vendor partnerships. And that’s exactly the kind of convergence zone where Enterprise Architects should be leading the conversation.