Learning to Build Agentic Apps with Azure AI Foundry
Building agentic applications with Azure AI Foundry can feel like stepping into a new world for a solution architect. The promise is huge: an entire ecosystem for creating, deploying, and managing AI agents at enterprise scale. But it requires rethinking how we design architectures, plan adoption, and integrate security and governance. Coming from a background of traditional solution design, I quickly realized that approaching this space with the right framework makes all the difference.
I began with Microsoft’s Cloud Adoption Framework, which breaks down the journey into familiar stages. Defining the strategy helped me clarify why the business wanted to adopt agentic AI in the first place and what value we expected. Planning translated those motivations into actionable steps, and preparing the environment with Azure landing zones gave me confidence that the foundations were solid. Adoption meant actually building and deploying workloads, and the final piece, securing them, was a reminder that AI systems must follow the same rigorous governance standards as any enterprise platform.
The next learning curve was understanding AI landing zones. These act as the enterprise-scale foundation for AI adoption and can be deployed with or without a broader platform landing zone. With a platform landing zone, services like networking and identity are centralized, offering scalability and compliance. Without one, you can start faster, but consistency suffers. As I came to see it, landing zones are the equivalent of a data center for AI agents, and they form the baseline that everything else plugs into.
Once the infrastructure was clear, I had to choose how to actually build agents. Azure AI Foundry makes it possible to experiment in multiple ways: low-code or no-code tools for fast prototyping, and pro-code environments with VS Code extensions, REST APIs, or the Semantic Kernel SDK for full customization. At first I leaned on low-code tools to get hands-on experience, then gradually moved into pro-code scenarios as integration needs and complexity grew. The key lesson was to start simple and deepen over time, carefully selecting models and balancing cost versus performance while deciding which tools the agent should integrate with, such as Azure AI Search, Bing grounding, or Logic Apps.
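To make the tool-integration idea concrete, here is a minimal, framework-agnostic sketch of an agent with a small tool belt. The tool names mirror the integrations mentioned above (search, grounding, workflows), but they are plain Python stubs I wrote for illustration, not Azure SDK or Foundry API calls:

```python
# Illustrative stand-ins for the integrations an agent might call.
# These are hypothetical stubs, not real Azure SDK functions.

def azure_ai_search(query: str) -> str:
    return f"[search results for: {query}]"

def bing_grounding(query: str) -> str:
    return f"[web grounding for: {query}]"

def logic_app(action: str) -> str:
    return f"[workflow triggered: {action}]"

# Registry mapping a keyword to the tool that should handle it.
TOOLS = {
    "search": azure_ai_search,
    "ground": bing_grounding,
    "workflow": logic_app,
}

def run_agent(instruction: str) -> str:
    """Pick the first registered tool whose keyword appears in the instruction."""
    for keyword, tool in TOOLS.items():
        if keyword in instruction.lower():
            return tool(instruction)
    return "no tool needed; answer directly"

print(run_agent("search for the onboarding policy"))
```

Starting with a simple registry like this made it easier for me to reason about when an agent should call out to a tool versus answer directly, before layering on a real SDK.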
Another critical design decision was whether to rely on single agents or adopt multi-agent systems. Single agents are predictable and easier to debug, making them a good starting point. Multi-agent setups, however, shine in dynamic or decomposable workloads where specialized agents collaborate, such as combining HR, IT, and compliance agents for employee onboarding. Semantic Kernel provides the orchestration layer for this coordination, allowing workflows to scale as complexity grows. The approach that worked for me was to start with single agents and only move to multi-agent orchestration once the use cases demanded it.
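The decomposition pattern behind multi-agent systems can be sketched in a few lines of plain Python. This is not Semantic Kernel's orchestration API; the agent names and routing keywords are illustrative, modeled on the onboarding example above:

```python
# Toy orchestrator that fans onboarding tasks out to specialized
# agents and aggregates their results. Plain Python stand-ins, not
# Foundry or Semantic Kernel APIs.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]

def hr_agent(task: str) -> str:
    return f"HR: created employee record for '{task}'"

def it_agent(task: str) -> str:
    return f"IT: provisioned accounts for '{task}'"

def compliance_agent(task: str) -> str:
    return f"Compliance: verified training for '{task}'"

# Keyword-based routing table: which specialist handles which task.
ROUTES = {
    "record": Agent("hr", hr_agent),
    "provision": Agent("it", it_agent),
    "training": Agent("compliance", compliance_agent),
}

def orchestrate(tasks: list[str]) -> list[str]:
    """Route each task to the first matching specialist agent."""
    results = []
    for task in tasks:
        for keyword, agent in ROUTES.items():
            if keyword in task:
                results.append(agent.handle(task))
                break
        else:
            results.append(f"unrouted: {task}")
    return results

print(orchestrate(["create employee record", "provision laptop", "assign training"]))
```

A single agent is the degenerate case of this table with one entry, which is why starting there and growing the routing table only when needed felt like the natural path.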
One of the biggest mindset shifts was recognizing that observability and evaluation are not optional. Unlike traditional apps where metrics are straightforward, agents can feel like black boxes unless you design for visibility. Azure AI Foundry’s traceability features log tool calls and agent interactions, while its evaluation metrics check groundedness, fluency, and relevance. Combined with AI safety tooling, these capabilities help ensure outputs remain safe, reliable, and aligned with organizational goals. For me, it was the equivalent of application performance monitoring in conventional systems: without visibility, improvement is impossible.
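The two halves of that visibility story, tracing tool calls and scoring outputs, can be sketched with a toy example. The trace decorator and the naive token-overlap "groundedness" score below are simplified stand-ins of my own, not Foundry's tracing or evaluation APIs:

```python
# Toy observability sketch: log every tool invocation, then score a
# response against its retrieved evidence. The metric is a naive
# word-overlap stand-in, not Foundry's groundedness evaluator.

import functools

TRACE: list[dict] = []

def traced(tool):
    """Record each tool call with its arguments and result."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        result = tool(*args, **kwargs)
        TRACE.append({"tool": tool.__name__, "args": args, "result": result})
        return result
    return wrapper

@traced
def search(query: str) -> str:
    # Hypothetical retrieval stub standing in for a real tool call.
    return "Azure AI Foundry supports tracing and evaluation."

def groundedness(response: str, sources: list[str]) -> float:
    """Fraction of response words that appear in the retrieved sources."""
    source_words = {w.lower() for s in sources for w in s.split()}
    words = response.lower().split()
    if not words:
        return 0.0
    return sum(w in source_words for w in words) / len(words)

evidence = search("foundry tracing")
score = groundedness("Foundry supports tracing", [evidence])
print(f"{len(TRACE)} tool call(s) traced, groundedness={score:.2f}")
```

Even a crude score like this surfaces when an agent drifts away from its evidence, which is exactly the black-box problem the paragraph above describes.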
Of course, none of this matters if the system isn’t secure. Foundry layers governance and security controls across the stack. Managed identities and Microsoft Entra ID authentication protect users, while prompt and content filters ensure responsible AI practices. Virtual networks, NSGs, and VPN gateways provide network security, and Defender for Cloud adds threat protection. Purview further enhances data governance and compliance. I realized that while agents may feel like futuristic AI entities, architecturally they must be treated as microservices that adhere to the same enterprise-grade security principles as any other system.
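The prompt-filtering layer is easiest to picture as a gate the request passes through before the agent ever sees it. The snippet below is a toy stand-in of my own, with a made-up denylist; the real Azure AI Content Safety filters are far more sophisticated:

```python
# Toy prompt filter gate: reject requests containing denied phrases
# before they reach the agent. A hypothetical sketch, not Azure AI
# Content Safety's API.

DENYLIST = {"dump all passwords", "disable logging"}

def filter_prompt(prompt: str) -> str:
    """Return the prompt unchanged, or raise if a denied phrase is present."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in DENYLIST):
        raise ValueError("prompt blocked by content filter")
    return prompt

print(filter_prompt("summarize the onboarding policy"))
```

Thinking of filters as just another gateway in the request path helped me slot responsible AI controls into the same mental model as any other middleware.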
Looking back on my early steps with Azure AI Foundry, several lessons stand out. Choose your building approach based on the maturity of your use case, whether no-code, low-code, or pro-code. Pick models and tools carefully, weighing cost against performance. Start small with single agents, and scale into multi-agent orchestration when the complexity justifies it. Bake in observability, evaluation, and responsible AI practices from day one. And finally, leverage AI landing zones for enterprise-ready deployments that bring security, scalability, and governance to the forefront.
For me as a solution architect, Azure AI Foundry has become more than just a platform for deploying language models. It is a bridge between experimentation and enterprise readiness, providing the frameworks, tools, and safeguards needed to build agentic applications responsibly. The journey can feel daunting at first, but with a structured approach and focus on architectural principles, agentic AI quickly becomes less of a mystery and more of the next natural step in modern system design.