[EPIC] Improve provision and deployment speed #7659

@kristenwomack

Description

Problem statement

azd up is the most important command in the Azure Developer CLI: it's the moment a developer goes from code to a running application on Azure. We want it to be even faster than it is today. Provisioning runs sequentially even when resources have no dependencies on each other, and service deployments wait in line instead of running in parallel. For new developers, this is the first impression of Azure. For experienced developers shipping daily, it's accumulated friction. Every second we shave off azd up is a second returned to every developer, every time they deploy.

Vision

azd up is noticeably, measurably faster: fast enough to demo with confidence, fast enough that developers feel the difference in their daily workflow. Parallel provisioning and deployment become the default, not the exception.

Personas

  • Azure developer (new): First-time azd up is the defining moment. A faster deploy means a faster path to "it works!" and a stronger first impression of Azure.
  • Azure developer (experienced): Ships multiple times a day. A 30-second improvement per deploy compounds into hours saved per month.
  • Platform engineer: Faster provisioning means faster CI/CD pipelines and shorter feedback loops for the entire team.

Goals (in scope)

  • Cut azd up end-to-end latency through parallel provisioning: resources without mutual dependencies provision concurrently
  • Ship parallel service deployment: multiple app services deploy simultaneously instead of sequentially
  • Investigate CLI cold-start time and extension startup time (specifically Python on macOS, a known bottleneck)
  • Establish performance baselines with telemetry so improvements are quantifiable and regressions are detectable
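The core idea behind parallel provisioning is to start each resource as soon as its own dependencies are satisfied, rather than following one global sequential order. A minimal sketch of that scheduling pattern, using only the Go standard library (the `resource` type, names, and the placeholder provisioning step are illustrative assumptions, not azd's actual internals):

```go
package main

import (
	"fmt"
	"sync"
)

// resource models a provisionable resource and the names of the
// resources it depends on. This is a hypothetical type for
// illustration, not azd's real provisioning model.
type resource struct {
	name string
	deps []string
}

// provisionAll starts every resource concurrently; each goroutine
// blocks only until its own dependencies have completed, so resources
// with no mutual dependencies provision in parallel. It returns the
// order in which resources finished.
func provisionAll(resources []resource) []string {
	// One "done" signal channel per resource, closed on completion.
	done := make(map[string]chan struct{}, len(resources))
	for _, r := range resources {
		done[r.name] = make(chan struct{})
	}

	var (
		mu    sync.Mutex
		order []string
		wg    sync.WaitGroup
	)

	for _, r := range resources {
		wg.Add(1)
		go func(r resource) {
			defer wg.Done()
			// Wait only on this resource's own dependencies,
			// not on every previously declared resource.
			for _, d := range r.deps {
				<-done[d]
			}
			// Placeholder for the real provisioning call
			// (e.g. an ARM/Bicep deployment request).
			mu.Lock()
			order = append(order, r.name)
			mu.Unlock()
			close(done[r.name])
		}(r)
	}
	wg.Wait()
	return order
}

func main() {
	// vnet and storage are independent and run concurrently;
	// appservice waits only for vnet.
	order := provisionAll([]resource{
		{name: "vnet"},
		{name: "storage"},
		{name: "appservice", deps: []string{"vnet"}},
	})
	fmt.Println(order)
}
```

In this sketch, `vnet` and `storage` provision concurrently while `appservice` begins the moment `vnet` finishes, which is the latency win the epic targets: total time approaches the longest dependency chain rather than the sum of all resources.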

Non-goals (out of scope)

  • Rewriting the provisioning engine; this is optimization of the existing code path, not a new architecture
  • Addressing infrastructure-side latency (ARM/Bicep deployment times); those are platform-level and outside azd's control
  • Optimizing azd deploy for scenarios that don't use azd up; focus is on the full lifecycle command first

Success criteria

  • Performance baselines established with telemetry for azd up across representative templates
  • Parallel provisioning implemented and shipping
  • Parallel service deployment implemented and shipping
  • CLI cold-start time reduced for Python/macOS extension startup
  • Measurable latency reduction demonstrated
  • No performance regressions introduced (validated via regression testing in June)

Dependencies

  • Telemetry infrastructure: Baselines require telemetry to be in place before optimization work can be validated
  • Cross-initiative impact: Faster provisioning benefits azd ai agent deploy (Initiative 3) and the azd init → azd up first-run experience (Initiative 2)
  • Existing PRs: Two PRs for provisioning/deployment speed are already completed and gathering data (as of April 10), and this epic builds on that foundation
