Before Week 1: What Has to Be True

The Claude implementation timeline described here assumes a mid-size enterprise deployment — between 200 and 2,000 users, integrating with 2–4 existing systems, deploying 1–3 primary use cases in the initial scope. Larger deployments may extend Phase 2 by four to six weeks. Smaller deployments — under 100 users, a single use case — may compress to eight weeks.

Before the formal 12-week engagement begins, three things need to be true. First, the platform decision must be made — typically Claude Enterprise for deployments of 100+ users requiring governance controls, or the Claude API for custom development use cases. If you are still evaluating, our Claude plan comparison guide can accelerate that decision. Second, there must be an identified executive sponsor with the authority to make governance decisions, and a named project owner available for 20–30% of their time during the implementation. Third, there must be at least one identified use case with clear success criteria. Deployments that begin without defined success criteria end with arguments about whether they succeeded.

This timeline reflects our enterprise implementation methodology — the same structure we use across financial services, professional services, healthcare, and technology clients. The phase structure and milestone cadence are consistent; the specific activities vary by industry and use case.

Phase 1 · Weeks 1–3

Discovery, Governance Design & Architecture

WEEK 1
Discovery Workshop & Stakeholder Mapping

Week 1 is the foundation: a two-day discovery workshop with the project owner, the IT lead, a representative from Legal/Compliance, and two or three subject matter experts from the primary use case area. The workshop outputs are: a confirmed use case scope and success criteria; a data classification inventory (what data Claude will have access to, how it is classified, and what the handling requirements are); an inventory of existing tools and systems Claude may need to connect with; and a preliminary governance framework.

Week 1 Deliverables

  • Use case definition document with success criteria
  • Data classification matrix
  • Stakeholder map and RACI
  • Risk log (initial)
  • Preliminary governance framework (1-pager)

⚠ Common delay: IT or Legal is not available in Week 1. This pushes governance design into Week 3 and compresses the architecture phase. Block these stakeholders' calendars before the engagement starts.

WEEK 2
Governance Framework & System Prompt Architecture Design

Week 2 is typically the most technically intensive week in Phase 1. The governance framework is finalised — data handling rules, acceptable use policy, escalation procedures, and (for regulated industries) regulatory compliance mapping. In parallel, the system prompt architecture is designed: how many user roles or personas need separate system prompts, what firm-specific context needs to be embedded, what boundaries need to be enforced. A system prompt design session with subject matter experts is usually three to four hours of structured work. The output is a specification, not the final prompt — that comes in Week 4.
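One lightweight way to capture the role-by-role specification is as structured data rather than prose. The sketch below is illustrative only — the field names, roles, and example values are assumptions, not a required format.

```python
from dataclasses import dataclass

@dataclass
class RolePromptSpec:
    """Illustrative shape for one role's system prompt specification."""
    role: str                 # e.g. "Claims Analyst" (hypothetical)
    firm_context: list[str]   # firm-specific facts to embed in the prompt
    allowed_tasks: list[str]  # what this role may ask Claude to do
    boundaries: list[str]     # hard limits the prompt must enforce
    tone: str = "professional"

# One entry per role or persona identified in the Week 2 design session.
spec = [
    RolePromptSpec(
        role="Claims Analyst",
        firm_context=["UK-regulated insurer", "FCA complaint-handling rules apply"],
        allowed_tasks=["summarise claim files", "draft customer letters"],
        boundaries=["never state a final coverage decision",
                    "no customer personal data in examples"],
    ),
]
```

A spec in this form makes the Week 4 build mechanical: each entry becomes one prompt draft plus a checklist for testing its boundaries.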

Week 2 Deliverables
  • Governance framework (final)
  • Acceptable use policy draft
  • System prompt specification (role-by-role)
  • Integration requirements document
  • Training programme outline

WEEK 3
Technical Architecture & Integration Planning

Week 3 finalises the technical architecture. For a Claude Enterprise deployment: SSO configuration spec, admin console setup plan, licence provisioning schedule. For API or agentic deployments: the full technical architecture document — API integration patterns, MCP server design for each required data connection, authentication and authorisation design, and the testing plan. Security review by the client's InfoSec team typically happens in Week 3 — schedule this proactively, as InfoSec calendars can add weeks of delay if not planned.
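The MCP server specifications can be recorded in a similarly structured form, which also gives InfoSec something concrete to review. All names, data sources, and scopes below are hypothetical examples, not recommendations.

```python
# Hypothetical MCP server entries for the architecture document.
mcp_specs = [
    {
        "name": "dms-search",                  # hypothetical server name
        "data_source": "document management system",
        "auth": "service account, read-only",
        "operations": ["search", "fetch"],     # no writes in initial scope
        "data_classification": "confidential",
    },
    {
        "name": "crm-lookup",
        "data_source": "CRM",
        "auth": "OAuth client credentials, read-only",
        "operations": ["lookup"],
        "data_classification": "internal",
    },
]

def write_operations(spec: dict) -> list[str]:
    """Return any operation outside the approved read-only set."""
    read_only = {"search", "fetch", "lookup", "list"}
    return [op for op in spec["operations"] if op not in read_only]

# Security review checklist item: the initial scope is read-only end to end.
assert all(not write_operations(s) for s in mcp_specs)
```

A check like this, run against the spec file in CI, keeps a write-capable integration from slipping into scope unreviewed.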

Week 3 Deliverables
  • Technical architecture document
  • MCP server specifications (if applicable)
  • Security review completed
  • Licence procurement initiated
  • Integration testing plan

⚠ Common delay: InfoSec review takes longer than planned. Typical InfoSec review of a Claude Enterprise deployment takes 5–10 business days. Submit the architecture document to InfoSec at the start of Week 3, not the end.

Phase 2 · Weeks 4–7

Configuration, Build & Testing

WEEK 4
System Prompt Development & Platform Configuration

Week 4 is system prompt engineering week. The specification from Week 2 is turned into tested, validated prompts through an iterative process: draft → test with representative inputs → evaluate output quality → refine. Each role profile typically requires 3–5 iterations to reach production quality. Platform configuration runs in parallel: SSO setup, admin console configuration, usage policy upload, and test account provisioning. By end of Week 4, the platform is configured and the system prompts have passed internal quality review.
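The draft → test → evaluate → refine loop can be sketched as below. `run_model` and `score_output` are stand-ins for your actual model call and quality rubric — the loop structure, not the scoring, is the point.

```python
def refine_prompt(draft, test_inputs, run_model, score_output,
                  target=0.9, max_iterations=5):
    """Week 4 iteration loop: draft -> test -> evaluate -> refine.

    run_model(prompt, user_input) -> str : stand-in for the model call
    score_output(user_input, output) -> float in [0, 1] : quality rubric
    """
    prompt = draft
    for iteration in range(1, max_iterations + 1):
        scores = [score_output(x, run_model(prompt, x)) for x in test_inputs]
        avg = sum(scores) / len(scores)
        if avg >= target:
            return prompt, iteration, avg   # production quality reached
        # In practice a prompt engineer revises the draft against the
        # failing outputs; here we only mark that another pass happened.
        prompt += f"\n# revision {iteration}"
    return prompt, max_iterations, avg
```

The 3–5 iteration figure above maps directly onto `max_iterations`; tracking the iteration count per role gives an early signal when a specification was under-defined in Week 2.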

Week 4 Deliverables

  • System prompts (all role profiles) — tested and approved
  • Platform configuration complete
  • Test accounts provisioned
  • System prompt version control established

WEEKS 5–6
Integration Build & Quality Testing

Weeks 5 and 6 are integration build for deployments with MCP connections or custom API work. MCP servers are developed, tested, and documented. Integration testing validates that Claude can access the required data sources correctly, that data handling rules are enforced, and that the system behaves correctly at edge cases. For simpler Claude Enterprise deployments without custom integration, these weeks focus on advanced prompt testing against real use case scenarios and the development of training materials.

Weeks 5–6 Deliverables
  • MCP servers built and tested (if applicable)
  • Integration test results documented
  • Training materials developed (slides, job aids, assessment)
  • Pilot cohort identified and scheduled
  • Incident response procedure documented

⚠ Common delay: Integration access credentials not available. API keys, service accounts, or read-only database credentials for the MCP integrations must be provisioned before Week 5. Assign an IT owner to this no later than Week 3.

WEEK 7
User Acceptance Testing (UAT)

Week 7 is UAT with a small group of five to ten end users from the target population. UAT tests the complete user experience: platform access, system prompt quality, integration functionality (if applicable), and the training materials. UAT outputs: a structured feedback form covering output quality, usability, and training adequacy; a list of required changes before pilot go-live; and a pilot readiness sign-off from the project owner. The most common UAT finding is system prompt gaps for edge case scenarios — build in buffer to iterate before the pilot.

Week 7 Deliverables
  • UAT completion report
  • Issue log with resolution status
  • System prompt revisions (post-UAT)
  • Pilot go-live sign-off

Phase 3 · Weeks 8–10

Pilot Deployment & Refinement

WEEK 8
Pilot Go-Live & Training

Pilot go-live with 10–20% of the target population, chosen to represent the range of roles and use cases. All pilot users complete the training programme before access is granted. A Slack or Teams channel is set up for real-time pilot support (we typically staff this with one implementation team member for the pilot duration). Usage dashboards go live; the first daily usage report is reviewed by the implementation team and project owner at end of Day 1.

Week 8 Deliverables

  • Pilot users trained and live
  • Usage monitoring active
  • Support channel operational
  • Day 1 usage report reviewed

WEEKS 9–10
Pilot Monitoring, Feedback & Iteration

Weeks 9 and 10 are structured data collection: daily usage review, twice-weekly structured feedback surveys from pilot users, and weekly review calls with the project owner. Expect 3–5 system prompt iterations during the pilot, driven by real usage patterns. Usage data at Day 14 of the pilot should show a daily active user rate above 50%; if it is below 40%, there is a change management issue that needs addressing before full go-live. The Claude change management guide covers the interventions that work.
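The Day-14 adoption check is simple arithmetic; the thresholds in the sketch below are the 50% and 40% figures stated above.

```python
def pilot_adoption_status(daily_active_users: int, pilot_users: int) -> str:
    """Classify Day-14 pilot adoption against the stated thresholds."""
    dau_rate = daily_active_users / pilot_users
    if dau_rate > 0.50:
        return "on track for full go-live"
    if dau_rate >= 0.40:
        return "monitor closely"
    return "change management intervention needed"

# e.g. 7 of 20 pilot users active on Day 14 is a 35% DAU rate
status = pilot_adoption_status(7, 20)  # -> intervention needed
```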

Weeks 9–10 Deliverables
  • Pilot usage analytics report
  • User feedback synthesis
  • System prompt v2 (post-pilot iterations)
  • Full go-live readiness assessment
  • Full rollout training schedule

⚠ Common issue: Low pilot DAU rate at Day 14 (below 40%). Root cause is almost always change management, not product. Interventions: executive sponsor visible endorsement, manager-level accountability for team adoption, peer success sharing (what's working, not just that it's available).

Phase 4 · Weeks 11–12

Full Go-Live & Measurement

WEEK 11
Full Rollout Training & Go-Live

Full rollout training for the remaining population is delivered in the first half of Week 11 (live sessions, segmented by role). Accounts are activated in batches as users complete training. A hypercare period of 72 hours follows full go-live: the support channel is staffed, usage is monitored every four hours, and minor system prompt adjustments are made within 24 hours if quality issues emerge at scale. Day 3 after full go-live is when the daily active user rate stabilises — this is the first meaningful adoption metric.

WEEK 12
Measurement, ROI Report & 90-Day Roadmap

Week 12 closes the initial engagement with three deliverables. First, a 30-day performance report: usage analytics (DAU rate, sessions per user, query patterns), adoption by department, and qualitative feedback themes. Second, an ROI measurement: actual time saved versus baseline, extrapolated to annual value, compared against the pre-deployment business case. Third, a 90-day roadmap: the next phase of capability expansion — new use cases, deeper integrations, agentic workflows, Claude Cowork expansion — with a prioritised sequence and rough effort estimates.
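The ROI extrapolation reduces to a few lines of arithmetic. The figures in the example call are illustrative assumptions (including the 46 working weeks per year), not benchmarks.

```python
def roi_report(users, hours_saved_per_user_per_week,
               loaded_hourly_cost, annual_programme_cost):
    """Extrapolate measured time savings to annual value.

    Assumes 46 working weeks/year -- replace with your own figure.
    """
    annual_value = (users * hours_saved_per_user_per_week
                    * 46 * loaded_hourly_cost)
    return {
        "annual_value": round(annual_value),
        "net_value": round(annual_value - annual_programme_cost),
        "roi_multiple": round(annual_value / annual_programme_cost, 1),
    }

# e.g. 400 users saving 2.5 h/week at a £60/h loaded cost,
# against £500,000 of annual programme cost (all hypothetical)
report = roi_report(400, 2.5, 60, 500_000)
```

The same function, run with measured rather than assumed savings, produces the business case comparison in the 30-day report.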

Week 12 Deliverables

  • 30-day performance report
  • ROI measurement and business case update
  • 90-day capability roadmap
  • Hypercare handover to internal support owner
  • Documentation package (system prompts, governance, architecture)

Want This Timeline Delivered for Your Organisation?

Our enterprise implementation service covers all 12 weeks — architecture, governance, training, and change management — with a fixed price and fixed timeline. We have done this across 50+ enterprise deployments.

Book a Free Strategy Call · See Our Case Studies

The Five Most Common Causes of Timeline Delay

Across 50+ enterprise Claude deployments, the same five issues cause the majority of timeline slippage. None of them are surprises; all of them are preventable.

1. InfoSec review not booked in advance. Security review is the most commonly delayed activity in the entire timeline. Budget 10 business days for InfoSec review and submit the architecture document on Day 1 of Week 3, not Day 5. If your InfoSec team has a formal review intake process, start it at the end of Week 2.

2. Integration credentials not provisioned. API keys, service accounts, and database read permissions must be in hand before integration build begins in Week 5. Assign a named IT owner to credential provisioning in Week 1. This is the single most common cause of Week 5–6 delays.

3. Subject matter experts not available for system prompt design. System prompt quality is directly determined by the quality of the domain expert input. If the relevant SMEs are not available for the 3–4 hour system prompt design session in Week 2, the prompts are built on assumptions rather than reality. Block SME time in Week 1.

4. Change management deprioritised. It is tempting to treat change management as something to address after go-live. Deployments that do this universally struggle with adoption. The Champions programme, executive sponsorship, and manager accountability should all be designed and launched before the pilot, not after. See our change management guide.

5. Scope creep during the pilot. Pilot phases often surface new use case ideas that stakeholders want to add before full go-live. Adding use cases mid-pilot extends the timeline, introduces untested functionality, and dilutes the quality of the core rollout. Log new use cases for the 90-day roadmap. Ship the original scope first.

What Happens After Week 12

Week 12 is the end of the initial implementation engagement, not the end of the deployment. Successful Claude deployments at 12 months look very different from the same deployment at 12 weeks. The additional capabilities that firms typically activate between Month 1 and Month 12 include: expanded use case coverage (Month 2–3), agentic workflows and Cowork deployment (Month 3–6), deeper system integrations (Month 4–6), custom prompt libraries and knowledge bases (Month 3–4), and measurement and optimisation cycles (ongoing).

For a view of the full deployment arc, read our Claude enterprise deployment playbook, which covers the POC-to-production journey in full. For specifics on the ROI trajectory, the Claude ROI calculator methodology shows how value compounds over 12–24 months as adoption deepens and use cases expand. If you are building the business case for your organisation, the 12-week timeline investment — approximately £120,000–£220,000 depending on complexity — typically delivers a payback period under four months. For regulated industries with complex governance requirements, the timeline may extend to 16 weeks and £250,000–£350,000, with correspondingly higher Year 1 ROI given the larger productivity impact of well-governed, deeply integrated deployments.
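The payback claim can be sanity-checked with the same arithmetic; the £50,000/month figure below is an assumption standing in for your own measured monthly value.

```python
def payback_months(implementation_cost: float, monthly_value: float) -> float:
    """Months until cumulative value covers the one-off implementation cost."""
    return implementation_cost / monthly_value

# Mid-range deployment: £170,000 one-off cost, hypothetical £50,000/month value
months = payback_months(170_000, 50_000)
assert months < 4  # consistent with the under-four-months claim
```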

Timeline at a Glance
  • Weeks 1–3: Discovery, governance design, technical architecture
  • Weeks 4–7: System prompt build, integration, UAT
  • Weeks 8–10: Pilot deployment, monitoring, iteration
  • Weeks 11–12: Full go-live, measurement, 90-day roadmap
  • Critical path items: InfoSec review (Week 3), integration credentials (Week 5), SME availability (Week 2)
  • Most common delay: Change management deprioritisation — address this before the pilot

ClaudeImplementation Team

Claude Certified Architects with 50+ enterprise deployments. This timeline is drawn from real implementation data, not theory. About our team →