CCA Foundations Practice Tests

Claude Certified Architect, Foundations

Validate your ability to make production tradeoffs across Claude Code, the Claude Agent SDK, the Claude API, and Model Context Protocol. Practice with scenario-based questions covering agentic architecture, tool design, prompt engineering, and reliability patterns drawn directly from the official exam guide.

  • Duration: N/A
  • Questions: N/A
  • Cost: N/A
  • Where to register: Anthropic

Issued by Anthropic and delivered as an online proctored exam through the Anthropic Skilljar training portal. Access is granted on request; check the official page for current availability and pricing.

01·Overview

Certification overview

The format, prerequisites, and what to expect on exam day.

Exam details
  • Exam Code

    CCA Foundations

  • Format

    Multiple choice (one correct, three distractors)

  • Structure

    Scenario-based; 4 scenarios drawn at random from a set of 6

  • Scoring

    Scaled score 100 to 1,000

  • Passing Score

    720

  • Result

    Pass or fail; no penalty for guessing

  • Issuer

    Anthropic, PBC

Prerequisites
  • Roughly 6+ months of hands-on experience building with the Claude API, Agent SDK, Claude Code, and MCP
  • Practical experience designing agentic loops, multi-agent orchestration, and subagent delegation
  • Familiarity with configuring Claude Code via CLAUDE.md, custom slash commands, skills, and MCP server integrations
  • Comfort with prompt engineering for structured output (JSON schemas, few-shot examples, extraction patterns)
  • Working understanding of context window management across long documents and multi-turn conversations
  • Experience integrating Claude into production workflows including CI/CD pipelines and error handling
02·Domains

Exam domains

Topics on the official blueprint, with their relative weight.

01
Agentic Architecture & Orchestration
27%
  • Agentic loop lifecycle: stop_reason inspection (tool_use vs end_turn) and tool result feedback (sketched in the code after this list)
  • Coordinator and subagent (hub-and-spoke) multi-agent patterns with isolated context
  • Subagent invocation via the Task tool, including parallel spawning and explicit context passing
  • AgentDefinition configuration: descriptions, system prompts, and tool restrictions per subagent
  • Multi-step workflows with programmatic enforcement (hooks, prerequisite gates) versus prompt-based ordering
  • Agent SDK hooks (PostToolUse) for tool-call interception, data normalization, and policy enforcement
  • Task decomposition: prompt chaining for predictable reviews vs adaptive decomposition for open-ended work
  • Session state, named resumption with --resume, and fork_session for divergent exploration branches
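
A minimal sketch of the agentic loop described in the first bullet above, using the Anthropic Python SDK. The tool name, its local implementation, and the model string are illustrative assumptions, not part of the exam material:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative tool definition; the name and schema are placeholders.
tools = [{
    "name": "get_order_status",
    "description": "Look up the current shipping status of an order by its ID.",
    "input_schema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}]

def run_tool(name, args):
    # Local dispatch for the illustrative tool above.
    if name == "get_order_status":
        return f"Order {args['order_id']} shipped on 2024-06-03."
    return f"Unknown tool: {name}"

messages = [{"role": "user", "content": "Where is order A-1001?"}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # model name is an assumption
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    # Drive the loop from stop_reason, never by parsing assistant text.
    if response.stop_reason != "tool_use":
        break  # end_turn (or max_tokens): Claude has finished
    messages.append({"role": "assistant", "content": response.content})
    tool_results = [
        {
            "type": "tool_result",
            "tool_use_id": block.id,
            "content": run_tool(block.name, block.input),
        }
        for block in response.content
        if block.type == "tool_use"
    ]
    messages.append({"role": "user", "content": tool_results})

print("".join(block.text for block in response.content if block.type == "text"))
```
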
02
Tool Design & MCP Integration
18%
  • Writing tool descriptions that differentiate purpose, inputs, outputs, and when to use each option
  • Splitting overlapping tools into purpose-specific interfaces with defined contracts
  • Structured MCP error responses: errorCategory, isRetryable, isError, and human-readable detail (see the sketch after this list)
  • Distinguishing transient, validation, business, and permission errors for recovery decisions
  • Distributing tools across agents to prevent over-broad tool sets that degrade selection reliability
  • tool_choice configuration: auto, any, and forced selection for guaranteed tool ordering
  • MCP server scoping: project-level (.mcp.json) vs user-level (~/.claude.json) and env-var expansion
  • MCP resources for exposing content catalogs (issue summaries, schemas) without exploratory tool calls
  • Selecting built-in tools (Read, Write, Edit, Bash, Grep, Glob) by task type and using Read+Write fallback
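
A sketch of the structured error payload pattern from the error-handling bullets above, expressed as a plain Python dict that a tool handler might return. isError mirrors MCP's standard failed-result flag; errorCategory, isRetryable, and detail are the application-level conventions named in the blueprint, not MCP spec fields:

```python
import json

def tool_error(category: str, retryable: bool, detail: str) -> dict:
    """Build a structured tool error payload.

    The calling agent can use errorCategory and isRetryable to decide whether
    to retry, fix its input, or escalate, instead of guessing from prose.
    """
    return {
        "isError": True,
        "content": [{
            "type": "text",
            "text": json.dumps({
                "errorCategory": category,  # "transient", "validation",
                                            # "business", or "permission"
                "isRetryable": retryable,
                "detail": detail,
            }),
        }],
    }

# A transient failure the agent may simply retry:
print(tool_error("transient", True, "Upstream CRM timed out after 5 seconds."))
# A permission failure that should be escalated, not retried:
print(tool_error("permission", False, "API key lacks the orders:read scope."))
```
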
03
Claude Code Configuration & Workflows
20%
  • CLAUDE.md hierarchy: user-level, project-level, and directory-level scope and override rules
  • @import syntax for modular CLAUDE.md and the .claude/rules/ directory for topic-specific guidance
  • Project-scoped vs user-scoped slash commands and the .claude/skills/ directory for SKILL.md files
  • Skill frontmatter: context: fork for isolated execution, allowed-tools, and argument-hint
  • Path-scoped rules with YAML paths globs for conditional convention loading across the codebase
  • Plan mode for architectural decisions vs direct execution for well-scoped changes
  • Iterative refinement: input/output examples, test-driven iteration, and the interview pattern
  • Running Claude Code in CI with -p, --output-format json, and --json-schema for structured findings
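
A sketch of the headless CI invocation in the last bullet, wrapping Claude Code with Python's subprocess module. The prompt, the BLOCKER convention, and the shape of the JSON envelope are assumptions; check the current Claude Code documentation for the exact output fields:

```python
import json
import subprocess

# Run Claude Code in print mode (-p) and capture its JSON envelope.
result = subprocess.run(
    [
        "claude",
        "-p", "Review the staged diff for security issues; prefix blockers with BLOCKER.",
        "--output-format", "json",
    ],
    capture_output=True,
    text=True,
    check=True,
)

payload = json.loads(result.stdout)
findings = payload.get("result", "")  # field name assumed from the JSON output format
print(findings)

# Fail the pipeline on the illustrative BLOCKER convention requested in the prompt.
if "BLOCKER" in findings:
    raise SystemExit(1)
```
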
04
Prompt Engineering & Structured Output
20%
  • Writing explicit, categorical review criteria instead of vague confidence-based filtering
  • Few-shot prompting (typically 2 to 4 examples) for ambiguous cases and consistent output formats
  • Tool use with strict JSON schemas to eliminate syntax errors in structured output (illustrated after this list)
  • Schema design: required vs optional fields, enum + "other" detail strings for extensibility
  • Validation, retry-with-error-feedback, and recognizing when retries cannot succeed
  • detected_pattern fields and self-correction flows (calculated_total vs stated_total)
  • Message Batches API tradeoffs: 50% cost savings, 24-hour window, no multi-turn tool calling
  • Multi-instance and multi-pass review architectures: independent reviewers and per-file local passes
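
A sketch of schema-enforced extraction via tool use, combining required vs optional fields, an enum with an "other" escape hatch, and a forced tool_choice so every reply is a schema-conforming tool call. The tool name, fields, and model string are illustrative:

```python
import anthropic

client = anthropic.Anthropic()

extraction_tool = {
    "name": "record_invoice",
    "description": "Record the structured fields extracted from one invoice.",
    "input_schema": {
        "type": "object",
        "properties": {
            "vendor_name": {"type": "string"},
            "total": {"type": "number"},
            "currency": {"type": "string", "enum": ["USD", "EUR", "GBP", "other"]},
            "currency_detail": {
                "type": "string",
                "description": "Only needed when currency is 'other'.",
            },
            "purchase_order": {"type": "string"},  # optional: not listed in required
        },
        "required": ["vendor_name", "total", "currency"],
    },
}

response = client.messages.create(
    model="claude-sonnet-4-5",  # model name is an assumption
    max_tokens=1024,
    tools=[extraction_tool],
    # Forced selection: the model must call this tool rather than reply in prose.
    tool_choice={"type": "tool", "name": "record_invoice"},
    messages=[{
        "role": "user",
        "content": "Invoice: ACME GmbH, total 1,240.00 EUR, PO 7781.",
    }],
)

extracted = next(b.input for b in response.content if b.type == "tool_use")
print(extracted)
```
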
05
Context Management & Reliability
15%
  • Progressive summarization risks and the "lost in the middle" effect on long inputs
  • Persistent "case facts" blocks that survive summarization for multi-turn customer flows
  • Trimming verbose tool outputs and structuring high-signal summaries at the top of context
  • Escalation triggers: explicit human requests, policy gaps, and inability to make progress
  • Why sentiment-based and self-reported confidence are unreliable proxies for case complexity
  • Structured error context across multi-agent systems: failure type, attempted query, partial results
  • Distinguishing access failures from valid empty results to enable correct coordinator recovery
  • Scratchpad files, structured state exports, and /compact for long codebase exploration
  • Stratified accuracy validation by document type and field-level confidence calibration
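
A sketch of the persistent case-facts pattern from the second bullet above: pin authoritative facts in the system prompt so they survive when older turns are summarized. The field names and the summarize() stub are illustrative:

```python
MAX_HISTORY_TURNS = 8

case_facts = {
    "customer_id": "C-4412",
    "order_id": "A-1001",
    "issue": "package marked delivered but never received",
    "refund_promised": None,
}

def summarize(turns):
    # Placeholder: in practice this would be a separate summarization call.
    return f"[Summary of {len(turns)} earlier turns]"

def build_request(history, latest_user_message):
    """Return (system_prompt, messages) with case facts pinned at the top."""
    system_prompt = (
        "You are a customer support agent.\n\n"
        "CASE FACTS (authoritative; never drop or alter):\n"
        + "\n".join(f"- {key}: {value}" for key, value in case_facts.items())
    )
    if len(history) > MAX_HISTORY_TURNS:
        # Summarize the oldest turns, keep the most recent ones verbatim.
        old, recent = history[:-MAX_HISTORY_TURNS], history[-MAX_HISTORY_TURNS:]
        history = [
            {"role": "user", "content": summarize(old)},
            {"role": "assistant", "content": "Noted."},
        ] + recent
    return system_prompt, history + [{"role": "user", "content": latest_user_message}]

system, messages = build_request([], "The replacement package never arrived either.")
print(system)
```
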
03·Key topics

What you actually study

Service families and concept clusters that show up across questions.

Claude Agent SDK

  • Agentic loop control flow driven by stop_reason
  • Coordinator and subagent orchestration with the Task tool
  • AgentDefinition: descriptions, system prompts, tool scoping
  • PostToolUse hooks for interception and normalization
  • fork_session and named --resume session management

Model Context Protocol (MCP)

  • Tool description quality and disambiguation
  • Structured error responses with retry metadata
  • Project (.mcp.json) vs user (~/.claude.json) server scoping
  • Environment variable expansion for credentials
  • MCP resources for content catalogs and schemas

Claude Code Configuration

  • CLAUDE.md hierarchy and @import composition
  • Project and user slash commands in .claude/commands/
  • Skills in .claude/skills/ with SKILL.md frontmatter
  • Path-scoped rules in .claude/rules/ with glob patterns
  • Plan mode, direct execution, and the Explore subagent

Prompt Engineering & Schemas

  • Categorical review criteria over vague confidence rules
  • Few-shot examples for ambiguity and format consistency
  • tool_use with JSON schemas for guaranteed structure
  • tool_choice options: auto, any, forced selection
  • Retry-with-error-feedback and self-correction patterns
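
A sketch of the retry-with-error-feedback pattern in the last bullet. Here call_claude stands in for a Messages API call that returns raw model text, and the required-field rule is illustrative:

```python
import json

REQUIRED_FIELDS = {"vendor_name", "total", "currency"}

def validate(raw_text):
    data = json.loads(raw_text)  # raises on invalid JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return data

def extract_with_retries(call_claude, messages, max_attempts=3):
    for _ in range(max_attempts):
        raw = call_claude(messages)
        try:
            return validate(raw)
        except (json.JSONDecodeError, ValueError) as err:
            # Feed the concrete validation error back instead of retrying blindly.
            messages = messages + [
                {"role": "assistant", "content": raw},
                {"role": "user", "content": (
                    f"Your output failed validation: {err}. Return corrected JSON only."
                )},
            ]
    # Some failures cannot be fixed by retrying (the data simply is not there).
    raise RuntimeError("extraction failed after retries; escalate for review")
```
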

Reliability & Error Handling

  • Programmatic enforcement vs prompt-based ordering
  • Transient, validation, business, and permission errors
  • Local recovery in subagents vs propagation to coordinators
  • Structured handoff summaries for human escalation
  • Independent review instances over self-review
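
An illustrative shape for the structured handoff a failing subagent might return to its coordinator, combining the failure-type and partial-results ideas above. The field names are assumptions, not a fixed SDK contract:

```python
# Returned instead of a bare apology string, so the coordinator can decide
# between retrying, rerouting, or escalating to a human.
handoff = {
    "status": "failed",
    "failure_type": "permission",  # vs "transient", "validation", "business"
    "attempted_query": "SELECT total FROM billing.invoices WHERE region = 'EU'",
    "partial_results": {"regions_completed": ["US", "APAC"]},
    "is_retryable": False,
    "recommended_action": "Escalate to a human with billing read access.",
}
print(handoff["recommended_action"])
```
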

Production Patterns

  • CI integration with -p, --output-format json, --json-schema
  • Message Batches API for non-blocking workloads
  • Multi-pass review (per-file local + cross-file integration)
  • Stratified accuracy sampling for high-confidence extractions
  • Field-level confidence calibration with labeled validation sets
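
A sketch of submitting independent review jobs through the Message Batches API for the non-blocking workload above, using the Anthropic Python SDK. The file contents and model string are illustrative, and the exact batches method path may differ across SDK versions:

```python
import anthropic

client = anthropic.Anthropic()

# Illustrative per-file review jobs; in CI these would be read from the repo.
files = {
    "src/auth.py": "def login(user): ...",
    "src/billing.py": "def charge(order): ...",
}

batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"review-{i}",
            "params": {
                "model": "claude-sonnet-4-5",  # model name is an assumption
                "max_tokens": 1024,
                "messages": [{
                    "role": "user",
                    "content": f"Review this code from {path} for error-handling gaps:\n\n{code}",
                }],
            },
        }
        for i, (path, code) in enumerate(files.items())
    ]
)
print(batch.id, batch.processing_status)  # poll later; results arrive within 24 hours
```
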
04·Study tips

How to actually pass it

Practical strategies for the weeks before and the morning of.

Preparation strategy
  • Build at least one end-to-end project with the Claude Agent SDK that includes a coordinator, two or more subagents, MCP tools, and at least one PostToolUse hook before sitting the exam.
  • Practice every scenario in the official guide hands-on: customer support agent, Claude Code automation, multi-agent research, developer productivity, CI/CD review, and structured data extraction.
  • Internalize the difference between programmatic enforcement (hooks, prerequisite gates) and prompt-based guidance; many questions hinge on choosing deterministic over probabilistic compliance.
  • Memorize the Claude Code configuration hierarchy and what lives where: CLAUDE.md vs .claude/commands/ vs .claude/skills/ vs .claude/rules/ with path globs.
  • Drill the agentic loop mental model: send to Claude, inspect stop_reason, execute tools, append results, iterate until end_turn. Avoid anti-patterns like parsing assistant text to detect completion.
  • Practice JSON schema design with tool use including required vs optional fields, enums with "other" + detail strings, and conflict_detected style booleans for self-correction.
  • Time yourself working through scenario-style multiple-choice questions; the exam draws four scenarios at random from six, so breadth across all scenarios matters.
Exam day
  • Read scenario context fully before the questions; later questions in a scenario often build on architectural assumptions established earlier.
  • When two answers feel close, prefer the option that addresses the root cause with the least over-engineering. The exam consistently penalizes infrastructure-heavy answers when a simpler prompt or tool fix exists.
  • When a question involves financial, identity, or compliance operations, lean toward programmatic enforcement (hooks, prerequisite gates) over prompt instructions.
  • For multi-agent questions, look for the answer that gives the coordinator the most actionable information (structured error context, partial results) rather than ones that hide failures.
  • Watch for "first step" wording: the right answer is usually the lowest-leverage fix that targets the actual root cause, not the most architecturally ambitious change.
  • Answer every question; unanswered questions count as incorrect and there is no guessing penalty.
  • The passing score is 720 on a scaled range of 100 to 1,000; you do not need a perfect performance, so flag hard questions and move on rather than burning time.

Validate your Claude architecture skills.

Practice with scenario-based questions aligned to the official exam blueprint. Start free, no card required.
