Claude Cowork ATS integration is the step most recruiting teams skip, and it is why their AI workflows break down after the initial enthusiasm. Cowork running in isolation, disconnected from Greenhouse, Lever, or Workday, creates a parallel workflow that recruiters have to manage manually. Data lives in two places. Screening outputs do not flow back into the ATS. Candidates fall through gaps. The AI saves time on one task and creates overhead everywhere else.

The better architecture connects Cowork directly to your ATS data layer, so screening outputs write back into candidate records, status changes in the ATS trigger Cowork workflows, and your recruiting analytics capture the full picture. This is not a theoretical possibility. Greenhouse and Lever both offer robust APIs, Workday has SOAP and REST services, and Claude Cowork connects to all of them through MCP server configurations that our Claude Cowork deployment service has built and tested across dozens of enterprise recruiting teams.

This guide covers the integration architecture for each major ATS platform, the MCP server setup required, practical workflows that run on top of the integration, and the governance controls your IT and security teams will require before approving the connection. If you want to understand the screening workflow itself before worrying about integration, start with our guide to Claude Cowork for candidate screening.

Claude Cowork ATS Integration Architecture

The integration between Claude Cowork and an ATS operates through one of three patterns, depending on your ATS platform and your IT environment. Understanding which pattern applies to your setup determines how the integration is built and what permissions are required.

Pattern 1: File-Based Integration

The most broadly compatible approach. Your ATS exports applications to a shared folder (SharePoint, Google Drive, or an SFTP location) on a scheduled basis. Cowork reads from that folder, processes applications, and writes structured output back to a designated results folder. A secondary process, either manual or automated, pushes those results back into the ATS as notes or status updates. This works with any ATS that has export functionality and requires no API access to Cowork's environment.
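
A minimal sketch of the folder-processing step, assuming the ATS exports one JSON file per application with a `candidate_name` field. The filenames, field names, and "screened" status are illustrative; in a real deployment Cowork performs the evaluation before the result file is written:

```python
import json
from pathlib import Path

def process_export_folder(inbox: Path, outbox: Path) -> list:
    """Read each exported application JSON from the inbox, write a
    screening result stub to the outbox, and return the candidate
    names handled. The evaluation itself is a placeholder here."""
    handled = []
    for export in sorted(inbox.glob("*.json")):
        record = json.loads(export.read_text())
        result = {
            "candidate": record["candidate_name"],
            "source_file": export.name,
            "status": "screened",  # placeholder for Cowork's assessment
        }
        (outbox / f"{export.stem}_result.json").write_text(json.dumps(result))
        handled.append(record["candidate_name"])
    return handled
```

The secondary push-back process then reads the results folder and posts each file into the ATS as a note or status update.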

Pattern 2: API-Based Integration via MCP

Claude Cowork connects to MCP (Model Context Protocol) servers that expose ATS API endpoints as callable tools. When Cowork needs to retrieve applications for a role, it calls the MCP server, which in turn calls the ATS API and returns the data. When Cowork produces a screening output, it calls the MCP server again to write that output back into the ATS as a candidate note or stage change. This is the higher-fidelity pattern and the one we recommend for teams processing more than 50 applications per week. For more on MCP architecture, see our MCP Protocol guide.
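
As an illustration of the tool-call pattern (not the real MCP SDK), an MCP server can be thought of as a registry mapping tool names to handlers that each wrap one ATS API call. The tool name and stubbed handler below are hypothetical:

```python
from typing import Callable

# Registry of callable tools the MCP server exposes to Cowork.
TOOLS: dict = {}

def tool(name: str) -> Callable:
    """Decorator that registers a handler under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("greenhouse.list_applications")
def list_applications(job_id: int) -> dict:
    # In production this would call the Harvest API; stubbed here.
    return {"job_id": job_id, "applications": []}

def dispatch(name: str, **kwargs) -> dict:
    """Cowork's tool call arrives as (name, arguments); route it."""
    return TOOLS[name](**kwargs)
```

The same dispatch shape handles the write-back direction: a `write_scorecard` tool wraps the corresponding ATS write endpoint.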

Pattern 3: Webhook-Triggered Workflows

Your ATS sends a webhook event to a Cowork trigger endpoint when a new application is received or a stage changes. Cowork processes the event and returns a structured payload that the ATS ingests. This pattern requires the most engineering work but delivers the lowest latency and the tightest integration. Suitable for high-volume recruiting operations where real-time screening matters.
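
The handler's core decision can be sketched as a pure function over the webhook body. The event name and payload shape below are hypothetical, since each ATS defines its own webhook schema:

```python
import json

def handle_ats_webhook(raw_body: bytes) -> dict:
    """Parse an ATS webhook event and decide what Cowork should do.
    'application_created' and the payload keys are illustrative."""
    event = json.loads(raw_body)
    if event.get("action") == "application_created":
        return {
            "trigger": "screen_candidate",
            "application_id": event["payload"]["application_id"],
        }
    return {"trigger": "ignore"}
```

Keeping the decision logic pure like this makes it easy to test the routing separately from the HTTP endpoint that receives the event.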

Greenhouse Integration with Claude Cowork

Greenhouse has one of the most mature APIs in the ATS market. The Harvest API provides full read and write access to applications, candidates, jobs, and stages. The Ingestion API allows external sources to post applications directly. Both are well-documented and straightforward to connect through an MCP server.

Setting Up the Greenhouse MCP Connection

To connect Cowork to Greenhouse, you will need a Harvest API key with the relevant permissions (applications:read, candidates:read, scorecards:write at minimum). Your IT team provisions this key in Greenhouse under Configure > Dev Center > API Credential Management. The MCP server configuration maps these endpoints to Cowork tool calls. Once configured, Cowork can retrieve all applications for a given job ID, read CV attachments, and write scorecard notes back to each candidate record.
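
Harvest authenticates with HTTP Basic auth, using the API key as the username and an empty password. A minimal request builder the MCP server might use (the `job_id` filter on the applications listing follows Harvest's documented query parameters; verify against current docs):

```python
import base64
import urllib.request

HARVEST_BASE = "https://harvest.greenhouse.io/v1"

def harvest_request(api_key: str, path: str) -> urllib.request.Request:
    """Build an authenticated Harvest API request. The key goes in the
    Basic auth username position with an empty password."""
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    req = urllib.request.Request(f"{HARVEST_BASE}{path}")
    req.add_header("Authorization", f"Basic {token}")
    return req

# e.g. harvest_request(key, "/applications?job_id=12345")
```

Write operations (scorecard notes, stage changes) use the same auth header against the corresponding Harvest endpoints, with the `On-Behalf-Of` user ID your Greenhouse admin designates.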

Automated Greenhouse Workflows

With the MCP connection in place, a recruiter can open a Cowork session, type "Screen all new applications for job ID 12345 against the role spec in the finance-director-spec.pdf file," and receive a ranked shortlist that simultaneously writes structured notes to each candidate's Greenhouse profile. The recruiter sees the shortlist in Cowork and the individual assessments in Greenhouse without any copy-paste. Stage changes can be triggered from the same session.

Need help building the Greenhouse or Lever MCP connection?

Our team has built ATS integrations across Greenhouse, Lever, Workday, and SmartRecruiters. We can have your Cowork integration live in under three weeks.

Book a Free Strategy Call

Lever Integration with Claude Cowork

Lever's API is REST-based and well-suited to Cowork integration. The key endpoints for recruiting workflows are the Opportunities endpoint (applications, candidates, and pipeline stages in Lever's terminology), the Resumes endpoint (CV file retrieval), and the Notes endpoint (for writing screening output back to candidate records).

Lever Authentication and Permissions

Lever uses OAuth 2.0 for API authentication. Your Lever admin creates an API credential with the required scopes: opportunities:read, resumes:read, notes:write, and stage_changes:write if you want Cowork to advance candidates in the pipeline automatically. The MCP server handles the OAuth token refresh cycle so credentials remain valid without recruiter intervention.
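
A sketch of the refresh cycle the MCP server manages: the standard OAuth 2.0 `refresh_token` form body, plus a small cache that renews slightly before expiry. The class and parameter names are our own; the token endpoint host is tenant configuration, so consult Lever's docs for the exact URL:

```python
import time

def build_refresh_request(client_id, client_secret, refresh_token) -> dict:
    """Form body for an OAuth 2.0 refresh_token grant."""
    return {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }

class TokenCache:
    """Refresh slightly before expiry so recruiter sessions never hit a 401."""
    def __init__(self, skew: int = 60):
        self.token = None
        self.expires_at = 0.0
        self.skew = skew

    def needs_refresh(self, now=None) -> bool:
        return (now if now is not None else time.time()) >= self.expires_at - self.skew

    def store(self, token: str, expires_in: int, now=None):
        self.token = token
        self.expires_at = (now if now is not None else time.time()) + expires_in
```

The MCP server checks `needs_refresh()` before each tool call and swaps the token transparently, which is what keeps credentials valid without recruiter intervention.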

Lever-Specific Workflow Considerations

Lever's data model treats the opportunity (a specific person applying for a specific role) as the primary record rather than the candidate (the person across all applications). Cowork MCP configurations for Lever need to be written with this distinction in mind. When searching for all applications to a role, you query opportunities filtered by posting ID. When writing notes, you write to the opportunity, not the contact record. Getting this model right at setup prevents data hygiene issues downstream.
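
The distinction shows up directly in the URLs the MCP tools construct: both helpers below target the opportunity record, never the contact. The paths follow Lever's public API shape, but verify them against current documentation:

```python
LEVER_BASE = "https://api.lever.co/v1"

def opportunities_for_posting(posting_id: str) -> str:
    """List applications to a role: query opportunities filtered by
    posting, not the contact records."""
    return f"{LEVER_BASE}/opportunities?posting_id={posting_id}"

def note_url(opportunity_id: str) -> str:
    """Screening notes attach to the opportunity, not the contact."""
    return f"{LEVER_BASE}/opportunities/{opportunity_id}/notes"
```

Encoding this rule in the MCP tool layer, rather than trusting each prompt to get it right, is what prevents notes from landing on the wrong record.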

Workday Recruiting Integration

Workday is the most complex of the three platforms to integrate with Cowork. The Workday Recruiting module uses SOAP-based web services (with a REST API available for newer deployments) and requires more significant IT involvement to set up and maintain. For enterprises already running Workday as their HRIS, the integration payoff is high because Cowork can connect candidate screening directly to position management and compensation data.

Workday Integration Approach

Most enterprises connect Cowork to Workday through an integration middleware layer rather than direct API calls. If your organisation runs MuleSoft, Boomi, or Workato, the cleaner architecture is to build the Workday connector in your existing middleware platform and expose a standardised REST interface to the Cowork MCP server. This keeps Workday integration complexity contained in a layer your IT team already manages, and Cowork connects to a clean, documented interface rather than raw Workday web services.
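
The adapter's job can be sketched as a flattening step: turn a nested Workday-style response into the clean JSON shape the MCP server consumes. The field paths below are illustrative, not actual Workday element names:

```python
def normalise_workday_candidate(soap_record: dict) -> dict:
    """Flatten a nested Workday-style candidate record into the flat
    JSON the Cowork MCP server expects. Key names are hypothetical."""
    data = soap_record["Candidate_Data"]
    return {
        "candidate_name": data["Name_Data"]["Formatted_Name"],
        "requisition_id": data["Job_Requisition_Reference"]["ID"],
        "stage": data["Recruiting_Stage"],
    }
```

Whether this runs in MuleSoft, Boomi, Workato, or a small Lambda, the point is the same: Workday's nesting stays behind the adapter, and the MCP server only ever sees the flat shape.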

Workday Data Fields to Map

When configuring the Workday integration, the fields most relevant to Cowork screening workflows are: Candidate Name, Application Date, Requisition ID, Position Title, Department, Application Stage, Resume Attachment URL, and Questionnaire Responses. Map these fields in your integration layer before attempting to run Cowork screening workflows, or the prompt templates will reference fields that do not resolve correctly.
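
A simple guard in the integration layer can enforce this before any screening run. The field list mirrors the one above; the function name is our own:

```python
REQUIRED_FIELDS = [
    "Candidate Name", "Application Date", "Requisition ID",
    "Position Title", "Department", "Application Stage",
    "Resume Attachment URL", "Questionnaire Responses",
]

def missing_fields(mapped: dict) -> list:
    """Return the required fields the integration layer has not mapped,
    so screening runs fail fast instead of prompts referencing fields
    that do not resolve."""
    return [f for f in REQUIRED_FIELDS if mapped.get(f) is None]
```

Run this check as a pre-flight step whenever the integration configuration changes.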

Prompt Templates for ATS-Integrated Claude Cowork Workflows

These prompts assume the ATS integration is configured and Cowork can access candidate data through MCP tools. They reference ATS-specific data structures rather than file folders.

ATS SCREENING TRIGGER PROMPT: GREENHOUSE

Using the Greenhouse API connection, retrieve all applications with status "Active"
for job ID [JOB_ID] that have not yet received a screening scorecard.

For each candidate:
1. Retrieve their CV using the attachments endpoint
2. Evaluate against the role specification in [ROLE_SPEC_FILE]
3. Generate a structured scorecard with:
   - Overall recommendation: Advance / Hold / Decline
   - Score against each required criterion (1-5)
   - One paragraph summary
   - Specific notes on gaps or strengths
4. Write the scorecard to the candidate's Greenhouse profile using the
   scorecards API endpoint
5. Output a summary table of all candidates processed this session

Flag any candidates where the CV attachment is missing or unreadable.

LEVER PIPELINE REVIEW PROMPT

Using the Lever API connection, retrieve all opportunities at the
"Application Review" stage for posting ID [POSTING_ID].

Compare all candidates at this stage:
1. Score each against the role requirements in [ROLE_SPEC_FILE]
2. Rank candidates from strongest to weakest fit
3. For the top 5 candidates, generate detailed notes and write them
   to the opportunity record via the notes endpoint
4. Identify any candidates who should be declined at this stage
   and prepare decline note text for review

Output the ranked list with scores and a brief justification for each ranking.
Do not advance or decline candidates automatically; present recommendations
for recruiter review.

WORKDAY SCREENING SUMMARY PROMPT

Using the Workday integration API, retrieve all active candidates for
requisition [REQ_ID] submitted in the last [N] days.

Generate a screening summary report:
1. Total applications received
2. Applications meeting all hard requirements
3. Applications meeting some but not all requirements (detail which gaps)
4. Applications not meeting hard requirements
5. Top 10 recommended candidates with brief justification
6. Candidates flagged for unusual patterns (e.g., very short tenures, gaps)

Format the output as a structured report suitable for sharing with
the hiring manager. Do not include personally identifiable information
beyond candidate name and current role.

Key Takeaways

  • Claude Cowork ATS integration runs through three patterns: file-based, API via MCP, or webhook-triggered. Most enterprise teams start with MCP-based integration
  • Greenhouse and Lever have mature REST APIs that connect cleanly to Cowork MCP servers. Workday typically requires an integration middleware layer
  • The integration eliminates manual copy-paste between Cowork screening outputs and ATS candidate records, saving 30 to 60 minutes per role
  • Lever's opportunity-centric data model requires specific MCP configuration to avoid data hygiene issues
  • All integrations should be reviewed by your IT security team before going live. Our deployment service includes an integration security review as standard

Frequently Asked Questions

Does Claude Cowork integrate with SmartRecruiters or iCIMS?
SmartRecruiters and iCIMS both have REST APIs that can be connected to Cowork through MCP server configurations. The integration pattern is the same as Greenhouse's; only the endpoint mapping differs by platform. We have built MCP connectors for both and can deploy them as part of a Cowork implementation engagement.
How long does it take to build a Greenhouse or Lever integration?
For a standard MCP-based integration with Greenhouse or Lever, expect two to three weeks from API credential provisioning to go-live. This includes MCP server build, testing with real application data (using anonymised records), recruiter training, and a two-week supervised run period. Workday integrations take longer due to middleware configuration requirements.
Will Cowork automatically advance or decline candidates in our ATS?
Only if you configure it to do so, and we strongly recommend against fully automated stage changes in early deployment. The standard configuration has Cowork write screening notes and recommendations to ATS records, with a recruiter reviewing and approving stage changes manually. Automated declines can be configured for candidates who clearly fail hard requirements (e.g., missing a legally required qualification) after a probationary period where you validate the automation's accuracy.
What security review does our IT team need to conduct?
Your IT team should review the ATS API credentials being provisioned (scope of access, rotation schedule), the MCP server hosting environment (network access, logging, secrets management), and the data flow path (whether CV data leaves your cloud environment). Claude Enterprise does not train on your data and supports data residency controls. Our Claude security and governance documentation covers the full security posture of a Cowork deployment.
Can we use Cowork ATS integration for internal mobility programmes?
Yes, and internal mobility is one of the more compelling use cases. Internal candidates often have unstructured career histories that are hard to evaluate against a new role spec. Cowork can read an employee's complete HR record (with appropriate permissions) alongside their ATS application and generate a detailed fit assessment that accounts for internal context, such as project work, performance data, and skills assessments, that external candidates cannot provide.
Does the integration work with Workday on older SOAP-based configurations?
Yes. Older Workday tenants on SOAP web services are supported through a middleware layer. We typically configure MuleSoft or a lightweight Lambda function as an adapter that translates Workday SOAP responses into clean REST JSON for the Cowork MCP server. This adds an architecture component but is well-established and does not significantly increase latency for screening workflows.

Your ATS and Claude Cowork Should Be Talking. They Probably Are Not.

A disconnected AI screening tool creates work instead of saving it. We build the integration, configure the MCP servers, and train your recruiting team. Most teams are live in under three weeks.