In our CTO’s post, we argued that trustworthy communications data is the foundation for AI systems that work in production. Nowhere is that more visible than in recruiting, where automation tends to fail quietly and at scale.
Most applicant tracking systems were built to organize resumes and move candidates through stages. They were never designed to understand people or the conversations that shape hiring decisions. That gap explains why most “AI recruiting software” still feels thin. Screening and ranking improve throughput, but hiring outcomes are determined in interviews. Until interviews are treated as system input instead of side notes, agents inside ATS platforms will always operate on partial information.
In a recent Nylas webinar, we demonstrated what changes when interviews stop being artifacts and start being data. Meetings captured live and delivered into systems as structured signals become actionable, consistent, and ready for automation.
Hiring lives in language — not in PDFs, attachments, or free-text notes.
What people say, how they say it, and whether signals repeat across rounds determine whether someone gets hired. ATS systems that treat interviews as unstructured files cannot use that information in any meaningful way.
Agents cannot reason over text blobs.
They reason over structure.
To build an AI agent that operates inside an ATS, interviews must become part of the data model.
Without this structure, agents guess. When interviews become deterministic input, agents reason instead of speculating.
Nylas Notetaker converts real conversations into structured API output that can flow directly into scoring, review, decisioning, and coordination workflows.
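To make this concrete, here is a minimal sketch of what “interviews as data” can look like inside an ATS data model. The type and field names below are illustrative assumptions, not the actual Notetaker response schema:

```typescript
// Illustrative types only: field names are hypothetical, not the actual
// Nylas Notetaker schema. The point is that an interview becomes a typed
// record the rest of the system can operate on.
interface TranscriptSegment {
  speaker: string;   // resolved interviewer or candidate identity
  startMs: number;   // offset from meeting start, in milliseconds
  text: string;      // what was said, verbatim
}

interface StructuredInterview {
  meetingId: string;
  candidateId: string;
  stage: "phone_screen" | "onsite" | "final";
  segments: TranscriptSegment[];
}

// Once the data is structured, downstream steps become ordinary functions
// instead of prompt-time guesswork over a text blob.
function segmentsMentioning(
  interview: StructuredInterview,
  keywords: string[],
): TranscriptSegment[] {
  return interview.segments.filter((s) =>
    keywords.some((k) => s.text.toLowerCase().includes(k.toLowerCase())),
  );
}
```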
Manual capture is where reliability breaks.
Interview data cannot depend on a recruiter remembering to upload files or write notes hours later. Automatic, real-time capture provides the stable substrate agents rely on.
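As a sketch of what automatic capture can look like on the receiving end, here is a hypothetical webhook handler that persists a transcript the moment a meeting ends. The route, event name, and payload shape are assumptions for illustration, not a documented contract:

```typescript
// Hypothetical webhook receiver: the transcript arrives as an event when
// the meeting ends, instead of waiting on a recruiter to upload notes.
import { createServer } from "node:http";

function saveInterviewToAts(data: unknown): void {
  // Stand-in for your ATS write path.
  console.log("storing structured interview", data);
}

createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/webhooks/notetaker") {
    res.writeHead(404).end();
    return;
  }
  let body = "";
  req.setEncoding("utf8");
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body);
    // Persist the moment the data exists; no human in the loop.
    if (event.type === "transcript.ready") {
      saveInterviewToAts(event.data);
    }
    res.writeHead(200).end();
  });
}).listen(3000);
```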
Automation cannot fix what the system never sees.
This pattern is universal: the interview happens, notes get written hours later or not at all, and the record never reaches the ATS. Weeks later, no one can explain why a decision feels inconsistent.
The system didn’t fail.
It simply never had the truth.
Before AI agents, coordinating onsite interviews required a human to stitch together availability, reminders, and reschedules by hand. Onsites involve 3–5+ interviewers, each with their own calendar, constraints, and shifting availability.
Previously, LLMs couldn’t meaningfully coordinate this because the underlying data was too inconsistent. With structured meetings, normalized calendar access, and stable identity data, ATS agents can now handle workflows that once required human judgment.
This is the inflection point for recruiting automation:
LLM agents can now orchestrate workflows, not just summarize them.
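Here is a rough sketch of that shift in code. Every helper is a hypothetical stand-in for an ATS-side integration; the shape of the flow, not the names, is the point:

```typescript
// Hypothetical orchestration flow: gather structured inputs, make a
// deterministic decision, then execute without a human relay in the middle.
interface Slot {
  start: Date;
  durationMinutes: number;
}

// Stub: a real system would query a calendar API here.
async function getSharedAvailability(interviewerIds: string[]): Promise<Slot[]> {
  return [{ start: new Date("2025-06-02T17:00:00Z"), durationMinutes: 60 }];
}

async function bookInterview(candidateId: string, interviewerIds: string[], slot: Slot) {
  console.log(`booked ${candidateId} with ${interviewerIds.length} interviewers at ${slot.start.toISOString()}`);
}

async function sendConfirmation(candidateId: string, slot: Slot) {
  console.log(`confirmation sent to ${candidateId} for ${slot.start.toISOString()}`);
}

async function coordinateOnsite(candidateId: string, interviewerIds: string[]) {
  // 1. Structured input: shared availability across every participant.
  const windows = await getSharedAvailability(interviewerIds);

  // 2. Deterministic decision: earliest slot long enough for the loop.
  const slot = windows.find((w) => w.durationMinutes >= 45);
  if (!slot) throw new Error("no shared slot; widen the search window");

  // 3. Execution: book and confirm, not just summarize.
  await bookInterview(candidateId, interviewerIds, slot);
  await sendConfirmation(candidateId, slot);
}

coordinateOnsite("cand_123", ["int_a", "int_b", "int_c"]).catch(console.error);
```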
Once interviews become structured input, agent behavior shifts:
Hiring becomes a system — not a collage of opinions.
Most hiring failures happen between interviews. Slow scheduling kills pipelines faster than bad screening.
If an agent cannot check availability, book the next round, send reminders, and recover from reschedules, then it isn’t managing the process. It’s watching it.
The Nylas Calendar API gives ATS agents direct control over availability, cadence, reminders, and coordination without relying on humans to stitch everything together.
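As one example, a shared-availability check might look like the sketch below. It assumes the v3 availability endpoint and request fields as we understand them; confirm the exact shape against the API reference before relying on it:

```typescript
// Sketch: find slots that every interviewer shares over the next week.
// Endpoint path and body fields are our reading of the Nylas v3 docs;
// treat them as assumptions and verify against the API reference.
const NYLAS_API_KEY = process.env.NYLAS_API_KEY!;

async function findSharedSlots(emails: string[]) {
  const now = Math.floor(Date.now() / 1000);
  const res = await fetch("https://api.us.nylas.com/v3/calendars/availability", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${NYLAS_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      start_time: now,                        // Unix seconds
      end_time: now + 7 * 24 * 3600,          // look one week ahead
      duration_minutes: 60,                   // interview length
      participants: emails.map((email) => ({ email })),
    }),
  });
  if (!res.ok) throw new Error(`availability request failed: ${res.status}`);
  return res.json();                          // slots all participants share
}
```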
Candidate experience lives in inboxes.
So do confirmations, reminders, feedback loops, and follow-ups.
An AI agent that cannot access email cannot run the workflow it’s responsible for.
Nylas normalizes email behavior across providers so agents can send, receive, track, and manage communication with the stability production systems require.
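For instance, a candidate confirmation could go out with a call like the one below. It assumes the v3 send endpoint and payload fields as we understand them; verify against the API reference before depending on the exact shape:

```typescript
// Sketch: send a candidate confirmation through a connected mailbox.
// Endpoint path and payload fields are our reading of the Nylas v3 docs;
// treat them as assumptions and verify against the API reference.
const API_KEY = process.env.NYLAS_API_KEY!;
const GRANT_ID = process.env.NYLAS_GRANT_ID!; // the recruiter's connected mailbox

async function sendCandidateConfirmation(to: string, whenIso: string) {
  const res = await fetch(
    `https://api.us.nylas.com/v3/grants/${GRANT_ID}/messages/send`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        to: [{ email: to }],
        subject: "Your interview is confirmed",
        body: `<p>You are confirmed for ${whenIso}. A calendar invite follows.</p>`,
      }),
    },
  );
  if (!res.ok) throw new Error(`send failed: ${res.status}`);
}
```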
AI hiring systems fail when the communications layer is fragile.
Nylas removes that volatility by providing structured meeting capture, normalized calendar access, and consistent email behavior across every major provider.
ATS teams can build automation on predictable systems instead of provider quirks. You implement once and deploy everywhere.
Reliability becomes proportional to what the system can see.
If your system depends on interviews, calendars, and email, the bottleneck is rarely your model.
It’s your data.
Build agents on infrastructure designed for execution, not storage.
Explore the APIs:
https://developer.nylas.com/
Get started:
https://dashboard-v3.nylas.com/register
This post is part of our “Building AI Agents” series.
If you haven’t yet, read the other entries in the series.