Your site works fine in a browser. AI agents can't use it.
I was building agent workflows for clients when I noticed a pattern that drove me nuts: agents would hit a site, get a 200 OK, and then… nothing useful. No structured data. No clear navigation path. Sometimes a WAF would silently block the request. The agent would fail, the logs would look fine, and I’d waste hours figuring out why.
The thing is, these weren’t bad websites. They ranked well on Google. They looked great in a browser. They just weren’t built for anything that wasn’t a human clicking around in Chrome.
I kept running into the same invisible wall across different clients, then I hit it with my own startup. Ouch. So I built the diagnostic I wished existed, because I needed it too.
Meet Siteline
Siteline is a free scanner that grades how usable your public website is for AI agents. You give it a URL, it tells you what works, what’s broken, and what to fix — in about 10 seconds.
It’s live right now at siteline.snapsynapse.com. Type any URL and see the grade. Totally free.
What it checks
Siteline evaluates four pillars — what I call the SNAP rubric:
🔦 Signal
Can an agent even reach your site? This checks whether your server responds to non-browser clients, whether robots.txt blocks agent user-agents like ClaudeBot or GPTBot, and whether HTTPS is in place. You’d be surprised how many sites return 403 to anything that isn’t Chrome.
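To make the robots.txt part of this concrete, here's a minimal sketch of that kind of check. It's an illustration, not Siteline's actual parser, and it deliberately ignores some robots.txt subtleties like path-specific rules and multi-agent groups:

```javascript
// Sketch: does this robots.txt block a given agent user-agent entirely?
// Simplified illustration -- real parsers must handle grouped user-agent
// lines and path-level Allow/Disallow rules.
function isAgentBlocked(robotsTxt, agentName) {
  let inMatchingGroup = false;
  let blocked = false;
  for (const raw of robotsTxt.split("\n")) {
    const line = raw.trim();
    const colon = line.indexOf(":");
    if (colon === -1) continue;
    const key = line.slice(0, colon).trim().toLowerCase();
    const value = line.slice(colon + 1).trim();
    if (key === "user-agent") {
      // Does this group apply to our agent (or to everyone)?
      inMatchingGroup =
        value === "*" || value.toLowerCase() === agentName.toLowerCase();
    } else if (key === "disallow" && inMatchingGroup && value === "/") {
      blocked = true; // the whole site is disallowed for this agent
    }
  }
  return blocked;
}
```

Run it against `User-agent: GPTBot` / `Disallow: /` and it flags GPTBot as blocked; an empty `Disallow:` leaves the agent through.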
🧭 Navigate
Once an agent lands on the page, can it figure out where things are? This looks for JSON-LD, site identity signals, clear navigation to key pages (About, Services, Contact), and machine discovery paths like sitemaps and RSS feeds.
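The JSON-LD part of this check is the easiest to picture. Here's a rough sketch of detecting usable JSON-LD in a page's HTML, not Siteline's real detector:

```javascript
// Sketch: does the initial HTML carry parseable JSON-LD structured data?
// Simplified illustration only.
function findJsonLd(html) {
  const re =
    /<script[^>]*type=["']application\/ld\+json["'][^>]*>([\s\S]*?)<\/script>/gi;
  const blocks = [];
  for (const match of html.matchAll(re)) {
    try {
      blocks.push(JSON.parse(match[1])); // keep only blocks that parse
    } catch {
      // malformed JSON-LD is arguably worse than none: agents can't use it
    }
  }
  return blocks;
}
```

A page with `{"@type": "Organization", "name": "..."}` in a `ld+json` script tag passes; a page whose only structure lives in rendered DOM does not.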
📖 Absorb
Is the actual content machine-readable? Siteline checks whether the initial HTML has meaningful content or if everything hides behind JavaScript rendering. It looks at heading hierarchy, semantic markup, and whether the content model is clear or confused.
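The "is it a JavaScript shell?" question comes down to a crude but useful heuristic: how much visible text is in the raw HTML before any script runs? A sketch of that idea, with a threshold I've made up for illustration:

```javascript
// Sketch: measure visible text in raw HTML, before client-side rendering.
// Rough heuristic for illustration -- a real check would be richer.
function visibleTextLength(html) {
  const stripped = html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop inline JS
    .replace(/<style[\s\S]*?<\/style>/gi, "")   // drop inline CSS
    .replace(/<[^>]+>/g, " ")                   // drop remaining tags
    .replace(/\s+/g, " ")
    .trim();
  return stripped.length;
}

// Hypothetical threshold: under ~200 chars of text usually means the
// content only exists after client-side rendering.
const looksLikeJsShell = (html) => visibleTextLength(html) < 200;
```

The classic single-page-app skeleton, `<div id="root"></div>` plus a script tag, scores zero here, which is exactly what an agent reading the initial response sees.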
🤝 Perform
Can the agent figure out what the user should do next? This checks for interpretable CTAs, form labels, button text, and whether next steps are clear enough for an agent to relay back to a human.
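"Interpretable CTAs" sounds fuzzy, so here's a sketch of one concrete sub-check: flagging button or link text that carries no meaning out of context. The vague-phrase list is illustrative, not Siteline's actual rubric:

```javascript
// Sketch: flag call-to-action text an agent can't interpret on its own.
// "Book a demo" tells an agent what happens next; "Click here" does not.
const VAGUE_CTAS = ["click here", "learn more", "submit", "go", "here"];

function flagVagueCtas(ctaTexts) {
  return ctaTexts.filter((t) => VAGUE_CTAS.includes(t.trim().toLowerCase()));
}
```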
Each pillar gets a weighted score. The final grade is A through F.
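The scoring step itself is simple to sketch. The weights below are made up for illustration; Siteline's real weights are part of its versioned rubric and aren't published in this post:

```javascript
// Sketch: combine four pillar scores (0-100) into a letter grade.
// Weights are illustrative placeholders, not Siteline's actual rubric.
const WEIGHTS = { signal: 0.3, navigate: 0.25, absorb: 0.25, perform: 0.2 };

function grade(scores) {
  const total = Object.entries(WEIGHTS).reduce(
    (sum, [pillar, w]) => sum + w * (scores[pillar] ?? 0),
    0
  );
  if (total >= 90) return "A";
  if (total >= 80) return "B";
  if (total >= 70) return "C";
  if (total >= 60) return "D";
  return "F";
}
```

A site that's easy to reach but hides everything behind JavaScript can still land a failing grade, because the pillars are weighted rather than pass/fail.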
What I learned building this
The biggest surprise: bot-blocking is the #1 failure mode, not content quality. Most sites I scanned during development had decent content structure. But their WAF or hosting provider was silently blocking anything that didn’t look like a browser. The site owner had no idea.
The second surprise: most sites that invested heavily in SEO still scored very poorly. SEO optimizes for Google’s crawler. AI agents have different constraints — they need structured data, clear action paths, and machine-readable policy signals that search crawlers don’t care about.
Try it
CLI:
npx siteline scan yoursite.com
API:
curl "https://siteline.snapsynapse.com/api/scan?url=yoursite.com"
MCP Server (for agent developers):
npx siteline mcp
This exposes four tools — scan_url, self_scan, describe_rubric, and explain_score — so your agents can assess sites programmatically before attempting workflows on them.
The stack
- Frontend: Vanilla HTML/CSS/JS — no framework, no build step
- Backend: Node.js on Vercel serverless functions
- Storage: Supabase PostgreSQL (results cached 24h per domain)
- Dependencies: one — @vercel/og for dynamic social images
One architectural decision worth mentioning: Siteline tests with a non-browser user-agent first, then falls back to a headless browser if blocked. This lets it distinguish between “your content is bad” and “your firewall is blocking agents” — which is the diagnostic that matters most.
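In pseudocode terms, the two-pass idea looks something like this. Function names, the user-agent string, and the result shapes are all assumptions for illustration, not Siteline's code:

```javascript
// Sketch of the two-pass probe: plain fetch with an agent UA first,
// headless browser only as a fallback. Names and shapes are hypothetical.
async function plainPass(url) {
  try {
    const res = await fetch(url, {
      headers: { "User-Agent": "SitelineBot/1.0" }, // hypothetical UA string
      redirect: "follow",
    });
    return { ok: res.ok, status: res.status };
  } catch {
    return { ok: false, status: 0 }; // DNS failure or network-level block
  }
}

// The diagnosis that matters: comparing the two passes tells you WHO
// is being blocked, not just THAT something failed.
function diagnose(plainResult, headlessResult) {
  if (plainResult.ok) return "reachable";               // agents get through
  if (headlessResult.ok) return "blocked-by-firewall";  // only browsers do
  return "site-down";                                   // nobody does
}
```

The interesting branch is the middle one: a plain 403 followed by a headless 200 means the content is fine and the firewall is the problem.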
What’s next
I’m working on multi-page analysis (right now it evaluates the landing page only) and a comparison mode for benchmarking against competitors. The rubric itself is versioned and will evolve as agent capabilities change.
How do you currently test whether AI agents can actually use your site — or do you just assume it works?