Stop Googling 'Can I Use That AI Feature on the Free Plan'
Someone raises their hand: “Can I actually use that on the free plan?”
You’re 90% sure the answer is yes. But the vendor changed their pricing page last month. And the feature you demoed in January got moved to a higher tier a couple weeks ago. And you’re not going to pull up six different pricing pages in front of a live audience to find out.
This is the problem. Not that AI tools are hard to understand. That the ground keeps moving, and nobody maintains a single place where the current answers live.
What this is
AI Capability Reference is an open-source, static-site reference that tracks capabilities, pricing tiers, platform support, and availability across the major consumer AI products: ChatGPT, Claude, Gemini, Copilot, Perplexity, Grok, plus self-hosted options like Ollama and LM Studio.
Live site: airef.snapsynapse.com
It answers the specific, annoying questions that come up when you’re facilitating, teaching, or advising:
- Which plan unlocks Agent Mode in ChatGPT?
- Can I use Claude Cowork on Windows?
- Is Gemini Live free or paid?
- What open models can I realistically run locally?
Every answer links to an evidence source. Nothing is vibes-based.
How the data stays current
This is the part I’m most proud of, and probably the part that matters most for whether you’d trust it.
Every Monday and Thursday, a four-model verification cascade runs: Gemini, Perplexity, Grok, and Claude each cross-check every tracked feature for pricing, platform availability, status, gating, and regional restrictions. To prevent provider bias, a model is skipped when verifying its own platform’s features (Gemini doesn’t check Google features, and so on). A change is flagged only when at least three models agree on a discrepancy. Flagged changes become GitHub issues or PRs for human review. Nothing auto-merges.
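To make the agreement rule concrete, here is a minimal sketch of the flagging logic. The function name, record shape, and provider mapping are illustrative assumptions, not the project's actual code:

```javascript
// Hypothetical sketch of the 3-of-4 consensus rule described above.
// Verifier names and the feature/report shapes are assumptions.
const OWN_PLATFORM = {
  gemini: 'google',
  perplexity: 'perplexity',
  grok: 'xai',
  claude: 'anthropic',
};

// Flag a change only when at least three eligible models report a
// discrepancy. A model never verifies its own provider's features.
function shouldFlag(feature, reports) {
  const eligible = Object.keys(reports).filter(
    (model) => OWN_PLATFORM[model] !== feature.provider
  );
  const discrepancies = eligible.filter((m) => reports[m].discrepancy);
  return discrepancies.length >= 3;
}
```

Note that when the feature belongs to one of the four verifiers' own providers, only three models remain eligible, so all three must agree before anything is flagged.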
Features are also stamped with a Checked date. Anything not re-verified within seven days is treated as stale and gets prioritized in the next run.
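The staleness rule is simple enough to sketch directly. The seven-day window comes from the article; the function and field names are assumptions:

```javascript
// Hypothetical staleness check for the Checked date described above.
const STALE_AFTER_DAYS = 7;

// A feature is stale if its last Checked date is more than seven
// days before `now`. Subtracting Dates in JS yields milliseconds.
function isStale(checkedDate, now = new Date()) {
  const ageMs = now - new Date(checkedDate);
  return ageMs > STALE_AFTER_DAYS * 24 * 60 * 60 * 1000;
}
```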
How it’s built
There is no database. Every piece of data lives in plain markdown files under data/. A single build script reads those files and renders the static site into docs/.
That’s the whole stack: markdown, JavaScript, and Git.
Contributing doesn’t require a dev environment, an ORM, or a running database. Edit a .md file, open a PR, and CI rebuilds the site. If you can read a markdown table, you can read and fix the data.
```shell
# Clone, build, open
git clone https://github.com/snapsynapse/ai-capability-reference.git
cd ai-capability-reference
node scripts/build.js
open docs/index.html
```
What the ontology looks like
The reference is organized capability-first, not product-first. Instead of “here’s everything ChatGPT does,” it asks “which products let me do X?”
Capabilities are grouped into plain-language categories:
- Understand
- Respond
- Create
- Work With My Stuff
- Act for Me
- Connect
- Access Context
Each capability maps to specific implementations across products, with plan-level availability for each.
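In code, a capability-first record of the kind described might look like the following. The field names, plan list, and URL are placeholders, not the repo’s actual schema or verified data:

```javascript
// Illustrative record shape only; the repo's markdown schema may differ,
// and the plan list and evidence URL below are placeholders.
const sampleRecord = {
  capability: 'Act for Me',
  feature: 'Agent Mode',
  implementations: [
    { product: 'ChatGPT', plans: ['Plus', 'Pro'], evidence: 'https://example.com/pricing' },
  ],
};

// Capability-first lookup: which products offer feature X, on which plans?
function productsWithFeature(records, featureName) {
  return records
    .filter((r) => r.feature === featureName)
    .flatMap((r) =>
      r.implementations.map((i) => `${i.product} (${i.plans.join(', ')})`)
    );
}
```

Inverting the usual product-first layout is what lets the site answer “which products let me do X?” in one lookup instead of six pricing-page visits.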
The data also includes ready-to-use talking points for presentations (click-to-copy), category and price tier filtering, provider toggles, permalinks, and shareable URLs with filter state preserved in parameters.
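Shareable filter state is straightforward to sketch with the standard URLSearchParams API. The parameter names here ('category', 'tier', 'providers') are guesses about the site, not its documented URL scheme:

```javascript
// Sketch of round-tripping filter state through URL query parameters.
// Parameter names are assumptions about the site's actual URLs.
function encodeFilters({ category, tier, providers }) {
  const params = new URLSearchParams();
  if (category) params.set('category', category);
  if (tier) params.set('tier', tier);
  if (providers && providers.length) params.set('providers', providers.join(','));
  return params.toString();
}

function decodeFilters(queryString) {
  const params = new URLSearchParams(queryString);
  return {
    category: params.get('category') || undefined,
    tier: params.get('tier') || undefined,
    providers: (params.get('providers') || '').split(',').filter(Boolean),
  };
}
```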
Scope decisions
Covered: major consumer-facing AI products with meaningful public usage. Commercially available systems that ordinary people can sign up for and use. Important self-hosted model families and runtimes.
Not covered: every enterprise AI vendor, infrastructure platform, or niche model release. A rough 1% market-share heuristic guides inclusion; it’s a practical filter, not a strict cutoff.
Prices are in USD. Feature availability reflects the US region by default.
Accessibility
WCAG 2.1 AA target. Full keyboard navigation (arrow keys, j/k, Enter to copy), skip links, focus indicators, 4.5:1 contrast minimums in both themes, reduced-motion support, 44px minimum touch targets, ARIA live regions, semantic HTML throughout.
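The j/k and arrow-key navigation mentioned above reduces to a small pure function. This is a hedged sketch of the index arithmetic only; the site’s actual event wiring, selectors, and focus management are not shown and are assumptions:

```javascript
// Illustrative core of j/k + arrow-key list navigation: given a key,
// the current row index, and the row count, return the next index,
// clamped at the list boundaries. Event wiring is omitted.
function nextIndex(key, current, total) {
  if (key === 'j' || key === 'ArrowDown') return Math.min(current + 1, total - 1);
  if (key === 'k' || key === 'ArrowUp') return Math.max(current - 1, 0);
  return current; // other keys (e.g. Enter-to-copy) handled elsewhere
}
```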
If you care about accessibility tooling, there’s a companion project: skill-a11y-audit that automates WCAG audits as a reusable AI skill.
Who this is for
- Facilitators and AI educators who need current, accurate answers during live sessions
- Professionals building AI literacy programs who need to know what’s actually available at each price point
- Designers and developers evaluating which AI capabilities exist on which platforms
- Anyone tired of checking six different pricing pages to answer one question
Get involved
The repo is MIT-licensed. If you spot outdated info or want to add a feature:
- Edit the relevant record under data/
- Preserve the evidence source link
- Run node scripts/validate-ontology.js
- Open a PR