Research Prospectus
Researchers already use AI to write code, papers, and documentation. But grant proposals are still written manually, stuck in Word docs.
Researchers spend an average of 34 working days per proposal[6], and more than 90% say that's too much time.
Claude Code, Cursor, and Copilot work with local files. Grant portals are web-only.
NSF PAPPG has 80+ formatting rules[7]. Errors caught after submission waste weeks.
The National Science Foundation receives over 40,000 proposals annually[1], with an average request of ~$200K. Researchers spend weeks on each proposal, yet 76% are rejected[2].
The problem isn't AI capability. It's infrastructure. AI tools work with local files and version control. Grant portals work with web forms and uploads. There's no bridge between them.
Modern research tools are file-based, version-controlled, and AI-compatible. Grant writing tools are none of these.
The gap: No tool treats grant proposals as structured data that AI can read, validate, and help write. That's what we're building.
GrantKit bridges local AI workflows and grant submission requirements: markdown files that sync bidirectionally with the cloud.
- Local-first editing: proposals as local markdown files AI tools can read and edit (`grantkit pull && grantkit push`).
- Automated compliance: 200+ compliance rules checked automatically before you waste time (`grantkit validate nsf-cssi-elements`).
- Real-time sync: no email attachments or version conflicts (`grantkit share user@university.edu`).
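Put together, the intended loop looks like this (the commands are the ones above; the directory layout in the comments is illustrative, not a documented format):

```bash
# Pull the latest proposal state from the cloud into local markdown files
grantkit pull
# e.g. proposals/nsf-cssi/project-description.md
#      proposals/nsf-cssi/budget-justification.md

# Edit locally with any AI tool (Claude Code, Cursor, Copilot), then
# check the draft against NSF formatting and compliance rules
grantkit validate nsf-cssi-elements

# Sync changes back and share with a collaborator
grantkit push
grantkit share user@university.edu
```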
Research funding is massive, and the tools are decades behind.
Federal research funding in the US alone is enormous: NSF, NIH, DOE, and DOD each run billions of dollars in annual competitions.
The market is growing 10%+ annually, while legacy vendors (Cayuse, InfoEd) charge $50K+/year for outdated tools.
Customers: US universities, national labs, and research nonprofits, each submitting dozens to hundreds of proposals annually.
AI writing assistants (Jasper, Copy.ai, and others) serve marketing; there is no dominant player for research and grant writing.
Nobody combines local-first editing + AI integration + compliance validation.
| Capability | Research.gov | Cayuse | Grantable | Google Docs | GrantKit |
|---|---|---|---|---|---|
| Local file editing | ✗ | ✗ | ✗ | ✗ | ✓ |
| AI tool compatible | ✗ | ✗ | ✗ | ✗ | ✓ |
| NSF validation | At submission only | ✓ | ✗ | ✗ | ✓ |
| Version control | ✗ | ✗ | ✗ | Limited | ✓ |
| Free tier | ✓ | ✗ | ✗ | ✓ | ✓ |
| Open source | ✗ | ✗ | ✗ | ✗ | ✓ |
- **Research.gov**: NSF's official portal. Web-only, validates only at submission (too late), no collaboration features.
- **Cayuse**: enterprise grant management at $50K+/year. Built for compliance officers, not researchers.
- **Grantable**: AI grant-writing assistant. Web-only, built-in AI with no bring-your-own-model, no agency-specific validation.
- **Google Docs**: great for collaboration, zero grant-specific features. No validation, no structure.
- **LaTeX/Overleaf**: built for papers, not grants. No compliance validation.
The business model: open source CLI, hosted cloud service, enterprise for institutions.

- Open source: CLI tool, local validation, single-user. Apache 2.0 licensed.
- Cloud: cloud sync, team collaboration, unlimited grants.
- Team: shared workspace, admin controls, priority support.
- Enterprise: SSO, compliance reporting, custom integrations, dedicated support.
Comparable tools all started free or cheap and expanded to enterprise.
Early but promising. Built to scratch our own itch.
- We're using GrantKit for our own NSF CSSI proposal: real validation, real workflow.
- An AI economic futures grant is being drafted with Claude Code + GrantKit.
Every feature built because we needed it for actual proposals.
Bottom-up adoption in research is proven:
- Overleaf: started with individual researchers, now 12M+ users and institutional deals.
- GitHub: developers adopted first, enterprises followed. Acquired for $7.5B.
- Slack: teams adopted, IT departments bought enterprise. Acquired for $27B.
GrantKit follows the same path: researchers adopt it for the AI workflow; institutions buy it for compliance and collaboration.
Founder
Open Roles
Built on PolicyEngine's open source community
Won't NSF just add these features?
Government portals move slowly. Research.gov hasn't had major updates in years. Even if they add features, they won't support local files or third-party AI tools.
Can't incumbents just bolt on AI?
Enterprise vendors serve compliance officers, not researchers. Their architecture is web-first. Adding local sync would require rewriting their entire product.
Isn't this market price-sensitive?
$29/mo is trivial vs. grant amounts ($100K-$1M+). More importantly, institutions pay for tools researchers demand. Bottom-up adoption, top-down purchasing.
Won't AI just write grants end-to-end?
AI still needs structured inputs and validation. We're not competing with AI; we're the infrastructure AI tools use to interact with grant systems.
Why no co-founder?
Actively looking. Seed capital enables founding hires. PolicyEngine community provides extended team for contract work.
What about other agencies?
NSF is the beachhead. Same infrastructure extends to NIH, DOE, DOD, foundations. Validation rules differ; sync architecture doesn't.
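If the architecture holds, supporting a new agency means swapping the ruleset, not the workflow. A hypothetical NIH invocation (the `nih-r01` ruleset name is illustrative, not a shipped feature):

```bash
# Identical pull/edit/push loop; only the validation ruleset changes
grantkit validate nih-r01
```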
| Year | ARR | Users | Milestone |
|---|---|---|---|
| Y1 | $200K | 500 | Product-market fit, first institutional deal |
| Y2 | $1M | 2,000 | NIH validation, 5+ institutions |
| Y3 | $5M | 10,000 | Multi-agency, international expansion |
| Y4 | $15M | 30,000 | Platform status, ecosystem |
| Y5 | $40M | 75,000 | Category leader |
Building the infrastructure for AI-native grant writing.