Research Prospectus

The AI Grant Writer's Missing Infrastructure

Researchers use AI for papers and code. But grant writing is still stuck in Word docs.

Every claim in this document is supported by a primary source.

1. The Problem

AI can write code, papers, and documentation. But grant proposals are still written manually.

Researchers

Spend 34 working days per proposal on average[6]. 90%+ say it's too much time.

AI Tools

Claude Code, Cursor, and Copilot work with local files. Grant portals are web-only.

Compliance

NSF PAPPG has 80+ formatting rules[7]. Errors caught after submission waste weeks.

$3B+ in NSF proposals submitted annually[1]

The National Science Foundation receives over 40,000 proposals annually[1], with an average request of ~$200K. Researchers spend weeks on each proposal, yet 76% are rejected[2].

The problem isn't AI capability. It's infrastructure. AI tools work with local files and version control. Grant portals work with web forms and uploads. There's no bridge between them.

2. The Gap

Modern research tools are file-based, version-controlled, and AI-compatible. Grant writing tools are none of these.

What Researchers Want

  • Edit proposals in their IDE (VS Code, Cursor)
  • Use AI assistants (Claude Code, Copilot)
  • Version control with Git
  • Automated compliance checking
  • Collaborate without email chains

What Exists

  • Research.gov: Web-only, no API, manual validation
  • Cayuse: Legacy enterprise, $50K+/year
  • Proposal Central: Web forms, no AI integration
  • Google Docs: Better collaboration, zero validation

The gap: No tool treats grant proposals as structured data that AI can read, validate, and help write. That's what we're building.

3. The Solution

GrantKit bridges local AI workflows with grant submission requirements. Markdown files that sync bidirectionally with the cloud.

📁 Local Files (Markdown + YAML) ↔ ⚡ GrantKit (Sync + Validate) ↔ ☁️ Cloud (Team + Portal)

Local Sync

Proposals as local markdown files AI tools can read and edit.

  • Works with Claude Code, Cursor, Copilot
  • Git-compatible for version control
  • Offline editing, auto-sync on save
  • YAML frontmatter for structured metadata
grantkit pull && grantkit push
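
As a sketch of what such a proposal file might look like (the field names here are illustrative, not GrantKit's actual schema), the YAML frontmatter carries structured metadata while the body stays plain markdown:

```markdown
---
program: nsf-cssi-elements
title: "Open Infrastructure for Policy Microsimulation"
pi: Max Ghenis
budget_total: 600000
due: 2025-12-01
---

# Project Description

Our project will...
```

Because it's just a text file, Git tracks every revision and AI assistants can read and edit both the metadata and the prose.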

NSF Validation

200+ compliance rules checked automatically before you waste time.

  • Page limits, font requirements, margin checks
  • Budget arithmetic verification
  • Required sections and formatting
  • Inline error messages with fixes
grantkit validate nsf-cssi-elements
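
To illustrate the kind of rule such a validator runs (a hedged sketch, not GrantKit's actual implementation), a budget arithmetic check might look like:

```python
# Illustrative sketch of one validation rule: verify that budget line
# items sum to the stated total. Function and field names are
# hypothetical, not GrantKit's real API.

def check_budget(line_items, stated_total, tolerance=0.01):
    """Return a list of error messages (empty if the budget checks out)."""
    errors = []
    computed = sum(amount for _, amount in line_items)
    if abs(computed - stated_total) > tolerance:
        errors.append(
            f"Budget mismatch: line items sum to ${computed:,.2f}, "
            f"but stated total is ${stated_total:,.2f}"
        )
    return errors

items = [("Personnel", 350_000), ("Equipment", 120_000), ("Travel", 30_000)]
print(check_budget(items, 500_000))  # sums match: no errors
print(check_budget(items, 480_000))  # mismatch flagged with a fix-ready message
```

Running hundreds of checks like this locally, before submission, is what turns post-submission rejections into inline error messages.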

Team Collaboration

Real-time sync without email attachments or version conflicts.

  • Share grants with collaborators
  • Web editor for non-technical users
  • Audit trail of all changes
  • Comments and suggestions
grantkit share user@university.edu

4. The Markets

Research funding is massive, and the tools are decades behind.

Research Grants

$90B+/year[3]

Federal research funding in the US alone. NSF, NIH, DOE, DOD each run billions in annual competitions.

Grant Management Software

$2.8B → $5.6B[4]

Growing 10%+ annually. Legacy vendors (Cayuse, InfoEd) charge $50K+/year for outdated tools.

Research Institutions

5,000+

US universities, national labs, research nonprofits. Each submits dozens to hundreds of proposals annually.

AI Writing Tools

$5B+ by 2027[5]

Jasper, Copy.ai, and others for marketing. No dominant player for research/grant writing.

5. Competitive Landscape

Nobody combines local-first editing + AI integration + compliance validation.

Capability           Research.gov   Cayuse   Grantable   Google Docs   GrantKit
Local file editing   —              —        —           —             ✓
AI tool compatible   —              —        ◐           ◐             ✓
NSF validation       ◐              ✓        —           —             ✓
Version control      —              —        —           ◐             ✓
Free tier            ✓              —        ✓           ✓             ✓
Open source          —              —        —           —             ✓

Research.gov

NSF's official portal. Web-only, validates on submission (too late), no collaboration features.

Cayuse

Enterprise grant management. $50K+/year. Built for compliance officers, not researchers.

Grantable

AI grant writing assistant. Web-only, built-in AI only (no bring-your-own-model), no agency-specific validation.

Google Docs

Great for collaboration. Zero grant-specific features. No validation, no structure.

Overleaf

LaTeX for papers. Not designed for grants. No compliance validation.

6. Business Model

Open source CLI. Hosted cloud service. Enterprise for institutions.

Open Source: Free

CLI tool, local validation, single-user. Apache 2.0 licensed.

Pro: $29/mo

Cloud sync, team collaboration, unlimited grants.

Team: $99/mo

Shared workspace, admin controls, priority support.

Institution: $10K-50K/year

SSO, compliance reporting, custom integrations, dedicated support.

Developer Tool Precedents

  • Overleaf: $20M+ ARR
  • Notion: $500M+ ARR
  • Linear: $25M+ ARR

All started free/cheap, expanded to enterprise.

7. Traction

Early but promising. Built to scratch our own itch.

  • 2 proposals in progress
  • $1.2M total requested
  • 100% open source
  • Live at app.grantkit.io

Current Usage

PolicyEngine

Using GrantKit for NSF CSSI proposal. Real validation, real workflow.

Anthropic Collaboration

AI economic futures grant using Claude Code + GrantKit.

Dogfooding

Every feature built because we needed it for actual proposals.

Why Researcher-First

Bottom-up adoption in research is proven:

Overleaf

Started with individual researchers, now 12M+ users and institutional deals.

GitHub

Developers adopted first, enterprises followed. Now $7.5B acquisition.

Slack

Teams adopted, IT departments bought enterprise. $27B acquisition.

GrantKit

Researchers adopt for AI workflow. Institutions buy for compliance + collaboration.

8. Team

Max Ghenis

Founder

  • Founded PolicyEngineβ€”used by UK Government, US Congress
  • Former Google data scientist
  • MIT economics, UC Berkeley statistics
  • Submitted 10+ grant proposals, won multiple

Hiring

Open Roles

  • Full-stack engineer (React + Python)
  • Research partnerships lead
  • First hires post-funding

Built on PolicyEngine's open source community

9. Risks & Mitigations

Research.gov improves

Won't NSF just add these features?

Government portals move slowly. Research.gov hasn't had major updates in years. Even if they add features, they won't support local files or third-party AI tools.

Cayuse adds AI

Can't incumbents just bolt on AI?

Enterprise vendors serve compliance officers, not researchers. Their architecture is web-first. Adding local sync would require rewriting their entire product.

Researchers won't pay

Isn't this market price-sensitive?

$29/mo is trivial vs. grant amounts ($100K-$1M+). More importantly, institutions pay for tools researchers demand. Bottom-up adoption, top-down purchasing.

AI gets good enough

Won't AI just write grants end-to-end?

AI still needs structured inputs and validation. We're not competing with AI; we're the infrastructure AI tools use to interact with grant systems.

Single founder

Why no co-founder?

Actively looking. Seed capital enables founding hires. PolicyEngine community provides extended team for contract work.

NSF-specific

What about other agencies?

NSF is the beachhead. Same infrastructure extends to NIH, DOE, DOD, foundations. Validation rules differ; sync architecture doesn't.

10. The Ask

Seed Round $1-2M

Use of Funds

50% Engineering
25% Go-to-Market
15% Support/Ops
10% Legal/Admin

Milestones to Series A

  • 1,000+ active users
  • $500K ARR
  • 3-5 institutional customers
  • NIH + DOE validation added
  • Proposals submitted β†’ funded

Revenue Path

Year   ARR     Users    Milestone
Y1     $200K   500      Product-market fit, first institutional deal
Y2     $1M     2,000    NIH validation, 5+ institutions
Y3     $5M     10,000   Multi-agency, international expansion
Y4     $15M    30,000   Platform status, ecosystem
Y5     $40M    75,000   Category leader

References

  1. NSF. Budget Request FY 2024 (2024). nsf.gov
  2. NSF. Proposal & Award Statistics (2023). nsf.gov
  3. AAAS. Federal R&D Budget Dashboard (2024). aaas.org
  4. Grand View Research. Grant Management Software Market (2024). grandviewresearch.com
  5. MarketsandMarkets. AI Writing Assistant Market (2024). marketsandmarkets.com
  6. Herbert, D.L., et al. On the time spent preparing grant proposals: an observational study of Australian researchers (2013). BMJ Open
  7. NSF. Proposal & Award Policies & Procedures Guide (PAPPG), Chapter II (2024). nsf.gov

Interested?

Building the infrastructure for AI-native grant writing.

Get in Touch