Case Study: Building a Full-Stack AI-Powered Sponsorship CRM at 35,000 Feet, on an iPhone
From iPhone to Production — How Claude Code, Ollama, and a VPS Turned Airplane WiFi Into a Development Environment
Executive Summary
During a return flight from Mexico City to Charleston, a fully custom sponsorship CRM was designed, developed, tested, and deployed — entirely from an iPhone connected to a VPS via SSH over airplane WiFi. The system features an integrated AI prompt tool powered by a sandboxed Ollama instance, a lightning-fast Python backend with a production database, and a rich feature set covering contact management, commitment tracking, prospect status workflows, and full pipeline journey visualization. A purpose-built context database ensures Claude Code can resume work across sessions without re-explaining the entire project — dramatically reducing token burn and eliminating ramp-up delays. The entire build was orchestrated by Claude Code, which wrote, tested, and configured every component on-server.
The Problem
Sponsorship management is relationship-heavy, detail-intensive work. Off-the-shelf CRMs either over-generalize (Salesforce, HubSpot) or under-deliver on the specific workflows sponsorship professionals need: tracking prospect progression through a pipeline, logging granular commitment details, and maintaining context across dozens of concurrent conversations with sponsors at different stages.
What was needed was a system purpose-built for sponsorship operations — one that understood the difference between a warm lead, a verbal commitment, and a signed contract, and that could leverage AI to reduce the manual overhead of data entry and task execution.
The Solution
A fully custom Sponsorship CRM with the following core capabilities:
Contact & Organization Management
The system provides a comprehensive contact database designed specifically for sponsorship workflows. Each contact record captures not just standard fields like name, email, and organization, but sponsorship-specific metadata: relationship history, engagement level, decision-making authority, and preferred communication channels. Contacts are linked to their parent organizations, enabling a clear view of all stakeholders within a single sponsorship target.
Commitment Tracking
Every sponsorship commitment is logged as a structured record. Each record captures the commitment type (financial, in-kind, media, activation), the specific terms, dollar value or equivalent, fulfillment timeline, and current status. This eliminates the ambiguity that plagues sponsorship management: no more guessing whether a sponsor verbally agreed to $10K or $15K, or whether the commitment was for this fiscal year or next.
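A commitment record along these lines can be sketched as a single SQLite table. This is a minimal illustration, not the actual production schema; table and column names are assumptions:

```python
import sqlite3

# Illustrative commitment-record schema (names are assumptions,
# not the actual build's schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE commitments (
        id INTEGER PRIMARY KEY,
        organization_id INTEGER NOT NULL,
        commitment_type TEXT CHECK (commitment_type IN
            ('financial', 'in-kind', 'media', 'activation')),
        terms TEXT,
        dollar_value REAL,          -- cash value or in-kind equivalent
        fulfillment_deadline TEXT,  -- ISO 8601 date
        status TEXT DEFAULT 'verbal'
    )
""")

conn.execute(
    "INSERT INTO commitments (organization_id, commitment_type, terms, "
    "dollar_value, fulfillment_deadline, status) VALUES (?, ?, ?, ?, ?, ?)",
    (1, 'financial', 'Stadium naming rights, year one', 50000.0,
     '2025-06-30', 'verbal'),
)

row = conn.execute(
    "SELECT commitment_type, dollar_value, status FROM commitments"
).fetchone()
print(row)  # ('financial', 50000.0, 'verbal')
```

The CHECK constraint on commitment_type enforces the four categories at the database layer, so a typo in a classification can never reach a report.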
Prospect Status & Pipeline Journey
Every contact and organization moves through a clearly defined pipeline: Identified → Researched → Outreach Initiated → Engaged → Proposal Sent → Negotiating → Committed → Fulfilled → Renewed (or Lost). The system tracks not just current status but the full journey — when a prospect entered each stage, how long they stayed, and what actions triggered progression. This historical pipeline data becomes invaluable for forecasting and process optimization.
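Tracking the full journey rather than just the current status amounts to an append-only log of stage transitions. A minimal sketch, with illustrative table and function names (the stage list follows the pipeline above):

```python
import sqlite3
from datetime import datetime, timezone

# Append-only stage-transition log: each change of pipeline stage is
# recorded with a timestamp, so time-in-stage can be computed later.
STAGES = ["Identified", "Researched", "Outreach Initiated", "Engaged",
          "Proposal Sent", "Negotiating", "Committed", "Fulfilled",
          "Renewed", "Lost"]

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stage_transitions (
        id INTEGER PRIMARY KEY,
        contact_id INTEGER NOT NULL,
        stage TEXT NOT NULL,
        entered_at TEXT NOT NULL,   -- UTC ISO 8601
        triggered_by TEXT           -- e.g. 'proposal emailed'
    )
""")

def move_to_stage(contact_id, stage, triggered_by=None):
    if stage not in STAGES:
        raise ValueError(f"Unknown stage: {stage}")
    conn.execute(
        "INSERT INTO stage_transitions "
        "(contact_id, stage, entered_at, triggered_by) VALUES (?, ?, ?, ?)",
        (contact_id, stage,
         datetime.now(timezone.utc).isoformat(), triggered_by),
    )

move_to_stage(1, "Identified")
move_to_stage(1, "Researched", "background research completed")

history = [r[0] for r in conn.execute(
    "SELECT stage FROM stage_transitions WHERE contact_id = 1 ORDER BY id")]
print(history)  # ['Identified', 'Researched']
```

Because nothing is ever overwritten, the duration a prospect spent in any stage is just the difference between consecutive entered_at timestamps, which is what makes the forecasting described above possible.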
Integrated AI Prompt Tool
The standout feature is the embedded AI assistant powered by a sandboxed Ollama instance running directly on the VPS. Users interact with a natural language prompt interface within the application itself. The AI can execute tasks like:
Adding new contacts from unstructured text ("Add John Martinez from Coca-Cola, he's interested in our stadium naming rights package, met him at the NACDA conference")
Updating prospect statuses conversationally ("Move Pepsi to Negotiating, they came back with a counter on the activation package")
Logging commitments from meeting notes ("Nike confirmed $50K cash plus $25K in-kind product for the athlete hospitality suite, contract coming next week")
Querying pipeline data in plain language ("Which prospects have been in Proposal Sent for more than 14 days?")
The AI doesn't just parse commands — it understands sponsorship context, validates data before writing it, and confirms actions back to the user in natural language.
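The validate-before-write step might look like the following sketch: the model returns a structured action as JSON, and the backend checks it against an allowlist before anything touches the database. The action names and required fields here are assumptions for illustration:

```python
import json

# Allowlist of actions the AI may request, with required fields.
# Names are illustrative, not the actual build's action vocabulary.
ALLOWED_ACTIONS = {
    "add_contact": {"name", "organization"},
    "update_status": {"organization", "new_status"},
    "log_commitment": {"organization", "commitment_type", "dollar_value"},
}

def validate_action(raw_json: str) -> dict:
    """Parse the model's JSON and reject unknown or incomplete actions."""
    action = json.loads(raw_json)
    kind = action.get("action")
    if kind not in ALLOWED_ACTIONS:
        raise ValueError(f"Unknown action: {kind!r}")
    missing = ALLOWED_ACTIONS[kind] - action.keys()
    if missing:
        raise ValueError(f"Missing fields for {kind}: {sorted(missing)}")
    return action

# e.g. the model parsed "Move Pepsi to Negotiating" into this payload:
parsed = validate_action(
    '{"action": "update_status", "organization": "Pepsi", '
    '"new_status": "Negotiating"}'
)
print(parsed["new_status"])  # Negotiating
```

Anything that fails validation is bounced back to the user for confirmation instead of being written, which is what keeps a hallucinated field from ever corrupting the pipeline data.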
Claude Code Context Database: Persistent Memory Across Sessions
One of the most strategically important features of the entire build isn't user-facing at all — it's the context database that Claude Code itself reads and writes to between sessions.
Anyone who has used AI coding assistants extensively knows the pain: you start a new session, and the AI has no memory of what was built, what decisions were made, or where things left off. You burn tokens and time re-explaining the project architecture, the file structure, the naming conventions, the business logic. On airplane WiFi with limited bandwidth and a phone keyboard, that kind of ramp-up overhead wasn't just inefficient — it was a dealbreaker.
The solution was a structured context database that Claude Code reads at the beginning of every new session and logs to at the end of each session. It captures:
Project architecture and file structure — what exists, where it lives, how components relate to each other
Decisions and rationale — not just what was built but why specific technical choices were made (e.g., why SQLite over PostgreSQL, why a specific prompt template structure for Ollama)
Current state and progress — which features are complete, which are in progress, what's queued next
Known issues and edge cases — bugs encountered, workarounds applied, areas flagged for future improvement
Conventions and patterns — naming conventions, error handling patterns, API response structures, and other standards established during the build
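The categories above map naturally onto a handful of small tables. This is a speculative sketch of what such a context database could look like; the actual schema from the build is not public, and all names here are illustrative:

```python
import sqlite3

# One table per category of institutional knowledge, each row
# attributable to the session that wrote it. Illustrative schema only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sessions (
        id INTEGER PRIMARY KEY,
        started_at TEXT,
        ended_at TEXT,
        summary TEXT
    );
    CREATE TABLE decisions (
        id INTEGER PRIMARY KEY,
        session_id INTEGER REFERENCES sessions(id),
        decision TEXT NOT NULL,
        rationale TEXT
    );
    CREATE TABLE progress (
        id INTEGER PRIMARY KEY,
        feature TEXT NOT NULL,
        status TEXT CHECK (status IN ('queued', 'in_progress', 'complete'))
    );
    CREATE TABLE known_issues (
        id INTEGER PRIMARY KEY,
        description TEXT NOT NULL,
        workaround TEXT
    );
""")

conn.execute(
    "INSERT INTO decisions (decision, rationale) VALUES (?, ?)",
    ("SQLite over PostgreSQL", "single-user tool; zero-config and fast"),
)
count = conn.execute("SELECT COUNT(*) FROM decisions").fetchone()[0]
print(count)  # 1
```

Storing rationale alongside each decision is the key design choice: a future session does not just learn *what* was chosen, it learns *why*, so it will not re-litigate settled questions.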
The effect is transformative. When a new Claude Code session starts, it reads the context database and immediately understands the full scope of the project as if it had been there from the beginning. There's no "let me look at your codebase" warm-up period. No repeating yourself. No wasted tokens on context that's already been established. The AI picks up exactly where the last session left off, with full awareness of every architectural decision and implementation detail.
This is especially critical on a build like this one, where sessions were interrupted by WiFi dropouts, turbulence-induced pauses, and the natural rhythm of working in focused sprints from a phone. Each reconnection was seamless because the context database bridged the gap. What would have been a frustrating cycle of re-explanation became a fluid, continuous development process — even across dozens of separate sessions during the flight.
The context database also serves a long-term purpose beyond the initial build. Future maintenance, feature additions, and debugging sessions all benefit from the accumulated institutional knowledge. It's essentially a living technical document that evolves with the project, written by the AI for the AI, in a format optimized for machine consumption rather than human readability.
The Technical Architecture
Backend: Python + SQLite on a VPS
FastAPI handles the web layer, delivering sub-50ms response times for standard CRUD operations. SQLite serves as the database — a deliberate choice for a single-user/small-team tool, where its simplicity, zero configuration, and raw speed outperform heavier alternatives. The schema is carefully normalized to support the relational complexity of contacts, organizations, commitments, pipeline stages, and activity logs without sacrificing query performance.
AI Layer: Ollama (Sandboxed)
Ollama runs directly on the VPS in a sandboxed environment, providing local LLM inference without external API dependencies. This means zero latency to a third-party service, complete data privacy (no sponsorship data ever leaves the server), and the ability to fine-tune prompt behavior for the specific domain. The AI layer communicates with the Python backend through an internal API, receiving natural language input and returning structured actions that the backend validates and executes.
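The backend-to-Ollama handoff can be sketched against Ollama's standard REST endpoint (`POST /api/generate`, with `format: "json"` to constrain the model to valid JSON). The system prompt and action format below are assumptions; only the endpoint and request fields are standard Ollama:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

# Illustrative system prompt; the real one would encode far more domain rules.
SYSTEM_PROMPT = (
    "You are a sponsorship CRM assistant. Respond ONLY with a JSON object "
    'of the form {"action": ..., "fields": {...}}.'
)

def build_request(user_prompt: str, model: str = "llama3") -> dict:
    return {
        "model": model,
        "prompt": f"{SYSTEM_PROMPT}\n\nUser: {user_prompt}",
        "format": "json",   # ask Ollama to emit syntactically valid JSON
        "stream": False,    # single response body instead of chunks
    }

def ask_ollama(user_prompt: str) -> dict:
    """Send a prompt to the local Ollama instance, return the parsed action."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(user_prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return json.loads(body["response"])  # the model's JSON action

payload = build_request("Move Pepsi to Negotiating")
print(payload["format"])  # json
```

Because the call goes to localhost, the round trip is bounded by inference time alone; no sponsorship data ever crosses the network boundary.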
Context Persistence Layer
The Claude Code context database sits alongside the application database on the VPS, storing structured session logs, architectural decisions, and project state in a format optimized for rapid ingestion at session start. A pre-session read and post-session write hook ensures continuity is automatic rather than manual — no discipline required, no steps to forget.
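The read/write cycle itself reduces to two small functions. In the sketch below both are plain functions over a single state table; in the real build they would be wired into the session lifecycle as hooks, and all names here are illustrative:

```python
import json
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE session_log (
        id INTEGER PRIMARY KEY,
        ended_at TEXT NOT NULL,
        state TEXT NOT NULL    -- JSON blob of project state
    )
""")

def post_session_write(state: dict):
    """Hook run at session end: snapshot the project state."""
    conn.execute(
        "INSERT INTO session_log (ended_at, state) VALUES (?, ?)",
        (datetime.now(timezone.utc).isoformat(), json.dumps(state)),
    )

def pre_session_read() -> dict:
    """Hook run at session start: load the most recent snapshot."""
    row = conn.execute(
        "SELECT state FROM session_log ORDER BY id DESC LIMIT 1").fetchone()
    return json.loads(row[0]) if row else {}

post_session_write({"in_progress": "AI prompt integration",
                    "next": "pipeline duration reports"})
resumed = pre_session_read()
print(resumed["in_progress"])  # AI prompt integration
```

The empty-dict fallback in pre_session_read means the very first session behaves identically to every later one, which is part of what makes the continuity "automatic rather than manual."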
Infrastructure: VPS + SSH
The entire system lives on a single VPS, keeping operational costs minimal and deployment dead simple. The server runs behind standard security hardening — SSH key authentication, firewall rules, fail2ban — and the application is served through a reverse proxy with TLS termination.
The Build Process: iPhone, SSH, Airplane WiFi
The Setup
The return flight from Mexico City (MEX) to Charleston (CHS) presented roughly four hours of uninterrupted focus time. The development environment consisted of an iPhone running a terminal/SSH client, connected to the VPS over the aircraft's WiFi. Claude Code was already installed on the server alongside the Ollama instance.
How Claude Code Changed Everything
Claude Code served as the primary development engine. Rather than manually typing Python code character-by-character on a phone keyboard, the workflow was conversational and iterative:
Claude Code reads the context database and immediately understands the current project state
Describe the feature or component in natural language via the Claude Code interface
Claude Code generates the implementation — production-quality code with error handling, input validation, and documentation
Review the output on the phone screen, request modifications or approve
Claude Code writes tests, runs them, and reports results
Session context is logged to the database before moving on
This cycle repeated dozens of times across the flight. Database schemas were defined, API routes were built, the AI prompt integration was wired up, and the entire system was tested — all through a conversational interface that made an iPhone a viable development terminal. And thanks to the context database, every WiFi dropout or session restart cost zero ramp-up time.
Overcoming the Constraints
Airplane WiFi is notoriously unreliable. SSH sessions were maintained through persistent connections (tmux on the server side), so brief connectivity drops didn't kill progress. Claude Code's on-server execution meant the heavy lifting — file I/O, test execution, dependency management — never depended on the WiFi connection's bandwidth. The phone was essentially a thin client sending text commands and receiving text responses. And the context database meant that even when sessions did die and restart, continuity was instant — no tokens wasted catching the AI back up.
Results
By the time the plane touched down in Charleston, the system was live:
Full contact and organization management with sponsorship-specific fields and relational data modeling
Commitment tracking with type classification, dollar values, timelines, and fulfillment status
Complete pipeline workflow with stage history, duration tracking, and transition logging
Working AI prompt interface capable of parsing natural language into CRM actions via the local Ollama instance
Persistent context database ensuring Claude Code maintains full project awareness across sessions with zero token waste
Test coverage across all core API endpoints and database operations
Production deployment behind a reverse proxy with TLS
The total development time was approximately four hours of active work. No laptop. No desktop IDE. No stable broadband connection. Just an iPhone, an SSH session, and Claude Code — with a context database that made every session as productive as the first.
Key Takeaways
1. The Development Environment Is No Longer the Bottleneck
The traditional assumption — that serious software development requires a powerful local machine, multiple monitors, and a full IDE — is increasingly outdated. With AI-assisted coding tools running server-side, the client device becomes almost irrelevant. An iPhone was sufficient because the intelligence lived on the server.
2. Context Persistence Is the Multiplier Nobody Talks About
The context database was arguably the highest-leverage feature of the entire build. AI coding tools are powerful, but their stateless nature creates a hidden tax: every new session starts from zero, and the developer pays for re-establishing context in both time and tokens. By giving Claude Code a persistent memory layer, every session started at full speed. For a build happening over spotty airplane WiFi with frequent interruptions, this wasn't a nice-to-have — it was the difference between shipping and not shipping.
3. AI Coding Tools Excel at Greenfield Builds
Claude Code's strength was most apparent in the rapid scaffolding and implementation of a well-defined system. When you can clearly articulate what you want, the AI delivers production-quality code at a pace no human developer could match on a phone keyboard.
4. Local AI (Ollama) Is a Game-Changer for Data-Sensitive Applications
Running the LLM on the same server as the application eliminates an entire category of concerns: data privacy, API costs, latency, and third-party dependency. For a sponsorship CRM handling sensitive business relationships and financial commitments, keeping everything on-server isn't just convenient — it's a competitive advantage.
5. Constraints Can Drive Better Architecture
Being forced to work from a phone actually encouraged better architectural decisions. There was no temptation to over-engineer, no time wasted configuring IDE plugins. Every interaction with Claude Code was purposeful and outcome-oriented. The result is a system that is lean, well-structured, and free of unnecessary complexity.
6. The Future of Development Is Conversational
The gap between "having an idea" and "having a working system" can collapse to hours when the development process is conversational rather than mechanical. The skill that mattered most wasn't typing speed or memorized syntax — it was the ability to clearly describe intent and make good architectural decisions.
Conclusion
What began as a productive use of flight time became a proof of concept for something larger: the viability of building serious, production-grade software tools using nothing but a mobile device and AI-powered development tools. The sponsorship CRM that resulted isn't a toy or a demo — it's a fully functional system that handles real workflows with an integrated AI assistant that understands the domain.
The context database may be the quietest innovation in the stack, but it's the one with the broadest implications. It solves the fundamental friction of AI-assisted development — the statelessness problem — and turns what would be a series of disconnected sessions into a continuous, cumulative build process. It's the reason a four-hour flight with unstable WiFi was enough to ship a production system.
The constraints of the build — iPhone screen, airplane WiFi, no IDE — didn't limit the quality of the output. They simply changed the interface through which a capable developer communicated intent to an equally capable AI collaborator. And the system that emerged from that collaboration is faster, more private, and more tailored to its purpose than any off-the-shelf alternative could be.
The flight cost less than a month of most SaaS CRM subscriptions. The CRM it produced will outlast all of them.
Built at 35,000 feet. Shipped on landing. Remembered everything.

