How Our Arkeo Dev Team Built a Custom AI Coding Agent (and What SaaS Founders Can Learn)
May 13, 2025

David Brennan, MBA
At Arkeo, we don’t just talk about AI innovation; we build it. In our latest episode of SaaS Founder Stories, I shared a behind-the-scenes look at how we developed a custom AI coding agent to solve real pain points in our own dev workflow. No guest this time, just me walking through how and why we created an internal tool that connects directly to our GitHub repos, integrates with any LLM, and keeps our team operating at top speed.
This wasn’t about chasing hype. It was about solving concrete problems: out-of-date documentation, fragmented tribal knowledge, and a growing need for dev teams to move faster without sacrificing quality. In this post, I’ll break down what we built, how it works, and the practical insights SaaS founders can use to future-proof their engineering orgs.
Key Takeaways
A custom AI agent can supercharge developer productivity by embedding real-time context from your codebase into every query.
Tribal knowledge loss is a silent killer for scaling SaaS teams—this framework helps capture and retain it automatically.
Security must be built-in from day one, especially when your AI has access to sensitive IP across your repos.
Code assistants aren't just reactive anymore—ours helps with project planning, documentation, and technical scoping.
You don’t need to wait for an off-the-shelf solution. Building your own agent is more achievable than most think—and the payoff is massive.

How We Built the Arkeo Coding Agent (And Why It Was Necessary)
Our engineering team was facing a familiar but growing challenge: multiple evolving codebases across products like Arkeo, Lynda AI, and our core infrastructure, all developing quickly, with different contributors, and often lacking up-to-date documentation.
We tried off-the-shelf GPT tools. They helped, sometimes. But they couldn’t keep up with how fast our stack changed. The moment our code shifted, the assistant was outdated. That was the bottleneck.
So we asked ourselves:
What if we had an internal AI agent that always knew our code—front to back—and could scale with us?
That idea kicked off a two-week sprint to build exactly that:
We started with a monorepo setup to unify our codebase.
We stood up an MCP server, making it the core routing layer between our code and the AI.
Then we added a vector store and embedding pipeline so the model could understand context, not just text.
Authentication and role-based access were critical—because if this thing was going to be used daily, it needed to be secure.
Finally, we created a simple chat UI interface to make interactions fast, intuitive, and dev-friendly.
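To make the vector-store step concrete, here is a minimal, illustrative sketch of the chunk-and-embed side of such a pipeline. This is not Arkeo's actual implementation: the hashed-token `embed` function is a toy stand-in for a real embedding model, and all names (`chunk_file`, `ingest`) are hypothetical.

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    """Toy stand-in for a real embedding model: hash each token into a
    bucket of a fixed-size vector, then unit-normalize. Captures rough
    word overlap only -- swap in a real encoder in practice."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk_file(path: str, source: str, max_lines: int = 40) -> list[dict]:
    """Split a source file into fixed-size line chunks, keeping the
    file path and start line so answers can cite where the code lives."""
    lines = source.splitlines()
    return [
        {"path": path, "start": i + 1, "text": "\n".join(lines[i:i + max_lines])}
        for i in range(0, len(lines), max_lines)
    ]

def ingest(files: dict[str, str]) -> list[dict]:
    """Build an in-memory 'vector store': one embedded record per chunk."""
    store = []
    for path, source in files.items():
        for chunk in chunk_file(path, source):
            chunk["embedding"] = embed(chunk["text"])
            store.append(chunk)
    return store
```

A production version would persist these records in a real vector database and use a hosted embedding model, but the shape of the pipeline (chunk, embed, store with provenance) is the same.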
This wasn’t a theoretical experiment. From Day 1, we were solving real pain:
Helping developers understand code they didn’t write.
Automatically generating dev docs.
Speeding up technical requirement planning.
Running code queries that used to take 30 minutes—in seconds.
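The "code queries in seconds" flow is standard retrieval-augmented generation: embed the question, rank stored chunks by similarity, and hand the best matches to the model as context. Here is a hedged sketch under the same toy-embedding assumption as above; the function names and prompt wording are illustrative, not Arkeo's actual code.

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    """Toy hashed-token embedding (stand-in for a real model)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def top_k(store: list[dict], question: str, k: int = 3) -> list[dict]:
    """Rank stored chunks by cosine similarity to the question.
    Vectors are unit-normalized, so a dot product suffices."""
    q = embed(question)
    ranked = sorted(
        store,
        key=lambda c: sum(a * b for a, b in zip(q, c["embedding"])),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str, chunks: list[dict]) -> str:
    """Assemble the retrieved chunks into the context the LLM sees."""
    context = "\n\n".join(f"# {c['path']}\n{c['text']}" for c in chunks)
    return f"Answer using only this code:\n\n{context}\n\nQuestion: {question}"
```

The speedup comes from this retrieval step: instead of a developer grepping across repos for half an hour, the agent narrows the codebase to a handful of relevant chunks before the model ever sees the question.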
It wasn’t perfect, but it was instantly useful. And the most exciting part? It’s just the beginning.
Our AI Strategy for the Next 12 Months
Building the agent was just the first step. Now, we’re focused on turning it into a cornerstone of how we develop software at Arkeo. The strategy is simple: make AI feel like a natural part of every engineer’s workflow—not a bolt-on tool, but a deeply integrated layer.
Here’s where we’re focused for the year ahead:
Continuous Ingestion & Real-Time Context
Every commit, every change, every new feature gets automatically ingested. Our vector store updates daily (or instantly via hooks), so the agent always has the latest version of our code. No more stale context. No more “outdated snapshot” issues.
Role-Based Security and Permissions
Not every developer—or AI query—should see the entire codebase. We’ve implemented strict authentication, so access is scoped based on roles. It’s secure by default, which is non-negotiable for us.
Modular LLM Integration
We’re model-agnostic. Today it’s OpenAI or Anthropic; tomorrow, it could be something else. The architecture supports plug-and-play LLMs, giving us flexibility as new models emerge.
Cursor & Tooling Integrations
We’ve already connected the agent to Cursor for seamless in-editor queries. Next up: building out a layer of custom tools so the agent can go beyond Q&A and start performing actions—initiating pull requests, proposing architecture patterns, even auto-generating test coverage.
Scaling the Agent Layer
Long-term, we’re turning this into more than just a chatbot. We see this as the foundation for an intelligent dev layer that can plan, code, refactor, and document—almost like a virtual staff engineer embedded in the repo.
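The "plug-and-play LLMs" idea usually comes down to one thing: the agent depends on a small interface, not a specific vendor SDK. Here is one way that could look, as a minimal sketch; the class names and the `FakeLocalModel` backend are hypothetical, and a real adapter would wrap the OpenAI or Anthropic SDK behind the same signature.

```python
from typing import Protocol

class LLMClient(Protocol):
    """The one method every model backend must provide."""
    def complete(self, prompt: str) -> str: ...

class FakeLocalModel:
    """Stand-in backend for illustration. A production adapter would
    call a vendor SDK here, keeping the same .complete() signature."""
    def complete(self, prompt: str) -> str:
        return f"[fake answer to: {prompt[:40]}]"

class CodingAgent:
    """Depends only on the LLMClient protocol, so backends can be
    swapped without touching any agent logic."""
    def __init__(self, llm: LLMClient):
        self.llm = llm

    def ask(self, question: str, context: str = "") -> str:
        prompt = f"{context}\n\nQuestion: {question}".strip()
        return self.llm.complete(prompt)
```

With this shape, switching models is a one-line change at construction time, which is what makes "today it’s OpenAI or Anthropic; tomorrow, it could be something else" cheap in practice.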
Why does this matter? Because in SaaS, speed and context win. And if your competitor is using AI to ship twice as fast, that’s not a future problem, it’s a now problem.
5 Biggest Lessons for SaaS Founders
This build wasn’t just a cool internal project, it reshaped how we think about dev productivity and team scalability. Here are the biggest lessons I’d share with any SaaS founder thinking about integrating AI into their engineering stack:
1. Start Simple. Build What You Need.
You don’t need to launch a moonshot on Day 1. Our MVP was a monorepo, a vector store, and a chat interface. That was enough to generate real value—and set the stage for future automation. The key is to solve your team’s pain, not chase generic AI trends.
2. Treat Security as a Core Feature.
If your agent has access to your proprietary code, security isn’t optional. Role-based access, authentication layers, and scoped visibility must be built in from the start. AI shouldn’t compromise your IP; it should protect it.
3. Automate the Pain Points First.
For us, it was outdated documentation and inaccessible tribal knowledge. Automating code ingestion and embeddings gave the agent real, useful context—and took pressure off the team. Ask yourself: what’s the most annoying, repetitive thing your devs deal with? Start there.
4. AI Can Handle More Than Code.
We’re now using the agent for project planning, technical spec writing, and sprint breakdowns. When your model understands the full codebase, it becomes more than a code assistant—it becomes a thinking partner.
5. Think of It as Infrastructure, Not a Tool.
This isn’t a side project. It’s a foundational layer, like CI/CD, version control, or observability. If you build your AI assistant with that in mind, you’ll make better decisions around extensibility, integration, and long-term evolution.
The Realities of AI Adoption in SaaS
Let’s be clear: building and deploying an AI coding agent isn’t just “flip a switch and go.” There are real hurdles, and we’ve hit a few of them ourselves. Here’s what SaaS founders need to be ready for:
Rapid Model Updates = Constant Tension
The pace of LLM innovation is wild. New models bring better performance, but they can also break things or require major re-tuning. You’ll need to balance staying current with maintaining production stability. Our rule: isolate experimentation from core workflows, but keep testing what’s next.
Adoption Isn’t Automatic
Even the best tool fails without team buy-in. You’re not just installing software, you’re changing how devs work. We’ve had success hosting weekly “AI office hours” and setting up low-risk pilot use cases before asking people to rely on it day-to-day.
Tribal Knowledge Still Needs Feeding
Yes, our agent helps reduce knowledge loss, but only if it’s kept up to date. The ingestion system pulls fresh commits, but documentation still matters. We’re building habits around tagging architectural decisions and annotating major changes so the agent learns from what we build.
You Still Have to Build the Plumbing
Off-the-shelf tools might get you 60% of the way, but if you want something tailored, performant, and secure, you’ll need to invest in infrastructure. That’s not a blocker. It’s an opportunity to future-proof your stack.
AI can transform how your dev team operates, but only if you're intentional about setup, training, and culture. Treat it like any other system: build it, test it, support it, and evolve it with your team.
Final Thoughts & Key Takeaways
Reflecting on this build and sharing it on the podcast reinforced a belief I’ve had for a while: a custom AI coding agent isn’t a luxury for SaaS companies anymore. It’s becoming table stakes.
The dev teams that scale the fastest over the next few years won’t just write good code. They’ll build smart infrastructure around their code, starting with tools that help them retain knowledge, speed up planning, and reduce repetitive tasks.
If you’re a SaaS founder, my advice is simple: don’t wait. You don’t need a perfect product or a team of ML engineers to get started. Build a vector store. Connect your repos. Add security. Start asking questions and see where it takes you.
AI doesn’t replace your team; it makes them exponentially more capable. But only if you give them the right tools.
Want to implement a custom AI coding agent for your team?
We’re offering technical assessments for SaaS teams interested in building their own internal agent, complete with vector search, repo ingestion, role-based access, and LLM integration.
Book an assessment and we’ll walk you through the exact framework we used.
Thanks for reading. Now go build something bold.