
I thought I had a content creation problem.
I'd been making content decisions based on gut feeling for over 260 issues of the SPARK. "This feels like a good topic." "I haven't written about that in a while." The kind of planning that takes three hours of scrolling through competitor posts and checking Reddit threads and ends with a vague sense that... something is trending? Maybe?
So when I found Dheeraj Sharma's article on building a Content Gap Analyzer using Claude Code, I thought: yes. This is the thing. An AI agent that audits what I've published, scrapes what competitors are covering, researches trending topics, and scores it all into a prioritized list of opportunities. In two minutes instead of three hours.
I built it. Ran it. Got a 629-line strategic analysis back.
And then the interesting part happened.
Before I get into the results — credit where it's due. The Content Gap Analyzer is part of Dheeraj's PubFlow OS Agents series on GenAI Unplugged. He built it. His article walks through the full technical setup. I'm writing this because I want you to see what it's actually like to follow someone else's blueprint as a non-developer — the wins, the walls, and the "wait, that URL doesn't exist anymore" moments.
The agent itself runs inside Claude Code (the terminal-based version of Claude). It connects to two external tools through something called MCP servers: Perplexity for trend research and Firecrawl for competitor scraping. You feed it three files about your business: who you are, what you publish, and who your competitors are, and it does the rest.
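For reference, project-level MCP servers in Claude Code are defined in a `.mcp.json` file at the project root. Here's a minimal sketch of what that configuration can look like; the server commands and package names below are illustrative assumptions, not the exact ones from Dheeraj's article (and Perplexity's connection method has already changed at least once):

```json
{
  "mcpServers": {
    "perplexity": {
      "command": "npx",
      "args": ["-y", "perplexity-mcp-server"],
      "env": { "PERPLEXITY_API_KEY": "your-key-here" }
    },
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "your-key-here" }
    }
  }
}
```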
The output is a scored, prioritized gap analysis. Not just "here are some topics." Actual opportunity scores based on demand, competition, fit with your audience, and timing.
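To make "scored" concrete, here's a toy version of how an opportunity score could weigh those four factors. This is my own illustrative sketch, not Dheeraj's actual formula; the weights and the 0-10 scales are assumptions.

```javascript
// Toy opportunity score: an illustrative sketch, NOT the agent's real formula.
// Each factor is assumed to be on a 0-10 scale; lower competition is better.
function opportunityScore({ demand, competition, fit, timing }) {
  return (
    demand * 0.35 +             // how many people want this content
    (10 - competition) * 0.25 + // thinner competition scores higher
    fit * 0.25 +                // match with your audience and expertise
    timing * 0.15               // is it trending right now?
  );
}

// A high-demand, low-competition topic that fits your audience
// lands near the maximum score of 10.
opportunityScore({ demand: 10, competition: 0, fit: 10, timing: 10 });
```

Whatever the real weighting is, the point stands: the agent isn't just listing topics, it's ranking them against each other.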
I set this up on my laptop first, using the Claude Code Chrome extension. It walked me through step by step, and everything worked. And for what it's worth, I did this on a plane. 😂 I was heading home from Costa Rica, had already napped a little (early flight then a 7-hour drive from LA to Northern California), and figured, why not? I pushed it to Git and pulled everything down to my desktop Saturday morning.
And then: nothing worked. Of course.
OK, it's not that nothing worked; the connections to Perplexity and Firecrawl were broken. As frustrating as that can be, troubleshooting alongside Claude and Claude Code makes it completely doable (I wouldn't know where to start without them).
Wall #1:
Claude Code installed fine, but when I tried to run it, my computer couldn't find it. It was like installing an app, only for it not to show up anywhere. I described the problem to Claude in Cursor, and it found the issue and fixed it in about ten seconds. I couldn't tell you what it actually did — something about creating a shortcut to where the file was hiding.
The point is: I described the problem in plain English, and the AI solved it. I didn't need to understand the fix. I just needed to say, "This isn't working."
Wall #2:
The connection to Perplexity was dead. This is the one I want you to hear. The article I followed used a specific web address to connect to Perplexity's search tool. By the time I set it up, that address didn't exist anymore. Gone. Not broken — gone. Perplexity had completely changed how the connection works. We had to find the updated method by reading their current docs.
This is the reality of building with AI right now. The tutorial is correct when it's published. By the time you follow it, something has changed. That's not a reason to stop. It's a reason to expect it and not panic when it happens.
Wall #3:
Firecrawl got saved in the wrong place. I'd added it to my global settings when it belonged in project-level settings, so Claude Code couldn't see it. Moved it. Fixed.
Three walls. None of them was "this is too hard for a non-developer." All of them were "this changed, this didn't connect, this went to the wrong folder." Normal building stuff.
Once both MCP servers showed green (the satisfying part), I typed the command and waited about 90 seconds.
The agent came back with a 629-line report covering my specific content pillar. Ten scored content gaps, competitive positioning analysis, trending topics, SEO keyword opportunities, series suggestions, and a 90-day content calendar. 🤯
What jumped out wasn't the individual topics... it was the pattern.
The agent found 10 scored content gaps, and the top five all had something in common: they were things I already do every single day, but had never become dedicated content.
Tools I use constantly but haven't written guides for. Processes I walk clients through but haven't published. Questions my audience asks me directly that I answer in DMs but never in searchable posts. Entire categories where the competition is surprisingly thin... and I'm already the person doing the work.
The gap analyzer didn't tell me to go learn something new. It told me to teach what I'm already doing. That distinction matters.
And one keyword data point made me stop scrolling: a term I use casually in almost every post had 1,900 monthly searches and almost no competition at my audience level. The agent basically said: You can own this. Nobody else is claiming it.

The gap analysis was good.
What happened next was better.
I took the report into Claude Desktop and activated Cowork, which lets Claude browse your actual files and the web. And Cowork did something the gap analyzer couldn't: it went to my live Substack archive and counted.
The agent had seen 25 pieces of content in my local workspace files. Cowork found 62+ published pieces. Thirty-seven published Substack posts had no corresponding file in my workspace. Seven things I'd marked as "drafts" were actually already published. The gap analyzer had been working with incomplete data.
And that's when the reframe hit: I don't have a content creation problem. I have a distribution problem.
55+ polished, published pieces living exclusively on Substack. They're not feeding my site's domain authority. I can't build internal linking structures around them or control how they connect to my offers. They're doing work on someone else's platform instead of mine.
The gap analyzer told me what I was missing. Cowork told me what I already had. The combination changed the entire strategy.

Here's the thing about a 629-line report: it's comprehensive.
It's also a wall of text.
I'm a visual person. I need to see the relationships, the numbers side by side, the priorities ranked and color-coded before anything clicks into action. So I took the raw analysis and did what I do with everything now — I described what I wanted in plain language.
"Take this audit data and turn it into a visual dashboard. Show me the core insight, the numbers that matter, the corrections between what the agent estimated and what actually exists, and rank my strategic priorities."
That's it. That was the prompt. And what came back was a fully interactive dashboard — tabbed sections, color-coded metrics, comparison tables, and ranked action items. Not because I coded it. Because I described it.

Then I asked for a document version I could drop into my operations hub in Cursor, i.e., my second brain, where every project, every strategy, every reference doc lives. Same data, different format, ready to be referenced the next time I sit down to plan content.
This is the part that doesn't get talked about enough: the agent gave me raw intelligence. But the step where I shaped that intelligence into something that matches how I actually think and work? That happened in a conversation. No new tool to learn. No export-import dance. Just "here's what I need to see," and there it was.
You can take anyone's process — Dheeraj's agent, a framework from a course, a strategy from a podcast — and use natural language to reshape the output into something that fits YOUR workflow. The methodology doesn't have to match how your brain works. The output does.
Once we saw the distribution problem, the next step was obvious. We needed to move those Substack posts to kimdoyal.com.
Cowork (with some help from Cursor when Cowork got hung up on sandbox limitations) built a Node.js sync script. It fetches my Substack RSS feed, checks what's already on my site via Supabase, strips out the Substack-specific elements (subscribe buttons, tracking pixels, footer boilerplate), generates slugs and SEO metadata, and creates drafts. Not published posts — drafts. I review each one, add internal links, tweak the meta descriptions, and publish when it's ready.
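To give a concrete sense of what a script like that does, here's a stripped-down sketch of the transform step. This is my reconstruction for illustration, not the actual script Cowork generated; the function names, CSS selectors, and field names are all assumptions.

```javascript
// Illustrative sketch of the sync script's transform step.
// Not the actual generated script; names and selectors are assumptions.

// Turn a post title into a URL slug.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "") // drop punctuation
    .trim()
    .replace(/\s+/g, "-");        // collapse spaces into hyphens
}

// Strip Substack-specific markup: subscribe widgets and tracking pixels.
function stripSubstackBoilerplate(html) {
  return html
    .replace(/<div class="subscribe-widget">[\s\S]*?<\/div>/g, "")
    .replace(/<img[^>]*substackcdn[^>]*>/g, "");
}

// Assemble a draft record, ready to insert into the CMS via Supabase.
function toDraft(post) {
  return {
    title: post.title,
    slug: slugify(post.title),
    body: stripSubstackBoilerplate(post.html),
    meta_description: (post.description || "").slice(0, 155),
    status: "draft", // never auto-publish; every post gets a human review
  };
}
```

The real script wraps this in an RSS fetch and a Supabase lookup so posts already on the site get skipped. The design choice that matters is `status: "draft"`: it keeps a human review between sync and publish.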

I ran it. 13 posts appeared as drafts in my CMS.
That happened on a Saturday afternoon while I was still jet-lagged from Costa Rica. (Curiosity is stronger than fatigue. Every time 😉.)
You don't need to build this exact agent. The tooling will change. It had already changed between the time Dheeraj published his article and the time I followed it. What won't change is the principle:
Most of us are sitting on content we've forgotten about. Posts that performed well on one platform but never made it to another. Expertise we share in newsletters that never becomes searchable blog content. Internal documents that are 80% of a published guide.
The gap analyzer found what I was missing. But the bigger insight was what I wasn't distributing.
If you want to build the agent yourself, Dheeraj's full technical walkthrough is the blueprint. You'll need Claude Code (comes with a Claude subscription), a free Perplexity API key, and a free Firecrawl API key. Total cost per analysis: about $0.30-$0.50.
And if the endpoints have changed by the time you read this?
That's not a failure.
That's Tuesday.