I Ran an AI-Powered SEO Audit on My Site. It Was Invisible to Google.
I’ve been writing blog posts for months. Fourteen of them, covering AI development workflows, engineering leadership, personal projects, even a family trip to Australia. I redesigned the site, moved to Astro, set up Cloudflare Pages, added JSON-LD structured data, Open Graph tags, a sitemap — all the things you’re supposed to do.
Then I ran an actual SEO audit and discovered that Google had never crawled a single page.
Zero clicks. Twenty-seven impressions. Every page listed as “URL is unknown to Google.” Three months of writing into a void.
Finding the Tool
I’d been using Claude Code for development work — blog posts, site features, agent projects — and noticed it had a set of SEO skills available as slash commands. One of them was /seo-analysis, part of an open-source toolkit called toprank that plugs into Claude Code’s skill system.
I typed /seo-analysis and it started walking me through setup. The skill connects to Google Search Console via the gcloud CLI, pulls real ranking data, runs URL inspections, crawls your pages for technical issues, and produces a structured audit report with specific, actionable recommendations.
The setup took about ten minutes: install gcloud, authenticate with Google, enable the Search Console API, and point it at my GSC property. Not trivial, but not hard either — the skill guided each step.
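Under the hood, the URL Inspection step maps onto the public Search Console API's `urlInspection/index:inspect` endpoint. Here's a rough sketch of the request shape — the `buildInspectionRequest` helper is illustrative, not toprank's actual code, and the token handling is shown only in a comment:

```typescript
// Sketch of a Search Console URL Inspection request. The endpoint and body
// shape come from the public API; buildInspectionRequest is a hypothetical
// helper, not part of toprank.
const INSPECT_ENDPOINT =
  "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect";

interface InspectionRequest {
  inspectionUrl: string; // the page to inspect
  siteUrl: string; // the GSC property, e.g. "sc-domain:quevin.com"
}

function buildInspectionRequest(
  page: string,
  property: string
): InspectionRequest {
  return { inspectionUrl: page, siteUrl: property };
}

// Actually sending it needs an OAuth token (e.g. from
// `gcloud auth print-access-token`):
//
// fetch(INSPECT_ENDPOINT, {
//   method: "POST",
//   headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
//   body: JSON.stringify(
//     buildInspectionRequest("https://quevin.com/about", "sc-domain:quevin.com")
//   ),
// });
```

The response is what surfaces verdicts like "URL is unknown to Google" — one call per page.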
What the Audit Found
The results were sobering.
The Site Was a Ghost
URL Inspection returned the same verdict for every page: “URL is unknown to Google.” No crawl history. No mobile usability data. No rich result detection. Nothing. Google didn’t know quevin.com existed.
The likely cause was a combination of two things: I’d never submitted my sitemap to Google Search Console (I assumed the robots.txt reference was enough), and Cloudflare’s bot protection may have been blocking automated crawlers — possibly including Googlebot.
Trailing Slash Chaos
GSC showed duplicate entries for the same pages: /about and /about/, /blog and /blog/, /experience and /experience/. Each pair had different position data, splitting whatever tiny amount of authority the site had earned. My Astro config had no trailingSlash setting, so both URL variants were being served without a canonical redirect.
Everything Else Was Actually Fine
Here’s the thing — the technical foundation was solid. Proper meta tags on every page. Open Graph and Twitter Card tags. JSON-LD structured data with Person, WebSite, BlogPosting, and BreadcrumbList schemas. Canonical URLs set. Security headers configured. The site was well-built. It just hadn’t told Google it existed.
What We Fixed
Claude and I worked through the audit’s 10-item action plan in a single session. I handled the two manual items (submitting the sitemap to GSC and verifying Cloudflare’s bot settings), and Claude implemented the rest directly in the codebase:
Trailing slash fix. Added trailingSlash: 'never' to the Astro config and created a Cloudflare _redirects file with 301 redirects from every trailing-slash URL to its non-trailing-slash canonical. This consolidates the duplicate URLs into single authoritative versions.
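For reference, the config change is a one-liner; this sketch uses my site's values, so adapt as needed:

```typescript
// astro.config.ts — serve only the non-trailing-slash variant of each URL
import { defineConfig } from "astro/config";

export default defineConfig({
  site: "https://quevin.com",
  trailingSlash: "never",
});
```

The Cloudflare `_redirects` file then handles the 301s, one rule per line in the form `source destination code` — for example, `/about/ /about 301`.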
Sitemap improvements. Added <lastmod> dates to all static pages in sitemap.xml.ts. The blog posts already had them, but the homepage, about page, experience page, and others were missing this signal that helps Google prioritize crawl frequency.
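A minimal sketch of what that looks like — the `buildSitemap` helper and page list here are illustrative, not the actual contents of `sitemap.xml.ts`:

```typescript
// Illustrative sitemap builder: every entry gets a <lastmod> date so
// Google has a freshness signal for static pages, not just blog posts.
interface SitemapPage {
  path: string; // e.g. "/about"
  lastmod: string; // ISO date of the last meaningful edit
}

function buildSitemap(site: string, pages: SitemapPage[]): string {
  const urls = pages
    .map(
      (p) =>
        `  <url><loc>${site}${p.path}</loc><lastmod>${p.lastmod}</lastmod></url>`
    )
    .join("\n");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${urls}\n</urlset>`
  );
}
```

In Astro, an endpoint like `src/pages/sitemap.xml.ts` would return this string from its `GET` handler.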
Schema enhancements. Added a modifiedDate field to the blog content collection schema and wired it into the BlogPosting JSON-LD as dateModified. This gives Google a freshness signal for every post — it falls back to publishDate when no modification date is set.
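The fallback logic is simple enough to show in a few lines. This is a sketch, not the site's actual component — the field names match what's described above, but the helper itself is hypothetical:

```typescript
// Illustrative BlogPosting JSON-LD builder with the dateModified fallback:
// when no modifiedDate is set, reuse publishDate so the field is always present.
interface PostMeta {
  title: string;
  publishDate: string; // ISO date
  modifiedDate?: string; // optional ISO date, added to the content schema
}

function blogPostingJsonLd(post: PostMeta): Record<string, string> {
  return {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    headline: post.title,
    datePublished: post.publishDate,
    // freshness signal: fall back to publishDate when no edit date exists
    dateModified: post.modifiedDate ?? post.publishDate,
  };
}
```

Dropping the result into a `<script type="application/ld+json">` tag is all the page itself needs.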
Blog index metadata. Updated the title from the generic “Writing — Kevin P. Davison” to “Technology Leadership & AI Blog” with a description that actually matches what people might search for: “Practical insights on AI-augmented development, legacy modernization, and engineering leadership.”
Internal cross-linking. Added “Related Reading” sections to 10 blog posts, creating 30 new internal links between topically related content. The AI-and-Claude posts now link to each other. The Quevin Bot series is connected chronologically. The leadership posts reference each other. This strengthens topic clusters and gives Google more paths to discover content.
Performance. Moved GA4 scripts from <head> to just before </body>, removing them as a render-blocking resource. A small change, but it affects every page load.
One commit. 19 files changed, 124 lines added, 22 removed.
What I Learned
The obvious thing nobody tells you: Building a technically sound website is necessary but not sufficient. If you don’t explicitly tell Google your site exists — submit your sitemap, verify your crawlability, check your Search Console — you can write great content forever and nobody will find it through search.
AI-powered audits are genuinely useful for solo operators. I could have learned all of this by studying SEO documentation for a few days. But the skill did in one session what would have taken me much longer: pull real data, cross-reference it, identify the actual problems (not hypothetical ones), and produce specific fixes. The trailing slash issue, for example — I wouldn’t have noticed that without seeing the duplicate entries in GSC data.
The fixes are boring and that’s the point. Nothing we did was clever. Submit the sitemap. Fix the trailing slashes. Add lastmod dates. Link your posts to each other. These are the blocking-and-tackling fundamentals that matter more than any advanced SEO technique. The audit’s value wasn’t revealing some secret — it was cutting through the noise to show exactly which fundamentals I’d missed.
What’s Next
Time will tell whether these changes actually move the numbers. Google needs to crawl the site first, then index the pages, then start showing them in search results. That process takes weeks, sometimes months for a new site with no external backlink profile to speak of.
I plan to re-run the audit in 30 days to see what’s changed. If indexing is working, the GSC data should be rich enough to do meaningful keyword analysis, CTR optimization, and content gap work — the stuff the audit wanted to analyze but couldn’t because there was no data to work with.
For now, the foundation is in place. The sitemap is submitted. The pages are crawlable. The content is cross-linked. The metadata is targeted. Whether that translates into organic traffic is the next chapter of this experiment.
Related Reading
- How I Use Claude Every Day — The daily AI workflow that made this audit a natural next step
- Context Engineering: Managing the Smart Zone — The framework for treating AI as an amplifier, not a replacement
- Migrating Quevin.com to Astro — The technical foundation this audit built on
About the Author
Kevin P. Davison has over 20 years of experience building websites and figuring out how to make large-scale web projects actually work. He writes about technology, AI, leadership lessons learned the hard way, and whatever else catches his attention—travel stories, weekend adventures in the Pacific Northwest like snorkeling in Puget Sound, or the occasional rabbit hole he couldn't resist.