How I Use Claude Every Day
I didn’t plan to become the person who talks about AI at work. It happened incrementally, the way most real changes do — not through a revelation but through a series of small problems that got easier to solve with a new tool.
This is a practical account of how Claude went from a thing I was curious about to something I use every day, across nearly every part of my job and a surprising amount of my personal life.
The Starting Point
My first interactions with Claude were exploratory — the way most people start. I’d ask it questions I already knew the answers to, testing whether it could keep up. I’ve been building websites since 1996. I’ve managed enterprise CMS platforms, Drupal migrations, Cloudflare infrastructure, and distributed teams across time zones. I wasn’t looking for a tutor. I was looking for a tool that could handle complexity without needing me to oversimplify the problem first.
What got my attention wasn’t a single dramatic moment. It was the accumulation of small wins: a deployment email that came back tighter than I wrote it, a configuration analysis that caught something I’d missed, a Jira ticket draft that actually captured the technical nuance I needed. Each one saved ten or fifteen minutes. Over weeks, that compounded.
Copy Editing: The Gateway
The single most common thing I use Claude for is editing my professional communications. This is the use case nobody talks about because it doesn’t sound impressive, but it’s the one that changed my daily workflow the most.
I write a lot of emails. Status updates to leadership. Vendor correspondence. Escalations. Cross-functional coordination across time zones. The writing itself isn’t hard — the calibration is. Am I being too direct for this audience? Too hedged? Did I bury the ask three paragraphs down? Is Tuesday actually the 14th, or did I get the date wrong again?
I built a custom Claude Skill specifically for this — a structured set of editing rules tailored to my organization’s communication patterns. It knows to eliminate hedge language, front-load conclusions, catch date-day mismatches, and make asks explicit. When I paste in a draft and say “copy edit this,” I get back a cleaner version with a summary of what changed and why.
This isn’t about grammar. It’s about communication precision under time pressure. When you’re managing projects across multiple teams and vendors, every unclear email creates a downstream clarification thread. Claude catches the ambiguity before it ships.
Claude Code: Where It Gets Interesting
The chat interface is where I started. Claude Code — the command-line tool — is where things shifted from “useful assistant” to “integrated part of how I build.”
Claude Code lives in my terminal. It understands my codebase. It can read files, analyze configurations, create branches, commit, and push. The difference between this and asking Claude a question in a browser tab is the difference between consulting a colleague and actually working alongside one.
I used this heavily during a series of website migration projects — converting legacy sites to modern static frameworks. The work involved content extraction, template creation, redirect mapping, and deployment configuration. Claude Code handled the mechanical parts: transforming hundreds of pages of content, generating component scaffolding, mapping URL structures. I handled the architecture decisions, the edge cases, and the things that required understanding why the legacy system worked the way it did.
This is where I learned the most important lesson about working with AI: context management matters more than prompt cleverness. Early on, I’d load massive codebases and sprawling requirements docs into the same session and wonder why the output degraded. The model wasn’t getting dumber — I was drowning it. Once I learned to separate research, planning, and implementation into distinct phases with clean context boundaries, the quality jumped significantly.
I wrote about this in a previous post on context engineering. The short version: treat the AI’s context window like a shared desk. The more junk you pile on it, the harder it is to find anything useful.
The Custom Skills Layer
One of Claude’s features that doesn’t get enough attention is Skills — reusable instruction sets that you can load into your workflow. Think of them as SOPs for the AI.
I’ve built several:
A copy editing skill for business communications, calibrated to my organization’s stakeholder dynamics and writing standards.

A Jira ticket generator that understands our project structure and produces tickets that actually pass definition-of-ready criteria.

A platform architecture guide that encodes the dual-system architecture of our web infrastructure, so new team members — and Claude itself — can answer questions about how the systems connect without someone explaining it from scratch every time.
These aren’t complex to build. They’re markdown files with structured instructions. But they solve a real problem: institutional knowledge that would otherwise exist only in someone’s head (usually mine) now persists in a format that’s both human-readable and machine-usable. When I’m on vacation, the skills still work.
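To make that concrete: a skill is just a markdown file with a short header and the rules themselves. The sketch below is a hypothetical, stripped-down version of my copy editing skill — the names and rules are illustrative, not my actual file:

```markdown
---
name: copy-edit-business
description: Copy edit professional emails for clarity and precision
---

# Copy Editing Rules

- Eliminate hedge language ("I think we could maybe...").
- Front-load the conclusion; the ask belongs in the first two sentences.
- Check every date against its stated day of the week.
- End with an explicit ask and a deadline.
- Summarize what changed and why after editing.
```

The value isn’t in the file’s sophistication. It’s that the rules are written down once, in one place, instead of being re-explained in every session.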
Beyond Work
The professional uses are the ones that justify the time investment. But Claude has seeped into the rest of my life in ways I didn’t expect.
I used it to plan a family trip to Australia — not to replace our travel agent, who was invaluable for the logistics a human handles best, but to organize the complexity around it. Coordinating flights across the International Date Line, structuring daily itineraries, tracking confirmation numbers. After the trip, I used it to organize photo metadata and build a structured dataset of our images by location and timestamp.
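That photo-organization step is mundane enough to sketch. Assuming timestamps and locations have already been extracted from EXIF into plain records (the field names and filenames here are hypothetical), grouping them into a dataset by location and day is only a few lines of Python:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical records, as they might come out of an EXIF extraction pass.
photos = [
    {"file": "IMG_001.jpg", "taken": "2024-06-12 09:14:03", "location": "Sydney"},
    {"file": "IMG_002.jpg", "taken": "2024-06-12 17:40:21", "location": "Sydney"},
    {"file": "IMG_003.jpg", "taken": "2024-06-14 11:02:55", "location": "Cairns"},
]

def group_by_location_and_day(records):
    """Index photo records by (location, date) so a day's shots stay together."""
    grouped = defaultdict(list)
    for rec in records:
        day = datetime.strptime(rec["taken"], "%Y-%m-%d %H:%M:%S").date()
        grouped[(rec["location"], day)].append(rec["file"])
    return dict(grouped)

dataset = group_by_location_and_day(photos)
```

The real work was in the extraction and cleanup, not the grouping — but having the end shape defined first made it easy to direct Claude toward it.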
I’ve used it to think through personal decisions — not for the answers, but for the pushback. I set my preferences so Claude challenges assumptions rather than agreeing with me. When I’m working through a career question or a home project plan, I want the counterargument I haven’t considered, not validation.
I also built a personal AI agent on my own infrastructure using a self-hosted platform, connected to my portfolio site’s repository. That was a weekend experiment in persistence and autonomy — exploring what happens when you give an agent access to real systems rather than just conversation. It’s early, imperfect, and the memory system still loses things between sessions. But the exercise of defining what I want an agent to actually know about my work has been as valuable as the tool itself.
What Actually Changed
I want to be specific about what’s different, because the general “AI makes me more productive” claim is meaningless without evidence.
Communication quality went up. My emails are shorter, clearer, and more actionable. I know this because the clarification-reply rate dropped. Fewer “what did you mean by this?” threads.
Development velocity increased on specific project types. Legacy-to-modern migration work — the kind with lots of repetitive transformation and well-defined patterns — is measurably faster. Architecture and design work is not. AI amplifies execution; it doesn’t replace judgment.
Documentation improved. This is the surprise. Because I’m encoding knowledge into Skills and CLAUDE.md files for the AI to use, I’m simultaneously creating better documentation for my team. The act of making something machine-readable forces a level of precision that casual tribal knowledge doesn’t demand.
My thinking got sharper. This one’s harder to measure. But the habit of having a collaborator that pushes back on weak reasoning — because I configured it to — has made me better at catching my own assumptions before I commit to them.
What Hasn’t Changed
AI didn’t make me a better leader. It didn’t solve the hard problems of managing distributed teams, navigating organizational politics, or making decisions with incomplete information. Those are human problems that require human skills.
It also didn’t eliminate the need for deep expertise. If anything, it widened the gap between knowing what to ask for and not knowing. The migrations worked because I understood the legacy systems well enough to direct the work. Without that understanding, the AI would have confidently produced plausible-looking garbage.
Claude is a power tool. Power tools are dangerous in untrained hands and transformative in skilled ones. The skill isn’t in using the tool — it’s in knowing what you’re building.
Where I’m Headed
My current focus is on figuring out what the AI-augmented development workflow actually looks like for a team, not just an individual. That’s harder. It involves aligning on prompting strategies across different tools, establishing code review practices that account for AI-generated code, and being honest about what’s working and what isn’t — on a weekly basis, because the landscape changes that fast.
I’m also exploring what happens when you give an agent enough context and autonomy to act on its own — watching for new work, evaluating it against documented standards, and taking action without someone sitting at the keyboard. That’s the next frontier, and it’s equal parts exciting and sobering. The security implications alone require more care than most people give them.
But today, right now, the daily reality is simpler than any of that. I open my laptop. I start a conversation. I get my work done a little faster, a little cleaner, and with fewer things falling through the cracks.
That’s not a revolution. It’s a better Tuesday.
About the Author
Kevin P. Davison is a Senior Technical Lead with over 20 years of experience building websites and figuring out how to make large-scale web projects actually work. He writes at quevin.com about technology, AI, leadership lessons learned the hard way, and whatever else catches his attention: travel stories, weekend adventures in the Pacific Northwest like snorkeling in Puget Sound, or the occasional rabbit hole he couldn’t resist.