We ran our company website on WordPress for years. Plugin conflicts, upgrade failures, and maintenance overhead were killing engineering time. We replaced it with Next.js, and Claude Code was the agentic tool that made it feasible in weeks.

Our company website ran on WordPress for three years. For the first eighteen months, it was fine. Then it became a liability.
Plugin conflicts, PHP upgrade failures, and theme incompatibilities consumed engineering time that should have been spent on our actual product. The final straw was a plugin update that silently broke our contact form and sat undetected for eleven days.
We made the decision to migrate to Next.js 15 with the App Router, TypeScript, and Tailwind CSS. We used Claude Code (Anthropic's CLI for Claude) as the primary tool to execute the migration. What would have taken a frontend developer six to eight weeks of focused effort took us under three weeks. The total elapsed time across the project was approximately 10 hours, of which roughly 3 hours was active prompting with Claude Code. The rest was Claude Code running, building, and iterating while I focused elsewhere.
This is the story of what we did, what we learned, and why we think it matters beyond our own website.
If you have run a WordPress site for longer than a year, the pattern will feel familiar.
You start with a solid theme and ten or twelve plugins. Each plugin ships on its own release cadence. WordPress core ships major versions roughly twice a year. PHP has its own lifecycle. WooCommerce, if you are using it, adds another layer. The combinations multiply quickly, and not all of them are tested together by any single vendor.
Our specific failure points were the ones described above: plugin version conflicts, PHP upgrade failures, theme incompatibilities, and silent regressions like the broken contact form.
Across twelve months, we tracked approximately four to five hours per week of reactive maintenance across our team: investigating alerts, testing updates in staging, rolling back, or applying patches. That is roughly 200 to 250 engineer-hours per year on infrastructure that was not differentiating our product.
The decision to migrate was not primarily about page speed. It was about eliminating a category of maintenance overhead entirely.
We evaluated three paths:
| Option | What appealed to us | What held us back |
|---|---|---|
| Stay on WordPress, harden it | Familiar, low migration cost | Root problem (plugin complexity) does not go away |
| Headless WordPress + Next.js frontend | Decouples frontend from WP plugins | WP still runs; still needs maintenance; double the surface area |
| Full migration to Next.js + Markdown | Eliminates WordPress entirely; content in Git | Migration cost; no WYSIWYG for non-technical editors |
We chose the third option. The content on our site (blog posts, agent pages, solutions pages) is primarily created by our team, not by external editors who need a GUI. Moving to Markdown files in a Git repo meant our content workflow became a pull request workflow: version-controlled, reviewable, and deployable alongside code changes.
The specific stack we landed on:

- Next.js 15 with the App Router and TypeScript
- Tailwind CSS for styling
- Markdown content versioned in the Git repository
- Docker deployment using Next.js standalone output

All blog content from WordPress was fetched via the WP REST API, converted to Markdown, and stored locally. After the migration, the WordPress dependency was fully severed.
The migration to Next.js is not technically hard. Any competent frontend developer could do it. What makes it expensive is the volume and breadth of work: designing a component system from scratch, migrating 170+ blog posts across 180+ total pages, building category and author pages, implementing structured data, handling image localization, setting up Docker, and then doing a full SEO audit to confirm nothing was lost.
Sequentially, with one engineer doing this alongside product work, the realistic timeline was eight to ten weeks.
We used Claude Code from the start, and it changed that estimate materially.
Claude Code is not an AI assistant you chat with. It is a CLI tool that runs in your terminal, has access to your entire codebase, can read and write files, run commands, inspect build output, and iterate on failures without waiting for you to copy-paste error messages back and forth. It operates more like a junior engineer who is always available, never loses context about the codebase, and can execute multi-step tasks with minimal human interaction.
The practical difference: when I asked it to "build a blog post card component that matches the brand tokens in tailwind.config.ts and shows a category badge, title, excerpt, and author," it read the config file, inspected the existing component patterns, wrote the component, and checked that the import paths resolved correctly. I reviewed the output. I did not write a line of the component.
That pattern (describe goal, review output, iterate once) held across the entire migration.
The component work included:

- The WaveCanvas animation (Framer Motion SVG paths) for the hero section
- WhyUsCard, UseCaseCard, FeaturedAgents, and the architecture diagram popup system

The frontend-design skill (a structured Claude Code skill that specializes in UI component generation against a design system) was the primary tool here. You provide it with a brand palette and a component description; it generates production-quality TypeScript components. The output still requires review and iteration, but the iteration cost is low.
The content migration pipeline consisted of:

- A fetch script (scripts/fetch-posts.mjs) that hit the WordPress REST API and pulled all 170+ posts as structured JSON
- Conversion of each post to Markdown with frontmatter (including its ogImage URL)
- An image script (scripts/download-images.mjs) that localized all remote image URLs to /public/blog/images/, converting them to WebP via sharp
- A static manifest (content/blog/_index.json) consumed statically by the build

The image localization step alone would have taken a full day manually. Claude Code wrote the script, ran it inside Docker, and handled the path-rewriting in one session.
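As a sketch of that path-rewriting step, here is roughly what the core of a script like download-images.mjs could do. The function name and regex are our assumptions for illustration, not the actual script:

```typescript
// Hypothetical sketch: rewrite every remote Markdown image URL to its
// localized path under /blog/images/. Assumes images are converted to
// WebP (via sharp) in a separate download step.
function localizeImageUrls(markdown: string, localDir = "/blog/images"): string {
  // Match Markdown image syntax: ![alt](https://...)
  return markdown.replace(
    /!\[([^\]]*)\]\((https?:\/\/[^)\s]+)\)/g,
    (_match, alt: string, url: string) => {
      const filename = url.split("/").pop() ?? "image";
      // The downloader stores WebP copies, so swap the extension
      const webpName = filename.replace(/\.(png|jpe?g|gif)$/i, ".webp");
      return `![${alt}](${localDir}/${webpName})`;
    }
  );
}
```

The real script also has to download each file and handle failures; the pure rewriting function above is the part that is easy to unit-test before letting it loose on 170+ posts.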
The SEO and infrastructure work covered:

- Dynamic routes: app/[slug]/page.tsx (blog post pages) and app/category/[...slug]/page.tsx
- generateMetadata() for every page type
- lib/schema.ts with helper functions for JSON-LD structured data: buildArticle(), buildOrganization(), buildBreadcrumb(), buildSoftwareApplication()
- app/sitemap.ts reading from the static manifest (no API calls at build time)
- next.config.mjs with the Content Security Policy header, redirect rules, and Docker-compatible standalone output

Structured data is one of those areas where small mistakes (wrong field types, missing required properties, incorrect @type values) produce silent failures that only surface weeks later in Google Search Console. Claude Code cross-referenced the schema.org specification as it built the helpers, flagging fields like price: '0' on a SoftwareApplication schema (which Google renders as "$0" in SERPs, not appropriate for an enterprise platform) and substituting priceType: 'InquiryPrice' instead.
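To make the helper pattern concrete, here is a hedged sketch of what a function like buildArticle() might look like. The input shape and field choices are assumptions, not the actual lib/schema.ts implementation:

```typescript
// Illustrative input shape -- not the real lib/schema.ts types.
interface ArticleInput {
  title: string;
  description: string;
  url: string;
  datePublished: string; // ISO 8601
  authorName: string;
  imageUrl: string;
}

// Build a schema.org Article object for JSON-LD embedding.
function buildArticle(post: ArticleInput) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: post.title,
    description: post.description,
    mainEntityOfPage: { "@type": "WebPage", "@id": post.url },
    datePublished: post.datePublished,
    author: { "@type": "Person", name: post.authorName },
    // url on the ImageObject is what Google Image Search eligibility needs
    image: { "@type": "ImageObject", url: post.imageUrl },
  };
}
```

The returned object is then serialized with JSON.stringify into a `<script type="application/ld+json">` tag on the page.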
After the build was stable, we ran a community SEO audit skill against the deployed Docker container. This skill runs a structured audit across a defined set of SEO dimensions: Core Web Vitals, meta tag completeness, structured data validity, canonical URL correctness, internal linking, image alt coverage, and mobile tap target sizing.
The audit surfaced six issues, including:

- The openGraph.images array was not being inherited from the root layout for pages that defined their own openGraph object (a Next.js 15 behavior that surprises most developers)
- `<a>` elements with placeholder href values left over from the component scaffold
- The ImageObject schema on blog posts was missing the url field (required for Google Image Search eligibility)
- Missing og:type, siteName, and locale fields on non-blog pages

Each of these would have been invisible without a systematic audit. Fixing them manually after discovery in Search Console would have meant waiting weeks between identification and validation. Running the audit skill before go-live compressed that loop to a single afternoon.
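The openGraph inheritance surprise comes from Next.js shallow-merging metadata: a page that defines its own openGraph object replaces the root layout's openGraph wholesale, so root-level images silently vanish. One way to fix it is a small helper that spreads shared defaults under every page's overrides (the helper name and default values here are ours, for illustration):

```typescript
// Shared defaults every page's openGraph should carry.
// Values are illustrative, not the site's real config.
const defaultOpenGraph = {
  siteName: "lowtouch.ai",
  locale: "en_US",
  type: "website",
  images: [{ url: "/og-default.png", width: 1200, height: 630 }],
};

// Merge page-specific fields over the defaults so a page-level
// openGraph never silently drops the root images array.
function withDefaultOpenGraph<T extends Record<string, unknown>>(overrides: T) {
  return { ...defaultOpenGraph, ...overrides };
}

// In a page's generateMetadata():
// return { title, openGraph: withDefaultOpenGraph({ title, type: "article" }) };
```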
There is a phrase we use internally: we eat our own cooking. lowtouch.ai builds agentic AI platforms for enterprises. Our core argument to customers is that agentic AI can take on complex, multi-step tasks (not just answer questions) with minimal human interaction, and that the human oversight model (developer commit review, engineering lead pull request approval) is the right governance layer.
Rebuilding our own website with Claude Code was, deliberately, a test of that argument.
The result was consistent with what we see in enterprise deployments. Claude Code did not replace engineering judgment. It did not make architectural decisions. It did not know, without being told, that our brand colour is #041250 (not the #0d1857 that a pixel-sampling analysis of our logo suggested). It did not catch, on its own, that using ?? for environment variable fallbacks would cause new URL('') to throw at Docker build time when NEXT_PUBLIC_SITE_URL was an empty string (the correct operator is ||).
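The operator distinction matters because ?? only falls back on null or undefined, while || also falls back on the empty string. A minimal reproduction of the build failure:

```typescript
// Simulating NEXT_PUBLIC_SITE_URL declared in the Dockerfile but empty.
const envValue: string | undefined = "";

const withNullish = envValue ?? "https://lowtouch.ai"; // "" -- ?? keeps the empty string
const withOr = envValue || "https://lowtouch.ai";      // falls back as intended

// new URL("") throws "Invalid URL", which is exactly the silent
// Docker build failure described above; || avoids it.
const siteUrl = new URL(withOr);
```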
What it did was eliminate the execution gap between a decision and its implementation. When we decided the blog post hero should use a two-column grid with the image filling the full column height, that decision went from statement to working code in minutes, not hours.
At the level of a website migration, the ROI is clear. At the level of an enterprise database migration or a multi-system integration, the same pattern scales with 12 or 20 specialized subagents instead of one general-purpose CLI session. The economics improve further.
Three weeks after starting the migration, the Next.js site went live. Here is where we landed:
| Metric | WordPress (before) | Next.js (after) |
|---|---|---|
| Google PageSpeed (mobile) | 61 | 94 |
| SEO score (audit tool) | 40/100 | 70+/100 |
| Time to First Byte | ~480ms | ~90ms |
| Total pages rebuilt | n/a | 180+ |
| Plugin dependencies | 18 | 0 |
| Weekly maintenance overhead | 4-5 hrs | ~20 min (content deploys) |
| Structured data errors in GSC | 7 (active) | 0 |
| Content in version control | No | Yes (Git) |
| Total elapsed migration time | n/a | ~10 hours |
| Active Claude Code prompting time | n/a | ~3 hours |
The SEO score improvement (40 to 70+) is real but incomplete. SEO is not a one-time task: title tag optimization, internal linking depth, page-level content improvements, and Core Web Vitals edge cases all take iterative attention. The audit got us from critically broken to functional; the ongoing work is what takes it further. More on that below.
The zero plugin dependency line is the one I care about most. It means our website can no longer break on a Tuesday because a plugin vendor shipped a bad minor release.
A few lessons from the migration that would save time on a repeat:

- Run the audit skill early. It caught the openGraph inheritance issue before it propagated across twenty pages.
- Document environment-specific gotchas as you hit them. The || vs ?? distinction for empty-string fallbacks cost two debugging hours during the Docker build. Once documented, it was obvious. Before documentation, it was a silent failure.

If you are a CTO or CIO evaluating where agentic AI delivers real ROI, a website migration is not the most exciting use case. It is, however, a useful calibration exercise.
The value of agentic AI is not that it replaces engineers. It is that it removes the execution cost from decisions. Every technology organization has a backlog of projects that are technically straightforward but perpetually deferred because the execution cost is too high relative to competing priorities. A website that needs to be rebuilt. A legacy reporting database that should be migrated. A test coverage gap that should be closed.
Agentic AI, applied correctly, compresses the execution cost without reducing the quality bar. The human judgment layer (architecture decisions, code review, stakeholder approval) remains essential. The AI layer absorbs the volume.
We rebuilt our website in three weeks. We used the same Claude Code patterns we recommend to our enterprise customers. The outcome was a faster, more maintainable site with better SEO and zero ongoing plugin debt.
That is not a coincidence. It is a proof point.
Rejith Krishnan is the Founder and CEO of lowtouch.ai, a private agentic AI platform for enterprises. lowtouch.ai is SOC 2 Type II and ISO/IEC 27001:2022 certified.
About the Author

Rejith Krishnan
Founder and CEO
Rejith Krishnan is the Founder and CEO of lowtouch.ai, a platform dedicated to empowering enterprises with private, no-code AI agents. With expertise in Site Reliability Engineering (SRE), Kubernetes, and AI systems architecture, he is passionate about simplifying the adoption of AI-driven automation to transform business operations.
Rejith specializes in deploying Large Language Models (LLMs) and building intelligent agents that automate workflows, enhance customer experiences, and optimize IT processes, all while ensuring data privacy and security. His mission is to help businesses unlock the full potential of enterprise AI with seamless, scalable, and secure solutions that fit their unique needs.