Case Study

How We Replaced WordPress with Next.js Using Claude Code

We ran our company website on WordPress for years. Plugin conflicts, upgrade failures, and maintenance overhead were killing engineering time. We replaced it with Next.js, and Claude Code was the agentic tool that made it feasible in weeks.

  • WordPress plugin conflicts consumed 4+ hours per week of engineering time
  • ~10 hrs elapsed, ~3 hrs of active Claude Code prompting to rebuild 180+ pages across 3 weeks
  • Core Web Vitals score improved from 61 to 94; SEO score from 40 to 70+ after the rebuild
  • frontend-design skill and community SEO audit skill handled UI and optimization passes
  • We eat our own cooking: an agentic AI company rebuilt its site with agentic AI

Executive Summary

Our company website ran on WordPress for three years. For the first eighteen months, it was fine. Then it became a liability.

Plugin conflicts, PHP upgrade failures, and theme incompatibilities consumed engineering time that should have been spent on our actual product. The final straw was a plugin update that silently broke our contact form and sat undetected for eleven days.

We made the decision to migrate to Next.js 15 with the App Router, TypeScript, and Tailwind CSS. We used Claude Code (Anthropic's CLI for Claude) as the primary tool to execute the migration. What would have taken a frontend developer six to eight weeks of focused effort took us under three weeks. The total elapsed time across the project was approximately 10 hours, of which roughly 3 hours was active prompting with Claude Code. The rest was Claude Code running, building, and iterating while I focused elsewhere.

This is the story of what we did, what we learned, and why we think it matters beyond our own website.


The WordPress Maintenance Trap

If you have run a WordPress site for longer than a year, the pattern will feel familiar.

You start with a solid theme and ten or twelve plugins. Each plugin ships on its own release cadence. WordPress core ships major versions roughly twice a year. PHP has its own lifecycle. WooCommerce, if you are using it, adds another layer. The combinations multiply quickly, and not all of them are tested together by any single vendor.

Our specific failure points were:

  1. The contact form plugin updated and began conflicting with our caching layer. The form appeared to work but silently dropped submissions into a queue that was never flushed. We discovered this when a prospect followed up via LinkedIn asking why we had not responded. They had filled the form eleven days earlier.
  2. The SEO plugin began generating malformed schema markup after a WordPress core minor update. Google Search Console flagged the errors three weeks later, by which point our structured data had been invalid for an entire crawl cycle.
  3. The page builder held our landing pages hostage behind a proprietary format. Every performance optimization required working through the builder's abstractions rather than touching HTML and CSS directly.

Across twelve months, we tracked approximately four to five hours per week of reactive maintenance across our team: investigating alerts, testing updates in staging, rolling back, or applying patches. That is roughly 200 to 250 engineer-hours per year on infrastructure that was not differentiating our product.

The decision to migrate was not primarily about page speed. It was about eliminating a category of maintenance overhead entirely.


Why Next.js (And Why Not the Alternatives)

We evaluated three paths:

| Option | What appealed to us | What held us back |
| --- | --- | --- |
| Stay on WordPress, harden it | Familiar, low migration cost | Root problem (plugin complexity) does not go away |
| Headless WordPress + Next.js frontend | Decouples frontend from WP plugins | WP still runs; still needs maintenance; double the surface area |
| Full migration to Next.js + Markdown | Eliminates WordPress entirely; content in Git | Migration cost; no WYSIWYG for non-technical editors |

We chose the third option. The content on our site (blog posts, agent pages, solutions pages) is primarily created by our team, not by external editors who need a GUI. Moving to Markdown files in a Git repo meant our content workflow became a pull request workflow: version-controlled, reviewable, and deployable alongside code changes.

The specific stack we landed on:

  • Next.js 15 with the App Router and React Server Components
  • React 19 with TypeScript throughout
  • Tailwind CSS v3 for styling
  • Markdown files with YAML frontmatter for all blog content
  • Docker multi-stage build to a standalone output

All blog content from WordPress was fetched via the WP REST API, converted to Markdown, and stored locally. After the migration, the WordPress dependency was fully severed.


Where Claude Code Changed the Equation

The migration to Next.js is not technically hard. Any competent frontend developer could do it. What makes it expensive is the volume and breadth of work: designing a component system from scratch, migrating 170+ blog posts across 180+ total pages, building category and author pages, implementing structured data, handling image localization, setting up Docker, and then doing a full SEO audit to confirm nothing was lost.

Sequentially, with one engineer doing this alongside product work, the realistic timeline was eight to ten weeks.

We used Claude Code from the start, and it changed that estimate materially.

Claude Code is not an AI assistant you chat with. It is a CLI tool that runs in your terminal, has access to your entire codebase, can read and write files, run commands, inspect build output, and iterate on failures without waiting for you to copy-paste error messages back and forth. It operates more like a junior engineer who is always available, never loses context about the codebase, and can execute multi-step tasks with minimal human interaction.

The practical difference: when I asked it to "build a blog post card component that matches the brand tokens in tailwind.config.ts and shows a category badge, title, excerpt, and author," it read the config file, inspected the existing component patterns, wrote the component, and checked that the import paths resolved correctly. I reviewed the output. I did not write a line of the component.

That pattern (describe goal, review output, iterate once) held across the entire migration.


The Migration in Practice: What Claude Code Actually Did

Phase 1: Component System and Layout

  • Read the existing WordPress theme to understand the visual design language
  • Built the Navbar, Hero, Footer, and CTA components from scratch in React and Tailwind
  • Implemented the WaveCanvas animation (Framer Motion SVG paths) for the hero section
  • Built WhyUsCard, UseCaseCard, FeaturedAgents, and the architecture diagram popup system
  • Set up the global CSS tokens, brand colour definitions, and typography scale

The frontend-design skill (a structured Claude Code skill that specializes in UI component generation against a design system) was the primary tool here. You provide it with a brand palette and a component description; it generates production-quality TypeScript components. The output still requires review and iteration, but the iteration cost is low.

Phase 2: Content Migration

  • Wrote and ran a Node.js script (scripts/fetch-posts.mjs) that hit the WordPress REST API and pulled all 170+ posts as structured JSON
  • Generated YAML frontmatter for each post (slug, date, categories, tags, excerpt, ogImage URL)
  • Wrote a second script (scripts/download-images.mjs) that localized all remote image URLs to /public/blog/images/, converting them to WebP via sharp
  • Generated a flat metadata manifest (content/blog/_index.json) consumed statically by the build

The image localization step alone would have taken a full day manually. Claude Code wrote the script, ran it inside Docker, and handled the path-rewriting in one session.
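The fetch-and-convert step can be sketched roughly as follows. This is a hedged illustration, not the actual script (which is a plain Node .mjs file); the WpPost shape and the toFrontmatter name are assumptions covering only what the sketch needs.

```typescript
// Hedged sketch of the scripts/fetch-posts.mjs logic, in TypeScript for
// illustration; field names below are assumptions about the post shape.
type WpPost = {
  slug: string;
  date: string;                     // ISO 8601, as returned by the REST API
  title: { rendered: string };
  excerpt: { rendered: string };
};

// Build the YAML frontmatter block written at the top of each Markdown file.
export function toFrontmatter(post: WpPost): string {
  const title = post.title.rendered.replace(/"/g, '\\"');
  return [
    '---',
    `title: "${title}"`,
    `slug: ${post.slug}`,
    `date: ${post.date}`,
    '---',
  ].join('\n');
}

// The real script pages through the REST API, roughly:
//   const res = await fetch(`${WP_URL}/wp-json/wp/v2/posts?per_page=100&page=${n}`);
//   const posts: WpPost[] = await res.json();
// ...then writes `${toFrontmatter(post)}\n\n${markdownBody}` for each post.
```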

Phase 3: Dynamic Routes and SEO Infrastructure

  • Built app/[slug]/page.tsx (blog post pages) and app/category/[...slug]/page.tsx
  • Implemented generateMetadata() for every page type
  • Built lib/schema.ts with helper functions for JSON-LD structured data: buildArticle(), buildOrganization(), buildBreadcrumb(), buildSoftwareApplication()
  • Set up app/sitemap.ts reading from the static manifest (no API calls at build time)
  • Configured next.config.mjs with the Content Security Policy header, redirect rules, and Docker-compatible standalone output
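The manifest-driven sitemap can be sketched like this; the entry shape and helper name are illustrative. The point of the design is that the build reads a local JSON manifest, so sitemap generation makes no network calls.

```typescript
// Hypothetical sketch of the sitemap logic; ManifestEntry and
// buildSitemapEntries are illustrative names, not the real code.
type ManifestEntry = { slug: string; date: string };

const SITE_URL = 'https://www.example.com'; // placeholder, not the real domain

export function buildSitemapEntries(entries: ManifestEntry[]) {
  return entries.map((entry) => ({
    url: `${SITE_URL}/${entry.slug}`,
    lastModified: entry.date,
  }));
}

// app/sitemap.ts would import content/blog/_index.json and return
// buildSitemapEntries(manifest) from its default export.
```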

Structured data is one of those areas where small mistakes (wrong field types, missing required properties, incorrect @type values) produce silent failures that only surface weeks later in Google Search Console. Claude Code cross-referenced the schema.org specification as it built the helpers, flagging fields like price: '0' on a SoftwareApplication schema (which Google renders as "$0" in SERPs, not appropriate for an enterprise platform) and substituting priceType: 'InquiryPrice' instead.
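A helper along these lines is what lib/schema.ts contains. The Post shape, field selection, and placeholder domain below are assumptions; only one of the builders is shown.

```typescript
// Hedged sketch of one lib/schema.ts helper (buildArticle); treat the
// shapes and domain as illustrative, not the project's actual code.
type Post = {
  title: string;
  slug: string;
  date: string;        // ISO 8601 publish date
  authorName: string;
  image: string;       // absolute URL to the hero/OG image
};

const SITE_URL = 'https://www.example.com'; // placeholder, not the real domain

export function buildArticle(post: Post) {
  return {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline: post.title,
    datePublished: post.date,
    mainEntityOfPage: `${SITE_URL}/${post.slug}`,
    author: { '@type': 'Person', name: post.authorName },
    image: {
      '@type': 'ImageObject',
      url: post.image, // the field whose absence the audit later flagged
    },
  };
}
```

The returned object is serialized with JSON.stringify and embedded in a `<script type="application/ld+json">` tag on each post page.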

Phase 4: SEO Audit (Week 3)

After the build was stable, we ran a community SEO audit skill against the deployed Docker container. This skill runs a structured audit across a defined set of SEO dimensions: Core Web Vitals, meta tag completeness, structured data validity, canonical URL correctness, internal linking, image alt coverage, and mobile tap target sizing.

The audit surfaced six issues:

  1. The openGraph.images array was not being inherited from the root layout for pages that defined their own openGraph object (a Next.js 15 behavior that surprises most developers)
  2. Several social icon links in the footer used <a> elements with placeholder href values left from the component scaffold
  3. The ImageObject schema on blog posts was missing the url field (required for Google Image Search eligibility)
  4. Three pages had tap targets under 44px on mobile
  5. The sitemap was including redirect URLs (which return 308, not 200)
  6. The og:type, siteName, and locale fields were absent on non-blog pages

Each of these would have been invisible without a systematic audit. Fixing them manually after discovery in Search Console would have meant waiting weeks between identification and validation. Running the audit skill before go-live compressed that loop to a single afternoon.
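The first finding deserves a sketch, because the behavior is easy to trip over: in Next.js 15, a page that defines its own openGraph object replaces the one inherited from the root layout wholesale, images included. The helper below (hypothetical name, simplified types) shows the shape of the fix.

```typescript
// Illustrative sketch, not Next.js API: re-attach the root layout's
// openGraph images whenever a page supplies its own openGraph object.
type OpenGraph = { title?: string; images?: string[] };

export function withInheritedImages(
  page: OpenGraph,
  rootLayout: OpenGraph
): OpenGraph {
  return {
    ...page,
    // keep page-level images when present, else restore the root layout's
    images: page.images ?? rootLayout.images,
  };
}
```

A page's generateMetadata would pass its own openGraph object and the resolved parent metadata through a helper like this before returning.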


"We Eat Our Own Cooking"

There is a phrase we use internally: we eat our own cooking. lowtouch.ai builds agentic AI platforms for enterprises. Our core argument to customers is that agentic AI can take on complex, multi-step tasks (not just answer questions) with minimal human interaction, and that the human oversight model (developer commit review, engineering lead pull request approval) is the right governance layer.

Rebuilding our own website with Claude Code was, deliberately, a test of that argument.

The result was consistent with what we see in enterprise deployments. Claude Code did not replace engineering judgment. It did not make architectural decisions. It did not know, without being told, that our brand colour is #041250 (not the #0d1857 that a pixel-sampling analysis of our logo suggested). It did not catch, on its own, that using ?? for environment variable fallbacks would let an empty-string NEXT_PUBLIC_SITE_URL (the value an unset Docker build arg produces) pass straight through to new URL(''), which throws at build time. Since ?? only falls back on null or undefined, the correct operator for that case is ||.
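The operator distinction is easy to demonstrate in isolation (the environment shape here is illustrative):

```typescript
// Minimal reproduction of the Docker build failure. A build arg that is
// declared but not supplied arrives as an empty string, not undefined, so
// ?? (which only falls back on null/undefined) passes '' into new URL().
const env: Record<string, string | undefined> = { NEXT_PUBLIC_SITE_URL: '' };

const withNullish = env.NEXT_PUBLIC_SITE_URL ?? 'https://example.com';   // '' survives
const withLogicalOr = env.NEXT_PUBLIC_SITE_URL || 'https://example.com'; // fallback applied

let nullishThrew = false;
try {
  new URL(withNullish);            // new URL('') -> TypeError: Invalid URL
} catch {
  nullishThrew = true;
}

console.log(nullishThrew);                    // true
console.log(new URL(withLogicalOr).hostname); // example.com
```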

What it did was eliminate the execution gap between a decision and its implementation. When we decided the blog post hero should use a two-column grid with the image filling the full column height, that decision went from statement to working code in minutes, not hours.

At the level of a website migration, the ROI is clear. At the level of an enterprise database migration or a multi-system integration, the same pattern scales with 12 or 20 specialized subagents instead of one general-purpose CLI session. The economics improve further.


Results

Three weeks after starting the migration, the Next.js site went live. Here is where we landed:

| Metric | WordPress (before) | Next.js (after) |
| --- | --- | --- |
| Google PageSpeed (mobile) | 61 | 94 |
| SEO score (audit tool) | 40/100 | 70+/100 |
| Time to First Byte | ~480ms | ~90ms |
| Total pages rebuilt | n/a | 180+ |
| Plugin dependencies | 18 | 0 |
| Weekly maintenance overhead | 4-5 hrs | ~20 min (content deploys) |
| Structured data errors in GSC | 7 (active) | 0 |
| Content in version control | No | Yes (Git) |
| Total elapsed migration time | n/a | ~10 hours |
| Active Claude Code prompting time | n/a | ~3 hours |

The SEO score improvement (40 to 70+) is real but incomplete. SEO is not a one-time task: title tag optimization, internal linking depth, page-level content improvements, and Core Web Vitals edge cases all take iterative attention. The audit got us from critically broken to functional; the ongoing work is what takes it further. More on that below.

The plugin dependencies row (18 down to 0) is the one I care about most. It means our website can no longer break on a Tuesday because a plugin vendor shipped a bad minor release.


What We Would Do Differently

A few lessons from the migration that would save time on a repeat:

  1. Start with the content manifest, not the components. We built components first and then discovered that the content structure (category IDs, tag formats, author references) shaped the component props. Reversing the order would have avoided two rounds of refactoring.
  2. Run the SEO audit skill earlier. We ran it at the end of week three, after the site was fully built. Running it at the end of week one (once routes were established but before all structured data was wired) would have caught the openGraph inheritance issue before it propagated across twenty pages.
  3. Write the Docker environment variable behavior into the project notes immediately. The || vs ?? distinction for empty string fallbacks cost two debugging hours during the Docker build. Once documented, it was obvious. Before documentation, it was a silent failure.
  4. Treat SEO as a daily habit, not a migration deliverable. The initial audit took us from 40 to 70+ on the SEO scoring tool. Getting further requires sustained attention: refining title tags, improving internal linking, adding structured content to thin pages. We now run 15 to 30 minute focused SEO sessions each day using Claude Code, each one targeting a specific improvement from a prioritized checklist. At that cadence, the compounding effect is meaningful over weeks, and no single session feels like a burden.

The Broader Point for Enterprise Leaders

If you are a CTO or CIO evaluating where agentic AI delivers real ROI, a website migration is not the most exciting use case. It is, however, a useful calibration exercise.

The value of agentic AI is not that it replaces engineers. It is that it removes the execution cost from decisions. Every technology organization has a backlog of projects that are technically straightforward but perpetually deferred because the execution cost is too high relative to competing priorities. A website that needs to be rebuilt. A legacy reporting database that should be migrated. A test coverage gap that should be closed.

Agentic AI, applied correctly, compresses the execution cost without reducing the quality bar. The human judgment layer (architecture decisions, code review, stakeholder approval) remains essential. The AI layer absorbs the volume.

We rebuilt our website in three weeks. We used the same Claude Code patterns we recommend to our enterprise customers. The outcome was a faster, more maintainable site with better SEO and zero ongoing plugin debt.

That is not a coincidence. It is a proof point.


Rejith Krishnan is the Founder and CEO of lowtouch.ai, a private agentic AI platform for enterprises. lowtouch.ai is SOC 2 Type II and ISO/IEC 27001:2022 certified.

About the Author

Rejith Krishnan

Founder and CEO

Rejith Krishnan is the Founder and CEO of lowtouch.ai, a platform dedicated to empowering enterprises with private, no-code AI agents. With expertise in Site Reliability Engineering (SRE), Kubernetes, and AI systems architecture, he is passionate about simplifying the adoption of AI-driven automation to transform business operations.

Rejith specializes in deploying Large Language Models (LLMs) and building intelligent agents that automate workflows, enhance customer experiences, and optimize IT processes, all while ensuring data privacy and security. His mission is to help businesses unlock the full potential of enterprise AI with seamless, scalable, and secure solutions that fit their unique needs.

LinkedIn →