AI Insights

Every Claude Code Skill We Use to Run Engineering and Content Operations

We do not use Claude Code as a chatbot that happens to touch files. We use it as an orchestration layer across the full engineering lifecycle, with specialized skills for every phase. Here is every skill in our active setup and what each one does.

  • 16 active Claude Code skills covering frontend, backend, data, security, and content operations
  • Commit-to-PR workflow automated: branch naming, message drafting, and PR body generation in under 30 seconds
  • Security review skill runs before every PR merge; zero injection vulnerabilities shipped in 8 months
  • Data pipeline skills manage Airflow agentic workflows, dbt models, and upstream lineage tracing
  • Loop and schedule skills handle recurring audits and automated content checks without manual triggers
By Rejith Krishnan · 12 min read

Executive Summary

Most teams that adopt Claude Code use it as a smart terminal assistant. They describe what they want, Claude Code writes the code, and they paste it in. That is a reasonable starting point. It is also leaving most of the value on the table.

Claude Code ships with a skills layer: structured, composable capabilities that can be invoked via slash commands, triggered by hooks, or chained together across a workflow. Skills are not macros. They are specialized agents with defined objectives, tailored prompts, and access to the full codebase context. A skill knows the difference between a frontend design task and a security audit. It behaves accordingly.

At lowtouch.ai, we have been running our full engineering and content operation through Claude Code skills for eight months. This post documents every skill in our active setup, what each one does, and why the combination matters more than any individual capability.


Why Skills, Not Prompts

Before walking through the list, it is worth explaining why we invested in structuring our Claude Code usage around skills rather than ad hoc prompting.

Three reasons:

Consistency. A freeform prompt is a new instruction every time. A skill is a repeatable behavior. When a new engineer joins and runs the code review skill on a pull request, they get the same structured output the senior engineer gets. The quality bar is encoded in the skill, not in the person's ability to write a good prompt.

Discoverability. Our CLAUDE.md documents the project architecture, brand standards, and engineering conventions. Skills are the activation layer on top of that context. A developer does not need to know how to prompt for a frontend component audit; they invoke /frontend-design and the skill handles it.

Composability. Skills can be combined. A typical content publishing workflow at lowtouch.ai runs three skills in sequence: frontend-design for any page-level UI changes, security-review before the PR is raised, and commit-push-pr to create the commit, push the branch, and open the pull request with a pre-drafted body. Three commands. Zero manual steps.


The Full Skill Map

Here is every skill in active use, mapped to the phase of the delivery lifecycle where it applies.

[Diagram] The Claude Code Skill Delivery Lifecycle: a two-row map of our 16 active skills across 5 delivery phases — ① Architect: feature-dev, init, revise-claude-md, auto-recommender; ② Build: frontend-design, simplify, claude-api; ③ Quality: code-review, security-review, review-pr; ④ Ship: commit-push-pr, commit, clean_gone; ⑤ Operate: loop, schedule, fewer-prompts, airflow, cosmos-dbt, lineage-tracing. (Claude Code skill map · lowtouch.ai)

Development Skills

frontend-design

This is the highest-frequency skill in our setup. Every component in our Next.js 15 site (Navbar, WaveCanvas hero, blog post cards, solution pages, architecture diagrams, the Aurora background animation) was built or iterated using this skill.

The skill takes a description of the component goal, reads the existing Tailwind configuration, inspects adjacent component patterns for consistency, and generates production-quality TypeScript. It does not generate generic code that happens to compile: it generates code that matches our brand token system (lt-dark = #041250, pink accents, lime green highlights), respects our typography scale (container-query-based fluid type for the hero, fixed text-[4rem] for section headings), and follows our Framer Motion animation patterns.
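To make the token-matching behavior concrete, here is a minimal sketch of the kind of brand token module the skill reads before generating a component. Only the lt-dark hex comes from our documented palette; the other hex values, and all names here (brandTokens, isBrandColor), are illustrative placeholders, not our real Tailwind config.

```typescript
// Sketch of a brand token module of the kind frontend-design reads for
// consistency. Only ltDark's value is our documented brand constraint; the
// other hexes are placeholders standing in for the real Tailwind config.
type BrandTokens = {
  ltDark: string;
  accentPink: string;
  highlightLime: string;
};

const brandTokens: BrandTokens = {
  ltDark: "#041250",        // documented brand constraint (lt-dark)
  accentPink: "#ff4d9d",    // placeholder for the pink accent
  highlightLime: "#c6ff00", // placeholder for the lime green highlight
};

// Example of the kind of consistency check a skill can apply: is a hardcoded
// hex actually one of the brand tokens, or a drift the review should flag?
function isBrandColor(hex: string): boolean {
  return Object.values(brandTokens).includes(hex.toLowerCase());
}
```

The point is not the helper itself but that the skill resolves colors against the token system rather than inventing plausible-looking hexes.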

What would have taken two to four hours of engineering time per component now takes fifteen to thirty minutes of review and iteration. Across 180+ pages in the current site, the cumulative time savings are substantial.

feature-dev

When a task requires more than a single component, feature-dev handles the architecture first. It reads the codebase, traces the relevant data flows, identifies the files that need to change, and produces a blueprint before any code is written. We review the blueprint, approve or redirect it, and then implementation proceeds against a shared plan.

This matters on tasks like adding a new agent landing page (which requires a new route, a JSON metadata entry, a sitemap update, JSON-LD schema helpers, and OG image wiring) or extending the blog category system. Without a blueprint, Claude Code will make reasonable-but-divergent choices about where to put things. With feature-dev, those choices are made explicitly and reviewed before execution begins.

simplify

After any non-trivial implementation session, we run simplify on the modified files. The skill scans recently changed code for redundant abstractions, inconsistent patterns, unnecessary state, and opportunities to use built-in framework capabilities rather than custom implementations.

The most common output: removing intermediate variables that exist only to name a value that is already named by its usage context, collapsing nested conditionals that could be a single guard clause, and replacing custom fetch wrappers with the idiomatic Next.js fetch with revalidation options. None of these are correctness bugs. All of them are maintainability costs that compound over time.
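A before/after pair illustrates the guard-clause collapse. The function names and the discount logic are hypothetical; the shape of the rewrite is what simplify produces.

```typescript
// Before: nested conditionals plus a variable that only re-names its input.
function discountBefore(price: number, isMember: boolean): number {
  const basePrice = price;
  if (isMember) {
    if (basePrice > 100) {
      return basePrice * 0.9;
    } else {
      return basePrice;
    }
  } else {
    return basePrice;
  }
}

// After: a single guard clause says the same thing.
function discountAfter(price: number, isMember: boolean): number {
  if (!isMember || price <= 100) return price;
  return price * 0.9;
}

// The fetch-wrapper case is analogous: a custom caching wrapper typically
// collapses to Next.js's extended fetch, e.g.
//   fetch(url, { next: { revalidate: 3600 } })
```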

claude-api

We build internal tooling on top of the Anthropic API. The claude-api skill handles implementation tasks that involve the SDK directly: prompt caching configuration, tool use schemas, streaming response handling, and model version migrations when Anthropic ships new model families. It also enforces prompt caching by default on every API call, which meaningfully reduces cost on high-frequency internal tools.
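The caching-by-default pattern looks roughly like the sketch below. The local types stand in for the SDK's request shapes and the model id is a placeholder; verify field names against the current Anthropic SDK docs before relying on this.

```typescript
// Minimal local types standing in for the Anthropic SDK's request shapes
// (assumption: the real API accepts system blocks carrying cache_control).
type SystemBlock = {
  type: "text";
  text: string;
  cache_control: { type: "ephemeral" };
};

type MessageParams = {
  model: string;
  max_tokens: number;
  system: SystemBlock[];
  messages: { role: "user" | "assistant"; content: string }[];
};

// Helper of the kind the skill generates: every call marks the system prompt
// as an ephemeral cache block, so repeated calls reuse the cached prefix
// instead of re-billing the full input on each request.
function cachedMessageParams(system: string, userText: string): MessageParams {
  return {
    model: "claude-sonnet-4-5", // placeholder model id
    max_tokens: 1024,
    system: [
      { type: "text", text: system, cache_control: { type: "ephemeral" } },
    ],
    messages: [{ role: "user", content: userText }],
  };
}
```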


Code Quality and Review Skills

pr-review-toolkit:review-pr

Every pull request runs through review-pr before it is merged. The skill dispatches multiple specialized sub-reviewers in parallel: one checks for logic errors and edge cases, one checks for silent failures (catch blocks that swallow errors without logging, fallback values that mask real failures), one audits type design quality, and one checks test coverage gaps.

The output is a prioritized list of findings with confidence scores. Low-confidence findings are filtered; only issues that the reviewer is confident are genuine problems surface in the report. Engineers do not wade through noise.
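The confidence gate is conceptually a filter-and-sort step. This sketch is hypothetical (Finding, surfaceFindings, and the 0.8 threshold are illustrative; the real skill's scoring is internal to the toolkit):

```typescript
// Hypothetical model of the confidence gate: keep only findings the reviewer
// is confident about, highest confidence first.
type Finding = { title: string; confidence: number };

function surfaceFindings(findings: Finding[], threshold = 0.8): Finding[] {
  return findings
    .filter((f) => f.confidence >= threshold) // drop low-confidence noise
    .sort((a, b) => b.confidence - a.confidence);
}
```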

security-review

We run security-review as a pre-merge gate on every pull request that touches user-facing rendering, form handling, API routes, or environment variable usage. The skill specifically checks for OWASP Top 10 patterns: injection vulnerabilities, insecure deserialization, broken access control, and XSS exposure.

We have a PreToolUse hook that also blocks certain XSS-prone code patterns at the file write level, but security-review catches the cases that require broader code context to identify. Eight months in, zero injection vulnerabilities have shipped to production.

code-review

For lightweight review (a configuration change, a documentation update, a small refactor), code-review is faster than the full pr-review-toolkit suite. It does a single-pass review focused on correctness and adherence to project conventions. It is the right tool when the surface area of the change is narrow.


Git and Workflow Skills

commit-commands:commit-push-pr

This skill handles the full commit-to-PR flow: it reads the git diff, drafts a commit message that focuses on the why rather than the what, stages the relevant files, creates the commit, pushes the branch to remote, and opens a pull request with a pre-populated summary and test plan. The PR body follows a consistent template that our team reads the same way every time.

The practical effect: raising a PR takes thirty seconds of review instead of five minutes of writing. Over fifty or sixty PRs per month, that compounds into hours recovered.

commit-commands:commit

For local commits that do not need a PR immediately (work-in-progress checkpoints, content updates that will be batch-pushed later), commit handles just the commit step with the same message-quality standard as the full flow.

commit-commands:clean_gone

Local branch hygiene degrades fast on an active codebase. clean_gone identifies all local branches whose remote tracking branches have been deleted (the [gone] state after a merged PR) and removes them, including any associated git worktrees. We run this weekly. It takes ten seconds and keeps the branch list readable.
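The detection step can be reimplemented in a few lines; this sketch is our reconstruction of the idea, not the skill's actual code. It parses `git branch -vv` output (after a `git fetch --prune`) and picks out branches whose upstream is marked `[gone]`.

```typescript
// Hypothetical sketch: find local branches whose remote tracking branch was
// deleted, i.e. lines from `git branch -vv` containing "[...: gone]".
function goneBranches(branchVvOutput: string): string[] {
  return branchVvOutput
    .split("\n")
    .filter((line) => /\[[^\]]*: gone\]/.test(line))
    // strip the leading "* " / "+ " / "  " marker, keep the branch name
    .map((line) => line.replace(/^[*+ ]\s*/, "").split(/\s+/)[0]);
}
```

The skill then deletes each returned branch (the equivalent of `git branch -D` per name) and removes any associated worktrees.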


Documentation and Project Memory Skills

init

When starting a new sub-project (an internal tool, a new agent service, an experimental data pipeline), init reads the codebase and generates a CLAUDE.md that documents the stack, conventions, data flows, and non-obvious constraints. The CLAUDE.md becomes the persistent context layer that every subsequent Claude Code session inherits.

Our main website CLAUDE.md (generated with init and maintained over eight months) currently documents: Docker build behavior, brand colour constraints, typography scale rules, the || vs ?? env var fallback distinction, the JSON-LD schema helper library, the blog content manifest structure, and the editorial standards for user-facing copy. Without it, every new Claude Code session would rediscover these constraints from scratch.
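The `||` vs `??` distinction mentioned above is exactly the kind of constraint worth writing down, because the bug is silent. With `||`, an env var set to the empty string falls back; with `??`, only undefined or null does. (SITE_URL is an illustrative variable name, not our actual config key.)

```typescript
// With ||, an env var explicitly set to "" is treated as missing.
function siteUrlWithOr(env: { SITE_URL?: string }): string {
  return env.SITE_URL || "https://lowtouch.ai"; // "" → fallback
}

// With ??, only undefined/null triggers the fallback; "" is kept as-is.
function siteUrlWithNullish(env: { SITE_URL?: string }): string {
  return env.SITE_URL ?? "https://lowtouch.ai"; // "" → ""
}
```

Which behavior is correct depends on the variable; the point is that the choice is deliberate and documented, not accidental.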

claude-md-management:revise-claude-md

After a complex session that surfaces new constraints (a Docker build failure that reveals an undocumented behavior, a schema.org field that Google interprets differently than expected), revise-claude-md updates the CLAUDE.md with the new learnings. It reads the session context, identifies what was non-obvious, and writes the relevant entry in the documentation.

This is how institutional knowledge accretes in our setup. It is not a one-time setup task; it is a habit at the end of any session that taught us something new.


Data Pipeline Skills

data:airflow

Our internal data infrastructure runs on Apache Airflow. The data:airflow skill manages the full operational surface: listing agentic workflows, triggering test runs, reading task logs, checking connection health, and debugging failure chains. It understands our Airflow environment's authentication model and vault integration pattern, so engineers do not need to reconstruct that context on each session.

data:cosmos-dbt-core and data:cosmos-dbt-fusion

Our dbt models are orchestrated through Cosmos (both the current core release and the new Fusion architecture). These skills handle model testing, run execution, lineage inspection, and troubleshooting when a model fails validation. The Fusion skill specifically handles the updated orchestration patterns introduced in dbt-fusion, which differ enough from core dbt that a separate skill is warranted.

data:tracing-upstream-lineage and data:tracing-downstream-lineage

When a data issue surfaces (an unexpected null, a metric that looks wrong in a dashboard), the first question is always: where did this value come from, and what downstream tables depend on it? These two skills trace the dependency graph in both directions, producing a readable map of sources and consumers without requiring the engineer to manually inspect YAML files and agentic workflow dependencies.
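Conceptually, the traversal behind these skills is a graph walk over the dependency map dbt already knows. This sketch is hypothetical (the table names and traceUpstream helper are illustrative; the real skills read the project's manifest rather than a hand-built map):

```typescript
// Hypothetical sketch: given a map of model → direct upstream sources,
// collect every transitive upstream dependency of a starting model.
function traceUpstream(
  deps: Record<string, string[]>,
  start: string
): string[] {
  const seen = new Set<string>();
  const stack = [...(deps[start] ?? [])];
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (seen.has(node)) continue;
    seen.add(node);
    stack.push(...(deps[node] ?? [])); // walk further upstream
  }
  return [...seen].sort();
}
```

Downstream tracing is the same walk over the inverted map: build consumer → dependents edges and reuse the traversal.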


Automation and Scheduling Skills

loop

For tasks that need to run on a recurring interval during an active session (polling a build for completion, checking a deployed container's health every two minutes, monitoring a data ingestion job), loop handles the scheduling without manual re-invocation. The skill self-paces its interval based on what it is waiting for: short intervals for imminent state changes, longer intervals for jobs that take several minutes.
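The self-pacing behavior amounts to bounded backoff. This is a sketch of the idea, with illustrative defaults rather than the skill's actual numbers:

```typescript
// Sketch of self-paced polling: double the interval while a job is still
// running, capped at a maximum, and reset once the watched state changes.
// MIN/MAX values are illustrative, not the skill's internals.
function nextIntervalMs(previousMs: number, stillWaiting: boolean): number {
  const MIN = 5_000;   // poll fast when a state change looks imminent
  const MAX = 120_000; // cap at two minutes for long-running jobs
  if (!stillWaiting) return MIN; // state changed: go back to fast polling
  return Math.min(previousMs * 2, MAX);
}
```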

schedule

For recurring tasks that should run outside active sessions (a nightly SEO audit, a weekly branch cleanup, a daily check on new Search Console errors), schedule creates a cron-based remote agent that executes on the defined cadence and notifies when results are ready. This is how we run 15-minute daily SEO improvement sessions without requiring manual initiation every morning.

fewer-permission-prompts

Claude Code, by default, prompts for permission before running any shell command that is not explicitly allowlisted. In a mature project, this creates friction: the same git status, docker ps, and curl commands that are clearly safe get re-approved dozens of times per session. fewer-permission-prompts scans recent session transcripts, identifies read-only commands that have been approved repeatedly, and adds them to the project's permission allowlist. One setup session eliminates hundreds of future approval prompts.
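The output is a set of entries in the project's settings file, roughly of this shape. This is a sketch of the allowlist format as we understand it; check the current Claude Code settings documentation for the exact matcher syntax before copying it.

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(docker ps)",
      "Bash(curl:*)"
    ]
  }
}
```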


Configuration and Setup Skills

update-config

When a new automation behavior is needed ("every time Claude stops, show the git status"), update-config modifies the appropriate settings.json file. This matters because behavioral automation in Claude Code is implemented via hooks in the settings file, not via Claude's memory. Memory cannot execute shell commands on events. Hooks can. The skill knows the difference and routes the configuration correctly.
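For the "show git status whenever Claude stops" example, the generated configuration would look roughly like this Stop hook in settings.json. The structure follows the Claude Code hooks documentation as we understand it; verify against the current docs before adopting it.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "git status" }
        ]
      }
    ]
  }
}
```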

claude-code-setup:claude-automation-recommender

When onboarding a new project to Claude Code, claude-automation-recommender audits the codebase and recommends the full automation layer: which hooks to configure, which skills to enable, which MCP servers would add value, and which subagent types apply to the project's domain. It is the setup skill that points to all other skills.


The Compound Effect

Individual skills deliver individual productivity gains. The compound effect of running them together, with consistent CLAUDE.md context, is where the real ROI lives.

Consider what happens when a developer adds a new agent landing page to our site:

  1. feature-dev produces the implementation blueprint: new route, new metadata entry, sitemap update, schema helper call, OG image wiring.
  2. frontend-design implements the component against the brand design system.
  3. simplify cleans the implementation.
  4. security-review validates the rendering logic.
  5. commit-push-pr raises the PR with a drafted summary.
  6. pr-review-toolkit:review-pr runs the full review suite before merge.

Six skills. One coherent workflow. The developer's contribution is the architectural judgment at each review step: approving the blueprint, accepting or redirecting the component output, merging the PR. The execution is automated.

That is the model we recommend to enterprise engineering leaders. Not AI as a coding assistant. AI as the execution layer on an engineering process where human judgment remains the governance mechanism.


What Skills Cannot Do

For completeness: skills are not a replacement for senior engineering judgment. They do not make architectural decisions. They do not know that your brand colour is #041250 unless the CLAUDE.md says so. They do not catch the distinction between || and ?? for empty-string fallbacks unless it is documented. They do not know which data schema changes will break downstream consumers unless those dependencies are mapped.

The investment in CLAUDE.md quality, in well-named skills with clear trigger conditions, and in the habit of running revise-claude-md after each learning session is what makes the compound effect possible. The skills are the execution layer. The context layer is the human work that makes them precise.


Rejith Krishnan is the Founder and CEO of lowtouch.ai, a private agentic AI platform for enterprises. lowtouch.ai is SOC 2 Type II and ISO/IEC 27001:2022 certified.

About the Author

Rejith Krishnan

Founder and CEO

Rejith Krishnan is the Founder and CEO of lowtouch.ai, a platform dedicated to empowering enterprises with private, no-code AI agents. With expertise in Site Reliability Engineering (SRE), Kubernetes, and AI systems architecture, he is passionate about simplifying the adoption of AI-driven automation to transform business operations.

Rejith specializes in deploying Large Language Models (LLMs) and building intelligent agents that automate workflows, enhance customer experiences, and optimize IT processes, all while ensuring data privacy and security. His mission is to help businesses unlock the full potential of enterprise AI with seamless, scalable, and secure solutions that fit their unique needs.