Migrating the WideWorldImporters OLTP database from SQL Server to PostgreSQL 15 using a Claude Code slash command toolchain: DDL conversion, stored procedure translation to PL/pgSQL, and containerised smoke testing, all repeatable, auditable, and consistent across hundreds of objects.

Migrating a production OLTP database from Microsoft SQL Server to PostgreSQL is one of the more consequential decisions an enterprise architecture team can make. The motivations are well understood: elimination of per-core licensing costs, cloud-native portability across providers, and access to a rich open-source extension ecosystem including PostGIS, pgvector, and TimescaleDB. The challenge is execution: SQL Server and PostgreSQL share ANSI SQL as a common ancestor but diverge sharply in DDL syntax, procedural language, JSON handling, and temporal data features.
This case study documents the migration of the WideWorldImporters OLTP database (Microsoft's official SQL Server sample) to PostgreSQL 15. Rather than a one-time manual effort, we built a repeatable AI-assisted toolchain using Claude Code slash commands that converts DDL, stored procedures, and API endpoints incrementally, with containerised smoke testing after each conversion. The result is a migration pipeline that scales across hundreds of objects without accumulating technical debt.
The WideWorldImporters database is a realistic OLTP schema covering order management, inventory, customer records, and supplier operations. The source is a SQL Server Data Tools (SSDT) project with the following inventory:
| Schema | Objects |
|---|---|
| Application | 10 tables, 1 function, 15 stored procedures |
| Sales | 3 tables |
| Purchasing | 3 tables |
| Warehouse | 5 tables |
| WebApi | 45 stored procedures, 2 views |
| Sequences | 26 sequence objects |
The target environment is PostgreSQL 15.1 running in a shared Docker container. Converted output mirrors the source directory structure under a postgres/ folder, keeping MSSQL source and PostgreSQL output side by side in the same repository for diff-based review.
Three schemas formed the core of this engagement: Application (the people and reference data backbone), Sales (orders and customers), and WebApi (the JSON-based stored procedure API layer). Together they exercise every major translation challenge the MSSQL-to-PostgreSQL surface area presents.
Rather than write conversion scripts or rely on third-party migration tools, we defined the conversion rules declaratively as Claude Code slash commands — each command a markdown file that specifies what to read, how to transform it, and what to write. This gave us a toolchain with five distinct stages:
| Command | Purpose | Example |
|---|---|---|
| /mssql-list-deps | Pre-flight dependency graph: shows which FK/sequence dependencies are converted or missing | /mssql-list-deps wwi-ssdt/wwi-ssdt/Sales/Tables/Orders.sql |
| /mssql-to-postgres | Convert an MSSQL DDL table file to PostgreSQL, with full type mapping and extended properties | /mssql-to-postgres wwi-ssdt/wwi-ssdt/Sales/Tables/Orders.sql |
| /mssql-to-pgfunc | Convert an MSSQL stored procedure to a native PL/pgSQL function | /mssql-to-pgfunc "wwi-ssdt/wwi-ssdt/WebApi/Stored Procedures/UpdateSalesOrderFromJson.sql" |
| /pgfunc-test | Smoke-test a converted function against a live PostgreSQL container | /pgfunc-test postgres/WebApi/Functions/update_sales_order_from_json.sql |
| /mssql-to-api | Convert a stored procedure to an async FastAPI endpoint using SQLAlchemy | /mssql-to-api "wwi-ssdt/wwi-ssdt/WebApi/Stored Procedures/UpdateSalesOrderFromJson.sql" |
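For context, a Claude Code slash command is simply a markdown file under .claude/commands/ whose body becomes the prompt, with $ARGUMENTS standing in for whatever follows the command name. The sketch below is illustrative only: the real command files in this toolchain are longer and encode the full type-mapping tables.

```markdown
<!-- .claude/commands/mssql-to-postgres.md (illustrative sketch, not the actual file) -->
Convert the MSSQL DDL file at $ARGUMENTS to PostgreSQL 15.

1. Read the source file; identify columns, constraints, sequences, and
   extended properties.
2. Apply the type mappings (e.g. NVARCHAR(n) -> VARCHAR(n),
   DATETIME2(7) -> TIMESTAMP(6), BIT -> BOOLEAN).
3. Convert extended properties to COMMENT ON statements.
4. Write the result to the mirrored path under postgres/, plus a companion
   .md summary listing every transformation and unresolved dependency.
```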
The workflow followed a strict dependency order for each logical unit of work:
/mssql-list-deps <SP>
↓ (identify which tables need converting first)
/mssql-to-postgres <referenced tables>
↓
/mssql-to-pgfunc <SP>
↓
/pgfunc-test <converted function>
Each command produced a .sql output file and a companion .md conversion summary — documenting what was changed, which extended properties became COMMENT ON statements, and which dependencies remained unresolved. This companion file served as the audit trail during review.
The bulk of the complexity in MSSQL-to-PostgreSQL DDL conversion falls into four categories. Each required a specific, non-obvious solution.
SQL Server's temporal table feature (SYSTEM_VERSIONING = ON) maintains a full row history in a paired archive table automatically. PostgreSQL has no native equivalent.
The Application.People table is a temporal table with GENERATED ALWAYS AS ROW START/END columns:
-- MSSQL source
[ValidFrom] DATETIME2 (7) GENERATED ALWAYS AS ROW START NOT NULL,
[ValidTo] DATETIME2 (7) GENERATED ALWAYS AS ROW END NOT NULL,
CONSTRAINT [PK_Application_People] PRIMARY KEY CLUSTERED ([PersonID] ASC),
PERIOD FOR SYSTEM_TIME ([ValidFrom], [ValidTo])
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE=[Application].[People_Archive], DATA_CONSISTENCY_CHECK=ON));
-- PostgreSQL output
ValidFrom TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP,
ValidTo TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT PK_Application_People PRIMARY KEY (PersonID)
The PERIOD FOR SYSTEM_TIME clause, the WITH (SYSTEM_VERSIONING ...) clause, and the GENERATED ALWAYS AS ROW START/END markers are all stripped. The columns become plain timestamps with DEFAULT CURRENT_TIMESTAMP. This is documented as a semantic gap: if history tracking is required, it must be re-implemented with trigger-based audit tables or an extension such as temporal_tables.
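Where history tracking is required, a minimal trigger-based substitute might look like the following. This is a sketch with hypothetical object names, not part of the generated output, and it covers UPDATE only (DELETE would need a second trigger):

```sql
-- Hypothetical audit pattern replacing SYSTEM_VERSIONING for Application.People.
-- Plain LIKE copies columns without generation expressions, so the archive
-- accepts values for SearchName as an ordinary column.
CREATE TABLE IF NOT EXISTS application.people_archive
    (LIKE application.people);

CREATE OR REPLACE FUNCTION application.people_history()
RETURNS trigger LANGUAGE plpgsql AS $$
DECLARE
    archived application.people%ROWTYPE;
BEGIN
    archived := OLD;
    archived.ValidTo := now();   -- close the outgoing version's validity window
    INSERT INTO application.people_archive SELECT (archived).*;
    NEW.ValidFrom := now();      -- open the incoming version's window
    RETURN NEW;
END;
$$;

CREATE TRIGGER people_history
    BEFORE UPDATE ON application.people
    FOR EACH ROW EXECUTE FUNCTION application.people_history();
```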
Application.People contains a computed column for full-text search:
-- MSSQL
[SearchName] AS (concat([PreferredName], N' ', [FullName])) PERSISTED NOT NULL
The PostgreSQL equivalent is GENERATED ALWAYS AS ... STORED, but with a critical constraint: the expression must use only IMMUTABLE functions. The conversion revealed that concat() in PostgreSQL is classified as STABLE (not IMMUTABLE), making it illegal in a generated column expression.
-- PostgreSQL (correct)
SearchName VARCHAR(101) GENERATED ALWAYS AS (PreferredName || ' ' || FullName) STORED NOT NULL,
The || string concatenation operator is IMMUTABLE in PostgreSQL. This is a non-obvious difference that would cause a cryptic ERROR: generation expression is not immutable at DDL apply time if not caught during conversion.
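The volatility classification can be verified directly in the system catalog with a quick sanity query against any PostgreSQL instance:

```sql
-- provolatile: 'i' = immutable, 's' = stable, 'v' = volatile
SELECT proname, provolatile
FROM pg_proc
WHERE proname IN ('concat', 'textcat')  -- textcat implements text || text
ORDER BY proname;
-- concat  -> s (stable): rejected in generated column expressions
-- textcat -> i (immutable): safe
```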
A second computed column (OtherLanguages AS (json_query([CustomFields], '$.OtherLanguages'))) was non-persisted. PostgreSQL has no virtual computed column concept, so this became a regular TEXT NULL column with a comment directing callers to use (CustomFields::jsonb -> 'OtherLanguages')::text in queries.
SQL Server sequences are referenced inline in column defaults using named constraint syntax:
-- MSSQL
[OrderID] INT CONSTRAINT [DF_Sales_Orders_OrderID] DEFAULT (NEXT VALUE FOR [Sequences].[OrderID]) NOT NULL
The PostgreSQL conversion requires two changes: emit the sequence DDL before the table, and strip the named constraint wrapper:
-- PostgreSQL
CREATE SEQUENCE IF NOT EXISTS sequences.order_id_seq START 1 INCREMENT 1;
CREATE TABLE sales.orders (
"OrderID" INTEGER DEFAULT nextval('sequences.order_id_seq') NOT NULL,
...
The sequence name follows a snake_case convention: [Sequences].[OrderID] → sequences.order_id_seq. The sequences schema must be pre-created separately. The /mssql-to-postgres command handles this, but the /pgfunc-test smoke-test command also pre-creates it unconditionally to avoid failures when testing functions in isolation.
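The naming rule itself is mechanical. A minimal sketch of the convention, where the helper name is ours for illustration and not part of the toolchain:

```python
import re

def to_pg_sequence_name(mssql_ref: str) -> str:
    """Map a bracketed MSSQL sequence reference such as [Sequences].[OrderID]
    to the snake_case PostgreSQL name sequences.order_id_seq."""
    schema, name = re.findall(r'\[([^\]]+)\]', mssql_ref)
    # Insert an underscore at each lower/digit-to-upper boundary, then lowercase
    snake = re.sub(r'(?<=[a-z0-9])(?=[A-Z])', '_', name).lower()
    return f"{schema.lower()}.{snake}_seq"

print(to_pg_sequence_name('[Sequences].[OrderID]'))  # sequences.order_id_seq
```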
Sales.Customers stores a delivery location as a geography type — a SQL Server spatial type with no PostgreSQL equivalent outside of the PostGIS extension:
-- MSSQL
[DeliveryLocation] [sys].[geography] NULL
-- PostgreSQL
DeliveryLocation geography NULL
The column type maps directly once PostGIS is installed. The conversion notes this as a deployment prerequisite: CREATE EXTENSION IF NOT EXISTS postgis; must run before the table DDL is applied. The companion .md file flags this with a dedicated PostGIS note section.
The WebApi schema contains 45 stored procedures following a consistent pattern: accept a JSON payload, shred it, and apply a partial update to the target table. UpdateSalesOrderFromJson is representative:
-- MSSQL source
CREATE PROCEDURE [WebApi].[UpdateSalesOrderFromJson]
(@SalesOrder NVARCHAR(MAX), @SalesOrderID int, @UserID int)
WITH EXECUTE AS OWNER
AS BEGIN
UPDATE Sales.Orders SET
SalespersonPersonID = ISNULL(json.SalespersonPersonID, Sales.Orders.SalespersonPersonID),
OrderDate = ISNULL(json.OrderDate, Sales.Orders.OrderDate),
LastEditedBy = @UserID
FROM OPENJSON(@SalesOrder)
WITH (SalespersonPersonID int, OrderDate date, ...) AS json
WHERE Sales.Orders.OrderID = @SalesOrderID
END
Three conversion steps were applied:
Parameter renaming: @SalesOrder → p_sales_order, @SalesOrderID → p_sales_order_id, @UserID → p_user_id. The p_ prefix keeps parameter names from colliding with same-named columns referenced in the function body.
OPENJSON → jsonb_to_record: SQL Server's typed JSON shredding syntax maps to PostgreSQL's jsonb_to_record for single-object payloads:
-- PostgreSQL
FROM jsonb_to_record(p_sales_order::jsonb) AS j(
"SalespersonPersonID" integer,
"OrderDate" date,
"IsUndersupplyBackordered" boolean,
...
)
ISNULL → COALESCE with table alias: The target table receives an alias (o) to disambiguate it from the JSON-derived relation (j) when both contribute columns of the same name to the COALESCE expressions:
UPDATE sales.orders o SET
"SalespersonPersonID" = COALESCE(j."SalespersonPersonID", o."SalespersonPersonID"),
"OrderDate" = COALESCE(j."OrderDate", o."OrderDate"),
"LastEditedBy" = p_user_id
FROM jsonb_to_record(p_sales_order::jsonb) AS j(...)
WHERE o."OrderID" = p_sales_order_id;
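Assembled into a complete function, the converted pieces take roughly this shape (abridged to two JSON fields here; the generated file carries the full column list):

```sql
CREATE OR REPLACE FUNCTION webapi.update_sales_order_from_json(
    p_sales_order    text,
    p_sales_order_id integer,
    p_user_id        integer
) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE sales.orders o SET
        "SalespersonPersonID" = COALESCE(j."SalespersonPersonID", o."SalespersonPersonID"),
        "OrderDate"           = COALESCE(j."OrderDate", o."OrderDate"),
        "LastEditedBy"        = p_user_id
    FROM jsonb_to_record(p_sales_order::jsonb) AS j(
        "SalespersonPersonID" integer,
        "OrderDate"           date
    )
    WHERE o."OrderID" = p_sales_order_id;
END;
$$;
```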
One semantic subtlety surfaced during testing: passing "BackorderOrderID": null in the JSON payload does not clear the existing value. jsonb_to_record returns NULL for a null JSON value, and COALESCE(NULL, existing) = existing. This matches the original MSSQL ISNULL behaviour — the pattern was never designed to allow explicit null-clearing via JSON. If that capability is needed, the function would require a jsonb ? 'key' key-presence check to distinguish "absent" from "explicitly null".
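If explicit null-clearing were ever required, each assignment would gain a key-presence guard. A sketch for a single column, not part of the delivered conversion:

```sql
-- Hypothetical variant: {"BackorderOrderID": null} clears the value,
-- while a missing key still means "keep the existing value"
UPDATE sales.orders o SET
    "BackorderOrderID" = CASE
        WHEN p_sales_order::jsonb ? 'BackorderOrderID'
            THEN (p_sales_order::jsonb ->> 'BackorderOrderID')::integer
        ELSE o."BackorderOrderID"
    END
WHERE o."OrderID" = p_sales_order_id;
```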
Each converted function was validated against the live postgres_15.1 Docker container immediately after conversion, using an isolated wwi_test schema. The /pgfunc-test command orchestrated the full setup-seed-execute-verify cycle.
The FK-stripping challenge. Applying a fully converted table DDL (complete with foreign key constraints) fails when referenced tables have not yet been converted. The naive approach of piping the raw DDL to psql produced cascading errors:
ERROR: schema "application" does not exist
ERROR: relation "sales.orders" does not exist
The solution was a two-step preprocessing approach built into /pgfunc-test:
First, all schemas referenced in REFERENCES schema.table clauses are pre-created:
grep -oP 'REFERENCES \K\w+(?=\.)' Orders.sql | sort -u | \
while read s; do
docker exec postgres_15.1 psql -U postgres -d postgres \
-c "CREATE SCHEMA IF NOT EXISTS $s;"
done
Second, FK constraint lines are stripped from the DDL before applying, and any trailing comma left before the closing parenthesis is repaired:
import re
sql = open('postgres/Sales/Tables/Orders.sql').read()
# Remove CONSTRAINT ... FOREIGN KEY ... lines
sql = re.sub(
r',?\s*\n\s*CONSTRAINT\s+\S+\s+FOREIGN KEY\s*\([^)]+\)\s*REFERENCES\s+\S+\s*\([^)]+\)',
'', sql
)
# Fix trailing comma before closing paren
sql = re.sub(r',(\s*\n\s*\);)', r'\1', sql)
This approach applies the real column structure (correct types, indexes, sequences, and comments) without FK constraints that cannot be resolved until the full schema is converted. FK enforcement belongs in full integration tests, not smoke tests.
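Applied to a minimal, hypothetical DDL fragment, the FK-stripping substitution behaves as follows (the second substitution is a safety net for cases where the removed constraint leaves a dangling comma):

```python
import re

# Hypothetical two-column DDL fragment with one FK constraint
ddl = '''CREATE TABLE sales.orders (
    "OrderID" INTEGER NOT NULL,
    "CustomerID" INTEGER NOT NULL,
    CONSTRAINT FK_Orders_Customers FOREIGN KEY ("CustomerID") REFERENCES sales.customers ("CustomerID")
);'''

# Remove CONSTRAINT ... FOREIGN KEY ... clauses (same pattern as above)
stripped = re.sub(
    r',?\s*\n\s*CONSTRAINT\s+\S+\s+FOREIGN KEY\s*\([^)]+\)\s*REFERENCES\s+\S+\s*\([^)]+\)',
    '', ddl)
# Repair any trailing comma left before the closing paren
stripped = re.sub(r',(\s*\n\s*\);)', r'\1', stripped)

print(stripped)
# CREATE TABLE sales.orders (
#     "OrderID" INTEGER NOT NULL,
#     "CustomerID" INTEGER NOT NULL
# );
```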
Smoke test execution. Four representative orders were seeded and the function was called with targeted JSON payloads:
| OrderID | JSON payload | Fields updated | Validated |
|---|---|---|---|
| 1 | {"SalespersonPersonID": 2, "OrderDate": "2024-03-01"} | 2 | ✓ |
| 2 | {"SalespersonPersonID": 9, "PickingCompletedWhen": "2024-01-19"} | 2 | ✓ |
| 3 | {"BackorderOrderID": null, "ExpectedDeliveryDate": "2024-02-28", "IsUndersupplyBackordered": false} | 2 (null = no-op) | ✓ |
| 4 | {"CustomerPurchaseOrderNumber": "PO-9999", "PickedByPersonID": 3} | 2 | ✓ |
A post-call SELECT * FROM sales.orders confirmed that every updated field changed to its expected value, and every omitted field retained its seeded value — validating the COALESCE-based partial-update semantics end to end.
concat() is STABLE, not IMMUTABLE. This is the single most surprising PostgreSQL behaviour encountered. Any persisted generated column using concat() will fail at DDL apply time. Use || instead: it is immutable for all standard types.
JSON null and field absence are semantically equivalent in COALESCE-based updates. Callers who expect to null out a field via {"field": null} will be surprised. Document this limitation explicitly in the function's companion markdown, or implement a key-presence check using jsonb ? 'key'.
Dependency order is load-bearing. Sequences must exist before tables that reference them. Tables must exist before stored procedures that reference them. The /mssql-list-deps pre-flight check makes this dependency graph visible before any conversion starts, preventing cascading failures.
Temporal table semantics are silently lost. Converting GENERATED ALWAYS AS ROW START/END to DEFAULT CURRENT_TIMESTAMP produces valid DDL but loses the automatic history-tracking behaviour entirely. This gap must be documented and addressed with a trigger-based audit pattern or an extension before the migrated database is considered production-equivalent.
FK stripping at test time is safe and correct. The smoke-test environment is not a substitute for integration testing against the full schema. Stripping FK constraints for smoke tests is an acceptable pragmatic choice: the goal is to validate function logic, not referential integrity, which is a deployment-time concern.
The WideWorldImporters migration demonstrated that a structured, toolchain-driven approach to MSSQL-to-PostgreSQL migration produces more consistent results than ad hoc conversion. By encoding conversion rules in slash commands, every table and stored procedure followed the same transformation patterns (type mappings, naming conventions, sequence handling, extended property conversion) without relying on the individual engineer's recollection of the rules.
The incremental test-as-you-convert pattern, anchored by the /pgfunc-test command and its FK-stripping technique, allowed each converted function to be validated in isolation against a live PostgreSQL engine within minutes of conversion. Issues were caught at the function level rather than discovered during end-to-end schema deployment.
For enterprise teams facing similar migrations, the key investment is in the conversion rule specification: the more precisely the rules are defined, the more consistently the toolchain applies them. The WideWorldImporters toolchain is a template, not a black box. Every conversion decision is visible in the companion .md files, reviewable before commit, and correctable without re-running the full migration.
About the Author

Pradeep Chandran
Lead - Agentic AI & DevOps
Pradeep Chandran is a seasoned technology leader and a key contributor at lowtouch.ai, a platform dedicated to empowering enterprises with no-code AI solutions. With a strong background in software engineering, cloud architecture, and AI-driven automation, he is committed to helping businesses streamline operations and achieve scalability through innovative technology. At lowtouch.ai, Pradeep focuses on designing and implementing intelligent agents that automate workflows, enhance operational efficiency, and ensure data privacy. His expertise lies in bridging the gap between complex IT systems and user-friendly solutions, enabling organizations to adopt AI seamlessly. Passionate about driving digital transformation, Pradeep is dedicated to creating tools that are intuitive, secure, and tailored to meet the unique needs of enterprises.