How to plan, convert, move, and validate—without surprises
Migrating between PostgreSQL and SQL Server (including AWS RDS/Aurora) can feel daunting, but with the right framework you can make it predictable and repeatable. This post covers several scenarios: Postgres-to-Postgres, SQL Server-to-Postgres, Postgres-to-SQL Server, and how to use Microsoft’s SQL Server Migration Assistant (SSMA).
Along the way, I’ll also call out one critical factor that often gets overlooked: how tightly your application is coupled to the database. That will define how much work your migration really takes.
Database-Centric vs. Application-Centric Logic
Database-Centric Logic
Apps that push a lot of logic into stored procedures, triggers, and functions run fast but tie you to one engine. Moving these means rewriting T-SQL into PL/pgSQL (or the other way around). Expect time on translation and validation.
Application-Centric Logic
If most of the business rules live in the code layer, the database swap is simpler. You’ll still need to adjust syntax (e.g., TOP → LIMIT) and type mappings, but you avoid rewriting large chunks of procedural code.
👉 Key takeaway: the more logic in the DB, the more work in migration.
Step 1: Planning and Assessment
1) Inventory and Interfaces
- List all databases to migrate.
- Document dependencies (linked servers, ETL jobs, reports, cross-DB queries).
- Map app entry points (ORMs, direct SQL, procs, triggers, views).
- Flag “specials”: CLR, Service Broker, FILESTREAM, geospatial, custom collations, full-text search.
2) Evaluate Compatibility
Common SQL Server → PostgreSQL Data Type Mappings
| SQL Server | PostgreSQL | Notes |
|---|---|---|
| bigint | bigint | Direct |
| binary(n) | bytea | Binary |
| bit | boolean | Direct |
| char(n) | char(n) | Direct |
| date | date | Direct |
| datetime/datetime2 | timestamp | Use timestamptz if TZ needed |
| datetimeoffset | timestamptz | With time zone |
| decimal(p,s), numeric | numeric(p,s) | Direct |
| float | double precision | Direct |
| image | bytea | Legacy |
| int | integer | Direct |
| money | numeric(19,4) | Emulate |
| nchar(n) | char(n) | Direct |
| ntext/text | text | Direct |
| nvarchar(n) | varchar(n) | Direct |
| nvarchar(max)/varchar(max) | text | Direct |
| real | real | Direct |
| smalldatetime | timestamp | No TZ |
| smallint | smallint | Direct |
| smallmoney | numeric(10,4) | Emulate |
| sql_variant | — | Needs redesign |
| time | time | Direct |
| timestamp (rowversion) | bytea | Different semantics |
| tinyint | smallint | No tinyint in PG |
| uniqueidentifier | uuid | Direct |
| varbinary | bytea | Binary |
| xml | xml | Direct |
Identity vs Sequences
SQL Server IDENTITY → PostgreSQL GENERATED AS IDENTITY or explicit SEQUENCE. Always reseed after bulk load.
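For example, after a bulk load you can realign an identity column with setval. A minimal sketch, assuming a customers table with identity column customerid (pg_get_serial_sequence also resolves the sequences behind identity columns):
-- Point the sequence at the current max so the next insert doesn't collide
SELECT setval(
  pg_get_serial_sequence('customers', 'customerid'),
  COALESCE((SELECT MAX(customerid) FROM customers), 1)
);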
Functions and Expressions
-- SQL Server
SELECT GETDATE(), ISNULL(col, 'x'), NEWID(), DATEPART(year, SomeDate);
-- PostgreSQL (gen_random_uuid() is built in from v13, earlier versions need pgcrypto;
-- NOW() returns timestamptz, use LOCALTIMESTAMP for a closer GETDATE() match)
SELECT NOW(), COALESCE(col, 'x'), GEN_RANDOM_UUID(), DATE_PART('year', SomeDate);
Other differences:
- TOP → LIMIT
- LEN() → LENGTH()
- IIF() → CASE
- String concatenation: + → ||
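A quick before-and-after, using a hypothetical Products table:
-- SQL Server
SELECT TOP 5 Name, IIF(Stock > 0, 'in stock', 'out') AS Availability
FROM dbo.Products;
-- PostgreSQL
SELECT name, CASE WHEN stock > 0 THEN 'in stock' ELSE 'out' END AS availability
FROM products
LIMIT 5;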
Procedures and Triggers
-- SQL Server
CREATE PROCEDURE dbo.GetOrders @CustomerID INT AS
BEGIN
SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID;
END;
-- PostgreSQL
CREATE OR REPLACE FUNCTION GetOrders(p_customer_id INT)
RETURNS TABLE (orderid INT, orderdate TIMESTAMP) AS $$
BEGIN
RETURN QUERY SELECT o.orderid, o.orderdate FROM orders o WHERE o.customerid = p_customer_id; -- alias avoids ambiguity with the RETURNS TABLE column names
END; $$ LANGUAGE plpgsql;
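The call sites change shape as well, which is exactly where application coupling shows up:
-- SQL Server
EXEC dbo.GetOrders @CustomerID = 42;
-- PostgreSQL: set-returning functions are queried like tables
SELECT * FROM GetOrders(42);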
3) Assess Size and Complexity
- Database size:
-- SQL Server
EXEC sp_spaceused;
-- PostgreSQL
SELECT pg_size_pretty(pg_database_size('yourdb'));
- Table sizes (SQL Server):
SELECT t.name, SUM(a.total_pages) * 8 AS TotalKB
FROM sys.tables t
JOIN sys.indexes i ON t.object_id = i.object_id
JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.allocation_units a ON p.partition_id = a.container_id
GROUP BY t.name
ORDER BY TotalKB DESC;
- Complex queries & joins: flag heavy reports for testing and tuning post-migration.
4) Resource & Rollback Planning
- Size CPU/RAM/storage in target.
- Prep access, secrets, and networking.
- Test the rollback process; don't just write it down.
Step 2: Schema Conversion
- Extract schema with SSMS/SSDT or pg_dump --schema-only.
- Use SSMA to auto-convert, but review:
- Identity columns → sequences or GENERATED AS IDENTITY
- Collations, constraints, and triggers
- Defaults and functions
Example
-- SQL Server
CREATE TABLE Customers (
CustomerID INT IDENTITY PRIMARY KEY,
FirstName NVARCHAR(50),
BirthDate DATETIME
);
-- PostgreSQL
CREATE TABLE customers (
customerid INT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
firstname VARCHAR(50),
birthdate TIMESTAMP
);
Validate by running the DDL in Postgres and testing with sample data.
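A minimal smoke test, assuming the customers DDL above:
-- The identity should fire and the row should round-trip cleanly
INSERT INTO customers (firstname, birthdate) VALUES ('Test User', '1990-01-01');
SELECT customerid, firstname, birthdate FROM customers;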
Step 3: Migration Scenarios
1. PostgreSQL → PostgreSQL (AWS RDS or Aurora)
A. One-time copy (short downtime)
Pre-flight: align versions, install extensions, prep roles and schemas.
# Schema
pg_dump --host=SRC_HOST --port=5432 --username=SRC_USER \
--format=plain --schema-only --no-owner --no-privileges SRC_DB > schema.sql
psql "host=TARGET_HOST port=5432 user=TARGET_USER dbname=TARGET_DB sslmode=require" -f schema.sql
# Data
pg_dump --host=SRC_HOST --port=5432 --username=SRC_USER \
--format=custom --no-owner --data-only SRC_DB > data.dump
pg_restore --host=TARGET_HOST --port=5432 --username=TARGET_USER \
--dbname=TARGET_DB --no-owner --disable-triggers data.dump
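For large databases, the directory format lets pg_dump and pg_restore run in parallel, which can shrink the downtime window considerably:
# Parallel data copy (directory format, 4 worker jobs)
pg_dump --host=SRC_HOST --port=5432 --username=SRC_USER \
  --format=directory --jobs=4 --no-owner --data-only --file=data_dir SRC_DB
pg_restore --host=TARGET_HOST --port=5432 --username=TARGET_USER \
  --dbname=TARGET_DB --no-owner --disable-triggers --jobs=4 data_dir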
Fix sequences
-- Reset each sequence to its owning column's max (covers serial 'a' and identity 'i' dependencies)
DO $$
DECLARE r record;
BEGIN
FOR r IN
SELECT s.relname AS seq_name,
       t.relname AS table_name,
       a.attname AS col_name
FROM pg_class s
JOIN pg_depend d ON d.objid = s.oid
  AND d.classid = 'pg_class'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND d.deptype IN ('a', 'i')
JOIN pg_class t ON t.oid = d.refobjid
JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = d.refobjsubid
WHERE s.relkind = 'S'
LOOP
EXECUTE format(
'SELECT setval(%L, GREATEST((SELECT COALESCE(MAX(%I), 0) FROM %I), 1))',
r.seq_name, r.col_name, r.table_name
);
END LOOP;
END$$;
B. Near zero-downtime (logical replication)
-- On source (requires wal_level = logical; tables need a primary key or replica identity)
CREATE PUBLICATION app_pub FOR ALL TABLES;
-- On target
CREATE SUBSCRIPTION app_sub
CONNECTION 'host=SRC_HOST port=5432 dbname=SRC_DB user=SRC_USER password=*** sslmode=require'
PUBLICATION app_pub
WITH (copy_data = true, create_slot = true, enabled = true);
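Before cutting over, confirm replication has caught up. Two quick checks, assuming the names above:
-- On target: worker progress per subscription
SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;
-- On source: the slot should be active with minimal lag
SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots;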
2. SQL Server → PostgreSQL
A. One-time bulk load
bcp "SELECT * FROM dbo.YourTable" queryout "C:\tmp\YourTable.csv" -c -t, -S MSSQL_HOST -d DB -U USER -P PWD
-- In Postgres, after moving the file to the client host (bcp -c writes no header row, so don't use HEADER)
\copy public.yourtable FROM '/mnt/data/YourTable.csv' WITH (FORMAT csv);
ALTER TABLE public.yourtable ADD CONSTRAINT yourtable_pk PRIMARY KEY (id);
CREATE INDEX ON public.yourtable(col1);
B. Automated with pgloader
pgloader "mssql://USER:PWD@MSSQL_HOST/SourceDb" \
"postgresql://PGUSER:PGPWD@PG_HOST/TargetDb?sslmode=require"
C. Minimal downtime (AWS DMS)
- Source: SQL Server
- Target: RDS/Aurora Postgres
- Task: full load + CDC
- Cut over once CDC is caught up
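If you script the DMS setup with the AWS CLI, task creation looks roughly like this; it assumes the replication instance and both endpoints already exist, and the ARNs are placeholders:
aws dms create-replication-task \
  --replication-task-identifier mssql-to-pg \
  --source-endpoint-arn arn:aws:dms:REGION:ACCOUNT:endpoint/SRC \
  --target-endpoint-arn arn:aws:dms:REGION:ACCOUNT:endpoint/TGT \
  --replication-instance-arn arn:aws:dms:REGION:ACCOUNT:rep/INSTANCE \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json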
3. PostgreSQL → SQL Server
Quick picks
- Fastest: SSMA
- Low downtime: AWS DMS
- DIY: CSV + bcp
Path A — SSMA for PostgreSQL
- New SSMA project → connect to both systems
- Assessment Report → type incompatibilities flagged
- Convert Schema → adjust mappings (boolean → BIT, uuid → UNIQUEIDENTIFIER)
- Sync schema into SQL Server
- Migrate data (disable indexes for speed)
- Validate counts and checksums
Path B — AWS DMS
- Source: Postgres with logical decoding
- Target: SQL Server (on-prem or cloud)
- Full load + CDC, cut over at sync
Path C — DIY bulk
-- Export from Postgres
\COPY public.orders TO '/tmp/orders.csv' WITH (FORMAT csv, HEADER true);
-- Target table in SQL Server
CREATE TABLE dbo.Orders (
OrderId INT PRIMARY KEY,
CreatedAt DATETIME2 NOT NULL,
CustomerId INT NOT NULL,
TotalAmount DECIMAL(18,2) NOT NULL,
IsPaid BIT NOT NULL,
SourceGuid UNIQUEIDENTIFIER NULL,
Notes NVARCHAR(MAX) NULL
);
bcp dbo.Orders in "C:\data\orders.csv" ^
-S YOUR_SQLSERVER -d TargetDb -T ^
-c -t"," -r"\n" -q
4. Complete PostgreSQL → SQL Server Migration (End-to-End)
Sometimes you’re moving the entire database. Here’s the structured path:
1. Planning
- Inventory schemas, extensions, functions.
- Check Postgres-only features (jsonb, arrays, PostGIS).
- Measure DB size with pg_database_size().
2. Schema Conversion
- Use SSMA for first pass.
- Adjust types: boolean → BIT, uuid → UNIQUEIDENTIFIER, text → NVARCHAR(MAX)
- Convert sequences into IDENTITY or SEQUENCE.
3. Data Migration
- SSMA Migrate Data: one-time bulk load.
- CSV + bcp/BULK INSERT: scripting-friendly.
- AWS DMS: minimal downtime with CDC.
4. Validation
- Compare row counts between systems.
- Run checksum/hash comparisons.
- Spot-check queries and reports.
5. Post-Migration
- Reseed identities.
- Run UPDATE STATISTICS.
- Rebuild indexes.
- Test app connections.
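A minimal post-migration pass in T-SQL, assuming the dbo.Orders table from the DIY path above:
-- Correct the identity if it fell behind the column's max value
DBCC CHECKIDENT ('dbo.Orders', RESEED);
-- Refresh optimizer statistics database-wide
EXEC sp_updatestats;
-- Rebuild indexes on heavily loaded tables
ALTER INDEX ALL ON dbo.Orders REBUILD;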
✅ Tool choice
- SSMA: easiest all-in-one.
- CSV + bcp: flexible for scripting.
- AWS DMS: best for high-availability cutovers.
Quick Checklist
PostgreSQL → SQL Server
Path A — SSMA for PostgreSQL (Step-by-Step)
- Install SSMA (free from Microsoft).
- New Project → choose Migration To: SQL Server.
- Connect to PostgreSQL → enter RDS/Aurora endpoint, port 5432, database name, credentials.
- Connect to SQL Server → on-prem, RDS SQL, or Azure SQL.
- Assessment Report → review flagged incompatibilities (types, functions, indexes).
- Convert Schema → check mappings: boolean → BIT, uuid → UNIQUEIDENTIFIER, text → NVARCHAR(MAX)
- Synchronize with Database → apply schema into SQL Server.
- Migrate Data → load rows into SQL Server; disable nonessential indexes for speed.
- Validate → row counts, checksums, and spot-check queries.
- Post-migration cleanup → rebuild indexes, reseed identities, and test application behavior.
Validation & Performance
- Row counts
SELECT s.name, t.name, SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0,1)
GROUP BY s.name, t.name
ORDER BY row_count DESC;
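And the PostgreSQL side for comparison; n_live_tup is an estimate, so run ANALYZE first or use COUNT(*) where exact numbers matter:
SELECT schemaname, relname, n_live_tup AS row_count
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;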
- Checksums
-- Order the aggregation so the hash is deterministic and comparable across runs
SELECT HASHBYTES('SHA2_256',
STRING_AGG(CONVERT(NVARCHAR(MAX),
CONCAT_WS('|',
ISNULL(CONVERT(NVARCHAR(MAX), Id), ''),
ISNULL(CONVERT(NVARCHAR(MAX), CreatedAt, 126), ''),
ISNULL(CONVERT(NVARCHAR(MAX), TotalAmount), '')
)
), '|') WITHIN GROUP (ORDER BY Id)
) AS table_hash
FROM dbo.Orders WITH (NOLOCK);
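Reproducing that hash from Postgres is possible but finicky: the algorithm, row order, NULL handling, date formatting, and byte encoding (NVARCHAR hashes as UTF-16) must all line up exactly. A rough sketch of the Postgres half, assuming an orders table with matching columns; normalize both sides before trusting a mismatch:
SELECT encode(sha256(convert_to(
  string_agg(concat_ws('|',
    coalesce(id::text, ''),
    coalesce(to_char(createdat, 'YYYY-MM-DD"T"HH24:MI:SS'), ''),
    coalesce(totalamount::text, '')
  ), '|' ORDER BY id), 'UTF8')), 'hex') AS table_hash
FROM orders;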
- Performance tips
- Bulk load in 100k row batches
- Disable FKs + nonclustered indexes during load
- Use SIMPLE recovery during load
- Rebuild indexes and stats afterward
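The SIMPLE-recovery tip above in T-SQL; switching models breaks the log backup chain, so take a fresh full backup after returning to FULL:
-- Before the load
ALTER DATABASE TargetDb SET RECOVERY SIMPLE;
-- ...bulk load...
-- After the load
ALTER DATABASE TargetDb SET RECOVERY FULL;
BACKUP DATABASE TargetDb TO DISK = 'C:\backup\TargetDb.bak';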
Quick Scenario Map
- Postgres → Postgres: pg_dump/pg_restore or logical replication
- SQL Server → Postgres: CSV + \copy, pgloader, AWS DMS
- Postgres → SQL Server: SSMA, DMS, or CSV + bcp
Final Thoughts
Migration isn’t just about copying data. It’s about understanding how your app logic and database features interact, planning accordingly, and validating every step.
- Use SSMA when you want fast schema + data conversion into SQL Server.
- Use pgloader or DMS when moving SQL Server into Postgres with low downtime.
- Always validate with counts, checksums, and application testing.
A smooth migration is one you can rehearse and repeat. Don’t shortcut planning, and you’ll get predictable results.