PostgreSQL provides two ways to bulk-load CSV data into a table: the server-side COPY statement and the client-side psql \copy meta-command. COPY reads the file directly on the database host and is the fastest option for large files, but the file must exist on the server's filesystem and the executing role must be a superuser or a member of the pg_read_server_files role. \copy streams the file from your local machine over the psql connection and needs only ordinary table privileges, making it the practical choice for development and CI environments.
Syntax
Server-side import with COPY:
COPY table_name [(column_list)]
FROM '/absolute/path/to/file.csv'
[ DELIMITER ',' ]
[ CSV [ HEADER ] ];
Client-side import with \copy (run inside psql; the entire command must fit on one line, and no semicolon is needed):
\copy table_name [(column_list)] FROM '/local/path/to/file.csv' WITH (FORMAT csv, HEADER true)
Key options:
| Option | Purpose |
|---|---|
| DELIMITER ',' | Field separator character (default is tab for text format, comma for CSV) |
| CSV | Enable CSV quoting and escape rules |
| HEADER | Skip the first line of the file (treat it as column names) |
| ENCODING 'UTF8' | Specify file encoding when it differs from the database default |
| NULL 'N/A' | String to interpret as SQL NULL (default: empty string in CSV format) |
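If you are not sure which DELIMITER or HEADER values a file needs, you can inspect it client-side before writing the COPY command. A minimal sketch using Python's csv.Sniffer (the inline sample stands in for the first few kilobytes of your real file):

```python
import csv

# In practice, read a sample from the file: open(path).read(4096).
sample = "order_id,destination,carrier\n1001,Berlin Germany,DHL\n"

sniffer = csv.Sniffer()
dialect = sniffer.sniff(sample)        # detects the field separator
has_header = sniffer.has_header(sample)  # heuristic: does row 1 look like names?

print(dialect.delimiter)  # use this value for DELIMITER
print(has_header)         # if True, add the HEADER option
```

The sniffer is a heuristic, so treat its answer as a suggestion to verify, not a guarantee.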
Practical Example
Create a shipments table to track order fulfillments:
CREATE TABLE shipments (
shipment_id SERIAL PRIMARY KEY,
order_id INT NOT NULL,
destination VARCHAR(200) NOT NULL,
carrier VARCHAR(50),
shipped_at DATE NOT NULL,
delivered_at DATE
);
Assume the CSV file /var/lib/postgresql/shipments.csv contains:
order_id,destination,carrier,shipped_at,delivered_at
1001,Berlin Germany,DHL,2026-03-01,2026-03-04
1002,Paris France,FedEx,2026-03-02,
1003,Madrid Spain,UPS,2026-03-03,2026-03-06
Import the file, skipping the header row and letting PostgreSQL auto-generate shipment_id:
COPY shipments (order_id, destination, carrier, shipped_at, delivered_at)
FROM '/var/lib/postgresql/shipments.csv'
DELIMITER ','
CSV HEADER;
PostgreSQL confirms the row count:
COPY 3
Verify the imported data:
SELECT * FROM shipments ORDER BY shipment_id;
 shipment_id | order_id |  destination   | carrier | shipped_at | delivered_at
-------------+----------+----------------+---------+------------+--------------
           1 |     1001 | Berlin Germany | DHL     | 2026-03-01 | 2026-03-04
           2 |     1002 | Paris France   | FedEx   | 2026-03-02 |
           3 |     1003 | Madrid Spain   | UPS     | 2026-03-03 | 2026-03-06
(3 rows)
By default psql displays NULL as an empty cell, which is why delivered_at for order 1002 is blank.
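Note the empty delivered_at field in the 1002 row: in CSV mode, COPY treats an unquoted empty field as SQL NULL (a quoted empty string "" stays an empty string). A sketch of that interpretation in Python, using the same sample data:

```python
import csv
import io

data = """order_id,destination,carrier,shipped_at,delivered_at
1001,Berlin Germany,DHL,2026-03-01,2026-03-04
1002,Paris France,FedEx,2026-03-02,
1003,Madrid Spain,UPS,2026-03-03,2026-03-06
"""

reader = csv.DictReader(io.StringIO(data))
# Mimic COPY's CSV NULL rule: empty field -> None. (Simplification: the csv
# module does not distinguish quoted from unquoted empty fields, COPY does.)
rows = [{k: (v if v != "" else None) for k, v in r.items()} for r in reader]

print(rows[1]["delivered_at"])  # None
```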
Importing from a Local Machine with \copy
When the file lives on your workstation — or when you are connecting to a managed PostgreSQL service and cannot place files on the server — use \copy inside psql:
\copy shipments (order_id, destination, carrier, shipped_at, delivered_at) FROM '/Users/you/shipments.csv' WITH (FORMAT csv, HEADER true)
\copy uses the same option syntax as COPY but the file path is resolved on your local machine. Because the data passes through the psql connection, very large files will be slower than a direct server-side COPY, but correctness is identical.
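In scripts and CI jobs, \copy is usually passed to psql via its -c flag rather than typed interactively. A sketch that only assembles the command line (the connection URL and CSV path are placeholders; nothing is executed here):

```python
import shlex

csv_path = "/local/path/to/file.csv"  # placeholder path
copy_cmd = (
    r"\copy shipments (order_id, destination, carrier, shipped_at, delivered_at) "
    f"FROM '{csv_path}' WITH (FORMAT csv, HEADER true)"
)

# psql runs the meta-command exactly as if typed at the prompt.
argv = ["psql", "postgresql://localhost/mydb", "-c", copy_cmd]
print(shlex.join(argv))
```

Because \copy resolves the path on the machine running psql, this works unchanged against managed databases where you have no server filesystem access.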
Testing with Vela
Bulk imports that touch production data are risky: a malformed CSV can leave a table in a partial state, and fixing constraint violations after a failed import is tedious. Vela’s database branching lets you create an instant, copy-on-write clone of your production database and run the import there first. Test the full COPY command against the branch — including constraint validation and row counts — before executing it against the live database. If anything goes wrong you simply discard the branch with no impact on production.
Production Tips
- Import into a staging table first (no constraints), validate rows, then run INSERT INTO production SELECT ... FROM staging WHERE ... inside a transaction. This pattern avoids leaving the target table in a partial state.
- Drop secondary indexes on the target table before a large import and recreate them afterward. Index maintenance during bulk inserts significantly increases load time.
- Use TRUNCATE TABLE shipments RESTART IDENTITY before a full reload so serial IDs restart from 1 and do not accumulate gaps.
- If your CSV uses a different delimiter, set DELIMITER E'\t' for tabs or DELIMITER '|' for pipe-separated files.
- Add ENCODING 'UTF8' (or the appropriate encoding) when the source file was not generated in the database's default encoding to prevent encoding errors mid-import.
- Use \copy in CI pipelines and deployment scripts to avoid hard dependencies on server-side file system paths.
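The validate-before-loading tip can also start client-side: reject obviously bad rows before COPY ever runs. A minimal sketch whose column rules mirror the shipments table above (adapt the rules to your own schema):

```python
import csv
import io
from datetime import date

REQUIRED = ("order_id", "destination", "shipped_at")  # the NOT NULL columns

def validate(row: dict) -> list[str]:
    """Return a list of problems for one CSV row (empty list means OK)."""
    errors = [f"missing {col}" for col in REQUIRED if not row.get(col)]
    for col in ("shipped_at", "delivered_at"):
        if row.get(col):
            try:
                date.fromisoformat(row[col])
            except ValueError:
                errors.append(f"bad date in {col}: {row[col]!r}")
    return errors

# One good row and one deliberately broken row for illustration.
data = """order_id,destination,carrier,shipped_at,delivered_at
1001,Berlin Germany,DHL,2026-03-01,2026-03-04
1002,Paris France,FedEx,not-a-date,
"""
problems = {r["order_id"]: validate(r) for r in csv.DictReader(io.StringIO(data))}
print(problems)  # row 1002 is flagged
```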