Import CSV File into a PostgreSQL Table

Learn how to import a CSV file into a PostgreSQL table using the server-side COPY command and the client-side psql \copy meta-command, with examples for headers, custom delimiters, and partial column imports.

5 min read

Quick Answer

Use COPY table_name FROM '/path/to/file.csv' DELIMITER ',' CSV HEADER; to import a CSV file when the file lives on the database server. If the file is on your local machine, use the psql \copy command instead — it requires no superuser privileges and streams data through the client connection.

Spin up a Postgres database in 20 seconds with Vela.


PostgreSQL provides two ways to bulk-load CSV data into a table: the server-side COPY statement and the client-side psql \copy meta-command. COPY reads the file directly on the database host and is the fastest option for large files, but it requires the file to be readable on the server and the executing role to be a superuser or a member of the pg_read_server_files role. \copy streams the file from your local machine through the psql connection and works with ordinary user permissions, making it the practical choice for development and CI environments.

Syntax

Server-side import with COPY:

COPY table_name [(column_list)]
FROM '/absolute/path/to/file.csv'
[ DELIMITER ',' ]
[ CSV [ HEADER ] ];

Client-side import with \copy (run inside psql; the entire command must fit on a single line, with no trailing semicolon):

\copy table_name [(column_list)] FROM '/local/path/to/file.csv' WITH (FORMAT csv, HEADER true)

Key options:

Option             Purpose
DELIMITER ','      Field separator character (default: comma for CSV format, tab for TEXT format)
CSV                Enable CSV quoting and escape rules
HEADER             Treat the first line of the file as column names and skip it
ENCODING 'UTF8'    Specify the file encoding when it differs from the database default
NULL 'N/A'         String to interpret as SQL NULL (default: unquoted empty string in CSV format)
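These options can be combined in a single WITH clause. As a sketch (the file path is a placeholder, and the assumed export uses pipe delimiters, Latin-1 encoding, and the literal string N/A for missing values):

```sql
-- Hypothetical import of a pipe-delimited, Latin-1 encoded file
-- in which missing values are written as "N/A".
COPY table_name
FROM '/path/to/export.csv'
WITH (FORMAT csv, HEADER true, DELIMITER '|', NULL 'N/A', ENCODING 'LATIN1');
```

The WITH (...) option list is the modern form; the bare DELIMITER ',' CSV HEADER spelling used elsewhere in this article is equivalent legacy syntax.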

Practical Example

Create a shipments table to track order fulfillments:

CREATE TABLE shipments (
  shipment_id   SERIAL PRIMARY KEY,
  order_id      INT           NOT NULL,
  destination   VARCHAR(200)  NOT NULL,
  carrier       VARCHAR(50),
  shipped_at    DATE          NOT NULL,
  delivered_at  DATE
);

Assume the CSV file /var/lib/postgresql/shipments.csv contains:

order_id,destination,carrier,shipped_at,delivered_at
1001,Berlin Germany,DHL,2026-03-01,2026-03-04
1002,Paris France,FedEx,2026-03-02,
1003,Madrid Spain,UPS,2026-03-03,2026-03-06

Import the file, skipping the header row and letting PostgreSQL auto-generate shipment_id:

COPY shipments (order_id, destination, carrier, shipped_at, delivered_at)
FROM '/var/lib/postgresql/shipments.csv'
DELIMITER ','
CSV HEADER;

PostgreSQL confirms the row count:

COPY 3

Verify the imported data:

SELECT * FROM shipments ORDER BY shipment_id;
 shipment_id | order_id |    destination    | carrier | shipped_at  | delivered_at
-------------+----------+-------------------+---------+-------------+--------------
           1 |     1001 | Berlin Germany    | DHL     | 2026-03-01  | 2026-03-04
           2 |     1002 | Paris France      | FedEx   | 2026-03-02  |
           3 |     1003 | Madrid Spain      | UPS     | 2026-03-03  | 2026-03-06

Importing from a Local Machine with \copy

When the file lives on your workstation — or when you are connecting to a managed PostgreSQL service and cannot place files on the server — use \copy inside psql:

\copy shipments (order_id, destination, carrier, shipped_at, delivered_at) FROM '/Users/you/shipments.csv' WITH (FORMAT csv, HEADER true)

\copy uses the same option syntax as COPY, but the file path is resolved on your local machine, and the entire command must be written on one line because psql meta-commands end at the first newline. Since the data passes through the psql connection, very large files import more slowly than with a direct server-side COPY, but the result is identical.
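The same client-side import can run non-interactively, for example from a deployment script. A minimal sketch, assuming a DATABASE_URL environment variable pointing at the target database and shipments.csv in the working directory:

```shell
# Run the client-side import from a script; the \copy command
# is passed to psql with -c and must stay on a single line.
psql "$DATABASE_URL" \
  -c "\copy shipments (order_id, destination, carrier, shipped_at, delivered_at) FROM 'shipments.csv' WITH (FORMAT csv, HEADER true)"
```

Because \copy runs with ordinary user permissions, this works unchanged against managed services where you cannot place files on the server.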

Testing with Vela

Bulk imports that touch production data are risky: a malformed CSV can leave a table in a partial state, and fixing constraint violations after a failed import is tedious. Vela’s database branching lets you create an instant, copy-on-write clone of your production database and run the import there first. Test the full COPY command against the branch — including constraint validation and row counts — before executing it against the live database. If anything goes wrong you simply discard the branch with no impact on production.

Production Tips

  • Import into a staging table first (no constraints), validate rows, then INSERT INTO production SELECT ... FROM staging WHERE ... inside a transaction. This pattern avoids leaving the target table in a partial state.
  • Drop secondary indexes on the target table before a large import and recreate them afterward. Index maintenance during bulk inserts significantly increases load time.
  • Use TRUNCATE TABLE shipments RESTART IDENTITY before a full reload so serial IDs restart from 1 and do not accumulate gaps.
  • If your CSV uses a different delimiter, set DELIMITER E'\t' for tabs or DELIMITER '|' for pipe-separated files.
  • Add ENCODING 'UTF8' (or the appropriate encoding) when the source file was not generated in the database’s default encoding to prevent encoding errors mid-import.
  • Use \copy in CI pipelines and deployment scripts to avoid hard dependencies on server-side file system paths.
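The staging-table pattern from the first tip can be sketched as follows, reusing the shipments table from the example above (the WHERE clause is an illustrative validity check, not a fixed recipe):

```sql
-- Unconstrained staging table: same columns, no NOT NULL or key constraints.
CREATE TABLE shipments_staging (
  order_id      INT,
  destination   TEXT,
  carrier       TEXT,
  shipped_at    DATE,
  delivered_at  DATE
);

-- Bulk-load the raw file into staging; bad rows only affect this table.
COPY shipments_staging (order_id, destination, carrier, shipped_at, delivered_at)
FROM '/var/lib/postgresql/shipments.csv'
CSV HEADER;

-- Move only validated rows into production, atomically.
BEGIN;
INSERT INTO shipments (order_id, destination, carrier, shipped_at, delivered_at)
SELECT order_id, destination, carrier, shipped_at, delivered_at
FROM shipments_staging
WHERE order_id IS NOT NULL
  AND shipped_at IS NOT NULL
  AND (delivered_at IS NULL OR delivered_at >= shipped_at);
COMMIT;

DROP TABLE shipments_staging;
```

Rows that fail the checks stay behind in the staging table, where they can be inspected and corrected before a follow-up insert.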

Continue in Import / Export: Export Table to CSV File.


Frequently Asked Questions

What is the difference between COPY and \copy in PostgreSQL?
COPY is a server-side SQL command executed by the PostgreSQL backend. It reads the file directly on the database host and requires the executing role to be a superuser or a member of the pg_read_server_files role. \copy is a psql client-side meta-command that reads the file from your local workstation and streams it to the server over the existing connection, so no elevated privileges are needed.
Does COPY lock the table during import?
COPY acquires a ROW EXCLUSIVE lock, which is the same lock held by INSERT. It does not block concurrent SELECT queries, so reads continue uninterrupted while a bulk import is running. Other writes (UPDATE, DELETE, INSERT) will wait for the lock.
What happens if a row in the CSV violates a constraint?
The entire COPY command fails and no rows are committed. To handle bad rows selectively, import into an unconstrained staging table first, inspect and clean the data, then INSERT the valid rows into the production table inside a transaction.
Can I import only some columns from a CSV file?
Yes. List the target columns in parentheses after the table name: COPY shipments (order_id, destination, shipped_at) FROM '/path/file.csv' CSV HEADER. Columns not listed must have a DEFAULT value or allow NULL, otherwise the import will fail.
How do I import a tab-separated file instead of CSV?
Change the DELIMITER option: COPY table_name FROM '/path/file.tsv' DELIMITER E'\t'. You can also omit the CSV option entirely to use the default TEXT format, which expects tab-separated fields with \N for NULL and no quoting.