PostgreSQL provides two ways to import CSV data into a table: the server-side COPY statement and the client-side \copy psql meta-command. COPY is faster because the server reads the file directly, but the file must exist on the database host and the executing role needs superuser privileges or membership in the pg_read_server_files role. \copy streams the file from your local machine through the psql connection and works with ordinary user privileges.
Importing with the COPY statement
Create a target table first:
CREATE TABLE persons (
    id SERIAL,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    dob DATE,
    email VARCHAR(255),
    PRIMARY KEY (id)
);
Import a CSV file that has a header row and uses comma delimiters:
COPY persons (first_name, last_name, dob, email)
FROM '/var/lib/postgresql/persons.csv'
DELIMITER ','
CSV HEADER;
COPY 2
The COPY 2 output confirms two rows were imported. Key points about the syntax:
- List the columns explicitly when the CSV does not include the id column (or any serial/identity column) so PostgreSQL can generate it automatically.
- The HEADER keyword tells COPY to skip the first line of the file.
- The file path must be readable by the OS user that runs the PostgreSQL server process.
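For reference, a file in the shape this COPY command expects (a header row followed by comma-delimited data, with dates in ISO format) can be produced with a short Python sketch. The path and row values here are illustrative; the email addresses are placeholders:

```python
import csv

# Illustrative rows matching the persons table layout; the email
# values are placeholders and should be replaced with real addresses.
rows = [
    ("John", "Doe", "1995-01-05", "john@example.com"),
    ("Jane", "Doe", "1995-02-05", "jane@example.com"),
]

# COPY ... CSV HEADER expects a header line, then one record per line.
with open("persons.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["first_name", "last_name", "dob", "email"])
    writer.writerows(rows)
```

The header line is the one that COPY skips when the HEADER option is given.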
Verify the imported data:
SELECT * FROM persons;
id | first_name | last_name | dob | email
----+------------+-----------+------------+-----------------------
1 | John | Doe | 1995-01-05 | [email protected]
2 | Jane | Doe | 1995-02-05 | [email protected]
Importing from a local machine with \copy
When the CSV file lives on your workstation rather than the server, use the psql meta-command \copy. It does not require superuser rights and handles the file transfer transparently:
\copy persons (first_name, last_name, dob, email) FROM '/Users/you/persons.csv' WITH (FORMAT csv, HEADER true)
\copy accepts the same options as COPY but runs on the client side: the file path is resolved on your local machine, not the server. Note that psql meta-commands end at a newline, so the entire \copy command must be written on a single line.
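Before streaming a large file with \copy, it can help to sanity-check the header and first few rows on the client. A minimal Python sketch; the sample data is inline here, and in practice you would pass a handle to your local CSV file:

```python
import csv
import io

def preview_csv(stream, n=5):
    """Return the header and up to n data rows from a CSV stream."""
    reader = csv.reader(stream)
    header = next(reader)
    rows = [row for _, row in zip(range(n), reader)]
    return header, rows

# Inline sample shaped like persons.csv; replace io.StringIO(...)
# with open('/path/to/persons.csv', newline='') for a real file.
sample = "first_name,last_name,dob,email\nJohn,Doe,1995-01-05,a@b\nJane,Doe,1995-02-05,c@d\n"
header, rows = preview_csv(io.StringIO(sample))
print(header)
```

A preview like this catches a wrong delimiter or missing header before any data reaches the database.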
Production tips for importing CSV data
- Import into a staging table first, validate the data, then insert into the production table. This avoids leaving the production table in a partial state if the CSV contains errors.
- Use TRUNCATE TABLE persons RESTART IDENTITY before a full reload so serial IDs restart from 1.
- For large files, drop indexes and constraints before importing and rebuild them afterward; bulk loading without indexes is significantly faster.
- If your CSV uses a different delimiter (tab, pipe), change the DELIMITER option: DELIMITER E'\t' for tab-separated files.
- Handle encoding issues by adding ENCODING 'UTF8' (or the appropriate encoding) to the COPY options when source files are not in the database default encoding.
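The staging-table advice above can be combined with a client-side pre-check before any COPY runs. A minimal sketch assuming the persons column layout; the date and email checks are illustrative, not exhaustive:

```python
import csv
import io
from datetime import date

def split_valid(reader):
    """Partition rows into (valid, invalid): dob must be an ISO date
    and email must contain '@'; malformed rows count as invalid."""
    valid, invalid = [], []
    for row in reader:
        try:
            first_name, last_name, dob, email = row
            date.fromisoformat(dob)
            ok = "@" in email
        except ValueError:
            ok = False
        (valid if ok else invalid).append(row)
    return valid, invalid

# Illustrative data: one clean row, one bad date, one bad email.
sample = (
    "John,Doe,1995-01-05,a@b\n"
    "Jane,Doe,not-a-date,c@d\n"
    "Bob,Roe,1990-03-02,no-at-sign\n"
)
valid, invalid = split_valid(csv.reader(io.StringIO(sample)))
print(len(valid), len(invalid))  # → 1 2
```

Only the valid rows would then be written out for COPY into the staging table, while the invalid ones are logged for inspection.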