Stop testing against stale, synthetic, or shared environments. Clone your production PostgreSQL database in seconds — then test migrations, features, and QA workflows in complete isolation.
Most PostgreSQL staging environments share the same fundamental problems: they drift from production, they're shared between developers (causing conflicting test data), and refreshing them is slow enough that teams simply don't bother.
Staging was last refreshed 3 weeks ago. The schema has drifted. Features that depend on recent data fail in staging but work in prod — or the reverse. Bugs slip through.
Three developers are testing simultaneously. Developer A's test data breaks Developer B's tests. QA can't run a full regression because the environment is in an inconsistent state.
You can't easily test a complex migration against real production data before applying it. So you test against a small sample — and discover the issue in production.
Clone production instantly, test in isolation, refresh on demand
Create a copy-on-write clone of your production database. The clone is ready in under 30 seconds, regardless of database size.
Run any schema migrations against the staging clone — not against production. Catch issues before they reach prod.
Developers, QA, and reviewers work against the clone. Writes are fully isolated from production.
When staging gets stale, create a new clone in seconds. Or schedule a daily refresh automatically. No manual pg_dump cycles.
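The clone-and-refresh steps above can be scripted. Here's a minimal Python sketch against a hypothetical Vela HTTP API; the base URL, endpoint path, and payload fields are assumptions for illustration, not the real API surface (check the Vela API reference for the actual shape):

```python
import json
import urllib.request
from datetime import date

# Hypothetical base URL; substitute your real Vela API endpoint and auth.
VELA_API = "https://api.vela.example/v1"


def clone_name(prefix: str = "staging") -> str:
    # Date-stamped name so daily refreshes are easy to identify and prune.
    return f"{prefix}-{date.today().isoformat()}"


def create_clone(source_db: str, name: str) -> None:
    # Hypothetical endpoint: POST /databases/{source}/clones
    # Returns once the copy-on-write clone is ready (typically < 30 s).
    req = urllib.request.Request(
        f"{VELA_API}/databases/{source_db}/clones",
        data=json.dumps({"name": name}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

Wiring `create_clone("production", clone_name())` into a daily cron job or CI step replaces the manual pg_dump cycle entirely.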
Schema migrations are among the highest-risk operations in any database-backed application. An `ALTER TABLE` that rewrites the table takes 2 seconds on a 10 GB staging database but might take 45 minutes on a 400 GB production database, holding an exclusive lock on the table the entire time.
With Vela staging environments, you can clone production and test the full migration before applying it to prod:
```sql
-- 1. Clone production (done via Vela API or UI — takes < 30 seconds)

-- 2. Connect to the staging clone and run the migration:
BEGIN;

ALTER TABLE orders ADD COLUMN processed_at TIMESTAMPTZ;
UPDATE orders SET processed_at = created_at WHERE status = 'complete';

-- 3. Validate BEFORE tightening constraints: if this count is not 0,
--    the backfill missed rows and SET NOT NULL below will fail.
--    Catch that here, on the clone, not in production.
SELECT COUNT(*) FROM orders WHERE processed_at IS NULL;  -- Should be 0

ALTER TABLE orders ALTER COLUMN processed_at SET NOT NULL;

EXPLAIN ANALYZE
SELECT * FROM orders
WHERE processed_at > NOW() - INTERVAL '7 days';
-- Check index scan vs seq scan, and execution time

COMMIT;

-- 4. If all looks good, apply the same script to production.
--    If not, delete the clone and iterate with zero consequence.
```
| Dimension | Manual pg_dump refresh | Cloud snapshot restore | Vela CoW clone |
|---|---|---|---|
| Data freshness | Stale — manually refreshed | Semi-fresh — scheduled snapshots | On-demand or scheduled clone of prod |
| Setup time (100 GB DB) | 45–90 minutes | 10–30 minutes restore time | < 30 seconds |
| Storage cost | Full second copy on disk | Full copy after restore | Near-zero (shared blocks) |
| Multiple staging envs | Very expensive | Expensive | Each near-zero additional cost |
| Migration testing | Yes, if data is current | Yes | Yes — clone prod, run migration, validate |
| Environment isolation | Full | Full | Full — writes don't affect prod |
| Automated in CI | Yes, but slow | Yes, with cloud CLI | Yes, via API in seconds |
A good staging environment has four properties: (1) it mirrors production schema exactly, (2) it contains representative production-like data (not synthetic data), (3) it's isolated — writes to staging don't affect production, and (4) it can be refreshed quickly when it becomes stale. Most teams achieve the first and third properties but compromise on the second (using synthetic data) and struggle with the fourth (manual refresh is slow).
Ideally, staging should be refreshed before every significant test cycle — at minimum, whenever the production schema changes. In practice, teams refresh staging infrequently because pg_dump/restore is slow and disruptive. With copy-on-write cloning, refreshing staging takes under 30 seconds and can be scripted to run daily (or on-demand), so there's no reason to let it go stale.
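The "refresh when stale" policy is easy to encode. A small sketch, with the actual refresh call injected as a callable so the decision logic stays testable (how you trigger the clone — Vela API, CLI, or UI — is up to you):

```python
from datetime import datetime, timedelta
from typing import Callable


def is_stale(last_refreshed: datetime,
             max_age: timedelta = timedelta(days=1)) -> bool:
    # Staging older than max_age should be re-cloned from production.
    return datetime.now() - last_refreshed > max_age


def refresh_if_stale(last_refreshed: datetime,
                     do_refresh: Callable[[], None],
                     max_age: timedelta = timedelta(days=1)) -> bool:
    # do_refresh would create a fresh clone of production (e.g. via the
    # Vela API); it is injected here rather than hard-coded.
    if is_stale(last_refreshed, max_age):
        do_refresh()
        return True
    return False
```

Run this on a schedule (cron, CI) and staging can never drift more than `max_age` behind production.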
Yes. With copy-on-write cloning, each staging environment shares the unchanged data blocks with the production snapshot. Ten simultaneous staging environments of a 100 GB database use nearly the same additional storage as one — because only the writes to each environment require separate blocks. This makes per-team or per-feature staging environments economically viable.
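The storage arithmetic behind that claim is simple. Only blocks a clone actually rewrites need their own space; the "dirty fraction" below is an illustrative assumption (test workloads often touch only a few percent of the data):

```python
def clone_storage_gb(db_size_gb: float, n_clones: int,
                     dirty_fraction: float) -> float:
    """Extra storage used by n copy-on-write clones.

    Unchanged blocks are shared with the production snapshot; only the
    fraction of blocks each clone rewrites needs separate space.
    """
    return db_size_gb * dirty_fraction * n_clones


# Ten clones of a 100 GB database, each rewriting ~2% of its blocks:
cow = clone_storage_gb(100, 10, 0.02)      # ~20 GB extra, total
full_copies = 100 * 10                     # 1000 GB for ten pg_dump copies
```

Ten full copies cost 1000 GB; ten copy-on-write clones at a 2% dirty rate cost about 20 GB, which is why per-team staging environments become economically viable.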
The workflow is: (1) clone production to a temporary database, (2) apply the migration to the clone, (3) run validation queries to check data integrity, (4) run EXPLAIN ANALYZE on key queries to check performance impact, (5) if all looks good, apply the migration to production. If the migration causes issues on the clone, delete the clone and iterate — with no impact on production.
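Steps 2 and 3 of that loop can be rehearsed end to end. A minimal sketch using an in-memory SQLite database as a stand-in for the staging clone (the real workflow runs the same SQL against the PostgreSQL clone; the table and rows here are made up):

```python
import sqlite3

# Stand-in for the staging clone: in-memory SQLite with a toy orders table.
clone = sqlite3.connect(":memory:")
clone.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, created_at TEXT);
    INSERT INTO orders (status, created_at) VALUES
        ('complete', '2024-01-01'),
        ('complete', '2024-01-02');
""")

# Step 2: apply the migration to the clone, never to production.
clone.execute("ALTER TABLE orders ADD COLUMN processed_at TEXT")
clone.execute(
    "UPDATE orders SET processed_at = created_at WHERE status = 'complete'"
)

# Step 3: validation query. If this is non-zero, the backfill missed rows
# and a NOT NULL constraint would fail: fix it on the clone, not in prod.
unbackfilled = clone.execute(
    "SELECT COUNT(*) FROM orders WHERE processed_at IS NULL"
).fetchone()[0]
print(unbackfilled)  # 0: safe to proceed to production
```

If the count comes back non-zero, you delete the clone, adjust the backfill, and re-run — production never sees the broken intermediate state.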
A staging environment is a persistent, shared environment — typically refreshed periodically and used by QA and developers for longer-lived testing. A branch-per-PR database is an ephemeral environment created specifically for one pull request and deleted when the PR is merged. Both use the same copy-on-write cloning under the hood. Many teams use both: a persistent staging clone for QA regression testing, and ephemeral branch databases for individual PR review.
Try Vela's instant database cloning in the sandbox. No infrastructure required.