

Best practices for pg_dump and pg_restore for Azure Database for PostgreSQL - Flexible Server

This article reviews options and best practices for speeding up pg_dump and pg_restore. It also explains the best server configurations for carrying out pg_restore.

Best practices for pg_dump

You can use the pg_dump utility to extract a PostgreSQL database into a script file or archive file. A few of the command-line options that you can use to reduce the overall dump time with pg_dump are listed in the following sections.

Directory format (-Fd)

This option outputs a directory-format archive that you can input to pg_restore.

Parallel jobs (-j)

With pg_dump, you can run dump jobs concurrently by using the parallel jobs option. This option reduces the total dump time but increases the load on the database server. We recommend that you arrive at a parallel job value after closely monitoring source server metrics such as CPU, memory, and IOPS (input/output operations per second) usage. When you set a value for the parallel jobs option, pg_dump requires the following (a quick check is sketched after this list):

- The number of connections must equal the number of parallel jobs + 1, so be sure to set the max_connections value accordingly.
- The number of parallel jobs should be less than or equal to the number of vCPUs that are allocated for the database server.
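
The following is a minimal sketch of that check. The server name mydemoserver.postgres.database.azure.com, user myadmin, database sampledb, and job count of 4 are placeholders rather than values from this article; adjust them to your environment. The sketch reads the current max_connections setting and then runs a directory-format dump with four parallel jobs, which assumes the server has at least four vCPUs.

    # Confirm that max_connections can cover the parallel jobs plus one coordinating connection.
    psql "host=mydemoserver.postgres.database.azure.com user=myadmin dbname=sampledb sslmode=require" \
         -c "SHOW max_connections;"

    # Directory-format dump (-Fd) with 4 parallel jobs (-j 4); keep -j at or below the server's vCPU count.
    pg_dump -h mydemoserver.postgres.database.azure.com -U myadmin -d sampledb \
            -Fd -j 4 -f sampledb_dir_format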

Compression (-Z0)

This option specifies the compression level to use. Zero compression during the pg_dump process could help with performance gains. A pg_dump command that combines the directory format, parallel jobs, and compression options takes the following form:

    pg_dump -h <hostname> -U <username> -d <databasename> -Fd -j <number of parallel jobs> -Z0 -f sampledb_dir_format

Table bloats and vacuuming

Before you start the pg_dump process, consider whether table vacuuming is necessary. Bloat on tables significantly increases pg_dump times. Execute the following query to identify table bloats:

    select schemaname, relname, n_dead_tup, n_live_tup,
           round(n_dead_tup::float / n_live_tup::float * 100) dead_pct,
           autovacuum_count, last_vacuum, last_autovacuum,
           last_autoanalyze, last_analyze
    from pg_stat_all_tables
    where n_live_tup > 0;

The dead_pct column in this query is the percentage of dead tuples when compared to live tuples. A high dead_pct value for a table might indicate that the table isn't being properly vacuumed. For each table that you identify, you can perform a manual vacuum analyze by running the following:

    vacuum(analyze, verbose) <table_name>

For more information, see Autovacuum tuning in Azure Database for PostgreSQL - Flexible Server.
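
As an illustration, suppose the query reports a high dead_pct for a hypothetical table named public.orders. A manual vacuum before the dump might then look like the following sketch; the connection details are placeholders, not values from this article.

    # Vacuum and analyze a bloated table before starting pg_dump.
    psql "host=mydemoserver.postgres.database.azure.com user=myadmin dbname=sampledb sslmode=require" \
         -c "VACUUM (ANALYZE, VERBOSE) public.orders;"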

Running pg_dump on a PITR server

You can perform a pg_dump on an online or live server. It makes consistent backups even if the database is being used, and it doesn't block other users from using the database. Consider the database size and other business or customer needs before you start the pg_dump process. Small databases might be good candidates for performing a pg_dump on the production server.

For large databases, you could create a point-in-time recovery (PITR) server from the production server and perform the pg_dump process on the PITR server. Running pg_dump on a PITR server would be a cold-run process. The trade-off for this approach is that you wouldn't be concerned with the extra CPU, memory, and IO utilization that comes with a pg_dump process running on the actual production server. You can run pg_dump on a PITR server and then drop the PITR server after the pg_dump process is completed.
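
A rough sketch of that workflow with the Azure CLI follows. The resource group, server names, and restore timestamp are placeholders, and the az postgres flexible-server restore parameters shown here are assumptions to verify against the current CLI reference rather than options confirmed by this article.

    # Create a PITR server from the production server (parameter names are assumptions; verify before use).
    az postgres flexible-server restore \
        --resource-group myresourcegroup \
        --name mydemoserver-pitr \
        --source-server mydemoserver \
        --restore-time "2024-01-01T02:00:00Z"

    # Run pg_dump against the PITR server, then drop the PITR server when the dump completes.
    pg_dump -h mydemoserver-pitr.postgres.database.azure.com -U myadmin -d sampledb \
            -Fd -j 4 -Z0 -f sampledb_dir_format

    az postgres flexible-server delete --resource-group myresourcegroup --name mydemoserver-pitr --yes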

Best practices for pg_restore

You can use the pg_restore utility to restore a PostgreSQL database from an archive that's created by pg_dump.
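
As a closing sketch, restoring the directory-format archive produced earlier could look like the following. The target server name and database are placeholders, and the example assumes the empty target database already exists. pg_restore also accepts a parallel jobs option (-j), which is subject to the same vCPU and max_connections considerations described for pg_dump.

    # Restore the directory-format archive with parallel jobs into an existing, empty database.
    pg_restore -h mytargetserver.postgres.database.azure.com -U myadmin -d sampledb_restored \
               -Fd -j 4 sampledb_dir_format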