

This article looks at parameters that can be used to help manage memory in PostgreSQL. Recommended settings for each parameter are also provided.

In this post, we are going to look at some of the important GUC parameters recommended for memory management in PostgreSQL, which can help improve the performance of your database server. All of these parameters reside in the postgresql.conf file (inside the $PGDATA directory), which manages the configuration of the database server.

The shared_buffers parameter determines how much memory is dedicated to the server for caching data. The default value for this parameter, set in postgresql.conf, is:

#shared_buffers = 128MB

The value should be set to 15% to 25% of the machine's total RAM. For example: if your machine's RAM size is 32 GB, then the recommended value for shared_buffers is 8 GB. Please note that the database server needs to be restarted after this change.

The work_mem parameter provides the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Sort operations are used for ORDER BY, DISTINCT, and merge join operations; hash tables are used in hash joins and hash-based aggregation. The default value for this parameter, set in postgresql.conf, is:

#work_mem = 4MB

Setting the correct value for work_mem can result in less disk swapping, and therefore far quicker queries. We can use the formula below to calculate an optimal work_mem value for the database server:

Total RAM * 0.25 / max_connections

The max_connections parameter is one of the GUC parameters that specifies the maximum number of concurrent connections to the database server. We can also assign work_mem directly to a role:

postgres=# ALTER USER test SET work_mem = '4GB';

The maintenance_work_mem parameter provides the maximum amount of memory to be used by maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ... ADD FOREIGN KEY. The default value for this parameter, set in postgresql.conf, is:

#maintenance_work_mem = 64MB

It is recommended to set this value higher than work_mem; this can improve performance for vacuuming. In general, it should be:

Total RAM * 0.05

The effective_cache_size parameter estimates how much memory is available for disk caching by the operating system and within the database itself. The PostgreSQL query planner uses this estimate to decide whether the data a plan needs is likely to fit in RAM. With a higher value, index scans are more likely to be chosen; with a lower value, sequential scans are more likely. The recommendation is to set effective_cache_size to 50% of the machine's total RAM.

For more details and other parameters, please refer to the PostgreSQL documentation.

Due to performance and locking reasons, changing a column's data type using ALTER COLUMN can be a long-running operation. Suppose we have a table PRU with two columns. One is a column called id with type bigserial. In the second column, called A, we have integer data currently saved as the Text type. Let's say we want to change the type of column A to Integer.

If you'd like to follow along with an example of this scenario, let's first create the table and generate data for it, looping an INSERT statement until the table holds 2M rows. The first option is to convert the column in place:

ALTER /*optionA*/ TABLE PRU ALTER COLUMN A TYPE INTEGER USING A::INTEGER

We can then review the statistics for the command above with the following query:

SELECT * FROM pg_stat_statements WHERE query LIKE '%optionA%'

This method is the easiest one, but it can generate high contention due to the exclusive lock required on the table. This exclusive lock can generate errors in the application, and you might have to stop your application to perform this type of long-running operation.

Another approach to changing the data type of the column is to add a new column of the target type, along with a flag that tracks which rows have been converted:

ALTER TABLE PRU ADD COLUMN A1 INTEGER, ADD COLUMN A1_CHANGED BOOLEAN

The advantage of this method is that you have more control over the process: it can be executed over multiple hours or days as needed.
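Putting the memory-parameter recommendations together, here is what they work out to for a 32 GB machine. This is an illustrative sketch only, assuming max_connections = 100 (the PostgreSQL default); it is not a tuned production configuration:

```
# postgresql.conf sketch for a 32 GB machine, max_connections = 100
shared_buffers = 8GB            # 25% of total RAM
work_mem = 82MB                 # 32768 MB * 0.25 / 100 ≈ 82 MB
maintenance_work_mem = 1638MB   # total RAM * 0.05
effective_cache_size = 16GB     # 50% of total RAM
```

After editing these values, shared_buffers requires a server restart, while the others can be applied with a configuration reload.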
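For reference, the follow-along setup for the PRU example can be sketched as below. The original post's CREATE TABLE and row-generation statements are not preserved in this text, so the definitions here are assumptions that match the description (a bigserial id, integer data stored as text, 2M rows):

```sql
-- Hypothetical reconstruction of the example table
CREATE TABLE PRU (
    id BIGSERIAL PRIMARY KEY,
    A  TEXT
);

-- Generate 2M rows of integer data stored as text
INSERT INTO PRU (A)
SELECT i::TEXT
FROM generate_series(1, 2000000) AS i;

-- Option A: convert the column in place
-- (takes an ACCESS EXCLUSIVE lock and rewrites the table)
ALTER /*optionA*/ TABLE PRU ALTER COLUMN A TYPE INTEGER USING A::INTEGER;

-- Review statistics for the statement above
-- (requires the pg_stat_statements extension)
SELECT * FROM pg_stat_statements WHERE query LIKE '%optionA%';
```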
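The multi-step alternative (adding A1 and A1_CHANGED, then migrating gradually) could proceed along these lines. The batch size, the backfill loop, and the final column swap are assumptions for illustration; a production migration would also need a trigger to convert rows written while the backfill runs:

```sql
-- Step 1: add the new column and a conversion flag (fast, metadata-only)
ALTER TABLE PRU ADD COLUMN A1 INTEGER, ADD COLUMN A1_CHANGED BOOLEAN;

-- Step 2: backfill in small batches; repeat until no rows are updated.
-- Each batch holds its row locks only briefly.
UPDATE PRU
SET    A1 = A::INTEGER,
       A1_CHANGED = TRUE
WHERE  id IN (
    SELECT id FROM PRU
    WHERE  A1_CHANGED IS NOT TRUE
    LIMIT  10000
);

-- Step 3: once every row is converted, swap the columns
-- in one short transaction
BEGIN;
ALTER TABLE PRU DROP COLUMN A;
ALTER TABLE PRU RENAME COLUMN A1 TO A;
ALTER TABLE PRU DROP COLUMN A1_CHANGED;
COMMIT;
```

Because the backfill can be paused and resumed between batches, this is the variant that can be spread over multiple hours or days.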
