PostgreSQL Installation Instructions
IFI CLAIMS will create a single tar.gz file that includes a subdirectory of tar.gz files, one for each of the tables in the PostgreSQL data warehouse. We will provide you with a link to access and download the file. Alternatively, if you have received the data on a USB drive, connect it to your intended PostgreSQL machine and mount the drive so that it can be read.
Note: It is recommended to copy and paste the code provided in these instructions.
1. If you received the data as a tar.gz file, extract it into your local environment. The receiving drive requires approximately 4TB of free space to download and extract the file. The extracted file will consist of a number of smaller tar.gz files. There is no need to extract each of these smaller files.
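As a sketch, the extraction might look like the following. The file and directory names are placeholders; substitute the actual file you downloaded and a destination with at least 4TB free.

```shell
# Placeholder names: substitute your downloaded file and target directory.
mkdir -p /data/alexandria-backfile
tar -xzvf alexandria-backfile.tar.gz -C /data/alexandria-backfile

# The result is a set of per-table tar.gz files; leave them compressed.
ls /data/alexandria-backfile
```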
2. Prepare the repositories and run a yum update to pull in the patched version of libxml2 from the IFI CLAIMS repository, along with any other pending updates. Adjust the code if you are using a different version of PostgreSQL.
Note: Reboot if the kernel was upgraded.
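A minimal sketch of this step, assuming the IFI CLAIMS yum repository has already been configured on the host:

```shell
# Refresh repository metadata and apply all pending updates,
# including the patched libxml2 from the IFI CLAIMS repository.
sudo yum clean all
sudo yum -y update

# Only needed if the update installed a new kernel:
# sudo reboot
```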
3. CLAIMS Direct requires a working PostgreSQL cluster. If you already have one, skip to step 5. If you do not have an initialized cluster, the following commands install PostgreSQL and initialize the cluster. The initdb command must be run by the user who owns PostgreSQL (the postgres user).
Note: PostgreSQL, by default, only allows local connections. If you would like to open access to network clients or are installing the Client Tools on a separate server, please see Allowing Remote Access to PostgreSQL Alexandria.
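As a sketch for a yum-based distribution (package and data-directory names vary with the PostgreSQL version you install, so adjust accordingly):

```shell
# Install the server packages (names depend on the PostgreSQL version).
sudo yum -y install postgresql-server postgresql-contrib

# initdb must run as the postgres user; the data directory is the
# distribution default and may differ on your system.
sudo -u postgres initdb -D /var/lib/pgsql/data
```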
4. Enable and restart the PostgreSQL cluster.
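For a systemd-based system, this step might look like the following. The service name is an assumption and may be versioned (e.g., postgresql-12) depending on how PostgreSQL was installed.

```shell
# Start PostgreSQL now and have it start automatically at boot.
sudo systemctl enable postgresql
sudo systemctl restart postgresql
```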
5. Create the role alexandria and load the SQL into the instance via psql.
Install the schema and tools.
Create the database.
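A sketch of these sub-steps, assuming stock psql tooling. The password and SQL file name are placeholders; use the SQL files shipped with your CLAIMS Direct distribution.

```shell
# Create the alexandria role (choose a real password).
sudo -u postgres psql -c "CREATE ROLE alexandria WITH LOGIN PASSWORD 'change-me';"

# Create the database owned by the new role.
sudo -u postgres createdb -O alexandria alexandria

# Load the schema and tools from the distributed SQL (placeholder file name).
sudo -u postgres psql -d alexandria -f alexandria-schema.sql
```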
6. To ensure that the database has been created, run:
The results should show the alexandria database.
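For example, either of the following should list the alexandria database:

```shell
# List all databases in the cluster.
sudo -u postgres psql -c "\l"

# Or query the catalog directly; prints "alexandria" if the database exists.
sudo -u postgres psql -tAc "SELECT datname FROM pg_database WHERE datname = 'alexandria';"
```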
7. Tune the database before loading.
In postgresql.conf, adjust the autovacuum settings as follows:
This is the recommended setting for a 16-core machine.
For a 4-core machine, a setting of 2 is recommended.
|Value|Description|
|---|---|
|0.02|This setting sets the threshold that determines when autovacuum needs to run on a table.|
|0.01|This setting tells the autovacuum process to analyze a table (i.e., update query planner statistics) when the fragmentation percentage reaches 1% (the default is 10%).|
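Applied in postgresql.conf, the settings above might look like the fragment below. The parameter names are our assumption, inferred from the values and the defaults described; confirm them against the PostgreSQL documentation for your version.

```
# postgresql.conf fragment (parameter names assumed from the descriptions above)
autovacuum_max_workers = 16             # for a 16-core machine; use 2 on a 4-core machine
autovacuum_vacuum_scale_factor = 0.02   # per-table vacuum threshold
autovacuum_analyze_scale_factor = 0.01  # analyze at 1% churn (default is 0.1, i.e., 10%)
```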
For other performance tuning, use the online tool https://pgtune.leopard.in.ua/#/. Fill in the required values that correspond to your system and add the suggested changes to the bottom of postgresql.conf.
Note: For changes to be applied, PostgreSQL needs to be restarted:
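On a systemd-based system this would be, for example (the service name may be versioned, e.g., postgresql-12):

```shell
sudo systemctl restart postgresql
```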
8. Run the pre-flight check script to confirm that your system is properly configured to load the data.
The sample output of a properly configured system looks like this:
Resolve any errors you recognize. For unfamiliar errors, contact email@example.com.
9. Switch to the directory that holds the backfile you extracted in step 1. Use the load script to load the CLAIMS Direct data into PostgreSQL tables. Since the loading process will take 1-2 days, we recommend that you use the nohup command to detach the script from the terminal and allow it to run in the background.
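As a sketch, with placeholder directory and script names (use the load script supplied with the backfile):

```shell
# Run the load detached from the terminal; output goes to load.log.
cd /data/alexandria-backfile
nohup ./load.sh > load.log 2>&1 &
```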
10. You can monitor the load using:
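One simple approach, assuming the load script's output was redirected to a log file (load.log here is a placeholder):

```shell
# Follow the load script's output as it runs.
tail -f load.log
```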
11. Once the loading process is complete, you can run the cd-count.sh script, a simple QA of table counts, to ensure that the tables have loaded correctly. This may take an hour or more to run.
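Since the count check can run for an hour or more, you may want to detach it as well (the log file name is a placeholder):

```shell
# Run the table-count QA in the background and capture its output.
nohup ./cd-count.sh > cd-count.log 2>&1 &
```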
The results should show that 39 xml tables and 4 cdws tables have loaded. The following tables will show a count of 0:
The following tables will be populated if you have a Premium Plus subscription. For Basic and Premium subscriptions, they will show a count of 0:
More information about the tables can be seen in Data Warehouse Design.
12. Optional: you may want to run a simple SQL query as an additional test to confirm that the data is present.
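For example, a spot check like the following (the table name is a placeholder; pick any populated table listed in Data Warehouse Design):

```shell
# A non-zero count confirms that data is present in the chosen table.
sudo -u postgres psql -d alexandria -c "SELECT count(*) FROM some_table;"
```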