The following configuration changes should be made after the Console has been installed and is working, in order to improve performance.
1. Create a Database Maintenance Service Task that only defragments the indexes, schedule it to run daily, and execute it immediately when you configure it in the Console. Do not enable the other options for this task. If you want those other options to run, create a separate task for them:
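The Maintenance Service Task performs the defragmentation itself; purely as an illustration of the kind of work it does, a manual reorganize of all indexes in the Console database could look like the following T-SQL sketch (the database name IdfMC is taken from later in this document; sp_MSforeachtable is an undocumented but widely available SQL Server procedure):

```sql
-- Illustrative sketch only: reorganize every index in the IdfMC database.
-- The scheduled Maintenance Service Task does this for you.
USE IdfMC;
GO
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REORGANIZE';
GO
```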
2. In SSMS (SQL Server Management Studio), set the “Maximum server memory” option, shown in the following screenshot.
- The value to set can be determined as follows:
- Stop SQL Server and note the RAM in use.
- Apply this formula:
Max RAM = Total RAM – RAM used when SQL Server is stopped – (~1/8 of Total RAM)
You will need to restart SQL Server from within SSMS after changing the setting.
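As a worked example of the formula above (with assumed numbers): on a server with 16 GB of total RAM where 3 GB is in use while SQL Server is stopped, the value would be 16 − 3 − (16/8) = 11 GB. If you prefer to set it with T-SQL instead of the SSMS dialog, the same value can be applied via sp_configure (which takes the value in MB):

```sql
-- Hypothetical example: 16 GB total RAM, 3 GB used with SQL Server stopped.
-- Max RAM = 16 - 3 - 2 = 11 GB = 11264 MB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 11264;
RECONFIGURE;
```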
For Consoles with many endpoints (e.g. >500)
If there are many endpoints (e.g. >500) communicating with the Console, it is also recommended to increase the client polling interval (e.g. to 30 minutes) in a System Policy applied to the endpoints, and to give that policy a higher priority than other policies so the setting is not overridden by a policy that configures a lower value.
If there are many endpoints (e.g. >500) communicating with the Console, it is also recommended to decrease the Maximum Concurrent Clients value in the CAT (Console Administrator Tool) from 50 to 20:
If there are many endpoints (e.g. >500) communicating with the Console, it is also recommended to increase the Maximum Client Retry Count (e.g. to 3 as a starting point) in the CAT. Endpoints that have been rejected that many times will then be admitted regardless of the Maximum Concurrent Clients limit. This exception lasts for the number of seconds defined by the Maximum Clients Ignore Time value, which should give starved clients enough time to request their state and then their policies.
If the database is larger than 20-30 GB
If the database is larger than around 20 to 30 GB, or if there is a high I/O stall percentage, it is recommended to place the tempdb database on a separate physical disk from the IdfMC database (if it is not on one already). With large databases, the tempdb data file can become an I/O bottleneck when it shares a drive with the database itself.
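Moving tempdb is done by redefining its file locations and then restarting the SQL Server service. A sketch, assuming the new disk is mounted as T:\ and the default tempdb logical file names (tempdev, templog) are in use:

```sql
-- Hypothetical example: move the tempdb data and log files to a separate
-- physical disk (assumed here to be T:\tempdb, which must already exist).
-- The change takes effect after the SQL Server service is restarted.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');
```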
The I/O stall percentage can be viewed by performing a Gather Data operation in the CAT and then, after extracting the resulting zip file, opening the DataStatistics.xml file in a web browser:
Or by running the following SQL query on the database and viewing the results, as shown in the screenshot below:
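The exact query appears only in the screenshot; a sketch of an equivalent query, based on the standard SQL Server DMV sys.dm_io_virtual_file_stats, would be:

```sql
-- Hedged sketch (the original query is only shown in the screenshot):
-- per-file I/O stall statistics for all databases on the instance.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall,                              -- total ms waited on I/O
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms,
       vfs.num_of_reads + vfs.num_of_writes AS total_io
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY vfs.io_stall DESC;
```

High io_stall values for the IdfMC or tempdb files relative to their I/O counts would support moving tempdb to a separate disk as described above.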