Patching the SQL Server
When patching the server running SQL Server, or whenever you take SQL Server offline, it is a good idea to stop the Server Service on the Console IIS server so that a series of errors is not produced while SQL Server is down. If you do not stop the Server Service, you should restart it after SQL Server is back online.
The following configuration changes should be performed after the Console is installed and usable in order to improve performance.
1. Create a Database Maintenance Service Task that only defragments the indexes, schedule it to run daily, and execute it immediately when you configure it in the Console. Do not enable the other options for that task. If you want those other options to run, create a separate task for them:
2. In SSMS (SQL Server Management Studio), set the “Maximum server memory” option, shown in the following screenshot.
- The value to set can be determined as follows:
- Stop SQL Server and note the RAM in use shown in Windows Task Manager.
- An approximate value can then be calculated with this formula:
Max server memory = Total RAM – RAM used while SQL Server is stopped – (~1/8 of Total RAM)
You will need to restart the SQL Server from within SSMS after changing the setting.
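As an illustration of the formula above, the following T-SQL sketch computes a suggested value from example numbers and applies it with sp_configure. The RAM figures are assumed examples only; substitute your own measurements:

```sql
-- Example values only (assumed): substitute your server's actual measurements.
DECLARE @TotalRAM_MB  int = 32768;   -- 32 GB total physical RAM
DECLARE @UsedRAM_MB   int = 6144;    -- RAM in use while SQL Server is stopped
DECLARE @MaxMemory_MB int = @TotalRAM_MB - @UsedRAM_MB - (@TotalRAM_MB / 8);

SELECT @MaxMemory_MB AS [Suggested max server memory (MB)];  -- 22528 with these inputs

-- Apply the value ('show advanced options' must be enabled first):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', @MaxMemory_MB;
RECONFIGURE;
```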
3. To prevent the logs in the CAT from growing very large, clean them up. You can do so from within the CAT, or you can create a Trace Log Cleanup Service Task to clean them up periodically. Usually, removing logs older than 1 month and having the task run once a month is preferred. http://my.spirion.com/Help/EnterpriseConsole/index.htm#3539.htm
4. Create a Data Cleanup Service Task to run once a week. The details about this Service Task can be found in the following linked user guide:
Trace Log Cleanup
The Trace Log Cleanup service task removes Console and service logs based on a user-specified time period. These logs can be viewed using the Console Administrator Tool (CAT). If the Console encounters an unexpected issue, a message is written to a log file. Storing a large number of logs in the Console database may slow the performance of the Console over time. To delete old, unnecessary logs stored in the Console database, you can run the Trace Log Cleanup service task.
For Consoles with many endpoints (e.g. >500)
If there are many endpoints (e.g. >500) communicating with the Console, it is recommended to increase the polling time interval of the clients (e.g. to 30 minutes) in a System Policy applied to the endpoints, and to give that policy a higher priority than other policies so that the setting is not overwritten by another policy with a lower value. The System Policy setting that configures the polling interval is the following:
If there are many endpoints (e.g. >500) communicating with the Console, it is also recommended to decrease the Maximum Concurrent Clients value in the CAT (Console Administrator Tool) from 50 to 20:
If there are many endpoints (e.g. >500) communicating with the Console, it is also recommended to increase the Maximum Client Retry Count (e.g. to 3 as a start) in the CAT. This causes endpoints that have been rejected that number of times to be allowed to connect regardless of the Maximum Concurrent Clients limit. The exception lasts for the number of seconds defined by the Maximum Clients Ignore Time value, which should allow starved clients to request their state and then their policies within that window.
If the database is larger than 20 to 30 GB
If the database is larger than around 20 to 30 GB, or if there is a high I/O stall percentage, it is recommended to place the tempdb database on a separate physical disk from the IdfMC database (if it is not already on a separate physical drive). With large databases there can be I/O bottlenecks on the tempdb data file when it is on the same drive as the database itself.
The I/O stall percentage can be viewed by performing a Gather Data in the CAT and then, after extracting the gather-data zip file, opening the DataStatistics.xml file in a web browser:
Or by running the following SQL query on the database and viewing the results of the query, shown in the screenshot below:
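The exact query is shown only in the screenshot; queries of this kind typically read the sys.dm_io_virtual_file_stats DMV. The following is a sketch of such a query, not necessarily the one pictured:

```sql
-- Sketch: per-file I/O stall figures from the virtual file stats DMV
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall,                                   -- total ms spent waiting on I/O
       vfs.num_of_reads + vfs.num_of_writes AS total_io,
       CAST(vfs.io_stall * 1.0 /
            NULLIF(vfs.num_of_reads + vfs.num_of_writes, 0) AS decimal(10, 2))
         AS avg_stall_ms_per_io
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_stall_ms_per_io DESC;
```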
For normal operations (i.e. not while upgrading), the tempdb would be around 25% of the total database size.
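To check this ratio, the sizes can be compared with a query such as the following sketch (database name IdfMC as used above; sizes are reported in 8 KB pages, hence the conversion to MB):

```sql
-- Compare tempdb size to the Console database size
SELECT 'tempdb' AS database_name, SUM(size) * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
UNION ALL
SELECT 'IdfMC', SUM(size) * 8 / 1024
FROM IdfMC.sys.database_files;
```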
Here is a little more information that you may find useful:
Please ensure that Autogrowth for the IdfMC_log file is enabled, and that Restricted File Growth is set to an appropriate level:
e.g. Autogrowth enabled, with Maximum File Size larger than the current size of the database plus the file growth value
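These settings can also be checked with a query against sys.database_files instead of the SSMS dialog; a sketch, run in the IdfMC database:

```sql
USE IdfMC;  -- the Console database
GO
SELECT name,
       physical_name,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS autogrowth,                       -- growth of 0 means autogrowth is disabled
       CASE WHEN max_size = -1 THEN 'Unrestricted'
            ELSE CAST(max_size * 8 / 1024 AS varchar(10)) + ' MB'
       END AS max_file_size
FROM sys.database_files;
```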
The transaction log is truncated on a checkpoint under the SIMPLE recovery mode. Here is some more technical information about checkpoints:
You can try to force a checkpoint by running this in the Console’s database:
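The statement itself appears only in the screenshot; forcing a checkpoint is normally done with the standard T-SQL CHECKPOINT command, e.g. (database name IdfMC as above):

```sql
USE IdfMC;   -- the Console database
GO
CHECKPOINT;  -- flushes dirty pages to disk; under SIMPLE recovery the log can then be truncated
```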
Here are a couple of Microsoft articles that explain what the tempdb is used for:
Tempdb is used to hold: temporary user objects that are explicitly created, such as global or local temporary tables and indexes, temporary stored procedures, table variables, tables returned in table-valued functions, or cursors; and internal objects that are created by the database engine.
The tempdb system database is a global resource that is available to all users connected to the instance of SQL Server and is used to hold the following: Temporary user objects that are explicitly created, such as: global or local temporary tables, temporary stored procedures, table variables, or cursors.
NOTE: The size of the database can be affected by several things you may have configured. For example, if you have old results that you have not purged, that will add to the size of the database. Also, if you are sending up "Preview Data" with your results, that will make the database twice as large, on average. There are some tasks you can run to manage and reduce the overall size of your Spirion database. Following is the link to the article 'How To Manage and Reduce The Size Of Your Spirion Database':