How to optimize SQL Server

  
1. Store data and log files separately on different disks

Data files and log files generate a lot of I/O. When possible, log files should be stored on a different physical disk than the data and index files to spread the I/O load; this also facilitates database disaster recovery.
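The separation above can be set up when the database is created. A minimal sketch, assuming a hypothetical database name and hypothetical drive letters (D: for data, E: for logs):

```sql
-- Sketch: place the data file and the log file on different physical drives.
-- Database name, file names, and paths are hypothetical.
CREATE DATABASE SalesDb
ON PRIMARY
(
    NAME = SalesDb_Data,
    FILENAME = 'D:\SQLData\SalesDb_Data.mdf',   -- data file on drive D:
    SIZE = 500MB,
    FILEGROWTH = 100MB
)
LOG ON
(
    NAME = SalesDb_Log,
    FILENAME = 'E:\SQLLogs\SalesDb_Log.ldf',    -- log file on drive E:
    SIZE = 200MB,
    FILEGROWTH = 50MB
);
```

Because log writes are sequential and data-file access is largely random, keeping them on separate spindles avoids the two access patterns interfering with each other.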

2. Store the tempdb database separately on its own disk

The tempdb database is a temporary database that all other databases may use. When you use SELECT INTO, or execute ORDER BY on a column that is not indexed, a temporary worktable is created in tempdb to hold the intermediate results. Since building and populating temporary tables can severely degrade system performance, the columns to be sorted should be indexed whenever possible.

At the same time, tempdb is shared by all users and applications, so if one user consumes all the space in tempdb, operations in other databases can no longer proceed. When possible, place tempdb on its own faster disk or RAID array so that its I/O is separated from other databases' I/O, which improves performance.

The tempdb database should be sized appropriately to meet users' needs, and its space should be allowed to grow automatically. If automatic growth is disabled, a query that builds a temporary table larger than the remaining tempdb capacity will fail to complete.

Set the growth increment of the tempdb database appropriately. Too small an increment causes frequent growth events, which produce more external fragmentation and consume more resources.
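Both recommendations can be applied with ALTER DATABASE. A sketch, assuming a hypothetical dedicated drive T: (tempdev and templog are the actual default logical file names of tempdb; the new file location takes effect only after the instance is restarted):

```sql
-- Sketch: move tempdb to a dedicated, faster drive (path T:\ is hypothetical)
-- and use a fixed growth increment rather than a small percentage.
-- The new file locations take effect after the SQL Server service restarts.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev,
             FILENAME = 'T:\TempDB\tempdb.mdf',
             SIZE = 1024MB,
             FILEGROWTH = 256MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog,
             FILENAME = 'T:\TempDB\templog.ldf',
             SIZE = 512MB,
             FILEGROWTH = 128MB);
```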

3. Avoid data hotspots

Prior to SQL Server 7.0, for a table without a clustered index (a heap table), newly inserted rows were always placed at the end of the table on disk. If many concurrent users insert or update rows in the same table at the same time, a data hotspot can form at the end of a very busy table: concurrent I/O operations concentrate on a small number of pages, which degrades database performance.

In SQL Server, physical storage space for new data rows is allocated through PFS (Page Free Space) pages. The PFS management algorithm spreads insert operations across pages to avoid creating data hotspots.

When designing applications and databases, avoid creating clustered primary keys on monotonically increasing columns, which can lead to data hotspots.
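To illustrate the difference, a sketch with hypothetical table names; the first design concentrates inserts on the table's last page, the second spreads them out:

```sql
-- Sketch: a clustered primary key on an ever-increasing IDENTITY column
-- concentrates all inserts on the tail page of the table.
CREATE TABLE OrdersHot (
    OrderId INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,  -- every insert hits the last page
    CustomerId INT NOT NULL,
    OrderDate DATETIME NOT NULL
);

-- One alternative: cluster on a non-monotonic key so inserts are
-- distributed across many pages instead of one hot page.
CREATE TABLE OrdersSpread (
    OrderId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
    CustomerId INT NOT NULL,
    OrderDate DATETIME NOT NULL
);
```

Note the trade-off: random GUID keys avoid the tail-page hotspot but cause page splits and index fragmentation of their own, so this choice should be weighed against the workload.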

4. Use smaller data types

When designing tables, use the smallest data types that can hold the data. That way each data page holds the most information, fewer data pages are needed, and fewer I/O operations are required to retrieve them, so efficiency is higher.
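A sketch of what this looks like in practice; the table and column choices are illustrative:

```sql
-- Sketch: choosing the narrowest type that fits the data lets more rows
-- fit on each 8 KB data page. All names here are hypothetical.
CREATE TABLE Product (
    ProductId  INT NOT NULL,          -- 4 bytes; BIGINT (8 bytes) would be wasteful here
    CategoryId SMALLINT NOT NULL,     -- 2 bytes instead of a 4-byte INT
    InStock    BIT NOT NULL,          -- 1 bit instead of CHAR(1)
    Name       VARCHAR(100) NOT NULL, -- variable length instead of fixed CHAR(100)
    Price      DECIMAL(9,2) NOT NULL  -- 5 bytes, enough for values up to 9,999,999.99
);
```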

5. Monitor and defragment file space

Automatic growth of file space simplifies administration but can lead to fragmentation: the physical layout on disk no longer matches the logical order of the data. Regular monitoring and defragmentation help maintain I/O performance.
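On SQL Server 2005 and later, index fragmentation can be monitored with the sys.dm_db_index_physical_stats dynamic management function and corrected with ALTER INDEX. A sketch, with a hypothetical index and table name:

```sql
-- Sketch: report fragmented indexes in the current database.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name                     AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10;

-- A common rule of thumb: reorganize between roughly 10% and 30%
-- fragmentation, rebuild above 30%. Index and table names are hypothetical.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
-- ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;
```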

6. Use primary and secondary data files

Each database has one primary data file, which belongs to the primary filegroup. For a database of about 1 GB, a single data file is sufficient. If secondary data files exist, the primary data file holds the pointers for managing them.

When using multiple data files, it is recommended that the primary data file store system objects and tables, and that the secondary data files store user data and indexes; this helps improve I/O performance. When possible, store the primary and secondary data files on separate disks to spread I/O.
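One way to arrange this is to put the secondary file in its own filegroup and make that filegroup the default, so new user tables and indexes land there while system objects stay in the primary file. A sketch, with hypothetical names and paths:

```sql
-- Sketch: primary file (.mdf) on one disk, secondary file (.ndf) on another.
-- Database name, filegroup name, and paths are hypothetical.
CREATE DATABASE InventoryDb
ON PRIMARY
( NAME = Inv_Primary, FILENAME = 'C:\SQLData\InventoryDb.mdf',       SIZE = 100MB ),
FILEGROUP UserData
( NAME = Inv_Data1,   FILENAME = 'D:\SQLData\InventoryDb_Data1.ndf', SIZE = 500MB )
LOG ON
( NAME = Inv_Log,     FILENAME = 'E:\SQLLogs\InventoryDb.ldf',       SIZE = 200MB );

-- Make the secondary filegroup the default, so user tables and indexes
-- are created there and the primary file is left for system objects.
ALTER DATABASE InventoryDb MODIFY FILEGROUP UserData DEFAULT;
```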


7. Improve performance with filegroups

In large database systems, consider creating filegroups to manage data files. Tables and indexes can be stored on different physical disks; by monitoring and comparing performance across layouts, you can arrive at an optimized storage solution.
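A sketch of placing a table and its nonclustered index on different filegroups (and thus on different disks, if their files are on separate drives). The filegroups FG_Data and FG_Index are assumed to exist already, and all object names are hypothetical:

```sql
-- Sketch: table on one filegroup, its nonclustered index on another.
-- FG_Data and FG_Index are assumed to already exist in the database.
CREATE TABLE dbo.Orders (
    OrderId    INT NOT NULL PRIMARY KEY,
    CustomerId INT NOT NULL,
    OrderDate  DATETIME NOT NULL
) ON FG_Data;

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
ON FG_Index;   -- index I/O goes to a different filegroup than table I/O
```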

8. Pay attention to the performance problems that can be caused by automatic growth and automatic shrinking

Automatic growth and automatic shrinking of database files are very convenient for managing small databases, but they can cause performance problems in large ones. As a file grows, the newly allocated space is not necessarily contiguous with the existing space, which produces fragmentation. When a file shrinks, part of its space is released; if the file later needs to grow but cannot reuse the space that was released, that also leads to fragmentation.
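For a large database, a common mitigation is to disable automatic shrinking and pre-size files with a fixed, reasonably large growth increment so that growth events are rare. A sketch, with a hypothetical database and logical file name:

```sql
-- Sketch: disable auto-shrink and use a fixed growth increment.
-- Database and logical file names are hypothetical.
ALTER DATABASE SalesDb SET AUTO_SHRINK OFF;

ALTER DATABASE SalesDb
MODIFY FILE (NAME = SalesDb_Data, FILEGROWTH = 256MB);
```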

9. Separating system data and user data

Separating the system databases and user databases onto different physical disks helps improve I/O performance and also aids database backup and recovery.

10. Optimize index design

Index design is very important to database performance. For details, please refer to the related articles on this blog.

11. Regularly update statistics

SQL Server uses a cost-based optimizer by default, so keeping statistics up to date is very important for query optimization.
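Statistics can be refreshed per table or database-wide. A sketch, with hypothetical database and table names:

```sql
-- Sketch: keep automatic statistics updates on, and refresh manually
-- after large data changes. Database and table names are hypothetical.
ALTER DATABASE SalesDb SET AUTO_UPDATE_STATISTICS ON;

-- Refresh statistics for one table with a full scan ...
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- ... or for every table in the current database.
EXEC sp_updatestats;
```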

12. Regular consistency check

Regularly check the consistency of the database to ensure the integrity of the database.
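The standard tool for this is DBCC CHECKDB, typically scheduled as a regular maintenance job. A sketch, with a hypothetical database name:

```sql
-- Sketch: run a full consistency check on the database, suppressing
-- informational messages so only errors are reported.
DBCC CHECKDB ('SalesDb') WITH NO_INFOMSGS;
```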

Copyright © Windows knowledge All Rights Reserved