I’m not a DBA, But I Play One on TV: Part 3 – Database Files

When a customer invites me to review their SQL Server or Oracle databases and server architecture, I start with the servers. I review the hard disk layout and a few server settings. The very next thing I do is review the data files and log files for the databases. In the case of SQL Server, when I see one data file and one log file in the same directory and the database has one file group called Primary, I know I am once again presiding over amateur hour at the local chapter of the Jr. Database Developer Wannabe Club.


One data file in a single filegroup indicates to me:

  1. Someone clicked through the “create new database” wizard.
  2. No pre-development design analysis was done before the database was created.
  3. No one bothered to check readily available best practices for SQL Server.
  4. I can anticipate equally uninformed approaches to table design, index design, and query authoring.


This will antagonize the hardware striping advocacy group, but there are good reasons to split up your data files and log files. In the case of TempDB specifically, you can greatly improve performance by creating the same number of data files as you have processors. With this configuration, each processor handles the I/O for its own file.
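As a rough illustration, here is what adding TempDB data files looks like; this is only a sketch, and the file names, paths, and sizes are placeholders you would adjust for your own server.

-- Hypothetical example: add TempDB data files until the file count matches the CPU count.
-- File names, paths, and sizes are placeholders.
alter database tempdb
add file (name = tempdev2, filename = 'T:\TempDB\tempdev2.ndf', size = 4GB, filegrowth = 512MB);

alter database tempdb
add file (name = tempdev3, filename = 'T:\TempDB\tempdev3.ndf', size = 4GB, filegrowth = 512MB);
-- ...repeat until the number of TempDB data files equals the number of processors.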


Check out number 8 here: http://technet.microsoft.com/en-US/library/cc966534


In addition to performance, recovery processes benefit greatly from splitting up the database files. Previously, if a data file failed, SQL Server would take the whole database offline, whether everything was in one file or not. SQL Server 2012 added a feature that keeps the database accessible; only the data located in the corrupt or otherwise unavailable file is offline. Of course, if all the data is in that one file, your entire dataset is down until you can recover it. If a failed data file contains only a subset of the data in a table, the rest of that table is still available for querying.
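A quick way to see how a database’s data is spread across files and filegroups, and whether any file is in an offline state, is to query the catalog views. A minimal sketch, run in the database in question:

-- List each file, its filegroup, its state, and its size in MB (size is stored in 8 KB pages).
select fg.name as filegroup_name,
       df.name as logical_file_name,
       df.physical_name,
       df.state_desc,
       df.size * 8 / 1024 as size_mb
from sys.database_files df
left join sys.filegroups fg on df.data_space_id = fg.data_space_id
order by fg.name, df.name;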


Now, you might say, fine, we’ll have a separate file for every table and multiple files for some. I’ve seen that configuration and there isn’t anything wrong with it. If your IT department isn’t using SQL Server to manage backups and is instead backing up the raw files scattered across all those drives, they’re going to be annoyed with you. However, this configuration gives you maximum flexibility. For instance, placing tables that are commonly used at the same time on different spindles means they won’t compete for disk I/O.
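A sketch of that kind of layout, with made-up database, filegroup, and path names, might look like this:

-- Hypothetical layout: put a heavily used table on its own filegroup and drive.
alter database SalesDB add filegroup OrdersFG;

alter database SalesDB
add file (name = OrdersData1, filename = 'E:\Data\OrdersData1.ndf', size = 10GB, filegrowth = 1GB)
to filegroup OrdersFG;

create table dbo.Orders
(
    OrderID int identity(1,1) not null,
    CustomerID int not null,
    OrderDate datetime not null
) on OrdersFG;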


Splitting up your log files is also beneficial. Log files are filled in a round-robin fashion: when one reaches the size you’ve set, SQL Server starts filling the next. Hopefully you have at least four, and they are of sufficient size. This gives you time to back up the transaction log between full backups, making sure no transactions are lost because a file rolls over before the log backup removes completed transactions and marks that space as reusable.
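The mechanics are simple enough; a minimal sketch of a scheduled transaction log backup (the database name and backup path are placeholders) looks like this:

-- Hypothetical scheduled log backup; in practice the file name would be time-stamped.
backup log SalesDB
to disk = 'G:\Backups\SalesDB_log.trn';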


Next episode will cover backup basics. The purpose of all these posts is to provide the understanding needed to apply the best configuration to the database system you’re building.


SQL Server Indexes: Using the Clustered Index

If you really want to understand SQL Server indexing, I suggest following Stairway to SQL Server Indexes. My first blog post on SQL Server indexes focuses on clustered indexes. I’m going to paraphrase a lot of the information found in the articles linked above, to spare those of us who don’t want an intimate scientific knowledge of this topic, and address what is pertinent to re-factoring indexes to reduce deadlocks.

Simply put, indexes are smaller, concise reference structures associated with data tables that tell SQL Server where the information requested in a query is located. Without indexes, the query engine performs a full table scan (sequentially reading every page in the table) to retrieve the requested data, rather than jumping directly to the portions of the data where the requested rows are stored. In cases where all the information requested resides in an index, SQL Server simply returns the rows from the index and does not access the main data table at all.

Non-clustered indexes are separate objects with separate storage. A clustered index, by contrast, tells SQL Server how to sort the data in the main data table itself and creates a permanent logical key. This is why identity columns are popular choices for clustered indexes: these integer or bigint values are generated sequentially by SQL Server when a new row is inserted, so there is essentially never any re-sorting when rows are inserted or deleted.
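A typical example of that pattern, with illustrative names, is an identity column declared as the clustered primary key:

-- New rows always append to the end of the table because the identity value only increases.
create table dbo.Customer
(
    CustomerID int identity(1,1) not null,
    CustomerName nvarchar(100) not null,
    constraint PK_Customer primary key clustered (CustomerID)
);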

SQL Server saves data tables and non-clustered indexes in 8 KB blocks called pages within their data files. As CRUD operations are performed on your tables, the data within these pages must be re-sorted, and SQL Server determines which rows go into which pages. When sequentially created pages do not contain the data in sequence, external fragmentation results. The percentage of empty page space is called internal fragmentation. The fixed size of pages means some data types are better for indexes than others (creating less fragmentation); additionally, the number of fields included in an index can adversely affect index performance. I’m going to address fragmentation in a later post and explain why creating clustered indexes on GUID data types is death to a database.

From Stairway to SQL Server Indexes: Level 3, Clustered Indexes:

The clustered index key can be comprised of any columns you choose; it does not have to be based on the primary key.

Keep in mind these additional points about SQL Server clustered indexes:

  • Because the entries of the clustered index are the rows of the table, there is no bookmark value in a clustered index entry. When SQL Server is already at a row, it does not need a piece of information that tells it where to find that row.
  • A clustered index always covers the query. Since the index and the table are one and the same, every column of the table is in the index.
  • Having a clustered index on a table does not impact your options for creating non-clustered indexes on that table.

There can be only one clustered index on a table; the data in a table can’t be sorted two different ways at the same time. Clustering does not require uniqueness. When the clustered index is built on non-unique columns, the sorting effectively groups the rows by those columns (behind the scenes, SQL Server adds a hidden “uniquifier” to tell duplicate key values apart).

If the preceding is clear, it should be obvious that creating a clustered index on a particular column and then also creating a non-clustered index keyed on that same column is a waste of resources. The clustered index already enforces sorting of the data and creates a permanent key SQL Server can use to quickly locate the page where every row is stored.

Additionally, creating indexes in this manner can lead to deadlocks. Consider a large number of rows being inserted into a table in a batch. Even if your clustered index is created on an identity column, SQL Server will perform a sort check on the clustered index. After the clustered sort is finished, the non-clustered indexes that also include this column are filled, and the batch will not be committed until those indexes are ready. The reason is that the non-clustered indexes can’t create their bookmarks until the page where each record will reside is determined. If a select against this table happens at the same time, the query engine may decide to use the non-clustered index, but the insert batch has it locked. The insert can’t complete because the select is reading the very non-clustered index the insert also has to write to.

Now wait: if the clustered column is removed from the non-clustered index, the select query will just use the clustered index (if the clustered index column is part of the table joins or WHERE clause of the select statement), and the deadlock will occur because the clustered index is locked instead. So how do we get the query engine to use the non-clustered index without using that column to create the bookmark in the index? We use a covering index, which is created with the INCLUDE clause.
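As a sketch (table and column names are illustrative), a covering index keyed on the search column with the selected columns carried as included columns looks like this:

-- The key column drives the search; the INCLUDE columns ride along so the
-- query can be answered from the index without touching the clustered index.
create nonclustered index IX_Orders_CustomerID
on dbo.Orders (CustomerID)
include (OrderDate, ShipToAddress);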

From Stairway to SQL Server Indexes: Level 5, Included Columns:

Columns that are in a non-clustered index, but are not part of the index key, are called included columns. These columns are not part of the key, and so do not impact the sequence of entries in the index. Also, as we will see, they cause less overhead than key columns.

There are several advantages in having these columns in the index but not in the index key, such as:

  • Columns that are not part of the index key do not affect the location of the entry within the index. This, in turn, reduces the overhead of having them in the index.
  • The effort required to locate an entry(s) in the index is less.
  • The size of the index will be slightly smaller.
  • The data distribution statistics for the index will be easier to maintain.

Deciding whether an index column is part of the index key, or just an included column, is not the most important indexing decision you will ever make. That said, columns that frequently appear in the SELECT list but not in the WHERE clause of a query are best placed in the included columns portion of the index.

-- Gather the columns that make up each clustered index on user tables.
select objt.object_id, idxc.index_id, clmn.column_id
into #ClustIdxCols
from sys.objects objt
inner join sys.indexes idx on objt.object_id = idx.object_id
inner join sys.index_columns idxc on objt.object_id = idxc.object_id and idx.index_id = idxc.index_id
inner join sys.columns clmn on objt.object_id = clmn.object_id and idxc.column_id = clmn.column_id
where objt.type = 'U' and idx.type = 1 -- user tables, clustered indexes only
order by objt.object_id, idxc.index_id, clmn.column_id

-- Find non-clustered indexes whose key columns repeat a clustered index column.
select distinct objt.name, idx.name
from sys.objects objt
inner join sys.indexes idx on objt.object_id = idx.object_id
inner join sys.index_columns idxc on objt.object_id = idxc.object_id and idx.index_id = idxc.index_id
inner join sys.columns clmn on objt.object_id = clmn.object_id and idxc.column_id = clmn.column_id
inner join #ClustIdxCols test on objt.object_id = test.object_id and idxc.column_id = test.column_id
where idx.type = 2 and idxc.is_included_column = 0 -- non-clustered indexes, key columns only
order by objt.name, idx.name

This script identifies non-clustered indexes whose key columns duplicate clustered index columns (included columns are filtered out). In some cases a table contains only two indexes: a clustered index on the table’s identity column and a non-clustered index on that same column.

GUID’s – Never for Clustered Indexes

Globally Unique Identifiers have their place in software development. They’re great for identifying a library in the GAC or the Windows registry. From the database perspective, however, they are a huge data type.

Oracle, MySQL, Sybase, and DB2 do not provide any special data type for fields storing GUIDs; for these vendors a GUID is a 32- to 38-character string (depending on whether the dashes and “{}” are included). SQL Server provides a Unique Identifier data type, which has some storage and access-speed benefits over a 36-character varchar or nvarchar field. However, they’re still huge…

Unique Identifier Data Type

http://msdn.microsoft.com/en-us/library/ms190215(v=sql.105).aspx

SQL Server’s Unique Identifier displays as a 36-character string (dashes, no “{}”) and stores the GUID as a 16-byte binary value. There’s no argument that it’s nearly impossible (though not mathematically impossible) to create a duplicate GUID, but how many data sources are going to outgrow a bigint (-2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807))? That’s only 8 bytes, half a Unique Identifier. Hard disk space has gotten cheap, so why do we care about data type size anyway? The article mentioned above notes that indexes created on Unique Identifier fields will perform more slowly than indexes built on integer fields. That statement hardly scratches the surface of the performance implications of Unique Identifier indexes, and it’s all related to size.
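You can see the size difference for yourself with a quick check:

-- DATALENGTH reports storage size in bytes: 16 for a uniqueidentifier, 8 for a bigint, 4 for an int.
select datalength(newid())           as guid_bytes,
       datalength(cast(1 as bigint)) as bigint_bytes,
       datalength(cast(1 as int))    as int_bytes;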

Pages and Extents

http://msdn.microsoft.com/en-us/library/ms190969(SQL.105).aspx

The above article explains how SQL Server stores data and indexes in 8 KB pages. 96 bytes are reserved for the page header, the row-offset array consumes a few bytes per row, and a maximum of 8,060 bytes per row remains for data or index storage. If your table consisted of just one column, a page could store 503 GUIDs, or 1,007 bigints, or 2,015 ints. Put another way, the fewer bytes in a row, the more rows you can store in one page. SQL Server doesn’t control where pages are written on the hard disk; the OS and hardware decide. The more pages stored for each table or index in the system, the greater the chance that consecutive or sequential pages end up in distant disk sectors. And as the number of index pages grows, the more out of sync they become with the data pages, leading to index fragmentation.

Index Fragmentation

http://www.brentozar.com/archive/2009/02/index-fragmentation-findings-part-1-the-basics/

http://www.brentozar.com/archive/2009/02/index-fragmentation-findings-part-2-size-matters/

Let’s recap what we have so far:

  1. GUIDs are randomly generated values with no sequential nature or restrictions.
  2. GUIDs are twice as big as the biggest integer data type.
  3. The larger a table’s rows are, the more pages must be created to store the data.
  4. The more pages an index has, the more fragmented it gets.
  5. The more fragmented the indexes get, the more frequently they have to be rebuilt (a quick way to check fragmentation is sketched below).
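Here is a common way to check fragmentation with the physical-stats DMV; a sketch against the current database (the page-count filter just skips tiny indexes that always look fragmented):

-- Fragmentation per index in the current database, worst first.
select object_name(ips.object_id) as table_name,
       idx.name as index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'LIMITED') ips
inner join sys.indexes idx on ips.object_id = idx.object_id and ips.index_id = idx.index_id
where ips.page_count > 100
order by ips.avg_fragmentation_in_percent desc;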

Clustered Index Implications

Clustered indexes set the organization and sorting of the actual data in the table. Non-clustered indexes created on a table with a clustered index must be updated with pointer changes whenever records are inserted or deleted, or the clustered index value is updated, because these changes force the data pages to be re-sorted and new keys to be generated. SQL Server identity columns of an integer data type avoid a lot of I/O overhead and SQL Server processing because rows are always inserted in the correct order. GUID values are almost never inserted in the correct order because of their random nature. Thus, with a GUID clustered index, every insert, delete, or update of that field requires data page reorganization, non-clustered index updates, more pages to be created, and increased fragmentation.
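To see the effect for yourself, a simple experiment (names are illustrative) is to create two tables that differ only in their clustered key, load the same rows into each, and compare them in sys.dm_db_index_physical_stats; the GUID-clustered table will show far more page splits and fragmentation.

-- Clustered on a random GUID: inserts land all over the index and split pages.
create table dbo.EventGuid
(
    EventID uniqueidentifier not null default newid(),
    EventData nvarchar(200) null,
    constraint PK_EventGuid primary key clustered (EventID)
);

-- Clustered on an identity: inserts always append to the last page.
create table dbo.EventInt
(
    EventID bigint identity(1,1) not null,
    EventData nvarchar(200) null,
    constraint PK_EventInt primary key clustered (EventID)
);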