Introduction to TempDB in SQL Server


Tempdb Information

The tempdb is a temporary workspace. Among other uses, SQL Server uses the tempdb for:
Repeated create and drop of temporary tables (local or global).
Table variables that use tempdb for storage purposes.
Work tables associated with CURSORS.
Work tables associated with an ORDER BY clause.
Work tables associated with a GROUP BY clause.
Work files associated with HASH PLANS.

Heavy use of these activities may lead to contention problems.

SQL Server records only enough information in the tempdb transaction log to roll back a transaction but not to redo transactions during database recovery. This feature increases the performance of INSERT statements in the tempdb. In addition, there is no need to log information to redo any transactions because the tempdb is re-created every time you restart SQL Server; therefore, it does not have any transactions to roll forward or roll back. When SQL Server starts, the tempdb is re-created by using a copy of the model database and is reset to its last configured size. The tempdb system database is very similar to a user database. The main difference is that data in tempdb does not persist after SQL Server shuts down.

By default, the tempdb database is configured to autogrow as needed; therefore, this database may grow in time to a size larger than desired. A simple restart of SQL Server resets the size of tempdb to its last configured size. The configured size is the last explicit size set with a file size changing operation such as ALTER DATABASE with the MODIFY FILE option or the DBCC SHRINKFILE statement.
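To see the current sizes and growth settings, you can query the catalog; a minimal sketch, assuming SQL Server 2005 or later where the sys.database_files catalog view is available:

```sql
-- Inspect tempdb file sizes and growth settings (sizes are stored in 8-KB pages).
SELECT name,
       physical_name,
       size * 8 / 1024 AS current_size_mb,
       CASE is_percent_growth
            WHEN 1 THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_increment
FROM tempdb.sys.database_files;
```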

The SQL Server system database, tempdb, has undergone a number of changes in SQL Server 2005. There are new tempdb usages and internal optimizations in SQL Server 2005; tempdb architecture is mostly unchanged since SQL Server 2000.

Only one filegroup in tempdb is allowed for data and one filegroup for logs. You can configure the size of the files. When auto grow is enabled (the default), a file grows until the disk volume is full. When the server restarts, the tempdb file size is reset to the configured value (the default is 8 MB). Growth from auto grow is therefore temporary for tempdb, unlike other databases: it is undone when SQL Server restarts.

Users can explicitly create and use tables in tempdb. Transactions can be used to modify data in the tables. Transactions can be rolled back. However, there is no need to REDO them because the contents of tempdb do not persist across SQL Server restarts. Because the transaction log does not need to be flushed, transactions are committed faster in tempdb than in user databases.

In a user database, transactions have the ACID attributes: atomicity, consistency, isolation, and durability. In tempdb, transactions lose the durability attribute.

SQL Server uses tempdb to store internal objects such as the intermediate results of a query. Most of these internal operations on tempdb do not generate log records because there is no need to roll back. So, these operations are faster. Backup and restore operations are not allowed on tempdb.






Tempdb Space Usage

The following types of objects can occupy tempdb space:
Internal objects
Version stores
User objects
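The space consumed by each of these three classes can be observed through the sys.dm_db_file_space_usage dynamic management view; a sketch, assuming SQL Server 2005 or later (in SQL Server 2005 this view reports on tempdb only) and VIEW SERVER STATE permission:

```sql
-- Page counts are in 8-KB pages; multiply by 8 for KB.
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
FROM sys.dm_db_file_space_usage;
```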

Internal objects
Internal objects are created internally by SQL Server. Applications cannot directly insert into or delete rows from internal objects. Internal object metadata is stored in memory. The metadata does not appear in system catalog views such as sys.all_objects. Internal objects are considered to be hidden objects.
Internal objects are used:
To store intermediate runs for sort.
To store intermediate results for hash joins and hash aggregates.
To store XML variables or other large object (LOB) data type variables. The LOB data types include text, ntext, image, varchar(max), nvarchar(max), varbinary(max), and xml.
By queries that need a spool to store intermediate results.
By keyset cursors to store the keys.
By static cursors to store a query result.
By Service Broker to store messages in transit.
By INSTEAD OF triggers to store data for internal processing.
Internal objects are also used by any feature that uses these operations. For example, DBCC CHECKDB internally uses a query that may need to spool intermediate results. Query notification and event notification use Service Broker, so they need space in tempdb as well.
Updates to internal objects do not generate log records. Page allocations on internal objects do not generate log records unless the allocation is on a sort unit. If the statement fails, these objects are deallocated. Each internal object occupies at least nine pages (one IAM page and eight data pages) in tempdb.

Version store
Version stores are used to store row versions generated by transactions for features such as snapshot isolation, triggers, MARS (multiple active result sets), and online index build. There are two version stores in tempdb for the whole instance of SQL Server. The online index build version store is for row versions from tables that have online index build operations on them. The common version store is for row versions from all other tables in all databases.

The version store consists of append-only store units. The append-only store units are highly optimized for sequential inserts and random lookup. They do not appear in catalog views, such as sys.all_objects. Inserts into the version store do not generate log records. Each unit can store many row versions. If there are versions to be stored, a new store unit is created about every minute. Versions generated by a transaction are stored in the same store unit in each of the two version stores for the life of the version store.



In the illustration, a new append-only store unit is created every minute, starting at time T1. Any version-generating transaction that starts between times T1 and (T1 + 1) uses Append-Only Store Unit 1 to store any row version it generates, independent of its duration. For example, Tran1 starts at time T1 and updates a different row every 30 seconds until it commits at time T1 + 20. In this case, all 40 row versions are stored in Append-Only Store Unit 1. The same is true for transactions that start under different append-only store units as well.
Note that the size of a specific append-only store unit depends on the number of versions generated by the transactions associated with it; this is reflected in the different sizes of the append-only store units in the illustration. However, a query running under snapshot isolation may read row versions from any append-only store unit that exists between the time the transaction under snapshot isolation starts and the time it completes. The same is true for queries running under read committed snapshot. A background thread periodically deallocates whole append-only store units, not individual row versions, that are no longer needed. This deallocation is done strictly in allocation order; therefore, Append-Only Store Unit 2 can be deallocated only after Append-Only Store Unit 1.
For snapshot isolation and read committed snapshot isolation, the versions are generated by DML operations in the database when the respective database options are enabled. For AFTER triggers, versions are generated for all the update operations by the transaction that fired the trigger during the INSERT, DELETE, or UPDATE statement in any database, independent of database options. INSTEAD OF triggers do not generate versions. For MARS, versions are generated by the UPDATE or DELETE statement when there is a pending SELECT on the same transaction and the same connection. Similarly, building an online index generates versions of the rows while the online index is being built.
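Row versions currently held in the version store can be inspected through the sys.dm_tran_version_store dynamic management view; a sketch, assuming SQL Server 2005 or later (note that scanning this view can be expensive when the version store is large):

```sql
-- Count row versions per database that currently occupy the version store.
SELECT database_id, COUNT(*) AS version_rows
FROM sys.dm_tran_version_store
GROUP BY database_id;
```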

User objects
User objects appear in catalog views such as sys.all_objects. The sp_spaceused stored procedure can show the size occupied by these objects. User objects include both user-defined tables and indexes, and system catalog tables and indexes. These two types of tables have exactly the same data structure on disk. User-defined tables can be manipulated by Transact-SQL. System catalog tables cannot be directly modified by Transact-SQL. They are visible by using system catalog views, and they are modified by using DDL statements to ensure logical consistency. Operations on user objects in tempdb are mostly logged. Bulk copy program (BCP), bulk insert, SELECT INTO, and index rebuild operations are bulk logged. This is exactly the same as in other databases with the recovery model set to simple.
User-defined tables include the global temporary tables such as ##t, and local temporary tables such as #t. Both of these objects are session scoped but a global temporary table lives until all sessions that are using it expire or terminate. Local temporary tables, on the other hand, are destroyed when the scope (for example, stored procedure or session) they were created in expires or terminates. Local temporary tables also include table variables such as @t, the return value in table valued functions, and the mapping index for online clustered index build with the SORT_IN_TEMPDB option.
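The different user-object flavors look like this in Transact-SQL; a minimal sketch (the names #t, ##t, and @t are illustrative):

```sql
CREATE TABLE #t  (a int);   -- local temporary table: visible to this session only
CREATE TABLE ##t (a int);   -- global temporary table: visible to all sessions
DECLARE @t TABLE (a int);   -- table variable: scoped to the batch, stored in tempdb

INSERT INTO #t VALUES (1);
SELECT a FROM #t;

DROP TABLE ##t;
DROP TABLE #t;
```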

Improvements in SQL Server 2005
Following are some important improvements in tempdb in SQL Server 2005.
The tempdb logging optimization avoids logging the “after value” in certain log records in tempdb. Without this optimization, an UPDATE operation records both the before and after values of the data in the log file. For example, if someone updates a data item from ABC to XYZ, ABC is the before value and XYZ is the after value; in tempdb, the after value (XYZ) does not need to be logged. For INSERT, there is only an after value (the value being inserted), which does not need to be logged in tempdb. For DELETE, there is only a before value, which must still be logged. This optimization primarily benefits INSERT and UPDATE operations on heap and LOB data. It can significantly reduce the size of the tempdb log as well as reduce the amount of I/O traffic on the tempdb log device.
Instant data file initialization works by not zeroing out the NTFS file when the file is created or when the size of the file is increased. This minimizes overhead significantly when tempdb needs to auto grow. Without this, auto grow could take a long time and lead to application timeout. This optimization requires Microsoft® Windows XP or Windows 2003, and it requires special volume permission on the SQL Server service account. For more information, see Instant File Initialization in SQL Server Books Online.
There is less use of the UP type page latch when allocating pages and extents in tempdb. In addition, allocating the first eight pages of a table is more efficient. As a result there should be less contention on the PFS (Page Free Space), SGAM (Secondary Global Allocation Map), and GAM (Global Allocation Map) pages when there are a high number of users using tempdb at the same time.
Proportional fill has been optimized to reduce UP latch contention. Proportional fill means that, when there are multiple data files in tempdb, each file is filled in proportion to the free space that is available in the file so that all of the files fill up at about the same time. In SQL Server 2000, a latch taken during proportional fill could become a contention point when there were a high number of users using tempdb at the same time. SQL Server 2005 removes this latch, so applications experience less UP latch contention in tempdb.
There is now deferred drop in tempdb. This means, for example, that when a large temporary table is dropped by an application, it is handled by a background task and the application does not have to wait. This results in faster response time to applications.
Worktable caching is improved. When a query execution plan is cached, the work tables needed by the plan are not dropped across multiple executions of the plan but merely truncated. In addition, the first nine pages for the work table are kept. In SQL Server 2000, the work tables used during query plan execution are dropped. Because the work table is cached, the next execution of the query is faster. When the system is low on memory, the execution plan may be removed from the cache and the associated work tables dropped as well. Both SQL Server 2000 and SQL Server 2005 use a small global pool of pre-allocated pages and extents that make the initial creation of work tables faster.
SQL Server 2005 caches temporary objects. When table-valued functions, table variables, or local temporary tables are used in a stored procedure, function, or trigger, the frequent drop and create of these temporary objects can be time consuming and can cause contention on the tempdb system catalog tables and allocation pages. In SQL Server 2005, these objects are cached, which makes dropping and creating temporary objects very fast. When SQL Server drops a temporary object, it does not remove the catalog entry for the object. If a temporary object is smaller than 8 MB, then one data page and one IAM page are also cached so that there is no need to allocate them when re-creating the objects. If a temporary object is larger than 8 MB, deferred drop is used. When tempdb is low on space, SQL Server frees up the cached temporary objects. You can drop the associated stored procedure(s) or free the procedure cache to get rid of these temporary tables.
SQL Server 2005 caches the temporary table that is created by using a CREATE TABLE or SELECT INTO statement. In some cases, temporary tables are not cached, such as when there is an explicit DDL on a temporary table after it is created, or if there is a named constraint on the temporary table. Temporary tables are also not cached if they are part of dynamic SQL or ad-hoc batch. The following example shows the creation of a temporary table using dynamic SQL. This table will not be cached.
sp_executesql N'create table #t(a int)'
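By contrast, a temporary table created inside a stored procedure can be cached, because it is part of a compiled module rather than ad-hoc or dynamic SQL; a hypothetical sketch (the procedure name is illustrative):

```sql
CREATE PROCEDURE dbo.usp_CachedTempTable   -- hypothetical name
AS
BEGIN
    -- Eligible for caching: no named constraint and no DDL after creation.
    CREATE TABLE #t (a int);
    INSERT INTO #t VALUES (1);
    SELECT a FROM #t;
END
```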


How to shrink the tempdb database in SQL Server

There are three methods you can use to shrink the tempdb database to a size smaller than its last configured size. The first method gives you complete control of the size of the tempdb files but it requires you to restart SQL Server. The second method shrinks the tempdb as a whole, with some limitations, which may include restarting SQL Server. The third method allows you to shrink individual files in the tempdb. The last two methods require that no activity occur in the tempdb database during the shrink operation.

If you are using SQL Server 2005, these methods also apply.

Method 1 to Shrink Tempdb
This method requires you to restart SQL Server.
1.
Stop SQL Server. Open a command prompt, and then start SQL Server by typing the following command:

sqlservr -c -f

The -c and -f parameters cause SQL Server to start in a minimum configuration mode with a tempdb size of 1 MB for the data file and 0.5 MB for the log file.

NOTE: If you use a SQL Server named instance, you must change to the appropriate folder (Program Files\Microsoft SQL Server\MSSQL$instance name\Binn) and use the -s switch (-s%instance_name%).
2.
Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands:
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', SIZE = target_size_in_MB) -- desired target size for the data file
ALTER DATABASE tempdb MODIFY FILE (NAME = 'templog', SIZE = target_size_in_MB) -- desired target size for the log file
3.
Stop SQL Server by pressing Ctrl-C at the command prompt window, restart SQL Server as a service, and then verify the size of the Tempdb.mdf and Templog.ldf files.
A limitation of this method is that it only operates on the default tempdb logical files, tempdev and templog. If additional files were added to tempdb you can shrink them after you restart SQL Server as a service. All tempdb files are re-created during startup; therefore, they are empty and can be removed. To remove additional files in tempdb, use the ALTER DATABASE command with the REMOVE FILE option.
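For example, after a restart (when the files are empty), an additional data file could be removed like this; a sketch in which tempdev2 is a hypothetical file name:

```sql
ALTER DATABASE tempdb
REMOVE FILE tempdev2;  -- tempdev2: hypothetical extra data file added earlier
```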


Method 2 to Shrink Tempdb
Use the DBCC SHRINKDATABASE command to shrink the tempdb database as a whole. DBCC SHRINKDATABASE receives the parameter target_percent, which is the desired percentage of free space left in the database file after the database is shrunk. If you use DBCC SHRINKDATABASE, you may have to restart SQL Server.

If you run DBCC SHRINKDATABASE, no other activity can be occurring with the tempdb database. To make sure that other processes cannot use tempdb while DBCC SHRINKDATABASE is run, you must start SQL Server in single user mode.
1.
Determine the space currently used in tempdb by using the sp_spaceused stored procedure. Then, calculate the percentage of free space left for use as a parameter to DBCC SHRINKDATABASE; this calculation is based on the desired database size.

NOTE: In some cases you may have to execute sp_spaceused @updateusage=true to recalculate the space used and to obtain an updated report.

Consider this example:
Assume that tempdb has two files, the primary data file (Tempdb.mdf), which is 100 MB in size, and the log file (Templog.ldf), which is 30 MB. Assume that sp_spaceused reports that the primary data file contains 60 MB of data. Also assume that you want to shrink the primary data file to 80 MB. Calculate the desired amount of free space left after the shrink: 80 MB - 60 MB = 20 MB. Now divide 20 MB by 80 MB to get 25%, which is your target_percent. The transaction log file is shrunk accordingly, leaving 25% (20 MB) of space free after the database is shrunk.
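Put into Transact-SQL, the worked example above becomes:

```sql
EXEC tempdb..sp_spaceused;         -- verify that about 60 MB of the data file is used
DBCC SHRINKDATABASE (tempdb, 25);  -- leave 25% (20 MB) free in the 80-MB target
```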
2.
Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands:
dbcc shrinkdatabase (tempdb, target_percent) -- This command shrinks the tempdb database as a whole; target_percent is an integer, for example 25
There are limitations for use of the DBCC SHRINKDATABASE command on the tempdb database. The target size for data and log files cannot be smaller than the size specified when the database was created or the last size explicitly set with a file-size changing operation such as ALTER DATABASE with the MODIFY FILE option or the DBCC SHRINKFILE command. Another limitation of DBCC SHRINKDATABASE is the calculation of the target_percent parameter and its dependency on the current space used.

Method 3 to Shrink Tempdb
Use the command DBCC SHRINKFILE to shrink the individual tempdb files. DBCC SHRINKFILE provides more flexibility than DBCC SHRINKDATABASE because you can use it on a single database file without affecting other files that belong to the same database. DBCC SHRINKFILE receives the target size parameter, which is the desired final size for the database file.

You must run the DBCC SHRINKFILE command while no other activity occurs in the tempdb database. To make sure that other processes cannot use tempdb while DBCC SHRINKFILE executes, you must restart SQL Server in single-user mode.
1.
Determine the desired size for the primary data file (tempdb.mdf), the log file (templog.ldf), and/or additional files added to tempdb. Make sure that the space used in the files is less than or equal to the desired target size.
2.
Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands for the specific database files that you need to shrink:
use tempdb
go
dbcc shrinkfile (tempdev, target_size_in_MB) -- this command shrinks the primary data file
go
dbcc shrinkfile (templog, target_size_in_MB) -- this command shrinks the log file; see the note about SQL Server 7.0 below
go
An advantage of DBCC SHRINKFILE is that it can reduce the size of a file to a size smaller than its original size. You can issue DBCC SHRINKFILE on any of the data or log files. A limitation of DBCC SHRINKFILE is that you cannot make the database smaller than the size of the model database.
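Continuing the example from Method 2, shrinking the primary data file to an 80-MB target would look like this:

```sql
USE tempdb;
GO
DBCC SHRINKFILE (tempdev, 80);   -- target size is specified in MB
GO
```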

In SQL Server 7.0, a transaction log shrink is a deferred operation, and you must issue a log truncation and backup to help the shrink operation in a database. However, by default, tempdb has the trunc. log on chkpt. option set ON; therefore, you do not need to issue a log truncation for that database.





Effects of Executing DBCC SHRINKDATABASE or DBCC SHRINKFILE While Tempdb Is in Use
If tempdb is in use and you attempt to shrink it by using the DBCC SHRINKDATABASE or DBCC SHRINKFILE commands, you may receive multiple consistency errors similar to the following type and the shrink operation may fail:
Server: Msg 2501, Level 16, State 1, Line 1
Could not find table named '1525580473'. Check sysobjects.
-or-
Server: Msg 8909, Level 16, State 1, Line 0
Table Corrupt: Object ID 1, index ID 0, page ID %S_PGID. The PageId in the page header = %S_PGID.
Although error 2501 may not indicate any corruption in tempdb, it causes the shrink operation to fail. On the other hand, error 8909 could indicate corruption in the tempdb database. Restart SQL Server to re-create tempdb and clear the consistency errors. However, keep in mind that there can be other causes of physical data corruption errors such as error 8909, including input/output (I/O) subsystem problems.

Concurrency enhancements for the tempdb database
When the tempdb database is heavily used, SQL Server may experience contention when it tries to allocate pages.

From the sysprocesses system table output, the waitresource may show up as "2:1:1" (PFS Page) or "2:1:3" (SGAM Page). Depending on the degree of contention, this may also lead to SQL Server appearing unresponsive for short periods.
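A sketch of a query for spotting this contention (sysprocesses is the SQL Server 2000-era view; the waitresource values assume tempdb is database ID 2 and the contention is on file 1):

```sql
SELECT spid, waittype, lastwaittype, waitresource
FROM master..sysprocesses
WHERE waitresource IN ('2:1:1', '2:1:3');  -- 2:1:1 = PFS page, 2:1:3 = SGAM page
```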

 
Increase the number of tempdb data files with equal sizing
If the tempdb data file size is 5 GB and the log file size is 5 GB, the recommendation is to replace the single data file with 10 data files (each of 500 MB, to maintain equal sizing) and leave the log file as is. Placing the data files on separate disks is beneficial, but it is not required; they can coexist on the same disk.

The optimal number of tempdb data files depends on the degree of contention seen in tempdb. As a starting point, you can configure the number of tempdb data files to be at least equal to the number of processors assigned to SQL Server. For higher-end systems (for example, 16 or 32 processors), the starting number could be 10. If the contention is not reduced, you may have to increase the number of data files further.

The equal sizing of data files is critical because the proportional fill algorithm is based on the size of the files. If data files are created with unequal sizes, the proportional fill algorithm tries to use the largest file more for GAM allocations instead of spreading the allocations between all the files, thereby defeating the purpose of creating multiple data files.

The auto-grow of tempdb data files can also interfere with the proportional fill algorithm. Therefore, it may be a good idea to turn off the auto-grow feature for the tempdb data files. If the auto-grow option is turned off, you must make sure to create the data files so that they are large enough to prevent the server from experiencing a lack of disk space with tempdb.
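A sketch of this configuration for the 5-GB example above: ten 500-MB data files with auto grow disabled (the file names and paths are hypothetical):

```sql
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 500MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE    (NAME = tempdev2, FILENAME = 'D:\tempdb\tempdev2.ndf', SIZE = 500MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE    (NAME = tempdev3, FILENAME = 'D:\tempdb\tempdev3.ndf', SIZE = 500MB, FILEGROWTH = 0);
-- ...continue through tempdev10, keeping every file the same size.
```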


How increasing the number of tempdb data files with equal sizing reduces contention
Here is how increasing the number of tempdb data files with equal sizing reduces contention:
With one data file for tempdb, you have only one GAM page and one SGAM page for each 4 GB of space.
Increasing the number of data files with the same sizes for tempdb effectively creates one or more GAM and SGAM pages for each data file.
The allocation algorithm for GAM gives out one extent at a time (eight contiguous pages) from the number of files in a round robin fashion while honoring the proportional fill. Therefore, if you have 10 equal sized files, the first allocation is from File1, the second from File2, the third from File3, and so on.
The resource contention on the PFS page is reduced because eight pages at a time are marked as FULL when the GAM allocates an extent.
Disadvantages
The only downside to the recommendations mentioned earlier is that you may see the size of the databases increase when the following conditions are true:
New objects are created in a user database.
Each of the new objects occupies less than 64 KB of storage.
If these conditions are true, you may allocate 64 KB (8 pages * 8 KB = 64 KB) for an object that requires only 8 KB of space, thus wasting 56 KB of storage. However, if the new object uses more than 64 KB (8 pages) in its lifetime, there is no downside to this recommendation. Therefore, in the worst-case scenario, SQL Server may end up allocating seven (7) additional pages during the first allocation, and only for new objects that never grow beyond one (1) page.

Be careful when you move the tempdb database
Be careful when you move the tempdb database because if the tempdb database cannot be created, SQL Server will not start. If the tempdb database cannot be created, start SQL Server by using the (-f) startup parameter and move the tempdb database to a valid location.

To change the physical location of the tempdb database, follow these steps:
1.
Use the ALTER DATABASE statement and the MODIFY FILE clause to change the physical file names of each file in the tempdb database to refer to the new physical location, such as the new disk.
Alter database tempdb modify file (name = tempdev, filename = 'C:\MyPath\tempdb.mdf')
Alter database tempdb modify file (name = templog, filename = 'C:\MyPath\templog.ldf')
2.
Stop and then restart SQL Server.
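After the restart, you can confirm the new location; a sketch, assuming SQL Server 2005 or later where the sys.database_files catalog view is available:

```sql
SELECT name, physical_name
FROM tempdb.sys.database_files;  -- should show the files under C:\MyPath
```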

How to Move Tempdb to a Different Device
By default, tempdb is on the master device. You may want to move tempdb so that it resides entirely on another device.

The basic procedure for moving tempdb to a different device is to first put tempdb in RAM, then make sure the device you want to put it on is the only default device (and has at least 2 MB of available space), and finally take tempdb back out of RAM. You can do this either from the SQL Enterprise Manager (SEM) GUI tool or by issuing ISQL commands. Each method is described below.

Each method described here includes steps for creating a new device on which to place tempdb. However, you can also use an existing device, without creating a new one. If you want to use an existing device, skip the steps in the following procedures that deal with the creation of a new device and follow the rest of the steps. Also, if you decide to use an existing device, make sure it has at least 2 MB of free space available for when tempdb is moved back out of RAM.


Moving Tempdb by Using SQL Enterprise Manager
1.
Make sure you have a current backup of the Master.dat file. To do this, shut down SQL Server and copy the Master.dat file to another location.
2.
Start SQL Enterprise Manager. Create a new device for tempdb by doing the following:
a.
Right-click the Database Devices folder and click New Device on the shortcut menu. Alternatively, you can click Database Devices on the Manage menu and then click the New Device toolbar button.
b.
In the Name box, type the name "TEMPDEVICE" (without the quotation marks) and then click to select the Default Device option.
c.
In the Size box, type the number of megabytes you want your new device to be (for example, type 100 for a 100-MB device).
d.
Click Create Now.
3.
Configure tempdb to temporarily reside in RAM. To do this, perform the following steps:
a.
Right-click the server name and click Configure on the shortcut menu. Alternatively, you can click the Server menu, point to SQL Server, and then click Configure.
b.
Click the Configuration tab.
c.
Scroll down until you see the "tempdb in RAM (MB)" configuration option. In the Current column, type a number of megabytes of RAM to allocate to tempdb (for example, 2). This will only be temporary. Then click OK.
4.
Prevent the master device or any other devices from being a default device. To do this, perform the following steps:
a.
Under the Database Devices folder, double-click the master device or right-click the master device and click Edit on the shortcut menu. Alternatively, you can click Database Devices on the Manage menu, select the master device, and then click the Edit Device toolbar button.
b.
Click to clear the Default Device check box, and then click Change Now.
c.
Repeat Steps a-b for any other default devices you may have, except for the TEMPDEVICE created in Step 2 of this procedure.
5.
Shut down and restart SQL Server. Tempdb should now be in RAM and not on the master device.
6.
Take tempdb back out of RAM. To do this, repeat Step 3 of this procedure, but type 0 in the Current column for the "tempdb in RAM (MB)" configuration option.
7.
Shut down and restart SQL Server again. Because master is no longer a default device, tempdb will go onto your new device that is specified as the default device (that is, TEMPDEVICE).
8.
After restarting SQL Server, tempdb will have the default options set and a default size of 2 MB. You can change the tempdb database options and expand its size by performing the following steps:
a.
Under the Databases folder, double-click tempdb or right-click tempdb and click Edit on the shortcut menu. Alternatively, you can click Databases on the Manage menu, select tempdb, and then click the Edit Database toolbar button.
b.
On the Options tab, verify that the following options (which are desirable in most cases) are set:
Select Into/Bulk Copy is enabled.
Truncate Log On Checkpoint is enabled.
Single User is disabled.
DBO Use Only is disabled.
c.
On the Database tab, in the Size box, click Expand.
d.
Under Data Device, select TEMPDEVICE.
e.
In the Size (MB) box, type a number of megabytes to be added to tempdb. For example, if you type 8, it adds 8 MB to tempdb, for a total size of 10 MB.
f.
Click Expand Now.
For more information on expanding the size of a database, see the "Expanding or Shrinking Databases" topic in the SQL Server Books Online.

Moving Tempdb by Using ISQL
1.
Make sure you have a current backup of the Master.dat file.
2.
Create your new device for tempdb, using a statement similar to the following:
DISK INIT
   NAME = 'TEMPDEVICE',
   PHYSNAME = 'C:\MSSQL\DATA\TEMPDEV.DAT',
   SIZE = 51200,
   VDEVNO = 250
NOTE: This example creates a 100-MB device for tempdb; the SIZE option is specified in 2-KB pages, so 51200 pages = 100 MB. You can adjust the SIZE option as desired.
3.
Issue the following statements, which will put tempdb into RAM, prevent the master device from being a default device, and make the newly-created tempdevice into a default device:
sp_configure 'allow updates', 1
go
reconfigure with override
go
sp_configure 'tempdb in ram', 2
go
sp_diskdefault master, defaultoff
go
sp_diskdefault tempdevice, defaulton
go
reconfigure with override
go
NOTES:
The sp_configure 'tempdb in ram' configuration option is specified in MB, not 2-KB pages, as the 'memory' parameter is.
If you have other default devices, you should also run sp_diskdefault with the defaultoff option for those devices as well. To see if you have other default devices, you can run the sp_helpdevice stored procedure.
4.
Shut down and restart SQL Server. Tempdb should now be in RAM and not on the master device.
5.
Now use sp_configure to take tempdb back out of RAM:
sp_configure 'tempdb in ram', 0
go
reconfigure with override
go
6.
Shut down and restart SQL Server. Because master is no longer a default device, tempdb will go onto your new device.
7.
After restarting SQL Server, tempdb will have a default size of 2 MB. You can expand its size by using an ALTER DATABASE statement similar to the following:
ALTER DATABASE tempdb ON TEMPDEVICE = 8
This statement increases the size of tempdb by 8 MB, for a total size of 10 MB. For more information on the ALTER DATABASE statement, see the "ALTER DATABASE Statement" topic in the SQL Server Books Online.
8.
Verify that the following database options are set for tempdb:
Select Into/Bulk Copy is enabled.
Truncate Log On Checkpoint is enabled.
Single User is disabled.
DBO Use Only is disabled.
To do this, use the sp_dboption stored procedure.
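The check in step 8 can be scripted with sp_dboption; the following is a sketch, assuming the legacy option names used by these older versions of SQL Server:

```sql
-- List the current option settings for tempdb
EXEC sp_dboption 'tempdb'

-- Enable the two options that should be on
EXEC sp_dboption 'tempdb', 'select into/bulkcopy', 'true'
EXEC sp_dboption 'tempdb', 'trunc. log on chkpt.', 'true'

-- Disable the two options that should be off
EXEC sp_dboption 'tempdb', 'single user', 'false'
EXEC sp_dboption 'tempdb', 'dbo use only', 'false'
```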


In SQL Server 2005, tempdb performance is improved in the following ways:
Temporary tables and table variables may be cached. Caching allows operations that drop and create the temporary objects to execute very quickly and reduces page allocation contention.
Allocation page latching protocol is improved. This reduces the number of UP (update) latches that are used.
Logging overhead for tempdb is reduced. This reduces disk I/O bandwidth consumption on the tempdb log file.
The algorithm for allocating mixed pages in tempdb is improved.
Moving the tempdb Data and Log Files
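Because tempdb is re-created at every startup, its files can be moved with ALTER DATABASE and the MODIFY FILE option; the new paths take effect at the next restart of SQL Server. The following is a sketch, assuming the default logical file names tempdev and templog; the paths are placeholders:

```sql
USE master;
GO
-- Point the tempdb data and log files at their new locations.
-- The change takes effect the next time the service restarts.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLLog\templog.ldf');
GO
```

After the restart, the old tempdb files can be deleted manually; they are not removed automatically.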



Database Options
The following table lists the default value for each database option in the tempdb database and whether the option can be modified. To view the current settings for these options, use the sys.databases catalog view.
Database option                      Default value    Can be modified
ALLOW_SNAPSHOT_ISOLATION             OFF              Yes
ANSI_NULL_DEFAULT                    OFF              Yes
ANSI_NULLS                           OFF              Yes
ANSI_PADDING                         OFF              Yes
ANSI_WARNINGS                        OFF              Yes
ARITHABORT                           OFF              Yes
AUTO_CLOSE                           OFF              No
AUTO_CREATE_STATISTICS               ON               Yes
AUTO_SHRINK                          OFF              No
AUTO_UPDATE_STATISTICS               ON               Yes
AUTO_UPDATE_STATISTICS_ASYNC         OFF              Yes
CONCAT_NULL_YIELDS_NULL              OFF              Yes
CURSOR_CLOSE_ON_COMMIT               OFF              Yes
CURSOR_DEFAULT                       GLOBAL           Yes
Database Availability Options        ONLINE           No
                                     MULTI_USER       No
                                     READ_WRITE       No
DATE_CORRELATION_OPTIMIZATION        OFF              Yes
DB_CHAINING                          ON               No
NUMERIC_ROUNDABORT                   OFF              Yes
PAGE_VERIFY                          NONE             No
PARAMETERIZATION                     SIMPLE           Yes
QUOTED_IDENTIFIER                    OFF              Yes
READ_COMMITTED_SNAPSHOT              OFF              No
RECOVERY                             SIMPLE           No
RECURSIVE_TRIGGERS                   OFF              Yes
Service Broker Options               ENABLE_BROKER    Yes
TRUSTWORTHY                          OFF              No


Operations that cannot be performed on the tempdb database
The following operations cannot be performed on the tempdb database:
Adding filegroups.
Backing up or restoring the database.
Changing collation. The default collation is the server collation.
Changing the database owner. tempdb is owned by dbo.
Creating a database snapshot.
Dropping the database.
Dropping the guest user from the database.
Participating in database mirroring.
Removing the primary filegroup, primary data file, or log file.
Renaming the database or primary filegroup.
Running DBCC CHECKALLOC.
Running DBCC CHECKCATALOG.
Setting the database to OFFLINE.
Setting the database or primary filegroup to READ_ONLY.
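As an illustration of the backup restriction above, a statement such as the following fails immediately; this is a sketch, and the disk path is a placeholder:

```sql
-- Fails with an error: backup and restore operations
-- are not allowed on the tempdb database
BACKUP DATABASE tempdb
TO DISK = 'C:\Backup\tempdb.bak';
```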


How to recover SQL Server after a tempdb database is marked suspect

When you try to start a SQL Server service, the service may fail because the tempdb database is marked suspect. You may see the following text in the SQL Server error log:
2001-02-08 14:04:07.64 spid1 Clearing tempdb database.
2001-02-08 14:04:07.66 spid1 Creating file e:\MSSQL7\DATA\TEMPDB.MDF.
2001-02-08 14:04:07.87 spid1 Closing file e:\MSSQL7\DATA\TEMPDB.MDF.
2001-02-08 14:04:07.90 spid1 Creating file e:\MSSQL7\DATA\TEMPLOG.LDF.
2001-02-08 14:04:07.97 spid1 Closing file e:\MSSQL7\DATA\TEMPLOG.LDF.
2001-02-08 14:04:08.02 spid1 Opening file e:\MSSQL7\DATA\TEMPDB.MDF.
2001-02-08 14:04:08.03 spid1 Opening file e:\MSSQL7\DATA\TEMPLOG.LDF.
2001-02-08 14:04:08.32 spid1 Closing file e:\MSSQL7\DATA\TEMPDB.MDF.
2001-02-08 14:04:08.34 spid1 Closing file e:\MSSQL7\DATA\TEMPLOG.LDF.
2001-02-08 14:04:08.36 spid1 Starting up database 'tempdb'.
2001-02-08 14:04:08.36 spid1 Opening file e:\MSSQL7\DATA\TEMPDB.MDF.
2001-02-08 14:04:08.39 spid1 Opening file e:\MSSQL7\DATA\TEMPLOG.LDF.
2001-02-08 14:04:08.43 spid1 Bypassing recovery for database 'tempdb' because it is marked SUSPECT.
2001-02-08 14:04:08.47 spid1 Database 'tempdb' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information.
2001-02-08 14:04:08.50 spid1 WARNING: problem activating all tempdb files.
2001-02-08 14:04:08.50 spid1 Trying just primary files.
2001-02-08 14:04:08.50 spid1 Database 'tempdb' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information.
2001-02-08 14:04:08.50 spid1 Database 'tempdb' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information.
Note: There are multiple reasons why a database may be marked suspect, but those reasons are beyond the scope of this article.

The following steps explain how you can create a new tempdb database. With this workaround, you are able to successfully start the SQL Server service.
1. If they exist, rename the current Tempdb.mdf and Templog.ldf files. If the files do not exist, their absence is itself one possible cause of the suspect status.
2. Start SQL Server from a command prompt by using the following command:
sqlservr -c -f -T3608 -T4022
For SQL Server 2000 named instances, you must also add the -s parameter.
On a SQL Server 7.0 cluster, you must first run this statement:
set _CLUSTER_NETWORK_NAME=YourSQLVirtualName
This allows SQL Server to start from a command prompt.
Note: Make sure that the command prompt window remains open after SQL Server starts. Closing the command prompt window terminates the SQL Server process.
3. Connect to the server by using Query Analyzer, and then use the following stored procedure to reset the status of the tempdb database:
exec master..sp_resetstatus Tempdb
4. Shut down the server by pressing CTRL+C in the command prompt window.
5. Restart the SQL Server service. This creates a new set of tempdb database files and recovers the tempdb database.

Note: If you set the _CLUSTER_NETWORK_NAME environment variable, it remains set after the new tempdb database files have been created and the SQL Server service has been restarted. To remove the environment variable, run the following statement:
set _CLUSTER_NETWORK_NAME=


Troubleshooting Insufficient Disk Space in tempdb 
Running out of disk space in tempdb can cause significant disruptions in a SQL Server production environment and can prevent running applications from completing their operations.
The tempdb system database is a global resource that is available to all users that are connected to an instance of SQL Server. The tempdb database is used to store the following objects: user objects, internal objects, and version stores. In SQL Server 2005, tempdb requires more disk space than in earlier versions of SQL Server.
You can use the sys.dm_db_file_space_usage dynamic management view to monitor the disk space used by the user objects, internal objects, and version stores in the tempdb files. Additionally, to monitor the page allocation or deallocation activity in tempdb at the session or task level, you can use the sys.dm_db_session_space_usage and sys.dm_db_task_space_usage dynamic management views. These views can be used to identify large queries, temporary tables, or table variables that are using a large amount of tempdb disk space.
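The file-level view can be summarized by consumer type; the following is a minimal sketch that converts page counts to KB (8 KB per page):

```sql
-- Summarize tempdb space usage by consumer (values in KB)
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
FROM tempdb.sys.dm_db_file_space_usage;
```

A disproportionately large version store or internal-object total points to long-running snapshot transactions or to large sorts, hashes, and work tables, respectively.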
The following table lists error messages that indicate insufficient disk space in the tempdb database. These errors can be found in the SQL Server error log, and may also be returned to any running application.
Error          Is raised when
1101 or 1105   Any session must allocate space in tempdb.
3959           The version store is full. This error usually appears after a 1105 or 1101 error in the log.
3967           The version store is forced to shrink because tempdb is full.
3958 or 3966   A transaction cannot find the required version record in tempdb.
tempdb disk space problems are also indicated when the database is set to autogrow, and the size of the database is quickly increasing.

tempdb and Index Creation
When you create an index, you can specify the SORT_IN_TEMPDB option, which directs the database engine to use tempdb to store the intermediate sort results that are used to build the index. Although this option increases the amount of disk space that is used to create the index, it reduces the time that is required to create the index when tempdb is on a different set of disks than the user database.
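The option is specified in the WITH clause of the CREATE INDEX statement; a sketch, in which the table, column, and index names are hypothetical:

```sql
-- Build the index's intermediate sort runs in tempdb instead of
-- the destination filegroup (object names are hypothetical)
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON Sales.Orders (OrderDate)
WITH (SORT_IN_TEMPDB = ON);
```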
As the database engine builds an index, it goes through two phases:
The database engine first scans the data pages to retrieve key values and builds an index leaf row for each data row. When the internal sort buffers have been filled with leaf index entries, the entries are sorted and written to disk as an intermediate sort run. The database engine then resumes the data page scan until the sort buffers are again filled. This pattern of scanning multiple data pages followed by sorting and writing a sort run continues until all the rows of the base table have been processed. In a clustered index, the leaf rows of the index are the data rows of the table, so the intermediate sort runs contain all the data rows. In a nonclustered index, the leaf rows do not contain values from nonkey columns, so they are generally smaller. A nonclustered sort run can be large, however, if the index keys are large.
The database engine merges the sorted runs of index leaf rows into a single, sorted stream. The sort merge component of the engine starts with the first page of each sort run, finds the lowest key in all the pages, and passes that leaf row to the index create component. The next lowest key is then processed, then the next, and so on. When the last leaf index row is extracted from a sort run page, the process shifts to the next page from that sort run. When all the pages in a sort run extent have been processed, the extent is freed. As each leaf index row is passed to the index create component, it is placed in a leaf index page in the buffer. Each leaf page is written as it is filled. As leaf pages are written, the database engine also builds the upper levels of the index. Each upper level index page is written when it is filled.
If you create a clustered index on a table that has existing nonclustered indexes, the general process is:
The nonclustered indexes are deallocated, but the definitions of the indexes are retained. The space is not available for use until the end of the transaction containing the CREATE INDEX statement, so that the old index pages are still available if they have to be restored during a rollback of the transaction.
The clustered index is created.
The nonclustered indexes are re-created.
When SORT_IN_TEMPDB is not specified, the sort runs are stored in the destination filegroup. During the first phase of creating the index, the alternating reads of the base table pages and writes of the sort runs move the disk read-write heads from one area of the disk to another. The heads are in the data page area as the data pages are scanned. They move to an area of free space when the sort buffers fill and the current sort run has to be written to disk, then move back to the data page area as the table page scan is resumed. The read-write head movement is higher in the second phase. At that time the sort process is typically alternating reads from each sort run area. Both the sort runs and the new index pages are built in the destination filegroup, meaning that at the same time the database engine is spreading reads across the sort runs, it has to periodically jump to the index extents to write new index pages as they are filled.
If the SORT_IN_TEMPDB option is specified and tempdb is on a separate set of disks from the destination filegroup, then during the first phase the reads of the data pages occur on a different disk than the writes to the sort work area in tempdb. This means the disk reads of the data keys tend to proceed more serially across the disk, and the writes to the tempdb disk also tend to be serial, as do the writes to build the final index. Even if other users are using the database and accessing separate disk addresses, the overall pattern of reads and writes are more efficient when SORT_IN_TEMPDB is specified than when it is not.
The SORT_IN_TEMPDB option may improve the contiguity of index extents, especially if the CREATE INDEX is not being processed in parallel. The sort work area extents are freed on a somewhat random basis with respect to their location in the database. If the sort work areas are contained in the destination filegroup, then as the sort work extents are freed, they can be acquired by the requests for extents to hold the index structure as it is built. This can randomize the locations of the index extents to a certain degree. If the sort extents are held separately in tempdb, the sequence in which they are freed has no bearing on the location of the index extents. Also, when the intermediate sort runs are stored in tempdb instead of the destination filegroup, there is more space available in the destination filegroup, which increases the chances that index extents will be contiguous.
The SORT_IN_TEMPDB option affects only the current statement. No metadata records whether the index was sorted in tempdb. For example, if you create a nonclustered index by using the SORT_IN_TEMPDB option, and later create a clustered index without specifying the option, the database engine does not use the option when it re-creates the nonclustered index.
Free Space Requirements
When you specify the SORT_IN_TEMPDB option, you must have sufficient free space available in tempdb to hold the intermediate sort runs, and enough free space in the destination filegroup to hold the new index. The CREATE INDEX statement fails if there is not enough free space and there is some reason the databases cannot autogrow to acquire more space (such as no space on the disk, or autogrow turned off).
If SORT_IN_TEMPDB is not specified, the available free space in the destination filegroup must be roughly the size of the final index. During the first phase, the sort runs are built and require about the same amount of space as the final index. During the second phase, each sort run extent is freed after it has been processed. This means that sort run extents are freed at about the same rate at which extents are acquired to hold the final index pages, so the overall space requirements do not greatly exceed the size of the final index. One side effect of this is that if the amount of free space is very close to the size of the final index, the database engine will tend to reuse the sort run extents very quickly after they are freed. Because the sort run extents are freed in a somewhat random manner, this reduces the continuity of the index extents in this scenario. If SORT_IN_TEMPDB is not specified, the continuity of the index extents is improved if there is enough free space available in the destination filegroup that the index extents can be allocated from a contiguous pool rather than from the freshly deallocated sort run extents.
At the time you execute the CREATE INDEX statement, you must have available as free space:
When you create a nonclustered index:
If SORT_IN_TEMPDB is specified, there must be enough free space in tempdb to store the sort runs, and enough free space in the destination filegroup to store the final index structure. The sort runs contain the leaf rows of the index.
If SORT_IN_TEMPDB is not specified, the free space in the destination filegroup must be large enough to store the final index structure. The continuity of the index extents may be improved if more free space is available.
When you create a clustered index on a table that does not have nonclustered indexes:
If SORT_IN_TEMPDB is specified, there must be enough free space in tempdb to store the sort runs, which include the data rows of the table. There must be enough free space in the destination filegroup to store the final index structure, including the data rows of the table and the index B-tree. A rough estimate is 1.2 times the size of the original table, although you may need to adjust the estimate for factors such as having a large key size or a fillfactor with a low value.
If SORT_IN_TEMPDB is not specified, the free space in the destination filegroup must be large enough to store the final table, including the index structure. The continuity of the table and index extents may be improved if more free space is available.
When you create a clustered index on a table that has nonclustered indexes:
If SORT_IN_TEMPDB is specified, there must be enough free space in tempdb to store the collection of sort runs for the largest index (typically the clustered index), and enough free space in the destination filegroup to store the final structures of all the indexes, including the clustered index that contains the data rows of the table.
If SORT_IN_TEMPDB is not specified, the free space in the destination filegroup must be large enough to store the final table, including the structures of all the indexes. The continuity of the table and index extents may be improved if more free space is available.
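The rough 1.2x estimate above can be anchored to the table's current size with the sp_spaceused stored procedure; a minimal sketch, in which the table name is a placeholder:

```sql
-- Report the table's current reserved, data, and index sizes.
-- Multiply the reported data size by roughly 1.2 to estimate the
-- destination-filegroup free space needed for a new clustered index
-- (table name is hypothetical).
EXEC sp_spaceused 'Sales.Orders';
```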

Recommended Options for Tempdb and the Systems Management Server Database
TempDB should be 20% of the size of the largest database on the SQL server.

Tempdb database options enabled:
Select Into/Bulk Copy
Truncate Log on CheckPoint
Tempdb database options disabled:
Columns Null by Default
No CheckPoint on Recovery
Single User
DBO Use Only
Read Only
Systems Management Server database options enabled:
Truncate Log on CheckPoint
Systems Management Server database options disabled:
Select Into/Bulk Copy
Columns Null by Default
No CheckPoint on Recovery
Single User
DBO Use Only
Read Only