Assuming there are no hardware issues with the SQL Server machine but performance is still not up to the mark, the following query can provide useful information for troubleshooting.
SELECT a.*, p.query_plan, p.number
FROM sys.dm_exec_procedure_stats AS a
CROSS APPLY sys.dm_exec_query_plan(a.plan_handle) AS p
WHERE a.object_id = OBJECT_ID('[sch].[User_SP]')
This query returns some very useful columns:
cached_time: the time the procedure's plan was last cached, i.e. the most recent compile or recompile.
last_worker_time: CPU time, in microseconds, consumed by the most recent execution.
last_physical_reads: number of pages read from the physical storage system during the most recent execution.
last_logical_reads: number of pages read from the buffer pool, instead of physical storage, during the most recent execution.
query_plan: XML representation of the query plan used during the most recent run of the stored procedure.
The DMV keeps only the most recent values for these last_* columns, i.e. one row per cached plan. Running this query from a scheduled job and storing the output in a table lets you accumulate enough detail for investigation within a couple of minutes. More than one query plan may exist for a single procedure; comparing the plans can reveal the differences.
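As a minimal sketch of such a collection job (my addition; the table name dbo.ProcStatsHistory and the capture interval are assumptions, not from the original post):

-- One-time setup: hypothetical history table.
CREATE TABLE dbo.ProcStatsHistory (
    capture_time        DATETIME NOT NULL DEFAULT GETDATE(),
    cached_time         DATETIME,
    last_worker_time    BIGINT,
    last_physical_reads BIGINT,
    last_logical_reads  BIGINT,
    query_plan          XML )
GO
-- Job step, scheduled every minute or so: snapshot the DMV.
INSERT INTO dbo.ProcStatsHistory
    (cached_time, last_worker_time, last_physical_reads, last_logical_reads, query_plan)
SELECT a.cached_time, a.last_worker_time, a.last_physical_reads, a.last_logical_reads, p.query_plan
FROM sys.dm_exec_procedure_stats AS a
CROSS APPLY sys.dm_exec_query_plan(a.plan_handle) AS p
WHERE a.object_id = OBJECT_ID('[sch].[User_SP]')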
In one real-world scenario, I noticed a few million logical reads for a call that returned just 100 rows; a problem with one of the joins was causing the excessive logical reads.
That is not all: you can also compare physical reads against logical reads, and consider how frequently the procedure is recompiling. Too much recompiling adds CPU overhead.
Note: logical reads are preferable to physical reads. A high share of logical reads shows that the query is getting its data from the buffer pool, while a high number of physical reads is a sign of a potential I/O bottleneck.
Sep 21, 2009
Red Gate SQL Backup 5.4 vs SQL Server 2008 Backup
RedGate 5.4 Backup Vs SQL 2008 Backup Vs SQL 2008 Backup with Encryption
RedGate 5.4 Encrypted Compressed Backup
How it works
1. Backs up the database,
2. Compresses the backup,
3. Encrypts the backup with a password,
4. The password must be supplied while restoring (or backing up) the database, or the password must be placed in the script in the job.
Database size for testing: 7 GB
Backup time: 134 seconds
Backup size: 715 MB
Restore time: 247 seconds
Encryption: Yes, 256-bit
Compression level: 3 - Full (available options: 1, 2, 3)
Advantages
1. Encrypts the backup file, not the database.
2. Flexible compression levels can be used.
3. Compression ratio is larger than SQL Server's native compression.
4. Backup can be split into multiple files.
Limitations
1. Requires purchasing a separate license.
2. The password is displayed in the backup/restore script, so anyone who has access to view the jobs can see it.
Tested Script
DECLARE @exitcode int
DECLARE @sqlerrorcode int
EXEC master..sqlbackup N'-SQL "BACKUP DATABASES [myDB] TO DISK = ''C:\SQL Data\backup\'' WITH NAME = '''', DESCRIPTION = '''', ERASEFILES = 5, COMPRESSION = 1, THREADS = 1, MAILTO = ''DBATeam@Domain.com''"', @exitcode OUTPUT, @sqlerrorcode OUTPUT
IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
BEGIN
RAISERROR ('SQL Backup job failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)
END
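Since the password has to be supplied at restore time (point 4 above), a restore through the same sqlbackup extended procedure would look roughly like the sketch below. This is my illustration, not the tested script: the .sqb file name and password are placeholders, and the exact option list should be checked against the Red Gate documentation.

DECLARE @exitcode int
DECLARE @sqlerrorcode int
-- Sketch only: [myDB], the backup file name, and the password are placeholders.
EXEC master..sqlbackup N'-SQL "RESTORE DATABASE [myDB] FROM DISK = ''C:\SQL Data\backup\myDB.sqb'' WITH PASSWORD = ''*****'', RECOVERY"', @exitcode OUTPUT, @sqlerrorcode OUTPUT
IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
RAISERROR ('SQL Backup restore failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)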
SQL 2008 Ent Full Backup Compressed
How it works
1. Backs up the database,
2. Compresses the backup,
3. Password protection is available, but without encryption (a sketch follows below); see http://www.mssqltips.com/tip.asp?tip=1108 for more information.
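As a hedged illustration of that password option (my addition, not the tested script): SQL Server 2008 accepts a PASSWORD option on BACKUP DATABASE, which blocks casual restores without the password but does not encrypt the backup data. The path and password here are placeholders.

-- Password-protected (not encrypted) compressed backup:
BACKUP DATABASE MyDB
TO DISK = 'C:\SQL Data\backup\myDB_pwd.bak'
WITH COMPRESSION, PASSWORD = '*****'
GO
-- The same password must be supplied to restore:
RESTORE DATABASE MyDB
FROM DISK = 'C:\SQL Data\backup\myDB_pwd.bak'
WITH PASSWORD = '*****', RECOVERY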
Database size for testing: 7 GB
Backup time: 120 seconds
Backup size: 865 MB
Restore time: 246 seconds
Encryption: No
Compression level: Full (the only option)
Advantages
1. No need to purchase an extra license; this is part of the SQL 2008 license.
2. Password protection is available for backups.
3. Supported by Microsoft as part of a Microsoft product.
4. Compresses the backup.
Limitations
1. Does not support encryption.
Tested Script
BACKUP DATABASE MyDB TO DISK = 'C:\SQL Data\backup\myDB.bak'
WITH COMPRESSION
GO
RESTORE DATABASE MyDB FROM DISK = 'C:\SQL Data\backup\myDB.bak'
WITH RECOVERY,
MOVE 'DATA1' TO 'C:\SQL Data\Test\DATA1.mdf',
MOVE 'LOG1' TO 'C:\SQL Data\Test\LOG1.ldf',
MOVE 'INDEX1' TO 'C:\SQL Data\Test\INDEX1.mdf'
GO
SQL 2008 Ent Full Backup Encrypted Compressed
How it works
1. SQL 2008 supports encryption at the database level (Transparent Data Encryption), and it has to be enabled the first time using the following steps:
a. A master key must be created, followed by a database certificate.
b. Using the certificate, a database encryption key can be created to encrypt the database.
c. Encryption can then be enabled on the database (a CPU-intensive task).
2. Back up the encrypted database.
3. Compress the backup.
Database size for testing: 7 GB
Backup time: 218 seconds
Backup size: 4.73 GB
Restore time: 337 seconds
Encryption: Yes, 256-bit
Compression level: Full (the only option)
Advantages
1. No need to supply a password on the backup or restore command; SQL Server uses the built-in database certificate for security.
2. No need to purchase an extra license; this is part of the SQL 2008 license.
Limitations
1. CPU-intensive task.
2. Backup compression ratio is the lowest of the three options.
3. If the certificate is lost, the data can be lost.
Tested Script
USE master
GO
--1. CREATE MASTER ENCRYPTION KEY (in the master database)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '*****'
--2. CREATE DATABASE CERTIFICATE
CREATE CERTIFICATE dbCERT WITH SUBJECT = 'Database Certificate TDE'
--3. BACKUP CERTIFICATE (the private key must be written to a separate file from the certificate)
BACKUP CERTIFICATE dbCERT TO FILE = 'C:\SQL Data\DB CERTIFICATE\db_bkupcert_1.cert'
WITH PRIVATE KEY(
FILE = 'C:\SQL Data\DB CERTIFICATE\db_bkupcert_1.pvk', ENCRYPTION BY PASSWORD = '*****');
--4. CREATE DEK -- DATABASE ENCRYPTION KEY (in the user database)
USE myDB
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE dbCERT
--5. ENABLE ENCRYPTION ON THE SELECTED DATABASE
ALTER DATABASE myDB SET ENCRYPTION ON
--6. THE COMMAND ABOVE STARTS A BACKGROUND PROCESS THAT ENCRYPTS THE DATABASE;
--MONITOR THE STATUS OF THE ENCRYPTION PROCESS USING THE FOLLOWING COMMAND.
SELECT db_name(database_id), encryption_state, percent_complete
FROM sys.dm_database_encryption_keys
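Regarding limitation 3 above: the certificate backup from step 3 is what lets you restore an encrypted backup onto another server. A minimal sketch of that recovery (my addition; file names and passwords follow the placeholders in the script above):

-- On the destination server, recreate the certificate before restoring:
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '*****'
CREATE CERTIFICATE dbCERT
FROM FILE = 'C:\SQL Data\DB CERTIFICATE\db_bkupcert_1.cert'
WITH PRIVATE KEY (
    FILE = 'C:\SQL Data\DB CERTIFICATE\db_bkupcert_1.pvk',
    DECRYPTION BY PASSWORD = '*****')
-- The encrypted backup can now be restored as usual:
RESTORE DATABASE myDB FROM DISK = 'C:\SQL Data\backup\myDB.bak' WITH RECOVERY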
Sep 8, 2009
Improving database performance with Table Partitioning
Table partitioning is a concept introduced in MS SQL 2005. Partitioning gains performance from SQL Server by splitting a table into multiple parts on the basis of values in a pre-defined column. In the old days of SQL Server (6.5-2000), however, good developers knew how to get a similar benefit by splitting tables manually: they created multiple tables instead of one and wrote check constraints to group the data between them. Let me explain the concept with the following example:
create table Sales_1
(
SKU varchar(20),
SaleDate datetime
)
alter table Sales_1
ADD CONSTRAINT sales_date_1 check (SaleDate < '2009-01-01')

create table Sales_2
(
SKU varchar(20),
SaleDate datetime
)
alter table Sales_2
ADD CONSTRAINT sales_date_2 check (SaleDate >= '2009-01-01')
We now have two tables, Sales_1 and Sales_2, each with a check constraint. On one table the constraint keeps the records prior to 2009, and the other table keeps the records where the sale belongs to 2009 or later.
create view sales
as
select * from sales_1
union all
select * from sales_2
We then created a view, sales, which uses UNION ALL to combine the rows from both tables. If you query data for dates prior to 2009, SQL Server will hit only the Sales_1 table; otherwise it will hit Sales_2. This approach also helps when we think about archiving data: the pre-2009 data is effectively archived, and its table will not be touched unless a query specifically asks for it.
The only limitation of this approach was inserting rows through the view (see the sketch below).
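As an illustration (my example, not from the original post): the check constraints let the optimizer touch only one base table when selecting, while inserts are simplest aimed at the base tables directly, since inserting through a partitioned view has strict requirements (for example, a primary key that includes the partitioning column) which these tables do not meet.

-- Touches only Sales_2, thanks to the check constraints:
SELECT SKU, SaleDate
FROM sales
WHERE SaleDate >= '2009-01-01'

-- These tables have no primary key, so rows cannot be inserted
-- through the view; insert into the matching base table directly:
INSERT INTO Sales_2 (SKU, SaleDate)
VALUES ('SKU-1001', '2009-03-15')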
In SQL 2005 there is an easier, more systematic way to achieve the same thing: table partitioning. By default SQL Server creates one partition for each table in the database, but more partitions can be added to a table. Logically speaking, partitioning is a horizontal split of rows: each set of rows is saved into the partition whose rule matches the value in the specified column.
Let's say your database holds ten years of data with millions of rows; you can split the rows into ten partitions, one per year. That organizes the data logically and also boosts performance. It is important to know that to get a performance benefit, partitioning should be applied to large tables, where selecting and storing data is generally slow.
Once partitioning is in place, you don't need to worry about putting data into the right partition, and you don't need to write more than one query when selecting; it is SQL Server's job to apply the partitioning rules.
Before partitioning a table, two things need to be in place: first a partition function, and second a partition scheme based on that function. Once the function and scheme exist, the last step is to create the table on top of them.
Partition Function
In simple terms, a partition function defines the logical boundaries for the data. Let's take the same example as the partitioned view: I want to put the sales information for 2008 in the first partition and for 2009 in the second. To do this, I first need to decide which data type the partitioning column uses; in this case, of course, it is datetime. One thing to understand here is that partition functions are independent of any table: a partition function can be referenced by any table.
Now I know that I have to define the boundaries, i.e. 2008-1-1 to 2008-12-31, then 2009-1-1 to 2009-12-31. There are two types of partition function: left range functions and right range functions.
Left Range
Data equal to a boundary value goes into the left (lower) partition. If we create a left range function with boundaries of 2008-12-31 and 2009-12-31, it creates three partitions: partition 1 holds values less than or equal to 2008-12-31, partition 2 holds values greater than 2008-12-31 and less than or equal to 2009-12-31, and any value greater than 2009-12-31 goes into partition 3.
CREATE PARTITION FUNCTION part_sales(DATETIME)
AS RANGE LEFT FOR VALUES
('2008-12-31','2009-12-31')
GO
Right Range
The only difference between a left range and a right range is that with a right range, data equal to a boundary value moves into the next (higher) partition. To get essentially the same split, the right range can be defined as follows. (Note that with datetime data, a right range on first-of-year boundaries is usually the safer pattern: a left boundary of '2008-12-31' means midnight, so rows time-stamped later on 31 December would fall into the next partition.)
CREATE PARTITION FUNCTION part_sales(DATETIME)
AS RANGE RIGHT FOR VALUES
('2008-1-1', '2009-1-1')
GO
Partition Schemes
This is not all. Once the partition function is created, the next step is to decide which filegroup the partitioned data should go to. The function above defines three partitions, and to get the highest performance out of it we can create three different filegroups. Let's assume I have created three filegroups, fg1, fg2 and fg3, on three different storage arrays. The partition scheme decides which filegroup the data for each partition goes to.
CREATE PARTITION SCHEME ps_Sales
AS PARTITION part_sales
TO(fg1,fg2,fg3)
GO
Partition Table
Once the partition function and partition scheme are created, the next step is to create the partitioned table. In this example the table is created with a nonclustered primary key; the clustered index, added afterwards on the partition scheme, is what actually partitions the table.
CREATE TABLE sales(
order_id INT NOT NULL PRIMARY KEY NONCLUSTERED,
productID int NOT NULL,
Qty INT NOT NULL,
Sale_agent VARCHAR(50) NOT NULL,
Sale_Final_Date datetime NOT NULL
)
CREATE CLUSTERED INDEX idx_Sales_Date
ON sales(sale_final_date)
ON ps_Sales(sale_final_date);
No special syntax is required to query the partitioned table; the query optimizer itself chooses the right partitions to serve the results.
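To confirm how the rows were distributed (a sketch I am adding, not from the original post), the sys.partitions catalog view and the $PARTITION function can be used:

-- Row count per partition for the sales table:
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('sales')
AND index_id IN (0, 1)  -- heap or clustered index only

-- Which partition a given date maps to under part_sales:
SELECT $PARTITION.part_sales('2009-06-15') AS partition_number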
I will write more posts on partitioning showing how to modify a partition, remove a partition, and so on.
Aug 26, 2009
Moving data or log files to another location without detaching the database
There are various ways to move data files or log files to a new location; I am sharing the method I use because it does not require detaching and reattaching the database. This saves a lot of maintenance time if your database is replicated or must be available 24/7, though there is a performance penalty while any of these steps run.
The only limitation of this method is that the primary data file and primary log file cannot be moved while the database is online. However, a primary file can be marked as an empty file: SQL Server then transfers all of its data to the new file and no longer stores data in it.
It has to be done in the following sequence:
- Add new files to the database at the new location.
- Transfer data from the old file to the new files, marking the old file as not to be used for storing any data.
- Remove the old files (except the primary data or primary log file; though it cannot be removed, a primary file marked empty will no longer be used for storing data, and all its existing data will be moved to a new file in the same filegroup).
Here are the details you can follow for each step mentioned above:
1. Add new files to the database at the new location.
Assume you have a database named TESTING with a filegroup named SECONDARY. Make sure you add the new file to one of the existing filegroups (in my case SECONDARY); this is required because the existing data is moved within the same filegroup when you mark the old file as EMPTYFILE. Run the following script, editing the parameters to suit your requirements.
ALTER DATABASE testing
ADD FILE ---- for a log file, change this line to ADD LOG FILE and omit the TO FILEGROUP clause
(NAME = 'data_5'
, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\data\data_5_Data.NDF'
, SIZE =5MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB)
TO FILEGROUP secondary
2. Move the data out of the existing file and mark it as EMPTYFILE so that you can remove the file later.
DBCC SHRINKFILE(data_3, EMPTYFILE)
GO
Once you mark the file as EMPTYFILE, you may remove it from the database, or keep it if it is a primary file; either way the file is now empty (no pages) and SQL Server will not store any more data in it.
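To verify that the file really is empty before removing it (my addition, a quick sketch):

-- Space used per file; data_3 should show 0 (or near 0) in used_mb:
USE testing
GO
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files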
3. This is the final step: remove the file you just marked as EMPTYFILE.
ALTER DATABASE TESTING
REMOVE FILE data_3
GO
Thanks
Swarndeep