
Feb 25, 2017

SQL Server TDE (Data at Rest Encryption)





What is TDE?

Transparent Data Encryption (TDE) is used by Microsoft, Oracle, and other database vendors to encrypt data files. It addresses the compliance requirement generally known as Data at Rest Encryption.

TDE was first introduced by Microsoft in SQL Server 2008. It is available in the Evaluation, Developer and Enterprise editions only (also the Datacenter edition in some older versions).

How does TDE work?

With TDE the database files are encrypted, which prevents the data files from being exploited if the files or their backups are copied out with bad intentions.

Some Facts about TDE

  • During backup, if compression is enabled, the backup gains little from compression because the data is already encrypted.
  • If any database on an instance is encrypted, TempDB is encrypted as well.
  • TDE only encrypts Data at Rest. Data in Motion, Data in Transit and Data in Memory are not encrypted.
  • Read-only filegroups cannot be encrypted, so implementing TDE on a database that contains them will fail.
  • Once TDE is enabled, mirrored or log-shipped copies of the database will also be encrypted, because TDE keeps the transaction log encrypted.
  • Even after the certificate expires, it still encrypts the database. Certificate expiration is more of a compliance matter; you can set a custom expiry date with the EXPIRY_DATE option while creating the certificate, and by default it is one year (you can check the date with the query below).
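
The expiry date of the TDE certificate can be read from the sys.certificates catalog view; a minimal sketch, assuming a certificate named TDE_CERT as created in the demo below:

USE master;
GO
SELECT name, subject, start_date, expiry_date
FROM sys.certificates
WHERE name = 'TDE_CERT';
GO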


Encryption Hierarchy (https://msdn.microsoft.com/en-us/library/bb934049.aspx)

  • The encryption hierarchy starts with the Windows-level Data Protection API (DPAPI). During setup of the SQL Server instance, the Service Master Key is encrypted with DPAPI.
  • The Service Master Key encrypts the Database Master Key in the master database.
  • Using the Database Master Key, a certificate is created in the master database.
  • The certificate in the master database is used by the other databases on the same instance to create a Database Encryption Key (DEK).
  • Finally, the DEK is used to encrypt the entire user database.

Implementing TDE

  1. Create a Master Key in the master database
  2. Create a Certificate protected by the Master Key in the master database
  3. Create a Database Encryption Key using the Certificate, on the user database that has to be encrypted
  4. Turn the database encryption option on
  5. Back up the Certificate and Private Key to a safe place (optional but strongly recommended)


Demo

Create Master Key

USE master;
GO
CREATE MASTER KEY
ENCRYPTION BY PASSWORD = '@$trongPa$$word@';
GO


Create Certificate

USE master;
GO
CREATE CERTIFICATE TDE_CERT
WITH SUBJECT = 'TDE Certificate';
GO



Create the DEK (Database Encryption Key) using the certificate created in the previous step. This step has to be run on the database where TDE has to be enabled.

USE myDB;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDE_CERT;
GO



Finally Enable TDE for the database

USE master;
GO
ALTER DATABASE myDB
SET ENCRYPTION ON;
GO

You can check the status of encryption by querying the DMV sys.dm_database_encryption_keys:

SELECT encryption_state, percent_complete, *
FROM sys.dm_database_encryption_keys;

encryption_state values:
0 = No database encryption key present, no encryption
1 = Unencrypted
2 = Encryption in progress
3 = Encrypted
4 = Key change in progress
5 = Decryption in progress (disabling TDE)
6 = Protection change in progress



Backup Certificate


USE master;
GO
BACKUP CERTIFICATE TDE_CERT
TO FILE = 'C:\encryption\TDE_CERT.cer'
WITH PRIVATE KEY (FILE = 'C:\encryption\pkey.key', ENCRYPTION BY PASSWORD = '@$trongPa$$word@');
GO

Make sure to keep the certificate and private key in a safe place. Without the certificate and private key, restoring the database (for example on another server) will be impossible; a restore sketch is shown below.
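
As a minimal sketch of the recovery path (assuming the backup files created above and a new target instance; the new master key password is only an example), the certificate has to be recreated from the backed-up files in the target instance's master database before the encrypted database or its backups can be restored:

USE master;
GO
CREATE MASTER KEY
ENCRYPTION BY PASSWORD = '@An0therStr0ngPa$$word@';
GO
CREATE CERTIFICATE TDE_CERT
FROM FILE = 'C:\encryption\TDE_CERT.cer'
WITH PRIVATE KEY (FILE = 'C:\encryption\pkey.key', DECRYPTION BY PASSWORD = '@$trongPa$$word@');
GO
-- The encrypted database or its backups can now be restored as usual.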






Sep 7, 2016

SSRS 2014 - IE 11 - Toolbar items may render in multiple rows instead of one row.

I had been facing issues with SSRS 2008 R2: reports were freezing and not showing any data. Microsoft did not provide any support, since the SQL Server 2008 variants are closed for any new enhancements.

We upgraded to SSRS 2014; reports started working and data was rendered. But now the toolbar started appearing in multiple rows, which was not acceptable on smaller-resolution screens. I could not find any solution or fix from Microsoft for SSRS 2014, but I did find a workaround that worked for me.

This workaround requires modifying the ReportViewer.aspx file.

The ReportViewer.aspx file is located under the Pages folder at %ProgramFiles%\Microsoft SQL Server\MSRS12.MSSQLSERVER\Reporting Services\ReportServer\Pages.

The folder name and path may differ based on the SQL Server version and installation location; the path provided above is generalized.

  1. Take a backup of the original ReportViewer.aspx file, in case the changes have to be reverted.
  2. Open ReportViewer.aspx,
    • find the meta tag
    • Replace the content property with content="IE=EmulateIE10; IE=Edge" in the tag
  3. Restart the SSRS service
  4. Try the report now, it should work
  5. If it does not work, try different options in the content property, for example IE=EmulateIE7, etc.

Changing this option makes the report viewer try to emulate the IE versions specified, in order.

May 18, 2016

How to reload Missing SQL Server Performance Counters

Recently I was working on one of the servers to analyze performance using Performance Counters that had been collected over a long period of time.

I noticed that some of the needed counters were missing from the collection.

So what was happening on that server where performance counters were missing? After running the following query against the current snapshot of performance counters, I did not get any results. SQL Server was not reporting any counters.
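
The current snapshot of counters comes from the sys.dm_os_performance_counters DMV; a minimal version of the check looks like this (on a healthy instance it returns a row for every loaded counter):

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters;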



After doing some research I found that there is a configuration file that lists all the performance counters which SQL Server loads. The performance counter configuration file is generally named perfsqlctr.ini or perf-sqlctr.ini and is located in the Binn directory.
To get the exact file name and location, you can take help from the Registry.

  1. In the Registry, go to HKLM\SYSTEM\CurrentControlSet\services\[InstanceName]
  2. Extract the value of the ImagePath key up to Binn
  3. Go to HKLM\SYSTEM\CurrentControlSet\services\[InstanceName]\Performance
  4. Extract the value of the PerfIniFile key
  5. Combine the values from #2 and #4; that is the full path of the file which has all of the counters.
Now that you have the file name, follow these steps to reload the counters.
  1. Open the file in a text editor and make sure that all of the needed counters are listed; if a counter is not there, add it while matching the format of the existing counters.
  2. Unload the counters
    • If it is the default instance, run the following command from a shell:
      • Unlodctr MSSQLSERVER
    • If it is a named instance, run the following command from a shell:
      • Unlodctr MSSQL$[ServiceName]
  3. Restart the SQL Server service for that instance
  4. Reload the counters with the following command from a shell:
      • Lodctr [path\file you got in #5]
  5. Restart the SQL Server service for the instance
It worked for me; I could see the counters again.



If you still see issues, please make sure that the registry keys discussed above are accessible by the SQL Server service account, and then repeat steps 2 through 4.

Nov 9, 2015

In Memory OLTP or Memory Optimized OLTP

If an enterprise server has to be upgraded to boost performance, then memory is often the first resource to consider, as a quick and cheap upgrade. Memory is not only cheap, but also very easy to install. If the server supports it, memory can even be added without bringing the server down for maintenance.

Starting with SQL Server 2014, Microsoft introduced In-Memory Tables, also known as Memory Optimized Tables or In-Memory OLTP. Memory optimized tables can take advantage of additional memory (RAM) to provide better performance.

As the name suggests, the data in these tables stays in memory while the SQL Server service is running. This does not mean that the data vanishes when the server is powered off or the service is interrupted: SQL Server keeps a copy of the data in the log (write-ahead logging), and if the data has to be retained it is loaded back into the memory optimized tables from the log.

While defining the table you can specify whether the durability of the table will be schema only, or both schema and data.

Memory Optimized Tables are a brand new concept in SQL 2014 and their allocation works a little differently from regular tables. Unlike regular tables, data in a Memory Optimized Table is stored in streams, also called data streams. These tables don't use an IAM (Index Allocation Map) to link an extent chain; for this reason each Memory Optimized Table requires at least one index to link the data.

The requirements for Memory Optimized Tables are:

1. A filegroup designated for Memory Optimized Data.
2. A data file in the Memory Optimized filegroup.
3. Service Broker (or any feature that enables Event Notifications) disabled for that database, as it is not supported.
4. At least one index per Memory Optimized Table. The index can be a Hash Index or a Range Index. Hash Indexes use buckets to organize and point to the underlying data, and Range Indexes are regular indexes using the Bw-tree architecture.

You can either create a new Memory Optimized database, or you can add a Memory Optimized data file to an existing database.

Here is a small example to introduce Memory Optimized OLTP tables.


Create Database

CREATE DATABASE [MYDB]
CONTAINMENT = NONE
ON PRIMARY
(NAME = 'DataFile',
FILENAME = 'H:\SQLDATAFILES\MYDB_DATAFILE.MDF',
SIZE = 4096KB,
MAXSIZE = 10240KB,
FILEGROWTH = 1024KB
)
LOG ON
(NAME = 'LogFile',
FILENAME = 'H:\SQLDATAFILES\MYDB_LOGFILE.LDF',
SIZE = 4096KB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%
)


Disable Broker


IF EXISTS (SELECT * FROM SYS.DATABASES WHERE NAME = 'MYDB'  AND IS_BROKER_ENABLED = 1)
          ALTER DATABASE MYDB SET DISABLE_BROKER


Add Memory Optimized File Group and Data File


ALTER DATABASE MYDB
ADD FILEGROUP [MYDB_MEMORY_OPTIMIZED_GROUP] CONTAINS MEMORY_OPTIMIZED_DATA

ALTER DATABASE MYDB
ADD FILE
(NAME = 'MYDB_MEM_OPT_DAT',
FILENAME = 'H:\SQLDATAFILES\MYDB_MEM_OPT_DAT',
MAXSIZE = UNLIMITED
) TO FILEGROUP [MYDB_MEMORY_OPTIMIZED_GROUP]


Create Table

USE MYDB;

CREATE TABLE MYDB_INMEMORY_TABLE
(
COLUMN1 INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
COLUMN2 INT NOT NULL INDEX INDEX1 HASH WITH (BUCKET_COUNT=1000),
COLUMN3 CHAR(100) COLLATE LATIN1_GENERAL_100_BIN2 NOT NULL INDEX INDEX2
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_AND_DATA)


It is all done and ready to use. However, it is recommended to create a Resource Governor resource pool for the database's memory so that memory utilization can be controlled and monitored.

Create Resource Pool

IF NOT EXISTS(SELECT * FROM SYS.RESOURCE_GOVERNOR_RESOURCE_POOLS WHERE NAME = 'POOL_MYDB')
BEGIN
CREATE RESOURCE POOL POOL_MYDB WITH (MAX_MEMORY_PERCENT=10);
ALTER RESOURCE GOVERNOR RECONFIGURE;
END

Map Resource Pool and Database

EXEC sp_xtp_bind_db_resource_pool 'MYDB', 'POOL_MYDB';

Take Database Offline and Online again to apply the changes

USE MASTER;
ALTER DATABASE MYDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MYDB SET ONLINE;
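
At this point the table can be used like any other. As a quick usage sketch (assuming the MYDB_INMEMORY_TABLE created above; COLUMN1 is an identity column and is not inserted explicitly):

USE MYDB;

INSERT INTO MYDB_INMEMORY_TABLE (COLUMN2, COLUMN3)
VALUES (100, 'First memory optimized row');

SELECT COLUMN1, COLUMN2, COLUMN3
FROM MYDB_INMEMORY_TABLE
WHERE COLUMN2 = 100;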



If you like this article or have any comments, please leave your feedback below.


Oct 2, 2015

A glance at SQL Server Statistics

Statistics are sometimes ignored when it comes to Performance Tuning. Statistics can play a big role in identifying recurring slowness.

Before executing any query, the SQL Server query optimizer needs some information in advance, like the number of records, page density, indexes, etc. If up-to-date information is available, the query optimizer can pick the best execution plan to execute the query.

Before going further, here is an analogy: an online store makes sure that it has enough stock in its warehouse, ready to ship, to meet demand. If its system is broken and the online ordering system is unaware of the stock quantities in the warehouse, the ordering system will probably order the item directly from the distributor, which will take longer for delivery.

On the other hand, if the online store has outdated information saying that its warehouse has enough stock, the online order will be mistakenly sent to the warehouse and then have to be re-routed to the distributor.

Both of these scenarios have a bad impact because the information was outdated somewhere.

This example may not fit 100% with SQL Server statistics but it is very close.

Let's map the same example to SQL Server. Store A has 10 warehouses in a state, at different zip code locations. The store decides to merge all warehouses into one big warehouse to reduce overhead cost.

After the physical change, this information is updated in the database by simply updating the affected zip codes to one zip code. Note that there is a pre-existing index on the ZipCode column.

The order entry system complains that after merging the zip codes the system is a little sluggish. The reason for this sluggishness lies behind the mass update: the most probable cause is outdated statistics. Let's walk through an example:


  • Create a table:
CREATE TABLE [dbo].[Warehouse_Inventory](
[ItemCode] [varchar](50) NOT NULL,
[ZipCode] [int] NOT NULL
) ON [PRIMARY]
GO

  • Now insert some data:


declare @zip int, @inventory int
set @zip = 1
while (@zip < 200000)
begin
    set @inventory = @zip + 22.5
    insert into [dbo].[Warehouse_Inventory]
    select 'Inventory ' + cast(@inventory as varchar(10)), @zip
    select @zip = @zip + 1
end


  • Create a non clustered index on zip code:



CREATE NONCLUSTERED INDEX [NonClusteredIndex-ZipCode] ON [dbo].[Warehouse_Inventory]
(
[ZipCode] ASC
)
GO

  •  Let's run our first query to get inventory from location 30001. Only 1 row should be returned.
select zipcode from [Warehouse_Inventory] where ZipCode = 30001 OPTION (RECOMPILE)


The query used the index as expected; now let's look at the execution plan. In the screenshot below the estimated and actual number of rows match: what the query optimizer estimated is exactly what was returned. This sounds good.

Execution Plan after creating Index


Since the zip codes were updated, about 90% of the warehouses have moved to one zip code, 30001. In the table, we need to update those 90% of the zip codes.


update [dbo].[Warehouse_Inventory]
set zipcode = 30001
where zipcode < 180000

Without making any further changes (no index rebuilds, etc.), running the same query again now returns about 180,000 rows. Let's look at the execution plan the query optimizer creates for the same query.

The query optimizer estimated only 1 row to be returned, though about 180,000 rows were actually returned.



Now, let's fix the statistics by updating them manually:

UPDATE STATISTICS dbo.Warehouse_Inventory

After updating the statistics, the number of rows estimated by the query optimizer is about 180,000, matching the actual count. This is what we wanted to see in the execution plan.


If the query optimizer works from a wrong estimate, it can pick a wrong execution plan, which can cause performance sluggishness.

Here is a query that can be used to find out how many modifications have been made since the statistics were last updated. This can help in devising a custom statistics-update schedule for some of the indexes that are updated heavily throughout the day in a production application.

SELECT
[sp].[object_id],
OBJECT_NAME([sp].[object_id]) AS 'ObjectName',
[sp].[stats_id] AS 'StatID',
[st].[name] AS 'StatName',
[sp].[last_updated] AS 'LastTimeStatUpdated',
[sp].[modification_counter] AS 'NumberOfModifications'
FROM [sys].[stats] AS [st]
OUTER APPLY sys.dm_db_stats_properties ([st].[object_id],[st].[stats_id]) AS [sp]
where OBJECT_NAME([sp].[object_id]) is not null
and OBJECT_NAME([sp].[object_id]) not like 'sys%'
and OBJECT_NAME([sp].[object_id]) in ('Warehouse_Inventory')

The example above is very simple and you probably will not see any major difference in performance; it was just to demonstrate how to work with statistics.

There are some options that SQL Server provides at the database level: Auto Update Statistics and Auto Update Statistics Asynchronously.

Auto Update Statistics: if the statistics are found to be outdated beyond a threshold when a query is optimized, they are updated before the query executes.

Auto Update Statistics Asynchronously: if the statistics are found to be outdated beyond a threshold when a query is optimized, the query runs with the existing statistics and an update request is sent to the engine in the background.


One or both of these options can be switched on. Since every server has a different load, environment and usage, be a little careful before opting for these options. In general, OLTP workloads may respond very well to the asynchronous option, while data warehouses tend to do well with the synchronous option. A sketch of the settings is shown below.
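
As a minimal sketch (assuming a database named MYDB; both are standard ALTER DATABASE SET options), the settings can be changed like this:

ALTER DATABASE MYDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE MYDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;

Note that the asynchronous option only takes effect when AUTO_UPDATE_STATISTICS is also ON.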

