
Posts

Showing posts from 2014

Azure SQL Database Scalability for IO and CPU bound scenarios

In the previous post I ran the test with 5 concurrent clients; in this post I present results generated by the same test but executed with different numbers of concurrent clients:
- Azure SQL Database Standard (S0, S1, S2) and Premium (P1, P2, P3) editions were used
- During all tests only a single Azure SQL database existed on the server at any given time
- The tests were executed for 200 seconds with 1, 2, 3, 5, and 10 concurrent clients; after completing the test for a given number of concurrent clients, all tables were dropped and re-created
- Three tests were executed for each edition and number of concurrent clients, and the average number of transactions was calculated for the table below
- After completing the tests against a particular database edition, the database was dropped and a new one was created in the next edition and/or tier (e.g., when all five tests had been completed against the S0 database, that database was dropped and a new one was created at the S1 level)

Azure SQL Database Performance for different editions

As of now Azure allows creating SQL Databases in three tiers - Basic, Standard, and Premium. The Basic tier has a single performance level with 5 DTU (Database Throughput Units). The Standard tier has three levels - S0 (10 DTU), S1 (20 DTU), and S2 (50 DTU) - and the Premium tier also has three levels - P1 (100 DTU), P2 (200 DTU), and P3 (800 DTU). To compare performance I created a test database in each tier and performance level. The test itself is the same as the one used in one of my previous blogs. The test was running from 5 clients for about 280 seconds and doing the following (sketched in T-SQL after this list):
- drop and create the test table
- generate a random ID - an integer within the [0, 2000000] range
- if the table has no row with such an ID, insert a new row
- if there is a row with such an ID, update the row with the UpdateTime and a new DummyText (of randomly generated length)
The results clearly show that the higher the DTU, the more transactions per second the database can handle.
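A minimal T-SQL sketch of a single iteration of that test, assuming a table named dbo.TestData with columns ID, UpdateTime, and DummyText (the table and column names are assumptions, not taken from the original post):

DECLARE @id  INT = ABS(CHECKSUM(NEWID())) % 2000001;        -- random ID within [0, 2000000]
DECLARE @len INT = 1 + ABS(CHECKSUM(NEWID())) % 1000;       -- random text length
DECLARE @txt VARCHAR(1000) = LEFT(REPLICATE(CONVERT(CHAR(36), NEWID()), 28), @len);

IF NOT EXISTS (SELECT 1 FROM dbo.TestData WHERE ID = @id)
    INSERT INTO dbo.TestData (ID, UpdateTime, DummyText)
    VALUES (@id, GETUTCDATE(), @txt);
ELSE
    UPDATE dbo.TestData
       SET UpdateTime = GETUTCDATE(), DummyText = @txt
     WHERE ID = @id;

Each client simply runs a block like this in a loop for the duration of the test and counts completed iterations as transactions.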

CPU performance: Azure VM vs DELL M4700.

How powerful are Azure VMs when it comes to CPU-intensive operations (e.g., file compression)? I decided to compare Azure VMs with my DELL M4700 laptop for file compression operations. I created one VM of size A6 in the US-East2 region. Size A6 means 4 CPUs, 28 GB of RAM, and 127 GB on the system drive. The second VM has size D4 in the US-East region. D4 is a newly announced type of VM that comes with SSD disks presented as drive D: (a non-persistent drive). The D4 size has 8 CPUs, 28 GB of RAM, and 400 GB on the temporary drive D: (backed by SSD disks). Both VMs report the same CPU information:
Caption           : Intel64 Family 6 Model 45 Stepping 7
DeviceID          : CPU0
Manufacturer      : GenuineIntel
MaxClockSpeed     : 2200
Name              : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
SocketDesignation : None
On my DELL M4700 laptop the CPU information is
Caption           : Intel64 Family 6 Model 58 Stepping 9
DeviceID          : CPU0
Manufacturer      : Gen

SQL 2014 - AlwaysOn vs. Standalone

AlwaysOn availability groups have been around since SQL 2012, and they've been used to build highly available and disaster-resilient solutions. But how much performance may you lose when placing your database into an availability group? AlwaysOn availability groups depend on the Windows Server Failover Cluster (WSFC), which in turn depends on Active Directory. Thus, for my test I needed an environment like this: two servers (Node1 & Node2) are nodes in a WSFC and have SQL 2014 installed. On Node1 I have two databases used for testing. Both databases are in the Full recovery model, and before running each test I backed up the log file. The test script was executed from the Srv1 server. At the beginning all tables were dropped and recreated, and then the script launched 2, 5, 10, and 15 concurrent clients running the same test (the description of the test is here). The results are presented in this chart and they show approximately a 50% drop in the number of transactions.
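For context, a minimal sketch of the AlwaysOn side of such a setup - adding the test database to an availability group on the primary and taking the log backup before a run. The AG, database, and path names are placeholders, not taken from the post:

-- run on the primary replica (Node1); [TestAG] and [TestDB] are placeholder names
ALTER AVAILABILITY GROUP [TestAG] ADD DATABASE [TestDB];
GO
-- log backup taken before each test run, as described above
BACKUP LOG [TestDB] TO DISK = N'C:\Backup\TestDB.trn' WITH INIT, STATS = 10;
GO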

SQL 2014 - Encrypted backup overhead

In SQL 2014 there is a new feature in database backups - encrypted backups. To encrypt backups you need to create a database master key for the MASTER database and either a certificate (as in the example below) or an asymmetric key to use for the backup. Here is a script to back up the IndexDemo database with encryption.
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '1qaz!QAZ';
GO
IF EXISTS (SELECT * FROM sys.certificates WHERE name = 'TestBackupCert')
    DROP CERTIFICATE TestBackupCert;
CREATE CERTIFICATE TestBackupCert WITH SUBJECT = 'Backup Encryption Certificate';
GO
BACKUP DATABASE [IndexDemo] TO DISK = 'IndexDemoDB.bak'
WITH FORMAT, STATS = 5,
ENCRYPTION (ALGORITHM = TRIPLE_DES_3KEY, SERVER CERTIFICATE = TestBackupCert); -- AES_128, AES_192, AES_256, TRIPLE_DES_3KEY
GO
I was wondering how much overhead the encryption algorithms would add and decided to test it on my IndexDemo database which was hosted on an Azur
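When measuring that overhead, a plain backup of the same database makes a natural baseline; a minimal sketch using the same FORMAT/STATS options but no ENCRYPTION clause (the backup file name here is illustrative):

-- baseline: same database, same options, no encryption
BACKUP DATABASE [IndexDemo] TO DISK = 'IndexDemoDB_plain.bak'
WITH FORMAT, STATS = 5;
GO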

SQL 2014 performance - Local disk vs. Azure Blob vs. Azure VM attached disk

Today I decided to compare SQL 2014 (RTM) performance when running a test against
- a local database (created on a WD7500BPKT - 750 GB, 7200 RPM drive)
- a DB created on a disk attached to an A3 (3 cores, 7 GB) VM in Azure
- a DB created on an Azure blob
The VM in Azure was created in the US East region using the SQL 2014 template from the gallery provided by Microsoft. All databases were created with a single 10 GB data file and a 1 GB log file. On the local SQL instance the DB was created as
CREATE DATABASE TestDBonLocal
ON (NAME = file_data1, FILENAME = 'C:\TEMP\filedata1.mdf', SIZE = 10GB, FILEGROWTH = 256 MB)
LOG ON (NAME = file_log1, FILENAME = 'C:\TEMP\filelog1.ldf', SIZE = 1GB, FILEGROWTH = 32 MB)
On the Azure VM the database on the attached disk (the disk was mapped as drive F:) was created as such
CREATE DATABASE TestDBonDisk
ON (NAME = file_data1, FILENAME = 'F:\TMP\filedata1.mdf', SIZE = 10GB, FILEGROWTH = 256 M
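For the blob-backed case, SQL 2014 allows data files to live directly in Azure blob storage; a minimal sketch, assuming a placeholder storage account, container, database name, and SAS token (none of these values come from the post):

-- the container URL and the SAS secret below are placeholders
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/sqldata]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<shared access signature token>';
GO
CREATE DATABASE TestDBonBlob
ON (NAME = file_data1, FILENAME = 'https://myaccount.blob.core.windows.net/sqldata/filedata1.mdf', SIZE = 10GB, FILEGROWTH = 256 MB)
LOG ON (NAME = file_log1, FILENAME = 'https://myaccount.blob.core.windows.net/sqldata/filelog1.ldf', SIZE = 1GB, FILEGROWTH = 32 MB);
GO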

SQL 2014 - ColumnStore Compression

I've decided to compare the compression options available in SQL 2014 CTP2 and created two identical tables: T1 (without compression) and T2 (with PAGE compression).
CREATE TABLE T1 (
  ID INT IDENTITY PRIMARY KEY CLUSTERED
, FirstName VARCHAR(255)
, LastName VARCHAR(255)
, Age TINYINT
, Province CHAR(2)
);
GO
CREATE TABLE T2 (
  ID INT IDENTITY PRIMARY KEY CLUSTERED
, FirstName VARCHAR(255)
, LastName VARCHAR(255)
, Age TINYINT
, Province CHAR(2)
) WITH (DATA_COMPRESSION = PAGE);
GO
I populated them with 42.5 million records like this
DECLARE @i INT, @j INT, @age TINYINT, @id INT, @maxFN INT, @maxLN INT, @province CHAR(2), @st DATETIME;
DECLARE @Prov TABLE (ID INT IDENTITY, Name CHAR(2));
SET NOCOUNT ON;
INSERT INTO @Prov(Name) VALUES ('ON');
INSERT INTO @Prov(Name) VALUES ('QC');
INSERT INTO @Prov(Name) VALUES ('NS');
INSERT INTO @Prov(Name) VALU
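For the columnstore comparison the title refers to, a minimal sketch of a third copy of the same table with a clustered columnstore index (T3 is an assumed name; in SQL 2014 a clustered columnstore index replaces the rowstore clustered index, so the table is created without a primary key):

CREATE TABLE T3 (
  ID INT
, FirstName VARCHAR(255)
, LastName VARCHAR(255)
, Age TINYINT
, Province CHAR(2)
);
GO
CREATE CLUSTERED COLUMNSTORE INDEX CCI_T3 ON T3;
GO
-- sp_spaceused gives a quick size comparison between T1, T2, and T3 after loading
EXEC sp_spaceused 'T3';
GO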