
SQL 2014 performance - Azure VM (SSD) vs. Local SSD vs. Azure DB (V12)

Microsoft has recently added a new G-series of VMs built on the latest Intel Xeon E5 v3 processor family with plenty (up to 6.5 TB) of local Solid State Drive (SSD) space.
Microsoft has also made available (in preview mode) the latest Azure SQL Database V12, which provides nearly complete compatibility with the Microsoft SQL Server 2014 engine and promises better performance (at the Premium level).

I've decided to run my simple OLTP test against databases created:

  • on the local instance of SQL 2014 with data and log files placed on the SATA hard drive (labelled below as DELL (HDD))
  • on the local instance of SQL 2014 with data and log files placed on the SSD drive (labelled below as DELL (SSD))
  • on the SQL 2014 installed within a G2-series VM (4 cores, 56 GB) with data and log files placed on an attached 100 GB drive (labelled below as Azure G2)
  • on the SQL 2014 installed within a G2-series VM with data and log files placed on a virtual disk created on top of 8 x 100 GB attached disks (the disks were created in the US West region, the same region where the G2 VM was located)
  • on the SQL 2014 installed within a G2-series VM (4 cores, 56 GB) with data and log files placed on the local SSD drive (labelled below as Azure G2(SSD))
  • within the Azure SQL V12 P3 (the highest performance level available) server (labelled below as Azure SQL V12 (P3))
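
For reference, each of these databases (other than the Azure SQL V12 one, which is covered below) was created with a plain CREATE DATABASE statement along these lines; the file names, sizes, and paths here are assumptions for illustration, not the exact values used in the test:

-- Minimal sketch; point the FILENAME paths at the drive under test
-- (e.g. C: for the laptop, F: for the attached/virtual disks,
-- D: for the VM's local SSD, which is the temporary drive on Azure VMs).
CREATE DATABASE TestDB
ON     (NAME = file_data1, FILENAME = 'F:\DATA\filedata1.mdf',
        SIZE = 10GB, FILEGROWTH = 256MB)
LOG ON (NAME = file_log1, FILENAME = 'F:\DATA\filelog1.ldf',
        SIZE = 1GB, FILEGROWTH = 32MB);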


The build number of the SQL Server installed locally on my laptop (DELL M4800) and on the Azure G2 VM was identical:

Microsoft SQL Server 2014 - 12.0.2430.0 (X64)

The SQL instance was configured with Instant File Initialization enabled, and maximum server memory was set to 20000 MB on the Azure VM and 10240 MB on the instance installed on the laptop.
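
The memory cap can be set through sp_configure; a quick sketch using the VM's value:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 20000;  -- 10240 on the laptop instance
RECONFIGURE;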

The VM was created from the Microsoft-provided template with SQL 2014 pre-installed.


At the time, the G2-series VMs were available in the US West region.


The CPU information returned by WMIC for the VM was as follows:

C:\>wmic cpu get name,CurrentClockSpeed,MaxClockSpeed

CurrentClockSpeed  MaxClockSpeed  Name
1995               1995           Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz
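
The same numbers can be cross-checked from inside SQL Server; this quick sanity-check query is a standard one, not output from the original test:

-- Logical CPUs and memory as seen by the SQL Server instance
SELECT cpu_count,
       hyperthread_ratio,
       physical_memory_kb / 1024 AS physical_memory_mb
FROM sys.dm_os_sys_info;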



The same command on the laptop shows this information about the CPU:

C:\Windows\system32>wmic cpu get name, CurrentClockSpeed, MaxClockSpeed

CurrentClockSpeed  MaxClockSpeed  Name
2494               2494           Intel(R) Core(TM) i7-4710MQ CPU @ 2.50GHz


And here are the disks installed in the laptop (as reported by the Get-Disk cmdlet):

Friendly Name                            Total Size 
-------------                            ---------- 
LITEONIT LMT-256L9M-11 MSATA 256GB        238.47 GB 
HGST HTS725050A7E630                      465.76 GB 
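
To double-check which volume each data and log file actually ended up on, sys.dm_os_volume_stats can be joined to sys.master_files; this is one way to verify placement, not a query from the original post:

-- Maps every database file to the volume it lives on
SELECT DB_NAME(f.database_id) AS database_name,
       f.physical_name,
       vs.volume_mount_point
FROM sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs;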



And then I created a test Azure SQL V12 database as shown below
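
The equivalent T-SQL, run in the master database of the Azure SQL server, looks roughly like this; the database name and MAXSIZE are assumptions, while the edition and service objective match the P3 level used in the test:

-- Run in master on the Azure SQL server; name and MAXSIZE are placeholders
CREATE DATABASE TestDB
(
    MAXSIZE = 100 GB,
    EDITION = 'Premium',
    SERVICE_OBJECTIVE = 'P3'
);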




I executed the test for 500 seconds with 1, 5, 10, and 15 concurrent clients, and here are the average numbers of transactions reported:


And the same data in a chart



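The test harness itself isn't listed in this post; as a rough illustration only, each client ran a loop of small transactions of approximately this shape (the table and payload below are hypothetical):

-- Hypothetical sketch of one client's 500-second small-transaction loop;
-- not the actual test code.
CREATE TABLE dbo.TestTable (id int IDENTITY PRIMARY KEY, payload varchar(100));

DECLARE @stop_time datetime = DATEADD(SECOND, 500, GETDATE());
WHILE GETDATE() < @stop_time
BEGIN
    BEGIN TRANSACTION;
    INSERT INTO dbo.TestTable (payload) VALUES (REPLICATE('x', 100));
    COMMIT TRANSACTION;
END;
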
As the results show:

  • placing the database on the local SSD (DELL (SSD)) delivers the best performance, and none of the other options comes even close to it
  • second place goes to the database placed on the local SSD disk of the Azure G2 VM; unfortunately, that SSD is not persistent, and placing production databases on non-persistent storage is not an option
  • the new Azure SQL V12 database demonstrated good performance; comparing these results with the same test run against Azure SQL V11 shows a clear 3-fold performance improvement in the current (still in preview) version
  • the second disk in the laptop (DELL (HDD)) gets almost 100% saturated even with a single client (the latency query after this list shows one way to measure this)
  • using a single disk attached to the Azure VM is just a fraction of a percent better than using a virtual disk created on top of a volume comprising 8 attached Azure disks
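
Per-file latency, and hence disk saturation, can be quantified with sys.dm_io_virtual_file_stats; a query along these lines is a standard technique, though not necessarily the one used to produce the numbers above:

-- Average I/O stall per read and per write for every database file
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;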

Thus, if the application is expected to handle a huge number of small transactions, a solution leveraging the local SSD disks is the best possible option.

  



  
