
Azure VMs as powerful as a high-end laptop... Almost

On January 8th, Microsoft announced the availability of a new series of VM sizes for Microsoft Azure Virtual Machines, called the G-series. According to Microsoft, "G-series sizes provide the most memory, the highest processing power and the largest amount of local SSD of any Virtual Machine size currently available in the public cloud".

I decided to run a series of CPU-intensive tests on a G2 VM (4 cores, 56 GB of RAM) created in the West US region.

For the test, I used 7-Zip (64-bit, v9.20) to compress these eight text files (sizes in bytes):

           106,624 ConfirmedDates.txt
           137,160 DataLoadTracking.txt
       684,637,229 DEfile.txt
     4,456,322,407 DEMfile.txt
     1,951,333,329 EPfile.txt
        12,975,399 ProgramTitle.txt
     2,303,480,896 SWDfile.txt
    24,168,394,643 SWOfile.txt
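The eight files add up to roughly 33.6 GB of input data; a quick sanity check in Python (sizes taken from the listing above):

```python
# Byte sizes of the eight test files, from the directory listing above.
sizes = [
    106_624,         # ConfirmedDates.txt
    137_160,         # DataLoadTracking.txt
    684_637_229,     # DEfile.txt
    4_456_322_407,   # DEMfile.txt
    1_951_333_329,   # EPfile.txt
    12_975_399,      # ProgramTitle.txt
    2_303_480_896,   # SWDfile.txt
    24_168_394_643,  # SWOfile.txt
]

total = sum(sizes)
print(f"{total:,} bytes = {total / 1024**3:.1f} GiB")
```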

The compression level was configured via the '-mx' switch as Fastest (-mx1), Fast (-mx3), and Normal (-mx5).
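The three runs can be scripted; below is a minimal sketch of how I'd build the command lines (the archive name and the D:\ paths are illustrative assumptions, and 7z.exe is assumed to be on the PATH):

```python
# Compression levels used in the test: Fastest (-mx1), Fast (-mx3), Normal (-mx5).
LEVELS = {"fastest": 1, "fast": 3, "normal": 5}

def build_7z_command(level_name: str, archive: str, *files: str) -> list[str]:
    """Build a 7-Zip 'add' command line for the given compression level."""
    return ["7z", "a", f"-mx{LEVELS[level_name]}", archive, *files]

# Hypothetical archive name and paths; in the test the files lived on D:.
cmd = build_7z_command("fast", r"D:\test-fast.7z", r"D:\*.txt")
print(" ".join(cmd))  # pass cmd to subprocess.run(cmd, check=True) to execute
```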

The text files were placed on the D: drive of the G2 VM (local, non-persistent SSD storage).

For comparison, I ran the same tests on my Dell M4800 laptop, placing the text files on drive C:, a LiteOn IT LMT-256L9M SSD (with 'write caching' enabled in Windows).

The results are shown below (lower numbers are better):


Finally, we have an option (at least for CPU-bound workloads) where a G-series VM performs a little better than a high-end laptop.

But what happens if I move the text files off the SSD disks (on both the VM and the laptop) to 'normal' disks? On the laptop the compression took a bit longer than when the files were stored on the SSD, but on the VM it was a different story: it was almost two times slower (and I gave up waiting for the Normal (-mx5) compression to complete, which is why there is no chart for that case).

So, if your workload is primarily CPU-bound, the new G-series VMs might be the right fit. But if your workload includes a good portion of I/O operations, you'd be better off running your own tests to check whether the new VMs meet your requirements.

To compare the storage options available in the G-series VMs, I ran CrystalDiskMark (v 3.03) tests on:

G2 VM drive C: (read caching enabled; the caching is why the Read numbers are so high)


G2 VM drive D: (non-persistent SSD-based disk)


G2 VM drive K: (a volume built on top of 4 x 100 GB attached disks with caching disabled, as per Microsoft's recommendations)
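As a back-of-the-envelope check on why striping the attached disks helps: assuming the roughly 500 IOPS per standard data disk that Azure documented at the time (an assumption, not a number from my tests), a four-disk stripe should cap out around 2,000 IOPS:

```python
# Rough IOPS ceiling for a striped volume, assuming the per-disk cap
# (~500 IOPS per Azure standard data disk at the time) scales linearly.
DISKS = 4
IOPS_PER_DISK = 500  # assumed per-disk cap for a standard data disk

ceiling = DISKS * IOPS_PER_DISK
print(f"Estimated ceiling: {ceiling} IOPS across {DISKS} disks")
```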

Dell M4800 drive C: (LiteOn IT LMT-256L9M with 'write caching' enabled)




Dell M4800 drive D: (HGST Travelstar 2.5-inch 500 GB, 'write caching' enabled)



Now, looking at these benchmarks, it becomes clear why the compression time increased dramatically after moving the text files off the SSD-based drive D: to the system disk.









