On January 8th Microsoft announced the availability of a new series of VM sizes for Microsoft Azure Virtual Machines called the G-series. According to Microsoft, "G-series sizes provide the most memory, the highest processing power and the largest amount of local SSD of any Virtual Machine size currently available in the public cloud".
I decided to run a series of CPU-intensive tests on a G2 VM (4 cores, 56 GB) created in the West US region.
For the test I used 7-Zip (64-bit, v9.20) to compress these eight text files (sizes in bytes):
106,624 ConfirmedDates.txt
137,160 DataLoadTracking.txt
684,637,229 DEfile.txt
4,456,322,407 DEMfile.txt
1,951,333,329 EPfile.txt
12,975,399 ProgramTitle.txt
2,303,480,896 SWDfile.txt
24,168,394,643 SWOfile.txt
Compression mode was set via the '-mx' switch: Fastest (-mx1), Fast (-mx3), and Normal (-mx5).
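The post does not show the exact command lines, so here is a minimal sketch of what such a benchmark driver could look like. Everything in it is an assumption for illustration: the file subset, the archive names, and that 7z is on the PATH. The actual run is left commented out so the script is safe to dry-run anywhere.

```python
# Hypothetical benchmark driver for the 7-Zip compression test.
# Assumptions: 7z is on the PATH and the text files sit in the current
# directory; neither detail comes from the original post.
import subprocess
import time

FILES = ["DEfile.txt", "DEMfile.txt", "EPfile.txt"]  # subset for brevity
MODES = {"fastest": "-mx1", "fast": "-mx3", "normal": "-mx5"}

def build_command(archive, mode_switch, files):
    """Return the 7-Zip 'add to archive' command line for one run."""
    return ["7z", "a", mode_switch, archive] + files

for name, switch in MODES.items():
    cmd = build_command(f"bench-{name}.7z", switch, FILES)
    print(" ".join(cmd))                   # dry run: just show the command
    # start = time.perf_counter()
    # subprocess.run(cmd, check=True)      # uncomment on a machine with 7-Zip
    # print(f"{name}: {time.perf_counter() - start:.1f}s")
```

Timing each mode separately like this makes it easy to chart Fastest vs. Fast vs. Normal, which is how the results below are grouped.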
The text files were placed on the D: drive inside the G2 VM (local, non-persistent SSD storage).
For comparison I ran the same tests on my DELL M4800 laptop with the text files on drive C:, a Liteonit LMT-256L9M SSD (which has 'write caching' enabled in Windows).
The results are shown below (lower numbers are better).
Finally, we have an option (at least for CPU-bound workloads) where a G-series VM performs a little better than a high-end laptop.
But what happens if I move the text files off the SSD disks (on the VM and the laptop) to 'normal' disks? On the laptop the compression took slightly longer than with the files on the SSD, but on the VM it was a different story: compression was almost two times slower (and I gave up waiting for the Normal compression (-mx5) to complete, which is why there is no chart for that case).
So, if your workload is primarily CPU-bound, the new G-series VMs might be the right fit. But if your workload includes a significant amount of I/O, you are better off running your own tests to check whether the new VMs meet your requirements.
To compare the storage options available in the G-series VMs, I ran CrystalDiskMark tests (v3.0.3) on:
G2 VM drive C: (read caching enabled; the caching is why the read numbers are so high)
G2 VM drive D: (non-persistent SSD-based disk)
G2 VM drive K: (built on top of 4 x 100 GB attached disks with caching disabled, per Microsoft's recommendation)
DELL M4800 drive C: (Liteonit LMT-256L9M with 'write caching' enabled)
DELL M4800 drive D: (HGST Travelstar 2.5-inch 500 GB, with 'write caching' enabled)
Looking at these benchmarks, it becomes clear why compression time increased dramatically after moving the text files from the SSD-based drive D: to the system disk.
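If you want a quick sanity check of a drive without installing CrystalDiskMark, a rough sequential-throughput probe is easy to script. This is only a loose approximation of CrystalDiskMark's sequential test (an assumption on my part, not the tool the post used), and, as with the C: drive above, OS-level caching can inflate the read number considerably.

```python
# Rough sequential-read throughput probe (hypothetical helper, not part
# of the original benchmark). Writes one large file to the target drive,
# reads it back, and reports read MB/s. Cached reads will look very fast.
import os
import tempfile
import time

def sequential_mbps(target_dir, size_mb=64, block_kb=1024):
    """Write then read one large file in target_dir; return read MB/s."""
    block = b"\0" * (block_kb * 1024)
    path = os.path.join(target_dir, "diskprobe.bin")
    try:
        with open(path, "wb") as f:
            for _ in range(size_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())          # push the writes to the device
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block_kb * 1024):
                pass
        return size_mb / (time.perf_counter() - start)
    finally:
        if os.path.exists(path):
            os.remove(path)

if __name__ == "__main__":
    # Point this at D:\ or K:\ inside the VM to compare the drives.
    print(f"{sequential_mbps(tempfile.gettempdir()):.0f} MB/s")
```

Running it against D: and then against an attached-disk volume like K: reproduces, in a crude way, the gap the CrystalDiskMark numbers show.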