
Analysis of Free-Tier, Cloud Compute Platforms - GCP

[This is part 3 of the series. Check out the Introduction and AWS.]

Google's Cloud Computing platform was unknown to me when I started this experiment. I knew it was there, and I knew people used it, just not as many as Amazon or Azure. However, if any organization could rival Amazon's technology footprint, it would have to be through the massive efforts of the ubiquitous search and email giant.

Unlike AWS, Google's computing platform offers a $300 credit for the first year of service and an Always Free service model; you can read more about the GCP Free Tier. With a free $300 burning a hole in my pocket, I built a machine using the 'n1-standard-1' template and placed it in the us-central1 (Iowa) region. The n1-standard-1 is a 1 vCPU machine with 3.75 GB of RAM, running Debian Linux. With the exception of the region, this configuration is similar to the t2.micro instances built in AWS.
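For reference, spinning up an identical instance is scriptable. Below is a minimal sketch using the Google API Python client (`google-api-python-client`), following Google's standard instance-insert pattern; the project ID and instance name are hypothetical, and the Debian 9 image family is an assumption based on the Debian Linux default mentioned above.

```python
# Minimal sketch: create an n1-standard-1 Debian instance in us-central1-a.
# Assumes application default credentials are already configured.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

project = 'my-boinc-project'   # hypothetical project ID
zone = 'us-central1-a'

config = {
    'name': 'boinc-node',      # hypothetical instance name
    'machineType': f'zones/{zone}/machineTypes/n1-standard-1',
    'disks': [{
        'boot': True,
        'autoDelete': True,
        'initializeParams': {
            # Debian image family: an assumption matching the post's setup
            'sourceImage': 'projects/debian-cloud/global/images/family/debian-9',
        },
    }],
    'networkInterfaces': [{
        'network': 'global/networks/default',
        'accessConfigs': [{'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT'}],
    }],
}

operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
print('started operation:', operation['name'])
```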

NOTE: There are shared-core f1-micro and g1-small instances that I could have built. The n1-standard was the default, so I went with it.

With BOINC/SETI installed, I started to get some feedback. So what do you get with the n1-standard? A generic Intel Xeon processor.

Interestingly, the client did not identify the Xeon's model name, just the CPU family, model number, and stepping.
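For the curious, the same identification fields can be read straight from /proc/cpuinfo on the instance. A minimal Python sketch (these are standard Linux cpuinfo keys, nothing BOINC-specific):

```python
# Read the first processor block from /proc/cpuinfo and print the
# fields BOINC reports: model name (when present), family, model, stepping.
def cpu_identity(path='/proc/cpuinfo'):
    fields = {}
    with open(path) as f:
        for line in f:
            if not line.strip():      # blank line ends the first CPU's block
                break
            key, _, value = line.partition(':')
            fields[key.strip()] = value.strip()
    return {k: fields.get(k, 'n/a')
            for k in ('model name', 'cpu family', 'model', 'stepping')}

if __name__ == '__main__':
    for key, value in cpu_identity().items():
        print(f'{key}: {value}')
```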

The BOINC/SETI client benchmarks both integer operations/second and floating-point operations/second. The floating-point results were consistent with the other services evaluated. The integer operations measure was nearly twice that of the MacMini (even with its two cores) and four times greater than the AWS benchmarks.
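BOINC's real benchmarks are Whetstone (floating point) and Dhrystone (integer) kernels compiled into the client; the toy timer below only illustrates the ops/second idea and is not the client's actual method:

```python
import time

def loops_per_second(op, seconds=1.0):
    """Run op() in a tight loop and estimate completed loops/second."""
    count = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        op()
        count += 1
    return count / seconds

# Illustrative integer and floating-point work units.
int_work = lambda: sum(range(1000))
float_work = lambda: sum(i * 1.0001 for i in range(1000))

print(f'integer:        {loops_per_second(int_work):,.0f} loops/sec')
print(f'floating point: {loops_per_second(float_work):,.0f} loops/sec')
```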

After running for about a month, the Average Daily Credit for the GCP server was just over 1100 units; not bad for a pokey little machine.

How did GCP compare? 
It appears that the BOINC/SETI client likes integer math more than floating-point. The 2x difference between the MacMini's Average Credit and GCP's Average Credit is almost certainly related to the same 2x differential in their Integer Operations metrics.

Interestingly, the roughly 50% difference in Integer Operations between the MacMini and AWS resulted in an 80% difference in the Average Credit metric. Something else must be going on here.

The table below shows how all three services compare.


| Service | Processor Info | Floating Point Operations/sec | Integer Operations/sec | Recent Average Credit | Region | Last Updated |
|---|---|---|---|---|---|---|
| MacMini | Intel Core2 T7200, 2 processors @ 2.00 GHz | 2.21 Billion | 33.28 Billion | 548.33 | Eagan, MN | 9/16 |
| Amazon Web Services | Intel Xeon E5-2676, 1 processor @ 2.40 GHz | 3.43 Billion | 15.1 Billion | 87.56 | US-East | 9/17 |
| Amazon Web Services | Intel Xeon E5-2676, 1 processor @ 2.40 GHz | 3.44 Billion | 18.2 Billion | 83.4 | US-East | 9/17 |
| Amazon Web Services | Intel Xeon E5-2676, 1 processor @ 2.40 GHz | 3.35 Billion | 18.16 Billion | 75.19 | US-East | 9/17 |
| Google Cloud Compute | Intel Xeon, 1 processor @ 2.30 GHz | 3.53 Billion | 65.42 Billion | 1132.59 | us-central1-a | 10/2 |
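As a quick sanity check on the 2x claim above, the ratios fall straight out of the table (values hard-coded from the rows above; this is just arithmetic, not new measurement):

```python
# GCP vs. MacMini: integer ops and Recent Average Credit scale together.
macmini_int, gcp_int = 33.28, 65.42            # billions of integer ops/sec
macmini_credit, gcp_credit = 548.33, 1132.59   # Recent Average Credit

print(f'Integer ops ratio (GCP/MacMini): {gcp_int / macmini_int:.2f}x')        # ~1.97x
print(f'Avg credit ratio  (GCP/MacMini): {gcp_credit / macmini_credit:.2f}x')  # ~2.07x

# AWS vs. MacMini: ~45% fewer integer ops, yet ~85% less credit.
aws_int, aws_credit = 18.2, 83.4
print(f'AWS integer ops: {1 - aws_int / macmini_int:.0%} lower than MacMini')      # ~45%
print(f'AWS avg credit:  {1 - aws_credit / macmini_credit:.0%} lower than MacMini')  # ~85%
```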


Final Note
According to the GCP console, the n1-standard would cost me $0.034/hr, or about $25/mo. But thanks to my $300 credit, I wasn't going to get charged. The console makes it pretty clear how much of my yearly credit remains, so I don't go over. Thank you, Google! I may revisit this experiment using the f1-micro or g1-small instances at some point.
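The monthly figure checks out against the hourly rate, and it implies the $300 credit covers roughly a year of runtime (simple arithmetic on the console's numbers):

```python
# Rough monthly cost and credit runway from the console's hourly rate.
hourly_rate = 0.034                  # $/hr for n1-standard-1, per the GCP console
monthly_cost = hourly_rate * 24 * 30
print(f'~${monthly_cost:.2f}/month')   # ~$24.48, i.e. about $25/mo

credit = 300.0
print(f'~{credit / monthly_cost:.1f} months on the $300 credit')  # ~12.3 months
```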

Next, Azure compute.





