AWS Vs. Azure: Which One’s Right for Your Cloud Career?


Akshata Chandrashekar

Last updated February 13, 2018


Cloud service providers like Microsoft Azure and AWS have more in common with superheroes than one might think. They touch the lives of millions, often making the world a better place.

Azure and AWS are superheroes in their own rights—but in the battle of the clouds, who is on top?

A superficial glance might lead you to believe that AWS has an unprecedented edge over Azure, but a deeper look will prove the decision isn’t that easy. To determine the best cloud service provider, one needs to take multiple factors into consideration, such as cloud storage pricing, data transfer loss rate, and rates of data availability, among others.

From elementary schools to NASA, clouds have touched every sphere of our lives. Who said superheroes are just found in comic books?

A Little Push – The Origins of AWS

In the early 2000s, Amazon was forced to re-examine their development platforms as they catered to their third-party clients. Over the years, they had created a jumbled mess of IT infrastructure where multiple teams worked in silos—often performing the same tasks—with no thought given to efficiency. In an effort to improve, Amazon’s software team detangled the mess that was their infrastructure and replaced it with well-documented APIs. All was quiet until 2003 when, during a retreat, Amazon executives realized that they had the skills necessary to operate and execute scalable, effective data centers. The rest is history.

AWS is the world’s leading provider of cloud solutions, providing IT infrastructure solutions on an as-needed basis for companies of all sizes. Prominent companies that utilize AWS include Netflix, Expedia, Hulu, Spotify, and Ubisoft. AWS is a complex and highly customizable platform that works best for companies who run non-Windows services.

Why Do We Fall? AWS and Cloud Domination

In the Microsoft Azure vs. AWS battle, AWS had an unprecedented upper hand. AWS was first launched in 2002, and its earliest competitor, Google, didn’t arrive until 2009. Microsoft didn’t step into the cloud market until 2010; it had believed cloud infrastructure was just a trend that would soon fade away. However, after Amazon’s success, Microsoft had to play catch-up.

When Azure first launched, it was not well received and faced many challenges, especially when compared to AWS. AWS had been running for almost seven years, and as a result had more capital, more infrastructure, and better, more scalable services than Azure did. More importantly, Amazon could add more servers to its cloud infrastructure and make better use of economies of scale, something Azure was scrambling to do. This was a setback for Microsoft: not only was it dethroned as the leader in software infrastructure, it was being shown the door by a newcomer from outside the traditional IT industry.

Mind if I Cut In? Azure’s Redemption

The tide soon changed for Azure. Microsoft quickly revamped its cloud offering and added support to a variety of programming languages and operating systems. They made their systems more scalable and made peace with Linux. Today, Azure is one of the leading cloud providers in the world.

AWS and Azure: Making the World a Better Place

Both AWS and Azure have, in their own ways, contributed to the welfare of society.

For example, NASA has used the AWS platform to make its huge repository of pictures, videos, and audio files easily discoverable in one centralized location, giving people access to images of galaxies far away.

Similarly, People in Need, a nonprofit organization, uses AWS to scale an early warning system that alerts some 400,000 people in Cambodia when floods threaten. This technology has not only helped save hundreds of lives but has also provided a cost-effective method that can be replicated in other at-risk regions.

The Azure IoT Suite was used to create the Weka Smart Fridge, which keeps vaccines properly stored. This has helped nonprofit medical agencies ensure that vaccines reach people who otherwise wouldn’t have access to them.

Azure is also used to find solutions to the world’s looming freshwater crisis. By working with Microsoft Azure, Nalco Water, the main water operational unit within Ecolab, uses cloud computing and advanced analytics to create solutions to help organizations reuse and recycle water.

Aggressive Expansions – Azure vs AWS: Who’s Better?

Azure and AWS are both well-respected members of the cloud domain. They fight for a larger piece of the cloud pie and take the world by storm while doing so. Azure holds about 29.4 percent of all installed application workloads while AWS stands at 41.5 percent and Google holds just 3 percent of all installed application workloads.

In 2017, AWS’s market share stood at 47.1 percent with revenue of $3.66 billion, while Azure’s market share didn’t rise above 10 percent despite revenue of $6.9 billion (Microsoft’s revenue figure is higher because its cloud division includes both Azure and Office 365).

However, in its recent Q1 FY 2018 earnings report, Microsoft said that Azure revenue grew over 90 percent year over year, roughly double the growth rate of AWS.

The Game has Changed – The Cloud is the Future: Are you Ready?

Cloud computing allows companies to get new products on the market faster, increase efficiency, lower operational costs, improve interdepartmental collaboration, reduce capital expenditures, and increase innovation.

Companies that are ill-equipped to handle these changes could run the risk of falling behind.

However, to make the move to the cloud, organizations must have trained professionals on the job who are certified in cloud computing. Certified professionals can easily address the concerns that may arise during the transition to the cloud and are familiar with the nuances of cloud-based computing.

Due to this need for certified professionals, a huge demand for skilled employees has gone largely unmet. LinkedIn reports that cloud and distributed computing topped the list of sought-after skills in both 2016 and 2017, and job listings for the AWS cloud platform reportedly increased by 76 percent between 2015 and 2016. In 2015, 3.9 million jobs were affiliated with cloud computing in the United States and over 18 million worldwide. Salaries for qualified professionals are high and competitive; according to Forbes, jobs in cloud computing are well compensated, with an average salary of $125,591 for AWS certified professionals.

With the onset of cloud computing, several major cloud providers quickly rose to dominance, but today AWS and Azure lead the industry. These two cloud hosting platforms drive much of the job growth in the cloud computing space, which leads to a dilemma for job seekers. With both AWS and Azure dominant players in the market, which cloud certification makes the most sense for your career path? Should you pursue AWS certification or Azure certification? Each has benefits and drawbacks that should be considered before choosing which one to pursue.

The Awakening – Azure vs AWS: The Certification Game

The differences between AWS and Azure are many. Both come with their own advantages and disadvantages. AWS and Azure are the two top players in the cloud technology space because each excels in different ways. To narrow down which platform is the right one to become certified in, it is worth evaluating the benefits of each certification.

The Benefits of AWS Certification: Although Azure is rapidly gaining market share, AWS is still by far the largest cloud computing service provider in the world today. AWS certification carries extra weight because of additional marketability due to the number of companies utilizing the platform. In addition, AWS certification grants access to the AWS Certified LinkedIn Community and other AWS certified professionals.

There are several types of AWS certifications to choose from, including AWS Solution Architect Associate, AWS SysOps Associate, AWS Developer Associate, AWS DevOps Associate, and Cloud Architect.

The Benefits of Azure Certification: An Azure certification is backed by the Microsoft brand, an added benefit for candidates already familiar with Microsoft’s in-house data platforms. Azure is used by over 55 percent of all Fortune 500 companies, and gaining Azure certification increases a candidate’s chances of finding a job in one of these companies. In addition, about 365,000 new companies adopt Azure every year, steadily increasing the need for Azure-certified professionals. Several Azure certifications are available to choose from, including Cloud Solution Architect, Developing Microsoft Azure Solutions, Architect Microsoft Azure, Implementing Microsoft Azure, and Cloud Architect.

Both AWS and Azure are considered to be adaptable, reliable, and resolute—much like the superheroes we all admire. They help us solve global problems and make our lives easy. They adapt to the needs of their customers and lend a hand to governments and companies in solving various social and logistical issues. Sure, superheroes have helped their citizens and kept them safe, but cloud service providers like AWS and Azure have helped professionals revolutionize their industries without having to break the bank. Cloud systems have made it possible for companies like Uber, Salesforce, and Facebook to exist—all services we take for granted today.

Rise Or Fall?

So who’s to say what will come next? In 2015, no one thought Azure could catch up; but they’ve proven the naysayers wrong. The cloud wars are unpredictable and exciting. Who would you count on – AWS or Azure? Will Azure overtake AWS? Will Google Cloud be the underdog that will disrupt the cloud domain? Only time will tell. But one thing is certain – cloud is here to stay.

A note from the illustrator:

I love “Raiders of the Lost Ark” like many people out there and so we structured the story based on it. The way the heroes in the comicographic take punches, that’s Indy. The comicographic does not have a definite conclusion and curiosity is maintained with the rat at the end, that’s all Raiders! The film’s soundtrack was the real power-booster for me while I illustrated the comicographic for 2 months, I even dreamt about the music after I was done working on it! It is the finest popcorn movie ever made, the only reason it’s not the greatest blockbuster film ever made is that it wasn’t the 1975 movie with a certain Shark named Bruce dying in the end.
It’s a real thrill paying homage to some of my favorite comic book artists through the comicographic like–Jack Kirby, Alex Ross, Jim Lee, Frank Cho and many more.

– Chetan Ramesh


5 Reasons to Take up A Cloud Computing Certification

Akshatha Kamath

Published on Dec 22, 2017


Over the past few years, the cloud computing industry has generated a lot of interest and investment, and cloud computing has become an integral part of the IT infrastructure of many companies worldwide. Industry analysts report that the market is growing at a 22.8 percent compound annual growth rate (CAGR) and will reach $127.5 billion in 2018.

According to Wikibon, Amazon Web Services (AWS) revenue will climb to $43 billion by 2022, with Microsoft Azure and Google Cloud close behind. As cloud computing becomes critical to IT and business in general, the demand for cloud skills will increase. Aspiring cloud professionals must prove that they have the skills and knowledge to compete favorably in the market, and a cloud certification is the best way to do that.

Here are the top reasons why you should gain a certification in cloud computing if you’re looking to join this innovative field:

1. The Demand for Cloud Computing Professionals Will Continue to Grow

Organizations are looking for IT professionals who have professional training in cloud computing and can help them implement a cloud environment into their infrastructure as seamlessly as possible. Job-board searches reveal more than 25,000 unfilled positions in the U.S. related to AWS alone. Comprehensive cloud computing training and certification, such as the AWS Certified Solutions Architect training, is a great advantage: it covers the key concepts, latest trends, and best practices for working with the AWS infrastructure to become an industry-ready AWS Certified Solutions Architect.

The number of jobs in Microsoft cloud (Azure) has increased over the years, and a study of 120 Microsoft partners indicated that hiring companies had a tough time sourcing professionals skilled in the Microsoft Azure platform. A certification in Azure infrastructure solutions will give you the skills necessary for those jobs.

Most companies today use DevOps to deliver new software applications and features. DevOps is now being adopted by 74 percent of all organizations, compared to just 66 percent in 2015. As more companies realize the benefits of DevOps methodologies, the demand for certified professionals in this domain continues to grow. You can get certified by taking a course on DevOps Practitioner Certification.

2. Improves Your Earning Potential

Simplyhired reports that average salaries for cloud administrators are less than $78,000, while cloud developers earn an average of $118,758 a year. Cloud architects are big earners, with median salaries of $124,406 and some salaries as high as $173,719. Research cited by Forbes shows that professionals with an AWS Certified Solutions Architect certification have a potential annual salary of $125,971. Cloud computing training is a step in the right direction and can help you enhance your earning potential.

3. Secure Jobs

If you gain the latest skills in cloud computing, you can land jobs that aren’t influenced by volatile market conditions. This is because most companies find it difficult to find IT professionals with the cloud computing skills they need.

4. It Proves Your Expertise and Promotes Credibility with Employers and Peers

Certifications are a great way to measure knowledge and skills against industry benchmarks. According to Microsoft and IDC, certification, training, and experience are three of the top four attributes an organization looks for when hiring for a cloud-related position. A certification in cloud computing shows that you have the skills to help your organization reduce the risks and costs of implementing workloads and projects on different cloud platforms. This will open up opportunities for cloud-related projects, and your clients will see you as a credible subject matter expert. It shows that you can work on complex procedures and handle cloud deployment in an enterprise.

If you want to specialize in one area of cloud computing or you want a new job, you can specialize in one or more vendor-specific certifications, such as AWS or VMWare. Cloud certifications are a great way to take your career to greater heights.

5. Better Chances of Getting Shortlisted for an Interview

If you are looking to break into the cloud industry, a cloud computing course such as the Cloud Engineer Masters Program can help you reach the interview stage. Having a certification on your resume will help you get noticed by hiring companies and shows employers that you have the right cloud computing skills, knowledge, and expertise for the job.

Cloud Technology Is The Past, Present, and The Future

Over the last few years, cloud technology has transformed the way businesses operate. Today, companies big and small rely on public cloud platforms to host and implement critical applications—this trend will only grow stronger in 2018. Whether you’re planning to carve an entry into this domain or looking to grow your cloud computing career, a certification will help you gain the most-recent skills and contribute to your organization’s business.

About the Author

Writer, Marketer, Traveler, Experimenter and a huge Book(read Kindle)-addict. Akshatha heads content at Simplilearn and when not at work, she’s all of the above.


AWS vs Google Cloud – The Showdown


Published on Feb 23, 2016


As cloud computing continues to find its way into companies big and small, the choice of the right cloud computing solution has become a talking point for specialists and business owners alike. Among public cloud providers, Amazon Web Services (AWS) seems to have the lead in the competition, with Google Cloud and Microsoft Azure close behind.

In this article, we compare the two leading cloud computing services – Amazon Web Services’ Elastic Compute Cloud (EC2) and Google Cloud’s Google Compute Engine (GCE) – on the basis of their performance, cost, features, services, and the overall advantages and disadvantages of these two cloud computing platforms. Beyond helping you choose the right Infrastructure as a Service (IaaS) platform, we hope this comparison also helps the eager professionals among you understand where you would have to focus your learning efforts.

Some Striking Contrasts Between EC2 & GCE

AWS has been the cloud computing market leader for the past seven years. Because it is available in more zones and regions than Google Cloud, users are assured of minimal impact from outages. In addition, AWS boasts a wide array of services, many of which, such as the Simple Email Service and the CloudFront content delivery network, are not available in GCE.

AWS also offers Windows micro instances as part of its Free Usage Tier, whereas support for Windows workloads is not part of GCE’s offerings. As tech analyst Jillian Mirandi points out, “AWS has a more complete, enterprise-grade portfolio.”

AWS also customizes its networking equipment and the protocols that run over it to boost network performance, and has its own fiber-optic network between zones. Google’s edge in network performance is therefore not expected to last long.

Now for some noteworthy advancements made by Google Cloud’s GCE.

GCE’s impressive network performance is something to note. It is largely because Google’s network traffic passes through its own fiber network rather than traversing the public internet. Each GCE instance is also attached to a single network that spans all regions without VPNs or gateways as middlemen.

On the whole, considering the number of services available, AWS is in a league of its own, well ahead of GCE. The varied services AWS offers are well integrated and provide a very comprehensive cloud solution. AWS is widely regarded as having no rivals with regard to platform completeness and the productivity level that you can reach.

However, the choice of the right platform does depend upon the needs of the enterprise.

So – how do these two measure up?

About the Author

Priyadharshini is a knowledge analyst at Simplilearn, specializing in Project Management, IT, Six Sigma, and e-Learning. With a penchant for writing and a passion for professional education & development, she is adept at penning educative articles. She was previously associated with Oxford University Press and Pearson Education, India.


10 Most Popular DevOps Interview Questions and Answers

Akshatha Kamath

Published on Aug 10, 2017


Until recently, development engineers often worked in isolation, restricting their knowledge and skill sets to coding and testing, while operations engineers would focus on delivery and infrastructure configuration jobs, with minimal knowledge about software development.

However, with the fast-paced growth of the IT domain and technology advancements, the traditional approach of most IT companies has seen a paradigm shift. The culture of DevOps, although in its infancy, acts as the perfect bridge between IT development and operations and has become a popular methodology for software development in recent years.

An article entitled “The DevOps Hiring Boom” claims that as many as 80 percent of Fortune 1000 organizations are expected to adopt DevOps by 2019. Surveys show that the average annual salary of a DevOps engineer in the U.S. is approximately $123,439.

If you’ve started cross-training to prepare for development and operations roles in the IT industry, you know it’s a challenging field that will take some real preparation to break into. Here are some of the most common DevOps interview questions and answers that can help you while you prepare for DevOps roles in the industry.

Want to become a certified DevOps practitioner?

Q1. What do you know about DevOps?

A1. Your answer must be simple and straightforward. Begin by explaining the growing importance of DevOps in the IT industry. Discuss how such an approach aims to synergize the efforts of the development and operations teams to accelerate the delivery of software products, with a minimal failure rate. Include how DevOps is a value-added practice, where development and operations engineers join hands throughout the product or service lifecycle, right from the design stage to the point of deployment.

Q2. Why has DevOps gained prominence over the last few years?

A2. Before talking about the growing popularity of DevOps, discuss the current industry scenario. Begin with some examples of how big players such as Netflix and Facebook are investing in DevOps to automate and accelerate application deployment, and how this has helped them grow their business. Using Facebook as an example, you would point to Facebook’s continuous deployment and code ownership models and how these have helped it scale up while ensuring quality of experience. Hundreds of lines of code are deployed without affecting quality, stability, or security.

Your next use case should be Netflix. This streaming and on-demand video company follows similar practices, with fully automated processes and systems. Mention the user base of these two organizations: Facebook has 2 billion users, while Netflix streams online content to more than 100 million users worldwide. These are great examples of how DevOps can help organizations ensure higher success rates for releases, reduce lead time between bug fixes, streamline continuous delivery through automation, and reduce manpower costs overall.

Q3. Which are some of the most popular DevOps tools? Do you have experience working with any of these tools?

A3. The more popular DevOps tools include:

  1. Selenium
  2. Puppet
  3. Chef
  4. Git
  5. Jenkins
  6. Ansible
  7. Docker

Want to master all these DevOps tools?

Thoroughly describe any tools that you are confident about, what their capabilities are, and why you prefer using them. For example, if you have expertise in Git, you would tell the interviewer that Git is a distributed version control system (VCS) tool that allows the user to track file changes and revert to specific changes when required. Discuss how Git’s distributed architecture gives it an added edge: developers make changes locally and can have the entire project history on their local Git repositories, which can later be shared with other team members.

Now that you have mentioned VCS, be ready for the next obvious question.

Q4. What is version control and why should VCS be used?

A4. Define version control and talk about how this system records any changes made to one or more files and saves them in a centralized repository. VCS tools will help you recall previous versions and perform the following:

  • Go through the changes made over a period of time and check what works versus what doesn’t.
  • Revert specific files or specific projects back to an older version.
  • Examine issues or errors that have occurred due to a particular change.

Using VCS gives developers the flexibility to simultaneously work on a particular file and all modifications can be logically combined later.
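The track-and-revert workflow described above can be sketched with Git itself. This is a minimal illustration, with a hypothetical config file and commit messages, run in a throwaway repository:

```shell
# Sketch: tracking changes and reverting a bad one with Git.
# The file name and commit messages are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com

echo "max_connections=100" > app.conf
git add app.conf
git commit -qm "initial config"                 # version 1 recorded

echo "max_connections=-1" > app.conf            # a faulty change
git commit -qam "bad tweak"                     # version 2 recorded

git revert --no-edit HEAD >/dev/null            # undo it, keeping history
cat app.conf                                    # prints max_connections=100
```

Note that `git revert` adds a new commit that undoes the change rather than deleting history, which is exactly the "revert specific files back to an older version" capability listed above.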

Q5. Is there a difference between Agile and DevOps? If yes, please explain.

A5. As a DevOps engineer, you can expect interview questions like this. Start by describing the obvious overlap between DevOps and Agile. Although implementation of DevOps is always in sync with Agile methodologies, there is a clear difference between the two. The principles of Agile are associated with the seamless production or development of a piece of software. DevOps, on the other hand, deals with development followed by deployment of the software, ensuring faster turnaround time, minimal errors, and reliability.

If you are preparing for senior DevOps roles, prepare for these specific Chef DevOps interview questions.

Q6. Why are configuration management processes and tools important?

A6. Talk about multiple software builds, releases, revisions, and versions for each software or testware that is being developed. Move on to explain the need for storing and maintaining data, keeping track of development builds and simplified troubleshooting. Don’t forget to mention the key CM tools that can be used to achieve these objectives. Talk about how tools like Puppet, Ansible, and Chef help in automating software deployment and configuration on several servers.

Q7. How is Chef used as a CM tool?

A7. Chef is considered one of the preferred industry-wide CM tools; Facebook, for example, migrated its infrastructure and backend IT to the Chef platform. Explain how Chef helps you avoid delays by automating processes. Its scripts, called recipes, are written in Ruby. It can integrate with cloud-based platforms and configure new systems, and it provides many libraries for infrastructure development that can later be deployed within a software project. Thanks to its centralized management system, one Chef server is enough to serve as the center for deploying various policies.
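To make this concrete in an interview, you could sketch what a Chef recipe looks like. The following is a minimal, hypothetical example in Chef’s Ruby DSL (the package, template, and service names are illustrative, not from a real cookbook):

```ruby
# Hypothetical Chef recipe: declare the desired state of a web server node.
package 'nginx' do
  action :install            # ensure the package is present
end

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'    # assumes this template exists in the cookbook
  notifies :reload, 'service[nginx]'
end

service 'nginx' do
  action [:enable, :start]   # running now, and on every boot
end
```

The point worth making is that a recipe is declarative: it describes the state the node should be in, and Chef converges the node to that state on each run.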

Q8. How would you explain the concept of “infrastructure as code” (IaC)?

A8. It is a good idea to talk about IaC as a concept, sometimes referred to as programmable infrastructure, where infrastructure is perceived in the same way as any other code. Describe how the traditional approach to managing infrastructure is taking a back seat, and how manual configurations, obsolete tools, and custom scripts are becoming less reliable. Next, emphasize the benefits of IaC and how changes to IT infrastructure can be implemented in a faster, safer, and easier manner. Include other benefits of IaC, such as applying regular unit and integration testing to infrastructure configurations and maintaining up-to-date infrastructure documentation.

If you have completed a certification on Amazon Web Services (AWS) and are interviewing for niche roles such as AWS-certified DevOps engineer, here are some AWS DevOps interview questions that you must be prepared for:

Q9. What is the role of AWS in DevOps?

A9. When asked this question in an interview, get straight to the point by explaining that AWS is a cloud-based service provided by Amazon that ensures scalability through unlimited computing power and storage. AWS empowers IT enterprises to develop and deliver sophisticated products and deploy applications on the cloud. Some of its key services include Amazon CloudFront, Amazon SimpleDB, Amazon Relational Database Service, and Amazon Elastic Compute Cloud. Discuss the various cloud platforms and emphasize any big data projects that you have handled in the past using cloud infrastructure.

Q10. How is IaC implemented using AWS?

A10. Start by talking about the age-old mechanism of writing commands into script files and testing them in a separate environment before deployment, and how this approach is being replaced by IaC. Similar to code written for other services, AWS allows developers to write, test, and maintain infrastructure entities in a descriptive manner, using formats such as JSON or YAML. This enables easier development and faster deployment of infrastructure changes.
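As an illustration of the descriptive YAML format mentioned above, here is a minimal sketch of an AWS CloudFormation template that declares a single S3 bucket as code (the resource and bucket names are hypothetical):

```yaml
# Minimal CloudFormation template (sketch): one S3 bucket declared as code.
AWSTemplateFormatVersion: '2010-09-09'
Description: Hypothetical example of infrastructure declared as code
Resources:
  LogBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-log-bucket   # hypothetical; bucket names must be globally unique
      VersioningConfiguration:
        Status: Enabled
```

A template like this can be checked into version control, reviewed, and deployed repeatably, which is the practical payoff of treating infrastructure as code.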

As a DevOps engineer, an in-depth knowledge of processes, tools, and relevant technology are essential. You must also have a holistic understanding of the products, services, and systems in place. If your answers matched the answers we’ve provided above, you’re in great shape for future DevOps interviews. Good luck! If you’re looking for answers to specific DevOps interview questions that aren’t addressed here, ask them in the comments below. Our DevOps experts will help you craft the perfect answer.

[Preparing for DevOps Certification? Take this test to know where you stand!]

Watch this video on What Is DevOps | DevOps Tutorial For Beginners

Find our DevOps Practitioner Certification Training Course at your nearby cities:

Chennai  New Delhi  Noida  Pune  Gurgaon  Houston  Los Angeles  Washington
New York City  Boston  Melbourne  Sydney  Toronto  Brisbane  Dubai

About the Author

Writer, Marketer, Traveler, Experimenter and a huge Book(read Kindle)-addict. Akshatha heads content at Simplilearn and when not at work, she’s all of the above.


Why Cloud Computing certification makes sense

  • Tue, November 15, 2016

About the On-Demand Webinar

In this webinar you will learn how the IT landscape is changing, why moving to the cloud is a good idea, and what you can do to join this wave of change that is already happening.

The session covers the following topics:

  1. How is the IT industry evolving?
  2. Do IT companies use the cloud?
  3. What are the advantages of cloud vs. on-premise?
  4. Do companies look for cloud-ready IT people?

Hosted By

Bogdan Nourescu

A Google Cloud Platform Authorized Trainer, Bogdan Nourescu has over six years of experience with Google Cloud. He has worked on technologies such as jQuery, Polymer, App Engine, Compute Engine, and Cloud Storage to develop apps for different industries.

View On-Demand Webinar

The Growing Importance of Cloud Certifications

Bernard Golden

Published on Aug 16, 2016


Cloud computing has clearly, unmistakably, moved into the mainstream of enterprise IT. AWS announced its numbers last week: 58% growth, with total quarterly revenue jumping from $1.8 billion to $2.89 billion year over year. In other words, AWS grew its quarterly revenues by over one billion dollars in the space of a year!

There’s more evidence of how embedded cloud computing is in the future of enterprise IT. As I wrote a couple of months ago in analyzing a JP Morgan CIO survey, the shift to the cloud is accelerating: while today only 16% of all enterprise applications are deployed in the cloud, that number will grow to 41% in five short years.

What these two data points tell us is that cloud computing must now be a core competency of all IT organizations. No longer can cloud computing be dismissed as the province of SMBs or startups, or denigrated as “shadow IT.” Instead, IT organizations must be ready to design, deploy, monitor, and manage cloud-based applications.

That brings with it a new challenge: cloud skills. While security has traditionally been cited by IT executives as the biggest issue holding back cloud adoption, 2016 sees a new barrier impeding adoption: a lack of skills within organizations. In a recent survey conducted by the cloud management company RightScale, lack of resources/expertise emerged as the #1 cloud computing challenge, cited by 32 percent of respondents versus 29 percent citing security.

This makes sense, of course. When a technology moves beyond early adopters and becomes the de facto platform for applications, it requires building a foundation of technology skills across the organization. This is not new with cloud computing. We’ve seen the same trend with VMware and Microsoft before it.

And, just as we witnessed with VMware and Microsoft, we’ll see an increasing demand for certifications — by both individuals and organizations.

For individuals, the motive for obtaining cloud certifications is clear: demonstrating competence and gaining a competitive edge in the job market. A certification demonstrates core knowledge and indicates that the holder possesses a verifiable level of skill.

For organizations, cloud certifications are just as important, but for different reasons:

  1. First, organizations are buying skills, not selling them. For them, looking for certifications is a way of determining base skill levels. While holding a certification may not indicate the holder is a domain-area genius, it does show that the certificate holder will be able to join a team and make a contribution.
  2. Second — and less appreciated, but perhaps more important — with certification training, organizations can ensure that employees draw from a single, consistent knowledge base. This means people can work together, use common terminology, and perform at a given level of expertise. In short, certifications can guarantee that employees across the department have a shared understanding of cloud computing, which can easily make the difference between project success and failure. Moreover, by creating a common knowledge base among employees, the systems the organization builds will share standardized designs and implementations, resulting in consistent operations and, in turn, more efficiency and lower operational cost.

Now that cloud computing has hit the mainstream, one can expect to see certifications become a mainstream topic. And, as RightScale’s survey indicates, skills are now the number one barrier to cloud adoption. Therefore, one can expect to see a huge rush over the next couple of years for cloud computing training and certifications. For individuals, obtaining a certification is a good way to improve personal marketability. For organizations, certifications will be a key action item now that cloud computing is a core competence.

About the Author

Bernard Golden is the CEO of Navica and serves as an advisor for CIO magazine. The author of four books on virtualization and cloud computing, Bernard is a highly regarded speaker who has keynoted cloud conferences around the world and is counted among the ten most influential people in cloud computing.


Fireside Chat: Edge Computing Vs. Cloud Computing

  • Wed, December 13, 2017
  • Views 246

About the On-Demand Webinar

Is the era of cloud computing coming to an end? Experts predict that cloud computing is gradually making way for the next big thing: Edge.

NASSCOM Product Connect and Simplilearn together present this live fireside chat on edge computing and how it stacks up against the cloud. Tune in to watch Bernard Golden, Cloud Computing Expert, and Anand Narayanan, Chief Product Officer at Simplilearn, discuss this latest technology.

This webinar will provide answers to these common questions about edge computing:

  • What is edge computing?
  • How does it fare when compared with cloud computing?
  • What role do edge devices play in the future of computing?
  • Will IoT spell the death of the cloud?

Date: Dec 13, 2017

Time: 09:30 PM IST | 08:00 AM PST

Hosted By

Anand Narayanan

A product leader with deep experience in building products across various industries and product types, Anand leads the product vision, roadmap, and delivery at Simplilearn. Prior to this role, Anand headed the complete portfolio for the cloud division at Rackspace in San Antonio, Texas. Before that, he led product at Dell and National Instruments, working on products ranging from test and measurement software to enterprise software solutions. Anand strongly believes in a customer-driven, data-augmented, lean approach to delivering products.

View On-Demand Webinar

Cloud Database Wars: Google Spanner vs. Microsoft CosmosDB

Bernard Golden

Published on Jun 20, 2017


One of the reasons cloud computing is such a powerful force in the industry today is the innovation the providers are delivering. AWS is famous for the staggering pace at which new features and services are released (see Figure 1).

Figure 1: AWS yearly feature improvements

Google recently delivered Spanner, a remarkably innovative SQL database service that provides global consistency, leveraging GPS and atomic clocks.

Not to be left out, Microsoft responded with CosmosDB, a database service that, while quite different from Spanner, is tremendously innovative in its own way. I regard CosmosDB as a powerful storage service that offers tremendous scale and flexibility.

CosmosDB’s powerful service can be more difficult to comprehend due to its unique capabilities. In this regard, it differs from Spanner, which benefits from the fact that relational databases are already well understood functionally. Let’s break down the key differences between the two database services to fully understand the benefits and attributes of each.

The Difference Between Spanner and CosmosDB

It’s relatively easy to understand the unique aspects of Spanner—how it extends relational database technology in ways that are noteworthy, and addresses shortcomings that bedevil application developers. In other words, Spanner is like what developers have always used, only much better.

CosmosDB, on the other hand, offers highly flexible use cases and provides multiple options for data state access. Both of these are important, but they’re different from what has been available in the past, so it’s necessary to understand the functionality before one can recognize what makes CosmosDB so innovative.

Let’s start by discussing the CosmosDB architecture. Microsoft describes this as structured around storage containers (Note: this does not appear to have anything to do with execution containers like Docker, but rather implies a pool of storage not bound to specific servers or storage devices, i.e., a virtual storage construct).

“An Azure Cosmos DB container is a schema-agnostic container of arbitrary user-generated entities and stored procedures, triggers and user-defined-functions (UDFs),” according to Microsoft.

In other words, a CosmosDB container is a schema-agnostic pool of data that can be operated on in a variety of ways. Here is a figure that depicts the CosmosDB architecture:

(Image Courtesy Microsoft)

The key term here is “projections.” Internally, CosmosDB lumps all the data into a container in which Microsoft automagically tracks the individual data attributes and their relationships, but—and here is where its innovation shows—the data can be projected as a key-value, document, or graph database, each of which can be accessed by a use-case-specific API.

In other words, you can use CosmosDB as any of these types of databases, depending on what your application is best served by, but under the covers, it’s all one melange of data. And, by the way, if you use CosmosDB as a document database, Azure provides SQL capabilities, including triggers and stored procedures. Again, this is extremely innovative and very useful.
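To make the “projection” idea concrete, here is a small pure-Python sketch. It is illustrative only—not the Azure SDK and nothing like CosmosDB’s actual implementation—but it shows how one schema-agnostic collection of entities can be exposed through key-value, document, and graph views, each with its own access pattern:

```python
# Illustrative sketch (not the Azure SDK): one schema-agnostic store of
# entities, "projected" through three use-case-specific views.

records = [
    {"id": "u1", "name": "Ada", "follows": ["u2"]},
    {"id": "u2", "name": "Lin", "follows": []},
]

# Key-value projection: look items up by id only.
kv = {r["id"]: r for r in records}

# Document projection: query on any attribute, schema-free.
def find(predicate):
    return [r for r in records if predicate(r)]

# Graph projection: treat the "follows" attribute as edges.
edges = [(r["id"], target) for r in records for target in r["follows"]]

print(kv["u1"]["name"])                    # Ada
print(find(lambda r: r["name"] == "Lin"))  # [{'id': 'u2', 'name': 'Lin', 'follows': []}]
print(edges)                               # [('u1', 'u2')]
```

The same underlying data serves all three projections; only the access pattern changes, which is the essence of the multi-model design described above.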

Network Latency

There is more to CosmosDB than clever storage projected through a variety of use case-specific APIs.

Naturally, CosmosDB can mirror data across the world, to allow for local low-latency access. That raises the issue of how quickly changes to the data (or the schema that describes the data) can be propagated.

Just as Spanner leverages Google’s globe-spanning fiber network to reduce latency, so too does CosmosDB. When users make a schema change in one location, that change is propagated to every other Azure location that is set to provide database access via a mirrored version. And the schema change is fast—on the order of milliseconds, which means that applications are never far out of date in terms of the structure of data they can work with.

(NOTE: Microsoft states that the service is schemaless and that each piece of data is indexed, but the point is that when you add a field to a key-value database in one location, every copy of that database knows about and can work with that field very quickly. I call that schema propagation).

A second issue with latency addresses individual items of data; in other words, if an application located in Dallas changes the data it works with at one point in time, how soon will a mirrored version of that data located in, say, Mumbai see that changed data? As you’ll recall, Spanner addresses this with a very clever two-phase commit protocol powered by atomic clocks to ensure consistency.

CosmosDB approaches it somewhat differently, by offering five different data consistency choices, ranging from strong to eventual.

So, in our example, the application could be set to request strong consistency, which would ensure that a query against that data—no matter which storage mirror the application user happens to run against—returns consistent results throughout the use of the data. Any changes to the data that arrive while the query is in flight would be excluded from its results.

This ensures that no inconsistent data can sneak in during a time-bounded use of the data. So if someone wants to see the current balance of a bank account and the application is set to strong consistency, transactions that flow in subsequent to the balance query would be excluded in the returned data. Obviously, consistent data is important for many transactional applications.

On the other hand, if an application displays vacation photos and the collection of photos is being updated while a user is viewing the collection, a photo being added to the display is probably not a big deal.
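For reference, CosmosDB’s five consistency levels are strong, bounded staleness, session, consistent prefix, and eventual. The toy model below (hypothetical names, and nothing like CosmosDB’s real replication protocol) illustrates the basic trade-off at the two extremes: a strong read refuses to observe a replica that is behind, while an eventual read may return stale data.

```python
# Toy model of the strong-vs-eventual trade-off across mirrored regions.
# All names are invented for illustration.

class MirroredStore:
    def __init__(self, regions):
        self.replicas = {r: {} for r in regions}
        self.pending = []  # writes not yet propagated everywhere

    def write(self, key, value, origin):
        self.replicas[origin][key] = value
        self.pending.append((key, value))

    def propagate(self):
        # Background replication: flush pending writes to all regions.
        for key, value in self.pending:
            for replica in self.replicas.values():
                replica[key] = value
        self.pending.clear()

    def read(self, key, region, consistency="eventual"):
        if consistency == "strong":
            # A strong read must not observe a replica that is behind.
            self.propagate()
        return self.replicas[region].get(key)

store = MirroredStore(["dallas", "mumbai"])
store.write("balance", 100, origin="dallas")

print(store.read("balance", "mumbai"))                        # None (stale)
print(store.read("balance", "mumbai", consistency="strong"))  # 100
```

In the bank-balance example above, the application would pick the strong read; the vacation-photos application could happily live with the eventual one.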

The Bottom Line

CosmosDB is a remarkably innovative offering for myriad reasons, including:

  • The use of data containers that can be projected as different types of databases is unique in my experience.
  • Its provision of a SQL interface to document projections, including triggers and stored procedures, will bring CosmosDB within the skill set of a large percentage of the world’s technical staff. In other words, despite the impressive technical underpinnings of the service, its use is not limited to super-programmers.
  • The flexible consistency model makes CosmosDB a good fit for a wide variety of application use cases ranging from the most stringent to the least restrictive data requirements.

From a larger perspective, CosmosDB is a perfect illustration of how the big cloud providers—AWS, Azure, and Google, (AAG)—are changing the very nature of the IT industry.

No end user, no matter how large, could ever hope to implement a storage system like CosmosDB. Its scale and operation would be beyond the talent pool and budget of any individual IT organization.

This is why savvy IT organizations are shifting their investment from infrastructure to applications and focusing their efforts on leveraging the innovative services emerging from the AAG cloud providers.


About the Author

Bernard Golden is the CEO of Navica and serves as an advisor for CIO magazine. The author of four books on virtualization and cloud computing, Bernard is a highly regarded speaker who has keynoted cloud conferences around the world and is counted among the ten most influential people in cloud computing.


What is the Benefit of Modern Data Warehousing?

Ronald Van Loon

Published on Jul 5, 2017


Access to relevant customer and industry information is the primary competitive advantage businesses have over their direct and indirect competitors today. It’s the smartest approach to remaining vigilant in a business environment where competition is at an all-time high.

That’s where data warehousing comes in. Data warehouses are central repositories of integrated data from one or more disparate sources used for reporting and data analysis, which—in an enterprise environment—supports management’s decision-making process.

Digitalization is integrated into the foundations of today’s business landscape, and there is no going back from here. Software companies are improving data engineering algorithms, and data analytics providers are using advanced techniques to provide better solutions to businesses. The result is much more efficient business intelligence solutions.

Businesses that are new to this trend and skeptical about the availability of data often ask, “Why do we need data warehousing systems?”

Data is Power

The simple answer is that data is knowledge and knowledge is power. A business with relevant information and access to useful industry insights has a greater chance of thriving in the business landscape and dominating its niche.

Access to Data Minimizes Risks

Entrepreneurs know that there is always some kind of risk involved when it comes to business processes. Although entrepreneurs are risk takers by nature, the smaller the chance of risk, the better. This is probably the most convincing reason for businesses to invest in data warehousing solutions.

Accurate data about your customers and the state of the industry minimizes uncertainty because it helps you make better decisions with definite outcomes. With the right kind of data, you can analyze past trends and forecast future outcomes with a high probability of accurate results.

For example, in IT operations, Distributed Denial of Service (DDoS) attacks are increasingly common. To ward off such attacks, it is essential to have a centralized logging architecture that can identify suspicious activity and pinpoint potential threats among thousands of entries.
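As a minimal illustration of that idea, the sketch below scans a centralized request log and flags source IPs whose request volume exceeds a threshold. The log entries, field names, and threshold are all invented for the example; a real pipeline would also consider time windows and baselines.

```python
# Flag source IPs whose request volume in a centralized log far exceeds
# the norm. Thresholds and log entries are illustrative.
from collections import Counter

log_entries = (
    [{"ip": "203.0.113.9", "path": "/login"}] * 5000 +   # flood from one IP
    [{"ip": "198.51.100.7", "path": "/home"}] * 12       # normal traffic
)

REQUEST_THRESHOLD = 1000  # assumed per-window limit

hits = Counter(entry["ip"] for entry in log_entries)
suspects = [ip for ip, count in hits.items() if count > REQUEST_THRESHOLD]

print(suspects)  # ['203.0.113.9']
```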

Healthcare is another example: the core objective of a healthcare facility is to cater to the needs of patients. Data analytics has proven beneficial not only for mundane administrative tasks but also for improving a patient’s overall experience. By maintaining a database of patients’ records and medical histories, a hospital can cut the cost of unnecessary, repetitive processes.

The apparel industry is a third example that benefits from access to data, which enables companies to better cater to the needs of the public. With a clear idea of what the masses require, the fashion industry thrives and rarely misses.

What’s Next for Data Warehousing Systems?

Big data analytics is a fast-paced industry that keeps on improving and evolving. There is no constant state in this industry, and that is why Business Intelligence (BI) should continuously evolve at the same pace.

BI is one of the determining factors shaping the future of data warehousing, data mining, and data engineering.

Why is there a need for continuous change? The simple answer is that older BI processes cannot keep up with customers’ demands. They lack the ability to accurately interpret and quantify a business’s Key Performance Indicators (KPIs) and Return on Investment (ROI).

A data management system is useless if it cannot fulfill its primary purpose of delivering accurate predictions about business processes.

In addition, older BI systems are ineffective at integrating information from multiple channels and streamlining communication between departments.

Big data companies are improving data warehousing solutions to meet this new set of requirements. The modern data warehousing structure can store data in its raw form instead of the previously favored hierarchical structure, which makes the data easier for users to access.

New data warehousing solutions also minimize the inefficiencies caused by gaps in communication. State-of-the-art structures can integrate the information from multiple channels and store it on one platform, streamlining the communication process.

The biggest recent advancement we’ve seen in data engineering is that data analytics software has become extremely user-friendly. Previously, only professionals with the appropriate skill set could work with data. Now anyone with basic knowledge can derive results, because the complex aspects of data engineering are handled by computer algorithms.
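Storing data in its raw form, as described above, is often called schema-on-read: structure is imposed only when a question is asked. The sketch below illustrates the idea; the record layout and field names are invented for the example.

```python
# Sketch of "schema-on-read": records land in the warehouse raw, and a
# structure is imposed only at query time. Names are illustrative.
import json

raw_store = [
    json.dumps({"event": "sale", "amount": 40, "region": "EU"}),
    json.dumps({"event": "sale", "amount": 25}),            # no region field
    json.dumps({"event": "refund", "amount": 10, "region": "EU"}),
]

def query(event_type):
    # The "schema" (which fields we care about) is applied here, at read time.
    rows = (json.loads(line) for line in raw_store)
    return [
        {"amount": r["amount"], "region": r.get("region", "unknown")}
        for r in rows if r["event"] == event_type
    ]

print(query("sale"))
# [{'amount': 40, 'region': 'EU'}, {'amount': 25, 'region': 'unknown'}]
```

Because nothing is discarded at ingestion time, a record with a missing field still loads, and a different question tomorrow can impose a different schema on the same raw store.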

About the Author

Ronald has been named one of the three most influential people in Big Data by Onalytica. He also writes for a number of leading big data and data science websites, including Datafloq, Data Science Central, and The Guardian, and he regularly speaks at renowned events.


The Need for Cloud-Skilled Technical Professionals with the Growth of the Cloud Market

  • Mon, August 22, 2016
  • Views 129

About the On-Demand Webinar

Cloud computing is on fire. All the major players are reporting double- and triple-digit growth figures. The shift to cloud computing means IT organizations will need to build cloud skills internally — and fast. For individuals, cloud computing offers a fantastic avenue for rapid career growth. The key constraint for both is learning — learning how cloud computing works, how to build cloud-native applications, and how to manage complex cloud-based application portfolios. This webinar will discuss recent cloud computing growth trends and how cloud providers are extending their offerings — and what that means for individuals and IT organizations as they chart out their cloud journey.

Agenda:

  • Cloud computing revenue growth
  • How cloud providers are extending their core computing services
  • How to build cloud computing skills — as an individual or an organization
  • Knowledge and cloud certifications

The world of IT is changing rapidly. Cloud computing will soon be the dominant form of computing, and preparing for that future is crucial for everyone in the industry. Watch this webinar to learn all about the latest trends in cloud computing, cutting-edge developments in the field, and how to meet the future needs of what’s being touted as the next big revolution in IT.

Hosted By

Bernard Golden

Bernard is the CEO of Navica, a consulting firm focused on cloud computing and DevOps, and has been named one of the ten most influential persons in cloud computing. He is a cloud computing advisor for CIO magazine, his blog appears on over a dozen “best of cloud computing” lists, and he is a highly regarded keynote speaker at cloud computing conferences around the world. He is the author or co-author of four books on virtualization and cloud computing, including his most recent, Amazon Web Services for Dummies.

View On-Demand Webinar

Enterprises Confront Cloud: Adoption and Confusion

Bernard Golden

Published on Oct 6, 2017


It’s interesting to watch the stream of cloud computing articles that cross my desk. Five years ago, they all focused on how enterprises worried about cloud security and would, as a result, choose to build private clouds. In fact, most enterprises made their peace with cloud security and began adopting cloud computing.

Two years ago, given the ongoing adoption, the trade press turned to hybrid cloud and multi-cloud, positing that enterprises would opt for spreading application workloads across multiple computing environments. This prediction has pretty much played out as described, although the details are much messier than the press portrays.

Last year, the conversation turned to digital transformation—the need for companies to change the way they do business. Today’s consumers prefer online interaction and companies need to deliver rich digital offerings to engage them.

So, where does that leave us today? Two recent studies depict a completely understandable state of affairs: One management consulting firm, McKinsey, published a survey that describes how companies are putting pressure on IT organizations to deliver the digital goods. Another one, Talkin’ Cloud, published a piece discussing how IT organizations are feeling overwhelmed with the options they have to sort through to build solutions.

The good news? McKinsey identified a path forward for IT organizations to sort through their options, confirm priorities and prepare for digital transformation. And, that path runs directly through staff skills and education. It must be said, however, that the path will be bumpy and will challenge IT organizations as never before.

Turning to the McKinsey piece first, it is absolutely fascinating. McKinsey is not a technology consulting firm; its constituency is the business side of the house. McKinsey surveyed 700+ senior executives and published the results in a piece titled, IT’s Future Value Proposition.

The survey turned up some very interesting perspectives. As shown in Figure 1, senior executives believe that today’s IT organizations focus on business process enablement and operational stability and management. These terms are McKinsey-speak for the traditional role of IT: internally-oriented applications that automate business processes and improve the internal workings of business units.

However, five years from now, these same executives believe that the value of those traditional activities will drop dramatically—from 45 percent to 27 percent (business process enablement) and from 39 percent to just 7 percent (operational stability and management).

Figure 1

Also, they’re not sure whether IT is ready to deliver value the way it will need to in five years: through innovation and integrated technology solutions, the foundation of digital transformation. As Figure 2 shows, across all the categories of activity associated with digital transformation, barely a third believe that IT is up to snuff.

Consequently, those executives draw a logical conclusion: IT can be replaced by an external substitute (see Figure 3). A full 75 percent of the survey’s respondents believe that an external party can do a better job than the existing IT organization.

Figure 2

IT is struggling to respond to the demands of digital transformation. There is so much change and innovation, especially in the cloud space, that IT organizations are overwhelmed just trying to keep up.

Figure 3

This is a key message of the Commvault survey that is the basis of the Talkin’ Cloud article. Things are going so fast that “eighty-one percent of IT leaders report to be either extremely concerned or very concerned about missing out on cloud advancements.”

This situation is only going to grow worse over time because the cloud providers are picking up the pace of their innovation. Here is a chart (see Figure 4) that shows the service improvements from AWS over the past few years:

Figure 4

So what should the IT organization do? They’re stuck between an irresistible force (cloud computing innovation) and an immovable object (the CEO’s belief that they’re doing a terrible job and someone else could do better).

Well, first of all, they should recognize this longing for an external IT savior for what it is: a fantasy. I’ve worked in companies where the CEO decided he was tired of the way things were getting done and began using external IT providers. A year later, once promised projects came in much later than scheduled and far more expensive than predicted, the company decided that the internal technology organization wasn’t so bad after all.

The fact is, IT is hard, and digital transformation is making it more so.

Fortunately, amid the bleak picture of IT’s future, McKinsey offers guidance on how to improve things:

  • CIOs need to establish themselves as genuine business leaders and partners. They need to rewrite their job descriptions to focus on looking outward rather than inward.
  • The root causes of IT’s ineffectiveness must be addressed. According to IT respondents, the most significant problems are a lack of clear priorities for the IT function, weaknesses in IT’s operating model, and talent issues. In fact, talent has actually grown as a root cause; respondents are twice as likely to cite talent issues now as they were in 2015.

This means that IT organizations need to address things at the highest and lowest levels. At the highest level, they need to form collaborative working arrangements with business partners to mutually develop digital offerings. And at the lowest level, IT organizations need to gear up the skills required to support digital offerings.

Cloud computing is a foundational capability in this area, and education is critical. All the partnering talk in the world won’t help if the IT side of the partnership doesn’t have the skills to instantiate the vision.

For IT, it’s the best of times and it’s the worst of times. Its capabilities are needed as never before, but its abilities are viewed with suspicion. Smart IT leaders will get in front of the demand from business units for help with digital transformation and help co-create solutions with business partners.

About the Author

Bernard Golden is the CEO of Navica and serves as an advisor for CIO magazine. The author of four books on virtualization and cloud computing, Bernard is a highly regarded speaker who has keynoted cloud conferences around the world and is counted among the ten most influential people in cloud computing.


3 Things IT Companies Can Learn from AAG Cloud Revenue Numbers

Bernard Golden

Published on May 25, 2017


In a kind of cloud revenue ‘alignment of the stars,’ the big three cloud providers—Amazon Web Services (AWS), Azure, and Google, known collectively as AAG—announced their financial data on April 28.

I always pay attention to the financial announcements because they offer real insight to the state of enterprise IT cloud adoption. And the news reinforced the general intuition of the industry: cloud computing is big and getting bigger—and it’s probably not going to stop getting bigger for a long time.

AWS is the only one of the big three to break out its pure cloud financials, and its announcement illustrates just how big its business is.

For the quarter, AWS racked up $3.66 billion of revenues, achieved on a 43 percent growth rate year over year. The chart below depicts its quarterly revenues since 2015 along with the quarterly growth rates. As one can easily see, the only negative about AWS’s quarter is that its growth decelerated to “only” 43 percent.

The other AAG members do not break out their cloud numbers, but there are ways to get a sense of the size of their cloud businesses.

Microsoft lumps its Azure number in with a category it calls “Commercial Cloud,” which includes Office365, Dynamics365, and “other properties.” Microsoft stated that its commercial cloud annual run rate exceeds $15.2 billion.

Because of this revenue lumping approach, it’s not clear how Azure itself is doing, but the company did note that Azure grew 93 percent year over year.

Another way to approach the question of pure Azure numbers is to look at analyst estimates. One analyst, JP Morgan, put Azure’s 2016 yearly revenues at $2.7 billion, or around $675 million per quarter. Of course, with such torrid growth rates, that would more likely be something like $300 million for Q1 2016 and something like $800 million for Q4 2016. Frankly, this seems a little high to me, but if one accepts the estimate, it implies that Azure could exit 2017 at a $4 billion-plus run rate.

Google announced its numbers as well, and like Microsoft, it lumps its cloud numbers into a larger category, which it refers to as “Google other revenues.” Said other revenues totaled $2.2 billion for the quarter, a large part of which, according to Google, was its cloud and apps revenue. This category achieved a 33 percent year over year growth rate. It’s hard to know just how much of this is actual cloud revenue, so one might put that figure at perhaps $200 million.

So, to sum up the actual and estimated numbers:

Synergy Research Group published a chart comparing the provider market, as seen below:

Synergy’s estimates, as portrayed in the chart, seem roughly in line with the table above.

The blended growth rate of the cloud providers, adjusted for revenue percentages of the individual providers, is on the order of something like 60 percent. This indicates the big three revenues might achieve something around $23 billion of revenues for 2017, $39 billion for 2018, and $62 billion for 2019.

To get a sense of just how powerful the big three vendors are in the tech industry, consider what one member of a Facebook group I participate in posted after Amazon announced its numbers:

I think his math is a little off, as it puts a current valuation based on what he estimates AWS 2019 revenues will be. On the other hand, his 2018 and 2019 AWS revenue numbers appear to be a bit conservative—I put them at something like $24 billion in 2018 and perhaps $33 billion in 2019. So his $250 billion AWS valuation looks to work out in the end.

3 Steps IT Organizations Need to Take

The cloud provider revenue figures, growth rate, and valuation estimates make clear that public cloud computing is an unstoppable force in the technology industry.

Every enterprise IT organization must come to accept the long-term shift in infrastructure use. Here are three action steps your IT organization should take in light of the cloud numbers outlined above:

  • Adopt a cloud-first policy. The blistering growth of the AAG providers illustrates that end users are fleeing internal application deployments in favor of public cloud environments. You should align with this approach and adopt a policy of cloud-first, meaning the default deployment option for applications should be with one of the public providers. Only if some application characteristic precludes this approach should on-premises deployment be considered.
  • Deliver cloud-native architecture. While many IT organizations gain benefits from transferring traditionally-architected applications to public cloud environments (the so-called “lift-and-shift” approach), that’s not the way to get the best outcomes from using the public providers. Designing applications with a cloud-native architecture can increase availability, improve resilience, reduce costs, and most importantly, deliver functionality impossible to achieve in traditional infrastructure environments.
  • Provide internal skill acquisition. It’s unfair to expect staff with traditional skills to deliver cloud-native architectures. It’s critical to mirror the external adoption with internal transformation. This implies different application architectures, adoption of agile and DevOps processes, and new approaches to application monitoring and management. Key to implementing these new approaches is skill development, and Simplilearn offers a wide range of courses to help build new skills.

Every quarter’s cloud provider financial results make clearer the reality that IT is undergoing a sea change in infrastructure and applications. Every IT organization must respond to that sea change or risk being inundated by it.

About the Author

Bernard Golden is the CEO of Navica and serves as an advisor for CIO magazine. The author of four books on virtualization and cloud computing, Bernard is a highly regarded speaker who has keynoted cloud conferences around the world and is counted among the ten most influential people in cloud computing.


AWS Security: Identity and Access Management (IAM)

  • Tue, August 2, 2016
  • Views 102

About the On-Demand Webinar

Security is essential for any application, but when that app is hosted on a public cloud, security becomes vital. The AWS Identity and Access Management (IAM) features play a key role in providing secure and controlled access to AWS resources. The popularity of AWS as a public cloud is due in large part to these features. Watch this webinar to learn more about IAM users, groups, and roles to better secure your information with AWS.
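To give a taste of how IAM-style authorization behaves, here is a toy policy evaluator in plain Python. It is not the AWS SDK or the full IAM policy language, but it mirrors three real IAM rules: access is denied by default, an Allow statement grants it, and an explicit Deny always overrides an Allow.

```python
# Toy evaluator for IAM-style policies (illustrative, not the AWS SDK).
# Real IAM semantics mirrored here: default deny, Allow grants access,
# and an explicit Deny always wins over any Allow.

policies = [
    {"effect": "Allow", "actions": {"s3:GetObject", "s3:PutObject"}},
    {"effect": "Deny",  "actions": {"s3:PutObject"}},  # explicit deny
]

def is_allowed(action, policies):
    allowed = False
    for statement in policies:
        if action in statement["actions"]:
            if statement["effect"] == "Deny":
                return False          # explicit deny overrides everything
            allowed = True
    return allowed                    # default deny if nothing matched

print(is_allowed("s3:GetObject", policies))     # True
print(is_allowed("s3:PutObject", policies))     # False (explicit deny)
print(is_allowed("s3:DeleteObject", policies))  # False (default deny)
```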

Hosted By

Tarun Dave

Tarun is the co-founder of OneHop, and has over 15 years of experience as a Consultant, Developer, Technical Leader, and Account Manager with global clients.

View On-Demand Webinar

How to Build Cloud Applications the Right Way

Bernard Golden

Published on Jul 4, 2017


There’s no denying that cloud computing has reached a tipping point.  In the future, new applications will target cloud environments as the favored deployment option.

That raises some issues:

  • Are your applications well-suited for the cloud?
  • Will they offer high availability?
  • Do they respond well to erratic workloads and user populations?

All of these questions get at the same underlying issues: do traditional application architectures operate effectively in the cloud, and do they need to be modified in light of how cloud environments operate?

The simple answer is no, traditional application architectures don’t operate effectively in the cloud, and yes, they need to be modified.

Why is this, and what should you do to build cloud applications the right way?

Here are four recommendations to build cloud applications the right way.

1. Understand the infrastructure

The first and most important thing to understand about cloud applications is the nature of the cloud infrastructure.

AWS famously proclaims “everything fails all the time.” What this means is that, unlike traditional infrastructure, which is assumed to be robust and failure-proof (even though it actually frequently fails), when using cloud infrastructure, application developers should assume resources may fail.

The reasons for failure vary, from network switches going down to servers crashing or even entire AWS services becoming unavailable.

The point is that when planning a cloud application, one should expect that some application resources will fail unexpectedly. Consequently, it’s critical to insulate the application from underlying failure.
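One common way to insulate an application from transient failures is retrying with exponential backoff. The sketch below is a minimal, generic Python version; the `flaky_fetch` function simulates a resource that fails twice before recovering. Production code would also add jitter, cap total elapsed time, and retry only on errors known to be transient.

```python
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.1):
    """Retry a flaky operation with exponential backoff.

    Insulates callers from transient infrastructure failures by retrying
    up to max_attempts times, doubling the delay after each failure.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Simulate a resource that fails twice before recovering.
attempts = {"count": 0}

def flaky_fetch():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient failure")
    return "payload"

result = call_with_retries(flaky_fetch, base_delay=0.01)
```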

How do you do that?

2. Design for failure

Clearly, the right way to approach infrastructure failure is to recognize that it will happen. Rather than treating failure as a surprise and then getting mad when the application running on that infrastructure also fails—the pattern with traditional infrastructure—one should design applications so that they are resilient in the face of failure.

What does that mean?

The best way to deal with unreliable infrastructure is to design with redundancy. Make sure that every operational part of the application runs in at least a paired topology: Two web servers; two application logic layers; mirrored database servers.

And then disperse the redundant pieces of the application. Place them in different data centers. Or even in different regions. Place them such that even a significant outage will not take the entire application down.

There’s no question that this makes application design more complex and development and operations more work, but it protects the application against infrastructure failure. And in today’s world, where applications are often the primary interface for customers, application uptime isn’t a nicety, it’s a requirement.
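To make the redundancy concrete, here is a toy sketch of client-side failover across a redundant pair of endpoints dispersed in two regions. The endpoint names and health check are hypothetical; in practice a load balancer or DNS failover usually plays this role.

```python
def first_healthy(endpoints, is_healthy):
    """Return the first endpoint that passes a health check.

    With redundant deployments across data centers or regions, a client
    (or load balancer) can route around a failed instance.
    """
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy endpoints available")

# Hypothetical redundant pair dispersed across two regions.
endpoints = ["https://us-east.example.com", "https://eu-west.example.com"]
down = {"https://us-east.example.com"}  # simulate a regional outage

chosen = first_healthy(endpoints, lambda e: e not in down)
```

Even with one region down, the application keeps serving traffic from the surviving endpoint, which is exactly the uptime property the redundancy buys.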

3. Expect load variance

Now that applications are the primary customer interface, gone are the days of predictable user populations associated with employee-focused applications. One should expect erratic loads, both because customer counts inevitably grow (one hopes), but also because customer use can vary according to the whim of the hour.

Some celebrity mentioned to her 3 million Twitter followers that she just remortgaged her house? If you’re a financial institution, you can expect a huge influx of traffic as people decide they should consider refinancing.

You get the drift. Cloud workloads are highly erratic and your application should be ready to handle them.

You’ve already got redundancy in place, right? The next step is to design your application so additional application resources can join and drop off the executing resource pool.

So you should be able to add three (or 30) web servers to the redundant pair you’ve got running to address that celebrity-driven traffic.
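The scaling decision itself can be sketched as a small function: given the current load and per-server capacity, compute the desired pool size, never dropping below the redundant pair and never exceeding a ceiling. Real autoscalers (AWS Auto Scaling, for example) apply similar target-tracking logic; the numbers here are purely illustrative.

```python
import math

def desired_capacity(current_load, capacity_per_server,
                     min_servers=2, max_servers=30):
    """Compute how many servers a pool needs for the current load.

    Keeps at least a redundant pair running and caps growth at a
    ceiling so a traffic spike cannot blow the budget.
    """
    needed = math.ceil(current_load / capacity_per_server)
    return max(min_servers, min(max_servers, needed))

quiet = desired_capacity(150, 100)    # quiet hour: stay at the redundant pair
spike = desired_capacity(2500, 100)   # celebrity-driven spike: scale out
```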

4. Leverage cloud services

One huge mistake IT organizations make is thinking of cloud computing as pure computing infrastructure. You’ll often see IT groups talking about adopting IaaS to relieve internal data center pressure, or using cloud virtual machines to operate applications.

This assumes that, for all the software that runs in the application, the IT organization will install, configure, and manage it. Need a database? Well, the DBA will install MySQL, configure it, connect it to storage, and then an operations group will take responsibility to keep the MySQL system up and running.

This approach completely ignores the reality that the major cloud providers (AWS, Azure, and Google) have built out rich services on top of their IaaS offerings.

They all offer managed database services. In fact, AWS just enriched its key/value DynamoDB service (a managed service) by placing a cache in front of it (another managed service) to improve performance. Microsoft just launched Cosmos DB, an extremely innovative combined key/value and document database service. And, of course, Google has rolled out Spanner, a globally distributed, strongly consistent, highly performant SQL database.

Kind of puts your MySQL installation to shame, eh? (Don’t worry, they all offer a managed MySQL service if that’s your fancy).
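The pattern of putting one managed service in front of another can be illustrated in miniature with a read-through cache. This toy Python class stands in for what a managed cache does in front of a key/value store at vastly larger scale: serve hot keys from memory and fall through to the backing store on a miss.

```python
class ReadThroughCache:
    """A miniature read-through cache in front of a key/value store."""

    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1          # hot key: served from memory
            return self.cache[key]
        self.misses += 1            # cold key: fall through to the store
        value = self.backing_store[key]
        self.cache[key] = value
        return value

store = {"user:1": {"name": "Ada"}}   # stands in for the backing database
cached = ReadThroughCache(store)
cached.get("user:1")   # first read misses and populates the cache
cached.get("user:1")   # second read is a cache hit
```

A real deployment also has to handle expiry and invalidation, which is exactly the operational burden the managed offerings take off your hands.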

And this is just one category. They also offer managed IoT systems, data warehouses, and machine learning services.

It’s critical to extend your thinking beyond “cloud as infrastructure” to recognize it’s really “cloud as computing capability” delivered in a number of different forms.


Building cloud applications the right way isn’t trivial. It requires knowledge, persistence, and a willingness to discard long-held assumptions.

What that makes possible, in turn, is the ability to create much more powerful applications that are vastly better than their traditional counterparts.

About the Author

Bernard Golden is the CEO of Navica and serves as an advisor for CIO magazine. The author of four books on virtualization and cloud computing, Bernard is a highly regarded speaker who has keynoted cloud conferences around the world and has been named among the ten most influential people in cloud computing.


Why Cloud Computing is Essential to Your Organization

Loraine Burger

Published on Jun 29, 2017


With the increased importance of Cloud Computing, qualified Cloud solutions architects and engineers are in great demand. Organizations have moved to cloud platforms for better scalability, mobility, and security. Cloud solutions architects are among the highest paid professionals in the IT industry. IDG found that by mid-2018 nearly one-third of all organizations would be relying on private clouds as part of their IT infrastructure. A report by Cisco found that more than four-fifths of all data center traffic — 83 percent — will be based in the cloud within the next three years.

IDC reports that while IT employment worldwide will grow about four percent every year from 2015 to 2020, all of that growth will come in cloud-related positions. By 2020, more than one in three IT positions will be cloud-related.

With the cloud market set to grow more than ever before, the need for IT staff with the appropriate technical and business skills has never been greater.

The must-have Cloud Computing skills

In order for businesses to successfully execute Cloud Computing strategies, they need to recruit, hire, and train people with the right skills.

The future holds great promise. Cloud Computing is one of the hottest fields in the world today, with plenty of high-paying jobs available for skilled candidates. The migration to the cloud brings with it new opportunities for those with the right skills to make it happen. The pace of change is accelerating. Both individuals and managers need to continually review and assess their Cloud Computing expertise. It’s safe to say that the cloud is still in its infancy. A lot will change over the next few years.

The impact of these changes on employment means new blends of skills are required to successfully manage today’s cloud environments.

Candidates for cloud engineering jobs should possess strong technical skills, the ability to think through business use cases and the curiosity and aptitude to learn new tools and technology. Technical staff need a mix of operations, software and architecture abilities. Manpower reports that the list of commodity Cloud Computing providers includes Amazon Web Services, Microsoft Azure, IBM Cloud, Google Compute and HP’s Converged Infrastructure with OpenStack. A familiarity with one or more of these platforms is a requirement for technical candidates.

In addition to technical positions, there will be plenty of jobs for those with relevant business skills. When organizations procure their IT from third-party providers, there is a critical need for people who know which services to pick, who can negotiate service level agreements, and who can integrate those off-site offerings with on-site data and operations.

Hiring managers must evaluate the blend of technical and business skills, of specific and generalist abilities, they need in order to implement cloud solutions successfully.

The Cloud Computing skills gap

Research conducted by Robert Half Technology, based on interviews with more than 100 CIOs and IT executives across the UK, revealed that three-quarters of the CIOs and IT directors polled admitted they frequently encountered IT professionals who were not up to the task. A 2016 Gartner survey asked IT professionals to identify the skills gaps their organizations were trying to fill in relation to information, technology, or digital business. Cloud led the list.

A 2017 UK survey by Microsoft found that almost a third of respondents had actively sought to recruit team members with cloud skills within the last year. A significant number found difficulty in recruiting the right people. They predict that over 3,500 organizations in the UK alone could be hampered by a lack of qualified staff. Employees and candidates with the right qualifications will be welcomed by these companies.

Source: Microsoft Cloud Skills Report

How to bridge the Cloud Computing skills gap

For job seekers with a desire to land a Cloud computing job, a clear understanding of where they might have a shortfall, and what skills they need to backfill, is an essential first step.

Candidates should decide if they want to acquire technical skills, business skills, or a blend of the two.

Companies require skilled professionals in every area of Cloud Computing.  They require people with the skills to design and plan cloud solution architecture, manage and provision infrastructure, analyze and optimize technical and business processes, manage implementation and ensure solution and operations reliability.

Many of the jobs that are the most difficult to fill didn’t even exist when many people were going through college. Continued education and investment in building the skills of the current workforce to meet the demands of Cloud Computing will reduce the skills gap.

Microsoft found that a majority of organizations look to training to meet their cloud skills needs.

Source: Microsoft Cloud Skills Report

There are a growing number of training opportunities available for employees to acquire the appropriate skills.

While no single training delivery model works best for every learner, a learner-centric model must be utilized. We’ve found that a blended learning delivery model, coupled with 24×7 access to teaching assistants and combined with project-based learning opportunities and quizzes/assessments, increases competency and proficiency.

Designed by expert authors, our Cloud Computing courses offer high-quality training in both technical and business skills to ensure professional success.

About the Author

Loraine is a content marketing specialist with more than ten years of experience in technical writing, content management, social media strategy and analytics. Her writing aims to engage, entertain and educate on topics ranging from technology to travel and digital marketing, and pairs well with her passion for data and analytics. Combined, these skillsets deliver content strategies that are goal-oriented, data-driven and measurable.


Get Your Head in the Cloud: A New Demand for Skilled Employees

Loraine Burger

Published on Jun 29, 2017


The cloud computing market is growing at a breakneck speed and shows no sign of slowing down. As the industry grows, so do the opportunities for people who know how to use the technology. In 2018, spending on public cloud services will account for more than half of worldwide software, server and storage spending growth, according to market research firm IDC.

Cloud computing, which powers an increasing number of digital services and devices, allows many computers—in any number of locations—to operate as one giant machine. The IT job market reflects this trend, and companies now face the challenge of finding people who can manage these systems. In a survey conducted by cloud management company RightScale, respondents cited a lack of resources and expertise as the biggest challenge in cloud computing.

To support the expanding adoption of the cloud, businesses require skilled employees. Online education aimed at certification is a convenient and effective way that both aspiring cloud experts and organizations can get ahead, but not in over their heads, with cloud technology.

Emerging Markets Causing Job Titles to Evolve

As cloud computing emerges as the dominant mode of delivery for business IT needs, job titles and skillsets also evolve. Future solution architects, system operators, application developers and security managers can all benefit from online learning solutions designed for working professionals.

Students can train to become certified on the Amazon Web Services (AWS), Google Cloud, or Microsoft Azure platforms in addition to mastering skills like DevOps. With 3.9 million cloud-related jobs in the U.S., and 385,000 of them in the IT sector alone, the decision for IT professionals to take advantage of the growing demand for cloud experts is easy.

Cloud computing jobs also command the biggest salaries in tech. According to Computerworld’s IT Salary Survey 2017, the average compensation for cloud computing jobs is $129,743.

Online Learning: The Fastest Path to Cloud Certification

As organizations struggle to keep their IT departments staffed with employees who understand cloud computing, upskilling current workers with certification courses is an excellent option. For individuals, becoming certified on one or more cloud platforms greatly increases marketability. With such fierce competition for talent, employers who provide opportunities for professional development are in an excellent position to recruit and retain the workforce they need to profit.

“As more and more of the tech industry moves to the cloud, preparing IT professionals to meet the challenge will continue to be incredibly important,” says Bernard Golden, cloud expert and advisory board member. “Using online education to prepare for certification is an excellent option for working students and their organizations.”

What’s Ahead in the Cloud?

The decision about whether businesses should adopt the cloud for business software has already been made; 93% already use it.  As more organizations migrate their IT needs to the cloud and reap benefits, IT workers with the right skills gain job security. The good news is, there are plenty of cloud certification programs to help IT professionals who want to increase their chances of landing a lucrative position.

Still, even with one or more certificates, the field is new and technology is constantly changing. Successful cloud computing experts will always be at least partially self-taught. And with so many digital education options, it’s easy to stay on top of tech trends with enough motivation. The opportunities are out there, and for many, that’s motivation enough.

About the Author

Loraine is a content marketing specialist with more than ten years of experience in technical writing, content management, social media strategy and analytics. Her writing aims to engage, entertain and educate on topics ranging from technology to travel and digital marketing, and pairs well with her passion for data and analytics. Combined, these skillsets deliver content strategies that are goal-oriented, data-driven and measurable.


3 Predictions for AWS in 2018

Marc Weaver

Published on Jan 23, 2018


Someone much wiser than I am once said, “It’s easy to predict everything, except the future.” This is especially true when it comes to predicting anything to do with cloud computing and Amazon Web Services (AWS). Technology changes rapidly, and sometimes in surprising ways. Nevertheless, it is human nature to make predictions and “guess-timates” about the future. With that disclaimer, I present to you three predictions for AWS that we might see come to fruition in 2018. Yet, in accordance with that same disclaimer, I also offer you “egg-on-my-face” versions of those predictions, as well.

Prediction 1: Edge Computing

Back in the dark ages of computing, it was common practice to store all your infrastructure in a server room in your office building. This approach meant that—in the event of a natural or manmade disaster—you had a very large (and vulnerable) single point of failure. To combat this, IT infrastructure was moved to off-site data centers to provide failover and high availability. In recent years, this has evolved into the adoption of the cloud as the off-site data center.

The recent explosion in Internet of Things (IoT) devices has meant that more data and bandwidth than ever are being consumed and processed. Gartner and McKinsey predict that by 2020 there could be 30 billion connected devices, a huge increase from the 6 billion we have today. Inevitably, this means that data centers will start to reach capacity and networks will become congested with the huge processing requirements and the amount of data traveling back and forth.

The proposed solution to this is edge computing. Edge computing decentralizes processing away from the cloud to prevent bandwidth congestion and the slowdown of response times. The idea is that an edge location passes along only what needs to go to the cloud. This reduces the amount of data the cloud must deal with so that it can focus on large, heavy-scale requests, while the edge location processes smaller requests in a timely manner. An edge location can take the form of a data center, a mobile phone mast, or even a router, switch, or computer.
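A toy sketch of that filtering idea: an edge node summarizes routine sensor readings locally and forwards upstream only a compact summary plus the anomalous readings. The threshold and readings here are invented for illustration.

```python
def filter_at_edge(readings, threshold):
    """Decide what an edge location ships to the central cloud.

    Routine telemetry is condensed into a local summary; only readings
    that exceed the threshold travel upstream, cutting the bandwidth
    the cloud must absorb.
    """
    summary = {
        "count": len(readings),
        "average": sum(readings) / len(readings) if readings else 0.0,
    }
    anomalies = [r for r in readings if r > threshold]
    return summary, anomalies

# Hypothetical temperature readings from an IoT sensor (one is anomalous).
readings = [21.0, 21.5, 22.0, 95.0, 21.2]
summary, to_cloud = filter_at_edge(readings, threshold=60.0)
```

Five raw readings collapse into one summary record and a single anomaly, which is the bandwidth saving edge computing promises at scale.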

Those familiar with AWS are already aware of AWS edge locations, which AWS operates in most of the major cities around the world. These are used predominantly by CloudFront, the AWS CDN service that quickly distributes web content to end users to reduce latency, but also by Lambda. My prediction? AWS is heavily into IoT technology, so it might not be a stretch to see AWS expand the use of its existing edge locations to provide more than just CDN functionality. They could act as mini cloud data centers that provide smaller-scale data processing services and faster response times to end users.

Prediction: AWS edge locations will offer enhanced data processing services for IoT devices.

Egg-on-my-face prediction: Amazon will offer enhanced Echo products with processing power to act as mini edge locations.

Prediction 2: Serverless Computing

What if you could take the code that runs your business reports and upload it to a service that manages its execution on your behalf? And only pay for the actual time that the code takes to run? And you didn’t have to worry about the infrastructure that runs the code? And it’s fully scalable, meaning you can run it just once or one million times in parallel?

While it might sound too good to be true, that is the definition of serverless computing. It allows you to run your code without requiring the provisioning or management of servers. You don’t need to be Nostradamus to predict that serverless computing is going to take off in 2018, as the technology has already reached the end of the runway and is very much in flight. However, expect the further adoption of serverless technologies to propel it to a much higher altitude.

At AWS re:Invent 2017, we saw the launch of the “serverless” Aurora database, which is an “on-demand auto-scaling configuration for Aurora where the database will automatically start-up, shut down, and scale up or down capacity based on your application’s needs.” My prediction? I expect to see AWS releasing additional serverless products to enhance the already impressive stable of serverless offerings, as well as a simplified Lambda service for people with little development experience, like a Lambda version of AWS Lightsail.
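To show how little code a serverless function needs, here is a minimal Lambda-style handler in Python, invoked locally with a fake event. The event shape is illustrative; real triggers (API Gateway, S3, and so on) each deliver their own event format, and Lambda supplies the context object.

```python
import json

def handler(event, context):
    """A minimal AWS Lambda-style handler.

    Lambda invokes a function like this per request; you never provision
    or manage the server it runs on, and you pay only for execution time.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoke locally the way Lambda would, with an event and a (here unused) context.
response = handler({"name": "serverless"}, None)
```

Because the function is stateless, the platform can run it once or a million times in parallel without any change to the code.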

Prediction: AWS will offer more serverless computing services designed to work in conjunction with one another, as well as a simplified Lambda service for non-developers.

Egg-on-my-face prediction: Release of AWS Lambda 2.0, which offers more programming languages and enhanced runtime configuration.

Prediction 3: Blockchain Technology

The term blockchain reached public consciousness in 2017 with the price explosion of Bitcoin and subsequently, all the other cryptocurrencies out there. A blockchain is described as “a continuously growing list of records, called blocks, which are linked and secured using cryptography.” However, blockchain technology isn’t limited to currency use. The technology is highly secure which means it is theoretically suitable for many other uses, such as the recording of events, medical records, property sales or even voting.
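That definition can be made concrete with a toy chain in Python: each block stores the hash of its predecessor, so tampering with any record invalidates every block after it. This illustrates only the linking idea, not a real distributed ledger (no consensus, no peers).

```python
import hashlib

def block_hash(index, prev_hash, data):
    """Hash a block's contents together with the previous block's hash."""
    payload = f"{index}|{prev_hash}|{data}".encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Link records into blocks, each secured by its predecessor's hash."""
    chain = []
    prev_hash = "0" * 64  # the genesis block has no predecessor
    for index, data in enumerate(records):
        h = block_hash(index, prev_hash, data)
        chain.append({"index": index, "prev_hash": prev_hash,
                      "data": data, "hash": h})
        prev_hash = h
    return chain

def is_valid(chain):
    """Recompute every hash; a tampered block breaks all links after it."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block_hash(block["index"], prev_hash, block["data"]) != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

# Hypothetical non-currency use: recording property sales.
chain = build_chain(["deed: lot 12 -> Alice", "deed: lot 12 -> Bob"])
```

Altering the first deed after the fact changes its hash, so validation of the chain immediately fails, which is what makes the structure attractive for records, medical data, or voting.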

AWS has not shown a huge interest in blockchain technology. For now, it is only investing in the technology through its partner ecosystem. In fact, only late last year, AWS CEO Andy Jassy said, “There aren’t a lot of use cases of the blockchain beyond the distributed ledger” and that most of these can be solved using other methods, which AWS offers with its existing services. My prediction? Blockchain is such a potentially huge growth area that I find it inconceivable that AWS will not get involved somehow, perhaps through increased partnerships with blockchain providers, a blockchain management service or even its own flavor of the blockchain.

Prediction: Increased interest and partnership with blockchain technology companies and a service used to cost-effectively store blockchains.

Egg-on-my-face prediction: Launch of Amazon Blockchain, a service that allows you to create and manage your own blockchain.

As I warned you at the beginning, making predictions such as these is fraught with peril in our rapidly changing times. But a new year demands a look forward into the future, and who knows? It might turn out a year from now that I was three for three in my AWS predictions!

About the Author

Marc Weaver is a certified AWS solutions architect who runs databasable, a cloud computing consultancy that specializes in AWS. He spent over 15 years as a database administrator for investment banks such as Nomura, Commerzbank, and Macquarie. Marc is also a cloud computing advisor for Simplilearn and has authored courses on AWS solutions architecture and database migration.


Cloud Computing: Benefits and Steps to Prepare your IT Team for the Cloud

Sandilya CH

Published on Oct 6, 2017


It’s probably not a matter of “if” your organization will migrate to a cloud platform but when. And when that time comes, you must be prepared, whether you’re the manager ensuring you have trained IT staff or a member of that staff.

Sure, you’ve been hearing about cloud computing for a while now. But maybe it seems like a futuristic computing model that only the biggest brands can justify or a risky way to operate a critical IT infrastructure.

It’s neither. It’s the future.

If you have any doubts about the direction of IT toward cloud computing, these numbers from Forbes should convince you otherwise:

  • Cloud computing spending is projected to increase to $162 billion by 2020 (from $67 billion in 2015).
  • The worldwide public cloud services market is predicted to grow 18 percent year over year by the end of 2017, from $209.2 billion in 2016 to $246.8 billion in 2017.
  • More than 50 percent of IT spending will be cloud-based by 2018.
  • Of all software, services and technology spending, up to 70 percent will be cloud-based by 2020.
  • Almost three-fourths (74 percent) of technical chief financial officers (CFOs) say that cloud computing will have the most measurable impact on their businesses in 2017.

Benefits of Cloud Computing

There are several reasons for migrating to the cloud, which is why we see these significant numbers when looking at predictions. With cloud computing, companies can save money, decrease staff, and become more agile and competitive. They are able to get new products to market faster, be more efficient, lower operational costs, decrease costs for IT and IT maintenance specifically, lower capital expenditures, and improve collaboration between departments.

However, this is not just a tangible shift from the physical mainframes you can touch and see to the virtual cloud computing you cannot. This shift will require IT teams to change as well. Does cloud computing do away with IT? Not at all. In the age of cloud computing, IT departments can become an integral part of the business rather than an enabler viewed as an obstacle or expense as in days past. There is an opportunity here to become a value-add to the business, but that change requires some new knowledge and companies are already looking for it.

Companies are Already Demanding These Skills

Even if your company isn’t there yet, cloud computing is here, and employers are looking for people trained for these jobs. In fact, LinkedIn reported that “cloud and distributed computing” topped the list of sought-after skills both in 2016 and in 2017. Dice reports that job listings for the Amazon Web Services (AWS) cloud platform increased by 76 percent between 2015 and 2016. In 2015, there were 3.9 million jobs affiliated with cloud computing in the U.S. alone.

And the money is there for those who are qualified for the jobs. BusinessInsider lists 12 jobs in cloud computing—all paying over $100,000.

Preparing for the Cloud

For IT departments, the switch to cloud computing requires not only a different skillset but a different mindset. It’s an IT paradigm shift, so much so that it has been suggested that we simply call it modern computing instead. Without in-house computer networks and servers to maintain and troubleshoot, your IT employees will need to focus their energy on tasks specific to cloud computing as well as business needs, such as working in collaboration with developers to improve time to market and responsiveness to internal and external needs.

Cloud computing jobs range from general to specific and include IT job titles you’re already familiar with, such as developers, engineers, systems administrators, and managers, but all with a cloud emphasis. Your IT folks will need to know cloud computing, in particular how to deploy and manage cloud solutions, and you’ll need people to fill the roles specific to cloud computing.

Required Cloud Computing Knowledge

In general, you should be aware of the major players in the cloud computing marketplace, so you can make sure the training your staff pursues aligns with the platform you’ll be working with. Three of the major public cloud players in the market today are:

  • Amazon Web Services (AWS)
  • Microsoft Azure
  • Google Cloud

There are, of course, other contenders vying for lead positions in the marketplace as well as private cloud platforms, but a familiarity with the three big names listed above will give you a good start. Or, if your organization has already decided on one, training with Simplilearn can help to get you or your staff trained on one or more of these platforms.

You or your staff will also need to know DevOps, which brings together the software development side and the operational side to integrate what used to be two disparate teams. DevOps knowledge is required for cloud computing regardless of the platform you choose. Simplilearn offers courses in DevOps, including the DevOps Engineering Master’s Program, as well as a certificate program for AWS DevOps Architects.

Below are a few of the particular skillsets you might need to train for or expect from new hires, depending on the job. If you’re seeking new knowledge to prepare for a career in cloud computing, these are skills you might need or choose to focus on:

  • Data management—With cloud computing, data storage is less of an issue because you’re no longer hosting that data in-house, but data access and management can become critical, especially if you’re still working with legacy systems while migrating to cloud computing.
  • Cyber security—Although the vendor should own a lot of the responsibility for the security of your data, you’ll still need these skills, as data is conceivably more at risk in the cloud, and you are ultimately responsible.
  • Task automation—Being able to automate tasks and processes is essential in a cloud environment, in part because of the sheer amount of data that organizations now manage. You’ll want staff trained in systems automation to make this happen.

No matter the cloud computing vendor your company chooses, no matter the organizational goals you hope to achieve, you’ll need to start with people who are trained in cloud computing essentials in order to make this migration a successful one from the start. Whether that means training existing staff through certifications and programs like those offered by Simplilearn or getting the training yourself as an investment in your own career so you can pursue one of the plentiful and lucrative jobs out there, the first step toward the cloud is in the direction of new knowledge and skills.

About the Author

Sandilya is a Senior Product Manager with more than 6 years of experience in new product development and category and product management.


Three Key Trends in the Thriving Field of DevOps

Sandilya CH

Published on Sep 21, 2017


Virtually every company is fast becoming a digital enterprise, driven by exciting new technologies used to streamline a vast array of internal and external processes. We have the teams that develop and deploy software, services, and applications to thank for running our operations more smoothly. But another group has been taking center stage lately, gaining recognition for bringing IT and software deployments to new heights: DevOps.

What is DevOps?

In short, DevOps (or development + operations) is a software development and delivery process that establishes better communication and collaboration between the people that develop software, deploy it and align it with business objectives. DevOps views cross-functional teams across many disciplines as a collaborative unit that enables continuous development and deployment cycles, encourages automation, removes bottlenecks from the process, reduces mistakes and improves IT service agility and recovery. Individuals who are trained to execute on both disciplines generally fill these critical roles.

Here are the key trends that have been shaping DevOps in 2017:

1. While Tools Thrive, Culture Makes the Difference

It’s true that a key driver of the DevOps revolution is the availability of powerful software tools. DevOps managers turn to Docker, Jenkins, Puppet, PowerShell, Chef and other technologies to make their interactive processes work. Teams communicate more effectively with GitHub, Microsoft Teams, Slack and other collaboration tools, and teams must be able to continually monitor software releases to quickly alert IT teams about failures or other shortcomings. Tools make the process happen, yes, but the exchange of knowledge and alignment with business objectives is where the rubber really meets the road for companies.

DevOps was conceived to build a culture of better process management so that teams can get things done better and faster. Sharing tools and giving every constituent in the development, deployment, redevelopment and redeployment process greater visibility empowers a more fluid process that is simply a better way of doing business. According to one developer report, 80% of companies reported that development and operations now share at least some tools, which is the first step in establishing the DevOps culture. Management, in turn, looks for business benefit that contributes to the bottom line, raising the bar for what it takes for a company to be truly “digital.”

2. Cross-pollination Drives More Diverse Skillsets

On a related note, the tenets of DevOps will undoubtedly drive IT and software development professionals to pursue more holistic skillsets that can be applied across the organization. DevOps creates an ongoing feedback loop between constituents and continuously improves operational efficiency. A recent Interop ITX/BMC survey on DevOps revealed that 43 percent of respondents indicated that operations staff had become involved in future product feature enhancements, and 41 percent said that development had become more involved in application deployments. And importantly, 25 percent reported that corporate management structure had been changed to better align development and IT staff and goals. IT and software development professionals who can fill multiple roles will be well-positioned to thrive in DevOps and drive further evolutions of continuous improvement ideals.

3. DevOps Adoption Flourishes and Expectations Remain High

It should come as no surprise that adoption of DevOps is growing fast: the share of companies adopting DevOps reached 74 percent in 2016, up from 66 percent in 2015. The software development lifecycle is a complex one, and organizations now have a way to bridge the divide between teams that deal with design, testing, quality assurance, deployment, and support. DevOps contributes to better performance and business results as well. In the earlier Interop ITX/BMC report, nearly 80 percent of companies indicated that they had seen or expected to see improvement in production stability, and 78 percent said the same of application performance.


As a perfect combination of technology innovation and IT process culture, DevOps is revolutionizing the way software and services are deployed. It is a critical skillset to have, especially for those who intend to master deployments in the Cloud and improve the way their organizations make use of technology in this digital age.

About the Author

Sandilya is a Senior Product Manager with more than six years of experience in new product development and category and product management.


Machine Learning Transforms Cloud Providers into Custom Chip Developers

Bernard Golden

Published on Sep 18, 2017


I remember my first exposure to cloud computing quite clearly. I attended an enterprise architecture meetup and someone from Amazon was talking about a new service from the company called Amazon Web Services (AWS) that offered what he referred to as “Infrastructure as a Service” (this was so early in AWS’s life that the term cloud computing had not even been invented).


AWS streamlined access to industry-standard X86 computing resources. Instead of the interminable provisioning timelines typical of on-premises environments, AWS delivered virtual machines in less than ten minutes. To someone used to waiting weeks or months for computing resources, this seemed like sorcery.

It was immediately clear to me that Infrastructure as a Service would revolutionize the technology industry. And so it has proved. Users flocked to AWS and its peers because they could design and run applications faster and cheaper than ever before.

Demand for cloud computing is so large today that the providers are building data centers at a furious pace. According to a Data Center Knowledge article, the big three of cloud—AWS, Microsoft, and Google (aka AMG)—are spending about $30 billion per year on new infrastructure.

Each of them has moved far beyond industry-standard X86 servers. They employ in-house staff to create new hardware designs tuned to their environments. They work with Intel to develop new chips better suited for their use cases. One can think of them as implementing custom X86 computing environments designed to enable massive scale while operating at the lowest possible cost point.


The rise of machine learning (ML) changes this formula. While X86-based computing is great for common application workloads, it’s not nearly as well-suited for ML workload execution. This is because parallel processing, which is a hallmark of ML execution, is a poor match for the X86 single thread processing approach.
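The mismatch described above can be made concrete with a toy NumPy sketch (my own illustration, not from the article): a naive, single-threaded triple loop stands in for scalar CPU-style execution, while NumPy’s vectorized matrix multiply can exploit parallel hardware underneath.

```python
import time
import numpy as np

def matmul_loops(a, b):
    """Naive single-threaded triple loop, standing in for scalar execution."""
    n, k = a.shape
    _, m = b.shape
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += a[i][t] * b[t][j]
            out[i][j] = s
    return np.array(out)

rng = np.random.default_rng(0)
a = rng.standard_normal((60, 60))
b = rng.standard_normal((60, 60))

t0 = time.perf_counter(); slow = matmul_loops(a, b); t_loop = time.perf_counter() - t0
t0 = time.perf_counter(); fast = a @ b; t_vec = time.perf_counter() - t0

assert np.allclose(slow, fast)  # same numbers, very different execution model
print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.6f}s")
```

Even at this tiny scale the vectorized path is orders of magnitude faster; ML workloads are dominated by exactly this kind of matrix arithmetic, which is why parallel hardware matters so much.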

GPU boards are often used for ML, as the parallel processing of graphics chips is a better match for these workloads. All of the big three either already offer, or soon will offer, virtual machines with GPU boards attached to allow customers to execute ML workloads.

However, while GPU parallel processing is better than X86 chips for machine learning, these graphics-oriented chips are not ideally suited for machine learning. Stated another way, GPUs perform ML more efficiently and less expensively than X86 environments, but there are additional capabilities available with ML-focused chip designs. This has caused Google and Microsoft to move beyond system design and into the realm of chip manufacture. Simply put, the demands of ML require specialized processors, and there isn’t an Intel equivalent for ML chips—so AMG have stepped forward to implement them.

Interestingly, AMG have diverged in their approach to ML processors, which reflects their overall approach to ML.


Google created an open source ML framework called TensorFlow and is putting all of its ML efforts behind it. Because it focuses on a single framework, it can implement TensorFlow operations directly in silicon, which it has done in an ASIC it refers to as a tensor processing unit (TPU). TPUs offer impressive performance gains compared to the CPU and GPU alternatives. To learn more on this topic, read the Google blog that describes the TPU initiative. Google revealed its first TPU in 2016 and has since followed up with an updated version a few months ago.


In contrast to Google, Microsoft is not wedded to a single ML framework. Its employees use several open source frameworks (including TensorFlow), depending on the task at hand. This dictates against an ASIC approach, so the company recently announced an FPGA-based service called Brainwave.

Microsoft’s approach offers flexibility to its researchers and engineers. This gives them performance well beyond GPUs, but without restricting them to a single ML framework. Brainwave is currently only available to Microsoft employees, but the company plans on making it available to external customers in the future.

The quiet one in this flurry of hardware announcements? AWS. As noted, it does offer GPU capability, but to date has made no announcement regarding custom hardware.

However, after a bit of a slow start, AWS is going headlong after AI, and the company is unlikely to be willing to take a back seat to the other members of AMG. I expect the upcoming Reinvent conference is likely to witness one or more ML hardware announcements from the company.

One might question the ML hardware investments these companies are making. Sure, the new hardware will run ML workloads better than X86 CPUs or even GPUs, but why go to the work and expense of designing and building entire new chip architectures?

In a phrase: customer demand. The use of machine learning is exploding with the technology applied across an enormous range of use cases, from clothing recommendation to railway maintenance to medical diagnosis.

Machine learning is now poised where cloud computing was a decade ago and will be accelerated by the same phenomenon: easy availability at a low price. Its growth is likely to be more rapid, though.

With cloud computing, we’ve spent a decade debating whether on-premises or public clouds are better, with adoption lagging due to the ongoing controversy.

Machine learning will not undergo the same kind of bickering. ML’s natural home is a public offering, because it improves with scale and data, which are both more available from a public provider than in a single user location.

It will be interesting to see whether, in a few years’ time, AMG double down with even more investment as they add ML facilities to their data center arms race. For sure, though, we’re only at the beginning of their custom hardware efforts.

About the Author

Bernard Golden is the CEO of Navica and serves as an advisor to CIO magazine. The author of four books on virtualization and cloud computing, Bernard is a highly regarded speaker and has keynoted cloud conferences around the world. He is also ranked among the ten most influential people in cloud computing.


Are Your Employees Ready for the Cloud?

Bernard Golden

Published on Nov 20, 2017


The growth of cloud adoption has many IT groups asking how they can become cloud-native organizations. They recognize they can’t depend on everyone to seek out knowledge on their own, nor afford inconsistent knowledge across the organization, so they resolve to develop a structured approach to making sure their employees are ready for the cloud.

Unfortunately, too many companies find that their efforts fall short. After devoting time and money to employee training, the organization fails to develop a critical mass of cloud knowledge.

In turn, this causes many problems for employees and their projects. Employees lack critical skills, and their projects deployed into cloud environments fall short of expectations in terms of performance and stability.

Clearly, this is an undesirable state of affairs.

So what should IT organizations do to make sure their employees are ready for the cloud? There are three phases of building a cloud-native organization staffed with employees capable of building robust cloud applications.

Education the Right Way

Training is the bedrock of a cloud-capable IT organization—but only when it’s done the right way. Rather than send people to classes based on what’s available, a better approach is to design a curriculum for every job role. Each curriculum should contain a set of classes designed to develop skills, progressively getting deeper and more detailed as students progress through the curriculum.

An obvious organizing principle for curricula comes from the vendors. The large cloud providers offer training tracks structured by role: for developers, for operations staff, and more. Each track begins with an introductory course and then offers a sequential set of classes that explore cloud services germane to the role.

However, don’t overlook training that is not vendor-specific. There are topics that address skills needed for success in cloud adoption, yet are not focused on an individual cloud provider. For example, Agile DevOps skills are important for today’s application development but are not specific to any particular provider.

Besides agile development, other vendor-neutral skill sets that IT organizations build include DevOps, analytics, machine learning, and IoT.

The right training curriculum will represent a mix of vendor-specific and neutral courses organized according to employee role.

Immediate Application

It’s a cliche of education that the half-life of a course is two weeks: two weeks after attending a course, a student will have forgotten half of its content, and over the next two weeks, half of what remains.
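Taken at face value, the two-week half-life is simple exponential decay. A small Python sketch (my own arithmetic, not from the article) shows how quickly retention erodes without reinforcement:

```python
def retention(weeks, half_life=2.0):
    """Fraction of course content retained after `weeks`, given a fixed half-life."""
    return 0.5 ** (weeks / half_life)

# After two weeks, half is gone; after four, half of the remainder, and so on.
for w in (0, 2, 4, 8):
    print(f"week {w}: {retention(w):.0%} retained")
```

Within two months, under this rule of thumb, only a few percent of the course content survives, which is why immediate application matters so much.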

This reflects a critical truth: education is cemented by reinforcement and reinforcement occurs when new skills are applied to real-world applications.

What this means is that organizations must have topic-relevant assignments for employees to take on directly after they take cloud training. Failing to do so risks wasting money on training. Worse, employees whose hard-won knowledge declines over time will be frustrated when they are called upon to apply their training long after it’s fresh.

Growth Assignments

Even the best training in the world can’t provide everything. Developing deep skills in a technology requires challenging work beyond what was presented in class. Furthermore, most technical personnel embrace challenging tasks that cause them to learn new things and grow knowledge. For these reasons, organizations should give employees assignments that push them to go beyond their comfort zone. Fortunately, this is easy in the realm of cloud computing—the field’s rapid innovation means that there is always new information to absorb and new skills to apply.

This approach inevitably causes tension in organizations. Giving people challenging work necessitates moving people to new assignments, while the desire for organizational efficiency calls for leaving staff in place in current assignments.

But just as a plant that outgrows its pot eventually suffers through an inability to expand, so too will employees constrained from developing new skills. A common outcome of this is employees either leaving or “retiring in place.” No IT organization can afford this—simply stated, the role of IT is so critical to the success of companies today that failing to implement conditions that attract the best possible IT talent is ruinous.

Cloud computing is transforming the face of IT. The pace of innovation and the restructuring of IT processes to keep up with that pace means IT organizations need to build new skill sets.

Clearly, training is a prerequisite for success in the cloud era. Putting your staff through cloud courses gives them the core knowledge necessary to create cloud-native applications.

However, training courses are just the foundation of building a skilled workforce. Organizations need to create a structured curriculum for employees, one that provides a program of courses appropriate to the roles that employees take on.

In addition to formal training, organizations need to ensure that newly-educated employees have a chance to apply their fresh skills to real-world tasks. In this way, their knowledge is reinforced and made more relevant by applying it to actual business problems.

Beyond training and skill reinforcement, IT organizations must continue to develop employees by providing more challenging work assignments. Technical personnel thirst to develop new skills, and challenging assignments are an excellent way to build knowledge borne of solving real-world problems. For employees, this is personally rewarding; from the organization’s perspective, it is good because it retains employees.

The massive transformation in IT caused by cloud computing is going to continue for the foreseeable future. Smart IT organizations recognize that smart, skilled employees are the best resource to address the need for innovation and new business offerings. A three-phase education approach is the best way to ensure your people are ready for the cloud.

About the Author

Bernard Golden is the CEO of Navica and serves as an advisor to CIO magazine. The author of four books on virtualization and cloud computing, Bernard is a highly regarded speaker and has keynoted cloud conferences around the world. He is also ranked among the ten most influential people in cloud computing.


Top 3 Takeaways From AWS re:Invent 2017

Marc Weaver

Published on Dec 25, 2017


As the cloud marketplace gets ever larger and rivalry between the largest cloud providers heats up, Amazon Web Services (AWS) uses its re:Invent conference to assert its authority in the space and its intention to take on the competition.

AWS re:Invent is an annual event that takes over multiple Las Vegas conference halls. It’s used as a vehicle to showcase AWS’s latest and greatest services and to declare war on its competitors. This year was no exception. During re:Invent 2017, AWS made over 60 major announcements covering all of their services. We can’t discuss all of them here, but I’ve highlighted my three top takeaways from the event below.

1. Cloud Adoption Is Officially Mainstream

The adoption of cloud computing has been growing over the last couple of years as IT decision-makers have realized that the benefits of a flexible, scalable, cost-effective and highly available infrastructure far outweigh the somewhat misguided concerns about the insecurity of the cloud. As a result, this year’s conference was around 50 percent larger than the previous year’s. An estimated 45,000 people attended and re:Invent filled conference halls at the Venetian, Encore, Mirage, Aria and MGM hotels. If you haven’t been to Las Vegas, it’s difficult to describe the scale of the event, but the size demonstrates just how popular cloud computing, and AWS in particular, has become.

Over 1,000 learning sessions were held to showcase the comprehensive list of products and services from the AWS ecosystem and each one was well attended. However, it was a keynote speech that proved cloud adoption is now widespread, when Roy Joseph, Managing Director at Goldman Sachs, spoke about how his company uses AWS for analytics. If investment banks—one of the industries most reticent to adopt cloud computing—are now embracing AWS, then cloud computing is officially mainstream.

2. AWS Wants to Make Your Life Easier

AWS wants to make it as easy as possible for you to build your systems and applications. This was a predominant theme at the event, and many new services were announced, including:

  • AWS Elastic Container Service for Kubernetes (EKS): This is a managed AWS service for running Kubernetes, the open-source system for automating the deployment, scaling, and management of containerized applications. It has been a long time coming, but AWS has finally delivered and can now compete against Microsoft and Google for Kubernetes customers.
  • AWS Fargate: In typical AWS fashion, AWS announced a product that the competition already offers, but enhanced it to make it even more attractive. This is the case with Fargate. Fargate allows you to run containers, but without managing servers or clusters. It’s comparable to EC2, but using containers rather than VMs.
  • Amazon Aurora Multi-Master: Aurora is already a fully managed database service, but the addition of Multi-Master allows the creation of multiple read/write master instances across multiple Availability Zones. As a former database administrator, it’s easy for me to see the value in a service like this.
  • Amazon Aurora Serverless: This is essentially a serverless database that is designed for highly variable workloads and customers only pay for the database resources they use on a second-by-second basis.
  • Amazon DynamoDB Global Tables: This service creates automatically replicated tables across two or more AWS Regions for Multi-Master writes.
  • Amazon S3 Select and Glacier Select: Similar in spirit to Amazon Athena, these enable the retrieval of a subset of data from an S3 object using simple SQL expressions. Now you can grab data from compressed files stored on S3 or Glacier without having to download the entire file.
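As a hedged sketch of what S3 Select looks like from code, here is a Python helper that assembles the arguments for boto3’s `select_object_content` API. The bucket, key, and query below are hypothetical, and the actual AWS call is left commented out because it requires credentials and a real object:

```python
def build_select_request(bucket, key, expression):
    """Assemble keyword arguments for s3.select_object_content().

    Assumes a GZIP-compressed CSV with a header row; matching
    records come back serialized as JSON lines.
    """
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": expression,
        "InputSerialization": {
            "CSV": {"FileHeaderInfo": "USE"},
            "CompressionType": "GZIP",
        },
        "OutputSerialization": {"JSON": {}},
    }

# Hypothetical bucket/key: pull only the 500-status rows out of a compressed log.
params = build_select_request(
    "example-logs",
    "2017/12/requests.csv.gz",
    "SELECT s.path, s.status FROM S3Object s WHERE s.status = '500'",
)

# With AWS credentials configured, the call would look like:
# import boto3
# resp = boto3.client("s3").select_object_content(**params)
# for event in resp["Payload"]:
#     if "Records" in event:
#         print(event["Records"]["Payload"].decode())
```

The point of the feature is visible in the request: the filtering runs inside S3, so only the rows matching the SQL expression ever cross the network.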

3. Machine Learning Is the Next Battlefront

AWS has a worthy machine learning adversary in IBM Watson, and they announced several machine learning products intended to help them compete.

Amazon DeepLens was the main product announcement. DeepLens is an HD deep-learning enabled video camera that allows developers to easily get started in the world of machine vision. It was demonstrated with the simple “hot dog or not a hot dog” example, during which the camera indicates whether your plate of food contains a hot dog. Obviously, it has more important uses, but it was an effective demonstration. AWS wants to get developers building applications for DeepLens, which in turn gets more people using AWS products. The DeepLens product was so popular that people were literally fighting to get their hands on one: I heard rumors of a scuffle breaking out in the line waiting to see it.

AWS also announced Amazon Rekognition Video, a tool that lets you upload video to the cloud for real-time analysis to detect and recognize faces and objects in live streams. Rekognition can also analyze files stored in Amazon S3 to detect, track, recognize and extract faces, objects, and content.

In addition to those two major products, AWS announced a suite of services already offered by other vendors that will allow AWS to build greater inroads into the machine learning arena: Comprehend, which lets you analyze large quantities of documents; Translate, a language translation service; and Transcribe, a speech-to-text service. When these services are combined with the existing Amazon Lex and Polly offerings, AWS will be one of the major players—if not the major player—in the machine learning space.

As I said above, AWS made over 60 announcements during re:Invent 2017, far too many to cover in this blog post. You can find the full list of announcements here.

re:Invent 2017 was a well-attended and significant event, demonstrating both the growth of cloud computing and the rising dominance of AWS in the space. I look forward to seeing the new services and products put to use in the coming year, and I anticipate an even larger event in 2018 as AWS consolidates its position at the front of the cloud computing pack.

About the Author

Marc Weaver is a certified AWS solutions architect, who runs databasable, a cloud computing consultancy that specializes in AWS. He has spent over 15 years as a database administrator for investment banks such as Nomura, Commerzbank and Macquarie. Marc is also a cloud computing advisor for Simplilearn. He has authored courses on AWS solutions architecture and database migration.


Webinar Wrap-up: Edge Computing Vs. Cloud Computing

Sandilya CH

Published on Jan 16, 2018


Companies big and small are continually moving their applications to the cloud. More than 28 percent of an organization’s total IT budget is now set aside for cloud computing. Today, 70 percent of organizations have at least one application in the cloud, indicating that enterprises are realizing the benefits of cloud computing and steadily adopting it.

Even as companies and industry experts predict further growth for cloud computing, some experts believe that the cloud has reached the end of its run at the top and are betting on the growing popularity and benefits of edge computing.

In a fireside chat, Anand Narayanan, Chief Product Officer of Simplilearn, and Bernard Golden, an influential voice in cloud computing, discussed the current state of cloud computing and why edge is poised to become the future of IT transformation in companies. We’ve collected some nuggets from this conversation to help you gain advanced insights into edge computing and why it’s the next big thing in IT.

Watch the fireside chat video recording here.

Why Is Edge Computing Needed When Cloud Computing Is Available?

This is a pertinent question asked by many IT professionals. In the fireside chat, Bernard explains how edge computing helps in situations where organizations want to bypass the latency incurred in communicating information from a device across the network to a centralized computing system. He gives the example of a machine whose functioning is crucial to an organization: a delay in the machine’s decision-making caused by latency would result in losses. In such cases, organizations prefer edge computing, because smart devices with computation power are placed at the edge of the network. Such a device monitors a pre-defined set of metrics against tolerance levels; if a metric falls outside the prescribed tolerance, the device issues a warning signal and can shut the machine down within microseconds to avoid further losses.
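That edge-side monitoring loop can be sketched in a few lines of Python. The metric names and tolerance bands below are invented for illustration; a real edge device would read live sensor data and drive an actual shutdown circuit rather than return a string.

```python
# Hypothetical tolerance bands: (low, high) limits per monitored metric.
TOLERANCES = {
    "temperature_c": (10.0, 85.0),
    "vibration_mm_s": (0.0, 7.1),
}

def check_reading(metric, value):
    """Compare one reading to its tolerance band, entirely on the device."""
    low, high = TOLERANCES[metric]
    return "ok" if low <= value <= high else "shutdown"

def monitor(readings):
    """Process a batch of readings locally, with no round trip to a data center."""
    for metric, value in readings:
        if check_reading(metric, value) == "shutdown":
            return f"shutdown triggered by {metric}={value}"
    return "all metrics within tolerance"

print(monitor([("temperature_c", 72.0), ("vibration_mm_s", 9.3)]))
```

Because the comparison happens on the device itself, the decision takes microseconds instead of waiting on a network round trip to a central system.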

Cloud computing, by contrast, takes time, sometimes up to two seconds, to relay information to the centralized data center, delaying the decision-making process. That signal latency can translate into losses for the organization, which is why organizations in these situations prefer edge computing.

Edge Computing Vs. Cloud Computing—Which One’s Better?

First, it’s important to understand that cloud and edge computing are different, non-interchangeable technologies that cannot replace one another. Edge computing is used to process time-sensitive data, while cloud computing is used to process data that is not time-driven.

Besides latency, edge computing is preferred over cloud computing in remote locations, where there is limited or no connectivity to a centralized location. These locations require local storage, similar to a mini data center, with edge computing providing the perfect solution for it.

Edge computing is also beneficial to specialized and intelligent devices. While these devices are akin to PCs, they are not regular computing devices designed to perform multiple functions. These specialized computing devices are intelligent and respond to particular machines in specific ways. This specialization, however, can become a drawback for edge computing in industries whose workloads go beyond such narrow, device-specific responses.

What Does the Future of the IT Sector Look Like?

Though many companies adopting edge computing predict the end of cloud computing, Bernard points out that this claim is not substantiated, because there is currently no analytical framework to prove it. Edge computing is not the only solution for the challenges faced by IT vendors and organizations, and it does not handle all applications across every environment; cloud computing will therefore remain a crucial part of an organization’s IT infrastructure. To demonstrate this, Bernard cites the example of an IoT device with computing power attached to it, along with Azure functionality. The device-deployed code responds in real time by shutting down the IoT machine in case of a damaging failure condition, while the rest of the application runs in Azure. Thanks to edge computing, the million-dollar machine no longer depends on a round trip through the cloud for emergency response, yet it still works in harmony with cloud computing, which runs, deploys, and manages the IoT devices remotely. This shows that cloud computing will remain relevant and will work alongside edge computing to provide data analytics and real-time solutions for organizations.

If you have any questions about edge computing that are not answered in this webinar, share them in the comments section below.

About the Author

Sandilya is a Senior Product Manager with more than six years of experience in new product development and category and product management.