What Does the Google Anthos Announcement Mean?

Posted by Dustin Keib, Head of Cloud Enablement

May 29, 2019


The talk at Google Next 2019 centered on Google’s introduction of Anthos, an enterprise platform that lets customers of its managed container services manage multiple cloud or hybrid cloud deployments.

That sounds pretty cool. But what does this announcement really mean for the cloud community and those organizations already in the cloud? And what about those thinking about taking that first step in the journey toward migrating to the cloud? Is this the next cloud computing revolution?

A Game Changer Like Kubernetes?

In 2014, Google open-sourced the Kubernetes project — and the cloud was never the same. Kubernetes was a genuine game changer thanks to the flexibility it brought to orchestrating complex containerized microservices architectures in the cloud.

And it does this how?

Kubernetes is an open source container orchestration platform that allows large numbers of containers to work together to form complex applications and services. It reduces operational burden by providing the basic services those applications depend on — networking, storage and a watchdog that keeps services healthy.

With Kubernetes, you can:

  • Run many containers across different nodes (machines).
  • Scale up or down by adding or removing cluster nodes.
  • Maintain consistent storage and load distribution.
  • Ensure self-healing of services and pods in many failure states.
  • Create various deployment strategies including canary and blue/green.
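These capabilities are expressed declaratively. As a minimal sketch — the name `web` and the image tag are placeholders, not anything from the announcement — a Deployment covering several of the bullets above might look like:

```yaml
# Hypothetical Deployment: Kubernetes keeps three replicas of this pod
# running, rescheduling them onto healthy nodes if one fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # scale up or down by changing this value
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate      # replace pods gradually during an update
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17    # any container image works here
        ports:
        - containerPort: 80
        livenessProbe:       # the "watchdog": restart unhealthy containers
          httpGet:
            path: /
            port: 80
```

Scaling is then a one-liner — `kubectl scale deployment web --replicas=5` — and Kubernetes reconciles the cluster to match the declared state.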

Check out our Kubernetes 101 series for a deeper dive into what Kubernetes is and how it works.

So What is Anthos?

According to Google, Google Cloud’s new open platform “lets you run an app anywhere — simply, flexibly and securely. By embracing open standards, Anthos lets you run your applications — unmodified — on existing on-prem hardware or in the public cloud.”

Anthos’ hybrid functionality is available “both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE), and in your data center with GKE On-Prem. Anthos also lets you manage workloads running on third-party clouds like AWS and Azure, giving you the freedom to deploy, run and manage your applications on the cloud of your choice, without requiring administrators and developers to learn different environments and APIs.”

Essentially, Anthos targets customers that are hesitant to modify their older applications for the cloud, giving them a way to migrate those applications without changing their code. When coupled with GKE On-Prem on self-managed servers, Anthos lets customers run on their own servers, on Google’s servers — or even on cloud services provided by rivals Amazon Web Services and Microsoft Azure.

On top of all this, Google also announced Anthos Migrate, “which auto-migrates VMs from on-premises or other clouds, directly into containers in GKE with minimal effort. This unique technology enables you to migrate and modernize your infrastructure in one move, without front-end modifications to the original virtual machines (VM) or applications.” This keeps your IT team free to focus on managing and developing apps because it’s not bogged down managing infrastructure tasks like VM maintenance and operating system patching.

OK, this is all well and good, but what does it really mean?

Anthos Migrate attempts to containerize existing monolithic applications running in VMs using a plugin to VMware vRealize Suite in conjunction with a Google-modified version of Velostrata, a VM migration tool Google recently acquired. This creates an environment focused on moving contemporary microservices-based applications (greenfield) into Kubernetes while migrating existing VMs (brownfield) to containers. Meanwhile, applications running on non-x86 architectures, along with legacy apps, will continue to run in either physical or virtual machines.

We’ve all heard the container best practices: run a single process per container, keep your image lean, and don’t store data or logs inside a container. Developers need to be mindful of these guidelines while focusing on the architecture containers fit best: microservices.
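In a pod spec, those guidelines look something like the sketch below — the image, volume and claim names are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app                  # one process per container: the app itself
    image: example/app:1.0     # hypothetical lean, single-purpose image
    # Logs go to stdout/stderr, where the cluster's logging agent
    # collects them -- nothing is written inside the container.
    volumeMounts:
    - name: data
      mountPath: /var/lib/app  # persistent data lives on a volume,
  volumes:                     # not in the container's writable layer
  - name: data
    persistentVolumeClaim:
      claimName: app-data      # hypothetical pre-provisioned claim
```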

So what does all this tech-speak mean for those that will be using it?

What Does Anthos Mean for Customers?

If you’re a customer, you no doubt have to ask, “What does Anthos mean for me?” “Why should I care?” “Does this fit into my strategy?”

But before you can ask yourself those questions, you first must decide whether microservices are indeed a part of your strategy. If the answer is yes, and your application supports it, containers are a great way to build microservices. And if you’re using containers, an orchestration technology like Kubernetes makes defining and maintaining services a lot easier.

The challenge with containers is that you still need to monitor the health of the various services that comprise the app, just as with virtual or physical machines. You also need controls over things like network ingress/egress, scaling and service discovery.

Orchestration tackles these problems, and this is where Kubernetes can play a key role.

As previously noted, Kubernetes is a container and cluster management tool. It enables you to deploy containers to clusters (that is, groups of machines pooled to provide basic services such as networking and storage). Its inherent flexibility lets it work with different configurations of containers, allowing you to deploy highly scalable, fault-tolerant applications using containers in a microservices architecture.
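The service-discovery and load-distribution concerns mentioned earlier, for instance, are handled by a Service object. A sketch (the `app: web` label is an assumed match for a Deployment elsewhere in the cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # route traffic to any pod carrying this label
  ports:
  - port: 80        # pods get a stable in-cluster address and DNS name
    targetPort: 80  # that load-balances across healthy replicas
  type: ClusterIP   # cluster-internal; no client-side discovery needed
```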

What is Anthos’ Business Value?

Without question, Anthos has great business value.

For starters, it allows you to install GKE on an on-prem cluster and link it to the GKE environment in GCP, so you can deploy Kubernetes pods and services on-prem or in the cloud — and move services back and forth between the two. It also lets customers push new features to either environment and use them right away.

For example, let’s say you’ve invested in a large on-premises environment running Kubernetes, and you’re using Kubeflow to develop and train ML models. You want to train the models on-premises, since you’ve already invested that money and don’t want the environment to go to waste, but you want to run the model in production using GKE and Kubeflow. It’s important to your developers to have identical tooling in both the on-premises and cloud environments.

Thanks to Anthos, this is now possible: your developers can use the same tools and commands in the cloud that they use in the on-premises environment.

But the business value has to be aligned with a specific business environment or type of customer. Anthos isn’t a magic bullet that is ideal for everyone. Its hybrid functionality is not a simple plug-and-play platform that takes your older unmodified apps and “automagically” runs them on the cloud or on-prem hardware. It is still very important to work through the use-case specifics of your application’s architecture, and to plan the workflow and tooling (including Anthos) to support your digital modernization strategy.

As is the case with any new technology, you want to team up with a partner that will customize the solution to your unique needs. Consider a Google Cloud infrastructure partner that will think through your actual use case, work with you to stand up the proper microservices architecture and build a containerized solution that takes your needs and usage patterns into account. You’ll also want to address things like developer workflow, continuous integration/continuous delivery (CI/CD) and testing needs in your multi-cloud or hybrid cloud strategy.


Dustin Keib, Head of Cloud Enablement

Dustin is a software engineer, systems architect, and cloud scalability expert at Onix. His deep understanding of the full SaaS and PaaS stack comes from 20+ years of enterprise IT experience. Dustin is a Certified Google Cloud Solutions Architect, AWS Solutions Architect - Associate, and Puppet Professional and has a deep knowledge of infrastructure automation, containers, and CI/CD system design and implementation.
