Modernizing microservices with Cloud Run for Anthos
a.k.a. how to get serverless in your Kubernetes cluster

Anthos: Google Cloud Platform, On-Premises, Public Cloud
• Anthos GKE deployed on VMware
• Hub / Connect
• Anthos Config Management
• Anthos Service Mesh
• GKE
• Anthos GKE deployed on AWS
• Kubernetes Cluster
• GCP Marketplace
• Anthos Migrate
• Cloud Run on Anthos

Serverless
Operational model:
• No infra management
• No ops for scaling
• Managed security
• Pay per usage (per request, etc.)
Programming model:
• Service-based
• Request/event driven
• Stateless applications

Serverless > Functions
• Compute
• Data Analytics
• ML & AI
• Database & Storage
• Smart assistants & chat
• DevOps
• Messaging

Serverless philosophy: efficient developers + efficient operators.
Developers care about velocity and reproducibility, not about doing infra.

Cloud Run
• Run any stateless container on Google’s fully managed infrastructure
• Container image to production URL in a few seconds
• Run services in any language or framework
• Fully managed, rapid autoscaling, pay per request

What we just saw
✔ Legacy application deployment
✔ Fully managed, rapid autoscaling, scale-to-zero
✔ Production-ready, secured (HTTPS) endpoint
What if you want these on your Kubernetes cluster?

Serverless, on your terms
Cloud Run (fully managed)
• Serverless dev/operator experience
• Runs on Google’s infrastructure
• Pay-per-request
Cloud Run for Anthos
• Serverless developer experience
• Runs in your Anthos/GKE cluster (GKE on GCP or GKE on-prem), next to your Kubernetes workloads
• Customizable/pluggable for your needs

Cloud Run for Anthos stack: developers and operators work through the UI, CLI, or YAML against the Knative API; Cloud Run for Anthos runs Knative on top of GKE (Kubernetes).

Knative
• Open source API and implementation that codifies "serverless on Kubernetes"
• Adds capabilities to Kubernetes to run stateless microservices more effectively
• Heavily customizable and pluggable
• Managed for you, with Cloud Run for Anthos
• Has a strong community, backed by Google, Red Hat, IBM, SAP and other contributors
• knative.dev

Knative enhances Kubernetes: Autoscaling
• Kubernetes: memory/CPU-based autoscaling (slow)
• Knative: rapid, request-oriented autoscaling; handles traffic spikes

Knative enhances Kubernetes: Scale to zero
• Kubernetes: N/A
• Knative: scales the application to 0 if no requests are coming, and activates it (0→1) on the next request

Knative enhances Kubernetes: Load balancing
• Kubernetes: connection-based load balancing
• Knative: per-request load balancing and traffic splitting (blue/green deployments)

What we just saw
✔ Same developer/ops experience as the fully-managed Cloud Run
✔ Knative installation, managed for you by Cloud Run for Anthos on GKE
✔ Traffic splitting, without writing YAML files (a sketch of the equivalent Knative Service YAML follows at the end of this section)
✔ Knative is still Kubernetes

Migrating Kubernetes Deployments to Cloud Run
What’s a good fit for Cloud Run?
✓ Stateless applications (microservices, frontends, event handlers, queue processing)
✓ Listens on a port number with HTTP or gRPC
✓ Ideally, doesn’t take too long to start up and to process requests

Serverless Eventing with Cloud Run and Kafka (see the KafkaSource sketch below)
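
The scale-to-zero and traffic-splitting behavior described above can also be expressed directly against the Knative API that Cloud Run for Anthos manages. Below is a minimal sketch of a Knative Service manifest; the service name, image path, revision names, and autoscaling bounds are illustrative assumptions, not values from the talk.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # hypothetical service name
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Knative Pod Autoscaler bounds: minScale "0" allows scale-to-zero,
        # so the revision is activated again (0 -> 1) on the next request.
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: gcr.io/my-project/hello:latest   # hypothetical image
          ports:
            - containerPort: 8080                 # the app must listen on this port (HTTP or gRPC)
  traffic:
    # Per-request traffic splitting between two revisions (blue/green or canary).
    - revisionName: hello-00001    # previously deployed revision (assumed name)
      percent: 90
    - revisionName: hello-00002    # new revision (assumed name)
      percent: 10
```

With Cloud Run for Anthos you typically don't write this YAML yourself: the same split can be driven from the Cloud Console or the gcloud CLI, and the platform maintains the equivalent Knative objects in your GKE cluster.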
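
For the "Serverless Eventing with Cloud Run and Kafka" part, a common building block on the Knative side is a KafkaSource that forwards Kafka records to a Knative Service as CloudEvents over HTTP. The sketch below assumes Knative Eventing with the Kafka source component installed in the cluster; the API version depends on the installed release, and the broker address, topic, and consumer group are made-up examples.

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: orders-source                 # hypothetical name
spec:
  consumerGroup: orders-consumer      # hypothetical consumer group
  bootstrapServers:
    - my-kafka-bootstrap.kafka:9092   # hypothetical Kafka bootstrap address
  topics:
    - orders                          # hypothetical topic
  sink:
    ref:
      # Deliver events to the Knative Service from the previous sketch.
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello
```

Because the sink receives events as plain HTTP requests, the receiving service scales with the event volume, including from zero when the topic is idle.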