KUBERNETES
Note
Availability of Kubernetes capabilities and articles across the CloudFerro Cloud regions:
- WAW3-1: Available now
- WAW3-2: Available now
- FRA1-2: Available now
- CF-2: Not available
- How to Create a Kubernetes Cluster Using CloudFerro Cloud OpenStack Magnum
- Default Kubernetes cluster templates in CloudFerro Cloud
- How To Install OpenStack and Magnum Clients for Command Line Interface to CloudFerro Cloud Horizon
- How To Issue Commands to the OpenStack and Magnum Servers
- What We Are Going To Cover
- Notes On Python Versions and Environments for Installation
- Prerequisites
- Step 1 Install the CLI for Kubernetes on OpenStack Magnum
- Step 2 How to Use the OpenStack Client
- The Help Command
- Step 4 How to Use the Magnum Client
- What To Do Next
- How To Use Command Line Interface for Kubernetes Clusters On CloudFerro Cloud OpenStack Magnum
- How To Access Kubernetes Cluster Post Deployment Using Kubectl On CloudFerro Cloud OpenStack Magnum
- What We Are Going To Cover
- Prerequisites
- The Plan
- Step 1 Create a directory to download the certificates
- Step 2A Download Certificates From the Server using the CLI commands
- Step 2B Download Certificates From the Server using Horizon commands
- Step 3 Verify That kubectl Has Access to the Cloud
- What To Do Next
- Using Dashboard To Access Kubernetes Cluster Post Deployment On CloudFerro Cloud OpenStack Magnum
- How To Create API Server LoadBalancer for Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum
- What We Are Going To Do
- Prerequisites
- How To Enable or Disable Load Balancer for Master Nodes
- One Master Node, No Load Balancer, and the Problem It Creates
- Step 1 Create a Cluster With One Master Node and No Load Balancer
- Step 2 Create Floating IP for Master Node
- Step 3 Create config File for Kubernetes Cluster
- Step 4 Swap Existing Floating IP Address for the Network Address
- Step 5 Add Parameter --insecure-skip-tls-verify=true to Make kubectl Work
- Creating Additional Nodegroups in Kubernetes Cluster on CloudFerro Cloud OpenStack Magnum
- The Benefits of Using Nodegroups
- What We Are Going To Cover
- Prerequisites
- Nodegroup Subcommands
- Step 1 Access the Current State of Clusters and Their Nodegroups
- Step 2 How to Create a New Nodegroup
- Step 3 Using role to Filter Nodegroups in the Cluster
- Step 4 Show Details of the Nodegroup Created
- Step 5 Delete the Existing Nodegroup
- Step 6 Update the Existing Nodegroup
- Step 7 Resize the Nodegroup
- Autoscaling Kubernetes Cluster Resources on CloudFerro Cloud OpenStack Magnum
- What We Are Going To Cover
- Prerequisites
- Horizontal Pod Autoscaler
- Vertical Pod Autoscaler
- Cluster Autoscaler
- Define Autoscaling When Creating a Cluster
- Autoscaling Node Groups at Run Time
- How Autoscaling Detects Upper Limit
- Autoscaling Labels for Clusters
- Create New Cluster Using CLI With Autoscaling On
- Nodegroups With Worker Role Will Be Automatically Autoscaled
- How to Obtain All Labels From Horizon Interface
- How To Obtain All Labels From the CLI
- Use Labels String When Creating Cluster in Horizon
- What To Do Next
- Volume-based vs Ephemeral-based Storage for Kubernetes Clusters on CloudFerro Cloud OpenStack Magnum
- What We Are Going To Cover
- Prerequisites
- Step 1 - Create Cluster Using --docker-volume-size
- Step 2 - Create Pod Manifest
- Step 3 - Create a Pod on Node 0 of dockerspace
- Step 4 - Executing bash Commands in the Container
- Step 5 - Saving a File Into Persistent Storage
- Step 6 - Check the File Saved in Previous Step
- What To Do Next
- Backup of Kubernetes Cluster using Velero
- Using Kubernetes Ingress on CloudFerro Cloud OpenStack Magnum
- Deploying Helm Charts on Magnum Kubernetes Clusters on CloudFerro Cloud
- Deploying HTTPS Services on Magnum Kubernetes in CloudFerro Cloud
- What We Are Going To Cover
- Prerequisites
- Step 1 Install cert-manager’s Custom Resource Definitions (CRDs)
- Step 2 Install the cert-manager Helm chart
- Step 3 Create a Deployment and a Service
- Step 4 Create and Deploy an Issuer
- Step 5 Associate the Domain with NGINX Ingress
- Step 6 Create and Deploy an Ingress Resource
- What To Do Next
- Installing JupyterHub on Magnum Kubernetes Cluster in CloudFerro Cloud
- Install and run Argo Workflows on CloudFerro Cloud Magnum Kubernetes
- Installing HashiCorp Vault on CloudFerro Cloud Magnum
- What We Are Going To Cover
- Prerequisites
- Step 1 Install CFSSL
- Step 2 Generate TLS certificates
- Step 3 Install Consul Helm chart
- Step 4 Install Vault Helm chart
- Sealing and unsealing the Vault
- Step 5 Unseal Vault
- Step 6 Run Vault UI
- Return livenessProbe to production value
- Troubleshooting
- What To Do Next
- HTTP Request-based Autoscaling on K8S using Prometheus and Keda on CloudFerro Cloud
- Create and access NFS server from Kubernetes on CloudFerro Cloud
- Deploy Keycloak on Kubernetes with a sample app on CloudFerro Cloud
- What We Are Going To Do
- Prerequisites
- Step 1 Deploy Keycloak on Kubernetes
- Step 2 Create Keycloak realm
- Step 3 Create and configure Keycloak client
- Step 4 Create a User in Keycloak
- Step 5 Retrieve client secret from Keycloak
- Step 6 Create a Flask web app utilizing Keycloak authentication
- Step 7 Test the application
- Install and run Dask on a Kubernetes cluster in CloudFerro Cloud
- Install and run NooBaa on a Kubernetes cluster in single- and multi-cloud environments on CloudFerro Cloud
- Private container registries with Harbor on CloudFerro Cloud Kubernetes
- Benefits of using your own private container registry
- What We Are Going To Cover
- Prerequisites
- Deploy Harbor private registry with Bitnami-Harbor Helm chart
- Access Harbor from browser
- Associate the A record of your domain with Harbor’s IP address
- Create a project in Harbor
- Create a Dockerfile for our custom image
- Ensure trust from our local Docker instance
- Build our image locally
- Upload a Docker image to your Harbor instance
- Download a Docker image from your Harbor instance
- Deploying vGPU workloads on CloudFerro Cloud Kubernetes
- What We Are Going To Cover
- Prerequisites
- Scenario 1 - Add vGPU nodes as a nodegroup on non-GPU Kubernetes clusters created after June 21st 2023
- Scenario 2 - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created before June 21st 2023
- Scenario 3 - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup
- Add non-GPU nodegroup to a GPU-first cluster
- Kubernetes cluster observability with Prometheus and Grafana on CloudFerro Cloud
- Enable Kubeapps app launcher on CloudFerro Cloud Magnum Kubernetes cluster
- Install GitLab on CloudFerro Cloud Kubernetes
- Sealed Secrets on CloudFerro Cloud Kubernetes
- CI/CD pipelines with GitLab on CloudFerro Cloud Kubernetes - building a Docker image
- What We Are Going To Cover
- Prerequisites
- Step 1 Add your public key to GitLab and access GitLab from your command line
- Step 2 Create project in GitLab and add sample application code
- Step 3 Define environment variables with your DockerHub coordinates in GitLab
- Step 4 Create a pipeline to build your app’s Docker image using Kaniko
- Step 5 Trigger pipeline build
- What To Do Next
- How to create Kubernetes cluster using Terraform on CloudFerro Cloud
- GitOps with Argo CD on CloudFerro Cloud Kubernetes
- What We Are Going To Cover
- Prerequisites
- Step 1 Install Argo CD
- Step 2 Access Argo CD from your browser
- Step 3 Create a Git repository
- Step 4 Download Flask application
- Step 5 Push your app deployment configurations
- Step 6 Create Argo CD application resource
- Step 7 Deploy Argo CD application
- Step 8 View the deployed resources
- What To Do Next