Jumpstart ArcBox for DevOps
ArcBox for DevOps is a special “flavor” of ArcBox that is intended for users who want to experience Azure Arc-enabled Kubernetes capabilities in a sandbox environment.
Use cases
- Sandbox environment for getting hands-on with Azure Arc technologies and Azure Arc-enabled Kubernetes landing zone accelerator
- Accelerator for Proof-of-concepts or pilots
- Training solution for Azure Arc skills development
- Demo environment for customer presentations or events
- Rapid integration testing platform
- Infrastructure-as-code and automation template library for building hybrid cloud management solutions
Azure Arc capabilities available in ArcBox for DevOps
Azure Arc-enabled Kubernetes
ArcBox for DevOps deploys two Kubernetes clusters to give you multiple options for exploring Azure Arc-enabled Kubernetes capabilities and potential integrations.
- ArcBox-CAPI-Data - A single-node Rancher K3s cluster, which is transformed into a Cluster API management cluster using the Cluster API Provider for Azure (CAPZ). A workload cluster (ArcBox-CAPI-Data) is then deployed onto the management cluster and onboarded as an Azure Arc-enabled Kubernetes resource. ArcBox automatically deploys multiple GitOps configurations on this cluster, giving you an easy way to get started exploring GitOps capabilities.
- ArcBox-K3s - A single-node Rancher K3s cluster running on an Azure virtual machine. This cluster is connected to Azure as an Azure Arc-enabled Kubernetes resource. ArcBox provides PowerShell scripts that you can run manually to apply GitOps configurations on this cluster.
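Both clusters are onboarded for you by the ArcBox automation. For reference, connecting an existing cluster to Azure Arc yourself is a short Az CLI operation; a minimal sketch, assuming your kubeconfig already points at the target cluster (the resource group name is a placeholder):

# One-time: add the Az CLI extension for Azure Arc-enabled Kubernetes
az extension add --name connectedk8s
# Onboard the cluster in the current kubeconfig context as an Azure Arc-enabled Kubernetes resource
az connectedk8s connect --name ArcBox-K3s --resource-group <your resource group>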
Sample applications
ArcBox for DevOps deploys two sample applications on the ArcBox-CAPI-Data cluster. The cluster has multiple GitOps configurations that deploy and configure the sample apps. You can use your own fork of the sample applications GitHub repo to experiment with GitOps configuration flows.
The sample applications included in ArcBox are:
- Hello-Arc - A simple Node.js web application. ArcBox will deploy three Kubernetes pod replicas of the Hello-Arc application in the hello-arc namespace onto the ArcBox-CAPI-Data cluster.
- Bookstore - A sample microservices Golang (Go) application. ArcBox will deploy the following five different Kubernetes pods as part of the Bookstore app:
- bookbuyer is an HTTP client making requests to bookstore.
- bookstore is a server, which responds to HTTP requests. It is also a client making requests to the bookwarehouse service.
- bookwarehouse is a server and should respond only to bookstore.
- mysql is a MySQL database only reachable by bookwarehouse.
- bookstore-v2 is the same container image as the first bookstore, but for the Open Service Mesh (OSM) traffic split scenario it is treated as a new version of the app that we need to upgrade to.
The bookbuyer, bookstore, and bookwarehouse pods will be in separate Kubernetes namespaces with the same names. mysql will be in the bookwarehouse namespace. bookstore-v2 will be in the bookstore namespace.
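Once the logon scripts complete, you can verify this pod and namespace layout from the ArcBox-Client VM. A quick check, assuming the default ArcBox kubeconfig context names:

# Switch to the ArcBox-CAPI-Data workload cluster context
kubectx arcbox-capi
# Each service runs in a namespace of the same name
kubectl get pods --namespace bookbuyer
kubectl get pods --namespace bookwarehouse   # also contains the mysql pod
kubectl get pods --namespace bookstore       # contains bookstore and bookstore-v2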
Open Service Mesh (OSM) integration
ArcBox deploys OSM by installing the Open Service Mesh cluster extension on the ArcBox-CAPI-Data cluster. The Bookstore application namespaces are added to the OSM control plane, and each new pod in the service mesh is injected with an Envoy sidecar container.
OSM is a lightweight, extensible, cloud-native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
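ArcBox installs this extension for you. For reference, a manual installation of the same cluster extension with Az CLI would look roughly like this (the resource group and extension instance name are placeholders):

# Install the Open Service Mesh cluster extension on an Arc-enabled cluster
az k8s-extension create \
  --resource-group <your resource group> \
  --cluster-name ArcBox-CAPI-Data \
  --cluster-type connectedClusters \
  --extension-type Microsoft.openservicemesh \
  --name osm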
GitOps
GitOps on Azure Arc-enabled Kubernetes uses Flux. Flux is deployed by installing the Flux extension on the Kubernetes cluster. Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (such as Git repositories) and automating updates to the configuration when there is new code to deploy. Flux provides support for common file sources (Git and Helm repositories, Buckets) and template types (YAML, Helm, and Kustomize).
ArcBox deploys five GitOps configurations onto the ArcBox-CAPI-Data cluster:
- Cluster scope config to deploy NGINX-ingress controller.
- Cluster scope config to deploy the “Bookstore” application.
- Namespace scope config to deploy the “Bookstore” application Role-based access control (RBAC).
- Namespace scope config to deploy the “Bookstore” application open service mesh traffic split policies.
- Namespace scope config to deploy the “Hello-Arc” web application.
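These configurations are created by the ArcBox automation. For reference, a namespace-scoped configuration such as the Hello-Arc one can be created manually with a command along these lines (the repository URL and kustomization path here are illustrative, not necessarily the exact values ArcBox uses):

# Create a Flux (GitOps) configuration that syncs the hello-arc app from a Git repository
az k8s-configuration flux create \
  --resource-group <your resource group> \
  --cluster-name ArcBox-CAPI-Data \
  --cluster-type connectedClusters \
  --name config-helloarc \
  --namespace hello-arc \
  --scope namespace \
  --url https://github.com/<your GitHub user>/azure-arc-jumpstart-apps \
  --branch main \
  --kustomization name=app path=./hello-arc/yaml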
Key Vault integration
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume.
ArcBox deploys Azure Key Vault as part of the automation scripts that run after you log into ArcBox-Client for the first time. The automation integrates the ArcBox-CAPI-Data cluster with Azure Key Vault by deploying the Azure Key Vault Secrets Provider extension.
A self-signed certificate is synced from the Key Vault and configured as the secret for the Kubernetes ingress for the Bookstore and Hello-Arc applications.
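Under the hood, the Secrets Store CSI Driver performs this sync through a SecretProviderClass object. The trimmed sketch below shows what such an object can look like; the names and Key Vault references are illustrative rather than the exact manifest ArcBox applies.

# Illustrative SecretProviderClass - apply with: kubectl apply -f <file> --namespace hello-arc
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: ingress-tls-sync
spec:
  provider: azure
  secretObjects:                  # mirror the mounted certificate into a regular Kubernetes TLS secret
    - secretName: ingress-cert
      type: kubernetes.io/tls
      data:
        - objectName: ingress-cert
          key: tls.key
        - objectName: ingress-cert
          key: tls.crt
  parameters:
    keyvaultName: <your key vault name>
    tenantId: <your Azure tenant id>
    objects: |
      array:
        - |
          objectName: ingress-cert
          objectType: secret      # 'secret' returns the full PEM (certificate plus private key)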
Microsoft Defender for Cloud / k8s integration
ArcBox deploys several management and operations services that work with ArcBox’s Azure Arc resources. One of these services is Microsoft Defender for Cloud, which is enabled by installing the Defender extension on your Kubernetes cluster to start collecting security-related logs and telemetry.
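The Defender extension is installed for you by the ArcBox automation (via an Azure Policy assignment). For reference, a manual installation with Az CLI would look roughly like this:

# Install the Microsoft Defender for Cloud extension on an Arc-enabled cluster
az k8s-extension create \
  --resource-group <your resource group> \
  --cluster-name ArcBox-CAPI-Data \
  --cluster-type connectedClusters \
  --extension-type microsoft.azuredefender.kubernetes \
  --name azuredefender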
Hybrid Unified Operations
ArcBox deploys several management and operations services that work with ArcBox’s Azure Arc resources. These resources include an Azure Automation account, an Azure Log Analytics workspace, an Azure Monitor workbook, Azure Policy assignments for deploying Kubernetes cluster monitoring and security extensions on the included clusters, an Azure Policy assignment for adding tags to resources, and a storage account used for staging resources needed for the deployment automation.
ArcBox Azure Consumption Costs
ArcBox resources generate Azure consumption charges from the underlying Azure resources including core compute, storage, networking, and auxiliary services. Note that Azure consumption costs vary depending on the region where ArcBox is deployed. Be mindful of your ArcBox deployments and ensure that you disable or delete ArcBox resources when not in use to avoid unwanted charges. Please see the Jumpstart FAQ for more information on consumption costs.
Deployment Options and Automation Flow
ArcBox provides multiple paths for deploying and configuring ArcBox resources. Deployment options include:
- Azure portal
- ARM template via Azure CLI
- Azure Bicep
- HashiCorp Terraform
ArcBox uses an advanced automation flow to deploy and configure all necessary resources with minimal user interaction. The previous diagrams provide an overview of the deployment flow. A high-level summary of the deployment is:
- User deploys the primary ARM template (azuredeploy.json), Bicep file (main.bicep), or Terraform plan (main.tf). These objects contain several nested objects that will run simultaneously.
- Client virtual machine ARM template/plan - deploys the Client Windows VM. This is a Windows Server VM that comes preconfigured with kubeconfig files to work with the two Kubernetes clusters, as well as multiple tools such as VSCode to make working with ArcBox simple and easy.
- Storage account template/plan - used for staging files in automation scripts.
- Management artifacts template/plan - deploys Azure Log Analytics workspace, its required Solutions, and Azure Policy artifacts.
- User remotes into Client Windows VM, which automatically kicks off multiple scripts that:
- Deploy the OSM extension on the ArcBox-CAPI-Data cluster, create the application namespaces, and add the namespaces to the OSM control plane.
- Apply the five GitOps configurations on the ArcBox-CAPI-Data cluster to deploy the NGINX ingress controller, the Hello-Arc web application, the Bookstore application, and the Bookstore RBAC/OSM configurations.
- Create a certificate with the DNS name arcbox.devops.com and import it to Azure Key Vault.
- Deploy the Azure Key Vault Secrets Provider extension on the ArcBox-CAPI-Data cluster.
- Configure ingress for the Hello-Arc and Bookstore applications with a self-signed TLS certificate from Azure Key Vault.
- Deploy an Azure Monitor workbook that provides example reports and metrics for monitoring and visualizing ArcBox’s various components.
Prerequisites
- Install or update Azure CLI to version 2.36.0 or above. Use the below command to check your currently installed version.

az --version
- Log in to Az CLI using the az login command.
- Ensure that you have selected the correct subscription you want to deploy ArcBox to by using the az account list --query "[?isDefault]" command. If you need to adjust the active subscription used by Az CLI, follow this guidance.
- ArcBox must be deployed to one of the following regions. Deploying ArcBox outside of these regions may result in unexpected behavior or deployment errors.
- East US
- East US 2
- Central US
- West US 2
- North Europe
- West Europe
- France Central
- UK South
- Australia East
- Japan East
- Korea Central
- Southeast Asia
- ArcBox DevOps requires 22 DSv4-series vCPUs when deploying with default parameters such as VM series/size. Ensure you have sufficient vCPU quota available in your Azure subscription and the region where you plan to deploy ArcBox. You can use the below Az CLI command to check your vCPU utilization.

az vm list-usage --location <your location> --output table
- Some Azure subscriptions may also have SKU restrictions that prevent deployment of specific Azure VM sizes. You can check for SKU restrictions that affect ArcBox by using the below commands:

az vm list-skus --location <your location> --size Standard_D2s --all --output table
az vm list-skus --location <your location> --size Standard_D4s --all --output table
In the screenshots below, the first screenshot shows a subscription with no SKU restrictions in West US 2. The second shows a subscription with SKU restrictions on D4s_v4 in the East US 2 region. In this case, ArcBox will not be able to deploy due to the restriction.
- Fork the sample applications GitHub repo to your own GitHub account. You will use this forked repo to make changes to the sample apps that will be applied using GitOps configurations. The name of your GitHub account is passed as the githubUser parameter to the template files, so take note of your GitHub user name.
- Create an Azure service principal (SP). To deploy ArcBox, an Azure service principal assigned multiple role-based access control (RBAC) roles is required:
  - “Contributor” - Required for provisioning Azure resources
  - “Security admin” - Required for installing the Microsoft Defender for Cloud Azure Arc-enabled Kubernetes extension and dismissing alerts
  - “Security reader” - Required for being able to view Azure Arc-enabled Kubernetes Cloud Defender extension findings
To create it, log in to your Azure account and run the below commands (this can also be done in Azure Cloud Shell):

az login
subscriptionId=$(az account show --query id --output tsv)
az ad sp create-for-rbac -n "<Unique SP Name>" --role "Contributor" --scopes /subscriptions/$subscriptionId
az ad sp create-for-rbac -n "<Unique SP Name>" --role "Security admin" --scopes /subscriptions/$subscriptionId
az ad sp create-for-rbac -n "<Unique SP Name>" --role "Security reader" --scopes /subscriptions/$subscriptionId
For example:
subscriptionId=$(az account show --query id --output tsv)
az ad sp create-for-rbac -n "JumpstartArcBox" --role "Contributor" --scopes /subscriptions/$subscriptionId
az ad sp create-for-rbac -n "JumpstartArcBox" --role "Security admin" --scopes /subscriptions/$subscriptionId
az ad sp create-for-rbac -n "JumpstartArcBox" --role "Security reader" --scopes /subscriptions/$subscriptionId
Output should look similar to this.
{
  "appId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "displayName": "JumpstartArcBox",
  "password": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "tenant": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
NOTE: If you create multiple subsequent role assignments on the same service principal, your client secret (password) will be destroyed and recreated each time. Therefore, make sure you grab the correct password.
NOTE: The Jumpstart scenarios are designed with ease of use in mind while adhering to security-related best practices whenever possible. It is optional but highly recommended to scope the service principal to a specific Azure subscription and resource group, and to consider using a less privileged service principal account.
- Generate an SSH key (or use an existing SSH key). The SSH key is used to configure secure access to the Linux virtual machines that are used to run the Kubernetes clusters.

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
Deployment Option 1: Azure portal
Deployment Option 2: ARM template with Azure CLI
- Clone the Azure Arc Jumpstart repository.

git clone https://github.com/microsoft/azure_arc.git
- Edit the azuredeploy.parameters.json ARM template parameters file and supply some values for your environment:
  - sshRSAPublicKey - Your SSH public key
  - spnClientId - Your Azure service principal id
  - spnClientSecret - Your Azure service principal secret
  - spnTenantId - Your Azure tenant id
  - windowsAdminUsername - Client Windows VM Administrator name
  - windowsAdminPassword - Client Windows VM Password. The password must have 3 of the following: 1 lower case character, 1 upper case character, 1 number, and 1 special character, and must be between 12 and 123 characters long.
  - logAnalyticsWorkspaceName - Name for the ArcBox Log Analytics workspace
  - flavor - Use the value “DevOps” to specify that you want to deploy the DevOps flavor of ArcBox
  - githubUser - The name of the GitHub account where you forked the Sample Apps repo
- Now you will deploy the ARM template. Navigate to the local cloned deployment folder and run the below commands:

az group create --name <Name of the Azure resource group> --location <Azure Region>
az deployment group create \
  --resource-group <Name of the Azure resource group> \
  --template-file azuredeploy.json \
  --parameters azuredeploy.parameters.json
Deployment Option 3: Azure Bicep deployment via Azure CLI
- Clone the Azure Arc Jumpstart repository.

git clone https://github.com/microsoft/azure_arc.git
- Upgrade to the latest Bicep version.

az bicep upgrade
- Edit the main.parameters.json template parameters file and supply some values for your environment:
  - sshRSAPublicKey - Your SSH public key
  - spnClientId - Your Azure service principal id
  - spnClientSecret - Your Azure service principal secret
  - spnTenantId - Your Azure tenant id
  - windowsAdminUsername - Client Windows VM Administrator name
  - windowsAdminPassword - Client Windows VM Password. The password must have 3 of the following: 1 lower case character, 1 upper case character, 1 number, and 1 special character, and must be between 12 and 123 characters long.
  - logAnalyticsWorkspaceName - Name for the ArcBox Log Analytics workspace
  - flavor - Use the value “DevOps” to specify that you want to deploy the DevOps flavor of ArcBox
  - deployBastion - Set to true if you want to use Azure Bastion to connect to ArcBox-Client
  - githubUser - The name of the GitHub account where you forked the Sample Apps repo
- Now you will deploy the Bicep file. Navigate to the local cloned deployment folder and run the below commands:

az login
az group create --name "<resource-group-name>" --location "<preferred-location>"
az deployment group create -g "<resource-group-name>" -f "main.bicep" -p "main.parameters.json"
Deployment Option 4: HashiCorp Terraform Deployment
- Clone the Azure Arc Jumpstart repository.

git clone https://github.com/microsoft/azure_arc.git
- Download and install the latest version of Terraform here.

NOTE: Terraform 1.x or higher is supported for this deployment. Tested with Terraform v1.0.9+.
- Create a terraform.tfvars file in the root of the terraform folder and supply some values for your environment.

azure_location      = "westus2"
resource_group_name = "ArcBoxDevOps"
spn_client_id       = "1414133c-9786-53a4-b231-f87c143ebdb1"
spn_client_secret   = "fakeSecretValue123458125712ahjeacjh"
spn_tenant_id       = "33572583-d294-5b56-c4e6-dcf9a297ec17"
client_admin_ssh    = "C:/Temp/rsa.pub"
deployment_flavor   = "DevOps"
deploy_bastion      = false
github_username     = "GitHubUser"
- Variable Reference:
  - azure_location - Azure location code (e.g. ‘eastus’, ‘westus2’, etc.)
  - resource_group_name - Resource group which will contain all of the ArcBox artifacts
  - spn_client_id - Your Azure service principal id
  - spn_client_secret - Your Azure service principal secret
  - spn_tenant_id - Your Azure tenant id
  - client_admin_ssh - SSH public key path, used for Linux VMs
  - deployment_flavor - Use the value “DevOps” to specify that you want to deploy the DevOps flavor of ArcBox
  - deploy_bastion - Set to true if you want to use Azure Bastion to connect to ArcBox-Client
  - client_admin_username - Admin username for Windows & Linux VMs
  - client_admin_password - Admin password for Windows VMs. The password must have 3 of the following: 1 lower case character, 1 upper case character, 1 number, and 1 special character, and must be between 12 and 123 characters long.
  - workspace_name - Unique name for the ArcBox Log Analytics workspace
  - github_username - The name of the GitHub account where you forked the Sample Apps repo
NOTE: Any variables in bold are required. If any optional parameters are not provided, defaults will be used.
- Now you will deploy the Terraform plan. Navigate to the local cloned deployment folder and run the below commands:

terraform init
terraform plan -out=infra.out
terraform apply "infra.out"
- Example output from terraform init:
- Example output from terraform plan -out=infra.out:
- Example output from terraform apply "infra.out":
Start post-deployment automation
Once your deployment is complete, you can open the Azure portal and see the ArcBox resources inside your resource group. You will be using the ArcBox-Client Azure virtual machine to explore various capabilities of ArcBox such as GitOps configurations and Key Vault integration. You will need to remotely access ArcBox-Client.
NOTE: For enhanced ArcBox security posture, RDP (3389) and SSH (22) ports are not open by default in ArcBox deployments. You will need to create a network security group (NSG) rule to allow network access to port 3389, or use Azure Bastion or Just-in-Time (JIT) access to connect to the VM.
Connecting to the ArcBox Client virtual machine
Various options are available to connect to the ArcBox-Client VM, depending on the parameters you supplied during deployment.
- RDP - available after configuring access to port 3389 on the ArcBox-NSG, or by enabling Just-in-Time (JIT) access.
- Azure Bastion - available if true was the value of your deployBastion parameter during deployment.
Connecting directly with RDP
By design, ArcBox does not open port 3389 on the network security group. Therefore, you must create an NSG rule to allow inbound 3389.
- Open the ArcBox-NSG resource in the Azure portal and click “Add” to add a new rule.
- Specify the IP address that you will be connecting from and select RDP as the service with “Allow” set as the action. You can retrieve your public IP address by accessing https://icanhazip.com or https://whatismyip.com.
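If you prefer the CLI over the portal, an equivalent rule can be created with a command along these lines (the rule name and priority are illustrative):

# Allow RDP (3389) to the ArcBox client VM from your public IP only
az network nsg rule create \
  --resource-group <your resource group> \
  --nsg-name ArcBox-NSG \
  --name AllowRDP \
  --priority 1009 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes <your public IP> \
  --destination-port-ranges 3389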
Connect using Azure Bastion
- If you have chosen to deploy Azure Bastion in your deployment, use it to connect to the VM.

NOTE: When using Azure Bastion, the desktop background image is not visible. Therefore some screenshots in this guide may not exactly match your experience if you are connecting to ArcBox-Client with Azure Bastion.
Connect using just-in-time access (JIT)
If you already have Microsoft Defender for Cloud enabled on your subscription and would like to use JIT to access the Client VM, use the following steps:
- In the Client VM configuration pane, enable just-in-time access. This will enable the default settings.
The Logon scripts
- Once you log into the ArcBox-Client VM, multiple automated scripts will open and start running. These scripts usually take 10-20 minutes to finish, and once completed, the script windows will close automatically. At this point, the deployment is complete.
- Deployment is complete! Let’s begin exploring the features of Azure Arc-enabled Kubernetes with ArcBox for DevOps!
Using ArcBox
After deployment is complete, it’s time to start exploring ArcBox. Most interactions with ArcBox will take place either from Azure itself (Azure portal, CLI, or similar) or from inside the ArcBox-Client virtual machine. When remoted into the VM, here are some things to try:
Key Vault integration
ArcBox uses Azure Key Vault to store the TLS certificate used by the sample Hello-Arc and OSM applications. Here are some things to try to explore this integration with Key Vault further:
- Configure Azure Key Vault to allow your access to certificates.
  - Navigate to the deployed Key Vault in the Azure portal and open the “Access Policies” blade.
  - Click “Add access policy” and, in the dropdown for “Certificate Permissions”, check Get and List.
  - Next to “Select principal”, click “None selected”, search for your user name, and select it. Click “Add”.
  - Click “Save” to commit the changes.
- Open the extensions tab of the ArcBox-CAPI-Data cluster resource in the Azure portal. You can now see that the Azure Key Vault Secrets Provider, Flux (GitOps), and Open Service Mesh extensions are installed.
- Click on the CAPI Hello-Arc icon on the desktop to open the Hello-Arc application and validate the ingress certificate arcbox.devops.com used from the Key Vault.
- Validate that the Key Vault certificate is being used by comparing the certificate thumbprint reported in the browser with your certificate thumbprint in Key Vault. Click on the lock icon and then select “Connection is secure”.
  - Click on the certificate icon.
  - Open the “Details” tab to view the thumbprint of the certificate.
  - Browse to the certificate “ingress-cert” in Key Vault to view and compare the thumbprint.
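You can also read the certificate details from the CLI instead of the portal; a quick check, assuming the certificate keeps its ingress-cert name:

# The certificate thumbprint is included in the JSON output
az keyvault certificate show --vault-name <your key vault name> --name ingress-cert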
GitOps configurations
ArcBox deploys multiple GitOps configurations on the ArcBox-CAPI-Data workload cluster. Click on the GitOps tab of the cluster to explore these configurations:
- You can now see the five GitOps configurations on the ArcBox-CAPI-Data cluster (the same list is also available from the CLI, as shown after this list):
- config-nginx to deploy NGINX-ingress controller.
- config-bookstore to deploy the “Bookstore” application.
- config-bookstore-rbac to deploy the “Bookstore” application RBAC.
- config-bookstore-osm to deploy the “Bookstore” application open service mesh traffic split policy.
- config-helloarc to deploy the “Hello Arc” web application.
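The same list is available from the CLI:

# List the Flux configurations on the cluster and their compliance state
az k8s-configuration flux list \
  --resource-group <your resource group> \
  --cluster-name ArcBox-CAPI-Data \
  --cluster-type connectedClusters \
  --output table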
- We have installed the “Tab Auto Refresh” extension for the browser. This will help you see real-time changes to the application in an automated way. Open the “CAPI Hello-Arc” application and configure the “Tab Auto Refresh” browser extension to refresh every 3 seconds.
- To show the GitOps flow for the Hello-Arc application, open two side-by-side windows:
  - A browser window with the Hello-Arc application open at the https://arcbox.devops.com/ URL.
  - PowerShell running the kubectl get pods -n hello-arc -w command.

  The result should look like this:
- In your fork of the “Azure Arc Jumpstart Apps” GitHub repository, open the hello_arc.yaml file (/hello-arc/yaml/hello_arc.yaml), change the text under the “MESSAGE” section, and commit the change.
- Upon committing the changes, notice how the Kubernetes pods rolling upgrade begins. Once the pods are up and running, refresh the browser; the new “Hello Arc” application version will show the new message, confirming the rolling upgrade completed and the GitOps flow succeeded.
RBAC configurations
ArcBox deploys Kubernetes RBAC configuration on the bookstore application for limiting access to deployed Kubernetes resources. You can explore this configuration by following these steps:
- Show the Kubernetes RBAC Role and Role Binding applied using a GitOps configuration.
- Review the RBAC configuration applied to the ArcBox-CAPI-Data cluster.
- Show the bookstore namespace Role and Role Binding.

kubectl --namespace bookstore get role
kubectl --namespace bookstore get rolebindings.rbac.authorization.k8s.io
- Validate the RBAC role to get the pods as user “Jane”.

kubectl --namespace bookstore get pods --as=jane
- To test the RBAC role assignment, try to delete the pods as user “Jane”. The operation fails because Jane is assigned the “pod-reader” role, which only allows the get, watch, and list operations in the bookstore namespace and does not allow delete operations.

$pod=kubectl --namespace bookstore get pods --selector=app=bookstore --output="jsonpath={.items..metadata.name}"
kubectl --namespace bookstore delete pods $pod --as=jane
- Optionally, you can use the auth can-i command to validate RBAC access.

kubectl --namespace bookstore auth can-i get pods --as=jane
kubectl --namespace bookstore auth can-i delete pods --as=jane
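For context, the “pod-reader” role granted to Jane follows the standard Kubernetes Role/RoleBinding pattern. A minimal sketch of what such a pair looks like (illustrative; the actual manifests are applied by the config-bookstore-rbac GitOps configuration and may differ in detail):

# Illustrative Role/RoleBinding pair granting read-only pod access in the bookstore namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: bookstore
rules:
  - apiGroups: [""]                 # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: bookstore
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io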
OSM Traffic Split using GitOps
ArcBox uses a GitOps configuration on the OSM bookstore application to split traffic to the bookstore APIs using weighted load balancing. Follow these steps to explore this capability further:
- Review the OSM Traffic Split Policy applied to the ArcBox-CAPI-Data cluster.
- To show the OSM traffic split, open the below windows:
  - PowerShell running the below commands to show the bookbuyer pod logs:

$pod=kubectl --namespace bookbuyer get pods --selector=app=bookbuyer --output="jsonpath={.items..metadata.name}"
kubectl --namespace bookbuyer logs $pod bookbuyer -f | Select-String Identity:

  - Click on the CAPI Bookstore icon on the desktop to open the bookstore applications.
- Move the browser tabs and PowerShell window so the end result looks like this:
- The count of books sold in the bookstore-v2 browser window should remain at 0. This is because the current traffic split policy is configured with a weight of 100 for bookstore, and because the bookbuyer client sends traffic only to the bookstore service; no application sends requests to the bookstore-v2 service.
- In your fork of the “Azure Arc Jumpstart Apps” GitHub repository, open the traffic-split.yaml file (/bookstore/osm-sample/traffic-split.yaml), update the bookstore weight to “75” and the bookstore-v2 weight to “25”, and commit the change.
Wait for the changes to propagate and observe the counters increment for bookstore and bookstore-v2 as well.
We have updated the Service Mesh Interface (SMI) Traffic Split policy to direct 75 percent of the traffic sent to the root bookstore service and 25 percent to the bookstore-v2 service by modifying the weight fields for the bookstore-v2 backend. Also, observe the changes on the bookbuyer pod logs in the PowerShell window.
- You can verify the traffic split policy by running the below command and examining the Backends properties; a sketch of the underlying resource follows below.

kubectl describe trafficsplit bookstore-split -n bookstore
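For reference, the TrafficSplit resource you are editing has roughly the following shape (a sketch matching the 75/25 weights above; the traffic-split.yaml file in your fork is the source of truth):

# Illustrative SMI TrafficSplit sending 75% of bookstore traffic to v1 and 25% to v2
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-split
  namespace: bookstore
spec:
  service: bookstore          # the root service that clients address
  backends:
    - service: bookstore
      weight: 75
    - service: bookstore-v2
      weight: 25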
- In your fork of the “Azure Arc Jumpstart Apps” GitHub repository, open the traffic-split.yaml file (/bookstore/osm-sample/traffic-split.yaml), update the bookstore weight to “0” and the bookstore-v2 weight to “100”, and commit the change.
- Wait for the changes to propagate and observe the counters increment for bookstore-v2 and freeze for bookstore. Also, observe the pod logs to validate that bookbuyer is sending all traffic to bookstore-v2.
- Optionally, you may want to reset the traffic split demo to start over with the counters at zero. If so, follow the below steps to reset the bookstore counters.
  - Browse to the ResetBookstore.ps1 script placed under C:\ArcBox\GitOps. The script will:
    - Connect to the ArcBox-CAPI-Data cluster
    - Deploy a Kubernetes Ingress resource for each bookstore app's reset API
    - Invoke the bookstore apps' REST API to reset the counters
  - Before you run the reset script, did you update the traffic split on GitHub? In your fork of the “Azure Arc Jumpstart Apps” GitHub repository, open the traffic-split.yaml file (/bookstore/osm-sample/traffic-split.yaml), update the bookstore weight to “100” and the bookstore-v2 weight to “0”, and commit the change.
  - Right-click the ResetBookstore.ps1 script and select Run with PowerShell to execute it.
  - Counters for Bookbuyer, Bookstore-v1, and Bookstore-v2 will reset.
Microsoft Defender for Cloud
After you have finished the deployment of ArcBox, you can verify that Microsoft Defender for Cloud is working properly and alerting on security threats by running the below commands to simulate an alert on the ArcBox-CAPI-Data workload cluster:
kubectx arcbox-capi
kubectl get pods --namespace=asc-alerttest-662jfi039n
After a period of time (typically less than an hour), Microsoft Defender for Cloud will detect this event and trigger a security alert that you will see in the Azure portal under Microsoft Defender for Cloud’s security alerts and also on the security tab of your Azure Arc-enabled Kubernetes cluster.
NOTE: This feature requires Microsoft Defender for Cloud to be enabled on your Azure subscription.
Additional optional scenarios on the ArcBox-K3s cluster
Optionally, you can explore additional GitOps and RBAC scenarios in a manual fashion using the ArcBox-K3s cluster. When remoted into the ArcBox-Client virtual machine, here are some things to try:
- Browse to the Azure portal and notice how there is currently no GitOps configuration or Flux extension installed on the ArcBox-K3s cluster.
- Deploy multiple GitOps configurations on the ArcBox-K3s cluster.
- Browse to the K3sGitOps.ps1 script placed under C:\ArcBox\GitOps. The script will:
  - Log in to your Azure subscription using your previously created service principal credentials
  - Connect to the ArcBox-K3s cluster
  - Create the GitOps configurations to install the Flux extension as well as deploy the NGINX ingress controller and the “Hello Arc” application
  - Create a certificate with the arcbox.k3sdevops.com DNS name and import it to Azure Key Vault
  - Deploy the Azure Key Vault k8s extension instance
  - Create a Kubernetes SecretProviderClass to fetch the secrets from Azure Key Vault
  - Deploy a Kubernetes Ingress resource referencing the Secret created by the CSI driver
  - Create an icon for the Hello-Arc application on the desktop
- Optionally, you can open the script with VSCode to review it.
- Right-click the K3sGitOps.ps1 script and select Run with PowerShell to execute the script. This will take about 5-10 minutes to run.
- You can verify that the Azure Key Vault Secrets Provider and Flux (GitOps) extensions are now installed under the extensions tab of the ArcBox-K3s cluster resource in the Azure portal.
- You can verify the below GitOps configurations applied on the ArcBox-K3s cluster:
  - config-nginx to deploy the NGINX-ingress controller
  - config-helloarc to deploy the “Hello Arc” web application
- Click on the K3s Hello-Arc icon on the desktop to open the Hello-Arc application and validate the ingress certificate arcbox.k3sdevops.com used from Key Vault.
- To show the GitOps flow for the Hello-Arc application, open two side-by-side windows:
  - A browser window with the Hello-Arc application open at the https://k3sdevops.devops.com/ URL.
  - PowerShell running the kubectl get pods -n hello-arc -w command:

kubectx arcbox-k3s
kubectl get pods -n hello-arc -w

  The result should look like this:
- In your fork of the “Azure Arc Jumpstart Apps” GitHub repository, open the hello_arc.yaml file (/hello-arc/yaml/hello_arc.yaml). Change the replica count to 2 and the text under the “MESSAGE” section, and commit the change.
- Upon committing the changes, notice how the Kubernetes pods rolling upgrade begins. Once the pods are up and running, refresh the browser; the new “Hello Arc” application version will show the new message, confirming the rolling upgrade completed and the GitOps flow succeeded.
- Deploy Kubernetes RBAC configuration on the Hello-Arc application to limit access to deployed Kubernetes resources.
  - Browse to the K3sRBAC.ps1 script placed under C:\ArcBox\GitOps. The script will:
    - Log in to your Azure subscription using your previously created service principal credentials
    - Connect to the ArcBox-K3s cluster
    - Create the GitOps configurations to deploy the RBAC configurations for the hello-arc namespace and cluster scope
  - Right-click the K3sRBAC.ps1 script and select Run with PowerShell to execute the script.
  - You can verify the below GitOps configuration applied on the ArcBox-K3s cluster:
    - config-helloarc-rbac to deploy the hello-arc namespace RBAC
- Show the hello-arc namespace Role and Role Binding.

kubectx arcbox-k3s
kubectl --namespace hello-arc get role
kubectl --namespace hello-arc get rolebindings.rbac.authorization.k8s.io
- Validate the namespace RBAC role to get the pods as user Jane.

kubectl --namespace hello-arc get pods --as=jane
- To test the RBAC role assignment, try to delete the pods as user “Jane”. The operation fails because Jane is assigned the “pod-reader” role, which only allows the get, watch, and list operations in the hello-arc namespace and does not allow delete operations.

$pod=kubectl --namespace hello-arc get pods --selector=app=hello-arc --output="jsonpath={.items..metadata.name}"
kubectl --namespace hello-arc delete pods $pod --as=jane
- Show the Cluster Role and Cluster Role Binding.

kubectl get clusterrole | Select-String secret-reader
kubectl get clusterrolebinding | Select-String read-secrets-global
- Validate the cluster role to get the secrets as user Dave.

kubectl get secrets --as=dave
- Test the RBAC role assignment to check whether Dave can create secrets. The operation should fail, as the user Dave is assigned the secret-reader role, which only allows the get, watch, and list permissions.

kubectl create secret generic arcbox-secret --from-literal=username=arcdemo --as=dave
ArcBox Azure Monitor workbook
Open the ArcBox Azure Monitor workbook documentation and explore the visualizations and reports of hybrid cloud resources.
Included tools
The following tools are included on the ArcBox-Client VM.
- kubectl, kubectx, helm
- Chocolatey
- Visual Studio Code
- Putty
- 7zip
- Terraform
- Git
- ZoomIt
Next steps
ArcBox is a sandbox that can be used for a large variety of use cases, such as an environment for testing and training or a kickstarter for proof of concept projects. Ultimately, you are free to do whatever you wish with ArcBox. Some suggested next steps for you to try in your ArcBox are:
- Use the included kubectx to switch contexts between the two Kubernetes clusters
- Deploy new GitOps configurations with Azure Arc-enabled Kubernetes
- Build policy initiatives that apply to your Azure Arc-enabled resources
- Write and test custom policies that apply to your Azure Arc-enabled resources
- Incorporate your own tooling and automation into the existing automation framework
- Build a certificate/secret/key management strategy with your Azure Arc resources
Clean up the deployment
To clean up your deployment, simply delete the resource group using Azure CLI or Azure portal.
az group delete -n <name of your resource group>
Basic Troubleshooting
Occasionally deployments of ArcBox may fail at various stages. Common reasons for failed deployments include:
- Invalid service principal id, service principal secret, or service principal Azure tenant ID provided in the azuredeploy.parameters.json file.
- Invalid SSH public key provided in the azuredeploy.parameters.json file.
  - An example SSH public key is shown here. Note that the public key includes “ssh-rsa” at the beginning. The entire value should be included in your azuredeploy.parameters.json file.
- Not enough vCPU quota available in your target Azure region - check your vCPU quota and ensure you have at least 52 available. See the prerequisites section for more details.
- Target Azure region does not support all required Azure services - ensure you are running ArcBox in one of the supported regions listed in the prerequisites above.
Exploring logs from the ArcBox-Client virtual machine
Occasionally, you may need to review log output from scripts that run on the ArcBox-Client, ArcBox-CAPI-MGMT or ArcBox-K3s virtual machines in case of deployment failures. To make troubleshooting easier, the ArcBox deployment scripts collect all relevant logs in the C:\ArcBox\Logs folder on ArcBox-Client. A short description of the logs and their purpose can be seen in the list below:
| Logfile | Description |
|---|---|
| C:\ArcBox\Logs\Bootstrap.log | Output from the initial bootstrapping script that runs on ArcBox-Client. |
| C:\ArcBox\Logs\DevOpsLogonScript.log | Output of DevOpsLogonScript.ps1, which configures the Hyper-V host and guests and onboards the guests as Azure Arc-enabled servers. |
| C:\ArcBox\Logs\installCAPI.log | Output from the custom script extension which runs on ArcBox-CAPI-MGMT, configures the Cluster API for Azure cluster, and onboards it as an Azure Arc-enabled Kubernetes cluster. If you encounter ARM deployment issues with ubuntuCapi.json, review this log. |
| C:\ArcBox\Logs\installK3s.log | Output from the custom script extension which runs on ArcBox-K3s, configures the Rancher cluster, and onboards it as an Azure Arc-enabled Kubernetes cluster. If you encounter ARM deployment issues with ubuntuRancher.json, review this log. |
| C:\ArcBox\Logs\MonitorWorkbookLogonScript.log | Output from MonitorWorkbookLogonScript.ps1, which deploys the Azure Monitor workbook. |
| C:\ArcBox\Logs\K3sGitOps.log | Output from K3sGitOps.ps1, which deploys GitOps configurations on ArcBox-K3s. This script must be run manually by the user, so the log is only present if the user has run the script. |
| C:\ArcBox\Logs\K3sRBAC.log | Output from K3sRBAC.ps1, which deploys GitOps RBAC configurations on ArcBox-K3s. This script must be run manually by the user, so the log is only present if the user has run the script. |
Exploring installation logs from the Linux virtual machines
In the case of a failed deployment, pointing to a failure in either the ubuntuRancherDeployment or the ubuntuCAPIDeployment Azure deployments, an easy way to explore the deployment logs is available directly from the associated virtual machines.
- Depending on which deployment failed, connect using SSH to the associated virtual machine's public IP:
  - ubuntuCAPIDeployment - ArcBox-CAPI-MGMT virtual machine.
  - ubuntuRancherDeployment - ArcBox-K3s virtual machine.
NOTE: Port 22 is not open by default in ArcBox deployments. You will need to create an NSG rule to allow network access to port 22, or use Azure Bastion or JIT to connect to the VM.
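A typical connection looks like the below, assuming you opened port 22 and kept the Jumpstart default arcdemo admin username (adjust the username and key path to the values you deployed with):

# Connect with the private key matching the public key supplied at deployment time
ssh -i <path to your private key> arcdemo@<VM public IP address>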
- As described in the message of the day (motd), depending on which virtual machine you logged into, the installation log can be found in the jumpstart_logs folder. These installation logs can help determine the root cause of the failed deployment.
  - ArcBox-CAPI-MGMT log path: jumpstart_logs/installCAPI.log
  - ArcBox-K3s log path: jumpstart_logs/installK3s.log
- From the screenshot below, looking at the ArcBox-CAPI-MGMT virtual machine CAPI installation log using the cat jumpstart_logs/installCAPI.log command, we can see that the az login command failed due to bad service principal credentials.
If you are still having issues deploying ArcBox, please submit an issue on GitHub and include a detailed description of your issue, the Azure region you are deploying to, and the flavor of ArcBox you are trying to deploy. Inside the C:\ArcBox\Logs folder you can also find instructions for uploading your logs to an Azure storage account for review by the Jumpstart team.