Jumpstart ArcBox “Full” Edition - Overview

ArcBox is a solution that provides an easy-to-deploy sandbox for all things Azure Arc. ArcBox is designed to be completely self-contained within a single Azure subscription and resource group, making it easy for a user to get hands-on with all available Azure Arc technology with nothing more than an available Azure subscription.

ArcBox architecture diagram

Use cases

  • Sandbox environment for getting hands-on with Azure Arc technologies
  • Accelerator for Proof-of-concepts or pilots
  • Training tool for Azure Arc skills development
  • Demo environment for customer presentations or events
  • Rapid integration testing platform
  • Infrastructure-as-code and automation template library for building hybrid cloud management solutions

Azure Arc capabilities available in ArcBox

Azure Arc-enabled servers

ArcBox servers diagram

ArcBox includes five Azure Arc-enabled server resources that are hosted using nested virtualization in Azure. As part of the deployment, a Hyper-V host (ArcBox-Client) is deployed with five guest virtual machines. These machines, ArcBox-Win2k22, ArcBox-Win2k19, ArcBox-SQL, ArcBox-Ubuntu-01, and ArcBox-Ubuntu-02 are connected as Azure Arc-enabled servers via the ArcBox automation.

Azure Arc-enabled Kubernetes

K8s diagram

ArcBox deploys one single-node Rancher K3s cluster running on an Azure virtual machine. This cluster is then connected to Azure as an Azure Arc-enabled Kubernetes resource (ArcBox-K3s).

Azure Arc-enabled data services

ArcBox deploys one single-node Rancher K3s cluster (ArcBox-CAPI-MGMT), which is then transformed into a Cluster API management cluster using the Cluster API Provider Azure (CAPZ), and a workload cluster is deployed onto the management cluster. The Azure Arc-enabled data services and data controller are deployed onto this workload cluster via a PowerShell script that runs when first logging into the ArcBox-Client virtual machine.

Data services diagram

Hybrid Unified Operations

ArcBox deploys several management and operations services that work with ArcBox’s Azure Arc resources. These resources include an Azure Automation account, an Azure Log Analytics workspace with the Update Management solution, an Azure Monitor workbook, Azure Policy assignments for deploying Log Analytics agents on Windows and Linux Azure Arc-enabled servers, an Azure Policy assignment for adding tags to resources, and a storage account used for staging resources needed for the deployment automation.

ArcBox unified operations diagram

ArcBox Azure Consumption Costs

ArcBox resources generate Azure consumption charges from the underlying Azure resources, including core compute, storage, networking, and auxiliary services. Note that Azure consumption costs vary depending on the region where ArcBox is deployed. Be mindful of your ArcBox deployments and ensure that you disable or delete ArcBox resources when not in use to avoid unwanted charges. Please see the Jumpstart FAQ for more information on consumption costs.

Deployment Options and Automation Flow

ArcBox provides multiple paths for deploying and configuring ArcBox resources. Deployment options include:

  • Azure portal
  • ARM template via Azure CLI
  • Bicep
  • Terraform

Deployment flow diagram for ARM-based deployments

Deployment flow diagram for Terraform-based deployments

ArcBox uses an advanced automation flow to deploy and configure all necessary resources with minimal user interaction. The previous diagrams provide an overview of the deployment flow. A high-level summary of the deployment is:

  • User deploys the primary ARM template (azuredeploy.json), Bicep file (main.bicep), or Terraform plan (main.tf). These files contain several nested templates/plans that run simultaneously.
    • ClientVM ARM template/plan - deploys the Client Windows VM. This is the Hyper-V host VM where all user interactions with the environment are made from.
    • Storage account template/plan - used for staging files in automation scripts
    • Management artifacts template/plan - deploys Azure Log Analytics workspace and solutions and Azure Policy artifacts
  • User remotes into Client Windows VM, which automatically kicks off multiple scripts that:
    • Deploy and configure five (5) nested virtual machines in Hyper-V
      • Windows Server 2022 VM - onboarded as Azure Arc-enabled server
      • Windows Server 2019 VM - onboarded as Azure Arc-enabled server
      • Windows VM running SQL Server - onboarded as Azure Arc-enabled SQL Server (as well as Azure Arc-enabled server)
      • 2 x Ubuntu VMs - onboarded as Azure Arc-enabled servers
    • Deploy an Azure Monitor workbook that provides example reports and metrics for monitoring ArcBox components

Prerequisites

  • Install or update Azure CLI to version 2.40.0 or above. Use the command below to check your currently installed version.

    az --version
    
  • Log in to Az CLI using the az login command.

  • Ensure that you have selected the correct subscription you want to deploy ArcBox to by using the az account list --query "[?isDefault]" command. If you need to adjust the active subscription used by Az CLI, follow this guidance.
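    As a concrete sketch of checking and switching the active subscription (the subscription identifier below is a placeholder, and these commands require an existing az login session):

    ```shell
    # Show the subscription Az CLI currently targets
    az account list --query "[?isDefault].{name:name, id:id}" --output table

    # Switch if needed; <subscription-id-or-name> is a placeholder
    az account set --subscription "<subscription-id-or-name>"

    # Confirm the change
    az account show --query name --output tsv
    ```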

  • ArcBox must be deployed to one of the following regions. Deploying ArcBox outside of these regions may result in unexpected results or deployment errors.

    • East US
    • East US 2
    • Central US
    • West US 2
    • North Europe
    • West Europe
    • France Central
    • UK South
    • Australia East
    • Japan East
    • Korea Central
    • Southeast Asia
  • ArcBox Full requires 44 B-series and 16 DSv4-series vCPUs when deploying with default parameters such as VM series/size. Ensure you have sufficient vCPU quota available in your Azure subscription and in the region where you plan to deploy ArcBox. You can use the Az CLI command below to check your vCPU usage.

    az vm list-usage --location <your location> --output table
    

    Screenshot showing az vm list-usage
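    The quota math itself is simple: remaining vCPUs equals the limit minus the current usage for each VM family. A minimal sketch over made-up sample data (the real values come from az vm list-usage; the CSV layout here is purely illustrative):

    ```shell
    # Illustrative sample in "family,currentValue,limit" form; these numbers
    # are made up and do not reflect any real subscription
    sample='Standard BS Family vCPUs,8,100
    Standard DSv4 Family vCPUs,0,100'

    # Remaining quota for the B-series family: limit minus current usage
    remaining_b=$(printf '%s\n' "$sample" | awk -F',' '/BS Family/ {print $3 - $2}')
    echo "B-series vCPUs remaining: $remaining_b"
    ```

    With default parameters, ArcBox Full needs at least 44 B-series and 16 DSv4-series vCPUs of headroom.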

  • Register necessary Azure resource providers by running the following commands.

    az provider register --namespace Microsoft.HybridCompute --wait
    az provider register --namespace Microsoft.GuestConfiguration --wait
    az provider register --namespace Microsoft.Kubernetes --wait
    az provider register --namespace Microsoft.KubernetesConfiguration --wait
    az provider register --namespace Microsoft.ExtendedLocation --wait
    az provider register --namespace Microsoft.AzureArcData --wait
    az provider register --namespace Microsoft.OperationsManagement --wait
    
  • Create Azure service principal (SP). To deploy ArcBox, an Azure service principal assigned with the Owner Role-based access control (RBAC) role is required. You can use Azure Cloud Shell (or other Bash shell), or PowerShell to create the service principal.

    • (Option 1) Create service principal using Azure Cloud Shell or Bash shell with Azure CLI:

      az login
      subscriptionId=$(az account show --query id --output tsv)
      az ad sp create-for-rbac -n "<Unique SP Name>" --role "Owner" --scopes /subscriptions/$subscriptionId
      

      For example:

      az login
      subscriptionId=$(az account show --query id --output tsv)
      az ad sp create-for-rbac -n "JumpstartArcBoxSPN" --role "Owner" --scopes /subscriptions/$subscriptionId
      

      Output should look similar to this:

      {
      "appId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
      "displayName": "JumpstartArcBox",
      "password": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
      "tenant": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
      }
      
    • (Option 2) Create service principal using PowerShell. If necessary, follow this documentation to install Azure PowerShell modules.

      $account = Connect-AzAccount
      $spn = New-AzADServicePrincipal -DisplayName "<Unique SPN name>" -Role "Owner" -Scope "/subscriptions/$($account.Context.Subscription.Id)"
      echo "SPN App id: $($spn.AppId)"
      echo "SPN secret: $($spn.PasswordCredentials.SecretText)"
      

      For example:

      $account = Connect-AzAccount
      $spn = New-AzADServicePrincipal -DisplayName "JumpstartArcBoxSPN" -Role "Owner" -Scope "/subscriptions/$($account.Context.Subscription.Id)"
      echo "SPN App id: $($spn.AppId)"
      echo "SPN secret: $($spn.PasswordCredentials.SecretText)"
      

      Output should look similar to this:

      Screenshot showing creating an SPN with PowerShell

      NOTE: If you create multiple subsequent role assignments on the same service principal, your client secret (password) will be destroyed and recreated each time. Therefore, make sure you grab the correct password.

      NOTE: The Jumpstart scenarios are designed with ease of use in mind while adhering to security-related best practices whenever possible. It is optional but highly recommended to scope the service principal to a specific Azure subscription and resource group, and to consider using a less privileged service principal account.
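      Whichever option you use, the service principal output maps onto template parameters used later (spnClientId, spnClientSecret, spnTenantId). A sketch of pulling the fields into shell variables; the JSON below is a made-up sample, and real scripts should prefer jq when it is available:

      ```shell
      # Made-up sample of the `az ad sp create-for-rbac` JSON output
      spn_json='{
        "appId": "11111111-2222-3333-4444-555555555555",
        "displayName": "JumpstartArcBoxSPN",
        "password": "fakeSecretValue123",
        "tenant": "66666666-7777-8888-9999-000000000000"
      }'

      # Crude sed-based field extraction (jq is the better tool for real JSON)
      get_field() { printf '%s\n' "$spn_json" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"; }

      spnClientId=$(get_field appId)
      spnClientSecret=$(get_field password)
      spnTenantId=$(get_field tenant)
      echo "spnClientId=$spnClientId"
      ```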

  • Generate a new SSH key pair or use an existing one (Windows 10 and above now comes with a built-in ssh client). The SSH key is used to configure secure access to the Linux virtual machines that are used to run the Kubernetes clusters.

    ssh-keygen -t rsa -b 4096
    

    To retrieve the SSH public key after it’s been created, depending on your environment, use one of the below methods:

    • In Linux, use the cat ~/.ssh/id_rsa.pub command.
    • In Windows (CMD/PowerShell), use the SSH public key file that, by default, is located at C:\Users\WINUSER\.ssh\id_rsa.pub.

    SSH public key example output:

    ssh-rsa o1djFhyNe5NXyYk7XVF7wOBAAABgQDO/QPJ6IZHujkGRhiI+6s1ngK8V4OK+iBAa15GRQqd7scWgQ1RUSFAAKUxHn2TJPx/Z/IU60aUVmAq/OV9w0RMrZhQkGQz8CHRXc28S156VMPxjk/gRtrVZXfoXMr86W1nRnyZdVwojy2++sqZeP/2c5GoeRbv06NfmHTHYKyXdn0lPALC6i3OLilFEnm46Wo+azmxDuxwi66RNr9iBi6WdIn/zv7tdeE34VAutmsgPMpynt1+vCgChbdZR7uxwi66RNr9iPdMR7gjx3W7dikQEo1djFhyNe5rrejrgjerggjkXyYk7XVF7wOk0t8KYdXvLlIyYyUCk1cOD2P48ArqgfRxPIwepgW78znYuwiEDss6g0qrFKBcl8vtiJE5Vog/EIZP04XpmaVKmAWNCCGFJereRKNFIl7QfSj3ZLT2ZXkXaoLoaMhA71ko6bKBuSq0G5YaMq3stCfyVVSlHs7nzhYsX6aDU6LwM/BTO1c= user@pc
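    Before pasting the key into the template parameters, it is worth sanity-checking its format. A sketch that generates a throwaway key pair when ssh-keygen is available, falling back to an illustrative sample string otherwise:

    ```shell
    keyfile="$(mktemp -d)/id_rsa"
    if command -v ssh-keygen >/dev/null 2>&1; then
      # Generate a throwaway RSA key pair with no passphrase
      ssh-keygen -t rsa -b 4096 -f "$keyfile" -N '' -q
      pubkey=$(cat "$keyfile.pub")
    else
      pubkey='ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB user@pc'   # illustrative fallback
    fi

    # An RSA public key must start with "ssh-rsa " followed by the base64 blob
    case $pubkey in
      'ssh-rsa '*) echo "public key looks valid" ;;
      *)           echo "unexpected key format" ;;
    esac
    ```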
    

Deployment Option 1: Azure portal

  • Click the button and enter values for the ARM template parameters.

    Screenshot showing Azure portal deployment of ArcBox

    Screenshot showing Azure portal deployment of ArcBox

    Screenshot showing Azure portal deployment of ArcBox

    NOTE: If you see any failure in the deployment, please check the troubleshooting guide.

Deployment Option 2: ARM template with Azure CLI

  • Clone the Azure Arc Jumpstart repository

    git clone https://github.com/microsoft/azure_arc.git
    
  • Edit the azuredeploy.parameters.json ARM template parameters file and supply some values for your environment.

    • sshRSAPublicKey - Your SSH public key
    • spnClientId - Your Azure service principal id
    • spnClientSecret - Your Azure service principal secret
    • spnTenantId - Your Azure tenant id
    • windowsAdminUsername - Client Windows VM Administrator name
    • windowsAdminPassword - Client Windows VM Password. Password must have 3 of the following: 1 lower case character, 1 upper case character, 1 number, and 1 special character. The value must be between 12 and 123 characters long.
    • logAnalyticsWorkspaceName - Unique name for the ArcBox Log Analytics workspace
    • flavor - Use the value “Full” to specify that you want to deploy the full version of ArcBox

    Screenshot showing example parameters
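    The windowsAdminPassword policy can be checked locally before deployment; a minimal sketch (the sample passwords below are made up):

    ```shell
    # Sketch: validate a candidate windowsAdminPassword against the stated policy
    # (12-123 characters, at least 3 of: lowercase, uppercase, digit, special).
    check_password() {
      p=$1
      n=${#p}
      classes=0
      [ "$n" -ge 12 ] && [ "$n" -le 123 ] || { echo invalid; return 1; }
      case $p in *[a-z]*) classes=$((classes+1)) ;; esac
      case $p in *[A-Z]*) classes=$((classes+1)) ;; esac
      case $p in *[0-9]*) classes=$((classes+1)) ;; esac
      case $p in *[!a-zA-Z0-9]*) classes=$((classes+1)) ;; esac   # any special character
      if [ "$classes" -ge 3 ]; then echo valid; else echo invalid; return 1; fi
    }

    check_password 'ArcPassword123!'   # prints: valid
    check_password 'Short1!' || true   # prints: invalid (fails the length rule)
    ```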

  • Now you will deploy the ARM template. Navigate to the local cloned deployment folder and run the below command:

    az group create --name <Name of the Azure resource group> --location <Azure Region>
    az deployment group create \
    --resource-group <Name of the Azure resource group> \
    --template-file azuredeploy.json \
    --parameters azuredeploy.parameters.json 
    

    Screenshot showing az group create

    Screenshot showing az deployment group create

    NOTE: If you see any failure in the deployment, please check the troubleshooting guide.

Deployment Option 3: Bicep deployment via Azure CLI

  • Clone the Azure Arc Jumpstart repository

    git clone https://github.com/microsoft/azure_arc.git
    
  • Upgrade to latest Bicep version

    az bicep upgrade
    
  • Edit the main.parameters.json template parameters file and supply some values for your environment.

    • sshRSAPublicKey - Your SSH public key
    • spnClientId - Your Azure service principal id
    • spnClientSecret - Your Azure service principal secret
    • spnTenantId - Your Azure tenant id
    • windowsAdminUsername - Client Windows VM Administrator name
    • windowsAdminPassword - Client Windows VM Password. Password must have 3 of the following: 1 lower case character, 1 upper case character, 1 number, and 1 special character. The value must be between 12 and 123 characters long.
    • logAnalyticsWorkspaceName - Unique name for the ArcBox Log Analytics workspace
    • flavor - Use the value “Full” to specify that you want to deploy the full version of ArcBox

    Screenshot showing example parameters

  • Now you will deploy the Bicep file. Navigate to the local cloned deployment folder and run the below command:

    az login
    az group create --name "<resource-group-name>"  --location "<preferred-location>"
    az deployment group create -g "<resource-group-name>" -f "main.bicep" -p "main.parameters.json"
    

    NOTE: If you see any failure in the deployment, please check the troubleshooting guide.

Deployment Option 4: Terraform Deployment

  • Clone the Azure Arc Jumpstart repository

    git clone https://github.com/microsoft/azure_arc.git
    
  • Download and install the latest version of Terraform here

    NOTE: Terraform 1.x or higher is supported for this deployment. Tested with Terraform v1.011.

  • Create a terraform.tfvars file in the root of the terraform folder and supply some values for your environment.

    azure_location    = "westus2"
    spn_client_id     = "1414133c-9786-53a4-b231-f87c143ebdb1"
    spn_client_secret = "fakeSecretValue123458125712ahjeacjh"
    spn_tenant_id     = "33572583-d294-5b56-c4e6-dcf9a297ec17"
    client_admin_ssh  = "C:/Temp/rsa.pub"
    deployment_flavor = "Full"
    
  • Variable Reference:

    • azure_location - Azure location code (e.g. ‘eastus’, ‘westus2’, etc.)
    • resource_group_name - Resource group which will contain all of the ArcBox artifacts
    • spn_client_id - Your Azure service principal id
    • spn_client_secret - Your Azure service principal secret
    • spn_tenant_id - Your Azure tenant id
    • client_admin_ssh - SSH public key path, used for Linux VMs
    • deployment_flavor - Use the value “Full” to specify that you want to deploy the full version of ArcBox
    • client_admin_username - Admin username for Windows & Linux VMs
    • client_admin_password - Admin password for Windows VMs. Password must have 3 of the following: 1 lower case character, 1 upper case character, 1 number, and 1 special character. The value must be between 12 and 123 characters long.
    • workspace_name - Unique name for the ArcBox Log Analytics workspace

    NOTE: Any variables in bold are required. If any optional parameters are not provided, defaults will be used.

  • Now you will deploy the Terraform file. Navigate to the local cloned deployment folder and run the commands below:

    terraform init
    terraform plan -out=infra.out
    terraform apply "infra.out"
    
  • Example output from terraform init:

    terraform init

  • Example output from terraform plan -out=infra.out:

    terraform plan

  • Example output from terraform apply "infra.out":

    terraform plan

    NOTE: If you see any failure in the deployment, please check the troubleshooting guide.

Start post-deployment automation

Once your deployment is complete, you can open the Azure portal and see the ArcBox resources inside your resource group. You will be using the ArcBox-Client Azure virtual machine to explore various capabilities of ArcBox such as GitOps configurations and Key Vault integration. You will need to remotely access ArcBox-Client.

Screenshot showing all deployed resources in the resource group

NOTE: For enhanced ArcBox security posture, RDP (3389) and SSH (22) ports are not open by default in ArcBox deployments. You will need to create a network security group (NSG) rule to allow network access to port 3389, or use Azure Bastion or Just-in-Time (JIT) access to connect to the VM.

Connecting to the ArcBox Client virtual machine

Various options are available to connect to the ArcBox-Client VM, depending on the parameters you supplied during deployment.

  • RDP - available after configuring access to port 3389 on the ArcBox-NSG, or by enabling Just-in-Time access (JIT).
  • Azure Bastion - available if the deployBastion parameter was set to true during deployment.

Connecting directly with RDP

By design, ArcBox does not open port 3389 on the network security group. Therefore, you must create an NSG rule to allow inbound 3389.

  • Open the ArcBox-NSG resource in Azure portal and click “Add” to add a new rule.

    Screenshot showing ArcBox-Client NSG with blocked RDP

    Screenshot showing adding a new inbound security rule

  • Specify the IP address that you will be connecting from and select RDP as the service with “Allow” set as the action. You can retrieve your public IP address by accessing https://icanhazip.com or https://whatismyip.com.

    Screenshot showing adding a new allow RDP inbound security rule

    Screenshot showing all inbound security rule

    Screenshot showing connecting to the VM using RDP
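    The same inbound rule can also be created from Azure CLI instead of the portal; a sketch with placeholder values (the rule name and priority here are arbitrary choices, and the commands require an az login session):

    ```shell
    # Look up your public IP (the same information icanhazip.com shows in a browser)
    myip=$(curl -s https://icanhazip.com)

    # <resource-group-name> is a placeholder for your ArcBox resource group
    az network nsg rule create \
      --resource-group "<resource-group-name>" \
      --nsg-name ArcBox-NSG \
      --name Allow-RDP \
      --priority 1001 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes "$myip" \
      --destination-port-ranges 3389
    ```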

Connect using Azure Bastion

  • If you have chosen to deploy Azure Bastion in your deployment, use it to connect to the VM.

    Screenshot showing connecting to the VM using Bastion

    NOTE: When using Azure Bastion, the desktop background image is not visible. Therefore some screenshots in this guide may not exactly match your experience if you are connecting to ArcBox-Client with Azure Bastion.

Connect using just-in-time access (JIT)

If you already have Microsoft Defender for Cloud enabled on your subscription and would like to use JIT to access the Client VM, use the following steps:

  • In the Client VM configuration pane, enable just-in-time. This will enable the default settings.

    Screenshot showing the Microsoft Defender for cloud portal, allowing RDP on the client VM

    Screenshot showing connecting to the VM using RDP

    Screenshot showing connecting to the VM using JIT

The Logon scripts

  • Once you log into the ArcBox-Client VM, multiple automated scripts will open and start running. These scripts usually take 10-20 minutes to finish, and once completed, the script windows will close automatically. At this point, the deployment is complete.

    Screenshot showing ArcBox-Client

  • Deployment is complete! Let’s begin exploring the features of Azure Arc with ArcBox!

    Screenshot showing complete deployment

    Screenshot showing ArcBox resources in Azure portal

Azure Arc-enabled SQL Server onboarding

  • During deployment, a check is performed to determine whether the Service Principal being used has the ‘Microsoft.Authorization/roleAssignments/write’ permission on the target resource group. This permission is included in the Azure built-in roles of Owner and User Access Administrator, or you may have a custom RBAC role which provides it. If the Service Principal has been granted the rights to change the role assignments on the resource group, the Azure Arc-enabled SQL Server can be automatically onboarded as part of the post-deployment automation.

  • In the event that the Service Principal does not have ‘Microsoft.Authorization/roleAssignments/write’ on the target resource group, an icon will be created on the ArcBox-Client desktop, which will allow you to onboard the Azure Arc-enabled SQL Server after the post-deployment automation is complete. To start the onboarding process in this scenario, simply click the ‘Onboard SQL Server’ icon on the desktop. This process should take around 10-15 minutes to complete.

    Screenshot showing ArcBox-Client

  • A pop-up box will walk you through the target SQL Server that will be onboarded to Azure Arc, as well as provide details on the flow of the onboarding automation and how to complete the Azure authentication process when prompted.

    Screenshot showing ArcBox-Client

  • The automation uses the PowerShell SDK to onboard the Azure Arc-enabled SQL Server on your behalf. To accomplish this, it will log in to Azure with the -UseDeviceAuthentication flag. The device code will be copied to the clipboard on your behalf, so you can simply paste the value into the box when prompted.

    Screenshot showing ArcBox-Client

  • You’ll then need to provide your Azure credentials to complete the authentication process. The user you login as will need ‘Microsoft.Authorization/roleAssignments/write’ permissions on the ArcBox resource group to complete the onboarding process.

    Screenshot showing ArcBox-Client

  • The output of each step of the onboarding process will be displayed in the PowerShell script window, so you’ll be able to see where the script currently is in the process at all times.

    Screenshot showing ArcBox-Client

  • Once complete, you’ll receive a pop-up notification informing you that the onboarding process is complete, and to check the Azure Arc blade in the Azure portal in the next few minutes.

    Screenshot showing ArcBox-Client

  • From the Azure portal, the SQL Server should now be visible as an Azure Arc-enabled SQL Server.

    Screenshot showing ArcBox-Client

Using ArcBox

After deployment is complete, it's time to start exploring ArcBox. Most interactions with ArcBox will take place either from Azure itself (Azure portal, CLI, or similar) or from inside the ArcBox-Client virtual machine. When remoted into the client VM, here are some things to try:

  • Open Hyper-V and access the Azure Arc-enabled servers

    • Username: arcdemo
    • Password: ArcDemo123!!

    Screenshot showing ArcBox Client VM with Hyper-V

  • Use the included kubectx tool to switch Kubernetes contexts between the Rancher K3s and AKS clusters.

    kubectx
    kubectx arcbox-capi
    kubectl get nodes
    kubectl get pods -n arc
    kubectx arcbox-k3s
    kubectl get nodes
    

    Screenshot showing usage of kubectx

  • Open Azure Data Studio and explore the SQL MI and PostgreSQL instances.

    Screenshot showing Azure Data Studio usage

ArcBox Azure Monitor workbook

Open the ArcBox Azure Monitor workbook documentation and explore the visualizations and reports of hybrid cloud resources.

Screenshot showing Azure Monitor workbook usage

Azure Arc-enabled data services operations

Open the data services operations page and explore various ways you can perform operations against the Azure Arc-enabled data services deployed with ArcBox.

Screenshot showing Grafana dashboard

Included tools

The following tools are included on the ArcBox-Client VM.

  • Azure Data Studio with Arc and PostgreSQL extensions
  • kubectl, kubectx, helm
  • Chocolatey
  • Visual Studio Code
  • Putty
  • 7zip
  • Terraform
  • Git
  • SqlQueryStress

Next steps

ArcBox is a sandbox that can be used for a large variety of use cases, such as an environment for testing and training or a kickstarter for proof of concept projects. Ultimately, you are free to do whatever you wish with ArcBox. Some suggested next steps for you to try in your ArcBox are:

  • Deploy sample databases to the PostgreSQL instance or to the SQL Managed Instance
  • Use the included kubectx to switch contexts between the two Kubernetes clusters
  • Deploy GitOps configurations with Azure Arc-enabled Kubernetes
  • Build policy initiatives that apply to your Azure Arc-enabled resources
  • Write and test custom policies that apply to your Azure Arc-enabled resources
  • Incorporate your own tooling and automation into the existing automation framework
  • Build a certificate/secret/key management strategy with your Azure Arc resources

Do you have an interesting use case to share? Submit an issue on GitHub with your idea and we will consider it for future releases!

Clean up the deployment

To clean up your deployment, simply delete the resource group using Azure CLI or Azure portal.

az group delete -n <name of your resource group>

Screenshot showing az group delete

Screenshot showing group delete from Azure portal

Basic Troubleshooting

Occasionally deployments of ArcBox may fail at various stages. Common reasons for failed deployments include:

  • Invalid service principal id, service principal secret or service principal Azure tenant ID provided in azuredeploy.parameters.json file.

  • Invalid SSH public key provided in azuredeploy.parameters.json file.

    • An example SSH public key is shown here. Note that the public key includes “ssh-rsa” at the beginning. The entire value should be included in your azuredeploy.parameters.json file.

      Screenshot showing SSH public key example

  • Not enough vCPU quota available in your target Azure region - check your vCPU quota and ensure you have enough available for the default VM sizes (44 B-series and 16 DSv4-series vCPUs). See the prerequisites section for more details.

  • Target Azure region does not support all required Azure services - ensure you are running ArcBox in one of the supported regions listed in the prerequisites section above.

  • “BadRequest” error message when deploying - this error returns occasionally when the Log Analytics solutions in the ARM templates are deployed. Typically, waiting a few minutes and re-running the same deployment resolves the issue. Alternatively, you can try deploying to a different Azure region.

    Screenshot showing BadRequest errors in Az CLI

    Screenshot showing BadRequest errors in Azure portal

Exploring logs from the ArcBox-Client virtual machine

Occasionally, you may need to review log output from scripts that run on the ArcBox-Client, ArcBox-CAPI-MGMT or ArcBox-K3s virtual machines in case of deployment failures. To make troubleshooting easier, the ArcBox deployment scripts collect all relevant logs in the C:\ArcBox\Logs folder on ArcBox-Client. A short description of the logs and their purpose can be seen in the list below:

  • C:\ArcBox\Logs\Bootstrap.log - Output from the initial bootstrapping script that runs on ArcBox-Client.
  • C:\ArcBox\Logs\ArcServersLogonScript.log - Output of ArcServersLogonScript.ps1, which configures the Hyper-V host and guests and onboards the guests as Azure Arc-enabled servers.
  • C:\ArcBox\Logs\DataServicesLogonScript.log - Output of DataServicesLogonScript.ps1, which configures Azure Arc-enabled data services baseline capability.
  • C:\ArcBox\Logs\deployPostgreSQL.log - Output of deployPostgreSQL.ps1, which deploys and configures PostgreSQL with Azure Arc.
  • C:\ArcBox\Logs\deploySQL.log - Output of deploySQL.ps1, which deploys and configures SQL Managed Instance with Azure Arc.
  • C:\ArcBox\Logs\installCAPI.log - Output from the custom script extension which runs on ArcBox-CAPI-MGMT and configures the Cluster API for Azure cluster and onboards it as an Azure Arc-enabled Kubernetes cluster. If you encounter ARM deployment issues with ubuntuCapi.json, review this log.
  • C:\ArcBox\Logs\installK3s.log - Output from the custom script extension which runs on ArcBox-K3s and configures the Rancher cluster and onboards it as an Azure Arc-enabled Kubernetes cluster. If you encounter ARM deployment issues with ubuntuRancher.json, review this log.
  • C:\ArcBox\Logs\MonitorWorkbookLogonScript.log - Output from MonitorWorkbookLogonScript.ps1, which deploys the Azure Monitor workbook.
  • C:\ArcBox\Logs\SQLMIEndpoints.log - Output from SQLMIEndpoints.ps1, which collects the service endpoints for SQL MI and uses them to configure Azure Data Studio connection settings.

Screenshot showing ArcBox logs folder on ArcBox-Client

Exploring installation logs from the Linux virtual machines

In the case of a failed deployment pointing to a failure in either the ubuntuRancherDeployment or the ubuntuCAPIDeployment Azure deployment, the deployment logs can be explored directly from the associated virtual machine.

Screenshot showing failed deployments

  • Depending on which deployment failed, connect using SSH to the associated virtual machine's public IP:

    • ubuntuCAPIDeployment - ArcBox-CAPI-MGMT virtual machine.

    • ubuntuRancherDeployment - ArcBox-K3s virtual machine.

      Since you are logging in using the provided SSH public key, all you need is the arcdemo username.

      Screenshot showing ArcBox-CAPI-MGMT virtual machine public IP

      Screenshot showing ArcBox-K3s virtual machine public IP

  • As described in the message of the day (motd), depending on which virtual machine you logged into, the installation log can be found in the jumpstart_logs folder. These installation logs can help determine the root cause of the failed deployment.

    • ArcBox-CAPI-MGMT log path: jumpstart_logs/installCAPI.log

    • ArcBox-K3s log path: jumpstart_logs/installK3s.log

      Screenshot showing login and the message of the day
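      For a quick look without logging in interactively, the log can also be tailed over SSH in one step; <vm-public-ip> is a placeholder for the virtual machine's public IP:

      ```shell
      # Tail the CAPI install log on ArcBox-CAPI-MGMT
      # (swap in jumpstart_logs/installK3s.log when targeting ArcBox-K3s)
      ssh arcdemo@<vm-public-ip> 'tail -n 100 jumpstart_logs/installCAPI.log'
      ```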

  • In the screenshot below, viewing the ArcBox-CAPI-MGMT virtual machine's CAPI installation log with the cat jumpstart_logs/installCAPI.log command shows that the az login command failed due to bad service principal credentials.

    Screenshot showing cat command for showing installation log

    Screenshot showing az login error

  • You might occasionally get an error in installCAPI.log similar to Error from server (InternalError): error when creating "template.yaml": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: Post "https://capz-webhook-service.capz-system.svc:443/mutate-infrastructure-cluster-x-k8s-io-v1beta1-azuremachinetemplate?timeout=10s": EOF. This is an issue we are currently investigating. To resolve it, please redeploy ArcBox.

If you are still having issues deploying ArcBox, please submit an issue on GitHub and include a detailed description of your issue, the Azure region you are deploying to, and the flavor of ArcBox you are trying to deploy. Inside the C:\ArcBox\Logs folder you can also find instructions for uploading your logs to an Azure storage account for review by the Jumpstart team.