# Jumpstart HCIBox - Overview
HCIBox is a turnkey solution that provides a complete sandbox for exploring Azure Stack HCI capabilities and hybrid cloud integration in a virtualized environment. HCIBox is designed to be completely self-contained within a single Azure subscription and resource group, making it easy to get hands-on with Azure Stack HCI and Azure Arc technology without the need for physical hardware.
## Use cases
- Sandbox environment for getting hands-on with Azure Stack HCI and Azure Arc technologies
- Accelerator for Proof-of-concepts or pilots
- Training tool for skills development
- Demo environment for customer presentations or events
- Rapid integration testing platform
- Infrastructure-as-code and automation template library for building hybrid cloud management solutions
## Azure Stack HCI capabilities available in HCIBox

### 2-node Azure Stack HCI cluster
HCIBox automatically provisions and configures a two-node Azure Stack HCI cluster. HCIBox simulates physical hardware by using nested virtualization with Hyper-V running on an Azure Virtual Machine. This Hyper-V host provisions three guest virtual machines: two Azure Stack HCI nodes (AzSHost1, AzSHost2), and one nested Hyper-V host (AzSMGMT). AzSMGMT itself hosts three guest VMs: a Windows Admin Center gateway server, an Active Directory domain controller, and a Routing and Remote Access Server acting as a BGP router.
### Azure Arc Resource Bridge
HCIBox installs and configures Azure Arc Resource Bridge. This allows full virtual machine lifecycle management from Azure portal or CLI. As part of this configuration, HCIBox also configures a custom location and deploys two gallery images (Windows Server 2019 and Ubuntu). These gallery images can be used to create virtual machines through the Azure portal.
### Azure Kubernetes Service on Azure Stack HCI
HCIBox includes Azure Kubernetes Service on Azure Stack HCI (AKS-HCI). As part of the deployment automation, HCIBox configures AKS-HCI infrastructure including a management cluster. It then creates a target, or “workload”, cluster (HCIBox-AKS-$randomguid). As an optional capability, HCIBox also includes a PowerShell script that can be used to configure a sample application on the target cluster using GitOps.

### Azure Arc-enabled SQL Managed Instance on Azure Stack HCI
HCIBox includes Azure Arc-enabled SQL Managed Instance on Azure Stack HCI. As part of the deployment automation, HCIBox configures AKS-HCI infrastructure including a management cluster. It then creates a target, or “workload”, cluster (HCIBox-AKS-$randomguid) and deploys an Azure Arc-enabled SQL Managed Instance.

### Hybrid unified operations
HCIBox includes capabilities to support managing, monitoring, and governing the cluster. The deployment automation configures Azure Stack HCI Insights along with Azure Monitor and a Log Analytics workspace. Additionally, Azure Policy can be configured to support automated configuration and remediation of resources.
## HCIBox Azure Consumption Costs

HCIBox resources generate Azure consumption charges from the underlying Azure resources, including core compute, storage, networking, and auxiliary services. Note that Azure consumption costs may vary depending on the region where HCIBox is deployed. Be mindful of your HCIBox deployments and ensure that you disable or delete HCIBox resources when not in use to avoid unwanted charges. Please see the Jumpstart FAQ for more information on consumption costs.
## Deployment Options and Automation Flow

HCIBox provides two methods for deploying and configuring the necessary resources in Azure.

- A Bicep template that can be deployed manually via Azure CLI.
- An Azure Developer CLI template that can be used for a more streamlined experience.

HCIBox uses an advanced automation flow to deploy and configure all necessary resources with minimal user interaction. A high-level summary of the deployment is:
- User deploys the primary Bicep file (main.bicep). This file contains several nested templates that run simultaneously.
  - Host template - deploys the HCIBox-Client VM. This is the Hyper-V host VM that uses nested virtualization to host the complete HCIBox infrastructure. Once the Bicep template finishes deploying, the user remotes into this client using RDP to start the second step of the deployment.
  - Network template - deploys the network artifacts required for the solution
  - Storage account template - used for staging files in automation scripts and as the cloud witness for the HCI cluster
  - Management artifacts template - deploys the Azure Log Analytics workspace and solutions and Azure Policy artifacts
- User remotes into the HCIBox-Client VM, which automatically kicks off a PowerShell script that:
  - Deploys and configures three (3) nested virtual machines in Hyper-V
    - Two (2) Azure Stack HCI virtual nodes
    - One (1) Windows Server 2019 virtual machine
  - Configures the necessary virtualization and networking infrastructure on the Hyper-V host to support the HCI cluster
  - Deploys an Active Directory domain controller, a Windows Admin Center server in gateway mode, and a Remote Access Server acting as a BGP router
  - Registers the HCI cluster with Azure
  - Deploys AKS-HCI and a target AKS cluster
  - Deploys Arc Resource Bridge and gallery VM images
  - Deploys an Azure Arc-enabled SQL Managed Instance on top of the AKS cluster
## Prerequisites
The following prerequisites must be completed in order to deploy HCIBox using the manual Bicep template option. If you elect to use Azure Developer CLI instead, then many of these prerequisites are configured for you as part of the AZD experience and can be skipped.
### Required for both manual and Azure Developer CLI deployment
- Clone the Azure Arc Jumpstart repository

  ```shell
  git clone https://github.com/microsoft/azure_arc.git
  ```
- Install or update Azure CLI to version 2.49.0 or above. Use the below command to check your currently installed version.

  ```shell
  az --version
  ```

- Login to Az CLI using the `az login` command.
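To confirm the minimum-version requirement programmatically, you can compare the installed version against 2.49.0 with a version-aware sort. The `installed` value below is a hypothetical example; on a real system you could populate it with `az version --query '"azure-cli"' --output tsv`.

```shell
# Documented minimum Azure CLI version for HCIBox.
required="2.49.0"

# Hypothetical installed version; substitute the real value from `az version`.
installed="2.53.0"

# sort -V orders version strings; if the minimum sorts first, we're new enough.
lowest=$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  result="ok"
else
  result="too-old"
fi
echo "Azure CLI $installed vs minimum $required: $result"
```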
### Required for manual deployment only (skip this section if using Azure Developer CLI)
- Ensure that you have selected the correct subscription you want to deploy HCIBox to by using the `az account list --query "[?isDefault]"` command. If you need to adjust the active subscription used by Az CLI, follow this guidance.

- HCIBox must be deployed to one of the following regions. Deploying HCIBox outside of these regions may result in unexpected results or deployment errors.
  - East US
  - East US 2
  - West US 2
  - North Europe
NOTE: Some HCIBox resources will be created in regions other than the one you initially specify. This is due to limited regional availability of the various services included in HCIBox.
- HCIBox requires 32 ESv5-series vCPUs when deploying with default parameters such as VM series/size. Ensure you have sufficient vCPU quota available in your Azure subscription and the region where you plan to deploy HCIBox. You can use the below Az CLI command to check your vCPU utilization.

  ```shell
  az vm list-usage --location <your location> --output table
  ```

  NOTE: If using Azure Developer CLI, the preprovision step will check your subscription for available capacity.
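As an illustration of the quota math behind this check, the snippet below compares a required vCPU count against the remaining headroom in a quota. The limit and in-use numbers here are hypothetical; on a real subscription they come from the `az vm list-usage` command above.

```shell
# Hypothetical quota figures for the ESv5 family in the target region.
required_vcpus=32
quota_limit=100
quota_in_use=40

available=$((quota_limit - quota_in_use))
if [ "$available" -ge "$required_vcpus" ]; then
  echo "OK: $available vCPUs available"
else
  echo "Insufficient quota: only $available vCPUs available"
fi
```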
- Register necessary Azure resource providers by running the following commands.

  ```shell
  az provider register --namespace Microsoft.HybridCompute --wait
  az provider register --namespace Microsoft.GuestConfiguration --wait
  az provider register --namespace Microsoft.Kubernetes --wait
  az provider register --namespace Microsoft.KubernetesConfiguration --wait
  az provider register --namespace Microsoft.ExtendedLocation --wait
  az provider register --namespace Microsoft.AzureArcData --wait
  az provider register --namespace Microsoft.OperationsManagement --wait
  az provider register --namespace Microsoft.AzureStackHCI --wait
  az provider register --namespace Microsoft.ResourceConnector --wait
  az provider register --namespace Microsoft.OperationalInsights --wait
  ```
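If you prefer, the same registrations can be driven from a loop. The sketch below just prints each command so you can review it first; remove the leading `echo` (or pipe the output to a shell) to actually run the registrations.

```shell
# Provider namespaces required by HCIBox, as listed above.
providers=(
  Microsoft.HybridCompute Microsoft.GuestConfiguration Microsoft.Kubernetes
  Microsoft.KubernetesConfiguration Microsoft.ExtendedLocation Microsoft.AzureArcData
  Microsoft.OperationsManagement Microsoft.AzureStackHCI Microsoft.ResourceConnector
  Microsoft.OperationalInsights
)

# Print one registration command per namespace for review.
for ns in "${providers[@]}"; do
  echo "az provider register --namespace $ns --wait"
done
```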
- Create an Azure service principal (SP). To deploy HCIBox, an Azure service principal assigned the Owner Role-based access control (RBAC) role is required. You can use Azure Cloud Shell (or other Bash shell), or PowerShell, to create the service principal.

  - (Option 1) Create service principal using Azure Cloud Shell or Bash shell with Azure CLI:

    ```shell
    az login
    subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac -n "<Unique SP Name>" --role "Owner" --scopes /subscriptions/$subscriptionId
    ```

    For example:

    ```shell
    az login
    subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac -n "JumpstartHCIBox" --role "Owner" --scopes /subscriptions/$subscriptionId
    ```

    Output should look similar to this:

    ```json
    {
      "appId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
      "displayName": "JumpstartHCIBox",
      "password": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
      "tenant": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
    }
    ```
  - (Option 2) Create service principal using PowerShell. If necessary, follow this documentation to install Azure PowerShell modules.

    ```powershell
    $account = Connect-AzAccount
    $spn = New-AzADServicePrincipal -DisplayName "<Unique SPN name>" -Role "Owner" -Scope "/subscriptions/$($account.Context.Subscription.Id)"
    echo "SPN App id: $($spn.AppId)"
    echo "SPN secret: $($spn.PasswordCredentials.SecretText)"
    ```

    For example:

    ```powershell
    $account = Connect-AzAccount
    $spn = New-AzADServicePrincipal -DisplayName "HCIBoxSPN" -Role "Owner" -Scope "/subscriptions/$($account.Context.Subscription.Id)"
    echo "SPN App id: $($spn.AppId)"
    echo "SPN secret: $($spn.PasswordCredentials.SecretText)"
    ```

  NOTE: If you create multiple subsequent role assignments on the same service principal, your client secret (password) will be destroyed and recreated each time. Therefore, make sure you grab the correct password.

  NOTE: The Jumpstart scenarios are designed with ease of use in mind while adhering to security-related best practices whenever possible. It is optional but highly recommended to scope the service principal to a specific Azure subscription and resource group, and to consider using a less privileged service principal account.
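The service principal values map to the `spnClientId` (appId), `spnClientSecret` (password), and `spnTenantId` (tenant) parameters used later during deployment. As a small sketch of extracting those fields from the JSON programmatically, with fake placeholder values standing in for real output:

```shell
# Fake JSON mirroring the shape of the `az ad sp create-for-rbac` output;
# on a real system, capture the command's output into this variable instead.
spn_json='{"appId":"11111111-1111-1111-1111-111111111111","displayName":"JumpstartHCIBox","password":"fake-secret","tenant":"22222222-2222-2222-2222-222222222222"}'

# Pull out the fields needed for main.parameters.json.
spnClientId=$(printf '%s' "$spn_json" | python3 -c 'import sys,json; print(json.load(sys.stdin)["appId"])')
spnClientSecret=$(printf '%s' "$spn_json" | python3 -c 'import sys,json; print(json.load(sys.stdin)["password"])')
spnTenantId=$(printf '%s' "$spn_json" | python3 -c 'import sys,json; print(json.load(sys.stdin)["tenant"])')

echo "spnClientId=$spnClientId"
echo "spnTenantId=$spnTenantId"
```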
## Azure Developer CLI deployment
- Follow the install guide for the Azure Developer CLI for your environment.

  NOTE: PowerShell is required for using azd with HCIBox. If you are running in a Linux environment, be sure that you have PowerShell for Linux installed.

- Login with azd using `azd auth login`, which will open a browser for interactive login.

- Run the `azd init` command from the azure_jumpstart_hcibox folder of your cloned repo.

- Run the `azd up` command to deploy the environment. Azd will prompt you to enter the target subscription, region, and all required parameters.

- Once complete, continue on to the section Start post-deployment automation.
## Bicep deployment via Azure CLI

- Upgrade to the latest Bicep version:

  ```shell
  az bicep upgrade
  ```
- Edit the main.parameters.json template parameters file and supply some values for your environment.

  - `spnClientId` - Your Azure service principal id
  - `spnClientSecret` - Your Azure service principal secret
  - `spnTenantId` - Your Azure tenant id
  - `windowsAdminUsername` - Client Windows VM Administrator name
  - `windowsAdminPassword` - Client Windows VM Password. Password must have 3 of the following: 1 lower case character, 1 upper case character, 1 number, and 1 special character. The value must be between 12 and 123 characters long.
  - `logAnalyticsWorkspaceName` - Unique name for the HCIBox Log Analytics workspace
  - `deployBastion` - Option to deploy Azure Bastion, which is used to connect to the HCIBox-Client VM instead of normal RDP
  - `registerCluster` - Option to automatically register the cluster; set to true by default
  - `deployAKSHCI` - Option to automatically deploy and configure AKS on HCI; set to true by default
  - `deployResourceBridge` - Option to automatically deploy and configure Arc Resource Bridge; set to true by default
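The `windowsAdminPassword` complexity rule can be sanity-checked locally before deployment. The function below is an illustrative sketch of the rule as stated above (12-123 characters, at least 3 of the 4 character classes); it is not part of the HCIBox automation.

```shell
# Returns success (0) if the password meets the documented complexity rule.
check_password() {
  pw="$1"; classes=0
  # Length must be between 12 and 123 characters.
  [ "${#pw}" -ge 12 ] && [ "${#pw}" -le 123 ] || return 1
  # Count how many of the four character classes are present.
  case "$pw" in *[a-z]*) classes=$((classes+1));; esac
  case "$pw" in *[A-Z]*) classes=$((classes+1));; esac
  case "$pw" in *[0-9]*) classes=$((classes+1));; esac
  case "$pw" in *[!a-zA-Z0-9]*) classes=$((classes+1));; esac
  # At least 3 of the 4 classes are required.
  [ "$classes" -ge 3 ]
}

check_password 'ArcPassword123!' && echo "valid" || echo "invalid"
```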
- Now you will deploy the Bicep file. Navigate to the local cloned deployment folder and run the below commands:

  ```shell
  az group create --name "<resource-group-name>" --location "<preferred-location>"
  az deployment group create -g "<resource-group-name>" -f "main.bicep" -p "main.parameters.json"
  ```
## Start post-deployment automation

Once your deployment is complete, you can open the Azure portal and see the initial HCIBox resources inside your resource group. You will be using both the Azure portal and the HCIBox-Client Azure virtual machine to interact with the HCIBox resources.
NOTE: For enhanced HCIBox security posture, RDP (3389) and SSH (22) ports are not open by default in HCIBox deployments. You will need to create a network security group (NSG) rule to allow network access to port 3389, or use Azure Bastion or Just-in-Time (JIT) access to connect to the VM.
### Connecting to the HCIBox Client virtual machine
Various options are available to connect to the HCIBox-Client VM, depending on the parameters you supplied during deployment.

- RDP - available after configuring access to port 3389 on the HCIBox-NSG, or by enabling Just-in-Time (JIT) access.
- Azure Bastion - available if `true` was the value of your `deployBastion` parameter during deployment.
### Connecting directly with RDP
By design, HCIBox does not open port 3389 on the network security group. Therefore, you must create an NSG rule to allow inbound 3389.
NOTE: If you deployed with Azure Developer CLI then this step is automatically done for you as part of the automation.
- Open the HCIBox-NSG resource in the Azure portal and click “Add” to add a new rule.

- Specify the IP address that you will be connecting from and select RDP as the service with “Allow” set as the action. You can retrieve your public IP address by accessing https://icanhazip.com or https://whatismyip.com.
### Connect using Azure Bastion

- If you have chosen to deploy Azure Bastion in your deployment, use it to connect to the VM.
NOTE: When using Azure Bastion, the desktop background image is not visible. Therefore some screenshots in this guide may not exactly match your experience if you are connecting to HCIBox-Client with Azure Bastion.
### Connect using just-in-time access (JIT)
If you already have Microsoft Defender for Cloud enabled on your subscription and would like to use JIT to access the Client VM, use the following steps:
- In the Client VM configuration pane, enable just-in-time. This will enable the default settings.
### The Logon scripts

- Once you log into the HCIBox-Client VM, a PowerShell script will open and start running. This script will take 3-4 hours to finish, and once completed, the script window will close automatically. At this point, the deployment is complete and you can start exploring all that HCIBox has to offer.

  NOTE: The automation will take 3-4 hours to fully complete. Do not close the PowerShell window during this time. When the automation completes successfully, the desktop background will be changed to the HCIBox wallpaper.

- Deployment is complete! Let’s begin exploring the features of HCIBox!

  NOTE: The Register-AzStackHCI PowerShell command registers the cluster to the East US region. This region is hardcoded into the script. If you have regional limitations in your Azure subscription that prevent resource creation in East US, the registration will fail.
## Using HCIBox
HCIBox has many features that can be explored through the Azure portal or from inside the HCIBox-Client virtual machine. To help you navigate all the features included, read through the following sections to understand the general architecture and how to use various features.
### Nested virtualization
HCIBox simulates a 2-node physical deployment of Azure Stack HCI by using nested virtualization on Hyper-V. To ensure you have the best experience with HCIBox, take a moment to review the details below to help you understand the various nested VMs that make up the solution.
| Computer Name | Role | Domain Joined | Parent Host | OS |
| --- | --- | --- | --- | --- |
| HCIBox-Client | Primary host | No | Azure | Windows Server 2022 |
| AzSHOST1 | HCI node | Yes | HCIBox-Client | Azure Stack HCI |
| AzSHOST2 | HCI node | Yes | HCIBox-Client | Azure Stack HCI |
| AzSMGMT | Nested hypervisor | No | HCIBox-Client | Windows Server 2022 |
| JumpstartDC | Domain controller | Yes (DC) | AzSMGMT | Windows Server 2022 |
| AdminCenter | Windows Admin Center gateway server | Yes | AzSMGMT | Windows Server 2022 |
| Bgp-Tor-Router | Remote Access Server | No | AzSMGMT | Windows Server 2022 |
### Active Directory domain user credentials

Once you are logged into the HCIBox-Client VM using the local admin credentials you supplied in your template parameters during deployment, you will need to switch to a domain account to access most other functions, such as logging into the HCI nodes or accessing Windows Admin Center. This domain account is automatically configured for you using the same username and password you supplied at deployment. The default domain name is jumpstart.local, so if the username supplied at deployment is “arcdemo”, your domain account in UPN format would be arcdemo@jumpstart.local.
NOTE: The password for this account is set as the same password you supplied during deployment for the local account. Many HCIBox operations will use the domain account wherever credentials are required.
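The mapping from the deployment username to the domain account can be sketched as a simple string construction; "arcdemo" is the example username from the text above.

```shell
# Username supplied at deployment (example value from the text).
windowsAdminUsername="arcdemo"

# The domain account is the same username at the default jumpstart.local domain.
domainUpn="${windowsAdminUsername}@jumpstart.local"

echo "$domainUpn"
```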
### Monitoring Azure Stack HCI
Azure Stack HCI integrates with Azure Monitor to support monitoring HCI clusters through the Azure portal. Follow these steps to configure monitoring on your HCIBox cluster.
- From the Overview blade of the HCIBox-Cluster resource, select the “Capabilities” tab, then click on “Not configured” on the “Logs” box.

- On the dialog box, select the HCIBox-Workspace Log Analytics workspace in the dropdown, then click “Add”. This will begin the process of installing the Log Analytics extensions on the host nodes and will take a few minutes. When complete, the Logs box will show as “Configured” on the Capabilities tab.

- On the “Capabilities” tab, click on “Not configured” on the “Insights” box.

- On the dialog box, click “Turn on”. After a few seconds, the Insights box should show as “Configured” on the Capabilities tab.

- It will take time for logs data to flow through to Log Analytics. Once data is available, click on the Insights blade of the HCIBox-Cluster resource to view the Insights workbook and explore logs from your cluster.
### VM provisioning through Azure portal with Arc Resource Bridge
Azure Stack HCI supports VM provisioning through the Azure portal. Open the HCIBox VM provisioning documentation to get started.
### Windows Admin Center
HCIBox includes a deployment of a Windows Admin Center (WAC) gateway server. Windows Admin Center can also be used from the Azure portal. Open the HCIBox Windows Admin Center documentation to get started.
NOTE: Registering Windows Admin Center with Azure is not supported in HCIBox.
### Azure Kubernetes Service
HCIBox comes pre-configured with Azure Kubernetes Service on Azure Stack HCI. Open the HCIBox AKS-HCI documentation to get started with AKS-HCI in HCIBox.
### Azure Arc-enabled SQL Managed Instance
HCIBox supports deploying Azure Arc-enabled SQL Managed Instance on an AKS HCI cluster. Open the HCIBox SQLMI documentation to get started with Azure Arc-enabled SQL Managed Instance in HCIBox.
## Advanced Configurations
HCIBox provides a full Azure Stack HCI sandbox experience with minimal configuration required by the user. Some users may be interested in changing HCIBox’s default configuration. Many advanced settings can be configured by modifying the values in the HCIBox-Config.psd1 PowerShell file. If you wish to make changes to this file, you must fork the Jumpstart repo and make the changes in your fork, then set the optional githubAccount and githubBranch deployment template parameters to point to your fork.
NOTE: Advanced configuration deployments are not supported by the Jumpstart team. Changes made to the HCIBox-Config.psd1 file may result in failures at any point in HCIBox deployment. Make changes to this file only if you understand the implications of the change.
## Next steps

HCIBox is a sandbox that can be used for a large variety of use cases, such as an environment for testing and training, or to jumpstart a proof-of-concept project. Ultimately, you are free to do whatever you wish with HCIBox. Some suggested next steps for you to try in your HCIBox are:
- Explore Windows Admin Center from either Azure portal or from the WAC gateway server
- Deploy GitOps configurations with Azure Arc-enabled Kubernetes
- Build policy initiatives that apply to your Azure Arc-enabled resources
- Write and test custom policies that apply to your Azure Arc-enabled resources
- Reuse automation for external solutions or proof-of-concepts
## Clean up the deployment
To clean up your deployment, simply delete the resource groups using Azure CLI, Azure Developer CLI, or Azure portal. Be sure to delete the ArcServers resource group first as seen in the example below.
- Clean up using Azure CLI

  ```shell
  az group delete -n <name of your resource group>-ArcServers
  az group delete -n <name of your resource group>
  ```

- Clean up using Azure Developer CLI

  ```shell
  azd down
  ```
## Basic Troubleshooting
Occasionally deployments of HCIBox may fail at various stages. Common reasons for failed deployments include:
- Invalid service principal id, service principal secret, or service principal Azure tenant ID provided in the main.parameters.json file. This can cause failures when running automation that requires logging into Azure, such as the scripts that register the HCI cluster, deploy AKS-HCI, or configure Arc resource bridge.
- Not enough vCPU quota available in your target Azure region - check vCPU quota and ensure you have at least 48 available. See the prerequisites section for more details.
- Target Azure region does not support all required Azure services - ensure you are running HCIBox in one of the supported regions. See the prerequisites section for more details.
- Authentication issues - Most HCIBox operations require the use of the domain credentials configured during deployment. These credentials take the UPN format of <username>@jumpstart.local. If you have issues accessing services such as Windows Admin Center, make sure you are using the correct credential.
- Script failures due to upstream dependencies - This can happen due to network issues or failures in upstream services that HCIBox depends on (such as package repositories). In most cases, deleting the deployment and redeploying is the simplest resolution.

If you have issues that you cannot resolve when deploying HCIBox, please submit an issue on the GitHub repo.
### Exploring logs from the HCIBox-Client virtual machine

Occasionally, you may need to review log output from scripts that run on the HCIBox-Client virtual machine in case of deployment failures. To make troubleshooting easier, the HCIBox deployment scripts collect all relevant logs in the C:\HCIBox\Logs folder on HCIBox-Client. A short description of the logs and their purpose can be seen in the list below:
| Log file | Description |
| --- | --- |
| C:\HCIBox\Logs\Bootstrap.log | Output from the initial bootstrapping script that runs on HCIBox-Client. |
| C:\HCIBox\Logs\New-HCIBoxCluster.log | Output of New-HCIBoxCluster.ps1, which configures the Hyper-V host and builds the HCI cluster, management VMs, and other configurations. |
| C:\HCIBox\Logs\Register-AzSHCI.log | Output of Register-AzSHCI.ps1, which registers the cluster with Azure. |
| C:\HCIBox\Logs\Deploy-AKS.log | Output of Deploy-AKS.ps1, which deploys and configures AKS on HCI. |
| C:\HCIBox\Logs\Deploy-ArcResourceBridge.log | Output of Deploy-ArcResourceBridge.ps1, which deploys and configures Arc resource bridge and builds gallery images. |
| C:\HCIBox\Logs\Deploy-SQLMI.log | Output of Deploy-SQLMI.ps1, which deploys and configures Arc-enabled SQL Managed Instance. |
If you are still having issues deploying HCIBox, please submit an issue on GitHub and include a detailed description of your issue and the Azure region you are deploying to. Inside the C:\HCIBox\Logs folder you can also find instructions for uploading your logs to an Azure storage account for review by the Jumpstart team.