Running the OpenTelemetry Collector in an Azure Container Instance

Last updated: 2025-11-01

There are a number of options for running the OTel Collector, with Kubernetes being one of the most popular for production workloads. Sometimes, though, you may not want the overhead of Kubernetes or may not have access to a cluster. In that case, another deployment option is Azure Container Instances (ACI). This is often regarded as a lightweight, serverless option that can be considerably cheaper than running the workload in Kubernetes. For our example, the source application is running as an Azure Web App.

Let's see how it works.

Mounting Your Config

Whichever flavour or distribution of the Collector you are using, you will almost certainly need to apply some custom configuration so that you can send your telemetry to one or more backends of your choosing. This means that you will need to point the OTel Collector executable to a config file that replaces the default configuration. That involves mounting a volume, and, in the case of ACI, the volume must be backed by an Azure Files share in a Storage Account (a file share, not a blob container, and not a container in the Docker sense).
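
As a rough sketch of what that custom configuration might look like, the YAML below receives OTLP traffic and forwards it to a backend. It reads the endpoint and credentials from the OTLP_ENDPOINT and API_KEY environment variables that we pass to the container later. The api-key header name is an assumption here; the exact header depends on your backend.

        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
              http:
                endpoint: 0.0.0.0:4318

        exporters:
          otlp:
            # endpoint and credentials are injected via container
            # environment variables (see the script below)
            endpoint: ${env:OTLP_ENDPOINT}
            headers:
              # header name is backend-specific - adjust as needed
              api-key: ${env:API_KEY}

        service:
          pipelines:
            traces:
              receivers: [otlp]
              exporters: [otlp]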

For our example we will be using the Collector Contrib distribution (otel/opentelemetry-collector-contrib). The entry point for this distribution is called "otelcol-contrib", but other distributions will have different names for their binaries.
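
The script below assumes that the config file is already in the file share. For completeness, here is a sketch of that upload step, reusing the variable names defined in the script that follows:

        # create the share (if it does not already exist) and upload the config
        az storage share create `
            --name $ShareName `
            --account-name $StorageAccount `
            --account-key $AccountKey

        az storage file upload `
            --share-name $ShareName `
            --source ./otel-config-custom.yaml `
            --account-name $StorageAccount `
            --account-key $AccountKey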

The code below is a PowerShell script that will spin up the OTel Collector in an Azure Container Instance and mount a volume for our custom configuration.

        
        # assumes that otel-config-custom.yaml has been uploaded
        # to the specified Azure Storage Account share:

        $StgResourceGroup = "<storage-account-resource-group>"
        $StorageAccount = "<storage-account-name>"

        # fetch the storage account key so that ACI can mount the share
        $AccountKey = az storage account keys list `
            --resource-group $StgResourceGroup `
            --account-name $StorageAccount `
            --query "[0].value" -o tsv

        $ShareName = "otelconfig"
        $ContainerName = "otel-collector"
        $Image = "otel/opentelemetry-collector-contrib:latest"
        $MountPath = "/etc/otelcol-contrib"
        $appResourceGroup = "<otel-collector-resource-group>"

        # ports: 4317 = OTLP/gRPC, 4318 = OTLP/HTTP, 55679 = zPages
        az container create `
            --resource-group $appResourceGroup `
            --name $ContainerName `
            --image $Image `
            --ports 4317 4318 55679 `
            --azure-file-volume-account-name $StorageAccount `
            --azure-file-volume-account-key $AccountKey `
            --azure-file-volume-share-name $ShareName `
            --azure-file-volume-mount-path $MountPath `
            --environment-variables `
                OTLP_ENDPOINT=<your-otlp-endpoint> `
                API_KEY="" `
                USER_ID="<your-api-user-id>" `
            --command-line "/otelcol-contrib --config $MountPath/otel-config-custom.yaml"
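
Once the container is up, it is worth sanity-checking that the Collector actually started with the custom config. The container logs will show the pipeline components being loaded:

        az container logs `
            --resource-group $appResourceGroup `
            --name $ContainerName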
        
        

Adding Connectivity

Unfortunately, in this state our container is effectively useless. It is running in complete isolation: it has no reachable IP address and there is no way for our application to connect to it. This means we need to do some plumbing before our container can receive and forward telemetry.

On the Azure platform there are a number of options for creating the connectivity we need. The option we are going with is vNet integration. This means that we will create a private vNet in the Azure cloud and then register both our Container Instance and our App Service into that network. Our App Service can still expose a public endpoint, but it can also send outbound traffic securely across the private vNet to the OTel Collector.

Within our vNet we will create two subnets. The first subnet will host our OTel Collector and will be delegated to running Container Instances. First, let's create the vNet:

        
        $ResourceGroup = "<otel-collector-resource-group>"
        $Location = "<location>"
        $VNetName = "otel-vnet"
        $SubnetName = "otel-subnet"
  
        # Create the virtual network

        az network vnet create `
          --resource-group $ResourceGroup `
          --name $VNetName  `
          --address-prefix 10.10.0.0/16 `
          --subnet-name $SubnetName `
          --subnet-prefix 10.10.1.0/24
    
    

And now let's delegate the subnet to Container Instances:

    
        az network vnet subnet update `
            --resource-group $ResourceGroup `
            --vnet-name $VNetName `
            --name $SubnetName `
            --delegations Microsoft.ContainerInstance/containerGroups
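
One important detail: the container group we created earlier is not yet in this subnet, and an existing container group cannot simply be moved into a vNet after the fact. The sketch below re-creates it inside the delegated subnet by adding the --vnet and --subnet flags, with all other variables as in the earlier script. A vNet-deployed container group receives a private IP only:

        # remove the original container group
        az container delete `
            --resource-group $appResourceGroup `
            --name $ContainerName `
            --yes

        # re-create it inside the delegated subnet
        az container create `
            --resource-group $appResourceGroup `
            --name $ContainerName `
            --image $Image `
            --ports 4317 4318 55679 `
            --vnet $VNetName `
            --subnet $SubnetName `
            --azure-file-volume-account-name $StorageAccount `
            --azure-file-volume-account-key $AccountKey `
            --azure-file-volume-share-name $ShareName `
            --azure-file-volume-mount-path $MountPath `
            --environment-variables `
                OTLP_ENDPOINT=<your-otlp-endpoint> `
                API_KEY="" `
                USER_ID="<your-api-user-id>" `
            --command-line "/otelcol-contrib --config $MountPath/otel-config-custom.yaml"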
        
        

Next we will create a subnet within this vNet to host our App Service:

    
        az network vnet subnet create `
            --resource-group $ResourceGroup `
            --vnet-name $VNetName `
            --name appservice-subnet `
            --address-prefixes 10.10.2.0/24
    
    

Now we will delegate this subnet to App Services:

     
        az network vnet subnet update `
            --resource-group $ResourceGroup `
            --vnet-name $VNetName `
            --name appservice-subnet `
            --delegations Microsoft.Web/serverFarms
    
    

We will now integrate our App Service into our vNet and join it to our App Service subnet:

    
        az webapp vnet-integration add `
            --name <app-service-name> `
            --resource-group $ResourceGroup `
            --vnet $VNetName `
            --subnet appservice-subnet

    

So that is our plumbing completed. Just to recap, we have:

  • created a vNet
  • created a subnet for Container Instances and delegated it
  • re-created our Container Instance inside that subnet
  • created a subnet for App Services and delegated it
  • integrated our App Service into the App Service subnet

Adding Local DNS

The good news is that we have now done all the heavy lifting. At this point we could simply look up the private IP address of our OTel Collector container instance and configure our App Service to send telemetry to port 4317/4318 at that address. Unfortunately, this would be a rather fragile solution, as it will break as soon as the IP address of our container instance changes.

To mitigate this we can set up local DNS resolution in our vNet. This means that we will assign a name to the TCP endpoint of our OTel Collector, and our App Service will connect using that name rather than an IP address. The process consists of three steps:

  • create a DNS zone
  • add an A record for the OTel Collector
  • link the DNS zone to our vNet

The code below will create the DNS Zone:

        
        $ResourceGroup = "<otel-collector-resource-group>"
        $PrivateDnsZone = "internal.local"
        $RecordName = "otel-collector"
        # the private IP of the container instance - find it with:
        # az container show -g $ResourceGroup -n otel-collector --query ipAddress.ip -o tsv
        $CollectorIP = "10.10.1.4"
        $VNetName = "otel-vnet"
  
        az network private-dns zone create `
            --resource-group $ResourceGroup `
            --name $PrivateDnsZone
        
        

This will add the A record:

    
        az network private-dns record-set a add-record `
            --resource-group $ResourceGroup `
            --zone-name $PrivateDnsZone `
            --record-set-name $RecordName `
            --ipv4-address $CollectorIP
        
    

And this will link the DNS zone to our vNet:

    
        az network private-dns link vnet create `
            --resource-group $ResourceGroup `
            --zone-name $PrivateDnsZone `
            --name "link-to-otel-vnet" `
            --virtual-network $VNetName `
            --registration-enabled false
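
To double-check that the record is in place, you can read it back:

        az network private-dns record-set a show `
            --resource-group $ResourceGroup `
            --zone-name $PrivateDnsZone `
            --name $RecordName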
        
    

In our appsettings.json file we can now define the OTel Collector endpoint as follows. Note that we use the fully qualified name, including the private DNS zone suffix, since the short host name will not resolve by default:

        
        "AppSettings": {
        "oTelCollectorUrl": "http://otel-collector:4317"
        },
        
        

And that's a wrap. Now, once we run our application, it will send telemetry to our OTel Collector securely across our private vNet. By default, subnets within the same vNet can communicate with each other, so we do not need to create any allow lists or rules.

Conclusion

As a disclaimer, I should say that I have not used this in production. It has been an exercise in looking at an alternative to running the OTel Collector in a Kubernetes cluster. At first it might seem as if there is a lot of preparation involved in getting this off the ground: vNets, subnets, delegation, DNS zones, A records.

So yes, it is not a quick fix. On the other hand, each of the individual steps is highly manageable, and once you have the structure in place it is cheaper and probably easier to maintain.

Like this article?

If you enjoyed reading this article, why not sign up for the fortnightly Observability 360 newsletter? A wholly independent newsletter dedicated exclusively to observability and read by professionals at many of the world's leading companies.

Get coverage of observability news, products, events and more straight to your inbox in a beautifully crafted and carefully curated email.