Microsoft Intune App Deployment Performance: Entra ID vs Hybrid Join Research

In End-User Computing (EUC), it’s all about the applications. In modern management solutions like Microsoft Intune, application deployment is a core part of the platform, enabling admins to either enforce installations for end users or offer them on demand.

Application deployment times in a cloud-based management solution can vary due to factors such as network conditions, endpoint types, and package sizes. With Intune, are those deployment times actually consistent? That question has been explored before in a previous article, and this research builds on that foundation to continue the investigation.

Background

Before diving into the new scenarios and results, it is helpful to understand the previous research. For full details, you can read the original article, but here is a concise summary:

Previous research focused on measuring application deployment times when installations were required, and validating whether those deployments were consistent. To support this, a custom workflow was developed to trigger the installation, verify that the application was installed correctly, and then force an uninstallation. To assess consistency, this process was repeated 10 times, and the data was collected.

The initial findings showed significant inconsistency; some deployments, for example, never completed at all. As a result, a general timeout was introduced, defining a maximum deployment window of 8 hours.

Because the initial results were worse than expected, the GO-EUC affiliates, who are experts on the topic, were consulted. The recommendation was to modify the existing package by altering the metadata (Image Version). This adjustment had a major positive impact, resulting in consistent deployment times.

However, the initial research raised additional questions beyond the original scope. Those questions form the basis for this follow-up research.

Scenario

While presenting the data and insights of the initial research at various events, a significant amount of feedback and many suggestions were received from attendees and, in particular, the GO-EUC affiliates, which helped shape this follow-up study. As the feedback covered multiple topics, it was decided to split them into separate research tracks, each focused on a single, well-defined subject.

For this follow-up research, the focus returns to application deployment times. However, instead of modifying the existing package metadata, this research validates a version upgrade scenario. Additionally, as the primary package of the previous research was small, it was decided to include multiple applications of various sizes, in order to create a more realistic deployment scenario.

The setup and configuration are largely similar to the previous research, with a few minor adjustments. The default device platform is Windows 11. Windows 10 is no longer included, as it has since been retired. Eight virtual machines are used, all deployed with HashiCorp Packer; four are Microsoft Entra ID-joined only, four are Hybrid joined, and all are enrolled in Microsoft Intune.

The entire research process is fully automated using a PowerShell script that performs the following steps:

```mermaid
flowchart LR
    A["Upload New<br>Package Version"] --> B["Target Static Device Group<br>for Deployment"]
    B --> C["Detect Installation<br>Remotely"]
    C --> D["Target Static Device Group<br>for Uninstallation"]
    D --> E["Detect Uninstallation<br>Remotely"]
    E --> F["Unassign Package<br>from Deployment Groups"]
    classDef step fill:#1f2937,stroke:#ffffff,color:#ffffff,stroke-width:2px;
    class A,B,C,D,E,F step;
    linkStyle 0 stroke:#ffffff,stroke-width:3px;
    linkStyle 1 stroke:#ffffff,stroke-width:3px;
    linkStyle 2 stroke:#ffffff,stroke-width:3px;
    linkStyle 3 stroke:#ffffff,stroke-width:3px;
    linkStyle 4 stroke:#ffffff,stroke-width:3px;
```

Based on our testing methodology, this sequence is repeated 10 times for each device. The deployment timeout remains set to 8 hours, meaning that if the application is not available or detected within that window, the workflow continues to the next step.
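The timeout behavior of this workflow can be sketched as follows. The original automation is a PowerShell script; the Python below is only an illustrative stand-in, and the step list in the comments mirrors the flow above rather than any real Intune API.

```python
import time

DEPLOYMENT_TIMEOUT = 8 * 60 * 60  # 8-hour detection window per phase
POLL_INTERVAL = 60                # check once per minute

def wait_for(condition, timeout=DEPLOYMENT_TIMEOUT, poll=POLL_INTERVAL,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns True or `timeout` seconds elapse.
    Returns the elapsed seconds on success, or None on timeout, in which
    case the workflow simply continues with the next step."""
    start = clock()
    while clock() - start < timeout:
        if condition():
            return clock() - start
        sleep(poll)
    return None

# Per iteration (repeated 10 times per device) the script would:
#   1. upload a new package version
#   2. assign it to the static install group
#   3. wait_for(installation detected)    <- timed
#   4. assign it to the static uninstall group
#   5. wait_for(uninstallation detected)  <- timed
#   6. unassign the package from both groups
```

Injecting `clock` and `sleep` keeps the polling logic testable without waiting real hours.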

For this research, two distinct scenarios are defined:

  • Hybrid-joined machines running on-premises
    • Single application deployment
    • Multiple application deployment
  • Entra ID-joined machines running in Azure
    • Single application deployment
    • Multiple application deployment

The following applications are used in this research, packaged as Win32 packages using Master Packager:

  • 7-Zip (7.7 MB)
  • Adobe Reader DC (612 MB)
  • Microsoft Office 365 (3.48 GB)

By comparing these scenarios, the research aims to determine:

  • Whether a new application package upload behaves differently from modifying the version number of an existing application.
  • How application deployment consistency and timing differ between scenarios.
  • Whether there is any difference between Entra ID-only joined devices and Hybrid-joined devices.

It is important to note that in the previous research, only the version number of an existing package was modified. In this new research, a new package is created in Intune and a new version of the application is uploaded cleanly at the beginning of each test run.

Deep dive into the deployment logic of Microsoft Intune

It is valuable to understand the deployment logic of Microsoft Intune. While Microsoft does not explicitly document this behavior, Microsoft MVP Rudy Ooms does provide this information.

Focusing specifically on application deployment, the article highlights that Microsoft Intune does not rely on the same delivery mechanism used for traditional device configuration policies. Instead, application deployment, particularly for Win32 applications, is handled by a separate component: the Intune Management Extension (IME). This distinction is critical to understanding why application delivery behaves differently from other Intune workloads.

When an application is assigned to a device or user, the Intune process does not immediately result in installation following a manual sync. Unlike the MDM channel, which responds more directly to sync triggers, the IME operates on its own polling cycle. This means that even after a sync is initiated, the IME may not immediately check in to retrieve new assignments. As a result, there can be a delay between assignment and the moment the application becomes visible to the endpoint for processing.

Once the IME retrieves the assignment, additional steps further influence timing. The extension evaluates requirements, dependencies, and detection rules before initiating installation. Each of these checks introduces conditional logic that can delay execution, especially in environments where applications are chained together or rely on prerequisite conditions. Furthermore, retry mechanisms and scheduling behavior within the IME can extend the total time before successful deployment, particularly if initial attempts fail or conditions are not yet met when the IME retrieves the assignment.

Another factor is that application deployment is influenced by service-side processing before the IME is even involved. Intune must first process the assignment in the cloud, determine targeting, and prepare the policy payload. Only after this processing is complete can the IME retrieve the relevant instructions. This introduces an additional layer of latency that is not visible from the endpoint perspective.

The result is that application deployment in Intune is inherently asynchronous and multi-phased. The timeline from assignment to installation includes cloud-side processing, IME polling intervals, local evaluation logic, and execution scheduling. Because these steps are decoupled, the process does not follow a linear or immediate pattern. What may appear as inconsistent behavior is, in fact, the expected outcome of this architecture.

From a research perspective, this reinforces the idea that application deployment timing in Microsoft Intune should not be evaluated as a single event. Instead, it should be understood as a sequence of dependent stages, each contributing to overall variability. Measuring the duration and consistency of these stages provides more meaningful insight than focusing solely on the total time to installation.

| Action | PowerShell | Generic IME | Win32 app |
| --- | --- | --- | --- |
| Default check-in | 24 hours | 6 hours | |
| Check-in if exists | 8 hours | 3 hours | |
| Full result reporting | 8 hours | | |
| Forced weekly report | 7 days | | |
| Win32App required app evaluation | | | 1 hour |
| Available app evaluation | | | 8 hours |
| Content cleanup | | | 24 hours |

Source: https://patchmypc.com/blog/how-windows-intune-really-syncs-policies-vs-apps/
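These intervals explain much of the observed variability. As a rough illustration (an assumption-laden sketch with a fixed period, not a documented Intune formula), the worst-case wait before an evaluation cycle even notices a new assignment is the remainder of that cycle's period:

```python
# Evaluation intervals from the table above, in hours
WIN32_REQUIRED_EVAL_H = 1
WIN32_AVAILABLE_EVAL_H = 8

def worst_case_wait_min(interval_h, minutes_since_last_cycle=0):
    """Worst-case minutes until the next cycle fires, assuming a fixed
    period and an assignment landing `minutes_since_last_cycle` minutes
    after the previous run. Real IME scheduling adds jitter, retries,
    and service-side processing on top of this."""
    period = interval_h * 60
    return period - (minutes_since_last_cycle % period)
```

For a required Win32 app this already allows up to 60 minutes before the evaluation cycle looks at the assignment; the cloud-side processing and local detection logic described earlier come on top of that.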

Hypothesis and results

Based on the learnings from the previous initial research, it is expected that updating the package to a new version will result in consistent deployment times.

However, as the upgrade process was not previously executed (only the version of an existing package was amended), there is cautious optimism that this approach may lead to faster deployments compared to the earlier results.

In addition, there is a suspicion that Entra ID–only joined devices may show faster application deployments. This could be related to these virtual machines running in Azure, with different specifications and network characteristics.

The focus will first be on single application deployments for both Hybrid and Entra ID–only joined scenarios. While similar to previous research, the key difference in this case is that every deployment uses a new application package.

The data is presented in a scatter chart, where the Y-axis represents the time it took for the deployment to complete, and the X-axis represents the individual iterations. The minimum, average, and maximum are also shown; when the data points are tightly clustered around the average, the result is more consistent and reliable.
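As a concrete illustration of how each series is summarised (a hypothetical helper mirroring the min/average/max in the charts, not the actual analysis script):

```python
from statistics import mean

def summarize(durations_min):
    """Summarise one series of deployment durations (in minutes) into the
    min / average / max shown in the charts; the spread (max - min) is a
    simple signal of how tightly clustered, i.e. consistent, a run is."""
    return {
        "min": min(durations_min),
        "avg": round(mean(durations_min), 1),
        "max": max(durations_min),
        "spread": max(durations_min) - min(durations_min),
    }
```

A single slow outlier inflates both the maximum and the spread, which is why the charts plot every iteration rather than just the average.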

The pattern shows similar results to the previous research, where the initial deployment is faster compared to the subsequent ones, even though a new package is used for every deployment. We do not know why this first deployment is consistently quicker.

Most deployments fall within a consistent timing range, with some outliers. However, from a user perspective, these differences are not noticeable.

When comparing the Hybrid-joined scenario to the Entra ID–only joined scenario, there is a slight difference in favor of the Entra ID–joined devices. The main distinction between these two scenarios is that the Entra setup runs natively in Azure, using a different machine specification and network configuration compared to the on-premises environment.

Overall, the difference is only between 1 to 4 minutes, which from a user perspective may not be noticeable.

Now let’s move to the second scenario: deploying multiple applications.

When deploying multiple applications for both Hybrid and Entra ID–only joined devices, the spread in timing increases compared to the single application scenario. This is expected, as multiple applications of various sizes are being processed.

The Entra ID–joined scenario shows a more consistent pattern compared to the Hybrid scenario.

When directly comparing both scenarios, the same conclusion can be drawn as with the single application deployment: the Entra ID–joined scenario shows faster deployments. With multiple applications, the difference increases to approximately five to eight minutes.

However, as there are multiple parameters involved, it remains difficult to draw a definitive conclusion based on this observation alone.

With multiple applications being deployed, it also becomes possible to analyze the data at the individual application level.

Applications are always processed sequentially. Intune does not provide a mechanism to control or determine the order of deployment, meaning this sequence is effectively random. This behavior is also reflected in the data across both scenarios.
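A small sketch of this sequential behaviour (the shuffle stands in for Intune's effectively random ordering; the per-app durations are made-up example values):

```python
import random

def sequential_completions(install_times_min, seed=None):
    """Model apps installing one after another in an order the admin
    cannot control. The total is the sum regardless of order, but each
    app's completion time depends on where it lands in the sequence."""
    order = list(install_times_min)
    random.Random(seed).shuffle(order)
    completions, elapsed = [], 0
    for t in order:
        elapsed += t
        completions.append(elapsed)
    return order, completions
```

With example durations of 5, 18, and 42 minutes, the last app always finishes at 65 minutes, but which app that is varies per run, which matches the spread seen at the individual application level.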

Although the overall timings are relatively close, it is interesting to note that in some iterations the installations are more spread out. This suggests that either the installation itself is taking longer or there is an additional delay before the next installation starts.

Unfortunately, based on the dataset collected during this research, it is not possible to definitively confirm which of these two hypotheses is the primary cause.

When directly comparing both scenarios, the overall pattern remains consistent across the individual applications.

Conclusion

Microsoft Intune application deployments are rumored within the community to be slow or inconsistent, resulting in an overall negative user experience. With the insights from Rudy's article, this behavior is actually by design, as Intune operates on defined sync and evaluation cycles for different workloads. To better understand this, it is highly recommended to read his full article.

Based on this research, it can be concluded that Microsoft Intune application deployment performance is primarily governed by synchronization mechanics rather than execution speed.

While Entra ID–joined devices show slightly faster deployment times compared to Hybrid-joined devices, the difference is marginal when compared to the variability introduced by Intune’s app evaluation cycles. When looking at multi-application deployments, the overall timing remains within the same range, with only a slightly higher average. This increase is expected due to multiple applications being processed, but it does not fundamentally change the deployment behavior.

Based on this research, it can be concluded that application deployment timing in Microsoft Intune is heavily governed by sync and evaluation cycles, which are not clearly documented, predictable, or visible to admins. While community research (like Rudy’s) sheds light on this behavior, this shouldn’t require reverse engineering.

Photo by Pawel Czerwinski on Unsplash