This domain is about understanding where your Mule applications run after you build them.
Think of it like this:
You’ve written an app using Anypoint Studio.
Now you need to decide where and how it will be hosted, how it can scale, how to ensure uptime, and how to secure and monitor it.
That’s what Runtime Plane Technology Architecture focuses on.
This is important because a good design at the runtime layer ensures:
Your applications don’t crash under load.
They recover from failure automatically.
They stay secure and compliant with IT policies.
MuleSoft provides different ways to run and host your applications. Each model has different levels of control, automation, and responsibility.
CloudHub is MuleSoft’s fully managed cloud environment.
You do not need to worry about servers, OS, containers, or infrastructure.
Hosted and managed entirely by MuleSoft on AWS.
Each app runs in its own dedicated container.
Each container is called a worker.
Apps are isolated from each other.
You can manually scale by adding more workers.
Auto-scaling is not available in CloudHub 1.0.
| Area | Managed By |
|---|---|
| Infrastructure (servers, OS) | MuleSoft |
| Networking, load balancers | MuleSoft |
| Application code | You |
| App-level monitoring/logs | You |
CloudHub 1.0 is a good fit for:
Small to medium-sized integrations.
When you want minimal operations overhead.
When you don’t need full control over deployment and infrastructure.
CloudHub 2.0 is the next-generation version of CloudHub. It is built on Kubernetes and supports:
Horizontal auto-scaling (apps can scale automatically based on load).
Improved operational control (better metrics, observability).
Smaller deployment units called replicas (instead of full workers).
More granular resource sizing, so you don't have to over-allocate vCores per app.
Apps can scale in/out automatically based on CPU/memory/load.
Supports advanced routing, custom domains, and better multi-region deployment.
More cost-efficient for dynamic workloads.
Easier to manage high-throughput apps.
CloudHub 2.0 is a good fit for:
APIs with variable load (e.g., public-facing apps).
Customers already using Kubernetes-style architecture.
Runtime Fabric is a container-based deployment option where you control the environment.
You can install RTF:
On-premises (your own data center)
In your private cloud (e.g., AWS, Azure)
On Kubernetes clusters you manage (Bring Your Own K8s)
Full control of deployment.
Supports autoscaling, advanced security, custom networking.
Can run multiple apps in shared or isolated modes.
Supports Microgateway deployment for API traffic filtering.
Used in regulated industries or high-security environments.
Enterprises with strict IT governance.
Scenarios where apps must run inside a private network.
Need for advanced performance tuning.
Hybrid deployment refers to running Mule apps on your own servers or virtual machines, outside of CloudHub or RTF, but still managing them through Anypoint Runtime Manager.
MuleSoft provides the Mule Runtime binaries.
You install them on your own hardware or VMs.
Runtime Manager connects to your servers using an agent.
You can deploy, stop, and monitor apps via the cloud.
Full control of the server environment.
No need for containerization or Kubernetes.
Useful for legacy environments or gradual migration to cloud.
You manage everything: OS, patches, monitoring, scaling.
No built-in autoscaling.
| Deployment Model | Managed By | Autoscaling | Use Case |
|---|---|---|---|
| CloudHub 1.0 | MuleSoft | No | Fast setup, low ops burden |
| CloudHub 2.0 | MuleSoft | Yes | Dynamic workloads, better performance |
| Runtime Fabric | You | Yes | High control, private cloud, advanced ops |
| Hybrid | You | No | Legacy systems, on-premise environments |
A deployment topology describes the physical and logical arrangement of your Mule applications at runtime.
It includes:
How many workers your app uses
Whether your app runs in a cluster
How CPU and memory are allocated
How your app scales when load increases
Choosing the right topology is key to:
Improving performance under load
Ensuring fault isolation and availability
Managing costs by avoiding over-provisioning
In CloudHub, each Mule application is deployed with one or more workers.
Each worker is a dedicated container running your Mule app. They do not share memory or processing with each other.
Concurrency: More workers can handle more simultaneous requests.
Fault isolation: If one worker crashes, others keep running.
Availability: Workers can be spread across different availability zones.
You manually choose:
Number of workers (e.g., 2 workers)
Size of each worker (e.g., 0.1, 0.2, or 1 vCore)
Example:
You deploy an API with 2 workers, each with 1 vCore. This gives your app a total of 2 vCores of processing power, and the app keeps running if one worker fails.
In Hybrid deployments, you can cluster multiple Mule runtimes together so that they work as one logical unit.
This is useful for stateful applications that need:
Shared memory
Transactional processing
Session replication
Uses Mule 4 Clustering (available for on-premise licensed customers).
Supports reliable message processing across nodes.
Required for certain JMS-based apps or apps using persistent queues.
You install 3 Mule runtimes on separate servers and cluster them. A message sent to one server can be processed by another in the cluster if needed.
Clustering is a good fit for:
On-prem apps with stateful needs.
Environments requiring zero message loss.
You predefine how much CPU and memory the app will use.
Example: 2 workers, each with 1 vCore and 2 GB RAM.
Used in CloudHub 1.0 and Hybrid.
Pros:
Predictable behavior
Fixed cost
Cons:
May under-utilize resources during low traffic
May be insufficient during traffic spikes
Resources are allocated automatically based on usage.
App can scale up or down without manual action.
Pros:
Cost-effective
Handles traffic bursts smoothly
Cons:
Slightly more complex to manage
Requires good observability and autoscaling policies
Increase resources for a single node (more vCores, more memory).
Used when your app is single-threaded or monolithic.
Pros:
Simple
No need for app code changes
Cons:
There's a limit to how big a single machine can get
Not fault tolerant (if the node goes down, app goes down)
Add more instances of the app (workers, pods).
Requests are distributed across them.
Pros:
More fault tolerant
Easier to scale linearly
Cons:
Needs stateless design (to avoid session problems)
Slightly more complex deployment
| Concept | Description | Best For |
|---|---|---|
| Multi-worker Design | Multiple isolated workers for one app (CloudHub) | Stateless APIs, concurrency, fault isolation |
| Clustered Deployment | Mule runtimes working together (Hybrid) | Stateful apps, reliable messaging |
| Static Allocation | Fixed CPU/RAM settings per app | Simple workloads, predictable traffic |
| Dynamic Allocation | Auto-assign CPU/RAM based on load | Variable workloads, optimized cost |
| Vertical Scaling | Add more CPU/RAM to same instance | Single-threaded or resource-heavy apps |
| Horizontal Scaling | Add more instances of app (workers or pods) | Scalable, fault-tolerant, cloud-native apps |
High Availability means that your Mule applications can:
Continue running without interruption, even if something fails (like a server, worker, or zone).
Recover automatically without manual intervention.
Meet Service Level Agreements (SLAs) like 99.9% uptime.
In production environments, HA is not optional — it's a requirement.
CloudHub 1.0 does not support traditional clustering, but it achieves HA using multiple workers and Availability Zones (AZs).
Deploy with at least 2 workers:
Each worker runs in a separate container.
If one fails, the other continues processing.
Distribute workers across AZs:
MuleSoft's infrastructure automatically spreads workers across different Availability Zones within a region.
This protects against zone-level failures (like a data center outage).
Use Object Store v2 for shared state:
Since workers are isolated, they don’t share memory.
Object Store v2 allows you to share small pieces of data between workers.
All applications should be stateless whenever possible.
For any shared data or session state, always use external stores (e.g., Object Store, Redis, DB).
RTF provides true container-level High Availability using Kubernetes.
Multiple replicas of your app run across different nodes (a sketch of the underlying Kubernetes spec follows this list).
If a pod fails, Kubernetes automatically replaces it.
Built-in autoscaling can increase replicas during traffic spikes.
Load balancing is handled via Kubernetes services and ingress controllers.
Use stateless applications whenever possible.
For stateful workloads, use:
Shared external storage (e.g., Object Store, DB)
Kubernetes persistent volumes
Sticky sessions (if unavoidable)
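Under the hood, RTF relies on standard Kubernetes primitives. Below is a minimal, hypothetical sketch of the kind of Deployment spec that keeps multiple replicas alive (RTF generates the real spec for you when you deploy through Runtime Manager; the names and image here are illustrative only):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api                # hypothetical app name
spec:
  replicas: 3                     # three pods; Kubernetes replaces any that fail
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: mule-app
          image: registry.example.com/orders-api:1.0.0  # hypothetical image
```

Because the replica count is declarative, a failed pod is recreated automatically until the actual state matches the desired state.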
In traditional on-prem environments, you implement HA using Mule Clustering.
A cluster is a group of Mule runtimes that work together as a single logical unit.
All runtimes in a cluster share state and coordinate message processing.
If one server fails, others take over without losing messages.
Supports persistent queues, transactions, and synchronous message processing.
Requires MuleSoft Enterprise license.
Can be deployed with load balancers to distribute requests.
Apps are deployed identically to all cluster nodes.
Messages are assigned to nodes using round-robin or intelligent routing.
State is synchronized between nodes (for supported components).
| Deployment Model | HA Mechanism | What You Need to Do |
|---|---|---|
| CloudHub | Multiple workers + distributed AZs | Deploy with 2+ workers, use Object Store v2 |
| RTF | Kubernetes-based redundancy and autoscaling | Deploy multiple replicas, configure autoscaling |
| Hybrid (On-Prem) | Mule 4 Clustering | Set up a cluster, deploy to all nodes, use shared state |
| Best Practice | Explanation |
|---|---|
| Design for statelessness | Avoid keeping session or state in memory. Use external systems like DB or Object Store. |
| Use health checks | Ensure Runtime Manager or Kubernetes knows when an app is unhealthy. |
| Avoid single points of failure | Use multiple workers, AZs, nodes, and replicas. |
| Monitor key metrics | Track availability, memory usage, thread pools. |
| Use retries and fallback logic in apps | For transient failures, retry with limits and log gracefully. |
Once an application is deployed, your job is not finished. You need to:
Track how it's performing
Detect failures early
Understand what went wrong when something breaks
Prove compliance (for security, privacy, uptime)
Logging and monitoring are the foundation of observability in enterprise integration.
Logging means recording events that happen while your app runs.
Key events: "Order received", "Customer record updated"
Errors and exceptions: Full stack trace, flow name, correlation ID
Warnings: Timeout when calling external API, retry attempts
Metadata: Request IDs, timestamps, environment
In MuleSoft environments:
CloudHub 1.0: Logs are stored per worker and viewable in Runtime Manager.
CloudHub 2.0: Logs are accessible via log streaming or files.
Runtime Fabric and Hybrid: Logs are stored locally, or sent to external systems like ELK or Splunk.
Use a standard log format across all apps.
Include a correlation ID to trace requests across systems.
Avoid logging sensitive data like passwords or access tokens.
Log at the correct level:
INFO: Normal operational messages
DEBUG: Development-level detail
WARN: Something unexpected but not fatal
ERROR: Application failures
For large organizations, logs are usually forwarded to centralized systems for analysis.
Splunk
ELK Stack (Elasticsearch, Logstash, Kibana)
Datadog
New Relic
You configure your app or the Mule platform to send log output to an external collector.
In CloudHub:
Use log forwarding options in the platform.
Configure external logging endpoints via Runtime Manager.
In Runtime Fabric:
Logs are written locally and are not retained centrally by default, so you configure log forwarding (typically Log4j appenders) to ship log events to your aggregator.
Benefits of centralized logging:
Centralized search and filtering
Alerting on log patterns (e.g., more than 10 errors in 5 minutes)
Log retention and compliance
Anypoint Monitoring is MuleSoft's built-in monitoring solution.
| Feature | Description |
|---|---|
| Application Metrics | View CPU, memory, response time, throughput |
| Custom Dashboards | Create visual charts to track KPIs |
| Alerts | Notify your team when thresholds are crossed |
| Distributed Tracing | Track how a request flows through multiple APIs or systems |
| Log Search (CloudHub) | Search logs from within the monitoring UI |
| API Monitoring | Track SLA breaches, error rates, and usage of APIs |
Key metrics to track include:
Number of requests handled
Average response time
Number of failed transactions
JVM metrics (heap memory, GC time)
API usage metrics (hits per endpoint, response codes)
You can configure alerts in Anypoint Monitoring to:
Notify your team via email, Slack, or Ops tools
Trigger based on:
Error rate > 5%
Response time > 1000 ms
App CPU > 80%
Dashboards are visual tools that show metrics over time:
Use pre-built or custom dashboards
Group by application, environment, or API
Share with other teams
| Area | Description | Tools Involved |
|---|---|---|
| Logging | Capturing application events and errors | Mule logs, Log4j, JSON logs |
| Log Streaming | Sending logs to external analysis platforms | ELK, Splunk, Datadog, Fluentd |
| Monitoring | Real-time tracking of performance and behavior | Anypoint Monitoring |
| Alerts | Notifications when metrics exceed defined thresholds | Anypoint Monitoring, email, Slack |
| Dashboards | Visual representation of trends and KPIs | Anypoint Monitoring, Kibana |
Enable centralized logging from the beginning.
Use consistent naming and tagging (e.g., app name, env, team).
Monitor error trends over time, not just single events.
Automate alerting to detect problems before users notice.
Keep logs and metrics for audit and compliance.
The runtime plane is where your Mule application physically runs — on a server, container, or cloud worker.
Securing the runtime plane means protecting:
The network paths your app uses
The data moving in and out
The infrastructure components that host your app
TLS (Transport Layer Security) is used to encrypt communication between clients and your Mule app.
Protects data from being read by attackers.
Required by most security standards (e.g., GDPR, HIPAA).
Mule apps support HTTPS endpoints by configuring TLS connectors.
You can provide:
A keystore (for SSL certificates and private keys)
A truststore (to trust incoming client certificates, if mutual TLS is used)
You expose an API at https://api.company.com/v1/orders.
TLS ensures the request is encrypted between the client and the Mule runtime.
Mule applications often connect to internal systems like:
Databases
ERPs (e.g., SAP)
File servers
If the connection is open to the public internet, it can be exploited.
Internal systems should only be reachable by trusted apps or IPs.
IP whitelisting: Only allow specific IP ranges to connect to your backend.
VPN: Set up a Virtual Private Network so Mule apps connect securely to internal systems.
PrivateLink or DirectConnect (cloud environments): For secure cloud-to-cloud communication.
You configure IP whitelisting so that only CloudHub workers can access your internal DB.
A Virtual Private Cloud (VPC) is a logically isolated section of the cloud.
In CloudHub:
You can create a dedicated VPC for your Mule apps.
You can control traffic between your VPC and the public internet or internal networks.
A Private Space is a newer feature in CloudHub 2.0.
It provides enhanced control over:
Networking (custom subnets, routes)
DNS resolution
Ingress/egress control
It’s based on Kubernetes namespaces and network policies.
Better network isolation
Supports compliance requirements
Enables fine-grained firewall rules
RTF gives full control over the underlying infrastructure, including the network.
Deploy to private cloud or on-premise Kubernetes
Configure internal-only endpoints (not exposed to the public)
Use AWS VPC Peering, Azure VNet Integration, or on-prem routing
Secure API traffic using service mesh or ingress controllers
You deploy an internal app that only Finance systems can call — it’s never exposed publicly.
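As one hedged illustration, on Kubernetes-based RTF an internal-only app can be exposed through a ClusterIP Service, which never receives an external IP (all names below are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: finance-internal-api      # hypothetical internal-only service
spec:
  type: ClusterIP                 # reachable only inside the cluster network
  selector:
    app: finance-internal-api
  ports:
    - port: 443                   # port internal clients call
      targetPort: 8443            # port the Mule app listens on
```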
| Security Measure | Description | Applies To |
|---|---|---|
| TLS | Encrypts data in transit via HTTPS | All deployments |
| IP Whitelisting | Only allow trusted IPs to access endpoints | CloudHub, RTF |
| VPN | Secure tunnel between apps and backend systems | CloudHub, RTF, Hybrid |
| VPC / Private Space | Isolated network with fine-grained access control | CloudHub, CloudHub 2.0 |
| Private Networking in RTF | Complete control over traffic routing | Runtime Fabric (RTF) |
| Mutual TLS (mTLS) | Verifies both client and server identities | Advanced deployments |
| Practice | Explanation |
|---|---|
| Use HTTPS by default | Always encrypt data in transit |
| Restrict public access | Only expose necessary endpoints to the internet |
| Use internal DNS or private IPs | Avoid hardcoding public addresses |
| Encrypt secrets and credentials | Use secure property placeholders or vault integrations |
| Regularly rotate certificates and keys | Prevent key leakage or expiration-based failures |
| Separate environments using VPCs | Avoid cross-talk between dev, test, and prod |
| Apply least-privilege access controls | Give minimum access required to each app and service |
This section focuses on how to design the network layout that supports your Mule applications. Network architecture decisions are critical because they:
Control who can access your applications
Ensure connectivity to internal and external systems
Impact performance, latency, and security
Inbound traffic is traffic coming into your Mule application, such as:
API calls from a web app or mobile client
Messages from partner systems
Outbound traffic is traffic leaving your Mule app to access:
Databases
SaaS platforms (Salesforce, Workday, etc.)
Other APIs or services
Ensure firewalls allow inbound traffic from only trusted IPs or domains.
Allow outbound traffic only to required destinations (block all others).
In some companies, firewall requests must be raised in advance to open specific ports or IPs.
Your app needs to call a third-party payment API. You must:
Open outbound HTTPS port (443) to their IP range.
Confirm the remote IP is whitelisted by your network team.
An API gateway acts as the entry point to your APIs and provides:
Centralized security
Rate limiting
Logging and monitoring
Policy enforcement (e.g., CORS, client ID validation)
| Option | Description |
|---|---|
| API gateway in CloudHub | Use MuleSoft’s built-in API Gateway via API Manager |
| API gateway in RTF | Deploy a Hybrid API Gateway alongside RTF |
| External API gateway | Use a third-party gateway (e.g., Apigee, Kong, AWS API Gateway) |
Place the gateway outside the internal network, but behind a load balancer or WAF.
Ensure it handles TLS termination securely.
Enforce security policies (e.g., OAuth2, JWT validation) at the gateway, not inside apps.
When you expose a Mule application, you need to decide if it will be:
Internal only: Accessible within the company network.
Publicly accessible: Reachable over the internet.
| Type | Use Case | Security Requirement |
|---|---|---|
| Internal | HR app for employee data | VPN or VPC-only access |
| Public | Customer-facing API (e.g., product catalog) | OAuth2, TLS, IP filtering |
In CloudHub, assign apps to private or public workers.
In RTF or Hybrid, use ingress controllers and internal load balancers.
DNS (Domain Name System) is used to resolve domain names (e.g., api.example.com) to IP addresses.
Use private DNS zones for internal apps.
Ensure Mule runtimes can resolve internal hostnames (e.g., db.internal.corp.com).
When using VPCs or hybrid connectivity, configure custom DNS resolvers.
You deploy a Mule app in Runtime Fabric that must call internal-api.mycompany.local. You configure your Kubernetes cluster to use your company’s internal DNS server for name resolution.
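For illustration, Kubernetes supports this through the pod spec's dnsConfig; a minimal sketch follows, in which the nameserver IP and search domain are assumptions, not real values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mule-app                  # hypothetical pod name
spec:
  dnsPolicy: "None"               # bypass the cluster's default resolver
  dnsConfig:
    nameservers:
      - 10.0.0.53                 # hypothetical internal DNS server
    searches:
      - mycompany.local           # short names resolve against this domain
  containers:
    - name: mule-app
      image: registry.example.com/mule-app:1.0.0  # hypothetical image
```

In practice this is often configured once at the cluster level (e.g., CoreDNS forwarding rules) rather than per pod, but the effect is the same: internal hostnames resolve correctly.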
| Element | Description | Why It Matters |
|---|---|---|
| Firewall Rules | Control what traffic can enter or leave the runtime | Prevents unauthorized access |
| API Gateway Placement | Central point for traffic filtering, security, and policies | Improves API governance and protection |
| Internal vs Public Apps | Controls exposure of your apps based on intended use | Reduces attack surface |
| DNS Resolution | Ensures apps can locate and connect to services inside/outside | Avoids runtime failures due to name errors |
| Best Practice | Explanation |
|---|---|
| Use least privilege networking | Open only the ports/IPs required for the app to function |
| Deploy internal-only apps inside private zones | Prevent public exposure unless explicitly required |
| Secure all public endpoints | Always use TLS, authentication, and rate limiting |
| Use DNS aliases for flexibility | Avoid hardcoding IPs; allows seamless migration or failover |
| Place gateway before apps | Enforce security and traffic policies at the edge |
While the control plane and the runtime plane are both part of the Anypoint Platform, they have clear functional boundaries that impact architecture, operations, and security decisions.
Purpose: Governs design, deployment orchestration, security policy application, user roles, and metadata management.
Accessed via: Anypoint Platform console (cloud-hosted or GovCloud).
Contains:
Design Center – API/spec design
API Manager – Policy enforcement and contract governance
Exchange – Reusable asset catalog
Access Management – RBAC and SSO integration
Purpose: Executes Mule applications and processes data.
Deployable to:
CloudHub 1.0 / 2.0 (fully managed cloud runtime)
Runtime Fabric (RTF) (Kubernetes-based hybrid runtime)
On-premises Mule servers (standalone or clustered)
Architectural Implication:
The control plane manages assets, while the runtime plane executes them.
Architects must design the interfaces between the two planes deliberately:
Secure API communication (TLS, IP whitelisting).
Controlled deployment from control to runtime via Mule Maven Plugin or Anypoint CLI.
Role separation — e.g., developers may access design tools, but only ops manage runtimes.
Exam Tip:
If a question involves policy application, user management, or asset discovery, the answer belongs to the Control Plane.
If it involves scaling, traffic routing, or logging, it belongs to the Runtime Plane.
Modern Mule runtimes (CloudHub 2.0, RTF) are containerized.
Each deployed application is an isolated runtime unit, ensuring security and scalability.
Each application runs in its own container or pod.
Isolation is enforced for CPU, memory, and filesystem resources.
Containers are ephemeral — destroyed and recreated as needed.
Platform orchestrators (Kubernetes or MuleSoft scheduler) handle restarts automatically.
Design stateless applications. Store session data in external stores (e.g., Object Store, DB).
Define resource limits (CPU, memory) for each pod to prevent noisy neighbor effects (sketched below).
Separate environments logically to avoid cross-contamination.
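As a minimal sketch, resource requests and limits sit in the container spec of a Kubernetes deployment; the values below are illustrative assumptions, not recommendations:

```yaml
containers:
  - name: mule-app
    image: registry.example.com/mule-app:1.0.0  # hypothetical image
    resources:
      requests:
        cpu: "500m"               # guaranteed baseline: half a vCore
        memory: "1Gi"             # used by the scheduler for placement
      limits:
        cpu: "1"                  # hard ceiling: one vCore
        memory: "2Gi"             # the pod is OOM-killed above this
```

Requests drive scheduling; limits contain noisy neighbors. Size both from measured load tests, not guesses.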
Exam Application:
If asked how to maximize scalability in RTF, the correct answer is to design lightweight, stateless containers and define resource quotas.
Health checks ensure Mule applications are operational and can recover autonomously from failure.
Liveness Probe: Detects whether an app is stuck or crashed. If failed, Kubernetes restarts it.
Readiness Probe: Determines if the app is ready to receive traffic. Prevents load balancer from sending requests prematurely.
In RTF/Kubernetes, probes are defined in YAML deployment specs (see the probe sketch after this list).
In CloudHub, health is monitored via Runtime Manager dashboards and alerts.
Implement custom health endpoints (/health, /ping) that validate external dependencies (DB, APIs).
Configure auto-restart thresholds conservatively to prevent restart loops.
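In Kubernetes terms, the two probes might look like the following sketch, assuming the app exposes a hypothetical /health endpoint on port 8081:

```yaml
livenessProbe:
  httpGet:
    path: /health                 # hypothetical health endpoint in the app
    port: 8081
  initialDelaySeconds: 60         # give the Mule runtime time to start
  periodSeconds: 15
  failureThreshold: 3             # restart only after 3 consecutive failures
readinessProbe:
  httpGet:
    path: /health
    port: 8081
  periodSeconds: 10               # traffic is withheld until this passes
```

The conservative failureThreshold and initialDelaySeconds illustrate the restart-loop advice above.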
Architectural Impact:
Health checks are the foundation of self-healing, a key differentiator of containerized deployment models in MuleSoft.
Load balancing ensures even request distribution and fault tolerance across replicas.
CloudHub 1.0/2.0:
Traffic distributed automatically across workers.
Can use CloudHub Load Balancer (CHLB) for custom domains, TLS termination, and sticky sessions.
Runtime Fabric:
Traffic is distributed across replicas by Kubernetes services and ingress controllers.
You can also front the cluster with your own external load balancer.
Hybrid / On-prem:
No built-in load balancing; you place an external load balancer (e.g., NGINX, HAProxy, F5) in front of the Mule servers.
Blue/Green deployments: Two environments (active + standby); traffic switched after validation.
Canary releases: Gradual traffic shift (e.g., 10%, 25%, 100%) to new versions (an ingress example follows this list).
Weighted routing: Allocate partial traffic between versions for testing or phased rollout.
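As one concrete, non-MuleSoft-specific illustration, the NGINX ingress controller implements canary and weighted routing through annotations (the host, service name, and weight below are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-api-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"        # mark this as the canary route
    nginx.ingress.kubernetes.io/canary-weight: "10"   # send 10% of traffic here
spec:
  rules:
    - host: api.example.com                           # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders-api-v2                   # hypothetical new version
                port:
                  number: 80
```

Raising canary-weight step by step (10, then 25, then 100) gives the gradual shift described above.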
Design Principle:
Keep network routing externalized — the app should not control load balancing logic.
Disaster Recovery (DR) ensures continued operation after a catastrophic failure.
CloudHub:
Deploy across multiple regions manually.
Maintain backups of configurations and object stores.
Runtime Fabric:
Multi-cluster or multi-AZ deployment for redundancy.
Config synchronization between clusters via CI/CD pipelines.
Hybrid:
Maintain standby servers or a secondary data center, and script failover so it can be executed quickly.
Define RTO (Recovery Time Objective) and RPO (Recovery Point Objective) for each integration.
Test DR processes at least quarterly.
Replicate configuration metadata to prevent state loss.
Exam Insight:
In questions about “business continuity,” DR and multi-region design are preferred over manual restarts or backups.
Autoscaling optimizes resource usage while maintaining performance.
CloudHub 2.0: Autoscaling managed by the platform based on CPU/memory thresholds.
RTF: Controlled via the Kubernetes Horizontal Pod Autoscaler (HPA); a sketch follows this list.
Always define minimum and maximum replica limits.
Monitor CPU throttling and OOM kills to refine sizing.
Schedule off-peak scaling down to reduce costs.
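A minimal HPA sketch for an RTF-style deployment follows; the target name and thresholds are illustrative assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-api              # hypothetical deployment to scale
  minReplicas: 2                  # availability floor
  maxReplicas: 10                 # cost ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out above 70% average CPU
```

The explicit min/max bounds implement the best practice above.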
Cost Strategy:
Right-sizing matters as much as scaling. Over-provisioning leads to wasted cost; under-provisioning causes SLA breaches.
Version management in runtime deployments supports controlled evolution and recovery.
RTF/Kubernetes:
Use Helm rollback or Kubernetes rollout undo.
Support Blue/Green or Canary deployments for risk-free release.
CloudHub:
Runtime Manager allows redeployment of previous versions manually.
Artifacts stored in Maven/Nexus facilitate rollbacks.
Always tag releases with immutable build IDs (e.g., orders-api-1.0.3+build45).
Store all deployment configurations in version control.
Resource governance ensures fair and secure usage in multi-team environments.
RTF: Use namespaces, quotas, and RBAC for isolation (quota example below).
CloudHub: Separate apps by business groups and environments.
Hybrid: Apply VM-level separation and OS-level quotas.
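For the RTF/Kubernetes case, a namespace-level ResourceQuota is one common way such isolation is enforced; the namespace and figures below are hypothetical:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a               # hypothetical team namespace
spec:
  hard:
    requests.cpu: "8"             # total CPU the team's pods may request
    requests.memory: "16Gi"
    limits.cpu: "16"
    limits.memory: "32Gi"
    pods: "20"                    # cap on concurrent pods
```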
Tag resources by cost center or project.
Enforce naming conventions (team-appname-env).
Implement budgets and alerts to prevent resource hoarding.
Architectural Objective:
Maintain cost visibility and operational independence without sacrificing control.
In Runtime Fabric, container security is essential to enterprise compliance.
PodSecurityPolicies (PSP) / PodSecurityStandards (PSS): Restrict privileges (no root access, no host networking).
NetworkPolicies: Limit east-west traffic between pods (see the policy sketch after this list).
Image Scanning: Use tools like Trivy, Aqua, or Anchore before deployment.
Runtime Protection: Tools like Falco or Sysdig detect anomalies in real time.
Use signed, verified base images.
Disable shell access to containers.
Store secrets in Kubernetes Secrets or external vaults, not environment variables.
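To make the NetworkPolicy layer concrete, here is a minimal default-deny sketch that only admits traffic from an ingress namespace (the namespace and labels are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-mule-apps
  namespace: mule-apps            # hypothetical app namespace
spec:
  podSelector: {}                 # applies to every pod in the namespace
  policyTypes:
    - Ingress                     # all ingress is denied unless matched below
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: ingress       # hypothetical label on the ingress namespace
      ports:
        - protocol: TCP
          port: 8081
```

Layering this with image scanning and runtime protection is the defense in depth the exam hint below refers to.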
Exam Hint:
Security questions usually expect “defense in depth” — apply multiple layers (image, network, runtime) instead of a single tool.
Distributed tracing is critical in multi-API ecosystems to trace requests end-to-end.
Correlation IDs: Unique identifiers injected in HTTP headers to track requests.
Distributed Tracing Tools: OpenTelemetry, Zipkin, or Jaeger integrate via sidecars or agents.
Anypoint Monitoring: Provides basic tracing within CloudHub.
In RTF/Hybrid, configure tracing sidecars (e.g., OpenTelemetry agents); a minimal collector configuration follows this list.
In multi-API chains, propagate headers (X-Correlation-ID) to maintain trace continuity.
Use trace data to analyze latency, identify bottlenecks, and optimize flow design.
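As a hedged sketch, a minimal OpenTelemetry Collector configuration that receives spans over OTLP and prints them for inspection might look like this; a real deployment would swap the debug exporter for Jaeger, Zipkin, or a vendor backend:

```yaml
receivers:
  otlp:
    protocols:
      grpc:                       # agents/sidecars send OTLP over gRPC
processors:
  batch: {}                       # batch spans before export to cut overhead
exporters:
  debug: {}                       # print spans to stdout; swap for a backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```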
Architectural Purpose:
Tracing ensures observability, enabling rapid root cause analysis across distributed integrations.
What runtime-plane architectural capability does CloudHub provide to support scaling integration workloads?
CloudHub provides automatic worker scaling and load balancing for Mule applications.
In CloudHub deployments, Mule applications run on workers that can be scaled vertically or horizontally depending on performance demands. The platform automatically distributes traffic across workers, improving throughput and reliability. Architects must design stateless Mule applications when possible so scaling additional workers effectively distributes processing load without creating session dependencies.
Demand Score: 58
Exam Relevance Score: 79
Why might an enterprise choose Runtime Fabric over CloudHub for sensitive integrations?
Runtime Fabric allows Mule runtimes to operate within the organization’s controlled infrastructure environment.
With Runtime Fabric, organizations deploy Mule applications on their own Kubernetes clusters or private infrastructure while still managing them through Anypoint Platform. This allows integration workloads to remain within corporate networks or private clouds, meeting regulatory, security, or compliance constraints. CloudHub environments operate within MuleSoft-managed infrastructure, which may not satisfy certain governance policies for sensitive workloads.
Demand Score: 62
Exam Relevance Score: 84
What is the primary architectural factor when deciding between CloudHub and Runtime Fabric deployments?
The primary factor is control over infrastructure and network topology.
CloudHub is a fully managed MuleSoft platform where Mule runtimes are deployed and managed by MuleSoft, simplifying operations and scaling. Runtime Fabric, however, allows organizations to run Mule runtimes on their own Kubernetes infrastructure, giving greater control over networking, security boundaries, and compliance requirements. Enterprises with strict regulatory or network isolation requirements often choose Runtime Fabric. CloudHub is typically preferred for rapid deployment and minimal operational overhead.
Demand Score: 70
Exam Relevance Score: 86