This is the first thing you do in any practical task: understand what is really being asked.
You will typically be given a mix of information:
Text description of the environment
Diagrams (network/storage)
Emails or “tickets” from users or managers
Logs or alerts from systems
Your job is to read carefully and extract the key points.
Carefully reading all provided information: diagrams, emails, tickets, logs
In the exam (and in real life):
Don’t jump straight into the GUI and start clicking.
First, read all:
Diagrams: show which servers, storage, networks exist.
Emails/tickets: show what the “user” or “manager” thinks the problem is, or what they requested.
Logs/alerts: show what the systems are actually complaining about.
Beginner tip:
Take 1–3 minutes just to read and highlight (mentally or on scratch paper) key facts:
What is broken (if anything)?
What is the requested change?
Are there any constraints (e.g., “no downtime for DB1”)?
Identifying the actual task: implement, fix, optimize, or validate
Common types of tasks in a scenario:
Implement
Fix
Optimize
Validate
You must recognize which kind of task it is, because:
Implementation tasks follow a build/configure flow.
Troubleshooting tasks follow a diagnose/fix flow.
Optimization tasks follow a measure/tune/re-measure flow.
Validation tasks focus on testing and checking.
Distinguishing requirements vs “nice to have”
Not everything written in the scenario is equally important.
Requirements (must have):
SLA or compliance needs.
Explicit statements like “RTO must be less than 1 hour” or “No reboot of DB01 is allowed.”
Nice to have:
“It would be better if…”
“The team prefers…”
In exam mindset:
First, satisfy all must-have requirements.
Only then, if time allows and it doesn’t add risk, do some “nice to have” improvements.
This distinction avoids wasting time on low-value tasks while more critical requirements remain unmet.
After understanding the scenario, you decide what to do first.
Safety and data protection first: avoid actions that could risk data loss
Golden rule:
Never rush into actions that might delete, overwrite, or corrupt data.
Examples of dangerous actions:
Reformatting a volume “just to test something”.
Deleting LUN mappings without understanding which hosts use them.
Powering off a node that you assume is idle, without checking.
Safer approach:
Verify which LUN/volume you are about to modify.
Check that backups/snapshots exist before making risky changes.
If you’re unsure, pause and re-check the scenario text.
Quick wins: resolve major outages quickly before fine-tuning
If the scenario describes a major outage, prioritize as follows:
Get the critical service back online in a simple, safe way.
Then later, clean up and optimize.
Example:
If a VM lost network connection due to a wrong VLAN:
First, fix VLAN tagging so the website works again.
Then, later adjust naming conventions or documentation.
In the exam, this shows you understand business impact.
Respect business priorities: critical workloads first
Not all workloads are equal:
Critical: payment systems, ERP, production DBs.
Less critical: test/dev, reporting, batch jobs.
If multiple issues exist:
Fix Tier 0 / Tier 1 workloads first.
Fix lower-tier issues later.
This is exactly how real incident response is done, and the exam expects this mindset.
In HPE1-H05 style practicals, most hands-on work falls into a few categories.
These are “from scratch” or “add new capacity” type tasks.
Creating server profiles, assigning to hardware
In HPE ecosystems, a server profile often defines:
BIOS settings
Firmware baselines
NIC/HBA configurations
Boot order and SAN boot parameters
Typical steps:
Create a server profile template according to standards.
Assign profiles to physical hardware.
Boot and verify that servers come up with the correct configuration.
This ensures consistency across all nodes.
Deploying OS/hypervisor according to standards
You may need to:
Install a hypervisor (e.g., ESXi) or a standard Linux/Windows build.
Use automated deployment where possible (PXE, templates).
Apply correct:
IP configuration
Timezone and NTP
Local security baseline (e.g., password policies, SSH settings)
Goal:
Every new host is compliant with organizational standards.
Creating storage pools, LUNs, shares, and mapping them
Common tasks:
On the storage array:
Create disk pools (performance and capacity tiers).
Create LUNs/volumes for specific workloads (e.g., VM datastore, DB volume).
Create file shares (NFS/SMB) if file storage is required.
Map LUNs to host groups and verify that hosts can see them.
In exam scenarios, you might be told:
“Create a 2 TB datastore for the production cluster on SSD tier with thin provisioning.”
You must translate that into correct actions on the storage system.
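Before clicking anything, it helps to turn the written request into explicit, checkable parameters. A minimal sketch of that translation step, with invented field names (this is not a real array API):

```python
# Hypothetical sketch: convert the scenario's wording into the exact settings
# you will configure on the storage system. Field names are illustrative.
def provisioning_plan(size_tb, tier, thin, host_group):
    """Return the settings implied by the request, ready to verify one by one."""
    return {
        "size_gib": int(size_tb * 1024),   # 2 TB -> 2048 GiB
        "tier": tier,                      # e.g. "SSD"
        "provisioning": "thin" if thin else "thick",
        "map_to": host_group,              # hosts that must see the datastore
    }

plan = provisioning_plan(2, "SSD", thin=True, host_group="prod-cluster")
print(plan)
```

Writing the plan down first makes it easy to confirm afterwards that every stated requirement (size, tier, thin provisioning, host mapping) was actually met.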
Configuring SAN zoning and host connectivity
For FC environments:
Create aliases for host and storage WWPNs.
Create zones (single initiator, single target recommended).
Add zones to a zoneset and activate it.
Then:
On the hosts, rescan HBAs.
Verify that the new LUNs appear.
This proves you can build end-to-end connectivity between compute and storage.
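The single-initiator, single-target rule means one zone per (host WWPN, storage port WWPN) pair. A sketch of how those zones enumerate, with made-up WWPNs; real zoning is done on the FC switch itself, not in Python:

```python
# Sketch: enumerate single-initiator, single-target zones from WWPN lists.
# WWPNs and zone names are invented for illustration.
def build_zones(host_wwpns, storage_wwpns):
    """One zone per (host, storage-port) pair, as best practice recommends."""
    zones = {}
    for host, h_wwpn in host_wwpns.items():
        for port, s_wwpn in storage_wwpns.items():
            zones[f"z_{host}_{port}"] = [h_wwpn, s_wwpn]
    return zones

hosts = {"esx01": "10:00:00:00:c9:aa:bb:01"}
targets = {"ctrlA_p1": "50:00:00:00:c9:00:00:01",
           "ctrlB_p1": "50:00:00:00:c9:00:00:02"}
zones = build_zones(hosts, targets)
print(zones)
```

One host zoned to two controller ports yields two zones; each zone contains exactly two members, which is what makes later troubleshooting and auditing straightforward.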
These tasks ensure different components work together.
Connecting hosts to storage using FC or iSCSI
For FC:
Verify HBAs, WWPNs.
Check zoning and LUN masking.
For iSCSI:
Configure target portals on hosts.
Set CHAP credentials (if required).
Make sure correct VLAN and MTU are set.
After configuration, you:
Rescan storage.
Confirm that disks or LUNs are visible to the OS/hypervisor.
Setting up multipathing and verifying failover
Multipathing ensures that each host has redundant paths to its storage, so I/O continues if a link, switch, or controller port fails.
Tasks:
Enable and configure multipath software on hosts.
Confirm that each LUN has multiple paths.
Test by disabling one link or switch and seeing that I/O continues.
The exam may simulate a path failure; you must recognize and fix missing multipath.
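The redundancy check above boils down to: does every LUN still have at least two active paths? A small sketch with invented path data; on a real host this information comes from the multipath software (for example, `multipath -ll` output):

```python
# Sketch: flag LUNs that have fewer than `minimum` active paths.
# The path states below are invented example data.
def luns_missing_redundancy(paths_by_lun, minimum=2):
    return sorted(lun for lun, paths in paths_by_lun.items()
                  if sum(1 for p in paths if p == "active") < minimum)

paths = {
    "lun1": ["active", "active"],   # healthy: full redundancy
    "lun2": ["active", "failed"],   # still serving I/O, but redundancy lost
    "lun3": ["failed", "failed"],   # no path at all: outage
}
print(luns_missing_redundancy(paths))
```

Note that "lun2" still works from the user's point of view, which is exactly why a degraded path is easy to miss until the second path also fails.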
Integrating with directory services (AD/LDAP) for RBAC
Many management tools support authentication against Active Directory or LDAP, with directory groups mapped to roles (RBAC).
Tasks:
Configure directory connection (server address, bind user, search base).
Map AD groups to roles (e.g., “StorageAdmins” group → storage admin role).
Test by logging in as a user from AD and verifying permissions.
This shows you understand enterprise-grade security and access control.
Troubleshooting is a huge part of practical exams.
Connectivity issues: host cannot see LUNs, ping tests, path analysis
Typical scenario:
“Host ESX01 cannot see the newly created LUN; other hosts can.”
You should:
Check physical layer: cables seated? ports up?
Check zoning: is ESX01’s WWPN in the right zone?
Check LUN masking: is ESX01 part of the correct host group on the array?
Check OS/hypervisor: rescan adapters; review logs.
Ping tests and traceroute help for IP-based issues; FC tools help on SAN side.
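The check order above is itself worth internalizing: walk the layers in sequence and stop at the first one that fails. A sketch with invented results for ESX01 from the example scenario:

```python
# Sketch: data-driven walk through the connectivity layers, reporting the
# first failing one. The check results below are invented example data.
def first_failing_layer(results):
    order = ["physical", "zoning", "lun_masking", "host_rescan"]
    for layer in order:
        if not results.get(layer, False):
            return layer
    return None  # all layers pass

# ESX01's symptoms: links up, zoning correct, but the host is missing
# from the array's host group, so the rescan also finds nothing.
esx01 = {"physical": True, "zoning": True,
         "lun_masking": False, "host_rescan": False}
print(first_failing_layer(esx01))
```

The point of the ordering is efficiency: a rescan can never succeed while LUN masking is wrong, so fixing layers out of order wastes time.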
Performance issues: high latency, low throughput
Symptoms:
Users complain “system is slow”.
Monitoring shows high latency and/or low throughput.
Your steps:
Check whether CPU or memory on the hosts is maxed out.
Check disk IOPS and latency on storage.
Check for hot-spots: one volume or node overloaded.
Check network: link saturation, errors, incorrect MTU, or duplex mismatches.
Often performance issues are multi-layer; you must identify the main bottleneck.
Hardware failures: degraded disks, failed controllers, failed nodes
Examples:
Disk showing as “degraded” or “failed” in the array.
Storage controller down in a dual-controller system.
One node in a compute cluster is offline.
Tasks:
Identify which component failed (using logs and alerts).
Check if redundancy is working (e.g., RAID rebuild, cluster HA).
Replace or logically remove failed hardware following best practices.
You should never just ignore a degraded state; part of the exam is showing you react appropriately.
Misconfigurations: incorrect VLANs, wrong IP addressing, incorrect zoning
Many incidents are caused by simple misconfigurations:
NIC in wrong VLAN → no access to storage/management.
Wrong IP/subnet mask/gateway → no routing to required networks.
Zoning mismatch → host cannot see its LUN.
Your job:
Systematically check configuration at each layer.
Compare a working host with a broken host and spot differences.
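Two of the most common misconfigurations, a gateway outside the host's subnet and a NIC on the wrong VLAN, can be caught mechanically. A sketch using Python's standard ipaddress module, with illustrative addresses and VLAN IDs:

```python
# Sketch: validate a host's IP settings. Addresses and VLANs are invented.
import ipaddress

def ip_config_errors(ip, prefix, gateway, expected_vlan, actual_vlan):
    errors = []
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    if ipaddress.ip_address(gateway) not in net:
        errors.append("gateway not in host subnet")
    if actual_vlan != expected_vlan:
        errors.append(f"wrong VLAN: {actual_vlan} (expected {expected_vlan})")
    return errors

# A broken host: gateway belongs to another subnet, NIC tagged for VLAN 20.
print(ip_config_errors("10.1.10.21", 24, "10.1.20.1",
                       expected_vlan=10, actual_vlan=20))
```

A correctly configured host returns an empty list, which doubles as a quick validation step after you apply a fix.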
Migration tasks test how you move from old to new without causing outages.
Moving data from legacy storage to new arrays
Typical methods:
Storage-based replication between old and new arrays.
Host-based copy (e.g., rsync, robocopy, backup and restore).
Key decisions:
Can we migrate online (while users work), or is downtime required?
Do we have enough bandwidth/time to synchronize all data?
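The bandwidth/time question is a quick back-of-the-envelope calculation. A sketch with assumed values (data size, link speed, and a 70% efficiency factor to account for protocol overhead and competing traffic):

```python
# Sketch: estimate hours needed for a full initial copy over one link.
# All inputs below are assumptions for illustration.
def sync_hours(data_tb, link_gbps, efficiency=0.7):
    data_bits = data_tb * 1024**4 * 8            # TiB -> bits
    effective_bps = link_gbps * 1e9 * efficiency # usable line rate
    return data_bits / effective_bps / 3600

hours = sync_hours(data_tb=50, link_gbps=10)
print(round(hours, 1))  # estimated hours for the initial synchronization
```

If the result does not fit the available window, that drives the key decision: replicate online ahead of time and do only a short final sync at cutover, or negotiate a longer downtime window.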
Storage vMotion or similar live migration where available
In virtualized environments:
Steps:
Ensure both source and target datastores are presented to the host/cluster.
Use vMotion/Storage vMotion or equivalent to migrate.
Monitor performance and impact during migration.
Great for reducing downtime during storage upgrades.
Cutover plans and minimal-downtime strategies
The cutover is the moment you switch production from old to new.
Plan includes:
Final sync of data (if replication used).
Short downtime window for:
Repointing hosts or applications.
Updating DNS or configuration.
Rollback plan if the new system misbehaves.
In the exam, demonstrating a clean, controlled cutover approach is important.
How you think is as important as what you know.
Identify the layer: physical, network, storage, OS, application
When something is broken, ask:
“At which layer is the problem?”
Layers:
Physical (power, cables, ports, LEDs)
Network (VLANs, IP, routing, firewalls)
Storage (LUNs, pools, RAID, controllers)
OS/hypervisor (drivers, multipath, services)
Application (config, code, DB connections)
If you try to fix everything at once, you get lost.
A methodical approach: top-down or bottom-up, but always structured.
Isolate: test from different hosts, use different paths
Isolation means:
Try the same operation from another host.
Use another NIC, another path, another switch.
See if the issue follows a certain component.
Example:
If only one host can’t access a LUN, likely host config issue.
If all hosts on one switch have the issue, likely switch/VLAN problem.
Compare working vs non-working examples
One of the most powerful techniques:
Take a working system (host, VM, LUN).
Compare its configuration to the broken system.
Look at:
IP addresses, VLAN IDs, gateways
Zoning memberships, LUN mapping
Driver versions, firmware versions
Spot the differences → often reveals the root cause quickly.
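The compare technique is mechanical enough to sketch: collect the same settings from both systems and keep only the keys whose values differ. The keys and values below are invented; in practice they come from the tools' exports or CLI output:

```python
# Sketch: diff a working host's settings against a broken one.
# Returns {setting: (working_value, broken_value)} for every mismatch.
def config_diff(working, broken):
    return {k: (working.get(k), broken.get(k))
            for k in sorted(set(working) | set(broken))
            if working.get(k) != broken.get(k)}

working = {"vlan": 10, "mtu": 9000, "zone": "z_esx01_ctrlA", "driver": "2.1.0"}
broken  = {"vlan": 10, "mtu": 1500, "zone": "z_esx01_ctrlA", "driver": "2.0.4"}
print(config_diff(working, broken))
```

Here the diff immediately narrows the investigation to the MTU mismatch and the older driver, instead of re-checking every setting on the broken host.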
System logs from arrays, switches, servers
Every system logs events:
Storage arrays: disk failures, controller reboots, path issues.
Switches: link downs, STP events, errors, congestion.
Servers: driver failures, filesystem errors, kernel events.
You should:
Know where to find logs.
Filter for relevant time periods.
Interpret major error codes and warnings.
Performance dashboards: IOPS, latency, CPU, memory, bandwidth
Monitoring tools give charts and dashboards:
IOPS and latency per volume.
CPU and memory per host.
Bandwidth per interface or port.
In troubleshooting:
Check if metrics align with user complaints.
Find unusual spikes or drops.
Identify hot spots or saturation.
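Spotting a spike means comparing samples against a baseline. A minimal sketch, where the baseline, multiplier, and latency samples are all arbitrary illustrations:

```python
# Sketch: flag latency samples that stand out against the baseline.
def spikes(samples_ms, baseline_ms=5, factor=4):
    """Return indices of samples more than `factor` x the baseline."""
    return [i for i, v in enumerate(samples_ms) if v > baseline_ms * factor]

latency = [4, 5, 6, 48, 5, 52, 4]   # ms, one sample per interval
print(spikes(latency))               # which intervals spiked
```

The next step is always correlation: do the spike timestamps line up with user complaints, a backup window, or a path failover event in the logs?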
Event codes and alerts to identify failing components
Systems often raise alerts with codes:
Critical, warning, informational.
Error IDs you can look up in documentation.
In exam scenarios:
You might see “Controller X offline” or “Path degraded”.
You must connect the alert to real actions: failover, replace, reconfigure.
After changes, you must prove everything works and record what you did.
Confirming that configuration matches design (e.g., HA level, performance)
Check:
Are the right number of hosts in the cluster?
Is HA enabled with correct admission control?
Are storage volumes on the intended tiers?
Compare your final state against:
The written design.
The scenario requirements.
Re-running relevant tests after changes
If you changed anything:
Re-run connectivity tests (ping, storage discovery).
Re-run performance tests for affected workloads.
Re-test failover if you modified HA/DR settings.
This ensures you didn’t fix one problem while creating another.
Validating non-functional requirements: HA, DR readiness, security settings
Non-functional requirements include:
High availability:
Is failover configured and tested?
Can the cluster tolerate a node failure?
DR readiness:
Is replication working?
Are backups scheduled correctly?
Security:
RBAC roles configured?
Encryption enabled where required?
You confirm these are implemented, not just assumed.
Note exactly what you changed: commands, GUIs, parameters
Document:
Which commands you ran.
Which GUI options you changed.
Before/after values for critical settings.
This is essential if:
Another admin needs to understand what happened.
You must reverse or audit changes later.
Update diagrams or maps if required
If the network or storage layout changed:
Update logical diagrams (VLANs, LUN mapping).
Update asset lists (new hosts, new arrays).
Outdated diagrams are dangerous: they mislead future troubleshooting efforts.
Summary for stakeholders: what was done, why, and what’s the effect
Write a short summary:
Problem or request.
Root cause (for incidents).
Actions taken.
Outcome (service restored, performance improved, capacity added).
Any remaining risks or next steps.
This is what managers and business owners care about.
In hands-on exams, time is your most limited resource.
Quickly assess the complexity of each task
At the start:
Scan all tasks.
Estimate which ones are:
Quick (5–10 minutes).
Medium.
Long or complex.
You might:
Do all quick tasks first to secure easy points.
Then tackle the more complex ones.
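The quick-wins-first strategy is just a sort by estimated effort. A sketch with made-up task names and minute estimates; in the exam, the estimates come from your own initial scan:

```python
# Sketch: order tasks shortest-first so easy points are secured early.
# Task names and estimates are illustrative.
def plan_order(tasks):
    """Sort task names by estimated minutes, shortest first."""
    return [name for name, minutes in sorted(tasks.items(),
                                             key=lambda t: t[1])]

tasks = {"create NFS share": 8, "fix zoning on ESX01": 15,
         "migrate datastore": 45, "rescan HBAs": 5}
print(plan_order(tasks))
```

A refinement worth considering: when two tasks take similar time, do the one that other tasks depend on first (for example, zoning before host rescans).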
Don’t over-engineer: do what the scenario requires, not more
Example:
Scenario: “Create an NFS share and present it to two hosts.”
You do not need to:
Redesign the entire network.
Change unrelated settings.
Only meet the specified requirements, safely and correctly.
Extra “nice” work wastes time and might introduce new problems.
Avoid rabbit holes; if stuck, verify assumptions and move to another angle
If you are stuck for more than a few minutes:
Re-read the scenario text: did you misread something?
Check assumptions: are the names, IPs, and objects you are working on really the ones the scenario means?
If still stuck, move to a different part of the exam and return later with a fresh mind.
This prevents one tricky issue from consuming all your time.
Double-check destructive actions (format, delete, overwrite) before execution
Before you:
Format a disk or volume.
Delete a LUN or snapshot.
Overwrite a configuration.
Ask yourself:
“Am I 100% sure this is the right object?”
“Did I confirm this is not in use by some critical workload?”
One wrong deletion can fail the scenario and, in real life, cause major incidents.
Prefer reversible changes if unsure
If you’re not fully sure:
Change things that are easy to undo:
Add a new path rather than removing the old one.
Create a new volume instead of heavily modifying an existing critical one.
Use snapshots or backups as a safety net before big changes.
This way, if you’re wrong, you can revert with minimal damage.
Save and apply configurations only after validation when possible
Some systems let you:
Stage configuration changes.
Review and validate them.
Then apply all at once.
Whenever possible:
Review settings carefully.
Apply changes.
Immediately test and verify.
This makes your work look professional and reliable, both in the exam and in real projects.
Hands-on practical exams require strict adherence to the boundaries of the lab environment. Any modification outside the explicit task instructions may result in loss of points or task failure.
Modify only the components, settings, and objects named in the scenario.
Avoid making structural or cosmetic changes such as renaming objects, altering folder hierarchies, or cleaning up unused items unless explicitly allowed.
Never restart shared infrastructure components including switches, controllers, arrays, or clusters unless the scenario requires it; doing so may affect other tasks and invalidate parts of the exam.
Do not disable or weaken essential security functions like RBAC, firewall rules, secure protocols, or auditing unless the instructions specifically authorize it.
Always assume that the environment is shared, fragile, and partially pre-configured for multiple tasks.
Candidates are expected to understand how to utilize HPE ecosystem tools for configuration, monitoring, and troubleshooting within the scenario.
HPE OneView dashboards can be used to identify alert states, profile inconsistencies, firmware mismatches, and hardware warnings.
Storage-array management interfaces (web-based or CLI) are essential for checking controller health, disk status, LUN mapping, zoning dependencies, replication, and volume performance.
Monitoring and analytic tools should be used to interpret performance graphs, identify latency spikes, or observe path and controller transitions during tests.
InfoSight-style insights may reveal bottlenecks, misconfigurations, and predictive alerts that guide troubleshooting workflow.
Efficient navigation of these tools is part of the exam’s evaluation of operational readiness.
The exam is designed to award points not only for full completion but also for partially completed tasks. A strategic approach can significantly increase the total score.
Complete the fundamental required actions first; these typically unlock the majority of the available points.
Save or apply configurations after each successfully completed stage to ensure work is recorded even if time expires or an unexpected issue occurs.
If a multi-step configuration becomes complex or time-consuming, move to faster tasks to secure additional points, then return if time allows.
Avoid over-engineering or unnecessary optimization because it adds time but seldom adds scoring value.
Use a clear mental or written order of operations so that each completed step contributes to cumulative partial credit.
Practical exams frequently include intentionally incorrect configurations to simulate real-world troubleshooting. Identifying these early improves accuracy and prevents compounding errors.
Incorrect VLAN assignments, mismatched subnets, or invalid IP gateways.
Incorrect SAN zoning entries or missing paths in multipathing configurations.
Volumes orphaned from host groups, stale LUN mappings, or mismatching WWPNs.
Degraded disks, controller warning states, incomplete replication pairs, or unhealthy clusters.
These misconfigurations are often hinted at by system warnings, degraded icons, or alerts, which act as clues within the exam scenario.
Always begin by validating the existing environment before making new changes; fixing a misconfiguration may be the actual requirement.
Working efficiently within an exam environment requires careful tracking of details. A personal scratchpad or notes section is an essential tool.
Maintain a concise checklist of required steps for each task to avoid missing mandatory actions.
Write down key environment details such as IP addresses, VLAN identifiers, subnet masks, LUN names, storage pool IDs, and host profile names.
Record dependencies or sequence-sensitive actions to avoid errors such as configuring storage before zoning or creating volumes before mapping host groups.
Track progress by marking each task or subtask as completed, pending, or requiring validation.
Use notes to quickly reference parameters during repeat operations or when switching between complex tasks.
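The tracking idea above can be as simple as a table of tasks and statuses you update as you go. A minimal sketch with illustrative task names:

```python
# Sketch of a scratchpad tracker: tasks mapped to a status you keep current.
# Task names and statuses are illustrative.
tracker = {
    "create 2 TB datastore": "completed",
    "map LUN to prod-cluster": "pending",
    "verify multipath on ESX01": "requires validation",
}

def remaining(tracker):
    """Everything not yet fully done, in the order it was written down."""
    return [task for task, status in tracker.items() if status != "completed"]

print(remaining(tracker))
```

Whether on paper or in a notes pane, the value is the same: at any moment you can see what is done, what is pending, and what still needs a validation pass before you move on.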
A server cannot detect a newly created LUN on the storage array. What is the first thing to verify?
Verify that the LUN is correctly mapped to the host.
If a LUN is created but not mapped to the correct host or host group, the server will not be able to detect it. Storage administrators must confirm that the host's WWN or iSCSI initiator is properly registered and associated with the correct LUN. This is typically the most common reason newly provisioned storage does not appear on a server. After verifying mapping, administrators should rescan storage devices on the host system to detect the new LUN.
What should be checked if a server intermittently loses access to storage in a SAN environment?
Check network connectivity, multipath configuration, and switch health.
Intermittent storage connectivity often results from unstable network paths or misconfigured multipathing. Administrators should examine switch logs, verify link status, and confirm that redundant paths are functioning correctly. If multipath software is improperly configured, the host may fail to switch to an alternate path during disruptions. Monitoring SAN switch ports and verifying firmware compatibility can also help identify the root cause.
What step should be performed on a host after provisioning new storage volumes?
Rescan the host storage adapters to detect new devices.
After a storage administrator creates and maps a new LUN, the host operating system must detect the new device. This is done by rescanning storage adapters such as Fibre Channel HBAs or iSCSI initiators. Without this step, the operating system may not recognize the newly available storage resource. Once detected, administrators can partition and format the device for application use.
What is a common cause of SAN performance issues?
Network congestion or improperly configured paths.
SAN performance depends on reliable, high-bandwidth connectivity between servers and storage arrays. If multiple workloads share the same network links without proper traffic management, congestion can increase latency and reduce throughput. Incorrect zoning, misconfigured multipathing, or overloaded switches can also cause performance degradation. Monitoring tools and switch diagnostics help identify bottlenecks so administrators can rebalance traffic or upgrade infrastructure.
Why might a host fail to access a storage volume even though the LUN is mapped?
The host may lack proper permissions, zoning configuration, or multipath recognition.
Even when LUN mapping is correct, other configuration layers may prevent access. Fibre Channel environments require correct zoning on SAN switches to allow communication between host and storage array. Additionally, the host operating system must recognize the storage device through its HBA or iSCSI initiator. Incorrect drivers or multipath settings may also prevent the host from seeing the volume. Troubleshooting requires verifying each layer of connectivity from host to storage.
What is a typical first step when troubleshooting storage connectivity issues?
Verify physical connections and link status.
Before investigating complex configuration issues, administrators should confirm that cables, switches, and adapters are functioning correctly. Loose cables, disabled switch ports, or failed network interfaces can immediately disrupt storage connectivity. Checking link lights, switch port status, and hardware logs helps quickly identify hardware failures. Starting troubleshooting with physical verification prevents unnecessary time spent diagnosing higher-level configuration issues.