AZ-104 Implement and Manage Storage

Detailed list of AZ-104 knowledge points

Implement and Manage Storage Detailed Explanation

1. Secure Storage

1.1 Encryption

Encryption ensures that data is unreadable to unauthorized users. In Azure Storage, encryption happens by default.

1.1.1 Server-side encryption (SSE)

Server-side encryption means Azure encrypts your data as it’s written to storage, and decrypts it when read.

There are two main key management options:

a. Microsoft-managed keys (MMK)
  • Default option (no setup required).

  • Microsoft controls and rotates the encryption keys.

  • Fully automatic and compliant with most standards (e.g., ISO, SOC, GDPR).

b. Customer-managed keys (CMK)
  • You manage your own keys using Azure Key Vault.

  • Gives you more control:

    • You can rotate keys manually.

    • You can revoke access to data by disabling the key.

  • Useful in regulated industries (finance, healthcare, etc.).
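As a sketch of how CMK is wired up, the Azure CLI can point a storage account at a Key Vault key. All resource names below are placeholders, and the account's managed identity must already have access to the vault:

```shell
# Hypothetical names throughout; substitute your own resource group,
# account, vault, and key. The storage account's identity needs
# get/wrapKey/unwrapKey permissions on the vault beforehand.
az storage account update \
  --resource-group myRG \
  --name mystorageacct \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://myvault.vault.azure.net \
  --encryption-key-name mykey
```

Omitting --encryption-key-version lets the account pick up new key versions automatically as you rotate them in Key Vault.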

1.1.2 Client-side encryption
  • Here, you encrypt data on the client before uploading it to Azure Storage.

  • You manage your own encryption libraries and keys.

  • Used when you need end-to-end control or extra protection for sensitive data.

  • Example: Use a custom app to encrypt data using AES before uploading.
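A minimal sketch of that client-side pattern using OpenSSL on the command line. File names and the passphrase are placeholders; a real application would use proper key management rather than a literal passphrase:

```shell
# Create a sample file to protect (placeholder content).
printf 'sensitive payroll data' > report.csv

# Encrypt locally with AES-256-CBC and a PBKDF2-derived key;
# report.csv.enc is the only thing that leaves the client.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in report.csv -out report.csv.enc -pass pass:MyLocalSecret

# Upload only the ciphertext (placeholder account/container/SAS):
# azcopy copy "report.csv.enc" "https://<account>.blob.core.windows.net/<container>?<SAS>"

# After download, decrypt with the same passphrase to verify the round trip.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in report.csv.enc -out report.decrypted.csv -pass pass:MyLocalSecret
```

Azure never sees the plaintext or the key in this model, which is the defining property of client-side encryption.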

1.2 Access Control

Controlling who can access what data is essential for security. Azure offers several mechanisms:

1.2.1 Shared Access Signatures (SAS)

SAS tokens let you grant limited-time and permission-scoped access to storage data.

There are three types:

  • User delegation SAS: secured with Azure AD credentials; the most secure option; applies to Blob storage

  • Service SAS: grants access to a specific resource in a single service (such as a blob, file, or table)

  • Account SAS: grants access at the account level; broader scope than a service SAS

SAS tokens are URLs with query parameters — shareable via code or email, but should be treated like passwords.
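For example, a user delegation SAS for a single blob can be issued with the Azure CLI (account, container, and blob names are placeholders; the signed-in caller needs a data role on the account):

```shell
# Issue a read-only user delegation SAS valid until the given expiry.
# --as-user with --auth-mode login signs the SAS with Azure AD
# credentials instead of the account key.
az storage blob generate-sas \
  --account-name mystorageacct \
  --container-name mycontainer \
  --name report.pdf \
  --permissions r \
  --expiry 2030-01-01T00:00:00Z \
  --auth-mode login --as-user \
  --full-uri
```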

1.2.2 Azure AD integration for storage authentication
  • Azure Storage supports Azure AD-based access for Blob and Queue storage.

  • Benefits:

    • Centralized identity management.

    • Role-based access control (RBAC) applies.

  • Users must authenticate with Azure AD and be assigned roles like:

    • Storage Blob Data Reader

    • Storage Blob Data Contributor

1.2.3 Storage access policies and permissions
  • Access control can be done using:

    • RBAC (via Azure AD) — recommended for most secure scenarios.

    • Access keys — provide full admin-level access to a storage account (not recommended for sharing).

    • SAS tokens — provide scoped and temporary access.

Best Practice: Avoid sharing access keys; prefer SAS or Azure AD RBAC.

1.2.4 Firewall and virtual network rules

You can restrict storage account access to:

  • Specific IP address ranges

  • Virtual networks (VNets) and subnets

Steps:

  1. Go to the storage account in the portal.

  2. Under Networking, choose Selected networks.

  3. Add VNets or IP ranges allowed to access the account.

Use private endpoints for secure access over private IPs within your VNet.
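The portal steps above can also be scripted; a sketch with the Azure CLI (resource names and the IP range are placeholders):

```shell
# Deny all traffic by default, then allow only a specific range and subnet.
az storage account update \
  --resource-group myRG --name mystorageacct \
  --default-action Deny

az storage account network-rule add \
  --resource-group myRG --account-name mystorageacct \
  --ip-address 203.0.113.0/24

az storage account network-rule add \
  --resource-group myRG --account-name mystorageacct \
  --vnet-name myVNet --subnet mySubnet
```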

2. Manage Storage Accounts

2.1 Types of Storage Accounts

Azure Storage accounts are containers for your data services, such as blobs, files, queues, and tables. Choosing the correct type affects pricing, performance, and available features.

2.1.1 General-purpose v2 (GPv2) – Recommended
  • Supports all storage services: Blob (including page blobs used as disks), File, Queue, and Table

  • Access tiers: Hot, Cool, Archive

  • Most feature-rich and cost-efficient for modern use cases

  • Supports all redundancy and performance options

This is the default and recommended storage account type.

2.1.2 General-purpose v1 (Legacy)
  • Older type, limited features

  • No support for access tiers

  • Not recommended for new deployments

2.1.3 Blob Storage Account (Legacy)
  • Optimized for storing only blobs

  • Supports Hot/Cool access tiers

  • Rarely used now—GPv2 accounts include all Blob features and more

2.1.4 Premium vs Standard Performance Tiers
  • Standard: general workloads; HDD or standard SSD media; cost-effective

  • Premium: high-performance needs; premium SSD media; low latency and higher IOPS

Examples:

  • Use Premium for workloads like databases or virtual machine disks.

  • Use Standard for logs, backups, documents.

2.1.5 Access Tiers (for Blob Storage)

Control how frequently your data is accessed:

  • Hot: frequently accessed data; highest storage cost, lowest access cost

  • Cool: infrequently accessed data (stored at least 30 days); lower storage cost, higher access cost

  • Archive: rarely accessed data (stored at least 180 days); cheapest storage, highest access cost, and a rehydration delay measured in hours

You can change tiers per blob or automatically with lifecycle rules.
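Changing a single blob's tier from the CLI looks like this (all names are placeholders):

```shell
# Move one blob to the Cool tier; sign in with an Azure AD identity
# that holds a Storage Blob Data role on the account.
az storage blob set-tier \
  --account-name mystorageacct \
  --container-name backups \
  --name archive-2024.tar.gz \
  --tier Cool \
  --auth-mode login
```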

2.2 Replication

Replication determines how your data is duplicated for durability and high availability.

Replication Options
  • LRS (Locally Redundant Storage): 3 copies within one datacenter; least expensive

  • ZRS (Zone-Redundant Storage): copies across 3 availability zones in a region; higher availability

  • GRS (Geo-Redundant Storage): replicates to a secondary region (secondary not readable); disaster recovery

  • RA-GRS (Read-Access GRS): like GRS, but with read access to the secondary; read during outages

  • GZRS (Geo-Zone-Redundant Storage): copies across zones plus replication to another region; maximum durability

  • RA-GZRS: GZRS with read access to the secondary; highest availability option

Choose based on criticality and budget.
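Redundancy is selected through the account SKU at creation time (and can often be changed later); a sketch with placeholder names:

```shell
# Create a GPv2 account with geo-zone-redundant storage.
# Other SKU values: Standard_LRS, Standard_ZRS, Standard_GRS,
# Standard_RAGRS, Standard_GZRS, Standard_RAGZRS.
az storage account create \
  --resource-group myRG \
  --name mystorageacct \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_GZRS
```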

2.3 Configure Storage Settings

2.3.1 Network Settings
  • Public access: Open to all networks (not recommended)

  • Selected networks: Allow only specific IP ranges or VNets

  • Private endpoint: Assigns a private IP address inside your VNet, ensures secure communication

2.3.2 Custom Domain and TLS
  • Custom Domain: Replace the default <account>.blob.core.windows.net endpoint with your own domain (e.g., storage.mycompany.com)

  • TLS Settings: Enforce secure transfer (https only) to enhance security

2.3.3 Lifecycle Management Rules

Used to automate blob tiering and deletion based on rules:

  • Move blobs to Cool after 30 days of no access

  • Move to Archive after 90 days

  • Delete blobs after 365 days

Steps:

  1. Go to Storage Account > Data Management > Lifecycle Management

  2. Add a rule

  3. Set filters (e.g., prefix or blob type)

  4. Define actions and conditions
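The steps above correspond to a JSON policy document attached to the account. A sketch implementing the 30/90/365-day rules described earlier (resource names and the prefix filter are placeholders):

```shell
# Write the lifecycle policy, then attach it to the account.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "age-out-logs",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": ["blockBlob"], "prefixMatch": ["logs/"] },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete":        { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
EOF

az storage account management-policy create \
  --resource-group myRG --account-name mystorageacct \
  --policy @policy.json
```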

3. Manage Data in Azure Storage

This section is about how you actually store, organize, access, and transfer data in Azure using different storage services.

3.1 Blob Storage

Blob = Binary Large Object
It is optimized for massive unstructured data, such as images, videos, backups, and logs.

3.1.1 Blob types
  • Block Blob: stores data in blocks; the most common type, used for images, documents, and backups

  • Append Blob: optimized for append operations; ideal for logging (you can only add to the end)

  • Page Blob: optimized for random read/write; used for virtual hard disks (VHDs), such as Azure VM disks

Block blob is the default and most widely used blob type.

3.1.2 Upload, download, delete blobs

You can interact with blobs using:

  • Azure Portal (manual upload/download)

  • AzCopy (command-line tool)

  • Azure CLI or PowerShell

  • SDKs and REST APIs

Supports operations like:

  • Uploading large files in blocks (resumable)

  • Deleting blobs individually or in batch

  • Versioning and snapshot support

3.1.3 Configure containers, metadata, soft delete, versioning
  • Containers are like folders inside a blob storage account. Each blob must belong to a container.

  • Metadata: Key-value pairs stored with each blob (e.g., owner = “finance”)

  • Soft Delete: Allows recovery of deleted blobs (like a recycle bin)

  • Versioning: Automatically stores past versions of a blob for rollback or recovery

Enable these features from:
Storage Account > Data Protection
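The same Data Protection settings can be enabled from the CLI; a sketch with placeholder names:

```shell
# Turn on blob soft delete (7-day retention) and versioning together.
az storage account blob-service-properties update \
  --resource-group myRG --account-name mystorageacct \
  --enable-delete-retention true \
  --delete-retention-days 7 \
  --enable-versioning true
```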

3.2 File Shares

Azure Files provides shared folders in the cloud that you can mount via SMB or NFS.

3.2.1 SMB and NFS protocol support
  • SMB (Server Message Block): Used by Windows clients; can map drives like \\storageaccount.file.core.windows.net\share

  • NFS (Network File System): For Linux-based systems

Azure File Shares support standard file operations and access control (NTFS ACLs on SMB shares)
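On a Linux client, mounting a share over SMB looks roughly like this (account, share, and key are placeholders; outbound port 445 must be open):

```shell
# Mount an Azure file share over SMB 3.0 using the storage account key.
sudo mkdir -p /mnt/myshare
sudo mount -t cifs \
  //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.0,username=mystorageacct,password='<account-key>',serverino
```

Windows clients achieve the same with net use or by mapping a network drive to the UNC path shown above.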

3.2.2 On-prem sync via Azure File Sync

Use Azure File Sync to:

  • Synchronize your on-prem Windows Server file shares with Azure Files

  • Enable cloud tiering: keep frequently accessed files local, others in the cloud

  • Create a hybrid file server model

Setup involves:

  1. Create a file share in Azure

  2. Install Azure File Sync agent on your Windows Server

  3. Register the server in Azure

  4. Create a sync group and add endpoints

3.2.3 Share snapshots and backups
  • Snapshots: Point-in-time copies of a file share. Useful for recovery or auditing.

  • Can restore individual files or entire share to a previous state.

  • Snapshots don’t need extra tools — managed within the Azure portal or CLI.
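Taking a share snapshot from the CLI is a single command (names are placeholders; authenticate with --account-key or the AZURE_STORAGE_CONNECTION_STRING environment variable):

```shell
# Create a point-in-time snapshot of the share.
az storage share snapshot \
  --name myshare \
  --account-name mystorageacct
```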

3.3 Data Transfer Methods

Azure offers several options for moving data in and out of Azure Storage.

3.3.1 AzCopy
  • Command-line tool

  • Fast, efficient, resumable uploads/downloads

  • Example command:

    azcopy copy "file.txt" "https://<account>.blob.core.windows.net/<container>?<SAS>"

    Add --recursive when copying an entire directory rather than a single file.
3.3.2 Azure Storage Explorer
  • GUI tool for Windows/macOS/Linux

  • Easy to manage blobs, tables, queues, and file shares

  • Great for non-developers or ad-hoc data tasks

3.3.3 Azure Data Factory
  • ETL tool used for data integration and migration

  • Supports moving data from:

    • On-prem SQL, Oracle

    • AWS S3, FTP

    • Other cloud storage

3.3.4 Import/Export Service
  • For large-scale data transfer (TBs to PBs)

  • Microsoft provides encrypted disks for physical shipping to/from Azure datacenters

  • Useful when network bandwidth is limited or expensive

3.3.5 ADLS Gen2 hierarchical namespace
  • Azure Data Lake Storage Gen2 supports a file-system-like structure with folders and access control

  • Hierarchical namespace enables:

    • Atomic directory operations (a rename or delete acts on the directory itself, not on every blob under it)

    • POSIX-style ACL security on files and directories

    • Efficient big data analytics

Ideal for use with Azure Synapse, Databricks, or Hadoop

4. Configure Azure Files and Azure Blob Storage

This section focuses on advanced configurations, optimization features, and troubleshooting techniques for Azure’s two core data storage services.

4.1 Azure File Sync Setup and Cloud Tiering

4.1.1 What is Azure File Sync?

Azure File Sync allows you to synchronize on-premises Windows file servers with Azure Files, creating a hybrid cloud file system.

Benefits:

  • Centralized file management in Azure

  • Reduced on-prem storage usage with cloud tiering

  • Disaster recovery and data protection

4.1.2 How to Set Up Azure File Sync

Steps:

  1. Create an Azure File Share

  2. Deploy Azure File Sync Agent on the Windows server

  3. Register the server in Azure Storage Sync Service

  4. Create a sync group in Azure

  5. Add the Azure file share and on-prem path as endpoints

File changes are synchronized both ways (cloud ↔ local)

4.1.3 Cloud Tiering
  • Cloud tiering saves space on the local server.

  • Frequently accessed files remain on-premises.

  • Other files become stubs, pulled from Azure on demand.

Configuration settings:

  • Set a volume free space threshold (e.g., keep 20% of disk free)

  • File tiering is automatic and policy-driven

4.2 Implement Soft Delete for File Shares

Soft delete protects against accidental deletions.

4.2.1 How it works
  • When a file share is deleted, it is retained in the "soft deleted" state for a configurable number of days (up to 365).

  • You can undelete it during that retention window.

4.2.2 Enabling Soft Delete
  1. Go to Storage Account > Data Protection

  2. Enable soft delete for file shares

  3. Set the retention period

Can also be enabled for blobs and containers.
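The portal steps map to one CLI call; a sketch with placeholder names:

```shell
# Enable soft delete for file shares with a 14-day retention window.
az storage account file-service-properties update \
  --resource-group myRG --account-name mystorageacct \
  --enable-delete-retention true \
  --delete-retention-days 14
```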

4.3 Monitor and Troubleshoot Access Issues

4.3.1 Diagnostic Logging

Enable diagnostic logs to track:

  • Access patterns

  • Errors

  • Latency and throughput

Steps:

  1. Go to Storage Account > Diagnostic settings

  2. Choose logs to send to:

    • Log Analytics

    • Event Hub

    • Storage account (for archiving)

4.3.2 Common Troubleshooting Scenarios
  • Access denied (403): invalid SAS token or missing RBAC permissions; verify the token or role assignment

  • Network timeout: firewall or NSG blocking traffic; check network settings

  • Slow access: too many concurrent requests; scale up or optimize request patterns

  • Sync delays: agent offline or misconfigured; restart the sync agent and check its status

Use Azure Monitor, Metrics, and Activity Log to investigate issues in real-time.

5. Implement Storage Security and Monitoring

This section focuses on tracking usage, detecting threats, and monitoring the performance and health of your Azure Storage resources.

5.1 Enable and Configure Diagnostic Logging

5.1.1 What is diagnostic logging?

Diagnostic logging provides detailed logs about:

  • Who accessed what

  • What operations were performed (read, write, delete)

  • Latency and request details

Useful for auditing, troubleshooting, and security forensics.

5.1.2 How to enable diagnostic logging
  1. Go to your Storage Account > Monitoring > Diagnostic settings

  2. Click + Add diagnostic setting

  3. Choose what to log:

    • Blob, File, Queue, Table logs

    • Read, Write, Delete operations

    • Metrics (requests, errors, latency)

  4. Choose where to send logs:

    • Log Analytics (for querying)

    • Event Hub (for streaming to SIEM tools)

    • Storage account (for long-term storage)

You can enable one or more destinations at once.
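The same setting can be created from the CLI; a sketch sending blob logs and transaction metrics to a Log Analytics workspace (all resource IDs are placeholders):

```shell
# Diagnostic settings target the blob sub-resource of the account.
az monitor diagnostic-settings create \
  --name blob-logs \
  --resource "/subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorageacct/blobServices/default" \
  --workspace "/subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myLAW" \
  --logs '[{"category":"StorageRead","enabled":true},{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]' \
  --metrics '[{"category":"Transaction","enabled":true}]'
```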

5.2 Monitor Performance and Access Logs via Azure Monitor or Log Analytics

5.2.1 Azure Monitor

Azure Monitor provides:

  • Built-in metrics for storage accounts (e.g., total requests, egress, latency)

  • Custom alerts

  • Charts and dashboards

Access from:
Storage Account > Monitoring > Metrics

Common metrics:

  • TotalRequests

  • Availability

  • Egress / Ingress (data transfer)

  • SuccessServerLatency, SuccessE2ELatency

5.2.2 Log Analytics with Kusto Query Language (KQL)

Use Log Analytics to write powerful queries for diagnostics logs.

Example:

StorageBlobLogs
| where StatusCode == 403
| summarize count() by CallerIpAddress

Helps identify:

  • Unauthorized access attempts

  • IPs causing errors

  • Access trends over time

5.3 Set Up Alerts for Key Storage Events

Azure allows you to define alerts for any abnormal behavior.

5.3.1 How to create alerts
  1. Go to Storage Account > Monitoring > Alerts

  2. Click + New alert rule

  3. Define:

    • Condition: e.g., TotalRequests > 10,000, or Availability < 99%

    • Scope: specific storage account

    • Action Group: who to notify and how (email, SMS, webhook, etc.)

Use alert severity levels (0–4) to classify urgency.
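An alert like the one described can be scripted as well; a sketch (resource IDs, threshold, and action group are placeholders; the storage metric that counts requests is named Transactions):

```shell
# Fire a severity-2 alert when requests exceed 10,000 in 5 minutes.
az monitor metrics alert create \
  --name high-request-volume \
  --resource-group myRG \
  --scopes "/subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorageacct" \
  --condition "total Transactions > 10000" \
  --window-size 5m --evaluation-frequency 5m \
  --severity 2 \
  --action "/subscriptions/<sub>/resourceGroups/myRG/providers/microsoft.insights/actionGroups/opsTeam"
```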

5.3.2 Best Practices
  • Set up alerts on performance thresholds to detect slowdowns

  • Alert on error spikes (e.g., 403 or 500 errors)

  • Create security alerts for suspicious IPs or denied requests

  • Combine alerts with Log Analytics Workbooks for visualization

Implement and Manage Storage (Additional Content)

1. Blob Object Replication

What It Does:
Blob object replication allows asynchronous replication of blobs between containers — even across storage accounts and Azure regions.

Use Cases:

  • Cross-region data redundancy

  • Regulatory compliance

  • Global content distribution

Key Points:

  • Blob versioning must be enabled on both the source and destination accounts (change feed must also be enabled on the source).

  • Replication is automatic and does not require application-level intervention.

  • It’s configured per rule, specifying source and destination conditions.

Where to Enable:
Storage Account > Data Management > Object Replication
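A replication rule can also be created from the CLI; a sketch with placeholder accounts and containers (versioning must already be enabled on both sides):

```shell
# Replicate blobs from src-container to dst-container across accounts.
az storage account or-policy create \
  --resource-group myRG \
  --account-name destacct \
  --source-account srcacct \
  --destination-account destacct \
  --source-container src-container \
  --destination-container dst-container
```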

Exam Tip:
Do not confuse this with storage redundancy like GRS or RA-GRS. Object replication is a per-container policy that you configure, not account-level redundancy managed by the platform.

2. Azure Files and Azure AD DS Integration

Scenario:
To control access to Azure File Shares using NTFS ACLs, integration with Azure Active Directory Domain Services (Azure AD DS) is required.

Benefits:

  • Enables domain-based access control (SMB + ACL).

  • Users can access file shares using Windows credentials.

  • Supports traditional enterprise scenarios with group-based access control.

How It Works:

  1. Deploy Azure AD Domain Services.

  2. Enable AD DS authentication on the storage account.

  3. Assign access via NTFS permissions and group membership.

Exam Tip:
Know that regular Azure AD (without AD DS) does not support traditional Windows authentication for file shares.

3. Storage RBAC vs Azure RBAC Roles

Important Distinction:
Azure has storage-specific roles that are different from general-purpose roles like Contributor or Owner.

  • Contributor: manage resource settings (management plane only)

  • Storage Blob Data Reader: read blob contents (data plane)

  • Storage Queue Data Contributor: write to storage queues (data plane)

Key Point:

  • Being a Contributor does not automatically grant data-level access to blob content.

  • You must assign storage-specific RBAC roles for data plane access (reading/writing blobs, queues, tables).
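Granting data-plane access is an ordinary role assignment scoped to the account (user and resource IDs are placeholders):

```shell
# Contributor alone cannot read blob data; add a data role explicitly.
az role assignment create \
  --assignee user@contoso.com \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorageacct"
```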

Exam Tip:
When troubleshooting access denial errors in blob storage, check for missing data roles, not just management roles.

4. Configurable Soft Delete Retention

Scope:
Soft delete is supported for:

  • Blobs

  • Blob containers

  • File shares

Retention Period:

  • Default: 7 days

  • Customizable: 1 to 365 days

Functionality:

  • Deleted data is not immediately removed.

  • Can be restored within the configured window.

  • Works like a "recycle bin" for Azure Storage.

Exam Tip:
Questions may test whether soft delete applies at the container level (it does) and what the default or maximum retention limits are.

5. Private Endpoint vs Firewall Behavior

Key Concepts:

  • A private endpoint maps a resource to a private IP in your VNet.

  • Once connected via a private endpoint:

    • Traffic stays on the Azure backbone and is not evaluated against the account's public-network firewall rules.

    • Public network access can then be disabled or restricted via firewall rules.

Firewall Behavior:

  • If firewall rules block all public access, users can still reach the storage account through the private endpoint.

Exam Tip:
Expect scenarios asking whether firewall rules block private-endpoint traffic: they do not, because the storage firewall applies only to traffic arriving over the public endpoint.

6. Archive Tier Access Latency

Overview:
The Archive tier is optimized for long-term, rarely accessed data.

Rehydration Required:

  • Before reading an archived blob, you must rehydrate it to the Hot or Cool tier.

  • Standard-priority rehydration can take up to 15 hours; High priority can finish in under one hour for blobs smaller than 10 GB.
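Rehydration is triggered by changing the blob's tier; a sketch with placeholder names:

```shell
# Start rehydrating an archived blob back to Hot with High priority.
az storage blob set-tier \
  --account-name mystorageacct \
  --container-name archive \
  --name records-2019.zip \
  --tier Hot \
  --rehydrate-priority High \
  --auth-mode login
```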

Use Cases:

  • Legal compliance

  • Financial data archiving

  • Medical records

Exam Tip:
You may be asked about access latency or cost implications of rehydration. Archive is low-cost for storage, high-cost for access.

7. Blob Index Tags (Advanced but Useful)

What It Is:
Blob index tags let you assign key-value pairs (e.g., project=finance) to blobs.

Benefits:

  • Query blobs without scanning entire containers.

  • Apply lifecycle management rules based on tag filters.

  • Easier search, filtering, and compliance management.

Prerequisites:

  • Available on General-purpose v2 and premium block blob accounts.

  • Not supported when the hierarchical namespace (ADLS Gen2) is enabled.

Use Cases:

  • Tagging invoices, contracts, backups

  • Automating data retention or archival
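As a sketch, tags can be set and queried from the CLI (names and the tag are placeholders):

```shell
# Attach an index tag to a blob, then find every blob carrying that
# tag across the account without enumerating containers.
az storage blob tag set \
  --account-name mystorageacct \
  --container-name invoices \
  --name inv-1001.pdf \
  --tags project=finance \
  --auth-mode login

az storage blob filter \
  --account-name mystorageacct \
  --tag-filter "\"project\"='finance'" \
  --auth-mode login
```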

Exam Tip:
This is less commonly tested but valuable in real-world design. Be familiar with the syntax and logic of index-based filtering rules.

Frequently Asked Questions

What is the primary difference between Hot, Cool, and Archive tiers in Azure Blob Storage?

Answer:

The tiers differ mainly in storage cost, access cost, and expected data access frequency.

Explanation:

Hot tier is optimized for frequently accessed data and has higher storage cost but low access cost. Cool tier is designed for infrequently accessed data and offers lower storage cost but higher access costs. Archive tier provides the lowest storage cost but requires rehydration before data can be accessed, which can take hours. Organizations typically choose tiers based on how frequently data is accessed and cost optimization goals.

How can administrators restrict access to an Azure Storage account so that it is accessible only from a private network?

Answer:

By using a Private Endpoint with Azure Private Link.

Explanation:

A private endpoint assigns a private IP address from a virtual network to the storage account service. This allows traffic to flow through the Azure backbone network rather than the public internet. When combined with storage firewall rules that disable public access, only resources inside the virtual network can reach the storage account. This is a common design for secure enterprise workloads requiring strict network isolation.

What happens if public access is disabled on an Azure Storage account but data is still reachable?

Answer:

The data may still be accessible through authorized identities or private endpoints.

Explanation:

Disabling public access prevents anonymous access from the internet but does not block authenticated requests using Azure AD identities, shared access signatures (SAS), or private network endpoints. If a user has valid credentials or if a service connects through a private endpoint within the virtual network, the storage account remains accessible. This distinction often confuses administrators who expect public access disabling to block all connectivity.

Which feature protects Azure Storage data from accidental deletion?

Answer:

Soft delete for blobs or containers.

Explanation:

Soft delete retains deleted data for a configurable retention period. During this time, administrators can recover deleted blobs or containers. This feature is useful for protecting against accidental deletion or malicious operations. It works by keeping a hidden copy of deleted objects until the retention period expires. After that, the data is permanently removed. Soft delete is often combined with versioning for additional protection.
