Encryption ensures that data is unreadable to unauthorized users. In Azure Storage, encryption happens by default.
Server-side encryption means Azure encrypts your data as it’s written to storage, and decrypts it when read.
There are two main key management options:
Microsoft-managed keys: the default option (no setup required).
Microsoft controls and rotates the encryption keys.
Fully automatic and meets most compliance requirements (e.g., ISO, SOC, GDPR).
Customer-managed keys: you manage your own keys using Azure Key Vault.
This gives you more control:
You can rotate keys manually.
You can revoke access to data by disabling the key.
Useful in regulated industries (finance, healthcare, etc.).
Client-side encryption: you encrypt data on the client before uploading it to Azure Storage.
You manage your own encryption libraries and keys.
Used when you need end-to-end control or extra protection for sensitive data.
Example: Use a custom app to encrypt data using AES before uploading.
Controlling who can access what data is essential for security. Azure offers several mechanisms:
SAS tokens let you grant limited-time and permission-scoped access to storage data.
There are three types:
| Type | Purpose |
|---|---|
| User delegation SAS | Uses Azure AD credentials, more secure, applies to Blob Storage |
| Service SAS | Grants access to specific storage resources (like a blob, file, or table) |
| Account SAS | Grants access at the account level, broader access than service SAS |
SAS tokens are URLs with query parameters — shareable via code or email, but should be treated like passwords.
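The anatomy of a SAS URL can be inspected with a few lines of Python. The account, container, and signature below are made-up placeholders, but the query parameter names (sv, sp, se, sig) are the ones Azure actually uses:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical SAS URL -- values are placeholders, parameter names are real.
sas_url = (
    "https://mystorageacct.blob.core.windows.net/mycontainer/report.pdf"
    "?sv=2022-11-02&sp=r&se=2025-01-31T00:00:00Z&sig=abc123"
)

params = parse_qs(urlparse(sas_url).query)

api_version = params["sv"][0]   # storage service version
permissions = params["sp"][0]   # 'r' = read-only
expiry      = params["se"][0]   # token stops working after this time
```

Because everything needed to use the token sits in the query string, anyone holding the full URL holds the access, which is why SAS URLs should be treated like passwords.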
Azure Storage supports Azure AD-based access for Blob and Queue storage.
Benefits:
Centralized identity management.
Role-based access control (RBAC) applies.
Users must authenticate with Azure AD and be assigned roles like:
Storage Blob Data Reader
Storage Blob Data Contributor
Access control can be done using:
RBAC (via Azure AD): recommended for most scenarios; no secrets to share.
Access keys: provide full admin-level access to a storage account (not recommended for sharing).
SAS tokens: provide scoped and temporary access.
Best Practice: Avoid sharing access keys; prefer SAS or Azure AD RBAC.
You can restrict storage account access to:
Specific IP address ranges
Virtual networks (VNets) and subnets
Steps:
Go to the storage account in the portal.
Under Networking, choose Selected networks.
Add VNets or IP ranges allowed to access the account.
Use private endpoints for secure access over private IPs within your VNet.
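The "Selected networks" IP check can be illustrated locally with Python's stdlib ipaddress module. The ranges below are example values, not anything Azure-specific:

```python
import ipaddress

# Example allow-list, like the ranges you might add under Networking > Selected networks
allowed_ranges = [
    ipaddress.ip_network(r) for r in ("203.0.113.0/24", "198.51.100.7/32")
]

def is_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls inside any allowed range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in allowed_ranges)
```

A /32 entry allows exactly one address, while a /24 allows the whole 256-address block; Azure's firewall rules accept both forms.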
Azure Storage accounts are containers for your data services, such as blobs, files, queues, and tables. Choosing the correct type affects pricing, performance, and available features.
General-purpose v2 (GPv2), the default and recommended account type:
Supports all storage types: Blob, File, Queue, Table, and Disk
Access tiers: Hot, Cool, Archive
Most feature-rich and cost-efficient for modern use cases
Supports all redundancy and performance options
General-purpose v1 (GPv1): older type with limited features
No support for access tiers
Not recommended for new deployments
Blob storage account: optimized for storing only blobs
Supports Hot/Cool access tiers
Rarely used now—GPv2 accounts include all Blob features and more
| Performance tier | Use case | Media | Notes |
|---|---|---|---|
| Standard | General workloads | HDD/Standard SSD | Cost-effective |
| Premium | High-performance needs | Premium SSD or NVMe | Low latency, higher IOPS |
Examples:
Use Premium for workloads like databases or virtual machine disks.
Use Standard for logs, backups, documents.
Access tiers let you match storage pricing to how frequently your data is accessed:
| Tier | Use case | Pricing |
|---|---|---|
| Hot | Frequently accessed data | Higher storage cost, lower access cost |
| Cool | Infrequently accessed data (≥30 days) | Lower storage cost, higher access cost |
| Archive | Rarely accessed data (≥180 days) | Cheapest storage, highest access cost and delay (~hours to rehydrate) |
You can change tiers per blob or automatically with lifecycle rules.
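As a rough local sketch, the thresholds from the table above can be turned into a tier-picker helper. The 30-day and 180-day cutoffs mirror the table's minimum-retention guidance; real tiering decisions should also weigh access costs:

```python
def suggest_tier(days_since_last_access: int) -> str:
    """Suggest a blob access tier using the 30/180-day cutoffs from the tier table."""
    if days_since_last_access >= 180:
        return "Archive"   # rarely accessed; cheapest storage, slow rehydration
    if days_since_last_access >= 30:
        return "Cool"      # infrequently accessed
    return "Hot"           # frequently accessed
```

In practice this kind of logic is usually expressed as a lifecycle management rule rather than application code, so Azure applies it automatically.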
Replication determines how your data is duplicated for durability and high availability.
| Type | Scope | Use Case |
|---|---|---|
| LRS (Locally Redundant Storage) | 3 copies within one datacenter | Least expensive |
| ZRS (Zone-Redundant Storage) | Across 3 availability zones in a region | Higher availability |
| GRS (Geo-Redundant Storage) | Replicates to secondary region (read access disabled) | Disaster recovery |
| RA-GRS (Read-Access GRS) | Like GRS, but with read access to secondary | Read during outages |
| GZRS (Geo+Zone Redundant) | Across zones + replicated to another region | Max durability |
| RA-GZRS | GZRS with read access to secondary | Highest durability and availability |
Choose based on criticality and budget.
Public access: Open to all networks (not recommended)
Selected networks: Allow only specific IP ranges or VNets
Private endpoint: Assigns a private IP address inside your VNet, ensures secure communication
Custom Domain: Replace default *.blob.core.windows.net with your own domain (e.g., storage.mycompany.com)
TLS Settings: Enforce secure transfer (https only) to enhance security
Used to automate blob tiering and deletion based on rules:
Move blobs to Cool after 30 days of no access
Move to Archive after 90 days
Delete blobs after 365 days
Steps:
Go to Storage Account > Data Management > Lifecycle Management
Add a rule
Set filters (e.g., prefix or blob type)
Define actions and conditions
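Putting the steps together, a lifecycle policy implementing the three example rules might look like the following JSON, built here in Python. The rule name and prefix filter are hypothetical, and this sketch keys off last-modified time; last-access-time conditions additionally require access tracking to be enabled on the account:

```python
import json

# Sketch of a lifecycle policy: Cool after 30 days, Archive after 90,
# delete after 365 (all measured from last modification).
policy = {
    "rules": [
        {
            "name": "tier-and-expire-logs",    # hypothetical rule name
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["logs/"],  # hypothetical prefix filter
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool":    {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                        "delete":        {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}

policy_json = json.dumps(policy, indent=2)
```

The same policy can be pasted into the Code view of the Lifecycle Management blade instead of building each rule in the portal UI.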
This section is about how you actually store, organize, access, and transfer data in Azure using different storage services.
Blob = Binary Large Object
It is optimized for massive unstructured data, such as images, videos, backups, and logs.
| Blob Type | Description | Use Case |
|---|---|---|
| Block Blob | Stores data in blocks | Most common; used for images, documents, backups |
| Append Blob | Optimized for append operations | Ideal for logging; you can only add to the end |
| Page Blob | Optimized for random read/write | Used for virtual hard disks (VHDs), like Azure VM disks |
Block blob is the default and most widely used blob type.
You can interact with blobs using:
Azure Portal (manual upload/download)
AzCopy (command-line tool)
Azure CLI or PowerShell
SDKs and REST APIs
Supports operations like:
Uploading large files in blocks (resumable)
Deleting blobs individually or in batch
Versioning and snapshot support
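Block uploads are resumable because the file is split into independently uploaded blocks that are committed as an ordered list at the end. A local sketch of the splitting step, using 4 MiB blocks as one common (but not required) size:

```python
BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB per block (example size)

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE) -> list[bytes]:
    """Split a payload into fixed-size blocks, as a block-blob upload would."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

payload = b"x" * (9 * 1024 * 1024)   # 9 MiB of sample data
blocks = split_into_blocks(payload)  # two full 4 MiB blocks plus a 1 MiB remainder

# Recombining the blocks in order reproduces the original payload,
# which is what committing the block list does server-side.
assert b"".join(blocks) == payload
```

If an upload is interrupted, only the missing blocks need to be re-sent before the final commit, which is what makes tools like AzCopy resumable.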
Containers are like top-level folders inside a blob storage account (blob names can include "/" to simulate subfolders). Each blob must belong to a container.
Metadata: Key-value pairs stored with each blob (e.g., owner = “finance”)
Soft Delete: Allows recovery of deleted blobs (like a recycle bin)
Versioning: Automatically stores past versions of a blob for rollback or recovery
Enable these features from: Storage Account > Data Protection
Azure Files provides shared folders in the cloud that you can mount via SMB or NFS.
SMB (Server Message Block): Used by Windows clients; can map drives like \\storageaccount.file.core.windows.net\share
NFS (Network File System): For Linux-based systems
Azure File Shares support standard file operations and access control (NTFS ACLs on SMB shares)
Use Azure File Sync to:
Synchronize your on-prem Windows Server file shares with Azure Files
Enable cloud tiering: keep frequently accessed files local, others in the cloud
Create a hybrid file server model
Setup involves:
Create a file share in Azure
Install Azure File Sync agent on your Windows Server
Register the server in Azure
Create a sync group and add endpoints
Snapshots: Point-in-time copies of a file share. Useful for recovery or auditing.
Can restore individual files or entire share to a previous state.
Snapshots don’t need extra tools — managed within the Azure portal or CLI.
Azure offers several options for moving data in and out of Azure Storage.
AzCopy: command-line tool
Fast, efficient, resumable uploads/downloads
Example command (uploading a single file; add --recursive when copying a directory):

```shell
azcopy copy "file.txt" "https://<account>.blob.core.windows.net/<container>?<SAS>"
```
Azure Storage Explorer: GUI tool for Windows/macOS/Linux
Easy to manage blobs, tables, queues, and file shares
Great for non-developers or ad-hoc data tasks
Azure Data Factory: ETL tool used for data integration and migration
Supports moving data from:
On-prem SQL, Oracle
AWS S3, FTP
Other cloud storage
Azure Data Box: for large-scale data transfer (TBs to PBs)
Microsoft provides encrypted disks for physical shipping to/from Azure datacenters
Useful when network bandwidth is limited or expensive
Azure Data Lake Storage Gen2 supports a file-system-like structure with folders and access control
Hierarchical namespace enables:
Faster access to files
ACL-based security
Efficient big data analytics
Ideal for use with Azure Synapse, Databricks, or Hadoop
This section focuses on advanced configurations, optimization features, and troubleshooting techniques for Azure’s two core data storage services.
Azure File Sync allows you to synchronize on-premises Windows file servers with Azure Files, creating a hybrid cloud file system.
Benefits:
Centralized file management in Azure
Reduced on-prem storage usage with cloud tiering
Disaster recovery and data protection
Steps:
Create an Azure File Share
Deploy Azure File Sync Agent on the Windows server
Register the server in Azure Storage Sync Service
Create a sync group in Azure
Add the Azure file share and on-prem path as endpoints
File changes are synchronized both ways (cloud ↔ local)
Cloud tiering saves space on the local server.
Frequently accessed files remain on-premises.
Other files become stubs, pulled from Azure on demand.
Configuration settings:
Set a volume free space threshold (e.g., keep 20% of disk free)
File tiering is automatic and policy-driven
Soft delete protects against accidental deletions.
When a file share is deleted, it is retained in the "soft deleted" state for a configurable number of days (up to 365).
You can undelete it during that retention window.
Go to Storage Account > Data Protection
Enable soft delete for file shares
Set the retention period
Can also be enabled for blobs and containers.
Enable diagnostic logs to track:
Access patterns
Errors
Latency and throughput
Steps:
Go to Storage Account > Diagnostic settings
Choose logs to send to:
Log Analytics
Event Hub
Storage account (for archiving)
| Issue | Possible Cause | Solution |
|---|---|---|
| Access denied (403) | Invalid SAS or RBAC permissions | Verify token or role |
| Network timeout | Firewall or NSG blocking traffic | Check network settings |
| Slow access | Too many concurrent requests | Scale up or optimize request patterns |
| Sync delays | Agent offline or misconfigured | Restart sync agent and check status |
Use Azure Monitor, Metrics, and Activity Log to investigate issues in real-time.
This section focuses on tracking usage, detecting threats, and monitoring the performance and health of your Azure Storage resources.
Diagnostic logging provides detailed logs about:
Who accessed what
What operations were performed (read, write, delete)
Latency and request details
Useful for auditing, troubleshooting, and security forensics.
Go to your Storage Account > Monitoring > Diagnostic settings
Click + Add diagnostic setting
Choose what to log:
Blob, File, Queue, Table logs
Read, Write, Delete operations
Metrics (requests, errors, latency)
Choose where to send logs:
Log Analytics (for querying)
Event Hub (for streaming to SIEM tools)
Storage account (for long-term storage)
You can enable one or more destinations at once.
Azure Monitor provides:
Built-in metrics for storage accounts (e.g., total requests, egress, latency)
Custom alerts
Charts and dashboards
Access from: Storage Account > Monitoring > Metrics
Common metrics:
TotalRequests
Availability
Egress / Ingress (data transfer)
SuccessServerLatency, SuccessE2ELatency
Use Log Analytics to write powerful queries for diagnostics logs.
Example (Kusto Query Language):

```kusto
StorageBlobLogs
| where StatusCode == 403
| summarize count() by CallerIpAddress
```
Helps identify:
Unauthorized access attempts
IPs causing errors
Access trends over time
Azure allows you to define alerts for any abnormal behavior.
Go to Storage Account > Monitoring > Alerts
Click + New alert rule
Define:
Condition: e.g., TotalRequests > 10,000, or Availability < 99%
Scope: specific storage account
Action Group: who to notify and how (email, SMS, webhook, etc.)
Use alert severity levels (0–4) to classify urgency.
Set up alerts on performance thresholds to detect slowdowns
Alert on error spikes (e.g., 403 or 500 errors)
Create security alerts for suspicious IPs or denied requests
Combine alerts with Log Analytics Workbooks for visualization
What It Does:
Blob object replication allows asynchronous replication of blobs between containers — even across storage accounts and Azure regions.
Use Cases:
Cross-region data redundancy
Regulatory compliance
Global content distribution
Key Points:
Blob versioning must be enabled on both the source and destination accounts (and change feed on the source).
Replication is automatic and does not require application-level intervention.
It’s configured per rule, specifying source and destination conditions.
Where to Enable: Storage Account > Data Management > Object Replication
Exam Tip:
Do not confuse this with storage redundancy like GRS or RA-GRS. Object replication copies selected blobs between containers you choose; redundancy options replicate the entire account automatically.
Scenario:
To control access to Azure File Shares using NTFS ACLs, integration with Azure Active Directory Domain Services (Azure AD DS) is required.
Benefits:
Enables domain-based access control (SMB + ACL).
Users can access file shares using Windows credentials.
Supports traditional enterprise scenarios with group-based access control.
How It Works:
Deploy Azure AD Domain Services.
Enable Azure AD DS authentication on the storage account (effectively domain-joining it).
Assign access via NTFS permissions and group membership.
Exam Tip:
Know that regular Azure AD (without AD DS) does not support traditional Windows authentication for file shares.
Important Distinction:
Azure has storage-specific roles that are different from general-purpose roles like Contributor or Owner.
| Role Type | Grants Access To |
|---|---|
| Contributor | Manage resource settings |
| Storage Blob Data Reader | Read blob contents |
| Storage Queue Data Contributor | Write to storage queues |
Key Point:
Being a Contributor does not automatically grant data-level access to blob content.
You must assign storage-specific RBAC roles for data plane access (reading/writing blobs, queues, tables).
Exam Tip:
When troubleshooting access denial errors in blob storage, check for missing data roles, not just management roles.
Scope:
Soft delete is supported for:
Blobs
Blob containers
File shares
Retention Period:
Default: 7 days
Customizable: 1 to 365 days
Functionality:
Deleted data is not immediately removed.
Can be restored within the configured window.
Works like a "recycle bin" for Azure Storage.
Exam Tip:
Questions may test whether soft delete applies at the container level (it does) and what the default or maximum retention limits are.
Key Concepts:
A private endpoint maps a resource to a private IP in your VNet.
Once connected via private endpoint:
Traffic flows over the internal Azure backbone via the private IP; storage account firewall rules do not filter private endpoint traffic.
Public network access can be disabled or restricted via firewall rules.
Firewall Behavior: storage firewall rules apply to traffic arriving at the public endpoint; connections through a private endpoint are not subject to them.
Exam Tip:
Expect scenarios asking whether firewall rules block private traffic — they do not unless explicitly misconfigured.
Overview:
The Archive tier is optimized for long-term, rarely accessed data.
Rehydration Required:
Before reading from archive, you must rehydrate the blob.
Standard-priority rehydration can take up to 15 hours; high-priority rehydration can complete in under an hour for smaller blobs.
Use Cases:
Legal compliance
Financial data archiving
Medical records
Exam Tip:
You may be asked about access latency or cost implications of rehydration. Archive is low-cost for storage, high-cost for access.
What It Is:
Blob index tags let you assign key-value pairs (e.g., project=finance) to blobs.
Benefits:
Query blobs without scanning entire containers.
Apply lifecycle management rules based on tag filters.
Easier search, filtering, and compliance management.
Prerequisites:
Available on General-purpose v2 accounts.
Not supported when hierarchical namespace (Data Lake Storage Gen2) is enabled.
Use Cases:
Tagging invoices, contracts, backups
Automating data retention or archival
Exam Tip:
This is less commonly tested but valuable in real-world design. Be familiar with the syntax and logic of index-based filtering rules.
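As a local illustration of index-based filtering, here is a minimal sketch; the blob names and tag values are hypothetical:

```python
# Hypothetical blob inventory: blob name -> its index tags
blobs = {
    "invoices/2024-01.pdf": {"project": "finance", "retention": "7y"},
    "backups/db-full.bak":  {"project": "ops"},
    "invoices/2024-02.pdf": {"project": "finance", "retention": "7y"},
}

def find_by_tag(inventory: dict, key: str, value: str) -> list[str]:
    """Return blob names whose index tags contain key == value."""
    return sorted(name for name, tags in inventory.items() if tags.get(key) == value)

finance_blobs = find_by_tag(blobs, "project", "finance")
```

The service-side equivalent is a tag query such as project = 'finance', which Azure evaluates against its index instead of scanning every blob in the container.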
What is the primary difference between Hot, Cool, and Archive tiers in Azure Blob Storage?
The tiers differ mainly in storage cost, access cost, and expected data access frequency.
Hot tier is optimized for frequently accessed data and has higher storage cost but low access cost. Cool tier is designed for infrequently accessed data and offers lower storage cost but higher access costs. Archive tier provides the lowest storage cost but requires rehydration before data can be accessed, which can take hours. Organizations typically choose tiers based on how frequently data is accessed and cost optimization goals.
Demand Score: 84
Exam Relevance Score: 92
How can administrators restrict access to an Azure Storage account so that it is accessible only from a private network?
By using a Private Endpoint with Azure Private Link.
A private endpoint assigns a private IP address from a virtual network to the storage account service. This allows traffic to flow through the Azure backbone network rather than the public internet. When combined with storage firewall rules that disable public access, only resources inside the virtual network can reach the storage account. This is a common design for secure enterprise workloads requiring strict network isolation.
Demand Score: 82
Exam Relevance Score: 93
What happens if public access is disabled on an Azure Storage account but data is still reachable?
The data may still be accessible through authorized identities or private endpoints.
Disabling public access prevents anonymous access from the internet but does not block authenticated requests using Azure AD identities, shared access signatures (SAS), or private network endpoints. If a user has valid credentials or if a service connects through a private endpoint within the virtual network, the storage account remains accessible. This distinction often confuses administrators who expect public access disabling to block all connectivity.
Demand Score: 79
Exam Relevance Score: 90
Which feature protects Azure Storage data from accidental deletion?
Soft delete for blobs or containers.
Soft delete retains deleted data for a configurable retention period. During this time, administrators can recover deleted blobs or containers. This feature is useful for protecting against accidental deletion or malicious operations. It works by keeping a hidden copy of deleted objects until the retention period expires. After that, the data is permanently removed. Soft delete is often combined with versioning for additional protection.
Demand Score: 77
Exam Relevance Score: 88