Objective: To establish and optimize the data collection network infrastructure necessary for comprehensive network analysis.
Implementing these strategies effectively requires a detailed understanding of both the technological and regulatory aspects of network management. Each component of the data collection system must be carefully configured to ensure it collects necessary data without compromising the network's performance or security. As you progress, regularly reviewing and updating your data collection practices in response to new threats and changing network conditions will help maintain a robust and secure network assurance framework.
In enterprise-scale networks, especially those with high data volume or distributed topologies, raw telemetry or SNMP data is not always sent directly to the final analytics platform. Instead, Cisco architectures often use aggregation points to improve efficiency and manageability.
Data Aggregation Points:
These are intermediate systems such as collectors, brokers, or log aggregators (e.g., Kafka brokers, syslog collectors) that receive data from multiple devices.
Purpose of Aggregation:
Reduces the number of direct connections to the analytics engine (e.g., Cisco DNA Center or a SIEM).
Offloads pre-processing, such as format normalization or filtering.
Enables correlation across devices before export.
Export Mechanisms:
After aggregation, data can be forwarded to:
Cisco DNA Center
Third-party platforms like Splunk, Elasticsearch, or custom dashboards
In large-scale environments, data collected via telemetry or SNMP is often sent to aggregation points (e.g., collectors or brokers) before being exported to analytics engines like Cisco DNA Center. This reduces processing overhead on network devices and centralizes data handling.
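The device-side half of this pattern can be sketched with Flexible NetFlow on IOS-XE, pointing the exporter at an intermediate collector rather than at the analytics engine itself. The IP addresses, interface, and names below are illustrative placeholders:

```
! Hypothetical example: export flow data to an aggregation collector,
! which in turn forwards to Cisco DNA Center or a SIEM.
flow exporter AGG-COLLECTOR
 ! 10.10.20.50 stands in for the aggregation point, not the analytics engine
 destination 10.10.20.50
 transport udp 2055
 source Loopback0

flow monitor BRANCH-MONITOR
 record netflow ipv4 original-input
 exporter AGG-COLLECTOR

interface GigabitEthernet0/0/1
 ip flow monitor BRANCH-MONITOR input
```

Because the exporter references only the aggregation point, the collector behind it can be changed (for example, swapping Splunk for Elasticsearch) without touching device configuration.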
While comprehensive data collection is valuable, excessive sampling or polling can negatively affect device performance. Cisco encourages smart data collection, which includes optimized configurations that reduce overhead.
SNMP Over-Polling:
Frequent polling (e.g., sub-minute intervals) can increase CPU and memory usage on network devices.
Polling large MIBs (Management Information Bases) can congest management interfaces.
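One way to limit the impact of polling large MIBs is to restrict what a community string can read. A minimal sketch using an SNMP view (the view name, community string, and OID subtrees are assumptions for illustration):

```
! Hypothetical example: expose only the system and interface groups
! to the NMS instead of the full MIB tree.
snmp-server view ASSURANCE-VIEW system included
snmp-server view ASSURANCE-VIEW ifEntry included
snmp-server community monitorRO view ASSURANCE-VIEW RO
```

A poller walking this community can only retrieve the included subtrees, which bounds the work the device does per polling cycle.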
Full NetFlow Record Export:
Exporting full-flow records for every session is resource-intensive.
High CPU load and network bandwidth usage can occur on busy interfaces.
Flow Sampling:
Instead of exporting all flows, devices export a statistically representative sample (e.g., 1 out of every 1000 packets).
Reduces processing and bandwidth costs.
Record Filtering:
Configure devices to export only relevant flow types or specific data fields.
For example, exclude internal DNS traffic or known-safe ports.
To avoid performance degradation, administrators should use sampling techniques and selective record filtering. Over-polling via SNMP or exporting full NetFlow records can significantly increase CPU load, especially on core devices.
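The sampling and filtering techniques above can be combined in one Flexible NetFlow configuration. A sketch, assuming a 1-in-1000 random sampler and a lean custom record that exports only the fields of interest (names and addresses are placeholders):

```
! Hypothetical example: sample 1 in 1000 packets and export a lean record.
sampler SAMPLE-1K
 mode random 1 out-of 1000

flow record LEAN-RECORD
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 collect counter bytes
 collect counter packets

flow exporter NETFLOW-EXPORT
 destination 10.10.20.50
 transport udp 2055

flow monitor SAMPLED-MONITOR
 record LEAN-RECORD
 exporter NETFLOW-EXPORT

interface GigabitEthernet0/0/2
 ip flow monitor SAMPLED-MONITOR sampler SAMPLE-1K input
```

Random sampling keeps the exported data statistically representative while cutting CPU and export bandwidth by roughly the sampling ratio.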
Data collection systems must be resilient, particularly in environments where continuous monitoring is mission-critical. High availability (HA) ensures that data continues to be collected and processed even if part of the system fails.
Redundant Collectors:
Multiple collectors are deployed in active/passive or active/active configurations.
If the primary collector becomes unavailable, devices can switch to the secondary collector.
Local Buffering on Devices:
Devices can temporarily store collected data if the network path is interrupted.
Once connectivity is restored, buffered data is forwarded to the collector.
Data Integrity and Synchronization:
Timestamps and sequence information in exported records allow collectors to deduplicate data and keep datasets consistent after a failover.
Synchronized clocks (e.g., via NTP) across devices and collectors are essential for accurate cross-device correlation.
To ensure resilience, configure redundant collectors and buffering mechanisms in case the primary data path fails. Devices can temporarily store data locally or forward to secondary collectors to prevent data loss.
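One simple redundancy pattern on IOS-XE is to attach two exporters to the same flow monitor, and to log to two syslog hosts. Note this duplicates records to both collectors (an active/active pattern) rather than performing failover; addresses and names below are illustrative:

```
! Hypothetical example: duplicate flow export to two collectors (active/active).
flow exporter PRIMARY-COLLECTOR
 destination 10.10.20.50
 transport udp 2055

flow exporter BACKUP-COLLECTOR
 destination 10.10.21.50
 transport udp 2055

flow monitor HA-MONITOR
 record netflow ipv4 original-input
 exporter PRIMARY-COLLECTOR
 exporter BACKUP-COLLECTOR

! Syslog to two hosts for the same reason
logging host 10.10.20.51
logging host 10.10.21.51
```

The trade-off is doubled export bandwidth in exchange for zero data loss when one collector fails; deduplication is then handled at the aggregation or analytics layer.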
| Topic | Practical Value |
|---|---|
| Aggregation and Export | Improves scalability and decouples devices from analytics systems |
| Performance Filtering | Protects device health while maintaining meaningful data flow |
| High Availability Architecture | Ensures uninterrupted data collection in fault conditions |
What configuration elements are required to enable model-driven telemetry on Cisco IOS-XE devices for DNA Center Assurance?
Model-driven telemetry requires enabling telemetry subscriptions, defining the telemetry transport protocol, and specifying YANG-based data sources.
IOS-XE devices export telemetry through subscriptions that reference YANG models describing operational data. Administrators configure the destination collector (such as Cisco DNA Center), choose a transport protocol like gRPC, and specify encoding formats such as GPB. The telemetry subscription defines which operational paths are streamed and at what interval. This structured approach allows Cisco DNA Center to ingest high-frequency telemetry efficiently while maintaining schema consistency.
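These elements map directly to IOS-XE configuration. A sketch of a configured (static) periodic subscription streaming CPU utilization over gRPC with key-value GPB encoding; the subscription ID, XPath, and receiver address are illustrative:

```
! Hypothetical example: periodic yang-push subscription to a collector.
! update-policy periodic is in centiseconds, so 6000 = 60 seconds.
telemetry ietf subscription 101
 encoding encode-kvgpb
 filter xpath /process-cpu-ios-xe-oper:cpu-usage/cpu-utilization/five-seconds
 stream yang-push
 update-policy periodic 6000
 receiver ip address 10.10.20.60 57500 protocol grpc-tcp
```

Each line corresponds to one element from the answer: the YANG-based data source (filter xpath), the transport (protocol grpc-tcp), the encoding (encode-kvgpb), and the cadence (update-policy).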
Demand Score: 85
Exam Relevance Score: 87
Why is NetFlow still relevant in Cisco DNA Center Assurance when streaming telemetry exists?
NetFlow provides detailed traffic flow visibility that complements device performance telemetry.
Streaming telemetry focuses primarily on device operational metrics such as CPU usage, interface statistics, and hardware health. NetFlow exports information about traffic flows including source and destination IPs, ports, and application characteristics. Cisco DNA Center Assurance integrates both telemetry types to analyze performance and traffic patterns simultaneously. This combined visibility helps identify issues such as traffic congestion, abnormal flows, or application-specific network behavior.
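The flow-level fields described here can be captured with a custom Flexible NetFlow record. A sketch (the record name is a placeholder, and `match application name` assumes NBAR2 application recognition is available on the platform):

```
! Hypothetical example: flow record covering the visibility NetFlow adds
! on top of device-metric telemetry.
flow record APP-VISIBILITY
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 match application name
 collect counter bytes long
 collect counter packets long
```

Streaming telemetry answers "how is the device performing," while a record like this answers "what traffic is crossing it"; Assurance correlates the two.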
Demand Score: 80
Exam Relevance Score: 84
What telemetry transport protocols are commonly used when exporting data to Cisco DNA Center?
Common transport protocols include gRPC and NETCONF-based telemetry mechanisms.
Model-driven telemetry relies on structured data defined by YANG models. Devices stream telemetry using protocols such as gRPC, which supports high-performance data transport, or NETCONF-based subscription models. These protocols enable secure, reliable streaming of operational metrics. Cisco DNA Center uses these streams to perform analytics and maintain near real-time visibility into network conditions.
Demand Score: 78
Exam Relevance Score: 83
How does Cisco DNA Center ensure telemetry data integrity during collection?
Cisco DNA Center ensures integrity through secure transport protocols, structured data models, and validation during ingestion.
Telemetry streams are transported using encrypted protocols such as TLS-enabled gRPC sessions. The use of YANG data models enforces schema validation, ensuring that the telemetry data follows defined structures. During ingestion, Cisco DNA Center validates timestamps, device identifiers, and metric formats before storing data. This validation process prevents corrupted or inconsistent telemetry data from affecting analytics and alerts.
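The encrypted-transport piece can be sketched by switching a subscription's receiver to TLS-protected gRPC. Exact keywords vary by IOS-XE release, and a CA trustpoint validating the collector certificate must already be configured; all values below are illustrative:

```
! Hypothetical example: same yang-push pattern, but over TLS-secured gRPC.
! Assumes a trustpoint for the collector's certificate is already in place.
telemetry ietf subscription 202
 encoding encode-kvgpb
 filter xpath /interfaces-ios-xe-oper:interfaces/interface/statistics
 stream yang-push
 update-policy periodic 3000
 receiver ip address 10.10.20.60 57500 protocol grpc-tls
```

TLS protects the stream in transit; the YANG schema and ingestion-time validation described above protect its structure and consistency.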
Demand Score: 76
Exam Relevance Score: 81
Why must telemetry collection intervals be carefully configured in assurance systems?
Improper telemetry intervals can either overload the system with excessive data or fail to capture important performance anomalies.
Short telemetry intervals generate more granular visibility but increase processing and storage requirements. Longer intervals reduce data volume but risk missing transient issues such as short spikes in latency or interface errors. Cisco DNA Center Assurance balances these trade-offs by recommending interval configurations that provide actionable insights without overwhelming analytics systems.
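The trade-off is visible directly in the subscription cadence. A sketch with two subscriptions for different sensor paths, one fine-grained and one coarse (IDs, paths, and receiver are placeholders; update-policy periodic is in centiseconds on IOS-XE):

```
! Hypothetical example: 5-second interval to catch transient CPU spikes.
telemetry ietf subscription 301
 encoding encode-kvgpb
 filter xpath /process-cpu-ios-xe-oper:cpu-usage/cpu-utilization/five-seconds
 stream yang-push
 update-policy periodic 500
 receiver ip address 10.10.20.60 57500 protocol grpc-tcp

! Hypothetical example: 5-minute interval for slow-moving inventory data.
telemetry ietf subscription 302
 encoding encode-kvgpb
 filter xpath /platform-ios-xe-oper:components/component/state
 stream yang-push
 update-policy periodic 30000
 receiver ip address 10.10.20.60 57500 protocol grpc-tcp
```

Matching the interval to how fast each metric actually changes keeps the fine-grained stream small enough to sustain while still catching short-lived anomalies.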
Demand Score: 74
Exam Relevance Score: 80