# Polling vs Interrupts
Polling and interrupts are two fundamental approaches for handling network events. The choice between them significantly impacts latency, throughput, and CPU utilization in networking applications.
## What are Interrupts?
Interrupts are hardware signals that notify the CPU when an event occurs, such as data arrival on a network interface.
```cpp
// Traditional interrupt-driven networking
int sock = socket(AF_INET, SOCK_STREAM, 0);
// ... setup socket ...

// Block waiting for data (interrupt-driven)
char buffer[1024];
int bytes = recv(sock, buffer, sizeof(buffer), 0); // Blocks until data arrives
// CPU is free to run other processes while waiting
```

Interrupt Flow:
```
Network Card → Interrupt → CPU → Kernel → Application
      ↑ Data arrives      ↑ Hardware signal
```

## What is Polling?
Polling continuously checks for events instead of waiting for interrupts.
```cpp
// Polling-based networking
int sock = socket(AF_INET, SOCK_STREAM, 0);
// ... setup socket ...

// Continuously check for data
while (true) {
    char buffer[1024];
    int bytes = recv(sock, buffer, sizeof(buffer), MSG_DONTWAIT);
    if (bytes > 0) {
        process_data(buffer, bytes);
    }
    // CPU continuously checks, no blocking
}
```

Polling Flow:
```
Application → Check Network Card → Process Data → Repeat
      ↑ Continuous checking
```

## Performance Comparison
### Latency
Interrupts:
- Pros: CPU is free when no events occur
- Cons: Interrupt overhead (context switch, cache invalidation)
- Latency: 1-10 microseconds (interrupt overhead)
Polling:
- Pros: No interrupt overhead, predictable latency
- Cons: CPU waste when no events occur
- Latency: 0.1-1 microseconds (no interrupt or context-switch overhead)
### CPU Utilization
Interrupts:
```cpp
// Interrupt-driven: CPU utilization depends on event frequency
// Low frequency: Low CPU usage
// High frequency: High CPU usage due to interrupt overhead
```

Polling:
```cpp
// Polling: CPU utilization depends on polling frequency
// Low frequency: Low CPU usage, high latency
// High frequency: High CPU usage, low latency
```

## Use Cases
### High-Frequency Trading
```cpp
// Interrupt-driven HFT (problematic)
while (true) {
    char buffer[1024];
    recv(sock, buffer, sizeof(buffer), 0); // Blocking, interrupt-driven
    process_market_data(buffer);
    send(sock, response, response_size, 0); // response built by process_market_data (not shown)
}
// Problem: Unpredictable latency due to interrupts

// Polling-based HFT (better)
while (true) {
    char buffer[1024];
    int bytes = recv(sock, buffer, sizeof(buffer), MSG_DONTWAIT);
    if (bytes > 0) {
        process_market_data(buffer);
        send(sock, response, response_size, MSG_DONTWAIT);
    }
    // Predictable, low latency
}
```

### Web Server
```cpp
// Interrupt-driven web server
while (true) {
    int client = accept(server_socket, NULL, NULL); // Blocks until a connection arrives
    handle_request(client);
    close(client);
}
// Good for low-load scenarios

// Polling-based web server
// (server_socket set non-blocking beforehand, e.g. via fcntl O_NONBLOCK;
//  accept takes no flags argument, unlike recv)
while (true) {
    int client = accept(server_socket, NULL, NULL); // -1 with EAGAIN when nothing is pending
    if (client >= 0) {
        handle_request(client);
        close(client);
    }
    // Better for high-load scenarios
}
```

## Polling Strategies
### 1. Busy Polling
Busy polling continuously checks without yielding CPU time.
```cpp
// Busy polling - maximum performance, maximum CPU usage
while (true) {
    char buffer[1024];
    int bytes = recv(sock, buffer, sizeof(buffer), MSG_DONTWAIT);
    if (bytes > 0) {
        process_data(buffer, bytes);
    }
    // No sleep, no yield - maximum responsiveness
}
```

### 2. Adaptive Polling
Adaptive polling adjusts polling frequency based on load.
```cpp
// Adaptive polling
int poll_interval = 1000; // Sleep between polls, in microseconds (start at 1 ms)
while (true) {
    char buffer[1024];
    int bytes = recv(sock, buffer, sizeof(buffer), MSG_DONTWAIT);
    if (bytes > 0) {
        process_data(buffer, bytes);
        poll_interval = std::max(100, poll_interval / 2);   // Traffic: poll more often
    } else {
        poll_interval = std::min(10000, poll_interval * 2); // Idle: back off
    }
    usleep(poll_interval); // Adaptive sleep
}
```

### 3. Hybrid Polling
Hybrid polling switches between polling and interrupts based on conditions.
```cpp
// Hybrid polling
bool high_load = false;
while (true) {
    if (high_load) {
        // Use polling during high load
        char buffer[1024];
        int bytes = recv(sock, buffer, sizeof(buffer), MSG_DONTWAIT);
        if (bytes > 0) {
            process_data(buffer, bytes);
        }
    } else {
        // Use interrupts during low load
        char buffer[1024];
        int bytes = recv(sock, buffer, sizeof(buffer), 0); // Blocking
        process_data(buffer, bytes);
    }
    // Update load status
    high_load = get_load_level() > THRESHOLD;
}
```

## Kernel Bypass Polling
DPDK and other kernel bypass techniques use polling exclusively.
```cpp
// DPDK polling loop
while (true) {
    struct rte_mbuf *pkts[32];
    uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
    for (uint16_t i = 0; i < nb_rx; i++) {
        process_packet(pkts[i]);
        rte_pktmbuf_free(pkts[i]);
    }
    // No interrupts, no context switches
    // Dedicated CPU core for maximum performance
}
```

## Performance Metrics
### Latency Comparison
| Technique | Latency | CPU Usage | Predictability |
|---|---|---|---|
| Interrupts | 1-10 μs | Variable | Low |
| Polling | 0.1-1 μs | High | High |
| Busy Polling | 0.1 μs | Very High | Very High |
| Adaptive Polling | 0.1-10 μs | Variable | Medium |
### Throughput Comparison
```cpp
// Performance characteristics
// Interrupts: Good for low-frequency events
// Polling: Good for high-frequency events
// Busy Polling: Maximum performance, maximum CPU usage
```

## Decision Guide
### Use Interrupts When:
- Low event frequency: Events occur infrequently
- Power efficiency: Battery-powered devices
- General-purpose systems: Multiple applications sharing CPU
- Simple applications: Standard networking needs
```cpp
// Good for: Web browsing, file transfers, general networking
while (true) {
    char buffer[1024];
    int bytes = recv(sock, buffer, sizeof(buffer), 0); // Blocking, interrupt-driven
    process_data(buffer, bytes);
}
```

### Use Polling When:
- High event frequency: Events occur frequently
- Latency critical: Predictable, low latency required
- Dedicated systems: Single application with dedicated resources
- High-performance applications: HFT, gaming, real-time systems
```cpp
// Good for: HFT, gaming, real-time applications
while (true) {
    char buffer[1024];
    int bytes = recv(sock, buffer, sizeof(buffer), MSG_DONTWAIT);
    if (bytes > 0) {
        process_data(buffer, bytes);
    }
}
```

### Use Busy Polling When:
- Maximum performance: Every microsecond matters
- Dedicated cores: CPU cores dedicated to networking
- Ultra-low latency: Sub-microsecond latency required
- High-frequency events: Very frequent network events
```cpp
// Good for: Ultra-low latency HFT, network appliances
while (true) {
    char buffer[1024];
    int bytes = recv(sock, buffer, sizeof(buffer), MSG_DONTWAIT);
    if (bytes > 0) {
        process_data(buffer, bytes);
    }
    // No sleep, no yield - maximum responsiveness
}
```

## Implementation Considerations
### Hardware Requirements
- CPU: Sufficient cores for polling
- Network Cards: Support for polling mode
- Memory: Fast access for polling loops
### Software Requirements
- Operating System: Support for non-blocking I/O
- Applications: Designed for polling patterns
- Scheduling: Proper CPU affinity and priority
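In practice, the non-blocking I/O requirement comes down to flipping one flag on the socket. A small sketch using the standard POSIX `fcntl` interface:

```cpp
// Switch a descriptor to non-blocking mode so recv()/accept()
// return -1 with errno == EAGAIN instead of blocking.
#include <fcntl.h>

// Returns 0 on success, -1 on error (errno set by fcntl)
int set_nonblocking(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);             // Read current file status flags
    if (flags < 0) return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK); // Add the non-blocking bit
}
```

Once set, polling loops no longer need `MSG_DONTWAIT` on every `recv`, and `accept` on a non-blocking listening socket returns -1 with `EAGAIN` when no connection is pending.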
### Configuration
```cpp
// Pin the polling thread to a dedicated core
cpu_set_t cpuset;
CPU_ZERO(&cpuset);
CPU_SET(core_id, &cpuset);
pthread_setaffinity_np(thread_id, sizeof(cpu_set_t), &cpuset);

// Give the polling thread real-time priority
struct sched_param param;
param.sched_priority = sched_get_priority_max(SCHED_FIFO);
pthread_setschedparam(thread_id, SCHED_FIFO, &param);
```

## Best Practices
### 1. Choose Based on Event Frequency
```cpp
// Low frequency (< 1,000 events/sec): Use interrupts
// High frequency (> 10,000 events/sec): Use polling
// Very high frequency (> 100,000 events/sec): Use busy polling
```

### 2. Monitor CPU Usage
```cpp
// Monitor CPU usage when using polling
// Adjust polling frequency based on load
// Consider hybrid approaches for variable loads
```

### 3. Use Appropriate Timeouts
```cpp
// Bound how long a blocking recv() can wait
struct timeval timeout;
timeout.tv_sec = 0;
timeout.tv_usec = 1000; // 1 ms timeout
setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
```

### 4. Consider NUMA Effects
```cpp
// Bind polling threads to appropriate NUMA nodes
// Ensure network card and CPU are on the same NUMA node
// Use NUMA-aware memory allocation
```

## The Bottom Line
The choice between polling and interrupts depends on your specific requirements for latency, throughput, and CPU utilization. Interrupts are better for low-frequency events and power efficiency, while polling is better for high-frequency events and predictable latency.
Key Takeaways:
- Interrupts provide power efficiency but unpredictable latency
- Polling provides predictable latency but higher CPU usage
- Busy polling provides maximum performance but maximum CPU usage
- Choose based on event frequency and latency requirements
- Kernel bypass techniques use polling exclusively
- Hybrid approaches can provide the best of both worlds
- Monitor and tune based on actual performance requirements
## Questions
Q: What is polling in networking?
Polling involves continuously checking for network events (like data arrival) instead of waiting for interrupts, eliminating interrupt overhead.
Q: What is the main advantage of polling over interrupts?
Polling provides predictable latency and eliminates context switches, making it suitable for latency-critical applications like HFT.
Q: What is the main disadvantage of polling?
Polling wastes CPU cycles when no network events occur, as the CPU continuously checks for events instead of being interrupted only when needed.
Q: When should you use polling instead of interrupts?
Use polling when latency is critical and network events are frequent, as the predictable latency outweighs the CPU overhead cost.
Q: What is hybrid polling?
Hybrid polling switches between polling and interrupts based on system load, using polling during high-load periods and interrupts during low-load periods.