Memory-Pooled Linked List Queue
In high-performance systems, allocating and deallocating memory frequently can cause major slowdowns due to:
- Heap fragmentation
- Frequent calls to `new` and `delete`
- Poor cache performance
To solve this, we often use a memory pool: a preallocated region of fixed-size memory blocks that we recycle and reuse instead of allocating afresh.
This is especially powerful for linked list-based queues where:
- Nodes are frequently removed (`pop`)
- Then added again (`push`)
- And the node size is fixed, making nodes perfect for reuse
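The recycling idea above can be sketched with a tiny freelist. The names `FreeList`, `acquire`, and `release` here are illustrative, not part of the task:

```cpp
// Minimal freelist sketch: fixed-size nodes are threaded onto a singly
// linked free list when released, and handed back out on acquire,
// avoiding a heap allocation for every reuse.
struct Node {
    int value;
    Node* next = nullptr;
};

struct FreeList {
    Node* head = nullptr;

    // Take a recycled node if one exists, otherwise allocate a new one.
    Node* acquire() {
        if (head) {
            Node* n = head;
            head = n->next;
            return n;
        }
        return new Node{};
    }

    // Return a node to the pool instead of deleting it.
    void release(Node* n) {
        n->next = head;
        head = n;
    }
};
```

Releasing a node and then acquiring again hands back the very same memory block, which is the whole point of the pool.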
Real-World Use
Trading systems often process millions of messages per second. Allocating and deallocating every queue node adds significant latency and causes allocator contention. Memory pools reduce allocator pressure, increase throughput, and improve CPU cache behavior.
Your Task
Upgrade your previous linked list-based queue to use a memory pool.
When pop() is called:
- Recycle the removed node by adding it to a freelist.
When push() is called:
- If the freelist is not empty, reuse a node from it instead of allocating a new one.
Maintain:
- Correct ownership (use smart pointers or raw carefully)
- No memory leaks
- O(1) time complexity for all operations
Bonus (Optional)
- Implement `emplace()` using placement `new` to construct elements directly in recycled nodes.
- Track the memory reuse count and display it after 1,000 operations.
Implement a basic queue with memory pooling in C++.
```cpp
#include <memory>

template <typename T>
class MemoryPooledQueue {
private:
    struct Node {
        T value;
        Node* next = nullptr;
    };
    Node* head = nullptr;      // front of the queue
    Node* tail = nullptr;      // back of the queue
    Node* freelist = nullptr;  // recycled nodes awaiting reuse

public:
    MemoryPooledQueue() = default;

    void push(const T& val) {
        // TODO: Reuse a node from the freelist or allocate a new one
    }

    void pop() {
        // TODO: Recycle the node to the freelist
    }

    T& front() {
        // Assumes the queue is not empty
        return head->value;
    }

    bool empty() const {
        return head == nullptr;
    }
};
```
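A hedged reference sketch of one way the TODOs above could be filled in, using raw pointers with a destructor that frees both the live chain and the freelist (smart-pointer designs are equally valid); the `PooledQueue` name is illustrative:

```cpp
// One possible design: every operation is O(1), and popped nodes are
// pushed onto a freelist instead of being deleted.
template <typename T>
class PooledQueue {
    struct Node {
        T value;
        Node* next = nullptr;
    };
    Node* head_ = nullptr;  // front of the queue
    Node* tail_ = nullptr;  // back of the queue
    Node* free_ = nullptr;  // recycled nodes

    static void freeChain(Node* n) {
        while (n) { Node* next = n->next; delete n; n = next; }
    }

public:
    ~PooledQueue() { freeChain(head_); freeChain(free_); }

    void push(const T& val) {
        Node* n;
        if (free_) {                 // reuse a recycled node
            n = free_;
            free_ = free_->next;
            n->value = val;
            n->next = nullptr;
        } else {                     // pool empty: allocate
            n = new Node{val, nullptr};
        }
        if (tail_) tail_->next = n; else head_ = n;
        tail_ = n;
    }

    void pop() {                     // assumes the queue is not empty
        Node* n = head_;
        head_ = head_->next;
        if (!head_) tail_ = nullptr;
        n->next = free_;             // recycle instead of delete
        free_ = n;
    }

    T& front() { return head_->value; }
    bool empty() const { return head_ == nullptr; }
};
```

The destructor walks both chains, so every node allocated by `push` is eventually deleted exactly once, whether it ends its life in the queue or on the freelist.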