You enable QoS.
You tune your queues.
You finally get latency under control.
Then you enable flow offload.
And suddenly… everything feels wrong again.
This is not a bug. It’s a design trade-off.
Flow offload exists for one reason:
to bypass the slow parts of the Linux networking stack.
Instead of processing every packet through the full path (conntrack lookup, firewall rules, routing decisions, qdiscs),
it caches the flow and fast-paths future packets.
This dramatically reduces CPU usage.
And on small routers, that matters.
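The caching idea can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not the kernel's actual flow-table code; all names here are made up:

```python
# Hypothetical sketch of the fast-path idea: the first packet of a flow
# pays the full per-packet cost; later packets hit the flow cache.

flow_cache = {}  # 5-tuple -> cached forwarding decision


def slow_path(pkt):
    # Stand-in for the expensive work: conntrack lookup, firewall
    # rules, routing decision, qdisc enqueue...
    return {"out_dev": "eth1", "next_hop": "192.0.2.1"}


def forward(pkt):
    key = (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])
    if key in flow_cache:
        return flow_cache[key]      # fast path: skip the full stack
    decision = slow_path(pkt)       # slow path: full per-packet processing
    flow_cache[key] = decision      # "offload" the flow for next time
    return decision
```

Notice what the fast path skips: everything in `slow_path`, including any per-packet QoS decision.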
QoS in Linux is applied through qdiscs, tc filters, and packet marks.
All of that happens in the normal packet path.
Flow offload skips most of it.
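A typical software-QoS setup of the kind that flow offload bypasses looks like this. This is a sketch, assuming a WAN device named eth0 and a 100 Mbit link:

```shell
# Attach CAKE as the root qdisc on the WAN device (device name and
# bandwidth are assumptions -- adjust for your link).
tc qdisc replace dev eth0 root cake bandwidth 100Mbit diffserv4

# Every forwarded packet must pass through this qdisc --
# unless its flow has been offloaded around it.
tc -s qdisc show dev eth0
```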
So when a flow is offloaded:
QoS decisions are no longer applied per packet.
You might see shaping limits ignored, latency creeping back under load, and per-flow fairness quietly disappearing.
Because once a flow is offloaded:
it is treated as “just packets” again.
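You can see this directly, assuming the conntrack-tools package is installed: offloaded connections are flagged in the connection-tracking table:

```shell
# Offloaded connections carry the [OFFLOAD] flag in conntrack output.
conntrack -L | grep OFFLOAD
```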
At first glance, it seems like a bug.
Why not just apply QoS in the fast path?
Because the fast path exists precisely to skip per-packet work, and per-packet QoS *is* per-packet work.
You can have cheap fast-path forwarding, or you can have per-packet control.
But not both at the same time.
This is where things get interesting.
Even with flow offload, DSCP marks in the packet headers are preserved.
So while software QoS is bypassed:
hardware QoS can still work.
If your system is aligned (DSCP → CoS → WMM), packets still land in the right hardware queues.
This is why alignment matters more than complex qdiscs.
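That alignment can be made concrete with a small sketch. CoS here is taken from the top three bits of the DSCP, and the user-priority-to-access-category table is the standard 802.11 one; note that specific drivers may use a different (e.g. RFC 8325 style) mapping:

```python
def dscp_to_cos(dscp: int) -> int:
    """802.1p CoS / user priority from the top 3 bits of the DSCP field."""
    return dscp >> 3


# Standard 802.11 mapping from user priority to WMM access category.
UP_TO_AC = {
    1: "AC_BK", 2: "AC_BK",   # background
    0: "AC_BE", 3: "AC_BE",   # best effort
    4: "AC_VI", 5: "AC_VI",   # video
    6: "AC_VO", 7: "AC_VO",   # voice
}


def dscp_to_wmm(dscp: int) -> str:
    return UP_TO_AC[dscp_to_cos(dscp)]
```

Under this default mapping, EF (DSCP 46) lands in the video queue and CS6 (DSCP 48) in the voice queue, which is exactly why checking your end-to-end alignment matters.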
Flow offload is a good choice when CPU is the bottleneck, traffic is mostly bulk flows, and you lean on hardware queues rather than software shaping.
In other words:
most real-world SoHo routers.
You should avoid flow offload when you depend on SQM-style shaping, per-flow fairness, or accurate bandwidth limits.
Because in those cases:
software control matters more than raw speed.
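On OpenWrt-style firmwares the toggle lives in the firewall defaults. A sketch of /etc/config/firewall follows; whether RouterWRT exposes the same UCI options is an assumption:

```
config defaults
    # Software flow offload; the second option enables hardware
    # offload where the SoC supports it. Set to '0' if you need SQM.
    option flow_offloading '1'
    option flow_offloading_hw '1'
```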
Instead of trying to make SQM and flow offload work together, RouterWRT takes a different approach: keep flows offloaded, and rely on DSCP alignment and hardware queues rather than software shaping.
The goal is not perfect fairness.
It is predictable performance at low CPU cost.
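A minimal sketch of that alignment-first idea in nftables: mark latency-sensitive traffic with a DSCP value so the hardware queues downstream can act on it. The port here is a hypothetical example:

```
table inet mangle {
    chain forward {
        type filter hook forward priority mangle; policy accept;
        # Hypothetical example: mark SIP traffic as Expedited Forwarding.
        udp dport 5060 ip dscp set ef
    }
}
```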
Flow offload doesn’t break QoS by accident.
It replaces it.
It trades per-packet control for efficiency.
And on small devices, that trade-off often makes sense.
The key is understanding the difference between per-packet software control and class-based hardware prioritization.
Once you accept that, the system becomes much easier to reason about.