Due to its development history, there are currently two sets of interfaces for creating workqueues: the older create_workqueue() family and the newer alloc_workqueue(). The latter is a superset of the former.
In weeks 1&2, I was involved in removing instances of the deprecated create_workqueue() and converting them to alloc_workqueue() invocations or to uses of system_wq. Each case had to be examined to find out why a specific kind of workqueue was used, and then converted to an alloc_workqueue() or system_wq invocation that matched the requirements.
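A typical conversion looks roughly like the sketch below. The queue name "mydrv" and the work item are made up for illustration; the exact flags chosen in a real conversion depend on the per-case analysis described above.

```c
/* Before: the deprecated interface. */
struct workqueue_struct *wq = create_workqueue("mydrv");

/* After: an equivalent alloc_workqueue() call.  create_workqueue()
 * historically implied WQ_MEM_RECLAIM, so a faithful conversion keeps
 * that flag unless analysis shows the work items are not involved in
 * memory reclaim. */
struct workqueue_struct *wq = alloc_workqueue("mydrv", WQ_MEM_RECLAIM, 0);

/* Or, when the work items need no special attributes and no dedicated
 * flush domain, drop the private workqueue entirely and queue on the
 * shared system workqueue: */
schedule_work(&my_work);	/* runs my_work on system_wq */
```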
The key points that I learned during these conversions were:
- If the work items hosted by a workqueue are dependent on memory reclaim, they need to stay on a dedicated workqueue with WQ_MEM_RECLAIM set. This guarantees forward progress of at least one work item at any given time. It also means that if a work item which is dependent on memory reclaim depends on another such work item, the two need to be put on separate workqueues, each with WQ_MEM_RECLAIM.
- If the work items require other specific attributes (WQ_HIGHPRI, WQ_CPU_INTENSIVE, and so on), alloc_workqueue() can be used with the appropriate flags.
- If the work items need to be flushed as a whole, a dedicated workqueue becomes necessary. E.g., if work items are created dynamically and freed on execution but still need to be flushed together, they need a dedicated workqueue to serve as the flush domain.
- If the work items are created dynamically and can be numerous, they need to be queued on a separate workqueue with a reasonable concurrency limit (max_active). The limit applies per CPU for bound workqueues, and per NUMA node for unbound ones.
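The cases above map onto alloc_workqueue() calls roughly as follows. This is a sketch; all of the "mydrv_*" queue names are hypothetical, and a real driver would of course check the return values for NULL.

```c
/* 1) Work involved in memory reclaim gets its own rescuer-backed queue: */
reclaim_wq = alloc_workqueue("mydrv_reclaim", WQ_MEM_RECLAIM, 0);

/* 2) Other attributes are expressed as flags, e.g. high priority: */
hipri_wq = alloc_workqueue("mydrv_hipri", WQ_HIGHPRI, 0);

/* 3) A dedicated queue acts as a flush domain for its items: */
flush_workqueue(mydrv_wq);	/* waits for every item queued on mydrv_wq */

/* 4) Numerous dynamically created items are throttled via max_active
 * (the third argument), here capped at 16 in-flight items: */
limited_wq = alloc_workqueue("mydrv_many", 0, 16);
```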
I have converted all occurrences of create_workqueue() in the kernel. It was a great experience to work with over 28 drivers in my first 2 weeks… 😀