workqueue/hotplug: simplify workqueue_offline_cpu()
Since the recent cpu/hotplug refactoring, workqueue_offline_cpu() is
guaranteed to run on the local CPU that is going offline.
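
For context on the local-CPU guarantee: workqueue_offline_cpu() is the
teardown callback of an application-processor (CPUHP_AP_*) hotplug state,
and AP callbacks are invoked on the hotplugged CPU itself. A minimal sketch
of the relevant entry, approximating the AP state table in kernel/cpu.c of
this era (exact array name and layout may differ by kernel version):

        /* AP states run on the CPU being brought up or torn down, so
         * this teardown callback executes on the outgoing CPU. */
        [CPUHP_AP_WORKQUEUE_ONLINE] = {
                .name                   = "workqueue:online",
                .startup.single         = workqueue_online_cpu,
                .teardown.single        = workqueue_offline_cpu,
        },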

This also fixes the following deadlock by removing work item
scheduling and flushing from the CPU hotplug path:

 http://lkml.kernel.org/r/1504764252-29091-1-git-send-email-prsood@codeaurora.org
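
The deadlock risk came from flushing a work item while holding the hotplug
lock: flush_work() in the teardown path makes cpu_hotplug_lock wait on the
per-cpu worker pool, and any work item on that pool whose locks lead back to
cpu_hotplug_lock closes a cycle. A schematic of the shape only, not the exact
chain from the linked report ("lock L" is a hypothetical placeholder):

        /*
         * Hotplug path (cpu_hotplug_lock held):
         *         flush_work(&unbind_work);       waits on the highpri pool
         * Work item on the same pool:
         *         takes lock L
         * Third party:
         *         holds L, blocks on cpu_hotplug_lock  ->  cycle
         *
         * Calling unbind_workers(cpu) directly removes the first edge,
         * so the cycle can no longer form.
         */
        queue_work_on(cpu, system_highpri_wq, &unbind_work);    /* removed */
        flush_work(&unbind_work);                               /* removed */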

tj: Description update.

Signed-off-by: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
laijs authored and htejun committed Dec 4, 2017
commit e8b3f8d, parent c98a980
1 changed file: kernel/workqueue.c (6 additions, 9 deletions)
@@ -1635,7 +1635,7 @@ static void worker_enter_idle(struct worker *worker)
         mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);
 
         /*
-         * Sanity check nr_running. Because wq_unbind_fn() releases
+         * Sanity check nr_running. Because unbind_workers() releases
          * pool->lock between setting %WORKER_UNBOUND and zapping
          * nr_running, the warning may trigger spuriously. Check iff
          * unbind is not in progress.
@@ -4511,9 +4511,8 @@ void show_workqueue_state(void)
  * cpu comes back online.
  */
 
-static void wq_unbind_fn(struct work_struct *work)
+static void unbind_workers(int cpu)
 {
-        int cpu = smp_processor_id();
         struct worker_pool *pool;
         struct worker *worker;
 
@@ -4710,22 +4709,20 @@ int workqueue_online_cpu(unsigned int cpu)
 
 int workqueue_offline_cpu(unsigned int cpu)
 {
-        struct work_struct unbind_work;
         struct workqueue_struct *wq;
 
         /* unbinding per-cpu workers should happen on the local CPU */
-        INIT_WORK_ONSTACK(&unbind_work, wq_unbind_fn);
-        queue_work_on(cpu, system_highpri_wq, &unbind_work);
+        if (WARN_ON(cpu != smp_processor_id()))
+                return -1;
+
+        unbind_workers(cpu);
 
         /* update NUMA affinity of unbound workqueues */
         mutex_lock(&wq_pool_mutex);
         list_for_each_entry(wq, &workqueues, list)
                 wq_update_unbound_numa(wq, cpu, false);
         mutex_unlock(&wq_pool_mutex);
 
-        /* wait for per-cpu unbinding to finish */
-        flush_work(&unbind_work);
-        destroy_work_on_stack(&unbind_work);
         return 0;
 }
 
