author    Vincent Guittot <vincent.guittot@linaro.org>    2023-01-13 14:36:13 +0100
committer Qais Yousef <qyousef@google.com>                2024-02-23 13:20:37 +0000
commit    35fe68699a93e739dcb87fc4fbe632a5a49c1d94 (patch)
tree      d60ec500c539e6ffd3aad3f66a04ad22f5f71cca
parent    c16060052ef4546688dbe0f4ad8d4b76a32f706c (diff)
In the presence of a lot of small-weight tasks, such as sched_idle tasks, normal or high-weight tasks can see their ideal runtime (sched_slice) increase to hundreds of ms, whereas it normally stays below sysctl_sched_latency.

2 normal tasks running on a CPU will have a max sched_slice of 12ms (half of the sched_period). This means that they will make progress every sysctl_sched_latency period.

If we now add 1000 idle tasks on the CPU, the sched_period becomes 3006 ms and the ideal runtime of the normal tasks becomes 609 ms. It will even become 1500 ms if the idle tasks belong to an idle cgroup. This means that the scheduler will only look to pick another waiting task after 609 ms (respectively 1500 ms) of running time. The idle tasks significantly change the way the 2 normal tasks interleave their running time slots, whereas they should only have a small impact.

Such a long sched_slice can significantly delay the release of resources, as tasks can wait hundreds of ms before their next running slot just because of idle tasks queued on the rq.

Cap the ideal_runtime to sysctl_sched_latency to make sure that tasks will regularly make progress and will not be significantly impacted by idle/background tasks queued on the rq.

Bug: 326576184
Bug: 319514485
Bug: 315185352
Bug: 269111781
Change-Id: I27f956ee275d17ef708d8d27dc082c66ed5a5275
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Link: https://lore.kernel.org/r/20230113133613.257342-1-vincent.guittot@linaro.org
(cherry picked from commit 79ba1e607d68178db7d3fe4f6a4aa38f06805e7b)
Signed-off-by: Qais Yousef <qyousef@google.com>
(cherry picked from commit e32aeb03b9c6b1b625ff0248b6d5670aa74e783b)
(cherry picked from commit 9520a47b21457e4411c58e364f71c7428f9a3896)
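For illustration, the arithmetic quoted in the message above can be reproduced outside the kernel. The sketch below is a user-space approximation of __sched_period() and sched_slice(), assuming the common 8-CPU scaling of the defaults (sysctl_sched_latency = 24 ms, sysctl_sched_min_granularity = 3 ms) and the kernel's load weights of 1024 for a nice-0 task and 3 for a SCHED_IDLE task. It is not the kernel implementation, which walks the cfs_rq hierarchy and uses fixed-point weight arithmetic.

#include <stdio.h>

#define SCHED_LATENCY_NS	24000000ULL	/* sysctl_sched_latency, 8-CPU scaling */
#define MIN_GRANULARITY_NS	3000000ULL	/* sysctl_sched_min_granularity */
#define NR_LATENCY		(SCHED_LATENCY_NS / MIN_GRANULARITY_NS)

/* __sched_period(): the period stretches once nr_running exceeds nr_latency */
static unsigned long long sched_period(unsigned long long nr_running)
{
	if (nr_running > NR_LATENCY)
		return nr_running * MIN_GRANULARITY_NS;
	return SCHED_LATENCY_NS;
}

/* sched_slice(): the entity's weighted share of the period */
static unsigned long long sched_slice(unsigned long long nr_running,
				      unsigned long long weight,
				      unsigned long long total_weight)
{
	return sched_period(nr_running) * weight / total_weight;
}

int main(void)
{
	/* 2 nice-0 tasks (weight 1024 each): 24 ms period, 12 ms slice */
	printf("2 normal tasks: slice = %llu ms\n",
	       sched_slice(2, 1024, 2 * 1024) / 1000000);

	/* + 1000 SCHED_IDLE tasks (weight 3 each): 3006 ms period, ~609 ms slice */
	printf("+1000 idle tasks: period = %llu ms, slice = %llu ms\n",
	       sched_period(1002) / 1000000,
	       sched_slice(1002, 1024, 2 * 1024 + 1000 * 3) / 1000000);
	return 0;
}

Under these assumptions the program prints a 12 ms slice for the 2-task case and a 3006 ms period with a ~609 ms slice once the 1000 idle tasks are queued, matching the figures in the commit message.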
-rw-r--r--	kernel/sched/fair.c	8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 35a436a8df5d..1ab5df1b1a72 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4681,7 +4681,13 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	s64 delta;
 	bool skip_preempt = false;
 
-	ideal_runtime = sched_slice(cfs_rq, curr);
+	/*
+	 * When many tasks blow up the sched_period; it is possible that
+	 * sched_slice() reports unusually large results (when many tasks are
+	 * very light for example). Therefore impose a maximum.
+	 */
+	ideal_runtime = min_t(u64, sched_slice(cfs_rq, curr), sysctl_sched_latency);
+
 	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
 	trace_android_rvh_check_preempt_tick(current, &ideal_runtime, &skip_preempt,
 					     delta_exec, cfs_rq, curr, sysctl_sched_min_granularity);
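With the cap applied, the 1002-task scenario above no longer inflates the preemption check interval: ideal_runtime drops from ~609 ms to the 24 ms sysctl_sched_latency bound. A minimal stand-alone illustration, reusing the figures from the commit message (min_t() is the kernel's type-checked min macro; a plain macro stands in for it here):

#include <stdio.h>

/* stand-in for the kernel's type-checked min_t() macro */
#define min_t(type, a, b)	((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	unsigned long long sysctl_sched_latency = 24000000ULL;	/* 24 ms, 8-CPU scaling */
	unsigned long long slice = 609775060ULL;		/* ~609 ms, 1002-task case */

	/* before the patch ideal_runtime was ~609 ms; the cap brings it to 24 ms */
	printf("ideal_runtime = %llu ms\n",
	       min_t(unsigned long long, slice, sysctl_sched_latency) / 1000000);
	return 0;
}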