| author | Suzanne Candanedo <suzanne.candanedo@arm.com> | 2023-05-09 17:01:56 +0100 |
|---|---|---|
| committer | Guus Sliepen <gsliepen@google.com> | 2023-05-23 19:25:05 +0000 |
| commit | c91dc341da12a38da522ed3e9b0a091ae6272fdc (patch) | |
| tree | 3b3ecb5c29aba0005f205302b81b54338fd570e0 /mali_kbase/mmu | |
| parent | c0e6373517c47561d75c3355991f2723f295979b (diff) | |
| download | gpu-c91dc341da12a38da522ed3e9b0a091ae6272fdc.tar.gz | |
[Official] MIDCET-4546, GPUCORE-37946: Synchronize GPU cache flush cmds with silent reset on GPU power up
Commands for GPU cache maintenance and TLB invalidation were sent after
acquiring 'hwaccess_lock' and checking that the 'gpu_powered' flag was set.
The combination of the lock and the flag ensured that GPU registers remained
accessible whilst the commands were in progress. If the flag was not set,
the GPU power up had not been performed and the commands were rightfully
skipped.
The 'gpu_powered' flag is set immediately after the Top-level power up
of the GPU is done by the platform-specific power_on_callback(), at which
point the registers can be safely accessed. If the callback returns 1, a
silent soft-reset of the GPU is performed after setting the flag.
This led to a race between the cache maintenance commands and the soft
reset of the GPU, due to which the commands either did not complete or
got lost, resulting in a timeout.
This commit replaces the 'gpu_powered' flag with the 'gpu_ready' flag,
as the latter is set only after the soft-reset is done and all the in-use
GPU address spaces have been enabled. It is okay to skip the commands
when the flag is false, as the L2 cache would be in the powered-down state.
The page migrate function is also updated to use the 'gpu_ready' flag, as
it was affected by a similar race with the silent reset in GPUCORE-35861,
where 'kbdev->pm.lock' had to be used.
Change-Id: I4cefe3add2863d7b29f111d437061031b66e7080
(cherry picked from commit e31494f5b7b9e9101aab4bd75fa4dc7d7f47b66a)
Provenance: https://code.ipdelivery.arm.com/c/GPU/mali-ddk/+/5284
Bug: 281540759
Diffstat (limited to 'mali_kbase/mmu')
| -rw-r--r-- | mali_kbase/mmu/mali_kbase_mmu.c | 6 |
1 file changed, 3 insertions, 3 deletions
diff --git a/mali_kbase/mmu/mali_kbase_mmu.c b/mali_kbase/mmu/mali_kbase_mmu.c
index 0707e33..2e3c251 100644
--- a/mali_kbase/mmu/mali_kbase_mmu.c
+++ b/mali_kbase/mmu/mali_kbase_mmu.c
@@ -146,7 +146,7 @@ static void mmu_invalidate(struct kbase_device *kbdev, struct kbase_context *kct
 	spin_lock_irqsave(&kbdev->hwaccess_lock, flags);
-	if (kbdev->pm.backend.gpu_powered && (!kctx || kctx->as_nr >= 0)) {
+	if (kbdev->pm.backend.gpu_ready && (!kctx || kctx->as_nr >= 0)) {
 		as_nr = kctx ? kctx->as_nr : as_nr;
 		err = kbase_mmu_hw_do_unlock(kbdev, &kbdev->as[as_nr], op_param);
 	}
@@ -173,7 +173,7 @@ static void mmu_flush_invalidate_as(struct kbase_device *kbdev, struct kbase_as
 	mutex_lock(&kbdev->mmu_hw_mutex);
 	spin_lock_irqsave(&kbdev->hwaccess_lock, flags);
-	if (kbdev->pm.backend.gpu_powered)
+	if (kbdev->pm.backend.gpu_ready)
 		err = kbase_mmu_hw_do_flush_locked(kbdev, as, op_param);
 	if (err) {
@@ -270,7 +270,7 @@ static void mmu_flush_invalidate_on_gpu_ctrl(struct kbase_device *kbdev, struct
 	mutex_lock(&kbdev->mmu_hw_mutex);
 	spin_lock_irqsave(&kbdev->hwaccess_lock, flags);
-	if (kbdev->pm.backend.gpu_powered && (!kctx || kctx->as_nr >= 0)) {
+	if (kbdev->pm.backend.gpu_ready && (!kctx || kctx->as_nr >= 0)) {
 		as_nr = kctx ? kctx->as_nr : as_nr;
 		err = kbase_mmu_hw_do_flush_on_gpu_ctrl(kbdev, &kbdev->as[as_nr], op_param);