author | Debarshi Dutta <debarshid@google.com> | 2023-03-27 17:09:43 +0000
---|---|---
committer | Debarshi Dutta <debarshid@google.com> | 2023-03-29 09:00:58 +0000
commit | a1a6abced856836855d73b0d7004ed1087733389 (patch) |
tree | b4899c2c28ae2bad6e600e5c48cc08aab9317981 | /mali_kbase
parent | 3486b00e408d5c8096dc8539c385b6a545085d29 (diff) |
download | gpu-a1a6abced856836855d73b0d7004ed1087733389.tar.gz |
Revert "mali_kbase: mem: Prevent vma splits"
In the original bug, protected memory imports via Base ignored the
actual size of the import returned by the kernel memory import
routines. When these imports were later freed, the incorrect size was
passed down, so only a sub-region of the originally mapped range was
unmapped, leaving the GPU and CPU VAs inconsistent in some cases.
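A minimal userspace sketch of that failure mode (illustrative only; the
real paths are the kbase import and free routines):

```c
/* Illustrative sketch, not driver code: freeing a mapping with a
 * smaller size than was actually mapped unmaps only a sub-region,
 * leaving the tail mapped -- the same shape of inconsistency the
 * commit message describes between GPU and CPU VAs.
 */
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* The import path actually mapped 4 pages... */
	char *p = mmap(NULL, 4 * page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* ...but the free path was handed 2 pages, so pages 2 and 3
	 * remain mapped even though the allocation is "freed".
	 */
	munmap(p, 2 * page);
	return 0;
}
```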
A WAR was temporarily added to prevent VMA splits until a fix was
provided for the protected memory size mismatch. With that fix in
place, the WAR is no longer necessary. Worse, the WAR now causes
failures when an application calls mprotect(restrictive) on memory
already allocated and mmapped via Vulkan API calls.
Vulkan alloc() invokes cmem_heap_alloc(), which in the general case
allocates some extra memory to satisfy worst-case alignment
requirements. As a result, invoking mprotect() on the partial
user-provided range always results in a VMA split.
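To illustrate the split (a minimal userspace sketch, not part of this
patch): restricting protections on a sub-range of a single mapping
forces the kernel to split the VMA, which is exactly the operation the
reverted vm_ops->split handler rejected with -EINVAL.

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* One 4-page anonymous mapping: a single VMA. */
	char *p = mmap(NULL, 4 * page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Restricting only the second page splits the VMA into three
	 * pieces so each range keeps its own protection bits. With the
	 * reverted WAR, the equivalent call on a kbase mapping failed
	 * here with EINVAL.
	 */
	if (mprotect(p + page, page, PROT_READ) != 0)
		perror("mprotect");

	munmap(p, 4 * page);
	return 0;
}
```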
For further reference, see this article:
https://lwn.net/Articles/182847/
Bug: 269535398
This reverts commit 6d1d889156e68493842f5bb18fc9aed74cc57454.
Change-Id: Ic5749fab2613d6495fd3669356697ff40bfafcb7
Diffstat (limited to 'mali_kbase')
-rw-r--r-- | mali_kbase/mali_kbase_mem_linux.c | 29 |
1 file changed, 0 insertions, 29 deletions
```diff
diff --git a/mali_kbase/mali_kbase_mem_linux.c b/mali_kbase/mali_kbase_mem_linux.c
index 000efc7..957b5da 100644
--- a/mali_kbase/mali_kbase_mem_linux.c
+++ b/mali_kbase/mali_kbase_mem_linux.c
@@ -2407,34 +2407,6 @@ static void kbase_cpu_vm_close(struct vm_area_struct *vma)
 	kfree(map);
 }
 
-static int kbase_cpu_vm_split(struct vm_area_struct *vma, unsigned long addr)
-{
-	struct kbase_cpu_mapping *map = vma->vm_private_data;
-
-	KBASE_DEBUG_ASSERT(map->kctx);
-	KBASE_DEBUG_ASSERT(map->count > 0);
-
-	/*
-	 * We should never have a map/munmap pairing on a kbase_context managed
-	 * vma such that the munmap only unmaps a portion of the vma range.
-	 * Should this arise, the kernel attempts to split the vma range to
-	 * ensure that it only unmaps the requested region. To achieve this it
-	 * attempts to split the containing vma, and when this split occurs,
-	 * this callback is reached. By returning -EINVAL here we inform the
-	 * kernel that such splits are not supported so that it instead unmaps
-	 * the entire region. Since this is indicative of a bug in the
-	 * map/munmap code in the driver, we raise a WARN here to indicate that
-	 * this invalid state has been reached.
-	 */
-	dev_warn(map->kctx->kbdev->dev,
-		 "%s: vma region split requested: addr=%lx map->count=%d reg=%p reg->start_pfn=%llx reg->nr_pages=%zu",
-		 __func__, addr, map->count, map->region, map->region->start_pfn,
-		 map->region->nr_pages);
-	WARN_ON_ONCE(1);
-
-	return -EINVAL;
-}
-
 static struct kbase_aliased *get_aliased_alloc(struct vm_area_struct *vma,
 					       struct kbase_va_region *reg,
 					       pgoff_t *start_off,
@@ -2543,7 +2515,6 @@ exit:
 const struct vm_operations_struct kbase_vm_ops = {
 	.open = kbase_cpu_vm_open,
 	.close = kbase_cpu_vm_close,
-	.split = kbase_cpu_vm_split,
 	.fault = kbase_cpu_vm_fault
 };
 
```
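For context, the veto this revert removes is exercised by the kernel
before any VMA split; a paraphrased sketch of the check in mm/mmap.c's
__split_vma() (not part of this patch):

```c
/* Paraphrased from mm/mmap.c: the kernel asks the owning driver for
 * permission before splitting a VMA. With .split removed by this
 * revert, the hook is absent, the split proceeds, and partial-range
 * mprotect()/munmap() on kbase mappings succeeds again.
 */
if (vma->vm_ops && vma->vm_ops->split) {
	err = vma->vm_ops->split(vma, addr);
	if (err)
		return err;
}
```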