Age  Commit message  Author
2013-01-24  staging: android: ashmem: Add support for 32bit ashmem calls in a 64bit kernel  [tag: juice-m1-20130123.0]  (Serban Constantinescu)
Android's shared memory subsystem, Ashmem, does not support calls from a 32-bit userspace in a 64-bit kernel. This patch adds support for syscalls coming from a 32-bit userspace in a 64-bit kernel. The patch has been successfully tested on ARMv8 AEM and Versatile Express V2P-CA9. Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
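The compat plumbing described here boils down to translating a 32-bit userspace view of an ioctl argument into the kernel's native 64-bit layout. A minimal user-space sketch of that idea (the struct names and fields are hypothetical, not the actual ashmem ABI):

```c
#include <stdint.h>

/* Hypothetical 32-bit userspace layout of an ioctl argument:
 * fixed-width fields, 8 bytes total. */
struct compat_region {
    uint32_t offset;
    uint32_t len;
};

/* Hypothetical native layout on a 64-bit kernel: fields that were
 * 'unsigned long'/'size_t' in the ABI widen to 64 bits (16 bytes total),
 * so raw bytes copied from a 32-bit caller cannot be used directly. */
struct native_region {
    uint64_t offset;
    uint64_t len;
};

/* A compat ioctl handler copies in the 32-bit struct and widens each
 * field before calling the native implementation. */
static struct native_region region_from_compat(struct compat_region c)
{
    struct native_region n = { .offset = c.offset, .len = c.len };
    return n;
}
```

In the kernel this translation step lives in a `compat_ioctl` handler; the sketch only shows why the two layouts cannot be aliased.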
2013-01-24  Revert "staging: android: ashmem: Add support for 32bit ashmem calls in a 64bit kernel"  (Bernhard Rosenkränzer)
This reverts commit fca1555ddc78a7801fde9e2d32ed86977287698b.
2013-01-23  Merge branch 'tracking-armdroid-ashmem' into merge-linux-linaro  (Andrey Konovalov)
2013-01-23  Merge branch 'tracking-perf-android' into merge-linux-linaro  (Andrey Konovalov)
2013-01-23  Merge branch 'tracking-ste-tb-ethernet' into merge-linux-linaro  (Andrey Konovalov)
2013-01-23  Merge branch 'tracking-samslt-all' into merge-linux-linaro  (Andrey Konovalov)
2013-01-23  staging: android: ashmem: Add support for 32bit ashmem calls in a 64bit kernel  (Serban Constantinescu)
Android's shared memory subsystem, Ashmem, does not support calls from a 32-bit userspace in a 64-bit kernel. This patch adds support for syscalls coming from a 32-bit userspace in a 64-bit kernel. The patch has been successfully tested on ARMv8 AEM and Versatile Express V2P-CA9. Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
2013-01-23  cpufreq: ARM_DT_BL_CPUFREQ should be enabled only when BIG_LITTLE is enabled  (Tushar Behera)
Currently ARM_DT_BL_CPUFREQ gets enabled by default, which causes a kernel oops on the Arndale board. Forcing this to only when BIG_LITTLE is enabled fixes the following issue:
Unable to handle kernel paging request at virtual address ffffffd0
pgd = c0004000
[ffffffd0] *pgd=6f7fe821, *pte=00000000, *ppte=00000000
Internal error: Oops: 17 [#2] PREEMPT SMP THUMB2
Modules linked in:
CPU: 0 Tainted: G D (3.8.0-rc4+ #2)
PC is at kthread_data+0xa/0x10
LR is at wq_worker_sleeping+0xf/0xa4
pc : [<c0035726>] lr : [<c0032273>] psr: a00000b3
sp : ef0adb98 ip : 00000001 fp : ef0a6080
r10: c1bb7140 r9 : c05cd140 r8 : ef0ac000
r7 : ef0adbb8 r6 : ef0a6080 r5 : 00000000 r4 : b0000000
r3 : 00000000 r2 : 00000000 r1 : 00000000 r0 : ef0a6080
Flags: NzCv IRQs off FIQs on Mode SVC_32 ISA Thumb Segment user
Control: 50c5387d Table: 6e4c806a DAC: 55555555
Signed-off-by: Tushar Behera <tushar.behera@linaro.org> Signed-off-by: Jon Medhurst <tixy@linaro.org>
2013-01-22  perf: Fix build inside the Android source tree  [tag: tracking-perf-android-ll-20130122.1]  (Bernhard Rosenkränzer)
Signed-off-by: Bernhard Rosenkränzer <Bernhard.Rosenkranzer@linaro.org>
2013-01-22  ARM: Fix compile error if CONFIG_ARM_UNWIND is not defined  [tags: tracking-samslt-all-ll-20130122.1, tracking-samslt-all-ll-20130122.0, samsung-lt-v3.8-rc4]  (Al Stone)
If CONFIG_ARM_UNWIND is not set for some reason, module_finalize() will not compile due to a missing declaration for 'err'. Moved the declaration outside of an #ifdef so that the code will compile regardless of the value of CONFIG_ARM_UNWIND. Signed-off-by: Al Stone <ahs3@redhat.com> Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
2013-01-22  CONFIG: ARNDALE: UBUNTU: Enable NUMA and HugeTLB support  (Tushar Behera)
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
2013-01-22  ARM: syscall: wire up sys_migrate_pages.  (Steve Capper)
For NUMA support, the sys_migrate_pages syscall is required by userspace. This patch allocates #379 for this syscall and plumbs it in. Signed-off-by: Steve Capper <steve.capper@arm.com> [tushar.behera@linaro.org: Change to #380, increase __NR_syscalls] Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
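Wiring up a syscall amounts to reserving a fixed slot in the architecture's call table and pointing it at the handler; unwired slots return -ENOSYS. A toy dispatch table illustrating the plumbing (slot numbers and handler names are illustrative; the real table lives in the arch syscall sources):

```c
#include <stdint.h>

#define ENOSYS 38
#define NR_SYSCALLS 381          /* illustrative __NR_syscalls after the bump */
#define NR_MIGRATE_PAGES 380     /* illustrative slot, per the commit note */

typedef long (*sys_call_t)(long, long);

static long sys_ni_syscall(long a, long b)
{
    (void)a; (void)b;
    return -ENOSYS;              /* unwired slots fail cleanly */
}

static long sys_migrate_pages_stub(long pid, long flags)
{
    (void)flags;
    return pid;                  /* stand-in for the real handler */
}

static sys_call_t sys_call_table[NR_SYSCALLS];

static void wire_up_syscalls(void)
{
    /* Every slot starts out unwired... */
    for (int i = 0; i < NR_SYSCALLS; i++)
        sys_call_table[i] = sys_ni_syscall;
    /* ...and wiring a syscall is just filling in its slot. */
    sys_call_table[NR_MIGRATE_PAGES] = sys_migrate_pages_stub;
}
```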
2013-01-22  ARM: mm: Add NUMA support.  (Steve Capper)
This patch adds support for NUMA (running on either discontiguous or sparse memory). At the moment, the number of nodes has to be specified on the command line. One can also, optionally, specify the memory size of each node (otherwise the memory range is split roughly equally between nodes). CPUs can be striped across nodes (cpu number modulo the number of nodes), or assigned to a node based on their topology_physical_package_id. So for instance on a TC2, the A7 cores can be grouped together in one node and the A15s grouped together in another node. Signed-off-by: Steve Capper <steve.capper@arm.com>
2013-01-22  ARM: mm: Add discontiguous memory support.  (Steve Capper)
This patch adds support for discontiguous memory, with a view to each discontiguous block being assigned to a NUMA node (in a future patch). Discontiguous memory should only be used to back NUMA on systems where sparse memory is not available. Signed-off-by: Steve Capper <steve.capper@arm.com>
2013-01-22  ARM: Consider memblocks in mem_init and show_mem.  (Steve Capper)
This is based on Michael Spang's patch [1]; and is my attempt at applying the feedback from Russell [2]. With discontiguous memory (a requirement for running NUMA on some systems), membanks may not necessarily be representable as contiguous blocks of struct page *s. This patch updates the page scanning code in mem_init and show_mem to consider pages in the intersection of membanks and memblocks instead. We can't consider memblocks alone since, under sparse memory configurations, contiguous physical membanks won't necessarily have a contiguous memory map (but may be merged into the same memblock). Only memory blocks in the "memory" region were considered, as the "reserved" region was found to always overlap "memory"; all the memory banks are added with memblock_add (which adds to "memory") and no instances were found where memory was added to "reserved" then removed from "memory". In mem_init we are running on one CPU, and I can't see the memblocks changing whilst being enumerated. In show_mem, we can be running on multiple CPUs; whilst the memblock manipulation functions are annotated as __init, this doesn't stop memblocks being manipulated during bootup. I can't see any place where memblocks are removed or merged other than driver initialisation (memblock_steal) or boot memory initialisation. One consequence of using memblocks in show_mem is that we are unable to define ARCH_DISCARD_MEMBLOCK. Any feedback would be welcome. [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2012-October/127104.html [2] http://lists.infradead.org/pipermail/linux-arm-kernel/2012-November/135455.html Signed-off-by: Steve Capper <steve.capper@arm.com>
2013-01-22  ARM: mm: Transparent huge page support for non-LPAE systems.  (Steve Capper)
Much of the required code for THP has been implemented in the earlier non-LPAE HugeTLB patch. One more domain bit is used (to store whether or not the THP is splitting). Some THP helper functions are defined; and we have to re-define pmd_page such that it distinguishes between page tables and sections. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com>
2013-01-22  ARM: mm: Transparent huge page support for LPAE systems.  (Catalin Marinas)
The patch adds support for THP (transparent huge pages) to LPAE systems. When this feature is enabled, the kernel tries to map anonymous pages as 2MB sections where possible. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [steve.capper@arm.com: symbolic constants used, value of PMD_SECT_SPLITTING adjusted, tlbflush.h included in pgtable.h] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com>
2013-01-22  ARM: mm: HugeTLB support for non-LPAE systems.  (Steve Capper)
Based on Bill Carson's HugeTLB patch, with the big difference being in the way PTEs are passed back to the memory manager. Rather than store a "Linux Huge PTE" separately; we make one up on the fly in huge_ptep_get. Also rather than consider 16M supersections, we focus solely on 2x1M sections. To construct a huge PTE on the fly we need additional information (such as the accessed flag and dirty bit) which we choose to store in the domain bits of the short section descriptor. In order to use these domain bits for storage, we need to make ourselves a client for all 16 domains and this is done in head.S. Storing extra information in the domain bits also makes it a lot easier to implement Transparent Huge Pages, and some of the code in pgtable-2level.h is arranged to facilitate THP support in a later patch. Non-LPAE HugeTLB pages are incompatible with the huge page migration code (enabled when CONFIG_MEMORY_FAILURE is selected) as that code dereferences PTEs directly, rather than calling huge_ptep_get and set_huge_pte_at. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com>
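The trick of stashing the accessed and dirty flags in the domain bits of a short section descriptor can be sketched with plain bit helpers. With the kernel acting as client for all 16 domains (as set up in head.S), those bits are free to carry software state; the exact bit positions below are illustrative, not the hardware layout:

```c
#include <stdint.h>

typedef uint32_t secdesc_t;      /* short-descriptor section entry */

/* Domain field of a short section descriptor; positions illustrative. */
#define SEC_DOM_SHIFT   5
#define HPTE_ACCESSED   ((secdesc_t)1 << SEC_DOM_SHIFT)
#define HPTE_DIRTY      ((secdesc_t)1 << (SEC_DOM_SHIFT + 1))

/* Helpers in the style of the huge-pte accessors: the "Linux PTE"
 * state is made up on the fly from these spare descriptor bits. */
static secdesc_t huge_pte_mkyoung(secdesc_t d) { return d | HPTE_ACCESSED; }
static secdesc_t huge_pte_mkdirty(secdesc_t d) { return d | HPTE_DIRTY; }
static int huge_pte_young(secdesc_t d) { return !!(d & HPTE_ACCESSED); }
static int huge_pte_dirty(secdesc_t d) { return !!(d & HPTE_DIRTY); }
```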
2013-01-22  ARM: mm: HugeTLB support for LPAE systems.  (Catalin Marinas)
This patch adds support for hugetlbfs based on the x86 implementation. It allows mapping of 2MB sections (see Documentation/vm/hugetlbpage.txt for usage). The 64K pages configuration is not supported (section size is 512MB in this case). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [steve.capper@arm.com: symbolic constants replace numbers in places. Split up into multiple files, to simplify future non-LPAE support]. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com>
2013-01-22  ARM: mm: Add support for flushing HugeTLB pages.  (Steve Capper)
On ARM we use the __flush_dcache_page function to flush the dcache of pages when needed; usually when the PG_dcache_clean bit is unset and we are setting a PTE. A HugeTLB page is represented as a compound page consisting of an array of pages. Thus to flush the dcache of a HugeTLB page, one must flush more than a single page. This patch modifies __flush_dcache_page such that all constituent pages of a HugeTLB page are flushed. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com>
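The change is conceptually a loop over the constituent pages of the compound page. A sketch with a stubbed-out per-page flush, where page-frame numbers stand in for struct page pointers:

```c
/* Stub for the per-page dcache flush; records which pfns were flushed. */
#define DEMO_NPAGES 64
static int flushed[DEMO_NPAGES];

static void flush_dcache_one(unsigned long pfn)
{
    flushed[pfn] = 1;
}

/* A HugeTLB page is a compound page of 1 << order base pages, so the
 * whole range must be flushed, not just the head page. */
static void flush_dcache_compound(unsigned long head_pfn, unsigned int order)
{
    unsigned long i, nr = 1UL << order;

    for (i = 0; i < nr; i++)
        flush_dcache_one(head_pfn + i);
}
```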
2013-01-22  ARM: mm: correct pte_same behaviour for LPAE.  (Steve Capper)
For 3 levels of paging the PTE_EXT_NG bit will be set for user address ptes that are written to a page table but not for ptes created with mk_pte. This can cause some comparison tests made by pte_same to fail spuriously and lead to other problems. To correct this behaviour, we mask off PTE_EXT_NG for any pte that is present before running the comparison. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com>
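The fix can be illustrated as a masked comparison: clear PTE_EXT_NG from any present pte before comparing, so a table-resident pte and a freshly built mk_pte value compare equal. A sketch with illustrative bit positions:

```c
#include <stdint.h>

typedef uint64_t pte_t;                  /* LPAE ptes are 64-bit */

#define L_PTE_PRESENT ((pte_t)1 << 0)
#define PTE_EXT_NG    ((pte_t)1 << 11)   /* illustrative position */

/* Mask off the not-global bit for present ptes: the table-writing path
 * sets it for user mappings, but mk_pte does not. */
static pte_t pte_comparable(pte_t pte)
{
    if (pte & L_PTE_PRESENT)
        pte &= ~PTE_EXT_NG;
    return pte;
}

static int pte_same_fixed(pte_t a, pte_t b)
{
    return pte_comparable(a) == pte_comparable(b);
}
```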
2013-01-22  ARM: OF: update coherent_dma_mask  (Subash Patel)
This patch is tested in ARM:exynos5250 with LPAE enabled. The coherent_dma_mask needs to be defined to DMA_BIT_MASK(64) as dma-mapping API's check it against 64-bit mask. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: exynos: add coherent dma mask  (Subash Patel)
This patch adds the coherent_dma_mask to the usb/dwc3 node. This is needed as a check is performed before allocating any coherent buffer in the dma-mapping framework. Note: Find a better place to add this change. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: exynos: add coherent dma mask  (Subash Patel)
This patch adds the coherent_dma_mask for the dw_mci_pltfm node. This is needed as the check is done during the coherent buffer allocation. Note: Find a better place to add this. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: exynos: add coherent_dma_mask  (Subash Patel)
This patch adds the coherent_dma_mask for the dw_mmc device. This is needed as a check is now done in the dma-mapping framework before allocating the buffers. Note: Find a better place to add this. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: exynos: update coherent dma mask  (Subash Patel)
This patch updates the coherent_dma_mask for dev-ohci. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: exynos: update coherent dma mask  (Subash Patel)
This patch updates the coherent dma mask for dev-ehci. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: exynos: update coherent dma mask  (Subash Patel)
This patch updates the coherent_dma_mask for dev-ohci. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: exynos: update dma_bit_mask to 64-bits  (Subash Patel)
This patch changes the dma_mask and coherent_dma_mask to a 64-bit mask value for dev-ahci. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  NET: eth: ax88796: fixup for LPAE  (Subash Patel)
This patch adds a condition for variables declared of type resource_size_t. When LPAE is enabled, these will be 64-bit, but the linker will throw an error for missing __aeabi_uldivmod support in lib1funcs.s. This patch may be safely reverted when this is added. Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: LPAE: accommodate >32-bit addresses for page table base  (Subash Patel)
This patch redefines the early boot time use of the R4 register to steal a few low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to 38-bit physical addresses. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Hand-edited as this patch in eml format doesn't apply due to missing blob data for arch/arm/include/asm/memory.h Signed-off-by: Subash Patel <subash.rp@samsung.com>
2013-01-22  ARM: mm: clean up membank size limit checks  (Cyril Chemparathy)
This patch cleans up the highmem sanity check code by simplifying the range checks with a pre-calculated size_limit. This patch should otherwise have no functional impact on behavior. This patch also removes a redundant (bank->start < vmalloc_limit) check, since this is already covered by the !highmem condition. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
2013-01-22  ARM: mm: cleanup checks for membank overlap with vmalloc area  (Cyril Chemparathy)
On Keystone platforms, physical memory is entirely outside the 32-bit addressable range. Therefore, the (bank->start > ULONG_MAX) check below marks the entire system memory as highmem, and this causes unpleasantness all over. This patch eliminates the extra bank start check (against ULONG_MAX) by checking bank->start against the physical address corresponding to vmalloc_min instead. In the process, this patch also cleans up parts of the highmem sanity check code by removing what has now become a redundant check for banks that entirely overlap with the vmalloc range. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Acked-by: Nicolas Pitre <nico@linaro.org>
2013-01-22  ARM: mm: use physical addresses in highmem sanity checks  (Cyril Chemparathy)
This patch modifies the highmem sanity checking code to use physical addresses instead. This change eliminates the wrap-around problems associated with the original virtual address based checks, and this simplifies the code a bit. The one constraint imposed here is that low physical memory must be mapped in a monotonically increasing fashion if there are multiple banks of memory, i.e., x < y must imply pa(x) < pa(y). Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Acked-by: Nicolas Pitre <nico@linaro.org>
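The stated constraint, x < y implies pa(x) < pa(y) for lowmem, is easy to express as a check over the registered banks. A sketch (the real code keeps banks in a meminfo array; the struct below is illustrative):

```c
#include <stdint.h>

struct demo_bank {
    uint64_t virt_start;
    uint64_t phys_start;
    uint64_t size;
};

/* Returns 1 if banks, sorted by virtual start, also increase in
 * physical address without overlap -- the monotonic-mapping constraint
 * the commit imposes on lowmem. */
static int mapping_is_monotonic(const struct demo_bank *b, int n)
{
    for (int i = 1; i < n; i++) {
        if (b[i].virt_start < b[i - 1].virt_start + b[i - 1].size)
            return 0;
        if (b[i].phys_start < b[i - 1].phys_start + b[i - 1].size)
            return 0;
    }
    return 1;
}
```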
2013-01-22  ARM: LPAE: factor out T1SZ and TTBR1 computations  (Cyril Chemparathy)
This patch moves the TTBR1 offset calculation and the T1SZ calculation out of the TTB setup assembly code. This should not affect functionality in any way, but improves code readability as well as readability of subsequent patches in this series. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Acked-by: Nicolas Pitre <nico@linaro.org>
2013-01-22  ARM: LPAE: define ARCH_LOW_ADDRESS_LIMIT for bootmem  (Cyril Chemparathy)
This patch adds an architecture defined override for ARCH_LOW_ADDRESS_LIMIT. On PAE systems, the absence of this override causes bootmem to incorrectly limit itself to 32-bit addressable physical memory. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
2013-01-22  ARM: LPAE: use 64-bit accessors for TTBR registers  (Cyril Chemparathy)
This patch adds TTBR accessor macros, and modifies cpu_get_pgd() and the LPAE version of cpu_set_reserved_ttbr0() to use these instead. In the process, we also fix these functions to correctly handle cases where the physical address lies beyond the 4G limit of 32-bit addressing. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Acked-by: Nicolas Pitre <nico@linaro.org>
2013-01-22  ARM: LPAE: use phys_addr_t in switch_mm()  (Cyril Chemparathy)
This patch modifies the switch_mm() processor functions to use phys_addr_t. On LPAE systems, we now honor the upper 32-bits of the physical address that is being passed in, and program these into TTBR as expected. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
2013-01-22  ARM: LPAE: use phys_addr_t for initrd location and size  (Vitaly Andrianov)
This patch fixes the initrd setup code to use phys_addr_t instead of assuming 32-bit addressing. Without this we cannot boot on systems where initrd is located above the 4G physical address limit. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Signed-off-by: Cyril Chemparathy <cyril@ti.com> Acked-by: Nicolas Pitre <nico@linaro.org>
2013-01-22  ARM: LPAE: use phys_addr_t in free_memmap()  (Vitaly Andrianov)
The free_memmap() function was mistakenly using the unsigned long type to represent physical addresses. This breaks on PAE systems where memory could be placed above the 32-bit addressable limit. This patch fixes the function to properly use phys_addr_t instead. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Signed-off-by: Cyril Chemparathy <cyril@ti.com> Acked-by: Nicolas Pitre <nico@linaro.org>
2013-01-22  ARM: LPAE: use signed arithmetic for mask definitions  (Cyril Chemparathy)
This patch applies to PAGE_MASK, PMD_MASK, and PGDIR_MASK, where forcing unsigned long math truncates the mask at 32 bits. This clearly does bad things on PAE systems. This patch fixes the problem by defining these masks as signed quantities. We then rely on sign extension to do the right thing. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Reviewed-by: Nicolas Pitre <nico@linaro.org>
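The sign-extension effect is easy to demonstrate in plain C: widening an unsigned 32-bit mask zero-extends and clips an LPAE address to 32 bits, while widening the signed definition fills the upper bits. A sketch using demo names rather than the kernel's actual macros:

```c
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12

/* Unsigned 32-bit definition: widening to 64 bits zero-extends, so the
 * mask becomes 0x00000000fffff000 and discards high address bits. */
#define DEMO_PAGE_MASK_U32 (~(((uint32_t)1 << DEMO_PAGE_SHIFT) - 1))

/* Signed definition (the fix): the value is -4096, which sign-extends
 * to 0xfffffffffffff000 in 64-bit arithmetic. */
#define DEMO_PAGE_MASK_S32 (~(((int32_t)1 << DEMO_PAGE_SHIFT) - 1))

static uint64_t page_align_u32(uint64_t addr)
{
    return addr & DEMO_PAGE_MASK_U32;   /* zero-extended: clips to 32 bits */
}

static uint64_t page_align_s32(uint64_t addr)
{
    return addr & DEMO_PAGE_MASK_S32;   /* sign-extended: keeps upper bits */
}
```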
2013-01-22  ARM: LPAE: support 64-bit virt_to_phys patching  (Cyril Chemparathy)
This patch adds support for 64-bit physical addresses in virt_to_phys() patching. This does not do real 64-bit add/sub, but instead patches in the upper 32-bits of the phys_offset directly into the output of virt_to_phys. There is no corresponding change on the phys_to_virt() side, because computations on the upper 32-bits would be discarded anyway. Signed-off-by: Cyril Chemparathy <cyril@ti.com>
2013-01-22  ARM: LPAE: use phys_addr_t on virt <--> phys conversion  (Cyril Chemparathy)
This patch fixes up the types used when converting back and forth between physical and virtual addresses. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Signed-off-by: Cyril Chemparathy <cyril@ti.com> Reviewed-by: Nicolas Pitre <nico@linaro.org>
2013-01-22  ARM: use late patch framework for phys-virt patching  (Cyril Chemparathy)
This patch replaces the original physical offset patching implementation with one that uses the newly added patching framework. Signed-off-by: Cyril Chemparathy <cyril@ti.com> [tushar.behera@linaro.org: export __pv_offset] Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
2013-01-22  ARM: add self test for runtime patch mechanism  (Cyril Chemparathy)
This patch adds basic sanity tests to ensure that the instruction patching results in valid instruction encodings. This is done by verifying the output of the patch process against a vector of assembler generated instructions at init time. Signed-off-by: Cyril Chemparathy <cyril@ti.com>
2013-01-22  ARM: add mechanism for late code patching  (Cyril Chemparathy)
The original phys_to_virt/virt_to_phys patching implementation relied on early patching prior to MMU initialization. On PAE systems running out of >4G address space, this would have entailed an additional round of patching after switching over to the high address space. The approach implemented here conceptually extends the original PHYS_OFFSET patching implementation with the introduction of "early" patch stubs. Early patch code is required to be functional out of the box, even before the patch is applied. This is implemented by inserting functional (but inefficient) load code into the .runtime.patch.code init section. Having functional code out of the box then allows us to defer the init time patch application until later in the init sequence. In addition to fitting better with our need for physical address-space switch-over, this implementation should be somewhat more extensible by virtue of its more readable (and hackable) C implementation. This should prove useful for other similar init time specialization needs, especially in light of our multi-platform kernel initiative. This code has been boot tested in both ARM and Thumb-2 modes on an ARMv7 (Cortex-A8) device. Note: the obtuse use of stringified symbols in patch_stub() and early_patch_stub() is intentional. Theoretically this should have been accomplished with formal operands passed into the asm block, but this requires the use of the 'c' modifier for instantiating the long (e.g. .long %c0). However, the 'c' modifier has been found to ICE certain versions of GCC, and therefore we resort to stringified symbols here. Signed-off-by: Cyril Chemparathy <cyril@ti.com> Reviewed-by: Nicolas Pitre <nico@linaro.org>
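The "functional out of the box, patched later" idea can be modelled with an indirection that starts at a slow generic routine and is swapped for a specialised one at init time. A user-space sketch of the concept only, not the actual instruction-patching machinery:

```c
#include <stdint.h>

static uint64_t pv_offset;       /* discovered at boot */

/* Early stub: correct before patching, but pays for a memory load of
 * pv_offset on every call -- the "functional but inefficient" path. */
static uint64_t v2p_early(uint64_t va)
{
    return va + pv_offset;
}

/* Stand-in for the patched fast path: in the kernel the offset is
 * rewritten directly into the instruction stream; here we model that
 * by caching it once patching has run. */
static uint64_t patched_offset;
static uint64_t v2p_patched(uint64_t va)
{
    return va + patched_offset;
}

/* Callers always go through the indirection, so they work both
 * before and after the init-time patch is applied. */
static uint64_t (*virt_to_phys_fn)(uint64_t) = v2p_early;

static void runtime_patch(void)
{
    patched_offset = pv_offset;        /* "bake in" the constant */
    virt_to_phys_fn = v2p_patched;     /* switch to the fast path */
}
```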
2013-01-22  CONFIG: ARNDALE: UBUNTU: Enable SATA  (Tushar Behera)
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
2013-01-22  linaro/configs: arndale: Enable SATA support  (Tushar Behera)
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
2013-01-22  ata: samsung: Add SATA PHY controller driver  (Vasanth Ananthan)
This patch adds a platform driver and an I2C client driver for the SATA PHY controller. Signed-off-by: Vasanth Ananthan <vasanth.a@samsung.com>
2013-01-22  ata: samsung: Add SATA controller driver  (Vasanth Ananthan)
This patch adds a platform driver for SATA controller. Signed-off-by: Vasanth Ananthan <vasanth.a@samsung.com>