path: root/drivers/rmnet
Age    Commit message    Author
2020-11-12Merge LA.UM.9.12.R2.10.00.00.685.039 via branch 'qcom-msm-4.19-7250' into android-msm-pixel-4.19lucaswei
Bug: 172988823 Signed-off-by: lucaswei <lucaswei@google.com> Change-Id: I61f8e65251aabbcd6ee7ed5cdfcdc8ffa861ae97
2020-07-16drivers: rmnet: shs: Remove unnecessary dereferenceSubash Abhinov Kasiviswanathan
Remove the double dereference used to get segs_per_skb. This should prevent a null dereference if node is invalid. Change-Id: I6f199457088c9f33d69192dd24360b95718db54d Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
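A minimal C sketch of the pattern this fix describes: read the inner pointer once, validate it, and only then use the cached value. The node/flow structures and field names below are illustrative stand-ins, not the driver's real layout.

    #include <linux/types.h>

    /* Illustrative structures only; not the rmnet_shs internals. */
    struct shs_flow {
            u8 segs_per_skb;
    };

    struct shs_node {
            struct shs_flow *flow;
    };

    static u8 shs_get_segs_per_skb(const struct shs_node *node)
    {
            struct shs_flow *flow;

            if (!node)
                    return 0;

            flow = node->flow;      /* dereference once, cache locally */
            if (!flow)              /* invalid node: no second dereference */
                    return 0;

            return flow->segs_per_skb;
    }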
2020-06-18rmnet_shs: avoid setting gso info for single segmentsSubash Abhinov Kasiviswanathan
Avoid setting the gso info when there is only one segment in an SKB. Change-Id: I666fac9500caef5fb9b82b7678df533de9213663 Acked-by: Ryan Chapman <rchapman@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-06-17rmnet_shs: set gso_type when partially segmenting SKBsSubash Abhinov Kasiviswanathan
Copy the gso_type into segmented SKBs to avoid warnings about packets that cannot be forwarded. Change-Id: I163b00233439edead2508f63766d3531053bd57b Acked-by: Ryan Chapman <rchapman@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
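A hedged sketch of what these two gso-related fixes describe, assuming a helper that fills in gso metadata on a child SKB produced by segmentation. The function name and parameters are illustrative, not rmnet_shs symbols; skb_shinfo() and the gso fields are the real kernel API.

    #include <linux/skbuff.h>

    /* Illustrative only: populate gso metadata on a segmented child SKB. */
    static void shs_set_gso_info(struct sk_buff *child, struct sk_buff *parent,
                                 unsigned int segs, unsigned int mss)
    {
            /* 2020-06-18 fix: single-segment SKBs need no gso metadata. */
            if (segs <= 1)
                    return;

            skb_shinfo(child)->gso_size = mss;
            skb_shinfo(child)->gso_segs = segs;
            /* 2020-06-17 fix: carry the parent's gso_type so the stack does
             * not warn that the segmented packet cannot be forwarded. */
            skb_shinfo(child)->gso_type = skb_shinfo(parent)->gso_type;
    }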
2020-06-10Merge LA.UM.9.12.R2.10.00.00.685.014 via branch 'qcom-msm-4.19-7250' into android-msm-pixel-4.19lucaswei
Bug: 158429902 Signed-off-by: lucaswei <lucaswei@google.com> Change-Id: I055cafe491df95918248801595c73e3de3cb37ad
2020-06-01drivers: rmnet_shs: Reset hstat node correctlySubash Abhinov Kasiviswanathan
Previously the hstat node was not being cleared correctly. This change resets the segmentation field so that a stale value is not used when the node is recycled. CRs-Fixed: 2699690 Change-Id: Ie9d6b5f64d2e94d8a8c3fb99fdcee1b13ae2ec6d Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-05-26rmnet_shs: Remove local_bh_disable in oom handlerSubash Abhinov Kasiviswanathan
The low memory handler in shs runs in atomic context, so do not disable bottom halves there. Change-Id: I0eae18f8876edddd964346fee5b6b39af952d6fa Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-05-25supporting modularized CONFIG_QCOM_QMI_POWER_COLLAPSEAaron Ding
Bug: 157100899 Change-Id: Ifbe5ec4edffe33a404a6a703bca17f9696da527f Signed-off-by: Aaron Ding <aaronding@google.com>
2020-05-20drivers: rmnet: shs: Fix Error reported in Static AnalysisChinmay Agarwal
Add a NULL check for "shs_proc_dir" in "rmnet_shs_wq_mem_init". Change-Id: I75296da8476ee52e4c49dc0e7f8d83ac7568782d Signed-off-by: Chinmay Agarwal <chinagar@codeaurora.org>
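A minimal sketch of the NULL check being added; proc_mkdir() is the real kernel API, but the directory name used here is a placeholder since the driver's actual proc path is not shown in this log.

    #include <linux/proc_fs.h>
    #include <linux/errno.h>

    static struct proc_dir_entry *shs_proc_dir;

    static int rmnet_shs_wq_mem_init_example(void)
    {
            shs_proc_dir = proc_mkdir("shs", NULL);  /* "shs" is a placeholder */
            if (!shs_proc_dir)
                    return -ENOMEM;                  /* fail cleanly, no deref */

            return 0;
    }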
2020-05-01Merge "drivers: rmnet: shs: Add oom handler"qctecmdr
2020-04-29drivers: rmnet: shs: Add oom handlerSubash Abhinov Kasiviswanathan
Drop RX packets when the out-of-memory reaper runs. Remove WQ_MEM_RECLAIM from rmnet_shs_wq. Change-Id: I4b9ff4762be272ca162beb9aa691db1c29467cbf Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
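A hedged sketch of the two pieces this commit describes: an OOM notifier hook (where, per the commit text, queued RX packets are dropped) and a workqueue allocated without WQ_MEM_RECLAIM. register_oom_notifier() and alloc_workqueue() are real kernel APIs; the handler body, the workqueue flags, and the function names are assumptions.

    #include <linux/oom.h>
    #include <linux/notifier.h>
    #include <linux/workqueue.h>
    #include <linux/errno.h>

    static struct workqueue_struct *shs_wq;

    static int shs_oom_notify(struct notifier_block *nb, unsigned long action,
                              void *data)
    {
            /* Per the commit text, RX packets held by shs are dropped here to
             * release memory when the OOM reaper runs (details not shown). */
            return NOTIFY_OK;
    }

    static struct notifier_block shs_oom_nb = {
            .notifier_call = shs_oom_notify,
    };

    static int shs_oom_setup(void)
    {
            /* No WQ_MEM_RECLAIM flag, matching the commit text. */
            shs_wq = alloc_workqueue("rmnet_shs_wq", WQ_UNBOUND, 1);
            if (!shs_wq)
                    return -ENOMEM;

            return register_oom_notifier(&shs_oom_nb);
    }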
2020-04-28Merge "rmnet_shs: Change file permissions"qctecmdr
2020-04-24drivers: rmnet: shs: add segmentation levels for slow start flowsSubash Abhinov Kasiviswanathan
Adds various levels of segmentation for flows in TCP slow start. Instead of segmentation reducing all packets to 1500 bytes, control how large packets are broken up by passing segs_per_skb, which indicates how many MTU-sized packets each newly segmented SKB should carry, i.e. segs_per_skb = 2 means up to 2*MTU can be passed in a segmented skb. Change-Id: I422a794f3b1d3f2e313ce8f89695a536984cd947 Acked-by: Ryan Chapman <rchapman@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-04-20rmnet_shs: Change file permissionsSubash Abhinov Kasiviswanathan
Remove the root user and group permissions from the proc files. CRs-Fixed: 2668115 Change-Id: Ib0b9502db4d52c20554e19762d72afd05c7b1532 Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
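A small sketch of the permission change, assuming the entries are created with proc_create(); a mode of 0444 leaves the files readable but removes the owner/group write bits. The entry name and the file_operations are placeholders, not the driver's real ones.

    #include <linux/fs.h>
    #include <linux/proc_fs.h>
    #include <linux/errno.h>

    static const struct file_operations shs_flow_fops;  /* placeholder fops */

    static int shs_proc_create_example(struct proc_dir_entry *parent)
    {
            struct proc_dir_entry *entry;

            /* 0444 instead of e.g. 0644: no root user/group write permission. */
            entry = proc_create("shs_flows", 0444, parent, &shs_flow_fops);

            return entry ? 0 : -ENOMEM;
    }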
2020-04-10Merge "drivers: rmnet: shs: Check backlog on all flushes"qctecmdr
2020-04-08drivers: rmnet: shs: Check backlog on all flushesSubash Abhinov Kasiviswanathan
Backlog checking will no longer occur only for cpus with segmented flows. Backlog NET_RX switching will now be checked on every silver CPU regardless of whether segmented flows are present. Change-Id: Ic6912e9c3ddd719cb9b0f5b13609ba7161d31b1f Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
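A hedged sketch of one way to read a CPU's NET_RX backlog depth from softnet_data, which is the quantity the backlog check above evaluates. The threshold, the silver-core mask, and how the result feeds the switching decision are driver details not shown in this log.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static u32 shs_cpu_backlog_depth(int cpu)
    {
            struct softnet_data *sd = &per_cpu(softnet_data, cpu);

            /* Packets waiting in the per-CPU backlog plus those being drained. */
            return skb_queue_len(&sd->input_pkt_queue) +
                   skb_queue_len(&sd->process_queue);
    }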
2020-04-08drivers: rmnet: shs: Reduce Max Backlog limitSubash Abhinov Kasiviswanathan
Some small OOO packets were still seen in extreme cases. Reducing the backlog limit threshold slightly. Change-Id: I9ccd09445d521e94879bef5cba2041702086e83d Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-04-06Merge "drivers: rmnet_perf: Avoid recursive spinlock in legacy mode"qctecmdr
2020-04-01drivers: rmnet_perf: Increase number of flow nodesSean Tranchetti
Allow up to 50 flows to be coalesced by software. Change-Id: I0c578f3c5b65b2826767c4bd7421b585f2125936 Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2020-04-01drivers: rmnet_perf: Avoid recursive spinlock in legacy modeSean Tranchetti
Commit 56901a4a6639 ("drivers: rmnet_perf: Take lock during DL marker handling") locks the DL marker handling to ensure synchronization. When rmnet_perf handles deaggregation of QMAP frames, this will result in attempting to take the lock recursively, as the lock will already be held by the deaggregation logic. Change-Id: I731574ed56e770193c9b094758d7f4119ef91781 Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
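One common way to avoid this kind of recursive locking, shown as a hedged sketch rather than the driver's actual fix: the caller tells the DL-marker path whether the rmnet_perf lock is already held. All names below are illustrative.

    #include <linux/spinlock.h>
    #include <linux/types.h>

    static DEFINE_SPINLOCK(perf_lock);

    static void perf_handle_dl_marker(bool lock_already_held)
    {
            unsigned long flags = 0;

            if (!lock_already_held)
                    spin_lock_irqsave(&perf_lock, flags);

            /* ... flush flow nodes and handle the DL marker ... */

            if (!lock_already_held)
                    spin_unlock_irqrestore(&perf_lock, flags);
    }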
2020-04-01drivers: rmnet_perf: Take lock during DL marker handlingSean Tranchetti
Since handling DL markers can result in flushing the various flow nodes, the rmnet_perf lock must be taken to ensure synchronization with the rest of the driver. During hotplug scenarios, a regular flush could be going on while a DL marker handling callback is invoked. In certain cases, the callback can proceed farther than it should, and send a second pointer to a previously flushed descriptor down the call chain. This phantom descriptor can cause various problems, but the most "common" case seen is a NULL dereference such as the following: rmnet_frag_deliver+0x110/0x730 rmnet_perf_core_send_desc+0x44/0x50 [rmnet_perf] rmnet_perf_opt_flush_single_flow_node+0x220/0x430 [rmnet_perf] rmnet_perf_opt_flush_all_flow_nodes+0x40/0x70 [rmnet_perf] rmnet_perf_core_handle_map_control_start+0x38/0x130 [rmnet_perf] rmnet_map_dl_hdr_notify_v2+0x3c/0x58 rmnet_frag_flow_command+0x104/0x120 rmnet_frag_ingress_handler+0x2c8/0x3c8 rmnet_rx_handler+0x188/0x238 Change-Id: I79cb626732358c827d6c9df4239c0c55821bd3a5 Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2020-03-18drivers: rmnet: shs: Snapshot of data.lnx.5.1Subash Abhinov Kasiviswanathan
Snapshot of shs driver on data.lnx.5.1 up to the following change id. drivers: rmnet: shs: Unrevert Deadlock fix I1307d82ffa12d0cc1115baa25a19df8ada924e89 Change-Id: I868f2fff8a90d1e99860803c994cee0f69af60b2 Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-03-11drivers: rmnet_shs: Add Max Backlog switch for TCPSubash Abhinov Kasiviswanathan
Change the instant rate timer to rely more on byte limit counts. Add a backlog limit to the loaded core metric that will only be active when a slow start flow is on the cpu. This type of NET_RX switch will also have a longer core switch timer. Change-Id: I414db0d10c1b72d54df25138bd8adf0902357847 Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-02-20Merge "drivers: shs: Add filter flow module param"qctecmdr
2020-02-18drivers: rmnet_shs: Fix invalid mask on vnd creationSubash Abhinov Kasiviswanathan
For a time after a vnd is created, the shs internal map_mask is updated to an invalid state. If this happens during data transfer, it could cause invalid cpu states. This fix will cause the map_mask to only take into account already initialized vnd rps values. Fixes the following- Unable to handle kernel write to read-only memory at virtual address ffffff99a0b5484c pc : rmnet_shs_flush_lock_table+0x264/0x688 [rmnet_shs] lr : rmnet_shs_flush_lock_table+0x238/0x688 [rmnet_shs] Call trace: rmnet_shs_flush_lock_table+0x264/0x688 [rmnet_shs] rmnet_shs_chain_to_skb_list+0x320/0x340 [rmnet_shs] rmnet_shs_assign+0x980/0x1290 [rmnet_shs] rmnet_deliver_skb+0x240/0x410 rmnet_frag_deliver+0x618/0x778 rmnet_perf_core_flush_curr_pkt+0x12c/0x148 [rmnet_perf] rmnet_perf_tcp_opt_ingress+0x88/0x268 [rmnet_perf] rmnet_perf_opt_ingress+0x348/0x398 [rmnet_perf] rmnet_perf_core_desc_entry+0x128/0x180 [rmnet_perf] rmnet_frag_ingress_handler+0x3a8/0x578 rmnet_rx_handler+0x230/0x400 __netif_receive_skb_core+0x518/0xd60 process_backlog+0x1d4/0x438 net_rx_action+0x124/0x5b8 __do_softirq+0x2f8/0x5d8 irq_exit+0xec/0x110 handle_IPI+0x1b8/0x2f8 gic_handle_irq+0x10c/0x1d0 el0_irq_naked+0x50/0x5c Change-Id: I4c10ebb83140eb14ee3b643d057e3de29dfa851b Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2020-02-10drivers: shs: Add filter flow module paramSubash Abhinov Kasiviswanathan
Add a module param of the currently running consistent flows. This module param will allow a filtered view in cases when shs goes into idle mode and multiple idle flows have been created. Change-Id: I6b9c9b18c30575d2b59dd76814f4b7b2a2953bc0 Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
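A minimal sketch of exposing such a value as a module parameter; module_param() is the real mechanism, while the variable name, type, and permissions shown here are assumptions.

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    /* Read-only view of the currently tracked consistent flows. */
    static unsigned int rmnet_shs_filter_flows;
    module_param(rmnet_shs_filter_flows, uint, 0444);
    MODULE_PARM_DESC(rmnet_shs_filter_flows,
                     "Currently running consistent flows (filtered view)");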
2020-01-10drivers: shs: Clear out pending core switch timers on cleanupSubash Abhinov Kasiviswanathan
In case there is a burst of data beyond a threshold, the core switch timer will try to schedule a worker once the timer expires. While the pending work is cleaned up on timer expiry, the pending timers themselves are not cleared up. As part of this change, the core switch module parameter is reset on start of cleanup to ensure no more timers are configured during the cleanup. Fixes the following- 399.705316: <6> Modules linked in: rmnet_perf(O) [last unloaded: rmnet_shs] 399.734305: <2> pstate: 20400085 (nzCv daIf +PAN -UAO) 399.739251: <2> pc : rb_insert_color+0x10/0x168 399.743555: <2> lr : timerqueue_add+0x88/0xc0 400.413555: <2> Call trace: 400.416084: <2> rb_insert_color+0x10/0x168 400.420042: <2> enqueue_hrtimer+0x198/0x1c0 400.424081: <2> __hrtimer_run_queues+0x4e8/0x5b0 400.428568: <2> hrtimer_interrupt+0x108/0x350 400.432793: <2> arch_timer_handler_virt+0x40/0x50 400.437373: <2> handle_percpu_devid_irq+0x1dc/0x428 400.442122: <2> __handle_domain_irq+0xa0/0xf8 400.446345: <2> gic_handle_irq+0x154/0x1d4 400.450298: <2> el1_irq+0xb4/0x130 CRs-fixed: 2594249 Change-Id: I6e4a1982ce4665340cb1a75d0ec17d1db3f286fc Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
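A hedged sketch of the cleanup ordering described above: stop arming new timers first (by clearing the parameter that gates core switching), then cancel any hrtimer that is still pending. hrtimer_cancel() is the real API; the variable names are placeholders.

    #include <linux/hrtimer.h>

    static struct hrtimer shs_core_switch_timer;   /* placeholder timer */
    static unsigned int shs_switch_enabled;        /* stands in for the module param */

    static void shs_timer_cleanup(void)
    {
            shs_switch_enabled = 0;                 /* no new timers from here on */
            hrtimer_cancel(&shs_core_switch_timer); /* also waits for a running
                                                     * callback to finish */
    }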
2019-11-29drivers: shs: Fix potential null dereference on page alloc failureSubash Abhinov Kasiviswanathan
Check before using page allocated for capabilities, gold flows and slow start flows. CRs-fixed: 2576578 Change-Id: I8f062004466447703c84912506af5963035c163c Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2019-11-29drivers: shs: Protect all file system operations using shs ep lockSubash Abhinov Kasiviswanathan
This adds synchronization between the file system operations and the updates which occur within the shs workqueue. Fixes the following when there are two instances of userspace handlers running and they are killed together- <6> Unable to handle kernel paging request at virtual address ffffffbfadadadb4 <2> pc : __free_pages+0x24/0xc0 <2> lr : free_pages+0x38/0x48 <2> Call trace: <2> __free_pages+0x24/0xc0 <2> free_pages+0x38/0x48 <2> rmnet_shs_release_caps+0x9c/0xb0 [rmnet_shs] <2> close_pdeo+0x94/0x120 <2> proc_reg_release+0x64/0x88 <2> __fput+0xdc/0x1d8 <2> ____fput+0x1c/0x28 <2> task_work_run+0x48/0xd0 <2> do_notify_resume+0x950/0x1160 <2> work_pending+0x8/0x14 CRs-fixed: 2576578 Change-Id: I67d1fc4d1f3c93d4497e988c2118c410091f0dd2 Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2019-11-28Merge "drivers: perf: Register/unregister on perf"qctecmdr
2019-11-27drivers: perf: Register/unregister on perfConner Huff
Before registering the perf structures, make sure they have not already been registered. Conversely, when deregistering perf, make sure it was registered in the first place. Signed-off-by: Conner Huff <chuff@codeaurora.org> Change-Id: I00b89f45780161615b25faebfa70e79dde530c2f
2019-11-27data-kernel: rmnet: shs: Fix Errors Reported during Static Analysis.Chinmay Agarwal
Set the weights used in calculating "avg_pps" when the flow is executing on the perf CPUs, in function "rmnet_shs_wq_get_flow_avg_pps". Change-Id: Ia50db34a348c068a9b1bf3171fced858ce0a62de Signed-off-by: Chinmay Agarwal <chinagar@codeaurora.org>
2019-11-22drivers: shs: limit size copied to cached flows array to avoid global var corruptionSubash Abhinov Kasiviswanathan
Add a limit to the number of flows copied into the gold flow and slow start flow arrays before memcpy to shared memory. Going out of bounds on the array write corrupted the global variables for the shared memory pointers. Fixes the following: [ 846.803490] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000081 [ 846.909206] Process kworker/4:1 (pid: 80, stack limit = 0xffffff800b670000) [ 846.916377] CPU: 4 PID: 80 Comm: kworker/4:1 Tainted: G S O 4.19.81+ #1 [ 846.930899] Workqueue: rmnet_shs_wq rmnet_shs_wq_process_wq [rmnet_shs] [ 846.942612] pc : rmnet_shs_wq_mem_update_cached_sorted_ss_flows+0x9c/0xf0 [rmnet_shs] [ 846.950657] lr : rmnet_shs_wq_eval_cpus_caps_and_flows+0x74/0x218 [rmnet_shs] Change-Id: Ifeee71e48fc61c4dd750eb061573beb88fcd2b7d Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
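A minimal sketch of the bounds clamp this fix describes, applied before copying into the shared-memory arrays; the entry type, the maximum, and the function name are placeholders for the driver's real layout.

    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <linux/types.h>

    #define SHS_MAX_CACHED_FLOWS 64                /* placeholder maximum */

    struct shs_flow_entry {                        /* placeholder entry */
            u32 hash;
            u64 rx_pps;
    };

    static void shs_copy_cached_flows(struct shs_flow_entry *dst,
                                      const struct shs_flow_entry *src,
                                      unsigned int num_flows)
    {
            unsigned int n = min_t(unsigned int, num_flows, SHS_MAX_CACHED_FLOWS);

            /* Never write past the fixed-size array backing the shared memory. */
            memcpy(dst, src, n * sizeof(*dst));
    }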
2019-11-22drivers: shs: fix deadlock caused between generic netlink and rtnl locksSubash Abhinov Kasiviswanathan
Fix for a deadlock seen when trying to register the shs generic netlink family inside of the callback context of rtnl_netlink. Instead of running in the notifier's context, the generic netlink registration is moved to initialization of the kernel module. Fixes the following lock contention scenario: [ 3302.102281] Call trace: [ 3302.102332] __switch_to+0x108/0x118 [ 3302.102357] __schedule+0x8fc/0xcd8 [ 3302.102368] schedule_preempt_disabled+0x7c/0xa8 [ 3302.102384] __mutex_lock+0x444/0x660 [ 3302.102392] __mutex_lock_slowpath+0x10/0x18 [ 3302.102399] mutex_lock+0x30/0x38 mutex_lock(&rtnl_mutex); [ 3302.102422] rtnl_lock+0x14/0x20 rtnl_lock [ 3302.102448] nl80211_pre_doit+0x28/0x1a0 [ 3302.102465] genl_rcv_msg+0x3a4/0x408 [ 3302.102473] netlink_rcv_skb+0xa8/0x120 [ 3302.102481] genl_rcv+0x30/0x48 down_read(&cb_lock); [ 3302.102487] netlink_unicast+0x1ec/0x290 [ 3302.102496] netlink_sendmsg+0x2ec/0x348 [ 3302.102609] Call trace: [ 3302.102615] __switch_to+0x108/0x118 [ 3302.102624] __schedule+0x8fc/0xcd8 [ 3302.102630] schedule+0x70/0x90 [ 3302.102638] rwsem_down_write_failed+0x2bc/0x3c8 [ 3302.102644] down_write+0x4c/0x50 [ 3302.102652] genl_register_family+0xb4/0x650 down_write(&cb_lock); [ 3302.102818] rmnet_shs_wq_genl_init+0x1c/0x38 [rmnet_shs] [ 3302.102847] rmnet_shs_wq_init+0x218/0x328 [rmnet_shs] [ 3302.102873] rmnet_shs_dev_notify_cb+0x378/0x3e0 [rmnet_shs] [ 3302.102892] raw_notifier_call_chain+0x3c/0x68 [ 3302.102909] register_netdevice+0x374/0x560 [ 3302.102934] rmnet_vnd_newlink+0x6c/0xe8 [ 3302.102942] rmnet_newlink+0x9c/0x198 [ 3302.102950] rtnl_newlink+0x648/0x7b0 [ 3302.102960] rtnetlink_rcv_msg+0x270/0x388 mutex_lock(&rtnl_mutex); Change-Id: Ib71de0cb4617477cab40a7f42154584765e30c2b Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
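A hedged sketch of the ordering fix: register the generic netlink family from module_init() instead of from the netdevice notifier, so genl_register_family() (which takes cb_lock) can never run while rtnl_mutex is held by the notifier chain. The family definition below is a placeholder, not the driver's actual family.

    #include <linux/module.h>
    #include <net/genetlink.h>

    static struct genl_family shs_genl_family = {
            .name    = "RMNET_SHS_EX",   /* placeholder family name */
            .version = 1,
            .maxattr = 0,
            .module  = THIS_MODULE,
    };

    static int __init shs_genl_example_init(void)
    {
            /* Runs in module init context: rtnl_mutex is not held here. */
            return genl_register_family(&shs_genl_family);
    }

    static void __exit shs_genl_example_exit(void)
    {
            genl_unregister_family(&shs_genl_family);
    }

    module_init(shs_genl_example_init);
    module_exit(shs_genl_example_exit);
    MODULE_LICENSE("GPL v2");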
2019-11-22drivers: shs: fix null check before freeing slow start listSubash Abhinov Kasiviswanathan
Checks the correct pointer for null before freeing the associated page for slow start shared memory. Fixes the following: [ 2982.239281] Unable to handle kernel paging request at virtual address ffffffbfadadadb4 [ 2982.239512] pc : __free_pages+0x24/0xc0 [ 2982.239515] lr : free_pages+0x38/0x48 [ 2982.240605] Call trace: [ 2982.240609] __free_pages+0x24/0xc0 [ 2982.240613] free_pages+0x38/0x48 [ 2982.240632] rmnet_shs_release_ss_flows+0x38/0x58 [rmnet_shs] Change-Id: I1c61b8c9c89905e94c24f6836eaf1d7f56566162 Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
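A minimal sketch of the corrected check: test the pointer that actually backs the slow-start page before handing it to free_pages(). The page variable is a placeholder; free_pages() is the real API.

    #include <linux/gfp.h>

    static unsigned long shs_ss_page;              /* from __get_free_page() */

    static void shs_release_ss_flows_example(void)
    {
            if (shs_ss_page) {                     /* check the right pointer */
                    free_pages(shs_ss_page, 0);
                    shs_ss_page = 0;
            }
    }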
2019-11-22Merge "drivers: rmnet_perf: Check for over pulling"qctecmdr
2019-11-20drivers: shs: Change allocation context of shs allocations within spin_lockSubash Abhinov Kasiviswanathan
The allocation of the shs memory for cpu and flow level stats is done in atomic context due to the invocation of spin_lock_irqsave, which disables preemption. Fixes the following- 230.251419: <6> sleeping function called from invalid context at mm/slab.h:422 230.277265: <6> in_atomic(): 1, irqs_disabled(): 128, pid: 62, name: kworker/6:0 230.277267: <2> INFO: lockdep is turned off. 230.284504: <2> irq event stamp: 90 230.284514: <2> hardirqs last enabled at (89): [<ffffff9ddee82594>] _raw_spin_unlock_irq+0x34/0x68 230.284517: <2> hardirqs last disabled at (90): [<ffffff9ddee7b5a8>] __schedule+0x138/0x1128 230.284524: <2> softirqs last enabled at (0): [<ffffff9ddd8b7f24>] copy_process+0x60c/0x1c28 230.284525: <2> softirqs last disabled at (0): [<0000000000000000>] (null) 230.284526: <4> Preemption disabled at: 230.284535: <2> [<ffffff9d7fa2b63c>] rmnet_shs_wq_process_wq+0x18c/0x350 [rmnet_shs] 230.288129: <6> ------------[ cut here ]------------ 230.292958: <6> at kernel/sched/core.c:6786! 230.305980: <6> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP 230.358297: <6> Process kworker/6:0 (pid: 62, stack limit = 0xffffff80083c0000) 230.365454: <6> CPU: 6 PID: 62 Comm: kworker/6:0 Tainted: G S O 4.19.81+ #1 230.379937: <6> Workqueue: rmnet_shs_wq rmnet_shs_wq_process_wq [rmnet_shs] 230.386741: <2> pc : ___might_sleep+0x204/0x208 230.401745: <2> lr : ___might_sleep+0x204/0x208 230.598414: <2> Call trace: 230.598419: <2> ___might_sleep+0x204/0x208 230.598420: <2> __might_sleep+0x50/0x88 230.598423: <2> kmem_cache_alloc_trace+0x74/0x420 230.598430: <2> rmnet_shs_wq_cpu_caps_list_add+0x64/0x118 [rmnet_shs] 230.598433: <2> rmnet_shs_wq_update_stats+0x4dc/0xea0 [rmnet_shs] 230.598435: <2> rmnet_shs_wq_process_wq+0x194/0x350 [rmnet_shs] 230.598438: <2> process_one_work+0x328/0x6b0 230.598439: <2> worker_thread+0x330/0x4d0 230.598441: <2> kthread+0x128/0x138 230.598443: <2> ret_from_fork+0x10/0x1c Also fixes structure padding of shared mem structs, which was causing memcpy overrun. CRs-Fixed: 2570479 Change-Id: Ia58b0bee544afb030353ad1d3cd45d8c16a94f75 Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
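A hedged sketch of the allocation-context change: inside a spin_lock_irqsave() region preemption is disabled, so allocations there must use GFP_ATOMIC rather than a sleeping GFP_KERNEL allocation. The list and structure names are placeholders for the cpu-caps bookkeeping mentioned in the trace.

    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/list.h>
    #include <linux/types.h>
    #include <linux/errno.h>

    static DEFINE_SPINLOCK(shs_caps_lock);

    struct shs_cpu_cap {                           /* placeholder */
            struct list_head list;
            u64 pps_capacity;
    };

    static int shs_caps_list_add(struct list_head *head, u64 pps_capacity)
    {
            struct shs_cpu_cap *cap;
            unsigned long flags;

            spin_lock_irqsave(&shs_caps_lock, flags);

            cap = kzalloc(sizeof(*cap), GFP_ATOMIC);  /* not GFP_KERNEL: atomic */
            if (!cap) {
                    spin_unlock_irqrestore(&shs_caps_lock, flags);
                    return -ENOMEM;
            }

            cap->pps_capacity = pps_capacity;
            list_add_tail(&cap->list, head);

            spin_unlock_irqrestore(&shs_caps_lock, flags);
            return 0;
    }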
2019-11-20drivers: rmnet_perf: Check for over pullingSean Tranchetti
Much like the core driver, we must check for the return value of rmnet_frag_pull() to ensure the descriptor is still valid before continuing to process it. Call Trace: __memcpy+0x68/0x180 rmnet_perf_core_send_desc+0x48/0x58 [rmnet_perf] rmnet_perf_opt_flush_single_flow_node+0x244/0x458 [rmnet_perf] rmnet_perf_tcp_opt_ingress+0x228/0x270 [rmnet_perf] rmnet_perf_opt_ingress+0x34c/0x3a0 [rmnet_perf] rmnet_perf_core_desc_entry+0x114/0x168 [rmnet_perf] Change-Id: I6665d7410bcdb6880af1ca20822328f81a2cc0ec Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
2019-11-14Merge "rmnet_shs: add userspace flow movement and slow start support"qctecmdr
2019-11-12drivers: rmnet_shs: Disable RPS for ICMP packetsSubash Abhinov Kasiviswanathan
Previously ICMP packets would be ignored by SHS, causing RPS to send them randomly to a CPU in the configured RPS mask. This could lead to ICMP packets being sent to the gold cluster in some cases. This results in worse power performance and higher latency, as the gold cluster might have to be woken up from power collapse in order to process a single packet. This change causes SHS to instead mark the ICMP packet as having a valid hash but actually leaves the hash as null. This is interpreted by RPS as an invalid CPU and causes the packet to be processed on the current CPU. From various experiments, rmnet to NW stack latency takes on average about ~0.8ms longer for inter-cluster ping processing, so queuing to the gold cluster is not advised. Additionally, queuing to a separate silver core in the silver cluster only increased average latency by about ~0.01ms. Change-Id: I631061890b1edb03d2e680b7f6d19f310d838ed1 Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
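A hedged sketch of the hash trick described above: mark the SKB's software hash as set while leaving the hash value at zero, which the RPS CPU-selection path treats as "no hash", so the packet stays on the current CPU. The helper name is illustrative and the real driver logic may differ; the skb fields are the real ones.

    #include <linux/skbuff.h>

    static void shs_keep_icmp_on_current_cpu(struct sk_buff *skb)
    {
            skb->hash = 0;        /* "null" hash value             */
            skb->sw_hash = 1;     /* ...but flagged as already set */
    }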
2019-10-23rmnet_shs: add userspace flow movement and slow start supportSubash Abhinov Kasiviswanathan
Adds shared memory and generic netlink communication channels for userspace to access cpu and flow related stats so that shsusr can manage how flows are balanced and how TCP flows ramp up. Change-Id: Ie565141149fd5fa58cb1d2b982907681f5c7fd7d Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2019-10-09drivers: rmnet_shs: Remove rmnet ep accessSubash Abhinov Kasiviswanathan
Rmnet driver allocates rmnet_endpoint, which rmnet_shs was using to keep track of the endpoints that it needed. However, the rmnet driver frees the memory before endpoint unregistration, so this leaves a potential race condition where the wq can run after freeing. The change is to instead use net_dev references we keep track of from net_dev_cb and no longer use the rmnet_endpoints allocated by the rmnet driver. Rmnet_shs was only using netdev references in rmnet_endpoint so no impact should be expected. This use-after-free would cause the following crash signature. <6> Unable to handle kernel paging request at virtual address 00005000 <6> Mem abort info: <6> Exception class = DABT (current EL), IL = 32 bits <6> SET = 0, FnV = 0 <6> EA = 0, S1PTW = 0 <6> FSC = 5 <6> Data abort info: <6> ISV = 0, ISS = 0x00000005 <6> CM = 0, WnR = 0 <6> user pgtable: 4k pages, 39-bit VAs, pgd = 0000000070b0b425 <6> Internal error: Oops: 96000005 [#1] PREEMPT SMP <6> Workqueue: rmnet_shs_wq rmnet_shs_wq_process_wq [rmnet_shs] <6> task: 00000000deaad59d task.stack: 00000000053e0949 <2> pc : rmnet_shs_wq_update_ep_rps_msk+0x3c/0xd8 [rmnet_shs] <2> lr : rmnet_shs_wq_update_ep_rps_msk+0x28/0xd8 [rmnet_shs] <2> Call trace: <2> rmnet_shs_wq_update_ep_rps_msk+0x3c/0xd8 [rmnet_shs] <2> rmnet_shs_wq_update_stats+0x98/0x928 [rmnet_shs] <2> rmnet_shs_wq_process_wq+0x10c/0x248 [rmnet_shs] <2> process_one_work+0x1f0/0x458 <2> worker_thread+0x2ec/0x450 <2> kthread+0x11c/0x130 <2> ret_from_fork+0x10/0x1c CRs-Fixed: 2541604 Change-Id: I7026f2564c463f4ca989af97572e2a8fe5652087 Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2019-10-07drivers: rmnet: perf: Safely free recycle buffersConner Huff
Only free recycled buffers which have successfully been allocated. In the event allocation fails, we deallocate all previously allocated buffers, and we disable the feature entirely. Also, handle other resources more carefully during module bring up and teardown. Make sure resources are null if they fail to allocate. Additionally, modify the way that handling of allocation failures on init works. Now if one feature fails it won't bring down the operation of another feature. Now core_meta has access to a flag which indicates if the callbacks were completed successfully. If they were not, it is set to indicate that we must flush at the end of every chain since we can't rely on bm/ps. [ 1026.970767] Unable to handle kernel paging request at virtual address ffffffbfaaaaaaa0 [ 1026.979098] Mem abort info: [ 1026.981973] Exception class = DABT (current EL), IL = 32 bits [ 1026.988147] SET = 0, FnV = 0 [ 1026.991687] EA = 0, S1PTW = 0 [ 1026.995578] FSC = 6 [ 1026.997993] Data abort info: [ 1027.001962] ISV = 0, ISS = 0x00000006 [ 1027.006283] CM = 0, WnR = 0 [ 1027.011503] swapper pgtable: 4k pages, 39-bit VAs, pgd = ffffff812120f000 [ 1027.018667] [ffffffbfaaaaaaa0] *pgd=00000001bcbfc003, *pud=00000001bcbfc003, *pmd=0000000000000000 [ 1027.028686] Internal error: Oops: 96000006 [#1] PREEMPT SMP [ 1027.048533] mhi 0001:01:00.0: enabling device (0000 - 0002) [ 1027.084511] CPU: 5 PID: 17777 Comm: modprobe Tainted: G S W O 4.14.83+ #1 [ 1027.101460] task: ffffffec3f430080 task.stack: ffffff8052338000 [ 1027.107541] pc : kfree+0xe8/0x62c [ 1027.110954] lr : rmnet_perf_config_notify_cb+0xf8/0x484 [rmnet_perf] [ 1027.328866] Call trace: [ 1027.331381] kfree+0xe8/0x62c [ 1027.334425] rmnet_perf_config_notify_cb+0xf8/0x484 [rmnet_perf] [ 1027.340592] unregister_netdevice_notifier+0xc0/0x114 [ 1027.345780] rmnet_perf_exit+0x40/0x60c [rmnet_perf] [ 1027.350870] SyS_delete_module+0x1b8/0x224 [ 1027.355078] el0_svc_naked+0x34/0x38 [ 1027.358746] Code: f2ffffe9 aa090109 d2dff7ea f2ffffea (f9400129) Change-Id: Ieb12697fe23b6e7de39ae352a5481e6ae454e126 Signed-off-by: Conner Huff <chuff@codeaurora.org>
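A hedged sketch of the rollback pattern this commit describes for the recycle buffers: free only what was successfully allocated, NULL the slots, and return an error so the caller can disable the feature. The array size and names are placeholders.

    #include <linux/slab.h>
    #include <linux/errno.h>

    #define SHS_NUM_RECYCLE_BUFS 32                /* placeholder count */

    static void *recycle_bufs[SHS_NUM_RECYCLE_BUFS];

    static int perf_alloc_recycle_bufs(size_t buf_len)
    {
            int i;

            for (i = 0; i < SHS_NUM_RECYCLE_BUFS; i++) {
                    recycle_bufs[i] = kzalloc(buf_len, GFP_KERNEL);
                    if (!recycle_bufs[i])
                            goto err;
            }
            return 0;

    err:
            /* Free only the buffers that were actually allocated. */
            while (--i >= 0) {
                    kfree(recycle_bufs[i]);
                    recycle_bufs[i] = NULL;
            }
            return -ENOMEM;     /* caller disables buffer recycling entirely */
    }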
2019-10-04drivers: rmnet_shs: Change init to Register eventsSubash Abhinov Kasiviswanathan
Previously we used NETDEV_GOING_DOWN and NETDEV_UP events to cover shs init/de-init scenarios. That did not cover rmnet driver going down prematurely if rmnet vnds were cleaned up in a non-SSR scenario. This change will allow shs to keep track of registered vnds and to de-init when the last vnd has been unregistered to avoid this issue. <6> Unable to handle kernel NULL pointer dereference at virtual address 00000000 <6> Mem abort info: <6> Exception class = DABT (current EL), IL = 32 bits <6> SET = 0, FnV = 0 <6> EA = 0, S1PTW = 0 <6> FSC = 5 <6> Data abort info: <6> ISV = 0, ISS = 0x00000005 <6> CM = 0, WnR = 0 <6> user pgtable: 4k pages, 39-bit VAs, pgd = 00000000bafe2c18 <6> task: 00000000d6d739bd task.stack: 000000009afe105c <2> pc : rmnet_map_dl_ind_deregister+0x24/0x68 <2> lr : rmnet_shs_exit+0x30/0x68 [rmnet_shs] <2> Call trace: <2> rmnet_map_dl_ind_deregister+0x24/0x68 <2> rmnet_shs_dev_notify_cb+0x118/0x478 [rmnet_shs] <2> raw_notifier_call_chain+0x3c/0x68 <2> __dev_close_many+0x9c/0x158 <2> dev_close_many+0x7c/0x1e0 <2> rollback_registered_many+0xe4/0x460 <2> unregister_netdev+0x48/0xd0 <2> mhi_netdev_remove+0x124/0x220 <2> mhi_driver_remove+0x178/0x250 <2> device_release_driver_internal+0x158/0x200 <2> device_release_driver+0x14/0x20 <2> bus_remove_device+0xd8/0x100 CRs-Fixed: 2540066 Change-Id: I231ff0af3cb2feb955b891b628487ab4fc3377ba Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
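A hedged sketch of the VND tracking described above: count rmnet virtual devices as they register and de-initialize shs only when the last one unregisters, rather than keying off NETDEV_UP/NETDEV_GOING_DOWN. The device-name prefix and the teardown hook are placeholders.

    #include <linux/netdevice.h>
    #include <linux/string.h>
    #include <linux/kernel.h>

    static int shs_vnd_count;

    static int shs_dev_notify_cb(struct notifier_block *nb,
                                 unsigned long event, void *data)
    {
            struct net_device *dev = netdev_notifier_info_to_dev(data);

            if (strncmp(dev->name, "rmnet_data", 10))  /* placeholder VND prefix */
                    return NOTIFY_DONE;

            switch (event) {
            case NETDEV_REGISTER:
                    shs_vnd_count++;
                    break;
            case NETDEV_UNREGISTER:
                    if (--shs_vnd_count == 0)
                            pr_info("rmnet_shs: last VND gone, de-initializing\n");
                    break;
            }

            return NOTIFY_DONE;
    }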
2019-10-02drivers: rmnet_shs: Core conflict fixSubash Abhinov Kasiviswanathan
Previously, if shs had no fully idle cpus and two new flows came into SHS and a WQ was executed between them, the round robin counter would be reset back to 0 and the two flows would not be distributed to different cores. This changes the way the wq calculates the most idle CPU so that new flows are distributed correctly based on the least number of flows and lowest workload across WQ ticks. CRs-Fixed: 2503374 Change-Id: I3934b693f65579e1bd12e6f86e692f8feae2975c Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
2019-10-02Merge "drivers: rmnet_shs: Fix shs_boost_wq memleak"qctecmdr
2019-10-02Merge "drivers: rmnet_shs: Remove netdev dereference"qctecmdr
2019-10-01Merge "drivers: rmnet_perf: Fix SSR cleanup of structs"qctecmdr
2019-10-01drivers: rmnet_shs: Fix shs_boost_wq memleakSubash Abhinov Kasiviswanathan
Previously, when SSR occurred, a new shs_freq_wq would be allocated when shs initialized. This change destroys the workqueue on de-initialization so that the reallocation does not result in a memory leak. Change-Id: I6282890f9fd3c6c562903ff7b64b014ad2658f38 Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org
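A minimal sketch of the leak fix: destroy the workqueue during de-initialization so that a later SSR re-init allocates a fresh one instead of leaking the old allocation. destroy_workqueue() is the real API; the variable and function names are illustrative.

    #include <linux/workqueue.h>

    static struct workqueue_struct *shs_boost_wq;

    static void shs_boost_wq_exit(void)
    {
            if (shs_boost_wq) {
                    destroy_workqueue(shs_boost_wq);  /* flushes and frees the wq */
                    shs_boost_wq = NULL;              /* safe to re-init after SSR */
            }
    }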
2019-10-01drivers: rmnet_shs: Remove netdev dereferenceSubash Abhinov Kasiviswanathan
Previously there was a dereference of the RPS CPU when updating flows in rmnet_shs_wq. After changing the logic to handle changes of the RPS mask, net_rx is responsible for catching changes in a flow's CPU. This makes the dereference unnecessary. This change removes that dereference and relies on the trustworthy saved CPU of the flow. <6> Unable to handle kernel NULL pointer dereference at virtual address 00000000 <6> Mem abort info: <6> Exception class = DABT (current EL), IL = 32 bits <6> SET = 0, FnV = 0 <6> EA = 0, S1PTW = 0 <6> FSC = 6 <6> Data abort info: <6> ISV = 0, ISS = 0x00000006 <6> CM = 0, WnR = 0 <6> user pgtable: 4k pages, 39-bit VAs, pgd = 0000000019917200 <6> [0000000000000000] *pgd=0000000000000000, *pud=0000000000000000 <6> Internal error: Oops: 96000006 [#1] PREEMPT SMP <2> pc : rmnet_shs_wq_update_cpu_rx_tbl+0x44/0x224 [rmnet_shs] <2> lr : rmnet_shs_wq_update_cpu_rx_tbl+0x3c/0x224 [rmnet_shs] <2> Call trace: <2> rmnet_shs_wq_update_cpu_rx_tbl+0x44/0x224 [rmnet_shs] <2> rmnet_shs_wq_process_wq+0x184/0x83c [rmnet_shs] <2> process_one_work+0x1e0/0x410 <2> worker_thread+0x27c/0x38c <2> kthread+0x12c/0x13c <2> ret_from_fork+0x10/0x18 <6> Code: f9401274 94000ed6 f9400a88 f9419508 (f9400108) Change-Id: Id50a7da2cccccacf4694a1bb43d62ec28e2b4462 Acked-by: Raul Martinez <mraul@qti.qualcomm.com> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org