Age | Commit message | Author |
|
|
|
For a time after a vnd is created, the shs internal map_mask is in
an invalid state. If this happens during data transfer, it could cause
invalid CPU states.
This fix makes the map_mask take into account only already-initialized
vnd rps values.
Fixes the following-
Unable to handle kernel write to read-only memory at virtual address ffffff99a0b5484c
pc : rmnet_shs_flush_lock_table+0x264/0x688 [rmnet_shs]
lr : rmnet_shs_flush_lock_table+0x238/0x688 [rmnet_shs]
Call trace:
rmnet_shs_flush_lock_table+0x264/0x688 [rmnet_shs]
rmnet_shs_chain_to_skb_list+0x320/0x340 [rmnet_shs]
rmnet_shs_assign+0x980/0x1290 [rmnet_shs]
rmnet_deliver_skb+0x240/0x410
rmnet_frag_deliver+0x618/0x778
rmnet_perf_core_flush_curr_pkt+0x12c/0x148 [rmnet_perf]
rmnet_perf_tcp_opt_ingress+0x88/0x268 [rmnet_perf]
rmnet_perf_opt_ingress+0x348/0x398 [rmnet_perf]
rmnet_perf_core_desc_entry+0x128/0x180 [rmnet_perf]
rmnet_frag_ingress_handler+0x3a8/0x578
rmnet_rx_handler+0x230/0x400
__netif_receive_skb_core+0x518/0xd60
process_backlog+0x1d4/0x438
net_rx_action+0x124/0x5b8
__do_softirq+0x2f8/0x5d8
irq_exit+0xec/0x110
handle_IPI+0x1b8/0x2f8
gic_handle_irq+0x10c/0x1d0
el0_irq_naked+0x50/0x5c
Change-Id: I4c10ebb83140eb14ee3b643d057e3de29dfa851b
Acked-by: Raul Martinez <mraul@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
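As a minimal userspace sketch of the guard described above (the slot layout, `valid` flag, and function names are illustrative assumptions, not the driver's actual structures), the mask is rebuilt from initialized entries only:

```c
#include <stdint.h>

/* Hypothetical per-vnd slot: `valid` is set only once the vnd's rps
 * value has been initialized. */
struct rps_slot {
	int valid;
	uint32_t rps;
};

/* Build the map mask from initialized slots only, so a half-created
 * vnd can never contribute a stale or invalid bit to the mask. */
uint32_t build_map_mask(const struct rps_slot *slots, int n)
{
	uint32_t mask = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (slots[i].valid)
			mask |= slots[i].rps;
	}
	return mask;
}
```

The uninitialized slot is simply skipped, which is the behavior the fix describes.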
|
|
android-msm-floral-4.14
Bug: 149660093
Change-Id: Ibc0260e3160dd1a4b421b2559312bc16a5c0c7d8
Signed-off-by: Wilson Sung <wilsonsung@google.com>
|
|
Change-Id: Ibf8e4f1ea80214c92cc710655b962f2abbaa00a5
|
|
Bug: 149536833
Change-Id: I6ffa90bf10a38abd388d35074528a648be08a085
Signed-off-by: Wilson Sung <wilsonsung@google.com>
|
|
Change-Id: I17a95af88c915280d5342e5a99a392498fda534f
|
|
Skip the phy chip sw reset, as we are already doing a hw reset,
to improve emac probe time.
Change-Id: I2a8696855e97c0b60dfebbe83b9e2bdd6349d24b
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
- Add place marker for emac probe end
Change-Id: Ia7ba12397dabaa6cfc81c82839458bdea5b08ffd
Signed-off-by: Lakshit Tyagi <ltyagi@codeaurora.org>
|
|
Add a module param exposing the currently running consistent flows.
This module param allows a filtered view in cases
when shs goes into idle mode and multiple idle flows
have been created.
Change-Id: I6b9c9b18c30575d2b59dd76814f4b7b2a2953bc0
Acked-by: Raul Martinez <mraul@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
|
|
android-msm-sunfish-4.14
Bug: 148752159
Change-Id: I3d2e69488d7684d528d12f3c950f45601f74dc34
Signed-off-by: Alex Hong <rurumihong@google.com>
|
|
Change-Id: I6ddf0a45b4efe6942ad69e4d666692a14f8030cb
|
|
|
|
|
|
This adds a delay of 50ms after phy reset so
that the phy gets sufficient time.
Change-Id: Ia75ca969b46c891f6b47bd237cf845d2101ff4d9
|
|
|
|
Enable wake-on-LAN on interface open, not during
ethernet driver probe, to reduce driver initialization
time.
Change-Id: Icee83f29e1276df2ce5287ed790f30368d4d09bc
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
Read the phy address from the DTSI to avoid dynamic phy
detection, which reduces ethernet initialization
time.
Change-Id: I95e4386e98bebf9a71aa186b3ebd20cea758c7db
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
Populate the phy mask correctly to avoid scanning
all pins for phy detection.
Change-Id: Ia730518ffb1dce0d1b4f47c3d9c7f42a7ce7de20
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
Fix to reduce MDIO delay in read and write operations on phy
registers of the emac driver.
Change-Id: I559c4ee00da57547113b83fd980803a4b19b5c03
Acked-by: Abhishek Chauhan <abchauha@qti.qualcomm.com>
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
Change-Id: I4963fd1150ba6ba5f2c21a2a28f35413c3c6fa6f
|
|
|
|
|
|
This adds a delay of 50ms after phy reset so
that the phy gets sufficient time.
Change-Id: Ia75ca969b46c891f6b47bd237cf845d2101ff4d9
|
|
Fix to reduce MDIO delay in read and write operations on phy
registers of the emac driver.
Change-Id: I559c4ee00da57547113b83fd980803a4b19b5c03
Acked-by: Abhishek Chauhan <abchauha@qti.qualcomm.com>
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
Add a shared memory file for net devices. This is an array of
net device names, ip mismatches, and coalesced rx_pkts. Also pass
UDP and TCP throughput numbers and buffer usage stats per netdev.
Change-Id: If68288e7628a06a0d5f61a8fa318f691ed600196
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
|
|
Change-Id: I5fedc1f2bd670421ae29ca49b554005ce8efb05e
|
|
Ensure that the mapped CPU which SHS suggests is within
the bounds of the relevant stats array.
Change-Id: I83254cd7407231027c05665d3038e367d9545195
Signed-off-by: Conner Huff <chuff@codeaurora.org>
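A minimal sketch of such a bounds check (the array length, function name, and fallback index are assumptions for illustration, not the driver's actual values):

```c
/* Clamp an SHS-suggested CPU index before it is used to index a
 * fixed-length per-CPU stats array. */
#define STATS_LEN 8

int bounded_cpu(int suggested)
{
	if (suggested < 0 || suggested >= STATS_LEN)
		return 0;	/* fall back to a known-valid index */
	return suggested;
}
```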
|
|
In case there is a burst of data beyond a threshold, the core switch
timer will try to schedule a worker once the timer expires.
While the pending work is cleaned up on timer expiry, the pending
timers themselves are not cleared.
As part of this change, the core switch module parameter is
reset on start of cleanup to ensure no more timers are configured
during the cleanup.
Fixes the following-
399.705316: <6> Modules linked in: rmnet_perf(O) [last unloaded: rmnet_shs]
399.734305: <2> pstate: 20400085 (nzCv daIf +PAN -UAO)
399.739251: <2> pc : rb_insert_color+0x10/0x168
399.743555: <2> lr : timerqueue_add+0x88/0xc0
400.413555: <2> Call trace:
400.416084: <2> rb_insert_color+0x10/0x168
400.420042: <2> enqueue_hrtimer+0x198/0x1c0
400.424081: <2> __hrtimer_run_queues+0x4e8/0x5b0
400.428568: <2> hrtimer_interrupt+0x108/0x350
400.432793: <2> arch_timer_handler_virt+0x40/0x50
400.437373: <2> handle_percpu_devid_irq+0x1dc/0x428
400.442122: <2> __handle_domain_irq+0xa0/0xf8
400.446345: <2> gic_handle_irq+0x154/0x1d4
400.450298: <2> el1_irq+0xb4/0x130
CRs-fixed: 2594249
Change-Id: I6e4a1982ce4665340cb1a75d0ec17d1db3f286fc
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
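The "reset the parameter first, then drain" ordering can be sketched in userspace as follows (the flag and counter stand in for the module parameter and the set of pending hrtimers; all names are illustrative):

```c
#include <stdatomic.h>

/* `switch_enabled` stands in for the core switch module parameter;
 * `timers_armed` stands in for the set of pending timers. */
atomic_int switch_enabled = 1;
int timers_armed;

/* Arm a core-switch timer only while the feature is enabled. */
int arm_switch_timer(void)
{
	if (!atomic_load(&switch_enabled))
		return 0;	/* cleanup in progress: refuse to arm */
	timers_armed++;
	return 1;
}

/* Cleanup clears the flag first, then drains the timers, so a racing
 * arm_switch_timer() can no longer queue a timer that would fire
 * after the module is unloaded. */
void switch_cleanup(void)
{
	atomic_store(&switch_enabled, 0);
	timers_armed = 0;	/* stand-in for cancelling each timer */
}
```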
|
|
Return a code indicating that rmnet_shs is cleaning up, so that
shsusrd is notified that the cleanup is about to happen.
Change-Id: Ia11b85426a446ca9a599cbe0fc2117a996d67c37
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
|
|
Make cat and echo shell commands aimed at querying SHS
shared memory files simply return 0.
Also equip recovery-related global variables with
refcounts to prevent premature freeing when both
shs and shell commands may be opening and closing
the files.
[ 4993.922290]BUG: sleeping function called from invalid context at include/linux/uaccess.h:131
[ 4993.922306]Preemption disabled at:
[ 4993.922318][<ffffffaf240cfad8>] rmnet_shs_wq_ep_lock_bh+0x18/0x20 [rmnet_shs]
[ 4993.922333]Call trace:
[ 4993.922355] ___might_sleep+0x208/0x218
[ 4993.922357] __might_sleep+0x50/0x88
[ 4993.922363] __might_fault+0x44/0x98
[ 4993.922367] rmnet_shs_read+0x58/0xc8 [rmnet_shs]
Change-Id: I6ac5728e76980d5b74b9b415bd50a1a8147c266b
Signed-off-by: Conner Huff <chuff@codeaurora.org>
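The refcount pattern can be illustrated with a single-threaded userspace sketch (names and layout are assumptions; the kernel code would use refcount_t under the appropriate lock):

```c
#include <stdlib.h>

/* Refcounted shared object: freed only when the last user — shs or a
 * shell reader — drops its reference. */
struct shared_file {
	int refcnt;
	char *buf;
};

struct shared_file *shared_file_get(struct shared_file *f)
{
	f->refcnt++;
	return f;
}

/* Returns 1 if this put released the last reference and freed f. */
int shared_file_put(struct shared_file *f)
{
	if (--f->refcnt == 0) {
		free(f->buf);
		free(f);
		return 1;
	}
	return 0;
}
```

Whichever side closes the file last performs the free, so neither side can observe a prematurely freed buffer.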
|
|
Adds locking around the structures traversed in the generic netlink
handlers. The ep lock is needed around the genl function to move flows,
and the ht lock is needed around the set segmentation genl function.
This is to prevent the race scenario where the ep/ht tables are modified
while the generic netlink runs in another context. Fixes this:
[29292.841793] [006b6b6b6b6b6b43] address between user and kernel address ranges
[29292.849475] Internal error: Oops: 96000004 [#1] PREEMPT SMP
[29292.928088] pc : rmnet_shs_wq_set_flow_segmentation+0x4c/0x228 [rmnet_shs]
[29292.935145] lr : rmnet_shs_genl_set_flow_segmentation+0x5c/0x3a0 [rmnet_shs]
[29293.027717] Call trace:
[29293.030241] rmnet_shs_wq_set_flow_segmentation+0x4c/0x228 [rmnet_shs]
[29293.036941] rmnet_shs_genl_set_flow_segmentation+0x5c/0x3a0 [rmnet_shs]
[29293.043828] genl_rcv_msg+0x3d4/0x410
[29293.047601] netlink_rcv_skb+0xac/0x128
[29293.051544] genl_rcv+0x34/0x48
[29293.054775] netlink_unicast+0x1c0/0x268
[29293.058803] netlink_sendmsg+0x308/0x368
Change-Id: Ibc7143c6e3047afb9abc574a9ddb06d597895b7c
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
|
|
|
|
Read phy address from dtis to avoid dynamic phy
detection which will reduce ethernet initialization
time.
Change-Id: I95e4386e98bebf9a71aa186b3ebd20cea758c7db
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
Enable wake on lan on interface open not during
ethernet driver probe to reduce driver initialization
time.
Change-Id: Icee83f29e1276df2ce5287ed790f30368d4d09bc
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
Populate the phy mask correctly to avoid scanning
of all pins for phy detection.
Change-Id: Ia730518ffb1dce0d1b4f47c3d9c7f42a7ce7de20
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|
|
This fixes the deadlock between the ep lock and ht lock. This occurs
when there are multiple streams which are being assigned a new
CPU while existing flows are pruned by the workqueue.
Fixes the following deadlock-
CORE 1
-000|queued_spin_lock_slowpath(lock = 0xFFFFFFAD4F837968, ?)
-001|queued_spin_lock(inline)
-001|do_raw_spin_lock_flags(inline)
-001|__raw_spin_lock_irqsave(inline)
-001|raw_spin_lock_irqsave(?)
-002|rmnet_shs_wq_get_lpwr_cpu_new_flow(dev = 0xFFFFFFE22813A000) //acquires rmnet_shs_ep_lock
-003|rmnet_shs_new_flow_cpu(burst_size = 0, dev = 0xFFFFFFE22813A000)
-004|rmnet_shs_assign(skb = 0xFFFFFFE1DE425400, ?) //acquires rmnet_shs_ht_splock
-005|rcu_read_unlock(inline)
-005|rmnet_deliver_skb(skb = 0xFFFFFFE1DE425400, port = 0xFFFFFFE228138000)
-006|rmnet_frag_deliver(frag_desc = 0xFFFFFFE1F0A7D700, ?)
-007|rmnet_perf_core_flush_curr_pkt(pkt_info = 0xFFFFFF800800BB60, ?, ?, skip_hash = FALSE)
-008|rmnet_perf_tcp_opt_ingress(?, pkt_info = 0xFFFFFF800800BB60, ?)
-009|rmnet_perf_opt_ingress(pkt_info = 0xFFFFFF800800BB60)
-010|rmnet_perf_core_desc_entry(frag_desc = 0xFFFFFFE1F0A7D700, ?)
-011|__rmnet_frag_ingress_handler(inline)
-011|rmnet_frag_ingress_handler(skb = 0xFFFFFFE1DB2A2F00, ?)
-012|rmnet_map_ingress_handler(inline)
-012|rmnet_rx_handler(?)
-013|__netif_receive_skb_core(skb = 0xFFFFFFE1DB2A2F00, pfmemalloc = FALSE, ?)
-014|__netif_receive_skb_one_core(inline)
-014|__netif_receive_skb(inline)
-014|process_backlog(napi = 0xFFFFFFE27A966850, quota = 64)
-015|__read_once_size(inline)
-015|static_key_count(inline)
-015|static_key_false(inline)
-015|trace_napi_poll(inline)
-015|napi_poll(inline)
-015|net_rx_action(?)
CORE 7
-000|queued_spin_lock_slowpath(lock = 0xFFFFFFAD4F8356B8, ?)
-001|queued_spin_lock(inline)
-001|do_raw_spin_lock_flags(inline)
-001|__raw_spin_lock_irqsave(inline)
-001|raw_spin_lock_irqsave(?)
-002|rmnet_shs_wq_cleanup_hash_tbl(?) //acquires rmnet_shs_ht_splock
-003|rmnet_shs_wq_update_stats()
-004|spin_unlock_irqrestore(inline)
-004|rmnet_shs_wq_process_wq() //acquires rmnet_shs_ep_lock
-005|__read_once_size(inline)
-005|static_key_count(inline)
-005|static_key_false(inline)
-005|trace_workqueue_execute_end(inline)
-005|process_one_work(worker = 0xFFFFFFE2787B1600, work = 0xFFFFFFE1F31DE580)
-006|__read_once_size(inline)
-006|list_empty(inline)
-006|worker_thread(__worker = 0xFFFFFFE2787B1600)
-007|kthread(_create = 0xFFFFFFE278639900)
-008|ret_from_fork(asm)
CRs-fixed: 2595421
Change-Id: I760a51f7ff998ab610858f38cf76e577d026ff41
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
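The fix for an AB-BA deadlock like the one traced above is a consistent lock order on both paths. A userspace sketch with pthread mutexes standing in for the two spinlocks (all names illustrative):

```c
#include <pthread.h>

/* Stand-ins for rmnet_shs_ep_lock and rmnet_shs_ht_splock. */
pthread_mutex_t ep_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t ht_lock = PTHREAD_MUTEX_INITIALIZER;
int assigned, pruned;

/* Both paths take the two locks in the same ep -> ht order, so the
 * AB-BA deadlock between new-flow CPU assignment and workqueue
 * pruning cannot occur. */
void assign_new_flow(void)
{
	pthread_mutex_lock(&ep_lock);
	pthread_mutex_lock(&ht_lock);
	assigned++;
	pthread_mutex_unlock(&ht_lock);
	pthread_mutex_unlock(&ep_lock);
}

void prune_flows(void)
{
	pthread_mutex_lock(&ep_lock);
	pthread_mutex_lock(&ht_lock);
	pruned++;
	pthread_mutex_unlock(&ht_lock);
	pthread_mutex_unlock(&ep_lock);
}
```

In the trace, core 1 held the ht lock while waiting for the ep lock and core 7 held the ep lock while waiting for the ht lock; imposing one order removes the cycle.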
|
|
|
|
android-msm-sunfish-4.14
Bug: 146991028
Change-Id: I5adaa271a7f26e0901780689770dbb7bfb6e8c7c
Signed-off-by: lucaswei <lucaswei@google.com>
|
|
Change-Id: I83d670605281949f7f61da9e9ce9adb430e1a656
|
|
|
|
The new flow cpu logic dereferences endpoint information but
does not hold the endpoint lock. This could potentially cause
a use-after-free, as these elements could be freed from
the netdevice notifier, which holds the endpoint lock.
Fixes the following-
1130.302097: <6> Unable to handle kernel paging request at virtual address bd9e0912da8bb3e5
1130.302138: <6> Modules linked in: rmnet_shs(O-) rmnet_perf(O) [last unloaded: rmnet_shs]
1130.302213: <2> pc : rmnet_shs_wq_get_lpwr_cpu_new_flow+0x2c/0xc8 [rmnet_shs]
1130.302224: <2> lr : rmnet_shs_new_flow_cpu+0x34/0x138 [rmnet_shs]
1130.302305: <2> Call trace:
1130.302317: <2> rmnet_shs_wq_get_lpwr_cpu_new_flow+0x2c/0xc8 [rmnet_shs]
1130.302328: <2> rmnet_shs_assign+0x188/0xc50 [rmnet_shs]
1130.302340: <2> rmnet_deliver_skb+0x134/0x228
1130.302344: <2> rmnet_frag_deliver+0x5d0/0x730
1130.302379: <2> rmnet_perf_core_send_desc+0x44/0x50 [rmnet_perf]
1130.302386: <2> rmnet_perf_opt_flush_single_flow_node+0x228/0x438 [rmnet_perf]
1130.302393: <2> rmnet_perf_opt_flush_all_flow_nodes+0x40/0x70 [rmnet_perf]
1130.302400: <2> rmnet_perf_core_handle_map_control_end+0x34/0x138 [rmnet_perf]
1130.302405: <2> rmnet_map_dl_trl_notify_v2+0x40/0x80
1130.302409: <2> rmnet_frag_flow_command+0x110/0x120
1130.302413: <2> rmnet_frag_ingress_handler+0x2c8/0x3c8
1130.302417: <2> rmnet_rx_handler+0x188/0x238
1130.302425: <2> __netif_receive_skb_core+0x444/0xb68
1130.302428: <2> process_backlog+0x170/0x390
1130.302431: <2> net_rx_action+0x134/0x548
1130.302439: <2> __do_softirq+0x1dc/0x384
CRs-fixed: 2594249
Change-Id: Ie4bcd300e340dc190ec88dd5d067cdd59b6d30eb
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
|
|
In case there is a burst of data beyond a threshold, the core switch
timer will try to schedule a worker once the timer expires.
While the pending work is cleaned up on timer expiry, the pending
timers themselves are not cleared.
As part of this change, the core switch module parameter is
reset on start of cleanup to ensure no more timers are configured
during the cleanup.
Fixes the following-
399.705316: <6> Modules linked in: rmnet_perf(O) [last unloaded: rmnet_shs]
399.734305: <2> pstate: 20400085 (nzCv daIf +PAN -UAO)
399.739251: <2> pc : rb_insert_color+0x10/0x168
399.743555: <2> lr : timerqueue_add+0x88/0xc0
400.413555: <2> Call trace:
400.416084: <2> rb_insert_color+0x10/0x168
400.420042: <2> enqueue_hrtimer+0x198/0x1c0
400.424081: <2> __hrtimer_run_queues+0x4e8/0x5b0
400.428568: <2> hrtimer_interrupt+0x108/0x350
400.432793: <2> arch_timer_handler_virt+0x40/0x50
400.437373: <2> handle_percpu_devid_irq+0x1dc/0x428
400.442122: <2> __handle_domain_irq+0xa0/0xf8
400.446345: <2> gic_handle_irq+0x154/0x1d4
400.450298: <2> el1_irq+0xb4/0x130
CRs-fixed: 2594249
Change-Id: I6e4a1982ce4665340cb1a75d0ec17d1db3f286fc
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
|
|
This adds synchronization for the mmap file operation so that an mmap
will only occur after a successful open and will not occur after a release.
This is done by using a fault handler for each shared memory file and
checking the respective global variable inside of the ep lock.
Change-Id: I955dfc8e745b1e275328721e9934ef8c020d8fa3
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
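The fault-handler gate can be sketched in userspace like this (the flag, lock, and return values are illustrative assumptions; `-1` stands in for returning a SIGBUS-style fault):

```c
#include <pthread.h>

/* `file_open` stands in for the per-file global set on a successful
 * open() and cleared on release(); the mutex stands in for the ep
 * lock under which the fault handler checks it. */
pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
int file_open;

/* Fault handler sketch: serve the page only between open and release. */
int shmem_fault(void)
{
	int ret;

	pthread_mutex_lock(&state_lock);
	ret = file_open ? 0 : -1;
	pthread_mutex_unlock(&state_lock);
	return ret;
}
```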
|
|
- Changing PTP clock frequency according to the targets
Change-Id: I7a69049640ce732fe5f21ea51a4ff3ced0138f0e
Signed-off-by: Lakshit Tyagi <ltyagi@codeaurora.org>
|
|
android-msm-floral-4.14
Bug: 146759211
Change-Id: I594bc7e2ab1c248a53a1aa2f49604bc37bdab434
Signed-off-by: Wilson Sung <wilsonsung@google.com>
|
|
Change-Id: I6feb6a89d841735476ea9e7cac86874bc0b55978
|
|
Commit 56901a4a6639 ("drivers: rmnet_perf: Take lock during DL marker
handling") locks the DL marker handling to ensure synchronization. When
rmnet_perf handles deaggregation of QMAP frames, this will result in
attempting to take the lock recursively, as the lock will already be held
by the deaggregation logic.
Change-Id: I731574ed56e770193c9b094758d7f4119ef91781
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
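One common shape for avoiding this kind of recursive acquisition is to pass the caller's lock context down the path. A userspace sketch (the `lock_held` argument and all names are illustrative assumptions, not the driver's actual interface):

```c
#include <pthread.h>

/* `perf_lock` stands in for the rmnet_perf lock. */
pthread_mutex_t perf_lock = PTHREAD_MUTEX_INITIALIZER;
int markers_handled;

/* The DL-marker path takes the lock only when the caller does not
 * already hold it. */
void handle_dl_marker(int lock_held)
{
	if (!lock_held)
		pthread_mutex_lock(&perf_lock);
	markers_handled++;
	if (!lock_held)
		pthread_mutex_unlock(&perf_lock);
}

/* Deaggregation holds the lock around the whole QMAP frame, so the
 * marker handler must not try to take it again. */
void deaggregate_frame(void)
{
	pthread_mutex_lock(&perf_lock);
	handle_dl_marker(1);	/* re-locking here would self-deadlock */
	pthread_mutex_unlock(&perf_lock);
}
```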
|
|
Since handling DL markers can result in flushing the various flow nodes,
the rmnet_perf lock must be taken to ensure synchronization with the
rest of the driver. During hotplug scenarios, a regular flush could be
going on while a DL marker handling callback is invoked. In certain cases,
the callback can proceed farther than it should, and send a second pointer
to a previously flushed descriptor down the call chain. This phantom
descriptor can cause various problems, but the most "common" case seen
is a NULL dereference such as the following:
rmnet_frag_deliver+0x110/0x730
rmnet_perf_core_send_desc+0x44/0x50 [rmnet_perf]
rmnet_perf_opt_flush_single_flow_node+0x220/0x430 [rmnet_perf]
rmnet_perf_opt_flush_all_flow_nodes+0x40/0x70 [rmnet_perf]
rmnet_perf_core_handle_map_control_start+0x38/0x130 [rmnet_perf]
rmnet_map_dl_hdr_notify_v2+0x3c/0x58
rmnet_frag_flow_command+0x104/0x120
rmnet_frag_ingress_handler+0x2c8/0x3c8
rmnet_rx_handler+0x188/0x238
Change-Id: I79cb626732358c827d6c9df4239c0c55821bd3a5
Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
|
|
Check if NAPI is enabled and, if so, call
netif_rx_skb; otherwise follow the legacy path of
calling netif_rx_ni.
Acked-by: Abhishek Chauhan <abchauha@qti.qualcomm.com>
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
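The selection logic reduces to a single runtime branch; as a trivial sketch, with counters standing in for the two delivery paths named in the commit message:

```c
/* `napi_enabled` stands in for the driver's NAPI state; the counters
 * stand in for the netif_rx_skb and netif_rx_ni delivery paths. */
int napi_enabled = 1;
int napi_rx, legacy_rx;

void deliver_rx(void)
{
	if (napi_enabled)
		napi_rx++;	/* NAPI path */
	else
		legacy_rx++;	/* legacy netif_rx_ni path */
}
```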
|
|
Check if NAPI is enabled and, if so, call
netif_rx_skb; otherwise follow the legacy path of
calling netif_rx_ni.
Acked-by: Abhishek Chauhan <abchauha@qti.qualcomm.com>
Signed-off-by: Sunil Paidimarri <hisunil@codeaurora.org>
|