author    Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>  2019-02-27 15:32:19 -0700
committer Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>  2019-03-29 17:58:17 -0600
commit    8ef929c9dc6db3151f2c1921306c8917813b8002 (patch)
tree      308f003769d9fe4c2129819077f000be86781143 /drivers/rmnet/shs/rmnet_shs.h
parent    f5c70f1c6affa97a0a78194d6172a834db5db3b3 (diff)
download  data-kernel-8ef929c9dc6db3151f2c1921306c8917813b8002.tar.gz
drivers: rmnet_shs: Flow limit, WQ prio/bps tweak
Previously a flow node was allocated for every new flow seen by the
driver. If many ports were probed in a short window, memory could end
up being allocated for each packet, which could be exploited to cause
low-memory errors until the flows were cleaned up.
Mitigate this by capping the number of active flows that we actively
manage; once the cap is reached, new flows are ignored.
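The cap amounts to a guard in the flow-node allocation path. A minimal sketch in plain C, assuming hypothetical names (flow_node_alloc and the struct layout are illustrative; num_flows mirrors the counter this patch adds, and MAX_FLOWS mirrors the new constant):

```c
#include <stddef.h>
#include <stdlib.h>

/* Cap mirroring the MAX_FLOWS limit added by this patch. */
#define MAX_FLOWS 700

struct flow_node {
	int hash;
	struct flow_node *next;
};

/* Counterpart of the new num_flows field in rmnet_shs_cfg_s. */
static unsigned int num_flows;

/* Allocate a node only while under the cap; otherwise the new flow is
 * ignored, so a burst of probed ports cannot exhaust memory. */
static struct flow_node *flow_node_alloc(int hash)
{
	struct flow_node *node;

	if (num_flows >= MAX_FLOWS)
		return NULL;	/* new flow is not tracked */

	node = malloc(sizeof(*node));
	if (!node)
		return NULL;
	node->hash = hash;
	node->next = NULL;
	num_flows++;
	return node;
}
```

Flows past the cap still get forwarded; they are simply not steered, which bounds the memory cost of an allocation flood.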
During instant rate switching, a whole core is marked as loaded and
any flow that lands on that core is moved to gold. This marking
duration is reduced from 10 wq ticks to 3.
The driver was incorrectly doubling bps, making sysfs and logs
inaccurate. This is fixed so bps is counted correctly.
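A doubled rate typically comes from accumulating the same bytes in two code paths (e.g. once when a packet is parked and again at flush). A hedged sketch of single-point accounting, with hypothetical names (account_bytes, wq_tick_bps); the patch's actual fix is not shown in this header diff:

```c
/* Bytes seen since the last workqueue tick; each byte must be
 * accumulated at exactly one accounting point, or the computed
 * rate is inflated. */
static unsigned long long bytes_this_tick;

static void account_bytes(unsigned long long len)
{
	bytes_this_tick += len;	/* single accounting point */
}

/* Convert the per-tick byte count to bits per second and reset
 * the counter for the next tick. */
static unsigned long long wq_tick_bps(unsigned int tick_ms)
{
	unsigned long long bps;

	bps = bytes_this_tick * 8ULL * 1000ULL / tick_ms;
	bytes_this_tick = 0;
	return bps;
}
```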
CRs-Fixed: 2406711
Change-Id: I4a653e07e823884e957cb383430cf94c07c3d3f4
Acked-by: Raul Martinez <mraul@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Diffstat (limited to 'drivers/rmnet/shs/rmnet_shs.h')
-rw-r--r--	drivers/rmnet/shs/rmnet_shs.h	9
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/drivers/rmnet/shs/rmnet_shs.h b/drivers/rmnet/shs/rmnet_shs.h
index e4450d1..6466223 100644
--- a/drivers/rmnet/shs/rmnet_shs.h
+++ b/drivers/rmnet/shs/rmnet_shs.h
@@ -36,6 +36,14 @@
 /* RPS mask change's Default core for orphaned CPU flows */
 #define MAIN_CORE 0
 #define UPDATE_MASK 0xFF
+#define MAX_FLOWS 700
+
+/* Different max inactivity based on # of flows */
+#define FLOW_LIMIT1 70
+#define INACTIVE_TSEC1 10
+#define FLOW_LIMIT2 140
+#define INACTIVE_TSEC2 2
+
 //#define RMNET_SHS_MAX_UDP_SILVER_CORE_DATA_RATE 1073741824 //1.0Gbps
 //#define RMNET_SHS_MAX_UDP_SILVER_CORE_DATA_RATE 320787200 //320 Mbps
@@ -84,6 +92,7 @@ struct rmnet_shs_cfg_s {
 	long int num_bytes_parked;
 	long int num_pkts_parked;
 	u32 is_reg_dl_mrk_ind;
+	u16 num_flows;
 	u8 is_pkt_parked;
 	u8 is_timer_init;
 	u8 force_flush_state;
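The paired FLOW_LIMIT/INACTIVE_TSEC constants suggest a tiered idle-timeout policy: the fuller the flow table, the faster inactive flows are evicted. A sketch under that assumption (flow_inactive_tsec is a hypothetical helper, and the behavior past FLOW_LIMIT2 is a guess; only the constants come from the patch):

```c
/* Constants from the patch's rmnet_shs.h hunk. */
#define FLOW_LIMIT1	70
#define INACTIVE_TSEC1	10
#define FLOW_LIMIT2	140
#define INACTIVE_TSEC2	2

/* Pick the allowed inactivity window (in seconds) as a function of
 * how many flows are currently tracked. Lightly loaded tables keep
 * idle flows around longer; fuller tables reclaim them quickly. */
static int flow_inactive_tsec(unsigned int num_flows)
{
	if (num_flows <= FLOW_LIMIT1)
		return INACTIVE_TSEC1;	/* few flows: tolerate 10 s idle */
	if (num_flows <= FLOW_LIMIT2)
		return INACTIVE_TSEC2;	/* table filling: evict after 2 s */
	return 0;			/* past both limits: assume immediate cleanup */
}
```

Together with the MAX_FLOWS cap above, this bounds both how many flows are tracked and how long a stale entry can occupy a slot.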