author:    Soby Mathew <soby.mathew@arm.com>  2017-03-30 14:42:54 +0100
committer: dp-arm <dimitris.papastamos@arm.com>  2017-05-12 11:54:12 +0100
commit:    b6285d64c12ae653c39ecdc3a4c47369aca9d7b0
tree:      372287d79aea7f149b6c4e1b31103a66c547b903 /bl1/aarch32
parent:    d801fbb0fc1834f7e7f6840d9a302beee5021f75
AArch32: Rework SMC context save and restore mechanism
The current SMC context data structure `smc_ctx_t` and its related helpers are
optimized for the case in which an SMC call does not result in a world switch.
This was sufficient for the SP_MIN and BL1 cold boot flows, but the firmware
update use case requires a world switch as a result of an SMC, and the current
SMC context helpers do not support this well. This patch therefore makes the
following changes:
1. Add the monitor stack pointer, `sp_mon`, to `smc_ctx_t`
   The C runtime stack pointer in monitor mode, `sp_mon`, is added to the
   SMC context, and the `smc_ctx_t` pointer is cached in `sp_mon` prior
   to exit from monitor mode. This makes it easier to retrieve the
   context when the next SMC call happens. As a result of this change,
   the SMC context helpers no longer depend on the stack to save and
   restore the registers.
   This aligns the mechanism with the context save and restore mechanism in AArch64.
2. Add SCR to `smc_ctx_t`
   Adding the SCR register to `smc_ctx_t` makes it easier to manage this
   register's state when switching between the non-secure and secure worlds
   as a result of an SMC call.
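The two additions above can be sketched in plain C. Everything below is illustrative only: the struct layout, field order, and helper names are assumptions for the sketch, not the actual TF-A definitions, and a plain pointer variable stands in for the banked SP_mon register.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical, simplified smc_ctx_t mirroring the fields discussed in
 * the commit message. Field names follow the patch; the layout is made
 * up for illustration. */
typedef struct smc_ctx {
	uint32_t r0, r1, r2, r3;
	uint32_t lr_mon;
	uint32_t spsr_mon;
	uint32_t scr;     /* change 2: banked SCR state for this context */
	uint32_t sp_mon;  /* change 1: monitor-mode C runtime stack pointer */
} smc_ctx_t;

/* In the real monitor code, the smc_ctx_t pointer is cached in the
 * banked SP_mon register before exiting monitor mode; this variable
 * models that register. */
static smc_ctx_t *sp_mon_model;

/* Called just before ERET from monitor mode. */
static void cache_ctx_before_exit(smc_ctx_t *next_ctx)
{
	sp_mon_model = next_ctx;
}

/* Called on the next SMC entry: the context comes straight from the
 * cached pointer, so no stack access is needed to save/restore it. */
static smc_ctx_t *retrieve_ctx_on_smc(void)
{
	return sp_mon_model;
}
```

This is the AArch32 analogue of the AArch64 scheme in which the context pointer lives in a dedicated register across world switches.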
Change-Id: I5e12a7056107c1701b457b8f7363fdbf892230bf
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
Diffstat (limited to 'bl1/aarch32')

```
-rw-r--r--  bl1/aarch32/bl1_context_mgmt.c | 23 +++++++++++++++++++++++
-rw-r--r--  bl1/aarch32/bl1_entrypoint.S   | 11 +----------
2 files changed, 24 insertions(+), 10 deletions(-)
```
```diff
diff --git a/bl1/aarch32/bl1_context_mgmt.c b/bl1/aarch32/bl1_context_mgmt.c
index fc1e4eac6..cbf5cb698 100644
--- a/bl1/aarch32/bl1_context_mgmt.c
+++ b/bl1/aarch32/bl1_context_mgmt.c
@@ -74,6 +74,7 @@ static void copy_cpu_ctx_to_smc_ctx(const regs_t *cpu_reg_ctx,
 	next_smc_ctx->r3 = read_ctx_reg(cpu_reg_ctx, CTX_GPREG_R3);
 	next_smc_ctx->lr_mon = read_ctx_reg(cpu_reg_ctx, CTX_LR);
 	next_smc_ctx->spsr_mon = read_ctx_reg(cpu_reg_ctx, CTX_SPSR);
+	next_smc_ctx->scr = read_ctx_reg(cpu_reg_ctx, CTX_SCR);
 }
 
 /*******************************************************************************
@@ -141,6 +142,28 @@ void bl1_prepare_next_image(unsigned int image_id)
 			smc_get_next_ctx());
 
 	/*
+	 * If the next image is non-secure, then we need to program the banked
+	 * non secure sctlr. This is not required when the next image is secure
+	 * because in AArch32, we expect the secure world to have the same
+	 * SCTLR settings.
+	 */
+	if (security_state == NON_SECURE) {
+		cpu_context_t *ctx = cm_get_context(security_state);
+		u_register_t ns_sctlr;
+
+		/* Temporarily set the NS bit to access NS SCTLR */
+		write_scr(read_scr() | SCR_NS_BIT);
+		isb();
+
+		ns_sctlr = read_ctx_reg(get_regs_ctx(ctx), CTX_NS_SCTLR);
+		write_sctlr(ns_sctlr);
+		isb();
+
+		write_scr(read_scr() & ~SCR_NS_BIT);
+		isb();
+	}
+
+	/*
 	 * Flush the SMC & CPU context and the (next)pointers,
 	 * to access them after caches are disabled.
 	 */
diff --git a/bl1/aarch32/bl1_entrypoint.S b/bl1/aarch32/bl1_entrypoint.S
index 86bdf7289..e3d915fb4 100644
--- a/bl1/aarch32/bl1_entrypoint.S
+++ b/bl1/aarch32/bl1_entrypoint.S
@@ -81,20 +81,11 @@ func bl1_entrypoint
 	dsb	sy
 	isb
 
-	/* Get the cpu_context for next BL image */
-	bl	cm_get_next_context
-
-	/* Restore the SCR */
-	ldr	r2, [r0, #CTX_REGS_OFFSET + CTX_SCR]
-	stcopr	r2, SCR
-	isb
-
 	/*
 	 * Get the smc_context for next BL image,
 	 * program the gp/system registers and exit
 	 * secure monitor mode
 	 */
 	bl	smc_get_next_ctx
-	smcc_restore_gp_mode_regs
-	eret
+	monitor_exit
 endfunc bl1_entrypoint
```
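The NS-bit sequence added to `bl1_prepare_next_image` above relies on AArch32 register banking: SCTLR is banked by security state, and the SCR.NS bit selects which bank a monitor-mode access reaches. The C model below is a hedged sketch of that behaviour, not TF-A code; the variables stand in for the real registers, and the `isb()` barriers in the actual patch have no analogue here.

```c
#include <stdint.h>

/* Assumed bit position for illustration; matches the usual SCR.NS
 * definition but is not taken from the TF-A headers. */
#define SCR_NS_BIT (1u << 0)

static uint32_t scr_model;  /* models the SCR register */
static uint32_t sctlr_s;    /* models the secure-banked SCTLR */
static uint32_t sctlr_ns;   /* models the non-secure-banked SCTLR */

/* A monitor-mode write to "SCTLR" lands in whichever bank SCR.NS
 * currently selects. */
static void write_sctlr_model(uint32_t val)
{
	if (scr_model & SCR_NS_BIT)
		sctlr_ns = val;
	else
		sctlr_s = val;
}

/* The sequence from the patch: temporarily set NS so the write reaches
 * the non-secure SCTLR, then clear NS again so BL1 keeps running with
 * its secure view. */
static void program_ns_sctlr(uint32_t ns_sctlr)
{
	scr_model |= SCR_NS_BIT;
	write_sctlr_model(ns_sctlr);
	scr_model &= ~SCR_NS_BIT;
}
```

The point of the round trip is that the secure SCTLR is left untouched while the non-secure bank is programmed for the next image.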