author:    Amit Pundir <amit.pundir@linaro.org>  2016-01-12 12:32:49 +0530
committer: Amit Pundir <amit.pundir@linaro.org>  2016-01-12 12:32:49 +0530
commit:    f24c222c7ce7ab4356904ad0303424468726b14b (patch)
tree:      ad2ffe6a18e05832b3f9a514e9460921767c9007
parent:    d53f3eee401aa90afe8f4614e3c6021463cf85f5 (diff)
parent:    1909f8687a1c5ca1c9886843af096dfc13dcc472 (diff)
Merge branch 'qcomlt/release/qcomlt-3.18' into linaro-android/linaro-fixes/android-3.18test/qcomlt-3.18
qcomlt --> https://git.linaro.org/landing-teams/working/qualcomm/kernel.git
linaro-android --> git://android.git.linaro.org/kernel/linaro-android.git
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
179 files changed, 46413 insertions, 1298 deletions
diff --git a/Documentation/devicetree/bindings/arm/msm/qcom,idle-state.txt b/Documentation/devicetree/bindings/arm/msm/qcom,idle-state.txt new file mode 100644 index 000000000000..ae1b07f852c6 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/msm/qcom,idle-state.txt @@ -0,0 +1,81 @@ +QCOM Idle States for cpuidle driver + +ARM provides the idle-state node to define the cpuidle states, as defined in [1]. +cpuidle-qcom is the cpuidle driver for Qualcomm SoCs and uses these idle +states. Idle states have different enter/exit latency and residency values. +The idle states supported by the QCOM SoC are defined as - + + * Standby + * Retention + * Standalone Power Collapse (Standalone PC or SPC) + * Power Collapse (PC) + +Standby: Standby does a little more in addition to architectural clock gating. +When the WFI instruction is executed the ARM core would gate its internal +clocks. In addition to gating the clocks, QCOM cpus use this instruction as a +trigger to execute the SPM state machine. The SPM state machine waits for the +interrupt to trigger the core back into the active state. This triggers the cache +hierarchy to enter standby states when all cpus are idle. An interrupt brings +the SPM state machine out of its wait; the next step is to ensure that the +cache hierarchy is also out of standby, and then the cpu is allowed to resume +execution. + +Retention: Retention is a low power state where the core is clock gated and +the memory and the registers associated with the core are retained. The +voltage may be reduced to the minimum value needed to keep the processor +registers active. The SPM should be configured to execute the retention +sequence and would wait for an interrupt before restoring the cpu to the execution +state. Retention may have a slightly higher latency than Standby. + +Standalone PC: A cpu can power down and warmboot if there is sufficient time +between the time it enters idle and the next known wake up.
SPC mode is used +to indicate a core entering a power down state without consulting any other +cpu or the system resources. This helps save power only on that core. The SPM +sequence for this idle state is programmed to power down the supply to the +core, wait for the interrupt, restore power to the core, and ensure the +system state including cache hierarchy is ready before allowing the core to +resume. Applying power and resetting the core causes the core to warmboot +back into an Exception Level (EL), which trampolines control back to the +kernel. Entering a power down state for the cpu needs to be done by trapping +into an EL. Failing to do so would result in a crash enforced by the warm boot +code in the EL for the SoC. On SoCs with write-back L1 cache, the cache has to +be flushed in s/w before powering down the core. + +Power Collapse: This state is similar to the SPC mode, but distinguishes +itself in that the cpu acknowledges and permits the SoC to enter deeper sleep +modes. In a hierarchical power domain SoC, this means L2 and other caches can +be flushed, the system bus and clocks lowered, and the SoC main XO clock gated and +voltages reduced, provided all cpus enter this state. Since the span of low +power modes possible at this state is vast, the exit latency and the residency +of this low power mode would be considered high even though at a cpu level, +this essentially is cpu power down. The SPM in this state also may handshake +with the Resource Power Manager processor in the SoC to indicate a complete +application processor subsystem shut down. + +The idle states for QCOM SoCs are distinguished by the compatible property of +the idle-states device node. +The devicetree representation of the idle state should be - + +Required properties: + +- compatible: Must be one of - + "qcom,idle-state-stby", + "qcom,idle-state-ret", + "qcom,idle-state-spc", + "qcom,idle-state-pc", + and "arm,idle-state". + +Other required and optional properties are specified in [1].
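Each of the four states above trades deeper power savings against higher enter/exit latency and a longer minimum residency, and a cpuidle governor is expected to pick the deepest state that still fits the predicted idle window. A minimal Python sketch of that selection logic (only the SPC numbers come from the example node in this binding; the other latency/residency figures are purely illustrative, not hardware values):

```python
# Idle-state selection sketch: pick the deepest state whose exit
# latency honors the latency requirement and whose minimum residency
# fits the predicted idle window. Values for "spc" mirror the CPU_SPC
# example node in this binding; the rest are illustrative assumptions.

STATES = [
    # (name, exit_latency_us, min_residency_us), shallowest first
    ("stby", 1, 1),
    ("ret", 50, 500),       # illustrative
    ("spc", 200, 2000),     # from the CPU_SPC example node
    ("pc", 500, 10000),     # illustrative
]

def select_state(predicted_idle_us, latency_req_us):
    """Return the deepest state whose latency and residency both fit."""
    chosen = STATES[0][0]   # Standby is always a safe fallback
    for name, exit_lat, min_res in STATES:
        if exit_lat <= latency_req_us and min_res <= predicted_idle_us:
            chosen = name   # deeper states overwrite shallower ones
    return chosen
```

For example, a 3 ms predicted idle period with a 300 us latency budget lands in SPC: Retention fits too, but Power Collapse is rejected because its (assumed) exit latency exceeds the budget.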
+ +Example: + + idle-states { + CPU_SPC: spc { + compatible = "qcom,idle-state-spc", "arm,idle-state"; + entry-latency-us = <150>; + exit-latency-us = <200>; + min-residency-us = <2000>; + }; + }; + +[1]. Documentation/devicetree/bindings/arm/idle-states.txt diff --git a/Documentation/devicetree/bindings/arm/msm/qcom,saw2.txt b/Documentation/devicetree/bindings/arm/msm/qcom,saw2.txt index 1505fb8e131a..690c3c002ad2 100644 --- a/Documentation/devicetree/bindings/arm/msm/qcom,saw2.txt +++ b/Documentation/devicetree/bindings/arm/msm/qcom,saw2.txt @@ -2,11 +2,20 @@ SPM AVS Wrapper 2 (SAW2) The SAW2 is a wrapper around the Subsystem Power Manager (SPM) and the Adaptive Voltage Scaling (AVS) hardware. The SPM is a programmable -micro-controller that transitions a piece of hardware (like a processor or +power-controller that transitions a piece of hardware (like a processor or subsystem) into and out of low power modes via a direct connection to the PMIC. It can also be wired up to interact with other processors in the system, notifying them when a low power state is entered or exited. +Multiple revisions of the SAW hardware are supported using these device nodes. +SAW2 revisions differ in the register offset and configuration data. Also, the +same revision of the SAW in different SoCs may have different configuration +data due to the differences in hardware capabilities. Hence the SoC name, the +version of the SAW hardware in that SoC and the distinction between cpu (big +or little) or cache may be needed to uniquely identify the SAW register +configuration and initialization data. The compatible string is used to +indicate this parameter. + PROPERTIES - compatible: @@ -14,10 +23,13 @@ PROPERTIES Value type: <string> Definition: shall contain "qcom,saw2".
A more specific value should be one of: - "qcom,saw2-v1" - "qcom,saw2-v1.1" - "qcom,saw2-v2" - "qcom,saw2-v2.1" + "qcom,saw2-v1" + "qcom,saw2-v1.1" + "qcom,saw2-v2" + "qcom,saw2-v2.1" + "qcom,apq8064-saw2-v1.1-cpu" + "qcom,msm8974-saw2-v2.1-cpu" + "qcom,apq8084-saw2-v2.1-cpu" - reg: Usage: required @@ -26,10 +38,17 @@ PROPERTIES the register region. An optional second element specifies the base address and size of the alias register region. +- regulator: + Usage: optional + Value type: boolean + Definition: Indicates that this SPM device acts as a regulator + device for the core (CPU or Cache) the SPM is attached + to. Example: - regulator@2099000 { + power-controller@2099000 { compatible = "qcom,saw2"; reg = <0x02099000 0x1000>, <0x02009000 0x1000>; + regulator; }; diff --git a/Documentation/devicetree/bindings/dma/qcom_bam_dma.txt b/Documentation/devicetree/bindings/dma/qcom_bam_dma.txt index d75a9d767022..83c8e57ea2e7 100644 --- a/Documentation/devicetree/bindings/dma/qcom_bam_dma.txt +++ b/Documentation/devicetree/bindings/dma/qcom_bam_dma.txt @@ -1,7 +1,9 @@ QCOM BAM DMA controller Required properties: -- compatible: must contain "qcom,bam-v1.4.0" for MSM8974 +- compatible: must be one of the following: + * "qcom,bam-v1.4.0" for MSM8974, APQ8074 and APQ8084 + * "qcom,bam-v1.3.0" for APQ8064 and MSM8960 - reg: Address range for DMA registers - interrupts: Should contain the one interrupt shared by all channels - #dma-cells: must be <1>, the cell in the dmas property of the client device diff --git a/Documentation/devicetree/bindings/mfd/qcom-rpm.txt b/Documentation/devicetree/bindings/mfd/qcom-rpm.txt new file mode 100644 index 000000000000..85e31980017a --- /dev/null +++ b/Documentation/devicetree/bindings/mfd/qcom-rpm.txt @@ -0,0 +1,70 @@ +Qualcomm Resource Power Manager (RPM) + +This driver is used to interface with the Resource Power Manager (RPM) found in +various Qualcomm platforms.
The RPM allows each component in the system to vote +for the state of system resources, such as clocks, regulators and bus +frequencies. + +- compatible: + Usage: required + Value type: <string> + Definition: must be one of: + "qcom,rpm-apq8064" + "qcom,rpm-msm8660" + "qcom,rpm-msm8960" + +- reg: + Usage: required + Value type: <prop-encoded-array> + Definition: base address and size of the RPM's message RAM + +- interrupts: + Usage: required + Value type: <prop-encoded-array> + Definition: three entries specifying the RPM's: + 1. acknowledgement interrupt + 2. error interrupt + 3. wakeup interrupt + +- interrupt-names: + Usage: required + Value type: <string-array> + Definition: must be the three strings "ack", "err" and "wakeup", in order + +- #address-cells: + Usage: required + Value type: <u32> + Definition: must be 1 + +- #size-cells: + Usage: required + Value type: <u32> + Definition: must be 0 + +- qcom,ipc: + Usage: required + Value type: <prop-encoded-array> + + Definition: three entries specifying the outgoing ipc bit used for + signaling the RPM: + - phandle to a syscon node representing the apcs registers + - u32 representing offset to the register within the syscon + - u32 representing the ipc bit within the register + + += EXAMPLE + + #include <dt-bindings/mfd/qcom-rpm.h> + + rpm@108000 { + compatible = "qcom,rpm-msm8960"; + reg = <0x108000 0x1000>; + qcom,ipc = <&apcs 0x8 2>; + + interrupts = <0 19 0>, <0 21 0>, <0 22 0>; + interrupt-names = "ack", "err", "wakeup"; + + #address-cells = <1>; + #size-cells = <0>; + }; + diff --git a/Documentation/devicetree/bindings/pinctrl/qcom,pmic-gpio.txt b/Documentation/devicetree/bindings/pinctrl/qcom,pmic-gpio.txt new file mode 100644 index 000000000000..7ed08048516a --- /dev/null +++ b/Documentation/devicetree/bindings/pinctrl/qcom,pmic-gpio.txt @@ -0,0 +1,215 @@ +Qualcomm PMIC GPIO block + +This binding describes the GPIO block(s) found in the 8xxx series of +PMICs from Qualcomm.
+ +- compatible: + Usage: required + Value type: <string> + Definition: must be one of: + "qcom,pm8018-gpio" + "qcom,pm8038-gpio" + "qcom,pm8058-gpio" + "qcom,pm8917-gpio" + "qcom,pm8921-gpio" + "qcom,pm8941-gpio" + "qcom,pma8084-gpio" + +- reg: + Usage: required + Value type: <prop-encoded-array> + Definition: Register base of the GPIO block and length. + +- interrupts: + Usage: required + Value type: <prop-encoded-array> + Definition: Must contain an array of encoded interrupt specifiers for + each available GPIO + +- gpio-controller: + Usage: required + Value type: <none> + Definition: Mark the device node as a GPIO controller + +- #gpio-cells: + Usage: required + Value type: <u32> + Definition: Must be 2; + the first cell will be used to define gpio number and the + second denotes the flags for this gpio + +Please refer to ../gpio/gpio.txt and ../interrupt-controller/interrupts.txt for +a general description of GPIO and interrupt bindings. + +Please refer to pinctrl-bindings.txt in this directory for details of the +common pinctrl bindings used by client devices, including the meaning of the +phrase "pin configuration node". + +The pin configuration nodes act as a container for an arbitrary number of +subnodes. Each of these subnodes represents some desired configuration for a +pin or a list of pins. This configuration can include the +mux function to select on those pin(s), and various pin configuration +parameters, as listed below. + + +SUBNODES: + +The name of each subnode is not important; all subnodes should be enumerated +and processed purely based on their content. + +Each subnode only affects those parameters that are explicitly listed. In +other words, a subnode that lists a mux function but no pin configuration +parameters implies no information about any pin configuration parameters. +Similarly, a pin subnode that describes a pullup parameter implies no +information about e.g. the mux function. 
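The sparse-override rule above (a subnode affects only the parameters it explicitly lists) can be pictured as a layered merge over a pin's state; this Python fragment is purely illustrative and not part of the binding:

```python
# Pin configuration subnodes are sparse overlays: each subnode sets
# only the parameters it explicitly lists and says nothing about the
# rest (e.g. a pull-up-only subnode leaves the mux function alone).

def apply_subnodes(base, subnodes):
    """Layer {parameter: value} subnode dicts over a pin's base state."""
    state = dict(base)          # do not mutate the caller's base state
    for node in subnodes:
        state.update(node)      # unlisted parameters stay as they were
    return state
```

For example, layering a subnode that only sets a pull-up over a pin whose mux function is "normal" leaves the function untouched, exactly as the paragraph above describes.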
+ +The following generic properties as defined in pinctrl-bindings.txt are valid +to specify in a pin configuration subnode: + +- pins: + Usage: required + Value type: <string-array> + Definition: List of gpio pins affected by the properties specified in + this subnode. Valid pins are: + gpio1-gpio6 for pm8018 + gpio1-gpio12 for pm8038 + gpio1-gpio40 for pm8058 + gpio1-gpio38 for pm8917 + gpio1-gpio44 for pm8921 + gpio1-gpio36 for pm8941 + gpio1-gpio22 for pma8084 + +- function: + Usage: required + Value type: <string> + Definition: Specify the alternative function to be configured for the + specified pins. Valid values are: + "normal", + "paired", + "func1", + "func2", + "dtest1", + "dtest2", + "dtest3", + "dtest4" + +- bias-disable: + Usage: optional + Value type: <none> + Definition: The specified pins should be configured as no pull. + +- bias-pull-down: + Usage: optional + Value type: <none> + Definition: The specified pins should be configured as pull down. + +- bias-pull-up: + Usage: optional + Value type: <empty> + Definition: The specified pins should be configured as pull up. + +- qcom,pull-up-strength: + Usage: optional + Value type: <u32> + Definition: Specifies the strength to use for pull up, if selected. + Valid values are, as defined in + <dt-bindings/pinctrl/qcom,pmic-gpio.h>: + 1: 30uA (PMIC_GPIO_PULL_UP_30) + 2: 1.5uA (PMIC_GPIO_PULL_UP_1P5) + 3: 31.5uA (PMIC_GPIO_PULL_UP_31P5) + 4: 1.5uA + 30uA boost (PMIC_GPIO_PULL_UP_1P5_30) + If this property is omitted, 30uA strength will be used if + pull up is selected + +- bias-high-impedance: + Usage: optional + Value type: <none> + Definition: The specified pins will be put in high-Z mode and disabled. + +- input-enable: + Usage: optional + Value type: <none> + Definition: The specified pins are put in input mode. + +- output-high: + Usage: optional + Value type: <none> + Definition: The specified pins are configured in output mode, driven + high.
+ +- output-low: + Usage: optional + Value type: <none> + Definition: The specified pins are configured in output mode, driven + low. + +- power-source: + Usage: optional + Value type: <u32> + Definition: Selects the power source for the specified pins. Valid + power sources are defined per chip in + <dt-bindings/pinctrl/qcom,pmic-gpio.h> + +- qcom,drive-strength: + Usage: optional + Value type: <u32> + Definition: Selects the drive strength for the specified pins. Valid + drive strengths are: + 0: no (PMIC_GPIO_STRENGTH_NO) + 1: high (PMIC_GPIO_STRENGTH_HIGH) 0.9mA @ 1.8V - 1.9mA @ 2.6V + 2: medium (PMIC_GPIO_STRENGTH_MED) 0.6mA @ 1.8V - 1.25mA @ 2.6V + 3: low (PMIC_GPIO_STRENGTH_LOW) 0.15mA @ 1.8V - 0.3mA @ 2.6V + as defined in <dt-bindings/pinctrl/qcom,pmic-gpio.h> + +- drive-push-pull: + Usage: optional + Value type: <none> + Definition: The specified pins are configured in push-pull mode. + +- drive-open-drain: + Usage: optional + Value type: <none> + Definition: The specified pins are configured in open-drain mode. + +- drive-open-source: + Usage: optional + Value type: <none> + Definition: The specified pins are configured in open-source mode.
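For quick reference, the numeric encodings this binding lists for the pull-up strength and drive strength properties can be tabulated; the constant names are those the document attributes to <dt-bindings/pinctrl/qcom,pmic-gpio.h>:

```python
# Numeric encodings listed in this binding for qcom,pull-up-strength
# and qcom,drive-strength, mirroring the names the document gives for
# dt-bindings/pinctrl/qcom,pmic-gpio.h.

PULL_UP_STRENGTH = {
    "PMIC_GPIO_PULL_UP_30": 1,      # 30uA (default when omitted)
    "PMIC_GPIO_PULL_UP_1P5": 2,     # 1.5uA
    "PMIC_GPIO_PULL_UP_31P5": 3,    # 31.5uA
    "PMIC_GPIO_PULL_UP_1P5_30": 4,  # 1.5uA + 30uA boost
}

DRIVE_STRENGTH = {
    "PMIC_GPIO_STRENGTH_NO": 0,     # no drive
    "PMIC_GPIO_STRENGTH_HIGH": 1,   # 0.9mA @ 1.8V - 1.9mA @ 2.6V
    "PMIC_GPIO_STRENGTH_MED": 2,    # 0.6mA @ 1.8V - 1.25mA @ 2.6V
    "PMIC_GPIO_STRENGTH_LOW": 3,    # 0.15mA @ 1.8V - 0.3mA @ 2.6V
}
```

Note the drive-strength scale is inverted relative to intuition: 1 is the strongest drive and 3 the weakest, with 0 meaning no drive at all.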
+ +Example: + + pm8921_gpio: gpio@150 { + compatible = "qcom,pm8921-gpio"; + reg = <0x150 0x160>; + interrupts = <192 1>, <193 1>, <194 1>, + <195 1>, <196 1>, <197 1>, + <198 1>, <199 1>, <200 1>, + <201 1>, <202 1>, <203 1>, + <204 1>, <205 1>, <206 1>, + <207 1>, <208 1>, <209 1>, + <210 1>, <211 1>, <212 1>, + <213 1>, <214 1>, <215 1>, + <216 1>, <217 1>, <218 1>, + <219 1>, <220 1>, <221 1>, + <222 1>, <223 1>, <224 1>, + <225 1>, <226 1>, <227 1>, + <228 1>, <229 1>, <230 1>, + <231 1>, <232 1>, <233 1>, + <234 1>, <235 1>; + + gpio-controller; + #gpio-cells = <2>; + + pm8921_gpio_keys: gpio-keys { + volume-keys { + pins = "gpio20", "gpio21"; + function = "normal"; + + input-enable; + bias-pull-up; + drive-push-pull; + qcom,drive-strength = <PMIC_GPIO_STRENGTH_NO>; + power-source = <PM8921_GPIO_S4>; + }; + }; + }; diff --git a/Documentation/devicetree/bindings/pinctrl/qcom,pmic-mpp.txt b/Documentation/devicetree/bindings/pinctrl/qcom,pmic-mpp.txt new file mode 100644 index 000000000000..854774b194ed --- /dev/null +++ b/Documentation/devicetree/bindings/pinctrl/qcom,pmic-mpp.txt @@ -0,0 +1,162 @@ +Qualcomm PMIC Multi-Purpose Pin (MPP) block + +This binding describes the MPP block(s) found in the 8xxx series +of PMIC's from Qualcomm. + +- compatible: + Usage: required + Value type: <string> + Definition: Should contain one of: + "qcom,pm8841-mpp", + "qcom,pm8941-mpp", + "qcom,pma8084-mpp", + +- reg: + Usage: required + Value type: <prop-encoded-array> + Definition: Register base of the MPP block and length. 
+ +- interrupts: + Usage: required + Value type: <prop-encoded-array> + Definition: Must contain an array of encoded interrupt specifiers for + each available MPP + +- gpio-controller: + Usage: required + Value type: <none> + Definition: Mark the device node as a GPIO controller + +- #gpio-cells: + Usage: required + Value type: <u32> + Definition: Must be 2; + the first cell will be used to define MPP number and the + second denotes the flags for this MPP + +Please refer to ../gpio/gpio.txt and ../interrupt-controller/interrupts.txt for +a general description of GPIO and interrupt bindings. + +Please refer to pinctrl-bindings.txt in this directory for details of the +common pinctrl bindings used by client devices, including the meaning of the +phrase "pin configuration node". + +The pin configuration nodes act as a container for an arbitrary number of +subnodes. Each of these subnodes represents some desired configuration for a +pin or a list of pins. This configuration can include the +mux function to select on those pin(s), and various pin configuration +parameters, as listed below. + +SUBNODES: + +The name of each subnode is not important; all subnodes should be enumerated +and processed purely based on their content. + +Each subnode only affects those parameters that are explicitly listed. In +other words, a subnode that lists a mux function but no pin configuration +parameters implies no information about any pin configuration parameters. +Similarly, a pin subnode that describes a pullup parameter implies no +information about e.g. the mux function. + +The following generic properties as defined in pinctrl-bindings.txt are valid +to specify in a pin configuration subnode: + +- pins: + Usage: required + Value type: <string-array> + Definition: List of MPP pins affected by the properties specified in + this subnode. 
Valid pins are: + mpp1-mpp4 for pm8841 + mpp1-mpp8 for pm8941 + mpp1-mpp4 for pma8084 + +- function: + Usage: required + Value type: <string> + Definition: Specify the alternative function to be configured for the + specified pins. Valid values are: + "normal", + "paired", + "dtest1", + "dtest2", + "dtest3", + "dtest4" + +- bias-disable: + Usage: optional + Value type: <none> + Definition: The specified pins should be configured as no pull. + +- bias-pull-up: + Usage: optional + Value type: <u32> + Definition: The specified pins should be configured as pull up. + Valid values are 600, 10000 and 30000 in bidirectional mode + only, i.e. when operating in qcom,analog-mode and input and + outputs are enabled. The hardware ignores the configuration + when operating in other modes. + +- bias-high-impedance: + Usage: optional + Value type: <none> + Definition: The specified pins will be put in high-Z mode and disabled. + +- input-enable: + Usage: optional + Value type: <none> + Definition: The specified pins are put in input mode, i.e. their input + buffer is enabled + +- output-high: + Usage: optional + Value type: <none> + Definition: The specified pins are configured in output mode, driven + high. + +- output-low: + Usage: optional + Value type: <none> + Definition: The specified pins are configured in output mode, driven + low. + +- power-source: + Usage: optional + Value type: <u32> + Definition: Selects the power source for the specified pins. Valid power + sources are defined in <dt-bindings/pinctrl/qcom,pmic-mpp.h> + +- qcom,analog-mode: + Usage: optional + Value type: <none> + Definition: Selects Analog mode of operation: combined with input-enable + and/or output-high, output-low MPP could operate as + Bidirectional Logic, Analog Input, Analog Output. + +- qcom,amux-route: + Usage: optional + Value type: <u32> + Definition: Selects the source for analog input.
Valid values are + defined in <dt-bindings/pinctrl/qcom,pmic-mpp.h> + PMIC_MPP_AMUX_ROUTE_CH5, PMIC_MPP_AMUX_ROUTE_CH6... + +Example: + + mpps@a000 { + compatible = "qcom,pm8841-mpp"; + reg = <0xa000>; + gpio-controller; + #gpio-cells = <2>; + interrupts = <4 0xa0 0 0>, <4 0xa1 0 0>, <4 0xa2 0 0>, <4 0xa3 0 0>; + + pinctrl-names = "default"; + pinctrl-0 = <&pm8841_default>; + + pm8841_default: default { + gpio { + pins = "mpp1", "mpp2", "mpp3", "mpp4"; + function = "normal"; + input-enable; + power-source = <PM8841_MPP_S3>; + }; + }; + }; diff --git a/Documentation/devicetree/bindings/serial/qcom,msm-uartdm.txt b/Documentation/devicetree/bindings/serial/qcom,msm-uartdm.txt index ffa5b784c66e..965bf92a3fe4 100644 --- a/Documentation/devicetree/bindings/serial/qcom,msm-uartdm.txt +++ b/Documentation/devicetree/bindings/serial/qcom,msm-uartdm.txt @@ -27,27 +27,52 @@ Optional properties: - dmas: Should contain dma specifiers for transmit and receive channels - dma-names: Should contain "tx" for transmit and "rx" for receive channels +Note: Aliases may be defined to ensure the correct ordering of the UARTs. +The alias serialN will result in the UART being assigned port N. If any +serialN alias exists, then an alias must exist for each enabled UART. The +serialN aliases should be in a .dts file instead of in a .dtsi file. + Examples: -A uartdm v1.4 device with dma capabilities. - -serial@f991e000 { - compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm"; - reg = <0xf991e000 0x1000>; - interrupts = <0 108 0x0>; - clocks = <&blsp1_uart2_apps_cxc>, <&blsp1_ahb_cxc>; - clock-names = "core", "iface"; - dmas = <&dma0 0>, <&dma0 1>; - dma-names = "tx", "rx"; -}; - -A uartdm v1.3 device without dma capabilities and part of a GSBI complex. 
- -serial@19c40000 { - compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm"; - reg = <0x19c40000 0x1000>, - <0x19c00000 0x1000>; - interrupts = <0 195 0x0>; - clocks = <&gsbi5_uart_cxc>, <&gsbi5_ahb_cxc>; - clock-names = "core", "iface"; -}; +- A uartdm v1.4 device with dma capabilities. + + serial@f991e000 { + compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm"; + reg = <0xf991e000 0x1000>; + interrupts = <0 108 0x0>; + clocks = <&blsp1_uart2_apps_cxc>, <&blsp1_ahb_cxc>; + clock-names = "core", "iface"; + dmas = <&dma0 0>, <&dma0 1>; + dma-names = "tx", "rx"; + }; + +- A uartdm v1.3 device without dma capabilities and part of a GSBI complex. + + serial@19c40000 { + compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm"; + reg = <0x19c40000 0x1000>, + <0x19c00000 0x1000>; + interrupts = <0 195 0x0>; + clocks = <&gsbi5_uart_cxc>, <&gsbi5_ahb_cxc>; + clock-names = "core", "iface"; + }; + +- serialN alias. + + aliases { + serial0 = &uarta; + serial1 = &uartc; + serial2 = &uartb; + }; + + uarta: serial@12490000 { + status = "ok"; + }; + + uartb: serial@16340000 { + status = "ok"; + }; + + uartc: serial@1a240000 { + status = "ok"; + }; diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug index 439d17d9a698..cf85d7eb3747 100644 --- a/arch/arm/Kconfig.debug +++ b/arch/arm/Kconfig.debug @@ -372,6 +372,13 @@ choice Say Y here if you want kernel low-level debugging support on Amlogic Meson6 based platforms on the UARTAO. + config DEBUG_KS8695_UART + bool "KS8695 Debug UART" + depends on ARCH_KS8695 + help + Say Y here if you want kernel low-level debugging support + on KS8695. + config DEBUG_MMP_UART2 bool "Kernel low-level debugging message via MMP UART2" depends on ARCH_MMP @@ -464,6 +471,14 @@ choice Say Y here if you want kernel low-level debugging support on Vybrid based platforms.
+ config DEBUG_NETX_UART + bool "Kernel low-level debugging messages via NetX UART" + depends on ARCH_NETX + select DEBUG_UART_NETX + help + Say Y here if you want kernel low-level debugging support + on Hilscher NetX based platforms. + config DEBUG_NOMADIK_UART bool "Kernel low-level debugging messages via NOMADIK UART" depends on ARCH_NOMADIK @@ -488,6 +503,30 @@ choice Say Y here if you want kernel low-level debugging support on TI-NSPIRE CX models. + config DEBUG_OMAP1UART1 + bool "Kernel low-level debugging via OMAP1 UART1" + depends on ARCH_OMAP1 + select DEBUG_UART_8250 + help + Say Y here if you want kernel low-level debugging support + on OMAP1 based platforms (except OMAP730) on the UART1. + + config DEBUG_OMAP1UART2 + bool "Kernel low-level debugging via OMAP1 UART2" + depends on ARCH_OMAP1 + select DEBUG_UART_8250 + help + Say Y here if you want kernel low-level debugging support + on OMAP1 based platforms (except OMAP730) on the UART2. + + config DEBUG_OMAP1UART3 + bool "Kernel low-level debugging via OMAP1 UART3" + depends on ARCH_OMAP1 + select DEBUG_UART_8250 + help + Say Y here if you want kernel low-level debugging support + on OMAP1 based platforms (except OMAP730) on the UART3. + config DEBUG_OMAP2UART1 bool "OMAP2/3/4 UART1 (omap2/3 sdp boards and some omap3 boards)" depends on ARCH_OMAP2PLUS @@ -530,6 +569,30 @@ choice depends on ARCH_OMAP2PLUS select DEBUG_OMAP2PLUS_UART + config DEBUG_OMAP7XXUART1 + bool "Kernel low-level debugging via OMAP730 UART1" + depends on ARCH_OMAP730 + select DEBUG_UART_8250 + help + Say Y here if you want kernel low-level debugging support + on OMAP730 based platforms on the UART1. + + config DEBUG_OMAP7XXUART2 + bool "Kernel low-level debugging via OMAP730 UART2" + depends on ARCH_OMAP730 + select DEBUG_UART_8250 + help + Say Y here if you want kernel low-level debugging support + on OMAP730 based platforms on the UART2.
+ + config DEBUG_OMAP7XXUART3 + bool "Kernel low-level debugging via OMAP730 UART3" + depends on ARCH_OMAP730 + select DEBUG_UART_8250 + help + Say Y here if you want kernel low-level debugging support + on OMAP730 based platforms on the UART3. + config DEBUG_TI81XXUART1 bool "Kernel low-level debugging messages via TI81XX UART1 (ti8148evm)" depends on ARCH_OMAP2PLUS @@ -723,6 +786,30 @@ choice their output to UART 2. The port must have been initialised by the boot-loader before use. + config DEBUG_SA1100_UART1 + depends on ARCH_SA1100 + select DEBUG_SA1100_UART + bool "Kernel low-level debugging messages via SA1100 Ser1" + help + Say Y here if you want kernel low-level debugging support + on SA1100 based platforms. + + config DEBUG_SA1100_UART2 + depends on ARCH_SA1100 + select DEBUG_SA1100_UART + bool "Kernel low-level debugging messages via SA1100 Ser2" + help + Say Y here if you want kernel low-level debugging support + on SA1100 based platforms. + + config DEBUG_SA1100_UART3 + depends on ARCH_SA1100 + select DEBUG_SA1100_UART + bool "Kernel low-level debugging messages via SA1100 Ser3" + help + Say Y here if you want kernel low-level debugging support + on SA1100 based platforms. + config DEBUG_SOCFPGA_UART depends on ARCH_SOCFPGA bool "Use SOCFPGA UART for low-level debug" @@ -909,15 +996,6 @@ choice This option selects UART0 on VIA/Wondermedia System-on-a-chip devices, including VT8500, WM8505, WM8650 and WM8850. - config DEBUG_LL_UART_NONE - bool "No low-level debugging UART" - depends on !ARCH_MULTIPLATFORM - help - Say Y here if your platform doesn't provide a UART option - above. This relies on your platform choosing the right UART - definition internally in order for low-level debugging to - work. 
- config DEBUG_ICEDCC bool "Kernel low-level debugging via EmbeddedICE DCC channel" help @@ -1035,6 +1113,10 @@ config DEBUG_TEGRA_UART bool depends on ARCH_TEGRA +config DEBUG_SA1100_UART + bool + depends on ARCH_SA1100 + config DEBUG_STI_UART bool depends on ARCH_STI @@ -1059,10 +1141,13 @@ config DEBUG_LL_INCLUDE DEBUG_IMX6Q_UART || \ DEBUG_IMX6SL_UART || \ DEBUG_IMX6SX_UART + default "debug/ks8695.S" if DEBUG_KS8695_UART default "debug/msm.S" if DEBUG_MSM_UART || DEBUG_QCOM_UARTDM + default "debug/netx.S" if DEBUG_NETX_UART default "debug/omap2plus.S" if DEBUG_OMAP2PLUS_UART default "debug/s3c24xx.S" if DEBUG_S3C24XX_UART default "debug/s5pv210.S" if DEBUG_S5PV210_UART + default "debug/sa1100.S" if DEBUG_SA1100_UART default "debug/sirf.S" if DEBUG_SIRFPRIMA2_UART1 || DEBUG_SIRFMARCO_UART1 default "debug/sti.S" if DEBUG_STI_UART default "debug/tegra.S" if DEBUG_TEGRA_UART @@ -1076,14 +1161,9 @@ config DEBUG_LL_INCLUDE # Compatibility options for PL01x config DEBUG_UART_PL01X - def_bool ARCH_EP93XX || \ - ARCH_INTEGRATOR || \ - ARCH_SPEAR3XX || \ - ARCH_SPEAR6XX || \ - ARCH_SPEAR13XX || \ - ARCH_VERSATILE - -# Compatibility options for 8250 + bool + +#Compatibility options for 8250 config DEBUG_UART_8250 def_bool ARCH_DOVE || ARCH_EBSA110 || \ (FOOTBRIDGE && !DEBUG_DC21285_PORT) || \ @@ -1097,6 +1177,7 @@ config DEBUG_UART_BCM63XX config DEBUG_UART_PHYS hex "Physical base address of debug UART" + default 0x00100a00 if DEBUG_NETX_UART default 0x01c20000 if DEBUG_DAVINCI_DMx_UART0 default 0x01c28000 if DEBUG_SUNXI_UART0 default 0x01c28400 if DEBUG_SUNXI_UART1 @@ -1135,6 +1216,9 @@ config DEBUG_UART_PHYS default 0x78000000 if DEBUG_CNS3XXX default 0x7c0003f8 if FOOTBRIDGE default 0x78000000 if DEBUG_CNS3XXX + default 0x80010000 if DEBUG_SA1100_UART1 + default 0x80030000 if DEBUG_SA1100_UART2 + default 0x80050000 if DEBUG_SA1100_UART3 default 0x80070000 if DEBUG_IMX23_UART default 0x80074000 if DEBUG_IMX28_UART default 0x80230000 if DEBUG_PICOXCELL_UART @@ -1164,6 
+1248,9 @@ config DEBUG_UART_PHYS default 0xff690000 if DEBUG_RK32_UART2 default 0xffc02000 if DEBUG_SOCFPGA_UART default 0xffd82340 if ARCH_IOP13XX + default 0xfffb0000 if DEBUG_OMAP1UART1 || DEBUG_OMAP7XXUART1 + default 0xfffb0800 if DEBUG_OMAP1UART2 || DEBUG_OMAP7XXUART2 + default 0xfffb9800 if DEBUG_OMAP1UART3 || DEBUG_OMAP7XXUART3 default 0xfff36000 if DEBUG_HIGHBANK_UART default 0xfffe8600 if DEBUG_UART_BCM63XX default 0xfffff700 if ARCH_IOP33X @@ -1171,10 +1258,12 @@ config DEBUG_UART_PHYS DEBUG_LL_UART_EFM32 || \ DEBUG_UART_8250 || DEBUG_UART_PL01X || DEBUG_MESON_UARTAO || \ DEBUG_MSM_UART || DEBUG_QCOM_UARTDM || DEBUG_S3C24XX_UART || \ - DEBUG_UART_BCM63XX + DEBUG_UART_BCM63XX || DEBUG_NETX_UART || DEBUG_SA1100_UART || \ + ARCH_EP93XX config DEBUG_UART_VIRT hex "Virtual base address of debug UART" + default 0xe0000a00 if DEBUG_NETX_UART default 0xe0010fe0 if ARCH_RPC default 0xe1000000 if DEBUG_MSM_UART default 0xf0000be0 if ARCH_EBSA110 @@ -1199,6 +1288,9 @@ config DEBUG_UART_VIRT default 0xf7fc9000 if DEBUG_BERLIN_UART default 0xf8007000 if DEBUG_HIP04_UART default 0xf8009000 if DEBUG_VEXPRESS_UART0_CA9 + default 0xf8010000 if DEBUG_SA1100_UART1 + default 0xf8030000 if DEBUG_SA1100_UART2 + default 0xf8050000 if DEBUG_SA1100_UART3 default 0xf8090000 if DEBUG_VEXPRESS_UART0_RS1 default 0xfa71e000 if DEBUG_QCOM_UARTDM default 0xfb002000 if DEBUG_CNS3XXX @@ -1238,18 +1330,22 @@ config DEBUG_UART_VIRT default 0xfef00000 if ARCH_IXP4XX && !CPU_BIG_ENDIAN default 0xfef00003 if ARCH_IXP4XX && CPU_BIG_ENDIAN default 0xfef36000 if DEBUG_HIGHBANK_UART + default 0xfefb0000 if DEBUG_OMAP1UART1 || DEBUG_OMAP7XXUART1 + default 0xfefb0800 if DEBUG_OMAP1UART2 || DEBUG_OMAP7XXUART2 + default 0xfefb9800 if DEBUG_OMAP1UART3 || DEBUG_OMAP7XXUART3 default 0xfefff700 if ARCH_IOP33X default 0xff003000 if DEBUG_U300_UART default DEBUG_UART_PHYS if !MMU depends on DEBUG_LL_UART_8250 || DEBUG_LL_UART_PL01X || \ DEBUG_UART_8250 || DEBUG_UART_PL01X || DEBUG_MESON_UARTAO || \ 
DEBUG_MSM_UART || DEBUG_QCOM_UARTDM || DEBUG_S3C24XX_UART || \ - DEBUG_UART_BCM63XX + DEBUG_UART_BCM63XX || DEBUG_NETX_UART || DEBUG_SA1100_UART config DEBUG_UART_8250_SHIFT int "Register offset shift for the 8250 debug UART" depends on DEBUG_LL_UART_8250 || DEBUG_UART_8250 - default 0 if FOOTBRIDGE || ARCH_IOP32X + default 0 if FOOTBRIDGE || ARCH_IOP32X || \ + DEBUG_OMAP7XXUART1 || DEBUG_OMAP7XXUART2 || DEBUG_OMAP7XXUART3 default 2 config DEBUG_UART_8250_WORD diff --git a/arch/arm/boot/dts/qcom-apq8064-cm-qs600.dts b/arch/arm/boot/dts/qcom-apq8064-cm-qs600.dts index 5d75666f7f6c..66c60f9a456e 100644 --- a/arch/arm/boot/dts/qcom-apq8064-cm-qs600.dts +++ b/arch/arm/boot/dts/qcom-apq8064-cm-qs600.dts @@ -1,4 +1,5 @@ #include "qcom-apq8064-v2.0.dtsi" +#include <dt-bindings/gpio/gpio.h> / { model = "CompuLab CM-QS600"; @@ -12,6 +13,14 @@ function = "gsbi1"; }; }; + + card_detect: card_detect { + mux { + pins = "gpio26"; + function = "gpio"; + bias-disable; + }; + }; }; gsbi@12440000 { @@ -27,7 +36,7 @@ eeprom: eeprom@50 { compatible = "24c02"; reg = <0x50>; - pagesize = <32>; + pagesize = <16>; }; }; }; @@ -40,6 +49,60 @@ }; }; + /* OTG */ + usb1_phy:phy@12500000 { + status = "ok"; + }; + + usb3_phy:phy@12520000 { + status = "ok"; + }; + + usb4_phy:phy@12530000 { + status = "ok"; + }; + + gadget1:gadget@12500000 { + status = "ok"; + }; + + /* OTG */ + usb1: usb@12500000 { + status = "ok"; + }; + + usb3: usb@12520000 { + status = "ok"; + }; + + usb4: usb@12530000 { + status = "ok"; + }; + + /* on board fixed 3.3v supply */ + v3p3_pcieclk: v3p3-pcieclk { + compatible = "regulator-fixed"; + regulator-name = "PCIE V3P3"; + regulator-min-microvolt = <3300000>; + regulator-max-microvolt = <3300000>; + regulator-always-on; + }; + + pci@1b500000 { + status = "ok"; + pcie-clk-supply = <&v3p3_pcieclk>; + avdd-supply = <&pm8921_s3>; + vdd-supply = <&pm8921_lvs6>; + ext-3p3v-supply = <&ext_3p3v>; + qcom,external-phy-refclk; + reset-gpio = <&tlmm_pinmux 27 GPIO_ACTIVE_LOW>; + }; + 
+ wlan0 { + compatible = "atheros,ath6kl"; + reset-gpio = <&pm8921_gpio 42 GPIO_ACTIVE_LOW>; + }; + amba { /* eMMC */ sdcc1: sdcc@12400000 { @@ -49,6 +112,9 @@ /* External micro SD card */ sdcc3: sdcc@12180000 { status = "okay"; + pinctrl-names = "default"; + pinctrl-0 = <&card_detect>; + cd-gpios = <&tlmm_pinmux 26 GPIO_ACTIVE_LOW>; }; /* WLAN */ sdcc4: sdcc@121c0000 { diff --git a/arch/arm/boot/dts/qcom-apq8064-ifc6410.dts b/arch/arm/boot/dts/qcom-apq8064-ifc6410.dts index b396c8311b27..5f5a7b531345 100644 --- a/arch/arm/boot/dts/qcom-apq8064-ifc6410.dts +++ b/arch/arm/boot/dts/qcom-apq8064-ifc6410.dts @@ -1,9 +1,14 @@ #include "qcom-apq8064-v2.0.dtsi" +#include <dt-bindings/gpio/gpio.h> / { model = "Qualcomm APQ8064/IFC6410"; compatible = "qcom,apq8064-ifc6410", "qcom,apq8064"; + aliases { + serial0 = &serial0; + }; + soc { pinctrl@800000 { i2c1_pins: i2c1 { @@ -12,6 +17,21 @@ function = "gsbi1"; }; }; + + uart_pins: uart_pins { + mux { + pins = "gpio14", "gpio15", "gpio16", "gpio17"; + function = "gsbi6"; + }; + }; + + card_detect: card_detect { + mux { + pins = "gpio26"; + function = "gpio"; + bias-disable; + }; + }; }; gsbi@12440000 { @@ -32,6 +52,18 @@ }; }; + gsbi@16500000 { + status = "ok"; + qcom,mode = <GSBI_PROT_I2C_UART>; + + serial@16540000 { + status = "ok"; + + pinctrl-names = "default"; + pinctrl-0 = <&uart_pins>; + }; + }; + gsbi@16600000 { status = "ok"; qcom,mode = <GSBI_PROT_I2C_UART>; @@ -40,6 +72,65 @@ }; }; + /* OTG */ + usb1_phy:phy@12500000 { + status = "ok"; + }; + + usb3_phy:phy@12520000 { + status = "ok"; + }; + + usb4_phy:phy@12530000 { + status = "ok"; + }; + + gadget1:gadget@12500000 { + status = "ok"; + }; + + /* OTG */ + usb1: usb@12500000 { + status = "ok"; + }; + + usb3: usb@12520000 { + status = "ok"; + }; + + usb4: usb@12530000 { + status = "ok"; + }; + + /* on board fixed 3.3v supply */ + v3p3_pcieclk: v3p3-pcieclk { + compatible = "regulator-fixed"; + regulator-name = "PCIE V3P3"; + regulator-min-microvolt = <3300000>; + 
regulator-max-microvolt = <3300000>; + regulator-always-on; + }; + + pci@1b500000 { + status = "ok"; + pcie-clk-supply = <&v3p3_pcieclk>; + avdd-supply = <&pm8921_s3>; + vdd-supply = <&pm8921_lvs6>; + ext-3p3v-supply = <&ext_3p3v>; + qcom,external-phy-refclk; + reset-gpio = <&tlmm_pinmux 27 GPIO_ACTIVE_LOW>; + }; + + wlan0 { + compatible = "atheros,ath6kl"; + reset-gpio = <&pm8921_gpio 42 GPIO_ACTIVE_LOW>; + }; + + bluetooth { + compatible = "bt_reset"; + reset-gpio = <&pm8921_gpio 43 GPIO_ACTIVE_LOW>; + }; + amba { /* eMMC */ sdcc1: sdcc@12400000 { @@ -49,6 +140,9 @@ /* External micro SD card */ sdcc3: sdcc@12180000 { status = "okay"; + pinctrl-names = "default"; + pinctrl-0 = <&card_detect>; + cd-gpios = <&tlmm_pinmux 26 GPIO_ACTIVE_LOW>; }; /* WLAN */ sdcc4: sdcc@121c0000 { diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi index b3154c071652..54a4ae144b88 100644 --- a/arch/arm/boot/dts/qcom-apq8064.dtsi +++ b/arch/arm/boot/dts/qcom-apq8064.dtsi @@ -1,16 +1,28 @@ /dts-v1/; #include "skeleton.dtsi" +#include <dt-bindings/gpio/gpio.h> #include <dt-bindings/clock/qcom,gcc-msm8960.h> +#include <dt-bindings/reset/qcom,gcc-msm8960.h> #include <dt-bindings/clock/qcom,mmcc-msm8960.h> +#include <dt-bindings/mfd/qcom-rpm.h> #include <dt-bindings/soc/qcom,gsbi.h> #include <dt-bindings/interrupt-controller/arm-gic.h> - +#include <dt-bindings/pinctrl/qcom,pmic-gpio.h> / { model = "Qualcomm APQ8064"; compatible = "qcom,apq8064"; interrupt-parent = <&intc>; + reserved-memory { + #address-cells = <1>; + #size-cells = <1>; + ranges; + smem@80000000 { + reg = <0x80000000 0x200000>; + no-map; + }; + }; cpus { #address-cells = <1>; #size-cells = <0>; @@ -23,6 +35,10 @@ next-level-cache = <&L2>; qcom,acc = <&acc0>; qcom,saw = <&saw0>; + clocks = <&kraitcc 0>; + clock-names = "cpu"; + clock-latency = <100000>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; cpu@1 { @@ -33,6 +49,10 @@ next-level-cache = <&L2>; qcom,acc = <&acc1>; qcom,saw = <&saw1>; + 
clocks = <&kraitcc 1>; + clock-names = "cpu"; + clock-latency = <100000>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; cpu@2 { @@ -43,6 +63,10 @@ next-level-cache = <&L2>; qcom,acc = <&acc2>; qcom,saw = <&saw2>; + clocks = <&kraitcc 2>; + clock-names = "cpu"; + clock-latency = <100000>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; cpu@3 { @@ -53,12 +77,33 @@ next-level-cache = <&L2>; qcom,acc = <&acc3>; qcom,saw = <&saw3>; + clocks = <&kraitcc 3>; + clock-names = "cpu"; + clock-latency = <100000>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; L2: l2-cache { compatible = "cache"; cache-level = <2>; }; + + idle-states { + CPU_STBY: standby { + compatible = "qcom,idle-state-stby", "arm,idle-state"; + entry-latency-us = <1>; + exit-latency-us = <1>; + min-residency-us = <2>; + }; + + CPU_SPC: spc { + compatible = "qcom,idle-state-spc", "arm,idle-state"; + entry-latency-us = <400>; + exit-latency-us = <900>; + min-residency-us = <3000>; + }; + }; + }; cpu-pmu { @@ -66,12 +111,278 @@ interrupts = <1 10 0x304>; }; + qcom,pvs { + qcom,pvs-format-a; + qcom,speed0-pvs0-bin-v0 = + < 384000000 950000 >, + < 486000000 975000 >, + < 594000000 1000000 >, + < 702000000 1025000 >, + < 810000000 1075000 >, + < 918000000 1100000 >; + + qcom,speed0-pvs1-bin-v0 = + < 384000000 900000 >, + < 486000000 925000 >, + < 594000000 950000 >, + < 702000000 975000 >, + < 810000000 1025000 >, + < 918000000 1050000 >; + + qcom,speed0-pvs3-bin-v0 = + < 384000000 850000 >, + < 486000000 875000 >, + < 594000000 900000 >, + < 702000000 925000 >, + < 810000000 975000 >, + < 918000000 1000000 >; + + qcom,speed0-pvs4-bin-v0 = + < 384000000 850000 >, + < 486000000 875000 >, + < 594000000 900000 >, + < 702000000 925000 >, + < 810000000 962500 >, + < 918000000 975000 >; + + qcom,speed1-pvs0-bin-v0 = + < 384000000 950000 >, + < 486000000 950000 >, + < 594000000 950000 >, + < 702000000 962500 >, + < 810000000 1000000 >, + < 918000000 1025000 >; + + qcom,speed1-pvs1-bin-v0 = + < 384000000 950000 >, + < 
486000000 950000 >, + < 594000000 950000 >, + < 702000000 962500 >, + < 810000000 975000 >, + < 918000000 1000000 >; + + qcom,speed1-pvs2-bin-v0 = + < 384000000 925000 >, + < 486000000 925000 >, + < 594000000 925000 >, + < 702000000 925000 >, + < 810000000 937500 >, + < 918000000 950000 >; + + qcom,speed1-pvs3-bin-v0 = + < 384000000 900000 >, + < 486000000 900000 >, + < 594000000 900000 >, + < 702000000 900000 >, + < 810000000 900000 >, + < 918000000 925000 >; + + qcom,speed1-pvs4-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 887500 >, + < 918000000 900000 >; + + qcom,speed1-pvs5-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 887500 >, + < 918000000 900000 >; + + qcom,speed1-pvs6-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 887500 >, + < 918000000 900000 >; + + qcom,speed2-pvs0-bin-v0 = + < 384000000 950000 >, + < 486000000 950000 >, + < 594000000 950000 >, + < 702000000 950000 >, + < 810000000 962500 >, + < 918000000 975000 >; + + qcom,speed2-pvs1-bin-v0 = + < 384000000 925000 >, + < 486000000 925000 >, + < 594000000 925000 >, + < 702000000 925000 >, + < 810000000 937500 >, + < 918000000 950000 >; + + qcom,speed2-pvs2-bin-v0 = + < 384000000 900000 >, + < 486000000 900000 >, + < 594000000 900000 >, + < 702000000 900000 >, + < 810000000 912500 >, + < 918000000 925000 >; + + qcom,speed2-pvs3-bin-v0 = + < 384000000 900000 >, + < 486000000 900000 >, + < 594000000 900000 >, + < 702000000 900000 >, + < 810000000 900000 >, + < 918000000 912500 >; + + qcom,speed2-pvs4-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 887500 >, + < 918000000 900000 >; + + qcom,speed2-pvs5-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 
887500 >, + < 918000000 900000 >; + + qcom,speed2-pvs6-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 887500 >, + < 918000000 900000 >; + + qcom,speed14-pvs0-bin-v0 = + < 384000000 950000 >, + < 486000000 950000 >, + < 594000000 950000 >, + < 702000000 962500 >, + < 810000000 1000000 >, + < 918000000 1025000 >; + + qcom,speed14-pvs1-bin-v0 = + < 384000000 950000 >, + < 486000000 950000 >, + < 594000000 950000 >, + < 702000000 962500 >, + < 810000000 975000 >, + < 918000000 1000000 >; + + qcom,speed14-pvs2-bin-v0 = + < 384000000 925000 >, + < 486000000 925000 >, + < 594000000 925000 >, + < 702000000 925000 >, + < 810000000 937500 >, + < 918000000 950000 >; + + qcom,speed14-pvs3-bin-v0 = + < 384000000 900000 >, + < 486000000 900000 >, + < 594000000 900000 >, + < 702000000 900000 >, + < 810000000 900000 >, + < 918000000 925000 >; + + qcom,speed14-pvs4-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 887500 >, + < 918000000 900000 >; + + qcom,speed14-pvs5-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 887500 >, + < 918000000 900000 >; + + qcom,speed14-pvs6-bin-v0 = + < 384000000 875000 >, + < 486000000 875000 >, + < 594000000 875000 >, + < 702000000 875000 >, + < 810000000 887500 >, + < 918000000 900000 >; + }; + + kraitcc: clock-controller { + compatible = "qcom,krait-cc-v1"; + #clock-cells = <1>; + }; + + clocks { + sleep_clk: sleep_clk { + compatible = "fixed-clock"; + clock-frequency = <32768>; + #clock-cells = <0>; + }; + }; + soc: soc { #address-cells = <1>; #size-cells = <1>; ranges; compatible = "simple-bus"; + smd { + compatible = "qcom,smd-apq8064"; + reg = <0x02011000 0x100>, <0x02011000 0x100>; + reg-names = "adsp_a11", "adsp_a11_smsm"; + interrupts = <0 90 IRQ_TYPE_EDGE_RISING>, <0 89 IRQ_TYPE_EDGE_RISING>; + interrupt-names = "adsp_a11", 
"adsp_a11_smsm"; + }; + + pil_q6v4: pil@28800000 { + compatible = "qcom,pil-qdsp6v4"; + reg = <0x28800000 0x100>; + strap-tcm-base = <0x01460000>; + strap-ahb-upper = <0x00290000>; + strap-ahb-lower = <0x00000280>; + pas-id = <1>; /* PAS_Q6 */ + pas-name = "q6"; + resets = <&gcc SFAB_LPASS_RESET>; + reset-names = "lpass"; + core-vdd-supply = <&pm8921_l26>; + }; + + /* + FIXME these should go properly in to sound nodes, + Let them stay like this till we have a proper + upstream plan. + */ + snd { + compatible = "qcom,snd-apq8064"; + }; + + dai_fe { + compatible = "qcom,msm-dai-fe"; + }; + + dai_hdmi { + compatible = "qcom,msm-dai-q6-hdmi"; + }; + + codec_hdmi { + compatible = "linux,hdmi-audio"; + }; + + msm_pcm { + compatible = "qcom,msm-pcm-dsp"; + }; + + msm_pcm_routing { + compatible = "qcom,msm-pcm-routing"; + }; + tlmm_pinmux: pinctrl@800000 { compatible = "qcom,apq8064-pinctrl"; reg = <0x800000 0x4000>; @@ -92,6 +403,20 @@ }; }; + hdmi_pinctrl: hdmi-pinctrl { + mux1 { + pins = "gpio69", "gpio70", "gpio71"; + function = "hdmi"; + bias-pull-up; + drive-strength = <2>; + }; + mux2 { + pins = "gpio72"; + function = "hdmi"; + bias-pull-down; + drive-strength = <16>; + }; + }; ps_hold: ps_hold { mux { pins = "gpio78"; @@ -119,46 +444,63 @@ cpu-offset = <0x80000>; }; + watchdog@208a038 { + compatible = "qcom,kpss-wdt-apq8064"; + reg = <0x0208a038 0x40>; + clocks = <&sleep_clk>; + timeout-sec = <10>; + }; + acc0: clock-controller@2088000 { compatible = "qcom,kpss-acc-v1"; reg = <0x02088000 0x1000>, <0x02008000 0x1000>; + clock-output-names = "acpu0_aux"; }; acc1: clock-controller@2098000 { compatible = "qcom,kpss-acc-v1"; reg = <0x02098000 0x1000>, <0x02008000 0x1000>; + clock-output-names = "acpu1_aux"; }; acc2: clock-controller@20a8000 { compatible = "qcom,kpss-acc-v1"; reg = <0x020a8000 0x1000>, <0x02008000 0x1000>; + clock-output-names = "acpu2_aux"; }; acc3: clock-controller@20b8000 { compatible = "qcom,kpss-acc-v1"; reg = <0x020b8000 0x1000>, <0x02008000 
0x1000>; + clock-output-names = "acpu3_aux"; + }; + + l2cc: clock-controller@2011000 { + compatible = "qcom,kpss-gcc"; + reg = <0x2011000 0x1000>; + clock-output-names = "acpu_l2_aux"; }; - saw0: regulator@2089000 { - compatible = "qcom,saw2"; + saw0: power-controller@2089000 { + compatible = "qcom,apq8064-saw2-v1.1-cpu", "qcom,saw2"; reg = <0x02089000 0x1000>, <0x02009000 0x1000>; regulator; }; - saw1: regulator@2099000 { - compatible = "qcom,saw2"; + saw1: power-controller@2099000 { + compatible = "qcom,apq8064-saw2-v1.1-cpu", "qcom,saw2"; reg = <0x02099000 0x1000>, <0x02009000 0x1000>; regulator; }; - saw2: regulator@20a9000 { - compatible = "qcom,saw2"; + saw2: power-controller@20a9000 { + compatible = "qcom,apq8064-saw2-v1.1-cpu", "qcom,saw2"; reg = <0x020a9000 0x1000>, <0x02009000 0x1000>; regulator; }; - saw3: regulator@20b9000 { - compatible = "qcom,saw2"; + saw3: power-controller@20b9000 { + compatible = "qcom,apq8064-saw2-v1.1-cpu", "qcom,saw2"; reg = <0x020b9000 0x1000>, <0x02009000 0x1000>; regulator; }; @@ -205,6 +547,37 @@ }; }; + adm_dma: dma@18300000 { + compatible = "qcom,adm"; + reg = <0x18320000 0x100000>; + interrupts = <0 171 0>; + #dma-cells = <1>; + + clocks = <&gcc ADM0_CLK>, <&gcc ADM0_PBUS_CLK>; + clock-names = "core", "iface"; + }; + + gsbi6: gsbi@16500000 { + status = "disabled"; + compatible = "qcom,gsbi-v1.0.0"; + reg = <0x16500000 0x03>; + clocks = <&gcc GSBI6_H_CLK>; + clock-names = "iface"; + #address-cells = <1>; + #size-cells = <1>; + ranges; + + serial1: serial@16540000 { + compatible = "qcom,msm-uartdm-v1.3-hs", "qcom,msm-uartdm-hs"; + reg = <0x16540000 0x100>, + <0x16500000 0x03>; + interrupts = <0 156 0x0>; + clocks = <&gcc GSBI6_UART_CLK>, <&gcc GSBI6_H_CLK>; + clock-names = "core", "iface"; + status = "disabled"; + }; + }; + gsbi7: gsbi@16600000 { status = "disabled"; compatible = "qcom,gsbi-v1.0.0"; @@ -215,7 +588,7 @@ #size-cells = <1>; ranges; - serial@16640000 { + serial0: serial@16640000 { compatible = 
"qcom,msm-uartdm-v1.3", "qcom,msm-uartdm"; reg = <0x16640000 0x1000>, <0x16600000 0x1000>; @@ -226,10 +599,78 @@ }; }; + ext_3p3v: regulator-fixed@1 { + compatible = "regulator-fixed"; + regulator-min-microvolt = <3300000>; + regulator-max-microvolt = <3300000>; + regulator-name = "ext_3p3v"; + regulator-type = "voltage"; + startup-delay-us = <0>; + gpio = <&tlmm_pinmux 77 GPIO_ACTIVE_HIGH>; + enable-active-high; + regulator-boot-on; + }; + qcom,ssbi@500000 { compatible = "qcom,ssbi"; reg = <0x00500000 0x1000>; qcom,controller-type = "pmic-arbiter"; + + pmicintc: pmic@0 { + compatible = "qcom,pm8921"; + interrupt-parent = <&tlmm_pinmux>; + interrupts = <74 8>; + #interrupt-cells = <2>; + interrupt-controller; + #address-cells = <1>; + #size-cells = <0>; + pm8921_gpio: gpio@150 { + compatible = "qcom,pm8921-gpio"; + reg = <0x150>; + interrupts = <192 1>, <193 1>, <194 1>, + <195 1>, <196 1>, <197 1>, + <198 1>, <199 1>, <200 1>, + <201 1>, <202 1>, <203 1>, + <204 1>, <205 1>, <206 1>, + <207 1>, <208 1>, <209 1>, + <210 1>, <211 1>, <212 1>, + <213 1>, <214 1>, <215 1>, + <216 1>, <217 1>, <218 1>, + <219 1>, <220 1>, <221 1>, + <222 1>, <223 1>, <224 1>, + <225 1>, <226 1>, <227 1>, + <228 1>, <229 1>, <230 1>, + <231 1>, <232 1>, <233 1>, + <234 1>, <235 1>; + + gpio-controller; + #gpio-cells = <2>; + pinctrl-names = "default"; + pinctrl-0 = <&wlan_default_gpios &bt_gpios>; + + wlan_default_gpios: wlan-gpios { + pios { + pins = "gpio43"; + function = "normal"; + bias-disable; + power-source = <PM8921_GPIO_S4>; + }; + }; + + bt_gpios: bt-gpio { + pios { + pins = "gpio44"; + function = "normal"; + bias-disable; + power-source = <PM8921_GPIO_S4>; + }; + }; + }; + }; + }; + qfprom: qfprom@00700000 { + compatible = "qcom,qfprom", "syscon"; + reg = <0x00700000 0x1000>; }; gcc: clock-controller@900000 { @@ -237,6 +678,18 @@ reg = <0x00900000 0x4000>; #clock-cells = <1>; #reset-cells = <1>; + #address-cells = <1>; + #size-cells = <1>; + ranges; + + /* tsens */ + tsens: 
tsens { + #thermal-sensor-cells = <1>; + compatible = "qcom,apq8064-tsens"; + qcom,calib-data = <&qfprom 0x404 0x20>; + interrupts = <0 178 0>; + interrupt-names = "tsens-ul"; + }; }; mmcc: clock-controller@4000000 { @@ -246,6 +699,282 @@ #reset-cells = <1>; }; + apcs: syscon@2011000 { + compatible = "syscon"; + reg = <0x2011000 0x1000>; + }; + + rpm@108000 { + compatible = "qcom,rpm-apq8064"; + reg = <0x108000 0x1000>; + qcom,ipc = <&apcs 0x8 2>; + + interrupts = <0 19 0>, <0 21 0>, <0 22 0>; + interrupt-names = "ack", "err", "wakeup"; + + #address-cells = <1>; + #size-cells = <0>; + rpmcc: rpm-clock-controller { + compatible = "qcom,apq8064-rpm-clk"; + #clock-cells = <1>; + #reset-cells = <1>; + }; + + pm8921_s3: pm8921-s3 { + compatible = "qcom,rpm-pm8921-smps"; + reg = <QCOM_RPM_PM8921_SMPS3>; + + regulator-min-microvolt = <1000000>; + regulator-max-microvolt = <1400000>; + qcom,boot-load = <49360>; + qcom,switch-mode-frequency = <3200000>; + regulator-always-on; + }; + + pm8921_s4: pm8921-s4 { + compatible = "qcom,rpm-pm8921-smps"; + reg = <QCOM_RPM_PM8921_SMPS4>; + + regulator-min-microvolt = <1800000>; + regulator-max-microvolt = <1800000>; + qcom,boot-load = <200000>; + qcom,switch-mode-frequency = <3200000>; + regulator-always-on; + }; + + pm8921_l3: pm8921-l3 { + compatible = "qcom,rpm-pm8921-pldo"; + reg = <QCOM_RPM_PM8921_LDO3>; + + regulator-min-microvolt = <3050000>; + regulator-max-microvolt = <3300000>; + regulator-always-on; + qcom,boot-load = <50000>; + }; + + pm8921_l4: pm8921-l4 { + compatible = "qcom,rpm-pm8921-pldo"; + reg = <QCOM_RPM_PM8921_LDO4>; + + regulator-min-microvolt = <1000000>; + regulator-max-microvolt = <1800000>; + regulator-always-on; + qcom,boot-load = <50000>; + }; + + pm8921_l23: pm8921-l23 { + compatible = "qcom,rpm-pm8921-pldo"; + reg = <QCOM_RPM_PM8921_LDO23>; + + regulator-min-microvolt = <1000000>; + regulator-max-microvolt = <1800000>; + qcom,boot-load = <50000>; + regulator-always-on; + }; + + pm8921_l26: pm8921-l26 { 
+ compatible = "qcom,rpm-pm8921-nldo1200"; + reg = <QCOM_RPM_PM8921_LDO26>; + + regulator-min-microvolt = < 375000>; + regulator-max-microvolt = <1050000>; + qcom,boot-load = <50000>; + regulator-always-on; + //FIXME ?? + bias-pull-down; + + }; + pm8921_s7: pm8921-s7 { + compatible = "qcom,rpm-pm8921-smps"; + reg = <QCOM_RPM_PM8921_SMPS7>; + regulator-min-microvolt = <1300000>; + regulator-max-microvolt = <1300000>; + qcom,switch-mode-frequency = <3200000>; + regulator-boot-on; + regulator-always-on; + }; + + pm8921_lvs6: pm8921-lvs6 { + compatible = "qcom,rpm-pm8921-switch"; + reg = <QCOM_RPM_PM8921_LVS6>; + regulator-always-on; + }; + + pm8921_hdmi_mvs: pm8921-hdmi-mvs { + compatible = "qcom,rpm-pm8921-switch"; + reg = <QCOM_RPM_HDMI_SWITCH>; + regulator-always-on; + bias-pull-down; + }; + + }; + + /* fabric clks handoff */ + fabclk-handoff { + compatible = "qcom,apq8064-rpmcc-handoff"; + clocks = <&rpmcc QCOM_RPM_APPS_FABRIC_CLK>; + + assigned-clocks = <&rpmcc QCOM_RPM_APPS_FABRIC_CLK>; + assigned-clock-rates = <0x7fffffff>; + }; + + /* PCIE */ + + pci@1b500000 { + compatible = "qcom,pcie-ipq8064"; + reg = <0x1b500000 0x1000>, <0x1b502000 0x100>, <0x1b600000 0x80>; + reg-names = "base", "elbi", "parf"; + status = "disabled"; + #address-cells = <3>; + #size-cells = <2>; + device_type = "pci"; + interrupts = <0 35 0x0 + 0 36 0x0 + 0 37 0x0 + 0 38 0x0 + 0 39 0x0 + 0 238 0x0>; + interrupt-names = "irq1", "irq2", "irq3", "irq4", "irq5", "msi"; + + resets = <&gcc PCIE_ACLK_RESET>, + <&gcc PCIE_HCLK_RESET>, + <&gcc PCIE_POR_RESET>, + <&gcc PCIE_PCI_RESET>, + <&gcc PCIE_PHY_RESET>; + reset-names = "axi", "ahb", "por", "pci", "phy"; + + clocks = <&gcc PCIE_A_CLK>, + <&gcc PCIE_H_CLK>, + <&gcc PCIE_PHY_REF_CLK>; + clock-names = "core", "iface", "phy"; + + ranges = <0x00000000 0 0 0x0ff00000 0 0x00100000 /* configuration space */ + 0x81000000 0 0 0x0fe00000 0 0x00100000 /* downstream I/O */ + 0x82000000 0 0 0x08000000 0 0x07e00000>; /* non-prefetchable memory */ + + }; + +
+ + usb1_phy:phy@12500000 { + compatible = "qcom,usb-otg-ci"; + reg = <0x12500000 0x400>; + interrupts = <0 100 0>; + status = "disabled"; + dr_mode = "host"; + + clocks = <&gcc USB_HS1_XCVR_CLK>, + <&gcc USB_HS1_H_CLK>; + clock-names = "core", "iface"; + + vddcx-supply = <&pm8921_s3>; + v3p3-supply = <&pm8921_l3>; + v1p8-supply = <&pm8921_l4>; + + resets = <&gcc USB_HS1_RESET>; + reset-names = "link"; + }; + + usb3_phy:phy@12520000 { + compatible = "qcom,usb-otg-ci"; + reg = <0x12520000 0x400>; + interrupts = <0 188 0>; + status = "disabled"; + dr_mode = "host"; + + clocks = <&gcc USB_HS3_XCVR_CLK>, + <&gcc USB_HS3_H_CLK>; + clock-names = "core", "iface"; + + vddcx-supply = <&pm8921_s3>; + v3p3-supply = <&pm8921_l3>; + v1p8-supply = <&pm8921_l23>; + + resets = <&gcc USB_HS3_RESET>; + reset-names = "link"; + }; + + usb4_phy:phy@12530000 { + compatible = "qcom,usb-otg-ci"; + reg = <0x12530000 0x400>; + interrupts = <0 215 0>; + status = "disabled"; + dr_mode = "host"; + + clocks = <&gcc USB_HS4_XCVR_CLK>, + <&gcc USB_HS4_H_CLK>; + clock-names = "core", "iface"; + + vddcx-supply = <&pm8921_s3>; + v3p3-supply = <&pm8921_l3>; + v1p8-supply = <&pm8921_l23>; + + resets = <&gcc USB_HS4_RESET>; + reset-names = "link"; + }; + + gadget1:gadget@12500000 { + compatible = "qcom,ci-hdrc"; + reg = <0x12500000 0x400>; + status = "disabled"; + dr_mode = "peripheral"; + interrupts = <0 100 0>; + usb-phy = <&usb1_phy>; + }; + + usb1: usb@12500000 { + compatible = "qcom,ehci-host"; + reg = <0x12500000 0x400>; + interrupts = <0 100 0>; + status = "disabled"; + usb-phy = <&usb1_phy>; + }; + + usb3: usb@12520000 { + compatible = "qcom,ehci-host"; + reg = <0x12520000 0x400>; + interrupts = <0 188 0>; + status = "disabled"; + usb-phy = <&usb3_phy>; + }; + + usb4: usb@12530000 { + compatible = "qcom,ehci-host"; + reg = <0x12530000 0x400>; + interrupts = <0 215 0>; + status = "disabled"; + usb-phy = <&usb4_phy>; + }; + + sata_phy0:sata-phy@1b400000{ + compatible = "qcom,apq8064-sata-phy"; + 
reg = <0x1b400000 0x200>; + reg-names = "phy_mem"; + clocks = <&gcc SATA_PHY_CFG_CLK>; + clock-names = "cfg"; + #phy-cells = <0>; + }; + + sata0: sata@29000000 { + compatible = "generic-ahci"; + reg = <0x29000000 0x180>; + interrupts = <0 209 0>; + clocks = <&gcc SFAB_SATA_S_H_CLK>, <&gcc SATA_H_CLK>, + <&gcc SATA_A_CLK>, <&gcc SATA_RXOOB_CLK>, + <&gcc SATA_PMALIVE_CLK>; + + clock-names = "slave_iface", "iface", + "bus", "rxoob", + "core_pmalive"; + assigned-clocks = <&gcc SATA_RXOOB_CLK>, + <&gcc SATA_PMALIVE_CLK>; + assigned-clock-rates = <100000000>, <100000000>; + + phys = <&sata_phy0>; + phy-names = "sata-phy"; + target-supply = <&pm8921_s4>; + }; + /* Temporary fixed regulator */ vsdcc_fixed: vsdcc-regulator { compatible = "regulator-fixed"; @@ -349,5 +1078,172 @@ pinctrl-0 = <&sdc4_gpios>; }; }; + + hdmi: qcom,hdmi-tx@4a00000 { + compatible = "qcom,hdmi-tx-8960"; + reg-names = "core_physical"; + reg = <0x04a00000 0x1000>; + interrupts = <GIC_SPI 79 0>; + clock-names = + "core_clk", + "master_iface_clk", + "slave_iface_clk"; + clocks = + <&mmcc HDMI_APP_CLK>, + <&mmcc HDMI_M_AHB_CLK>, + <&mmcc HDMI_S_AHB_CLK>; + qcom,hdmi-tx-ddc-clk = <&tlmm_pinmux 70 GPIO_ACTIVE_HIGH>; + qcom,hdmi-tx-ddc-data = <&tlmm_pinmux 71 GPIO_ACTIVE_HIGH>; + qcom,hdmi-tx-hpd = <&tlmm_pinmux 72 GPIO_ACTIVE_HIGH>; + core-vdda-supply = <&pm8921_hdmi_mvs>; + hdmi-mux-supply = <&ext_3p3v>; + pinctrl-names = "default"; + pinctrl-0 = <&hdmi_pinctrl>; + }; + + gpu: qcom,adreno-3xx@4300000 { + compatible = "qcom,adreno-3xx"; + #stream-id-cells = <16>; + reg = <0x04300000 0x20000>; + reg-names = "kgsl_3d0_reg_memory"; + interrupts = <GIC_SPI 80 0>; + interrupt-names = "kgsl_3d0_irq"; + clock-names = + "core_clk", + "iface_clk", + "mem_clk", + "mem_iface_clk"; + clocks = + <&mmcc GFX3D_CLK>, + <&mmcc GFX3D_AHB_CLK>, + <&mmcc GFX3D_AXI_CLK>, + <&mmcc MMSS_IMEM_AHB_CLK>; + qcom,chipid = <0x03020002>; + qcom,gpu-pwrlevels { + compatible = "qcom,gpu-pwrlevels"; + qcom,gpu-pwrlevel@0 { + 
qcom,gpu-freq = <450000000>; + }; + qcom,gpu-pwrlevel@1 { + qcom,gpu-freq = <27000000>; + }; + }; + }; + + mdp: qcom,mdp@5100000 { + compatible = "qcom,mdp"; + #stream-id-cells = <2>; + reg = <0x05100000 0xf0000>; + interrupts = <GIC_SPI 75 0>; + connectors = <&hdmi>; + gpus = <&gpu>; + clock-names = + "core_clk", + "iface_clk", + "lut_clk", + "src_clk", + "hdmi_clk", + "mdp_clk", + "mdp_axi_clk"; + clocks = + <&mmcc MDP_CLK>, + <&mmcc MDP_AHB_CLK>, + <&mmcc MDP_LUT_CLK>, + <&mmcc TV_SRC>, + <&mmcc HDMI_TV_CLK>, + <&mmcc MDP_TV_CLK>, + <&mmcc MDP_AXI_CLK>; +// vdd-supply = <&footswitch_mdp>; + }; + + mdp_port0: qcom,iommu@7500000 { + compatible = "qcom,iommu-v0"; + clock-names = + "smmu_pclk", + "iommu_clk"; + clocks = + <&mmcc SMMU_AHB_CLK>, + <&mmcc MDP_AXI_CLK>; + reg-names = "physbase"; + reg = <0x07500000 0x100000>; + interrupt-names = + "secure_irq", + "nonsecure_irq"; + interrupts = + <GIC_SPI 63 0>, + <GIC_SPI 64 0>; + ncb = <2>; + mmu-masters = <&mdp 0 2>; + }; + + mdp_port1: qcom,iommu@7600000 { + compatible = "qcom,iommu-v0"; + clock-names = + "smmu_pclk", + "iommu_clk"; + clocks = + <&mmcc SMMU_AHB_CLK>, + <&mmcc MDP_AXI_CLK>; + reg-names = "physbase"; + reg = <0x07600000 0x100000>; + interrupt-names = + "secure_irq", + "nonsecure_irq"; + interrupts = + <GIC_SPI 61 0>, + <GIC_SPI 62 0>; + ncb = <2>; + mmu-masters = <&mdp 0 2>; + }; + + gfx3d: qcom,iommu@7c00000 { + compatible = "qcom,iommu-v0"; + clock-names = + "smmu_pclk", + "iommu_clk"; + clocks = + <&mmcc SMMU_AHB_CLK>, + <&mmcc GFX3D_AXI_CLK>; + reg-names = "physbase"; + reg = <0x07c00000 0x100000>; + interrupt-names = + "secure_irq", + "nonsecure_irq"; + interrupts = + <GIC_SPI 69 0>, + <GIC_SPI 70 0>; + ncb = <3>; + ttbr-split = <1>; + mmu-masters = + /* gfx3d_user: */ + <&gpu 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15>, + /* gfx3d_priv: */ + <&gpu 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31>; + }; + + gfx3d1: qcom,iommu@7d00000 { + compatible = "qcom,iommu-v0"; + clock-names = + "smmu_pclk", +
"iommu_clk"; + clocks = + <&mmcc SMMU_AHB_CLK>, + <&mmcc GFX3D_AXI_CLK>; + reg-names = "physbase"; + reg = <0x07d00000 0x100000>; + interrupt-names = + "secure_irq", + "nonsecure_irq"; + interrupts = + <GIC_SPI 210 0>, + <GIC_SPI 211 0>; + ncb = <3>; + ttbr-split = <1>; + mmu-masters = + /* gfx3d_user: */ + <&gpu 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15>, + /* gfx3d_priv: */ + <&gpu 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31>; + }; }; }; diff --git a/arch/arm/boot/dts/qcom-apq8084.dtsi b/arch/arm/boot/dts/qcom-apq8084.dtsi index 1f130bc16858..207be157b9a7 100644 --- a/arch/arm/boot/dts/qcom-apq8084.dtsi +++ b/arch/arm/boot/dts/qcom-apq8084.dtsi @@ -21,6 +21,8 @@ enable-method = "qcom,kpss-acc-v2"; next-level-cache = <&L2>; qcom,acc = <&acc0>; + qcom,saw = <&saw0>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; cpu@1 { @@ -30,6 +32,8 @@ enable-method = "qcom,kpss-acc-v2"; next-level-cache = <&L2>; qcom,acc = <&acc1>; + qcom,saw = <&saw1>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; cpu@2 { @@ -39,6 +43,8 @@ enable-method = "qcom,kpss-acc-v2"; next-level-cache = <&L2>; qcom,acc = <&acc2>; + qcom,saw = <&saw2>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; cpu@3 { @@ -48,6 +54,8 @@ enable-method = "qcom,kpss-acc-v2"; next-level-cache = <&L2>; qcom,acc = <&acc3>; + qcom,saw = <&saw3>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; L2: l2-cache { @@ -55,6 +63,22 @@ cache-level = <2>; qcom,saw = <&saw_l2>; }; + + idle-states { + CPU_STBY: standby { + compatible = "qcom,idle-state-stby", "arm,idle-state"; + entry-latency-us = <1>; + exit-latency-us = <1>; + min-residency-us = <2>; + }; + + CPU_SPC: spc { + compatible = "qcom,idle-state-spc", "arm,idle-state"; + entry-latency-us = <150>; + exit-latency-us = <200>; + min-residency-us = <2000>; + }; + }; }; cpu-pmu { @@ -144,7 +168,27 @@ }; }; - saw_l2: regulator@f9012000 { + saw0: power-controller@f9089000 { + compatible = "qcom,apq8084-saw2-v2.1-cpu"; + reg = <0xf9089000 0x1000>; + }; + + saw1: power-controller@f9099000 { + 
compatible = "qcom,apq8084-saw2-v2.1-cpu"; + reg = <0xf9099000 0x1000>; + }; + + saw2: power-controller@f90a9000 { + compatible = "qcom,apq8084-saw2-v2.1-cpu"; + reg = <0xf90a9000 0x1000>; + }; + + saw3: power-controller@f90b9000 { + compatible = "qcom,apq8084-saw2-v2.1-cpu"; + reg = <0xf90b9000 0x1000>; + }; + + saw_l2: power-controller@f9012000 { compatible = "qcom,saw2"; reg = <0xf9012000 0x1000>; regulator; diff --git a/arch/arm/boot/dts/qcom-msm8960.dtsi b/arch/arm/boot/dts/qcom-msm8960.dtsi index e1b0d5cd9e3c..ef89daf7a258 100644 --- a/arch/arm/boot/dts/qcom-msm8960.dtsi +++ b/arch/arm/boot/dts/qcom-msm8960.dtsi @@ -24,6 +24,10 @@ next-level-cache = <&L2>; qcom,acc = <&acc0>; qcom,saw = <&saw0>; + clocks = <&kraitcc 0>; + clock-names = "cpu"; + clock-latency = <100000>; + }; cpu@1 { @@ -34,6 +38,10 @@ next-level-cache = <&L2>; qcom,acc = <&acc1>; qcom,saw = <&saw1>; + clocks = <&kraitcc 1>; + clock-names = "cpu"; + clock-latency = <100000>; + }; L2: l2-cache { @@ -48,6 +56,39 @@ qcom,no-pc-write; }; + qcom,pvs { + qcom,pvs-format-a; + /* Hz uV */ + qcom,speed0-pvs0-bin-v0 = + < 384000000 950000 >, + < 486000000 975000 >, + < 594000000 1000000 >, + < 702000000 1025000 >, + < 810000000 1075000 >, + < 918000000 1100000 >; + + qcom,speed0-pvs1-bin-v0 = + < 384000000 900000 >, + < 486000000 925000 >, + < 594000000 950000 >, + < 702000000 975000 >, + < 810000000 1025000 >, + < 918000000 1050000 >; + + qcom,speed0-pvs3-bin-v0 = + < 384000000 850000 >, + < 486000000 875000 >, + < 594000000 900000 >, + < 702000000 925000 >, + < 810000000 975000 >, + < 918000000 1000000 >; + }; + + kraitcc: clock-controller { + compatible = "qcom,krait-cc-v1"; + #clock-cells = <1>; + }; + soc: soc { #address-cells = <1>; #size-cells = <1>; @@ -101,11 +142,19 @@ acc0: clock-controller@2088000 { compatible = "qcom,kpss-acc-v1"; reg = <0x02088000 0x1000>, <0x02008000 0x1000>; + clock-output-names = "acpu0_aux"; }; acc1: clock-controller@2098000 { compatible = "qcom,kpss-acc-v1"; reg = 
<0x02098000 0x1000>, <0x02008000 0x1000>; + clock-output-names = "acpu1_aux"; + }; + + l2cc: clock-controller@2011000 { + compatible = "qcom,kpss-gcc"; + reg = <0x2011000 0x1000>; + clock-output-names = "acpu_l2_aux"; }; saw0: regulator@2089000 { diff --git a/arch/arm/boot/dts/qcom-msm8974.dtsi b/arch/arm/boot/dts/qcom-msm8974.dtsi index e265ec16a787..819a6f407e70 100644 --- a/arch/arm/boot/dts/qcom-msm8974.dtsi +++ b/arch/arm/boot/dts/qcom-msm8974.dtsi @@ -14,40 +14,60 @@ #size-cells = <0>; interrupts = <1 9 0xf04>; - cpu@0 { + cpu0: cpu@0 { compatible = "qcom,krait"; enable-method = "qcom,kpss-acc-v2"; device_type = "cpu"; reg = <0>; next-level-cache = <&L2>; qcom,acc = <&acc0>; + clocks = <&kraitcc 0>; + clock-names = "cpu"; + clock-latency = <100000>; + qcom,saw = <&saw0>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; - cpu@1 { + cpu1: cpu@1 { compatible = "qcom,krait"; enable-method = "qcom,kpss-acc-v2"; device_type = "cpu"; reg = <1>; next-level-cache = <&L2>; qcom,acc = <&acc1>; + clocks = <&kraitcc 1>; + clock-names = "cpu"; + clock-latency = <100000>; + qcom,saw = <&saw1>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; - cpu@2 { + cpu2: cpu@2 { compatible = "qcom,krait"; enable-method = "qcom,kpss-acc-v2"; device_type = "cpu"; reg = <2>; next-level-cache = <&L2>; qcom,acc = <&acc2>; + clocks = <&kraitcc 2>; + clock-names = "cpu"; + clock-latency = <100000>; + qcom,saw = <&saw2>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; - cpu@3 { + cpu3: cpu@3 { compatible = "qcom,krait"; enable-method = "qcom,kpss-acc-v2"; device_type = "cpu"; reg = <3>; next-level-cache = <&L2>; qcom,acc = <&acc3>; + clocks = <&kraitcc 3>; + clock-names = "cpu"; + clock-latency = <100000>; + qcom,saw = <&saw3>; + cpu-idle-states = <&CPU_STBY &CPU_SPC>; }; L2: l2-cache { @@ -55,6 +75,22 @@ cache-level = <2>; qcom,saw = <&saw_l2>; }; + + idle-states { + CPU_STBY: standby { + compatible = "qcom,idle-state-stby", "arm,idle-state"; + entry-latency-us = <1>; + exit-latency-us = <1>; + 
min-residency-us = <2>; + }; + + CPU_SPC: spc { + compatible = "qcom,idle-state-spc", "arm,idle-state"; + entry-latency-us = <150>; + exit-latency-us = <200>; + min-residency-us = <2000>; + }; + }; }; cpu-pmu { @@ -71,6 +107,267 @@ clock-frequency = <19200000>; }; + qcom,pvs { + qcom,pvs-format-b; + /* Hz uV ua */ + qcom,speed0-pvs0-bin-v0 = + < 300000000 815000 73 >, + < 345600000 825000 85 >, + < 422400000 835000 104 >, + < 499200000 845000 124 >, + < 576000000 855000 144 >, + < 652800000 865000 165 >, + < 729600000 875000 186 >, + < 806400000 890000 208 >, + < 883200000 900000 229 >, + < 960000000 915000 252 >; + + qcom,speed0-pvs1-bin-v0 = + < 300000000 800000 73 >, + < 345600000 810000 85 >, + < 422400000 820000 104 >, + < 499200000 830000 124 >, + < 576000000 840000 144 >, + < 652800000 850000 165 >, + < 729600000 860000 186 >, + < 806400000 875000 208 >, + < 883200000 885000 229 >, + < 960000000 895000 252 >; + + qcom,speed0-pvs2-bin-v0 = + < 300000000 785000 73 >, + < 345600000 795000 85 >, + < 422400000 805000 104 >, + < 499200000 815000 124 >, + < 576000000 825000 144 >, + < 652800000 835000 165 >, + < 729600000 845000 186 >, + < 806400000 855000 208 >, + < 883200000 865000 229 >, + < 960000000 875000 252 >; + + qcom,speed0-pvs3-bin-v0 = + < 300000000 775000 73 >, + < 345600000 780000 85 >, + < 422400000 790000 104 >, + < 499200000 800000 124 >, + < 576000000 810000 144 >, + < 652800000 820000 165 >, + < 729600000 830000 186 >, + < 806400000 840000 208 >, + < 883200000 850000 229 >, + < 960000000 860000 252 >; + + qcom,speed0-pvs4-bin-v0 = + < 300000000 775000 73 >, + < 345600000 775000 85 >, + < 422400000 780000 104 >, + < 499200000 790000 124 >, + < 576000000 800000 144 >, + < 652800000 810000 165 >, + < 729600000 820000 186 >, + < 806400000 830000 208 >, + < 883200000 840000 229 >, + < 960000000 850000 252 >; + + qcom,speed0-pvs5-bin-v0 = + < 300000000 750000 73 >, + < 345600000 760000 85 >, + < 422400000 770000 104 >, + < 499200000 780000 124 >, + < 
576000000 790000 144 >, + < 652800000 800000 165 >, + < 729600000 810000 186 >, + < 806400000 820000 208 >, + < 883200000 830000 229 >, + < 960000000 840000 252 >; + + qcom,speed0-pvs6-bin-v0 = + < 300000000 750000 73 >, + < 345600000 750000 85 >, + < 422400000 760000 104 >, + < 499200000 770000 124 >, + < 576000000 780000 144 >, + < 652800000 790000 165 >, + < 729600000 800000 186 >, + < 806400000 810000 208 >, + < 883200000 820000 229 >, + < 960000000 830000 252 >; + + qcom,speed2-pvs0-bin-v0 = + < 300000000 800000 72 >, + < 345600000 800000 83 >, + < 422400000 805000 102 >, + < 499200000 815000 121 >, + < 576000000 825000 141 >, + < 652800000 835000 161 >, + < 729600000 845000 181 >, + < 806400000 855000 202 >, + < 883200000 865000 223 >, + < 960000000 875000 245 >; + + qcom,speed2-pvs1-bin-v0 = + < 300000000 800000 72 >, + < 345600000 800000 83 >, + < 422400000 800000 102 >, + < 499200000 800000 121 >, + < 576000000 810000 141 >, + < 652800000 820000 161 >, + < 729600000 830000 181 >, + < 806400000 840000 202 >, + < 883200000 850000 223 >, + < 960000000 860000 245 >; + + qcom,speed2-pvs2-bin-v0 = + < 300000000 775000 72 >, + < 345600000 775000 83 >, + < 422400000 775000 102 >, + < 499200000 785000 121 >, + < 576000000 795000 141 >, + < 652800000 805000 161 >, + < 729600000 815000 181 >, + < 806400000 825000 202 >, + < 883200000 835000 223 >, + < 960000000 845000 245 >; + + qcom,speed2-pvs3-bin-v0 = + < 300000000 775000 72 >, + < 345600000 775000 83 >, + < 422400000 775000 102 >, + < 499200000 775000 121 >, + < 576000000 780000 141 >, + < 652800000 790000 161 >, + < 729600000 800000 181 >, + < 806400000 810000 202 >, + < 883200000 820000 223 >, + < 960000000 830000 245 >; + + qcom,speed2-pvs4-bin-v0 = + < 300000000 775000 72 >, + < 345600000 775000 83 >, + < 422400000 775000 102 >, + < 499200000 775000 121 >, + < 576000000 775000 141 >, + < 652800000 780000 161 >, + < 729600000 790000 181 >, + < 806400000 800000 202 >, + < 883200000 810000 223 >, + < 960000000 
820000 245 >; + + qcom,speed2-pvs5-bin-v0 = + < 300000000 750000 72 >, + < 345600000 750000 83 >, + < 422400000 750000 102 >, + < 499200000 750000 121 >, + < 576000000 760000 141 >, + < 652800000 770000 161 >, + < 729600000 780000 181 >, + < 806400000 790000 202 >, + < 883200000 800000 223 >, + < 960000000 810000 245 >; + + qcom,speed2-pvs6-bin-v0 = + < 300000000 750000 72 >, + < 345600000 750000 83 >, + < 422400000 750000 102 >, + < 499200000 750000 121 >, + < 576000000 750000 141 >, + < 652800000 760000 161 >, + < 729600000 770000 181 >, + < 806400000 780000 202 >, + < 883200000 790000 223 >, + < 960000000 800000 245 >; + + qcom,speed1-pvs0-bin-v0 = + < 300000000 775000 72 >, + < 345600000 775000 83 >, + < 422400000 775000 101 >, + < 499200000 780000 120 >, + < 576000000 790000 139 >, + < 652800000 800000 159 >, + < 729600000 810000 180 >, + < 806400000 820000 200 >, + < 883200000 830000 221 >, + < 960000000 840000 242 >; + + qcom,speed1-pvs1-bin-v0 = + < 300000000 775000 72 >, + < 345600000 775000 83 >, + < 422400000 775000 101 >, + < 499200000 775000 120 >, + < 576000000 775000 139 >, + < 652800000 785000 159 >, + < 729600000 795000 180 >, + < 806400000 805000 200 >, + < 883200000 815000 221 >, + < 960000000 825000 242 >; + + qcom,speed1-pvs2-bin-v0 = + < 300000000 750000 72 >, + < 345600000 750000 83 >, + < 422400000 750000 101 >, + < 499200000 750000 120 >, + < 576000000 760000 139 >, + < 652800000 770000 159 >, + < 729600000 780000 180 >, + < 806400000 790000 200 >, + < 883200000 800000 221 >, + < 960000000 810000 242 >; + + qcom,speed1-pvs3-bin-v0 = + < 300000000 750000 72 >, + < 345600000 750000 83 >, + < 422400000 750000 101 >, + < 499200000 750000 120 >, + < 576000000 750000 139 >, + < 652800000 755000 159 >, + < 729600000 765000 180 >, + < 806400000 775000 200 >, + < 883200000 785000 221 >, + < 960000000 795000 242 >; + + qcom,speed1-pvs4-bin-v0 = + < 300000000 750000 72 >, + < 345600000 750000 83 >, + < 422400000 750000 101 >, + < 499200000 750000 120 
>, + < 576000000 750000 139 >, + < 652800000 750000 159 >, + < 729600000 755000 180 >, + < 806400000 765000 200 >, + < 883200000 775000 221 >, + < 960000000 785000 242 >; + + qcom,speed1-pvs5-bin-v0 = + < 300000000 725000 72 >, + < 345600000 725000 83 >, + < 422400000 725000 101 >, + < 499200000 725000 120 >, + < 576000000 725000 139 >, + < 652800000 735000 159 >, + < 729600000 745000 180 >, + < 806400000 755000 200 >, + < 883200000 765000 221 >, + < 960000000 775000 242 >; + + qcom,speed1-pvs6-bin-v0 = + < 300000000 725000 72 >, + < 345600000 725000 83 >, + < 422400000 725000 101 >, + < 499200000 725000 120 >, + < 576000000 725000 139 >, + < 652800000 725000 159 >, + < 729600000 735000 180 >, + < 806400000 745000 200 >, + < 883200000 755000 221 >, + < 960000000 765000 242 >; + }; + + kraitcc: clock-controller { + compatible = "qcom,krait-cc-v2"; + #clock-cells = <1>; + }; + soc: soc { #address-cells = <1>; #size-cells = <1>; @@ -144,7 +441,57 @@ }; }; - saw_l2: regulator@f9012000 { + clock-controller@f9016000 { + compatible = "qcom,hfpll"; + reg = <0xf9016000 0x30>; + clock-output-names = "hfpll_l2"; + }; + + clock-controller@f908a000 { + compatible = "qcom,hfpll"; + reg = <0xf908a000 0x30>, <0xf900a000 0x30>; + clock-output-names = "hfpll0"; + }; + + clock-controller@f909a000 { + compatible = "qcom,hfpll"; + reg = <0xf909a000 0x30>, <0xf900a000 0x30>; + clock-output-names = "hfpll1"; + }; + + clock-controller@f90aa000 { + compatible = "qcom,hfpll"; + reg = <0xf90aa000 0x30>, <0xf900a000 0x30>; + clock-output-names = "hfpll2"; + }; + + clock-controller@f90ba000 { + compatible = "qcom,hfpll"; + reg = <0xf90ba000 0x30>, <0xf900a000 0x30>; + clock-output-names = "hfpll3"; + }; + + saw0: power-controller@f9089000 { + compatible = "qcom,msm8974-saw2-v2.1-cpu"; + reg = <0xf9089000 0x1000>; + }; + + saw1: power-controller@f9099000 { + compatible = "qcom,msm8974-saw2-v2.1-cpu"; + reg = <0xf9099000 0x1000>; + }; + + saw2: power-controller@f90a9000 { + compatible = 
"qcom,msm8974-saw2-v2.1-cpu"; + reg = <0xf90a9000 0x1000>; + }; + + saw3: power-controller@f90b9000 { + compatible = "qcom,msm8974-saw2-v2.1-cpu"; + reg = <0xf90b9000 0x1000>; + }; + + saw_l2: power-controller@f9012000 { compatible = "qcom,saw2"; reg = <0xf9012000 0x1000>; regulator; diff --git a/arch/arm/common/Kconfig b/arch/arm/common/Kconfig index f4a38a76db64..6c9fa6bd7eae 100644 --- a/arch/arm/common/Kconfig +++ b/arch/arm/common/Kconfig @@ -9,6 +9,9 @@ config DMABOUNCE bool select ZONE_DMA +config KRAIT_L2_ACCESSORS + bool + config SHARP_LOCOMO bool diff --git a/arch/arm/common/Makefile b/arch/arm/common/Makefile index 5748afda0681..c0eb929220b9 100644 --- a/arch/arm/common/Makefile +++ b/arch/arm/common/Makefile @@ -8,6 +8,7 @@ obj-$(CONFIG_FIQ_GLUE) += fiq_glue.o fiq_glue_setup.o obj-$(CONFIG_ICST) += icst.o obj-$(CONFIG_SA1111) += sa1111.o obj-$(CONFIG_DMABOUNCE) += dmabounce.o +obj-$(CONFIG_KRAIT_L2_ACCESSORS) += krait-l2-accessors.o obj-$(CONFIG_SHARP_LOCOMO) += locomo.o obj-$(CONFIG_SHARP_PARAM) += sharpsl_param.o obj-$(CONFIG_SHARP_SCOOP) += scoop.o diff --git a/arch/arm/common/krait-l2-accessors.c b/arch/arm/common/krait-l2-accessors.c new file mode 100644 index 000000000000..5d514bbc88a6 --- /dev/null +++ b/arch/arm/common/krait-l2-accessors.c @@ -0,0 +1,58 @@ +/* + * Copyright (c) 2011-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */
+
+#include <linux/spinlock.h>
+#include <linux/export.h>
+
+#include <asm/barrier.h>
+#include <asm/krait-l2-accessors.h>
+
+static DEFINE_RAW_SPINLOCK(krait_l2_lock);
+
+void krait_set_l2_indirect_reg(u32 addr, u32 val)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&krait_l2_lock, flags);
+	/*
+	 * Select the L2 window by poking l2cpselr, then write to the window
+	 * via l2cpdr.
+	 */
+	asm volatile ("mcr p15, 3, %0, c15, c0, 6 @ l2cpselr" : : "r" (addr));
+	isb();
+	asm volatile ("mcr p15, 3, %0, c15, c0, 7 @ l2cpdr" : : "r" (val));
+	isb();
+
+	raw_spin_unlock_irqrestore(&krait_l2_lock, flags);
+}
+EXPORT_SYMBOL(krait_set_l2_indirect_reg);
+
+u32 krait_get_l2_indirect_reg(u32 addr)
+{
+	u32 val;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&krait_l2_lock, flags);
+	/*
+	 * Select the L2 window by poking l2cpselr, then read from the window
+	 * via l2cpdr.
+	 */
+	asm volatile ("mcr p15, 3, %0, c15, c0, 6 @ l2cpselr" : : "r" (addr));
+	isb();
+	asm volatile ("mrc p15, 3, %0, c15, c0, 7 @ l2cpdr" : "=r" (val));
+
+	raw_spin_unlock_irqrestore(&krait_l2_lock, flags);
+
+	return val;
+}
+EXPORT_SYMBOL(krait_get_l2_indirect_reg);
diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig
index 9d7a32f93fcf..5a9766bdb3e2 100644
--- a/arch/arm/configs/multi_v7_defconfig
+++ b/arch/arm/configs/multi_v7_defconfig
@@ -3,6 +3,7 @@ CONFIG_FHANDLE=y
 CONFIG_IRQ_DOMAIN_DEBUG=y
 CONFIG_NO_HZ=y
 CONFIG_HIGH_RES_TIMERS=y
+CONFIG_LOG_BUF_SHIFT=21
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_EMBEDDED=y
 CONFIG_PERF_EVENTS=y
@@ -27,16 +28,18 @@ CONFIG_MACH_BERLIN_BG2Q=y
 CONFIG_ARCH_HIGHBANK=y
 CONFIG_ARCH_HISI=y
 CONFIG_ARCH_HI3xxx=y
-CONFIG_ARCH_HIX5HD2=y
 CONFIG_ARCH_HIP04=y
+CONFIG_ARCH_HIX5HD2=y
 CONFIG_ARCH_KEYSTONE=y
 CONFIG_ARCH_MESON=y
+CONFIG_QCOM_SMD=y
 CONFIG_ARCH_MXC=y
 CONFIG_SOC_IMX51=y
 CONFIG_SOC_IMX53=y
 CONFIG_SOC_IMX6Q=y
 CONFIG_SOC_IMX6SL=y
 CONFIG_SOC_VF610=y
+CONFIG_ARCH_MEDIATEK=y
 CONFIG_ARCH_OMAP3=y
 CONFIG_ARCH_OMAP4=y
 CONFIG_SOC_OMAP5=y
@@ -44,7
+47,6 @@ CONFIG_SOC_AM33XX=y CONFIG_SOC_AM43XX=y CONFIG_SOC_DRA7XX=y CONFIG_ARCH_QCOM=y -CONFIG_ARCH_MEDIATEK=y CONFIG_ARCH_MSM8X60=y CONFIG_ARCH_MSM8960=y CONFIG_ARCH_MSM8974=y @@ -63,16 +65,13 @@ CONFIG_ARCH_TEGRA_2x_SOC=y CONFIG_ARCH_TEGRA_3x_SOC=y CONFIG_ARCH_TEGRA_114_SOC=y CONFIG_ARCH_TEGRA_124_SOC=y -CONFIG_TEGRA_EMC_SCALING_ENABLE=y CONFIG_ARCH_U8500=y CONFIG_MACH_HREFV60=y CONFIG_MACH_SNOWBALL=y -CONFIG_MACH_UX500_DT=y CONFIG_ARCH_VEXPRESS=y CONFIG_ARCH_VEXPRESS_CA9X4=y CONFIG_ARCH_WM8850=y CONFIG_ARCH_ZYNQ=y -CONFIG_TRUSTED_FOUNDATIONS=y CONFIG_PCI=y CONFIG_PCI_MSI=y CONFIG_PCI_MVEBU=y @@ -88,9 +87,14 @@ CONFIG_KEXEC=y CONFIG_CPU_FREQ=y CONFIG_CPU_FREQ_STAT_DETAILS=y CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y +CONFIG_CPU_FREQ_GOV_POWERSAVE=y +CONFIG_CPU_FREQ_GOV_USERSPACE=y +CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y +CONFIG_CPUFREQ_DT=y +CONFIG_ARM_QCOM_CPUFREQ=y CONFIG_CPU_IDLE=y -CONFIG_NEON=y CONFIG_ARM_ZYNQ_CPUIDLE=y +CONFIG_ARM_QCOM_CPUIDLE=y CONFIG_NET=y CONFIG_PACKET=y CONFIG_UNIX=y @@ -108,13 +112,21 @@ CONFIG_IPV6_MIP6=m CONFIG_IPV6_TUNNEL=m CONFIG_IPV6_MULTIPLE_TABLES=y CONFIG_CAN=y -CONFIG_CAN_RAW=y -CONFIG_CAN_BCM=y -CONFIG_CAN_DEV=y CONFIG_CAN_XILINXCAN=y CONFIG_CAN_MCP251X=y -CONFIG_CFG80211=m -CONFIG_MAC80211=m +CONFIG_BT=y +CONFIG_BT_RFCOMM=y +CONFIG_BT_RFCOMM_TTY=y +CONFIG_BT_BNEP=y +CONFIG_BT_BNEP_MC_FILTER=y +CONFIG_BT_BNEP_PROTO_FILTER=y +CONFIG_BT_HIDP=y +CONFIG_BT_HCIUART=y +CONFIG_BT_HCIUART_H4=y +CONFIG_BT_HCIUART_ATH3K=y +CONFIG_CFG80211=y +CONFIG_CFG80211_WEXT=y +CONFIG_MAC80211=y CONFIG_RFKILL=y CONFIG_RFKILL_INPUT=y CONFIG_RFKILL_GPIO=y @@ -136,7 +148,6 @@ CONFIG_EEPROM_AT24=y CONFIG_EEPROM_SUNXI_SID=y CONFIG_BLK_DEV_SD=y CONFIG_BLK_DEV_SR=y -CONFIG_SCSI_MULTI_LUN=y CONFIG_ATA=y CONFIG_SATA_AHCI=y CONFIG_SATA_AHCI_PLATFORM=y @@ -147,6 +158,7 @@ CONFIG_SATA_HIGHBANK=y CONFIG_SATA_MV=y CONFIG_NETDEVICES=y CONFIG_SUN4I_EMAC=y +CONFIG_ATL1C=y CONFIG_MACB=y CONFIG_NET_CALXEDA_XGMAC=y CONFIG_IGB=y @@ -165,6 +177,10 @@ CONFIG_USB_PEGASUS=y 
CONFIG_USB_USBNET=y CONFIG_USB_NET_SMSC75XX=y CONFIG_USB_NET_SMSC95XX=y +CONFIG_ATH_CARDS=y +CONFIG_ATH_DEBUG=y +CONFIG_ATH6KL=m +CONFIG_ATH6KL_SDIO=m CONFIG_BRCMFMAC=m CONFIG_RT2X00=m CONFIG_RT2800USB=m @@ -172,8 +188,8 @@ CONFIG_INPUT_JOYDEV=y CONFIG_INPUT_EVDEV=y CONFIG_KEYBOARD_GPIO=y CONFIG_KEYBOARD_TEGRA=y -CONFIG_KEYBOARD_SPEAR=y CONFIG_KEYBOARD_ST_KEYSCAN=y +CONFIG_KEYBOARD_SPEAR=y CONFIG_KEYBOARD_CROS_EC=y CONFIG_MOUSE_PS2_ELANTECH=y CONFIG_INPUT_TOUCHSCREEN=y @@ -210,19 +226,17 @@ CONFIG_SERIAL_FSL_LPUART_CONSOLE=y CONFIG_SERIAL_ST_ASC=y CONFIG_SERIAL_ST_ASC_CONSOLE=y CONFIG_I2C_CHARDEV=y -CONFIG_I2C_MUX=y CONFIG_I2C_MUX_PCA954x=y CONFIG_I2C_MUX_PINCTRL=y CONFIG_I2C_CADENCE=y CONFIG_I2C_DESIGNWARE_PLATFORM=y -CONFIG_I2C_EXYNOS5=y CONFIG_I2C_MV64XXX=y CONFIG_I2C_S3C2410=y CONFIG_I2C_SIRF=y -CONFIG_I2C_TEGRA=y CONFIG_I2C_ST=y -CONFIG_SPI=y +CONFIG_I2C_TEGRA=y CONFIG_I2C_XILINX=y +CONFIG_SPI=y CONFIG_SPI_CADENCE=y CONFIG_SPI_OMAP24XX=y CONFIG_SPI_ORION=y @@ -234,11 +248,15 @@ CONFIG_SPI_TEGRA114=y CONFIG_SPI_TEGRA20_SFLASH=y CONFIG_SPI_TEGRA20_SLINK=y CONFIG_SPI_XILINX=y +CONFIG_SPMI=y CONFIG_PINCTRL_AS3722=y CONFIG_PINCTRL_PALMAS=y +CONFIG_PINCTRL_APQ8064=y CONFIG_PINCTRL_APQ8084=y +CONFIG_PINCTRL_IPQ8064=y +CONFIG_PINCTRL_QCOM_SPMI_PMIC=y +CONFIG_PINCTRL_SSBI_PMIC=y CONFIG_GPIO_SYSFS=y -CONFIG_GPIO_GENERIC_PLATFORM=y CONFIG_GPIO_DWAPB=y CONFIG_GPIO_XILINX=y CONFIG_GPIO_ZYNQ=y @@ -259,6 +277,7 @@ CONFIG_THERMAL=y CONFIG_ARMADA_THERMAL=y CONFIG_ST_THERMAL_SYSCFG=y CONFIG_ST_THERMAL_MEMMAP=y +CONFIG_THERMAL_TSENS8960=y CONFIG_WATCHDOG=y CONFIG_XILINX_WATCHDOG=y CONFIG_ORION_WATCHDOG=y @@ -269,6 +288,8 @@ CONFIG_MFD_BCM590XX=y CONFIG_MFD_CROS_EC=y CONFIG_MFD_CROS_EC_SPI=y CONFIG_MFD_MAX8907=y +CONFIG_MFD_PM8921_CORE=y +CONFIG_MFD_QCOM_RPM=y CONFIG_MFD_SEC_CORE=y CONFIG_MFD_STMPE=y CONFIG_MFD_PALMAS=y @@ -281,6 +302,7 @@ CONFIG_REGULATOR_BCM590XX=y CONFIG_REGULATOR_GPIO=y CONFIG_REGULATOR_MAX8907=y CONFIG_REGULATOR_PALMAS=y +CONFIG_REGULATOR_QCOM_RPM=y 
CONFIG_REGULATOR_S2MPS11=y CONFIG_REGULATOR_S5M8767=y CONFIG_REGULATOR_TPS51632=y @@ -296,18 +318,20 @@ CONFIG_MEDIA_USB_SUPPORT=y CONFIG_USB_VIDEO_CLASS=y CONFIG_USB_GSPCA=y CONFIG_DRM=y +CONFIG_DRM_MSM_REGISTER_LOGGING=y CONFIG_DRM_TEGRA=y CONFIG_DRM_PANEL_SIMPLE=y CONFIG_FB_ARMCLCD=y CONFIG_FB_WM8505=y CONFIG_FB_SIMPLE=y -CONFIG_BACKLIGHT_LCD_SUPPORT=y -CONFIG_BACKLIGHT_CLASS_DEVICE=y CONFIG_BACKLIGHT_PWM=y CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y CONFIG_SOUND=y CONFIG_SND=y +CONFIG_SND_VERBOSE_PRINTK=y +CONFIG_SND_DEBUG=y +CONFIG_SND_DEBUG_VERBOSE=y CONFIG_SND_SOC=y CONFIG_SND_SOC_TEGRA=y CONFIG_SND_SOC_TEGRA_RT5640=y @@ -316,10 +340,16 @@ CONFIG_SND_SOC_TEGRA_WM8903=y CONFIG_SND_SOC_TEGRA_TRIMSLICE=y CONFIG_SND_SOC_TEGRA_ALC5632=y CONFIG_SND_SOC_TEGRA_MAX98090=y +CONFIG_SND_SOC_MSM_QDSP6_HDMI_AUDIO=y +CONFIG_SND_SOC_QDSP6V2=y +CONFIG_SND_SOC_MSM8960=y +CONFIG_SND_SOC_HDMI_CODEC=y +CONFIG_SND_SIMPLE_CARD=y CONFIG_USB=y CONFIG_USB_XHCI_HCD=y CONFIG_USB_XHCI_MVEBU=y CONFIG_USB_EHCI_HCD=y +CONFIG_USB_EHCI_MSM=y CONFIG_USB_EHCI_TEGRA=y CONFIG_USB_EHCI_HCD_PLATFORM=y CONFIG_USB_ISP1760_HCD=y @@ -329,14 +359,12 @@ CONFIG_USB_STORAGE=y CONFIG_USB_CHIPIDEA=y CONFIG_USB_CHIPIDEA_HOST=y CONFIG_AB8500_USB=y -CONFIG_OMAP_USB3=y -CONFIG_SAMSUNG_USB2PHY=y -CONFIG_SAMSUNG_USB3PHY=y CONFIG_USB_GPIO_VBUS=y CONFIG_USB_ISP1301=y +CONFIG_USB_MSM_OTG=y CONFIG_USB_MXS_PHY=y CONFIG_MMC=y -CONFIG_MMC_BLOCK_MINORS=16 +CONFIG_MMC_BLOCK_MINORS=32 CONFIG_MMC_ARMMMCI=y CONFIG_MMC_SDHCI=y CONFIG_MMC_SDHCI_PLTFM=y @@ -344,19 +372,20 @@ CONFIG_MMC_SDHCI_OF_ARASAN=y CONFIG_MMC_SDHCI_ESDHC_IMX=y CONFIG_MMC_SDHCI_DOVE=y CONFIG_MMC_SDHCI_TEGRA=y +CONFIG_MMC_SDHCI_S3C=y CONFIG_MMC_SDHCI_PXAV3=y CONFIG_MMC_SDHCI_SPEAR=y -CONFIG_MMC_SDHCI_S3C=y CONFIG_MMC_SDHCI_S3C_DMA=y CONFIG_MMC_SDHCI_BCM_KONA=y CONFIG_MMC_SDHCI_ST=y CONFIG_MMC_OMAP=y CONFIG_MMC_OMAP_HS=y +CONFIG_MMC_SDHCI_MSM=y CONFIG_MMC_MVSDIO=y -CONFIG_MMC_SUNXI=y CONFIG_MMC_DW=y CONFIG_MMC_DW_EXYNOS=y 
CONFIG_MMC_DW_ROCKCHIP=y +CONFIG_MMC_SUNXI=y CONFIG_NEW_LEDS=y CONFIG_LEDS_CLASS=y CONFIG_LEDS_GPIO=y @@ -403,6 +432,7 @@ CONFIG_IMX_DMA=y CONFIG_MXS_DMA=y CONFIG_DMA_OMAP=y CONFIG_XILINX_VDMA=y +CONFIG_QCOM_BAM_DMA=y CONFIG_STAGING=y CONFIG_SENSORS_ISL29018=y CONFIG_SENSORS_ISL29028=y @@ -412,11 +442,19 @@ CONFIG_SERIO_NVEC_PS2=y CONFIG_NVEC_POWER=y CONFIG_NVEC_PAZ00=y CONFIG_QCOM_GSBI=y +CONFIG_MSM_PIL=y +CONFIG_MSM_PIL_Q6V4=y +CONFIG_MSM_REMOTE_SPINLOCKS=y CONFIG_COMMON_CLK_QCOM=y CONFIG_APQ_MMCC_8084=y CONFIG_MSM_GCC_8660=y CONFIG_MSM_MMCC_8960=y CONFIG_MSM_MMCC_8974=y +CONFIG_QCOM_HFPLL=y +CONFIG_KPSS_XCC=y +CONFIG_KRAITCC=y +CONFIG_QCOM_RPMCC=y +CONFIG_QCOM_IOMMU_V0=y CONFIG_TEGRA_IOMMU_GART=y CONFIG_TEGRA_IOMMU_SMMU=y CONFIG_MEMORY=y @@ -426,13 +464,13 @@ CONFIG_AK8975=y CONFIG_PWM=y CONFIG_PWM_TEGRA=y CONFIG_PWM_VT8500=y +CONFIG_PHY_MIPHY365X=y CONFIG_OMAP_USB2=y CONFIG_TI_PIPE3=y -CONFIG_PHY_MIPHY365X=y CONFIG_PHY_SUN4I_USB=y +CONFIG_PHY_QCOM_APQ8064_SATA=y CONFIG_EXT4_FS=y CONFIG_VFAT_FS=y -CONFIG_TMPFS=y CONFIG_SQUASHFS=y CONFIG_SQUASHFS_LZO=y CONFIG_SQUASHFS_XZ=y @@ -441,8 +479,14 @@ CONFIG_NFS_V3_ACL=y CONFIG_NFS_V4=y CONFIG_ROOT_NFS=y CONFIG_PRINTK_TIME=y -CONFIG_DEBUG_FS=y CONFIG_MAGIC_SYSRQ=y CONFIG_LOCKUP_DETECTOR=y -CONFIG_CRYPTO_DEV_TEGRA_AES=y -CONFIG_GENERIC_CPUFREQ_CPU0=y +CONFIG_FUNCTION_TRACER=y +CONFIG_SCHED_TRACER=y +CONFIG_BLK_DEV_IO_TRACE=y +CONFIG_TRACEPOINT_BENCHMARK=y +CONFIG_DEBUG_LL=y +CONFIG_DEBUG_QCOM_UARTDM=y +CONFIG_DEBUG_UART_PHYS=0x16640000 +CONFIG_DEBUG_UART_VIRT=0xFA740000 +CONFIG_EARLY_PRINTK=y diff --git a/arch/arm/configs/qcom_defconfig b/arch/arm/configs/qcom_defconfig index 8c7da3319d82..5d93d3b8044d 100644 --- a/arch/arm/configs/qcom_defconfig +++ b/arch/arm/configs/qcom_defconfig @@ -1,8 +1,10 @@ CONFIG_SYSVIPC=y +CONFIG_FHANDLE=y CONFIG_NO_HZ=y CONFIG_HIGH_RES_TIMERS=y CONFIG_IKCONFIG=y CONFIG_IKCONFIG_PROC=y +CONFIG_CGROUPS=y CONFIG_BLK_DEV_INITRD=y CONFIG_SYSCTL_SYSCALL=y CONFIG_KALLSYMS_ALL=y @@ -21,18 +23,35 @@ 
CONFIG_ARCH_QCOM=y CONFIG_ARCH_MSM8X60=y CONFIG_ARCH_MSM8960=y CONFIG_ARCH_MSM8974=y +CONFIG_PCI=y +CONFIG_PCI_MSI=y +CONFIG_PCI_STUB=y +CONFIG_PCI_HOST_GENERIC=y +CONFIG_PCIEPORTBUS=y CONFIG_SMP=y CONFIG_PREEMPT=y CONFIG_AEABI=y CONFIG_HIGHMEM=y CONFIG_HIGHPTE=y CONFIG_CLEANCACHE=y +CONFIG_CMA=y +CONFIG_SECCOMP=y CONFIG_ARM_APPENDED_DTB=y CONFIG_ARM_ATAG_DTB_COMPAT=y +CONFIG_CPU_FREQ=y +CONFIG_CPU_FREQ_STAT_DETAILS=y +CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y +CONFIG_CPU_FREQ_GOV_POWERSAVE=y +CONFIG_CPU_FREQ_GOV_USERSPACE=y +CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y +CONFIG_CPUFREQ_DT=y +CONFIG_ARM_QCOM_CPUFREQ=y CONFIG_CPU_IDLE=y +CONFIG_ARM_QCOM_CPUIDLE=y CONFIG_VFP=y CONFIG_NEON=y # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set +CONFIG_PM_RUNTIME=y CONFIG_NET=y CONFIG_PACKET=y CONFIG_UNIX=y @@ -47,25 +66,43 @@ CONFIG_IP_PNP_DHCP=y # CONFIG_INET_XFRM_MODE_BEET is not set # CONFIG_INET_LRO is not set # CONFIG_IPV6 is not set -CONFIG_CFG80211=y +CONFIG_BT=y +CONFIG_BT_RFCOMM=y +CONFIG_BT_RFCOMM_TTY=y +CONFIG_BT_BNEP=y +CONFIG_BT_BNEP_MC_FILTER=y +CONFIG_BT_BNEP_PROTO_FILTER=y +CONFIG_BT_HIDP=y +CONFIG_BT_HCIUART=y +CONFIG_BT_HCIUART_H4=y +CONFIG_BT_HCIUART_ATH3K=y +CONFIG_CFG80211=m +CONFIG_CFG80211_WEXT=y +CONFIG_MAC80211=m CONFIG_RFKILL=y CONFIG_DEVTMPFS=y CONFIG_DEVTMPFS_MOUNT=y +CONFIG_DMA_CMA=y +CONFIG_CMA_SIZE_MBYTES=64 CONFIG_MTD=y CONFIG_MTD_BLOCK=y CONFIG_MTD_M25P80=y CONFIG_MTD_SPI_NOR=y CONFIG_BLK_DEV_LOOP=y CONFIG_BLK_DEV_RAM=y -CONFIG_SCSI=y +CONFIG_EEPROM_AT24=y CONFIG_BLK_DEV_SD=y CONFIG_CHR_DEV_SG=y CONFIG_CHR_DEV_SCH=y CONFIG_SCSI_CONSTANTS=y CONFIG_SCSI_LOGGING=y CONFIG_SCSI_SCAN_ASYNC=y +CONFIG_ATA=y +CONFIG_SATA_AHCI_PLATFORM=y CONFIG_NETDEVICES=y CONFIG_DUMMY=y +CONFIG_ATL1C=y +CONFIG_KS8851=y CONFIG_MDIO_BITBANG=y CONFIG_MDIO_GPIO=y CONFIG_SLIP=y @@ -74,6 +111,9 @@ CONFIG_SLIP_MODE_SLIP6=y CONFIG_USB_USBNET=y # CONFIG_USB_NET_AX8817X is not set # CONFIG_USB_NET_ZAURUS is not set +CONFIG_ATH_CARDS=m +CONFIG_ATH6KL=m +CONFIG_ATH6KL_SDIO=m 
CONFIG_INPUT_EVDEV=y # CONFIG_KEYBOARD_ATKBD is not set # CONFIG_MOUSE_PS2 is not set @@ -85,8 +125,8 @@ CONFIG_SERIO_LIBPS2=y # CONFIG_LEGACY_PTYS is not set CONFIG_SERIAL_MSM=y CONFIG_SERIAL_MSM_CONSOLE=y +CONFIG_SERIAL_MSM_HS=y CONFIG_HW_RANDOM=y -CONFIG_I2C=y CONFIG_I2C_CHARDEV=y CONFIG_I2C_QUP=y CONFIG_SPI=y @@ -97,16 +137,83 @@ CONFIG_PINCTRL_APQ8084=y CONFIG_PINCTRL_IPQ8064=y CONFIG_PINCTRL_MSM8960=y CONFIG_PINCTRL_MSM8X74=y +CONFIG_PINCTRL_SSBI_PMIC=y CONFIG_DEBUG_GPIO=y CONFIG_GPIO_SYSFS=y CONFIG_POWER_SUPPLY=y CONFIG_POWER_RESET=y CONFIG_POWER_RESET_MSM=y CONFIG_THERMAL=y +CONFIG_THERMAL_TSENS8960=y +CONFIG_WATCHDOG=y +CONFIG_QCOM_WDT=y +CONFIG_MFD_PM8921_CORE=y +CONFIG_MFD_QCOM_RPM=y +CONFIG_MFD_SPMI_PMIC=y +CONFIG_MFD_SYSCON=y CONFIG_REGULATOR=y CONFIG_REGULATOR_FIXED_VOLTAGE=y +CONFIG_REGULATOR_QCOM_RPM=y CONFIG_MEDIA_SUPPORT=y -CONFIG_FB=y +CONFIG_MEDIA_CAMERA_SUPPORT=y +CONFIG_MEDIA_USB_SUPPORT=y +CONFIG_USB_VIDEO_CLASS=y +CONFIG_USB_GSPCA=y +CONFIG_USB_M5602=m +CONFIG_USB_STV06XX=m +CONFIG_USB_GL860=m +CONFIG_USB_GSPCA_BENQ=m +CONFIG_USB_GSPCA_CONEX=m +CONFIG_USB_GSPCA_CPIA1=m +CONFIG_USB_GSPCA_DTCS033=m +CONFIG_USB_GSPCA_ETOMS=m +CONFIG_USB_GSPCA_FINEPIX=m +CONFIG_USB_GSPCA_JEILINJ=m +CONFIG_USB_GSPCA_JL2005BCD=m +CONFIG_USB_GSPCA_KINECT=m +CONFIG_USB_GSPCA_KONICA=m +CONFIG_USB_GSPCA_MARS=m +CONFIG_USB_GSPCA_MR97310A=m +CONFIG_USB_GSPCA_NW80X=m +CONFIG_USB_GSPCA_OV519=m +CONFIG_USB_GSPCA_OV534=m +CONFIG_USB_GSPCA_OV534_9=m +CONFIG_USB_GSPCA_PAC207=m +CONFIG_USB_GSPCA_PAC7302=m +CONFIG_USB_GSPCA_PAC7311=m +CONFIG_USB_GSPCA_SE401=m +CONFIG_USB_GSPCA_SN9C2028=m +CONFIG_USB_GSPCA_SN9C20X=m +CONFIG_USB_GSPCA_SONIXB=m +CONFIG_USB_GSPCA_SONIXJ=m +CONFIG_USB_GSPCA_SPCA500=m +CONFIG_USB_GSPCA_SPCA501=m +CONFIG_USB_GSPCA_SPCA505=m +CONFIG_USB_GSPCA_SPCA506=m +CONFIG_USB_GSPCA_SPCA508=m +CONFIG_USB_GSPCA_SPCA561=m +CONFIG_USB_GSPCA_SPCA1528=m +CONFIG_USB_GSPCA_SQ905=m +CONFIG_USB_GSPCA_SQ905C=m +CONFIG_USB_GSPCA_SQ930X=m +CONFIG_USB_GSPCA_STK014=m 
+CONFIG_USB_GSPCA_STK1135=m +CONFIG_USB_GSPCA_STV0680=m +CONFIG_USB_GSPCA_SUNPLUS=m +CONFIG_USB_GSPCA_T613=m +CONFIG_USB_GSPCA_TOPRO=m +CONFIG_USB_GSPCA_TV8532=m +CONFIG_USB_GSPCA_VC032X=m +CONFIG_USB_GSPCA_VICAM=m +CONFIG_USB_GSPCA_XIRLINK_CIT=m +CONFIG_USB_GSPCA_ZC3XX=m +CONFIG_USB_PWC=m +CONFIG_USB_ZR364XX=m +CONFIG_DRM=y +CONFIG_DRM_PANEL_SIMPLE=y +CONFIG_BACKLIGHT_LCD_SUPPORT=y +CONFIG_BACKLIGHT_CLASS_DEVICE=y +CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_SOUND=y CONFIG_SND=y CONFIG_SND_DYNAMIC_MINORS=y @@ -114,18 +221,89 @@ CONFIG_SND_DYNAMIC_MINORS=y # CONFIG_SND_SPI is not set # CONFIG_SND_USB is not set CONFIG_SND_SOC=y +CONFIG_SND_SOC_MSM_QDSP6_HDMI_AUDIO=y +CONFIG_SND_SOC_QDSP6V2=y +CONFIG_SND_SOC_MSM8960=y +CONFIG_SND_SOC_HDMI_CODEC=y +CONFIG_SND_SIMPLE_CARD=y CONFIG_HID_BATTERY_STRENGTH=y +CONFIG_HID_APPLE=y +CONFIG_HID_LOGITECH=m +CONFIG_HID_MAGICMOUSE=m +CONFIG_HID_MICROSOFT=m CONFIG_USB=y CONFIG_USB_ANNOUNCE_NEW_DEVICES=y +CONFIG_USB_OTG_FSM=y CONFIG_USB_MON=y CONFIG_USB_EHCI_HCD=y +CONFIG_USB_EHCI_MSM=y CONFIG_USB_ACM=y -CONFIG_USB_SERIAL=y +CONFIG_USB_STORAGE=y +CONFIG_USB_UAS=y +CONFIG_USB_CHIPIDEA=y +CONFIG_USB_CHIPIDEA_UDC=y +CONFIG_USB_CHIPIDEA_HOST=y +CONFIG_USB_SERIAL=m +CONFIG_USB_SERIAL_GENERIC=y +CONFIG_USB_SERIAL_SIMPLE=m +CONFIG_USB_SERIAL_AIRCABLE=m +CONFIG_USB_SERIAL_ARK3116=m +CONFIG_USB_SERIAL_BELKIN=m +CONFIG_USB_SERIAL_CH341=m +CONFIG_USB_SERIAL_WHITEHEAT=m +CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m +CONFIG_USB_SERIAL_CP210X=m +CONFIG_USB_SERIAL_CYPRESS_M8=m +CONFIG_USB_SERIAL_EMPEG=m +CONFIG_USB_SERIAL_FTDI_SIO=m +CONFIG_USB_SERIAL_VISOR=m +CONFIG_USB_SERIAL_IPAQ=m +CONFIG_USB_SERIAL_IR=m +CONFIG_USB_SERIAL_EDGEPORT=m +CONFIG_USB_SERIAL_EDGEPORT_TI=m +CONFIG_USB_SERIAL_F81232=m +CONFIG_USB_SERIAL_GARMIN=m +CONFIG_USB_SERIAL_IPW=m +CONFIG_USB_SERIAL_IUU=m +CONFIG_USB_SERIAL_KEYSPAN_PDA=m +CONFIG_USB_SERIAL_KEYSPAN=m +CONFIG_USB_SERIAL_KLSI=m +CONFIG_USB_SERIAL_KOBIL_SCT=m +CONFIG_USB_SERIAL_MCT_U232=m +CONFIG_USB_SERIAL_METRO=m 
+CONFIG_USB_SERIAL_MOS7720=m +CONFIG_USB_SERIAL_MOS7840=m +CONFIG_USB_SERIAL_MXUPORT=m +CONFIG_USB_SERIAL_NAVMAN=m +CONFIG_USB_SERIAL_PL2303=m +CONFIG_USB_SERIAL_OTI6858=m +CONFIG_USB_SERIAL_QCAUX=m +CONFIG_USB_SERIAL_QUALCOMM=m +CONFIG_USB_SERIAL_SPCP8X5=m +CONFIG_USB_SERIAL_SAFE=m +CONFIG_USB_SERIAL_SIERRAWIRELESS=m +CONFIG_USB_SERIAL_SYMBOL=m +CONFIG_USB_SERIAL_TI=m +CONFIG_USB_SERIAL_CYBERJACK=m +CONFIG_USB_SERIAL_XIRCOM=m +CONFIG_USB_SERIAL_OPTION=m +CONFIG_USB_SERIAL_OMNINET=m +CONFIG_USB_SERIAL_OPTICON=m +CONFIG_USB_SERIAL_XSENS_MT=m +CONFIG_USB_SERIAL_WISHBONE=m +CONFIG_USB_SERIAL_SSU100=m +CONFIG_USB_SERIAL_QT2=m +CONFIG_USB_MSM_OTG=y CONFIG_USB_GADGET=y CONFIG_USB_GADGET_DEBUG_FILES=y CONFIG_USB_GADGET_VBUS_DRAW=500 +CONFIG_USB_ETH=m +CONFIG_USB_GADGETFS=m +CONFIG_USB_FUNCTIONFS=m +CONFIG_USB_FUNCTIONFS_RNDIS=y +CONFIG_USB_MASS_STORAGE=m CONFIG_MMC=y -CONFIG_MMC_BLOCK_MINORS=16 +CONFIG_MMC_BLOCK_MINORS=32 CONFIG_MMC_ARMMMCI=y CONFIG_MMC_SDHCI=y CONFIG_MMC_SDHCI_PLTFM=y @@ -133,15 +311,25 @@ CONFIG_MMC_SDHCI_MSM=y CONFIG_RTC_CLASS=y CONFIG_DMADEVICES=y CONFIG_QCOM_BAM_DMA=y +CONFIG_MSM_DMA=y CONFIG_STAGING=y CONFIG_QCOM_GSBI=y +CONFIG_QCOM_PM=y +CONFIG_QCOM_SMD=y +CONFIG_MSM_PIL=y +CONFIG_MSM_PIL_Q6V4=y +CONFIG_MSM_REMOTE_SPINLOCKS=y CONFIG_COMMON_CLK_QCOM=y CONFIG_APQ_MMCC_8084=y CONFIG_IPQ_GCC_806X=y CONFIG_MSM_GCC_8660=y CONFIG_MSM_MMCC_8960=y CONFIG_MSM_MMCC_8974=y -CONFIG_MSM_IOMMU=y +CONFIG_QCOM_HFPLL=y +CONFIG_KPSS_XCC=y +CONFIG_KRAITCC=y +CONFIG_QCOM_RPMCC=y +CONFIG_QCOM_IOMMU_V0=y CONFIG_PHY_QCOM_APQ8064_SATA=y CONFIG_PHY_QCOM_IPQ806X_SATA=y CONFIG_EXT2_FS=y @@ -149,13 +337,19 @@ CONFIG_EXT2_FS_XATTR=y CONFIG_EXT3_FS=y # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set CONFIG_EXT4_FS=y +CONFIG_FANOTIFY=y +CONFIG_AUTOFS4_FS=y CONFIG_FUSE_FS=y CONFIG_VFAT_FS=y -CONFIG_TMPFS=y +CONFIG_TMPFS_POSIX_ACL=y CONFIG_JFFS2_FS=y CONFIG_NFS_FS=y CONFIG_NFS_V3_ACL=y CONFIG_NFS_V4=y +CONFIG_ROOT_NFS=y +CONFIG_NFSD=m +CONFIG_NFSD_V3=y +CONFIG_NFSD_V3_ACL=y 
CONFIG_CIFS=y CONFIG_NLS_CODEPAGE_437=y CONFIG_NLS_ASCII=y @@ -167,5 +361,6 @@ CONFIG_DEBUG_INFO=y CONFIG_MAGIC_SYSRQ=y CONFIG_LOCKUP_DETECTOR=y # CONFIG_DETECT_HUNG_TASK is not set -# CONFIG_SCHED_DEBUG is not set +CONFIG_SCHEDSTATS=y CONFIG_TIMER_STATS=y +CONFIG_CRYPTO_CCM=y diff --git a/arch/arm/include/asm/krait-l2-accessors.h b/arch/arm/include/asm/krait-l2-accessors.h new file mode 100644 index 000000000000..48fe5527bc01 --- /dev/null +++ b/arch/arm/include/asm/krait-l2-accessors.h @@ -0,0 +1,20 @@ +/* + * Copyright (c) 2011-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */
+
+#ifndef __ASMARM_KRAIT_L2_ACCESSORS_H
+#define __ASMARM_KRAIT_L2_ACCESSORS_H
+
+extern void krait_set_l2_indirect_reg(u32 addr, u32 val);
+extern u32 krait_get_l2_indirect_reg(u32 addr);
+
+#endif
diff --git a/arch/arm/include/asm/vfp.h b/arch/arm/include/asm/vfp.h
index f4ab34fd4f72..ee5f3084243c 100644
--- a/arch/arm/include/asm/vfp.h
+++ b/arch/arm/include/asm/vfp.h
@@ -22,6 +22,7 @@
 #define FPSID_NODOUBLE (1<<20)
 #define FPSID_ARCH_BIT (16)
 #define FPSID_ARCH_MASK (0xF << FPSID_ARCH_BIT)
+#define FPSID_CPUID_ARCH_MASK (0x7F << FPSID_ARCH_BIT)
 #define FPSID_PART_BIT (8)
 #define FPSID_PART_MASK (0xFF << FPSID_PART_BIT)
 #define FPSID_VARIANT_BIT (4)
@@ -75,6 +76,10 @@
 /* MVFR0 bits */
 #define MVFR0_A_SIMD_BIT (0)
 #define MVFR0_A_SIMD_MASK (0xf << MVFR0_A_SIMD_BIT)
+#define MVFR0_SP_BIT (4)
+#define MVFR0_SP_MASK (0xf << MVFR0_SP_BIT)
+#define MVFR0_DP_BIT (8)
+#define MVFR0_DP_MASK (0xf << MVFR0_DP_BIT)
 /* Bit patterns for decoding the packaged operation descriptors */
 #define VFPOPDESC_LENGTH_BIT (9)
diff --git a/arch/arm/mach-ks8695/include/mach/debug-macro.S b/arch/arm/include/debug/ks8695.S
index a79e48981202..961da1f32ab3 100644
--- a/arch/arm/mach-ks8695/include/mach/debug-macro.S
+++ b/arch/arm/include/debug/ks8695.S
@@ -1,5 +1,5 @@
 /*
- * arch/arm/mach-ks8695/include/mach/debug-macro.S
+ * arch/arm/include/debug/ks8695.S
 *
 * Copyright (C) 2006 Ben Dooks <ben@simtec.co.uk>
 * Copyright (C) 2006 Simtec Electronics
@@ -11,8 +11,12 @@
 * published by the Free Software Foundation.
*/ -#include <mach/hardware.h> -#include <mach/regs-uart.h> +#define KS8695_UART_PA 0x03ffe000 +#define KS8695_UART_VA 0xf00fe000 +#define KS8695_URTH (0x04) +#define KS8695_URLS (0x14) +#define URLS_URTE (1 << 6) +#define URLS_URTHRE (1 << 5) .macro addruart, rp, rv, tmp ldr \rp, =KS8695_UART_PA @ physical base address diff --git a/arch/arm/mach-netx/include/mach/debug-macro.S b/arch/arm/include/debug/netx.S index 247781e096e2..cf7522aba702 100644 --- a/arch/arm/mach-netx/include/mach/debug-macro.S +++ b/arch/arm/include/debug/netx.S @@ -1,5 +1,4 @@ -/* arch/arm/mach-netx/include/mach/debug-macro.S - * +/* * Debugging macro include header * * Copyright (C) 1994-1999 Russell King @@ -11,26 +10,27 @@ * */ -#include "hardware.h" +#define UART_DATA 0 +#define UART_FLAG 0x18 +#define UART_FLAG_BUSY (1 << 3) .macro addruart, rp, rv, tmp - mov \rp, #0x00000a00 - orr \rv, \rp, #io_p2v(0x00100000) @ virtual - orr \rp, \rp, #0x00100000 @ physical + ldr \rp, =CONFIG_DEBUG_UART_PHYS + ldr \rp, =CONFIG_DEBUG_UART_VIRT .endm .macro senduart,rd,rx - str \rd, [\rx, #0] + str \rd, [\rx, #UART_DATA] .endm .macro busyuart,rd,rx -1002: ldr \rd, [\rx, #0x18] - tst \rd, #(1 << 3) +1002: ldr \rd, [\rx, #UART_FLAG] + tst \rd, #UART_FLAG_BUSY bne 1002b .endm .macro waituart,rd,rx -1001: ldr \rd, [\rx, #0x18] - tst \rd, #(1 << 3) +1001: ldr \rd, [\rx, #UART_FLAG] + tst \rd, #UART_FLAG_BUSY bne 1001b .endm diff --git a/arch/arm/include/debug/sa1100.S b/arch/arm/include/debug/sa1100.S new file mode 100644 index 000000000000..6b5e1ce099d3 --- /dev/null +++ b/arch/arm/include/debug/sa1100.S @@ -0,0 +1,37 @@ +/* + * Debugging macro include header + * + * Copyright (C) 1994-1999 Russell King + * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * +*/ + +#define UTDR 0x14 +#define UTSR1 0x20 +#define UTSR1_TBY 0x00000001 /* Transmitter BusY (read) */ +#define UTSR1_TNF 0x00000004 /* Transmit FIFO Not Full (read) */ + + .macro addruart, rp, rv, tmp + ldr \rp, =CONFIG_DEBUG_UART_PHYS + ldr \rv, =CONFIG_DEBUG_UART_VIRT + .endm + + .macro senduart,rd,rx + str \rd, [\rx, #UTDR] + .endm + + .macro waituart,rd,rx +1001: ldr \rd, [\rx, #UTSR1] + tst \rd, #UTSR1_TNF + beq 1001b + .endm + + .macro busyuart,rd,rx +1001: ldr \rd, [\rx, #UTSR1] + tst \rd, #UTSR1_TBY + bne 1001b + .endm diff --git a/arch/arm/mach-omap1/include/mach/debug-macro.S b/arch/arm/mach-omap1/include/mach/debug-macro.S deleted file mode 100644 index 5c1a26c9f490..000000000000 --- a/arch/arm/mach-omap1/include/mach/debug-macro.S +++ /dev/null @@ -1,101 +0,0 @@ -/* arch/arm/mach-omap1/include/mach/debug-macro.S - * - * Debugging macro include header - * - * Copyright (C) 1994-1999 Russell King - * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * -*/ - -#include <linux/serial_reg.h> - -#include "serial.h" - - .pushsection .data -omap_uart_phys: .word 0x0 -omap_uart_virt: .word 0x0 - .popsection - - /* - * Note that this code won't work if the bootloader passes - * a wrong machine ID number in r1. To debug, just hardcode - * the desired UART phys and virt addresses temporarily into - * the omap_uart_phys and omap_uart_virt above. 
- */ - .macro addruart, rp, rv, tmp - - /* Use omap_uart_phys/virt if already configured */ -9: adr \rp, 99f @ get effective addr of 99f - ldr \rv, [\rp] @ get absolute addr of 99f - sub \rv, \rv, \rp @ offset between the two - ldr \rp, [\rp, #4] @ abs addr of omap_uart_phys - sub \tmp, \rp, \rv @ make it effective - ldr \rp, [\tmp, #0] @ omap_uart_phys - ldr \rv, [\tmp, #4] @ omap_uart_virt - cmp \rp, #0 @ is port configured? - cmpne \rv, #0 - bne 100f @ already configured - - /* Check the debug UART configuration set in uncompress.h */ - and \rp, pc, #0xff000000 - ldr \rv, =OMAP_UART_INFO_OFS - ldr \rp, [\rp, \rv] - - /* Select the UART to use based on the UART1 scratchpad value */ -10: cmp \rp, #0 @ no port configured? - beq 11f @ if none, try to use UART1 - cmp \rp, #OMAP1UART1 - beq 11f @ configure OMAP1UART1 - cmp \rp, #OMAP1UART2 - beq 12f @ configure OMAP1UART2 - cmp \rp, #OMAP1UART3 - beq 13f @ configure OMAP2UART3 - - /* Configure the UART offset from the phys/virt base */ -11: mov \rp, #0x00fb0000 @ OMAP1UART1 - b 98f -12: mov \rp, #0x00fb0000 @ OMAP1UART1 - orr \rp, \rp, #0x00000800 @ OMAP1UART2 - b 98f -13: mov \rp, #0x00fb0000 @ OMAP1UART1 - orr \rp, \rp, #0x00000800 @ OMAP1UART2 - orr \rp, \rp, #0x00009000 @ OMAP1UART3 - - /* Store both phys and virt address for the uart */ -98: add \rp, \rp, #0xff000000 @ phys base - str \rp, [\tmp, #0] @ omap_uart_phys - sub \rp, \rp, #0xff000000 @ phys base - add \rp, \rp, #0xfe000000 @ virt base - str \rp, [\tmp, #4] @ omap_uart_virt - b 9b - - .align -99: .word . 
- .word omap_uart_phys - .ltorg - -100: - .endm - - .macro senduart,rd,rx - strb \rd, [\rx] - .endm - - .macro busyuart,rd,rx -1001: ldrb \rd, [\rx, #(UART_LSR << OMAP_PORT_SHIFT)] - and \rd, \rd, #(UART_LSR_TEMT | UART_LSR_THRE) - teq \rd, #(UART_LSR_TEMT | UART_LSR_THRE) - beq 1002f - ldrb \rd, [\rx, #(UART_LSR << OMAP7XX_PORT_SHIFT)] - and \rd, \rd, #(UART_LSR_TEMT | UART_LSR_THRE) - teq \rd, #(UART_LSR_TEMT | UART_LSR_THRE) - bne 1001b -1002: - .endm - - .macro waituart,rd,rx - .endm diff --git a/arch/arm/mach-qcom/Kconfig b/arch/arm/mach-qcom/Kconfig index ee5697ba05bc..891442eb01b1 100644 --- a/arch/arm/mach-qcom/Kconfig +++ b/arch/arm/mach-qcom/Kconfig @@ -5,6 +5,8 @@ menuconfig ARCH_QCOM select ARM_AMBA select CLKSRC_OF select PINCTRL + select MIGHT_HAVE_PCI + select PCI_DOMAINS if PCI select QCOM_SCM if SMP help Support for Qualcomm's devicetree based systems. @@ -23,7 +25,4 @@ config ARCH_MSM8974 bool "Enable support for MSM8974" select HAVE_ARM_ARCH_TIMER -config QCOM_SCM - bool - endif diff --git a/arch/arm/mach-qcom/Makefile b/arch/arm/mach-qcom/Makefile index 8f756ae1ae31..e324375fa919 100644 --- a/arch/arm/mach-qcom/Makefile +++ b/arch/arm/mach-qcom/Makefile @@ -1,5 +1,2 @@ obj-y := board.o obj-$(CONFIG_SMP) += platsmp.o -obj-$(CONFIG_QCOM_SCM) += scm.o scm-boot.o - -CFLAGS_scm.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1) diff --git a/arch/arm/mach-qcom/board.c b/arch/arm/mach-qcom/board.c index 6d8bbf7d39d8..605114935b20 100644 --- a/arch/arm/mach-qcom/board.c +++ b/arch/arm/mach-qcom/board.c @@ -11,6 +11,10 @@ */ #include <linux/init.h> +#include <linux/of.h> +#include <linux/of_gpio.h> +#include <linux/of_platform.h> +#include <linux/delay.h> #include <asm/mach/arch.h> @@ -25,6 +29,52 @@ static const char * const qcom_dt_match[] __initconst = { NULL }; +static void __init qcom_late_init(void) +{ + struct device_node *node; + int reset_gpio, ret; + + for_each_compatible_node(node, NULL, "atheros,ath6kl") { + of_node_put(node); + + 
reset_gpio = of_get_named_gpio(node, "reset-gpio", 0); + if (reset_gpio < 0) + return; + + ret = gpio_request_one(reset_gpio, + GPIOF_DIR_OUT | GPIOF_INIT_HIGH, "reset"); + if (ret) + return; + + udelay(100); + gpio_set_value(reset_gpio, 0); + udelay(100); + gpio_set_value(reset_gpio, 1); + } + + for_each_compatible_node(node, NULL, "bt_reset") { + of_node_put(node); + + reset_gpio = of_get_named_gpio(node, "reset-gpio", 0); + if (reset_gpio < 0) + return; + + ret = gpio_request_one(reset_gpio, + GPIOF_DIR_OUT | GPIOF_INIT_HIGH, + "reset"); + if (ret) { + pr_err("bt_reset gpio_request failed %d\n", ret); + return; + } + + udelay(100); + gpio_set_value(reset_gpio, 0); + udelay(100); + gpio_set_value(reset_gpio, 1); + } +} + DT_MACHINE_START(QCOM_DT, "Qualcomm (Flattened Device Tree)") - .dt_compat = qcom_dt_match, + .init_late = qcom_late_init, + .dt_compat = qcom_dt_match, MACHINE_END diff --git a/arch/arm/mach-qcom/platsmp.c b/arch/arm/mach-qcom/platsmp.c index d6908569ecaf..a692bcb7e7f5 100644 --- a/arch/arm/mach-qcom/platsmp.c +++ b/arch/arm/mach-qcom/platsmp.c @@ -20,7 +20,7 @@ #include <asm/smp_plat.h> -#include "scm-boot.h" +#include <soc/qcom/scm-boot.h> #define VDD_SC1_ARRAY_CLAMP_GFS_CTL 0x35a0 #define SCSS_CPU1CORE_RESET 0x2d80 diff --git a/arch/arm/mach-sa1100/include/mach/debug-macro.S b/arch/arm/mach-sa1100/include/mach/debug-macro.S deleted file mode 100644 index 530772d937ad..000000000000 --- a/arch/arm/mach-sa1100/include/mach/debug-macro.S +++ /dev/null @@ -1,62 +0,0 @@ -/* arch/arm/mach-sa1100/include/mach/debug-macro.S - * - * Debugging macro include header - * - * Copyright (C) 1994-1999 Russell King - * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- * -*/ -#include <mach/hardware.h> - - .macro addruart, rp, rv, tmp - mrc p15, 0, \rp, c1, c0 - tst \rp, #1 @ MMU enabled? - moveq \rp, #0x80000000 @ physical base address - movne \rp, #0xf8000000 @ virtual address - - @ We probe for the active serial port here, coherently with - @ the comment in arch/arm/mach-sa1100/include/mach/uncompress.h. - @ We assume r1 can be clobbered. - - @ see if Ser3 is active - add \rp, \rp, #0x00050000 - ldr \rv, [\rp, #UTCR3] - tst \rv, #UTCR3_TXE - - @ if Ser3 is inactive, then try Ser1 - addeq \rp, \rp, #(0x00010000 - 0x00050000) - ldreq \rv, [\rp, #UTCR3] - tsteq \rv, #UTCR3_TXE - - @ if Ser1 is inactive, then try Ser2 - addeq \rp, \rp, #(0x00030000 - 0x00010000) - ldreq \rv, [\rp, #UTCR3] - tsteq \rv, #UTCR3_TXE - - @ clear top bits, and generate both phys and virt addresses - lsl \rp, \rp, #8 - lsr \rp, \rp, #8 - orr \rv, \rp, #0xf8000000 @ virtual - orr \rp, \rp, #0x80000000 @ physical - - .endm - - .macro senduart,rd,rx - str \rd, [\rx, #UTDR] - .endm - - .macro waituart,rd,rx -1001: ldr \rd, [\rx, #UTSR1] - tst \rd, #UTSR1_TNF - beq 1001b - .endm - - .macro busyuart,rd,rx -1001: ldr \rd, [\rx, #UTSR1] - tst \rd, #UTSR1_TBY - bne 1001b - .endm diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S index 22ac2a6fbfe3..8b4ee5e81c14 100644 --- a/arch/arm/mm/proc-v7.S +++ b/arch/arm/mm/proc-v7.S @@ -591,9 +591,10 @@ __krait_proc_info: /* * Some Krait processors don't indicate support for SDIV and UDIV * instructions in the ARM instruction set, even though they actually - * do support them. + * do support them. They also don't indicate support for fused multiply + * instructions even though they actually do support them. */ - __v7_proc __v7_setup, hwcaps = HWCAP_IDIV + __v7_proc __v7_setup, hwcaps = HWCAP_IDIV | HWCAP_VFPv4 .size __krait_proc_info, . 
- __krait_proc_info /* diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S index cda654cbf2c2..f74a8f7e5f84 100644 --- a/arch/arm/vfp/vfphw.S +++ b/arch/arm/vfp/vfphw.S @@ -197,6 +197,12 @@ look_for_VFP_exceptions: tst r5, #FPSCR_IXE bne process_exception + tst r5, #FPSCR_LENGTH_MASK + beq skip + orr r1, r1, #FPEXC_DEX + b process_exception +skip: + @ Fall into hand on to next handler - appropriate coproc instr @ not recognised by VFP diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c index 2f37e1d6cb45..f901242dee98 100644 --- a/arch/arm/vfp/vfpmodule.c +++ b/arch/arm/vfp/vfpmodule.c @@ -722,6 +722,7 @@ static int __init vfp_init(void) { unsigned int vfpsid; unsigned int cpu_arch = cpu_architecture(); + u32 mvfr0; if (cpu_arch >= CPU_ARCH_ARMv6) on_each_cpu(vfp_enable, NULL, 1); @@ -738,63 +739,73 @@ static int __init vfp_init(void) vfp_vector = vfp_null_entry; pr_info("VFP support v0.3: "); - if (VFP_arch) + if (VFP_arch) { pr_cont("not present\n"); - else if (vfpsid & FPSID_NODOUBLE) { - pr_cont("no double precision support\n"); - } else { - hotcpu_notifier(vfp_hotplug, 0); - - VFP_arch = (vfpsid & FPSID_ARCH_MASK) >> FPSID_ARCH_BIT; /* Extract the architecture version */ - pr_cont("implementor %02x architecture %d part %02x variant %x rev %x\n", - (vfpsid & FPSID_IMPLEMENTER_MASK) >> FPSID_IMPLEMENTER_BIT, - (vfpsid & FPSID_ARCH_MASK) >> FPSID_ARCH_BIT, - (vfpsid & FPSID_PART_MASK) >> FPSID_PART_BIT, - (vfpsid & FPSID_VARIANT_MASK) >> FPSID_VARIANT_BIT, - (vfpsid & FPSID_REV_MASK) >> FPSID_REV_BIT); - - vfp_vector = vfp_support_entry; - - thread_register_notifier(&vfp_notifier_block); - vfp_pm_init(); - + return 0; + /* Extract the architecture on CPUID scheme */ + } else if ((read_cpuid_id() & 0x000f0000) == 0x000f0000) { + VFP_arch = vfpsid & FPSID_CPUID_ARCH_MASK; + VFP_arch >>= FPSID_ARCH_BIT; /* - * We detected VFP, and the support code is - * in place; report VFP support to userspace. 
+ * Check for the presence of the Advanced SIMD + * load/store instructions, integer and single + * precision floating point operations. Only check + * for NEON if the hardware has the MVFR registers. */ - elf_hwcap |= HWCAP_VFP; +#ifdef CONFIG_NEON + if ((fmrx(MVFR1) & 0x000fff00) == 0x00011100) + elf_hwcap |= HWCAP_NEON; +#endif #ifdef CONFIG_VFPv3 - if (VFP_arch >= 2) { + mvfr0 = fmrx(MVFR0); + if (((mvfr0 & MVFR0_DP_MASK) >> MVFR0_DP_BIT) == 0x2 || + ((mvfr0 & MVFR0_SP_MASK) >> MVFR0_SP_BIT) == 0x2) { elf_hwcap |= HWCAP_VFPv3; - /* * Check for VFPv3 D16 and VFPv4 D16. CPUs in * this configuration only have 16 x 64bit * registers. */ - if (((fmrx(MVFR0) & MVFR0_A_SIMD_MASK)) == 1) - elf_hwcap |= HWCAP_VFPv3D16; /* also v4-D16 */ + if ((mvfr0 & MVFR0_A_SIMD_MASK) == 1) + /* also v4-D16 */ + elf_hwcap |= HWCAP_VFPv3D16; else elf_hwcap |= HWCAP_VFPD32; } + + if ((fmrx(MVFR1) & 0xf0000000) == 0x10000000) + elf_hwcap |= HWCAP_VFPv4; #endif - /* - * Check for the presence of the Advanced SIMD - * load/store instructions, integer and single - * precision floating point operations. Only check - * for NEON if the hardware has the MVFR registers. - */ - if ((read_cpuid_id() & 0x000f0000) == 0x000f0000) { -#ifdef CONFIG_NEON - if ((fmrx(MVFR1) & 0x000fff00) == 0x00011100) - elf_hwcap |= HWCAP_NEON; -#endif -#ifdef CONFIG_VFPv3 - if ((fmrx(MVFR1) & 0xf0000000) == 0x10000000) - elf_hwcap |= HWCAP_VFPv4; -#endif + /* Extract the architecture version on pre-cpuid scheme */ + } else { + if (vfpsid & FPSID_NODOUBLE) { + pr_cont("no double precision support\n"); + return 0; } + + VFP_arch = (vfpsid & FPSID_ARCH_MASK) >> FPSID_ARCH_BIT; } + + hotcpu_notifier(vfp_hotplug, 0); + + vfp_vector = vfp_support_entry; + + thread_register_notifier(&vfp_notifier_block); + vfp_pm_init(); + + /* + * We detected VFP, and the support code is + * in place; report VFP support to userspace. 
+ */ + elf_hwcap |= HWCAP_VFP; + + pr_cont("implementor %02x architecture %d part %02x variant %x rev %x\n", + (vfpsid & FPSID_IMPLEMENTER_MASK) >> FPSID_IMPLEMENTER_BIT, + VFP_arch, + (vfpsid & FPSID_PART_MASK) >> FPSID_PART_BIT, + (vfpsid & FPSID_VARIANT_MASK) >> FPSID_VARIANT_BIT, + (vfpsid & FPSID_REV_MASK) >> FPSID_REV_BIT); + return 0; } diff --git a/drivers/clk/clk-divider.c b/drivers/clk/clk-divider.c index c0a842b335c5..a728f000f625 100644 --- a/drivers/clk/clk-divider.c +++ b/drivers/clk/clk-divider.c @@ -30,7 +30,7 @@ #define to_clk_divider(_hw) container_of(_hw, struct clk_divider, hw) -#define div_mask(d) ((1 << ((d)->width)) - 1) +#define div_mask(width) (BIT(width) - 1) static unsigned int _get_table_maxdiv(const struct clk_div_table *table) { @@ -54,15 +54,16 @@ static unsigned int _get_table_mindiv(const struct clk_div_table *table) return mindiv; } -static unsigned int _get_maxdiv(struct clk_divider *divider) +static unsigned int _get_maxdiv(const struct clk_div_table *table, u8 width, + unsigned long flags) { - if (divider->flags & CLK_DIVIDER_ONE_BASED) - return div_mask(divider); - if (divider->flags & CLK_DIVIDER_POWER_OF_TWO) - return 1 << div_mask(divider); - if (divider->table) - return _get_table_maxdiv(divider->table); - return div_mask(divider) + 1; + if (flags & CLK_DIVIDER_ONE_BASED) + return div_mask(width); + if (flags & CLK_DIVIDER_POWER_OF_TWO) + return 1 << div_mask(width); + if (table) + return _get_table_maxdiv(table); + return div_mask(width) + 1; } static unsigned int _get_table_div(const struct clk_div_table *table, @@ -76,14 +77,15 @@ static unsigned int _get_table_div(const struct clk_div_table *table, return 0; } -static unsigned int _get_div(struct clk_divider *divider, unsigned int val) +static unsigned int _get_div(const struct clk_div_table *table, + unsigned int val, unsigned long flags) { - if (divider->flags & CLK_DIVIDER_ONE_BASED) + if (flags & CLK_DIVIDER_ONE_BASED) return val; - if (divider->flags & 
CLK_DIVIDER_POWER_OF_TWO) + if (flags & CLK_DIVIDER_POWER_OF_TWO) return 1 << val; - if (divider->table) - return _get_table_div(divider->table, val); + if (table) + return _get_table_div(table, val); return val + 1; } @@ -98,29 +100,28 @@ static unsigned int _get_table_val(const struct clk_div_table *table, return 0; } -static unsigned int _get_val(struct clk_divider *divider, unsigned int div) +static unsigned int _get_val(const struct clk_div_table *table, + unsigned int div, unsigned long flags) { - if (divider->flags & CLK_DIVIDER_ONE_BASED) + if (flags & CLK_DIVIDER_ONE_BASED) return div; - if (divider->flags & CLK_DIVIDER_POWER_OF_TWO) + if (flags & CLK_DIVIDER_POWER_OF_TWO) return __ffs(div); - if (divider->table) - return _get_table_val(divider->table, div); + if (table) + return _get_table_val(table, div); return div - 1; } -static unsigned long clk_divider_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) +unsigned long divider_recalc_rate(struct clk_hw *hw, unsigned long parent_rate, + unsigned int val, + const struct clk_div_table *table, + unsigned long flags) { - struct clk_divider *divider = to_clk_divider(hw); - unsigned int div, val; - - val = clk_readl(divider->reg) >> divider->shift; - val &= div_mask(divider); + unsigned int div; - div = _get_div(divider, val); + div = _get_div(table, val, flags); if (!div) { - WARN(!(divider->flags & CLK_DIVIDER_ALLOW_ZERO), + WARN(!(flags & CLK_DIVIDER_ALLOW_ZERO), "%s: Zero divisor and CLK_DIVIDER_ALLOW_ZERO not set\n", __clk_get_name(hw->clk)); return parent_rate; @@ -128,6 +129,20 @@ static unsigned long clk_divider_recalc_rate(struct clk_hw *hw, return DIV_ROUND_UP(parent_rate, div); } +EXPORT_SYMBOL_GPL(divider_recalc_rate); + +static unsigned long clk_divider_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct clk_divider *divider = to_clk_divider(hw); + unsigned int val; + + val = clk_readl(divider->reg) >> divider->shift; + val &= div_mask(divider->width); + + return 
divider_recalc_rate(hw, parent_rate, val, divider->table, + divider->flags); +} /* * The reverse of DIV_ROUND_UP: The maximum number which @@ -146,12 +161,13 @@ static bool _is_valid_table_div(const struct clk_div_table *table, return false; } -static bool _is_valid_div(struct clk_divider *divider, unsigned int div) +static bool _is_valid_div(const struct clk_div_table *table, unsigned int div, + unsigned long flags) { - if (divider->flags & CLK_DIVIDER_POWER_OF_TWO) + if (flags & CLK_DIVIDER_POWER_OF_TWO) return is_power_of_2(div); - if (divider->table) - return _is_valid_table_div(divider->table, div); + if (table) + return _is_valid_table_div(table, div); return true; } @@ -191,74 +207,80 @@ static int _round_down_table(const struct clk_div_table *table, int div) return down; } -static int _div_round_up(struct clk_divider *divider, - unsigned long parent_rate, unsigned long rate) +static int _div_round_up(const struct clk_div_table *table, + unsigned long parent_rate, unsigned long rate, + unsigned long flags) { int div = DIV_ROUND_UP(parent_rate, rate); - if (divider->flags & CLK_DIVIDER_POWER_OF_TWO) + if (flags & CLK_DIVIDER_POWER_OF_TWO) div = __roundup_pow_of_two(div); - if (divider->table) - div = _round_up_table(divider->table, div); + if (table) + div = _round_up_table(table, div); return div; } -static int _div_round_closest(struct clk_divider *divider, - unsigned long parent_rate, unsigned long rate) +static int _div_round_closest(const struct clk_div_table *table, + unsigned long parent_rate, unsigned long rate, + unsigned long flags) { int up, down, div; up = down = div = DIV_ROUND_CLOSEST(parent_rate, rate); - if (divider->flags & CLK_DIVIDER_POWER_OF_TWO) { + if (flags & CLK_DIVIDER_POWER_OF_TWO) { up = __roundup_pow_of_two(div); down = __rounddown_pow_of_two(div); - } else if (divider->table) { - up = _round_up_table(divider->table, div); - down = _round_down_table(divider->table, div); + } else if (table) { + up = _round_up_table(table, div); + 
down = _round_down_table(table, div); } return (up - div) <= (div - down) ? up : down; } -static int _div_round(struct clk_divider *divider, unsigned long parent_rate, - unsigned long rate) +static int _div_round(const struct clk_div_table *table, + unsigned long parent_rate, unsigned long rate, + unsigned long flags) { - if (divider->flags & CLK_DIVIDER_ROUND_CLOSEST) - return _div_round_closest(divider, parent_rate, rate); + if (flags & CLK_DIVIDER_ROUND_CLOSEST) + return _div_round_closest(table, parent_rate, rate, flags); - return _div_round_up(divider, parent_rate, rate); + return _div_round_up(table, parent_rate, rate, flags); } -static bool _is_best_div(struct clk_divider *divider, - unsigned long rate, unsigned long now, unsigned long best) +static bool _is_best_div(unsigned long rate, unsigned long now, + unsigned long best, unsigned long flags) { - if (divider->flags & CLK_DIVIDER_ROUND_CLOSEST) + if (flags & CLK_DIVIDER_ROUND_CLOSEST) return abs(rate - now) < abs(rate - best); return now <= rate && now > best; } -static int _next_div(struct clk_divider *divider, int div) +static int _next_div(const struct clk_div_table *table, int div, + unsigned long flags) { div++; - if (divider->flags & CLK_DIVIDER_POWER_OF_TWO) + if (flags & CLK_DIVIDER_POWER_OF_TWO) return __roundup_pow_of_two(div); - if (divider->table) - return _round_up_table(divider->table, div); + if (table) + return _round_up_table(table, div); return div; } static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long rate, - unsigned long *best_parent_rate) + unsigned long *best_parent_rate, + const struct clk_div_table *table, u8 width, + unsigned long flags) { - struct clk_divider *divider = to_clk_divider(hw); int i, bestdiv = 0; unsigned long parent_rate, best = 0, now, maxdiv; unsigned long parent_rate_saved = *best_parent_rate; + struct clk_divider *divider = to_clk_divider(hw); if (!rate) rate = 1; @@ -266,16 +288,16 @@ static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long 
rate, /* if read only, just return current value */ if (divider->flags & CLK_DIVIDER_READ_ONLY) { bestdiv = readl(divider->reg) >> divider->shift; - bestdiv &= div_mask(divider); - bestdiv = _get_div(divider, bestdiv); + bestdiv &= div_mask(divider->width); + bestdiv = _get_div(table, bestdiv, flags); return bestdiv; } - maxdiv = _get_maxdiv(divider); + maxdiv = _get_maxdiv(table, width, flags); if (!(__clk_get_flags(hw->clk) & CLK_SET_RATE_PARENT)) { parent_rate = *best_parent_rate; - bestdiv = _div_round(divider, parent_rate, rate); + bestdiv = _div_round(table, parent_rate, rate, flags); bestdiv = bestdiv == 0 ? 1 : bestdiv; bestdiv = bestdiv > maxdiv ? maxdiv : bestdiv; return bestdiv; @@ -287,8 +309,8 @@ static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long rate, */ maxdiv = min(ULONG_MAX / rate, maxdiv); - for (i = 1; i <= maxdiv; i = _next_div(divider, i)) { - if (!_is_valid_div(divider, i)) + for (i = 1; i <= maxdiv; i = _next_div(table, i, flags)) { + if (!_is_valid_div(table, i, flags)) continue; if (rate * i == parent_rate_saved) { /* @@ -302,7 +324,7 @@ static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long rate, parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), MULT_ROUND_UP(rate, i)); now = DIV_ROUND_UP(parent_rate, i); - if (_is_best_div(divider, rate, now, best)) { + if (_is_best_div(rate, now, best, flags)) { bestdiv = i; best = now; *best_parent_rate = parent_rate; @@ -310,48 +332,70 @@ static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long rate, } if (!bestdiv) { - bestdiv = _get_maxdiv(divider); + bestdiv = _get_maxdiv(table, width, flags); *best_parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), 1); } return bestdiv; } -static long clk_divider_round_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *prate) +long divider_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *prate, const struct clk_div_table *table, + u8 width, unsigned long flags) { int div; - div = 
clk_divider_bestdiv(hw, rate, prate); + + div = clk_divider_bestdiv(hw, rate, prate, table, width, flags); return DIV_ROUND_UP(*prate, div); } +EXPORT_SYMBOL_GPL(divider_round_rate); -static int clk_divider_set_rate(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate) +static long clk_divider_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *prate) { struct clk_divider *divider = to_clk_divider(hw); + + return divider_round_rate(hw, rate, prate, divider->table, + divider->width, divider->flags); +} + +int divider_get_val(unsigned long rate, unsigned long parent_rate, + const struct clk_div_table *table, u8 width, + unsigned long flags) +{ unsigned int div, value; - unsigned long flags = 0; - u32 val; div = DIV_ROUND_UP(parent_rate, rate); - if (!_is_valid_div(divider, div)) + if (!_is_valid_div(table, div, flags)) return -EINVAL; - value = _get_val(divider, div); + value = _get_val(table, div, flags); + + return min_t(unsigned int, value, div_mask(width)); +} +EXPORT_SYMBOL_GPL(divider_get_val); + +static int clk_divider_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct clk_divider *divider = to_clk_divider(hw); + unsigned int value; + unsigned long flags = 0; + u32 val; - if (value > div_mask(divider)) - value = div_mask(divider); + value = divider_get_val(rate, parent_rate, divider->table, + divider->width, divider->flags); if (divider->lock) spin_lock_irqsave(divider->lock, flags); if (divider->flags & CLK_DIVIDER_HIWORD_MASK) { - val = div_mask(divider) << (divider->shift + 16); + val = div_mask(divider->width) << (divider->shift + 16); } else { val = clk_readl(divider->reg); - val &= ~(div_mask(divider) << divider->shift); + val &= ~(div_mask(divider->width) << divider->shift); } val |= value << divider->shift; clk_writel(val, divider->reg); diff --git a/drivers/clk/clk-mux.c b/drivers/clk/clk-mux.c index 4f96ff3ba728..54217961ae8a 100644 --- a/drivers/clk/clk-mux.c +++ 
b/drivers/clk/clk-mux.c @@ -29,35 +29,24 @@ #define to_clk_mux(_hw) container_of(_hw, struct clk_mux, hw) -static u8 clk_mux_get_parent(struct clk_hw *hw) +unsigned int clk_mux_get_parent(struct clk_hw *hw, unsigned int val, + unsigned int *table, unsigned long flags) { - struct clk_mux *mux = to_clk_mux(hw); int num_parents = __clk_get_num_parents(hw->clk); - u32 val; - - /* - * FIXME need a mux-specific flag to determine if val is bitwise or numeric - * e.g. sys_clkin_ck's clksel field is 3 bits wide, but ranges from 0x1 - * to 0x7 (index starts at one) - * OTOH, pmd_trace_clk_mux_ck uses a separate bit for each clock, so - * val = 0x4 really means "bit 2, index starts at bit 0" - */ - val = clk_readl(mux->reg) >> mux->shift; - val &= mux->mask; - if (mux->table) { + if (table) { int i; for (i = 0; i < num_parents; i++) - if (mux->table[i] == val) + if (table[i] == val) return i; return -EINVAL; } - if (val && (mux->flags & CLK_MUX_INDEX_BIT)) + if (val && (flags & CLK_MUX_INDEX_BIT)) val = ffs(val) - 1; - if (val && (mux->flags & CLK_MUX_INDEX_ONE)) + if (val && (flags & CLK_MUX_INDEX_ONE)) val--; if (val >= num_parents) @@ -65,24 +54,53 @@ static u8 clk_mux_get_parent(struct clk_hw *hw) return val; } +EXPORT_SYMBOL_GPL(clk_mux_get_parent); -static int clk_mux_set_parent(struct clk_hw *hw, u8 index) +static u8 _clk_mux_get_parent(struct clk_hw *hw) { struct clk_mux *mux = to_clk_mux(hw); u32 val; - unsigned long flags = 0; - if (mux->table) - index = mux->table[index]; + /* + * FIXME need a mux-specific flag to determine if val is bitwise or numeric + * e.g. 
sys_clkin_ck's clksel field is 3 bits wide, but ranges from 0x1 + * to 0x7 (index starts at one) + * OTOH, pmd_trace_clk_mux_ck uses a separate bit for each clock, so + * val = 0x4 really means "bit 2, index starts at bit 0" + */ + val = clk_readl(mux->reg) >> mux->shift; + val &= mux->mask; + + return clk_mux_get_parent(hw, val, mux->table, mux->flags); +} + +unsigned int clk_mux_reindex(u8 index, unsigned int *table, + unsigned long flags) +{ + unsigned int val = index; - else { - if (mux->flags & CLK_MUX_INDEX_BIT) - index = (1 << ffs(index)); + if (table) { + val = table[val]; + } else { + if (flags & CLK_MUX_INDEX_BIT) + val = (1 << ffs(val)); - if (mux->flags & CLK_MUX_INDEX_ONE) - index++; + if (flags & CLK_MUX_INDEX_ONE) + val++; } + return val; +} +EXPORT_SYMBOL_GPL(clk_mux_reindex); + +static int clk_mux_set_parent(struct clk_hw *hw, u8 index) +{ + struct clk_mux *mux = to_clk_mux(hw); + u32 val; + unsigned long flags = 0; + + index = clk_mux_reindex(index, mux->table, mux->flags); + if (mux->lock) spin_lock_irqsave(mux->lock, flags); @@ -102,21 +120,21 @@ static int clk_mux_set_parent(struct clk_hw *hw, u8 index) } const struct clk_ops clk_mux_ops = { - .get_parent = clk_mux_get_parent, + .get_parent = _clk_mux_get_parent, .set_parent = clk_mux_set_parent, .determine_rate = __clk_mux_determine_rate, }; EXPORT_SYMBOL_GPL(clk_mux_ops); const struct clk_ops clk_mux_ro_ops = { - .get_parent = clk_mux_get_parent, + .get_parent = _clk_mux_get_parent, }; EXPORT_SYMBOL_GPL(clk_mux_ro_ops); struct clk *clk_register_mux_table(struct device *dev, const char *name, const char **parent_names, u8 num_parents, unsigned long flags, void __iomem *reg, u8 shift, u32 mask, - u8 clk_mux_flags, u32 *table, spinlock_t *lock) + u8 clk_mux_flags, unsigned int *table, spinlock_t *lock) { struct clk_mux *mux; struct clk *clk; @@ -177,3 +195,18 @@ struct clk *clk_register_mux(struct device *dev, const char *name, NULL, lock); } EXPORT_SYMBOL_GPL(clk_register_mux); + +void 
clk_unregister_mux(struct clk *clk) +{ + struct clk_hw *hw; + struct clk_mux *mux; + + hw = __clk_get_hw(clk); + if (!hw) + return; + + mux = to_clk_mux(hw); + clk_unregister(clk); + kfree(mux); +} +EXPORT_SYMBOL_GPL(clk_unregister_mux); diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c index 6cee26a1465a..3ac372fa8d73 100644 --- a/drivers/clk/clk.c +++ b/drivers/clk/clk.c @@ -558,13 +558,9 @@ unlock: static void clk_debug_unregister(struct clk *clk) { mutex_lock(&clk_debug_lock); - if (!clk->dentry) - goto out; - hlist_del_init(&clk->debug_node); debugfs_remove_recursive(clk->dentry); clk->dentry = NULL; -out: mutex_unlock(&clk_debug_lock); } @@ -921,14 +917,20 @@ struct clk *__clk_lookup(const char *name) return NULL; } -/* - * Helper for finding best parent to provide a given frequency. This can be used - * directly as a determine_rate callback (e.g. for a mux), or from a more - * complex clock that may combine a mux with other operations. - */ -long __clk_mux_determine_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *best_parent_rate, - struct clk **best_parent_p) +static bool mux_is_better_rate(unsigned long rate, unsigned long now, + unsigned long best, unsigned long flags) +{ + if (flags & CLK_MUX_ROUND_CLOSEST) + return abs(now - rate) < abs(best - rate); + + return now <= rate && now > best; +} + +static long +clk_mux_determine_rate_flags(struct clk_hw *hw, unsigned long rate, + unsigned long *best_parent_rate, + struct clk **best_parent_p, + unsigned long flags) { struct clk *clk = hw->clk, *parent, *best_parent = NULL; int i, num_parents; @@ -956,7 +958,7 @@ long __clk_mux_determine_rate(struct clk_hw *hw, unsigned long rate, parent_rate = __clk_round_rate(parent, rate); else parent_rate = __clk_get_rate(parent); - if (parent_rate <= rate && parent_rate > best) { + if (mux_is_better_rate(rate, parent_rate, best, flags)) { best_parent = parent; best = parent_rate; } @@ -969,8 +971,31 @@ out: return best; } + +/* + * Helper for finding best 
parent to provide a given frequency. This can be used + * directly as a determine_rate callback (e.g. for a mux), or from a more + * complex clock that may combine a mux with other operations. + */ +long __clk_mux_determine_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *best_parent_rate, + struct clk **best_parent_p) +{ + return clk_mux_determine_rate_flags(hw, rate, best_parent_rate, + best_parent_p, 0); +} EXPORT_SYMBOL_GPL(__clk_mux_determine_rate); +long __clk_mux_determine_rate_closest(struct clk_hw *hw, unsigned long rate, + unsigned long *best_parent_rate, + struct clk **best_parent_p) +{ + return clk_mux_determine_rate_flags(hw, rate, best_parent_rate, + best_parent_p, + CLK_MUX_ROUND_CLOSEST); +} +EXPORT_SYMBOL_GPL(__clk_mux_determine_rate_closest); + /*** clk api ***/ void __clk_unprepare(struct clk *clk) @@ -1569,6 +1594,7 @@ static void clk_calc_subtree(struct clk *clk, unsigned long new_rate, struct clk *new_parent, u8 p_index) { struct clk *child; + struct clk *parent; clk->new_rate = new_rate; clk->new_parent = new_parent; @@ -1578,6 +1604,17 @@ static void clk_calc_subtree(struct clk *clk, unsigned long new_rate, if (new_parent && new_parent != clk->parent) new_parent->new_child = clk; + if (clk->ops->get_safe_parent) { + parent = clk->ops->get_safe_parent(clk->hw); + if (parent) { + p_index = clk_fetch_parent_index(clk, parent); + clk->safe_parent_index = p_index; + clk->safe_parent = parent; + } + } else { + clk->safe_parent = NULL; + } + hlist_for_each_entry(child, &clk->children, child_node) { child->new_rate = clk_recalc(child, new_rate); clk_calc_subtree(child, child->new_rate, NULL, 0); @@ -1660,14 +1697,42 @@ out: static struct clk *clk_propagate_rate_change(struct clk *clk, unsigned long event) { struct clk *child, *tmp_clk, *fail_clk = NULL; + struct clk *old_parent; int ret = NOTIFY_DONE; - if (clk->rate == clk->new_rate) + if (clk->rate == clk->new_rate && event != POST_RATE_CHANGE) return NULL; + switch (event) { + case 
PRE_RATE_CHANGE: + if (clk->safe_parent) + clk->ops->set_parent(clk->hw, clk->safe_parent_index); + break; + case POST_RATE_CHANGE: + if (clk->safe_parent) { + old_parent = __clk_set_parent_before(clk, + clk->new_parent); + if (clk->ops->set_rate_and_parent) { + clk->ops->set_rate_and_parent(clk->hw, + clk->new_rate, + clk->new_parent ? + clk->new_parent->rate : 0, + clk->new_parent_index); + } else if (clk->ops->set_parent) { + clk->ops->set_parent(clk->hw, + clk->new_parent_index); + } + __clk_set_parent_after(clk, clk->new_parent, + old_parent); + } + break; + } + if (clk->notifier_count) { - ret = __clk_notify(clk, event, clk->rate, clk->new_rate); - if (ret & NOTIFY_STOP_MASK) + if (event != POST_RATE_CHANGE) + ret = __clk_notify(clk, event, clk->rate, + clk->new_rate); + if (ret & NOTIFY_STOP_MASK && event != POST_RATE_CHANGE) fail_clk = clk; } @@ -1694,23 +1759,26 @@ static struct clk *clk_propagate_rate_change(struct clk *clk, unsigned long even * walk down a subtree and set the new rates notifying the rate * change on the way */ -static void clk_change_rate(struct clk *clk) +static void clk_change_rate(struct clk *clk, unsigned long best_parent_rate) { struct clk *child; struct hlist_node *tmp; unsigned long old_rate; - unsigned long best_parent_rate = 0; bool skip_set_rate = false; struct clk *old_parent; - old_rate = clk->rate; + hlist_for_each_entry(child, &clk->children, child_node) { + /* Skip children who will be reparented to another clock */ + if (child->new_parent && child->new_parent != clk) + continue; + if (child->new_rate > child->rate) + clk_change_rate(child, clk->new_rate); + } - if (clk->new_parent) - best_parent_rate = clk->new_parent->rate; - else if (clk->parent) - best_parent_rate = clk->parent->rate; + old_rate = clk->rate; - if (clk->new_parent && clk->new_parent != clk->parent) { + if (clk->new_parent && clk->new_parent != clk->parent && + !clk->safe_parent) { old_parent = __clk_set_parent_before(clk, clk->new_parent); if 
(clk->ops->set_rate_and_parent) { @@ -1767,12 +1835,13 @@ static void clk_change_rate(struct clk *clk) /* Skip children who will be reparented to another clock */ if (child->new_parent && child->new_parent != clk) continue; - clk_change_rate(child); + if (child->new_rate != child->rate) + clk_change_rate(child, clk->new_rate); } /* handle the new child who might not be in clk->children yet */ - if (clk->new_child) - clk_change_rate(clk->new_child); + if (clk->new_child && clk->new_child->new_rate != clk->new_child->rate) + clk_change_rate(clk->new_child, clk->new_rate); } /** @@ -1800,6 +1869,7 @@ int clk_set_rate(struct clk *clk, unsigned long rate) { struct clk *top, *fail_clk; int ret = 0; + unsigned long parent_rate; if (!clk) return 0; @@ -1833,9 +1903,15 @@ int clk_set_rate(struct clk *clk, unsigned long rate) goto out; } + if (top->parent) + parent_rate = top->parent->rate; + else + parent_rate = 0; + /* change the rates */ - clk_change_rate(top); + clk_change_rate(top, parent_rate); + clk_propagate_rate_change(top, POST_RATE_CHANGE); out: clk_prepare_unlock(); diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig index 1107351ed346..f3b8993ff27a 100644 --- a/drivers/clk/qcom/Kconfig +++ b/drivers/clk/qcom/Kconfig @@ -70,3 +70,39 @@ config MSM_MMCC_8974 Support for the multimedia clock controller on msm8974 devices. Say Y if you want to support multimedia devices such as display, graphics, video encode/decode, camera, etc. + +config QCOM_HFPLL + tristate "High-Frequency PLL (HFPLL) Clock Controller" + depends on COMMON_CLK_QCOM + help + Support for the high-frequency PLLs present on Qualcomm devices. + Say Y if you want to support CPU frequency scaling on devices + such as MSM8974, APQ8084, etc. + +config KPSS_XCC + tristate "KPSS Clock Controller" + depends on COMMON_CLK_QCOM + help + Support for the Krait ACC and GCC clock controllers. Say Y + if you want to support CPU frequency scaling on devices such + as MSM8960, APQ8064, etc. 
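The `mux_is_better_rate()` helper introduced in the clk.c hunk above encodes two parent-selection policies: by default a candidate parent rate must not exceed the requested rate and must beat the best seen so far, while `CLK_MUX_ROUND_CLOSEST` instead minimizes the absolute distance to the request. A standalone sketch of that selection loop follows; the flag value, rates, and `pick_parent_rate()` wrapper are illustrative, not taken from the kernel:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define CLK_MUX_ROUND_CLOSEST (1UL << 0)	/* illustrative flag value */

/* Mirrors the comparison done by the kernel's mux_is_better_rate();
 * the signed casts make the distance computation explicit. */
static bool mux_is_better_rate(unsigned long rate, unsigned long now,
			       unsigned long best, unsigned long flags)
{
	if (flags & CLK_MUX_ROUND_CLOSEST)
		return labs((long)(now - rate)) < labs((long)(best - rate));

	return now <= rate && now > best;
}

/* Walk candidate parent rates and return the best match for 'rate',
 * standing in for the loop in clk_mux_determine_rate_flags(). */
static unsigned long pick_parent_rate(const unsigned long *parents, int n,
				      unsigned long rate, unsigned long flags)
{
	unsigned long best = 0;

	for (int i = 0; i < n; i++)
		if (mux_is_better_rate(rate, parents[i], best, flags))
			best = parents[i];

	return best;
}
```

With candidate parents of 19.2 MHz, 300 MHz, and 500 MHz and a 450 MHz request, the default policy settles on 300 MHz (largest rate not above the request), while `CLK_MUX_ROUND_CLOSEST` picks 500 MHz (smallest absolute error).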
+ +config KRAITCC + tristate "Krait Clock Controller" + depends on COMMON_CLK_QCOM && ARM + select KRAIT_CLOCKS + help + Support for the Krait CPU clocks on Qualcomm devices. + Say Y if you want to support CPU frequency scaling. + +config KRAIT_CLOCKS + bool + select KRAIT_L2_ACCESSORS + +config QCOM_RPMCC + tristate "Qualcomm RPM Clock Controller" + depends on COMMON_CLK_QCOM + help + Support for the RPM clock controller which controls fabric clocks + on SoCs like APQ8064. + Say Y if you want to use fabric clocks like AFAB, DAYTONA etc. diff --git a/drivers/clk/qcom/Makefile b/drivers/clk/qcom/Makefile index 783cfb24faa4..6cba64efb730 100644 --- a/drivers/clk/qcom/Makefile +++ b/drivers/clk/qcom/Makefile @@ -6,6 +6,8 @@ clk-qcom-y += clk-pll.o clk-qcom-y += clk-rcg.o clk-qcom-y += clk-rcg2.o clk-qcom-y += clk-branch.o +clk-qcom-$(CONFIG_KRAIT_CLOCKS) += clk-krait.o +clk-qcom-y += clk-hfpll.o clk-qcom-y += reset.o obj-$(CONFIG_APQ_GCC_8084) += gcc-apq8084.o @@ -16,3 +18,7 @@ obj-$(CONFIG_MSM_GCC_8960) += gcc-msm8960.o obj-$(CONFIG_MSM_GCC_8974) += gcc-msm8974.o obj-$(CONFIG_MSM_MMCC_8960) += mmcc-msm8960.o obj-$(CONFIG_MSM_MMCC_8974) += mmcc-msm8974.o +obj-$(CONFIG_KPSS_XCC) += kpss-xcc.o +obj-$(CONFIG_QCOM_HFPLL) += hfpll.o +obj-$(CONFIG_KRAITCC) += krait-cc.o +obj-$(CONFIG_QCOM_RPMCC) += clk-rpm.o diff --git a/drivers/clk/qcom/clk-hfpll.c b/drivers/clk/qcom/clk-hfpll.c new file mode 100644 index 000000000000..367eb95f1477 --- /dev/null +++ b/drivers/clk/qcom/clk-hfpll.c @@ -0,0 +1,253 @@ +/* + * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU General Public License for more details. + */ +#include <linux/kernel.h> +#include <linux/export.h> +#include <linux/regmap.h> +#include <linux/delay.h> +#include <linux/err.h> +#include <linux/clk-provider.h> +#include <linux/spinlock.h> + +#include "clk-regmap.h" +#include "clk-hfpll.h" + +#define PLL_OUTCTRL BIT(0) +#define PLL_BYPASSNL BIT(1) +#define PLL_RESET_N BIT(2) + +/* Initialize a HFPLL at a given rate and enable it. */ +static void __clk_hfpll_init_once(struct clk_hw *hw) +{ + struct clk_hfpll *h = to_clk_hfpll(hw); + struct hfpll_data const *hd = h->d; + struct regmap *regmap = h->clkr.regmap; + + if (likely(h->init_done)) + return; + + /* Configure PLL parameters for integer mode. */ + if (hd->config_val) + regmap_write(regmap, hd->config_reg, hd->config_val); + regmap_write(regmap, hd->m_reg, 0); + regmap_write(regmap, hd->n_reg, 1); + + if (hd->user_reg) { + u32 regval = hd->user_val; + unsigned long rate; + + rate = __clk_get_rate(hw->clk); + + /* Pick the right VCO. */ + if (hd->user_vco_mask && rate > hd->low_vco_max_rate) + regval |= hd->user_vco_mask; + regmap_write(regmap, hd->user_reg, regval); + } + + if (hd->droop_reg) + regmap_write(regmap, hd->droop_reg, hd->droop_val); + + h->init_done = true; +} + +static void __clk_hfpll_enable(struct clk_hw *hw) +{ + struct clk_hfpll *h = to_clk_hfpll(hw); + struct hfpll_data const *hd = h->d; + struct regmap *regmap = h->clkr.regmap; + u32 val; + + __clk_hfpll_init_once(hw); + + /* Disable PLL bypass mode. */ + regmap_update_bits(regmap, hd->mode_reg, PLL_BYPASSNL, PLL_BYPASSNL); + + /* + * H/W requires a 5us delay between disabling the bypass and + * de-asserting the reset. Delay 10us just to be safe. + */ + udelay(10); + + /* De-assert active-low PLL reset. */ + regmap_update_bits(regmap, hd->mode_reg, PLL_RESET_N, PLL_RESET_N); + + /* Wait for PLL to lock. 
*/ + if (hd->status_reg) { + do { + regmap_read(regmap, hd->status_reg, &val); + } while (!(val & BIT(hd->lock_bit))); + } else { + udelay(60); + } + + /* Enable PLL output. */ + regmap_update_bits(regmap, hd->mode_reg, PLL_OUTCTRL, PLL_OUTCTRL); +} + +/* Enable an already-configured HFPLL. */ +static int clk_hfpll_enable(struct clk_hw *hw) +{ + unsigned long flags; + struct clk_hfpll *h = to_clk_hfpll(hw); + struct hfpll_data const *hd = h->d; + struct regmap *regmap = h->clkr.regmap; + u32 mode; + + spin_lock_irqsave(&h->lock, flags); + regmap_read(regmap, hd->mode_reg, &mode); + if (!(mode & (PLL_BYPASSNL | PLL_RESET_N | PLL_OUTCTRL))) + __clk_hfpll_enable(hw); + spin_unlock_irqrestore(&h->lock, flags); + + return 0; +} + +static void __clk_hfpll_disable(struct clk_hfpll *h) +{ + struct hfpll_data const *hd = h->d; + struct regmap *regmap = h->clkr.regmap; + + /* + * Disable the PLL output, disable test mode, enable the bypass mode, + * and assert the reset. + */ + regmap_update_bits(regmap, hd->mode_reg, + PLL_BYPASSNL | PLL_RESET_N | PLL_OUTCTRL, 0); +} + +static void clk_hfpll_disable(struct clk_hw *hw) +{ + struct clk_hfpll *h = to_clk_hfpll(hw); + unsigned long flags; + + spin_lock_irqsave(&h->lock, flags); + __clk_hfpll_disable(h); + spin_unlock_irqrestore(&h->lock, flags); +} + +static long clk_hfpll_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *parent_rate) +{ + struct clk_hfpll *h = to_clk_hfpll(hw); + struct hfpll_data const *hd = h->d; + unsigned long rrate; + + rate = clamp(rate, hd->min_rate, hd->max_rate); + + rrate = DIV_ROUND_UP(rate, *parent_rate) * *parent_rate; + if (rrate > hd->max_rate) + rrate -= *parent_rate; + + return rrate; +} + +/* + * For optimization reasons, assumes no downstream clocks are actively using + * it. 
+ */ +static int clk_hfpll_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct clk_hfpll *h = to_clk_hfpll(hw); + struct hfpll_data const *hd = h->d; + struct regmap *regmap = h->clkr.regmap; + unsigned long flags; + u32 l_val, val; + bool enabled; + + l_val = rate / parent_rate; + + spin_lock_irqsave(&h->lock, flags); + + enabled = __clk_is_enabled(hw->clk); + if (enabled) + __clk_hfpll_disable(h); + + /* Pick the right VCO. */ + if (hd->user_reg && hd->user_vco_mask) { + regmap_read(regmap, hd->user_reg, &val); + if (rate <= hd->low_vco_max_rate) + val &= ~hd->user_vco_mask; + else + val |= hd->user_vco_mask; + regmap_write(regmap, hd->user_reg, val); + } + + regmap_write(regmap, hd->l_reg, l_val); + + if (enabled) + __clk_hfpll_enable(hw); + + spin_unlock_irqrestore(&h->lock, flags); + + return 0; +} + +static unsigned long clk_hfpll_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct clk_hfpll *h = to_clk_hfpll(hw); + struct hfpll_data const *hd = h->d; + struct regmap *regmap = h->clkr.regmap; + u32 l_val; + + regmap_read(regmap, hd->l_reg, &l_val); + + return l_val * parent_rate; +} + +static void clk_hfpll_init(struct clk_hw *hw) +{ + struct clk_hfpll *h = to_clk_hfpll(hw); + struct hfpll_data const *hd = h->d; + struct regmap *regmap = h->clkr.regmap; + u32 mode, status; + + regmap_read(regmap, hd->mode_reg, &mode); + if (mode != (PLL_BYPASSNL | PLL_RESET_N | PLL_OUTCTRL)) { + __clk_hfpll_init_once(hw); + return; + } + + if (hd->status_reg) { + regmap_read(regmap, hd->status_reg, &status); + if (!(status & BIT(hd->lock_bit))) { + WARN(1, "HFPLL %s is ON, but not locked!\n", + __clk_get_name(hw->clk)); + clk_hfpll_disable(hw); + __clk_hfpll_init_once(hw); + } + } +} + +static int hfpll_is_enabled(struct clk_hw *hw) +{ + struct clk_hfpll *h = to_clk_hfpll(hw); + struct hfpll_data const *hd = h->d; + struct regmap *regmap = h->clkr.regmap; + u32 mode; + + regmap_read(regmap, hd->mode_reg, &mode); + mode 
&= 0x7; + return mode == (PLL_BYPASSNL | PLL_RESET_N | PLL_OUTCTRL); +} + +const struct clk_ops clk_ops_hfpll = { + .enable = clk_hfpll_enable, + .disable = clk_hfpll_disable, + .is_enabled = hfpll_is_enabled, + .round_rate = clk_hfpll_round_rate, + .set_rate = clk_hfpll_set_rate, + .recalc_rate = clk_hfpll_recalc_rate, + .init = clk_hfpll_init, +}; +EXPORT_SYMBOL_GPL(clk_ops_hfpll); diff --git a/drivers/clk/qcom/clk-hfpll.h b/drivers/clk/qcom/clk-hfpll.h new file mode 100644 index 000000000000..48c18d664f4e --- /dev/null +++ b/drivers/clk/qcom/clk-hfpll.h @@ -0,0 +1,54 @@ +/* + * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#ifndef __QCOM_CLK_HFPLL_H__ +#define __QCOM_CLK_HFPLL_H__ + +#include <linux/clk-provider.h> +#include <linux/spinlock.h> +#include "clk-regmap.h" + +struct hfpll_data { + u32 mode_reg; + u32 l_reg; + u32 m_reg; + u32 n_reg; + u32 user_reg; + u32 droop_reg; + u32 config_reg; + u32 status_reg; + u8 lock_bit; + + u32 droop_val; + u32 config_val; + u32 user_val; + u32 user_vco_mask; + unsigned long low_vco_max_rate; + + unsigned long min_rate; + unsigned long max_rate; +}; + +struct clk_hfpll { + struct hfpll_data const *d; + int init_done; + + struct clk_regmap clkr; + spinlock_t lock; +}; + +#define to_clk_hfpll(_hw) \ + container_of(to_clk_regmap(_hw), struct clk_hfpll, clkr) + +extern const struct clk_ops clk_ops_hfpll; + +#endif diff --git a/drivers/clk/qcom/clk-krait.c b/drivers/clk/qcom/clk-krait.c new file mode 100644 index 000000000000..615bfbee1b6a --- /dev/null +++ b/drivers/clk/qcom/clk-krait.c @@ -0,0 +1,166 @@ +/* + * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/init.h> +#include <linux/io.h> +#include <linux/delay.h> +#include <linux/err.h> +#include <linux/clk-provider.h> +#include <linux/spinlock.h> + +#include <asm/krait-l2-accessors.h> + +#include "clk-krait.h" + +/* Secondary and primary muxes share the same cp15 register */ +static DEFINE_SPINLOCK(krait_clock_reg_lock); + +#define LPL_SHIFT 8 +static void __krait_mux_set_sel(struct krait_mux_clk *mux, int sel) +{ + unsigned long flags; + u32 regval; + + spin_lock_irqsave(&krait_clock_reg_lock, flags); + regval = krait_get_l2_indirect_reg(mux->offset); + regval &= ~(mux->mask << mux->shift); + regval |= (sel & mux->mask) << mux->shift; + if (mux->lpl) { + regval &= ~(mux->mask << (mux->shift + LPL_SHIFT)); + regval |= (sel & mux->mask) << (mux->shift + LPL_SHIFT); + } + krait_set_l2_indirect_reg(mux->offset, regval); + spin_unlock_irqrestore(&krait_clock_reg_lock, flags); + + /* Wait for switch to complete. */ + mb(); + udelay(1); +} + +static int krait_mux_set_parent(struct clk_hw *hw, u8 index) +{ + struct krait_mux_clk *mux = to_krait_mux_clk(hw); + u32 sel; + + sel = clk_mux_reindex(index, mux->parent_map, 0); + mux->en_mask = sel; + /* Don't touch mux if CPU is off as it won't work */ + if (__clk_is_enabled(hw->clk)) + __krait_mux_set_sel(mux, sel); + return 0; +} + +static u8 krait_mux_get_parent(struct clk_hw *hw) +{ + struct krait_mux_clk *mux = to_krait_mux_clk(hw); + u32 sel; + + sel = krait_get_l2_indirect_reg(mux->offset); + sel >>= mux->shift; + sel &= mux->mask; + mux->en_mask = sel; + + return clk_mux_get_parent(hw, sel, mux->parent_map, 0); +} + +static struct clk *krait_mux_get_safe_parent(struct clk_hw *hw) +{ + int i; + struct krait_mux_clk *mux = to_krait_mux_clk(hw); + int num_parents = __clk_get_num_parents(hw->clk); + + i = mux->safe_sel; + for (i = 0; i < num_parents; i++) + if (mux->safe_sel == mux->parent_map[i]) + break; + + return clk_get_parent_by_index(hw->clk, 
i); +} + +static int krait_mux_enable(struct clk_hw *hw) +{ + struct krait_mux_clk *mux = to_krait_mux_clk(hw); + + __krait_mux_set_sel(mux, mux->en_mask); + + return 0; +} + +static void krait_mux_disable(struct clk_hw *hw) +{ + struct krait_mux_clk *mux = to_krait_mux_clk(hw); + + __krait_mux_set_sel(mux, mux->safe_sel); +} + +const struct clk_ops krait_mux_clk_ops = { + .enable = krait_mux_enable, + .disable = krait_mux_disable, + .set_parent = krait_mux_set_parent, + .get_parent = krait_mux_get_parent, + .determine_rate = __clk_mux_determine_rate_closest, + .get_safe_parent = krait_mux_get_safe_parent, +}; +EXPORT_SYMBOL_GPL(krait_mux_clk_ops); + +/* The divider can divide by 2, 4, 6 and 8. But we only really need div-2. */ +static long krait_div2_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *parent_rate) +{ + *parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), rate * 2); + return DIV_ROUND_UP(*parent_rate, 2); +} + +static int krait_div2_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct krait_div2_clk *d = to_krait_div2_clk(hw); + unsigned long flags; + u32 val; + u32 mask = BIT(d->width) - 1; + + if (d->lpl) + mask |= mask << (d->shift + LPL_SHIFT); + + spin_lock_irqsave(&krait_clock_reg_lock, flags); + val = krait_get_l2_indirect_reg(d->offset); + val &= ~mask; + krait_set_l2_indirect_reg(d->offset, val); + spin_unlock_irqrestore(&krait_clock_reg_lock, flags); + + return 0; +} + +static unsigned long +krait_div2_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) +{ + struct krait_div2_clk *d = to_krait_div2_clk(hw); + u32 mask = BIT(d->width) - 1; + u32 div; + + div = krait_get_l2_indirect_reg(d->offset); + div >>= d->shift; + div &= mask; + div = (div + 1) * 2; + + return DIV_ROUND_UP(parent_rate, div); +} + +const struct clk_ops krait_div2_clk_ops = { + .round_rate = krait_div2_round_rate, + .set_rate = krait_div2_set_rate, + .recalc_rate = krait_div2_recalc_rate, +}; 
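The `krait_div2_clk_ops` above only ever program divide-by-two (`krait_div2_set_rate()` clears the divider field), but `krait_div2_recalc_rate()` decodes the full hardware encoding, where a field value `f` selects a divide-by-`(f + 1) * 2`, so 0 means /2, 1 means /4, and so on. A standalone decode sketch, with illustrative shift/width values rather than real `krait_div2_clk` register layout:

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Decode a Krait divider field the way krait_div2_recalc_rate() does:
 * extract 'width' bits at 'shift' from the L2 indirect register value,
 * map field f to a divisor of (f + 1) * 2, and divide the parent rate. */
static unsigned long krait_div2_decode(unsigned long parent_rate,
				       unsigned long regval,
				       unsigned int shift, unsigned int width)
{
	unsigned long mask = (1UL << width) - 1;
	unsigned long div = (regval >> shift) & mask;

	div = (div + 1) * 2;	/* field 0 => /2, 1 => /4, ... */

	return DIV_ROUND_UP(parent_rate, div);
}
```

For example, a zeroed field yields half the parent rate, which is why clearing the field in `set_rate` is sufficient for the div-2 use case.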
+EXPORT_SYMBOL_GPL(krait_div2_clk_ops); diff --git a/drivers/clk/qcom/clk-krait.h b/drivers/clk/qcom/clk-krait.h new file mode 100644 index 000000000000..5d0063538e5d --- /dev/null +++ b/drivers/clk/qcom/clk-krait.h @@ -0,0 +1,49 @@ +/* + * Copyright (c) 2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#ifndef __QCOM_CLK_KRAIT_H +#define __QCOM_CLK_KRAIT_H + +#include <linux/clk-provider.h> + +struct krait_mux_clk { + unsigned int *parent_map; + bool has_safe_parent; + u8 safe_sel; + u32 offset; + u32 mask; + u32 shift; + u32 en_mask; + bool lpl; + + struct clk_hw hw; +}; + +#define to_krait_mux_clk(_hw) container_of(_hw, struct krait_mux_clk, hw) + +extern const struct clk_ops krait_mux_clk_ops; + +struct krait_div2_clk { + u32 offset; + u8 width; + u32 shift; + bool lpl; + + struct clk_hw hw; +}; + +#define to_krait_div2_clk(_hw) container_of(_hw, struct krait_div2_clk, hw) + +extern const struct clk_ops krait_div2_clk_ops; + +#endif diff --git a/drivers/clk/qcom/clk-rpm.c b/drivers/clk/qcom/clk-rpm.c new file mode 100644 index 000000000000..c9107e3723db --- /dev/null +++ b/drivers/clk/qcom/clk-rpm.c @@ -0,0 +1,373 @@ +#include <linux/kernel.h> +#include <linux/clk.h> +#include <linux/err.h> +#include <linux/platform_device.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/clk-provider.h> +#include <linux/mfd/qcom_rpm.h> +#include <dt-bindings/mfd/qcom-rpm.h> + +struct rpm_cc { + struct clk_onecell_data data; + struct clk *clks[]; +}; +struct 
rpm_clk { + const int rpm_clk_id; + struct qcom_rpm *rpm; + unsigned last_set_khz; + bool enabled; + bool branch; /* true: RPM only accepts 1 for ON and 0 for OFF */ + unsigned factor; + struct clk_hw hw; +}; + +#define to_rpm_clk(_hw) container_of(_hw, struct rpm_clk, hw) + +static int rpm_clk_prepare(struct clk_hw *hw) +{ + struct rpm_clk *r = to_rpm_clk(hw); + uint32_t value; + int rc = 0; + unsigned long this_khz; + + this_khz = r->last_set_khz; + /* Don't send requests to the RPM if the rate has not been set. */ + if (r->last_set_khz == 0) + goto out; + + value = this_khz; + if (r->branch) + value = !!value; + + rc = qcom_rpm_write(r->rpm, QCOM_RPM_ACTIVE_STATE, + r->rpm_clk_id, &value, 1); + if (rc) + goto out; + +out: + if (!rc) + r->enabled = true; + return rc; +} + +static void rpm_clk_unprepare(struct clk_hw *hw) +{ + struct rpm_clk *r = to_rpm_clk(hw); + + if (r->last_set_khz) { + uint32_t value = 0; + int rc; + + rc = qcom_rpm_write(r->rpm, QCOM_RPM_ACTIVE_STATE, + r->rpm_clk_id, &value, 1); + if (rc) + return; + + } + r->enabled = false; +} + +int rpm_clk_set_rate(struct clk_hw *hw, + unsigned long rate, unsigned long prate) +{ + struct rpm_clk *r = to_rpm_clk(hw); + unsigned long this_khz; + int rc = 0; + + this_khz = DIV_ROUND_UP(rate, r->factor); + + if (r->enabled) { + uint32_t value = this_khz; + + rc = qcom_rpm_write(r->rpm, QCOM_RPM_ACTIVE_STATE, + r->rpm_clk_id, &value, 1); + if (rc) + goto out; + } + + if (!rc) + r->last_set_khz = this_khz; + +out: + return rc; +} + +static long rpm_clk_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *parent_rate) +{ + return rate; +} + +static unsigned long rpm_clk_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct rpm_clk *r = to_rpm_clk(hw); + u32 val; + int rc; + + rc = qcom_rpm_read(r->rpm, r->rpm_clk_id, &val, 1); + if (rc < 0) + return 0; + + return val * r->factor; +} + +static unsigned long rpm_branch_clk_recalc_rate(struct clk_hw *hw, + unsigned long 
parent_rate) +{ + struct rpm_clk *r = to_rpm_clk(hw); + + return r->last_set_khz * r->factor; +} + +static const struct clk_ops branch_clk_ops_rpm = { + .prepare = rpm_clk_prepare, + .unprepare = rpm_clk_unprepare, + .recalc_rate = rpm_branch_clk_recalc_rate, + .round_rate = rpm_clk_round_rate, +}; + +static const struct clk_ops clk_ops_rpm = { + .prepare = rpm_clk_prepare, + .unprepare = rpm_clk_unprepare, + .set_rate = rpm_clk_set_rate, + .recalc_rate = rpm_clk_recalc_rate, + .round_rate = rpm_clk_round_rate, +}; + +static struct rpm_clk pxo_clk = { + .rpm_clk_id = QCOM_RPM_PXO_CLK, + .branch = true, + .factor = 1000, + .last_set_khz = 27000, + .hw.init = &(struct clk_init_data){ + .name = "pxo", + .ops = &branch_clk_ops_rpm, + }, +}; + +static struct rpm_clk cxo_clk = { + .rpm_clk_id = QCOM_RPM_CXO_CLK, + .branch = true, + .factor = 1000, + .last_set_khz = 19200, + .hw.init = &(struct clk_init_data){ + .name = "cxo", + .ops = &branch_clk_ops_rpm, + }, +}; + +static struct rpm_clk afab_clk = { + .rpm_clk_id = QCOM_RPM_APPS_FABRIC_CLK, + .factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "afab_clk", + .ops = &clk_ops_rpm, + }, +}; + +static struct rpm_clk cfpb_clk = { + .rpm_clk_id = QCOM_RPM_CFPB_CLK, + .factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "cfpb_clk", + .ops = &clk_ops_rpm, + }, +}; + +static struct rpm_clk daytona_clk = { + .rpm_clk_id = QCOM_RPM_DAYTONA_FABRIC_CLK, + .factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "daytona_clk", + .ops = &clk_ops_rpm, + }, +}; + +static struct rpm_clk ebi1_clk = { + .rpm_clk_id = QCOM_RPM_EBI1_CLK, + .factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "ebi1_clk", + .ops = &clk_ops_rpm, + }, +}; + +static struct rpm_clk mmfab_clk = { + .rpm_clk_id = QCOM_RPM_MM_FABRIC_CLK, + .factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "mmfab_clk", + .ops = &clk_ops_rpm, + }, +}; + +static struct rpm_clk mmfpb_clk = { + .rpm_clk_id = QCOM_RPM_MMFPB_CLK, + 
.factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "mmfpb_clk", + .ops = &clk_ops_rpm, + }, +}; + +static struct rpm_clk sfab_clk = { + .rpm_clk_id = QCOM_RPM_SYS_FABRIC_CLK, + .factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "sfab_clk", + .ops = &clk_ops_rpm, + }, +}; + +static struct rpm_clk sfpb_clk = { + .rpm_clk_id = QCOM_RPM_SFPB_CLK, + .factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "sfpb_clk", + .ops = &clk_ops_rpm, + }, +}; + +/* +static struct rpm_clk qdss_clk = { + .rpm_clk_id = QCOM_RPM_QDSS_CLK, + .factor = 1000, + .hw.init = &(struct clk_init_data){ + .name = "qdss_clk", + .ops = &clk_ops_rpm, + }, +}; +*/ + +struct rpm_clk *rpm_clks[] = { + [QCOM_RPM_PXO_CLK] = &pxo_clk, + [QCOM_RPM_CXO_CLK] = &cxo_clk, + [QCOM_RPM_APPS_FABRIC_CLK] = &afab_clk, + [QCOM_RPM_CFPB_CLK] = &cfpb_clk, + [QCOM_RPM_DAYTONA_FABRIC_CLK] = &daytona_clk, + [QCOM_RPM_EBI1_CLK] = &ebi1_clk, + [QCOM_RPM_MM_FABRIC_CLK] = &mmfab_clk, + [QCOM_RPM_MMFPB_CLK] = &mmfpb_clk, + [QCOM_RPM_SYS_FABRIC_CLK] = &sfab_clk, + [QCOM_RPM_SFPB_CLK] = &sfpb_clk, +/** [QCOM_RPM_QDSS_CLK] = &qdss_clk, Needs more checking here **/ +}; + +static int rpm_clk_probe(struct platform_device *pdev) +{ + struct clk **clks; + struct clk *clk; + struct rpm_cc *cc; + struct qcom_rpm *rpm; + int num_clks = ARRAY_SIZE(rpm_clks); + struct clk_onecell_data *data; + int i; + + if (!pdev->dev.of_node) + return -ENODEV; + + cc = devm_kzalloc(&pdev->dev, sizeof(*cc) + sizeof(*clks) * num_clks, + GFP_KERNEL); + if (!cc) + return -ENOMEM; + + clks = cc->clks; + data = &cc->data; + data->clks = clks; + data->clk_num = num_clks; + + rpm = dev_get_drvdata(pdev->dev.parent); + if (!rpm) { + dev_err(&pdev->dev, "unable to retrieve handle to rpm\n"); + return -ENODEV; + } + + for (i = 0; i < num_clks; i++) { + if (!rpm_clks[i]) { + clks[i] = ERR_PTR(-ENOENT); + continue; + } + rpm_clks[i]->rpm = rpm; + clk = devm_clk_register(&pdev->dev, &rpm_clks[i]->hw); + if (IS_ERR(clk)) + return 
PTR_ERR(clk); + + clks[i] = clk; + } + + return of_clk_add_provider(pdev->dev.of_node, of_clk_src_onecell_get, + data); +} + +static int rpm_clk_remove(struct platform_device *pdev) +{ + of_clk_del_provider(pdev->dev.of_node); + return 0; +} + +static const struct of_device_id rpm_clk_of_match[] = { + { .compatible = "qcom,apq8064-rpm-clk" }, + { }, +}; + +static struct platform_driver rpm_clk_driver = { + .driver = { + .name = "qcom-rpm-clk", + .of_match_table = rpm_clk_of_match, + }, + .probe = rpm_clk_probe, + .remove = rpm_clk_remove, +}; + +/* dummy handoff clk handling... */ +static int rpm_clk_ho_probe(struct platform_device *pdev) +{ + int i, max_clks; + struct clk *clk; + max_clks = of_count_phandle_with_args(pdev->dev.of_node, + "clocks", "#clock-cells"); + + for (i = 0; i < max_clks; i++) { + clk = of_clk_get(pdev->dev.of_node, i); + + if (IS_ERR(clk)) + break; + + clk_prepare_enable(clk); + } + return 0; +} + +static const struct of_device_id rpm_clk_ho_of_match[] = { + { .compatible = "qcom,apq8064-rpmcc-handoff" }, + { }, +}; + +static struct platform_driver rpm_clk_ho_driver = { + .driver = { + .name = "qcom-rpm-clk-handoff", + .of_match_table = rpm_clk_ho_of_match, + }, + .probe = rpm_clk_ho_probe, +}; + +static int __init rpm_clk_init(void) +{ + platform_driver_register(&rpm_clk_driver); + return platform_driver_register(&rpm_clk_ho_driver); +} +subsys_initcall(rpm_clk_init); + +static void __exit rpm_clk_exit(void) +{ + platform_driver_unregister(&rpm_clk_ho_driver); + platform_driver_unregister(&rpm_clk_driver); +} +module_exit(rpm_clk_exit) + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Srinivas Kandagatla <srinivas.kandagatla@linaro.org>"); +MODULE_DESCRIPTION("Driver for the RPM clocks"); diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c index 5cd62a709ac7..1bdf2a1e609b 100644 --- a/drivers/clk/qcom/gcc-ipq806x.c +++ b/drivers/clk/qcom/gcc-ipq806x.c @@ -30,6 +30,7 @@ #include "clk-pll.h" #include "clk-rcg.h" #include 
"clk-branch.h" +#include "clk-hfpll.h" #include "reset.h" static struct clk_pll pll0 = { @@ -102,6 +103,85 @@ static struct clk_regmap pll8_vote = { }, }; +static struct hfpll_data hfpll0_data = { + .mode_reg = 0x3200, + .l_reg = 0x3208, + .m_reg = 0x320c, + .n_reg = 0x3210, + .config_reg = 0x3204, + .status_reg = 0x321c, + .config_val = 0x7845c665, + .droop_reg = 0x3214, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct clk_hfpll hfpll0 = { + .d = &hfpll0_data, + .clkr.hw.init = &(struct clk_init_data){ + .parent_names = (const char *[]){ "pxo" }, + .num_parents = 1, + .name = "hfpll0", + .ops = &clk_ops_hfpll, + .flags = CLK_IGNORE_UNUSED, + }, + .lock = __SPIN_LOCK_UNLOCKED(hfpll0.lock), +}; + +static struct hfpll_data hfpll1_data = { + .mode_reg = 0x3240, + .l_reg = 0x3248, + .m_reg = 0x324c, + .n_reg = 0x3250, + .config_reg = 0x3244, + .status_reg = 0x325c, + .config_val = 0x7845c665, + .droop_reg = 0x3314, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct clk_hfpll hfpll1 = { + .d = &hfpll1_data, + .clkr.hw.init = &(struct clk_init_data){ + .parent_names = (const char *[]){ "pxo" }, + .num_parents = 1, + .name = "hfpll1", + .ops = &clk_ops_hfpll, + .flags = CLK_IGNORE_UNUSED, + }, + .lock = __SPIN_LOCK_UNLOCKED(hfpll1.lock), +}; + +static struct hfpll_data hfpll_l2_data = { + .mode_reg = 0x3300, + .l_reg = 0x3308, + .m_reg = 0x330c, + .n_reg = 0x3310, + .config_reg = 0x3304, + .status_reg = 0x331c, + .config_val = 0x7845c665, + .droop_reg = 0x3314, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct clk_hfpll hfpll_l2 = { + .d = &hfpll_l2_data, + .clkr.hw.init = &(struct clk_init_data){ + .parent_names = (const char *[]){ "pxo" }, + .num_parents = 1, + .name = "hfpll_l2", + .ops = &clk_ops_hfpll, + .flags = CLK_IGNORE_UNUSED, + }, + .lock = __SPIN_LOCK_UNLOCKED(hfpll_l2.lock), +}; + + static struct 
clk_pll pll14 = { .l_reg = 0x31c4, .m_reg = 0x31c8, @@ -2261,6 +2341,9 @@ static struct clk_regmap *gcc_ipq806x_clks[] = { [USB_FS1_XCVR_SRC] = &usb_fs1_xcvr_clk_src.clkr, [USB_FS1_XCVR_CLK] = &usb_fs1_xcvr_clk.clkr, [USB_FS1_SYSTEM_CLK] = &usb_fs1_sys_clk.clkr, + [PLL9] = &hfpll0.clkr, + [PLL10] = &hfpll1.clkr, + [PLL12] = &hfpll_l2.clkr, }; static const struct qcom_reset_map gcc_ipq806x_resets[] = { diff --git a/drivers/clk/qcom/gcc-msm8960.c b/drivers/clk/qcom/gcc-msm8960.c index 007534f7a2d7..8307db394463 100644 --- a/drivers/clk/qcom/gcc-msm8960.c +++ b/drivers/clk/qcom/gcc-msm8960.c @@ -15,6 +15,7 @@ #include <linux/bitops.h> #include <linux/err.h> #include <linux/platform_device.h> +#include <linux/of_platform.h> #include <linux/module.h> #include <linux/of.h> #include <linux/of_device.h> @@ -30,6 +31,7 @@ #include "clk-pll.h" #include "clk-rcg.h" #include "clk-branch.h" +#include "clk-hfpll.h" #include "reset.h" static struct clk_pll pll3 = { @@ -75,6 +77,164 @@ static struct clk_regmap pll8_vote = { }, }; +static struct hfpll_data hfpll0_data = { + .mode_reg = 0x3200, + .l_reg = 0x3208, + .m_reg = 0x320c, + .n_reg = 0x3210, + .config_reg = 0x3204, + .status_reg = 0x321c, + .config_val = 0x7845c665, + .droop_reg = 0x3214, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct clk_hfpll hfpll0 = { + .d = &hfpll0_data, + .clkr.hw.init = &(struct clk_init_data){ + .parent_names = (const char *[]){ "pxo" }, + .num_parents = 1, + .name = "hfpll0", + .ops = &clk_ops_hfpll, + .flags = CLK_IGNORE_UNUSED, + }, + .lock = __SPIN_LOCK_UNLOCKED(hfpll0.lock), +}; + +static struct hfpll_data hfpll1_8064_data = { + .mode_reg = 0x3240, + .l_reg = 0x3248, + .m_reg = 0x324c, + .n_reg = 0x3250, + .config_reg = 0x3244, + .status_reg = 0x325c, + .config_val = 0x7845c665, + .droop_reg = 0x3254, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct hfpll_data hfpll1_data = { + 
.mode_reg = 0x3300, + .l_reg = 0x3308, + .m_reg = 0x330c, + .n_reg = 0x3310, + .config_reg = 0x3304, + .status_reg = 0x331c, + .config_val = 0x7845c665, + .droop_reg = 0x3314, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct clk_hfpll hfpll1 = { + .d = &hfpll1_data, + .clkr.hw.init = &(struct clk_init_data){ + .parent_names = (const char *[]){ "pxo" }, + .num_parents = 1, + .name = "hfpll1", + .ops = &clk_ops_hfpll, + .flags = CLK_IGNORE_UNUSED, + }, + .lock = __SPIN_LOCK_UNLOCKED(hfpll1.lock), +}; + +static struct hfpll_data hfpll2_data = { + .mode_reg = 0x3280, + .l_reg = 0x3288, + .m_reg = 0x328c, + .n_reg = 0x3290, + .config_reg = 0x3284, + .status_reg = 0x329c, + .config_val = 0x7845c665, + .droop_reg = 0x3294, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct clk_hfpll hfpll2 = { + .d = &hfpll2_data, + .clkr.hw.init = &(struct clk_init_data){ + .parent_names = (const char *[]){ "pxo" }, + .num_parents = 1, + .name = "hfpll2", + .ops = &clk_ops_hfpll, + .flags = CLK_IGNORE_UNUSED, + }, + .lock = __SPIN_LOCK_UNLOCKED(hfpll2.lock), +}; + +static struct hfpll_data hfpll3_data = { + .mode_reg = 0x32c0, + .l_reg = 0x32c8, + .m_reg = 0x32cc, + .n_reg = 0x32d0, + .config_reg = 0x32c4, + .status_reg = 0x32dc, + .config_val = 0x7845c665, + .droop_reg = 0x32d4, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct clk_hfpll hfpll3 = { + .d = &hfpll3_data, + .clkr.hw.init = &(struct clk_init_data){ + .parent_names = (const char *[]){ "pxo" }, + .num_parents = 1, + .name = "hfpll3", + .ops = &clk_ops_hfpll, + .flags = CLK_IGNORE_UNUSED, + }, + .lock = __SPIN_LOCK_UNLOCKED(hfpll3.lock), +}; + +static struct hfpll_data hfpll_l2_8064_data = { + .mode_reg = 0x3300, + .l_reg = 0x3308, + .m_reg = 0x330c, + .n_reg = 0x3310, + .config_reg = 0x3304, + .status_reg = 0x331c, + .config_val = 0x7845c665, + .droop_reg = 0x3314, + 
.droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct hfpll_data hfpll_l2_data = { + .mode_reg = 0x3400, + .l_reg = 0x3408, + .m_reg = 0x340c, + .n_reg = 0x3410, + .config_reg = 0x3404, + .status_reg = 0x341c, + .config_val = 0x7845c665, + .droop_reg = 0x3414, + .droop_val = 0x0108c000, + .min_rate = 600000000UL, + .max_rate = 1800000000UL, +}; + +static struct clk_hfpll hfpll_l2 = { + .d = &hfpll_l2_data, + .clkr.hw.init = &(struct clk_init_data){ + .parent_names = (const char *[]){ "pxo" }, + .num_parents = 1, + .name = "hfpll_l2", + .ops = &clk_ops_hfpll, + .flags = CLK_IGNORE_UNUSED, + }, + .lock = __SPIN_LOCK_UNLOCKED(hfpll_l2.lock), +}; + static struct clk_pll pll14 = { .l_reg = 0x31c4, .m_reg = 0x31c8, @@ -3140,6 +3300,9 @@ static struct clk_regmap *gcc_msm8960_clks[] = { [PMIC_ARB1_H_CLK] = &pmic_arb1_h_clk.clkr, [PMIC_SSBI2_CLK] = &pmic_ssbi2_clk.clkr, [RPM_MSG_RAM_H_CLK] = &rpm_msg_ram_h_clk.clkr, + [PLL9] = &hfpll0.clkr, + [PLL10] = &hfpll1.clkr, + [PLL12] = &hfpll_l2.clkr, }; static const struct qcom_reset_map gcc_msm8960_resets[] = { @@ -3350,6 +3513,11 @@ static struct clk_regmap *gcc_apq8064_clks[] = { [PMIC_ARB1_H_CLK] = &pmic_arb1_h_clk.clkr, [PMIC_SSBI2_CLK] = &pmic_ssbi2_clk.clkr, [RPM_MSG_RAM_H_CLK] = &rpm_msg_ram_h_clk.clkr, + [PLL9] = &hfpll0.clkr, + [PLL10] = &hfpll1.clkr, + [PLL12] = &hfpll_l2.clkr, + [PLL16] = &hfpll2.clkr, + [PLL17] = &hfpll3.clkr, }; static const struct qcom_reset_map gcc_apq8064_resets[] = { @@ -3488,24 +3656,19 @@ MODULE_DEVICE_TABLE(of, gcc_msm8960_match_table); static int gcc_msm8960_probe(struct platform_device *pdev) { - struct clk *clk; - struct device *dev = &pdev->dev; const struct of_device_id *match; match = of_match_device(gcc_msm8960_match_table, &pdev->dev); if (!match) return -EINVAL; - /* Temporary until RPM clocks supported */ - clk = clk_register_fixed_rate(dev, "cxo", NULL, CLK_IS_ROOT, 19200000); - if (IS_ERR(clk)) - return PTR_ERR(clk); - - clk = 
clk_register_fixed_rate(dev, "pxo", NULL, CLK_IS_ROOT, 27000000); - if (IS_ERR(clk)) - return PTR_ERR(clk); + if (match->data == &gcc_apq8064_desc) { + hfpll1.d = &hfpll1_8064_data; + hfpll_l2.d = &hfpll_l2_8064_data; + } - return qcom_cc_probe(pdev, match->data); + qcom_cc_probe(pdev, match->data); + return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); } static int gcc_msm8960_remove(struct platform_device *pdev) diff --git a/drivers/clk/qcom/hfpll.c b/drivers/clk/qcom/hfpll.c new file mode 100644 index 000000000000..701a377d35c1 --- /dev/null +++ b/drivers/clk/qcom/hfpll.c @@ -0,0 +1,110 @@ +/* + * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/of.h> +#include <linux/clk.h> +#include <linux/clk-provider.h> +#include <linux/regmap.h> + +#include "clk-regmap.h" +#include "clk-hfpll.h" + +static const struct hfpll_data hdata = { + .mode_reg = 0x00, + .l_reg = 0x04, + .m_reg = 0x08, + .n_reg = 0x0c, + .user_reg = 0x10, + .config_reg = 0x14, + .config_val = 0x430405d, + .status_reg = 0x1c, + .lock_bit = 16, + + .user_val = 0x8, + .user_vco_mask = 0x100000, + .low_vco_max_rate = 1248000000, + .min_rate = 537600000UL, + .max_rate = 2900000000UL, +}; + +static const struct of_device_id qcom_hfpll_match_table[] = { + { .compatible = "qcom,hfpll" }, + { } +}; +MODULE_DEVICE_TABLE(of, qcom_hfpll_match_table); + +static struct regmap_config hfpll_regmap_config = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, + .max_register = 0x30, + .fast_io = true, +}; + +static int qcom_hfpll_probe(struct platform_device *pdev) +{ + struct clk *clk; + struct resource *res; + struct device *dev = &pdev->dev; + void __iomem *base; + struct regmap *regmap; + struct clk_hfpll *h; + struct clk_init_data init = { + .parent_names = (const char *[]){ "xo" }, + .num_parents = 1, + .ops = &clk_ops_hfpll, + }; + + h = devm_kzalloc(dev, sizeof(*h), GFP_KERNEL); + if (!h) + return -ENOMEM; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + base = devm_ioremap_resource(dev, res); + if (IS_ERR(base)) + return PTR_ERR(base); + + regmap = devm_regmap_init_mmio(&pdev->dev, base, &hfpll_regmap_config); + if (IS_ERR(regmap)) + return PTR_ERR(regmap); + + if (of_property_read_string_index(dev->of_node, "clock-output-names", + 0, &init.name)) + return -ENODEV; + + h->d = &hdata; + h->clkr.hw.init = &init; + spin_lock_init(&h->lock); + + clk = devm_clk_register_regmap(&pdev->dev, &h->clkr); + + return PTR_ERR_OR_ZERO(clk); +} + +static struct platform_driver qcom_hfpll_driver = { + .probe = 
qcom_hfpll_probe, + .driver = { + .name = "qcom-hfpll", + .owner = THIS_MODULE, + .of_match_table = qcom_hfpll_match_table, + }, +}; +module_platform_driver(qcom_hfpll_driver); + +MODULE_DESCRIPTION("QCOM HFPLL Clock Driver"); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS("platform:qcom-hfpll"); diff --git a/drivers/clk/qcom/kpss-xcc.c b/drivers/clk/qcom/kpss-xcc.c new file mode 100644 index 000000000000..9c49496afa6a --- /dev/null +++ b/drivers/clk/qcom/kpss-xcc.c @@ -0,0 +1,94 @@ +/* Copyright (c) 2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/err.h> +#include <linux/io.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/clk.h> +#include <linux/clk-provider.h> + +static const char *aux_parents[] = { + "pll8_vote", + "pxo", +}; + +static unsigned int aux_parent_map[] = { + 3, + 0, +}; + +static const struct of_device_id kpss_xcc_match_table[] = { + { .compatible = "qcom,kpss-acc-v1", .data = (void *)1UL }, + { .compatible = "qcom,kpss-gcc" }, + {} +}; +MODULE_DEVICE_TABLE(of, kpss_xcc_match_table); + +static int kpss_xcc_driver_probe(struct platform_device *pdev) +{ + const struct of_device_id *id; + struct clk *clk; + struct resource *res; + void __iomem *base; + const char *name; + + id = of_match_device(kpss_xcc_match_table, &pdev->dev); + if (!id) + return -ENODEV; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(base)) + return PTR_ERR(base); + + if (id->data) { + if (of_property_read_string_index(pdev->dev.of_node, + "clock-output-names", 0, &name)) + return -ENODEV; + base += 0x14; + } else { + name = "acpu_l2_aux"; + base += 0x28; + } + + clk = clk_register_mux_table(&pdev->dev, name, aux_parents, + ARRAY_SIZE(aux_parents), 0, base, 0, 0x3, + 0, aux_parent_map, NULL); + + platform_set_drvdata(pdev, clk); + + return PTR_ERR_OR_ZERO(clk); +} + +static int kpss_xcc_driver_remove(struct platform_device *pdev) +{ + clk_unregister_mux(platform_get_drvdata(pdev)); + return 0; +} + +static struct platform_driver kpss_xcc_driver = { + .probe = kpss_xcc_driver_probe, + .remove = kpss_xcc_driver_remove, + .driver = { + .name = "kpss-xcc", + .of_match_table = kpss_xcc_match_table, + .owner = THIS_MODULE, + }, +}; +module_platform_driver(kpss_xcc_driver); + +MODULE_LICENSE("GPL v2"); diff --git a/drivers/clk/qcom/krait-cc.c b/drivers/clk/qcom/krait-cc.c new file mode 100644 index 
000000000000..8c5d24e2c6cb --- /dev/null +++ b/drivers/clk/qcom/krait-cc.c @@ -0,0 +1,357 @@ +/* Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/err.h> +#include <linux/io.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/clk.h> +#include <linux/clk-provider.h> +#include <linux/slab.h> + +#include <asm/smp_plat.h> + +#include "clk-krait.h" + +static unsigned int sec_mux_map[] = { + 2, + 0, +}; + +static unsigned int pri_mux_map[] = { + 1, + 2, + 0, +}; + +static int +krait_add_div(struct device *dev, int id, const char *s, unsigned offset) +{ + struct krait_div2_clk *div; + struct clk_init_data init = { + .num_parents = 1, + .ops = &krait_div2_clk_ops, + .flags = CLK_SET_RATE_PARENT, + }; + const char *p_names[1]; + struct clk *clk; + + div = devm_kzalloc(dev, sizeof(*div), GFP_KERNEL); + if (!div) + return -ENOMEM; + + div->width = 2; + div->shift = 6; + div->lpl = id >= 0; + div->offset = offset; + div->hw.init = &init; + + init.name = kasprintf(GFP_KERNEL, "hfpll%s_div", s); + if (!init.name) + return -ENOMEM; + + init.parent_names = p_names; + p_names[0] = kasprintf(GFP_KERNEL, "hfpll%s", s); + if (!p_names[0]) { + kfree(init.name); + return -ENOMEM; + } + + clk = devm_clk_register(dev, &div->hw); + kfree(p_names[0]); + kfree(init.name); + + return PTR_ERR_OR_ZERO(clk); +} + +static int +krait_add_sec_mux(struct device *dev, 
int id, const char *s, unsigned offset, + bool unique_aux) +{ + struct krait_mux_clk *mux; + static const char *sec_mux_list[] = { + "acpu_aux", + "qsb", + }; + struct clk_init_data init = { + .parent_names = sec_mux_list, + .num_parents = ARRAY_SIZE(sec_mux_list), + .ops = &krait_mux_clk_ops, + .flags = CLK_SET_RATE_PARENT, + }; + struct clk *clk; + + mux = devm_kzalloc(dev, sizeof(*mux), GFP_KERNEL); + if (!mux) + return -ENOMEM; + + mux->offset = offset; + mux->lpl = id >= 0; + mux->has_safe_parent = true; + mux->safe_sel = 2; + mux->mask = 0x3; + mux->shift = 2; + mux->parent_map = sec_mux_map; + mux->hw.init = &init; + + init.name = kasprintf(GFP_KERNEL, "krait%s_sec_mux", s); + if (!init.name) + return -ENOMEM; + + if (unique_aux) { + sec_mux_list[0] = kasprintf(GFP_KERNEL, "acpu%s_aux", s); + if (!sec_mux_list[0]) { + clk = ERR_PTR(-ENOMEM); + goto err_aux; + } + } + + clk = devm_clk_register(dev, &mux->hw); + + if (unique_aux) + kfree(sec_mux_list[0]); +err_aux: + kfree(init.name); + return PTR_ERR_OR_ZERO(clk); +} + +static struct clk * +krait_add_pri_mux(struct device *dev, int id, const char *s, unsigned offset) +{ + struct krait_mux_clk *mux; + const char *p_names[3]; + struct clk_init_data init = { + .parent_names = p_names, + .num_parents = ARRAY_SIZE(p_names), + .ops = &krait_mux_clk_ops, + .flags = CLK_SET_RATE_PARENT, + }; + struct clk *clk; + + mux = devm_kzalloc(dev, sizeof(*mux), GFP_KERNEL); + if (!mux) + return ERR_PTR(-ENOMEM); + + mux->has_safe_parent = true; + mux->safe_sel = 0; + mux->mask = 0x3; + mux->shift = 0; + mux->offset = offset; + mux->lpl = id >= 0; + mux->parent_map = pri_mux_map; + mux->hw.init = &init; + + init.name = kasprintf(GFP_KERNEL, "krait%s_pri_mux", s); + if (!init.name) + return ERR_PTR(-ENOMEM); + + p_names[0] = kasprintf(GFP_KERNEL, "hfpll%s", s); + if (!p_names[0]) { + clk = ERR_PTR(-ENOMEM); + goto err_p0; + } + + p_names[1] = kasprintf(GFP_KERNEL, "hfpll%s_div", s); + if (!p_names[1]) { + clk = ERR_PTR(-ENOMEM); 
+ goto err_p1; + } + + p_names[2] = kasprintf(GFP_KERNEL, "krait%s_sec_mux", s); + if (!p_names[2]) { + clk = ERR_PTR(-ENOMEM); + goto err_p2; + } + + clk = devm_clk_register(dev, &mux->hw); + + kfree(p_names[2]); +err_p2: + kfree(p_names[1]); +err_p1: + kfree(p_names[0]); +err_p0: + kfree(init.name); + return clk; +} + +/* id < 0 for L2, otherwise id == physical CPU number */ +static struct clk *krait_add_clks(struct device *dev, int id, bool unique_aux) +{ + int ret; + unsigned offset; + void *p = NULL; + const char *s; + struct clk *clk; + + if (id >= 0) { + offset = 0x4501 + (0x1000 * id); + s = p = kasprintf(GFP_KERNEL, "%d", id); + if (!s) + return ERR_PTR(-ENOMEM); + } else { + offset = 0x500; + s = "_l2"; + } + + ret = krait_add_div(dev, id, s, offset); + if (ret) { + clk = ERR_PTR(ret); + goto err; + } + + ret = krait_add_sec_mux(dev, id, s, offset, unique_aux); + if (ret) { + clk = ERR_PTR(ret); + goto err; + } + + clk = krait_add_pri_mux(dev, id, s, offset); +err: + kfree(p); + return clk; +} + +static struct clk *krait_of_get(struct of_phandle_args *clkspec, void *data) +{ + unsigned int idx = clkspec->args[0]; + struct clk **clks = data; + + if (idx >= 5) { + pr_err("%s: invalid clock index %d\n", __func__, idx); + return ERR_PTR(-EINVAL); + } + + return clks[idx] ? 
: ERR_PTR(-ENODEV); +} + +static const struct of_device_id krait_cc_match_table[] = { + { .compatible = "qcom,krait-cc-v1", (void *)1UL }, + { .compatible = "qcom,krait-cc-v2" }, + {} +}; +MODULE_DEVICE_TABLE(of, krait_cc_match_table); + +static int krait_cc_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + const struct of_device_id *id; + unsigned long cur_rate, aux_rate; + int i, cpu; + struct clk *clk; + struct clk **clks; + struct clk *l2_pri_mux_clk; + + id = of_match_device(krait_cc_match_table, &pdev->dev); + if (!id) + return -ENODEV; + + /* Rate is 1 because 0 causes problems for __clk_mux_determine_rate */ + clk = clk_register_fixed_rate(dev, "qsb", NULL, CLK_IS_ROOT, 1); + if (IS_ERR(clk)) + return PTR_ERR(clk); + + if (!id->data) { + clk = clk_register_fixed_factor(&pdev->dev, "acpu_aux", + "gpll0_vote", 0, 1, 2); + if (IS_ERR(clk)) + return PTR_ERR(clk); + } + + /* Krait configurations have at most 4 CPUs and one L2 */ + clks = devm_kcalloc(dev, 5, sizeof(*clks), GFP_KERNEL); + if (!clks) + return -ENOMEM; + + for_each_possible_cpu(i) { + cpu = cpu_logical_map(i); + clk = krait_add_clks(dev, cpu, id->data); + if (IS_ERR(clk)) + return PTR_ERR(clk); + clks[cpu] = clk; + } + + l2_pri_mux_clk = krait_add_clks(dev, -1, id->data); + if (IS_ERR(l2_pri_mux_clk)) + return PTR_ERR(l2_pri_mux_clk); + clks[4] = l2_pri_mux_clk; + + /* + * We don't want the CPU or L2 clocks to be turned off at late init + * if CPUFREQ or HOTPLUG configs are disabled. So, bump up the + * refcount of these clocks. Any cpufreq/hotplug manager can assume + * that the clocks have already been prepared and enabled by the time + * they take over. + */ + for_each_online_cpu(i) { + cpu = cpu_logical_map(i); + clk_prepare_enable(l2_pri_mux_clk); + WARN(clk_prepare_enable(clks[cpu]), + "Unable to turn on CPU%d clock", cpu); + } + + /* + * Force reinit of HFPLLs and muxes to overwrite any potential + * incorrect configuration of HFPLLs and muxes by the bootloader. 
+ * While at it, also make sure the cores are running at known rates + * and print the current rate. + * + * The clocks are set to aux clock rate first to make sure the + * secondary mux is not sourcing off of QSB. The rate is then set to + * two different rates to force a HFPLL reinit under all + * circumstances. + */ + cur_rate = clk_get_rate(l2_pri_mux_clk); + aux_rate = 384000000; + if (cur_rate == 1) { + pr_info("L2 @ QSB rate. Forcing new rate.\n"); + cur_rate = aux_rate; + } + clk_set_rate(l2_pri_mux_clk, aux_rate); + clk_set_rate(l2_pri_mux_clk, 2); + clk_set_rate(l2_pri_mux_clk, cur_rate); + pr_info("L2 @ %lu KHz\n", clk_get_rate(l2_pri_mux_clk) / 1000); + for_each_possible_cpu(i) { + cpu = cpu_logical_map(i); + clk = clks[cpu]; + cur_rate = clk_get_rate(clk); + if (cur_rate == 1) { + pr_info("CPU%d @ QSB rate. Forcing new rate.\n", i); + cur_rate = aux_rate; + } + clk_set_rate(clk, aux_rate); + clk_set_rate(clk, 2); + clk_set_rate(clk, cur_rate); + pr_info("CPU%d @ %lu KHz\n", i, clk_get_rate(clk) / 1000); + } + + of_clk_add_provider(dev->of_node, krait_of_get, clks); + + return 0; +} + +static struct platform_driver krait_cc_driver = { + .probe = krait_cc_probe, + .driver = { + .name = "clock-krait", + .of_match_table = krait_cc_match_table, + .owner = THIS_MODULE, + }, +}; +module_platform_driver(krait_cc_driver); + +MODULE_DESCRIPTION("Krait CPU Clock Driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm index 83a75dc84761..afc27a434fbf 100644 --- a/drivers/cpufreq/Kconfig.arm +++ b/drivers/cpufreq/Kconfig.arm @@ -129,6 +129,15 @@ config ARM_OMAP2PLUS_CPUFREQ depends on ARCH_OMAP2PLUS default ARCH_OMAP2PLUS +config ARM_QCOM_CPUFREQ + tristate "Qualcomm based" + depends on ARCH_QCOM + select PM_OPP + help + This adds the CPUFreq driver for Qualcomm SoC based boards. + + If in doubt, say N. 
+ config ARM_S3C_CPUFREQ bool help diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile index b89b1f71f80f..9b235ce72959 100644 --- a/drivers/cpufreq/Makefile +++ b/drivers/cpufreq/Makefile @@ -62,6 +62,7 @@ obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o obj-$(CONFIG_ARM_INTEGRATOR) += integrator-cpufreq.o obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o +obj-$(CONFIG_ARM_QCOM_CPUFREQ) += qcom-cpufreq.o obj-$(CONFIG_PXA25x) += pxa2xx-cpufreq.o obj-$(CONFIG_PXA27x) += pxa2xx-cpufreq.o obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o diff --git a/drivers/cpufreq/qcom-cpufreq.c b/drivers/cpufreq/qcom-cpufreq.c new file mode 100644 index 000000000000..26bce596a146 --- /dev/null +++ b/drivers/cpufreq/qcom-cpufreq.c @@ -0,0 +1,208 @@ +/* Copyright (c) 2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/cpu.h> +#include <linux/of.h> +#include <linux/platform_device.h> +#include <linux/err.h> +#include <linux/io.h> +#include <linux/slab.h> +#include <linux/cpufreq-dt.h> +#include <linux/pm_opp.h> +#include <linux/init.h> + +static struct cpufreq_dt_platform_data cpufreq_data = { + .independent_clocks = true, +}; + +static void __init get_krait_bin_format_a(int *speed, int *pvs, int *pvs_ver) +{ + void __iomem *base; + u32 pte_efuse; + + *speed = *pvs = *pvs_ver = 0; + + base = ioremap(0x007000c0, 4); + if (!base) { + pr_warn("Unable to read efuse data. 
Defaulting to 0!\n"); + return; + } + + pte_efuse = readl_relaxed(base); + iounmap(base); + + *speed = pte_efuse & 0xf; + if (*speed == 0xf) + *speed = (pte_efuse >> 4) & 0xf; + + if (*speed == 0xf) { + *speed = 0; + pr_warn("Speed bin: Defaulting to %d\n", *speed); + } else { + pr_info("Speed bin: %d\n", *speed); + } + + *pvs = (pte_efuse >> 10) & 0x7; + if (*pvs == 0x7) + *pvs = (pte_efuse >> 13) & 0x7; + + if (*pvs == 0x7) { + *pvs = 0; + pr_warn("PVS bin: Defaulting to %d\n", *pvs); + } else { + pr_info("PVS bin: %d\n", *pvs); + } +} + +static void __init get_krait_bin_format_b(int *speed, int *pvs, int *pvs_ver) +{ + u32 pte_efuse, redundant_sel; + void __iomem *base; + + *speed = 0; + *pvs = 0; + *pvs_ver = 0; + + base = ioremap(0xfc4b80b0, 8); + if (!base) { + pr_warn("Unable to read efuse data. Defaulting to 0!\n"); + return; + } + + pte_efuse = readl_relaxed(base); + redundant_sel = (pte_efuse >> 24) & 0x7; + *speed = pte_efuse & 0x7; + /* 4 bits of PVS are in efuse register bits 31, 8-6. */ + *pvs = ((pte_efuse >> 28) & 0x8) | ((pte_efuse >> 6) & 0x7); + *pvs_ver = (pte_efuse >> 4) & 0x3; + + switch (redundant_sel) { + case 1: + *speed = (pte_efuse >> 27) & 0xf; + break; + case 2: + *pvs = (pte_efuse >> 27) & 0xf; + break; + } + + /* Check SPEED_BIN_BLOW_STATUS */ + if (pte_efuse & BIT(3)) { + pr_info("Speed bin: %d\n", *speed); + } else { + pr_warn("Speed bin not set. Defaulting to 0!\n"); + *speed = 0; + } + + /* Check PVS_BLOW_STATUS */ + pte_efuse = readl_relaxed(base + 0x4) & BIT(21); + if (pte_efuse) { + pr_info("PVS bin: %d\n", *pvs); + } else { + pr_warn("PVS bin not set. 
Defaulting to 0!\n"); + *pvs = 0; + } + + pr_info("PVS version: %d\n", *pvs_ver); + iounmap(base); +} + +static int __init qcom_cpufreq_populate_opps(void) +{ + int len, num_rows, i, k; + char table_name[] = "qcom,speedXX-pvsXX-bin-vXX"; + struct device_node *np; + struct device *dev; + int cpu = 0; + int speed, pvs, pvs_ver; + int cols; + + np = of_find_node_by_name(NULL, "qcom,pvs"); + if (!np) + pr_warn("Can't find PVS node\n"); + + if (of_property_read_bool(np, "qcom,pvs-format-a")) { + get_krait_bin_format_a(&speed, &pvs, &pvs_ver); + cols = 2; + } else { + get_krait_bin_format_b(&speed, &pvs, &pvs_ver); + cols = 3; + } + + snprintf(table_name, sizeof(table_name), + "qcom,speed%d-pvs%d-bin-v%d", speed, pvs, pvs_ver); +again: + dev = get_cpu_device(cpu); + if (!dev) + return -ENODEV; + + if (!of_find_property(np, table_name, &len)) + return -EINVAL; + + len /= sizeof(u32); + if (len % cols || len == 0) + return -EINVAL; + + num_rows = len / cols; + + for (i = 0, k = 0; i < num_rows; i++) { + u32 freq, volt; + + of_property_read_u32_index(np, table_name, k++, &freq); + of_property_read_u32_index(np, table_name, k++, &volt); + while (k % cols) + k++; /* Skip uA entries if present */ + if (dev_pm_opp_add(dev, freq, volt)) + pr_warn("failed to add OPP %u\n", freq); + } + + if (cpu++ < num_possible_cpus()) + goto again; + + return 0; +} + +static int __init qcom_cpufreq_driver_init(void) +{ + struct platform_device_info devinfo = { + .name = "cpufreq-dt", + .data = &cpufreq_data, + .size_data = sizeof(cpufreq_data) + }; + struct device *cpu_dev; + struct device_node *np; + struct platform_device *pdev; + + cpu_dev = get_cpu_device(0); + if (!cpu_dev) + return -ENODEV; + + np = of_node_get(cpu_dev->of_node); + if (!np) + return -ENOENT; + + if (!of_device_is_compatible(np, "qcom,krait")) { + of_node_put(np); + return -ENODEV; + } + of_node_put(np); + + qcom_cpufreq_populate_opps(); + pdev = platform_device_register_full(&devinfo); + + return PTR_ERR_OR_ZERO(pdev); +} 
+module_init(qcom_cpufreq_driver_init); + +MODULE_DESCRIPTION("Qualcomm CPUfreq driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm index 8c16ab20fb15..e98993c872a2 100644 --- a/drivers/cpuidle/Kconfig.arm +++ b/drivers/cpuidle/Kconfig.arm @@ -63,3 +63,10 @@ config ARM_MVEBU_V7_CPUIDLE depends on ARCH_MVEBU help Select this to enable cpuidle on Armada 370, 38x and XP processors. + +config ARM_QCOM_CPUIDLE + bool "CPU Idle drivers for Qualcomm processors" + depends on ARCH_QCOM + select DT_IDLE_STATES + help + Select this to enable cpuidle for QCOM processors diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile index 4d177b916f75..6c222d536005 100644 --- a/drivers/cpuidle/Makefile +++ b/drivers/cpuidle/Makefile @@ -17,6 +17,7 @@ obj-$(CONFIG_ARM_ZYNQ_CPUIDLE) += cpuidle-zynq.o obj-$(CONFIG_ARM_U8500_CPUIDLE) += cpuidle-ux500.o obj-$(CONFIG_ARM_AT91_CPUIDLE) += cpuidle-at91.o obj-$(CONFIG_ARM_EXYNOS_CPUIDLE) += cpuidle-exynos.o +obj-$(CONFIG_ARM_QCOM_CPUIDLE) += cpuidle-qcom.o ############################################################################### # MIPS drivers diff --git a/drivers/cpuidle/cpuidle-qcom.c b/drivers/cpuidle/cpuidle-qcom.c new file mode 100644 index 000000000000..aa6a6c9a3590 --- /dev/null +++ b/drivers/cpuidle/cpuidle-qcom.c @@ -0,0 +1,99 @@ +/* + * Copyright (c) 2014, Linaro Limited. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ + +#include <linux/cpuidle.h> +#include <linux/module.h> +#include <linux/platform_device.h> + +#include <soc/qcom/pm.h> +#include "dt_idle_states.h" + +static struct qcom_cpu_pm_ops *lpm_ops; + +static int qcom_cpu_stby(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) +{ + int ret; + + ret = lpm_ops->standby(NULL); + if (ret) + return ret; + + return index; +} + +static int qcom_cpu_spc(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) +{ + int ret; + + ret = lpm_ops->spc(NULL); + if (ret) + return ret; + + return index; +} + +static struct cpuidle_driver qcom_cpuidle_driver = { + .name = "qcom_cpuidle", +}; + +static const struct of_device_id qcom_idle_state_match[] = { + { .compatible = "qcom,idle-state-stby", .data = qcom_cpu_stby }, + { .compatible = "qcom,idle-state-spc", .data = qcom_cpu_spc }, + { }, +}; + +static int qcom_cpuidle_probe(struct platform_device *pdev) +{ + struct cpuidle_driver *drv = &qcom_cpuidle_driver; + int ret; + + lpm_ops = pdev->dev.platform_data; + + /* Probe for other states, including standby */ + ret = dt_init_idle_driver(drv, qcom_idle_state_match, 0); + if (ret < 0) + return ret; + + /* + * We will not register for cpu's cpuidle device here, + * they will be registered as and when their power controllers + * are ready. + */ + return cpuidle_register_driver(drv); +} + +static struct platform_driver qcom_cpuidle = { + .probe = qcom_cpuidle_probe, + .driver = { + .name = "qcom_cpuidle", + }, +}; + +/* + * Register the driver early so that we have a successful registration + * when the device shows up. + * This way the cpuidle driver could be registered before the cpuidle + * devices are registered. 
+ */ +static int __init qcom_cpuidle_driver_init(void) +{ + return platform_driver_register(&qcom_cpuidle); +} +core_initcall(qcom_cpuidle_driver_init); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("CPUIDLE driver for QCOM SoC"); +MODULE_ALIAS("platform:qcom-cpuidle"); diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index de469821bc1b..3f3e4b27f3d9 100644 --- a/drivers/dma/Kconfig +++ b/drivers/dma/Kconfig @@ -457,4 +457,11 @@ config QCOM_BAM_DMA Enable support for the QCOM BAM DMA controller. This controller provides DMA capabilities for a variety of on-chip devices. +config MSM_DMA + tristate "Qualcomm 3.4 ADM DMA support" + depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM) + select DMA_ENGINE + ---help--- + Enable support for the Old Qualcomm DMA controller. + endif diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile index cb626c179911..3be4dd48e48b 100644 --- a/drivers/dma/Makefile +++ b/drivers/dma/Makefile @@ -49,3 +49,4 @@ obj-y += xilinx/ obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o +obj-$(CONFIG_MSM_DMA) += msm_dma.o diff --git a/drivers/dma/msm_dma.c b/drivers/dma/msm_dma.c new file mode 100644 index 000000000000..63970c230b4d --- /dev/null +++ b/drivers/dma/msm_dma.c @@ -0,0 +1,789 @@ +/* linux/arch/arm/mach-msm/dma.c + * + * Copyright (C) 2007 Google, Inc. + * Copyright (c) 2008-2010, 2012 The Linux Foundation. All rights reserved. + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ + +#include <linux/clk.h> +#include <linux/err.h> +#include <linux/io.h> +#include <linux/interrupt.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/spinlock.h> +#include <linux/pm_runtime.h> +#include <soc/qcom/msm_dma.h> + +#define MODULE_NAME "msm_dmov" + +#define MSM_DMOV_CHANNEL_COUNT 16 +#define MSM_DMOV_CRCI_COUNT 16 + +#define GIC_SPI_START 32 +#define ADM_0_SCSS_1_IRQ (GIC_SPI_START + 171) + +enum { + CLK_DIS, + CLK_TO_BE_DIS, + CLK_EN +}; + +struct msm_dmov_ci_conf { + int start; + int end; + int burst; +}; + +struct msm_dmov_crci_conf { + int sd; + int blk_size; +}; + +struct msm_dmov_chan_conf { + int sd; + int block; + int priority; +}; + +struct msm_dmov_conf { + void *base; + struct msm_dmov_crci_conf *crci_conf; + struct msm_dmov_chan_conf *chan_conf; + int channel_active; + int sd; + size_t sd_size; + struct list_head staged_commands[MSM_DMOV_CHANNEL_COUNT]; + struct list_head ready_commands[MSM_DMOV_CHANNEL_COUNT]; + struct list_head active_commands[MSM_DMOV_CHANNEL_COUNT]; + struct mutex lock; + spinlock_t list_lock; + unsigned int irq; + struct clk *clk; + struct clk *pclk; + struct clk *ebiclk; + unsigned int clk_ctl; + struct delayed_work work; + struct workqueue_struct *cmd_wq; +}; + +static void msm_dmov_clock_work(struct work_struct *); + +#ifdef CONFIG_ARCH_MSM8X60 + +#define DMOV_CHANNEL_DEFAULT_CONF { .sd = 1, .block = 0, .priority = 0 } +#define DMOV_CHANNEL_MODEM_CONF { .sd = 3, .block = 0, .priority = 0 } +#define DMOV_CHANNEL_CONF(secd, blk, pri) \ + { .sd = secd, .block = blk, .priority = pri } + +static struct msm_dmov_chan_conf adm0_chan_conf[] = { + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + 
DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_DEFAULT_CONF, +}; + +static struct msm_dmov_chan_conf adm1_chan_conf[] = { + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_DEFAULT_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, + DMOV_CHANNEL_MODEM_CONF, +}; + +#define DMOV_CRCI_DEFAULT_CONF { .sd = 1, .blk_size = 0 } +#define DMOV_CRCI_CONF(secd, blk) { .sd = secd, .blk_size = blk } + +static struct msm_dmov_crci_conf adm0_crci_conf[] = { + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_CONF(1, 4), + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, +}; + +static struct msm_dmov_crci_conf adm1_crci_conf[] = { + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_CONF(1, 1), + DMOV_CRCI_CONF(1, 1), + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_CONF(1, 1), + DMOV_CRCI_CONF(1, 1), + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_DEFAULT_CONF, + DMOV_CRCI_CONF(1, 1), + DMOV_CRCI_DEFAULT_CONF, +}; + +static struct msm_dmov_conf dmov_conf[] = { + { + .crci_conf = adm0_crci_conf, + .chan_conf = adm0_chan_conf, + .lock = __MUTEX_INITIALIZER(dmov_conf[0].lock), + .list_lock = __SPIN_LOCK_UNLOCKED(dmov_list_lock), + .clk_ctl = CLK_DIS, + .work = 
__DELAYED_WORK_INITIALIZER(dmov_conf[0].work, + msm_dmov_clock_work, 0), + }, { + .crci_conf = adm1_crci_conf, + .chan_conf = adm1_chan_conf, + .lock = __MUTEX_INITIALIZER(dmov_conf[1].lock), + .list_lock = __SPIN_LOCK_UNLOCKED(dmov_list_lock), + .clk_ctl = CLK_DIS, + .work = __DELAYED_WORK_INITIALIZER(dmov_conf[1].work, + msm_dmov_clock_work, 0), + } +}; +#else +static struct msm_dmov_conf dmov_conf[] = { + { + .crci_conf = NULL, + .chan_conf = NULL, + .lock = __MUTEX_INITIALIZER(dmov_conf[0].lock), + .list_lock = __SPIN_LOCK_UNLOCKED(dmov_list_lock), + .clk_ctl = CLK_DIS, + .work = __DELAYED_WORK_INITIALIZER(dmov_conf[0].work, + msm_dmov_clock_work, 0), + } +}; +#endif + +#define MSM_DMOV_ID_COUNT (MSM_DMOV_CHANNEL_COUNT * ARRAY_SIZE(dmov_conf)) +#define DMOV_REG(name, adm) ((name) + (dmov_conf[adm].base) +\ + (dmov_conf[adm].sd * dmov_conf[adm].sd_size)) +#define DMOV_ID_TO_ADM(id) ((id) / MSM_DMOV_CHANNEL_COUNT) +#define DMOV_ID_TO_CHAN(id) ((id) % MSM_DMOV_CHANNEL_COUNT) +#define DMOV_CHAN_ADM_TO_ID(ch, adm) ((ch) + (adm) * MSM_DMOV_CHANNEL_COUNT) + +#ifdef CONFIG_MSM_ADM3 +#define DMOV_IRQ_TO_ADM(irq) \ +({ \ + typeof(irq) _irq = irq; \ + ((_irq == INT_ADM1_MASTER) || (_irq == INT_ADM1_AARM)); \ +}) +#else +#define DMOV_IRQ_TO_ADM(irq) 0 +#endif + +enum { + MSM_DMOV_PRINT_ERRORS = 1, + MSM_DMOV_PRINT_IO = 2, + MSM_DMOV_PRINT_FLOW = 4 +}; + +unsigned int msm_dmov_print_mask = MSM_DMOV_PRINT_ERRORS; + +#define MSM_DMOV_DPRINTF(mask, format, args...) \ + do { \ + if ((mask) & msm_dmov_print_mask) \ + printk(KERN_ERR format, args); \ + } while (0) +#define PRINT_ERROR(format, args...) \ + MSM_DMOV_DPRINTF(MSM_DMOV_PRINT_ERRORS, format, args); +#define PRINT_IO(format, args...) \ + MSM_DMOV_DPRINTF(MSM_DMOV_PRINT_IO, format, args); +#define PRINT_FLOW(format, args...) 
\ + MSM_DMOV_DPRINTF(MSM_DMOV_PRINT_FLOW, format, args); + +static int msm_dmov_clk_on(int adm) +{ + int ret; + + ret = clk_prepare_enable(dmov_conf[adm].clk); + if (ret) + return ret; + if (dmov_conf[adm].pclk) { + ret = clk_prepare_enable(dmov_conf[adm].pclk); + if (ret) { + clk_disable_unprepare(dmov_conf[adm].clk); + return ret; + } + } + if (dmov_conf[adm].ebiclk) { + ret = clk_prepare_enable(dmov_conf[adm].ebiclk); + if (ret) { + if (dmov_conf[adm].pclk) + clk_disable_unprepare(dmov_conf[adm].pclk); + clk_disable_unprepare(dmov_conf[adm].clk); + } + } + return ret; +} + +static void msm_dmov_clk_off(int adm) +{ + if (dmov_conf[adm].ebiclk) + clk_disable_unprepare(dmov_conf[adm].ebiclk); + if (dmov_conf[adm].pclk) + clk_disable_unprepare(dmov_conf[adm].pclk); + clk_disable_unprepare(dmov_conf[adm].clk); +} + +static void msm_dmov_clock_work(struct work_struct *work) +{ + struct msm_dmov_conf *conf = + container_of(to_delayed_work(work), struct msm_dmov_conf, work); + int adm = DMOV_IRQ_TO_ADM(conf->irq); + mutex_lock(&conf->lock); + if (conf->clk_ctl == CLK_TO_BE_DIS) { + BUG_ON(conf->channel_active); + msm_dmov_clk_off(adm); + conf->clk_ctl = CLK_DIS; + } + mutex_unlock(&conf->lock); +} + +enum { + NOFLUSH = 0, + GRACEFUL, + NONGRACEFUL, +}; + +/* Caller must hold the list lock */ +static struct msm_dmov_cmd *start_ready_cmd(unsigned ch, int adm) +{ + struct msm_dmov_cmd *cmd; + + if (list_empty(&dmov_conf[adm].ready_commands[ch])) + return NULL; + + cmd = list_entry(dmov_conf[adm].ready_commands[ch].next, typeof(*cmd), + list); + list_del(&cmd->list); + if (cmd->exec_func) + cmd->exec_func(cmd); + list_add_tail(&cmd->list, &dmov_conf[adm].active_commands[ch]); + if (!dmov_conf[adm].channel_active) + enable_irq(dmov_conf[adm].irq); + dmov_conf[adm].channel_active |= BIT(ch); + PRINT_IO("msm dmov enqueue command, %x, ch %d\n", cmd->cmdptr, ch); + writel_relaxed(cmd->cmdptr, DMOV_REG(DMOV_CMD_PTR(ch), adm)); + + return cmd; +} + +static void 
msm_dmov_enqueue_cmd_ext_work(struct work_struct *work) +{ + struct msm_dmov_cmd *cmd = + container_of(work, struct msm_dmov_cmd, work); + unsigned id = cmd->id; + unsigned status; + unsigned long flags; + int adm = DMOV_ID_TO_ADM(id); + int ch = DMOV_ID_TO_CHAN(id); + + mutex_lock(&dmov_conf[adm].lock); + if (dmov_conf[adm].clk_ctl == CLK_DIS) { + status = msm_dmov_clk_on(adm); + if (status != 0) + goto error; + } + dmov_conf[adm].clk_ctl = CLK_EN; + + spin_lock_irqsave(&dmov_conf[adm].list_lock, flags); + + cmd = list_entry(dmov_conf[adm].staged_commands[ch].next, typeof(*cmd), + list); + list_del(&cmd->list); + list_add_tail(&cmd->list, &dmov_conf[adm].ready_commands[ch]); + status = readl_relaxed(DMOV_REG(DMOV_STATUS(ch), adm)); + if (status & DMOV_STATUS_CMD_PTR_RDY) { + PRINT_IO("msm_dmov_enqueue_cmd(%d), start command, status %x\n", + id, status); + cmd = start_ready_cmd(ch, adm); + /* + * We added something to the ready list, and still hold the + * list lock. Thus, no need to check for cmd == NULL + */ + if (cmd->toflush) { + int flush = (cmd->toflush == GRACEFUL) ? 
1 << 31 : 0; + writel_relaxed(flush, DMOV_REG(DMOV_FLUSH0(ch), adm)); + } + } else { + cmd->toflush = 0; + if (list_empty(&dmov_conf[adm].active_commands[ch]) && + !list_empty(&dmov_conf[adm].ready_commands[ch])) + PRINT_ERROR("msm_dmov_enqueue_cmd_ext(%d), stalled, " + "status %x\n", id, status); + PRINT_IO("msm_dmov_enqueue_cmd(%d), enqueue command, status " + "%x\n", id, status); + } + if (!dmov_conf[adm].channel_active) { + dmov_conf[adm].clk_ctl = CLK_TO_BE_DIS; + schedule_delayed_work(&dmov_conf[adm].work, (HZ/10)); + } + spin_unlock_irqrestore(&dmov_conf[adm].list_lock, flags); +error: + mutex_unlock(&dmov_conf[adm].lock); +} + +static void __msm_dmov_enqueue_cmd_ext(unsigned id, struct msm_dmov_cmd *cmd) +{ + int adm = DMOV_ID_TO_ADM(id); + int ch = DMOV_ID_TO_CHAN(id); + unsigned long flags; + cmd->id = id; + cmd->toflush = 0; + + spin_lock_irqsave(&dmov_conf[adm].list_lock, flags); + list_add_tail(&cmd->list, &dmov_conf[adm].staged_commands[ch]); + spin_unlock_irqrestore(&dmov_conf[adm].list_lock, flags); + + queue_work(dmov_conf[adm].cmd_wq, &cmd->work); +} + +void msm_dmov_enqueue_cmd_ext(unsigned id, struct msm_dmov_cmd *cmd) +{ + INIT_WORK(&cmd->work, msm_dmov_enqueue_cmd_ext_work); + __msm_dmov_enqueue_cmd_ext(id, cmd); +} +EXPORT_SYMBOL(msm_dmov_enqueue_cmd_ext); + +void msm_dmov_enqueue_cmd(unsigned id, struct msm_dmov_cmd *cmd) +{ + /* Disable callback function (for backwards compatibility) */ + cmd->exec_func = NULL; + INIT_WORK(&cmd->work, msm_dmov_enqueue_cmd_ext_work); + __msm_dmov_enqueue_cmd_ext(id, cmd); +} +EXPORT_SYMBOL(msm_dmov_enqueue_cmd); + +void msm_dmov_flush(unsigned int id, int graceful) +{ + unsigned long irq_flags; + int ch = DMOV_ID_TO_CHAN(id); + int adm = DMOV_ID_TO_ADM(id); + int flush = graceful ? 
DMOV_FLUSH_TYPE : 0; + struct msm_dmov_cmd *cmd; + + spin_lock_irqsave(&dmov_conf[adm].list_lock, irq_flags); + /* XXX not checking if flush cmd sent already */ + if (!list_empty(&dmov_conf[adm].active_commands[ch])) { + PRINT_IO("msm_dmov_flush(%d), send flush cmd\n", id); + writel_relaxed(flush, DMOV_REG(DMOV_FLUSH0(ch), adm)); + } + list_for_each_entry(cmd, &dmov_conf[adm].staged_commands[ch], list) + cmd->toflush = graceful ? GRACEFUL : NONGRACEFUL; + /* spin_unlock_irqrestore has the necessary barrier */ + spin_unlock_irqrestore(&dmov_conf[adm].list_lock, irq_flags); +} +EXPORT_SYMBOL(msm_dmov_flush); + +struct msm_dmov_exec_cmdptr_cmd { + struct msm_dmov_cmd dmov_cmd; + struct completion complete; + unsigned id; + unsigned int result; + struct msm_dmov_errdata err; +}; + +static void +dmov_exec_cmdptr_complete_func(struct msm_dmov_cmd *_cmd, + unsigned int result, + struct msm_dmov_errdata *err) +{ + struct msm_dmov_exec_cmdptr_cmd *cmd = container_of(_cmd, struct msm_dmov_exec_cmdptr_cmd, dmov_cmd); + cmd->result = result; + if (result != 0x80000002 && err) + memcpy(&cmd->err, err, sizeof(struct msm_dmov_errdata)); + + complete(&cmd->complete); +} + +int msm_dmov_exec_cmd(unsigned id, unsigned int cmdptr) +{ + struct msm_dmov_exec_cmdptr_cmd cmd; + + PRINT_FLOW("dmov_exec_cmdptr(%d, %x)\n", id, cmdptr); + + cmd.dmov_cmd.cmdptr = cmdptr; + cmd.dmov_cmd.complete_func = dmov_exec_cmdptr_complete_func; + cmd.dmov_cmd.exec_func = NULL; + cmd.id = id; + cmd.result = 0; + INIT_WORK_ONSTACK(&cmd.dmov_cmd.work, msm_dmov_enqueue_cmd_ext_work); + init_completion(&cmd.complete); + + __msm_dmov_enqueue_cmd_ext(id, &cmd.dmov_cmd); + wait_for_completion_io(&cmd.complete); + + if (cmd.result != 0x80000002) { + PRINT_ERROR("dmov_exec_cmdptr(%d): ERROR, result: %x\n", id, cmd.result); + PRINT_ERROR("dmov_exec_cmdptr(%d): flush: %x %x %x %x\n", + id, cmd.err.flush[0], cmd.err.flush[1], cmd.err.flush[2], cmd.err.flush[3]); + return -EIO; + } + PRINT_FLOW("dmov_exec_cmdptr(%d, 
%x) done\n", id, cmdptr); + return 0; +} +EXPORT_SYMBOL(msm_dmov_exec_cmd); + +static void fill_errdata(struct msm_dmov_errdata *errdata, int ch, int adm) +{ + errdata->flush[0] = readl_relaxed(DMOV_REG(DMOV_FLUSH0(ch), adm)); + errdata->flush[1] = readl_relaxed(DMOV_REG(DMOV_FLUSH1(ch), adm)); + errdata->flush[2] = 0; + errdata->flush[3] = readl_relaxed(DMOV_REG(DMOV_FLUSH3(ch), adm)); + errdata->flush[4] = readl_relaxed(DMOV_REG(DMOV_FLUSH4(ch), adm)); + errdata->flush[5] = readl_relaxed(DMOV_REG(DMOV_FLUSH5(ch), adm)); +} + +static irqreturn_t msm_dmov_isr(int irq, void *dev_id) +{ + unsigned int int_status; + unsigned int mask; + unsigned int id; + unsigned int ch; + unsigned long irq_flags; + unsigned int ch_status; + unsigned int ch_result; + unsigned int valid = 0; + struct msm_dmov_cmd *cmd; + int adm = DMOV_IRQ_TO_ADM(irq); + + mutex_lock(&dmov_conf[adm].lock); + /* read and clear isr */ + int_status = readl_relaxed(DMOV_REG(DMOV_ISR, adm)); + PRINT_FLOW("msm_datamover_irq_handler: DMOV_ISR %x\n", int_status); + + spin_lock_irqsave(&dmov_conf[adm].list_lock, irq_flags); + while (int_status) { + mask = int_status & -int_status; + ch = fls(mask) - 1; + id = DMOV_CHAN_ADM_TO_ID(ch, adm); + PRINT_FLOW("msm_datamover_irq_handler %08x %08x id %d\n", int_status, mask, id); + int_status &= ~mask; + ch_status = readl_relaxed(DMOV_REG(DMOV_STATUS(ch), adm)); + if (!(ch_status & DMOV_STATUS_RSLT_VALID)) { + PRINT_FLOW("msm_datamover_irq_handler id %d, " + "result not valid %x\n", id, ch_status); + continue; + } + do { + valid = 1; + ch_result = readl_relaxed(DMOV_REG(DMOV_RSLT(ch), adm)); + if (list_empty(&dmov_conf[adm].active_commands[ch])) { + PRINT_ERROR("msm_datamover_irq_handler id %d, got result " + "with no active command, status %x, result %x\n", + id, ch_status, ch_result); + cmd = NULL; + } else { + cmd = list_entry(dmov_conf[adm]. 
+ active_commands[ch].next, typeof(*cmd), + list); + } + PRINT_FLOW("msm_datamover_irq_handler id %d, status %x, result %x\n", id, ch_status, ch_result); + if (ch_result & DMOV_RSLT_DONE) { + PRINT_FLOW("msm_datamover_irq_handler id %d, status %x\n", + id, ch_status); + PRINT_IO("msm_datamover_irq_handler id %d, got result " + "for %p, result %x\n", id, cmd, ch_result); + if (cmd) { + list_del(&cmd->list); + cmd->complete_func(cmd, ch_result, NULL); + } + } + if (ch_result & DMOV_RSLT_FLUSH) { + struct msm_dmov_errdata errdata; + + fill_errdata(&errdata, ch, adm); + PRINT_FLOW("msm_datamover_irq_handler id %d, status %x\n", id, ch_status); + PRINT_FLOW("msm_datamover_irq_handler id %d, flush, result %x, flush0 %x\n", id, ch_result, errdata.flush[0]); + if (cmd) { + list_del(&cmd->list); + cmd->complete_func(cmd, ch_result, &errdata); + } + } + if (ch_result & DMOV_RSLT_ERROR) { + struct msm_dmov_errdata errdata; + + fill_errdata(&errdata, ch, adm); + + PRINT_ERROR("msm_datamover_irq_handler id %d, status %x\n", id, ch_status); + PRINT_ERROR("msm_datamover_irq_handler id %d, error, result %x, flush0 %x\n", id, ch_result, errdata.flush[0]); + if (cmd) { + list_del(&cmd->list); + cmd->complete_func(cmd, ch_result, &errdata); + } + /* this does not seem to work, once we get an error */ + /* the datamover will no longer accept commands */ + writel_relaxed(0, DMOV_REG(DMOV_FLUSH0(ch), + adm)); + } + rmb(); + ch_status = readl_relaxed(DMOV_REG(DMOV_STATUS(ch), + adm)); + PRINT_FLOW("msm_datamover_irq_handler id %d, status %x\n", id, ch_status); + if (ch_status & DMOV_STATUS_CMD_PTR_RDY) + start_ready_cmd(ch, adm); + } while (ch_status & DMOV_STATUS_RSLT_VALID); + if (list_empty(&dmov_conf[adm].active_commands[ch]) && + list_empty(&dmov_conf[adm].ready_commands[ch])) + dmov_conf[adm].channel_active &= ~(1U << ch); + PRINT_FLOW("msm_datamover_irq_handler id %d, status %x\n", id, ch_status); + } + spin_unlock_irqrestore(&dmov_conf[adm].list_lock, irq_flags); + + if 
(!dmov_conf[adm].channel_active && valid) { + disable_irq_nosync(dmov_conf[adm].irq); + dmov_conf[adm].clk_ctl = CLK_TO_BE_DIS; + schedule_delayed_work(&dmov_conf[adm].work, (HZ/10)); + } + + mutex_unlock(&dmov_conf[adm].lock); + return valid ? IRQ_HANDLED : IRQ_NONE; +} + +static int msm_dmov_suspend_late(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + int adm = (pdev->id >= 0) ? pdev->id : 0; + mutex_lock(&dmov_conf[adm].lock); + if (dmov_conf[adm].clk_ctl == CLK_TO_BE_DIS) { + BUG_ON(dmov_conf[adm].channel_active); + msm_dmov_clk_off(adm); + dmov_conf[adm].clk_ctl = CLK_DIS; + } + mutex_unlock(&dmov_conf[adm].lock); + return 0; +} + +static int msm_dmov_runtime_suspend(struct device *dev) +{ + dev_dbg(dev, "pm_runtime: suspending...\n"); + return 0; +} + +static int msm_dmov_runtime_resume(struct device *dev) +{ + dev_dbg(dev, "pm_runtime: resuming...\n"); + return 0; +} + +static int msm_dmov_runtime_idle(struct device *dev) +{ + dev_dbg(dev, "pm_runtime: idling...\n"); + return 0; +} + +static struct dev_pm_ops msm_dmov_dev_pm_ops = { + .runtime_suspend = msm_dmov_runtime_suspend, + .runtime_resume = msm_dmov_runtime_resume, + .runtime_idle = msm_dmov_runtime_idle, + .suspend = msm_dmov_suspend_late, +}; + +static int msm_dmov_init_clocks(struct platform_device *pdev) +{ + int adm = (pdev->id >= 0) ? 
pdev->id : 0; + int ret; + + dmov_conf[adm].clk = clk_get(&pdev->dev, "core"); + if (IS_ERR(dmov_conf[adm].clk)) { + dmov_conf[adm].clk = NULL; + return -ENOENT; + } + + dmov_conf[adm].pclk = clk_get(&pdev->dev, "iface"); + if (IS_ERR(dmov_conf[adm].pclk)) { + dmov_conf[adm].pclk = NULL; + /* pclk not present on all SoCs, don't bail on failure */ + } + + dmov_conf[adm].ebiclk = clk_get(&pdev->dev, "mem"); + if (IS_ERR(dmov_conf[adm].ebiclk)) { + dmov_conf[adm].ebiclk = NULL; + /* ebiclk not present on all SoCs, don't bail on failure */ + } else { + ret = clk_set_rate(dmov_conf[adm].ebiclk, 27000000); + if (ret) + return -ENOENT; + } + + return 0; +} + +static void config_datamover(int adm) +{ +#ifdef CONFIG_MSM_ADM3 + int i; + for (i = 0; i < MSM_DMOV_CHANNEL_COUNT; i++) { + struct msm_dmov_chan_conf *chan_conf = + dmov_conf[adm].chan_conf; + unsigned conf; + /* Only configure scorpion channels */ + if (chan_conf[i].sd <= 1) { + conf = readl_relaxed(DMOV_REG(DMOV_CONF(i), adm)); + conf &= ~DMOV_CONF_SD(7); + conf |= DMOV_CONF_SD(chan_conf[i].sd); + writel_relaxed(conf | DMOV_CONF_SHADOW_EN, + DMOV_REG(DMOV_CONF(i), adm)); + } + } + for (i = 0; i < MSM_DMOV_CRCI_COUNT; i++) { + struct msm_dmov_crci_conf *crci_conf = + dmov_conf[adm].crci_conf; + + writel_relaxed(DMOV_CRCI_CTL_BLK_SZ(crci_conf[i].blk_size), + DMOV_REG(DMOV_CRCI_CTL(i), adm)); + } +#endif +} + +static int msm_dmov_probe(struct platform_device *pdev) +{ + int adm = (pdev->id >= 0) ? 
pdev->id : 0; + int i; + int ret; + struct resource *mres = + platform_get_resource(pdev, IORESOURCE_MEM, 0); + + printk(KERN_INFO "%s: adm = %d\n", __func__, adm); + + dmov_conf[adm].sd = 1; + dmov_conf[adm].sd_size = 0x800; + + if (!mres || !mres->start) + return -ENXIO; + + dmov_conf[adm].irq = platform_get_irq(pdev, 0); + if (dmov_conf[adm].irq < 0) + return dmov_conf[adm].irq; + + dmov_conf[adm].base = ioremap_nocache(mres->start, resource_size(mres)); + if (!dmov_conf[adm].base) + return -ENOMEM; + + dmov_conf[adm].cmd_wq = alloc_ordered_workqueue("dmov%d_wq", 0, adm); + if (!dmov_conf[adm].cmd_wq) { + PRINT_ERROR("Couldn't allocate ADM%d workqueue.\n", adm); + ret = -ENOMEM; + goto out_map; + } + + ret = msm_dmov_init_clocks(pdev); + if (ret) { + PRINT_ERROR("Requesting ADM%d clocks failed\n", adm); + goto out_wq; + } + + ret = msm_dmov_clk_on(adm); + if (ret) { + PRINT_ERROR("Enabling ADM%d clocks failed\n", adm); + goto out_wq; + } + + ret = request_threaded_irq(dmov_conf[adm].irq, NULL, msm_dmov_isr, + IRQF_ONESHOT, "msmdatamover", NULL); + if (ret) { + PRINT_ERROR("Requesting ADM%d irq %d failed\n", adm, + dmov_conf[adm].irq); + goto out_clk; + } + disable_irq(dmov_conf[adm].irq); + + config_datamover(adm); + for (i = 0; i < MSM_DMOV_CHANNEL_COUNT; i++) { + INIT_LIST_HEAD(&dmov_conf[adm].staged_commands[i]); + INIT_LIST_HEAD(&dmov_conf[adm].ready_commands[i]); + INIT_LIST_HEAD(&dmov_conf[adm].active_commands[i]); + + writel_relaxed(DMOV_RSLT_CONF_IRQ_EN + | DMOV_RSLT_CONF_FORCE_FLUSH_RSLT, + DMOV_REG(DMOV_RSLT_CONF(i), adm)); + } + wmb(); + msm_dmov_clk_off(adm); + + return ret; +out_clk: + msm_dmov_clk_off(adm); +out_wq: + destroy_workqueue(dmov_conf[adm].cmd_wq); +out_map: + iounmap(dmov_conf[adm].base); + + return ret; +} + +static const struct of_device_id adm_of_match[] = { + { .compatible = "qcom,adm", }, + {} +}; +MODULE_DEVICE_TABLE(of, adm_of_match); + +static struct platform_driver msm_dmov_driver = {
+ .probe = msm_dmov_probe, + .driver = { + .name = MODULE_NAME, + .owner = THIS_MODULE, + .of_match_table = adm_of_match, + .pm = &msm_dmov_dev_pm_ops, + }, +}; + +/* static int __init */ +static int __init msm_init_datamover(void) +{ + int ret; + + ret = platform_driver_register(&msm_dmov_driver); + if (ret) + return ret; + return 0; +} +arch_initcall(msm_init_datamover); diff --git a/drivers/dma/qcom_bam_dma.c b/drivers/dma/qcom_bam_dma.c index 7a4bbb0f80a5..777afd2f094d 100644 --- a/drivers/dma/qcom_bam_dma.c +++ b/drivers/dma/qcom_bam_dma.c @@ -79,35 +79,97 @@ struct bam_async_desc { struct bam_desc_hw desc[0]; }; -#define BAM_CTRL 0x0000 -#define BAM_REVISION 0x0004 -#define BAM_SW_REVISION 0x0080 -#define BAM_NUM_PIPES 0x003C -#define BAM_TIMER 0x0040 -#define BAM_TIMER_CTRL 0x0044 -#define BAM_DESC_CNT_TRSHLD 0x0008 -#define BAM_IRQ_SRCS 0x000C -#define BAM_IRQ_SRCS_MSK 0x0010 -#define BAM_IRQ_SRCS_UNMASKED 0x0030 -#define BAM_IRQ_STTS 0x0014 -#define BAM_IRQ_CLR 0x0018 -#define BAM_IRQ_EN 0x001C -#define BAM_CNFG_BITS 0x007C -#define BAM_IRQ_SRCS_EE(ee) (0x0800 + ((ee) * 0x80)) -#define BAM_IRQ_SRCS_MSK_EE(ee) (0x0804 + ((ee) * 0x80)) -#define BAM_P_CTRL(pipe) (0x1000 + ((pipe) * 0x1000)) -#define BAM_P_RST(pipe) (0x1004 + ((pipe) * 0x1000)) -#define BAM_P_HALT(pipe) (0x1008 + ((pipe) * 0x1000)) -#define BAM_P_IRQ_STTS(pipe) (0x1010 + ((pipe) * 0x1000)) -#define BAM_P_IRQ_CLR(pipe) (0x1014 + ((pipe) * 0x1000)) -#define BAM_P_IRQ_EN(pipe) (0x1018 + ((pipe) * 0x1000)) -#define BAM_P_EVNT_DEST_ADDR(pipe) (0x182C + ((pipe) * 0x1000)) -#define BAM_P_EVNT_REG(pipe) (0x1818 + ((pipe) * 0x1000)) -#define BAM_P_SW_OFSTS(pipe) (0x1800 + ((pipe) * 0x1000)) -#define BAM_P_DATA_FIFO_ADDR(pipe) (0x1824 + ((pipe) * 0x1000)) -#define BAM_P_DESC_FIFO_ADDR(pipe) (0x181C + ((pipe) * 0x1000)) -#define BAM_P_EVNT_TRSHLD(pipe) (0x1828 + ((pipe) * 0x1000)) -#define BAM_P_FIFO_SIZES(pipe) (0x1820 + ((pipe) * 0x1000)) +enum bam_reg { + BAM_CTRL, + BAM_REVISION, + BAM_NUM_PIPES, + 
BAM_DESC_CNT_TRSHLD, + BAM_IRQ_SRCS, + BAM_IRQ_SRCS_MSK, + BAM_IRQ_SRCS_UNMASKED, + BAM_IRQ_STTS, + BAM_IRQ_CLR, + BAM_IRQ_EN, + BAM_CNFG_BITS, + BAM_IRQ_SRCS_EE, + BAM_IRQ_SRCS_MSK_EE, + BAM_P_CTRL, + BAM_P_RST, + BAM_P_HALT, + BAM_P_IRQ_STTS, + BAM_P_IRQ_CLR, + BAM_P_IRQ_EN, + BAM_P_EVNT_DEST_ADDR, + BAM_P_EVNT_REG, + BAM_P_SW_OFSTS, + BAM_P_DATA_FIFO_ADDR, + BAM_P_DESC_FIFO_ADDR, + BAM_P_EVNT_GEN_TRSHLD, + BAM_P_FIFO_SIZES, +}; + +struct reg_offset_data { + u32 base_offset; + unsigned int pipe_mult, evnt_mult, ee_mult; +}; + +static const struct reg_offset_data bam_v1_3_reg_info[] = { + [BAM_CTRL] = { 0x0F80, 0x00, 0x00, 0x00 }, + [BAM_REVISION] = { 0x0F84, 0x00, 0x00, 0x00 }, + [BAM_NUM_PIPES] = { 0x0FBC, 0x00, 0x00, 0x00 }, + [BAM_DESC_CNT_TRSHLD] = { 0x0F88, 0x00, 0x00, 0x00 }, + [BAM_IRQ_SRCS] = { 0x0F8C, 0x00, 0x00, 0x00 }, + [BAM_IRQ_SRCS_MSK] = { 0x0F90, 0x00, 0x00, 0x00 }, + [BAM_IRQ_SRCS_UNMASKED] = { 0x0FB0, 0x00, 0x00, 0x00 }, + [BAM_IRQ_STTS] = { 0x0F94, 0x00, 0x00, 0x00 }, + [BAM_IRQ_CLR] = { 0x0F98, 0x00, 0x00, 0x00 }, + [BAM_IRQ_EN] = { 0x0F9C, 0x00, 0x00, 0x00 }, + [BAM_CNFG_BITS] = { 0x0FFC, 0x00, 0x00, 0x00 }, + [BAM_IRQ_SRCS_EE] = { 0x1800, 0x00, 0x00, 0x80 }, + [BAM_IRQ_SRCS_MSK_EE] = { 0x1804, 0x00, 0x00, 0x80 }, + [BAM_P_CTRL] = { 0x0000, 0x80, 0x00, 0x00 }, + [BAM_P_RST] = { 0x0004, 0x80, 0x00, 0x00 }, + [BAM_P_HALT] = { 0x0008, 0x80, 0x00, 0x00 }, + [BAM_P_IRQ_STTS] = { 0x0010, 0x80, 0x00, 0x00 }, + [BAM_P_IRQ_CLR] = { 0x0014, 0x80, 0x00, 0x00 }, + [BAM_P_IRQ_EN] = { 0x0018, 0x80, 0x00, 0x00 }, + [BAM_P_EVNT_DEST_ADDR] = { 0x102C, 0x00, 0x40, 0x00 }, + [BAM_P_EVNT_REG] = { 0x1018, 0x00, 0x40, 0x00 }, + [BAM_P_SW_OFSTS] = { 0x1000, 0x00, 0x40, 0x00 }, + [BAM_P_DATA_FIFO_ADDR] = { 0x1024, 0x00, 0x40, 0x00 }, + [BAM_P_DESC_FIFO_ADDR] = { 0x101C, 0x00, 0x40, 0x00 }, + [BAM_P_EVNT_GEN_TRSHLD] = { 0x1028, 0x00, 0x40, 0x00 }, + [BAM_P_FIFO_SIZES] = { 0x1020, 0x00, 0x40, 0x00 }, +}; + +static const struct reg_offset_data bam_v1_4_reg_info[] = { + 
[BAM_CTRL] = { 0x0000, 0x00, 0x00, 0x00 }, + [BAM_REVISION] = { 0x0004, 0x00, 0x00, 0x00 }, + [BAM_NUM_PIPES] = { 0x003C, 0x00, 0x00, 0x00 }, + [BAM_DESC_CNT_TRSHLD] = { 0x0008, 0x00, 0x00, 0x00 }, + [BAM_IRQ_SRCS] = { 0x000C, 0x00, 0x00, 0x00 }, + [BAM_IRQ_SRCS_MSK] = { 0x0010, 0x00, 0x00, 0x00 }, + [BAM_IRQ_SRCS_UNMASKED] = { 0x0030, 0x00, 0x00, 0x00 }, + [BAM_IRQ_STTS] = { 0x0014, 0x00, 0x00, 0x00 }, + [BAM_IRQ_CLR] = { 0x0018, 0x00, 0x00, 0x00 }, + [BAM_IRQ_EN] = { 0x001C, 0x00, 0x00, 0x00 }, + [BAM_CNFG_BITS] = { 0x007C, 0x00, 0x00, 0x00 }, + [BAM_IRQ_SRCS_EE] = { 0x0800, 0x00, 0x00, 0x80 }, + [BAM_IRQ_SRCS_MSK_EE] = { 0x0804, 0x00, 0x00, 0x80 }, + [BAM_P_CTRL] = { 0x1000, 0x1000, 0x00, 0x00 }, + [BAM_P_RST] = { 0x1004, 0x1000, 0x00, 0x00 }, + [BAM_P_HALT] = { 0x1008, 0x1000, 0x00, 0x00 }, + [BAM_P_IRQ_STTS] = { 0x1010, 0x1000, 0x00, 0x00 }, + [BAM_P_IRQ_CLR] = { 0x1014, 0x1000, 0x00, 0x00 }, + [BAM_P_IRQ_EN] = { 0x1018, 0x1000, 0x00, 0x00 }, + [BAM_P_EVNT_DEST_ADDR] = { 0x102C, 0x00, 0x1000, 0x00 }, + [BAM_P_EVNT_REG] = { 0x1018, 0x00, 0x1000, 0x00 }, + [BAM_P_SW_OFSTS] = { 0x1000, 0x00, 0x1000, 0x00 }, + [BAM_P_DATA_FIFO_ADDR] = { 0x1824, 0x00, 0x1000, 0x00 }, + [BAM_P_DESC_FIFO_ADDR] = { 0x181C, 0x00, 0x1000, 0x00 }, + [BAM_P_EVNT_GEN_TRSHLD] = { 0x1828, 0x00, 0x1000, 0x00 }, + [BAM_P_FIFO_SIZES] = { 0x1820, 0x00, 0x1000, 0x00 }, +}; /* BAM CTRL */ #define BAM_SW_RST BIT(0) @@ -297,6 +359,8 @@ struct bam_device { /* execution environment ID, from DT */ u32 ee; + const struct reg_offset_data *layout; + struct clk *bamclk; int irq; @@ -305,6 +369,23 @@ struct bam_device { }; /** + * bam_addr - returns BAM register address + * @bdev: bam device + * @pipe: pipe instance (ignored when register doesn't have multiple instances) + * @reg: register enum + */ +static inline void __iomem *bam_addr(struct bam_device *bdev, u32 pipe, + enum bam_reg reg) +{ + const struct reg_offset_data r = bdev->layout[reg]; + + return bdev->regs + r.base_offset + + r.pipe_mult * pipe 
+ + r.evnt_mult * pipe + + r.ee_mult * bdev->ee; +} + +/** * bam_reset_channel - Reset individual BAM DMA channel * @bchan: bam channel * @@ -317,8 +398,8 @@ static void bam_reset_channel(struct bam_chan *bchan) lockdep_assert_held(&bchan->vc.lock); /* reset channel */ - writel_relaxed(1, bdev->regs + BAM_P_RST(bchan->id)); - writel_relaxed(0, bdev->regs + BAM_P_RST(bchan->id)); + writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_RST)); + writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_RST)); /* don't allow cpu to reorder BAM register accesses done after this */ wmb(); @@ -347,17 +428,18 @@ static void bam_chan_init_hw(struct bam_chan *bchan, * because we allocated 1 more descriptor (8 bytes) than we can use */ writel_relaxed(ALIGN(bchan->fifo_phys, sizeof(struct bam_desc_hw)), - bdev->regs + BAM_P_DESC_FIFO_ADDR(bchan->id)); - writel_relaxed(BAM_DESC_FIFO_SIZE, bdev->regs + - BAM_P_FIFO_SIZES(bchan->id)); + bam_addr(bdev, bchan->id, BAM_P_DESC_FIFO_ADDR)); + writel_relaxed(BAM_DESC_FIFO_SIZE, + bam_addr(bdev, bchan->id, BAM_P_FIFO_SIZES)); /* enable the per pipe interrupts, enable EOT, ERR, and INT irqs */ - writel_relaxed(P_DEFAULT_IRQS_EN, bdev->regs + BAM_P_IRQ_EN(bchan->id)); + writel_relaxed(P_DEFAULT_IRQS_EN, + bam_addr(bdev, bchan->id, BAM_P_IRQ_EN)); /* unmask the specific pipe and EE combo */ - val = readl_relaxed(bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee)); + val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE)); val |= BIT(bchan->id); - writel_relaxed(val, bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee)); + writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE)); /* don't allow cpu to reorder the channel enable done below */ wmb(); @@ -367,7 +449,7 @@ static void bam_chan_init_hw(struct bam_chan *bchan, if (dir == DMA_DEV_TO_MEM) val |= P_DIRECTION; - writel_relaxed(val, bdev->regs + BAM_P_CTRL(bchan->id)); + writel_relaxed(val, bam_addr(bdev, bchan->id, BAM_P_CTRL)); bchan->initialized = 1; @@ -432,12 +514,12 @@ static void bam_free_chan(struct 
dma_chan *chan) bchan->fifo_virt = NULL; /* mask irq for pipe/channel */ - val = readl_relaxed(bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee)); + val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE)); val &= ~BIT(bchan->id); - writel_relaxed(val, bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee)); + writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE)); /* disable irq */ - writel_relaxed(0, bdev->regs + BAM_P_IRQ_EN(bchan->id)); + writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_IRQ_EN)); } /** @@ -583,14 +665,14 @@ static int bam_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd, switch (cmd) { case DMA_PAUSE: spin_lock_irqsave(&bchan->vc.lock, flag); - writel_relaxed(1, bdev->regs + BAM_P_HALT(bchan->id)); + writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT)); bchan->paused = 1; spin_unlock_irqrestore(&bchan->vc.lock, flag); break; case DMA_RESUME: spin_lock_irqsave(&bchan->vc.lock, flag); - writel_relaxed(0, bdev->regs + BAM_P_HALT(bchan->id)); + writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT)); bchan->paused = 0; spin_unlock_irqrestore(&bchan->vc.lock, flag); break; @@ -626,7 +708,7 @@ static u32 process_channel_irqs(struct bam_device *bdev) unsigned long flags; struct bam_async_desc *async_desc; - srcs = readl_relaxed(bdev->regs + BAM_IRQ_SRCS_EE(bdev->ee)); + srcs = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_EE)); /* return early if no pipe/channel interrupts are present */ if (!(srcs & P_IRQ)) @@ -639,11 +721,9 @@ static u32 process_channel_irqs(struct bam_device *bdev) continue; /* clear pipe irq */ - pipe_stts = readl_relaxed(bdev->regs + - BAM_P_IRQ_STTS(i)); + pipe_stts = readl_relaxed(bam_addr(bdev, i, BAM_P_IRQ_STTS)); - writel_relaxed(pipe_stts, bdev->regs + - BAM_P_IRQ_CLR(i)); + writel_relaxed(pipe_stts, bam_addr(bdev, i, BAM_P_IRQ_CLR)); spin_lock_irqsave(&bchan->vc.lock, flags); async_desc = bchan->curr_txd; @@ -694,12 +774,12 @@ static irqreturn_t bam_dma_irq(int irq, void *data) tasklet_schedule(&bdev->task); if (srcs & 
BAM_IRQ)
-	clr_mask = readl_relaxed(bdev->regs + BAM_IRQ_STTS);
+	clr_mask = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_STTS));
 	/* don't allow reorder of the various accesses to the BAM registers */
 	mb();
-	writel_relaxed(clr_mask, bdev->regs + BAM_IRQ_CLR);
+	writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
 	return IRQ_HANDLED;
 }
@@ -763,7 +843,7 @@ static void bam_apply_new_config(struct bam_chan *bchan,
 	else
 		maxburst = bchan->slave.dst_maxburst;
-	writel_relaxed(maxburst, bdev->regs + BAM_DESC_CNT_TRSHLD);
+	writel_relaxed(maxburst, bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
 	bchan->reconfigure = 0;
 }
@@ -830,7 +910,7 @@ static void bam_start_dma(struct bam_chan *bchan)
 	/* ensure descriptor writes and dma start not reordered */
 	wmb();
 	writel_relaxed(bchan->tail * sizeof(struct bam_desc_hw),
-			bdev->regs + BAM_P_EVNT_REG(bchan->id));
+			bam_addr(bdev, bchan->id, BAM_P_EVNT_REG));
 }
@@ -918,43 +998,44 @@ static int bam_init(struct bam_device *bdev)
 	u32 val;
 	/* read revision and configuration information */
-	val = readl_relaxed(bdev->regs + BAM_REVISION) >> NUM_EES_SHIFT;
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_REVISION)) >> NUM_EES_SHIFT;
 	val &= NUM_EES_MASK;
 	/* check that configured EE is within range */
 	if (bdev->ee >= val)
 		return -EINVAL;
-	val = readl_relaxed(bdev->regs + BAM_NUM_PIPES);
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
 	bdev->num_channels = val & BAM_NUM_PIPES_MASK;
 	/* s/w reset bam */
 	/* after reset all pipes are disabled and idle */
-	val = readl_relaxed(bdev->regs + BAM_CTRL);
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_CTRL));
 	val |= BAM_SW_RST;
-	writel_relaxed(val, bdev->regs + BAM_CTRL);
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
 	val &= ~BAM_SW_RST;
-	writel_relaxed(val, bdev->regs + BAM_CTRL);
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
 	/* make sure previous stores are visible before enabling BAM */
 	wmb();
 	/* enable bam */
 	val |= BAM_EN;
-	writel_relaxed(val, bdev->regs + BAM_CTRL);
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
 	/* set descriptor threshhold, start with 4 bytes */
-	writel_relaxed(DEFAULT_CNT_THRSHLD, bdev->regs + BAM_DESC_CNT_TRSHLD);
+	writel_relaxed(DEFAULT_CNT_THRSHLD,
+			bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
 	/* Enable default set of h/w workarounds, ie all except BAM_FULL_PIPE */
-	writel_relaxed(BAM_CNFG_BITS_DEFAULT, bdev->regs + BAM_CNFG_BITS);
+	writel_relaxed(BAM_CNFG_BITS_DEFAULT, bam_addr(bdev, 0, BAM_CNFG_BITS));
 	/* enable irqs for errors */
 	writel_relaxed(BAM_ERROR_EN | BAM_HRESP_ERR_EN,
-			bdev->regs + BAM_IRQ_EN);
+			bam_addr(bdev, 0, BAM_IRQ_EN));
 	/* unmask global bam interrupt */
-	writel_relaxed(BAM_IRQ_MSK, bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee));
+	writel_relaxed(BAM_IRQ_MSK, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
 	return 0;
 }
@@ -969,9 +1050,18 @@ static void bam_channel_init(struct bam_device *bdev, struct bam_chan *bchan,
 	bchan->vc.desc_free = bam_dma_free_desc;
 }
+static const struct of_device_id bam_of_match[] = {
+	{ .compatible = "qcom,bam-v1.3.0", .data = &bam_v1_3_reg_info },
+	{ .compatible = "qcom,bam-v1.4.0", .data = &bam_v1_4_reg_info },
+	{}
+};
+
+MODULE_DEVICE_TABLE(of, bam_of_match);
+
 static int bam_dma_probe(struct platform_device *pdev)
 {
 	struct bam_device *bdev;
+	const struct of_device_id *match;
 	struct resource *iores;
 	int ret, i;
@@ -981,6 +1071,14 @@ static int bam_dma_probe(struct platform_device *pdev)
 	bdev->dev = &pdev->dev;
+	match = of_match_node(bam_of_match, pdev->dev.of_node);
+	if (!match) {
+		dev_err(&pdev->dev, "Unsupported BAM module\n");
+		return -ENODEV;
+	}
+
+	bdev->layout = match->data;
+
 	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	bdev->regs = devm_ioremap_resource(&pdev->dev, iores);
 	if (IS_ERR(bdev->regs))
@@ -1084,7 +1182,7 @@ static int bam_dma_remove(struct platform_device *pdev)
 	dma_async_device_unregister(&bdev->common);
 	/* mask all interrupts for this execution environment */
-	writel_relaxed(0, bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee));
+	writel_relaxed(0, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
 	devm_free_irq(bdev->dev, bdev->irq, bdev);
@@ -1104,12 +1202,6 @@ static int bam_dma_remove(struct platform_device *pdev)
 	return 0;
 }
-static const struct of_device_id bam_of_match[] = {
-	{ .compatible = "qcom,bam-v1.4.0", },
-	{}
-};
-MODULE_DEVICE_TABLE(of, bam_of_match);
-
 static struct platform_driver bam_dma_driver = {
 	.probe = bam_dma_probe,
 	.remove = bam_dma_remove,
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 6afa29167fee..492a1f4db265 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -284,10 +284,54 @@ static uint32_t ring_freewords(struct msm_gpu *gpu)
 	return (rptr + (size - 1) - wptr) % size;
 }
+static void ring_wait_contiguous_freewords(struct msm_gpu *gpu,
+		uint32_t ndwords)
+{
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	uint32_t size = gpu->rb->size / 4;
+	uint32_t wptr;
+	uint32_t rptr;
+
+	/* Wait for free space and then check if they are contiguous */
+	if (spin_until(ring_freewords(gpu) >= ndwords)) {
+		DRM_ERROR("%s: timeout waiting for ringbuffer space\n",
+				gpu->name);
+		return;
+	}
+
+	wptr = get_wptr(gpu->rb);
+	rptr = adreno_gpu->memptrs->rptr;
+
+	/* We have enough space in the ring for ndwords. Three conditions
+	 * indicate we have contiguous space:
+	 * (1) wptr can be equal to size, ring has wrapped and wptr is 0
+	 *     (see OUT_RING), meaning we have enough space.
+	 * (2) We have enough space in the ring, wptr < rptr indicates
+	 *     enough contiguous space.
+	 * (3) wptr + ndwords < size - 1 implies enough space in the ring.
+	 */
+	if ((wptr == size) || (wptr < rptr) || (wptr + ndwords < size - 1))
+		return;
+
+	/* Fill the end of ring with no-ops */
+	OUT_RING(gpu->rb, CP_TYPE3_PKT | (((size - wptr - 1) - 1) << 16) |
+			((CP_NOP & 0xFF) << 8));
+	gpu->rb->cur = gpu->rb->start;
+
+	/* We have reset cur pointer to start.
+	 * If ring_freewords returns greater than ndwords, then we have
+	 * contiguous space.
+	 */
+	if (spin_until(ring_freewords(gpu) >= ndwords)) {
+		DRM_ERROR("%s: timeout waiting for ringbuffer space\n",
+				gpu->name);
+		return;
+	}
+}
+
 void adreno_wait_ring(struct msm_gpu *gpu, uint32_t ndwords)
 {
-	if (spin_until(ring_freewords(gpu) >= ndwords))
-		DRM_ERROR("%s: timeout waiting for ringbuffer space\n", gpu->name);
+	ring_wait_contiguous_freewords(gpu, ndwords);
 }
 static const char *iommu_ports[] = {
diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
index 9d00dcba6959..e5af4b45d0b6 100644
--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
+++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
@@ -251,11 +251,17 @@ fail:
 #include <linux/of_gpio.h>
+// this is still needed on downstream kernel for now just for
+// hdmi_msm_audio_info_setup() / hdmi_msm_audio_sample_rate_reset..
+// TODO sane interface between audio and display..
+struct platform_device *hdmi_pdev_hack;
+
 static void set_hdmi_pdev(struct drm_device *dev, struct platform_device *pdev)
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	priv->hdmi_pdev = pdev;
+	hdmi_pdev_hack = pdev;
 }
 #ifdef CONFIG_OF
@@ -395,6 +401,34 @@ static int hdmi_dev_remove(struct platform_device *pdev)
 	return 0;
 }
+static struct hdmi *find_hdmi(void)
+{
+	return hdmi_pdev_hack ?
+			platform_get_drvdata(hdmi_pdev_hack) : NULL;
+}
+
+int hdmi_msm_audio_info_setup(bool enabled, u32 num_of_channels,
+		u32 channel_allocation, u32 level_shift, bool down_mix)
+{
+	struct hdmi *hdmi = find_hdmi();
+	printk("DEBUG::: %s \n", __func__);
+	if (!hdmi)
+		return -EINVAL;
+	printk("DEBUG:::1 %s \n", __func__);
+	return hdmi_audio_info_setup(hdmi, enabled, num_of_channels,
+			channel_allocation, level_shift, down_mix);
+}
+EXPORT_SYMBOL(hdmi_msm_audio_info_setup);
+
+void hdmi_msm_audio_sample_rate_reset(int rate)
+{
+	struct hdmi *hdmi = find_hdmi();
+	if (!hdmi)
+		return;
+	printk("DEBUG::: %s \n", __func__);
+	hdmi_audio_set_sample_rate(hdmi, rate);
+}
+EXPORT_SYMBOL(hdmi_msm_audio_sample_rate_reset);
+
 static const struct of_device_id dt_match[] = {
 	{ .compatible = "qcom,hdmi-tx-8074" },
 	{ .compatible = "qcom,hdmi-tx-8960" },
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index b67ef5985125..e9726a65d7c2 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -768,6 +768,25 @@ static int msm_ioctl_wait_fence(struct drm_device *dev, void *data,
 			&TS(args->timeout));
 }
+static int msm_ioctl_get_last_fence(struct drm_device *dev, void *data,
+		struct drm_file *file)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	struct drm_msm_get_last_fence *args = data;
+	struct msm_gpu *gpu = priv->gpu;
+
+	if (!gpu)
+		return -ENXIO;
+
+	if (args->pad) {
+		DRM_ERROR("invalid pad: %08x\n", args->pad);
+		return -EINVAL;
+	}
+
+	args->fence = gpu->funcs->last_fence(gpu);
+
+	return 0;
+}
+
 static const struct drm_ioctl_desc msm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(MSM_GET_PARAM, msm_ioctl_get_param, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(MSM_GEM_NEW, msm_ioctl_gem_new, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
@@ -776,6 +795,7 @@ static const struct drm_ioctl_desc msm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_FINI, msm_ioctl_gem_cpu_fini, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(MSM_GEM_SUBMIT, msm_ioctl_gem_submit, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(MSM_WAIT_FENCE, msm_ioctl_wait_fence, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(MSM_GET_LAST_FENCE, msm_ioctl_get_last_fence, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
 };
 static const struct vm_operations_struct vm_ops = {
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index dd5112265cc9..fa495262c819 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -27,6 +27,17 @@ config FSL_PAMU
 	  PAMU can authorize memory access, remap the memory address, and remap I/O
 	  transaction types.
+config QCOM_IOMMU_V0
+	bool "Qualcomm IOMMU v0 Support"
+	depends on ARCH_QCOM
+	select IOMMU_API
+	help
+	  Support for the IOMMUs found on certain Qualcomm SOCs.
+	  These IOMMUs allow virtualization of the address space used by most
+	  cores within the multimedia subsystem.
+
+	  If unsure, say N here.
+
 # MSM IOMMU support
 config MSM_IOMMU
 	bool "MSM IOMMU Support"
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 16edef74b8ee..35acc41592ed 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
 obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
 obj-$(CONFIG_OF_IOMMU) += of_iommu.o
 obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o msm_iommu_dev.o
+obj-$(CONFIG_QCOM_IOMMU_V0) += qcom_iommu_v0.o
 obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o
 obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
 obj-$(CONFIG_ARM_SMMU) += arm-smmu.o
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index 6e3dcc289d59..491c3d7cde65 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -423,12 +423,12 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long va,
 		int i = 0;
 		for (i = 0; i < 16; i++)
 			*(fl_pte+i) = (pa & 0xFF000000) | FL_SUPERSECTION |
-				  FL_AP_READ | FL_AP_WRITE | FL_TYPE_SECT |
+				  FL_AP1 | FL_AP0 | FL_TYPE_SECT |
 				  FL_SHARED | FL_NG | pgprot;
 	}
 	if (len == SZ_1M)
-		*fl_pte = (pa & 0xFFF00000) | FL_AP_READ | FL_AP_WRITE | FL_NG |
+		*fl_pte = (pa & 0xFFF00000) | FL_AP1 | FL_AP0 | FL_NG |
 			    FL_TYPE_SECT | FL_SHARED | pgprot;
 	/* Need a 2nd level table */
diff --git a/drivers/iommu/msm_iommu_hw-8xxx.h b/drivers/iommu/msm_iommu_hw-8xxx.h
index fc160101dead..84ba573927d1 100644
--- a/drivers/iommu/msm_iommu_hw-8xxx.h
+++ b/drivers/iommu/msm_iommu_hw-8xxx.h
@@ -61,8 +61,9 @@ do { \
 #define FL_TYPE_TABLE	(1 << 0)
 #define FL_TYPE_SECT	(2 << 0)
 #define FL_SUPERSECTION	(1 << 18)
-#define FL_AP_WRITE	(1 << 10)
-#define FL_AP_READ	(1 << 11)
+#define FL_AP0		(1 << 10)
+#define FL_AP1		(1 << 11)
+#define FL_AP2		(1 << 15)
 #define FL_SHARED	(1 << 16)
 #define FL_BUFFERABLE	(1 << 2)
 #define FL_CACHEABLE	(1 << 3)
@@ -77,6 +78,7 @@ do { \
 #define SL_TYPE_SMALL	(2 << 0)
 #define SL_AP0	(1 << 4)
 #define SL_AP1	(2 << 4)
+#define SL_AP2	(1 << 9)
 #define SL_SHARED	(1 << 10)
 #define SL_BUFFERABLE	(1 << 2)
 #define SL_CACHEABLE	(1 << 3)
diff --git a/drivers/iommu/qcom_iommu_v0.c b/drivers/iommu/qcom_iommu_v0.c
new file mode 100644
index 000000000000..190f41e7f7c6
--- /dev/null
+++ b/drivers/iommu/qcom_iommu_v0.c
@@ -0,0 +1,1223 @@
+/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
+ * Copyright (C) 2014 Red Hat
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA.
+ */
+
+/* NOTE: originally based on msm_iommu non-DT driver for same hw
+ * but as the structure of the driver changes considerably for DT
+ * it seemed easier to not try to support old platforms with the
+ * same driver.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/of.h>
+#include <linux/errno.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/iommu.h>
+#include <linux/clk.h>
+
+#include <asm/cacheflush.h>
+#include <asm/sizes.h>
+
+#include "msm_iommu_hw-8xxx.h"
+#include "qcom_iommu_v0.h"
+
+#define MRC(reg, processor, op1, crn, crm, op2)				\
+__asm__ __volatile__ (							\
+"	mrc	" #processor "," #op1 ", %0," #crn "," #crm "," #op2 "\n" \
+: "=r" (reg))
+
+#define RCP15_PRRR(reg)	MRC(reg, p15, 0, c10, c2, 0)
+#define RCP15_NMRR(reg)	MRC(reg, p15, 0, c10, c2, 1)
+
+/* bitmap of the page sizes currently supported */
+#define QCOM_IOMMU_PGSIZES	(SZ_4K | SZ_64K | SZ_1M | SZ_16M)
+
+static int qcom_iommu_tex_class[4];
+
+// TODO any good reason for global lock vs per-iommu lock?
+static DEFINE_MUTEX(qcom_iommu_lock);
+static LIST_HEAD(qcom_iommu_devices);
+
+/* Note that a single iommu_domain, for devices sitting behind
+ * more than one IOMMU (ie. one per AXI interface), will have more
+ * than one iommu in the iommu_list.  But all are programmed to
+ * point at the same pagetables so from client device perspective
+ * they act as a single IOMMU.
+ */
+struct qcom_domain_priv {
+	unsigned long *pgtable;
+	struct iommu_domain *domain;
+	struct list_head iommu_list;	/* list of attached 'struct qcom_iommu' */
+};
+
+static int __enable_clocks(struct qcom_iommu *iommu)
+{
+	int ret;
+
+	ret = clk_prepare_enable(iommu->pclk);
+	if (ret)
+		goto fail;
+
+	if (iommu->clk) {
+		ret = clk_prepare_enable(iommu->clk);
+		if (ret)
+			clk_disable_unprepare(iommu->pclk);
+	}
+fail:
+	return ret;
+}
+
+static void __disable_clocks(struct qcom_iommu *iommu)
+{
+	if (iommu->clk)
+		clk_disable_unprepare(iommu->clk);
+	clk_disable_unprepare(iommu->pclk);
+}
+
+static void __flush_range(struct qcom_iommu *iommu,
+		unsigned long *start, unsigned long *end)
+{
+	dmac_flush_range(start, end);
+}
+
+static int __flush_iotlb_va(struct qcom_domain_priv *priv, unsigned int va)
+{
+	struct qcom_iommu *iommu;
+	int ret = 0;
+
+	list_for_each_entry(iommu, &priv->iommu_list, dom_node) {
+		struct qcom_iommu_ctx *iommu_ctx;
+		list_for_each_entry(iommu_ctx, &iommu->ctx_list, node) {
+			int ctx = iommu_ctx->num;
+			uint32_t asid;
+
+			ret = __enable_clocks(iommu);
+			if (ret)
+				goto fail;
+
+			asid = GET_CONTEXTIDR_ASID(iommu->base, ctx);
+
+			SET_TLBIVA(iommu->base, ctx, asid | (va & TLBIVA_VA));
+			mb();
+
+			__disable_clocks(iommu);
+		}
+	}
+
+fail:
+	return ret;
+}
+
+static int __flush_iotlb(struct qcom_domain_priv *priv)
+{
+	struct qcom_iommu *iommu;
+	int ret = 0;
+
+#ifndef CONFIG_IOMMU_PGTABLES_L2
+	unsigned long *fl_table = priv->pgtable;
+	int i;
+
+	list_for_each_entry(iommu, &priv->iommu_list, dom_node) {
+
+		__flush_range(iommu, fl_table, fl_table + SZ_16K);
+
+		for (i = 0; i < NUM_FL_PTE; i++) {
+			if ((fl_table[i] & 0x03) == FL_TYPE_TABLE) {
+				void *sl_table = __va(fl_table[i] &
+						FL_BASE_MASK);
+				__flush_range(iommu, sl_table, sl_table + SZ_4K);
+			}
+		}
+
+		/*
+		 * Only need to flush once, all iommu's attached
+		 * to the domain use common set of pagetables:
+		 */
+		break;
+	}
+#endif
+
+	list_for_each_entry(iommu, &priv->iommu_list, dom_node) {
+		struct qcom_iommu_ctx *iommu_ctx;
+
+		ret = __enable_clocks(iommu);
+		if (ret)
+			goto fail;
+
+		list_for_each_entry(iommu_ctx, &iommu->ctx_list, node) {
+			int ctx = iommu_ctx->num;
+			uint32_t asid;
+
+			asid = GET_CONTEXTIDR_ASID(iommu->base, ctx);
+
+			SET_TLBIASID(iommu->base, ctx, asid);
+			mb();
+		}
+		__disable_clocks(iommu);
+	}
+
+fail:
+	return ret;
+}
+
+static void __reset_context(struct qcom_iommu *iommu, int ctx)
+{
+	void __iomem *base = iommu->base;
+
+	SET_BPRCOSH(base, ctx, 0);
+	SET_BPRCISH(base, ctx, 0);
+	SET_BPRCNSH(base, ctx, 0);
+	SET_BPSHCFG(base, ctx, 0);
+	SET_BPMTCFG(base, ctx, 0);
+	SET_ACTLR(base, ctx, 0);
+	SET_SCTLR(base, ctx, 0);
+	SET_FSRRESTORE(base, ctx, 0);
+	SET_TTBR0(base, ctx, 0);
+	SET_TTBR1(base, ctx, 0);
+	SET_TTBCR(base, ctx, 0);
+	SET_BFBCR(base, ctx, 0);
+	SET_PAR(base, ctx, 0);
+	SET_FAR(base, ctx, 0);
+	SET_TLBFLPTER(base, ctx, 0);
+	SET_TLBSLPTER(base, ctx, 0);
+	SET_TLBLKCR(base, ctx, 0);
+	SET_PRRR(base, ctx, 0);
+	SET_NMRR(base, ctx, 0);
+	SET_RESUME(base, ctx, 1);
+	mb();
+}
+
+static void __reset_iommu(struct qcom_iommu *iommu)
+{
+	void __iomem *base = iommu->base;
+	int ctx;
+
+	SET_RPUE(base, 0);
+	SET_RPUEIE(base, 0);
+	SET_ESRRESTORE(base, 0);
+	SET_TBE(base, 0);
+	SET_CR(base, 0);
+	SET_SPDMBE(base, 0);
+	SET_TESTBUSCR(base, 0);
+	SET_TLBRSW(base, 0);
+	SET_GLOBAL_TLBIALL(base, 0);
+	SET_RPU_ACR(base, 0);
+	SET_TLBLKCRWE(base, 1);
+
+	for (ctx = 0; ctx < iommu->ncb; ctx++) {
+		SET_BPRCOSH(base, ctx, 0);
+		SET_BPRCISH(base, ctx, 0);
+		SET_BPRCNSH(base, ctx, 0);
+		SET_BPSHCFG(base, ctx, 0);
+		SET_BPMTCFG(base, ctx, 0);
+		SET_ACTLR(base, ctx, 0);
+		SET_SCTLR(base, ctx, 0);
+		SET_FSRRESTORE(base, ctx, 0);
+		SET_TTBR0(base, ctx, 0);
+		SET_TTBR1(base, ctx, 0);
+		SET_TTBCR(base, ctx, 0);
+		SET_BFBCR(base, ctx, 0);
+		SET_PAR(base, ctx, 0);
+		SET_FAR(base, ctx, 0);
+		SET_TLBFLPTER(base, ctx, 0);
+		SET_TLBSLPTER(base, ctx, 0);
+		SET_TLBLKCR(base, ctx, 0);
+		SET_CTX_TLBIALL(base, ctx, 0);
+		SET_TLBIVA(base, ctx, 0);
+		SET_PRRR(base, ctx, 0);
+		SET_NMRR(base, ctx, 0);
+		SET_CONTEXTIDR(base, ctx, 0);
+	}
+	mb();
+}
+
+
+static void __program_context(struct qcom_domain_priv *priv,
+		struct qcom_iommu *iommu, int ctx)
+{
+	void __iomem *base = iommu->base;
+	phys_addr_t pgtable = __pa(priv->pgtable);
+	unsigned int prrr, nmrr;
+	bool found;
+	int i, j;
+
+	__reset_context(iommu, ctx);
+
+	/* Set up HTW mode */
+	/* TLB miss configuration: perform HTW on miss */
+	SET_TLBMCFG(base, ctx, 0x3);
+
+	/* V2P configuration: HTW for access */
+	SET_V2PCFG(base, ctx, 0x3);
+
+	SET_TTBCR(base, ctx, iommu->ttbr_split);
+	SET_TTBR0_PA(base, ctx, (pgtable >> TTBR0_PA_SHIFT));
+	if (iommu->ttbr_split)
+		SET_TTBR1_PA(base, ctx, (pgtable >> TTBR1_PA_SHIFT));
+
+	/* Enable context fault interrupt */
+	SET_CFEIE(base, ctx, 1);
+
+	/* Stall access on a context fault and let the handler deal with it */
+	SET_CFCFG(base, ctx, 1);
+
+	/* Redirect all cacheable requests to L2 slave port. */
+	SET_RCISH(base, ctx, 1);
+	SET_RCOSH(base, ctx, 1);
+	SET_RCNSH(base, ctx, 1);
+
+	/* Turn on TEX Remap */
+	SET_TRE(base, ctx, 1);
+
+	/* Set TEX remap attributes */
+	RCP15_PRRR(prrr);
+	RCP15_NMRR(nmrr);
+	SET_PRRR(base, ctx, prrr);
+	SET_NMRR(base, ctx, nmrr);
+
+	/* Turn on BFB prefetch */
+	SET_BFBDFE(base, ctx, 1);
+
+#ifdef CONFIG_IOMMU_PGTABLES_L2
+	/* Configure page tables as inner-cacheable and shareable to reduce
+	 * the TLB miss penalty.
+	 */
+	SET_TTBR0_SH(base, ctx, 1);
+	SET_TTBR1_SH(base, ctx, 1);
+
+	SET_TTBR0_NOS(base, ctx, 1);
+	SET_TTBR1_NOS(base, ctx, 1);
+
+	SET_TTBR0_IRGNH(base, ctx, 0); /* WB, WA */
+	SET_TTBR0_IRGNL(base, ctx, 1);
+
+	SET_TTBR1_IRGNH(base, ctx, 0); /* WB, WA */
+	SET_TTBR1_IRGNL(base, ctx, 1);
+
+	SET_TTBR0_ORGN(base, ctx, 1); /* WB, WA */
+	SET_TTBR1_ORGN(base, ctx, 1); /* WB, WA */
+#endif
+
+	/* Find if this page table is used elsewhere, and re-use ASID */
+	found = false;
+	for (i = 0; i < iommu->ncb; i++) {
+		if (i == ctx)
+			continue;
+
+		if (GET_TTBR0_PA(base, i) == (pgtable >> TTBR0_PA_SHIFT)) {
+			SET_CONTEXTIDR_ASID(base, ctx, GET_CONTEXTIDR_ASID(base, i));
+			found = true;
+			break;
+		}
+	}
+
+	/* If page table is new, find an unused ASID */
+	if (!found) {
+		for (i = 0; i < iommu->ncb; i++) {
+			found = false;
+			for (j = 0; j < iommu->ncb; j++) {
+				if (j == ctx)
+					continue;
+				if (GET_CONTEXTIDR_ASID(base, j) == i)
+					found = true;
+			}
+
+			if (!found) {
+				SET_CONTEXTIDR_ASID(base, ctx, i);
+				break;
+			}
+		}
+		BUG_ON(found);
+	}
+
+	/* Enable the MMU */
+	SET_M(base, ctx, 1);
+	mb();
+}
+
+static int qcom_iommu_domain_init(struct iommu_domain *domain)
+{
+	struct qcom_domain_priv *priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+
+	if (!priv)
+		goto fail_nomem;
+
+	INIT_LIST_HEAD(&priv->iommu_list);
+	priv->pgtable = (unsigned long *)__get_free_pages(GFP_KERNEL,
+			get_order(SZ_16K));
+
+	if (!priv->pgtable)
+		goto fail_nomem;
+
+	memset(priv->pgtable, 0, SZ_16K);
+	domain->priv = priv;
+	priv->domain = domain;
+
+//XXX I think not needed?
+	dmac_flush_range(priv->pgtable, priv->pgtable + NUM_FL_PTE);
+
+	domain->geometry.aperture_start = 0;
+	domain->geometry.aperture_end   = (1ULL << 32) - 1;
+	domain->geometry.force_aperture = true;
+
+	return 0;
+
+fail_nomem:
+	kfree(priv);
+	return -ENOMEM;
+}
+
+static void qcom_iommu_domain_destroy(struct iommu_domain *domain)
+{
+	struct qcom_domain_priv *priv;
+	unsigned long *fl_table;
+	int i;
+
+	mutex_lock(&qcom_iommu_lock);
+	priv = domain->priv;
+	domain->priv = NULL;
+
+	if (priv) {
+		fl_table = priv->pgtable;
+
+		for (i = 0; i < NUM_FL_PTE; i++)
+			if ((fl_table[i] & 0x03) == FL_TYPE_TABLE)
+				free_page((unsigned long) __va(((fl_table[i]) &
+						FL_BASE_MASK)));
+
+		free_pages((unsigned long)priv->pgtable, get_order(SZ_16K));
+		priv->pgtable = NULL;
+	}
+
+	kfree(priv);
+	mutex_unlock(&qcom_iommu_lock);
+}
+
+static int qcom_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	struct qcom_domain_priv *priv = domain->priv;
+	struct qcom_iommu *iommu;
+	struct qcom_iommu_ctx *iommu_ctx = NULL;
+	int ret = 0;
+	bool found = false;
+
+	if (!priv)
+		return -EINVAL;
+
+	mutex_lock(&qcom_iommu_lock);
+	list_for_each_entry(iommu, &qcom_iommu_devices, dev_node) {
+		iommu_ctx = list_first_entry(&iommu->ctx_list,
+				struct qcom_iommu_ctx, node);
+
+		if (iommu_ctx->of_node == dev->of_node) {
+			found = true;
+
+			ret = __enable_clocks(iommu);
+			if (ret)
+				goto fail;
+
+			/* we found a matching device, attach all its contexts: */
+			list_for_each_entry(iommu_ctx, &iommu->ctx_list, node)
+				__program_context(priv, iommu, iommu_ctx->num);
+
+			__disable_clocks(iommu);
+
+			// TODO check for double attaches, etc..
+
+			list_add_tail(&iommu->dom_node, &priv->iommu_list);
+			iommu->domain = domain;
+		}
+	}
+
+	if (!found) {
+		ret = -ENXIO;
+		goto fail;
+	}
+
+	// TODO might want to get_device(iommu->dev) unless iommu framework
+	// does this somewhere for us?
+
+	ret = __flush_iotlb(priv);
+
+fail:
+	if (ret) {
+		// TODO make sure we completely detach..
+	}
+	mutex_unlock(&qcom_iommu_lock);
+	return ret;
+}
+
+static void qcom_iommu_detach_dev(struct iommu_domain *domain,
+		struct device *dev)
+{
+	struct qcom_domain_priv *priv = domain->priv;
+	struct qcom_iommu *iommu;
+	struct qcom_iommu_ctx *iommu_ctx;
+	int ret;
+
+	if (!priv)
+		return;
+
+	mutex_lock(&qcom_iommu_lock);
+
+	ret = __flush_iotlb(priv);
+	if (ret)
+		goto fail;
+
+	while (!list_empty(&priv->iommu_list)) {
+		iommu = list_first_entry(&priv->iommu_list,
+				struct qcom_iommu, dom_node);
+
+		ret = __enable_clocks(iommu);
+		if (ret)
+			goto fail;
+
+		/* reset all contexts: */
+		list_for_each_entry(iommu_ctx, &iommu->ctx_list, node) {
+			int ctx = iommu_ctx->num;
+			uint32_t asid = GET_CONTEXTIDR_ASID(iommu->base, ctx);
+			SET_TLBIASID(iommu->base, ctx, asid);
+			__reset_context(iommu, ctx);
+		}
+
+		__disable_clocks(iommu);
+
+		list_del(&iommu->dom_node);
+	}
+
+	// TODO might want to put_device(iommu->dev) unless iommu framework
+	// does this somewhere for us?
+
+fail:
+	mutex_unlock(&qcom_iommu_lock);
+}
+
+static int __get_pgprot(int prot, int len)
+{
+	unsigned int pgprot;
+	int tex;
+
+	if (!(prot & (IOMMU_READ | IOMMU_WRITE))) {
+		prot |= IOMMU_READ | IOMMU_WRITE;
+		WARN_ONCE(1, "No attributes in iommu mapping; assuming RW\n");
+	}
+
+	if ((prot & IOMMU_WRITE) && !(prot & IOMMU_READ)) {
+		prot |= IOMMU_READ;
+		WARN_ONCE(1, "Write-only iommu mappings unsupported; falling back to RW\n");
+	}
+
+	if (prot & IOMMU_CACHE)
+		tex = (pgprot_kernel >> 2) & 0x07;
+	else
+		tex = qcom_iommu_tex_class[QCOM_IOMMU_ATTR_NONCACHED];
+
+	if (tex < 0 || tex > NUM_TEX_CLASS - 1)
+		return 0;
+
+	if (len == SZ_16M || len == SZ_1M) {
+		pgprot = FL_SHARED;
+		pgprot |= tex & 0x01 ? FL_BUFFERABLE : 0;
+		pgprot |= tex & 0x02 ? FL_CACHEABLE : 0;
+		pgprot |= tex & 0x04 ? FL_TEX0 : 0;
+		pgprot |= FL_AP0 | FL_AP1;
+		pgprot |= prot & IOMMU_WRITE ? 0 : FL_AP2;
+	} else {
+		pgprot = SL_SHARED;
+		pgprot |= tex & 0x01 ? SL_BUFFERABLE : 0;
+		pgprot |= tex & 0x02 ? SL_CACHEABLE : 0;
+		pgprot |= tex & 0x04 ? SL_TEX0 : 0;
+		pgprot |= SL_AP0 | SL_AP1;
+		pgprot |= prot & IOMMU_WRITE ? 0 : SL_AP2;
+	}
+
+	return pgprot;
+}
+
+static int qcom_iommu_map(struct iommu_domain *domain, unsigned long va,
+		phys_addr_t pa, size_t len, int prot)
+{
+	struct qcom_domain_priv *priv = domain->priv;
+	struct qcom_iommu *iommu;
+	unsigned long *fl_table, *fl_pte, fl_offset;
+	unsigned long *sl_table, *sl_pte, sl_offset;
+	unsigned int pgprot;
+	int ret = 0;
+
+	mutex_lock(&qcom_iommu_lock);
+
+	if (!priv || list_empty(&priv->iommu_list))
+		goto fail;
+
+	/* all IOMMU's in the domain have same pgtables: */
+	iommu = list_first_entry(&priv->iommu_list,
+			struct qcom_iommu, dom_node);
+
+	fl_table = priv->pgtable;
+
+	if (len != SZ_16M && len != SZ_1M &&
+	    len != SZ_64K && len != SZ_4K) {
+		dev_err(iommu->dev, "Bad size: %d\n", len);
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	if (!fl_table) {
+		dev_err(iommu->dev, "Null page table\n");
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	pgprot = __get_pgprot(prot, len);
+
+	if (!pgprot) {
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	fl_offset = FL_OFFSET(va);	/* Upper 12 bits */
+	fl_pte = fl_table + fl_offset;	/* int pointers, 4 bytes */
+
+	if (len == SZ_16M) {
+		int i = 0;
+
+		for (i = 0; i < 16; i++)
+			if (*(fl_pte+i)) {
+				ret = -EBUSY;
+				goto fail;
+			}
+
+		for (i = 0; i < 16; i++)
+			*(fl_pte+i) = (pa & 0xFF000000) | FL_SUPERSECTION
+					| FL_TYPE_SECT | FL_SHARED | FL_NG | pgprot;
+		__flush_range(iommu, fl_pte, fl_pte + 16);
+	}
+
+	if (len == SZ_1M) {
+		if (*fl_pte) {
+			ret = -EBUSY;
+			goto fail;
+		}
+
+		*fl_pte = (pa & 0xFFF00000) | FL_NG | FL_TYPE_SECT | FL_SHARED
+				| pgprot;
+		__flush_range(iommu, fl_pte, fl_pte + 1);
+	}
+
+	/* Need a 2nd level table */
+	if (len == SZ_4K || len == SZ_64K) {
+
+		if (*fl_pte == 0) {
+			unsigned long *sl;
+			sl = (unsigned long *) __get_free_pages(GFP_ATOMIC,
+					get_order(SZ_4K));
+
+			if (!sl) {
+				dev_err(iommu->dev, "Could not allocate second level table\n");
+				ret = -ENOMEM;
+				goto fail;
+			}
+			memset(sl, 0, SZ_4K);
+			__flush_range(iommu, sl, sl + NUM_SL_PTE);
+
+			*fl_pte = ((((int)__pa(sl)) & FL_BASE_MASK) | \
+					FL_TYPE_TABLE);
+
+			__flush_range(iommu, fl_pte, fl_pte + 1);
+		}
+
+		if (!(*fl_pte & FL_TYPE_TABLE)) {
+			ret = -EBUSY;
+			goto fail;
+		}
+	}
+
+	sl_table = (unsigned long *) __va(((*fl_pte) & FL_BASE_MASK));
+	sl_offset = SL_OFFSET(va);
+	sl_pte = sl_table + sl_offset;
+
+	if (len == SZ_4K) {
+		if (*sl_pte) {
+			ret = -EBUSY;
+			goto fail;
+		}
+
+		*sl_pte = (pa & SL_BASE_MASK_SMALL) | SL_NG | SL_SHARED
+				| SL_TYPE_SMALL | pgprot;
+		__flush_range(iommu, sl_pte, sl_pte + 1);
+	}
+
+	if (len == SZ_64K) {
+		int i;
+
+		for (i = 0; i < 16; i++)
+			if (*(sl_pte+i)) {
+				ret = -EBUSY;
+				goto fail;
+			}
+
+		for (i = 0; i < 16; i++)
+			*(sl_pte+i) = (pa & SL_BASE_MASK_LARGE) | SL_NG
+					| SL_SHARED | SL_TYPE_LARGE | pgprot;
+
+		__flush_range(iommu, sl_pte, sl_pte + 16);
+	}
+
+	ret = __flush_iotlb_va(priv, va);
+
+fail:
+	mutex_unlock(&qcom_iommu_lock);
+	return ret;
+}
+
+static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long va,
+		size_t len)
+{
+	struct qcom_domain_priv *priv = domain->priv;
+	struct qcom_iommu *iommu;
+	unsigned long *fl_table, *fl_pte, fl_offset;
+	unsigned long *sl_table, *sl_pte, sl_offset;
+	int i, ret = 0;
+
+	mutex_lock(&qcom_iommu_lock);
+
+	if (!priv || list_empty(&priv->iommu_list))
+		goto fail;
+
+	/* all IOMMU's in the domain have same pgtables: */
+	iommu = list_first_entry(&priv->iommu_list,
+			struct qcom_iommu, dom_node);
+
+	fl_table = priv->pgtable;
+
+	if (len != SZ_16M && len != SZ_1M &&
+	    len != SZ_64K && len != SZ_4K) {
+		dev_err(iommu->dev, "Bad length: %d\n", len);
+		goto fail;
+	}
+
+	if (!fl_table) {
+		dev_err(iommu->dev, "Null page table\n");
+		goto fail;
+	}
+
+	fl_offset = FL_OFFSET(va);	/* Upper 12 bits */
+	fl_pte = fl_table + fl_offset;	/* int pointers, 4 bytes */
+
+	if (*fl_pte == 0) {
+		dev_err(iommu->dev, "First level PTE is 0\n");
+		goto fail;
+	}
+
+	/* Unmap supersection */
+	if (len == SZ_16M) {
+		for (i = 0; i < 16; i++)
+			*(fl_pte+i) = 0;
+
+		__flush_range(iommu, fl_pte, fl_pte + 16);
+	}
+
+	if (len == SZ_1M) {
+		*fl_pte = 0;
+
+		__flush_range(iommu, fl_pte, fl_pte + 1);
+	}
+
+	sl_table = (unsigned long *) __va(((*fl_pte) & FL_BASE_MASK));
+	sl_offset = SL_OFFSET(va);
+	sl_pte = sl_table + sl_offset;
+
+	if (len == SZ_64K) {
+		for (i = 0; i < 16; i++)
+			*(sl_pte+i) = 0;
+
+		__flush_range(iommu, sl_pte, sl_pte + 16);
+	}
+
+	if (len == SZ_4K) {
+		*sl_pte = 0;
+
+		__flush_range(iommu, sl_pte, sl_pte + 1);
+	}
+
+	if (len == SZ_4K || len == SZ_64K) {
+		int used = 0;
+
+		for (i = 0; i < NUM_SL_PTE; i++)
+			if (sl_table[i])
+				used = 1;
+		if (!used) {
+			free_page((unsigned long)sl_table);
+			*fl_pte = 0;
+
+			__flush_range(iommu, fl_pte, fl_pte + 1);
+		}
+	}
+
+	ret = __flush_iotlb_va(priv, va);
+
+fail:
+	mutex_unlock(&qcom_iommu_lock);
+
+	/* the IOMMU API requires us to return how many bytes were unmapped */
+	len = ret ? 0 : len;
+	return len;
+}
+
+static phys_addr_t qcom_iommu_iova_to_phys(struct iommu_domain *domain,
+		dma_addr_t va)
+{
+	struct qcom_domain_priv *priv = domain->priv;
+	struct qcom_iommu *iommu;
+	struct qcom_iommu_ctx *iommu_ctx;
+	unsigned int par;
+	void __iomem *base;
+	phys_addr_t ret = 0;
+	int ctx;
+
+	mutex_lock(&qcom_iommu_lock);
+
+	if (!priv || list_empty(&priv->iommu_list))
+		goto fail;
+
+	/* all IOMMU's in the domain have same pgtables: */
+	iommu = list_first_entry(&priv->iommu_list,
+			struct qcom_iommu, dom_node);
+
+	if (list_empty(&iommu->ctx_list))
+		goto fail;
+
+	iommu_ctx = list_first_entry(&iommu->ctx_list,
+			struct qcom_iommu_ctx, node);
+
+	base = iommu->base;
+	ctx = iommu_ctx->num;
+
+	ret = __enable_clocks(iommu);
+	if (ret)
+		goto fail;
+
+	SET_V2PPR(base, ctx, va & V2Pxx_VA);
+
+	mb();
+	par = GET_PAR(base, ctx);
+
+	/* We are dealing with a supersection */
+	if (GET_NOFAULT_SS(base, ctx))
+		ret = (par & 0xFF000000) | (va & 0x00FFFFFF);
+	else	/* Upper 20 bits from PAR, lower 12 from VA */
+		ret = (par & 0xFFFFF000) | (va & 0x00000FFF);
+
+	if (GET_FAULT(base, ctx))
+		ret = 0;
+
+	__disable_clocks(iommu);
+
+fail:
+	mutex_unlock(&qcom_iommu_lock);
+	return ret;
+}
+
+static void print_ctx_regs(void __iomem *base, int ctx)
+{
+	unsigned int fsr = GET_FSR(base, ctx);
+	pr_err("FAR    = %08x    PAR    = %08x\n",
+		GET_FAR(base, ctx), GET_PAR(base, ctx));
+	pr_err("FSR    = %08x [%s%s%s%s%s%s%s%s%s%s]\n", fsr,
+			(fsr & 0x02) ? "TF " : "",
+			(fsr & 0x04) ? "AFF " : "",
+			(fsr & 0x08) ? "APF " : "",
+			(fsr & 0x10) ? "TLBMF " : "",
+			(fsr & 0x20) ? "HTWDEEF " : "",
+			(fsr & 0x40) ? "HTWSEEF " : "",
+			(fsr & 0x80) ? "MHF " : "",
+			(fsr & 0x10000) ? "SL " : "",
+			(fsr & 0x40000000) ? "SS " : "",
+			(fsr & 0x80000000) ? "MULTI " : "");
+
+	pr_err("FSYNR0 = %08x    FSYNR1 = %08x\n",
+		GET_FSYNR0(base, ctx), GET_FSYNR1(base, ctx));
+	pr_err("TTBR0  = %08x    TTBR1  = %08x\n",
+		GET_TTBR0(base, ctx), GET_TTBR1(base, ctx));
+	pr_err("SCTLR  = %08x    ACTLR  = %08x\n",
+		GET_SCTLR(base, ctx), GET_ACTLR(base, ctx));
+	pr_err("PRRR   = %08x    NMRR   = %08x\n",
+		GET_PRRR(base, ctx), GET_NMRR(base, ctx));
+}
+
+static irqreturn_t __fault_handler(int irq, void *dev_id)
+{
+	struct platform_device *pdev = dev_id;
+	struct qcom_iommu *iommu = platform_get_drvdata(pdev);
+	struct qcom_iommu_ctx *iommu_ctx = NULL;
+	void __iomem *base = iommu->base;
+	int ret;
+	bool first = true;
+
+	mutex_lock(&qcom_iommu_lock);
+
+	ret = __enable_clocks(iommu);
+	if (ret)
+		goto fail;
+
+	list_for_each_entry(iommu_ctx, &iommu->ctx_list, node) {
+		int i = iommu_ctx->num;
+		unsigned int fsr = GET_FSR(base, i);
+		unsigned long iova;
+		int flags;
+
+		if (!fsr)
+			continue;
+
+		iova = GET_FAR(base, i);
+
+		/* TODO without docs, not sure about IOMMU_FAULT_* flags */
+		flags = 0;
+
+		if (!report_iommu_fault(iommu->domain, iommu->dev,
+				iova, flags)) {
+			ret = IRQ_HANDLED;
+		} else {
+			// XXX ratelimited
+			if (first) {
+				/* only print header for first context */
+				pr_err("Unexpected IOMMU page fault!\n");
+				pr_err("base = %08x\n", (unsigned int) base);
+				first = false;
+			}
+			pr_err("Fault occurred in context %d.\n", i);
+			pr_err("name = %s\n", dev_name(&pdev->dev));
+			pr_err("Interesting registers:\n");
+			print_ctx_regs(base, i);
+		}
+
+		SET_FSR(base, i, fsr);
+		SET_RESUME(base, i, 1);
+	}
+	__disable_clocks(iommu);
+
+fail:
+	mutex_unlock(&qcom_iommu_lock);
+	return IRQ_HANDLED;
+}
+
+static struct iommu_ops qcom_iommu_ops = {
+	.domain_init = qcom_iommu_domain_init,
+	.domain_destroy = qcom_iommu_domain_destroy,
+	.attach_dev = qcom_iommu_attach_dev,
+	.detach_dev = qcom_iommu_detach_dev,
+	.map = qcom_iommu_map,
+	.unmap = qcom_iommu_unmap,
+	.iova_to_phys = qcom_iommu_iova_to_phys,
+	.pgsize_bitmap = QCOM_IOMMU_PGSIZES,
+};
+
+/*
+ * IOMMU Platform Driver:
+ */
+
+static int __register_ctx(struct qcom_iommu *iommu,
+		struct of_phandle_args *masterspec, int ctx)
+{
+	struct qcom_iommu_ctx *iommu_ctx;
+	int i, ret;
+
+	if (masterspec->args_count > MAX_NUM_MIDS)
+		return -EINVAL;
+
+	iommu_ctx = kzalloc(sizeof(*iommu_ctx), GFP_KERNEL);
+	if (!iommu_ctx) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	iommu_ctx->of_node = masterspec->np;
+	iommu_ctx->num = ctx;
+
+	for (i = 0; i < masterspec->args_count; i++)
+		iommu_ctx->mids[i] = masterspec->args[i];
+	for (; i < ARRAY_SIZE(iommu_ctx->mids); i++)
+		iommu_ctx->mids[i] = -1;
+
+	list_add_tail(&iommu_ctx->node, &iommu->ctx_list);
+
+	return 0;
+
+fail:
+	// TODO cleanup;
+	return ret;
+}
+
+static int qcom_iommu_probe(struct platform_device *pdev)
+{
+	struct device_node *node = pdev->dev.of_node;
+	struct qcom_iommu *iommu;
+	struct qcom_iommu_ctx *iommu_ctx;
+	struct of_phandle_args masterspec;
+	struct resource *r;
+	int ctx, ret, par, i;
+	u32 val;
+
+	iommu = kzalloc(sizeof(*iommu), GFP_KERNEL);
+	if (!iommu) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	iommu->dev = &pdev->dev;
+	INIT_LIST_HEAD(&iommu->ctx_list);
+
+	ret = of_property_read_u32(node, "ncb", &val);
+	if (ret) {
+		dev_err(iommu->dev, "could not get ncb\n");
+		goto fail;
+	}
+	iommu->ncb = val;
+
+	ret = of_property_read_u32(node, "ttbr-split", &val);
+	if (ret)
+		val = 0;
+	iommu->ttbr_split = val;
+
+	iommu->pclk = devm_clk_get(iommu->dev, "smmu_pclk");
+	if (IS_ERR(iommu->pclk)) {
+		dev_err(iommu->dev, "could not get smmu_pclk\n");
+		ret = PTR_ERR(iommu->pclk);
+		iommu->pclk = NULL;
+		goto fail;
+	}
+
+	iommu->clk = devm_clk_get(iommu->dev, "iommu_clk");
+	if (IS_ERR(iommu->clk)) {
+		dev_err(iommu->dev, "could not get iommu_clk\n");
+		ret = PTR_ERR(iommu->clk);
+		iommu->clk = NULL;
+		goto fail;
+	}
+
+	r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "physbase");
+	if (!r) {
+		dev_err(iommu->dev, "could not get physbase\n");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	iommu->base = devm_ioremap_resource(iommu->dev, r);
+	if (IS_ERR(iommu->base)) {
+		ret = PTR_ERR(iommu->base);
+		goto fail;
+	}
+
+	iommu->irq = platform_get_irq_byname(pdev, "nonsecure_irq");
+	if (WARN_ON(iommu->irq < 0)) {
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	/* now find our contexts: */
+	i = 0;
+	while (!of_parse_phandle_with_args(node, "mmu-masters",
+			"#stream-id-cells", i, &masterspec)) {
+		ret = __register_ctx(iommu, &masterspec, i);
+		if (ret) {
+			dev_err(iommu->dev, "failed to add context %s\n",
+					masterspec.np->name);
+			goto fail;
+		}
+
+		i++;
+	}
+	dev_notice(iommu->dev, "registered %d master devices\n", i);
+
+	__enable_clocks(iommu);
+	__reset_iommu(iommu);
+
+	SET_M(iommu->base, 0, 1);
+	SET_PAR(iommu->base, 0, 0);
+	SET_V2PCFG(iommu->base, 0, 1);
+	SET_V2PPR(iommu->base, 0, 0);
+	mb();
+	par = GET_PAR(iommu->base, 0);
+	SET_V2PCFG(iommu->base, 0, 0);
+	SET_M(iommu->base, 0, 0);
+	mb();
+
+	ctx = 0;
+	list_for_each_entry(iommu_ctx, &iommu->ctx_list, node) {
+		for (i = 0; iommu_ctx->mids[i] != -1; i++) {
+			int mid = iommu_ctx->mids[i];
+
+			SET_M2VCBR_N(iommu->base, mid, 0);
+			SET_CBACR_N(iommu->base, ctx, 0);
+
+			/* Set VMID = 0 */
+			SET_VMID(iommu->base, mid, 0);
+
+			/* Set the context number for that MID to this context */
+			SET_CBNDX(iommu->base, mid, ctx);
+
+			/* Set MID associated with this context bank to 0 */
+			SET_CBVMID(iommu->base, ctx, 0);
+
+			/* Set the ASID for TLB tagging for this context */
+			SET_CONTEXTIDR_ASID(iommu->base, ctx, ctx);
+
+			/* Set security bit override to be Non-secure */
+			SET_NSCFG(iommu->base, mid, 3);
+		}
+
+		ctx++;
+	}
+
+	__disable_clocks(iommu);
+
+	if (!par) {
+		dev_err(iommu->dev, "Invalid PAR value detected\n");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	ret = devm_request_threaded_irq(iommu->dev, iommu->irq, NULL,
+			__fault_handler, IRQF_ONESHOT | IRQF_SHARED,
+			"iommu", pdev);
+	if (ret) {
+		dev_err(iommu->dev, "Request IRQ %d failed with ret=%d\n",
+				iommu->irq, ret);
+		goto fail;
+	}
+
+	dev_info(iommu->dev, "device mapped at %p, irq %d with %d ctx banks\n",
+			iommu->base, iommu->irq, iommu->ncb);
+
+	platform_set_drvdata(pdev, iommu);
+
+	mutex_lock(&qcom_iommu_lock);
+	list_add(&iommu->dev_node, &qcom_iommu_devices);
+	mutex_unlock(&qcom_iommu_lock);
+
+	return 0;
+fail:
+	// TODO cleanup..
+	return ret;
+}
+
+static int qcom_iommu_remove(struct platform_device *pdev)
+{
+	struct qcom_iommu *priv = platform_get_drvdata(pdev);
+
+	if (WARN_ON(!priv))
+		return 0;
+
+	if (priv->clk)
+		clk_disable_unprepare(priv->clk);
+
+	if (priv->pclk)
+		clk_disable_unprepare(priv->pclk);
+
+	platform_set_drvdata(pdev, NULL);
+	kfree(priv);
+
+	return 0;
+}
+
+static const struct of_device_id qcom_iommu_dt_match[] = {
+	{ .compatible = "qcom,iommu-v0" },
+	{}
+};
+
+static struct platform_driver qcom_iommu_driver = {
+	.driver = {
+		.name	= "qcom-iommu-v0",
+		.of_match_table = qcom_iommu_dt_match,
+	},
+	.probe		= qcom_iommu_probe,
+	.remove		= qcom_iommu_remove,
+};
+
+static int __init get_tex_class(int icp, int ocp, int mt, int nos)
+{
+	int i = 0;
+	unsigned int prrr = 0;
+	unsigned int nmrr = 0;
+	int c_icp, c_ocp, c_mt, c_nos;
+
+	RCP15_PRRR(prrr);
+	RCP15_NMRR(nmrr);
+
+	for (i = 0; i < NUM_TEX_CLASS; i++) {
+		c_nos = PRRR_NOS(prrr, i);
+		c_mt = PRRR_MT(prrr, i);
+		c_icp = NMRR_ICP(nmrr,
i); + c_ocp = NMRR_OCP(nmrr, i); + + if (icp == c_icp && ocp == c_ocp && c_mt == mt && c_nos == nos) + return i; + } + + return -ENODEV; +} + +static int __init qcom_iommu_init(void) +{ + int ret; + + ret = platform_driver_register(&qcom_iommu_driver); + if (ret) { + pr_err("Failed to register IOMMU driver\n"); + goto error; + } + + qcom_iommu_tex_class[QCOM_IOMMU_ATTR_NONCACHED] = + get_tex_class(CP_NONCACHED, CP_NONCACHED, MT_NORMAL, 1); + + qcom_iommu_tex_class[QCOM_IOMMU_ATTR_CACHED_WB_WA] = + get_tex_class(CP_WB_WA, CP_WB_WA, MT_NORMAL, 1); + + qcom_iommu_tex_class[QCOM_IOMMU_ATTR_CACHED_WB_NWA] = + get_tex_class(CP_WB_NWA, CP_WB_NWA, MT_NORMAL, 1); + + qcom_iommu_tex_class[QCOM_IOMMU_ATTR_CACHED_WT] = + get_tex_class(CP_WT, CP_WT, MT_NORMAL, 1); + + bus_set_iommu(&platform_bus_type, &qcom_iommu_ops); + + return 0; + +error: + return ret; +} + +static void __exit qcom_iommu_driver_exit(void) +{ + platform_driver_unregister(&qcom_iommu_driver); +} + +subsys_initcall(qcom_iommu_init); +module_exit(qcom_iommu_driver_exit); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Stepan Moskovchenko <stepanm@codeaurora.org>"); +MODULE_AUTHOR("Rob Clark <robdclark@gmail.com>"); diff --git a/drivers/iommu/qcom_iommu_v0.h b/drivers/iommu/qcom_iommu_v0.h new file mode 100644 index 000000000000..a3d1e37182c8 --- /dev/null +++ b/drivers/iommu/qcom_iommu_v0.h @@ -0,0 +1,99 @@ +/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved. + * Copyright (C) 2014 Red Hat + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA + * 02110-1301, USA. + */ + +/* NOTE: originally based on msm_iommu non-DT driver for same hw + * but as the structure of the driver changes considerably for DT + * it seemed easier to not try to support old platforms with the + * same driver. + */ + +#ifndef QCOM_IOMMU_V0_H +#define QCOM_IOMMU_V0_H + +#include <linux/interrupt.h> +#include <linux/clk.h> + +/* Sharability attributes of QCOM IOMMU mappings */ +#define QCOM_IOMMU_ATTR_NON_SH 0x0 +#define QCOM_IOMMU_ATTR_SH 0x4 + +/* Cacheability attributes of QCOM IOMMU mappings */ +#define QCOM_IOMMU_ATTR_NONCACHED 0x0 +#define QCOM_IOMMU_ATTR_CACHED_WB_WA 0x1 +#define QCOM_IOMMU_ATTR_CACHED_WB_NWA 0x2 +#define QCOM_IOMMU_ATTR_CACHED_WT 0x3 + +/* Mask for the cache policy attribute */ +#define QCOM_IOMMU_CP_MASK 0x03 + +/* Maximum number of Machine IDs that we are allowing to be mapped to the same + * context bank. The number of MIDs mapped to the same CB does not affect + * performance, but there is a practical limit on how many distinct MIDs may + * be present. These mappings are typically determined at design time and are + * not expected to change at run time. + */ +#define MAX_NUM_MIDS 32 + +/** + * struct qcom_iommu - a single IOMMU hardware instance + * @dev: IOMMU device + * @base: IOMMU config port base address (VA) + * @irq: Interrupt number + * @ncb: Number of context banks present on this IOMMU HW instance + * @ttbr_split: ttbr split + * @clk: The bus clock for this IOMMU hardware instance + * @pclk: The clock for the IOMMU bus interconnect + * @ctx_list: list of 'struct qcom_iommu_ctx' + * @dev_node: list head in qcom_iommu_devices list + * @dom_node: list head in domain + * @domain: attached domain. Note that the relationship between domain and + * and iommu's is N:1, ie. 
an IOMMU can only be attached to one domain, + * but a domain can be attached to many IOMMUs + */ +struct qcom_iommu { + struct device *dev; + void __iomem *base; + int irq; + int ncb; + int ttbr_split; + struct clk *clk; + struct clk *pclk; + struct list_head ctx_list; + struct list_head dev_node; + struct list_head dom_node; + struct iommu_domain *domain; +}; + +/** + * struct qcom_iommu_ctx - an IOMMU context bank instance + * @of_node: node ptr of client device + * @num: Index of this context bank within the hardware + * @mids: List of Machine IDs that are to be mapped into this context + * bank, terminated by -1. The MID is a set of signals on the + * AXI bus that identifies the function associated with a specific + * memory request. (See ARM spec). + * @node: list head in ctx_list + */ +struct qcom_iommu_ctx { + struct device_node *of_node; + int num; + int mids[MAX_NUM_MIDS]; + struct list_head node; +}; + +#endif diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig index 1456ea70bbc7..b6bcb79c9728 100644 --- a/drivers/mfd/Kconfig +++ b/drivers/mfd/Kconfig @@ -567,6 +567,20 @@ config MFD_PM8921_CORE Say M here if you want to include support for PM8921 chip as a module. This will build a module called "pm8921-core". +config MFD_QCOM_RPM + tristate "Qualcomm Resource Power Manager (RPM)" + depends on ARCH_QCOM && OF + help + If you say yes to this option, support will be included for the + Resource Power Manager system found in the Qualcomm 8660, 8960 and + 8064 based devices. + + This is required to access many regulators, clocks and bus + frequencies controlled by the RPM on these devices. + + Say M here if you want to include support for the Qualcomm RPM as a + module. This will build a module called "qcom_rpm". 
+ config MFD_SPMI_PMIC tristate "Qualcomm SPMI PMICs" depends on ARCH_QCOM || COMPILE_TEST diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile index 8bd54b1253af..3364bbc60802 100644 --- a/drivers/mfd/Makefile +++ b/drivers/mfd/Makefile @@ -153,6 +153,7 @@ obj-$(CONFIG_MFD_SI476X_CORE) += si476x-core.o obj-$(CONFIG_MFD_CS5535) += cs5535-mfd.o obj-$(CONFIG_MFD_OMAP_USB_HOST) += omap-usb-host.o omap-usb-tll.o obj-$(CONFIG_MFD_PM8921_CORE) += pm8921-core.o ssbi.o +obj-$(CONFIG_MFD_QCOM_RPM) += qcom_rpm.o obj-$(CONFIG_MFD_SPMI_PMIC) += qcom-spmi-pmic.o obj-$(CONFIG_TPS65911_COMPARATOR) += tps65911-comparator.o obj-$(CONFIG_MFD_TPS65090) += tps65090.o diff --git a/drivers/mfd/pm8921-core.c b/drivers/mfd/pm8921-core.c index 39904f77c049..96b60db0dac9 100644 --- a/drivers/mfd/pm8921-core.c +++ b/drivers/mfd/pm8921-core.c @@ -26,6 +26,7 @@ #include <linux/regmap.h> #include <linux/of_platform.h> #include <linux/mfd/core.h> +#include <linux/mfd/pm8921-core.h> #define SSBI_REG_ADDR_IRQ_BASE 0x1BB @@ -65,6 +66,41 @@ struct pm_irq_chip { u8 config[0]; }; +int pm8xxx_read_irq_status(int irq) +{ + struct irq_data *d = irq_get_irq_data(irq); + struct pm_irq_chip *chip = irq_data_get_irq_chip_data(d); + unsigned int pmirq = irqd_to_hwirq(d); + unsigned int bits; + int irq_bit; + u8 block; + int rc; + + if (!chip) { + pr_err("Failed to resolve pm_irq_chip\n"); + return -EINVAL; + } + + block = pmirq / 8; + irq_bit = pmirq % 8; + + spin_lock(&chip->pm_irq_lock); + rc = regmap_write(chip->regmap, SSBI_REG_ADDR_IRQ_BLK_SEL, block); + if (rc) { + pr_err("Failed Selecting Block %d rc=%d\n", block, rc); + goto bail; + } + + rc = regmap_read(chip->regmap, SSBI_REG_ADDR_IRQ_RT_STATUS, &bits); + if (rc) + pr_err("Failed Reading Status rc=%d\n", rc); +bail: + spin_unlock(&chip->pm_irq_lock); + + return rc ? 
rc : !!(bits & BIT(irq_bit)); +} +EXPORT_SYMBOL(pm8xxx_read_irq_status); + static int pm8xxx_read_block_irq(struct pm_irq_chip *chip, unsigned int bp, unsigned int *ip) { diff --git a/drivers/mfd/qcom_rpm.c b/drivers/mfd/qcom_rpm.c new file mode 100644 index 000000000000..eceb34f503da --- /dev/null +++ b/drivers/mfd/qcom_rpm.c @@ -0,0 +1,617 @@ +/* + * Copyright (c) 2014, Sony Mobile Communications AB. + * Copyright (c) 2013, The Linux Foundation. All rights reserved. + * Author: Bjorn Andersson <bjorn.andersson@sonymobile.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/of_platform.h> +#include <linux/io.h> +#include <linux/interrupt.h> +#include <linux/mfd/qcom_rpm.h> +#include <linux/mfd/syscon.h> +#include <linux/regmap.h> + +#include <dt-bindings/mfd/qcom-rpm.h> + +struct qcom_rpm_resource { + unsigned target_id; + unsigned status_id; + unsigned select_id; + unsigned size; +}; + +struct qcom_rpm_data { + u32 version; + const struct qcom_rpm_resource *resource_table; + unsigned n_resources; +}; + +struct qcom_rpm { + struct device *dev; + struct regmap *ipc_regmap; + unsigned ipc_offset; + unsigned ipc_bit; + + struct completion ack; + struct mutex lock; + + void __iomem *status_regs; + void __iomem *ctrl_regs; + void __iomem *req_regs; + + u32 ack_status; + + const struct qcom_rpm_data *data; +}; + +#define RPM_STATUS_REG(rpm, i) ((rpm)->status_regs + (i) * 4) +#define RPM_CTRL_REG(rpm, i) ((rpm)->ctrl_regs + (i) * 4) +#define RPM_REQ_REG(rpm, i) 
((rpm)->req_regs + (i) * 4) + +#define RPM_REQUEST_TIMEOUT (5 * HZ) + +#define RPM_REQUEST_CONTEXT 3 +#define RPM_REQ_SELECT 11 +#define RPM_ACK_CONTEXT 15 +#define RPM_ACK_SELECTOR 23 +#define RPM_SELECT_SIZE 7 + +#define RPM_NOTIFICATION BIT(30) +#define RPM_REJECTED BIT(31) + +#define RPM_SIGNAL BIT(2) + +static const struct qcom_rpm_resource apq8064_rpm_resource_table[] = { + [QCOM_RPM_CXO_CLK] = { 25, 9, 5, 1 }, + [QCOM_RPM_PXO_CLK] = { 26, 10, 6, 1 }, + [QCOM_RPM_APPS_FABRIC_CLK] = { 27, 11, 8, 1 }, + [QCOM_RPM_SYS_FABRIC_CLK] = { 28, 12, 9, 1 }, + [QCOM_RPM_MM_FABRIC_CLK] = { 29, 13, 10, 1 }, + [QCOM_RPM_DAYTONA_FABRIC_CLK] = { 30, 14, 11, 1 }, + [QCOM_RPM_SFPB_CLK] = { 31, 15, 12, 1 }, + [QCOM_RPM_CFPB_CLK] = { 32, 16, 13, 1 }, + [QCOM_RPM_MMFPB_CLK] = { 33, 17, 14, 1 }, + [QCOM_RPM_EBI1_CLK] = { 34, 18, 16, 1 }, + [QCOM_RPM_APPS_FABRIC_HALT] = { 35, 19, 18, 1 }, + [QCOM_RPM_APPS_FABRIC_MODE] = { 37, 20, 19, 1 }, + [QCOM_RPM_APPS_FABRIC_IOCTL] = { 40, 21, 20, 1 }, + [QCOM_RPM_APPS_FABRIC_ARB] = { 41, 22, 21, 12 }, + [QCOM_RPM_SYS_FABRIC_HALT] = { 53, 23, 22, 1 }, + [QCOM_RPM_SYS_FABRIC_MODE] = { 55, 24, 23, 1 }, + [QCOM_RPM_SYS_FABRIC_IOCTL] = { 58, 25, 24, 1 }, + [QCOM_RPM_SYS_FABRIC_ARB] = { 59, 26, 25, 30 }, + [QCOM_RPM_MM_FABRIC_HALT] = { 89, 27, 26, 1 }, + [QCOM_RPM_MM_FABRIC_MODE] = { 91, 28, 27, 1 }, + [QCOM_RPM_MM_FABRIC_IOCTL] = { 94, 29, 28, 1 }, + [QCOM_RPM_MM_FABRIC_ARB] = { 95, 30, 29, 21 }, + [QCOM_RPM_PM8921_SMPS1] = { 116, 31, 30, 2 }, + [QCOM_RPM_PM8921_SMPS2] = { 118, 33, 31, 2 }, + [QCOM_RPM_PM8921_SMPS3] = { 120, 35, 32, 2 }, + [QCOM_RPM_PM8921_SMPS4] = { 122, 37, 33, 2 }, + [QCOM_RPM_PM8921_SMPS5] = { 124, 39, 34, 2 }, + [QCOM_RPM_PM8921_SMPS6] = { 126, 41, 35, 2 }, + [QCOM_RPM_PM8921_SMPS7] = { 128, 43, 36, 2 }, + [QCOM_RPM_PM8921_SMPS8] = { 130, 45, 37, 2 }, + [QCOM_RPM_PM8921_LDO1] = { 132, 47, 38, 2 }, + [QCOM_RPM_PM8921_LDO2] = { 134, 49, 39, 2 }, + [QCOM_RPM_PM8921_LDO3] = { 136, 51, 40, 2 }, + [QCOM_RPM_PM8921_LDO4] = { 138, 53, 
41, 2 }, + [QCOM_RPM_PM8921_LDO5] = { 140, 55, 42, 2 }, + [QCOM_RPM_PM8921_LDO6] = { 142, 57, 43, 2 }, + [QCOM_RPM_PM8921_LDO7] = { 144, 59, 44, 2 }, + [QCOM_RPM_PM8921_LDO8] = { 146, 61, 45, 2 }, + [QCOM_RPM_PM8921_LDO9] = { 148, 63, 46, 2 }, + [QCOM_RPM_PM8921_LDO10] = { 150, 65, 47, 2 }, + [QCOM_RPM_PM8921_LDO11] = { 152, 67, 48, 2 }, + [QCOM_RPM_PM8921_LDO12] = { 154, 69, 49, 2 }, + [QCOM_RPM_PM8921_LDO13] = { 156, 71, 50, 2 }, + [QCOM_RPM_PM8921_LDO14] = { 158, 73, 51, 2 }, + [QCOM_RPM_PM8921_LDO15] = { 160, 75, 52, 2 }, + [QCOM_RPM_PM8921_LDO16] = { 162, 77, 53, 2 }, + [QCOM_RPM_PM8921_LDO17] = { 164, 79, 54, 2 }, + [QCOM_RPM_PM8921_LDO18] = { 166, 81, 55, 2 }, + [QCOM_RPM_PM8921_LDO19] = { 168, 83, 56, 2 }, + [QCOM_RPM_PM8921_LDO20] = { 170, 85, 57, 2 }, + [QCOM_RPM_PM8921_LDO21] = { 172, 87, 58, 2 }, + [QCOM_RPM_PM8921_LDO22] = { 174, 89, 59, 2 }, + [QCOM_RPM_PM8921_LDO23] = { 176, 91, 60, 2 }, + [QCOM_RPM_PM8921_LDO24] = { 178, 93, 61, 2 }, + [QCOM_RPM_PM8921_LDO25] = { 180, 95, 62, 2 }, + [QCOM_RPM_PM8921_LDO26] = { 182, 97, 63, 2 }, + [QCOM_RPM_PM8921_LDO27] = { 184, 99, 64, 2 }, + [QCOM_RPM_PM8921_LDO28] = { 186, 101, 65, 2 }, + [QCOM_RPM_PM8921_LDO29] = { 188, 103, 66, 2 }, + [QCOM_RPM_PM8921_CLK1] = { 190, 105, 67, 2 }, + [QCOM_RPM_PM8921_CLK2] = { 192, 107, 68, 2 }, + [QCOM_RPM_PM8921_LVS1] = { 194, 109, 69, 1 }, + [QCOM_RPM_PM8921_LVS2] = { 195, 110, 70, 1 }, + [QCOM_RPM_PM8921_LVS3] = { 196, 111, 71, 1 }, + [QCOM_RPM_PM8921_LVS4] = { 197, 112, 72, 1 }, + [QCOM_RPM_PM8921_LVS5] = { 198, 113, 73, 1 }, + [QCOM_RPM_PM8921_LVS6] = { 199, 114, 74, 1 }, + [QCOM_RPM_PM8921_LVS7] = { 200, 115, 75, 1 }, + [QCOM_RPM_PM8821_SMPS1] = { 201, 116, 76, 2 }, + [QCOM_RPM_PM8821_SMPS2] = { 203, 118, 77, 2 }, + [QCOM_RPM_PM8821_LDO1] = { 205, 120, 78, 2 }, + [QCOM_RPM_PM8921_NCP] = { 207, 122, 80, 2 }, + [QCOM_RPM_CXO_BUFFERS] = { 209, 124, 81, 1 }, + [QCOM_RPM_USB_OTG_SWITCH] = { 210, 125, 82, 1 }, + [QCOM_RPM_HDMI_SWITCH] = { 211, 126, 83, 1 }, + [QCOM_RPM_DDR_DMM] 
= { 212, 127, 84, 2 }, + [QCOM_RPM_VDDMIN_GPIO] = { 215, 131, 89, 1 }, +}; + +static const struct qcom_rpm_data apq8064_template = { + .version = 3, + .resource_table = apq8064_rpm_resource_table, + .n_resources = ARRAY_SIZE(apq8064_rpm_resource_table), +}; + +static const struct qcom_rpm_resource msm8660_rpm_resource_table[] = { + [QCOM_RPM_CXO_CLK] = { 32, 12, 5, 1 }, + [QCOM_RPM_PXO_CLK] = { 33, 13, 6, 1 }, + [QCOM_RPM_PLL_4] = { 34, 14, 7, 1 }, + [QCOM_RPM_APPS_FABRIC_CLK] = { 35, 15, 8, 1 }, + [QCOM_RPM_SYS_FABRIC_CLK] = { 36, 16, 9, 1 }, + [QCOM_RPM_MM_FABRIC_CLK] = { 37, 17, 10, 1 }, + [QCOM_RPM_DAYTONA_FABRIC_CLK] = { 38, 18, 11, 1 }, + [QCOM_RPM_SFPB_CLK] = { 39, 19, 12, 1 }, + [QCOM_RPM_CFPB_CLK] = { 40, 20, 13, 1 }, + [QCOM_RPM_MMFPB_CLK] = { 41, 21, 14, 1 }, + [QCOM_RPM_SMI_CLK] = { 42, 22, 15, 1 }, + [QCOM_RPM_EBI1_CLK] = { 43, 23, 16, 1 }, + [QCOM_RPM_APPS_L2_CACHE_CTL] = { 44, 24, 17, 1 }, + [QCOM_RPM_APPS_FABRIC_HALT] = { 45, 25, 18, 2 }, + [QCOM_RPM_APPS_FABRIC_MODE] = { 47, 26, 19, 3 }, + [QCOM_RPM_APPS_FABRIC_ARB] = { 51, 28, 21, 6 }, + [QCOM_RPM_SYS_FABRIC_HALT] = { 63, 29, 22, 2 }, + [QCOM_RPM_SYS_FABRIC_MODE] = { 65, 30, 23, 3 }, + [QCOM_RPM_SYS_FABRIC_ARB] = { 69, 32, 25, 22 }, + [QCOM_RPM_MM_FABRIC_HALT] = { 105, 33, 26, 2 }, + [QCOM_RPM_MM_FABRIC_MODE] = { 107, 34, 27, 3 }, + [QCOM_RPM_MM_FABRIC_ARB] = { 111, 36, 29, 23 }, + [QCOM_RPM_PM8901_SMPS0] = { 134, 37, 30, 2 }, + [QCOM_RPM_PM8901_SMPS1] = { 136, 39, 31, 2 }, + [QCOM_RPM_PM8901_SMPS2] = { 138, 41, 32, 2 }, + [QCOM_RPM_PM8901_SMPS3] = { 140, 43, 33, 2 }, + [QCOM_RPM_PM8901_SMPS4] = { 142, 45, 34, 2 }, + [QCOM_RPM_PM8901_LDO0] = { 144, 47, 35, 2 }, + [QCOM_RPM_PM8901_LDO1] = { 146, 49, 36, 2 }, + [QCOM_RPM_PM8901_LDO2] = { 148, 51, 37, 2 }, + [QCOM_RPM_PM8901_LDO3] = { 150, 53, 38, 2 }, + [QCOM_RPM_PM8901_LDO4] = { 152, 55, 39, 2 }, + [QCOM_RPM_PM8901_LDO5] = { 154, 57, 40, 2 }, + [QCOM_RPM_PM8901_LDO6] = { 156, 59, 41, 2 }, + [QCOM_RPM_PM8901_LVS0] = { 158, 61, 42, 1 }, + 
[QCOM_RPM_PM8901_LVS1] = { 159, 62, 43, 1 }, + [QCOM_RPM_PM8901_LVS2] = { 160, 63, 44, 1 }, + [QCOM_RPM_PM8901_LVS3] = { 161, 64, 45, 1 }, + [QCOM_RPM_PM8901_MVS] = { 162, 65, 46, 1 }, + [QCOM_RPM_PM8058_SMPS0] = { 163, 66, 47, 2 }, + [QCOM_RPM_PM8058_SMPS1] = { 165, 68, 48, 2 }, + [QCOM_RPM_PM8058_SMPS2] = { 167, 70, 49, 2 }, + [QCOM_RPM_PM8058_SMPS3] = { 169, 72, 50, 2 }, + [QCOM_RPM_PM8058_SMPS4] = { 171, 74, 51, 2 }, + [QCOM_RPM_PM8058_LDO0] = { 173, 76, 52, 2 }, + [QCOM_RPM_PM8058_LDO1] = { 175, 78, 53, 2 }, + [QCOM_RPM_PM8058_LDO2] = { 177, 80, 54, 2 }, + [QCOM_RPM_PM8058_LDO3] = { 179, 82, 55, 2 }, + [QCOM_RPM_PM8058_LDO4] = { 181, 84, 56, 2 }, + [QCOM_RPM_PM8058_LDO5] = { 183, 86, 57, 2 }, + [QCOM_RPM_PM8058_LDO6] = { 185, 88, 58, 2 }, + [QCOM_RPM_PM8058_LDO7] = { 187, 90, 59, 2 }, + [QCOM_RPM_PM8058_LDO8] = { 189, 92, 60, 2 }, + [QCOM_RPM_PM8058_LDO9] = { 191, 94, 61, 2 }, + [QCOM_RPM_PM8058_LDO10] = { 193, 96, 62, 2 }, + [QCOM_RPM_PM8058_LDO11] = { 195, 98, 63, 2 }, + [QCOM_RPM_PM8058_LDO12] = { 197, 100, 64, 2 }, + [QCOM_RPM_PM8058_LDO13] = { 199, 102, 65, 2 }, + [QCOM_RPM_PM8058_LDO14] = { 201, 104, 66, 2 }, + [QCOM_RPM_PM8058_LDO15] = { 203, 106, 67, 2 }, + [QCOM_RPM_PM8058_LDO16] = { 205, 108, 68, 2 }, + [QCOM_RPM_PM8058_LDO17] = { 207, 110, 69, 2 }, + [QCOM_RPM_PM8058_LDO18] = { 209, 112, 70, 2 }, + [QCOM_RPM_PM8058_LDO19] = { 211, 114, 71, 2 }, + [QCOM_RPM_PM8058_LDO20] = { 213, 116, 72, 2 }, + [QCOM_RPM_PM8058_LDO21] = { 215, 118, 73, 2 }, + [QCOM_RPM_PM8058_LDO22] = { 217, 120, 74, 2 }, + [QCOM_RPM_PM8058_LDO23] = { 219, 122, 75, 2 }, + [QCOM_RPM_PM8058_LDO24] = { 221, 124, 76, 2 }, + [QCOM_RPM_PM8058_LDO25] = { 223, 126, 77, 2 }, + [QCOM_RPM_PM8058_LVS0] = { 225, 128, 78, 1 }, + [QCOM_RPM_PM8058_LVS1] = { 226, 129, 79, 1 }, + [QCOM_RPM_PM8058_NCP] = { 227, 130, 80, 2 }, + [QCOM_RPM_CXO_BUFFERS] = { 229, 132, 81, 1 }, +}; + +static const struct qcom_rpm_data msm8660_template = { + .version = 2, + .resource_table = msm8660_rpm_resource_table, + 
.n_resources = ARRAY_SIZE(msm8660_rpm_resource_table), +}; + +static const struct qcom_rpm_resource msm8960_rpm_resource_table[] = { + [QCOM_RPM_CXO_CLK] = { 25, 9, 5, 1 }, + [QCOM_RPM_PXO_CLK] = { 26, 10, 6, 1 }, + [QCOM_RPM_APPS_FABRIC_CLK] = { 27, 11, 8, 1 }, + [QCOM_RPM_SYS_FABRIC_CLK] = { 28, 12, 9, 1 }, + [QCOM_RPM_MM_FABRIC_CLK] = { 29, 13, 10, 1 }, + [QCOM_RPM_DAYTONA_FABRIC_CLK] = { 30, 14, 11, 1 }, + [QCOM_RPM_SFPB_CLK] = { 31, 15, 12, 1 }, + [QCOM_RPM_CFPB_CLK] = { 32, 16, 13, 1 }, + [QCOM_RPM_MMFPB_CLK] = { 33, 17, 14, 1 }, + [QCOM_RPM_EBI1_CLK] = { 34, 18, 16, 1 }, + [QCOM_RPM_APPS_FABRIC_HALT] = { 35, 19, 18, 1 }, + [QCOM_RPM_APPS_FABRIC_MODE] = { 37, 20, 19, 1 }, + [QCOM_RPM_APPS_FABRIC_IOCTL] = { 40, 21, 20, 1 }, + [QCOM_RPM_APPS_FABRIC_ARB] = { 41, 22, 21, 12 }, + [QCOM_RPM_SYS_FABRIC_HALT] = { 53, 23, 22, 1 }, + [QCOM_RPM_SYS_FABRIC_MODE] = { 55, 24, 23, 1 }, + [QCOM_RPM_SYS_FABRIC_IOCTL] = { 58, 25, 24, 1 }, + [QCOM_RPM_SYS_FABRIC_ARB] = { 59, 26, 25, 29 }, + [QCOM_RPM_MM_FABRIC_HALT] = { 88, 27, 26, 1 }, + [QCOM_RPM_MM_FABRIC_MODE] = { 90, 28, 27, 1 }, + [QCOM_RPM_MM_FABRIC_IOCTL] = { 93, 29, 28, 1 }, + [QCOM_RPM_MM_FABRIC_ARB] = { 94, 30, 29, 23 }, + [QCOM_RPM_PM8921_SMPS1] = { 117, 31, 30, 2 }, + [QCOM_RPM_PM8921_SMPS2] = { 119, 33, 31, 2 }, + [QCOM_RPM_PM8921_SMPS3] = { 121, 35, 32, 2 }, + [QCOM_RPM_PM8921_SMPS4] = { 123, 37, 33, 2 }, + [QCOM_RPM_PM8921_SMPS5] = { 125, 39, 34, 2 }, + [QCOM_RPM_PM8921_SMPS6] = { 127, 41, 35, 2 }, + [QCOM_RPM_PM8921_SMPS7] = { 129, 43, 36, 2 }, + [QCOM_RPM_PM8921_SMPS8] = { 131, 45, 37, 2 }, + [QCOM_RPM_PM8921_LDO1] = { 133, 47, 38, 2 }, + [QCOM_RPM_PM8921_LDO2] = { 135, 49, 39, 2 }, + [QCOM_RPM_PM8921_LDO3] = { 137, 51, 40, 2 }, + [QCOM_RPM_PM8921_LDO4] = { 139, 53, 41, 2 }, + [QCOM_RPM_PM8921_LDO5] = { 141, 55, 42, 2 }, + [QCOM_RPM_PM8921_LDO6] = { 143, 57, 43, 2 }, + [QCOM_RPM_PM8921_LDO7] = { 145, 59, 44, 2 }, + [QCOM_RPM_PM8921_LDO8] = { 147, 61, 45, 2 }, + [QCOM_RPM_PM8921_LDO9] = { 149, 63, 46, 2 }, + 
[QCOM_RPM_PM8921_LDO10] = { 151, 65, 47, 2 }, + [QCOM_RPM_PM8921_LDO11] = { 153, 67, 48, 2 }, + [QCOM_RPM_PM8921_LDO12] = { 155, 69, 49, 2 }, + [QCOM_RPM_PM8921_LDO13] = { 157, 71, 50, 2 }, + [QCOM_RPM_PM8921_LDO14] = { 159, 73, 51, 2 }, + [QCOM_RPM_PM8921_LDO15] = { 161, 75, 52, 2 }, + [QCOM_RPM_PM8921_LDO16] = { 163, 77, 53, 2 }, + [QCOM_RPM_PM8921_LDO17] = { 165, 79, 54, 2 }, + [QCOM_RPM_PM8921_LDO18] = { 167, 81, 55, 2 }, + [QCOM_RPM_PM8921_LDO19] = { 169, 83, 56, 2 }, + [QCOM_RPM_PM8921_LDO20] = { 171, 85, 57, 2 }, + [QCOM_RPM_PM8921_LDO21] = { 173, 87, 58, 2 }, + [QCOM_RPM_PM8921_LDO22] = { 175, 89, 59, 2 }, + [QCOM_RPM_PM8921_LDO23] = { 177, 91, 60, 2 }, + [QCOM_RPM_PM8921_LDO24] = { 179, 93, 61, 2 }, + [QCOM_RPM_PM8921_LDO25] = { 181, 95, 62, 2 }, + [QCOM_RPM_PM8921_LDO26] = { 183, 97, 63, 2 }, + [QCOM_RPM_PM8921_LDO27] = { 185, 99, 64, 2 }, + [QCOM_RPM_PM8921_LDO28] = { 187, 101, 65, 2 }, + [QCOM_RPM_PM8921_LDO29] = { 189, 103, 66, 2 }, + [QCOM_RPM_PM8921_CLK1] = { 191, 105, 67, 2 }, + [QCOM_RPM_PM8921_CLK2] = { 193, 107, 68, 2 }, + [QCOM_RPM_PM8921_LVS1] = { 195, 109, 69, 1 }, + [QCOM_RPM_PM8921_LVS2] = { 196, 110, 70, 1 }, + [QCOM_RPM_PM8921_LVS3] = { 197, 111, 71, 1 }, + [QCOM_RPM_PM8921_LVS4] = { 198, 112, 72, 1 }, + [QCOM_RPM_PM8921_LVS5] = { 199, 113, 73, 1 }, + [QCOM_RPM_PM8921_LVS6] = { 200, 114, 74, 1 }, + [QCOM_RPM_PM8921_LVS7] = { 201, 115, 75, 1 }, + [QCOM_RPM_PM8921_NCP] = { 202, 116, 80, 2 }, + [QCOM_RPM_CXO_BUFFERS] = { 204, 118, 81, 1 }, + [QCOM_RPM_USB_OTG_SWITCH] = { 205, 119, 82, 1 }, + [QCOM_RPM_HDMI_SWITCH] = { 206, 120, 83, 1 }, + [QCOM_RPM_DDR_DMM] = { 207, 121, 84, 2 }, +}; + +static const struct qcom_rpm_data msm8960_template = { + .version = 3, + .resource_table = msm8960_rpm_resource_table, + .n_resources = ARRAY_SIZE(msm8960_rpm_resource_table), +}; + +static const struct of_device_id qcom_rpm_of_match[] = { + { .compatible = "qcom,rpm-apq8064", .data = &apq8064_template }, + { .compatible = "qcom,rpm-msm8660", .data = 
&msm8660_template }, + { .compatible = "qcom,rpm-msm8960", .data = &msm8960_template }, + { } +}; +MODULE_DEVICE_TABLE(of, qcom_rpm_of_match); + +#define QCOM_RPM_STATUS_ID_SEQUENCE 7 +int qcom_rpm_read(struct qcom_rpm *rpm, + int resource, + u32 *val, + size_t count) +{ + const struct qcom_rpm_resource *res; + const struct qcom_rpm_data *data = rpm->data; + uint32_t seq_begin; + uint32_t seq_end; + int rc; + int i; + + if (WARN_ON(resource < 0 || resource >= data->n_resources)) + return -EINVAL; + + res = &data->resource_table[resource]; + if (WARN_ON(res->size != count)) + return -EINVAL; + + mutex_lock(&rpm->lock); + seq_begin = readl(RPM_STATUS_REG(rpm, QCOM_RPM_STATUS_ID_SEQUENCE)); + + for (i = 0; i < count; i++) + *val++ = readl(RPM_STATUS_REG(rpm, res->status_id)); + + seq_end = readl(RPM_STATUS_REG(rpm, QCOM_RPM_STATUS_ID_SEQUENCE)); + + rc = (seq_begin != seq_end || (seq_begin & 0x01)) ? -EBUSY : 0; + + mutex_unlock(&rpm->lock); + return rc; + +} +EXPORT_SYMBOL(qcom_rpm_read); + +int qcom_rpm_write(struct qcom_rpm *rpm, + int state, + int resource, + u32 *buf, size_t count) +{ + const struct qcom_rpm_resource *res; + const struct qcom_rpm_data *data = rpm->data; + u32 sel_mask[RPM_SELECT_SIZE] = { 0 }; + int left; + int ret = 0; + int i; + + if (WARN_ON(resource < 0 || resource >= data->n_resources)) + return -EINVAL; + + res = &data->resource_table[resource]; + if (WARN_ON(res->size != count)) + return -EINVAL; + + mutex_lock(&rpm->lock); + + for (i = 0; i < res->size; i++) + writel_relaxed(buf[i], RPM_REQ_REG(rpm, res->target_id + i)); + + bitmap_set((unsigned long *)sel_mask, res->select_id, 1); + for (i = 0; i < ARRAY_SIZE(sel_mask); i++) { + writel_relaxed(sel_mask[i], + RPM_CTRL_REG(rpm, RPM_REQ_SELECT + i)); + } + + writel_relaxed(BIT(state), RPM_CTRL_REG(rpm, RPM_REQUEST_CONTEXT)); + + reinit_completion(&rpm->ack); + regmap_write(rpm->ipc_regmap, rpm->ipc_offset, BIT(rpm->ipc_bit)); + + left = wait_for_completion_timeout(&rpm->ack, 
RPM_REQUEST_TIMEOUT); + if (!left) + ret = -ETIMEDOUT; + else if (rpm->ack_status & RPM_REJECTED) + ret = -EIO; + + mutex_unlock(&rpm->lock); + + return ret; +} +EXPORT_SYMBOL(qcom_rpm_write); + +static irqreturn_t qcom_rpm_ack_interrupt(int irq, void *dev) +{ + struct qcom_rpm *rpm = dev; + u32 ack; + int i; + + ack = readl_relaxed(RPM_CTRL_REG(rpm, RPM_ACK_CONTEXT)); + for (i = 0; i < RPM_SELECT_SIZE; i++) + writel_relaxed(0, RPM_CTRL_REG(rpm, RPM_ACK_SELECTOR + i)); + writel(0, RPM_CTRL_REG(rpm, RPM_ACK_CONTEXT)); + + if (ack & RPM_NOTIFICATION) { + dev_warn(rpm->dev, "ignoring notification!\n"); + } else { + rpm->ack_status = ack; + complete(&rpm->ack); + } + + return IRQ_HANDLED; +} + +static irqreturn_t qcom_rpm_err_interrupt(int irq, void *dev) +{ + struct qcom_rpm *rpm = dev; + + regmap_write(rpm->ipc_regmap, rpm->ipc_offset, BIT(rpm->ipc_bit)); + dev_err(rpm->dev, "RPM triggered fatal error\n"); + + return IRQ_HANDLED; +} + +static irqreturn_t qcom_rpm_wakeup_interrupt(int irq, void *dev) +{ + return IRQ_HANDLED; +} + +static int qcom_rpm_probe(struct platform_device *pdev) +{ + const struct of_device_id *match; + struct device_node *syscon_np; + struct resource *res; + struct qcom_rpm *rpm; + u32 fw_version[3]; + int irq_wakeup; + int irq_ack; + int irq_err; + int ret; + + rpm = devm_kzalloc(&pdev->dev, sizeof(*rpm), GFP_KERNEL); + if (!rpm) + return -ENOMEM; + + rpm->dev = &pdev->dev; + mutex_init(&rpm->lock); + init_completion(&rpm->ack); + + irq_ack = platform_get_irq_byname(pdev, "ack"); + if (irq_ack < 0) { + dev_err(&pdev->dev, "required ack interrupt missing\n"); + return irq_ack; + } + + irq_err = platform_get_irq_byname(pdev, "err"); + if (irq_err < 0) { + dev_err(&pdev->dev, "required err interrupt missing\n"); + return irq_err; + } + + irq_wakeup = platform_get_irq_byname(pdev, "wakeup"); + if (irq_wakeup < 0) { + dev_err(&pdev->dev, "required wakeup interrupt missing\n"); + return irq_wakeup; + } + + match = of_match_device(qcom_rpm_of_match, 
&pdev->dev); + rpm->data = match->data; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + rpm->status_regs = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(rpm->status_regs)) + return PTR_ERR(rpm->status_regs); + rpm->ctrl_regs = rpm->status_regs + 0x400; + rpm->req_regs = rpm->status_regs + 0x600; + + syscon_np = of_parse_phandle(pdev->dev.of_node, "qcom,ipc", 0); + if (!syscon_np) { + dev_err(&pdev->dev, "no qcom,ipc node\n"); + return -ENODEV; + } + + rpm->ipc_regmap = syscon_node_to_regmap(syscon_np); + if (IS_ERR(rpm->ipc_regmap)) + return PTR_ERR(rpm->ipc_regmap); + + ret = of_property_read_u32_index(pdev->dev.of_node, "qcom,ipc", 1, + &rpm->ipc_offset); + if (ret < 0) { + dev_err(&pdev->dev, "no offset in qcom,ipc\n"); + return -EINVAL; + } + + ret = of_property_read_u32_index(pdev->dev.of_node, "qcom,ipc", 2, + &rpm->ipc_bit); + if (ret < 0) { + dev_err(&pdev->dev, "no bit in qcom,ipc\n"); + return -EINVAL; + } + + dev_set_drvdata(&pdev->dev, rpm); + + fw_version[0] = readl(RPM_STATUS_REG(rpm, 0)); + fw_version[1] = readl(RPM_STATUS_REG(rpm, 1)); + fw_version[2] = readl(RPM_STATUS_REG(rpm, 2)); + if (fw_version[0] != rpm->data->version) { + dev_err(&pdev->dev, + "RPM version %u.%u.%u incompatible with driver version %u", + fw_version[0], + fw_version[1], + fw_version[2], + rpm->data->version); + return -EFAULT; + } + + dev_info(&pdev->dev, "RPM firmware %u.%u.%u\n", fw_version[0], + fw_version[1], + fw_version[2]); + + ret = devm_request_irq(&pdev->dev, + irq_ack, + qcom_rpm_ack_interrupt, + IRQF_TRIGGER_RISING | IRQF_NO_SUSPEND, + "qcom_rpm_ack", + rpm); + if (ret) { + dev_err(&pdev->dev, "failed to request ack interrupt\n"); + return ret; + } + + ret = irq_set_irq_wake(irq_ack, 1); + if (ret) + dev_warn(&pdev->dev, "failed to mark ack irq as wakeup\n"); + + ret = devm_request_irq(&pdev->dev, + irq_err, + qcom_rpm_err_interrupt, + IRQF_TRIGGER_RISING, + "qcom_rpm_err", + rpm); + if (ret) { + dev_err(&pdev->dev, "failed to request err 
interrupt\n"); + return ret; + } + + ret = devm_request_irq(&pdev->dev, + irq_wakeup, + qcom_rpm_wakeup_interrupt, + IRQF_TRIGGER_RISING, + "qcom_rpm_wakeup", + rpm); + if (ret) { + dev_err(&pdev->dev, "failed to request wakeup interrupt\n"); + return ret; + } + + ret = irq_set_irq_wake(irq_wakeup, 1); + if (ret) + dev_warn(&pdev->dev, "failed to mark wakeup irq as wakeup\n"); + + return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); +} + +static int qcom_rpm_remove(struct platform_device *pdev) +{ + of_platform_depopulate(&pdev->dev); + return 0; +} + +static struct platform_driver qcom_rpm_driver = { + .probe = qcom_rpm_probe, + .remove = qcom_rpm_remove, + .driver = { + .name = "qcom_rpm", + .of_match_table = qcom_rpm_of_match, + }, +}; + +static int __init qcom_rpm_init(void) +{ + return platform_driver_register(&qcom_rpm_driver); +} +arch_initcall(qcom_rpm_init); + +static void __exit qcom_rpm_exit(void) +{ + platform_driver_unregister(&qcom_rpm_driver); +} +module_exit(qcom_rpm_exit) + +MODULE_DESCRIPTION("Qualcomm Resource Power Manager driver"); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>"); diff --git a/drivers/mfd/ssbi.c b/drivers/mfd/ssbi.c index b78942ed4c67..4fbe02ed74da 100644 --- a/drivers/mfd/ssbi.c +++ b/drivers/mfd/ssbi.c @@ -331,7 +331,12 @@ static struct platform_driver ssbi_driver = { .of_match_table = ssbi_match_table, }, }; -module_platform_driver(ssbi_driver); + +static int ssbi_init(void) +{ + return platform_driver_register(&ssbi_driver); +} +subsys_initcall(ssbi_init); MODULE_LICENSE("GPL v2"); MODULE_VERSION("1.0"); diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 9166ae24ca90..557648c3c81e 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -544,8 +544,18 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host, if (host->card && mmc_card_mmc(host->card) && ((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) || 
(mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) && - (host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) + (host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) { + + /* Cancel the prepared request */ + if (areq) + mmc_post_req(host, areq->mrq, -EINVAL); + mmc_start_bkops(host->card, true); + + /* prepare the request again */ + if (areq) + mmc_pre_req(host, areq->mrq, !host->areq); + } } if (!err && areq) { diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c index 43af791e2e45..f31d70201ad7 100644 --- a/drivers/mmc/host/mmci.c +++ b/drivers/mmc/host/mmci.c @@ -78,6 +78,7 @@ static unsigned int fmax = 515633; * @qcom_fifo: enables qcom specific fifo pio read logic. * @qcom_dml: enables qcom specific dma glue for dma transfers. * @reversed_irq_handling: handle data irq before cmd irq. + * @any_blksize: true if block any sizes are supported */ struct variant_data { unsigned int clkreg; @@ -104,6 +105,7 @@ struct variant_data { bool qcom_fifo; bool qcom_dml; bool reversed_irq_handling; + bool any_blksize; }; static struct variant_data variant_arm = { @@ -200,6 +202,7 @@ static struct variant_data variant_ux500v2 = { .pwrreg_clkgate = true, .busy_detect = true, .pwrreg_nopower = true, + .any_blksize = true, }; static struct variant_data variant_qcom = { @@ -218,6 +221,7 @@ static struct variant_data variant_qcom = { .explicit_mclk_control = true, .qcom_fifo = true, .qcom_dml = true, + .any_blksize = true, }; static int mmci_card_busy(struct mmc_host *mmc) @@ -245,10 +249,11 @@ static int mmci_card_busy(struct mmc_host *mmc) static int mmci_validate_data(struct mmci_host *host, struct mmc_data *data) { + struct variant_data *variant = host->variant; + if (!data) return 0; - - if (!is_power_of_2(data->blksz)) { + if (!is_power_of_2(data->blksz) && !variant->any_blksize) { dev_err(mmc_dev(host->mmc), "unsupported block size (%d bytes)\n", data->blksz); return -EINVAL; @@ -736,8 +741,15 @@ static void mmci_post_request(struct mmc_host *mmc, struct 
mmc_request *mrq, chan = host->dma_tx_channel; dmaengine_terminate_all(chan); + if (host->dma_desc_current == next->dma_desc) + host->dma_desc_current = NULL; + + if (host->dma_current == next->dma_chan) + host->dma_current = NULL; + next->dma_desc = NULL; next->dma_chan = NULL; + data->host_cookie = 0; } } @@ -802,7 +814,6 @@ static void mmci_start_data(struct mmci_host *host, struct mmc_data *data) writel(host->size, base + MMCIDATALENGTH); blksz_bits = ffs(data->blksz) - 1; - BUG_ON(1 << blksz_bits != data->blksz); if (variant->blksz_datactrl16) datactrl = MCI_DPSM_ENABLE | (data->blksz << 16); diff --git a/drivers/pci/host/Makefile b/drivers/pci/host/Makefile index 26b3461d68d7..7225882bf4b4 100644 --- a/drivers/pci/host/Makefile +++ b/drivers/pci/host/Makefile @@ -11,3 +11,4 @@ obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o obj-$(CONFIG_PCI_XGENE) += pci-xgene.o +obj-$(CONFIG_ARCH_QCOM) += pci-qcom.o diff --git a/drivers/pci/host/pci-qcom.c b/drivers/pci/host/pci-qcom.c new file mode 100644 index 000000000000..2e77881b35cc --- /dev/null +++ b/drivers/pci/host/pci-qcom.c @@ -0,0 +1,981 @@ +/* Copyright (c) 2012-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +/* + * QCOM MSM PCIe controller driver. 
+ */ + +#include <linux/kernel.h> +#include <linux/pci.h> +#include <linux/gpio.h> +#include <linux/interrupt.h> +#include <linux/irq.h> +#include <linux/irqdomain.h> +#include <linux/of_gpio.h> +#include <linux/msi.h> +#include <linux/platform_device.h> +#include <linux/regulator/consumer.h> +#include <linux/of_address.h> +#include <linux/clk.h> +#include <linux/reset.h> +#include <linux/delay.h> + +#define INT_PCI_MSI_NR (8 * 32) +#define MSM_PCIE_MSI_PHY 0xa0000000 + +#define PCIE20_MSI_CTRL_ADDR (0x820) +#define PCIE20_MSI_CTRL_UPPER_ADDR (0x824) +#define PCIE20_MSI_CTRL_INTR_EN (0x828) +#define PCIE20_MSI_CTRL_INTR_MASK (0x82C) +#define PCIE20_MSI_CTRL_INTR_STATUS (0x830) + +#define PCIE20_MSI_CTRL_MAX 8 +/* Root Complex Port vendor/device IDs */ +#define PCIE_VENDOR_ID_RCP 0x17cb +#define PCIE_DEVICE_ID_RCP 0x0101 + +#define __set(v, a, b) (((v) << (b)) & GENMASK(a, b)) + +#define PCIE20_PARF_PCS_DEEMPH 0x34 +#define PCIE20_PARF_PCS_DEEMPH_TX_DEEMPH_GEN1(x) __set(x, 21, 16) +#define PCIE20_PARF_PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(x) __set(x, 13, 8) +#define PCIE20_PARF_PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(x) __set(x, 5, 0) + +#define PCIE20_PARF_PCS_SWING 0x38 +#define PCIE20_PARF_PCS_SWING_TX_SWING_FULL(x) __set(x, 14, 8) +#define PCIE20_PARF_PCS_SWING_TX_SWING_LOW(x) __set(x, 6, 0) + +#define PCIE20_PARF_PHY_CTRL 0x40 +#define PCIE20_PARF_PHY_CTRL_PHY_TX0_TERM_OFFST(x) __set(x, 20, 16) +#define PCIE20_PARF_PHY_CTRL_PHY_LOS_LEVEL(x) __set(x, 12, 8) +#define PCIE20_PARF_PHY_CTRL_PHY_RTUNE_REQ (1 << 4) +#define PCIE20_PARF_PHY_CTRL_PHY_TEST_BURNIN (1 << 2) +#define PCIE20_PARF_PHY_CTRL_PHY_TEST_BYPASS (1 << 1) +#define PCIE20_PARF_PHY_CTRL_PHY_TEST_PWR_DOWN (1 << 0) + +#define PCIE20_PARF_PHY_REFCLK 0x4C +#define PCIE20_PARF_CONFIG_BITS 0x50 + +#define PCIE20_ELBI_SYS_CTRL 0x04 +#define PCIE20_ELBI_SYS_CTRL_LTSSM_EN 0x01 + +#define PCIE20_CAP 0x70 +#define PCIE20_CAP_LINKCTRLSTATUS (PCIE20_CAP + 0x10) + +#define PCIE20_COMMAND_STATUS 0x04 +#define PCIE20_BUSNUMBERS 0x18 
+#define PCIE20_MEMORY_BASE_LIMIT 0x20 + +#define PCIE20_AXI_MSTR_RESP_COMP_CTRL0 0x818 +#define PCIE20_AXI_MSTR_RESP_COMP_CTRL1 0x81c +#define PCIE20_PLR_IATU_VIEWPORT 0x900 +#define PCIE20_PLR_IATU_CTRL1 0x904 +#define PCIE20_PLR_IATU_CTRL2 0x908 +#define PCIE20_PLR_IATU_LBAR 0x90C +#define PCIE20_PLR_IATU_UBAR 0x910 +#define PCIE20_PLR_IATU_LAR 0x914 +#define PCIE20_PLR_IATU_LTAR 0x918 +#define PCIE20_PLR_IATU_UTAR 0x91c + +#define MSM_PCIE_DEV_CFG_ADDR 0x01000000 + +#define RD 0 +#define WR 1 + +#define MAX_RC_NUM 3 +#define PCIE_BUS_PRIV_DATA(pdev) \ + (((struct pci_sys_data *)pdev->bus->sysdata)->private_data) + +/* PCIe TLP types that we are interested in */ +#define PCI_CFG0_RDWR 0x4 +#define PCI_CFG1_RDWR 0x5 + +#define readl_poll_timeout(addr, val, cond, sleep_us, timeout_us) \ +({ \ + unsigned long timeout = jiffies + usecs_to_jiffies(timeout_us); \ + might_sleep_if(timeout_us); \ + for (;;) { \ + (val) = readl(addr); \ + if (cond) \ + break; \ + if (timeout_us && time_after(jiffies, timeout)) { \ + (val) = readl(addr); \ + break; \ + } \ + if (sleep_us) \ + usleep_range(DIV_ROUND_UP(sleep_us, 4), sleep_us); \ + } \ + (cond) ? 
0 : -ETIMEDOUT; \ +}) + +struct qcom_msi { + struct msi_chip chip; + DECLARE_BITMAP(used, INT_PCI_MSI_NR); + struct irq_domain *domain; + unsigned long pages; + struct mutex lock; + int irq; +}; + +struct qcom_pcie { + void __iomem *elbi_base; + void __iomem *parf_base; + void __iomem *dwc_base; + void __iomem *cfg_base; + struct device *dev; + int reset_gpio; + bool ext_phy_ref_clk; + struct clk *iface_clk; + struct clk *bus_clk; + struct clk *phy_clk; + int irq_int[4]; + struct reset_control *axi_reset; + struct reset_control *ahb_reset; + struct reset_control *por_reset; + struct reset_control *pci_reset; + struct reset_control *phy_reset; + + struct resource conf; + struct resource io; + struct resource mem; + + struct regulator *vdd_supply; + struct regulator *avdd_supply; + struct regulator *pcie_clk_supply; + struct regulator *pcie_ext3p3v_supply; + + struct qcom_msi msi; +}; + +static int nr_controllers; +static DEFINE_SPINLOCK(qcom_hw_pci_lock); + +static inline struct qcom_pcie *sys_to_pcie(struct pci_sys_data *sys) +{ + return sys->private_data; +} + +static void qcom_pcie_add_bus(struct pci_bus *bus) +{ + if (IS_ENABLED(CONFIG_PCI_MSI)) { + struct qcom_pcie *pcie = sys_to_pcie(bus->sysdata); + + bus->msi = &pcie->msi.chip; + } +} + +inline int is_msm_pcie_rc(struct pci_bus *bus) +{ + return (bus->number == 0); +} + +static int qcom_pcie_is_link_up(struct qcom_pcie *dev) +{ + return readl_relaxed(dev->dwc_base + PCIE20_CAP_LINKCTRLSTATUS) & + BIT(29); +} + +inline int msm_pcie_get_cfgtype(struct pci_bus *bus) +{ + /* + * http://www.tldp.org/LDP/tlk/dd/pci.html + * Pass it onto the secondary bus interface unchanged if the + * bus number specified is greater than the secondary bus + * number and less than or equal to the subordinate bus + * number. + * + * Read/Write to the RC and Device/Switch connected to the RC + * are CFG0 type transactions. Rest have to be forwarded + * down stream as CFG1 transactions. 
+ * + */ + if (bus->number == 0) + return PCI_CFG0_RDWR; + + return PCI_CFG1_RDWR; +} + +void msm_pcie_config_cfgtype(struct pci_bus *bus, u32 devfn) +{ + uint32_t bdf, cfgtype; + struct qcom_pcie *dev = sys_to_pcie(bus->sysdata); + + cfgtype = msm_pcie_get_cfgtype(bus); + + if (cfgtype == PCI_CFG0_RDWR) { + bdf = MSM_PCIE_DEV_CFG_ADDR; + } else { + /* + * iATU Lower Target Address Register + * Bits Description + * *-1:0 Forms bits [*:0] of the + * start address of the new + * address of the translated + * region. The start address + * must be aligned to a + * CX_ATU_MIN_REGION_SIZE kB + * boundary, so these bits are + * always 0. A write to this + * location is ignored by the + * PCIe core. + * 31:* Forms bits [31:*] of the + * new address of the + * translated region. + * + * * is log2(CX_ATU_MIN_REGION_SIZE) + */ + bdf = (((bus->number & 0xff) << 24) & 0xff000000) | + (((devfn & 0xff) << 16) & 0x00ff0000); + } + + writel_relaxed(0, dev->dwc_base + PCIE20_PLR_IATU_VIEWPORT); + wmb(); + + /* Program Bdf Address */ + writel_relaxed(bdf, dev->dwc_base + PCIE20_PLR_IATU_LTAR); + wmb(); + + /* Write Config Request Type */ + writel_relaxed(cfgtype, dev->dwc_base + PCIE20_PLR_IATU_CTRL1); + wmb(); +} + +static inline int msm_pcie_oper_conf(struct pci_bus *bus, u32 devfn, int oper, + int where, int size, u32 *val) +{ + uint32_t word_offset, byte_offset, mask; + uint32_t rd_val, wr_val; + struct qcom_pcie *dev = sys_to_pcie(bus->sysdata); + void __iomem *config_base; + int rc; + + rc = is_msm_pcie_rc(bus); + + /* + * For downstream bus, make sure link is up + */ + if (rc && (devfn != 0)) { + *val = ~0; + return PCIBIOS_DEVICE_NOT_FOUND; + } else if ((!rc) && (!qcom_pcie_is_link_up(dev))) { + *val = ~0; + return PCIBIOS_DEVICE_NOT_FOUND; + } + + msm_pcie_config_cfgtype(bus, devfn); + + word_offset = where & ~0x3; + byte_offset = where & 0x3; + mask = (~0 >> (8 * (4 - size))) << (8 * byte_offset); + + config_base = (rc) ?
dev->dwc_base : dev->cfg_base; + rd_val = readl_relaxed(config_base + word_offset); + + if (oper == RD) { + *val = ((rd_val & mask) >> (8 * byte_offset)); + } else { + wr_val = (rd_val & ~mask) | + ((*val << (8 * byte_offset)) & mask); + writel_relaxed(wr_val, config_base + word_offset); + wmb(); /* ensure config data is written to hardware register */ + } + + return 0; +} + +static int msm_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where, + int size, u32 *val) +{ + return msm_pcie_oper_conf(bus, devfn, RD, where, size, val); +} + +static int msm_pcie_wr_conf(struct pci_bus *bus, u32 devfn, + int where, int size, u32 val) +{ + /* + * An attempt to reset the secondary bus causes the PCIe core to reset; + * disable secondary bus reset functionality. + */ + if ((bus->number == 0) && (where == PCI_BRIDGE_CONTROL) && + (val & PCI_BRIDGE_CTL_BUS_RESET)) { + pr_info("PCIE secondary bus reset not supported\n"); + val &= ~PCI_BRIDGE_CTL_BUS_RESET; + } + + return msm_pcie_oper_conf(bus, devfn, WR, where, size, &val); +} + +static int qcom_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) +{ + struct qcom_pcie *pcie_dev = PCIE_BUS_PRIV_DATA(dev); + + return pcie_dev->irq_int[pin-1]; +} + +static int qcom_pcie_setup(int nr, struct pci_sys_data *sys) +{ + struct qcom_pcie *qcom_pcie = sys->private_data; + + /* + * tell the Linux PCI framework to allocate device memory (BARs) + * from the msm_pcie_dev.dev_mem_res resource.
+ */ + sys->mem_offset = 0; + sys->io_offset = 0; + + pci_add_resource(&sys->resources, &qcom_pcie->mem); + pci_add_resource(&sys->resources, &qcom_pcie->io); + + return 1; +} + +static struct pci_ops qcom_pcie_ops = { + .read = msm_pcie_rd_conf, + .write = msm_pcie_wr_conf, +}; + +static struct hw_pci qcom_hw_pci[MAX_RC_NUM] = { + { +#ifdef CONFIG_PCI_DOMAINS + .domain = 0, +#endif + .ops = &qcom_pcie_ops, + .nr_controllers = 1, + .swizzle = pci_common_swizzle, + .setup = qcom_pcie_setup, + .map_irq = qcom_pcie_map_irq, + .add_bus = qcom_pcie_add_bus, + }, + { +#ifdef CONFIG_PCI_DOMAINS + .domain = 1, +#endif + .ops = &qcom_pcie_ops, + .nr_controllers = 1, + .swizzle = pci_common_swizzle, + .setup = qcom_pcie_setup, + .map_irq = qcom_pcie_map_irq, + }, + { +#ifdef CONFIG_PCI_DOMAINS + .domain = 2, +#endif + .ops = &qcom_pcie_ops, + .nr_controllers = 1, + .swizzle = pci_common_swizzle, + .setup = qcom_pcie_setup, + .map_irq = qcom_pcie_map_irq, + }, +}; + +static inline void qcom_elbi_writel_relaxed(struct qcom_pcie *pcie, + u32 val, u32 reg) +{ + writel_relaxed(val, pcie->elbi_base + reg); +} + +static inline u32 qcom_elbi_readl_relaxed(struct qcom_pcie *pcie, u32 reg) +{ + return readl_relaxed(pcie->elbi_base + reg); +} + +static inline void qcom_parf_writel_relaxed(struct qcom_pcie *pcie, + u32 val, u32 reg) +{ + writel_relaxed(val, pcie->parf_base + reg); +} + +static inline u32 qcom_parf_readl_relaxed(struct qcom_pcie *pcie, u32 reg) +{ + return readl_relaxed(pcie->parf_base + reg); +} + +static void msm_pcie_write_mask(void __iomem *addr, + uint32_t clear_mask, uint32_t set_mask) +{ + uint32_t val; + + val = (readl_relaxed(addr) & ~clear_mask) | set_mask; + writel_relaxed(val, addr); + wmb(); /* ensure data is written to hardware register */ +} + +static void qcom_pcie_config_controller(struct qcom_pcie *dev) +{ + /* + * program and enable address translation region 0 (device config + * address space); region type config; + * axi config address range to 
device config address range + */ + writel_relaxed(0, dev->dwc_base + PCIE20_PLR_IATU_VIEWPORT); + /* ensure that hardware locks the region before programming it */ + wmb(); + + writel_relaxed(4, dev->dwc_base + PCIE20_PLR_IATU_CTRL1); + writel_relaxed(BIT(31), dev->dwc_base + PCIE20_PLR_IATU_CTRL2); + writel_relaxed(dev->conf.start, dev->dwc_base + PCIE20_PLR_IATU_LBAR); + writel_relaxed(0, dev->dwc_base + PCIE20_PLR_IATU_UBAR); + writel_relaxed(dev->conf.end, dev->dwc_base + PCIE20_PLR_IATU_LAR); + writel_relaxed(MSM_PCIE_DEV_CFG_ADDR, + dev->dwc_base + PCIE20_PLR_IATU_LTAR); + writel_relaxed(0, dev->dwc_base + PCIE20_PLR_IATU_UTAR); + /* ensure that hardware registers the configuration */ + wmb(); + + /* + * program and enable address translation region 2 (device resource + * address space); region type memory; + * axi device bar address range to device bar address range + */ + writel_relaxed(2, dev->dwc_base + PCIE20_PLR_IATU_VIEWPORT); + /* ensure that hardware locks the region before programming it */ + wmb(); + + writel_relaxed(0, dev->dwc_base + PCIE20_PLR_IATU_CTRL1); + writel_relaxed(BIT(31), dev->dwc_base + PCIE20_PLR_IATU_CTRL2); + writel_relaxed(dev->mem.start, dev->dwc_base + PCIE20_PLR_IATU_LBAR); + writel_relaxed(0, dev->dwc_base + PCIE20_PLR_IATU_UBAR); + writel_relaxed(dev->mem.end, dev->dwc_base + PCIE20_PLR_IATU_LAR); + writel_relaxed(dev->mem.start, + dev->dwc_base + PCIE20_PLR_IATU_LTAR); + writel_relaxed(0, dev->dwc_base + PCIE20_PLR_IATU_UTAR); + /* ensure that hardware registers the configuration */ + wmb(); + + /* 1K PCIE buffer setting */ + writel_relaxed(0x3, dev->dwc_base + PCIE20_AXI_MSTR_RESP_COMP_CTRL0); + writel_relaxed(0x1, dev->dwc_base + PCIE20_AXI_MSTR_RESP_COMP_CTRL1); + /* ensure that hardware registers the configuration */ + wmb(); +} + +static int qcom_msi_alloc(struct qcom_msi *chip) +{ + int msi; + + mutex_lock(&chip->lock); + + msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR); + if (msi < INT_PCI_MSI_NR) + 
set_bit(msi, chip->used); + else + msi = -ENOSPC; + + mutex_unlock(&chip->lock); + + return msi; +} + +static void qcom_msi_free(struct qcom_msi *chip, unsigned long irq) +{ + struct device *dev = chip->chip.dev; + + mutex_lock(&chip->lock); + + if (!test_bit(irq, chip->used)) + dev_err(dev, "trying to free unused MSI#%lu\n", irq); + else + clear_bit(irq, chip->used); + + mutex_unlock(&chip->lock); +} + + +static irqreturn_t handle_msi_irq(int irq, void *data) +{ + int i, j, index; + unsigned long val; + struct qcom_pcie *dev = data; + void __iomem *ctrl_status; + struct qcom_msi *msi = &dev->msi; + + /* check for set bits, clear it by setting that bit + and trigger corresponding irq */ + for (i = 0; i < PCIE20_MSI_CTRL_MAX; i++) { + ctrl_status = dev->dwc_base + + PCIE20_MSI_CTRL_INTR_STATUS + (i * 12); + + val = readl_relaxed(ctrl_status); + while (val) { + j = find_first_bit(&val, 32); + index = j + (32 * i); + writel_relaxed(BIT(j), ctrl_status); + /* ensure that interrupt is cleared (acked) */ + wmb(); + + irq = irq_find_mapping(msi->domain, index); + if (irq) { + if (test_bit(index, msi->used)) + generic_handle_irq(irq); + else + dev_info(dev->dev, "unhandled MSI\n"); + } + val = readl_relaxed(ctrl_status); + } + } + + return IRQ_HANDLED; +} + +static inline struct qcom_msi *to_qcom_msi(struct msi_chip *chip) +{ + return container_of(chip, struct qcom_msi, chip); +} + +static int qcom_msi_setup_irq(struct msi_chip *chip, struct pci_dev *pdev, + struct msi_desc *desc) +{ + struct qcom_msi *msi = to_qcom_msi(chip); + struct msi_msg msg; + unsigned int irq; + int hwirq; + + hwirq = qcom_msi_alloc(msi); + if (hwirq < 0) + return hwirq; + + irq = irq_create_mapping(msi->domain, hwirq); + if (!irq) + return -EINVAL; + + irq_set_msi_desc(irq, desc); + + msg.address_lo = MSM_PCIE_MSI_PHY; + /* 32 bit address only */ + msg.address_hi = 0; + msg.data = hwirq; + + write_msi_msg(irq, &msg); + + return 0; +} + +static void qcom_msi_teardown_irq(struct msi_chip *chip, 
unsigned int irq) +{ + struct qcom_msi *msi = to_qcom_msi(chip); + struct irq_data *d = irq_get_irq_data(irq); + + qcom_msi_free(msi, d->hwirq); +} + +static struct irq_chip qcom_msi_irq_chip = { + .name = "PCI-MSI", + .irq_enable = unmask_msi_irq, + .irq_disable = mask_msi_irq, + .irq_mask = mask_msi_irq, + .irq_unmask = unmask_msi_irq, +}; + + +static int qcom_pcie_msi_map(struct irq_domain *domain, unsigned int irq, + irq_hw_number_t hwirq) +{ + irq_set_chip_and_handler(irq, &qcom_msi_irq_chip, handle_simple_irq); + irq_set_chip_data(irq, domain->host_data); + set_irq_flags(irq, IRQF_VALID); + + return 0; +} + + +static const struct irq_domain_ops msi_domain_ops = { + .map = qcom_pcie_msi_map, +}; +uint32_t msm_pcie_msi_init(struct qcom_pcie *pcie, struct platform_device *pdev) +{ + int i, rc; + struct qcom_msi *msi = &pcie->msi; + int err; + + mutex_init(&msi->lock); + + msi->chip.dev = pcie->dev; + msi->chip.setup_irq = qcom_msi_setup_irq; + msi->chip.teardown_irq = qcom_msi_teardown_irq; + msi->domain = irq_domain_add_linear(pdev->dev.of_node, INT_PCI_MSI_NR, + &msi_domain_ops, &msi->chip); + if (!msi->domain) { + dev_err(&pdev->dev, "failed to create IRQ domain\n"); + return -ENOMEM; + } + + + err = platform_get_irq_byname(pdev, "msi"); + if (err < 0) { + dev_err(&pdev->dev, "failed to get IRQ: %d\n", err); + return err; + } + + msi->irq = err; + + /* program MSI controller and enable all interrupts */ + writel_relaxed(MSM_PCIE_MSI_PHY, pcie->dwc_base + PCIE20_MSI_CTRL_ADDR); + writel_relaxed(0, pcie->dwc_base + PCIE20_MSI_CTRL_UPPER_ADDR); + + for (i = 0; i < PCIE20_MSI_CTRL_MAX; i++) + writel_relaxed(~0, pcie->dwc_base + + PCIE20_MSI_CTRL_INTR_EN + (i * 12)); + + /* ensure that hardware is configured before proceeding */ + wmb(); + + /* register handler for physical MSI interrupt line */ + rc = request_irq(msi->irq, handle_msi_irq, IRQF_TRIGGER_RISING, + "msm_pcie_msi", pcie); + if (rc) { + pr_err("Unable to allocate msi interrupt\n"); + return rc; + } + + 
return rc; +} + +static int qcom_pcie_vreg_on(struct qcom_pcie *qcom_pcie) +{ + int err; + /* enable regulators */ + err = regulator_enable(qcom_pcie->vdd_supply); + if (err < 0) { + dev_err(qcom_pcie->dev, "failed to enable VDD regulator\n"); + return err; + } + + err = regulator_enable(qcom_pcie->pcie_clk_supply); + if (err < 0) { + dev_err(qcom_pcie->dev, "failed to enable pcie-clk regulator\n"); + return err; + } + + err = regulator_enable(qcom_pcie->avdd_supply); + if (err < 0) { + dev_err(qcom_pcie->dev, "failed to enable AVDD regulator\n"); + return err; + } + + err = regulator_enable(qcom_pcie->pcie_ext3p3v_supply); + if (err < 0) { + dev_err(qcom_pcie->dev, "failed to enable pcie_ext3p3v regulator\n"); + return err; + } + + return err; + +} + +static int qcom_pcie_parse_dt(struct qcom_pcie *qcom_pcie, + struct platform_device *pdev) +{ + struct device_node *np = pdev->dev.of_node; + struct resource *elbi_base, *parf_base, *dwc_base; + struct of_pci_range range; + struct of_pci_range_parser parser; + int ret, i; + + qcom_pcie->ext_phy_ref_clk = of_property_read_bool(np, + "qcom,external-phy-refclk"); + + elbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); + qcom_pcie->elbi_base = devm_ioremap_resource(&pdev->dev, elbi_base); + if (IS_ERR(qcom_pcie->elbi_base)) { + dev_err(&pdev->dev, "Failed to ioremap elbi space\n"); + return PTR_ERR(qcom_pcie->elbi_base); + } + + parf_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "parf"); + qcom_pcie->parf_base = devm_ioremap_resource(&pdev->dev, parf_base); + if (IS_ERR(qcom_pcie->parf_base)) { + dev_err(&pdev->dev, "Failed to ioremap parf space\n"); + return PTR_ERR(qcom_pcie->parf_base); + } + + dwc_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "base"); + qcom_pcie->dwc_base = devm_ioremap_resource(&pdev->dev, dwc_base); + if (IS_ERR(qcom_pcie->dwc_base)) { + dev_err(&pdev->dev, "Failed to ioremap dwc_base space\n"); + return PTR_ERR(qcom_pcie->dwc_base); + } + + if 
(of_pci_range_parser_init(&parser, np)) { + dev_err(&pdev->dev, "missing ranges property\n"); + return -EINVAL; + } + + /* Get the I/O and memory ranges from DT */ + for_each_of_pci_range(&parser, &range) { + switch (range.pci_space & 0x3) { + case 0: /* cfg */ + of_pci_range_to_resource(&range, np, &qcom_pcie->conf); + qcom_pcie->conf.flags = IORESOURCE_MEM; + break; + case 1: /* io */ + of_pci_range_to_resource(&range, np, &qcom_pcie->io); + break; + default: /* mem */ + of_pci_range_to_resource(&range, np, &qcom_pcie->mem); + break; + } + } + + qcom_pcie->vdd_supply = devm_regulator_get(&pdev->dev, "vdd"); + if (IS_ERR(qcom_pcie->vdd_supply)) { + dev_err(&pdev->dev, "Failed to get vdd supply\n"); + return PTR_ERR(qcom_pcie->vdd_supply); + } + + qcom_pcie->pcie_clk_supply = devm_regulator_get(&pdev->dev, "pcie-clk"); + if (IS_ERR(qcom_pcie->pcie_clk_supply)) { + dev_err(&pdev->dev, "Failed to get pcie clk supply\n"); + return PTR_ERR(qcom_pcie->pcie_clk_supply); + } + qcom_pcie->avdd_supply = devm_regulator_get(&pdev->dev, "avdd"); + if (IS_ERR(qcom_pcie->avdd_supply)) { + dev_err(&pdev->dev, "Failed to get avdd supply\n"); + return PTR_ERR(qcom_pcie->avdd_supply); + } + + qcom_pcie->pcie_ext3p3v_supply = devm_regulator_get(&pdev->dev, + "ext-3p3v"); + if (IS_ERR(qcom_pcie->pcie_ext3p3v_supply)) { + dev_err(&pdev->dev, "Failed to get pcie_ext3p3v supply\n"); + return PTR_ERR(qcom_pcie->pcie_ext3p3v_supply); + } + + qcom_pcie->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0); + if (!gpio_is_valid(qcom_pcie->reset_gpio)) { + dev_err(&pdev->dev, "pcie reset gpio is not valid\n"); + return -EINVAL; + } + + ret = devm_gpio_request_one(&pdev->dev, qcom_pcie->reset_gpio, + GPIOF_DIR_OUT, "pcie_reset"); + if (ret) { + dev_err(&pdev->dev, "Failed to request pcie reset gpio\n"); + return ret; + } + + qcom_pcie->iface_clk = devm_clk_get(&pdev->dev, "iface"); + if (IS_ERR(qcom_pcie->iface_clk)) { + dev_err(&pdev->dev, "Failed to get pcie iface clock\n"); + return 
PTR_ERR(qcom_pcie->iface_clk); + } + + qcom_pcie->phy_clk = devm_clk_get(&pdev->dev, "phy"); + if (IS_ERR(qcom_pcie->phy_clk)) { + dev_err(&pdev->dev, "Failed to get pcie phy clock\n"); + return PTR_ERR(qcom_pcie->phy_clk); + } + + qcom_pcie->bus_clk = devm_clk_get(&pdev->dev, "core"); + if (IS_ERR(qcom_pcie->bus_clk)) { + dev_err(&pdev->dev, "Failed to get pcie core clock\n"); + return PTR_ERR(qcom_pcie->bus_clk); + } + + qcom_pcie->axi_reset = devm_reset_control_get(&pdev->dev, "axi"); + if (IS_ERR(qcom_pcie->axi_reset)) { + dev_err(&pdev->dev, "Failed to get axi reset\n"); + return PTR_ERR(qcom_pcie->axi_reset); + } + + qcom_pcie->ahb_reset = devm_reset_control_get(&pdev->dev, "ahb"); + if (IS_ERR(qcom_pcie->ahb_reset)) { + dev_err(&pdev->dev, "Failed to get ahb reset\n"); + return PTR_ERR(qcom_pcie->ahb_reset); + } + + qcom_pcie->por_reset = devm_reset_control_get(&pdev->dev, "por"); + if (IS_ERR(qcom_pcie->por_reset)) { + dev_err(&pdev->dev, "Failed to get por reset\n"); + return PTR_ERR(qcom_pcie->por_reset); + } + + qcom_pcie->pci_reset = devm_reset_control_get(&pdev->dev, "pci"); + if (IS_ERR(qcom_pcie->pci_reset)) { + dev_err(&pdev->dev, "Failed to get pci reset\n"); + return PTR_ERR(qcom_pcie->pci_reset); + } + + qcom_pcie->phy_reset = devm_reset_control_get(&pdev->dev, "phy"); + if (IS_ERR(qcom_pcie->phy_reset)) { + dev_err(&pdev->dev, "Failed to get phy reset\n"); + return PTR_ERR(qcom_pcie->phy_reset); + } + + for (i = 0; i < 4; i++) { + qcom_pcie->irq_int[i] = platform_get_irq(pdev, i+1); + if (qcom_pcie->irq_int[i] < 0) { + dev_err(&pdev->dev, "failed to get irq resource\n"); + return qcom_pcie->irq_int[i]; + } + } + + return 0; +} + +static int qcom_pcie_probe(struct platform_device *pdev) +{ + unsigned long flags; + struct qcom_pcie *qcom_pcie; + struct hw_pci *hw; + int ret; + u32 val; + + qcom_pcie = devm_kzalloc(&pdev->dev, sizeof(*qcom_pcie), GFP_KERNEL); + if (!qcom_pcie) { + dev_err(&pdev->dev, "no memory for qcom_pcie\n"); + return -ENOMEM; 
+ } + qcom_pcie->dev = &pdev->dev; + + ret = qcom_pcie_parse_dt(qcom_pcie, pdev); + if (IS_ERR_VALUE(ret)) + return ret; + + qcom_pcie->cfg_base = devm_ioremap_resource(&pdev->dev, + &qcom_pcie->conf); + if (IS_ERR(qcom_pcie->cfg_base)) { + dev_err(&pdev->dev, "Failed to ioremap PCIe cfg space\n"); + return PTR_ERR(qcom_pcie->cfg_base); + } + + gpio_set_value(qcom_pcie->reset_gpio, 0); + usleep_range(10000, 15000); + + /* enable power */ + qcom_pcie_vreg_on(qcom_pcie); + /* assert PCIe PARF reset while powering the core */ + reset_control_assert(qcom_pcie->ahb_reset); + + /* enable clocks */ + ret = clk_prepare_enable(qcom_pcie->iface_clk); + if (ret) + return ret; + ret = clk_prepare_enable(qcom_pcie->phy_clk); + if (ret) + return ret; + ret = clk_prepare_enable(qcom_pcie->bus_clk); + if (ret) + return ret; + + /* + * de-assert PCIe PARF reset; + * wait 1us before accessing PARF registers + */ + reset_control_deassert(qcom_pcie->ahb_reset); + udelay(1); + + /* enable PCIe clocks and resets */ + msm_pcie_write_mask(qcom_pcie->parf_base + PCIE20_PARF_PHY_CTRL, + BIT(0), 0); + + /* Set Tx Termination Offset */ + val = qcom_parf_readl_relaxed(qcom_pcie, PCIE20_PARF_PHY_CTRL); + val |= PCIE20_PARF_PHY_CTRL_PHY_TX0_TERM_OFFST(7); + qcom_parf_writel_relaxed(qcom_pcie, val, PCIE20_PARF_PHY_CTRL); + + /* PARF programming */ + qcom_parf_writel_relaxed(qcom_pcie, + PCIE20_PARF_PCS_DEEMPH_TX_DEEMPH_GEN1(0x18) | + PCIE20_PARF_PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(0x18) | + PCIE20_PARF_PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(0x22), + PCIE20_PARF_PCS_DEEMPH); + qcom_parf_writel_relaxed(qcom_pcie, + PCIE20_PARF_PCS_SWING_TX_SWING_FULL(0x78) | + PCIE20_PARF_PCS_SWING_TX_SWING_LOW(0x78), + PCIE20_PARF_PCS_SWING); + qcom_parf_writel_relaxed(qcom_pcie, (4<<24), PCIE20_PARF_CONFIG_BITS); + /* ensure that hardware registers the PARF configuration */ + wmb(); + + /* enable reference clock */ + msm_pcie_write_mask(qcom_pcie->parf_base + PCIE20_PARF_PHY_REFCLK, + qcom_pcie->ext_phy_ref_clk ? 
0 : BIT(12), + BIT(16)); + + /* ensure that access is enabled before proceeding */ + wmb(); + + /* de-assert PCIe PHY, Core, POR and AXI clk domain resets */ + reset_control_deassert(qcom_pcie->phy_reset); + reset_control_deassert(qcom_pcie->pci_reset); + reset_control_deassert(qcom_pcie->por_reset); + reset_control_deassert(qcom_pcie->axi_reset); + + /* wait for clock acquisition */ + usleep_range(10000, 15000); + + /* de-assert PCIe reset to bring the EP out of reset */ + gpio_set_value(qcom_pcie->reset_gpio, 1); + usleep_range(10000, 15000); + + /* enable link training */ + val = qcom_elbi_readl_relaxed(qcom_pcie, PCIE20_ELBI_SYS_CTRL); + val |= PCIE20_ELBI_SYS_CTRL_LTSSM_EN; + qcom_elbi_writel_relaxed(qcom_pcie, val, PCIE20_ELBI_SYS_CTRL); + wmb(); + + /* poll for up to 100ms for the link to come up */ + ret = readl_poll_timeout( + (qcom_pcie->dwc_base + PCIE20_CAP_LINKCTRLSTATUS), + val, (val & BIT(29)), 10000, 100000); + + dev_info(&pdev->dev, "link initialized %d\n", ret); + + qcom_pcie_config_controller(qcom_pcie); + + platform_set_drvdata(pdev, qcom_pcie); + + spin_lock_irqsave(&qcom_hw_pci_lock, flags); + qcom_hw_pci[nr_controllers].private_data = (void **)&qcom_pcie; + hw = &qcom_hw_pci[nr_controllers]; + nr_controllers++; + spin_unlock_irqrestore(&qcom_hw_pci_lock, flags); + + pci_common_init(hw); + + msm_pcie_msi_init(qcom_pcie, pdev); + return 0; +} + +static int __exit qcom_pcie_remove(struct platform_device *pdev) +{ + return 0; +} + +static struct of_device_id qcom_pcie_match[] = { + { .compatible = "qcom,pcie-ipq8064", }, + {} +}; + +static struct platform_driver qcom_pcie_driver = { + .probe = qcom_pcie_probe, + .remove = qcom_pcie_remove, + .driver = { + .name = "qcom_pcie", + .owner = THIS_MODULE, + .of_match_table = qcom_pcie_match, + }, +}; + +static int qcom_pcie_init(void) +{ + return platform_driver_register(&qcom_pcie_driver); +} +subsys_initcall_sync(qcom_pcie_init); + +/* The RC does not report the right class; set it to
PCI_CLASS_BRIDGE_PCI */ +static void msm_pcie_fixup_early(struct pci_dev *dev) +{ + if (dev->hdr_type == 1) + dev->class = (dev->class & 0xff) | (PCI_CLASS_BRIDGE_PCI << 8); +} +DECLARE_PCI_FIXUP_EARLY(PCIE_VENDOR_ID_RCP, PCIE_DEVICE_ID_RCP, msm_pcie_fixup_early); diff --git a/drivers/pinctrl/qcom/Kconfig b/drivers/pinctrl/qcom/Kconfig index 81275af9638b..295b2af4369b 100644 --- a/drivers/pinctrl/qcom/Kconfig +++ b/drivers/pinctrl/qcom/Kconfig @@ -47,4 +47,28 @@ config PINCTRL_MSM8X74 This is the pinctrl, pinmux, pinconf and gpiolib driver for the Qualcomm TLMM block found in the Qualcomm 8974 platform. +config PINCTRL_QCOM_SPMI_PMIC + tristate "Qualcomm SPMI PMIC pin controller driver" + depends on GPIOLIB && OF && SPMI + select REGMAP_SPMI + select PINMUX + select PINCONF + select GENERIC_PINCONF + help + This is the pinctrl, pinmux, pinconf and gpiolib driver for the + Qualcomm GPIO and MPP blocks found in Qualcomm PMIC chips that + use SPMI to communicate with the SoC. Examples of such PMICs are + pm8841, pm8941 and pma8084. + +config PINCTRL_SSBI_PMIC + tristate "Qualcomm SSBI PMIC pin controller driver" + depends on GPIOLIB && OF && MFD_PM8921_CORE + select PINCONF + select PINMUX + select GENERIC_PINCONF + help + This is the pinctrl, pinmux, pinconf and gpiolib driver for the + Qualcomm GPIO blocks found in the pm8018, pm8038, pm8058, pm8917 and + pm8921 PMICs from Qualcomm.
+ endif diff --git a/drivers/pinctrl/qcom/Makefile b/drivers/pinctrl/qcom/Makefile index ba8519fcd8d3..27485f41a225 100644 --- a/drivers/pinctrl/qcom/Makefile +++ b/drivers/pinctrl/qcom/Makefile @@ -5,3 +5,6 @@ obj-$(CONFIG_PINCTRL_APQ8084) += pinctrl-apq8084.o obj-$(CONFIG_PINCTRL_IPQ8064) += pinctrl-ipq8064.o obj-$(CONFIG_PINCTRL_MSM8960) += pinctrl-msm8960.o obj-$(CONFIG_PINCTRL_MSM8X74) += pinctrl-msm8x74.o +obj-$(CONFIG_PINCTRL_QCOM_SPMI_PMIC) += pinctrl-spmi-gpio.o +obj-$(CONFIG_PINCTRL_QCOM_SPMI_PMIC) += pinctrl-spmi-mpp.o +obj-$(CONFIG_PINCTRL_SSBI_PMIC) += pinctrl-ssbi-pmic.o diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c new file mode 100644 index 000000000000..b863b5080890 --- /dev/null +++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c @@ -0,0 +1,933 @@ +/* + * Copyright (c) 2012-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/gpio.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/pinctrl/pinconf-generic.h> +#include <linux/pinctrl/pinconf.h> +#include <linux/pinctrl/pinmux.h> +#include <linux/platform_device.h> +#include <linux/regmap.h> +#include <linux/slab.h> +#include <linux/types.h> + +#include <dt-bindings/pinctrl/qcom,pmic-gpio.h> + +#include "../core.h" +#include "../pinctrl-utils.h" + +#define PMIC_GPIO_ADDRESS_RANGE 0x100 + +/* type and subtype registers base address offsets */ +#define PMIC_GPIO_REG_TYPE 0x4 +#define PMIC_GPIO_REG_SUBTYPE 0x5 + +/* GPIO peripheral type and subtype out_values */ +#define PMIC_GPIO_TYPE 0x10 +#define PMIC_GPIO_SUBTYPE_GPIO_4CH 0x1 +#define PMIC_GPIO_SUBTYPE_GPIOC_4CH 0x5 +#define PMIC_GPIO_SUBTYPE_GPIO_8CH 0x9 +#define PMIC_GPIO_SUBTYPE_GPIOC_8CH 0xd + +#define PMIC_MPP_REG_RT_STS 0x10 +#define PMIC_MPP_REG_RT_STS_VAL_MASK 0x1 + +/* control register base address offsets */ +#define PMIC_GPIO_REG_MODE_CTL 0x40 +#define PMIC_GPIO_REG_DIG_VIN_CTL 0x41 +#define PMIC_GPIO_REG_DIG_PULL_CTL 0x42 +#define PMIC_GPIO_REG_DIG_OUT_CTL 0x45 +#define PMIC_GPIO_REG_EN_CTL 0x46 + +/* PMIC_GPIO_REG_MODE_CTL */ +#define PMIC_GPIO_REG_MODE_VALUE_SHIFT 0x1 +#define PMIC_GPIO_REG_MODE_FUNCTION_SHIFT 1 +#define PMIC_GPIO_REG_MODE_FUNCTION_MASK 0x7 +#define PMIC_GPIO_REG_MODE_DIR_SHIFT 4 +#define PMIC_GPIO_REG_MODE_DIR_MASK 0x7 + +/* PMIC_GPIO_REG_DIG_VIN_CTL */ +#define PMIC_GPIO_REG_VIN_SHIFT 0 +#define PMIC_GPIO_REG_VIN_MASK 0x7 + +/* PMIC_GPIO_REG_DIG_PULL_CTL */ +#define PMIC_GPIO_REG_PULL_SHIFT 0 +#define PMIC_GPIO_REG_PULL_MASK 0x7 + +#define PMIC_GPIO_PULL_DOWN 4 +#define PMIC_GPIO_PULL_DISABLE 5 + +/* PMIC_GPIO_REG_DIG_OUT_CTL */ +#define PMIC_GPIO_REG_OUT_STRENGTH_SHIFT 0 +#define PMIC_GPIO_REG_OUT_STRENGTH_MASK 0x3 +#define PMIC_GPIO_REG_OUT_TYPE_SHIFT 4 +#define PMIC_GPIO_REG_OUT_TYPE_MASK 0x3 + +/* + * Output type - indicates pin should be configured as push-pull, + * open drain or open source. 
+ */ +#define PMIC_GPIO_OUT_BUF_CMOS 0 +#define PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS 1 +#define PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS 2 + +/* PMIC_GPIO_REG_EN_CTL */ +#define PMIC_GPIO_REG_MASTER_EN_SHIFT 7 + +#define PMIC_GPIO_PHYSICAL_OFFSET 1 + +/* Qualcomm specific pin configurations */ +#define PMIC_GPIO_CONF_PULL_UP (PIN_CONFIG_END + 1) +#define PMIC_GPIO_CONF_STRENGTH (PIN_CONFIG_END + 2) + +/** + * struct pmic_gpio_pad - keep current GPIO settings + * @base: Address base in SPMI device. + * @irq: IRQ number which this GPIO generates. + * @is_enabled: Set to false when GPIO should be put in high Z state. + * @out_value: Cached pin output value. + * @have_buffer: Set to true if GPIO output could be configured in push-pull, + * open-drain or open-source mode. + * @output_enabled: Set to true if GPIO output logic is enabled. + * @input_enabled: Set to true if GPIO input buffer logic is enabled. + * @num_sources: Number of power-sources supported by this GPIO. + * @power_source: Current power-source used. + * @buffer_type: Push-pull, open-drain or open-source. + * @pullup: Constant current which flows through GPIO output buffer. 
+ * @strength: No, Low, Medium, High + * @function: See pmic_gpio_functions[] + */ +struct pmic_gpio_pad { + u16 base; + int irq; + bool is_enabled; + bool out_value; + bool have_buffer; + bool output_enabled; + bool input_enabled; + unsigned int num_sources; + unsigned int power_source; + unsigned int buffer_type; + unsigned int pullup; + unsigned int strength; + unsigned int function; +}; + +struct pmic_gpio_state { + struct device *dev; + struct regmap *map; + struct pinctrl_dev *ctrl; + struct gpio_chip chip; +}; + +struct pmic_gpio_bindings { + const char *property; + unsigned param; +}; + +static struct pmic_gpio_bindings pmic_gpio_bindings[] = { + {"qcom,pull-up-strength", PMIC_GPIO_CONF_PULL_UP}, + {"qcom,drive-strength", PMIC_GPIO_CONF_STRENGTH}, +}; + +static const char *const pmic_gpio_groups[] = { + "gpio1", "gpio2", "gpio3", "gpio4", "gpio5", "gpio6", "gpio7", "gpio8", + "gpio9", "gpio10", "gpio11", "gpio12", "gpio13", "gpio14", "gpio15", + "gpio16", "gpio17", "gpio18", "gpio19", "gpio20", "gpio21", "gpio22", + "gpio23", "gpio24", "gpio25", "gpio26", "gpio27", "gpio28", "gpio29", + "gpio30", "gpio31", "gpio32", "gpio33", "gpio34", "gpio35", "gpio36", +}; + +static const char *const pmic_gpio_functions[] = { + PMIC_GPIO_FUNC_NORMAL, PMIC_GPIO_FUNC_PAIRED, + PMIC_GPIO_FUNC_FUNC1, PMIC_GPIO_FUNC_FUNC2, + PMIC_GPIO_FUNC_DTEST1, PMIC_GPIO_FUNC_DTEST2, + PMIC_GPIO_FUNC_DTEST3, PMIC_GPIO_FUNC_DTEST4, +}; + +static inline struct pmic_gpio_state *to_gpio_state(struct gpio_chip *chip) +{ + return container_of(chip, struct pmic_gpio_state, chip); +}; + +static int pmic_gpio_read(struct pmic_gpio_state *state, + struct pmic_gpio_pad *pad, unsigned int addr) +{ + unsigned int val; + int ret; + + ret = regmap_read(state->map, pad->base + addr, &val); + if (ret < 0) + dev_err(state->dev, "read 0x%x failed\n", addr); + else + ret = val; + + return ret; +} + +static int pmic_gpio_write(struct pmic_gpio_state *state, + struct pmic_gpio_pad *pad, unsigned int addr, + 
unsigned int val) +{ + int ret; + + ret = regmap_write(state->map, pad->base + addr, val); + if (ret < 0) + dev_err(state->dev, "write 0x%x failed\n", addr); + + return ret; +} + +static int pmic_gpio_get_groups_count(struct pinctrl_dev *pctldev) +{ + /* Every PIN is a group */ + return pctldev->desc->npins; +} + +static const char *pmic_gpio_get_group_name(struct pinctrl_dev *pctldev, + unsigned pin) +{ + return pctldev->desc->pins[pin].name; +} + +static int pmic_gpio_get_group_pins(struct pinctrl_dev *pctldev, unsigned pin, + const unsigned **pins, unsigned *num_pins) +{ + *pins = &pctldev->desc->pins[pin].number; + *num_pins = 1; + return 0; +} + +static int pmic_gpio_parse_dt_config(struct device_node *np, + struct pinctrl_dev *pctldev, + unsigned long **configs, + unsigned int *nconfs) +{ + struct pmic_gpio_bindings *par; + unsigned long cfg; + int ret, i; + u32 val; + + for (i = 0; i < ARRAY_SIZE(pmic_gpio_bindings); i++) { + par = &pmic_gpio_bindings[i]; + ret = of_property_read_u32(np, par->property, &val); + + /* property not found */ + if (ret == -EINVAL) + continue; + + /* use zero as default value */ + if (ret) + val = 0; + + dev_dbg(pctldev->dev, "found %s with value %u\n", + par->property, val); + + cfg = pinconf_to_config_packed(par->param, val); + + ret = pinctrl_utils_add_config(pctldev, configs, nconfs, cfg); + if (ret) + return ret; + } + + return 0; +} + +static int pmic_gpio_dt_subnode_to_map(struct pinctrl_dev *pctldev, + struct device_node *np, + struct pinctrl_map **map, + unsigned *reserv, unsigned *nmaps, + enum pinctrl_map_type type) +{ + unsigned long *configs = NULL; + unsigned nconfs = 0; + struct property *prop; + const char *group; + int ret; + + ret = pmic_gpio_parse_dt_config(np, pctldev, &configs, &nconfs); + if (ret < 0) + return ret; + + if (!nconfs) + return 0; + + ret = of_property_count_strings(np, "pins"); + if (ret < 0) + goto exit; + + ret = pinctrl_utils_reserve_map(pctldev, map, reserv, nmaps, ret); + if (ret < 0) + 
goto exit; + + of_property_for_each_string(np, "pins", prop, group) { + ret = pinctrl_utils_add_map_configs(pctldev, map, + reserv, nmaps, group, + configs, nconfs, type); + if (ret < 0) + break; + } +exit: + kfree(configs); + return ret; +} + +static int pmic_gpio_dt_node_to_map(struct pinctrl_dev *pctldev, + struct device_node *np_config, + struct pinctrl_map **map, unsigned *nmaps) +{ + enum pinctrl_map_type type; + struct device_node *np; + unsigned reserv; + int ret; + + ret = 0; + *map = NULL; + *nmaps = 0; + reserv = 0; + type = PIN_MAP_TYPE_CONFIGS_GROUP; + + for_each_child_of_node(np_config, np) { + ret = pinconf_generic_dt_subnode_to_map(pctldev, np, map, + &reserv, nmaps, type); + if (ret) + break; + + ret = pmic_gpio_dt_subnode_to_map(pctldev, np, map, &reserv, + nmaps, type); + if (ret) + break; + } + + if (ret < 0) + pinctrl_utils_dt_free_map(pctldev, *map, *nmaps); + + return ret; +} + +static const struct pinctrl_ops pmic_gpio_pinctrl_ops = { + .get_groups_count = pmic_gpio_get_groups_count, + .get_group_name = pmic_gpio_get_group_name, + .get_group_pins = pmic_gpio_get_group_pins, + .dt_node_to_map = pmic_gpio_dt_node_to_map, + .dt_free_map = pinctrl_utils_dt_free_map, +}; + +static int pmic_gpio_get_functions_count(struct pinctrl_dev *pctldev) +{ + return ARRAY_SIZE(pmic_gpio_functions); +} + +static const char *pmic_gpio_get_function_name(struct pinctrl_dev *pctldev, + unsigned function) +{ + return pmic_gpio_functions[function]; +} + +static int pmic_gpio_get_function_groups(struct pinctrl_dev *pctldev, + unsigned function, + const char *const **groups, + unsigned *const num_qgroups) +{ + *groups = pmic_gpio_groups; + *num_qgroups = pctldev->desc->npins; + return 0; +} + +static int pmic_gpio_set_mux(struct pinctrl_dev *pctldev, unsigned function, + unsigned pin) +{ + struct pmic_gpio_state *state = pinctrl_dev_get_drvdata(pctldev); + struct pmic_gpio_pad *pad; + unsigned int val; + int ret; + + pad = pctldev->desc->pins[pin].drv_data; + + 
pad->function = function; + + val = 0; + if (pad->output_enabled) { + if (pad->input_enabled) + val = 2; + else + val = 1; + } + + val |= pad->function << PMIC_GPIO_REG_MODE_FUNCTION_SHIFT; + val |= pad->out_value & PMIC_GPIO_REG_MODE_VALUE_SHIFT; + + ret = pmic_gpio_write(state, pad, PMIC_GPIO_REG_MODE_CTL, val); + if (ret < 0) + return ret; + + val = pad->is_enabled << PMIC_GPIO_REG_MASTER_EN_SHIFT; + + return pmic_gpio_write(state, pad, PMIC_GPIO_REG_EN_CTL, val); +} + +static const struct pinmux_ops pmic_gpio_pinmux_ops = { + .get_functions_count = pmic_gpio_get_functions_count, + .get_function_name = pmic_gpio_get_function_name, + .get_function_groups = pmic_gpio_get_function_groups, + .set_mux = pmic_gpio_set_mux, +}; + +static int pmic_gpio_config_get(struct pinctrl_dev *pctldev, + unsigned int pin, unsigned long *config) +{ + unsigned param = pinconf_to_config_param(*config); + struct pmic_gpio_pad *pad; + unsigned arg; + + pad = pctldev->desc->pins[pin].drv_data; + + switch (param) { + case PIN_CONFIG_DRIVE_PUSH_PULL: + arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_CMOS; + break; + case PIN_CONFIG_DRIVE_OPEN_DRAIN: + arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS; + break; + case PIN_CONFIG_DRIVE_OPEN_SOURCE: + arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS; + break; + case PIN_CONFIG_BIAS_PULL_DOWN: + arg = pad->pullup == PMIC_GPIO_PULL_DOWN; + break; + case PIN_CONFIG_BIAS_DISABLE: + arg = pad->pullup == PMIC_GPIO_PULL_DISABLE; + break; + case PIN_CONFIG_BIAS_PULL_UP: + arg = pad->pullup == PMIC_GPIO_PULL_UP_30; + break; + case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: + arg = !pad->is_enabled; + break; + case PIN_CONFIG_POWER_SOURCE: + arg = pad->power_source; + break; + case PIN_CONFIG_INPUT_ENABLE: + arg = pad->input_enabled; + break; + case PIN_CONFIG_OUTPUT: + arg = pad->out_value; + break; + case PMIC_GPIO_CONF_PULL_UP: + arg = pad->pullup; + break; + case PMIC_GPIO_CONF_STRENGTH: + arg = pad->strength; + break; + default: + return 
-EINVAL; + } + + *config = pinconf_to_config_packed(param, arg); + return 0; +} + +static int pmic_gpio_config_set(struct pinctrl_dev *pctldev, unsigned int pin, + unsigned long *configs, unsigned nconfs) +{ + struct pmic_gpio_state *state = pinctrl_dev_get_drvdata(pctldev); + struct pmic_gpio_pad *pad; + unsigned param, arg; + unsigned int val; + int i, ret; + + pad = pctldev->desc->pins[pin].drv_data; + + for (i = 0; i < nconfs; i++) { + param = pinconf_to_config_param(configs[i]); + arg = pinconf_to_config_argument(configs[i]); + + switch (param) { + case PIN_CONFIG_DRIVE_PUSH_PULL: + pad->buffer_type = PMIC_GPIO_OUT_BUF_CMOS; + break; + case PIN_CONFIG_DRIVE_OPEN_DRAIN: + if (!pad->have_buffer) + return -EINVAL; + pad->buffer_type = PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS; + break; + case PIN_CONFIG_DRIVE_OPEN_SOURCE: + if (!pad->have_buffer) + return -EINVAL; + pad->buffer_type = PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS; + break; + case PIN_CONFIG_BIAS_DISABLE: + pad->pullup = PMIC_GPIO_PULL_DISABLE; + break; + case PIN_CONFIG_BIAS_PULL_UP: + pad->pullup = PMIC_GPIO_PULL_UP_30; + break; + case PIN_CONFIG_BIAS_PULL_DOWN: + if (arg) + pad->pullup = PMIC_GPIO_PULL_DOWN; + else + pad->pullup = PMIC_GPIO_PULL_DISABLE; + break; + case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: + pad->is_enabled = false; + break; + case PIN_CONFIG_POWER_SOURCE: + if (arg > pad->num_sources) + return -EINVAL; + pad->power_source = arg; + break; + case PIN_CONFIG_INPUT_ENABLE: + pad->input_enabled = arg ? 
true : false; + break; + case PIN_CONFIG_OUTPUT: + pad->output_enabled = true; + pad->out_value = arg; + break; + case PMIC_GPIO_CONF_PULL_UP: + if (arg > PMIC_GPIO_PULL_UP_1P5_30) + return -EINVAL; + pad->pullup = arg; + break; + case PMIC_GPIO_CONF_STRENGTH: + if (arg > PMIC_GPIO_STRENGTH_LOW) + return -EINVAL; + pad->strength = arg; + break; + default: + return -EINVAL; + } + } + + val = pad->power_source << PMIC_GPIO_REG_VIN_SHIFT; + + ret = pmic_gpio_write(state, pad, PMIC_GPIO_REG_DIG_VIN_CTL, val); + if (ret < 0) + return ret; + + val = pad->pullup << PMIC_GPIO_REG_PULL_SHIFT; + + ret = pmic_gpio_write(state, pad, PMIC_GPIO_REG_DIG_PULL_CTL, val); + if (ret < 0) + return ret; + + val = pad->buffer_type << PMIC_GPIO_REG_OUT_TYPE_SHIFT; + val |= pad->strength << PMIC_GPIO_REG_OUT_STRENGTH_SHIFT; + + ret = pmic_gpio_write(state, pad, PMIC_GPIO_REG_DIG_OUT_CTL, val); + if (ret < 0) + return ret; + + val = 0; + if (pad->output_enabled) { + if (pad->input_enabled) + val = 2; + else + val = 1; + } + + val = val << PMIC_GPIO_REG_MODE_DIR_SHIFT; + val |= pad->function << PMIC_GPIO_REG_MODE_FUNCTION_SHIFT; + val |= pad->out_value & PMIC_GPIO_REG_MODE_VALUE_SHIFT; + + return pmic_gpio_write(state, pad, PMIC_GPIO_REG_MODE_CTL, val); +} + +static void pmic_gpio_config_dbg_show(struct pinctrl_dev *pctldev, + struct seq_file *s, unsigned pin) +{ + struct pmic_gpio_state *state = pinctrl_dev_get_drvdata(pctldev); + struct pmic_gpio_pad *pad; + int ret, val; + + static const char *const biases[] = { + "pull-up 30uA", "pull-up 1.5uA", "pull-up 31.5uA", + "pull-up 1.5uA + 30uA boost", "pull-down 10uA", "no pull" + }; + static const char *const buffer_types[] = { + "push-pull", "open-drain", "open-source" + }; + static const char *const strengths[] = { + "no", "high", "medium", "low" + }; + + pad = pctldev->desc->pins[pin].drv_data; + + seq_printf(s, " gpio%-2d:", pin + PMIC_GPIO_PHYSICAL_OFFSET); + + val = pmic_gpio_read(state, pad, PMIC_GPIO_REG_EN_CTL); + + if (val < 0 || 
!(val >> PMIC_GPIO_REG_MASTER_EN_SHIFT)) { + seq_puts(s, " ---"); + } else { + + /* Sample the live input level only when the input buffer is on */ + if (pad->input_enabled) { + ret = pmic_gpio_read(state, pad, PMIC_MPP_REG_RT_STS); + if (ret >= 0) { + ret &= PMIC_MPP_REG_RT_STS_VAL_MASK; + pad->out_value = ret; + } + } + + seq_printf(s, " %-4s", pad->output_enabled ? "out" : "in"); + seq_printf(s, " %-7s", pmic_gpio_functions[pad->function]); + seq_printf(s, " vin-%d", pad->power_source); + seq_printf(s, " %-27s", biases[pad->pullup]); + seq_printf(s, " %-10s", buffer_types[pad->buffer_type]); + seq_printf(s, " %-4s", pad->out_value ? "high" : "low"); + seq_printf(s, " %-7s", strengths[pad->strength]); + } +} + +static const struct pinconf_ops pmic_gpio_pinconf_ops = { + .pin_config_group_get = pmic_gpio_config_get, + .pin_config_group_set = pmic_gpio_config_set, + .pin_config_group_dbg_show = pmic_gpio_config_dbg_show, +}; + +static int pmic_gpio_direction_input(struct gpio_chip *chip, unsigned pin) +{ + struct pmic_gpio_state *state = to_gpio_state(chip); + unsigned long config; + + config = pinconf_to_config_packed(PIN_CONFIG_INPUT_ENABLE, 1); + + return pmic_gpio_config_set(state->ctrl, pin, &config, 1); +} + +static int pmic_gpio_direction_output(struct gpio_chip *chip, + unsigned pin, int val) +{ + struct pmic_gpio_state *state = to_gpio_state(chip); + unsigned long config; + + config = pinconf_to_config_packed(PIN_CONFIG_OUTPUT, val); + + return pmic_gpio_config_set(state->ctrl, pin, &config, 1); +} + +static int pmic_gpio_get(struct gpio_chip *chip, unsigned pin) +{ + struct pmic_gpio_state *state = to_gpio_state(chip); + struct pmic_gpio_pad *pad; + int ret; + + pad = state->ctrl->desc->pins[pin].drv_data; + + if (!pad->is_enabled) + return -EINVAL; + + if (pad->input_enabled) { + ret = pmic_gpio_read(state, pad, PMIC_MPP_REG_RT_STS); + if (ret < 0) + return ret; + + pad->out_value = ret & PMIC_MPP_REG_RT_STS_VAL_MASK; + } + + return pad->out_value; +} + +static void pmic_gpio_set(struct gpio_chip *chip, unsigned pin, int 
value) +{ + struct pmic_gpio_state *state = to_gpio_state(chip); + unsigned long config; + + config = pinconf_to_config_packed(PIN_CONFIG_OUTPUT, value); + + pmic_gpio_config_set(state->ctrl, pin, &config, 1); +} + +static int pmic_gpio_request(struct gpio_chip *chip, unsigned base) +{ + return pinctrl_request_gpio(chip->base + base); +} + +static void pmic_gpio_free(struct gpio_chip *chip, unsigned base) +{ + pinctrl_free_gpio(chip->base + base); +} + +static int pmic_gpio_of_xlate(struct gpio_chip *chip, + const struct of_phandle_args *gpio_desc, + u32 *flags) +{ + if (chip->of_gpio_n_cells < 2) + return -EINVAL; + + if (flags) + *flags = gpio_desc->args[1]; + + return gpio_desc->args[0] - PMIC_GPIO_PHYSICAL_OFFSET; +} + +static int pmic_gpio_to_irq(struct gpio_chip *chip, unsigned pin) +{ + struct pmic_gpio_state *state = to_gpio_state(chip); + struct pmic_gpio_pad *pad; + + pad = state->ctrl->desc->pins[pin].drv_data; + + return pad->irq; +} + +static void pmic_gpio_dbg_show(struct seq_file *s, struct gpio_chip *chip) +{ + struct pmic_gpio_state *state = to_gpio_state(chip); + unsigned i; + + for (i = 0; i < chip->ngpio; i++) { + pmic_gpio_config_dbg_show(state->ctrl, s, i); + seq_puts(s, "\n"); + } +} + +static const struct gpio_chip pmic_gpio_gpio_template = { + .direction_input = pmic_gpio_direction_input, + .direction_output = pmic_gpio_direction_output, + .get = pmic_gpio_get, + .set = pmic_gpio_set, + .request = pmic_gpio_request, + .free = pmic_gpio_free, + .of_xlate = pmic_gpio_of_xlate, + .to_irq = pmic_gpio_to_irq, + .dbg_show = pmic_gpio_dbg_show, +}; + +static int pmic_gpio_populate(struct pmic_gpio_state *state, + struct pmic_gpio_pad *pad) +{ + int type, subtype, val, dir; + + type = pmic_gpio_read(state, pad, PMIC_GPIO_REG_TYPE); + if (type < 0) + return type; + + if (type != PMIC_GPIO_TYPE) { + dev_err(state->dev, "incorrect block type 0x%x at 0x%x\n", + type, pad->base); + return -ENODEV; + } + + subtype = pmic_gpio_read(state, pad, 
PMIC_GPIO_REG_SUBTYPE); + if (subtype < 0) + return subtype; + + switch (subtype) { + case PMIC_GPIO_SUBTYPE_GPIO_4CH: + pad->have_buffer = true; + case PMIC_GPIO_SUBTYPE_GPIOC_4CH: + pad->num_sources = 4; + break; + case PMIC_GPIO_SUBTYPE_GPIO_8CH: + pad->have_buffer = true; + case PMIC_GPIO_SUBTYPE_GPIOC_8CH: + pad->num_sources = 8; + break; + default: + dev_err(state->dev, "unknown GPIO type 0x%x\n", subtype); + return -ENODEV; + } + + val = pmic_gpio_read(state, pad, PMIC_GPIO_REG_MODE_CTL); + if (val < 0) + return val; + + pad->out_value = val & PMIC_GPIO_REG_MODE_VALUE_SHIFT; + + dir = val >> PMIC_GPIO_REG_MODE_DIR_SHIFT; + dir &= PMIC_GPIO_REG_MODE_DIR_MASK; + switch (dir) { + case 0: + pad->input_enabled = true; + pad->output_enabled = false; + break; + case 1: + pad->input_enabled = false; + pad->output_enabled = true; + break; + case 2: + pad->input_enabled = true; + pad->output_enabled = true; + break; + default: + dev_err(state->dev, "unknown GPIO direction\n"); + return -ENODEV; + } + + pad->function = val >> PMIC_GPIO_REG_MODE_FUNCTION_SHIFT; + pad->function &= PMIC_GPIO_REG_MODE_FUNCTION_MASK; + + val = pmic_gpio_read(state, pad, PMIC_GPIO_REG_DIG_VIN_CTL); + if (val < 0) + return val; + + pad->power_source = val >> PMIC_GPIO_REG_VIN_SHIFT; + pad->power_source &= PMIC_GPIO_REG_VIN_MASK; + + val = pmic_gpio_read(state, pad, PMIC_GPIO_REG_DIG_PULL_CTL); + if (val < 0) + return val; + + pad->pullup = val >> PMIC_GPIO_REG_PULL_SHIFT; + pad->pullup &= PMIC_GPIO_REG_PULL_MASK; + + val = pmic_gpio_read(state, pad, PMIC_GPIO_REG_DIG_OUT_CTL); + if (val < 0) + return val; + + pad->strength = val >> PMIC_GPIO_REG_OUT_STRENGTH_SHIFT; + pad->strength &= PMIC_GPIO_REG_OUT_STRENGTH_MASK; + + pad->buffer_type = val >> PMIC_GPIO_REG_OUT_TYPE_SHIFT; + pad->buffer_type &= PMIC_GPIO_REG_OUT_TYPE_MASK; + + /* Pin could be disabled with PIN_CONFIG_BIAS_HIGH_IMPEDANCE */ + pad->is_enabled = true; + return 0; +} + +static int pmic_gpio_probe(struct platform_device *pdev) 
+{ + struct device *dev = &pdev->dev; + struct pinctrl_pin_desc *pindesc; + struct pinctrl_desc *pctrldesc; + struct pmic_gpio_pad *pad, *pads; + struct pmic_gpio_state *state; + int ret, npins, i; + u32 res[2]; + + ret = of_property_read_u32_array(dev->of_node, "reg", res, 2); + if (ret < 0) { + dev_err(dev, "missing base address and/or range"); + return ret; + } + + npins = res[1] / PMIC_GPIO_ADDRESS_RANGE; + + if (!npins) + return -EINVAL; + + BUG_ON(npins > ARRAY_SIZE(pmic_gpio_groups)); + + state = devm_kzalloc(dev, sizeof(*state), GFP_KERNEL); + if (!state) + return -ENOMEM; + + platform_set_drvdata(pdev, state); + + state->dev = &pdev->dev; + state->map = dev_get_regmap(dev->parent, NULL); + + pindesc = devm_kcalloc(dev, npins, sizeof(*pindesc), GFP_KERNEL); + if (!pindesc) + return -ENOMEM; + + pads = devm_kcalloc(dev, npins, sizeof(*pads), GFP_KERNEL); + if (!pads) + return -ENOMEM; + + pctrldesc = devm_kzalloc(dev, sizeof(*pctrldesc), GFP_KERNEL); + if (!pctrldesc) + return -ENOMEM; + + pctrldesc->pctlops = &pmic_gpio_pinctrl_ops; + pctrldesc->pmxops = &pmic_gpio_pinmux_ops; + pctrldesc->confops = &pmic_gpio_pinconf_ops; + pctrldesc->owner = THIS_MODULE; + pctrldesc->name = dev_name(dev); + pctrldesc->pins = pindesc; + pctrldesc->npins = npins; + + for (i = 0; i < npins; i++, pindesc++) { + pad = &pads[i]; + pindesc->drv_data = pad; + pindesc->number = i; + pindesc->name = pmic_gpio_groups[i]; + + pad->irq = platform_get_irq(pdev, i); + if (pad->irq < 0) + return pad->irq; + + pad->base = res[0] + i * PMIC_GPIO_ADDRESS_RANGE; + + ret = pmic_gpio_populate(state, pad); + if (ret < 0) + return ret; + } + + state->chip = pmic_gpio_gpio_template; + state->chip.dev = dev; + state->chip.base = -1; + state->chip.ngpio = npins; + state->chip.label = dev_name(dev); + state->chip.of_gpio_n_cells = 2; + state->chip.can_sleep = false; + + state->ctrl = pinctrl_register(pctrldesc, dev, state); + if (!state->ctrl) + return -ENODEV; + + ret = gpiochip_add(&state->chip); 
+ if (ret) { + dev_err(state->dev, "can't add gpio chip\n"); + goto err_chip; + } + + ret = gpiochip_add_pin_range(&state->chip, dev_name(dev), 0, 0, npins); + if (ret) { + dev_err(dev, "failed to add pin range\n"); + goto err_range; + } + + return 0; + +err_range: + gpiochip_remove(&state->chip); +err_chip: + pinctrl_unregister(state->ctrl); + return ret; +} + +static int pmic_gpio_remove(struct platform_device *pdev) +{ + struct pmic_gpio_state *state = platform_get_drvdata(pdev); + + gpiochip_remove(&state->chip); + pinctrl_unregister(state->ctrl); + return 0; +} + +static const struct of_device_id pmic_gpio_of_match[] = { + { .compatible = "qcom,pm8941-gpio" }, /* 36 GPIO's */ + { .compatible = "qcom,pma8084-gpio" }, /* 22 GPIO's */ + { }, +}; + +MODULE_DEVICE_TABLE(of, pmic_gpio_of_match); + +static struct platform_driver pmic_gpio_driver = { + .driver = { + .name = "qcom-spmi-gpio", + .of_match_table = pmic_gpio_of_match, + }, + .probe = pmic_gpio_probe, + .remove = pmic_gpio_remove, +}; + +module_platform_driver(pmic_gpio_driver); + +MODULE_AUTHOR("Ivan T. Ivanov <iivanov@mm-sol.com>"); +MODULE_DESCRIPTION("Qualcomm SPMI PMIC GPIO pin control driver"); +MODULE_ALIAS("platform:qcom-spmi-gpio"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c new file mode 100644 index 000000000000..a8924dba335e --- /dev/null +++ b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c @@ -0,0 +1,949 @@ +/* + * Copyright (c) 2012-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU General Public License for more details. + */ + +#include <linux/gpio.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/pinctrl/pinconf-generic.h> +#include <linux/pinctrl/pinconf.h> +#include <linux/pinctrl/pinmux.h> +#include <linux/platform_device.h> +#include <linux/regmap.h> +#include <linux/slab.h> +#include <linux/types.h> + +#include <dt-bindings/pinctrl/qcom,pmic-mpp.h> + +#include "../core.h" +#include "../pinctrl-utils.h" + +#define PMIC_MPP_ADDRESS_RANGE 0x100 + +/* + * Pull Up Values - indicates whether a pull-up should be + * applied in bidirectional mode only. The hardware ignores the + * configuration when operating in other modes. + */ +#define PMIC_MPP_PULL_UP_0P6KOHM 0 +#define PMIC_MPP_PULL_UP_10KOHM 1 +#define PMIC_MPP_PULL_UP_30KOHM 2 +#define PMIC_MPP_PULL_UP_OPEN 3 + +/* type and subtype registers base address offsets */ +#define PMIC_MPP_REG_TYPE 0x4 +#define PMIC_MPP_REG_SUBTYPE 0x5 + +/* mpp peripheral type and subtype values */ +#define PMIC_MPP_TYPE 0x11 +#define PMIC_MPP_SUBTYPE_4CH_NO_ANA_OUT 0x3 +#define PMIC_MPP_SUBTYPE_ULT_4CH_NO_ANA_OUT 0x4 +#define PMIC_MPP_SUBTYPE_4CH_NO_SINK 0x5 +#define PMIC_MPP_SUBTYPE_ULT_4CH_NO_SINK 0x6 +#define PMIC_MPP_SUBTYPE_4CH_FULL_FUNC 0x7 +#define PMIC_MPP_SUBTYPE_8CH_FULL_FUNC 0xf + +#define PMIC_MPP_REG_RT_STS 0x10 +#define PMIC_MPP_REG_RT_STS_VAL_MASK 0x1 + +/* control register base address offsets */ +#define PMIC_MPP_REG_MODE_CTL 0x40 +#define PMIC_MPP_REG_DIG_VIN_CTL 0x41 +#define PMIC_MPP_REG_DIG_PULL_CTL 0x42 +#define PMIC_MPP_REG_DIG_IN_CTL 0x43 +#define PMIC_MPP_REG_EN_CTL 0x46 +#define PMIC_MPP_REG_AIN_CTL 0x4a + +/* PMIC_MPP_REG_MODE_CTL */ +#define PMIC_MPP_REG_MODE_VALUE_MASK 0x1 +#define PMIC_MPP_REG_MODE_FUNCTION_SHIFT 1 +#define PMIC_MPP_REG_MODE_FUNCTION_MASK 0x7 +#define PMIC_MPP_REG_MODE_DIR_SHIFT 4 +#define PMIC_MPP_REG_MODE_DIR_MASK 0x7 + +/* PMIC_MPP_REG_DIG_VIN_CTL */ +#define PMIC_MPP_REG_VIN_SHIFT 0 +#define PMIC_MPP_REG_VIN_MASK 0x7 + +/* 
PMIC_MPP_REG_DIG_PULL_CTL */ +#define PMIC_MPP_REG_PULL_SHIFT 0 +#define PMIC_MPP_REG_PULL_MASK 0x7 + +/* PMIC_MPP_REG_EN_CTL */ +#define PMIC_MPP_REG_MASTER_EN_SHIFT 7 + +/* PMIC_MPP_REG_AIN_CTL */ +#define PMIC_MPP_REG_AIN_ROUTE_SHIFT 0 +#define PMIC_MPP_REG_AIN_ROUTE_MASK 0x7 + +#define PMIC_MPP_PHYSICAL_OFFSET 1 + +/* Qualcomm specific pin configurations */ +#define PMIC_MPP_CONF_AMUX_ROUTE (PIN_CONFIG_END + 1) +#define PMIC_MPP_CONF_ANALOG_MODE (PIN_CONFIG_END + 2) + +/** + * struct pmic_mpp_pad - keep current MPP settings + * @base: Address base in SPMI device. + * @irq: IRQ number which this MPP generates. + * @is_enabled: Set to false when MPP should be put in high Z state. + * @out_value: Cached pin output value. + * @output_enabled: Set to true if MPP output logic is enabled. + * @input_enabled: Set to true if MPP input buffer logic is enabled. + * @analog_mode: Set to true when MPP should operate in Analog Input, Analog + * Output or Bidirectional Analog mode. + * @num_sources: Number of power-sources supported by this MPP. + * @power_source: Current power-source used. + * @amux_input: Set the source for analog input. + * @pullup: Pullup resistor value. Valid in Bidirectional mode only. + * @function: See pmic_mpp_functions[]. 
+ */ +struct pmic_mpp_pad { + u16 base; + int irq; + bool is_enabled; + bool out_value; + bool output_enabled; + bool input_enabled; + bool analog_mode; + unsigned int num_sources; + unsigned int power_source; + unsigned int amux_input; + unsigned int pullup; + unsigned int function; +}; + +struct pmic_mpp_state { + struct device *dev; + struct regmap *map; + struct pinctrl_dev *ctrl; + struct gpio_chip chip; +}; + +struct pmic_mpp_bindings { + const char *property; + unsigned param; +}; + +static struct pmic_mpp_bindings pmic_mpp_bindings[] = { + {"qcom,amux-route", PMIC_MPP_CONF_AMUX_ROUTE}, + {"qcom,analog-mode", PMIC_MPP_CONF_ANALOG_MODE}, +}; + +static const char *const pmic_mpp_groups[] = { + "mpp1", "mpp2", "mpp3", "mpp4", "mpp5", "mpp6", "mpp7", "mpp8", +}; + +static const char *const pmic_mpp_functions[] = { + PMIC_MPP_FUNC_NORMAL, PMIC_MPP_FUNC_PAIRED, + "reserved1", "reserved2", + PMIC_MPP_FUNC_DTEST1, PMIC_MPP_FUNC_DTEST2, + PMIC_MPP_FUNC_DTEST3, PMIC_MPP_FUNC_DTEST4, +}; + +static inline struct pmic_mpp_state *to_mpp_state(struct gpio_chip *chip) +{ + return container_of(chip, struct pmic_mpp_state, chip); +}; + +static int pmic_mpp_read(struct pmic_mpp_state *state, + struct pmic_mpp_pad *pad, unsigned int addr) +{ + unsigned int val; + int ret; + + ret = regmap_read(state->map, pad->base + addr, &val); + if (ret < 0) + dev_err(state->dev, "read 0x%x failed\n", addr); + else + ret = val; + + return ret; +} + +static int pmic_mpp_write(struct pmic_mpp_state *state, + struct pmic_mpp_pad *pad, unsigned int addr, + unsigned int val) +{ + int ret; + + ret = regmap_write(state->map, pad->base + addr, val); + if (ret < 0) + dev_err(state->dev, "write 0x%x failed\n", addr); + + return ret; +} + +static int pmic_mpp_get_groups_count(struct pinctrl_dev *pctldev) +{ + /* Every PIN is a group */ + return pctldev->desc->npins; +} + +static const char *pmic_mpp_get_group_name(struct pinctrl_dev *pctldev, + unsigned pin) +{ + return pctldev->desc->pins[pin].name; 
+} + +static int pmic_mpp_get_group_pins(struct pinctrl_dev *pctldev, + unsigned pin, + const unsigned **pins, unsigned *num_pins) +{ + *pins = &pctldev->desc->pins[pin].number; + *num_pins = 1; + return 0; +} + +static int pmic_mpp_parse_dt_config(struct device_node *np, + struct pinctrl_dev *pctldev, + unsigned long **configs, + unsigned int *nconfs) +{ + struct pmic_mpp_bindings *par; + unsigned long cfg; + int ret, i; + u32 val; + + for (i = 0; i < ARRAY_SIZE(pmic_mpp_bindings); i++) { + par = &pmic_mpp_bindings[i]; + ret = of_property_read_u32(np, par->property, &val); + + /* property not found */ + if (ret == -EINVAL) + continue; + + /* use zero as default value, when no value is specified */ + if (ret) + val = 0; + + dev_dbg(pctldev->dev, "found %s with value %u\n", + par->property, val); + + cfg = pinconf_to_config_packed(par->param, val); + + ret = pinctrl_utils_add_config(pctldev, configs, nconfs, cfg); + if (ret) + return ret; + } + + return 0; +} + +static int pmic_mpp_dt_subnode_to_map(struct pinctrl_dev *pctldev, + struct device_node *np, + struct pinctrl_map **map, + unsigned *reserv, unsigned *nmaps, + enum pinctrl_map_type type) +{ + unsigned long *configs = NULL; + unsigned nconfs = 0; + struct property *prop; + const char *group; + int ret; + + ret = pmic_mpp_parse_dt_config(np, pctldev, &configs, &nconfs); + if (ret < 0) + return ret; + + if (!nconfs) + return 0; + + ret = of_property_count_strings(np, "pins"); + if (ret < 0) + goto exit; + + ret = pinctrl_utils_reserve_map(pctldev, map, reserv, nmaps, ret); + if (ret < 0) + goto exit; + + of_property_for_each_string(np, "pins", prop, group) { + ret = pinctrl_utils_add_map_configs(pctldev, map, + reserv, nmaps, group, + configs, nconfs, type); + if (ret < 0) + break; + } +exit: + kfree(configs); + return ret; +} + +static int pmic_mpp_dt_node_to_map(struct pinctrl_dev *pctldev, + struct device_node *np_config, + struct pinctrl_map **map, unsigned *nmaps) +{ + struct device_node *np; + enum 
pinctrl_map_type type; + unsigned reserv; + int ret; + + ret = 0; + *map = NULL; + *nmaps = 0; + reserv = 0; + type = PIN_MAP_TYPE_CONFIGS_GROUP; + + for_each_child_of_node(np_config, np) { + ret = pinconf_generic_dt_subnode_to_map(pctldev, np, map, + &reserv, nmaps, type); + if (ret) + break; + + ret = pmic_mpp_dt_subnode_to_map(pctldev, np, map, &reserv, + nmaps, type); + if (ret) + break; + } + + if (ret < 0) + pinctrl_utils_dt_free_map(pctldev, *map, *nmaps); + + return ret; +} + +static const struct pinctrl_ops pmic_mpp_pinctrl_ops = { + .get_groups_count = pmic_mpp_get_groups_count, + .get_group_name = pmic_mpp_get_group_name, + .get_group_pins = pmic_mpp_get_group_pins, + .dt_node_to_map = pmic_mpp_dt_node_to_map, + .dt_free_map = pinctrl_utils_dt_free_map, +}; + +static int pmic_mpp_get_functions_count(struct pinctrl_dev *pctldev) +{ + return ARRAY_SIZE(pmic_mpp_functions); +} + +static const char *pmic_mpp_get_function_name(struct pinctrl_dev *pctldev, + unsigned function) +{ + return pmic_mpp_functions[function]; +} + +static int pmic_mpp_get_function_groups(struct pinctrl_dev *pctldev, + unsigned function, + const char *const **groups, + unsigned *const num_qgroups) +{ + *groups = pmic_mpp_groups; + *num_qgroups = pctldev->desc->npins; + return 0; +} + +static int pmic_mpp_set_mux(struct pinctrl_dev *pctldev, unsigned function, + unsigned pin) +{ + struct pmic_mpp_state *state = pinctrl_dev_get_drvdata(pctldev); + struct pmic_mpp_pad *pad; + unsigned int val; + int ret; + + pad = pctldev->desc->pins[pin].drv_data; + + pad->function = function; + + if (!pad->analog_mode) { + val = 0; /* just digital input */ + if (pad->output_enabled) { + if (pad->input_enabled) + val = 2; /* digital input and output */ + else + val = 1; /* just digital output */ + } + } else { + val = 4; /* just analog input */ + if (pad->output_enabled) { + if (pad->input_enabled) + val = 3; /* analog input and output */ + else + val = 5; /* just analog output */ + } + } + + val |= 
pad->function << PMIC_MPP_REG_MODE_FUNCTION_SHIFT; + val |= pad->out_value & PMIC_MPP_REG_MODE_VALUE_MASK; + + ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_MODE_CTL, val); + if (ret < 0) + return ret; + + val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT; + + return pmic_mpp_write(state, pad, PMIC_MPP_REG_EN_CTL, val); +} + +static const struct pinmux_ops pmic_mpp_pinmux_ops = { + .get_functions_count = pmic_mpp_get_functions_count, + .get_function_name = pmic_mpp_get_function_name, + .get_function_groups = pmic_mpp_get_function_groups, + .set_mux = pmic_mpp_set_mux, +}; + +static int pmic_mpp_config_get(struct pinctrl_dev *pctldev, + unsigned int pin, unsigned long *config) +{ + unsigned param = pinconf_to_config_param(*config); + struct pmic_mpp_pad *pad; + unsigned arg = 0; + + pad = pctldev->desc->pins[pin].drv_data; + + switch (param) { + case PIN_CONFIG_BIAS_DISABLE: + arg = pad->pullup == PMIC_MPP_PULL_UP_OPEN; + break; + case PIN_CONFIG_BIAS_PULL_UP: + switch (pad->pullup) { + case PMIC_MPP_PULL_UP_OPEN: + arg = 0; + break; + case PMIC_MPP_PULL_UP_0P6KOHM: + arg = 600; + break; + case PMIC_MPP_PULL_UP_10KOHM: + arg = 10000; + break; + case PMIC_MPP_PULL_UP_30KOHM: + arg = 30000; + break; + default: + return -EINVAL; + } + break; + case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: + arg = !pad->is_enabled; + break; + case PIN_CONFIG_POWER_SOURCE: + arg = pad->power_source; + break; + case PIN_CONFIG_INPUT_ENABLE: + arg = pad->input_enabled; + break; + case PIN_CONFIG_OUTPUT: + arg = pad->out_value; + break; + case PMIC_MPP_CONF_AMUX_ROUTE: + arg = pad->amux_input; + break; + case PMIC_MPP_CONF_ANALOG_MODE: + arg = pad->analog_mode; + break; + default: + return -EINVAL; + } + + /* Convert register value to pinconf value */ + *config = pinconf_to_config_packed(param, arg); + return 0; +} + +static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin, + unsigned long *configs, unsigned nconfs) +{ + struct pmic_mpp_state *state = 
pinctrl_dev_get_drvdata(pctldev); + struct pmic_mpp_pad *pad; + unsigned param, arg; + unsigned int val; + int i, ret; + + pad = pctldev->desc->pins[pin].drv_data; + + for (i = 0; i < nconfs; i++) { + param = pinconf_to_config_param(configs[i]); + arg = pinconf_to_config_argument(configs[i]); + + switch (param) { + case PIN_CONFIG_BIAS_DISABLE: + pad->pullup = PMIC_MPP_PULL_UP_OPEN; + break; + case PIN_CONFIG_BIAS_PULL_UP: + switch (arg) { + case 600: + pad->pullup = PMIC_MPP_PULL_UP_0P6KOHM; + break; + case 10000: + pad->pullup = PMIC_MPP_PULL_UP_10KOHM; + break; + case 30000: + pad->pullup = PMIC_MPP_PULL_UP_30KOHM; + break; + default: + return -EINVAL; + } + break; + case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: + pad->is_enabled = false; + break; + case PIN_CONFIG_POWER_SOURCE: + if (arg >= pad->num_sources) + return -EINVAL; + pad->power_source = arg; + break; + case PIN_CONFIG_INPUT_ENABLE: + pad->input_enabled = arg ? true : false; + break; + case PIN_CONFIG_OUTPUT: + pad->output_enabled = true; + pad->out_value = arg; + break; + case PMIC_MPP_CONF_AMUX_ROUTE: + if (arg >= PMIC_MPP_AMUX_ROUTE_ABUS4) + return -EINVAL; + pad->amux_input = arg; + break; + case PMIC_MPP_CONF_ANALOG_MODE: + pad->analog_mode = true; + break; + default: + return -EINVAL; + } + } + + val = pad->power_source << PMIC_MPP_REG_VIN_SHIFT; + + ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_DIG_VIN_CTL, val); + if (ret < 0) + return ret; + + val = pad->pullup << PMIC_MPP_REG_PULL_SHIFT; + + ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_DIG_PULL_CTL, val); + if (ret < 0) + return ret; + + val = pad->amux_input & PMIC_MPP_REG_AIN_ROUTE_MASK; + + ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_AIN_CTL, val); + if (ret < 0) + return ret; + + if (!pad->analog_mode) { + val = 0; /* just digital input */ + if (pad->output_enabled) { + if (pad->input_enabled) + val = 2; /* digital input and output */ + else + val = 1; /* just digital output */ + } + } else { + val = 4; /* just analog input */ + if 
(pad->output_enabled) { + if (pad->input_enabled) + val = 3; /* analog input and output */ + else + val = 5; /* just analog output */ + } + } + + val = val << PMIC_MPP_REG_MODE_DIR_SHIFT; + val |= pad->function << PMIC_MPP_REG_MODE_FUNCTION_SHIFT; + val |= pad->out_value & PMIC_MPP_REG_MODE_VALUE_MASK; + + return pmic_mpp_write(state, pad, PMIC_MPP_REG_MODE_CTL, val); +} + +static void pmic_mpp_config_dbg_show(struct pinctrl_dev *pctldev, + struct seq_file *s, unsigned pin) +{ + struct pmic_mpp_state *state = pinctrl_dev_get_drvdata(pctldev); + struct pmic_mpp_pad *pad; + int ret, val; + + static const char *const biases[] = { + "0.6kOhm", "10kOhm", "30kOhm", "Disabled" + }; + + pad = pctldev->desc->pins[pin].drv_data; + + seq_printf(s, " mpp%-2d:", pin + PMIC_MPP_PHYSICAL_OFFSET); + + val = pmic_mpp_read(state, pad, PMIC_MPP_REG_EN_CTL); + + if (val < 0 || !(val >> PMIC_MPP_REG_MASTER_EN_SHIFT)) { + seq_puts(s, " ---"); + } else { + + if (pad->input_enabled) { + ret = pmic_mpp_read(state, pad, PMIC_MPP_REG_RT_STS); + if (ret >= 0) { + ret &= PMIC_MPP_REG_RT_STS_VAL_MASK; + pad->out_value = ret; + } + } + + seq_printf(s, " %-4s", pad->output_enabled ? "out" : "in"); + seq_printf(s, " %-4s", pad->analog_mode ? "ana" : "dig"); + seq_printf(s, " %-7s", pmic_mpp_functions[pad->function]); + seq_printf(s, " vin-%d", pad->power_source); + seq_printf(s, " %-8s", biases[pad->pullup]); + seq_printf(s, " %-4s", pad->out_value ?
"high" : "low"); + } +} + +static const struct pinconf_ops pmic_mpp_pinconf_ops = { + .pin_config_group_get = pmic_mpp_config_get, + .pin_config_group_set = pmic_mpp_config_set, + .pin_config_group_dbg_show = pmic_mpp_config_dbg_show, +}; + +static int pmic_mpp_direction_input(struct gpio_chip *chip, unsigned pin) +{ + struct pmic_mpp_state *state = to_mpp_state(chip); + unsigned long config; + + config = pinconf_to_config_packed(PIN_CONFIG_INPUT_ENABLE, 1); + + return pmic_mpp_config_set(state->ctrl, pin, &config, 1); +} + +static int pmic_mpp_direction_output(struct gpio_chip *chip, + unsigned pin, int val) +{ + struct pmic_mpp_state *state = to_mpp_state(chip); + unsigned long config; + + config = pinconf_to_config_packed(PIN_CONFIG_OUTPUT, val); + + return pmic_mpp_config_set(state->ctrl, pin, &config, 1); +} + +static int pmic_mpp_get(struct gpio_chip *chip, unsigned pin) +{ + struct pmic_mpp_state *state = to_mpp_state(chip); + struct pmic_mpp_pad *pad; + int ret; + + pad = state->ctrl->desc->pins[pin].drv_data; + + if (pad->input_enabled) { + ret = pmic_mpp_read(state, pad, PMIC_MPP_REG_RT_STS); + if (ret < 0) + return ret; + + pad->out_value = ret & PMIC_MPP_REG_RT_STS_VAL_MASK; + } + + return pad->out_value; +} + +static void pmic_mpp_set(struct gpio_chip *chip, unsigned pin, int value) +{ + struct pmic_mpp_state *state = to_mpp_state(chip); + unsigned long config; + + config = pinconf_to_config_packed(PIN_CONFIG_OUTPUT, value); + + pmic_mpp_config_set(state->ctrl, pin, &config, 1); +} + +static int pmic_mpp_request(struct gpio_chip *chip, unsigned base) +{ + return pinctrl_request_gpio(chip->base + base); +} + +static void pmic_mpp_free(struct gpio_chip *chip, unsigned base) +{ + pinctrl_free_gpio(chip->base + base); +} + +static int pmic_mpp_of_xlate(struct gpio_chip *chip, + const struct of_phandle_args *gpio_desc, + u32 *flags) +{ + if (chip->of_gpio_n_cells < 2) + return -EINVAL; + + if (flags) + *flags = gpio_desc->args[1]; + + return 
gpio_desc->args[0] - PMIC_MPP_PHYSICAL_OFFSET; +} + +static int pmic_mpp_to_irq(struct gpio_chip *chip, unsigned pin) +{ + struct pmic_mpp_state *state = to_mpp_state(chip); + struct pmic_mpp_pad *pad; + + pad = state->ctrl->desc->pins[pin].drv_data; + + return pad->irq; +} + +static void pmic_mpp_dbg_show(struct seq_file *s, struct gpio_chip *chip) +{ + struct pmic_mpp_state *state = to_mpp_state(chip); + unsigned i; + + for (i = 0; i < chip->ngpio; i++) { + pmic_mpp_config_dbg_show(state->ctrl, s, i); + seq_puts(s, "\n"); + } +} + +static const struct gpio_chip pmic_mpp_gpio_template = { + .direction_input = pmic_mpp_direction_input, + .direction_output = pmic_mpp_direction_output, + .get = pmic_mpp_get, + .set = pmic_mpp_set, + .request = pmic_mpp_request, + .free = pmic_mpp_free, + .of_xlate = pmic_mpp_of_xlate, + .to_irq = pmic_mpp_to_irq, + .dbg_show = pmic_mpp_dbg_show, +}; + +static int pmic_mpp_populate(struct pmic_mpp_state *state, + struct pmic_mpp_pad *pad) +{ + int type, subtype, val, dir; + + type = pmic_mpp_read(state, pad, PMIC_MPP_REG_TYPE); + if (type < 0) + return type; + + if (type != PMIC_MPP_TYPE) { + dev_err(state->dev, "incorrect block type 0x%x at 0x%x\n", + type, pad->base); + return -ENODEV; + } + + subtype = pmic_mpp_read(state, pad, PMIC_MPP_REG_SUBTYPE); + if (subtype < 0) + return subtype; + + switch (subtype) { + case PMIC_MPP_SUBTYPE_4CH_NO_ANA_OUT: + case PMIC_MPP_SUBTYPE_ULT_4CH_NO_ANA_OUT: + case PMIC_MPP_SUBTYPE_4CH_NO_SINK: + case PMIC_MPP_SUBTYPE_ULT_4CH_NO_SINK: + case PMIC_MPP_SUBTYPE_4CH_FULL_FUNC: + pad->num_sources = 4; + break; + case PMIC_MPP_SUBTYPE_8CH_FULL_FUNC: + pad->num_sources = 8; + break; + default: + dev_err(state->dev, "unknown MPP type 0x%x at 0x%x\n", + subtype, pad->base); + return -ENODEV; + } + + val = pmic_mpp_read(state, pad, PMIC_MPP_REG_MODE_CTL); + if (val < 0) + return val; + + pad->out_value = val & PMIC_MPP_REG_MODE_VALUE_MASK; + + dir = val >> PMIC_MPP_REG_MODE_DIR_SHIFT; + dir &= 
PMIC_MPP_REG_MODE_DIR_MASK; + + switch (dir) { + case 0: + pad->input_enabled = true; + pad->output_enabled = false; + pad->analog_mode = false; + break; + case 1: + pad->input_enabled = false; + pad->output_enabled = true; + pad->analog_mode = false; + break; + case 2: + pad->input_enabled = true; + pad->output_enabled = true; + pad->analog_mode = false; + break; + case 3: + pad->input_enabled = true; + pad->output_enabled = true; + pad->analog_mode = true; + break; + case 4: + pad->input_enabled = true; + pad->output_enabled = false; + pad->analog_mode = true; + break; + case 5: + pad->input_enabled = false; + pad->output_enabled = true; + pad->analog_mode = true; + break; + default: + dev_err(state->dev, "unknown MPP direction\n"); + return -ENODEV; + } + + pad->function = val >> PMIC_MPP_REG_MODE_FUNCTION_SHIFT; + pad->function &= PMIC_MPP_REG_MODE_FUNCTION_MASK; + + val = pmic_mpp_read(state, pad, PMIC_MPP_REG_DIG_VIN_CTL); + if (val < 0) + return val; + + pad->power_source = val >> PMIC_MPP_REG_VIN_SHIFT; + pad->power_source &= PMIC_MPP_REG_VIN_MASK; + + val = pmic_mpp_read(state, pad, PMIC_MPP_REG_DIG_PULL_CTL); + if (val < 0) + return val; + + pad->pullup = val >> PMIC_MPP_REG_PULL_SHIFT; + pad->pullup &= PMIC_MPP_REG_PULL_MASK; + + val = pmic_mpp_read(state, pad, PMIC_MPP_REG_AIN_CTL); + if (val < 0) + return val; + + pad->amux_input = val >> PMIC_MPP_REG_AIN_ROUTE_SHIFT; + pad->amux_input &= PMIC_MPP_REG_AIN_ROUTE_MASK; + + /* Pin could be disabled with PIN_CONFIG_BIAS_HIGH_IMPEDANCE */ + pad->is_enabled = true; + return 0; +} + +static int pmic_mpp_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct pinctrl_pin_desc *pindesc; + struct pinctrl_desc *pctrldesc; + struct pmic_mpp_pad *pad, *pads; + struct pmic_mpp_state *state; + int ret, npins, i; + u32 res[2]; + + ret = of_property_read_u32_array(dev->of_node, "reg", res, 2); + if (ret < 0) { + dev_err(dev, "missing base address and/or range"); + return ret; + } + + npins = 
res[1] / PMIC_MPP_ADDRESS_RANGE; + if (!npins) + return -EINVAL; + + BUG_ON(npins > ARRAY_SIZE(pmic_mpp_groups)); + + state = devm_kzalloc(dev, sizeof(*state), GFP_KERNEL); + if (!state) + return -ENOMEM; + + platform_set_drvdata(pdev, state); + + state->dev = &pdev->dev; + state->map = dev_get_regmap(dev->parent, NULL); + + pindesc = devm_kcalloc(dev, npins, sizeof(*pindesc), GFP_KERNEL); + if (!pindesc) + return -ENOMEM; + + pads = devm_kcalloc(dev, npins, sizeof(*pads), GFP_KERNEL); + if (!pads) + return -ENOMEM; + + pctrldesc = devm_kzalloc(dev, sizeof(*pctrldesc), GFP_KERNEL); + if (!pctrldesc) + return -ENOMEM; + + pctrldesc->pctlops = &pmic_mpp_pinctrl_ops; + pctrldesc->pmxops = &pmic_mpp_pinmux_ops; + pctrldesc->confops = &pmic_mpp_pinconf_ops; + pctrldesc->owner = THIS_MODULE; + pctrldesc->name = dev_name(dev); + pctrldesc->pins = pindesc; + pctrldesc->npins = npins; + + for (i = 0; i < npins; i++, pindesc++) { + pad = &pads[i]; + pindesc->drv_data = pad; + pindesc->number = i; + pindesc->name = pmic_mpp_groups[i]; + + pad->irq = platform_get_irq(pdev, i); + if (pad->irq < 0) + return pad->irq; + + pad->base = res[0] + i * PMIC_MPP_ADDRESS_RANGE; + + ret = pmic_mpp_populate(state, pad); + if (ret < 0) + return ret; + } + + state->chip = pmic_mpp_gpio_template; + state->chip.dev = dev; + state->chip.base = -1; + state->chip.ngpio = npins; + state->chip.label = dev_name(dev); + state->chip.of_gpio_n_cells = 2; + state->chip.can_sleep = false; + + state->ctrl = pinctrl_register(pctrldesc, dev, state); + if (!state->ctrl) + return -ENODEV; + + ret = gpiochip_add(&state->chip); + if (ret) { + dev_err(state->dev, "can't add gpio chip\n"); + goto err_chip; + } + + ret = gpiochip_add_pin_range(&state->chip, dev_name(dev), 0, 0, npins); + if (ret) { + dev_err(dev, "failed to add pin range\n"); + goto err_range; + } + + return 0; + +err_range: + gpiochip_remove(&state->chip); +err_chip: + pinctrl_unregister(state->ctrl); + return ret; +} + +static int 
pmic_mpp_remove(struct platform_device *pdev) +{ + struct pmic_mpp_state *state = platform_get_drvdata(pdev); + + gpiochip_remove(&state->chip); + pinctrl_unregister(state->ctrl); + return 0; +} + +static const struct of_device_id pmic_mpp_of_match[] = { + { .compatible = "qcom,pm8841-mpp" }, /* 4 MPPs */ + { .compatible = "qcom,pm8941-mpp" }, /* 8 MPPs */ + { .compatible = "qcom,pma8084-mpp" }, /* 8 MPPs */ + { }, +}; + +MODULE_DEVICE_TABLE(of, pmic_mpp_of_match); + +static struct platform_driver pmic_mpp_driver = { + .driver = { + .name = "qcom-spmi-mpp", + .of_match_table = pmic_mpp_of_match, + }, + .probe = pmic_mpp_probe, + .remove = pmic_mpp_remove, +}; + +module_platform_driver(pmic_mpp_driver); + +MODULE_AUTHOR("Ivan T. Ivanov <iivanov@mm-sol.com>"); +MODULE_DESCRIPTION("Qualcomm SPMI PMIC MPP pin control driver"); +MODULE_ALIAS("platform:qcom-spmi-mpp"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-pmic.c b/drivers/pinctrl/qcom/pinctrl-ssbi-pmic.c new file mode 100644 index 000000000000..e02145e11baf --- /dev/null +++ b/drivers/pinctrl/qcom/pinctrl-ssbi-pmic.c @@ -0,0 +1,875 @@ +/* + * Copyright (c) 2014, Sony Mobile Communications AB. + * Copyright (c) 2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details.
+ */ + +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/pinctrl/pinctrl.h> +#include <linux/pinctrl/pinmux.h> +#include <linux/pinctrl/pinconf.h> +#include <linux/pinctrl/pinconf-generic.h> +#include <linux/slab.h> +#include <linux/regmap.h> +#include <linux/gpio.h> +#include <linux/mfd/pm8921-core.h> + +#include <dt-bindings/pinctrl/qcom,pmic-gpio.h> + +#include "../core.h" +#include "../pinconf.h" +#include "../pinctrl-utils.h" + +/* direction */ +#define PM8XXX_GPIO_DIR_OUT BIT(0) +#define PM8XXX_GPIO_DIR_IN BIT(1) + +/* output buffer */ +#define PM8XXX_GPIO_PUSH_PULL 0 +#define PM8XXX_GPIO_OPEN_DRAIN 1 + +/* bias */ +#define PM8XXX_GPIO_BIAS_PU_30 0 +#define PM8XXX_GPIO_BIAS_PU_1P5 1 +#define PM8XXX_GPIO_BIAS_PU_31P5 2 +#define PM8XXX_GPIO_BIAS_PU_1P5_30 3 +#define PM8XXX_GPIO_BIAS_PD 4 +#define PM8XXX_GPIO_BIAS_NP 5 + +/* GPIO registers */ +#define SSBI_REG_ADDR_GPIO_BASE 0x150 +#define SSBI_REG_ADDR_GPIO(n) (SSBI_REG_ADDR_GPIO_BASE + n) + +#define PM8XXX_GPIO_MODE_ENABLE BIT(0) +#define PM8XXX_GPIO_WRITE BIT(7) + +#define PM8XXX_MAX_GPIOS 44 + +/* Qualcomm specific pin configurations */ +#define PM8XXX_PINCONF_PULL_UP (PIN_CONFIG_END + 1) +#define PM8XXX_PINCONF_STRENGTH (PIN_CONFIG_END + 2) + +struct pm8xxx_pinbindings { + const char *property; + unsigned param; + u32 default_value; +}; +static struct pm8xxx_pinbindings pm8xxx_pinbindings[] = { + /* PMIC_GPIO_PULL_UP_30... */ + {"qcom,pull-up-strength", PM8XXX_PINCONF_PULL_UP, 0}, + /* PMIC_GPIO_STRENGTH_NO... 
*/ + {"qcom,drive-strength", PM8XXX_PINCONF_STRENGTH, 0}, +}; + +struct pm8xxx_gpio_pin { + int irq; + + u8 power_source; + u8 direction; + u8 output_buffer; + u8 output_value; + u8 bias; + u8 output_strength; + u8 disable; + u8 function; + u8 non_inverted; +}; + +struct pm8xxx_gpio_data { + int ngpio; +}; + +struct pm8xxx_gpio { + struct device *dev; + struct regmap *regmap; + struct pinctrl_dev *pctrl; + struct gpio_chip chip; + + const struct pm8xxx_gpio_data *data; + + struct pm8xxx_gpio_pin pins[PM8XXX_MAX_GPIOS]; +}; + +static inline struct pm8xxx_gpio *to_pm8xxx_gpio(struct gpio_chip *chip) +{ + return container_of(chip, struct pm8xxx_gpio, chip); +}; + +static const char * const pm8xxx_gpio_groups[PM8XXX_MAX_GPIOS] = { + "gpio1", "gpio2", "gpio3", "gpio4", "gpio5", "gpio6", "gpio7", "gpio8", + "gpio9", "gpio10", "gpio11", "gpio12", "gpio13", "gpio14", "gpio15", + "gpio16", "gpio17", "gpio18", "gpio19", "gpio20", "gpio21", "gpio22", + "gpio23", "gpio24", "gpio25", "gpio26", "gpio27", "gpio28", "gpio29", + "gpio30", "gpio31", "gpio32", "gpio33", "gpio34", "gpio35", "gpio36", + "gpio37", "gpio38", "gpio39", "gpio40", "gpio41", "gpio42", "gpio43", + "gpio44", +}; + +static const char * const pm8xxx_gpio_functions[] = { + PMIC_GPIO_FUNC_NORMAL, PMIC_GPIO_FUNC_PAIRED, + PMIC_GPIO_FUNC_FUNC1, PMIC_GPIO_FUNC_FUNC2, + PMIC_GPIO_FUNC_DTEST1, PMIC_GPIO_FUNC_DTEST2, + PMIC_GPIO_FUNC_DTEST3, PMIC_GPIO_FUNC_DTEST4, +}; + +static int pm8xxx_gpio_read(struct pm8xxx_gpio *pctrl, int pin, int bank) +{ + int reg = SSBI_REG_ADDR_GPIO(pin); + unsigned int val = bank << 4; + int ret; + + ret = regmap_write(pctrl->regmap, reg, val); + if (ret) { + dev_err(pctrl->dev, + "failed to select bank %d of pin %d\n", bank, pin); + return ret; + } + + ret = regmap_read(pctrl->regmap, reg, &val); + if (ret) { + dev_err(pctrl->dev, + "failed to read register %d of pin %d\n", bank, pin); + return ret; + } + + return val; +} + +static int pm8xxx_gpio_write(struct pm8xxx_gpio *pctrl, + int pin, 
int bank, u8 val) +{ + int ret; + + val |= PM8XXX_GPIO_WRITE; + val |= bank << 4; + + ret = regmap_write(pctrl->regmap, SSBI_REG_ADDR_GPIO(pin), val); + if (ret) + dev_err(pctrl->dev, "failed to write register\n"); + + return ret; +} + +static int pm8xxx_gpio_get_groups_count(struct pinctrl_dev *pctldev) +{ + struct pm8xxx_gpio *pctrl = pinctrl_dev_get_drvdata(pctldev); + + return pctrl->data->ngpio; +} + +static const char *pm8xxx_gpio_get_group_name(struct pinctrl_dev *pctldev, + unsigned group) +{ + return pm8xxx_gpio_groups[group]; +} + +static int pm8xxx_parse_dt_config(struct device *dev, struct device_node *np, + unsigned long **configs, unsigned int *nconfigs) +{ + struct pm8xxx_pinbindings *par; + unsigned long cfg[ARRAY_SIZE(pm8xxx_pinbindings)]; + unsigned int ncfg = 0; + int ret, idx; + u32 val; + + if (!np) + return -EINVAL; + + for (idx = 0; idx < ARRAY_SIZE(pm8xxx_pinbindings); idx++) { + par = &pm8xxx_pinbindings[idx]; + ret = of_property_read_u32(np, par->property, &val); + + /* property not found */ + if (ret == -EINVAL) + continue; + + /* use default value, when no value is specified */ + if (ret) + val = par->default_value; + + dev_dbg(dev, "found %s with value %u\n", par->property, val); + cfg[ncfg] = pinconf_to_config_packed(par->param, val); + ncfg++; + } + + ret = 0; + + /* no configs found */ + if (ncfg == 0) { + *configs = NULL; + *nconfigs = 0; + goto out; + } + + /* + * Now limit the number of configs to the real number of + * found properties.
+ */ + *configs = kcalloc(ncfg, sizeof(unsigned long), GFP_KERNEL); + if (!*configs) { + ret = -ENOMEM; + goto out; + } + + memcpy(*configs, cfg, ncfg * sizeof(unsigned long)); + *nconfigs = ncfg; + +out: + return ret; +} + +static int pm8xxx_dt_subnode_to_map(struct pinctrl_dev *pctldev, + struct device_node *np, + struct pinctrl_map **map, + unsigned *reserv, unsigned *nmaps, + enum pinctrl_map_type type) +{ + unsigned long *configs = NULL; + unsigned num_configs = 0; + struct property *prop; + const char *group; + int ret; + + ret = pm8xxx_parse_dt_config(pctldev->dev, np, &configs, &num_configs); + if (ret < 0) + return ret; + + if (!num_configs) + return 0; + + ret = of_property_count_strings(np, "pins"); + if (ret < 0) + goto exit; + + ret = pinctrl_utils_reserve_map(pctldev, map, reserv, + nmaps, ret); + if (ret < 0) + goto exit; + + of_property_for_each_string(np, "pins", prop, group) { + ret = pinctrl_utils_add_map_configs(pctldev, map, + reserv, nmaps, group, configs, + num_configs, type); + if (ret < 0) + break; + } +exit: + kfree(configs); + return ret; +} + +static int pm8xxx_dt_node_to_map(struct pinctrl_dev *pctldev, + struct device_node *np_config, + struct pinctrl_map **map, + unsigned *nmaps) +{ + struct device_node *np; + enum pinctrl_map_type type; + unsigned reserv; + int ret; + + ret = 0; + *map = NULL; + *nmaps = 0; + reserv = 0; + type = PIN_MAP_TYPE_CONFIGS_GROUP; + + for_each_child_of_node(np_config, np) { + + ret = pinconf_generic_dt_subnode_to_map(pctldev, np, map, + &reserv, nmaps, type); + if (ret) + break; + + ret = pm8xxx_dt_subnode_to_map(pctldev, np, map, &reserv, + nmaps, type); + if (ret) + break; + } + + if (ret < 0) + pinctrl_utils_dt_free_map(pctldev, *map, *nmaps); + + return ret; +} + +static const struct pinctrl_ops pm8xxx_gpio_pinctrl_ops = { + .get_groups_count = pm8xxx_gpio_get_groups_count, + .get_group_name = pm8xxx_gpio_get_group_name, + .dt_node_to_map = pm8xxx_dt_node_to_map, + .dt_free_map = 
pinctrl_utils_dt_free_map, +}; + +static int pm8xxx_get_functions_count(struct pinctrl_dev *pctldev) +{ + return ARRAY_SIZE(pm8xxx_gpio_functions); +} + +static const char *pm8xxx_get_function_name(struct pinctrl_dev *pctldev, + unsigned function) +{ + return pm8xxx_gpio_functions[function]; +} + +static int pm8xxx_get_function_groups(struct pinctrl_dev *pctldev, + unsigned function, + const char * const **groups, + unsigned * const num_groups) +{ + struct pm8xxx_gpio *pctrl = pinctrl_dev_get_drvdata(pctldev); + + *groups = pm8xxx_gpio_groups; + *num_groups = pctrl->data->ngpio; + return 0; +} + +static int pm8xxx_pinmux_enable(struct pinctrl_dev *pctldev, + unsigned function, + unsigned group) +{ + struct pm8xxx_gpio *pctrl = pinctrl_dev_get_drvdata(pctldev); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[group]; + u8 val; + + pin->function = function; + val = pin->function << 1; + + pm8xxx_gpio_write(pctrl, group, 4, val); + + return 0; +} + +static const struct pinmux_ops pm8xxx_pinmux_ops = { + .get_functions_count = pm8xxx_get_functions_count, + .get_function_name = pm8xxx_get_function_name, + .get_function_groups = pm8xxx_get_function_groups, + .set_mux = pm8xxx_pinmux_enable, +}; + +static int pm8xxx_gpio_config_get(struct pinctrl_dev *pctldev, + unsigned int offset, + unsigned long *config) +{ + struct pm8xxx_gpio *pctrl = pinctrl_dev_get_drvdata(pctldev); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[offset]; + unsigned param = pinconf_to_config_param(*config); + unsigned arg; + + switch (param) { + case PIN_CONFIG_BIAS_DISABLE: + arg = pin->bias == PM8XXX_GPIO_BIAS_NP; + break; + case PIN_CONFIG_BIAS_PULL_DOWN: + arg = pin->bias == PM8XXX_GPIO_BIAS_PD; + break; + case PM8XXX_PINCONF_PULL_UP: + if (pin->bias >= PM8XXX_GPIO_BIAS_PU_30 && + pin->bias <= PM8XXX_GPIO_BIAS_PU_1P5_30) + arg = PMIC_GPIO_PULL_UP_30 + pin->bias; + else + arg = 0; + break; + case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: + arg = pin->disable; + break; + case PIN_CONFIG_INPUT_ENABLE: + arg = 
pin->direction == PM8XXX_GPIO_DIR_IN; + break; + case PIN_CONFIG_OUTPUT: + arg = pin->output_value; + break; + case PIN_CONFIG_POWER_SOURCE: + arg = pin->power_source; + break; + case PM8XXX_PINCONF_STRENGTH: + arg = pin->output_strength; + break; + case PIN_CONFIG_DRIVE_PUSH_PULL: + arg = pin->output_buffer == PM8XXX_GPIO_PUSH_PULL; + break; + case PIN_CONFIG_DRIVE_OPEN_DRAIN: + arg = pin->output_buffer == PM8XXX_GPIO_OPEN_DRAIN; + break; + default: + dev_err(pctrl->dev, + "unsupported config parameter: %x\n", + param); + return -EINVAL; + } + + *config = pinconf_to_config_packed(param, arg); + + return 0; +} + +static int pm8xxx_gpio_config_set(struct pinctrl_dev *pctldev, + unsigned int offset, + unsigned long *configs, + unsigned num_configs) +{ + struct pm8xxx_gpio *pctrl = pinctrl_dev_get_drvdata(pctldev); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[offset]; + unsigned param; + unsigned arg; + unsigned i; + u8 banks = 0; + u8 val; + + for (i = 0; i < num_configs; i++) { + param = pinconf_to_config_param(configs[i]); + arg = pinconf_to_config_argument(configs[i]); + + switch (param) { + case PIN_CONFIG_BIAS_DISABLE: + pin->bias = PM8XXX_GPIO_BIAS_NP; + banks |= BIT(2); + pin->disable = 0; + banks |= BIT(3); + break; + case PIN_CONFIG_BIAS_PULL_DOWN: + pin->bias = PM8XXX_GPIO_BIAS_PD; + banks |= BIT(2); + pin->disable = 0; + banks |= BIT(3); + break; + case PM8XXX_PINCONF_PULL_UP: + if (arg < PMIC_GPIO_PULL_UP_30 || + arg > PMIC_GPIO_PULL_UP_1P5_30) { + dev_err(pctrl->dev, "invalid pull-up level\n"); + return -EINVAL; + } + pin->bias = arg - PM8XXX_GPIO_BIAS_PU_30; + banks |= BIT(2); + pin->disable = 0; + banks |= BIT(3); + break; + case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: + pin->disable = 1; + banks |= BIT(3); + break; + case PIN_CONFIG_INPUT_ENABLE: + pin->direction = PM8XXX_GPIO_DIR_IN; + banks |= BIT(1); + break; + case PIN_CONFIG_OUTPUT: + pin->direction = PM8XXX_GPIO_DIR_OUT; + pin->output_value = !!arg; + banks |= BIT(1); + break; + case 
PIN_CONFIG_POWER_SOURCE: + pin->power_source = arg; + banks |= BIT(0); + break; + case PM8XXX_PINCONF_STRENGTH: + if (arg > PMIC_GPIO_STRENGTH_LOW) { + dev_err(pctrl->dev, "invalid drive strength\n"); + return -EINVAL; + } + pin->output_strength = arg; + banks |= BIT(3); + break; + case PIN_CONFIG_DRIVE_PUSH_PULL: + pin->output_buffer = PM8XXX_GPIO_PUSH_PULL; + banks |= BIT(1); + break; + case PIN_CONFIG_DRIVE_OPEN_DRAIN: + pin->output_buffer = PM8XXX_GPIO_OPEN_DRAIN; + banks |= BIT(1); + break; + default: + dev_err(pctrl->dev, + "unsupported config parameter: %x\n", + param); + return -EINVAL; + } + } + + if (banks & BIT(0)) + pm8xxx_gpio_write(pctrl, offset, 0, pin->power_source << 1 | + PM8XXX_GPIO_MODE_ENABLE); + + if (banks & BIT(1)) { + val = pin->direction << 2; + val |= pin->output_buffer << 1; + val |= pin->output_value; + pm8xxx_gpio_write(pctrl, offset, 1, val); + } + + if (banks & BIT(2)) { + val = pin->bias << 1; + pm8xxx_gpio_write(pctrl, offset, 2, val); + } + + if (banks & BIT(3)) { + val = pin->output_strength << 2; + val |= pin->disable; + pm8xxx_gpio_write(pctrl, offset, 3, val); + } + + if (banks & BIT(4)) { + val = pin->function << 1; + pm8xxx_gpio_write(pctrl, offset, 4, val); + } + + if (banks & BIT(5)) { + val = 0; + if (pin->non_inverted) + val |= BIT(3); + pm8xxx_gpio_write(pctrl, offset, 5, val); + } + + return 0; +} + +static const struct pinconf_ops pm8xxx_gpio_pinconf_ops = { + .pin_config_group_get = pm8xxx_gpio_config_get, + .pin_config_group_set = pm8xxx_gpio_config_set, +}; + +static struct pinctrl_desc pm8xxx_gpio_desc = { + .pctlops = &pm8xxx_gpio_pinctrl_ops, + .pmxops = &pm8xxx_pinmux_ops, + .confops = &pm8xxx_gpio_pinconf_ops, + .owner = THIS_MODULE, +}; + +static int pm8xxx_gpio_direction_input(struct gpio_chip *chip, + unsigned offset) +{ + struct pm8xxx_gpio *pctrl = to_pm8xxx_gpio(chip); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[offset - 1]; + u8 val; + + pin->direction = PM8XXX_GPIO_DIR_IN; + val = pin->direction << 2; 
+ + pm8xxx_gpio_write(pctrl, offset, 1, val); + + return 0; +} + +static int pm8xxx_gpio_direction_output(struct gpio_chip *chip, + unsigned offset, + int value) +{ + struct pm8xxx_gpio *pctrl = to_pm8xxx_gpio(chip); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[offset - 1]; + u8 val; + + pin->direction = PM8XXX_GPIO_DIR_OUT; + pin->output_value = !!value; + + val = pin->direction << 2; + val |= pin->output_buffer << 1; + val |= pin->output_value; + + pm8xxx_gpio_write(pctrl, offset, 1, val); + + return 0; +} + +static int pm8xxx_gpio_get(struct gpio_chip *chip, unsigned offset) +{ + struct pm8xxx_gpio *pctrl = to_pm8xxx_gpio(chip); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[offset - 1]; + + if (pin->direction == PM8XXX_GPIO_DIR_OUT) + return pin->output_value; + + return pm8xxx_read_irq_status(pin->irq); +} + +static void pm8xxx_gpio_set(struct gpio_chip *gc, unsigned offset, int value) +{ + struct pm8xxx_gpio *pctrl = to_pm8xxx_gpio(gc); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[offset - 1]; + u8 val; + + pin->output_value = !!value; + + val = pin->direction << 2; + val |= pin->output_buffer << 1; + val |= pin->output_value; + + pm8xxx_gpio_write(pctrl, offset, 1, val); +} + +static int pm8xxx_gpio_to_irq(struct gpio_chip *chip, unsigned offset) +{ + struct pm8xxx_gpio *pctrl = to_pm8xxx_gpio(chip); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[offset - 1]; + + return pin->irq; +} + +#ifdef CONFIG_DEBUG_FS +#include <linux/seq_file.h> + +static void pm8xxx_gpio_dbg_show_one(struct seq_file *s, + struct pinctrl_dev *pctldev, + struct gpio_chip *chip, + unsigned offset, + unsigned gpio) +{ + struct pm8xxx_gpio *pctrl = to_pm8xxx_gpio(chip); + struct pm8xxx_gpio_pin *pin = &pctrl->pins[offset]; + + static const char * const directions[] = { + "off", "out", "in", "both" + }; + static const char * const biases[] = { + "pull-up 30uA", "pull-up 1.5uA", "pull-up 31.5uA", + "pull-up 1.5uA + 30uA boost", "pull-down 10uA", "no pull" + }; + static const char * const 
buffer_types[] = { + "push-pull", "open-drain" + }; + static const char * const strengths[] = { + "no", "high", "medium", "low" + }; + + seq_printf(s, " gpio%-2d:", offset + 1); + if (pin->disable) { + seq_puts(s, " ---"); + } else { + seq_printf(s, " %-4s", directions[pin->direction]); + seq_printf(s, " %-7s", pm8xxx_gpio_functions[pin->function]); + seq_printf(s, " VIN%d", pin->power_source); + seq_printf(s, " %-27s", biases[pin->bias]); + seq_printf(s, " %-10s", buffer_types[pin->output_buffer]); + seq_printf(s, " %-4s", pin->output_value ? "high" : "low"); + seq_printf(s, " %-7s", strengths[pin->output_strength]); + if (!pin->non_inverted) + seq_puts(s, " inverted"); + } +} + +static void pm8xxx_gpio_dbg_show(struct seq_file *s, struct gpio_chip *chip) +{ + unsigned gpio = chip->base; + unsigned i; + + for (i = 0; i < chip->ngpio; i++, gpio++) { + pm8xxx_gpio_dbg_show_one(s, NULL, chip, i, gpio); + seq_puts(s, "\n"); + } +} + +#else +#define pm8xxx_gpio_dbg_show NULL +#endif + +static struct gpio_chip pm8xxx_gpio_template = { + .direction_input = pm8xxx_gpio_direction_input, + .direction_output = pm8xxx_gpio_direction_output, + .get = pm8xxx_gpio_get, + .set = pm8xxx_gpio_set, + .to_irq = pm8xxx_gpio_to_irq, + .dbg_show = pm8xxx_gpio_dbg_show, + .owner = THIS_MODULE, +}; + +static int pm8xxx_gpio_populate(struct pm8xxx_gpio *pctrl) +{ + struct pm8xxx_gpio_pin *pin; + int val; + int i; + + for (i = 0; i < pctrl->data->ngpio; i++) { + pin = &pctrl->pins[i]; + + val = pm8xxx_gpio_read(pctrl, i, 0); + if (val < 0) + return val; + + pin->power_source = (val >> 1) & 0x7; + + val = pm8xxx_gpio_read(pctrl, i, 1); + if (val < 0) + return val; + + pin->direction = (val >> 2) & 0x3; + pin->output_buffer = !!(val & BIT(1)); + pin->output_value = val & BIT(0); + + val = pm8xxx_gpio_read(pctrl, i, 2); + if (val < 0) + return val; + + pin->bias = (val >> 1) & 0x7; + + val = pm8xxx_gpio_read(pctrl, i, 3); + if (val < 0) + return val; + + pin->output_strength = (val >> 2) & 0x3; +
pin->disable = val & BIT(0); + + val = pm8xxx_gpio_read(pctrl, i, 4); + if (val < 0) + return val; + + pin->function = (val >> 1) & 0x7; + + val = pm8xxx_gpio_read(pctrl, i, 5); + if (val < 0) + return val; + + pin->non_inverted = !!(val & BIT(3)); + } + + return 0; +} + +static const struct pm8xxx_gpio_data pm8018_gpio_data = { + .ngpio = 6, +}; + +static const struct pm8xxx_gpio_data pm8038_gpio_data = { + .ngpio = 12, +}; + +static const struct pm8xxx_gpio_data pm8058_gpio_data = { + .ngpio = 40, +}; +static const struct pm8xxx_gpio_data pm8917_gpio_data = { + .ngpio = 38, +}; + +static const struct pm8xxx_gpio_data pm8921_gpio_data = { + .ngpio = 44, +}; + +static const struct of_device_id pm8xxx_gpio_of_match[] = { + { .compatible = "qcom,pm8018-gpio", .data = &pm8018_gpio_data }, + { .compatible = "qcom,pm8038-gpio", .data = &pm8038_gpio_data }, + { .compatible = "qcom,pm8058-gpio", .data = &pm8058_gpio_data }, + { .compatible = "qcom,pm8917-gpio", .data = &pm8917_gpio_data }, + { .compatible = "qcom,pm8921-gpio", .data = &pm8921_gpio_data }, + { }, +}; +MODULE_DEVICE_TABLE(of, pm8xxx_gpio_of_match); + +static int pm8xxx_gpio_probe(struct platform_device *pdev) +{ + const struct of_device_id *match; + struct pm8xxx_gpio *pctrl; + int ret; + int i; + + match = of_match_node(pm8xxx_gpio_of_match, pdev->dev.of_node); + if (!match) + return -ENXIO; + + pctrl = devm_kzalloc(&pdev->dev, sizeof(*pctrl), GFP_KERNEL); + if (!pctrl) + return -ENOMEM; + + pctrl->dev = &pdev->dev; + pctrl->data = match->data; + + BUG_ON(pctrl->data->ngpio > PM8XXX_MAX_GPIOS); + + pctrl->chip = pm8xxx_gpio_template; + pctrl->chip.base = -1; + pctrl->chip.dev = &pdev->dev; + pctrl->chip.of_node = pdev->dev.of_node; + pctrl->chip.label = dev_name(pctrl->dev); + pctrl->chip.ngpio = pctrl->data->ngpio; + + pctrl->regmap = dev_get_regmap(pdev->dev.parent, NULL); + if (!pctrl->regmap) { + dev_err(&pdev->dev, "parent regmap unavailable\n"); + return -ENXIO; + } + + for (i = 0; i < 
pctrl->data->ngpio; i++) { + ret = platform_get_irq(pdev, i); + if (ret < 0) { + dev_err(&pdev->dev, + "missing interrupts for pin %d\n", i); + return ret; + } + + pctrl->pins[i].irq = ret; + } + + ret = pm8xxx_gpio_populate(pctrl); + if (ret) + return ret; + + pm8xxx_gpio_desc.name = dev_name(&pdev->dev); + pctrl->pctrl = pinctrl_register(&pm8xxx_gpio_desc, &pdev->dev, pctrl); + if (!pctrl->pctrl) { + dev_err(&pdev->dev, "couldn't register pm8xxx gpio driver\n"); + return -ENODEV; + } + + ret = gpiochip_add(&pctrl->chip); + if (ret) { + dev_err(&pdev->dev, "failed to register gpiochip\n"); + goto unregister_pinctrl; + } + + ret = gpiochip_add_pin_range(&pctrl->chip, + dev_name(pctrl->dev), + 1, 0, pctrl->data->ngpio); + if (ret) { + dev_err(pctrl->dev, "failed to add pin range\n"); + goto unregister_gpiochip; + } + + platform_set_drvdata(pdev, pctrl); + + dev_dbg(&pdev->dev, "Qualcomm pm8xxx gpio driver probed\n"); + + return 0; + +unregister_gpiochip: + gpiochip_remove(&pctrl->chip); + +unregister_pinctrl: + pinctrl_unregister(pctrl->pctrl); + + return ret; +} + +static int pm8xxx_gpio_remove(struct platform_device *pdev) +{ + struct pm8xxx_gpio *pctrl = platform_get_drvdata(pdev); + + gpiochip_remove(&pctrl->chip); + + pinctrl_unregister(pctrl->pctrl); + + return 0; +} + +static struct platform_driver pm8xxx_gpio_driver = { + .driver = { + .name = "ssbi-pmic-gpio", + .owner = THIS_MODULE, + .of_match_table = pm8xxx_gpio_of_match, + }, + .probe = pm8xxx_gpio_probe, + .remove = pm8xxx_gpio_remove, +}; + +static int pm8xxx_gpio_init(void) +{ + return platform_driver_register(&pm8xxx_gpio_driver); +} +subsys_initcall(pm8xxx_gpio_init); + +MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>"); +MODULE_DESCRIPTION("Qualcomm SSBI PMIC GPIO driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/regulator/qcom_rpm-regulator.c b/drivers/regulator/qcom_rpm-regulator.c index b55cd5b50ebe..8f6604031de7 100644 --- a/drivers/regulator/qcom_rpm-regulator.c +++ 
b/drivers/regulator/qcom_rpm-regulator.c @@ -198,6 +198,7 @@ static int rpm_reg_write(struct qcom_rpm_reg *vreg, vreg->val[req->word] |= value << req->shift; return qcom_rpm_write(vreg->rpm, + QCOM_RPM_ACTIVE_STATE, vreg->resource, vreg->val, vreg->parts->request_len); diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig index 7bd2c94f54a4..586d19774533 100644 --- a/drivers/soc/qcom/Kconfig +++ b/drivers/soc/qcom/Kconfig @@ -9,3 +9,38 @@ config QCOM_GSBI functions for connecting the underlying serial UART, SPI, and I2C devices to the output pins. +config QCOM_SCM + bool + +config QCOM_PM + bool "Qualcomm Power Management" + depends on ARCH_QCOM + help + QCOM platform specific power driver to manage cores and L2 low power + modes. It interfaces with various system drivers to put the cores in + low power modes. + +config QCOM_SMD + tristate "Shared Memory device" + ---help--- + Support for the shared memory device. + +config MSM_PIL + tristate "MSM Peripheral Loader" + depends on ARCH_QCOM + help + Say y here to enable peripheral loader support. The PIL provides support + to download firmware securely and reset peripherals. + +config MSM_PIL_Q6V4 + tristate "MSM Peripheral Loader for QDSP 6V4" + depends on MSM_PIL + help + Say y here to enable peripheral loader support for QDSP. + +config MSM_REMOTE_SPINLOCKS + tristate "MSM Remote Spinlocks" + depends on ARCH_QCOM + help + Say y here to enable remote spinlock support. 
+ diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile index 438901257ac1..b6edaee05d13 100644 --- a/drivers/soc/qcom/Makefile +++ b/drivers/soc/qcom/Makefile @@ -1 +1,8 @@ obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o +obj-$(CONFIG_QCOM_PM) += spm.o +obj-$(CONFIG_MSM_PIL) += peripheral-loader.o +obj-$(CONFIG_MSM_PIL_Q6V4) += pil-q6v4.o +obj-$(CONFIG_MSM_REMOTE_SPINLOCKS) += remote_spinlock.o +obj-$(CONFIG_QCOM_SMD) += smd.o smd_private.o +CFLAGS_scm.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1) +obj-$(CONFIG_QCOM_SCM) += scm.o scm-boot.o scm-pas.o diff --git a/drivers/soc/qcom/peripheral-loader.c b/drivers/soc/qcom/peripheral-loader.c new file mode 100644 index 000000000000..927a654e67c9 --- /dev/null +++ b/drivers/soc/qcom/peripheral-loader.c @@ -0,0 +1,690 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/module.h> +#include <linux/string.h> +#include <linux/device.h> +#include <linux/firmware.h> +#include <linux/io.h> +#include <linux/debugfs.h> +#include <linux/elf.h> +#include <linux/mutex.h> +#include <linux/memblock.h> +#include <linux/slab.h> +#include <linux/atomic.h> +#include <linux/suspend.h> +#include <linux/rwsem.h> +#include <linux/sysfs.h> +#include <linux/workqueue.h> +#include <linux/jiffies.h> + +#include <asm/uaccess.h> +#include <asm/setup.h> +#include <linux/peripheral-loader.h> + +#include "peripheral-loader.h" + +/** + * proxy_timeout - Override for proxy vote timeouts + * -1: Use driver-specified timeout + * 0: Hold proxy votes until shutdown + * >0: Specify a custom timeout in ms + */ +static int proxy_timeout_ms = -1; +module_param(proxy_timeout_ms, int, S_IRUGO | S_IWUSR); + +enum pil_state { + PIL_OFFLINE, + PIL_ONLINE, +}; + +static const char *pil_states[] = { + [PIL_OFFLINE] = "OFFLINE", + [PIL_ONLINE] = "ONLINE", +}; + +struct pil_device { + struct pil_desc *desc; + int count; + enum pil_state state; + struct mutex lock; + struct device dev; + struct module *owner; +#ifdef CONFIG_DEBUG_FS + struct dentry *dentry; +#endif + struct delayed_work proxy; +}; + +#define to_pil_device(d) container_of(d, struct pil_device, dev) + +static ssize_t name_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + return snprintf(buf, PAGE_SIZE, "%s\n", to_pil_device(dev)->desc->name); +} + +static ssize_t state_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + enum pil_state state = to_pil_device(dev)->state; + return snprintf(buf, PAGE_SIZE, "%s\n", pil_states[state]); +} + +static struct device_attribute pil_attrs[] = { + __ATTR_RO(name), + __ATTR_RO(state), + { }, +}; + +struct bus_type pil_bus_type = { + .name = "pil", + .dev_attrs = pil_attrs, +}; + +static int __find_peripheral(struct device *dev, void *data) +{ + struct pil_device *pdev = to_pil_device(dev); + return 
!strncmp(pdev->desc->name, data, INT_MAX); +} + +static struct pil_device *find_peripheral(const char *str) +{ + struct device *dev; + + if (!str) + return NULL; + + dev = bus_find_device(&pil_bus_type, NULL, (void *)str, + __find_peripheral); + return dev ? to_pil_device(dev) : NULL; +} + +static void pil_proxy_work(struct work_struct *work) +{ + struct pil_device *pil; + + pil = container_of(work, struct pil_device, proxy.work); + pil->desc->ops->proxy_unvote(pil->desc); + //wake_unlock(&pil->wlock); +} + +static int pil_proxy_vote(struct pil_device *pil) +{ + int ret = 0; + + if (pil->desc->ops->proxy_vote) { + //wake_lock(&pil->wlock); + ret = pil->desc->ops->proxy_vote(pil->desc); + //if (ret) + // wake_unlock(&pil->wlock); + } + return ret; +} + +static void pil_proxy_unvote(struct pil_device *pil, unsigned long timeout) +{ + if (proxy_timeout_ms >= 0) + timeout = proxy_timeout_ms; + + if (timeout && pil->desc->ops->proxy_unvote) + schedule_delayed_work(&pil->proxy, msecs_to_jiffies(timeout)); +} + +#define IOMAP_SIZE SZ_4M + +static int load_segment(const struct elf32_phdr *phdr, unsigned num, + struct pil_device *pil) +{ + int ret = 0, count, paddr; + char fw_name[30]; + const struct firmware *fw = NULL; + const u8 *data; + + if (memblock_overlaps_memory(phdr->p_paddr, phdr->p_memsz)) { + dev_err(&pil->dev, "%s: kernel memory would be overwritten " + "[%#08lx, %#08lx)\n", pil->desc->name, + (unsigned long)phdr->p_paddr, + (unsigned long)(phdr->p_paddr + phdr->p_memsz)); + return -EPERM; + } + + if (phdr->p_filesz) { + snprintf(fw_name, ARRAY_SIZE(fw_name), "%s.b%02d", + pil->desc->name, num); + ret = request_firmware(&fw, fw_name, &pil->dev); + printk("FWDEBUG: request_firmware %s ret %d \n", fw_name, ret); + if (ret) { + dev_err(&pil->dev, "%s: Failed to locate blob %s\n", + pil->desc->name, fw_name); + return ret; + } + + if (fw->size != phdr->p_filesz) { + dev_err(&pil->dev, "%s: Blob size %u doesn't match " + "%u\n", pil->desc->name, fw->size, + 
phdr->p_filesz); + ret = -EPERM; + goto release_fw; + } + } + + /* Load the segment into memory */ + count = phdr->p_filesz; + paddr = phdr->p_paddr; + data = fw ? fw->data : NULL; + while (count > 0) { + int size; + u8 __iomem *buf; + + size = min_t(size_t, IOMAP_SIZE, count); + buf = ioremap(paddr, size); + if (!buf) { + dev_err(&pil->dev, "%s: Failed to map memory\n", + pil->desc->name); + ret = -ENOMEM; + goto release_fw; + } + memcpy(buf, data, size); + iounmap(buf); + + count -= size; + paddr += size; + data += size; + } + + /* Zero out trailing memory */ + count = phdr->p_memsz - phdr->p_filesz; + while (count > 0) { + int size; + u8 __iomem *buf; + + size = min_t(size_t, IOMAP_SIZE, count); + buf = ioremap(paddr, size); + if (!buf) { + dev_err(&pil->dev, "%s: Failed to map memory\n", + pil->desc->name); + ret = -ENOMEM; + goto release_fw; + } + memset(buf, 0, size); + iounmap(buf); + + count -= size; + paddr += size; + } + + if (pil->desc->ops->verify_blob) { + ret = pil->desc->ops->verify_blob(pil->desc, phdr->p_paddr, + phdr->p_memsz); + if (ret) + dev_err(&pil->dev, "%s: Blob%u failed verification\n", + pil->desc->name, num); + } + +release_fw: + release_firmware(fw); + return ret; +} + +#define segment_is_hash(flag) (((flag) & (0x7 << 24)) == (0x2 << 24)) + +static int segment_is_loadable(const struct elf32_phdr *p) +{ + return (p->p_type == PT_LOAD) && !segment_is_hash(p->p_flags); +} + +/* Sychronize request_firmware() with suspend */ +static DECLARE_RWSEM(pil_pm_rwsem); + +static int load_image(struct pil_device *pil) +{ + int i, ret; + char fw_name[30]; + struct elf32_hdr *ehdr; + const struct elf32_phdr *phdr; + const struct firmware *fw; + unsigned long proxy_timeout = pil->desc->proxy_timeout; + + down_read(&pil_pm_rwsem); + snprintf(fw_name, sizeof(fw_name), "%s.mdt", pil->desc->name); + ret = request_firmware(&fw, fw_name, &pil->dev); + printk("DEBUG:: %s request_firmware %s ret %d\n", __func__, fw_name, ret); + if (ret) { + dev_err(&pil->dev, 
"%s: Failed to locate %s\n", + pil->desc->name, fw_name); + goto out; + } + + if (fw->size < sizeof(*ehdr)) { + dev_err(&pil->dev, "%s: Not big enough to be an elf header\n", + pil->desc->name); + ret = -EIO; + goto release_fw; + } + + ehdr = (struct elf32_hdr *)fw->data; + if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG)) { + dev_err(&pil->dev, "%s: Not an elf header\n", pil->desc->name); + ret = -EIO; + goto release_fw; + } + + if (ehdr->e_phnum == 0) { + dev_err(&pil->dev, "%s: No loadable segments\n", + pil->desc->name); + ret = -EIO; + goto release_fw; + } + if (sizeof(struct elf32_phdr) * ehdr->e_phnum + + sizeof(struct elf32_hdr) > fw->size) { + dev_err(&pil->dev, "%s: Program headers not within mdt\n", + pil->desc->name); + ret = -EIO; + goto release_fw; + } + + ret = pil->desc->ops->init_image(pil->desc, fw->data, fw->size); + if (ret) { + dev_err(&pil->dev, "%s: Invalid firmware metadata\n", + pil->desc->name); + goto release_fw; + } + + phdr = (const struct elf32_phdr *)(fw->data + sizeof(struct elf32_hdr)); + for (i = 0; i < ehdr->e_phnum; i++, phdr++) { + if (!segment_is_loadable(phdr)) + continue; + + ret = load_segment(phdr, i, pil); + if (ret) { + dev_err(&pil->dev, "%s: Failed to load segment %d\n", + pil->desc->name, i); + goto release_fw; + } + } + + ret = pil_proxy_vote(pil); + if (ret) { + dev_err(&pil->dev, "%s: Failed to proxy vote\n", + pil->desc->name); + goto release_fw; + } + + ret = pil->desc->ops->auth_and_reset(pil->desc); + if (ret) { + dev_err(&pil->dev, "%s: Failed to bring out of reset\n", + pil->desc->name); + proxy_timeout = 0; /* Remove proxy vote immediately on error */ + goto err_boot; + } + dev_info(&pil->dev, "%s: Brought out of reset\n", pil->desc->name); +err_boot: + pil_proxy_unvote(pil, proxy_timeout); +release_fw: + release_firmware(fw); +out: + up_read(&pil_pm_rwsem); + return ret; +} + +static void pil_set_state(struct pil_device *pil, enum pil_state state) +{ + if (pil->state != state) { + pil->state = state; + 
sysfs_notify(&pil->dev.kobj, NULL, "state"); + } +} + +/** + * pil_get() - Load a peripheral into memory and take it out of reset + * @name: pointer to a string containing the name of the peripheral to load + * + * This function returns a pointer if it succeeds. If an error occurs an + * ERR_PTR is returned. + * + * If PIL is not enabled in the kernel, the value %NULL will be returned. + */ +void *pil_get(const char *name) +{ + int ret; + struct pil_device *pil; + struct pil_device *pil_d; + void *retval; + printk("DEBUG::: %s %s \n", __func__, name); + if (!name) + return NULL; + + pil = retval = find_peripheral(name); + if (!pil) + return ERR_PTR(-ENODEV); + if (!try_module_get(pil->owner)) { + put_device(&pil->dev); + return ERR_PTR(-ENODEV); + } + + pil_d = pil_get(pil->desc->depends_on); + if (IS_ERR(pil_d)) { + retval = pil_d; + goto err_depends; + } + + mutex_lock(&pil->lock); + if (!pil->count) { + ret = load_image(pil); + if (ret) { + retval = ERR_PTR(ret); + goto err_load; + } + } + pil->count++; + pil_set_state(pil, PIL_ONLINE); + mutex_unlock(&pil->lock); +out: + return retval; +err_load: + mutex_unlock(&pil->lock); + pil_put(pil_d); +err_depends: + put_device(&pil->dev); + module_put(pil->owner); + goto out; +} +EXPORT_SYMBOL(pil_get); + +static void pil_shutdown(struct pil_device *pil) +{ + pil->desc->ops->shutdown(pil->desc); + if (proxy_timeout_ms == 0 && pil->desc->ops->proxy_unvote) + pil->desc->ops->proxy_unvote(pil->desc); + else + flush_delayed_work(&pil->proxy); + + pil_set_state(pil, PIL_OFFLINE); +} + +/** + * pil_put() - Inform PIL the peripheral no longer needs to be active + * @peripheral_handle: pointer from a previous call to pil_get() + * + * This doesn't imply that a peripheral is shutdown or in reset since another + * driver could be using the peripheral. 
+ */ +void pil_put(void *peripheral_handle) +{ + struct pil_device *pil_d, *pil = peripheral_handle; + + if (IS_ERR_OR_NULL(pil)) + return; + + mutex_lock(&pil->lock); + if (WARN(!pil->count, "%s: %s: Reference count mismatch\n", + pil->desc->name, __func__)) + goto err_out; + if (!--pil->count) + pil_shutdown(pil); + mutex_unlock(&pil->lock); + + pil_d = find_peripheral(pil->desc->depends_on); + module_put(pil->owner); + if (pil_d) { + pil_put(pil_d); + put_device(&pil_d->dev); + } + put_device(&pil->dev); + return; +err_out: + mutex_unlock(&pil->lock); + return; +} +EXPORT_SYMBOL(pil_put); + +void pil_force_shutdown(const char *name) +{ + struct pil_device *pil; + + pil = find_peripheral(name); + if (!pil) { + pr_err("%s: Couldn't find %s\n", __func__, name); + return; + } + + mutex_lock(&pil->lock); + if (!WARN(!pil->count, "%s: %s: Reference count mismatch\n", + pil->desc->name, __func__)) + pil_shutdown(pil); + mutex_unlock(&pil->lock); + + put_device(&pil->dev); +} +EXPORT_SYMBOL(pil_force_shutdown); + +int pil_force_boot(const char *name) +{ + int ret = -EINVAL; + struct pil_device *pil; + + pil = find_peripheral(name); + if (!pil) { + pr_err("%s: Couldn't find %s\n", __func__, name); + return -EINVAL; + } + + mutex_lock(&pil->lock); + if (!WARN(!pil->count, "%s: %s: Reference count mismatch\n", + pil->desc->name, __func__)) + ret = load_image(pil); + if (!ret) + pil_set_state(pil, PIL_ONLINE); + mutex_unlock(&pil->lock); + put_device(&pil->dev); + + return ret; +} +EXPORT_SYMBOL(pil_force_boot); + +#ifdef CONFIG_DEBUG_FS +static int msm_pil_debugfs_open(struct inode *inode, struct file *filp) +{ + filp->private_data = inode->i_private; + return 0; +} + +static ssize_t msm_pil_debugfs_read(struct file *filp, char __user *ubuf, + size_t cnt, loff_t *ppos) +{ + int r; + char buf[40]; + struct pil_device *pil = filp->private_data; + + mutex_lock(&pil->lock); + r = snprintf(buf, sizeof(buf), "%d\n", pil->count); + mutex_unlock(&pil->lock); + return 
simple_read_from_buffer(ubuf, cnt, ppos, buf, r); +} + +static ssize_t msm_pil_debugfs_write(struct file *filp, + const char __user *ubuf, size_t cnt, loff_t *ppos) +{ + struct pil_device *pil = filp->private_data; + char buf[4]; + + if (cnt > sizeof(buf)) + return -EINVAL; + + if (copy_from_user(&buf, ubuf, cnt)) + return -EFAULT; + + if (!strncmp(buf, "get", 3)) { + if (IS_ERR(pil_get(pil->desc->name))) + return -EIO; + } else if (!strncmp(buf, "put", 3)) + pil_put(pil); + else + return -EINVAL; + + return cnt; +} + +static const struct file_operations msm_pil_debugfs_fops = { + .open = msm_pil_debugfs_open, + .read = msm_pil_debugfs_read, + .write = msm_pil_debugfs_write, +}; + +static struct dentry *pil_base_dir; + +static int __init msm_pil_debugfs_init(void) +{ + pil_base_dir = debugfs_create_dir("pil", NULL); + if (!pil_base_dir) { + pil_base_dir = NULL; + return -ENOMEM; + } + + return 0; +} + +static void __exit msm_pil_debugfs_exit(void) +{ + debugfs_remove_recursive(pil_base_dir); +} + +static int msm_pil_debugfs_add(struct pil_device *pil) +{ + if (!pil_base_dir) + return -ENOMEM; + + pil->dentry = debugfs_create_file(pil->desc->name, S_IRUGO | S_IWUSR, + pil_base_dir, pil, &msm_pil_debugfs_fops); + return !pil->dentry ? 
-ENOMEM : 0; +} + +static void msm_pil_debugfs_remove(struct pil_device *pil) +{ + debugfs_remove(pil->dentry); +} +#else +static int __init msm_pil_debugfs_init(void) { return 0; } +static void __exit msm_pil_debugfs_exit(void) { } +static int msm_pil_debugfs_add(struct pil_device *pil) { return 0; } +static void msm_pil_debugfs_remove(struct pil_device *pil) { } +#endif + +static void pil_device_release(struct device *dev) +{ + struct pil_device *pil = to_pil_device(dev); + mutex_destroy(&pil->lock); + kfree(pil); +} + +struct pil_device *msm_pil_register(struct pil_desc *desc) +{ + int err; + static atomic_t pil_count = ATOMIC_INIT(-1); + struct pil_device *pil; + + /* Ignore users who don't make any sense */ + if (WARN(desc->ops->proxy_unvote && !desc->ops->proxy_vote, + "invalid proxy voting. ignoring\n")) + ((struct pil_reset_ops *)desc->ops)->proxy_unvote = NULL; + + WARN(desc->ops->proxy_unvote && !desc->proxy_timeout, + "A proxy timeout of 0 ms was specified for %s. 
Specify one in " + "desc->proxy_timeout.\n", desc->name); + + pil = kzalloc(sizeof(*pil), GFP_KERNEL); + if (!pil) + return ERR_PTR(-ENOMEM); + + mutex_init(&pil->lock); + pil->desc = desc; + pil->owner = desc->owner; + pil->dev.parent = desc->dev; + pil->dev.bus = &pil_bus_type; + pil->dev.release = pil_device_release; + + INIT_DELAYED_WORK(&pil->proxy, pil_proxy_work); + + dev_set_name(&pil->dev, "pil%d", atomic_inc_return(&pil_count)); + err = device_register(&pil->dev); + if (err) { + put_device(&pil->dev); + mutex_destroy(&pil->lock); + kfree(pil); + return ERR_PTR(err); + } + + err = msm_pil_debugfs_add(pil); + if (err) { + device_unregister(&pil->dev); + return ERR_PTR(err); + } + + return pil; +} +EXPORT_SYMBOL(msm_pil_register); + +void msm_pil_unregister(struct pil_device *pil) +{ + if (IS_ERR_OR_NULL(pil)) + return; + + if (get_device(&pil->dev)) { + mutex_lock(&pil->lock); + WARN_ON(pil->count); + flush_delayed_work(&pil->proxy); + msm_pil_debugfs_remove(pil); + device_unregister(&pil->dev); + mutex_unlock(&pil->lock); + put_device(&pil->dev); + } +} +EXPORT_SYMBOL(msm_pil_unregister); + +static int pil_pm_notify(struct notifier_block *b, unsigned long event, void *p) +{ + switch (event) { + case PM_SUSPEND_PREPARE: + down_write(&pil_pm_rwsem); + break; + case PM_POST_SUSPEND: + up_write(&pil_pm_rwsem); + break; + } + return NOTIFY_DONE; +} + +static struct notifier_block pil_pm_notifier = { + .notifier_call = pil_pm_notify, +}; + +static int __init msm_pil_init(void) +{ + int ret = msm_pil_debugfs_init(); + if (ret) + return ret; + register_pm_notifier(&pil_pm_notifier); + return bus_register(&pil_bus_type); +} +subsys_initcall(msm_pil_init); + +static void __exit msm_pil_exit(void) +{ + bus_unregister(&pil_bus_type); + unregister_pm_notifier(&pil_pm_notifier); + msm_pil_debugfs_exit(); +} +module_exit(msm_pil_exit); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Load peripheral images and bring peripherals out of reset"); diff --git 
a/drivers/soc/qcom/peripheral-loader.h b/drivers/soc/qcom/peripheral-loader.h new file mode 100644 index 000000000000..6113ed6e5404 --- /dev/null +++ b/drivers/soc/qcom/peripheral-loader.h @@ -0,0 +1,61 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ +#ifndef __MSM_PERIPHERAL_LOADER_H +#define __MSM_PERIPHERAL_LOADER_H + +struct device; +struct module; + +/** + * struct pil_desc - PIL descriptor + * @name: string used for pil_get() + * @depends_on: booted before this peripheral + * @dev: parent device + * @ops: callback functions + * @owner: module the descriptor belongs to + * @proxy_timeout: delay in ms until proxy vote is removed + */ +struct pil_desc { + const char *name; + const char *depends_on; + struct device *dev; + const struct pil_reset_ops *ops; + struct module *owner; + unsigned long proxy_timeout; +}; + +/** + * struct pil_reset_ops - PIL operations + * @init_image: prepare an image for authentication + * @verify_blob: authenticate a program segment, called once for each loadable + * program segment (optional) + * @proxy_vote: make proxy votes before auth_and_reset (optional) + * @auth_and_reset: boot the processor + * @proxy_unvote: remove any proxy votes (optional) + * @shutdown: shutdown the processor + */ +struct pil_reset_ops { + int (*init_image)(struct pil_desc *pil, const u8 *metadata, + size_t size); + int (*verify_blob)(struct pil_desc *pil, u32 phy_addr, size_t size); + int (*proxy_vote)(struct pil_desc *pil); + int (*auth_and_reset)(struct pil_desc 
*pil); + void (*proxy_unvote)(struct pil_desc *pil); + int (*shutdown)(struct pil_desc *pil); +}; + +struct pil_device; + +extern struct pil_device *msm_pil_register(struct pil_desc *desc); +extern void msm_pil_unregister(struct pil_device *pil); + +#endif diff --git a/drivers/soc/qcom/pil-q6v4.c b/drivers/soc/qcom/pil-q6v4.c new file mode 100644 index 000000000000..39558e71f11b --- /dev/null +++ b/drivers/soc/qcom/pil-q6v4.c @@ -0,0 +1,466 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <linux/init.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/platform_device.h> +#include <linux/io.h> +#include <linux/ioport.h> +#include <linux/regulator/consumer.h> +#include <linux/elf.h> +#include <linux/delay.h> +#include <linux/reset.h> +#include <linux/err.h> +#include <linux/clk.h> + +//#include <mach/msm_bus.h> +//#include <mach/msm_iomap.h> + +#include "peripheral-loader.h" +#include "pil-q6v4.h" +#include "scm-pas.h" + +#define QDSP6SS_RST_EVB 0x0 +#define QDSP6SS_RESET 0x04 +#define QDSP6SS_STRAP_TCM 0x1C +#define QDSP6SS_STRAP_AHB 0x20 +#define QDSP6SS_GFMUX_CTL 0x30 +#define QDSP6SS_PWR_CTL 0x38 + +#define Q6SS_SS_ARES BIT(0) +#define Q6SS_CORE_ARES BIT(1) +#define Q6SS_ISDB_ARES BIT(2) +#define Q6SS_ETM_ARES BIT(3) +#define Q6SS_STOP_CORE_ARES BIT(4) +#define Q6SS_PRIV_ARES BIT(5) + +#define Q6SS_L2DATA_SLP_NRET_N BIT(0) +#define Q6SS_SLP_RET_N BIT(1) +#define Q6SS_L1TCM_SLP_NRET_N BIT(2) +#define Q6SS_L2TAG_SLP_NRET_N BIT(3) +#define 
Q6SS_ETB_SLEEP_NRET_N BIT(4) +#define Q6SS_ARR_STBY_N BIT(5) +#define Q6SS_CLAMP_IO BIT(6) + +#define Q6SS_CLK_ENA BIT(1) +#define Q6SS_SRC_SWITCH_CLK_OVR BIT(8) + +struct q6v4_data { + void __iomem *base; + unsigned long start_addr; + struct regulator *vreg; + struct regulator *pll_supply; + bool vreg_enabled; + struct clk *xo; + struct pil_device *pil; + + /* platform specific data */ + u32 strap_tcm_base; + u32 strap_ahb_upper; + u32 strap_ahb_lower; + u32 pas_id; + struct reset_control *rstc; + +}; + +static int pil_q6v4_init_image(struct pil_desc *pil, const u8 *metadata, + size_t size) +{ + const struct elf32_hdr *ehdr = (struct elf32_hdr *)metadata; + struct q6v4_data *drv = dev_get_drvdata(pil->dev); + drv->start_addr = ehdr->e_entry; + return 0; +} + +static int pil_q6v4_make_proxy_votes(struct pil_desc *pil) +{ + const struct q6v4_data *drv = dev_get_drvdata(pil->dev); + int ret; + + ret = clk_prepare_enable(drv->xo); + if (ret) { + dev_err(pil->dev, "Failed to enable XO\n"); +// goto err; + } + if (drv->pll_supply) { + ret = regulator_enable(drv->pll_supply); + if (ret) { + dev_err(pil->dev, "Failed to enable pll supply\n"); + goto err_regulator; + } + } + return 0; +err_regulator: + clk_disable_unprepare(drv->xo); +//err: + return ret; +} + +static void pil_q6v4_remove_proxy_votes(struct pil_desc *pil) +{ + const struct q6v4_data *drv = dev_get_drvdata(pil->dev); + if (drv->pll_supply) + regulator_disable(drv->pll_supply); + clk_disable_unprepare(drv->xo); +} + +static int pil_q6v4_power_up(struct device *dev) +{ + int err; + struct q6v4_data *drv = dev_get_drvdata(dev); + + //err = regulator_set_voltage(drv->vreg, 743750, 743750); + err = regulator_set_voltage(drv->vreg, 750000, 750000); + if (err) { + dev_err(dev, "Failed to set regulator's voltage step.\n"); + return err; + } + err = regulator_enable(drv->vreg); + if (err) { + dev_err(dev, "Failed to enable regulator.\n"); + return err; + } + + /* + * Q6 hardware requires a two step voltage ramp-up. 
+ * Delay between the steps. + */ + udelay(100); + + err = regulator_set_voltage(drv->vreg, 1050000, 1050000); + if (err) { + dev_err(dev, "Failed to set regulator's voltage.\n"); + return err; + } + drv->vreg_enabled = true; + return 0; +} + +static DEFINE_MUTEX(pil_q6v4_modem_lock); +//static unsigned pil_q6v4_modem_count; + +#if 0 +/* Bring modem subsystem out of reset */ +static void pil_q6v4_init_modem(void __iomem *base, void __iomem *jtag_clk) +{ + mutex_lock(&pil_q6v4_modem_lock); + if (!pil_q6v4_modem_count) { + /* Enable MSS clocks */ + writel_relaxed(0x10, SFAB_MSS_M_ACLK_CTL); + writel_relaxed(0x10, SFAB_MSS_S_HCLK_CTL); + writel_relaxed(0x10, MSS_S_HCLK_CTL); + writel_relaxed(0x10, MSS_SLP_CLK_CTL); + /* Wait for clocks to enable */ + mb(); + udelay(10); + + /* De-assert MSS reset */ + writel_relaxed(0x0, MSS_RESET); + mb(); + udelay(10); + /* Enable MSS */ + writel_relaxed(0x7, base); + } + + /* Enable JTAG clocks */ + /* TODO: Remove if/when Q6 software enables them? */ + writel_relaxed(0x10, jtag_clk); + + pil_q6v4_modem_count++; + mutex_unlock(&pil_q6v4_modem_lock); +} + +/* Put modem subsystem back into reset */ +static void pil_q6v4_shutdown_modem(void) +{ + mutex_lock(&pil_q6v4_modem_lock); + if (pil_q6v4_modem_count) + pil_q6v4_modem_count--; + if (pil_q6v4_modem_count == 0) + writel_relaxed(0x1, MSS_RESET); + mutex_unlock(&pil_q6v4_modem_lock); +} +#endif +static int pil_q6v4_reset(struct pil_desc *pil) +{ + u32 reg, err; + const struct q6v4_data *drv = dev_get_drvdata(pil->dev); +// const struct pil_q6v4_pdata *pdata = pil->dev->platform_data; + + err = pil_q6v4_power_up(pil->dev); + if (err) + return err; + /* Enable Q6 ACLK */ +//BUG(); //FIXME +// writel_relaxed(0x10, pdata->aclk_reg); + if (drv->rstc) + reset_control_deassert(drv->rstc); + + /* Unhalt bus port */ +// err = msm_bus_axi_portunhalt(pdata->bus_port); +// if (err) +// dev_err(pil->dev, "Failed to unhalt bus port\n"); +// + /* Deassert Q6SS_SS_ARES */ + reg = 
readl_relaxed(drv->base + QDSP6SS_RESET); + reg &= ~(Q6SS_SS_ARES); + writel_relaxed(reg, drv->base + QDSP6SS_RESET); + + /* Program boot address */ + writel_relaxed((drv->start_addr >> 8) & 0xFFFFFF, + drv->base + QDSP6SS_RST_EVB); + + /* Program TCM and AHB address ranges */ + writel_relaxed(drv->strap_tcm_base, drv->base + QDSP6SS_STRAP_TCM); + writel_relaxed(drv->strap_ahb_upper | drv->strap_ahb_lower, + drv->base + QDSP6SS_STRAP_AHB); + + /* Turn off Q6 core clock */ + writel_relaxed(Q6SS_SRC_SWITCH_CLK_OVR, + drv->base + QDSP6SS_GFMUX_CTL); + + /* Put memories to sleep */ + writel_relaxed(Q6SS_CLAMP_IO, drv->base + QDSP6SS_PWR_CTL); + + /* Assert resets */ + reg = readl_relaxed(drv->base + QDSP6SS_RESET); + reg |= (Q6SS_CORE_ARES | Q6SS_ISDB_ARES | Q6SS_ETM_ARES + | Q6SS_STOP_CORE_ARES); + writel_relaxed(reg, drv->base + QDSP6SS_RESET); + + /* Wait 8 AHB cycles for Q6 to be fully reset (AHB = 1.5Mhz) */ + mb(); + usleep_range(20, 30); + + /* Turn on Q6 memories */ + reg = Q6SS_L2DATA_SLP_NRET_N | Q6SS_SLP_RET_N | Q6SS_L1TCM_SLP_NRET_N + | Q6SS_L2TAG_SLP_NRET_N | Q6SS_ETB_SLEEP_NRET_N | Q6SS_ARR_STBY_N + | Q6SS_CLAMP_IO; + writel_relaxed(reg, drv->base + QDSP6SS_PWR_CTL); + + /* Turn on Q6 core clock */ + reg = Q6SS_CLK_ENA | Q6SS_SRC_SWITCH_CLK_OVR; + writel_relaxed(reg, drv->base + QDSP6SS_GFMUX_CTL); + + /* Remove Q6SS_CLAMP_IO */ + reg = readl_relaxed(drv->base + QDSP6SS_PWR_CTL); + reg &= ~Q6SS_CLAMP_IO; + writel_relaxed(reg, drv->base + QDSP6SS_PWR_CTL); + + /* Bring Q6 core out of reset and start execution. 
*/ + writel_relaxed(0x0, drv->base + QDSP6SS_RESET); + + return 0; +} + +static int pil_q6v4_shutdown(struct pil_desc *pil) +{ + u32 reg; + struct q6v4_data *drv = dev_get_drvdata(pil->dev); +// const struct pil_q6v4_pdata *pdata = pil->dev->platform_data; + + /* Make sure bus port is halted */ + //msm_bus_axi_porthalt(pdata->bus_port); + + /* Turn off Q6 core clock */ + writel_relaxed(Q6SS_SRC_SWITCH_CLK_OVR, + drv->base + QDSP6SS_GFMUX_CTL); + + /* Assert resets */ + reg = (Q6SS_SS_ARES | Q6SS_CORE_ARES | Q6SS_ISDB_ARES + | Q6SS_ETM_ARES | Q6SS_STOP_CORE_ARES | Q6SS_PRIV_ARES); + writel_relaxed(reg, drv->base + QDSP6SS_RESET); + + /* Turn off Q6 memories */ + writel_relaxed(Q6SS_CLAMP_IO, drv->base + QDSP6SS_PWR_CTL); + + + if (drv->vreg_enabled) { + regulator_disable(drv->vreg); + drv->vreg_enabled = false; + } + + return 0; +} + +static struct pil_reset_ops pil_q6v4_ops = { + .init_image = pil_q6v4_init_image, + .auth_and_reset = pil_q6v4_reset, + .shutdown = pil_q6v4_shutdown, + .proxy_vote = pil_q6v4_make_proxy_votes, + .proxy_unvote = pil_q6v4_remove_proxy_votes, +}; + +static int pil_q6v4_init_image_trusted(struct pil_desc *pil, + const u8 *metadata, size_t size) +{ + int ret; + //const struct pil_q6v4_pdata *pdata = pil->dev->platform_data; + struct q6v4_data *drv = dev_get_drvdata(pil->dev); + + ret = pas_init_image(drv->pas_id, metadata, size); + printk("DEBUG: %s ret %d \n", __func__, ret); + return ret; +} + +static int pil_q6v4_reset_trusted(struct pil_desc *pil) +{ + //const struct pil_q6v4_pdata *pdata = pil->dev->platform_data; + struct q6v4_data *drv = dev_get_drvdata(pil->dev); + int err; + + err = pil_q6v4_power_up(pil->dev); + if (err) + return err; + + /* Unhalt bus port */ +// err = msm_bus_axi_portunhalt(pdata->bus_port); +// if (err) +// dev_err(pil->dev, "Failed to unhalt bus port\n"); + return pas_auth_and_reset(drv->pas_id); +} + +static int pil_q6v4_shutdown_trusted(struct pil_desc *pil) +{ + int ret; + struct q6v4_data *drv = 
dev_get_drvdata(pil->dev); +// struct pil_q6v4_pdata *pdata = pil->dev->platform_data; + + /* Make sure bus port is halted */ + //msm_bus_axi_porthalt(pdata->bus_port); + + ret = pas_shutdown(drv->pas_id); + if (ret) + return ret; + + if (drv->vreg_enabled) { + regulator_disable(drv->vreg); + drv->vreg_enabled = false; + } + + return ret; +} + +static struct pil_reset_ops pil_q6v4_ops_trusted = { + .init_image = pil_q6v4_init_image_trusted, + .auth_and_reset = pil_q6v4_reset_trusted, + .shutdown = pil_q6v4_shutdown_trusted, +// .proxy_vote = pil_q6v4_make_proxy_votes, + // .proxy_unvote = pil_q6v4_remove_proxy_votes, +}; + +static int pil_q6v4_driver_probe(struct platform_device *pdev) +{ + struct q6v4_data *drv; + struct resource *res; + struct pil_desc *desc; +// int ret; + struct device_node *np = pdev->dev.of_node; + + printk("DEBUG:::::::::::::: %s \n", __func__); + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) + return -EINVAL; + + drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); + if (!drv) + return -ENOMEM; + platform_set_drvdata(pdev, drv); + + request_mem_region(res->start, resource_size(res), pdev->name); + + drv->base = devm_ioremap(&pdev->dev, res->start, resource_size(res)); + if (!drv->base) + return -ENOMEM; + + + desc = devm_kzalloc(&pdev->dev, sizeof(*desc), GFP_KERNEL); + if (!desc) + return -ENOMEM; + + if (np) { + of_property_read_u32(np, "strap-tcm-base", &drv->strap_tcm_base); + of_property_read_u32(np, "strap-ahb-lower", &drv->strap_ahb_lower); + of_property_read_u32(np, "strap-ahb-upper", &drv->strap_ahb_upper); + of_property_read_u32(np, "pas-id", &drv->pas_id); + of_property_read_string(np, "pas-name", &desc->name); + drv->rstc = reset_control_get_optional(&pdev->dev, "lpass"); + if (IS_ERR(drv->rstc)) + drv->rstc = NULL; + } else { + const struct pil_q6v4_pdata *pdata; + pdata = pdev->dev.platform_data; + drv->strap_tcm_base = pdata->strap_tcm_base; + drv->strap_ahb_lower = pdata->strap_ahb_lower; + 
drv->strap_ahb_upper = pdata->strap_ahb_upper; + //drv->aclk_reg = pdata->aclk_reg; + + drv->pas_id = pdata->pas_id; + /* No support to clk reset here */ + } + + // desc->name = pdata->name; +// desc->depends_on = pdata->depends; + desc->dev = &pdev->dev; + desc->owner = THIS_MODULE; + desc->proxy_timeout = 10000; + + printk("DEBUG:::::::::::; pas id %d \n", drv->pas_id); + + if (pas_supported(drv->pas_id) > 0) { + desc->ops = &pil_q6v4_ops_trusted; + dev_info(&pdev->dev, "using secure boot\n"); + } else { + desc->ops = &pil_q6v4_ops; + dev_info(&pdev->dev, "using non-secure boot\n"); + } + + drv->vreg = devm_regulator_get(&pdev->dev, "core-vdd"); + if (IS_ERR(drv->vreg)) { + dev_err(&pdev->dev, "Failed to set regulator's mode.\n"); + return PTR_ERR(drv->vreg); + } + + drv->pil = msm_pil_register(desc); + if (IS_ERR(drv->pil)) + return PTR_ERR(drv->pil); + printk("DEBUG:::::::::::::: %s successful\n", __func__); + return 0; +} + +static int pil_q6v4_driver__exit(struct platform_device *pdev) +{ + struct q6v4_data *drv = platform_get_drvdata(pdev); + msm_pil_unregister(drv->pil); + return 0; +} + +static struct of_device_id pil_q6v4_match[] = { + { .compatible = "qcom,pil-qdsp6v4", }, + {}, +}; + +static struct platform_driver pil_q6v4_driver = { + .probe = pil_q6v4_driver_probe, + .remove = pil_q6v4_driver__exit, + .driver = { + .name = "pil_qdsp6v4", + .owner = THIS_MODULE, + .of_match_table = of_match_ptr(pil_q6v4_match), + }, +}; + +module_platform_driver(pil_q6v4_driver); + +MODULE_DESCRIPTION("Support for booting QDSP6v4 (Hexagon) processors"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/soc/qcom/pil-q6v4.h b/drivers/soc/qcom/pil-q6v4.h new file mode 100644 index 000000000000..a791996b8459 --- /dev/null +++ b/drivers/soc/qcom/pil-q6v4.h @@ -0,0 +1,26 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ +#ifndef __MSM_PIL_Q6V4_H +#define __MSM_PIL_Q6V4_H + +struct pil_q6v4_pdata { + const unsigned long strap_tcm_base; + const unsigned long strap_ahb_upper; + const unsigned long strap_ahb_lower; + void __iomem *aclk_reg; + void __iomem *jtag_clk_reg; + const char *name; + const char *depends; + const unsigned pas_id; + int bus_port; +}; +#endif diff --git a/drivers/soc/qcom/remote_spinlock.c b/drivers/soc/qcom/remote_spinlock.c new file mode 100644 index 000000000000..53a35c37c27d --- /dev/null +++ b/drivers/soc/qcom/remote_spinlock.c @@ -0,0 +1,196 @@ +/* Copyright (c) 2008-2009, 2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ + +#include <linux/err.h> +#include <linux/kernel.h> +#include <linux/string.h> +#include <linux/delay.h> + +//#include <asm/system.h> + +//#include <mach/msm_iomap.h> +//#include <mach/remote_spinlock.h> +#include <linux/remote_spinlock.h> +//#include <mach/dal.h> +#include "smd_private.h" +#include <linux/module.h> + +static void remote_spin_release_all_locks(uint32_t pid, int count); + +#define CONFIG_MSM_REMOTE_SPINLOCK_SFPB +#if defined(CONFIG_MSM_REMOTE_SPINLOCK_SFPB) +#define SFPB_SPINLOCK_COUNT 8 +#define MSM_SFPB_MUTEX_REG_BASE 0x01200600 +#define MSM_SFPB_MUTEX_REG_SIZE (33 * 4) + +static void *hw_mutex_reg_base; +static DEFINE_MUTEX(hw_map_init_lock); + +static int remote_spinlock_init_address(int id, _remote_spinlock_t *lock) +{ + if (id >= SFPB_SPINLOCK_COUNT) + return -EINVAL; + + if (!hw_mutex_reg_base) { + mutex_lock(&hw_map_init_lock); + if (!hw_mutex_reg_base) + hw_mutex_reg_base = ioremap(MSM_SFPB_MUTEX_REG_BASE, + MSM_SFPB_MUTEX_REG_SIZE); + mutex_unlock(&hw_map_init_lock); + BUG_ON(hw_mutex_reg_base == NULL); + } + + *lock = hw_mutex_reg_base + 0x4 + id * 4; + return 0; +} + +void _remote_spin_release_all(uint32_t pid) +{ + remote_spin_release_all_locks(pid, SFPB_SPINLOCK_COUNT); +} + +#else +#define SMEM_SPINLOCK_COUNT 8 +#define SMEM_SPINLOCK_ARRAY_SIZE (SMEM_SPINLOCK_COUNT * sizeof(uint32_t)) + +static int remote_spinlock_init_address(int id, _remote_spinlock_t *lock) +{ + _remote_spinlock_t spinlock_start; + + if (id >= SMEM_SPINLOCK_COUNT) + return -EINVAL; + + spinlock_start = smem_alloc(SMEM_SPINLOCK_ARRAY, + SMEM_SPINLOCK_ARRAY_SIZE); + if (spinlock_start == NULL) + return -ENXIO; + + *lock = spinlock_start + id; + + return 0; +} + +void _remote_spin_release_all(uint32_t pid) +{ + remote_spin_release_all_locks(pid, SMEM_SPINLOCK_COUNT); +} + +#endif + +/** + * Release all spinlocks owned by @pid. 
+ * + * This is only to be used for situations where the processor owning + * spinlocks has crashed and the spinlocks must be released. + * + * @pid - processor ID of processor to release + */ +static void remote_spin_release_all_locks(uint32_t pid, int count) +{ + int n; + _remote_spinlock_t lock; + + for (n = 0; n < count; ++n) { + if (remote_spinlock_init_address(n, &lock) == 0) + _remote_spin_release(&lock, pid); + } +} + +static int +remote_spinlock_dal_init(const char *chunk_name, _remote_spinlock_t *lock) +{ +#if 0 + void *dal_smem_start, *dal_smem_end; + uint32_t dal_smem_size; + struct dal_chunk_header *cur_header; + + if (!chunk_name) + return -EINVAL; + + dal_smem_start = smem_get_entry(SMEM_DAL_AREA, &dal_smem_size); + if (!dal_smem_start) + return -ENXIO; + + dal_smem_end = dal_smem_start + dal_smem_size; + + /* Find first chunk header */ + cur_header = (struct dal_chunk_header *) + (((uint32_t)dal_smem_start + (4095)) & ~4095); + *lock = NULL; + while (cur_header->size != 0 + && ((uint32_t)(cur_header + 1) < (uint32_t)dal_smem_end)) { + + /* Check if chunk name matches */ + if (!strncmp(cur_header->name, chunk_name, + DAL_CHUNK_NAME_LENGTH)) { + *lock = (_remote_spinlock_t)&cur_header->lock; + return 0; + } + cur_header = (void *)cur_header + cur_header->size; + } + + pr_err("%s: DAL remote lock \"%s\" not found.\n", __func__, + chunk_name); +#endif + return -EINVAL; +} + +int _remote_spin_lock_init(remote_spinlock_id_t id, _remote_spinlock_t *lock) +{ + BUG_ON(id == NULL); + + if (id[0] == 'D' && id[1] == ':') { + /* DAL chunk name starts after "D:" */ + return remote_spinlock_dal_init(&id[2], lock); + } else if (id[0] == 'S' && id[1] == ':') { + /* Single-digit lock ID follows "S:" */ + BUG_ON(id[3] != '\0'); + + return remote_spinlock_init_address((((uint8_t)id[2])-'0'), + lock); + } else { + return -EINVAL; + } +} + +int _remote_mutex_init(struct remote_mutex_id *id, _remote_mutex_t *lock) +{ + BUG_ON(id == NULL); + + lock->delay_us = 
id->delay_us; + return _remote_spin_lock_init(id->r_spinlock_id, &(lock->r_spinlock)); +} +EXPORT_SYMBOL(_remote_mutex_init); + +void _remote_mutex_lock(_remote_mutex_t *lock) +{ + while (!_remote_spin_trylock(&(lock->r_spinlock))) { + if (lock->delay_us >= 1000) + msleep(lock->delay_us/1000); + else + udelay(lock->delay_us); + } +} +EXPORT_SYMBOL(_remote_mutex_lock); + +void _remote_mutex_unlock(_remote_mutex_t *lock) +{ + _remote_spin_unlock(&(lock->r_spinlock)); +} +EXPORT_SYMBOL(_remote_mutex_unlock); + +int _remote_mutex_trylock(_remote_mutex_t *lock) +{ + return _remote_spin_trylock(&(lock->r_spinlock)); +} +EXPORT_SYMBOL(_remote_mutex_trylock); diff --git a/arch/arm/mach-qcom/scm-boot.c b/drivers/soc/qcom/scm-boot.c index 45cee3e469a5..f653217ae442 100644 --- a/arch/arm/mach-qcom/scm-boot.c +++ b/drivers/soc/qcom/scm-boot.c @@ -18,8 +18,25 @@ #include <linux/module.h> #include <linux/slab.h> -#include "scm.h" -#include "scm-boot.h" +#include <soc/qcom/scm.h> +#include <soc/qcom/scm-boot.h> + +#define SCM_FLAG_WARMBOOT_CPU0 0x04 +#define SCM_FLAG_WARMBOOT_CPU1 0x02 +#define SCM_FLAG_WARMBOOT_CPU2 0x10 +#define SCM_FLAG_WARMBOOT_CPU3 0x40 + +struct scm_warmboot { + int flag; + void *entry; +}; + +static struct scm_warmboot scm_flags[] = { + { .flag = SCM_FLAG_WARMBOOT_CPU0 }, + { .flag = SCM_FLAG_WARMBOOT_CPU1 }, + { .flag = SCM_FLAG_WARMBOOT_CPU2 }, + { .flag = SCM_FLAG_WARMBOOT_CPU3 }, +}; /* * Set the cold/warm boot address for one of the CPU cores. @@ -37,3 +54,21 @@ int scm_set_boot_addr(phys_addr_t addr, int flags) &cmd, sizeof(cmd), NULL, 0); } EXPORT_SYMBOL(scm_set_boot_addr); + +int scm_set_warm_boot_addr(void *entry, int cpu) +{ + int ret; + + /* + * Reassign only if we are switching from hotplug entry point + * to cpuidle entry point or vice versa. 
+ */ + if (entry == scm_flags[cpu].entry) + return 0; + + ret = scm_set_boot_addr(virt_to_phys(entry), scm_flags[cpu].flag); + if (!ret) + scm_flags[cpu].entry = entry; + + return ret; +} diff --git a/drivers/soc/qcom/scm-pas.c b/drivers/soc/qcom/scm-pas.c new file mode 100644 index 000000000000..9b32cc478434 --- /dev/null +++ b/drivers/soc/qcom/scm-pas.c @@ -0,0 +1,112 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#define pr_fmt(fmt) "scm-pas: " fmt + +#include <linux/module.h> +#include <linux/slab.h> +#include <linux/string.h> +#include <linux/clk.h> +#include <linux/err.h> +#include <linux/dma-mapping.h> + +#include <soc/qcom/scm.h> +#include <asm/cacheflush.h> +#include "scm-pas.h" + +#define PAS_INIT_IMAGE_CMD 1 +#define PAS_MEM_SETUP_CMD 2 +#define PAS_AUTH_AND_RESET_CMD 5 +#define PAS_SHUTDOWN_CMD 6 +#define PAS_IS_SUPPORTED_CMD 7 + + +int pas_init_image(enum pas_id id, const u8 *metadata, size_t size) +{ + int ret; + struct pas_init_image_req { + u32 proc; + u32 image_addr; + } request; + u32 scm_ret = 0; + void *mdata_buf; + dma_addr_t mdata_phys; + DEFINE_DMA_ATTRS(attrs); + + + dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs); + mdata_buf = dma_alloc_attrs(NULL, size, &mdata_phys, GFP_KERNEL, + &attrs); + if (!mdata_buf) { + pr_err("Allocation for metadata failed.\n"); + return -ENOMEM; + } + + memcpy(mdata_buf, metadata, size); + + request.proc = id; + request.image_addr = mdata_phys; + + ret = scm_call(SCM_SVC_PIL, PAS_INIT_IMAGE_CMD, &request, + 
sizeof(request), &scm_ret, sizeof(scm_ret)); + + dma_free_attrs(NULL, size, mdata_buf, mdata_phys, &attrs); + + if (ret) + return ret; + return scm_ret; +} +EXPORT_SYMBOL(pas_init_image); + +int pas_auth_and_reset(enum pas_id id) +{ + int ret; + u32 proc = id, scm_ret = 0; + ret = scm_call(SCM_SVC_PIL, PAS_AUTH_AND_RESET_CMD, &proc, + sizeof(proc), &scm_ret, sizeof(scm_ret)); + if (ret) + scm_ret = ret; + + return scm_ret; +} +EXPORT_SYMBOL(pas_auth_and_reset); + +int pas_shutdown(enum pas_id id) +{ + int ret; + u32 proc = id, scm_ret = 0; + + ret = scm_call(SCM_SVC_PIL, PAS_SHUTDOWN_CMD, &proc, sizeof(proc), + &scm_ret, sizeof(scm_ret)); + if (ret) + return ret; + + return scm_ret; +} +EXPORT_SYMBOL(pas_shutdown); + +int pas_supported(enum pas_id id) +{ + int ret; + u32 periph = id, ret_val = 0; + + if (scm_is_call_available(SCM_SVC_PIL, PAS_IS_SUPPORTED_CMD) <= 0) + return 0; + + ret = scm_call(SCM_SVC_PIL, PAS_IS_SUPPORTED_CMD, &periph, + sizeof(periph), &ret_val, sizeof(ret_val)); + if (ret) + return ret; + + return ret_val; +} +EXPORT_SYMBOL(pas_supported); diff --git a/drivers/soc/qcom/scm-pas.h b/drivers/soc/qcom/scm-pas.h new file mode 100644 index 000000000000..86b5b30f5bea --- /dev/null +++ b/drivers/soc/qcom/scm-pas.h @@ -0,0 +1,32 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#ifndef __MSM_SCM_PAS_H +#define __MSM_SCM_PAS_H + +enum pas_id { + PAS_MODEM, + PAS_Q6, + PAS_DSPS, + PAS_TZAPPS, + PAS_MODEM_SW, + PAS_MODEM_FW, + PAS_WCNSS, + PAS_SECAPP, + PAS_GSS, + PAS_VIDC, +}; + +extern int pas_init_image(enum pas_id id, const u8 *metadata, size_t size); +extern int pas_auth_and_reset(enum pas_id id); +extern int pas_shutdown(enum pas_id id); +extern int pas_supported(enum pas_id id); +#endif diff --git a/arch/arm/mach-qcom/scm.c b/drivers/soc/qcom/scm.c index c536fd6bf827..aa0d7b89ee73 100644 --- a/arch/arm/mach-qcom/scm.c +++ b/drivers/soc/qcom/scm.c @@ -22,12 +22,11 @@ #include <linux/errno.h> #include <linux/err.h> -#include <asm/cacheflush.h> +#include <soc/qcom/scm.h> -#include "scm.h" +#include <asm/outercache.h> +#include <asm/cacheflush.h> -/* Cache line size for msm8x60 */ -#define CACHELINESIZE 32 #define SCM_ENOMEM -5 #define SCM_EOPNOTSUPP -4 @@ -152,8 +151,14 @@ static inline void *scm_get_response_buffer(const struct scm_response *rsp) return (void *)rsp + rsp->buf_offset; } -static int scm_remap_error(int err) +static int scm_remap_error(const struct scm_command *cmd, int err) { + pr_err("scm_call failed with error code %d\n", err); + u32 svc_id = cmd->id >> 10; + u32 cmd_id = cmd->id & GENMASK(10, 0); + + pr_err("scm_call for svc_id %d & cmd_id %d failed with error code %d\n", + svc_id, cmd_id, err); switch (err) { case SCM_ERROR: return -EIO; @@ -198,18 +203,38 @@ static int __scm_call(const struct scm_command *cmd) u32 cmd_addr = virt_to_phys(cmd); /* - * Flush the entire cache here so callers don't have to remember - * to flush the cache when passing physical addresses to the secure - * side in the buffer. + * Flush the command buffer so that the secure world sees + * the correct data. 
*/ - flush_cache_all(); + __cpuc_flush_dcache_area((void *)cmd, cmd->len); + outer_flush_range(cmd_addr, cmd_addr + cmd->len); + ret = smc(cmd_addr); if (ret < 0) - ret = scm_remap_error(ret); + ret = scm_remap_error(cmd, ret); return ret; } +static void scm_inv_range(unsigned long start, unsigned long end) +{ + u32 cacheline_size, ctr; + + asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr)); + cacheline_size = 4 << ((ctr >> 16) & 0xf); + + start = round_down(start, cacheline_size); + end = round_up(end, cacheline_size); + outer_inv_range(start, end); + while (start < end) { + asm ("mcr p15, 0, %0, c7, c6, 1" : : "r" (start) + : "memory"); + start += cacheline_size; + } + dsb(); + isb(); +} + /** * scm_call() - Send an SCM command * @svc_id: service identifier @@ -220,6 +245,13 @@ static int __scm_call(const struct scm_command *cmd) * @resp_len: length of the response buffer * * Sends a command to the SCM and waits for the command to finish processing. + * + * A note on cache maintenance: + * Note that any buffers that are expected to be accessed by the secure world + * must be flushed before invoking scm_call and invalidated in the cache + * immediately after scm_call returns. Cache maintenance on the command and + * response buffers is taken care of by scm_call; however, callers are + * responsible for any other cached buffers passed over to the secure world. 
*/ int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len, void *resp_buf, size_t resp_len) @@ -227,6 +259,7 @@ int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len, int ret; struct scm_command *cmd; struct scm_response *rsp; + unsigned long start, end; cmd = alloc_scm_command(cmd_len, resp_len); if (!cmd) @@ -243,17 +276,15 @@ int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len, goto out; rsp = scm_command_to_response(cmd); + start = (unsigned long)rsp; + do { - u32 start = (u32)rsp; - u32 end = (u32)scm_get_response_buffer(rsp) + resp_len; - start &= ~(CACHELINESIZE - 1); - while (start < end) { - asm ("mcr p15, 0, %0, c7, c6, 1" : : "r" (start) - : "memory"); - start += CACHELINESIZE; - } + scm_inv_range(start, start + sizeof(*rsp)); } while (!rsp->is_complete); + end = (unsigned long)scm_get_response_buffer(rsp) + resp_len; + scm_inv_range(start, end); + if (resp_buf) memcpy(resp_buf, scm_get_response_buffer(rsp), resp_len); out: @@ -262,6 +293,79 @@ out: } EXPORT_SYMBOL(scm_call); +#define SCM_CLASS_REGISTER (0x2 << 8) +#define SCM_MASK_IRQS BIT(5) +#define SCM_ATOMIC(svc, cmd, n) (((((svc) << 10)|((cmd) & 0x3ff)) << 12) | \ + SCM_CLASS_REGISTER | \ + SCM_MASK_IRQS | \ + (n & 0xf)) + +/** + * scm_call_atomic1() - Send an atomic SCM command with one argument + * @svc_id: service identifier + * @cmd_id: command identifier + * @arg1: first argument + * + * This shall only be used with commands that are guaranteed to be + * uninterruptable, atomic and SMP safe. 
+ */ +s32 scm_call_atomic1(u32 svc, u32 cmd, u32 arg1) +{ + int context_id; + register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 1); + register u32 r1 asm("r1") = (u32)&context_id; + register u32 r2 asm("r2") = arg1; + + asm volatile( + __asmeq("%0", "r0") + __asmeq("%1", "r0") + __asmeq("%2", "r1") + __asmeq("%3", "r2") +#ifdef REQUIRES_SEC + ".arch_extension sec\n" +#endif + "smc #0 @ switch to secure world\n" + : "=r" (r0) + : "r" (r0), "r" (r1), "r" (r2) + : "r3"); + return r0; +} +EXPORT_SYMBOL(scm_call_atomic1); + +/** + * scm_call_atomic2() - Send an atomic SCM command with two arguments + * @svc_id: service identifier + * @cmd_id: command identifier + * @arg1: first argument + * @arg2: second argument + * + * This shall only be used with commands that are guaranteed to be + * uninterruptable, atomic and SMP safe. + */ +s32 scm_call_atomic2(u32 svc, u32 cmd, u32 arg1, u32 arg2) +{ + int context_id; + register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 2); + register u32 r1 asm("r1") = (u32)&context_id; + register u32 r2 asm("r2") = arg1; + register u32 r3 asm("r3") = arg2; + + asm volatile( + __asmeq("%0", "r0") + __asmeq("%1", "r0") + __asmeq("%2", "r1") + __asmeq("%3", "r2") + __asmeq("%4", "r3") +#ifdef REQUIRES_SEC + ".arch_extension sec\n" +#endif + "smc #0 @ switch to secure world\n" + : "=r" (r0) + : "r" (r0), "r" (r1), "r" (r2), "r" (r3)); + return r0; +} +EXPORT_SYMBOL(scm_call_atomic2); + u32 scm_get_version(void) { int context_id; @@ -297,3 +401,20 @@ u32 scm_get_version(void) return version; } EXPORT_SYMBOL(scm_get_version); + +#define IS_CALL_AVAIL_CMD 1 +int scm_is_call_available(u32 svc_id, u32 cmd_id) +{ + int ret; + u32 svc_cmd = (svc_id << 10) | cmd_id; + u32 ret_val = 0; + + ret = scm_call(SCM_SVC_INFO, IS_CALL_AVAIL_CMD, &svc_cmd, + sizeof(svc_cmd), &ret_val, sizeof(ret_val)); + if (ret) + return ret; + + return ret_val; +} +EXPORT_SYMBOL(scm_is_call_available); + diff --git a/drivers/soc/qcom/smd.c b/drivers/soc/qcom/smd.c new file mode 
100644 index 000000000000..ac35a37fa3ca --- /dev/null +++ b/drivers/soc/qcom/smd.c @@ -0,0 +1,3400 @@ +/* arch/arm/mach-msm/smd.c + * + * Copyright (C) 2007 Google, Inc. + * Copyright (c) 2008-2012, The Linux Foundation. All rights reserved. + * Author: Brian Swetland <swetland@google.com> + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ + +#include <linux/platform_device.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/fs.h> +#include <linux/cdev.h> +#include <linux/device.h> +#include <linux/wait.h> +#include <linux/interrupt.h> +#include <linux/irq.h> +#include <linux/list.h> +#include <linux/slab.h> +#include <linux/delay.h> +#include <linux/io.h> +#include <linux/termios.h> +#include <linux/ctype.h> +#include <linux/remote_spinlock.h> +#include <linux/uaccess.h> +#include <linux/kfifo.h> +//#include <linux/wakelock.h> +#include <linux/notifier.h> +#include <linux/sort.h> +#include <linux/suspend.h> +#include <linux/msm_smd.h> +//#include <mach/msm_iomap.h> +//#include <mach/system.h> +//#include <linux/subsystem_notif.h> +//#include <mach/socinfo.h> +//#include <mach/proc_comm.h> +#include <asm/cacheflush.h> + +#include "smd_private.h" +//#include "modem_notifier.h" + +#define CONFIG_QDSP6 1 +#define CONFIG_DSPS 1 +#define CONFIG_WCNSS 1 +#define CONFIG_DSPS_SMSM 1 + +#define MODULE_NAME "msm_smd" +#define SMEM_VERSION 0x000B +#define SMD_VERSION 0x00020000 +#define SMSM_SNAPSHOT_CNT 64 +#define SMSM_SNAPSHOT_SIZE ((SMSM_NUM_ENTRIES + 1) * 4) + +uint32_t SMSM_NUM_ENTRIES = 8; +uint32_t 
SMSM_NUM_HOSTS = 3; +void * msm_smem_base_virt; +void * g_out_base; + +/* Legacy SMSM interrupt notifications */ +#define LEGACY_MODEM_SMSM_MASK (SMSM_RESET | SMSM_INIT | SMSM_SMDINIT \ + | SMSM_RUN | SMSM_SYSTEM_DOWNLOAD) + +enum { + MSM_SMD_DEBUG = 1U << 0, + MSM_SMSM_DEBUG = 1U << 1, + MSM_SMD_INFO = 1U << 2, + MSM_SMSM_INFO = 1U << 3, + MSM_SMx_POWER_INFO = 1U << 4, +}; + +struct smsm_shared_info { + uint32_t *state; + uint32_t *intr_mask; + uint32_t *intr_mux; +}; + +static struct smsm_shared_info smsm_info; +static struct kfifo smsm_snapshot_fifo; +//static struct wake_lock smsm_snapshot_wakelock; +static int smsm_snapshot_count; +static DEFINE_SPINLOCK(smsm_snapshot_count_lock); + +struct smsm_size_info_type { + uint32_t num_hosts; + uint32_t num_entries; + uint32_t reserved0; + uint32_t reserved1; +}; + +struct smsm_state_cb_info { + struct list_head cb_list; + uint32_t mask; + void *data; + void (*notify)(void *data, uint32_t old_state, uint32_t new_state); +}; + +struct smsm_state_info { + struct list_head callbacks; + uint32_t last_value; + uint32_t intr_mask_set; + uint32_t intr_mask_clear; +}; + +struct interrupt_config_item { + /* must be initialized */ + irqreturn_t (*irq_handler)(int req, void *data); + /* outgoing interrupt config (set from platform data) */ + uint32_t out_bit_pos; + void __iomem *out_base; + uint32_t out_offset; + int irq_id; +}; + +struct interrupt_config { + struct interrupt_config_item smd; + struct interrupt_config_item smsm; +}; + +static irqreturn_t smd_modem_irq_handler(int irq, void *data); +static irqreturn_t smsm_modem_irq_handler(int irq, void *data); +static irqreturn_t smd_dsp_irq_handler(int irq, void *data); +static irqreturn_t smsm_dsp_irq_handler(int irq, void *data); +static irqreturn_t smd_dsps_irq_handler(int irq, void *data); +static irqreturn_t smsm_dsps_irq_handler(int irq, void *data); +static irqreturn_t smd_wcnss_irq_handler(int irq, void *data); +static irqreturn_t smsm_wcnss_irq_handler(int irq, void 
*data); +static irqreturn_t smd_rpm_irq_handler(int irq, void *data); +static irqreturn_t smsm_irq_handler(int irq, void *data); + +static struct interrupt_config private_intr_config[NUM_SMD_SUBSYSTEMS] = { + [SMD_MODEM] = { + .smd.irq_handler = smd_modem_irq_handler, + .smsm.irq_handler = smsm_modem_irq_handler, + }, + [SMD_Q6] = { + .smd.irq_handler = smd_dsp_irq_handler, + .smsm.irq_handler = smsm_dsp_irq_handler, + }, + [SMD_DSPS] = { + .smd.irq_handler = smd_dsps_irq_handler, + .smsm.irq_handler = smsm_dsps_irq_handler, + }, + [SMD_WCNSS] = { + .smd.irq_handler = smd_wcnss_irq_handler, + .smsm.irq_handler = smsm_wcnss_irq_handler, + }, + [SMD_RPM] = { + .smd.irq_handler = smd_rpm_irq_handler, + .smsm.irq_handler = NULL, /* does not support smsm */ + }, +}; + +struct smem_area { + void *phys_addr; + unsigned size; + void __iomem *virt_addr; +}; +static uint32_t num_smem_areas; +static struct smem_area *smem_areas; +static void *smem_range_check(void *base, unsigned offset); + +struct interrupt_stat interrupt_stats[NUM_SMD_SUBSYSTEMS]; + +#define SMSM_STATE_ADDR(entry) (smsm_info.state + entry) +#define SMSM_INTR_MASK_ADDR(entry, host) (smsm_info.intr_mask + \ + entry * SMSM_NUM_HOSTS + host) +#define SMSM_INTR_MUX_ADDR(entry) (smsm_info.intr_mux + entry) + +/* Internal definitions which are not exported in some targets */ +enum { + SMSM_APPS_DEM_I = 3, +}; + +static int msm_smd_debug_mask; +module_param_named(debug_mask, msm_smd_debug_mask, + int, S_IRUGO | S_IWUSR | S_IWGRP); + +#define CONFIG_MSM_SMD_DEBUG +#if defined(CONFIG_MSM_SMD_DEBUG) +#define SMD_DBG(x...) do { \ + if (msm_smd_debug_mask & MSM_SMD_DEBUG) \ + printk(KERN_DEBUG x); \ + } while (0) + +#define SMSM_DBG(x...) do { \ + if (msm_smd_debug_mask & MSM_SMSM_DEBUG) \ + printk(KERN_DEBUG x); \ + } while (0) + +#define SMD_INFO(x...) do { \ + if (msm_smd_debug_mask & MSM_SMD_INFO) \ + printk(KERN_INFO x); \ + } while (0) + +#define SMSM_INFO(x...) 
do { \ + if (msm_smd_debug_mask & MSM_SMSM_INFO) \ + printk(KERN_INFO x); \ + } while (0) +#define SMx_POWER_INFO(x...) do { \ + if (msm_smd_debug_mask & MSM_SMx_POWER_INFO) \ + printk(KERN_INFO x); \ + } while (0) +#else +#define SMD_DBG(x...) do { } while (0) +#define SMSM_DBG(x...) do { } while (0) +#define SMD_INFO(x...) do { } while (0) +#define SMSM_INFO(x...) do { } while (0) +#define SMx_POWER_INFO(x...) do { } while (0) +#endif + +static unsigned last_heap_free = 0xffffffff; + + +#define MSM_TRIG_A2M_SMD_INT +#define MSM_TRIG_A2Q6_SMD_INT +#define MSM_TRIG_A2M_SMSM_INT +#define MSM_TRIG_A2Q6_SMSM_INT +#define MSM_TRIG_A2DSPS_SMD_INT +#define MSM_TRIG_A2DSPS_SMSM_INT +#define MSM_TRIG_A2WCNSS_SMD_INT +#define MSM_TRIG_A2WCNSS_SMSM_INT + +/* + * stub out legacy macros if they are not being used so that the legacy + * code compiles even though it is not used + * + * these definitions should not be used in active code and will cause + * an early failure + */ +#ifndef INT_ADSP_A11_SMSM +#define INT_ADSP_A11_SMSM 121 +#endif + +#define SMD_LOOPBACK_CID 100 + +#define SMEM_SPINLOCK_SMEM_ALLOC "S:3" +static remote_spinlock_t remote_spinlock; + +static LIST_HEAD(smd_ch_list_loopback); +static void smd_fake_irq_handler(unsigned long arg); +static void smsm_cb_snapshot(uint32_t use_wakelock); + +static struct workqueue_struct *smsm_cb_wq; +static void notify_smsm_cb_clients_worker(struct work_struct *work); +static DECLARE_WORK(smsm_cb_work, notify_smsm_cb_clients_worker); +static DEFINE_MUTEX(smsm_lock); +static struct smsm_state_info *smsm_states; +static int spinlocks_initialized; + +/** + * Variables to indicate smd module initialization. + * Dependents to register for smd module init notifier. 
+ */ +static int smd_module_inited; +static RAW_NOTIFIER_HEAD(smd_module_init_notifier_list); +static DEFINE_MUTEX(smd_module_init_notifier_lock); +static void smd_module_init_notify(uint32_t state, void *data); + +static inline void smd_write_intr(unsigned int val, + volatile void __iomem *addr) +{ + wmb(); + __raw_writel(val, addr); +} + +static inline void notify_modem_smd(void) +{ + static const struct interrupt_config_item *intr + = &private_intr_config[SMD_MODEM].smd; + if (intr->out_base) { + ++interrupt_stats[SMD_MODEM].smd_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } else { + ++interrupt_stats[SMD_MODEM].smd_out_hardcode_count; + MSM_TRIG_A2M_SMD_INT; + } +} + +static inline void notify_dsp_smd(void) +{ + static const struct interrupt_config_item *intr + = &private_intr_config[SMD_Q6].smd; + if (intr->out_base) { + ++interrupt_stats[SMD_Q6].smd_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } else { + ++interrupt_stats[SMD_Q6].smd_out_hardcode_count; + MSM_TRIG_A2Q6_SMD_INT; + } +} + +static inline void notify_dsps_smd(void) +{ + static const struct interrupt_config_item *intr + = &private_intr_config[SMD_DSPS].smd; + if (intr->out_base) { + ++interrupt_stats[SMD_DSPS].smd_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } else { + ++interrupt_stats[SMD_DSPS].smd_out_hardcode_count; + MSM_TRIG_A2DSPS_SMD_INT; + } +} + +static inline void notify_wcnss_smd(void) +{ + static const struct interrupt_config_item *intr + = &private_intr_config[SMD_WCNSS].smd; + + if (intr->out_base) { + ++interrupt_stats[SMD_WCNSS].smd_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } else { + ++interrupt_stats[SMD_WCNSS].smd_out_hardcode_count; + MSM_TRIG_A2WCNSS_SMD_INT; + } +} + +static inline void notify_rpm_smd(void) +{ + static const struct interrupt_config_item *intr + = 
&private_intr_config[SMD_RPM].smd; + + if (intr->out_base) { + ++interrupt_stats[SMD_RPM].smd_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } +} + +static inline void notify_modem_smsm(void) +{ + static const struct interrupt_config_item *intr + = &private_intr_config[SMD_MODEM].smsm; + if (intr->out_base) { + ++interrupt_stats[SMD_MODEM].smsm_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } else { + ++interrupt_stats[SMD_MODEM].smsm_out_hardcode_count; + MSM_TRIG_A2M_SMSM_INT; + } +} + +static inline void notify_dsp_smsm(void) +{ + static const struct interrupt_config_item *intr + = &private_intr_config[SMD_Q6].smsm; + if (intr->out_base) { + ++interrupt_stats[SMD_Q6].smsm_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } else { + ++interrupt_stats[SMD_Q6].smsm_out_hardcode_count; + MSM_TRIG_A2Q6_SMSM_INT; + } +} + +static inline void notify_dsps_smsm(void) +{ + static const struct interrupt_config_item *intr + = &private_intr_config[SMD_DSPS].smsm; + if (intr->out_base) { + ++interrupt_stats[SMD_DSPS].smsm_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } else { + ++interrupt_stats[SMD_DSPS].smsm_out_hardcode_count; + MSM_TRIG_A2DSPS_SMSM_INT; + } +} + +static inline void notify_wcnss_smsm(void) +{ + static const struct interrupt_config_item *intr + = &private_intr_config[SMD_WCNSS].smsm; + + if (intr->out_base) { + ++interrupt_stats[SMD_WCNSS].smsm_out_config_count; + smd_write_intr(intr->out_bit_pos, + intr->out_base + intr->out_offset); + } else { + ++interrupt_stats[SMD_WCNSS].smsm_out_hardcode_count; + MSM_TRIG_A2WCNSS_SMSM_INT; + } +} + +static void notify_other_smsm(uint32_t smsm_entry, uint32_t notify_mask) +{ + /* older protocol don't use smsm_intr_mask, + but still communicates with modem */ + if (!smsm_info.intr_mask || + (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, 
SMSM_MODEM)) + & notify_mask)) + notify_modem_smsm(); + + if (smsm_info.intr_mask && + (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_Q6)) + & notify_mask)) { + uint32_t mux_val; + + //if (cpu_is_qsd8x50() && smsm_info.intr_mux) { + if (0 && smsm_info.intr_mux) { + mux_val = __raw_readl( + SMSM_INTR_MUX_ADDR(SMEM_APPS_Q6_SMSM)); + mux_val++; + smd_write_intr(mux_val, + SMSM_INTR_MUX_ADDR(SMEM_APPS_Q6_SMSM)); + } + notify_dsp_smsm(); + } + + if (smsm_info.intr_mask && + (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_WCNSS)) + & notify_mask)) { + notify_wcnss_smsm(); + } + + if (smsm_info.intr_mask && + (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_DSPS)) + & notify_mask)) { + notify_dsps_smsm(); + } + + /* + * Notify local SMSM callback clients without wakelock since this + * code is used by power management during power-down/-up sequencing + * on DEM-based targets. Grabbing a wakelock in this case will + * abort the power-down sequencing. + */ + if (smsm_info.intr_mask && + (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS)) + & notify_mask)) { + smsm_cb_snapshot(0); + } +} + +static int smsm_pm_notifier(struct notifier_block *nb, + unsigned long event, void *unused) +{ + switch (event) { + case PM_SUSPEND_PREPARE: + smsm_change_state(SMSM_APPS_STATE, SMSM_PROC_AWAKE, 0); + break; + + case PM_POST_SUSPEND: + smsm_change_state(SMSM_APPS_STATE, 0, SMSM_PROC_AWAKE); + break; + } + return NOTIFY_DONE; +} + +static struct notifier_block smsm_pm_nb = { + .notifier_call = smsm_pm_notifier, + .priority = 0, +}; + +void smd_diag(void) +{ + char *x; + int size; + + x = smem_find(ID_DIAG_ERR_MSG, SZ_DIAG_ERR_MSG); + if (x != 0) { + x[SZ_DIAG_ERR_MSG - 1] = 0; + SMD_INFO("smem: DIAG '%s'\n", x); + } + + x = smem_get_entry(SMEM_ERR_CRASH_LOG, &size); + if (x != 0) { + x[size - 1] = 0; + pr_err("smem: CRASH LOG\n'%s'\n", x); + } +} + + +static void handle_modem_crash(void) +{ + pr_err("MODEM/AMSS has CRASHED\n"); + smd_diag(); + + /* hard reboot if possible FIXME 
+ if (msm_reset_hook) + msm_reset_hook(); + */ + + /* in this case the modem or watchdog should reboot us */ + for (;;) + ; +} + +int smsm_check_for_modem_crash(void) +{ + /* if the modem's not ready yet, we have to hope for the best */ + if (!smsm_info.state) + return 0; + + if (__raw_readl(SMSM_STATE_ADDR(SMSM_MODEM_STATE)) & SMSM_RESET) { + handle_modem_crash(); + return -1; + } + return 0; +} +EXPORT_SYMBOL(smsm_check_for_modem_crash); + +/* the spinlock is used to synchronize between the + * irq handler and code that mutates the channel + * list or fiddles with channel state + */ +static DEFINE_SPINLOCK(smd_lock); +DEFINE_SPINLOCK(smem_lock); + +/* the mutex is used during open() and close() + * operations to avoid races while creating or + * destroying smd_channel structures + */ +static DEFINE_MUTEX(smd_creation_mutex); + +static int smd_initialized; + +struct smd_shared_v1 { + struct smd_half_channel ch0; + unsigned char data0[SMD_BUF_SIZE]; + struct smd_half_channel ch1; + unsigned char data1[SMD_BUF_SIZE]; +}; + +struct smd_shared_v2 { + struct smd_half_channel ch0; + struct smd_half_channel ch1; +}; + +struct smd_shared_v2_word_access { + struct smd_half_channel_word_access ch0; + struct smd_half_channel_word_access ch1; +}; + +struct smd_channel { + volatile void *send; /* some variant of smd_half_channel */ + volatile void *recv; /* some variant of smd_half_channel */ + unsigned char *send_data; + unsigned char *recv_data; + unsigned fifo_size; + unsigned fifo_mask; + struct list_head ch_list; + + unsigned current_packet; + unsigned n; + void *priv; + void (*notify)(void *priv, unsigned flags); + + int (*read)(smd_channel_t *ch, void *data, int len, int user_buf); + int (*write)(smd_channel_t *ch, const void *data, int len, + int user_buf); + int (*read_avail)(smd_channel_t *ch); + int (*write_avail)(smd_channel_t *ch); + int (*read_from_cb)(smd_channel_t *ch, void *data, int len, + int user_buf); + + void (*update_state)(smd_channel_t *ch); + unsigned 
last_state; + void (*notify_other_cpu)(void); + + char name[20]; + struct platform_device pdev; + unsigned type; + + int pending_pkt_sz; + + char is_pkt_ch; + + /* + * private internal functions to access *send and *recv. + * never to be exported outside of smd + */ + struct smd_half_channel_access *half_ch; +}; + +struct edge_to_pid { + uint32_t local_pid; + uint32_t remote_pid; + char subsys_name[SMD_MAX_CH_NAME_LEN]; +}; + +/** + * Maps edge type to local and remote processor ID's. + */ +static struct edge_to_pid edge_to_pids[] = { + [SMD_APPS_MODEM] = {SMD_APPS, SMD_MODEM, "modem"}, + [SMD_APPS_QDSP] = {SMD_APPS, SMD_Q6, "q6"}, + [SMD_MODEM_QDSP] = {SMD_MODEM, SMD_Q6}, + [SMD_APPS_DSPS] = {SMD_APPS, SMD_DSPS, "dsps"}, + [SMD_MODEM_DSPS] = {SMD_MODEM, SMD_DSPS}, + [SMD_QDSP_DSPS] = {SMD_Q6, SMD_DSPS}, + [SMD_APPS_WCNSS] = {SMD_APPS, SMD_WCNSS, "wcnss"}, + [SMD_MODEM_WCNSS] = {SMD_MODEM, SMD_WCNSS}, + [SMD_QDSP_WCNSS] = {SMD_Q6, SMD_WCNSS}, + [SMD_DSPS_WCNSS] = {SMD_DSPS, SMD_WCNSS}, + [SMD_APPS_Q6FW] = {SMD_APPS, SMD_MODEM_Q6_FW}, + [SMD_MODEM_Q6FW] = {SMD_MODEM, SMD_MODEM_Q6_FW}, + [SMD_QDSP_Q6FW] = {SMD_Q6, SMD_MODEM_Q6_FW}, + [SMD_DSPS_Q6FW] = {SMD_DSPS, SMD_MODEM_Q6_FW}, + [SMD_WCNSS_Q6FW] = {SMD_WCNSS, SMD_MODEM_Q6_FW}, + [SMD_APPS_RPM] = {SMD_APPS, SMD_RPM}, + [SMD_MODEM_RPM] = {SMD_MODEM, SMD_RPM}, + [SMD_QDSP_RPM] = {SMD_Q6, SMD_RPM}, + [SMD_WCNSS_RPM] = {SMD_WCNSS, SMD_RPM}, +}; + +struct restart_notifier_block { + unsigned processor; + char *name; + struct notifier_block nb; +}; + +static int disable_smsm_reset_handshake; +static struct platform_device loopback_tty_pdev = {.name = "LOOPBACK_TTY"}; + +static LIST_HEAD(smd_ch_closed_list); +static LIST_HEAD(smd_ch_closing_list); +static LIST_HEAD(smd_ch_to_close_list); +static LIST_HEAD(smd_ch_list_modem); +static LIST_HEAD(smd_ch_list_dsp); +static LIST_HEAD(smd_ch_list_dsps); +static LIST_HEAD(smd_ch_list_wcnss); +static LIST_HEAD(smd_ch_list_rpm); + +static unsigned char smd_ch_allocated[64]; +static 
struct work_struct probe_work;
+
+static void finalize_channel_close_fn(struct work_struct *work);
+static DECLARE_WORK(finalize_channel_close_work, finalize_channel_close_fn);
+static struct workqueue_struct *channel_close_wq;
+
+static int smd_alloc_channel(struct smd_alloc_elm *alloc_elm);
+
+/* on smp systems, the probe might get called from multiple cores,
+   hence use a lock */
+static DEFINE_MUTEX(smd_probe_lock);
+
+static void smd_channel_probe_worker(struct work_struct *work)
+{
+	struct smd_alloc_elm *shared;
+	unsigned n;
+	uint32_t type;
+
+	shared = smem_find(ID_CH_ALLOC_TBL, sizeof(*shared) * 64);
+
+	if (!shared) {
+		pr_err("%s: allocation table not initialized\n", __func__);
+		return;
+	}
+
+	mutex_lock(&smd_probe_lock);
+	for (n = 0; n < 64; n++) {
+		if (smd_ch_allocated[n])
+			continue;
+
+		/* channel should be allocated only if APPS
+		   processor is involved */
+		type = SMD_CHANNEL_TYPE(shared[n].type);
+		if (type >= ARRAY_SIZE(edge_to_pids) ||
+		    edge_to_pids[type].local_pid != SMD_APPS)
+			continue;
+		if (!shared[n].ref_count)
+			continue;
+		if (!shared[n].name[0])
+			continue;
+
+		if (!smd_alloc_channel(&shared[n]))
+			smd_ch_allocated[n] = 1;
+		else
+			SMD_INFO("Probe skipping ch %d, not allocated\n", n);
+	}
+	mutex_unlock(&smd_probe_lock);
+}
+
+/**
+ * Lookup processor ID and determine if it belongs to the provided edge
+ * type.
+ * + * @shared2: Pointer to v2 shared channel structure + * @type: Edge type + * @pid: Processor ID of processor on edge + * @local_ch: Channel that belongs to processor @pid + * @remote_ch: Other side of edge contained @pid + * @is_word_access_ch: Bool, is this a word aligned access channel + * + * Returns 0 for not on edge, 1 for found on edge + */ +static int pid_is_on_edge(void *shared2, + uint32_t type, uint32_t pid, + void **local_ch, + void **remote_ch, + int is_word_access_ch + ) +{ + int ret = 0; + struct edge_to_pid *edge; + void *ch0; + void *ch1; + + *local_ch = 0; + *remote_ch = 0; + + if (!shared2 || (type >= ARRAY_SIZE(edge_to_pids))) + return 0; + + if (is_word_access_ch) { + ch0 = &((struct smd_shared_v2_word_access *)(shared2))->ch0; + ch1 = &((struct smd_shared_v2_word_access *)(shared2))->ch1; + } else { + ch0 = &((struct smd_shared_v2 *)(shared2))->ch0; + ch1 = &((struct smd_shared_v2 *)(shared2))->ch1; + } + + edge = &edge_to_pids[type]; + if (edge->local_pid != edge->remote_pid) { + if (pid == edge->local_pid) { + *local_ch = ch0; + *remote_ch = ch1; + ret = 1; + } else if (pid == edge->remote_pid) { + *local_ch = ch1; + *remote_ch = ch0; + ret = 1; + } + } + + return ret; +} + +/* + * Returns a pointer to the subsystem name or NULL if no + * subsystem name is available. + * + * @type - Edge definition + */ +const char *smd_edge_to_subsystem(uint32_t type) +{ + const char *subsys = NULL; + + if (type < ARRAY_SIZE(edge_to_pids)) { + subsys = edge_to_pids[type].subsys_name; + if (subsys[0] == 0x0) + subsys = NULL; + } + return subsys; +} +EXPORT_SYMBOL(smd_edge_to_subsystem); + +/* + * Returns a pointer to the subsystem name given the + * remote processor ID. 
+ * + * @pid Remote processor ID + * @returns Pointer to subsystem name or NULL if not found + */ +const char *smd_pid_to_subsystem(uint32_t pid) +{ + const char *subsys = NULL; + int i; + + for (i = 0; i < ARRAY_SIZE(edge_to_pids); ++i) { + if (pid == edge_to_pids[i].remote_pid && + edge_to_pids[i].subsys_name[0] != 0x0 + ) { + subsys = edge_to_pids[i].subsys_name; + break; + } + } + + return subsys; +} +EXPORT_SYMBOL(smd_pid_to_subsystem); + +static void smd_reset_edge(void *void_ch, unsigned new_state, + int is_word_access_ch) +{ + if (is_word_access_ch) { + struct smd_half_channel_word_access *ch = + (struct smd_half_channel_word_access *)(void_ch); + if (ch->state != SMD_SS_CLOSED) { + ch->state = new_state; + ch->fDSR = 0; + ch->fCTS = 0; + ch->fCD = 0; + ch->fSTATE = 1; + } + } else { + struct smd_half_channel *ch = + (struct smd_half_channel *)(void_ch); + if (ch->state != SMD_SS_CLOSED) { + ch->state = new_state; + ch->fDSR = 0; + ch->fCTS = 0; + ch->fCD = 0; + ch->fSTATE = 1; + } + } +} + +static void smd_channel_reset_state(struct smd_alloc_elm *shared, + unsigned new_state, unsigned pid) +{ + unsigned n; + void *shared2; + uint32_t type; + void *local_ch; + void *remote_ch; + int is_word_access; + + for (n = 0; n < SMD_CHANNELS; n++) { + if (!shared[n].ref_count) + continue; + if (!shared[n].name[0]) + continue; + + type = SMD_CHANNEL_TYPE(shared[n].type); + is_word_access = is_word_access_ch(type); + if (is_word_access) + shared2 = smem_alloc(SMEM_SMD_BASE_ID + n, + sizeof(struct smd_shared_v2_word_access)); + else + shared2 = smem_alloc(SMEM_SMD_BASE_ID + n, + sizeof(struct smd_shared_v2)); + if (!shared2) + continue; + + if (pid_is_on_edge(shared2, type, pid, &local_ch, &remote_ch, + is_word_access)) + smd_reset_edge(local_ch, new_state, is_word_access); + + /* + * ModemFW is in the same subsystem as ModemSW, but has + * separate SMD edges that need to be reset. 
+ */ + if (pid == SMSM_MODEM && + pid_is_on_edge(shared2, type, SMD_MODEM_Q6_FW, + &local_ch, &remote_ch, is_word_access)) + smd_reset_edge(local_ch, new_state, is_word_access); + } +} + + +void smd_channel_reset(uint32_t restart_pid) +{ + struct smd_alloc_elm *shared; + unsigned long flags; + + SMD_DBG("%s: starting reset\n", __func__); + + /* release any held spinlocks */ + remote_spin_release(&remote_spinlock, restart_pid); + remote_spin_release_all(restart_pid); + + shared = smem_find(ID_CH_ALLOC_TBL, sizeof(*shared) * 64); + if (!shared) { + pr_err("%s: allocation table not initialized\n", __func__); + return; + } + + /* reset SMSM entry */ + if (smsm_info.state) { + writel_relaxed(0, SMSM_STATE_ADDR(restart_pid)); + + /* restart SMSM init handshake */ + if (restart_pid == SMSM_MODEM) { + smsm_change_state(SMSM_APPS_STATE, + SMSM_INIT | SMSM_SMD_LOOPBACK | SMSM_RESET, + 0); + } + + /* notify SMSM processors */ + smsm_irq_handler(0, 0); + notify_modem_smsm(); + notify_dsp_smsm(); + notify_dsps_smsm(); + notify_wcnss_smsm(); + } + + /* change all remote states to CLOSING */ + mutex_lock(&smd_probe_lock); + spin_lock_irqsave(&smd_lock, flags); + smd_channel_reset_state(shared, SMD_SS_CLOSING, restart_pid); + spin_unlock_irqrestore(&smd_lock, flags); + mutex_unlock(&smd_probe_lock); + + /* notify SMD processors */ + mb(); + smd_fake_irq_handler(0); + notify_modem_smd(); + notify_dsp_smd(); + notify_dsps_smd(); + notify_wcnss_smd(); + + /* change all remote states to CLOSED */ + mutex_lock(&smd_probe_lock); + spin_lock_irqsave(&smd_lock, flags); + smd_channel_reset_state(shared, SMD_SS_CLOSED, restart_pid); + spin_unlock_irqrestore(&smd_lock, flags); + mutex_unlock(&smd_probe_lock); + + /* notify SMD processors */ + mb(); + smd_fake_irq_handler(0); + notify_modem_smd(); + notify_dsp_smd(); + notify_dsps_smd(); + notify_wcnss_smd(); + + SMD_DBG("%s: finished reset\n", __func__); +} + +/* how many bytes are available for reading */ +static int 
smd_stream_read_avail(struct smd_channel *ch) +{ + return (ch->half_ch->get_head(ch->recv) - + ch->half_ch->get_tail(ch->recv)) & ch->fifo_mask; +} + +/* how many bytes we are free to write */ +static int smd_stream_write_avail(struct smd_channel *ch) +{ + return ch->fifo_mask - ((ch->half_ch->get_head(ch->send) - + ch->half_ch->get_tail(ch->send)) & ch->fifo_mask); +} + +static int smd_packet_read_avail(struct smd_channel *ch) +{ + if (ch->current_packet) { + int n = smd_stream_read_avail(ch); + if (n > ch->current_packet) + n = ch->current_packet; + return n; + } else { + return 0; + } +} + +static int smd_packet_write_avail(struct smd_channel *ch) +{ + int n = smd_stream_write_avail(ch); + return n > SMD_HEADER_SIZE ? n - SMD_HEADER_SIZE : 0; +} + +static int ch_is_open(struct smd_channel *ch) +{ + return (ch->half_ch->get_state(ch->recv) == SMD_SS_OPENED || + ch->half_ch->get_state(ch->recv) == SMD_SS_FLUSHING) + && (ch->half_ch->get_state(ch->send) == SMD_SS_OPENED); +} + +/* provide a pointer and length to readable data in the fifo */ +static unsigned ch_read_buffer(struct smd_channel *ch, void **ptr) +{ + unsigned head = ch->half_ch->get_head(ch->recv); + unsigned tail = ch->half_ch->get_tail(ch->recv); + *ptr = (void *) (ch->recv_data + tail); + + if (tail <= head) + return head - tail; + else + return ch->fifo_size - tail; +} + +static int read_intr_blocked(struct smd_channel *ch) +{ + return ch->half_ch->get_fBLOCKREADINTR(ch->recv); +} + +/* advance the fifo read pointer after data from ch_read_buffer is consumed */ +static void ch_read_done(struct smd_channel *ch, unsigned count) +{ + BUG_ON(count > smd_stream_read_avail(ch)); + ch->half_ch->set_tail(ch->recv, + (ch->half_ch->get_tail(ch->recv) + count) & ch->fifo_mask); + wmb(); + ch->half_ch->set_fTAIL(ch->send, 1); +} + +/* basic read interface to ch_read_{buffer,done} used + * by smd_*_read() and update_packet_state() + * will read-and-discard if the _data pointer is null + */ +static int 
ch_read(struct smd_channel *ch, void *_data, int len, int user_buf)
+{
+	void *ptr;
+	unsigned n;
+	unsigned char *data = _data;
+	int orig_len = len;
+	int r = 0;
+
+	while (len > 0) {
+		n = ch_read_buffer(ch, &ptr);
+		if (n == 0)
+			break;
+
+		if (n > len)
+			n = len;
+		if (_data) {
+			if (user_buf) {
+				r = copy_to_user(data, ptr, n);
+				if (r > 0) {
+					pr_err("%s: copy_to_user could not copy %i bytes.\n",
+						__func__, r);
+				}
+			} else
+				memcpy(data, ptr, n);
+		}
+
+		data += n;
+		len -= n;
+		ch_read_done(ch, n);
+	}
+
+	return orig_len - len;
+}
+
+static void update_stream_state(struct smd_channel *ch)
+{
+	/* streams have no special state requiring updating */
+}
+
+static void update_packet_state(struct smd_channel *ch)
+{
+	unsigned hdr[5];
+	int r;
+
+	/* can't do anything if we're in the middle of a packet */
+	while (ch->current_packet == 0) {
+		/* discard 0 length packets if any */
+
+		/* don't bother unless we can get the full header */
+		if (smd_stream_read_avail(ch) < SMD_HEADER_SIZE)
+			return;
+
+		r = ch_read(ch, hdr, SMD_HEADER_SIZE, 0);
+		BUG_ON(r != SMD_HEADER_SIZE);
+
+		ch->current_packet = hdr[0];
+	}
+}
+
+/* provide a pointer and length to next free space in the fifo */
+static unsigned ch_write_buffer(struct smd_channel *ch, void **ptr)
+{
+	unsigned head = ch->half_ch->get_head(ch->send);
+	unsigned tail = ch->half_ch->get_tail(ch->send);
+	*ptr = (void *) (ch->send_data + head);
+
+	if (head < tail) {
+		return tail - head - 1;
+	} else {
+		if (tail == 0)
+			return ch->fifo_size - head - 1;
+		else
+			return ch->fifo_size - head;
+	}
+}
+
+/* advance the fifo write pointer after freespace
+ * from ch_write_buffer is filled
+ */
+static void ch_write_done(struct smd_channel *ch, unsigned count)
+{
+	BUG_ON(count > smd_stream_write_avail(ch));
+	ch->half_ch->set_head(ch->send,
+		(ch->half_ch->get_head(ch->send) + count) & ch->fifo_mask);
+	wmb();
+	ch->half_ch->set_fHEAD(ch->send, 1);
+}
+
+static void ch_set_state(struct smd_channel *ch,
unsigned n)
+{
+	if (n == SMD_SS_OPENED) {
+		ch->half_ch->set_fDSR(ch->send, 1);
+		ch->half_ch->set_fCTS(ch->send, 1);
+		ch->half_ch->set_fCD(ch->send, 1);
+	} else {
+		ch->half_ch->set_fDSR(ch->send, 0);
+		ch->half_ch->set_fCTS(ch->send, 0);
+		ch->half_ch->set_fCD(ch->send, 0);
+	}
+	ch->half_ch->set_state(ch->send, n);
+	ch->half_ch->set_fSTATE(ch->send, 1);
+	ch->notify_other_cpu();
+}
+
+static void do_smd_probe(void)
+{
+	struct smem_shared *shared = (void *) msm_smem_base_virt;
+
+	if (shared->heap_info.free_offset != last_heap_free) {
+		last_heap_free = shared->heap_info.free_offset;
+		schedule_work(&probe_work);
+	}
+}
+
+static void smd_state_change(struct smd_channel *ch,
+			     unsigned last, unsigned next)
+{
+	ch->last_state = next;
+
+	SMD_INFO("SMD: ch %d %d -> %d\n", ch->n, last, next);
+
+	switch (next) {
+	case SMD_SS_OPENING:
+		if (ch->half_ch->get_state(ch->send) == SMD_SS_CLOSING ||
+		    ch->half_ch->get_state(ch->send) == SMD_SS_CLOSED) {
+			ch->half_ch->set_tail(ch->recv, 0);
+			ch->half_ch->set_head(ch->send, 0);
+			ch->half_ch->set_fBLOCKREADINTR(ch->send, 0);
+			ch_set_state(ch, SMD_SS_OPENING);
+		}
+		break;
+	case SMD_SS_OPENED:
+		if (ch->half_ch->get_state(ch->send) == SMD_SS_OPENING) {
+			ch_set_state(ch, SMD_SS_OPENED);
+			ch->notify(ch->priv, SMD_EVENT_OPEN);
+		}
+		break;
+	case SMD_SS_FLUSHING:
+	case SMD_SS_RESET:
+		/* we should force them to close?
*/ + break; + case SMD_SS_CLOSED: + if (ch->half_ch->get_state(ch->send) == SMD_SS_OPENED) { + ch_set_state(ch, SMD_SS_CLOSING); + ch->current_packet = 0; + ch->pending_pkt_sz = 0; + ch->notify(ch->priv, SMD_EVENT_CLOSE); + } + break; + case SMD_SS_CLOSING: + if (ch->half_ch->get_state(ch->send) == SMD_SS_CLOSED) { + list_move(&ch->ch_list, + &smd_ch_to_close_list); + queue_work(channel_close_wq, + &finalize_channel_close_work); + } + break; + } +} + +static void handle_smd_irq_closing_list(void) +{ + unsigned long flags; + struct smd_channel *ch; + struct smd_channel *index; + unsigned tmp; + + spin_lock_irqsave(&smd_lock, flags); + list_for_each_entry_safe(ch, index, &smd_ch_closing_list, ch_list) { + if (ch->half_ch->get_fSTATE(ch->recv)) + ch->half_ch->set_fSTATE(ch->recv, 0); + tmp = ch->half_ch->get_state(ch->recv); + if (tmp != ch->last_state) + smd_state_change(ch, ch->last_state, tmp); + } + spin_unlock_irqrestore(&smd_lock, flags); +} + +static void handle_smd_irq(struct list_head *list, void (*notify)(void)) +{ + unsigned long flags; + struct smd_channel *ch; + unsigned ch_flags; + unsigned tmp; + unsigned char state_change; + + spin_lock_irqsave(&smd_lock, flags); + list_for_each_entry(ch, list, ch_list) { + state_change = 0; + ch_flags = 0; + if (ch_is_open(ch)) { + if (ch->half_ch->get_fHEAD(ch->recv)) { + ch->half_ch->set_fHEAD(ch->recv, 0); + ch_flags |= 1; + } + if (ch->half_ch->get_fTAIL(ch->recv)) { + ch->half_ch->set_fTAIL(ch->recv, 0); + ch_flags |= 2; + } + if (ch->half_ch->get_fSTATE(ch->recv)) { + ch->half_ch->set_fSTATE(ch->recv, 0); + ch_flags |= 4; + } + } + tmp = ch->half_ch->get_state(ch->recv); + if (tmp != ch->last_state) { + SMx_POWER_INFO("SMD ch%d '%s' State change %d->%d\n", + ch->n, ch->name, ch->last_state, tmp); + smd_state_change(ch, ch->last_state, tmp); + state_change = 1; + } + if (ch_flags & 0x3) { + ch->update_state(ch); + SMx_POWER_INFO("SMD ch%d '%s' Data event r%d/w%d\n", + ch->n, ch->name, + ch->read_avail(ch), + 
ch->fifo_size - ch->write_avail(ch)); + ch->notify(ch->priv, SMD_EVENT_DATA); + } + if (ch_flags & 0x4 && !state_change) { + SMx_POWER_INFO("SMD ch%d '%s' State update\n", + ch->n, ch->name); + ch->notify(ch->priv, SMD_EVENT_STATUS); + } + } + spin_unlock_irqrestore(&smd_lock, flags); + do_smd_probe(); +} + +static irqreturn_t smd_modem_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMD Int Modem->Apps\n"); + ++interrupt_stats[SMD_MODEM].smd_in_count; + handle_smd_irq(&smd_ch_list_modem, notify_modem_smd); + handle_smd_irq_closing_list(); + return IRQ_HANDLED; +} + +static irqreturn_t smd_dsp_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMD Int LPASS->Apps\n"); + ++interrupt_stats[SMD_Q6].smd_in_count; + handle_smd_irq(&smd_ch_list_dsp, notify_dsp_smd); + handle_smd_irq_closing_list(); + return IRQ_HANDLED; +} + +static irqreturn_t smd_dsps_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMD Int DSPS->Apps\n"); + ++interrupt_stats[SMD_DSPS].smd_in_count; + handle_smd_irq(&smd_ch_list_dsps, notify_dsps_smd); + handle_smd_irq_closing_list(); + return IRQ_HANDLED; +} + +static irqreturn_t smd_wcnss_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMD Int WCNSS->Apps\n"); + ++interrupt_stats[SMD_WCNSS].smd_in_count; + handle_smd_irq(&smd_ch_list_wcnss, notify_wcnss_smd); + handle_smd_irq_closing_list(); + return IRQ_HANDLED; +} + +static irqreturn_t smd_rpm_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMD Int RPM->Apps\n"); + ++interrupt_stats[SMD_RPM].smd_in_count; + handle_smd_irq(&smd_ch_list_rpm, notify_rpm_smd); + handle_smd_irq_closing_list(); + return IRQ_HANDLED; +} + +static void smd_fake_irq_handler(unsigned long arg) +{ + handle_smd_irq(&smd_ch_list_modem, notify_modem_smd); + handle_smd_irq(&smd_ch_list_dsp, notify_dsp_smd); + handle_smd_irq(&smd_ch_list_dsps, notify_dsps_smd); + handle_smd_irq(&smd_ch_list_wcnss, notify_wcnss_smd); + handle_smd_irq(&smd_ch_list_rpm, notify_rpm_smd); + handle_smd_irq_closing_list(); +} + 
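The IRQ handlers above all funnel into handle_smd_irq(), which reports data events using the stream-availability helpers (smd_stream_read_avail()/smd_stream_write_avail()). That arithmetic relies on the FIFO size being a power of two, so occupancy can be computed with a single mask even when the head index has wrapped past the tail. A minimal standalone sketch of the same head/tail math (struct toy_fifo and the toy_* names are illustrative stand-ins, not part of the driver, where these words live in shared memory behind the half_ch accessors):

```c
/* Hypothetical stand-in for the shared head/tail words of one
 * smd_half_channel direction. */
struct toy_fifo {
	unsigned head;	/* write index, advanced by the sender   */
	unsigned tail;	/* read index, advanced by the receiver  */
	unsigned mask;	/* fifo_size - 1; fifo_size is a power of two */
};

/* Bytes available to read: distance from tail to head, modulo size.
 * Unsigned wrap-around makes this correct even when head < tail. */
static unsigned toy_read_avail(const struct toy_fifo *f)
{
	return (f->head - f->tail) & f->mask;
}

/* Bytes free to write: everything that is not currently readable,
 * measured against mask (size - 1) rather than size. */
static unsigned toy_write_avail(const struct toy_fifo *f)
{
	return f->mask - toy_read_avail(f);
}
```

Comparing against fifo_mask instead of fifo_size implicitly reserves one byte, which keeps a completely full ring distinguishable from an empty one (head == tail) without a separate count field.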
+static DECLARE_TASKLET(smd_fake_irq_tasklet, smd_fake_irq_handler, 0); + +static inline int smd_need_int(struct smd_channel *ch) +{ + if (ch_is_open(ch)) { + if (ch->half_ch->get_fHEAD(ch->recv) || + ch->half_ch->get_fTAIL(ch->recv) || + ch->half_ch->get_fSTATE(ch->recv)) + return 1; + if (ch->half_ch->get_state(ch->recv) != ch->last_state) + return 1; + } + return 0; +} + +void smd_sleep_exit(void) +{ + unsigned long flags; + struct smd_channel *ch; + int need_int = 0; + + spin_lock_irqsave(&smd_lock, flags); + list_for_each_entry(ch, &smd_ch_list_modem, ch_list) { + if (smd_need_int(ch)) { + need_int = 1; + break; + } + } + list_for_each_entry(ch, &smd_ch_list_dsp, ch_list) { + if (smd_need_int(ch)) { + need_int = 1; + break; + } + } + list_for_each_entry(ch, &smd_ch_list_dsps, ch_list) { + if (smd_need_int(ch)) { + need_int = 1; + break; + } + } + list_for_each_entry(ch, &smd_ch_list_wcnss, ch_list) { + if (smd_need_int(ch)) { + need_int = 1; + break; + } + } + spin_unlock_irqrestore(&smd_lock, flags); + do_smd_probe(); + + if (need_int) { + SMD_DBG("smd_sleep_exit need interrupt\n"); + tasklet_schedule(&smd_fake_irq_tasklet); + } +} +EXPORT_SYMBOL(smd_sleep_exit); + +static int smd_is_packet(struct smd_alloc_elm *alloc_elm) +{ + if (SMD_XFER_TYPE(alloc_elm->type) == 1) + return 0; + else if (SMD_XFER_TYPE(alloc_elm->type) == 2) + return 1; + + /* for cases where xfer type is 0 */ + if (!strncmp(alloc_elm->name, "DAL", 3)) + return 0; + + /* for cases where xfer type is 0 */ + if (!strncmp(alloc_elm->name, "RPCCALL_QDSP", 12)) + return 0; + + if (alloc_elm->cid > 4 || alloc_elm->cid == 1) + return 1; + else + return 0; +} + +static int smd_stream_write(smd_channel_t *ch, const void *_data, int len, + int user_buf) +{ + void *ptr; + const unsigned char *buf = _data; + unsigned xfer; + int orig_len = len; + int r = 0; + + SMD_DBG("smd_stream_write() %d -> ch%d\n", len, ch->n); + if (len < 0) + return -EINVAL; + else if (len == 0) + return 0; + + while ((xfer = 
ch_write_buffer(ch, &ptr)) != 0) { + if (!ch_is_open(ch)) { + len = orig_len; + break; + } + if (xfer > len) + xfer = len; + if (user_buf) { + r = copy_from_user(ptr, buf, xfer); + if (r > 0) { + pr_err("%s: " + "copy_from_user could not copy %i " + "bytes.\n", + __func__, + r); + } + } else + memcpy(ptr, buf, xfer); + ch_write_done(ch, xfer); + len -= xfer; + buf += xfer; + if (len == 0) + break; + } + + if (orig_len - len) + ch->notify_other_cpu(); + + return orig_len - len; +} + +static int smd_packet_write(smd_channel_t *ch, const void *_data, int len, + int user_buf) +{ + int ret; + unsigned hdr[5]; + + SMD_DBG("smd_packet_write() %d -> ch%d\n", len, ch->n); + if (len < 0) + return -EINVAL; + else if (len == 0) + return 0; + + if (smd_stream_write_avail(ch) < (len + SMD_HEADER_SIZE)) + return -ENOMEM; + + hdr[0] = len; + hdr[1] = hdr[2] = hdr[3] = hdr[4] = 0; + + + ret = smd_stream_write(ch, hdr, sizeof(hdr), 0); + if (ret < 0 || ret != sizeof(hdr)) { + SMD_DBG("%s failed to write pkt header: " + "%d returned\n", __func__, ret); + return -1; + } + + + ret = smd_stream_write(ch, _data, len, user_buf); + if (ret < 0 || ret != len) { + SMD_DBG("%s failed to write pkt data: " + "%d returned\n", __func__, ret); + return ret; + } + + return len; +} + +static int smd_stream_read(smd_channel_t *ch, void *data, int len, int user_buf) +{ + int r; + + if (len < 0) + return -EINVAL; + + r = ch_read(ch, data, len, user_buf); + if (r > 0) + if (!read_intr_blocked(ch)) + ch->notify_other_cpu(); + + return r; +} + +static int smd_packet_read(smd_channel_t *ch, void *data, int len, int user_buf) +{ + unsigned long flags; + int r; + + if (len < 0) + return -EINVAL; + + if (len > ch->current_packet) + len = ch->current_packet; + + r = ch_read(ch, data, len, user_buf); + if (r > 0) + if (!read_intr_blocked(ch)) + ch->notify_other_cpu(); + + spin_lock_irqsave(&smd_lock, flags); + ch->current_packet -= r; + update_packet_state(ch); + spin_unlock_irqrestore(&smd_lock, flags); + + 
return r; +} + +static int smd_packet_read_from_cb(smd_channel_t *ch, void *data, int len, + int user_buf) +{ + int r; + + if (len < 0) + return -EINVAL; + + if (len > ch->current_packet) + len = ch->current_packet; + + r = ch_read(ch, data, len, user_buf); + if (r > 0) + if (!read_intr_blocked(ch)) + ch->notify_other_cpu(); + + ch->current_packet -= r; + update_packet_state(ch); + + return r; +} + +#define CONFIG_MSM_SMD_PKG4 + +#if (defined(CONFIG_MSM_SMD_PKG4) || defined(CONFIG_MSM_SMD_PKG3)) +static int smd_alloc_v2(struct smd_channel *ch) +{ + void *buffer; + unsigned buffer_sz; + + if (is_word_access_ch(ch->type)) { + struct smd_shared_v2_word_access *shared2; + shared2 = smem_alloc(SMEM_SMD_BASE_ID + ch->n, + sizeof(*shared2)); + if (!shared2) { + SMD_INFO("smem_alloc failed ch=%d\n", ch->n); + return -EINVAL; + } + ch->send = &shared2->ch0; + ch->recv = &shared2->ch1; + } else { + struct smd_shared_v2 *shared2; + shared2 = smem_alloc(SMEM_SMD_BASE_ID + ch->n, + sizeof(*shared2)); + if (!shared2) { + SMD_INFO("smem_alloc failed ch=%d\n", ch->n); + return -EINVAL; + } + ch->send = &shared2->ch0; + ch->recv = &shared2->ch1; + } + ch->half_ch = get_half_ch_funcs(ch->type); + + buffer = smem_get_entry(SMEM_SMD_FIFO_BASE_ID + ch->n, &buffer_sz); + if (!buffer) { + SMD_INFO("smem_get_entry failed\n"); + return -EINVAL; + } + + /* buffer must be a power-of-two size */ + if (buffer_sz & (buffer_sz - 1)) { + SMD_INFO("Buffer size: %u not power of two\n", buffer_sz); + return -EINVAL; + } + buffer_sz /= 2; + ch->send_data = buffer; + ch->recv_data = buffer + buffer_sz; + ch->fifo_size = buffer_sz; + + return 0; +} + +static int smd_alloc_v1(struct smd_channel *ch) +{ + return -EINVAL; +} + +#else /* define v1 for older targets */ +static int smd_alloc_v2(struct smd_channel *ch) +{ + return -EINVAL; +} + +static int smd_alloc_v1(struct smd_channel *ch) +{ + struct smd_shared_v1 *shared1; + shared1 = smem_alloc(ID_SMD_CHANNELS + ch->n, sizeof(*shared1)); + if (!shared1) 
{
+		pr_err("smd_alloc_channel() cid %d does not exist\n", ch->n);
+		return -EINVAL;
+	}
+	ch->send = &shared1->ch0;
+	ch->recv = &shared1->ch1;
+	ch->send_data = shared1->data0;
+	ch->recv_data = shared1->data1;
+	ch->fifo_size = SMD_BUF_SIZE;
+	ch->half_ch = get_half_ch_funcs(ch->type);
+	return 0;
+}
+
+#endif
+
+static int smd_alloc_channel(struct smd_alloc_elm *alloc_elm)
+{
+	struct smd_channel *ch;
+
+	ch = kzalloc(sizeof(struct smd_channel), GFP_KERNEL);
+	if (ch == 0) {
+		pr_err("smd_alloc_channel() out of memory\n");
+		return -1;
+	}
+	ch->n = alloc_elm->cid;
+	ch->type = SMD_CHANNEL_TYPE(alloc_elm->type);
+
+	if (smd_alloc_v2(ch) && smd_alloc_v1(ch)) {
+		kfree(ch);
+		return -1;
+	}
+
+	ch->fifo_mask = ch->fifo_size - 1;
+
+	/* probe_worker guarantees ch->type will be a valid type */
+	if (ch->type == SMD_APPS_MODEM)
+		ch->notify_other_cpu = notify_modem_smd;
+	else if (ch->type == SMD_APPS_QDSP)
+		ch->notify_other_cpu = notify_dsp_smd;
+	else if (ch->type == SMD_APPS_DSPS)
+		ch->notify_other_cpu = notify_dsps_smd;
+	else if (ch->type == SMD_APPS_WCNSS)
+		ch->notify_other_cpu = notify_wcnss_smd;
+	else if (ch->type == SMD_APPS_RPM)
+		ch->notify_other_cpu = notify_rpm_smd;
+
+	if (smd_is_packet(alloc_elm)) {
+		ch->read = smd_packet_read;
+		ch->write = smd_packet_write;
+		ch->read_avail = smd_packet_read_avail;
+		ch->write_avail = smd_packet_write_avail;
+		ch->update_state = update_packet_state;
+		ch->read_from_cb = smd_packet_read_from_cb;
+		ch->is_pkt_ch = 1;
+	} else {
+		ch->read = smd_stream_read;
+		ch->write = smd_stream_write;
+		ch->read_avail = smd_stream_read_avail;
+		ch->write_avail = smd_stream_write_avail;
+		ch->update_state = update_stream_state;
+		ch->read_from_cb = smd_stream_read;
+	}
+
+	memcpy(ch->name, alloc_elm->name, SMD_MAX_CH_NAME_LEN);
+	ch->name[SMD_MAX_CH_NAME_LEN-1] = 0;
+
+	ch->pdev.name = ch->name;
+	ch->pdev.id = ch->type;
+
+	SMD_INFO("smd_alloc_channel() '%s' cid=%d\n",
+		 ch->name, ch->n);
+
+	mutex_lock(&smd_creation_mutex);
list_add(&ch->ch_list, &smd_ch_closed_list); + mutex_unlock(&smd_creation_mutex); + + platform_device_register(&ch->pdev); + if (!strncmp(ch->name, "LOOPBACK", 8) && ch->type == SMD_APPS_MODEM) { + /* create a platform driver to be used by smd_tty driver + * so that it can access the loopback port + */ + loopback_tty_pdev.id = ch->type; + platform_device_register(&loopback_tty_pdev); + } + return 0; +} + +static inline void notify_loopback_smd(void) +{ + unsigned long flags; + struct smd_channel *ch; + + spin_lock_irqsave(&smd_lock, flags); + list_for_each_entry(ch, &smd_ch_list_loopback, ch_list) { + ch->notify(ch->priv, SMD_EVENT_DATA); + } + spin_unlock_irqrestore(&smd_lock, flags); +} + +static int smd_alloc_loopback_channel(void) +{ + static struct smd_half_channel smd_loopback_ctl; + static char smd_loopback_data[SMD_BUF_SIZE]; + struct smd_channel *ch; + + ch = kzalloc(sizeof(struct smd_channel), GFP_KERNEL); + if (ch == 0) { + pr_err("%s: out of memory\n", __func__); + return -1; + } + ch->n = SMD_LOOPBACK_CID; + + ch->send = &smd_loopback_ctl; + ch->recv = &smd_loopback_ctl; + ch->send_data = smd_loopback_data; + ch->recv_data = smd_loopback_data; + ch->fifo_size = SMD_BUF_SIZE; + + ch->fifo_mask = ch->fifo_size - 1; + ch->type = SMD_LOOPBACK_TYPE; + ch->notify_other_cpu = notify_loopback_smd; + + ch->read = smd_stream_read; + ch->write = smd_stream_write; + ch->read_avail = smd_stream_read_avail; + ch->write_avail = smd_stream_write_avail; + ch->update_state = update_stream_state; + ch->read_from_cb = smd_stream_read; + + memset(ch->name, 0, 20); + memcpy(ch->name, "local_loopback", 14); + + ch->pdev.name = ch->name; + ch->pdev.id = ch->type; + + SMD_INFO("%s: '%s' cid=%d\n", __func__, ch->name, ch->n); + + mutex_lock(&smd_creation_mutex); + list_add(&ch->ch_list, &smd_ch_closed_list); + mutex_unlock(&smd_creation_mutex); + + platform_device_register(&ch->pdev); + return 0; +} + +static void do_nothing_notify(void *priv, unsigned flags) +{ +} + +static 
void finalize_channel_close_fn(struct work_struct *work) +{ + unsigned long flags; + struct smd_channel *ch; + struct smd_channel *index; + + mutex_lock(&smd_creation_mutex); + spin_lock_irqsave(&smd_lock, flags); + list_for_each_entry_safe(ch, index, &smd_ch_to_close_list, ch_list) { + list_del(&ch->ch_list); + list_add(&ch->ch_list, &smd_ch_closed_list); + ch->notify(ch->priv, SMD_EVENT_REOPEN_READY); + ch->notify = do_nothing_notify; + } + spin_unlock_irqrestore(&smd_lock, flags); + mutex_unlock(&smd_creation_mutex); +} + +struct smd_channel *smd_get_channel(const char *name, uint32_t type) +{ + struct smd_channel *ch; + + mutex_lock(&smd_creation_mutex); + list_for_each_entry(ch, &smd_ch_closed_list, ch_list) { + if (!strcmp(name, ch->name) && + (type == ch->type)) { + list_del(&ch->ch_list); + mutex_unlock(&smd_creation_mutex); + return ch; + } + } + mutex_unlock(&smd_creation_mutex); + + return NULL; +} + +int smd_named_open_on_edge(const char *name, uint32_t edge, + smd_channel_t **_ch, + void *priv, void (*notify)(void *, unsigned)) +{ + struct smd_channel *ch; + unsigned long flags; + + if (smd_initialized == 0) { + SMD_INFO("smd_open() before smd_init()\n"); + return -ENODEV; + } + + SMD_DBG("smd_open('%s', %p, %p)\n", name, priv, notify); + + ch = smd_get_channel(name, edge); + if (!ch) { + /* check closing list for port */ + spin_lock_irqsave(&smd_lock, flags); + list_for_each_entry(ch, &smd_ch_closing_list, ch_list) { + if (!strncmp(name, ch->name, 20) && + (edge == ch->type)) { + /* channel exists, but is being closed */ + spin_unlock_irqrestore(&smd_lock, flags); + return -EAGAIN; + } + } + + /* check closing workqueue list for port */ + list_for_each_entry(ch, &smd_ch_to_close_list, ch_list) { + if (!strncmp(name, ch->name, 20) && + (edge == ch->type)) { + /* channel exists, but is being closed */ + spin_unlock_irqrestore(&smd_lock, flags); + return -EAGAIN; + } + } + spin_unlock_irqrestore(&smd_lock, flags); + + /* one final check to handle 
closing->closed race condition */ + ch = smd_get_channel(name, edge); + if (!ch) + return -ENODEV; + } + + if (notify == 0) + notify = do_nothing_notify; + + ch->notify = notify; + ch->current_packet = 0; + ch->last_state = SMD_SS_CLOSED; + ch->priv = priv; + + if (edge == SMD_LOOPBACK_TYPE) { + ch->last_state = SMD_SS_OPENED; + ch->half_ch->set_state(ch->send, SMD_SS_OPENED); + ch->half_ch->set_fDSR(ch->send, 1); + ch->half_ch->set_fCTS(ch->send, 1); + ch->half_ch->set_fCD(ch->send, 1); + } + + *_ch = ch; + + SMD_DBG("smd_open: opening '%s'\n", ch->name); + + spin_lock_irqsave(&smd_lock, flags); + if (SMD_CHANNEL_TYPE(ch->type) == SMD_APPS_MODEM) + list_add(&ch->ch_list, &smd_ch_list_modem); + else if (SMD_CHANNEL_TYPE(ch->type) == SMD_APPS_QDSP) + list_add(&ch->ch_list, &smd_ch_list_dsp); + else if (SMD_CHANNEL_TYPE(ch->type) == SMD_APPS_DSPS) + list_add(&ch->ch_list, &smd_ch_list_dsps); + else if (SMD_CHANNEL_TYPE(ch->type) == SMD_APPS_WCNSS) + list_add(&ch->ch_list, &smd_ch_list_wcnss); + else if (SMD_CHANNEL_TYPE(ch->type) == SMD_APPS_RPM) + list_add(&ch->ch_list, &smd_ch_list_rpm); + else + list_add(&ch->ch_list, &smd_ch_list_loopback); + + SMD_DBG("%s: opening ch %d\n", __func__, ch->n); + + if (edge != SMD_LOOPBACK_TYPE) + smd_state_change(ch, ch->last_state, SMD_SS_OPENING); + + spin_unlock_irqrestore(&smd_lock, flags); + + return 0; +} +EXPORT_SYMBOL(smd_named_open_on_edge); + + +int smd_open(const char *name, smd_channel_t **_ch, + void *priv, void (*notify)(void *, unsigned)) +{ + return smd_named_open_on_edge(name, SMD_APPS_MODEM, _ch, priv, + notify); +} +EXPORT_SYMBOL(smd_open); + +int smd_close(smd_channel_t *ch) +{ + unsigned long flags; + + if (ch == 0) + return -1; + + SMD_INFO("smd_close(%s)\n", ch->name); + + spin_lock_irqsave(&smd_lock, flags); + list_del(&ch->ch_list); + if (ch->n == SMD_LOOPBACK_CID) { + ch->half_ch->set_fDSR(ch->send, 0); + ch->half_ch->set_fCTS(ch->send, 0); + ch->half_ch->set_fCD(ch->send, 0); + 
ch->half_ch->set_state(ch->send, SMD_SS_CLOSED); + } else + ch_set_state(ch, SMD_SS_CLOSED); + + if (ch->half_ch->get_state(ch->recv) == SMD_SS_OPENED) { + list_add(&ch->ch_list, &smd_ch_closing_list); + spin_unlock_irqrestore(&smd_lock, flags); + } else { + spin_unlock_irqrestore(&smd_lock, flags); + ch->notify = do_nothing_notify; + mutex_lock(&smd_creation_mutex); + list_add(&ch->ch_list, &smd_ch_closed_list); + mutex_unlock(&smd_creation_mutex); + } + + return 0; +} +EXPORT_SYMBOL(smd_close); + +int smd_write_start(smd_channel_t *ch, int len) +{ + int ret; + unsigned hdr[5]; + + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + if (!ch->is_pkt_ch) { + pr_err("%s: non-packet channel specified\n", __func__); + return -EACCES; + } + if (len < 1) { + pr_err("%s: invalid length: %d\n", __func__, len); + return -EINVAL; + } + + if (ch->pending_pkt_sz) { + pr_err("%s: packet of size: %d in progress\n", __func__, + ch->pending_pkt_sz); + return -EBUSY; + } + ch->pending_pkt_sz = len; + + if (smd_stream_write_avail(ch) < (SMD_HEADER_SIZE)) { + ch->pending_pkt_sz = 0; + SMD_DBG("%s: no space to write packet header\n", __func__); + return -EAGAIN; + } + + hdr[0] = len; + hdr[1] = hdr[2] = hdr[3] = hdr[4] = 0; + + + ret = smd_stream_write(ch, hdr, sizeof(hdr), 0); + if (ret < 0 || ret != sizeof(hdr)) { + ch->pending_pkt_sz = 0; + pr_err("%s: packet header failed to write\n", __func__); + return -EPERM; + } + return 0; +} +EXPORT_SYMBOL(smd_write_start); + +int smd_write_segment(smd_channel_t *ch, void *data, int len, int user_buf) +{ + int bytes_written; + + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + if (len < 1) { + pr_err("%s: invalid length: %d\n", __func__, len); + return -EINVAL; + } + + if (!ch->pending_pkt_sz) { + pr_err("%s: no transaction in progress\n", __func__); + return -ENOEXEC; + } + if (ch->pending_pkt_sz - len < 0) { + pr_err("%s: segment of size: %d will make packet go 
over " + "length\n", __func__, len); + return -EINVAL; + } + + bytes_written = smd_stream_write(ch, data, len, user_buf); + + ch->pending_pkt_sz -= bytes_written; + + return bytes_written; +} +EXPORT_SYMBOL(smd_write_segment); + +int smd_write_end(smd_channel_t *ch) +{ + + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + if (ch->pending_pkt_sz) { + pr_err("%s: current packet not completely written\n", __func__); + return -E2BIG; + } + + return 0; +} +EXPORT_SYMBOL(smd_write_end); + +int smd_read(smd_channel_t *ch, void *data, int len) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return ch->read(ch, data, len, 0); +} +EXPORT_SYMBOL(smd_read); + +int smd_read_user_buffer(smd_channel_t *ch, void *data, int len) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return ch->read(ch, data, len, 1); +} +EXPORT_SYMBOL(smd_read_user_buffer); + +int smd_read_from_cb(smd_channel_t *ch, void *data, int len) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return ch->read_from_cb(ch, data, len, 0); +} +EXPORT_SYMBOL(smd_read_from_cb); + +int smd_write(smd_channel_t *ch, const void *data, int len) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return ch->pending_pkt_sz ? -EBUSY : ch->write(ch, data, len, 0); +} +EXPORT_SYMBOL(smd_write); + +int smd_write_user_buffer(smd_channel_t *ch, const void *data, int len) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return ch->pending_pkt_sz ? 
-EBUSY : ch->write(ch, data, len, 1); +} +EXPORT_SYMBOL(smd_write_user_buffer); + +int smd_read_avail(smd_channel_t *ch) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return ch->read_avail(ch); +} +EXPORT_SYMBOL(smd_read_avail); + +int smd_write_avail(smd_channel_t *ch) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return ch->write_avail(ch); +} +EXPORT_SYMBOL(smd_write_avail); + +void smd_enable_read_intr(smd_channel_t *ch) +{ + if (ch) + ch->half_ch->set_fBLOCKREADINTR(ch->send, 0); +} +EXPORT_SYMBOL(smd_enable_read_intr); + +void smd_disable_read_intr(smd_channel_t *ch) +{ + if (ch) + ch->half_ch->set_fBLOCKREADINTR(ch->send, 1); +} +EXPORT_SYMBOL(smd_disable_read_intr); + +/** + * Enable/disable receive interrupts for the remote processor used by a + * particular channel. + * @ch: open channel handle to use for the edge + * @mask: 1 = mask interrupts; 0 = unmask interrupts + * @returns: 0 for success; < 0 for failure + * + * Note that this enables/disables all interrupts from the remote subsystem for + * all channels. As such, it should be used with care and only for specific + * use cases such as power-collapse sequencing. 
+ */ +int smd_mask_receive_interrupt(smd_channel_t *ch, bool mask) +{ + struct irq_chip *irq_chip; + struct irq_data *irq_data; + struct interrupt_config_item *int_cfg; + + if (!ch) + return -EINVAL; + + if (ch->type >= ARRAY_SIZE(edge_to_pids)) + return -ENODEV; + + int_cfg = &private_intr_config[edge_to_pids[ch->type].remote_pid].smd; + + if (int_cfg->irq_id < 0) + return -ENODEV; + + irq_chip = irq_get_chip(int_cfg->irq_id); + if (!irq_chip) + return -ENODEV; + + irq_data = irq_get_irq_data(int_cfg->irq_id); + if (!irq_data) + return -ENODEV; + + if (mask) { + SMx_POWER_INFO("SMD Masking interrupts from %s\n", + edge_to_pids[ch->type].subsys_name); + irq_chip->irq_mask(irq_data); + } else { + SMx_POWER_INFO("SMD Unmasking interrupts from %s\n", + edge_to_pids[ch->type].subsys_name); + irq_chip->irq_unmask(irq_data); + } + + return 0; +} +EXPORT_SYMBOL(smd_mask_receive_interrupt); + +int smd_wait_until_readable(smd_channel_t *ch, int bytes) +{ + return -1; +} + +int smd_wait_until_writable(smd_channel_t *ch, int bytes) +{ + return -1; +} + +int smd_cur_packet_size(smd_channel_t *ch) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return ch->current_packet; +} +EXPORT_SYMBOL(smd_cur_packet_size); + +int smd_tiocmget(smd_channel_t *ch) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + return (ch->half_ch->get_fDSR(ch->recv) ? TIOCM_DSR : 0) | + (ch->half_ch->get_fCTS(ch->recv) ? TIOCM_CTS : 0) | + (ch->half_ch->get_fCD(ch->recv) ? TIOCM_CD : 0) | + (ch->half_ch->get_fRI(ch->recv) ? TIOCM_RI : 0) | + (ch->half_ch->get_fCTS(ch->send) ? TIOCM_RTS : 0) | + (ch->half_ch->get_fDSR(ch->send) ? 
TIOCM_DTR : 0); +} +EXPORT_SYMBOL(smd_tiocmget); + +/* this api will be called while holding smd_lock */ +int +smd_tiocmset_from_cb(smd_channel_t *ch, unsigned int set, unsigned int clear) +{ + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + if (set & TIOCM_DTR) + ch->half_ch->set_fDSR(ch->send, 1); + + if (set & TIOCM_RTS) + ch->half_ch->set_fCTS(ch->send, 1); + + if (clear & TIOCM_DTR) + ch->half_ch->set_fDSR(ch->send, 0); + + if (clear & TIOCM_RTS) + ch->half_ch->set_fCTS(ch->send, 0); + + ch->half_ch->set_fSTATE(ch->send, 1); + barrier(); + ch->notify_other_cpu(); + + return 0; +} +EXPORT_SYMBOL(smd_tiocmset_from_cb); + +int smd_tiocmset(smd_channel_t *ch, unsigned int set, unsigned int clear) +{ + unsigned long flags; + + if (!ch) { + pr_err("%s: Invalid channel specified\n", __func__); + return -ENODEV; + } + + spin_lock_irqsave(&smd_lock, flags); + smd_tiocmset_from_cb(ch, set, clear); + spin_unlock_irqrestore(&smd_lock, flags); + + return 0; +} +EXPORT_SYMBOL(smd_tiocmset); + +int smd_is_pkt_avail(smd_channel_t *ch) +{ + unsigned long flags; + + if (!ch || !ch->is_pkt_ch) + return -EINVAL; + + if (ch->current_packet) + return 1; + + spin_lock_irqsave(&smd_lock, flags); + update_packet_state(ch); + spin_unlock_irqrestore(&smd_lock, flags); + + return ch->current_packet ? 1 : 0; +} +EXPORT_SYMBOL(smd_is_pkt_avail); + + +/* -------------------------------------------------------------------------- */ + +/* + * Shared Memory Range Check + * + * Takes a physical address and an offset and checks if the resulting physical + * address would fit into one of the aux smem regions. If so, returns the + * corresponding virtual address. Otherwise returns NULL. Expects the array + * of smem regions to be in ascending physical address order. 
+ * + * @base: physical base address to check + * @offset: offset from the base to get the final address + */ +static void *smem_range_check(void *base, unsigned offset) +{ + int i; + void *phys_addr; + unsigned size; + + for (i = 0; i < num_smem_areas; ++i) { + phys_addr = smem_areas[i].phys_addr; + size = smem_areas[i].size; + if (base < phys_addr) + return NULL; + if (base > phys_addr + size) + continue; + if (base >= phys_addr && base + offset < phys_addr + size) + return smem_areas[i].virt_addr + offset; + } + + return NULL; +} + +/* smem_alloc returns the pointer to smem item if it is already allocated. + * Otherwise, it returns NULL. + */ +void *smem_alloc(unsigned id, unsigned size) +{ + return smem_find(id, size); +} +EXPORT_SYMBOL(smem_alloc); + +/* smem_alloc2 returns the pointer to smem item. If it is not allocated, + * it allocates it and then returns the pointer to it. + */ +void *smem_alloc2(unsigned id, unsigned size_in) +{ + struct smem_shared *shared = (void *) msm_smem_base_virt; + struct smem_heap_entry *toc = shared->heap_toc; + unsigned long flags; + void *ret = NULL; + if (!shared->heap_info.initialized) { + pr_err("%s: smem heap info not initialized\n", __func__); + return NULL; + } + + if (id >= SMEM_NUM_ITEMS) + return NULL; + + size_in = ALIGN(size_in, 8); + remote_spin_lock_irqsave(&remote_spinlock, flags); + if (toc[id].allocated) { + SMD_DBG("%s: %u already allocated\n", __func__, id); + if (size_in != toc[id].size) + pr_err("%s: wrong size %u (expected %u)\n", + __func__, toc[id].size, size_in); + else + ret = (void *)(msm_smem_base_virt + toc[id].offset); + } else if (id > SMEM_FIXED_ITEM_LAST) { + SMD_DBG("%s: allocating %u\n", __func__, id); + if (shared->heap_info.heap_remaining >= size_in) { + toc[id].offset = shared->heap_info.free_offset; + toc[id].size = size_in; + wmb(); + toc[id].allocated = 1; + + shared->heap_info.free_offset += size_in; + shared->heap_info.heap_remaining -= size_in; + ret = (void *)(msm_smem_base_virt + 
toc[id].offset); + } else + pr_err("%s: not enough memory %u (required %u)\n", + __func__, shared->heap_info.heap_remaining, + size_in); + } + wmb(); + remote_spin_unlock_irqrestore(&remote_spinlock, flags); + return ret; +} +EXPORT_SYMBOL(smem_alloc2); + +void *smem_get_entry(unsigned id, unsigned *size) +{ + struct smem_shared *shared = (void *) msm_smem_base_virt; + struct smem_heap_entry *toc = shared->heap_toc; + int use_spinlocks = spinlocks_initialized; + void *ret = 0; + unsigned long flags = 0; + + if (id >= SMEM_NUM_ITEMS) + return ret; + + if (use_spinlocks) + remote_spin_lock_irqsave(&remote_spinlock, flags); + /* toc is in device memory and cannot be speculatively accessed */ + if (toc[id].allocated) { + *size = toc[id].size; + barrier(); + if (!(toc[id].reserved & BASE_ADDR_MASK)) + ret = (void *) (msm_smem_base_virt + toc[id].offset); + else + ret = smem_range_check( + (void *)(toc[id].reserved & BASE_ADDR_MASK), + toc[id].offset); + } else { + *size = 0; + } + if (use_spinlocks) + remote_spin_unlock_irqrestore(&remote_spinlock, flags); + + return ret; +} +EXPORT_SYMBOL(smem_get_entry); + +void *smem_find(unsigned id, unsigned size_in) +{ + unsigned size; + void *ptr; + + ptr = smem_get_entry(id, &size); + if (!ptr) + return 0; + + size_in = ALIGN(size_in, 8); + if (size_in != size) { + pr_err("smem_find(%d, %d): wrong size %d\n", + id, size_in, size); + return 0; + } + + return ptr; +} +EXPORT_SYMBOL(smem_find); + +static int smsm_cb_init(void) +{ + struct smsm_state_info *state_info; + int n; + int ret = 0; + + smsm_states = kmalloc(sizeof(struct smsm_state_info)*SMSM_NUM_ENTRIES, + GFP_KERNEL); + + if (!smsm_states) { + pr_err("%s: SMSM init failed\n", __func__); + return -ENOMEM; + } + + smsm_cb_wq = create_singlethread_workqueue("smsm_cb_wq"); + if (!smsm_cb_wq) { + pr_err("%s: smsm_cb_wq creation failed\n", __func__); + kfree(smsm_states); + return -EFAULT; + } + + mutex_lock(&smsm_lock); + for (n = 0; n < SMSM_NUM_ENTRIES; n++) { + state_info 
= &smsm_states[n]; + state_info->last_value = __raw_readl(SMSM_STATE_ADDR(n)); + state_info->intr_mask_set = 0x0; + state_info->intr_mask_clear = 0x0; + INIT_LIST_HEAD(&state_info->callbacks); + } + mutex_unlock(&smsm_lock); + + return ret; +} + +static int smsm_init(void) +{ + struct smem_shared *shared = (void *) msm_smem_base_virt; + int i; + struct smsm_size_info_type *smsm_size_info; + + + smsm_size_info = smem_alloc(SMEM_SMSM_SIZE_INFO, + sizeof(struct smsm_size_info_type)); + if (smsm_size_info) { + SMSM_NUM_ENTRIES = smsm_size_info->num_entries; + SMSM_NUM_HOSTS = smsm_size_info->num_hosts; + } + + i = kfifo_alloc(&smsm_snapshot_fifo, + sizeof(uint32_t) * SMSM_NUM_ENTRIES * SMSM_SNAPSHOT_CNT, + GFP_KERNEL); + if (i) { + pr_err("%s: SMSM state fifo alloc failed %d\n", __func__, i); + return i; + } + //wake_lock_init(&smsm_snapshot_wakelock, WAKE_LOCK_SUSPEND, + // "smsm_snapshot"); + + if (!smsm_info.state) { + smsm_info.state = smem_alloc2(ID_SHARED_STATE, + SMSM_NUM_ENTRIES * + sizeof(uint32_t)); + + if (smsm_info.state) { + smd_write_intr(0, SMSM_STATE_ADDR(SMSM_APPS_STATE)); + if ((shared->version[VERSION_MODEM] >> 16) >= 0xB) + smd_write_intr(0, \ + SMSM_STATE_ADDR(SMSM_APPS_DEM_I)); + } + } + + if (!smsm_info.intr_mask) { + smsm_info.intr_mask = smem_alloc2(SMEM_SMSM_CPU_INTR_MASK, + SMSM_NUM_ENTRIES * + SMSM_NUM_HOSTS * + sizeof(uint32_t)); + + if (smsm_info.intr_mask) { + for (i = 0; i < SMSM_NUM_ENTRIES; i++) + smd_write_intr(0x0, + SMSM_INTR_MASK_ADDR(i, SMSM_APPS)); + + /* Configure legacy modem bits */ + smd_write_intr(LEGACY_MODEM_SMSM_MASK, + SMSM_INTR_MASK_ADDR(SMSM_MODEM_STATE, + SMSM_APPS)); + } + } + + if (!smsm_info.intr_mux) + smsm_info.intr_mux = smem_alloc2(SMEM_SMD_SMSM_INTR_MUX, + SMSM_NUM_INTR_MUX * + sizeof(uint32_t)); + + i = smsm_cb_init(); + if (i) + return i; + + wmb(); + + smsm_pm_notifier(&smsm_pm_nb, PM_POST_SUSPEND, NULL); + i = register_pm_notifier(&smsm_pm_nb); + if (i) + pr_err("%s: power state notif error %d\n", 
__func__, i); + + return 0; +} + +void smsm_reset_modem(unsigned mode) +{ + if (mode == SMSM_SYSTEM_DOWNLOAD) { + mode = SMSM_RESET | SMSM_SYSTEM_DOWNLOAD; + } else if (mode == SMSM_MODEM_WAIT) { + mode = SMSM_RESET | SMSM_MODEM_WAIT; + } else { /* reset_mode is SMSM_RESET or default */ + mode = SMSM_RESET; + } + + smsm_change_state(SMSM_APPS_STATE, mode, mode); +} +EXPORT_SYMBOL(smsm_reset_modem); + +void smsm_reset_modem_cont(void) +{ + unsigned long flags; + uint32_t state; + + if (!smsm_info.state) + return; + + spin_lock_irqsave(&smem_lock, flags); + state = __raw_readl(SMSM_STATE_ADDR(SMSM_APPS_STATE)) \ + & ~SMSM_MODEM_WAIT; + smd_write_intr(state, SMSM_STATE_ADDR(SMSM_APPS_STATE)); + wmb(); + spin_unlock_irqrestore(&smem_lock, flags); +} +EXPORT_SYMBOL(smsm_reset_modem_cont); + +static void smsm_cb_snapshot(uint32_t use_wakelock) +{ + int n; + uint32_t new_state; + unsigned long flags; + int ret; + + ret = kfifo_avail(&smsm_snapshot_fifo); + if (ret < SMSM_SNAPSHOT_SIZE) { + pr_err("%s: SMSM snapshot full %d\n", __func__, ret); + return; + } + + /* + * To avoid a race condition with notify_smsm_cb_clients_worker, the + * following sequence must be followed: + * 1) increment snapshot count + * 2) insert data into FIFO + * + * Potentially in parallel, the worker: + * a) verifies >= 1 snapshots are in FIFO + * b) processes snapshot + * c) decrements reference count + * + * This order ensures that 1 will always occur before abc. 
+ */ + if (use_wakelock) { + spin_lock_irqsave(&smsm_snapshot_count_lock, flags); + if (smsm_snapshot_count == 0) { + SMx_POWER_INFO("SMSM snapshot wake lock\n"); + //wake_lock(&smsm_snapshot_wakelock); + } + ++smsm_snapshot_count; + spin_unlock_irqrestore(&smsm_snapshot_count_lock, flags); + } + + /* queue state entries */ + for (n = 0; n < SMSM_NUM_ENTRIES; n++) { + new_state = __raw_readl(SMSM_STATE_ADDR(n)); + + ret = kfifo_in(&smsm_snapshot_fifo, + &new_state, sizeof(new_state)); + if (ret != sizeof(new_state)) { + pr_err("%s: SMSM snapshot failure %d\n", __func__, ret); + goto restore_snapshot_count; + } + } + + /* queue wakelock usage flag */ + ret = kfifo_in(&smsm_snapshot_fifo, + &use_wakelock, sizeof(use_wakelock)); + if (ret != sizeof(use_wakelock)) { + pr_err("%s: SMSM snapshot failure %d\n", __func__, ret); + goto restore_snapshot_count; + } + + queue_work(smsm_cb_wq, &smsm_cb_work); + return; + +restore_snapshot_count: + if (use_wakelock) { + spin_lock_irqsave(&smsm_snapshot_count_lock, flags); + if (smsm_snapshot_count) { + --smsm_snapshot_count; + if (smsm_snapshot_count == 0) { + SMx_POWER_INFO("SMSM snapshot wake unlock\n"); + //wake_unlock(&smsm_snapshot_wakelock); + } + } else { + pr_err("%s: invalid snapshot count\n", __func__); + } + spin_unlock_irqrestore(&smsm_snapshot_count_lock, flags); + } +} + +static irqreturn_t smsm_irq_handler(int irq, void *data) +{ + unsigned long flags; + + if (irq == INT_ADSP_A11_SMSM) { + uint32_t mux_val; + static uint32_t prev_smem_q6_apps_smsm; + + if (smsm_info.intr_mux && 0 /*cpu_is_qsd8x50()*/) { + mux_val = __raw_readl( + SMSM_INTR_MUX_ADDR(SMEM_Q6_APPS_SMSM)); + if (mux_val != prev_smem_q6_apps_smsm) + prev_smem_q6_apps_smsm = mux_val; + } + + spin_lock_irqsave(&smem_lock, flags); + smsm_cb_snapshot(1); + spin_unlock_irqrestore(&smem_lock, flags); + return IRQ_HANDLED; + } + + spin_lock_irqsave(&smem_lock, flags); + if (!smsm_info.state) { + SMSM_INFO("<SM NO STATE>\n"); + } else { + unsigned old_apps, 
apps; + unsigned modm = __raw_readl(SMSM_STATE_ADDR(SMSM_MODEM_STATE)); + + old_apps = apps = __raw_readl(SMSM_STATE_ADDR(SMSM_APPS_STATE)); + + SMSM_DBG("<SM %08x %08x>\n", apps, modm); + if (apps & SMSM_RESET) { + /* If we get an interrupt and the apps SMSM_RESET + bit is already set, the modem is acking the + app's reset ack. */ + if (!disable_smsm_reset_handshake) + apps &= ~SMSM_RESET; + /* Issue a fake irq to handle any + * smd state changes during reset + */ + smd_fake_irq_handler(0); + + /* queue modem restart notify chain */ + //modem_queue_start_reset_notify(); + + } else if (modm & SMSM_RESET) { + pr_err("\nSMSM: Modem SMSM state changed to SMSM_RESET."); + if (!disable_smsm_reset_handshake) { + apps |= SMSM_RESET; + flush_cache_all(); + outer_flush_all(); + } + //modem_queue_start_reset_notify(); + + } else if (modm & SMSM_INIT) { + if (!(apps & SMSM_INIT)) { + apps |= SMSM_INIT; + //modem_queue_smsm_init_notify(); + } + + if (modm & SMSM_SMDINIT) + apps |= SMSM_SMDINIT; + if ((apps & (SMSM_INIT | SMSM_SMDINIT | SMSM_RPCINIT)) == + (SMSM_INIT | SMSM_SMDINIT | SMSM_RPCINIT)) + apps |= SMSM_RUN; + } else if (modm & SMSM_SYSTEM_DOWNLOAD) { + pr_err("\nSMSM: Modem SMSM state changed to SMSM_SYSTEM_DOWNLOAD."); + //modem_queue_start_reset_notify(); + } + + if (old_apps != apps) { + SMSM_DBG("<SM %08x NOTIFY>\n", apps); + smd_write_intr(apps, SMSM_STATE_ADDR(SMSM_APPS_STATE)); + do_smd_probe(); + notify_other_smsm(SMSM_APPS_STATE, (old_apps ^ apps)); + } + + smsm_cb_snapshot(1); + } + spin_unlock_irqrestore(&smem_lock, flags); + return IRQ_HANDLED; +} + +static irqreturn_t smsm_modem_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMSM Int Modem->Apps\n"); + ++interrupt_stats[SMD_MODEM].smsm_in_count; + return smsm_irq_handler(irq, data); +} + +static irqreturn_t smsm_dsp_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMSM Int LPASS->Apps\n"); + ++interrupt_stats[SMD_Q6].smsm_in_count; + return smsm_irq_handler(irq, data); +} + +static 
irqreturn_t smsm_dsps_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMSM Int DSPS->Apps\n"); + ++interrupt_stats[SMD_DSPS].smsm_in_count; + return smsm_irq_handler(irq, data); +} + +static irqreturn_t smsm_wcnss_irq_handler(int irq, void *data) +{ + SMx_POWER_INFO("SMSM Int WCNSS->Apps\n"); + ++interrupt_stats[SMD_WCNSS].smsm_in_count; + return smsm_irq_handler(irq, data); +} + +/* + * Changes the global interrupt mask. The set and clear masks are re-applied + * every time the global interrupt mask is updated for callback registration + * and de-registration. + * + * The clear mask is applied first, so if a bit is set to 1 in both the clear + * mask and the set mask, the result will be that the interrupt is set. + * + * @smsm_entry SMSM entry to change + * @clear_mask 1 = clear bit, 0 = no-op + * @set_mask 1 = set bit, 0 = no-op + * + * @returns 0 for success, < 0 for error + */ +int smsm_change_intr_mask(uint32_t smsm_entry, + uint32_t clear_mask, uint32_t set_mask) +{ + uint32_t old_mask, new_mask; + unsigned long flags; + + if (smsm_entry >= SMSM_NUM_ENTRIES) { + pr_err("smsm_change_state: Invalid entry %d\n", + smsm_entry); + return -EINVAL; + } + + if (!smsm_info.intr_mask) { + pr_err("smsm_change_intr_mask <SM NO STATE>\n"); + return -EIO; + } + + spin_lock_irqsave(&smem_lock, flags); + smsm_states[smsm_entry].intr_mask_clear = clear_mask; + smsm_states[smsm_entry].intr_mask_set = set_mask; + + old_mask = __raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS)); + new_mask = (old_mask & ~clear_mask) | set_mask; + smd_write_intr(new_mask, SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS)); + + wmb(); + spin_unlock_irqrestore(&smem_lock, flags); + + return 0; +} +EXPORT_SYMBOL(smsm_change_intr_mask); + +int smsm_get_intr_mask(uint32_t smsm_entry, uint32_t *intr_mask) +{ + if (smsm_entry >= SMSM_NUM_ENTRIES) { + pr_err("smsm_change_state: Invalid entry %d\n", + smsm_entry); + return -EINVAL; + } + + if (!smsm_info.intr_mask) { + pr_err("smsm_change_intr_mask <SM 
NO STATE>\n"); + return -EIO; + } + + *intr_mask = __raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS)); + return 0; +} +EXPORT_SYMBOL(smsm_get_intr_mask); + +int smsm_change_state(uint32_t smsm_entry, + uint32_t clear_mask, uint32_t set_mask) +{ + unsigned long flags; + uint32_t old_state, new_state; + + if (smsm_entry >= SMSM_NUM_ENTRIES) { + pr_err("smsm_change_state: Invalid entry %d", + smsm_entry); + return -EINVAL; + } + + if (!smsm_info.state) { + pr_err("smsm_change_state <SM NO STATE>\n"); + return -EIO; + } + spin_lock_irqsave(&smem_lock, flags); + + old_state = __raw_readl(SMSM_STATE_ADDR(smsm_entry)); + new_state = (old_state & ~clear_mask) | set_mask; + smd_write_intr(new_state, SMSM_STATE_ADDR(smsm_entry)); + SMSM_DBG("smsm_change_state %x\n", new_state); + notify_other_smsm(SMSM_APPS_STATE, (old_state ^ new_state)); + + spin_unlock_irqrestore(&smem_lock, flags); + + return 0; +} +EXPORT_SYMBOL(smsm_change_state); + +uint32_t smsm_get_state(uint32_t smsm_entry) +{ + uint32_t rv = 0; + + /* needs interface change to return error code */ + if (smsm_entry >= SMSM_NUM_ENTRIES) { + pr_err("smsm_change_state: Invalid entry %d", + smsm_entry); + return 0; + } + + if (!smsm_info.state) { + pr_err("smsm_get_state <SM NO STATE>\n"); + } else { + rv = __raw_readl(SMSM_STATE_ADDR(smsm_entry)); + } + + return rv; +} +EXPORT_SYMBOL(smsm_get_state); + +/** + * Performs SMSM callback client notifiction. 
+ */ +void notify_smsm_cb_clients_worker(struct work_struct *work) +{ + struct smsm_state_cb_info *cb_info; + struct smsm_state_info *state_info; + int n; + uint32_t new_state; + uint32_t state_changes; + uint32_t use_wakelock; + int ret; + unsigned long flags; + + if (!smd_initialized) + return; + + while (kfifo_len(&smsm_snapshot_fifo) >= SMSM_SNAPSHOT_SIZE) { + mutex_lock(&smsm_lock); + for (n = 0; n < SMSM_NUM_ENTRIES; n++) { + state_info = &smsm_states[n]; + + ret = kfifo_out(&smsm_snapshot_fifo, &new_state, + sizeof(new_state)); + if (ret != sizeof(new_state)) { + pr_err("%s: snapshot underflow %d\n", + __func__, ret); + mutex_unlock(&smsm_lock); + return; + } + + state_changes = state_info->last_value ^ new_state; + if (state_changes) { + SMx_POWER_INFO("SMSM Change %d: %08x->%08x\n", + n, state_info->last_value, + new_state); + list_for_each_entry(cb_info, + &state_info->callbacks, cb_list) { + + if (cb_info->mask & state_changes) + cb_info->notify(cb_info->data, + state_info->last_value, + new_state); + } + state_info->last_value = new_state; + } + } + + /* read wakelock flag */ + ret = kfifo_out(&smsm_snapshot_fifo, &use_wakelock, + sizeof(use_wakelock)); + if (ret != sizeof(use_wakelock)) { + pr_err("%s: snapshot underflow %d\n", + __func__, ret); + mutex_unlock(&smsm_lock); + return; + } + mutex_unlock(&smsm_lock); + + if (use_wakelock) { + spin_lock_irqsave(&smsm_snapshot_count_lock, flags); + if (smsm_snapshot_count) { + --smsm_snapshot_count; + if (smsm_snapshot_count == 0) { + SMx_POWER_INFO("SMSM snapshot" + " wake unlock\n"); + //wake_unlock(&smsm_snapshot_wakelock); + } + } else { + pr_err("%s: invalid snapshot count\n", + __func__); + } + spin_unlock_irqrestore(&smsm_snapshot_count_lock, + flags); + } + } +} + + +/** + * Registers callback for SMSM state notifications when the specified + * bits change. 
+ * + * @smsm_entry Processor entry to deregister + * @mask Bits to deregister (if result is 0, callback is removed) + * @notify Notification function to deregister + * @data Opaque data passed in to callback + * + * @returns Status code + * <0 error code + * 0 inserted new entry + * 1 updated mask of existing entry + */ +int smsm_state_cb_register(uint32_t smsm_entry, uint32_t mask, + void (*notify)(void *, uint32_t, uint32_t), void *data) +{ + struct smsm_state_info *state; + struct smsm_state_cb_info *cb_info; + struct smsm_state_cb_info *cb_found = 0; + uint32_t new_mask = 0; + int ret = 0; + + if (smsm_entry >= SMSM_NUM_ENTRIES) + return -EINVAL; + + mutex_lock(&smsm_lock); + + if (!smsm_states) { + /* smsm not yet initialized */ + ret = -ENODEV; + goto cleanup; + } + + state = &smsm_states[smsm_entry]; + list_for_each_entry(cb_info, + &state->callbacks, cb_list) { + if (!ret && (cb_info->notify == notify) && + (cb_info->data == data)) { + cb_info->mask |= mask; + cb_found = cb_info; + ret = 1; + } + new_mask |= cb_info->mask; + } + + if (!cb_found) { + cb_info = kmalloc(sizeof(struct smsm_state_cb_info), + GFP_ATOMIC); + if (!cb_info) { + ret = -ENOMEM; + goto cleanup; + } + + cb_info->mask = mask; + cb_info->notify = notify; + cb_info->data = data; + INIT_LIST_HEAD(&cb_info->cb_list); + list_add_tail(&cb_info->cb_list, + &state->callbacks); + new_mask |= mask; + } + + /* update interrupt notification mask */ + if (smsm_entry == SMSM_MODEM_STATE) + new_mask |= LEGACY_MODEM_SMSM_MASK; + + if (smsm_info.intr_mask) { + unsigned long flags; + + spin_lock_irqsave(&smem_lock, flags); + new_mask = (new_mask & ~state->intr_mask_clear) + | state->intr_mask_set; + smd_write_intr(new_mask, + SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS)); + wmb(); + spin_unlock_irqrestore(&smem_lock, flags); + } + +cleanup: + mutex_unlock(&smsm_lock); + return ret; +} +EXPORT_SYMBOL(smsm_state_cb_register); + + +/** + * Deregisters for SMSM state notifications for the specified bits. 
+ * + * @smsm_entry Processor entry to deregister + * @mask Bits to deregister (if result is 0, callback is removed) + * @notify Notification function to deregister + * @data Opaque data passed in to callback + * + * @returns Status code + * <0 error code + * 0 not found + * 1 updated mask + * 2 removed callback + */ +int smsm_state_cb_deregister(uint32_t smsm_entry, uint32_t mask, + void (*notify)(void *, uint32_t, uint32_t), void *data) +{ + struct smsm_state_cb_info *cb_info; + struct smsm_state_cb_info *cb_tmp; + struct smsm_state_info *state; + uint32_t new_mask = 0; + int ret = 0; + + if (smsm_entry >= SMSM_NUM_ENTRIES) + return -EINVAL; + + mutex_lock(&smsm_lock); + + if (!smsm_states) { + /* smsm not yet initialized */ + mutex_unlock(&smsm_lock); + return -ENODEV; + } + + state = &smsm_states[smsm_entry]; + list_for_each_entry_safe(cb_info, cb_tmp, + &state->callbacks, cb_list) { + if (!ret && (cb_info->notify == notify) && + (cb_info->data == data)) { + cb_info->mask &= ~mask; + ret = 1; + if (!cb_info->mask) { + /* no mask bits set, remove callback */ + list_del(&cb_info->cb_list); + kfree(cb_info); + ret = 2; + continue; + } + } + new_mask |= cb_info->mask; + } + + /* update interrupt notification mask */ + if (smsm_entry == SMSM_MODEM_STATE) + new_mask |= LEGACY_MODEM_SMSM_MASK; + + if (smsm_info.intr_mask) { + unsigned long flags; + + spin_lock_irqsave(&smem_lock, flags); + new_mask = (new_mask & ~state->intr_mask_clear) + | state->intr_mask_set; + smd_write_intr(new_mask, + SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS)); + wmb(); + spin_unlock_irqrestore(&smem_lock, flags); + } + + mutex_unlock(&smsm_lock); + return ret; +} +EXPORT_SYMBOL(smsm_state_cb_deregister); + +int smd_module_init_notifier_register(struct notifier_block *nb) +{ + int ret; + if (!nb) + return -EINVAL; + mutex_lock(&smd_module_init_notifier_lock); + ret = raw_notifier_chain_register(&smd_module_init_notifier_list, nb); + if (smd_module_inited) + nb->notifier_call(nb, 0, NULL); + 
mutex_unlock(&smd_module_init_notifier_lock);
+	return ret;
+}
+EXPORT_SYMBOL(smd_module_init_notifier_register);
+
+int smd_module_init_notifier_unregister(struct notifier_block *nb)
+{
+	int ret;
+	if (!nb)
+		return -EINVAL;
+	mutex_lock(&smd_module_init_notifier_lock);
+	ret = raw_notifier_chain_unregister(&smd_module_init_notifier_list,
+			nb);
+	mutex_unlock(&smd_module_init_notifier_lock);
+	return ret;
+}
+EXPORT_SYMBOL(smd_module_init_notifier_unregister);
+
+static void smd_module_init_notify(uint32_t state, void *data)
+{
+	mutex_lock(&smd_module_init_notifier_lock);
+	smd_module_inited = 1;
+	raw_notifier_call_chain(&smd_module_init_notifier_list,
+			state, data);
+	mutex_unlock(&smd_module_init_notifier_lock);
+}
+
+
+static int intr_init(struct interrupt_config_item *private_irq,
+			struct smd_irq_config *platform_irq,
+			struct platform_device *pdev
+			)
+{
+	int irq_id;
+	int ret;
+
+	private_irq->out_bit_pos = platform_irq->out_bit_pos;
+	private_irq->out_offset = platform_irq->out_offset;
+	/* HACK: map a fixed APCS base instead of platform_irq->out_base */
+	private_irq->out_base = ioremap_nocache(0x02011000, 0x100);
+
+	irq_id = platform_get_irq_byname(
+			pdev,
+			platform_irq->irq_name
+		);
+	SMD_DBG("smd: %s: register irq: %s id: %d\n", __func__,
+		platform_irq->irq_name, irq_id);
+	ret = request_irq(irq_id,
+			private_irq->irq_handler,
+			platform_irq->flags,
+			platform_irq->device_name,
+			(void *)platform_irq->dev_id
+		);
+	if (ret < 0) {
+		platform_irq->irq_id = ret;
+		private_irq->irq_id = ret;
+	} else {
+		platform_irq->irq_id = irq_id;
+		private_irq->irq_id = irq_id;
+	}
+
+	return ret;
+}
+
+int sort_cmp_func(const void *a, const void *b)
+{
+	struct smem_area *left = (struct smem_area *)(a);
+	struct smem_area *right = (struct smem_area *)(b);
+
+	return left->phys_addr - right->phys_addr;
+} + +int smd_core_platform_init(struct platform_device *pdev, struct smd_platform *smd_platform_data) +{ + int i; + int ret; + uint32_t num_ss; +// struct smd_platform *smd_platform_data; + struct smd_subsystem_config *smd_ss_config_list; + struct smd_subsystem_config *cfg; + int err_ret = 0; + struct smd_smem_regions *smd_smem_areas; + int smem_idx = 0; + +// smd_platform_data = pdev->dev.platform_data; + num_ss = smd_platform_data->num_ss_configs; + smd_ss_config_list = smd_platform_data->smd_ss_configs; + + if (smd_platform_data->smd_ssr_config) + disable_smsm_reset_handshake = smd_platform_data-> + smd_ssr_config->disable_smsm_reset_handshake; + + smd_smem_areas = smd_platform_data->smd_smem_areas; + if (smd_smem_areas) { + num_smem_areas = smd_platform_data->num_smem_areas; + smem_areas = kmalloc(sizeof(struct smem_area) * num_smem_areas, + GFP_KERNEL); + if (!smem_areas) { + pr_err("%s: smem_areas kmalloc failed\n", __func__); + err_ret = -ENOMEM; + goto smem_areas_alloc_fail; + } + + for (smem_idx = 0; smem_idx < num_smem_areas; ++smem_idx) { + smem_areas[smem_idx].phys_addr = + smd_smem_areas[smem_idx].phys_addr; + smem_areas[smem_idx].size = + smd_smem_areas[smem_idx].size; + smem_areas[smem_idx].virt_addr = ioremap_nocache( + (unsigned long)(smem_areas[smem_idx].phys_addr), + smem_areas[smem_idx].size); + if (!smem_areas[smem_idx].virt_addr) { + pr_err("%s: ioremap_nocache() of addr:%p" + " size: %x\n", __func__, + smem_areas[smem_idx].phys_addr, + smem_areas[smem_idx].size); + err_ret = -ENOMEM; + ++smem_idx; + goto smem_failed; + } + } + sort(smem_areas, num_smem_areas, + sizeof(struct smem_area), + sort_cmp_func, NULL); + } + + for (i = 0; i < num_ss; i++) { + cfg = &smd_ss_config_list[i]; + + ret = intr_init( + &private_intr_config[cfg->irq_config_id].smd, + &cfg->smd_int, + pdev + ); + + if (ret < 0) { + err_ret = ret; + pr_err("smd: register irq failed on %s\n", + cfg->smd_int.irq_name); + goto intr_failed; + } + + 
interrupt_stats[cfg->irq_config_id].smd_interrupt_id + = cfg->smd_int.irq_id; + /* only init smsm structs if this edge supports smsm */ + if (cfg->smsm_int.irq_id) + ret = intr_init( + &private_intr_config[cfg->irq_config_id].smsm, + &cfg->smsm_int, + pdev + ); + + if (ret < 0) { + err_ret = ret; + pr_err("smd: register irq failed on %s\n", + cfg->smsm_int.irq_name); + goto intr_failed; + } + + if (cfg->smsm_int.irq_id) + interrupt_stats[cfg->irq_config_id].smsm_interrupt_id + = cfg->smsm_int.irq_id; + if (cfg->subsys_name) + strlcpy(edge_to_pids[cfg->edge].subsys_name, + cfg->subsys_name, SMD_MAX_CH_NAME_LEN); + } + + + SMD_INFO("smd_core_platform_init() done\n"); + return 0; + +intr_failed: + pr_err("smd: deregistering IRQs\n"); + for (i = 0; i < num_ss; ++i) { + cfg = &smd_ss_config_list[i]; + + if (cfg->smd_int.irq_id >= 0) + free_irq(cfg->smd_int.irq_id, + (void *)cfg->smd_int.dev_id + ); + if (cfg->smsm_int.irq_id >= 0) + free_irq(cfg->smsm_int.irq_id, + (void *)cfg->smsm_int.dev_id + ); + } +smem_failed: + for (smem_idx = smem_idx - 1; smem_idx >= 0; --smem_idx) + iounmap(smem_areas[smem_idx].virt_addr); + kfree(smem_areas); +smem_areas_alloc_fail: + return err_ret; +} + +static struct smd_subsystem_config smd_config_list[] = { + { + .irq_config_id = SMD_Q6, + .subsys_name = "q6", + .edge = SMD_APPS_QDSP, + + .smd_int.irq_name = "adsp_a11", + .smd_int.flags = IRQF_TRIGGER_RISING, + .smd_int.irq_id = -1, + .smd_int.device_name = "smd_dev", + .smd_int.dev_id = 0, + .smd_int.out_bit_pos = 1 << 15, +// .smd_int.out_base = (void __iomem *)MSM_APCS_GCC_BASE, + .smd_int.out_offset = 0x8, + + .smsm_int.irq_name = "adsp_a11_smsm", + .smsm_int.flags = IRQF_TRIGGER_RISING, + .smsm_int.irq_id = -1, + .smsm_int.device_name = "smd_smsm", + .smsm_int.dev_id = 0, + .smsm_int.out_bit_pos = 1 << 14, +// .smsm_int.out_base = (void __iomem *)MSM_APCS_GCC_BASE, + .smsm_int.out_offset = 0x8, + }, +}; + +static struct smd_subsystem_restart_config smd_ssr_config = { + 
.disable_smsm_reset_handshake = 1,
+};
+
+static struct smd_platform smd_platform_data = {
+	.num_ss_configs = ARRAY_SIZE(smd_config_list),
+	.smd_ss_configs = smd_config_list,
+	.smd_ssr_config = &smd_ssr_config,
+};
+
+
+
+static int msm_smd_probe(struct platform_device *pdev)
+{
+	int ret;
+
+	SMD_INFO("smd probe\n");
+	INIT_WORK(&probe_work, smd_channel_probe_worker);
+
+	channel_close_wq = create_singlethread_workqueue("smd_channel_close");
+	if (!channel_close_wq) {
+		pr_err("%s: create_singlethread_workqueue ENOMEM\n", __func__);
+		return -ENOMEM;
+	}
+
+	if (smsm_init()) {
+		pr_err("smsm_init() failed\n");
+		return -ENODEV;
+	}
+
+	if (pdev) {
+		if (pdev->dev.of_node) {
+			ret = smd_core_platform_init(pdev, &smd_platform_data);
+			if (ret) {
+				pr_err(
+					"SMD: smd_core_platform_init() failed\n");
+				return -ENODEV;
+			}
+		} else if (pdev->dev.platform_data) {
+			ret = smd_core_platform_init(pdev, pdev->dev.platform_data);
+			if (ret) {
+				pr_err(
+					"SMD: smd_core_platform_init() failed\n");
+				return -ENODEV;
+			}
+		}
+	} else {
+		pr_err("SMD: PDEV not found\n");
+		return -ENODEV;
+	}
+
+	smd_initialized = 1;
+
+	smd_alloc_loopback_channel();
+	smsm_irq_handler(0, 0);
+	tasklet_schedule(&smd_fake_irq_tasklet);
+
+	return 0;
+}
+
+
+static const struct of_device_id msm_smd_ids[] = {
+	{ .compatible = "qcom,smd-apq8064", },
+	{ }
+};
+
+static struct platform_driver msm_smd_driver = {
+	.probe = msm_smd_probe,
+	.driver = {
+		.name = MODULE_NAME,
+		.owner = THIS_MODULE,
+		.of_match_table = of_match_ptr(msm_smd_ids),
+	},
+};
+
+#define SMEM_START 0x80000000
+#define SMEM_SIZE SZ_2M
+
+int __init msm_smd_init(void)
+{
+	static bool registered;
+	int rc;
+
+	msm_smem_base_virt = ioremap(SMEM_START, SMEM_SIZE);
+	if (!msm_smem_base_virt) {
+		pr_err("SMD: Unable to map reserved memory 0x%08x\n", SMEM_START);
+		return -ENOMEM;
+	}
+
+	g_out_base = ioremap_nocache(0x02011000, 0x100);
+	if (!g_out_base) {
+		return -ENOMEM;
+	}
+
+	if
(registered) + return 0; + + registered = true; + rc = remote_spin_lock_init(&remote_spinlock, SMEM_SPINLOCK_SMEM_ALLOC); + if (rc) { + pr_err("%s: remote spinlock init failed %d\n", __func__, rc); + return rc; + } + spinlocks_initialized = 1; + + rc = platform_driver_register(&msm_smd_driver); + if (rc) { + pr_err("%s: msm_smd_driver register failed %d\n", + __func__, rc); + return rc; + } + + smd_module_init_notify(0, NULL); + + return 0; +} + +module_init(msm_smd_init); + +MODULE_DESCRIPTION("MSM Shared Memory Core"); +MODULE_AUTHOR("Brian Swetland <swetland@google.com>"); +MODULE_LICENSE("GPL"); diff --git a/drivers/soc/qcom/smd_private.c b/drivers/soc/qcom/smd_private.c new file mode 100644 index 000000000000..411cd81d8eb7 --- /dev/null +++ b/drivers/soc/qcom/smd_private.c @@ -0,0 +1,303 @@ +/* Copyright (c) 2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include "smd_private.h" + +void set_state(volatile void *half_channel, unsigned data) +{ + ((struct smd_half_channel *)(half_channel))->state = data; +} + +unsigned get_state(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->state; +} + +void set_fDSR(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel *)(half_channel))->fDSR = data; +} + +unsigned get_fDSR(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->fDSR; +} + +void set_fCTS(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel *)(half_channel))->fCTS = data; +} + +unsigned get_fCTS(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->fCTS; +} + +void set_fCD(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel *)(half_channel))->fCD = data; +} + +unsigned get_fCD(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->fCD; +} + +void set_fRI(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel *)(half_channel))->fRI = data; +} + +unsigned get_fRI(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->fRI; +} + +void set_fHEAD(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel *)(half_channel))->fHEAD = data; +} + +unsigned get_fHEAD(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->fHEAD; +} + +void set_fTAIL(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel *)(half_channel))->fTAIL = data; +} + +unsigned get_fTAIL(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->fTAIL; +} + +void set_fSTATE(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel *)(half_channel))->fSTATE = data; +} + +unsigned get_fSTATE(volatile void *half_channel) +{ + return ((struct smd_half_channel 
*)(half_channel))->fSTATE; +} + +void set_fBLOCKREADINTR(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel *)(half_channel))->fBLOCKREADINTR = data; +} + +unsigned get_fBLOCKREADINTR(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->fBLOCKREADINTR; +} + +void set_tail(volatile void *half_channel, unsigned data) +{ + ((struct smd_half_channel *)(half_channel))->tail = data; +} + +unsigned get_tail(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->tail; +} + +void set_head(volatile void *half_channel, unsigned data) +{ + ((struct smd_half_channel *)(half_channel))->head = data; +} + +unsigned get_head(volatile void *half_channel) +{ + return ((struct smd_half_channel *)(half_channel))->head; +} + +void set_state_word_access(volatile void *half_channel, unsigned data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->state = data; +} + +unsigned get_state_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->state; +} + +void set_fDSR_word_access(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->fDSR = data; +} + +unsigned get_fDSR_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->fDSR; +} + +void set_fCTS_word_access(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->fCTS = data; +} + +unsigned get_fCTS_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->fCTS; +} + +void set_fCD_word_access(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->fCD = data; +} + +unsigned get_fCD_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->fCD; +} + +void 
set_fRI_word_access(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->fRI = data; +} + +unsigned get_fRI_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->fRI; +} + +void set_fHEAD_word_access(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->fHEAD = data; +} + +unsigned get_fHEAD_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->fHEAD; +} + +void set_fTAIL_word_access(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->fTAIL = data; +} + +unsigned get_fTAIL_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->fTAIL; +} + +void set_fSTATE_word_access(volatile void *half_channel, unsigned char data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->fSTATE = data; +} + +unsigned get_fSTATE_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->fSTATE; +} + +void set_fBLOCKREADINTR_word_access(volatile void *half_channel, + unsigned char data) +{ + ((struct smd_half_channel_word_access *) + (half_channel))->fBLOCKREADINTR = data; +} + +unsigned get_fBLOCKREADINTR_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *) + (half_channel))->fBLOCKREADINTR; +} + +void set_tail_word_access(volatile void *half_channel, unsigned data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->tail = data; +} + +unsigned get_tail_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->tail; +} + +void set_head_word_access(volatile void *half_channel, unsigned data) +{ + ((struct smd_half_channel_word_access *)(half_channel))->head = data; +} + +unsigned 
get_head_word_access(volatile void *half_channel) +{ + return ((struct smd_half_channel_word_access *)(half_channel))->head; +} + +int is_word_access_ch(unsigned ch_type) +{ + if (ch_type == SMD_APPS_RPM || ch_type == SMD_MODEM_RPM || + ch_type == SMD_QDSP_RPM || ch_type == SMD_WCNSS_RPM) + return 1; + else + return 0; +} + +struct smd_half_channel_access *get_half_ch_funcs(unsigned ch_type) +{ + static struct smd_half_channel_access byte_access = { + .set_state = set_state, + .get_state = get_state, + .set_fDSR = set_fDSR, + .get_fDSR = get_fDSR, + .set_fCTS = set_fCTS, + .get_fCTS = get_fCTS, + .set_fCD = set_fCD, + .get_fCD = get_fCD, + .set_fRI = set_fRI, + .get_fRI = get_fRI, + .set_fHEAD = set_fHEAD, + .get_fHEAD = get_fHEAD, + .set_fTAIL = set_fTAIL, + .get_fTAIL = get_fTAIL, + .set_fSTATE = set_fSTATE, + .get_fSTATE = get_fSTATE, + .set_fBLOCKREADINTR = set_fBLOCKREADINTR, + .get_fBLOCKREADINTR = get_fBLOCKREADINTR, + .set_tail = set_tail, + .get_tail = get_tail, + .set_head = set_head, + .get_head = get_head, + }; + static struct smd_half_channel_access word_access = { + .set_state = set_state_word_access, + .get_state = get_state_word_access, + .set_fDSR = set_fDSR_word_access, + .get_fDSR = get_fDSR_word_access, + .set_fCTS = set_fCTS_word_access, + .get_fCTS = get_fCTS_word_access, + .set_fCD = set_fCD_word_access, + .get_fCD = get_fCD_word_access, + .set_fRI = set_fRI_word_access, + .get_fRI = get_fRI_word_access, + .set_fHEAD = set_fHEAD_word_access, + .get_fHEAD = get_fHEAD_word_access, + .set_fTAIL = set_fTAIL_word_access, + .get_fTAIL = get_fTAIL_word_access, + .set_fSTATE = set_fSTATE_word_access, + .get_fSTATE = get_fSTATE_word_access, + .set_fBLOCKREADINTR = set_fBLOCKREADINTR_word_access, + .get_fBLOCKREADINTR = get_fBLOCKREADINTR_word_access, + .set_tail = set_tail_word_access, + .get_tail = get_tail_word_access, + .set_head = set_head_word_access, + .get_head = get_head_word_access, + }; + + if (is_word_access_ch(ch_type)) + return 
&word_access; + else + return &byte_access; +} + diff --git a/drivers/soc/qcom/smd_private.h b/drivers/soc/qcom/smd_private.h new file mode 100644 index 000000000000..6a1dc0ae0800 --- /dev/null +++ b/drivers/soc/qcom/smd_private.h @@ -0,0 +1,267 @@ +/* arch/arm/mach-msm/smd_private.h + * + * Copyright (C) 2007 Google, Inc. + * Copyright (c) 2007-2012, The Linux Foundation. All rights reserved. + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#ifndef _ARCH_ARM_MACH_MSM_MSM_SMD_PRIVATE_H_ +#define _ARCH_ARM_MACH_MSM_MSM_SMD_PRIVATE_H_ + +#include <linux/types.h> +#include <linux/spinlock.h> +#include <linux/msm_smsm.h> +#include <linux/msm_smd.h> + +#define PC_APPS 0 +#define PC_MODEM 1 + +#define VERSION_QDSP6 4 +#define VERSION_APPS_SBL 6 +#define VERSION_MODEM_SBL 7 +#define VERSION_APPS 8 +#define VERSION_MODEM 9 +#define VERSION_DSPS 10 + +#define SMD_HEAP_SIZE 512 + +struct smem_heap_info { + unsigned initialized; + unsigned free_offset; + unsigned heap_remaining; + unsigned reserved; +}; + +struct smem_heap_entry { + unsigned allocated; + unsigned offset; + unsigned size; + unsigned reserved; /* bits 1:0 reserved, bits 31:2 aux smem base addr */ +}; +#define BASE_ADDR_MASK 0xfffffffc + +struct smem_proc_comm { + unsigned command; + unsigned status; + unsigned data1; + unsigned data2; +}; + +struct smem_shared { + struct smem_proc_comm proc_comm[4]; + unsigned version[32]; + struct smem_heap_info heap_info; + struct smem_heap_entry heap_toc[SMD_HEAP_SIZE]; +}; + +#if 1 /* defined(CONFIG_MSM_SMD_PKG4)*/ +struct smsm_interrupt_info { + uint32_t 
aArm_en_mask; + uint32_t aArm_interrupts_pending; + uint32_t aArm_wakeup_reason; + uint32_t aArm_rpc_prog; + uint32_t aArm_rpc_proc; + char aArm_smd_port_name[20]; + uint32_t aArm_gpio_info; +}; +#elif defined(CONFIG_MSM_SMD_PKG3) +struct smsm_interrupt_info { + uint32_t aArm_en_mask; + uint32_t aArm_interrupts_pending; + uint32_t aArm_wakeup_reason; +}; +#elif !defined(CONFIG_MSM_SMD) +/* Don't trigger the error */ +#else +#error No SMD Package Specified; aborting +#endif + +#define SZ_DIAG_ERR_MSG 0xC8 +#define ID_DIAG_ERR_MSG SMEM_DIAG_ERR_MESSAGE +#define ID_SMD_CHANNELS SMEM_SMD_BASE_ID +#define ID_SHARED_STATE SMEM_SMSM_SHARED_STATE +#define ID_CH_ALLOC_TBL SMEM_CHANNEL_ALLOC_TBL + +#define SMD_SS_CLOSED 0x00000000 +#define SMD_SS_OPENING 0x00000001 +#define SMD_SS_OPENED 0x00000002 +#define SMD_SS_FLUSHING 0x00000003 +#define SMD_SS_CLOSING 0x00000004 +#define SMD_SS_RESET 0x00000005 +#define SMD_SS_RESET_OPENING 0x00000006 + +#define SMD_BUF_SIZE 8192 +#define SMD_CHANNELS 64 +#define SMD_HEADER_SIZE 20 + +/* 'type' field of smd_alloc_elm structure + * has the following breakup + * bits 0-7 -> channel type + * bits 8-11 -> xfer type + * bits 12-31 -> reserved + */ +struct smd_alloc_elm { + char name[20]; + uint32_t cid; + uint32_t type; + uint32_t ref_count; +}; + +#define SMD_CHANNEL_TYPE(x) ((x) & 0x000000FF) +#define SMD_XFER_TYPE(x) (((x) & 0x00000F00) >> 8) + +struct smd_half_channel { + unsigned state; + unsigned char fDSR; + unsigned char fCTS; + unsigned char fCD; + unsigned char fRI; + unsigned char fHEAD; + unsigned char fTAIL; + unsigned char fSTATE; + unsigned char fBLOCKREADINTR; + unsigned tail; + unsigned head; +}; + +struct smd_half_channel_word_access { + unsigned state; + unsigned fDSR; + unsigned fCTS; + unsigned fCD; + unsigned fRI; + unsigned fHEAD; + unsigned fTAIL; + unsigned fSTATE; + unsigned fBLOCKREADINTR; + unsigned tail; + unsigned head; +}; + +struct smd_half_channel_access { + void (*set_state)(volatile void *half_channel, 
unsigned data); + unsigned (*get_state)(volatile void *half_channel); + void (*set_fDSR)(volatile void *half_channel, unsigned char data); + unsigned (*get_fDSR)(volatile void *half_channel); + void (*set_fCTS)(volatile void *half_channel, unsigned char data); + unsigned (*get_fCTS)(volatile void *half_channel); + void (*set_fCD)(volatile void *half_channel, unsigned char data); + unsigned (*get_fCD)(volatile void *half_channel); + void (*set_fRI)(volatile void *half_channel, unsigned char data); + unsigned (*get_fRI)(volatile void *half_channel); + void (*set_fHEAD)(volatile void *half_channel, unsigned char data); + unsigned (*get_fHEAD)(volatile void *half_channel); + void (*set_fTAIL)(volatile void *half_channel, unsigned char data); + unsigned (*get_fTAIL)(volatile void *half_channel); + void (*set_fSTATE)(volatile void *half_channel, unsigned char data); + unsigned (*get_fSTATE)(volatile void *half_channel); + void (*set_fBLOCKREADINTR)(volatile void *half_channel, + unsigned char data); + unsigned (*get_fBLOCKREADINTR)(volatile void *half_channel); + void (*set_tail)(volatile void *half_channel, unsigned data); + unsigned (*get_tail)(volatile void *half_channel); + void (*set_head)(volatile void *half_channel, unsigned data); + unsigned (*get_head)(volatile void *half_channel); +}; + +int is_word_access_ch(unsigned ch_type); + +struct smd_half_channel_access *get_half_ch_funcs(unsigned ch_type); + +struct smem_ram_ptn { + char name[16]; + unsigned start; + unsigned size; + + /* RAM Partition attribute: READ_ONLY, READWRITE etc. */ + unsigned attr; + + /* RAM Partition category: EBI0, EBI1, IRAM, IMEM */ + unsigned category; + + /* RAM Partition domain: APPS, MODEM, APPS & MODEM (SHARED) etc. */ + unsigned domain; + + /* RAM Partition type: system, bootloader, appsboot, apps etc. 
*/ + unsigned type; + + /* reserved for future expansion without changing version number */ + unsigned reserved2, reserved3, reserved4, reserved5; +} __attribute__ ((__packed__)); + + +struct smem_ram_ptable { + #define _SMEM_RAM_PTABLE_MAGIC_1 0x9DA5E0A8 + #define _SMEM_RAM_PTABLE_MAGIC_2 0xAF9EC4E2 + unsigned magic[2]; + unsigned version; + unsigned reserved1; + unsigned len; + struct smem_ram_ptn parts[32]; + unsigned buf; +} __attribute__ ((__packed__)); + +/* SMEM RAM Partition */ +enum { + DEFAULT_ATTRB = ~0x0, + READ_ONLY = 0x0, + READWRITE, +}; + +enum { + DEFAULT_CATEGORY = ~0x0, + SMI = 0x0, + EBI1, + EBI2, + QDSP6, + IRAM, + IMEM, + EBI0_CS0, + EBI0_CS1, + EBI1_CS0, + EBI1_CS1, + SDRAM = 0xE, +}; + +enum { + DEFAULT_DOMAIN = 0x0, + APPS_DOMAIN, + MODEM_DOMAIN, + SHARED_DOMAIN, +}; + +enum { + SYS_MEMORY = 1, /* system memory*/ + BOOT_REGION_MEMORY1, /* boot loader memory 1*/ + BOOT_REGION_MEMORY2, /* boot loader memory 2,reserved*/ + APPSBL_MEMORY, /* apps boot loader memory*/ + APPS_MEMORY, /* apps usage memory*/ +}; + +extern spinlock_t smem_lock; + + +void smd_diag(void); + +struct interrupt_stat { + uint32_t smd_in_count; + uint32_t smd_out_hardcode_count; + uint32_t smd_out_config_count; + uint32_t smd_interrupt_id; + + uint32_t smsm_in_count; + uint32_t smsm_out_hardcode_count; + uint32_t smsm_out_config_count; + uint32_t smsm_interrupt_id; +}; +extern struct interrupt_stat interrupt_stats[NUM_SMD_SUBSYSTEMS]; + +#endif diff --git a/drivers/soc/qcom/spm.c b/drivers/soc/qcom/spm.c new file mode 100644 index 000000000000..e6fbed081b47 --- /dev/null +++ b/drivers/soc/qcom/spm.c @@ -0,0 +1,339 @@ +/* + * Copyright (c) 2011-2014, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. 
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/slab.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+#include <linux/cpuidle.h>
+#include <linux/cpu_pm.h>
+
+#include <asm/proc-fns.h>
+#include <asm/suspend.h>
+
+#include <soc/qcom/pm.h>
+#include <soc/qcom/scm.h>
+#include <soc/qcom/scm-boot.h>
+
+
+#define SCM_CMD_TERMINATE_PC	0x2
+#define SCM_FLUSH_FLAG_MASK	0x3
+#define SCM_L2_ON		0x0
+#define SCM_L2_OFF		0x1
+#define MAX_PMIC_DATA		2
+#define MAX_SEQ_DATA		64
+#define SPM_CTL_INDEX		0x7f
+#define SPM_CTL_INDEX_SHIFT	4
+#define SPM_CTL_EN		BIT(0)
+
+enum spm_reg {
+	SPM_REG_CFG,
+	SPM_REG_SPM_CTL,
+	SPM_REG_DLY,
+	SPM_REG_PMIC_DLY,
+	SPM_REG_PMIC_DATA_0,
+	SPM_REG_PMIC_DATA_1,
+	SPM_REG_VCTL,
+	SPM_REG_SEQ_ENTRY,
+	SPM_REG_SPM_STS,
+	SPM_REG_PMIC_STS,
+	SPM_REG_NR,
+};
+
+struct spm_reg_data {
+	const u8 *reg_offset;
+	u32 spm_cfg;
+	u32 spm_dly;
+	u32 pmic_dly;
+	u32 pmic_data[MAX_PMIC_DATA];
+	u8 seq[MAX_SEQ_DATA];
+	u8 start_index[PM_SLEEP_MODE_NR];
+};
+
+struct spm_driver_data {
+	void __iomem *reg_base;
+	const struct spm_reg_data *reg_data;
+};
+
+static const u8 spm_reg_offset_v2_1[SPM_REG_NR] = {
+	[SPM_REG_CFG]		= 0x08,
+	[SPM_REG_SPM_CTL]	= 0x30,
+	[SPM_REG_DLY]		= 0x34,
+	[SPM_REG_SEQ_ENTRY]	= 0x80,
+};
+
+/* SPM register data for 8974, 8084 */
+static const struct spm_reg_data spm_reg_8974_8084_cpu = {
+	.reg_offset = spm_reg_offset_v2_1,
+	.spm_cfg = 0x1,
+	.spm_dly = 0x3C102800,
+	.seq = { 0x03, 0x0B, 0x0F, 0x00, 0x20, 0x80, 0x10, 0xE8, 0x5B, 0x03,
+		0x3B, 0xE8, 0x5B, 0x82, 0x10, 0x0B, 0x30, 0x06, 0x26,
0x30, + 0x0F }, + .start_index[PM_SLEEP_MODE_STBY] = 0, + .start_index[PM_SLEEP_MODE_SPC] = 3, +}; + +static const u8 spm_reg_offset_v1_1[SPM_REG_NR] = { + [SPM_REG_CFG] = 0x08, + [SPM_REG_SPM_CTL] = 0x20, + [SPM_REG_PMIC_DLY] = 0x24, + [SPM_REG_PMIC_DATA_0] = 0x28, + [SPM_REG_PMIC_DATA_1] = 0x2C, + [SPM_REG_SEQ_ENTRY] = 0x80, +}; + +/* SPM register data for 8064 */ +static const struct spm_reg_data spm_reg_8064_cpu = { + .reg_offset = spm_reg_offset_v1_1, + .spm_cfg = 0x1F, + .pmic_dly = 0x02020004, + .pmic_data[0] = 0x0084009C, + .pmic_data[1] = 0x00A4001C, + .seq = { 0x03, 0x0F, 0x00, 0x24, 0x54, 0x10, 0x09, 0x03, 0x01, + 0x10, 0x54, 0x30, 0x0C, 0x24, 0x30, 0x0F }, + .start_index[PM_SLEEP_MODE_STBY] = 0, + .start_index[PM_SLEEP_MODE_SPC] = 2, +}; + +static DEFINE_PER_CPU_SHARED_ALIGNED(struct spm_driver_data *, cpu_spm_drv); + +static inline void spm_register_write(struct spm_driver_data *drv, + enum spm_reg reg, u32 val) +{ + if (drv->reg_data->reg_offset[reg]) + writel_relaxed(val, drv->reg_base + + drv->reg_data->reg_offset[reg]); +} + +/* Ensure a guaranteed write, before return */ +static inline void spm_register_write_sync(struct spm_driver_data *drv, + enum spm_reg reg, u32 val) +{ + u32 ret; + + if (!drv->reg_data->reg_offset[reg]) + return; + + do { + writel_relaxed(val, drv->reg_base + + drv->reg_data->reg_offset[reg]); + ret = readl_relaxed(drv->reg_base + + drv->reg_data->reg_offset[reg]); + if (ret == val) + break; + cpu_relax(); + } while (1); +} + +static inline u32 spm_register_read(struct spm_driver_data *drv, + enum spm_reg reg) +{ + return readl_relaxed(drv->reg_base + drv->reg_data->reg_offset[reg]); +} + +static void spm_set_low_power_mode(enum pm_sleep_mode mode) +{ + struct spm_driver_data *drv = per_cpu(cpu_spm_drv, + smp_processor_id()); + u32 start_index; + u32 ctl_val; + + start_index = drv->reg_data->start_index[mode]; + + ctl_val = spm_register_read(drv, SPM_REG_SPM_CTL); + ctl_val &= ~(SPM_CTL_INDEX << SPM_CTL_INDEX_SHIFT); + 
ctl_val |= start_index << SPM_CTL_INDEX_SHIFT;
+	ctl_val |= SPM_CTL_EN;
+	spm_register_write_sync(drv, SPM_REG_SPM_CTL, ctl_val);
+}
+
+static int qcom_pm_collapse(unsigned long unused)
+{
+	int ret;
+	u32 flag;
+	int cpu = smp_processor_id();
+
+	ret = scm_set_warm_boot_addr(cpu_resume, cpu);
+	if (ret)
+		return ret;
+
+	flag = SCM_L2_ON & SCM_FLUSH_FLAG_MASK;
+	ret = scm_call_atomic1(SCM_SVC_BOOT, SCM_CMD_TERMINATE_PC, flag);
+
+	/*
+	 * Returns here only if there was a pending interrupt and we did not
+	 * power down as a result.
+	 */
+	/* Hack: ignore scm call return values in the power down path */
+	return 0;
+}
+
+static int qcom_cpu_standby(void *unused)
+{
+	spm_set_low_power_mode(PM_SLEEP_MODE_STBY);
+	cpu_do_idle();
+
+	return 0;
+}
+
+static int qcom_cpu_spc(void *unused)
+{
+	int ret;
+
+	spm_set_low_power_mode(PM_SLEEP_MODE_SPC);
+	cpu_pm_enter();
+	ret = cpu_suspend(0, qcom_pm_collapse);
+	cpu_pm_exit();
+
+	return ret;
+}
+
+static struct spm_driver_data *spm_get_drv(struct platform_device *pdev,
+		int *spm_cpu)
+{
+	struct spm_driver_data *drv = NULL;
+	struct device_node *cpu_node, *saw_node;
+	int cpu;
+	bool found;
+
+	for_each_possible_cpu(cpu) {
+		cpu_node = of_cpu_device_node_get(cpu);
+		if (!cpu_node)
+			continue;
+		saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0);
+		found = (saw_node == pdev->dev.of_node);
+		of_node_put(saw_node);
+		of_node_put(cpu_node);
+		if (found)
+			break;
+	}
+
+	if (found) {
+		drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
+		if (drv)
+			*spm_cpu = cpu;
+	}
+
+	return drv;
+}
+
+static const struct of_device_id spm_match_table[] = {
+	{ .compatible = "qcom,msm8974-saw2-v2.1-cpu",
+	  .data = &spm_reg_8974_8084_cpu },
+	{ .compatible = "qcom,apq8084-saw2-v2.1-cpu",
+	  .data = &spm_reg_8974_8084_cpu },
+	{ .compatible = "qcom,apq8064-saw2-v1.1-cpu",
+	  .data = &spm_reg_8064_cpu },
+	{ },
+};
+
+static int spm_dev_probe(struct platform_device *pdev)
+{
+	struct spm_driver_data *drv;
+	struct resource *res;
+	
const struct of_device_id *match_id; + void __iomem *addr; + int cpu; + struct cpuidle_device *dev; + + drv = spm_get_drv(pdev, &cpu); + if (!drv) + return -EINVAL; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + drv->reg_base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(drv->reg_base)) + return PTR_ERR(drv->reg_base); + + match_id = of_match_node(spm_match_table, pdev->dev.of_node); + if (!match_id) + return -ENODEV; + + drv->reg_data = match_id->data; + + /* Write the SPM sequences first.. */ + addr = drv->reg_base + drv->reg_data->reg_offset[SPM_REG_SEQ_ENTRY]; + __iowrite32_copy(addr, drv->reg_data->seq, + ARRAY_SIZE(drv->reg_data->seq) / 4); + + /* + * ..and then the control registers. + * On some SoC if the control registers are written first and if the + * CPU was held in reset, the reset signal could trigger the SPM state + * machine, before the sequences are completely written. + */ + spm_register_write(drv, SPM_REG_CFG, drv->reg_data->spm_cfg); + spm_register_write(drv, SPM_REG_DLY, drv->reg_data->spm_dly); + spm_register_write(drv, SPM_REG_PMIC_DLY, drv->reg_data->pmic_dly); + spm_register_write(drv, SPM_REG_PMIC_DATA_0, + drv->reg_data->pmic_data[0]); + spm_register_write(drv, SPM_REG_PMIC_DATA_1, + drv->reg_data->pmic_data[1]); + + per_cpu(cpu_spm_drv, cpu) = drv; + + /* Register the cpuidle device for the cpu, we are ready for cpuidle */ + dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL); + if (!dev) + return -ENOMEM; + + dev->cpu = cpu; + return cpuidle_register_device(dev); +} + +static struct qcom_cpu_pm_ops lpm_ops = { + .standby = qcom_cpu_standby, + .spc = qcom_cpu_spc, +}; + +static struct platform_device qcom_cpuidle_drv = { + .name = "qcom_cpuidle", + .id = -1, + .dev.platform_data = &lpm_ops, +}; + +static struct platform_driver spm_driver = { + .probe = spm_dev_probe, + .driver = { + .name = "saw", + .of_match_table = spm_match_table, + }, +}; + +static int __init qcom_spm_init(void) +{ + int ret; + + /* + * 
The cpuidle driver needs to be registered before the cpuidle device + * for any cpu. Register the device for the cpuidle driver first. + */ + ret = platform_device_register(&qcom_cpuidle_drv); + if (ret) + return ret; + + return platform_driver_register(&spm_driver); +} +module_init(qcom_spm_init); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("SAW power controller driver"); +MODULE_ALIAS("platform:saw"); diff --git a/drivers/thermal/Kconfig b/drivers/thermal/Kconfig index f554d25b4399..e50de9e68bf8 100644 --- a/drivers/thermal/Kconfig +++ b/drivers/thermal/Kconfig @@ -249,6 +249,16 @@ config INT340X_THERMAL information to allow the user to select his laptop to run without turning on the fans. +config THERMAL_TSENS8960 + tristate "Qualcomm 8960 Tsens Temperature Alarm" + depends on THERMAL + help + This enables the thermal sysfs driver for the Tsens device. It shows + up in sysfs as a thermal zone with multiple trip points. Disabling the + thermal zone device via the mode file results in disabling the sensor. + Threshold temperatures can also be set for both hot and cold trips, + and the driver updates the zone when a threshold is reached. 
+ config ACPI_THERMAL_REL tristate depends on ACPI diff --git a/drivers/thermal/Makefile b/drivers/thermal/Makefile index 39c4fe87da2f..211f6ca89603 100644 --- a/drivers/thermal/Makefile +++ b/drivers/thermal/Makefile @@ -31,6 +31,7 @@ obj-$(CONFIG_DB8500_CPUFREQ_COOLING) += db8500_cpufreq_cooling.o obj-$(CONFIG_INTEL_POWERCLAMP) += intel_powerclamp.o obj-$(CONFIG_X86_PKG_TEMP_THERMAL) += x86_pkg_temp_thermal.o obj-$(CONFIG_INTEL_SOC_DTS_THERMAL) += intel_soc_dts_thermal.o +obj-$(CONFIG_THERMAL_TSENS8960) += msm8960_tsens.o obj-$(CONFIG_TI_SOC_THERMAL) += ti-soc-thermal/ obj-$(CONFIG_INT340X_THERMAL) += int340x_thermal/ obj-$(CONFIG_ST_THERMAL) += st/ diff --git a/drivers/thermal/msm8960_tsens.c b/drivers/thermal/msm8960_tsens.c new file mode 100644 index 000000000000..0d88fdd73210 --- /dev/null +++ b/drivers/thermal/msm8960_tsens.c @@ -0,0 +1,753 @@ +/* + * Copyright (c) 2015, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/mfd/syscon.h> +#include <linux/thermal.h> +#include <linux/interrupt.h> +#include <linux/delay.h> +#include <linux/slab.h> +#include <linux/io.h> +#include <linux/err.h> +#include <linux/pm.h> +#include <linux/regmap.h> +#include <linux/of.h> + +#define TSENS_CNTL_ADDR 0x3620 +#define TSENS_EN BIT(0) +#define TSENS_SW_RST BIT(1) +#define TSENS_ADC_CLK_SEL BIT(2) +#define SENSOR0_EN BIT(3) +#define TSENS_SENSOR0_SHIFT 3 +#define TSENS_MASK1 1 +#define SENSORS_EN_MASK(n) GENMASK((n + 2), 3) + +#define TSENS_TH_ADDR 0x3624 +#define TSENS_TH_MAX_LIMIT_SHIFT 24 +#define TSENS_TH_MIN_LIMIT_SHIFT 16 +#define TSENS_TH_UPPER_LIMIT_SHIFT 8 +#define TSENS_TH_LOWER_LIMIT_SHIFT 0 +#define TSENS_TH_MAX_LIMIT_MASK GENMASK(31, 24) +#define TSENS_TH_MIN_LIMIT_MASK GENMASK(23, 16) +#define TSENS_TH_UPPER_LIMIT_MASK GENMASK(15, 8) +#define TSENS_TH_LOWER_LIMIT_MASK GENMASK(7, 0) + +#define TSENS_S0_STATUS_ADDR 0x3628 +#define TSENS_STATUS_ADDR_OFFSET 2 + +#define TSENS_INT_STATUS_ADDR 0x363c +#define TSENS_LOWER_INT_MASK BIT(1) +#define TSENS_UPPER_INT_MASK BIT(2) +#define TSENS_MAX_INT_MASK BIT(3) +#define TSENS_TRDY_MASK BIT(7) + +#define TSENS_TH_MAX_CODE 0xff +#define TSENS_TH_MIN_CODE 0 +#define TSENS_TRDY_RDY_MIN_TIME 1000 +#define TSENS_TRDY_RDY_MAX_TIME 1100 + +#define TSENS_MEASURE_PERIOD 1 +/* Initial temperature threshold values */ +#define TSENS_LOWER_LIMIT_TH 0x50 +#define TSENS_UPPER_LIMIT_TH 0xdf +#define TSENS_MIN_LIMIT_TH 0x0 +#define TSENS_MAX_LIMIT_TH 0xff + +#define TSENS_MIN_STATUS_MASK(offset) BIT((offset)) +#define TSENS_LOWER_STATUS_CLR(offset) BIT((offset + 1)) +#define TSENS_UPPER_STATUS_CLR(offset) BIT((offset + 2)) +#define TSENS_MAX_STATUS_MASK(offset) BIT((offset + 3)) + +/* QFPROM addresses */ +#define TSENS_8960_QFPROM_ADDR0 0x0 +#define TSENS_8960_QFPROM_SPARE_OFFSET 0x10 +/* 8960 Specifics */ +#define TSENS_8960_CONFIG 0x9b +#define TSENS_8960_CONFIG_MASK 
GENMASK(7, 0) +#define TSENS_8960_CONFIG_ADDR 0x3640 +#define TSENS_8064_STATUS_CNTL 0x3660 +#define TSENS_8064_SEQ_SENSORS 5 +#define TSENS_8064_S4_S5_OFFSET 40 + +#define TSENS_CAL_MILLI_DEGC 30000 +#define TSENS_MAX_SENSORS 11 + +/* Trips: from very hot to very cold */ +enum tsens_trip_type { + TSENS_TRIP_STAGE3 = 0, + TSENS_TRIP_STAGE2, + TSENS_TRIP_STAGE1, + TSENS_TRIP_STAGE0, + TSENS_TRIP_NUM, +}; + +struct tsens_tm_device; + +struct tsens_tm_device_sensor { + struct thermal_zone_device *tzone; + enum thermal_device_mode mode; + struct tsens_tm_device *tmdev; + unsigned int sensor_num; + int offset; + uint32_t slope; + bool user_zone; +}; + +struct tsens_variant_data { + int slope[TSENS_MAX_SENSORS]; + u32 nsensors; + /* enable to tsens slp clock */ + u32 slp_clk_ena; + u32 sctrl_reg; + u32 sctrl_offset; + u32 config_reg; + u32 config_mask; + u32 config; + int (*calib_sensors)(struct tsens_tm_device *tmdev); +}; + +struct tsens_tm_device { + struct regmap *base; + struct regmap *qfprom_base; + //void __iomem *qfprom_base; + struct tsens_variant_data *data; + struct work_struct tsens_work; + int qfprom_offset; + int qfprom_size; + int nsensors; + int irq; + /* tsens ready bit for reading valid temperature */ + bool trdy; + struct tsens_tm_device_sensor sensor[0]; +}; + +/* Temperature on y axis and ADC-code on x-axis */ +static inline int tsens_code_to_degc(struct tsens_tm_device_sensor *s, + int adc_code) +{ + return adc_code * s->slope + s->offset; +} + +static inline int tsens_degc_to_code(struct tsens_tm_device_sensor *s, int degc) +{ + return (degc - s->offset + s->slope / 2) / s->slope; +} + +static int __tsens_get_temp(void *data, long *temp) +{ + struct tsens_tm_device_sensor *tm_sensor = data; + unsigned int code, offset = 0; + struct tsens_tm_device *tmdev = tm_sensor->tmdev; + int sensor_num = tm_sensor->sensor_num; + struct regmap *base = tmdev->base; + + if (!tmdev->trdy) { + u32 val; + + regmap_read(base, TSENS_INT_STATUS_ADDR, &val); + + while 
(!(val & TSENS_TRDY_MASK)) { + usleep_range(TSENS_TRDY_RDY_MIN_TIME, + TSENS_TRDY_RDY_MAX_TIME); + regmap_read(base, TSENS_INT_STATUS_ADDR, &val); + } + tmdev->trdy = true; + } + /* FIXME: sensors past S4 (8064) sit at an additional status offset */ + if (sensor_num >= TSENS_8064_SEQ_SENSORS) + offset = TSENS_8064_S4_S5_OFFSET; + + regmap_read(base, TSENS_S0_STATUS_ADDR + offset + + (sensor_num << TSENS_STATUS_ADDR_OFFSET), &code); + *temp = tsens_code_to_degc(tm_sensor, code); + + return 0; +} + +static int tsens_get_temp(struct thermal_zone_device *thermal, + unsigned long *temp) +{ + struct tsens_tm_device_sensor *tm_sensor = thermal->devdata; + + if (!tm_sensor || tm_sensor->mode != THERMAL_DEVICE_ENABLED) + return -EINVAL; + + return __tsens_get_temp(tm_sensor, temp); +} + +static int tsens_get_mode(struct thermal_zone_device *thermal, + enum thermal_device_mode *mode) +{ + struct tsens_tm_device_sensor *tm_sensor = thermal->devdata; + + if (tm_sensor) + *mode = tm_sensor->mode; + + return 0; +} + +/** + * Function to enable or disable a sensor. + * If the main sensor is disabled, all the sensors are disabled and + * the clock is disabled. + * If the main sensor is not enabled and a sub sensor is enabled, + * returns with an error stating the main sensor is not enabled. 
+ **/ + +static int tsens_set_mode(struct thermal_zone_device *thermal, + enum thermal_device_mode mode) +{ + struct tsens_tm_device_sensor *tm_sensor = thermal->devdata; + struct tsens_tm_device *tmdev = tm_sensor->tmdev; + struct tsens_variant_data *data = tmdev->data; + struct regmap *base = tmdev->base; + unsigned int reg, mask, i; + + if (!tm_sensor) + return -EINVAL; + + if (mode != tm_sensor->mode) { + regmap_read(base, TSENS_CNTL_ADDR, ®); + mask = BIT(tm_sensor->sensor_num + TSENS_SENSOR0_SHIFT); + if (mode == THERMAL_DEVICE_ENABLED) { + if ((mask != SENSOR0_EN) && !(reg & SENSOR0_EN)) { + pr_info("Main sensor not enabled\n"); + return -EINVAL; + } + regmap_write(base, TSENS_CNTL_ADDR, reg | TSENS_SW_RST); + reg |= mask | data->slp_clk_ena | TSENS_EN; + tmdev->trdy = false; + } else { + reg &= ~mask; + if (!(reg & SENSOR0_EN)) { + reg &= ~(SENSORS_EN_MASK(tmdev->nsensors) | + data->slp_clk_ena | + TSENS_EN); + + for (i = 1; i < tmdev->nsensors; i++) + tmdev->sensor[i].mode = mode; + + } + } + regmap_write(base, TSENS_CNTL_ADDR, reg); + } + tm_sensor->mode = mode; + + return 0; +} + +static int tsens_get_trip_type(struct thermal_zone_device *thermal, + int trip, enum thermal_trip_type *type) +{ + struct tsens_tm_device_sensor *tm_sensor = thermal->devdata; + + if (!tm_sensor || trip < 0 || !type) + return -EINVAL; + + switch (trip) { + case TSENS_TRIP_STAGE3: + *type = THERMAL_TRIP_CRITICAL; + break; + case TSENS_TRIP_STAGE2: + *type = THERMAL_TRIP_HOT; + break; + case TSENS_TRIP_STAGE1: + *type = THERMAL_TRIP_PASSIVE; + break; + case TSENS_TRIP_STAGE0: + *type = THERMAL_TRIP_ACTIVE; + break; + default: + return -EINVAL; + } + + return 0; +} + +static int tsens_get_trip_temp(struct thermal_zone_device *thermal, + int trip, unsigned long *temp) +{ + struct tsens_tm_device_sensor *tm_sensor = thermal->devdata; + struct tsens_tm_device *tmdev = tm_sensor->tmdev; + unsigned int reg; + + if (!tm_sensor || trip < 0 || !temp) + return -EINVAL; + + 
regmap_read(tmdev->base, TSENS_TH_ADDR, &reg); + switch (trip) { + case TSENS_TRIP_STAGE3: + reg = (reg & TSENS_TH_MAX_LIMIT_MASK) + >> TSENS_TH_MAX_LIMIT_SHIFT; + break; + case TSENS_TRIP_STAGE2: + reg = (reg & TSENS_TH_UPPER_LIMIT_MASK) + >> TSENS_TH_UPPER_LIMIT_SHIFT; + break; + case TSENS_TRIP_STAGE1: + reg = (reg & TSENS_TH_LOWER_LIMIT_MASK) + >> TSENS_TH_LOWER_LIMIT_SHIFT; + break; + case TSENS_TRIP_STAGE0: + reg = (reg & TSENS_TH_MIN_LIMIT_MASK) + >> TSENS_TH_MIN_LIMIT_SHIFT; + break; + default: + return -EINVAL; + } + + *temp = tsens_code_to_degc(tm_sensor, reg); + + return 0; +} + +static int tsens_set_trip_temp(struct thermal_zone_device *thermal, + int trip, unsigned long temp) +{ + struct tsens_tm_device_sensor *tm_sensor = thermal->devdata; + unsigned int reg_th, reg_cntl; + int code, hi_code, lo_code, code_err_chk; + struct tsens_tm_device *tmdev; + struct regmap *base; + struct tsens_variant_data *data; + u32 offset; + + if (!tm_sensor || trip < 0) + return -EINVAL; + + tmdev = tm_sensor->tmdev; + base = tmdev->base; + data = tmdev->data; + offset = data->sctrl_offset; + code_err_chk = code = tsens_degc_to_code(tm_sensor, temp); + + lo_code = TSENS_TH_MIN_CODE; + hi_code = TSENS_TH_MAX_CODE; + + regmap_read(base, data->sctrl_reg, &reg_cntl); + + regmap_read(base, TSENS_TH_ADDR, &reg_th); + switch (trip) { + case TSENS_TRIP_STAGE3: + code <<= TSENS_TH_MAX_LIMIT_SHIFT; + reg_th &= ~TSENS_TH_MAX_LIMIT_MASK; + + if (!(reg_cntl & TSENS_UPPER_STATUS_CLR(offset))) + lo_code = (reg_th & TSENS_TH_UPPER_LIMIT_MASK) + >> TSENS_TH_UPPER_LIMIT_SHIFT; + else if (!(reg_cntl & TSENS_LOWER_STATUS_CLR(offset))) + lo_code = (reg_th & TSENS_TH_LOWER_LIMIT_MASK) + >> TSENS_TH_LOWER_LIMIT_SHIFT; + else if (!(reg_cntl & TSENS_MIN_STATUS_MASK(offset))) + lo_code = (reg_th & TSENS_TH_MIN_LIMIT_MASK) + >> TSENS_TH_MIN_LIMIT_SHIFT; + break; + case TSENS_TRIP_STAGE2: + code <<= TSENS_TH_UPPER_LIMIT_SHIFT; + reg_th &= ~TSENS_TH_UPPER_LIMIT_MASK; + + if (!(reg_cntl & TSENS_MAX_STATUS_MASK(offset))) + hi_code = (reg_th & 
TSENS_TH_MAX_LIMIT_MASK) + >> TSENS_TH_MAX_LIMIT_SHIFT; + if (!(reg_cntl & TSENS_LOWER_STATUS_CLR(offset))) + lo_code = (reg_th & TSENS_TH_LOWER_LIMIT_MASK) + >> TSENS_TH_LOWER_LIMIT_SHIFT; + else if (!(reg_cntl & TSENS_MIN_STATUS_MASK(offset))) + lo_code = (reg_th & TSENS_TH_MIN_LIMIT_MASK) + >> TSENS_TH_MIN_LIMIT_SHIFT; + break; + case TSENS_TRIP_STAGE1: + code <<= TSENS_TH_LOWER_LIMIT_SHIFT; + reg_th &= ~TSENS_TH_LOWER_LIMIT_MASK; + + if (!(reg_cntl & TSENS_MIN_STATUS_MASK(offset))) + lo_code = (reg_th & TSENS_TH_MIN_LIMIT_MASK) + >> TSENS_TH_MIN_LIMIT_SHIFT; + if (!(reg_cntl & TSENS_UPPER_STATUS_CLR(offset))) + hi_code = (reg_th & TSENS_TH_UPPER_LIMIT_MASK) + >> TSENS_TH_UPPER_LIMIT_SHIFT; + else if (!(reg_cntl & TSENS_MAX_STATUS_MASK(offset))) + hi_code = (reg_th & TSENS_TH_MAX_LIMIT_MASK) + >> TSENS_TH_MAX_LIMIT_SHIFT; + break; + case TSENS_TRIP_STAGE0: + code <<= TSENS_TH_MIN_LIMIT_SHIFT; + reg_th &= ~TSENS_TH_MIN_LIMIT_MASK; + + if (!(reg_cntl & TSENS_LOWER_STATUS_CLR(offset))) + hi_code = (reg_th & TSENS_TH_LOWER_LIMIT_MASK) + >> TSENS_TH_LOWER_LIMIT_SHIFT; + else if (!(reg_cntl & TSENS_UPPER_STATUS_CLR(offset))) + hi_code = (reg_th & TSENS_TH_UPPER_LIMIT_MASK) + >> TSENS_TH_UPPER_LIMIT_SHIFT; + else if (!(reg_cntl & TSENS_MAX_STATUS_MASK(offset))) + hi_code = (reg_th & TSENS_TH_MAX_LIMIT_MASK) + >> TSENS_TH_MAX_LIMIT_SHIFT; + break; + default: + return -EINVAL; + } + + if (code_err_chk < lo_code || code_err_chk > hi_code) + return -EINVAL; + + regmap_write(base, TSENS_TH_ADDR, reg_th | code); + + return 0; +} + +static int tsens_get_crit_temp(struct thermal_zone_device *thermal, + unsigned long *temp) +{ + return tsens_get_trip_temp(thermal, TSENS_TRIP_STAGE3, temp); +} + +static struct thermal_zone_device_ops tsens_thermal_zone_ops = { + .get_temp = tsens_get_temp, + .get_mode = tsens_get_mode, + .set_mode = tsens_set_mode, + .get_trip_type = tsens_get_trip_type, + .get_trip_temp = tsens_get_trip_temp, + .set_trip_temp = tsens_set_trip_temp, + 
.get_crit_temp = tsens_get_crit_temp, +}; + +static void tsens_scheduler_fn(struct work_struct *work) +{ + struct tsens_tm_device *tmdev = container_of(work, + struct tsens_tm_device, tsens_work); + unsigned int threshold, threshold_low, i, code, reg, sensor, mask; + unsigned int sensor_addr; + bool upper_th_x, lower_th_x; + struct regmap *base = tmdev->base; + struct tsens_variant_data *data = tmdev->data; + u32 offset = data->sctrl_offset; + + regmap_read(base, data->sctrl_reg, ®); + regmap_write(base, data->sctrl_reg, reg | + TSENS_LOWER_STATUS_CLR(offset) | + TSENS_UPPER_STATUS_CLR(offset)); + + mask = ~(TSENS_LOWER_STATUS_CLR(offset) | TSENS_UPPER_STATUS_CLR(offset)); + regmap_read(tmdev->base, TSENS_TH_ADDR, &threshold); + threshold_low = (threshold & TSENS_TH_LOWER_LIMIT_MASK) + >> TSENS_TH_LOWER_LIMIT_SHIFT; + threshold = (threshold & TSENS_TH_UPPER_LIMIT_MASK) + >> TSENS_TH_UPPER_LIMIT_SHIFT; + + regmap_read(base, TSENS_CNTL_ADDR, &sensor); + sensor &= SENSORS_EN_MASK(tmdev->nsensors); + sensor >>= TSENS_SENSOR0_SHIFT; + sensor_addr = TSENS_S0_STATUS_ADDR; + for (i = 0; i < tmdev->nsensors; i++) { + if (i == TSENS_8064_SEQ_SENSORS) + sensor_addr += TSENS_8064_S4_S5_OFFSET; + if (sensor & TSENS_MASK1) { + regmap_read(base, sensor_addr, &code); + upper_th_x = code >= threshold; + lower_th_x = code <= threshold_low; + if (upper_th_x) + mask |= TSENS_UPPER_STATUS_CLR(offset); + if (lower_th_x) + mask |= TSENS_LOWER_STATUS_CLR(offset); + if (upper_th_x || lower_th_x) + thermal_zone_device_update(tmdev->sensor[i].tzone); + } + sensor >>= 1; + sensor_addr += 4; + } + regmap_read(base, data->sctrl_reg, ®); + regmap_write(base, data->sctrl_reg, reg & mask); +} + +static irqreturn_t tsens_isr(int irq, void *dev) +{ + struct tsens_tm_device *tmdev = dev; + + schedule_work(&tmdev->tsens_work); + + return IRQ_HANDLED; +} + +static void tsens_disable_mode(struct tsens_tm_device *tmdev) +{ + unsigned int reg_cntl = 0; + struct regmap *base = tmdev->base; + struct 
tsens_variant_data *data = tmdev->data; + + regmap_read(base, TSENS_CNTL_ADDR, &reg_cntl); + regmap_write(base, TSENS_CNTL_ADDR, reg_cntl & + ~(SENSORS_EN_MASK(tmdev->nsensors) | data->slp_clk_ena + | TSENS_EN)); +} + +static void tsens_hw_init(struct tsens_tm_device *tmdev) +{ + unsigned int reg_cntl = 0, reg_cfg = 0, reg_thr = 0; + unsigned int reg_status_cntl = 0; + struct regmap *base = tmdev->base; + struct tsens_variant_data *data = tmdev->data; + u32 offset = data->sctrl_offset; + + regmap_read(base, TSENS_CNTL_ADDR, &reg_cntl); + regmap_write(base, TSENS_CNTL_ADDR, reg_cntl | TSENS_SW_RST); + + reg_cntl |= data->slp_clk_ena | + (TSENS_MEASURE_PERIOD << 18) | + SENSORS_EN_MASK(tmdev->nsensors); + regmap_write(base, TSENS_CNTL_ADDR, reg_cntl); + + regmap_read(base, data->sctrl_reg, &reg_status_cntl); + reg_status_cntl |= TSENS_LOWER_STATUS_CLR(offset) | + TSENS_UPPER_STATUS_CLR(offset) | + TSENS_MIN_STATUS_MASK(offset) | + TSENS_MAX_STATUS_MASK(offset); + + regmap_write(base, data->sctrl_reg, reg_status_cntl); + + reg_cntl |= TSENS_EN; + regmap_write(base, TSENS_CNTL_ADDR, reg_cntl); + + regmap_read(base, data->config_reg, &reg_cfg); + reg_cfg = (reg_cfg & ~data->config_mask) | data->config; + regmap_write(base, data->config_reg, reg_cfg); + + reg_thr |= (TSENS_LOWER_LIMIT_TH << TSENS_TH_LOWER_LIMIT_SHIFT) | + (TSENS_UPPER_LIMIT_TH << TSENS_TH_UPPER_LIMIT_SHIFT) | + (TSENS_MIN_LIMIT_TH << TSENS_TH_MIN_LIMIT_SHIFT) | + (TSENS_MAX_LIMIT_TH << TSENS_TH_MAX_LIMIT_SHIFT); + regmap_write(base, TSENS_TH_ADDR, reg_thr); +} + +static int tsens_8960_calib_sensors(struct tsens_tm_device *tmdev) +{ + int rc, i; + uint8_t calib_data, calib_data_backup; + int sz = round_up(tmdev->qfprom_size, sizeof(int)); + uint8_t *cdata = kzalloc(sz, GFP_KERNEL); + + if (!cdata) + return -ENOMEM; + + rc = regmap_bulk_read(tmdev->qfprom_base, tmdev->qfprom_offset, cdata, sz / sizeof(int)); + if (rc < 0) { + kfree(cdata); + return rc; + } + for (i = 0; i < tmdev->nsensors; i++) { + calib_data = cdata[i]; + calib_data_backup = 
cdata[TSENS_8960_QFPROM_SPARE_OFFSET + i]; + if (calib_data_backup) + calib_data = calib_data_backup; + + if (!calib_data) { + pr_err("QFPROM TSENS calibration data not present\n"); + kfree(cdata); + return -ENODEV; + } + tmdev->sensor[i].offset = TSENS_CAL_MILLI_DEGC - (calib_data * tmdev->sensor[i].slope); + tmdev->trdy = false; + } + kfree(cdata); + return 0; +} + +static int tsens_calib_sensors(struct tsens_tm_device *tmdev) +{ + if (tmdev->data->calib_sensors) + return tmdev->data->calib_sensors(tmdev); + + return -ENODEV; +} + +struct tsens_variant_data msm8960_data = +{ + /* + * The slope for a thermocouple in a given part is always the same + * over the desired range of temperature measurements + */ + .slope = {910, 910, 910, 910, 910}, + .nsensors = 5, + .slp_clk_ena = BIT(26), + .sctrl_reg = TSENS_CNTL_ADDR, + .sctrl_offset = 8, + .config_reg = TSENS_8960_CONFIG_ADDR, + .config_mask = TSENS_8960_CONFIG_MASK, + .config = TSENS_8960_CONFIG, + .calib_sensors = tsens_8960_calib_sensors, +}; + +struct tsens_variant_data apq8064_data = { + /* + * The slope for a thermocouple in a given part is always the same + * over the desired range of temperature measurements + */ + .slope = {1176, 1176, 1154, 1176, 1111, 1132, + 1132, 1199, 1132, 1199, 1132}, + .nsensors = 11, + .slp_clk_ena = BIT(26), + .sctrl_reg = TSENS_8064_STATUS_CNTL, + .sctrl_offset = 0, + .config_reg = TSENS_8960_CONFIG_ADDR, + .config_mask = TSENS_8960_CONFIG_MASK, + .config = TSENS_8960_CONFIG, + .calib_sensors = tsens_8960_calib_sensors, +}; + +static const struct of_device_id qcom_tsens_of_match[] = { + { .compatible = "qcom,msm8960-tsens", .data = &msm8960_data }, + { .compatible = "qcom,apq8064-tsens", .data = &apq8064_data }, + { }, +}; +MODULE_DEVICE_TABLE(of, qcom_tsens_of_match); + +static int tsens_tm_probe(struct platform_device *pdev) +{ + int rc, i; + struct device *dev = &pdev->dev; + struct tsens_tm_device *tmdev; + const struct tsens_variant_data *data; + struct device_node *np = pdev->dev.of_node; + + if (!np) { + dev_err(dev, 
"Non-DT probing is not supported\n"); + return -EINVAL; + } + + data = of_match_node(qcom_tsens_of_match, np)->data; + tmdev = devm_kzalloc(dev, sizeof(struct tsens_tm_device) + + data->nsensors * + sizeof(struct tsens_tm_device_sensor), + GFP_KERNEL); + if (tmdev == NULL) { + dev_err(dev, "devm_kzalloc() failed\n"); + return -ENOMEM; + } + + tmdev->qfprom_base = syscon_regmap_lookup_by_phandle(np, "qcom,calib-data"); + if (IS_ERR(tmdev->qfprom_base)) + return PTR_ERR(tmdev->qfprom_base); + + rc = of_property_read_u32_index(np, "qcom,calib-data", 1, &tmdev->qfprom_offset); + if (rc) { + dev_err(dev, "Could not read qfprom_offset from qcom,calib-data!\n"); + return -EINVAL; + } + + rc = of_property_read_u32_index(np, "qcom,calib-data", 2, &tmdev->qfprom_size); + if (rc) { + dev_err(dev, "Could not read qfprom_size from qcom,calib-data!\n"); + return -EINVAL; + } + + for (i = 0; i < data->nsensors; i++) + tmdev->sensor[i].slope = data->slope[i]; + + tmdev->data = (struct tsens_variant_data *) data; + tmdev->nsensors = data->nsensors; + tmdev->base = dev_get_regmap(dev->parent, NULL); + + if (!tmdev->base) { + dev_err(&pdev->dev, "Parent regmap unavailable.\n"); + return -ENXIO; + } + + rc = tsens_calib_sensors(tmdev); + if (rc < 0) + return rc; + + tsens_hw_init(tmdev); + + for (i = 0; i < tmdev->nsensors; i++) { + char name[18]; + snprintf(name, sizeof(name), "tsens_sensor%d", i); + tmdev->sensor[i].mode = THERMAL_DEVICE_ENABLED; + tmdev->sensor[i].sensor_num = i; + tmdev->sensor[i].tmdev = tmdev; + + tmdev->sensor[i].tzone = thermal_zone_of_sensor_register(dev, i, + &tmdev->sensor[i], __tsens_get_temp, NULL); + + if (IS_ERR(tmdev->sensor[i].tzone)) { + tmdev->sensor[i].tzone = thermal_zone_device_register( + name, + TSENS_TRIP_NUM, + GENMASK(TSENS_TRIP_NUM - 1, 0), + &tmdev->sensor[i], + &tsens_thermal_zone_ops, NULL, 0, 0); + if (IS_ERR(tmdev->sensor[i].tzone)) { + dev_err(dev, "thermal_zone_device_register() failed.\n"); + rc = -ENODEV; + goto fail; + } + 
tmdev->sensor[i].user_zone = true; + } + } + + tmdev->irq = platform_get_irq_byname(pdev, "tsens-ul"); + if (tmdev->irq > 0) { + rc = devm_request_irq(dev, tmdev->irq, tsens_isr, + IRQF_TRIGGER_RISING, + "tsens_interrupt", tmdev); + if (rc < 0) { + pr_err("%s: request_irq FAIL: %d\n", __func__, rc); + for (i = 0; i < tmdev->nsensors; i++) + if (tmdev->sensor[i].user_zone) + thermal_zone_device_unregister( + tmdev->sensor[i].tzone); + else + thermal_zone_of_sensor_unregister(dev, + tmdev->sensor[i].tzone); + goto fail; + } + INIT_WORK(&tmdev->tsens_work, tsens_scheduler_fn); + } + platform_set_drvdata(pdev, tmdev); + + dev_info(dev, "Probed successfully\n"); + + return 0; +fail: + tsens_disable_mode(tmdev); + + return rc; +} + +static int tsens_tm_remove(struct platform_device *pdev) +{ + int i; + struct tsens_tm_device *tmdev = platform_get_drvdata(pdev); + struct tsens_tm_device_sensor *s; + + tsens_disable_mode(tmdev); + + for (i = 0; i < tmdev->nsensors; i++) { + s = &tmdev->sensor[i]; + if (s->user_zone) + thermal_zone_device_unregister(s->tzone); + else + thermal_zone_of_sensor_unregister(&pdev->dev, + s->tzone); + } + + return 0; +} + +static struct platform_driver tsens_tm_driver = { + .probe = tsens_tm_probe, + .remove = tsens_tm_remove, + .driver = { + .name = "tsens8960-tm", + .owner = THIS_MODULE, + .of_match_table = qcom_tsens_of_match, + }, +}; + +module_platform_driver(tsens_tm_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("MSM8960 Temperature Sensor driver"); +MODULE_VERSION("1.0"); +MODULE_ALIAS("platform:tsens8960-tm"); diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig index 649b784081c7..d72d916147f0 100644 --- a/drivers/tty/serial/Kconfig +++ b/drivers/tty/serial/Kconfig @@ -1074,7 +1074,7 @@ config SERIAL_MSM_CONSOLE config SERIAL_MSM_HS tristate "MSM UART High Speed: Serial Driver" - depends on ARCH_MSM7X00A || ARCH_MSM7X30 || ARCH_QSD8X50 + depends on ARCH_QCOM select SERIAL_CORE help If you have a machine based 
on MSM family of SoCs, you diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c index 4b6c78331a64..dbc278d6a456 100644 --- a/drivers/tty/serial/msm_serial.c +++ b/drivers/tty/serial/msm_serial.c @@ -1044,17 +1044,22 @@ static int msm_serial_probe(struct platform_device *pdev) struct resource *resource; struct uart_port *port; const struct of_device_id *id; - int irq; + int irq, line; - if (pdev->id == -1) - pdev->id = atomic_inc_return(&msm_uart_next_id) - 1; + if (pdev->dev.of_node) + line = of_alias_get_id(pdev->dev.of_node, "serial"); + else + line = pdev->id; + + if (line < 0) + line = atomic_inc_return(&msm_uart_next_id) - 1; - if (unlikely(pdev->id < 0 || pdev->id >= UART_NR)) + if (unlikely(line < 0 || line >= UART_NR)) return -ENXIO; - dev_info(&pdev->dev, "msm_serial: detected port #%d\n", pdev->id); + dev_info(&pdev->dev, "msm_serial: detected port #%d\n", line); - port = get_port_from_line(pdev->id); + port = get_port_from_line(line); port->dev = &pdev->dev; msm_port = UART_TO_MSM(port); diff --git a/drivers/tty/serial/msm_serial_hs.c b/drivers/tty/serial/msm_serial_hs.c index 48e94961a9e5..6c0f56abb16a 100644 --- a/drivers/tty/serial/msm_serial_hs.c +++ b/drivers/tty/serial/msm_serial_hs.c @@ -1,10 +1,14 @@ -/* - * MSM 7k/8k High speed uart driver +/* drivers/serial/msm_serial_hs.c + * + * MSM 7k High speed uart driver * - * Copyright (c) 2007-2011, Code Aurora Forum. All rights reserved. * Copyright (c) 2008 Google Inc. + * Copyright (c) 2007-2013, The Linux Foundation. All rights reserved. * Modified: Nick Pelly <npelly@google.com> * + * All source code in this file is licensed under the following license + * except where indicated. + * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * version 2 as published by the Free Software Foundation. 
@@ -30,8 +34,6 @@ #include <linux/serial.h> #include <linux/serial_core.h> -#include <linux/tty.h> -#include <linux/tty_flip.h> #include <linux/slab.h> #include <linux/init.h> #include <linux/interrupt.h> @@ -40,168 +42,31 @@ #include <linux/ioport.h> #include <linux/kernel.h> #include <linux/timer.h> +#include <linux/delay.h> #include <linux/clk.h> #include <linux/platform_device.h> #include <linux/pm_runtime.h> #include <linux/dma-mapping.h> #include <linux/dmapool.h> +#include <linux/tty_flip.h> #include <linux/wait.h> -#include <linux/workqueue.h> - -#include <linux/atomic.h> +#include <linux/sysfs.h> +#include <linux/stat.h> +#include <linux/device.h> +#include <linux/debugfs.h> +#include <linux/gpio.h> +#include <asm/atomic.h> #include <asm/irq.h> -#include <mach/hardware.h> -#include <mach/dma.h> -#include <linux/platform_data/msm_serial_hs.h> - -/* HSUART Registers */ -#define UARTDM_MR1_ADDR 0x0 -#define UARTDM_MR2_ADDR 0x4 - -/* Data Mover result codes */ -#define RSLT_FIFO_CNTR_BMSK (0xE << 28) -#define RSLT_VLD BIT(1) - -/* write only register */ -#define UARTDM_CSR_ADDR 0x8 -#define UARTDM_CSR_115200 0xFF -#define UARTDM_CSR_57600 0xEE -#define UARTDM_CSR_38400 0xDD -#define UARTDM_CSR_28800 0xCC -#define UARTDM_CSR_19200 0xBB -#define UARTDM_CSR_14400 0xAA -#define UARTDM_CSR_9600 0x99 -#define UARTDM_CSR_7200 0x88 -#define UARTDM_CSR_4800 0x77 -#define UARTDM_CSR_3600 0x66 -#define UARTDM_CSR_2400 0x55 -#define UARTDM_CSR_1200 0x44 -#define UARTDM_CSR_600 0x33 -#define UARTDM_CSR_300 0x22 -#define UARTDM_CSR_150 0x11 -#define UARTDM_CSR_75 0x00 - -/* write only register */ -#define UARTDM_TF_ADDR 0x70 -#define UARTDM_TF2_ADDR 0x74 -#define UARTDM_TF3_ADDR 0x78 -#define UARTDM_TF4_ADDR 0x7C - -/* write only register */ -#define UARTDM_CR_ADDR 0x10 -#define UARTDM_IMR_ADDR 0x14 - -#define UARTDM_IPR_ADDR 0x18 -#define UARTDM_TFWR_ADDR 0x1c -#define UARTDM_RFWR_ADDR 0x20 -#define UARTDM_HCR_ADDR 0x24 -#define UARTDM_DMRX_ADDR 0x34 -#define 
UARTDM_IRDA_ADDR 0x38 -#define UARTDM_DMEN_ADDR 0x3c - -/* UART_DM_NO_CHARS_FOR_TX */ -#define UARTDM_NCF_TX_ADDR 0x40 - -#define UARTDM_BADR_ADDR 0x44 - -#define UARTDM_SIM_CFG_ADDR 0x80 -/* Read Only register */ -#define UARTDM_SR_ADDR 0x8 - -/* Read Only register */ -#define UARTDM_RF_ADDR 0x70 -#define UARTDM_RF2_ADDR 0x74 -#define UARTDM_RF3_ADDR 0x78 -#define UARTDM_RF4_ADDR 0x7C - -/* Read Only register */ -#define UARTDM_MISR_ADDR 0x10 - -/* Read Only register */ -#define UARTDM_ISR_ADDR 0x14 -#define UARTDM_RX_TOTAL_SNAP_ADDR 0x38 - -#define UARTDM_RXFS_ADDR 0x50 - -/* Register field Mask Mapping */ -#define UARTDM_SR_PAR_FRAME_BMSK BIT(5) -#define UARTDM_SR_OVERRUN_BMSK BIT(4) -#define UARTDM_SR_TXEMT_BMSK BIT(3) -#define UARTDM_SR_TXRDY_BMSK BIT(2) -#define UARTDM_SR_RXRDY_BMSK BIT(0) - -#define UARTDM_CR_TX_DISABLE_BMSK BIT(3) -#define UARTDM_CR_RX_DISABLE_BMSK BIT(1) -#define UARTDM_CR_TX_EN_BMSK BIT(2) -#define UARTDM_CR_RX_EN_BMSK BIT(0) - -/* UARTDM_CR channel_comman bit value (register field is bits 8:4) */ -#define RESET_RX 0x10 -#define RESET_TX 0x20 -#define RESET_ERROR_STATUS 0x30 -#define RESET_BREAK_INT 0x40 -#define START_BREAK 0x50 -#define STOP_BREAK 0x60 -#define RESET_CTS 0x70 -#define RESET_STALE_INT 0x80 -#define RFR_LOW 0xD0 -#define RFR_HIGH 0xE0 -#define CR_PROTECTION_EN 0x100 -#define STALE_EVENT_ENABLE 0x500 -#define STALE_EVENT_DISABLE 0x600 -#define FORCE_STALE_EVENT 0x400 -#define CLEAR_TX_READY 0x300 -#define RESET_TX_ERROR 0x800 -#define RESET_TX_DONE 0x810 - -#define UARTDM_MR1_AUTO_RFR_LEVEL1_BMSK 0xffffff00 -#define UARTDM_MR1_AUTO_RFR_LEVEL0_BMSK 0x3f -#define UARTDM_MR1_CTS_CTL_BMSK 0x40 -#define UARTDM_MR1_RX_RDY_CTL_BMSK 0x80 - -#define UARTDM_MR2_ERROR_MODE_BMSK 0x40 -#define UARTDM_MR2_BITS_PER_CHAR_BMSK 0x30 - -/* bits per character configuration */ -#define FIVE_BPC (0 << 4) -#define SIX_BPC (1 << 4) -#define SEVEN_BPC (2 << 4) -#define EIGHT_BPC (3 << 4) - -#define UARTDM_MR2_STOP_BIT_LEN_BMSK 0xc -#define 
STOP_BIT_ONE (1 << 2) -#define STOP_BIT_TWO (3 << 2) - -#define UARTDM_MR2_PARITY_MODE_BMSK 0x3 - -/* Parity configuration */ -#define NO_PARITY 0x0 -#define EVEN_PARITY 0x1 -#define ODD_PARITY 0x2 -#define SPACE_PARITY 0x3 - -#define UARTDM_IPR_STALE_TIMEOUT_MSB_BMSK 0xffffff80 -#define UARTDM_IPR_STALE_LSB_BMSK 0x1f - -/* These can be used for both ISR and IMR register */ -#define UARTDM_ISR_TX_READY_BMSK BIT(7) -#define UARTDM_ISR_CURRENT_CTS_BMSK BIT(6) -#define UARTDM_ISR_DELTA_CTS_BMSK BIT(5) -#define UARTDM_ISR_RXLEV_BMSK BIT(4) -#define UARTDM_ISR_RXSTALE_BMSK BIT(3) -#define UARTDM_ISR_RXBREAK_BMSK BIT(2) -#define UARTDM_ISR_RXHUNT_BMSK BIT(1) -#define UARTDM_ISR_TXLEV_BMSK BIT(0) - -/* Field definitions for UART_DM_DMEN*/ -#define UARTDM_TX_DM_EN_BMSK 0x1 -#define UARTDM_RX_DM_EN_BMSK 0x2 - -#define UART_FIFOSIZE 64 -#define UARTCLK 7372800 - -/* Rx DMA request states */ +#include <soc/qcom/msm_dma.h> + +#include "msm_serial_hs.h" +#include "msm_serial_hs_hwreg.h" + +static int hs_serial_debug_mask = 1; +module_param_named(debug_mask, hs_serial_debug_mask, + int, S_IRUGO | S_IWUSR | S_IWGRP); + enum flush_reason { FLUSH_NONE, FLUSH_DATA_READY, @@ -211,7 +76,6 @@ enum flush_reason { FLUSH_SHUTDOWN, }; -/* UART clock states */ enum msm_hs_clk_states_e { MSM_HS_CLK_PORT_OFF, /* port not in use */ MSM_HS_CLK_OFF, /* clock disabled */ @@ -228,27 +92,11 @@ enum msm_hs_clk_req_off_state_e { CLK_REQ_OFF_RXSTALE_FLUSHED, }; -/** - * struct msm_hs_tx - * @tx_ready_int_en: ok to dma more tx? 
- * @dma_in_flight: tx dma in progress - * @xfer: top level DMA command pointer structure - * @command_ptr: third level command struct pointer - * @command_ptr_ptr: second level command list struct pointer - * @mapped_cmd_ptr: DMA view of third level command struct - * @mapped_cmd_ptr_ptr: DMA view of second level command list struct - * @tx_count: number of bytes to transfer in DMA transfer - * @dma_base: DMA view of UART xmit buffer - * - * This structure describes a single Tx DMA transaction. MSM DMA - * commands have two levels of indirection. The top level command - * ptr points to a list of command ptr which in turn points to a - * single DMA 'command'. In our case each Tx transaction consists - * of a single second level pointer pointing to a 'box type' command. - */ struct msm_hs_tx { - unsigned int tx_ready_int_en; - unsigned int dma_in_flight; + unsigned int tx_ready_int_en; /* ok to dma more tx */ + unsigned int dma_in_flight; /* tx dma in progress */ + enum flush_reason flush; + wait_queue_head_t wait; struct msm_dmov_cmd xfer; dmov_box *command_ptr; u32 *command_ptr_ptr; @@ -256,25 +104,9 @@ struct msm_hs_tx { dma_addr_t mapped_cmd_ptr_ptr; int tx_count; dma_addr_t dma_base; + struct tasklet_struct tlet; }; -/** - * struct msm_hs_rx - * @flush: Rx DMA request state - * @xfer: top level DMA command pointer structure - * @cmdptr_dmaaddr: DMA view of second level command structure - * @command_ptr: third level DMA command pointer structure - * @command_ptr_ptr: second level DMA command list pointer - * @mapped_cmd_ptr: DMA view of the third level command structure - * @wait: wait for DMA completion before shutdown - * @buffer: destination buffer for RX DMA - * @rbuffer: DMA view of buffer - * @pool: dma pool out of which coherent rx buffer is allocated - * @tty_work: private work-queue for tty flip buffer push task - * - * This structure describes a single Rx DMA transaction. Rx DMA - * transactions use box mode DMA commands. 
- */ struct msm_hs_rx { enum flush_reason flush; struct msm_dmov_cmd xfer; @@ -285,121 +117,286 @@ struct msm_hs_rx { wait_queue_head_t wait; dma_addr_t rbuffer; unsigned char *buffer; + unsigned int buffer_pending; struct dma_pool *pool; - struct work_struct tty_work; + struct delayed_work flip_insert_work; + struct tasklet_struct tlet; }; -/** - * struct msm_hs_rx_wakeup - * @irq: IRQ line to be configured as interrupt source on Rx activity - * @ignore: boolean value. 1 = ignore the wakeup interrupt - * @rx_to_inject: extra character to be inserted to Rx tty on wakeup - * @inject_rx: 1 = insert rx_to_inject. 0 = do not insert extra character - * - * This is an optional structure required for UART Rx GPIO IRQ based - * wakeup from low power state. UART wakeup can be triggered by RX activity - * (using a wakeup GPIO on the UART RX pin). This should only be used if - * there is not a wakeup GPIO on the UART CTS, and the first RX byte is - * known (eg., with the Bluetooth Texas Instruments HCILL protocol), - * since the first RX byte will always be lost. RTS will be asserted even - * while the UART is clocked off in this mode of operation. 
- */ -struct msm_hs_rx_wakeup { +enum buffer_states { + NONE_PENDING = 0x0, + FIFO_OVERRUN = 0x1, + PARITY_ERROR = 0x2, + CHARS_NORMAL = 0x4, +}; + +/* optional low power wakeup, typically on a GPIO RX irq */ +struct msm_hs_wakeup { int irq; /* < 0 indicates low power wakeup disabled */ - unsigned char ignore; + unsigned char ignore; /* bool */ + + /* bool: inject char into rx tty on wakeup */ unsigned char inject_rx; char rx_to_inject; }; -/** - * struct msm_hs_port - * @uport: embedded uart port structure - * @imr_reg: shadow value of UARTDM_IMR - * @clk: uart input clock handle - * @tx: Tx transaction related data structure - * @rx: Rx transaction related data structure - * @dma_tx_channel: Tx DMA command channel - * @dma_rx_channel Rx DMA command channel - * @dma_tx_crci: Tx channel rate control interface number - * @dma_rx_crci: Rx channel rate control interface number - * @clk_off_timer: Timer to poll DMA event completion before clock off - * @clk_off_delay: clk_off_timer poll interval - * @clk_state: overall clock state - * @clk_req_off_state: post flush clock states - * @rx_wakeup: optional rx_wakeup feature related data - * @exit_lpm_cb: optional callback to exit low power mode - * - * Low level serial port structure. +/* + * UART can be used in 2-wire or 4-wire mode. + * Use uart_func_mode to set 2-wire or 4-wire mode. */ +enum uart_func_mode { + UART_TWO_WIRE, /* can't support HW Flow control. */ + UART_FOUR_WIRE,/* can support HW Flow control. */ +}; + struct msm_hs_port { struct uart_port uport; - unsigned long imr_reg; + unsigned long imr_reg; /* shadow value of UARTDM_IMR */ struct clk *clk; + struct clk *pclk; struct msm_hs_tx tx; struct msm_hs_rx rx; - + /* gsbi uarts have to do additional writes to gsbi memory */ + /* block and top control status block. The following pointers */ + /* keep a handle to these blocks. 
*/ + unsigned char __iomem *mapped_gsbi; int dma_tx_channel; int dma_rx_channel; int dma_tx_crci; int dma_rx_crci; - - struct hrtimer clk_off_timer; + struct hrtimer clk_off_timer; /* to poll TXEMT before clock off */ ktime_t clk_off_delay; enum msm_hs_clk_states_e clk_state; enum msm_hs_clk_req_off_state_e clk_req_off_state; - struct msm_hs_rx_wakeup rx_wakeup; - void (*exit_lpm_cb)(struct uart_port *); + struct msm_hs_wakeup wakeup; + + struct dentry *loopback_dir; + struct work_struct clock_off_w; /* work for actual clock off */ + struct workqueue_struct *hsuart_wq; /* hsuart workqueue */ + struct mutex clk_mutex; /* mutex to guard against clock off/clock on */ + bool tty_flush_receive; + bool rx_discard_flush_issued; + enum uart_func_mode func_mode; + bool is_shutdown; }; #define MSM_UARTDM_BURST_SIZE 16 /* DM burst size (in bytes) */ #define UARTDM_TX_BUF_SIZE UART_XMIT_SIZE #define UARTDM_RX_BUF_SIZE 512 +#define RETRY_TIMEOUT 5 +#define UARTDM_NR 256 +#define RX_FLUSH_COMPLETE_TIMEOUT 300 /* In jiffies */ -#define UARTDM_NR 2 - +static struct dentry *debug_base; static struct msm_hs_port q_uart_port[UARTDM_NR]; static struct platform_driver msm_serial_hs_platform_driver; static struct uart_driver msm_hs_driver; static struct uart_ops msm_hs_ops; -static struct workqueue_struct *msm_hs_workqueue; +static void msm_hs_start_rx_locked(struct uart_port *uport); #define UARTDM_TO_MSM(uart_port) \ container_of((uart_port), struct msm_hs_port, uport) -static unsigned int use_low_power_rx_wakeup(struct msm_hs_port - *msm_uport) +static ssize_t show_clock(struct device *dev, struct device_attribute *attr, + char *buf) { - return (msm_uport->rx_wakeup.irq >= 0); + int state = 1; + enum msm_hs_clk_states_e clk_state; + unsigned long flags; + + struct platform_device *pdev = container_of(dev, struct + platform_device, dev); + struct msm_hs_port *msm_uport = &q_uart_port[pdev->id]; + + spin_lock_irqsave(&msm_uport->uport.lock, flags); + clk_state = msm_uport->clk_state; + 
spin_unlock_irqrestore(&msm_uport->uport.lock, flags); + + if (clk_state <= MSM_HS_CLK_OFF) + state = 0; + + return snprintf(buf, PAGE_SIZE, "%d\n", state); } -static unsigned int msm_hs_read(struct uart_port *uport, +static ssize_t set_clock(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + int state; + struct platform_device *pdev = container_of(dev, struct + platform_device, dev); + struct msm_hs_port *msm_uport = &q_uart_port[pdev->id]; + + state = buf[0] - '0'; + switch (state) { + case 0: { + msm_hs_request_clock_off(&msm_uport->uport); + break; + } + case 1: { + msm_hs_request_clock_on(&msm_uport->uport); + break; + } + default: { + return -EINVAL; + } + } + return count; +} + +static DEVICE_ATTR(clock, S_IWUSR | S_IRUGO, show_clock, set_clock); + +static inline unsigned int use_low_power_wakeup(struct msm_hs_port *msm_uport) +{ + return (msm_uport->wakeup.irq > 0); +} + +static inline int is_gsbi_uart(struct msm_hs_port *msm_uport) +{ + /* assume gsbi uart if gsbi resource found in pdata */ + return ((msm_uport->mapped_gsbi != NULL)); +} + +static inline unsigned int msm_hs_read(struct uart_port *uport, unsigned int offset) { - return ioread32(uport->membase + offset); + return readl_relaxed(uport->membase + offset); } -static void msm_hs_write(struct uart_port *uport, unsigned int offset, +static inline void msm_hs_write(struct uart_port *uport, unsigned int offset, unsigned int value) { - iowrite32(value, uport->membase + offset); + writel_relaxed(value, uport->membase + offset); } static void msm_hs_release_port(struct uart_port *port) { - iounmap(port->membase); + struct msm_hs_port *msm_uport = UARTDM_TO_MSM(port); + struct platform_device *pdev = to_platform_device(port->dev); + struct resource *gsbi_resource; + resource_size_t size; + + if (is_gsbi_uart(msm_uport)) { + iowrite32(GSBI_PROTOCOL_IDLE, msm_uport->mapped_gsbi + + GSBI_CONTROL_ADDR); + gsbi_resource = platform_get_resource_byname(pdev, + 
IORESOURCE_MEM, + "gsbi_resource"); + if (unlikely(!gsbi_resource)) + return; + + size = resource_size(gsbi_resource); + release_mem_region(gsbi_resource->start, size); + iounmap(msm_uport->mapped_gsbi); + msm_uport->mapped_gsbi = NULL; + } } static int msm_hs_request_port(struct uart_port *port) { - port->membase = ioremap(port->mapbase, PAGE_SIZE); - if (unlikely(!port->membase)) - return -ENOMEM; + struct msm_hs_port *msm_uport = UARTDM_TO_MSM(port); + struct platform_device *pdev = to_platform_device(port->dev); + struct resource *gsbi_resource; + resource_size_t size; + + gsbi_resource = platform_get_resource_byname(pdev, + IORESOURCE_MEM, + "gsbi_resource"); + if (gsbi_resource) { + size = resource_size(gsbi_resource); + if (unlikely(!request_mem_region(gsbi_resource->start, size, + "msm_serial_hs"))) + return -EBUSY; + msm_uport->mapped_gsbi = ioremap(gsbi_resource->start, + size); + if (!msm_uport->mapped_gsbi) { + release_mem_region(gsbi_resource->start, size); + return -EBUSY; + } + } + /* no gsbi uart */ + return 0; +} - /* configure the CR Protection to Enable */ - msm_hs_write(port, UARTDM_CR_ADDR, CR_PROTECTION_EN); +static int msm_serial_loopback_enable_set(void *data, u64 val) +{ + struct msm_hs_port *msm_uport = data; + struct uart_port *uport = &(msm_uport->uport); + unsigned long flags; + int ret = 0; + + clk_prepare_enable(msm_uport->clk); + if (msm_uport->pclk) + clk_prepare_enable(msm_uport->pclk); + + if (val) { + spin_lock_irqsave(&uport->lock, flags); + ret = msm_hs_read(uport, UARTDM_MR2_ADDR); + ret |= UARTDM_MR2_LOOP_MODE_BMSK; + msm_hs_write(uport, UARTDM_MR2_ADDR, ret); + spin_unlock_irqrestore(&uport->lock, flags); + } else { + spin_lock_irqsave(&uport->lock, flags); + ret = msm_hs_read(uport, UARTDM_MR2_ADDR); + ret &= ~UARTDM_MR2_LOOP_MODE_BMSK; + msm_hs_write(uport, UARTDM_MR2_ADDR, ret); + spin_unlock_irqrestore(&uport->lock, flags); + } + /* Calling CLOCK API. Hence mb() requires here. 
*/ + mb(); + clk_disable_unprepare(msm_uport->clk); + if (msm_uport->pclk) + clk_disable_unprepare(msm_uport->pclk); + + return 0; +} + +static int msm_serial_loopback_enable_get(void *data, u64 *val) +{ + struct msm_hs_port *msm_uport = data; + struct uart_port *uport = &(msm_uport->uport); + unsigned long flags; + int ret = 0; + + clk_prepare_enable(msm_uport->clk); + if (msm_uport->pclk) + clk_prepare_enable(msm_uport->pclk); + + spin_lock_irqsave(&uport->lock, flags); + ret = msm_hs_read(&msm_uport->uport, UARTDM_MR2_ADDR); + spin_unlock_irqrestore(&uport->lock, flags); + + clk_disable_unprepare(msm_uport->clk); + if (msm_uport->pclk) + clk_disable_unprepare(msm_uport->pclk); + + *val = (ret & UARTDM_MR2_LOOP_MODE_BMSK) ? 1 : 0; return 0; } +DEFINE_SIMPLE_ATTRIBUTE(loopback_enable_fops, msm_serial_loopback_enable_get, + msm_serial_loopback_enable_set, "%llu\n"); + +/* + * msm_serial_hs debugfs node: <debugfs_root>/msm_serial_hs/loopback.<id> + * writing 1 turns on internal loopback mode in HW. Useful for automation + * test scripts. + * writing 0 disables the internal loopback mode. Default is disabled. 
+ */ +static void msm_serial_debugfs_init(struct msm_hs_port *msm_uport, + int id) +{ + char node_name[15]; + snprintf(node_name, sizeof(node_name), "loopback.%d", id); + msm_uport->loopback_dir = debugfs_create_file(node_name, + S_IRUGO | S_IWUSR, + debug_base, + msm_uport, + &loopback_enable_fops); + + if (IS_ERR_OR_NULL(msm_uport->loopback_dir)) + pr_err("%s(): Cannot create loopback.%d debug entry", + __func__, id); +} static int msm_hs_remove(struct platform_device *pdev) { @@ -415,6 +412,9 @@ static int msm_hs_remove(struct platform_device *pdev) msm_uport = &q_uart_port[pdev->id]; dev = msm_uport->uport.dev; + sysfs_remove_file(&pdev->dev.kobj, &dev_attr_clock.attr); + debugfs_remove(msm_uport->loopback_dir); + dma_unmap_single(dev, msm_uport->rx.mapped_cmd_ptr, sizeof(dmov_box), DMA_TO_DEVICE); dma_pool_free(msm_uport->rx.pool, msm_uport->rx.buffer, @@ -428,8 +428,13 @@ static int msm_hs_remove(struct platform_device *pdev) dma_unmap_single(dev, msm_uport->tx.mapped_cmd_ptr, sizeof(dmov_box), DMA_TO_DEVICE); + destroy_workqueue(msm_uport->hsuart_wq); + mutex_destroy(&msm_uport->clk_mutex); + uart_remove_one_port(&msm_hs_driver, &msm_uport->uport); clk_put(msm_uport->clk); + if (msm_uport->pclk) + clk_put(msm_uport->pclk); /* Free the tx resources */ kfree(msm_uport->tx.command_ptr); @@ -444,64 +449,176 @@ static int msm_hs_remove(struct platform_device *pdev) return 0; } -static int msm_hs_init_clk_locked(struct uart_port *uport) +/** + * msm_hs_config_uart_tx_rx_gpios - Configures UART Tx and RX GPIOs + * @port: uart port + */ +static int msm_hs_config_uart_tx_rx_gpios(struct uart_port *uport) +{ + struct platform_device *pdev = to_platform_device(uport->dev); + const struct msm_serial_hs_platform_data *pdata = + pdev->dev.platform_data; + int ret = -EINVAL; + + + ret = gpio_request(pdata->uart_tx_gpio, "UART_TX_GPIO"); + if (unlikely(ret)) { + pr_err("gpio request failed for:%d\n", + pdata->uart_tx_gpio); + goto exit_uart_config; + } + + ret = 
gpio_request(pdata->uart_rx_gpio, "UART_RX_GPIO"); + if (unlikely(ret)) { + pr_err("gpio request failed for:%d\n", + pdata->uart_rx_gpio); + gpio_free(pdata->uart_tx_gpio); + goto exit_uart_config; + } + +exit_uart_config: + return ret; +} + +/** + * msm_hs_unconfig_uart_tx_rx_gpios: Unconfigures UART Tx and RX GPIOs + * @port: uart port + */ +static void msm_hs_unconfig_uart_tx_rx_gpios(struct uart_port *uport) +{ + struct platform_device *pdev = to_platform_device(uport->dev); + const struct msm_serial_hs_platform_data *pdata = + pdev->dev.platform_data; + + + gpio_free(pdata->uart_tx_gpio); + gpio_free(pdata->uart_rx_gpio); + +} + +/** + * msm_hs_config_uart_hwflow_gpios: Configures UART HWFlow GPIOs + * @port: uart port + */ +static int msm_hs_config_uart_hwflow_gpios(struct uart_port *uport) +{ + struct platform_device *pdev = to_platform_device(uport->dev); + const struct msm_serial_hs_platform_data *pdata = + pdev->dev.platform_data; + int ret = -EINVAL; + + ret = gpio_request(pdata->uart_cts_gpio, "UART_CTS_GPIO"); + if (unlikely(ret)) { + pr_err("gpio request failed for:%d\n", pdata->uart_cts_gpio); + goto exit_config_uart; + } + + ret = gpio_request(pdata->uart_rfr_gpio, "UART_RFR_GPIO"); + if (unlikely(ret)) { + pr_err("gpio request failed for:%d\n", pdata->uart_rfr_gpio); + gpio_free(pdata->uart_cts_gpio); + goto exit_config_uart; + } + +exit_config_uart: + return ret; +} + +/** + * msm_hs_unconfig_uart_hwflow_gpios: Unonfigures UART HWFlow GPIOs + * @port: uart port + */ +static void msm_hs_unconfig_uart_hwflow_gpios(struct uart_port *uport) +{ + struct platform_device *pdev = to_platform_device(uport->dev); + const struct msm_serial_hs_platform_data *pdata = + pdev->dev.platform_data; + + gpio_free(pdata->uart_cts_gpio); + gpio_free(pdata->uart_rfr_gpio); + +} + +/** + * msm_hs_config_uart_gpios: Configures UART GPIOs and returns success or + * Failure + * @port: uart port + */ +static int msm_hs_config_uart_gpios(struct uart_port *uport) { - int ret; 
struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + int ret; - ret = clk_enable(msm_uport->clk); - if (ret) { - printk(KERN_ERR "Error could not turn on UART clk\n"); - return ret; + /* Configure UART Tx and Rx GPIOs */ + ret = msm_hs_config_uart_tx_rx_gpios(uport); + if (!ret) { + if (msm_uport->func_mode == UART_FOUR_WIRE) { + /*if 4-wire uart, configure CTS and RFR GPIOs */ + ret = msm_hs_config_uart_hwflow_gpios(uport); + if (ret) + msm_hs_unconfig_uart_tx_rx_gpios(uport); + } } + return ret; +} + +/** + * msm_hs_unconfig_uart_gpios: Unconfigures UART GPIOs + * @port: uart port + */ +static void msm_hs_unconfig_uart_gpios(struct uart_port *port) +{ + struct msm_hs_port *msm_uport = UARTDM_TO_MSM(port); + + msm_hs_unconfig_uart_tx_rx_gpios(port); + if (msm_uport->func_mode == UART_FOUR_WIRE) + msm_hs_unconfig_uart_hwflow_gpios(port); +} + +static int msm_hs_init_clk(struct uart_port *uport) +{ + int ret; + struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + /* Set up the MREG/NREG/DREG/MNDREG */ ret = clk_set_rate(msm_uport->clk, uport->uartclk); if (ret) { printk(KERN_WARNING "Error setting clock rate on UART\n"); - clk_disable(msm_uport->clk); return ret; } + ret = clk_prepare_enable(msm_uport->clk); + if (ret) { + printk(KERN_ERR "Error could not turn on UART clk\n"); + return ret; + } + if (msm_uport->pclk) { + ret = clk_prepare_enable(msm_uport->pclk); + if (ret) { + clk_disable_unprepare(msm_uport->clk); + dev_err(uport->dev, + "Error could not turn on UART pclk\n"); + return ret; + } + } + msm_uport->clk_state = MSM_HS_CLK_ON; return 0; } -/* Enable and Disable clocks (Used for power management) */ -static void msm_hs_pm(struct uart_port *uport, unsigned int state, - unsigned int oldstate) -{ - struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); - - if (use_low_power_rx_wakeup(msm_uport) || - msm_uport->exit_lpm_cb) - return; /* ignore linux PM states, - use msm_hs_request_clock API */ - - switch (state) { - case 0: - clk_enable(msm_uport->clk); 
- break; - case 3: - clk_disable(msm_uport->clk); - break; - default: - dev_err(uport->dev, "msm_serial: Unknown PM state %d\n", - state); - } -} - /* * programs the UARTDM_CSR register with correct bit rates * * Interrupts should be disabled before we are called, as * we modify Set Baud rate - * Set receive stale interrupt level, dependent on Bit Rate + * Set receive stale interrupt level, dependant on Bit Rate * Goal is to have around 8 ms before indicate stale. * roundup (((Bit Rate * .008) / 10) + 1 */ -static void msm_hs_set_bps_locked(struct uart_port *uport, - unsigned int bps) +static unsigned long msm_hs_set_bps_locked(struct uart_port *uport, + unsigned int bps, + unsigned long flags) { unsigned long rxstale; unsigned long data; @@ -509,63 +626,63 @@ static void msm_hs_set_bps_locked(struct uart_port *uport, switch (bps) { case 300: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_75); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x00); rxstale = 1; break; case 600: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_150); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x11); rxstale = 1; break; case 1200: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_300); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x22); rxstale = 1; break; case 2400: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_600); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x33); rxstale = 1; break; case 4800: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_1200); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x44); rxstale = 1; break; case 9600: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_2400); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x55); rxstale = 2; break; case 14400: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_3600); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x66); rxstale = 3; break; case 19200: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_4800); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x77); rxstale = 4; break; case 28800: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_7200); + 
msm_hs_write(uport, UARTDM_CSR_ADDR, 0x88); rxstale = 6; break; case 38400: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_9600); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x99); rxstale = 8; break; case 57600: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_14400); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xaa); rxstale = 16; break; case 76800: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_19200); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xbb); rxstale = 16; break; case 115200: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_28800); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xcc); rxstale = 31; break; case 230400: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_57600); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xee); rxstale = 31; break; case 460800: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_115200); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xff); rxstale = 31; break; case 4000000: @@ -578,24 +695,87 @@ static void msm_hs_set_bps_locked(struct uart_port *uport, case 1152000: case 1000000: case 921600: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_115200); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xff); rxstale = 31; break; default: - msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_2400); + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xff); /* default to 9600 */ bps = 9600; rxstale = 2; break; } - if (bps > 460800) + /* + * uart baud rate depends on CSR and MND Values + * we are updating CSR before and then calling + * clk_set_rate which updates MND Values. Hence + * dsb requires here. 
+ */ + mb(); + if (bps > 460800) { uport->uartclk = bps * 16; - else - uport->uartclk = UARTCLK; + } else { + uport->uartclk = 7372800; + } + spin_unlock_irqrestore(&uport->lock, flags); if (clk_set_rate(msm_uport->clk, uport->uartclk)) { printk(KERN_WARNING "Error setting clock rate on UART\n"); - return; + WARN_ON(1); + spin_lock_irqsave(&uport->lock, flags); + return flags; + } + + spin_lock_irqsave(&uport->lock, flags); + data = rxstale & UARTDM_IPR_STALE_LSB_BMSK; + data |= UARTDM_IPR_STALE_TIMEOUT_MSB_BMSK & (rxstale << 2); + + msm_hs_write(uport, UARTDM_IPR_ADDR, data); + return flags; +} + + +static void msm_hs_set_std_bps_locked(struct uart_port *uport, + unsigned int bps) +{ + unsigned long rxstale; + unsigned long data; + + switch (bps) { + case 9600: + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x99); + rxstale = 2; + break; + case 14400: + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xaa); + rxstale = 3; + break; + case 19200: + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xbb); + rxstale = 4; + break; + case 28800: + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xcc); + rxstale = 6; + break; + case 38400: + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xdd); + rxstale = 8; + break; + case 57600: + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xee); + rxstale = 16; + break; + case 115200: + msm_hs_write(uport, UARTDM_CSR_ADDR, 0xff); + rxstale = 31; + break; + default: + msm_hs_write(uport, UARTDM_CSR_ADDR, 0x99); + /* default to 9600 */ + bps = 9600; + rxstale = 2; + break; } data = rxstale & UARTDM_IPR_STALE_LSB_BMSK; @@ -611,17 +791,31 @@ static void msm_hs_set_bps_locked(struct uart_port *uport, * Configure the serial port */ static void msm_hs_set_termios(struct uart_port *uport, - struct ktermios *termios, - struct ktermios *oldtermios) + struct ktermios *termios, + struct ktermios *oldtermios) { unsigned int bps; unsigned long data; unsigned long flags; + int ret; unsigned int c_cflag = termios->c_cflag; struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + 
mutex_lock(&msm_uport->clk_mutex); spin_lock_irqsave(&uport->lock, flags); - clk_enable(msm_uport->clk); + + /* + * Disable Rx channel of UARTDM + * DMA Rx Stall happens if enqueue and flush of Rx command happens + * concurrently. Hence before changing the baud rate/protocol + * configuration and sending flush command to ADM, disable the Rx + * channel of UARTDM. + * Note: should not reset the receiver here immediately as it is not + * suggested to do disable/reset or reset/disable at the same time. + */ + data = msm_hs_read(uport, UARTDM_DMEN_ADDR); + data &= ~UARTDM_RX_DM_EN_BMSK; + msm_hs_write(uport, UARTDM_DMEN_ADDR, data); /* 300 is the minimum baud support by the driver */ bps = uart_get_baud_rate(uport, termios, oldtermios, 200, 4000000); @@ -630,18 +824,23 @@ static void msm_hs_set_termios(struct uart_port *uport, if (bps == 200) bps = 3200000; - msm_hs_set_bps_locked(uport, bps); + uport->uartclk = clk_get_rate(msm_uport->clk); + if (!uport->uartclk) + msm_hs_set_std_bps_locked(uport, bps); + else + flags = msm_hs_set_bps_locked(uport, bps, flags); data = msm_hs_read(uport, UARTDM_MR2_ADDR); data &= ~UARTDM_MR2_PARITY_MODE_BMSK; /* set parity */ if (PARENB == (c_cflag & PARENB)) { - if (PARODD == (c_cflag & PARODD)) + if (PARODD == (c_cflag & PARODD)) { data |= ODD_PARITY; - else if (CMSPAR == (c_cflag & CMSPAR)) + } else if (CMSPAR == (c_cflag & CMSPAR)) { data |= SPACE_PARITY; - else + } else { data |= EVEN_PARITY; + } } /* Set bits per char */ @@ -686,6 +885,8 @@ static void msm_hs_set_termios(struct uart_port *uport, uport->ignore_status_mask = termios->c_iflag & INPCK; uport->ignore_status_mask |= termios->c_iflag & IGNPAR; + uport->ignore_status_mask |= termios->c_iflag & IGNBRK; + uport->read_status_mask = (termios->c_cflag & CREAD); msm_hs_write(uport, UARTDM_IMR_ADDR, 0); @@ -693,40 +894,59 @@ static void msm_hs_set_termios(struct uart_port *uport, /* Set Transmit software time out */ uart_update_timeout(uport, c_cflag, bps); - 
msm_hs_write(uport, UARTDM_CR_ADDR, RESET_RX); - msm_hs_write(uport, UARTDM_CR_ADDR, RESET_TX); - if (msm_uport->rx.flush == FLUSH_NONE) { msm_uport->rx.flush = FLUSH_IGNORE; - msm_dmov_stop_cmd(msm_uport->dma_rx_channel, NULL, 1); + /* + * Before using dmov APIs make sure that + * previous writel are completed. Hence + * dsb requires here. + */ + mb(); + msm_uport->rx_discard_flush_issued = true; + /* do discard flush */ + msm_dmov_flush(msm_uport->dma_rx_channel, 0); + spin_unlock_irqrestore(&uport->lock, flags); + ret = wait_event_timeout(msm_uport->rx.wait, + msm_uport->rx_discard_flush_issued == false, + RX_FLUSH_COMPLETE_TIMEOUT); + if (!ret) + pr_err("%s(): dmov flush callback timed out\n", + __func__); + spin_lock_irqsave(&uport->lock, flags); } - msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); + /* After baudrate changes, should perform RX and TX reset */ + msm_hs_write(uport, UARTDM_CR_ADDR, (RESET_RX | RESET_TX)); + /* + * Rx and Tx reset operation takes few clock cycles, hence as + * safe side adding 10us delay. + */ + udelay(10); - clk_disable(msm_uport->clk); + /* Initiate new Rx transfer */ + msm_hs_start_rx_locked(&msm_uport->uport); + + mb(); spin_unlock_irqrestore(&uport->lock, flags); + mutex_unlock(&msm_uport->clk_mutex); } /* * Standard API, Transmitter * Any character in the transmit shift register is sent */ -static unsigned int msm_hs_tx_empty(struct uart_port *uport) +unsigned int msm_hs_tx_empty(struct uart_port *uport) { unsigned int data; unsigned int ret = 0; - struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); - - clk_enable(msm_uport->clk); data = msm_hs_read(uport, UARTDM_SR_ADDR); if (data & UARTDM_SR_TXEMT_BMSK) ret = TIOCSER_TEMT; - clk_disable(msm_uport->clk); - return ret; } +EXPORT_SYMBOL(msm_hs_tx_empty); /* * Standard API, Stop transmitter. 
@@ -753,21 +973,20 @@ static void msm_hs_stop_rx_locked(struct uart_port *uport) struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); unsigned int data; - clk_enable(msm_uport->clk); - /* disable dlink */ data = msm_hs_read(uport, UARTDM_DMEN_ADDR); data &= ~UARTDM_RX_DM_EN_BMSK; msm_hs_write(uport, UARTDM_DMEN_ADDR, data); + /* calling DMOV or CLOCK API. Hence mb() */ + mb(); /* Disable the receiver */ - if (msm_uport->rx.flush == FLUSH_NONE) - msm_dmov_stop_cmd(msm_uport->dma_rx_channel, NULL, 1); - + if (msm_uport->rx.flush == FLUSH_NONE) { + /* do discard flush */ + msm_dmov_flush(msm_uport->dma_rx_channel, 0); + } if (msm_uport->rx.flush != FLUSH_SHUTDOWN) msm_uport->rx.flush = FLUSH_STOP; - - clk_disable(msm_uport->clk); } /* Transmit the next chunk of data */ @@ -775,11 +994,16 @@ static void msm_hs_submit_tx_locked(struct uart_port *uport) { int left; int tx_count; + int aligned_tx_count; dma_addr_t src_addr; + dma_addr_t aligned_src_addr; struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); struct msm_hs_tx *tx = &msm_uport->tx; struct circ_buf *tx_buf = &msm_uport->uport.state->xmit; + if (tx->dma_in_flight || msm_uport->is_shutdown) + return; + if (uart_circ_empty(tx_buf) || uport->state->port.tty->stopped) { msm_hs_stop_tx_locked(uport); return; } @@ -798,8 +1022,13 @@ static void msm_hs_submit_tx_locked(struct uart_port *uport) tx_count = left; src_addr = tx->dma_base + tx_buf->tail; - dma_sync_single_for_device(uport->dev, src_addr, tx_count, - DMA_TO_DEVICE); + /* Mask the src_addr to align on a cache + * and add those bytes to tx_count */ + aligned_src_addr = src_addr & ~(dma_get_cache_alignment() - 1); + aligned_tx_count = tx_count + src_addr - aligned_src_addr; + + dma_sync_single_for_device(uport->dev, aligned_src_addr, + aligned_tx_count, DMA_TO_DEVICE); tx->command_ptr->num_rows = (((tx_count + 15) >> 4) << 16) | ((tx_count + 15) >> 4); @@ -810,9 +1039,6 @@ static void msm_hs_submit_tx_locked(struct uart_port *uport) *tx->command_ptr_ptr =
CMD_PTR_LP | DMOV_CMD_ADDR(tx->mapped_cmd_ptr); - dma_sync_single_for_device(uport->dev, tx->mapped_cmd_ptr_ptr, - sizeof(u32), DMA_TO_DEVICE); - /* Save tx_count to use in Callback */ tx->tx_count = tx_count; msm_hs_write(uport, UARTDM_NCF_TX_ADDR, tx_count); @@ -820,6 +1046,12 @@ /* Disable the tx_ready interrupt */ msm_uport->imr_reg &= ~UARTDM_ISR_TX_READY_BMSK; msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); + /* Calling next DMOV API. Hence mb() here. */ + mb(); + + dma_sync_single_for_device(uport->dev, tx->mapped_cmd_ptr_ptr, + sizeof(u32), DMA_TO_DEVICE); + msm_uport->tx.flush = FLUSH_NONE; msm_dmov_enqueue_cmd(msm_uport->dma_tx_channel, &tx->xfer); } @@ -827,106 +1059,122 @@ static void msm_hs_start_rx_locked(struct uart_port *uport) { struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + unsigned int buffer_pending = msm_uport->rx.buffer_pending; + unsigned int data; + + msm_uport->rx.buffer_pending = 0; + if (buffer_pending && hs_serial_debug_mask) + printk(KERN_ERR "Error: rx started in buffer state = %x", + buffer_pending); msm_hs_write(uport, UARTDM_CR_ADDR, RESET_STALE_INT); msm_hs_write(uport, UARTDM_DMRX_ADDR, UARTDM_RX_BUF_SIZE); msm_hs_write(uport, UARTDM_CR_ADDR, STALE_EVENT_ENABLE); msm_uport->imr_reg |= UARTDM_ISR_RXLEV_BMSK; + + /* + * Enable UARTDM Rx Interface as previously it has been + * disable in set_termios before configuring baud rate. + */ + data = msm_hs_read(uport, UARTDM_DMEN_ADDR); + data |= UARTDM_RX_DM_EN_BMSK; + msm_hs_write(uport, UARTDM_DMEN_ADDR, data); msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); + /* Calling next DMOV API. Hence mb() here.
*/ + mb(); msm_uport->rx.flush = FLUSH_NONE; msm_dmov_enqueue_cmd(msm_uport->dma_rx_channel, &msm_uport->rx.xfer); - /* might have finished RX and be ready to clock off */ - hrtimer_start(&msm_uport->clk_off_timer, msm_uport->clk_off_delay, - HRTIMER_MODE_REL); -} - -/* Enable the transmitter Interrupt */ -static void msm_hs_start_tx_locked(struct uart_port *uport) -{ - struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); - - clk_enable(msm_uport->clk); - - if (msm_uport->exit_lpm_cb) - msm_uport->exit_lpm_cb(uport); - - if (msm_uport->tx.tx_ready_int_en == 0) { - msm_uport->tx.tx_ready_int_en = 1; - msm_hs_submit_tx_locked(uport); - } - - clk_disable(msm_uport->clk); } -/* - * This routine is called when we are done with a DMA transfer - * - * This routine is registered with Data mover when we set - * up a Data Mover transfer. It is called from Data mover ISR - * when the DMA transfer is done. - */ -static void msm_hs_dmov_tx_callback(struct msm_dmov_cmd *cmd_ptr, - unsigned int result, - struct msm_dmov_errdata *err) +static void flip_insert_work(struct work_struct *work) { unsigned long flags; - struct msm_hs_port *msm_uport; - - /* DMA did not finish properly */ - WARN_ON((((result & RSLT_FIFO_CNTR_BMSK) >> 28) == 1) && - !(result & RSLT_VLD)); - - msm_uport = container_of(cmd_ptr, struct msm_hs_port, tx.xfer); + int retval; + struct msm_hs_port *msm_uport = + container_of(work, struct msm_hs_port, + rx.flip_insert_work.work); + struct tty_struct *tty = msm_uport->uport.state->port.tty; + struct tty_port *port = tty->port; spin_lock_irqsave(&msm_uport->uport.lock, flags); - clk_enable(msm_uport->clk); - - msm_uport->imr_reg |= UARTDM_ISR_TX_READY_BMSK; - msm_hs_write(&msm_uport->uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); - - clk_disable(msm_uport->clk); + if (msm_uport->rx.buffer_pending == NONE_PENDING) { + if (hs_serial_debug_mask) + printk(KERN_ERR "Error: No buffer pending in %s", + __func__); + return; + } + if (msm_uport->rx.buffer_pending &
FIFO_OVERRUN) { + retval = tty_insert_flip_char(port, 0, TTY_OVERRUN); + if (retval) + msm_uport->rx.buffer_pending &= ~FIFO_OVERRUN; + } + if (msm_uport->rx.buffer_pending & PARITY_ERROR) { + retval = tty_insert_flip_char(port, 0, TTY_PARITY); + if (retval) + msm_uport->rx.buffer_pending &= ~PARITY_ERROR; + } + if (msm_uport->rx.buffer_pending & CHARS_NORMAL) { + int rx_count, rx_offset; + rx_count = (msm_uport->rx.buffer_pending & 0xFFFF0000) >> 16; + rx_offset = (msm_uport->rx.buffer_pending & 0xFFD0) >> 5; + retval = tty_insert_flip_string(port, msm_uport->rx.buffer + + rx_offset, rx_count); + msm_uport->rx.buffer_pending &= (FIFO_OVERRUN | + PARITY_ERROR); + if (retval != rx_count) + msm_uport->rx.buffer_pending |= CHARS_NORMAL | + retval << 8 | (rx_count - retval) << 16; + } + if (msm_uport->rx.buffer_pending) + schedule_delayed_work(&msm_uport->rx.flip_insert_work, + msecs_to_jiffies(RETRY_TIMEOUT)); + else + if ((msm_uport->clk_state == MSM_HS_CLK_ON) && + (msm_uport->rx.flush <= FLUSH_IGNORE)) { + if (hs_serial_debug_mask) + printk(KERN_WARNING + "msm_serial_hs: " + "Pending buffers cleared. " + "Restarting\n"); + msm_hs_start_rx_locked(&msm_uport->uport); + } spin_unlock_irqrestore(&msm_uport->uport.lock, flags); + tty_flip_buffer_push(port); } -/* - * This routine is called when we are done with a DMA transfer or the - * a flush has been sent to the data mover driver. - * - * This routine is registered with Data mover when we set up a Data Mover - * transfer. It is called from Data mover ISR when the DMA transfer is done. 
- */ -static void msm_hs_dmov_rx_callback(struct msm_dmov_cmd *cmd_ptr, - unsigned int result, - struct msm_dmov_errdata *err) +static void msm_serial_hs_rx_tlet(unsigned long tlet_ptr) { int retval; int rx_count; unsigned long status; - unsigned int error_f = 0; unsigned long flags; - unsigned int flush; - struct tty_port *port; + unsigned int error_f = 0; struct uart_port *uport; struct msm_hs_port *msm_uport; + unsigned int flush; + struct tty_struct *tty; + struct tty_port *port; - msm_uport = container_of(cmd_ptr, struct msm_hs_port, rx.xfer); + msm_uport = container_of((struct tasklet_struct *)tlet_ptr, + struct msm_hs_port, rx.tlet); uport = &msm_uport->uport; + tty = uport->state->port.tty; + port = tty->port; - spin_lock_irqsave(&uport->lock, flags); - clk_enable(msm_uport->clk); + status = msm_hs_read(uport, UARTDM_SR_ADDR); - port = &uport->state->port; + spin_lock_irqsave(&uport->lock, flags); msm_hs_write(uport, UARTDM_CR_ADDR, STALE_EVENT_DISABLE); - status = msm_hs_read(uport, UARTDM_SR_ADDR); - /* overflow is not connect to data in a FIFO */ if (unlikely((status & UARTDM_SR_OVERRUN_BMSK) && (uport->read_status_mask & CREAD))) { - tty_insert_flip_char(port, 0, TTY_OVERRUN); + retval = tty_insert_flip_char(port, 0, TTY_OVERRUN); + if (!retval) + msm_uport->rx.buffer_pending |= TTY_OVERRUN; uport->icount.buf_overrun++; error_f = 1; } @@ -936,10 +1184,27 @@ static void msm_hs_dmov_rx_callback(struct msm_dmov_cmd *cmd_ptr, if (unlikely(status & UARTDM_SR_PAR_FRAME_BMSK)) { /* Can not tell difference between parity & frame error */ + if (hs_serial_debug_mask) + printk(KERN_WARNING "msm_serial_hs: parity error\n"); uport->icount.parity++; error_f = 1; - if (uport->ignore_status_mask & IGNPAR) - tty_insert_flip_char(port, 0, TTY_PARITY); + if (!(uport->ignore_status_mask & IGNPAR)) { + retval = tty_insert_flip_char(port, 0, TTY_PARITY); + if (!retval) + msm_uport->rx.buffer_pending |= TTY_PARITY; + } + } + + if (unlikely(status & UARTDM_SR_RX_BREAK_BMSK)) { 
+ if (hs_serial_debug_mask) + printk(KERN_WARNING "msm_serial_hs: Rx break\n"); + uport->icount.brk++; + error_f = 1; + if (!(uport->ignore_status_mask & IGNBRK)) { + retval = tty_insert_flip_char(port, 0, TTY_BREAK); + if (!retval) + msm_uport->rx.buffer_pending |= TTY_BREAK; + } } if (error_f) @@ -947,40 +1212,149 @@ static void msm_hs_dmov_rx_callback(struct msm_dmov_cmd *cmd_ptr, if (msm_uport->clk_req_off_state == CLK_REQ_OFF_FLUSH_ISSUED) msm_uport->clk_req_off_state = CLK_REQ_OFF_RXSTALE_FLUSHED; - flush = msm_uport->rx.flush; - if (flush == FLUSH_IGNORE) - msm_hs_start_rx_locked(uport); - if (flush == FLUSH_STOP) + + /* Part of set_termios sets the flush as FLUSH_IGNORE */ + if (flush == FLUSH_IGNORE) { + if (!msm_uport->rx_discard_flush_issued && + !msm_uport->rx.buffer_pending) { + msm_hs_start_rx_locked(uport); + } else { + msm_uport->rx_discard_flush_issued = false; + wake_up(&msm_uport->rx.wait); + } + } + + /* Part of stop_rx sets the flush as FLUSH_STOP */ + if (flush == FLUSH_STOP) { + if (msm_uport->rx_discard_flush_issued) + msm_uport->rx_discard_flush_issued = false; msm_uport->rx.flush = FLUSH_SHUTDOWN; + wake_up(&msm_uport->rx.wait); + } + if (flush >= FLUSH_DATA_INVALID) goto out; rx_count = msm_hs_read(uport, UARTDM_RX_TOTAL_SNAP_ADDR); + /* order the read of rx.buffer */ + rmb(); + if (0 != (uport->read_status_mask & CREAD)) { retval = tty_insert_flip_string(port, msm_uport->rx.buffer, rx_count); - BUG_ON(retval != rx_count); + if (retval != rx_count) { + msm_uport->rx.buffer_pending |= CHARS_NORMAL | + retval << 5 | (rx_count - retval) << 16; + } } - msm_hs_start_rx_locked(uport); + /* order the read of rx.buffer and the start of next rx xfer */ + wmb(); -out: - clk_disable(msm_uport->clk); + if (!msm_uport->rx.buffer_pending) + msm_hs_start_rx_locked(uport); +out: + if (msm_uport->rx.buffer_pending) { + if (hs_serial_debug_mask) + printk(KERN_WARNING + "msm_serial_hs: " + "tty buffer exhausted. 
" + "Stalling\n"); + schedule_delayed_work(&msm_uport->rx.flip_insert_work + , msecs_to_jiffies(RETRY_TIMEOUT)); + } + /* tty_flip_buffer_push() might call msm_hs_start(), so unlock */ spin_unlock_irqrestore(&uport->lock, flags); - if (flush < FLUSH_DATA_INVALID) - queue_work(msm_hs_workqueue, &msm_uport->rx.tty_work); + tty_flip_buffer_push(port); } -static void msm_hs_tty_flip_buffer_work(struct work_struct *work) +/* Enable the transmitter Interrupt */ +static void msm_hs_start_tx_locked(struct uart_port *uport ) { - struct msm_hs_port *msm_uport = - container_of(work, struct msm_hs_port, rx.tty_work); + struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); - tty_flip_buffer_push(&msm_uport->uport.state->port); + if (msm_uport->is_shutdown) + return; + + if (msm_uport->tx.tx_ready_int_en == 0) { + msm_uport->tx.tx_ready_int_en = 1; + if (msm_uport->tx.dma_in_flight == 0) + msm_hs_submit_tx_locked(uport); + } +} + +/* + * This routine is called when we are done with a DMA transfer + * + * This routine is registered with Data mover when we set + * up a Data Mover transfer. It is called from Data mover ISR + * when the DMA transfer is done. 
*/ +static void msm_hs_dmov_tx_callback(struct msm_dmov_cmd *cmd_ptr, + unsigned int result, + struct msm_dmov_errdata *err) +{ + struct msm_hs_port *msm_uport; + + msm_uport = container_of(cmd_ptr, struct msm_hs_port, tx.xfer); + if (msm_uport->tx.flush == FLUSH_STOP) + /* DMA FLUSH unsuccessful */ + WARN_ON(!(result & DMOV_RSLT_FLUSH)); + else + /* DMA did not finish properly */ + WARN_ON(!(result & DMOV_RSLT_DONE)); + + tasklet_schedule(&msm_uport->tx.tlet); +} + +static void msm_serial_hs_tx_tlet(unsigned long tlet_ptr) +{ + unsigned long flags; + struct msm_hs_port *msm_uport = container_of((struct tasklet_struct *) + tlet_ptr, struct msm_hs_port, tx.tlet); + struct msm_hs_tx *tx = &msm_uport->tx; + + spin_lock_irqsave(&(msm_uport->uport.lock), flags); + + tx->dma_in_flight = 0; + if (msm_uport->tx.flush == FLUSH_STOP) { + msm_uport->tx.flush = FLUSH_SHUTDOWN; + wake_up(&msm_uport->tx.wait); + spin_unlock_irqrestore(&(msm_uport->uport.lock), flags); + return; + } + + msm_uport->imr_reg |= UARTDM_ISR_TX_READY_BMSK; + msm_hs_write(&(msm_uport->uport), UARTDM_IMR_ADDR, msm_uport->imr_reg); + /* Calling clk API. Hence mb() is required. */ + mb(); + + spin_unlock_irqrestore(&(msm_uport->uport.lock), flags); +} + +/* + * This routine is called when we are done with a DMA transfer or when + * a flush has been sent to the data mover driver. + * + * This routine is registered with Data mover when we set up a Data Mover + * transfer. It is called from Data mover ISR when the DMA transfer is done. 
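The TX callback above only validates the result word and defers the real work to a tasklet; which result bit is mandatory depends on whether the driver itself requested the flush. A reduced model of that check, with stand-in bit values rather than the real ADM result encoding:

```c
#include <assert.h>

/* Sketch of the completion check in msm_hs_dmov_tx_callback(): when the
 * driver asked for a flush (FLUSH_STOP) the result must carry the flush
 * bit, otherwise it must carry the done bit. RSLT_* values are stand-ins,
 * not the hardware's result word. */
#define RSLT_DONE  (1u << 0)
#define RSLT_FLUSH (1u << 1)

enum flush_state { FLUSH_NONE, FLUSH_STOP };

static int tx_result_expected(enum flush_state flush, unsigned int result)
{
	if (flush == FLUSH_STOP)
		return (result & RSLT_FLUSH) != 0; /* flush acknowledged */
	return (result & RSLT_DONE) != 0;          /* normal completion */
}
```

Anything that fails this check corresponds to the WARN_ON() in the callback; the interrupt context itself never touches the port lock.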
+ */ +static void msm_hs_dmov_rx_callback(struct msm_dmov_cmd *cmd_ptr, + unsigned int result, + struct msm_dmov_errdata *err) +{ + struct msm_hs_port *msm_uport; + + if (result & DMOV_RSLT_ERROR) + pr_err("%s(): DMOV_RSLT_ERROR\n", __func__); + + msm_uport = container_of(cmd_ptr, struct msm_hs_port, rx.xfer); + + tasklet_schedule(&msm_uport->rx.tlet); } /* @@ -1002,59 +1376,66 @@ static unsigned int msm_hs_get_mctrl_locked(struct uart_port *uport) } /* - * True enables UART auto RFR, which indicates we are ready for data if the RX - * buffer is not full. False disables auto RFR, and deasserts RFR to indicate - * we are not ready for data. Must be called with UART clock on. + * Standard API, Set or clear RFR_signal + * + * Set RFR high, (Indicate we are not ready for data), we disable auto + * ready for receiving and then set RFR_N high. To set RFR to low we just turn + * back auto ready for receiving and it should lower RFR signal + * when hardware is ready */ -static void set_rfr_locked(struct uart_port *uport, int auto_rfr) +void msm_hs_set_mctrl_locked(struct uart_port *uport, + unsigned int mctrl) { + unsigned int set_rts; unsigned int data; - data = msm_hs_read(uport, UARTDM_MR1_ADDR); + /* RTS is active low */ + set_rts = TIOCM_RTS & mctrl ? 
0 : 1; - if (auto_rfr) { - /* enable auto ready-for-receiving */ - data |= UARTDM_MR1_RX_RDY_CTL_BMSK; - msm_hs_write(uport, UARTDM_MR1_ADDR, data); - } else { - /* disable auto ready-for-receiving */ + data = msm_hs_read(uport, UARTDM_MR1_ADDR); + if (set_rts) { + /*disable auto ready-for-receiving */ data &= ~UARTDM_MR1_RX_RDY_CTL_BMSK; msm_hs_write(uport, UARTDM_MR1_ADDR, data); - /* RFR is active low, set high */ + /* set RFR_N to high */ msm_hs_write(uport, UARTDM_CR_ADDR, RFR_HIGH); + } else { + /* Enable auto ready-for-receiving */ + data |= UARTDM_MR1_RX_RDY_CTL_BMSK; + msm_hs_write(uport, UARTDM_MR1_ADDR, data); } + mb(); } -/* - * Standard API, used to set or clear RFR - */ -static void msm_hs_set_mctrl_locked(struct uart_port *uport, +void msm_hs_set_mctrl(struct uart_port *uport, unsigned int mctrl) { - unsigned int auto_rfr; - struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); - - clk_enable(msm_uport->clk); - - auto_rfr = TIOCM_RTS & mctrl ? 1 : 0; - set_rfr_locked(uport, auto_rfr); + unsigned long flags; - clk_disable(msm_uport->clk); + spin_lock_irqsave(&uport->lock, flags); + msm_hs_set_mctrl_locked(uport, mctrl); + spin_unlock_irqrestore(&uport->lock, flags); } +EXPORT_SYMBOL(msm_hs_set_mctrl); /* Standard API, Enable modem status (CTS) interrupt */ static void msm_hs_enable_ms_locked(struct uart_port *uport) { struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); - clk_enable(msm_uport->clk); - /* Enable DELTA_CTS Interrupt */ msm_uport->imr_reg |= UARTDM_ISR_DELTA_CTS_BMSK; msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); + mb(); + +} - clk_disable(msm_uport->clk); +static void msm_hs_flush_buffer_locked(struct uart_port *uport) +{ + struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + if (msm_uport->tx.dma_in_flight) + msm_uport->tty_flush_receive = true; } /* @@ -1065,38 +1446,45 @@ static void msm_hs_enable_ms_locked(struct uart_port *uport) */ static void msm_hs_break_ctl(struct uart_port *uport, int ctl) { - struct 
msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + unsigned long flags; - clk_enable(msm_uport->clk); + spin_lock_irqsave(&uport->lock, flags); msm_hs_write(uport, UARTDM_CR_ADDR, ctl ? START_BREAK : STOP_BREAK); - clk_disable(msm_uport->clk); + mb(); + spin_unlock_irqrestore(&uport->lock, flags); } static void msm_hs_config_port(struct uart_port *uport, int cfg_flags) { unsigned long flags; + struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); - spin_lock_irqsave(&uport->lock, flags); if (cfg_flags & UART_CONFIG_TYPE) { uport->type = PORT_MSM; msm_hs_request_port(uport); } - spin_unlock_irqrestore(&uport->lock, flags); + + if (is_gsbi_uart(msm_uport)) { + if (msm_uport->pclk) + clk_prepare_enable(msm_uport->pclk); + spin_lock_irqsave(&uport->lock, flags); + iowrite32(GSBI_PROTOCOL_UART, msm_uport->mapped_gsbi + + GSBI_CONTROL_ADDR); + spin_unlock_irqrestore(&uport->lock, flags); + if (msm_uport->pclk) + clk_disable_unprepare(msm_uport->pclk); + } } /* Handle CTS changes (Called from interrupt handler) */ static void msm_hs_handle_delta_cts_locked(struct uart_port *uport) { - struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); - - clk_enable(msm_uport->clk); - /* clear interrupt */ msm_hs_write(uport, UARTDM_CR_ADDR, RESET_CTS); + /* Calling CLOCK API. Hence mb() requires here. 
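The RFR handling in msm_hs_set_mctrl_locked() above inverts the modem-control bit because the wire is active low: RFR_N is driven high (stop sending) exactly when TIOCM_RTS is cleared. The mapping reduces to:

```c
#include <assert.h>

/* RFR_N is active low: the driver raises RFR (i.e. tells the far end to
 * stop sending, and disables auto ready-for-receiving first) exactly when
 * TIOCM_RTS is cleared. 0x004 is the standard Linux TIOCM_RTS value. */
#define TIOCM_RTS 0x004

static int rfr_set_high(unsigned int mctrl)
{
	return (mctrl & TIOCM_RTS) ? 0 : 1; /* RTS asserted -> RFR stays low */
}
```

With auto ready-for-receiving re-enabled, the hardware lowers RFR_N on its own once it can accept data, which is why the "set low" branch never writes the line directly.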
*/ + mb(); uport->icount.cts++; - clk_disable(msm_uport->clk); - /* clear the IOCTL TIOCMIWAIT if called */ wake_up_interruptible(&uport->state->port.delta_msr_wait); } @@ -1106,81 +1494,126 @@ static void msm_hs_handle_delta_cts_locked(struct uart_port *uport) * -1 did not clock off, do not retry * 1 if we clocked off */ -static int msm_hs_check_clock_off_locked(struct uart_port *uport) +static int msm_hs_check_clock_off(struct uart_port *uport) { unsigned long sr_status; + unsigned long flags; + int ret; struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); struct circ_buf *tx_buf = &uport->state->xmit; + mutex_lock(&msm_uport->clk_mutex); + spin_lock_irqsave(&uport->lock, flags); + /* Cancel if tx tty buffer is not empty, dma is in flight, - * or tx fifo is not empty, or rx fifo is not empty */ + * or tx fifo is not empty */ if (msm_uport->clk_state != MSM_HS_CLK_REQUEST_OFF || !uart_circ_empty(tx_buf) || msm_uport->tx.dma_in_flight || - (msm_uport->imr_reg & UARTDM_ISR_TXLEV_BMSK) || - !(msm_uport->imr_reg & UARTDM_ISR_RXLEV_BMSK)) { + msm_uport->imr_reg & UARTDM_ISR_TXLEV_BMSK) { + spin_unlock_irqrestore(&uport->lock, flags); + mutex_unlock(&msm_uport->clk_mutex); return -1; } /* Make sure the uart is finished with the last byte */ sr_status = msm_hs_read(uport, UARTDM_SR_ADDR); - if (!(sr_status & UARTDM_SR_TXEMT_BMSK)) + if (!(sr_status & UARTDM_SR_TXEMT_BMSK)) { + spin_unlock_irqrestore(&uport->lock, flags); + mutex_unlock(&msm_uport->clk_mutex); return 0; /* retry */ + } /* Make sure forced RXSTALE flush complete */ switch (msm_uport->clk_req_off_state) { case CLK_REQ_OFF_START: msm_uport->clk_req_off_state = CLK_REQ_OFF_RXSTALE_ISSUED; msm_hs_write(uport, UARTDM_CR_ADDR, FORCE_STALE_EVENT); + /* + * Before returning make sure that device writel completed. + * Hence mb() requires here. 
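msm_hs_check_clock_off() below walks a small state machine (clk_req_off_state) before it is willing to gate the clock: it forces a stale event, waits for the stale interrupt and the rx flush to advance the state, and only RXSTALE_FLUSHED lets it proceed. A reduced model of that progression, returning the driver's retry codes:

```c
#include <assert.h>

/* Reduced model of clk_req_off_state: check_clock_off() issues a forced
 * stale event on the first pass; the stale interrupt and rx completion
 * (not modeled here) advance the state to RXSTALE_FLUSHED, which is the
 * only state where clocking off is allowed. 0 = retry later, 1 = done. */
enum clk_req_off_state {
	CLK_REQ_OFF_START,
	CLK_REQ_OFF_RXSTALE_ISSUED,
	CLK_REQ_OFF_FLUSH_ISSUED,
	CLK_REQ_OFF_RXSTALE_FLUSHED,
};

static int check_clock_off_step(enum clk_req_off_state *st)
{
	switch (*st) {
	case CLK_REQ_OFF_START:
		*st = CLK_REQ_OFF_RXSTALE_ISSUED; /* FORCE_STALE_EVENT */
		return 0;
	case CLK_REQ_OFF_RXSTALE_ISSUED:
	case CLK_REQ_OFF_FLUSH_ISSUED:
		return 0;                         /* flush not complete yet */
	case CLK_REQ_OFF_RXSTALE_FLUSHED:
		return 1;                         /* safe to clock off */
	}
	return 0;
}
```

Each 0 return corresponds to the hrtimer re-arming and trying again later, which is why the function can now drop the spinlock between attempts.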
+ */ + mb(); + spin_unlock_irqrestore(&uport->lock, flags); + mutex_unlock(&msm_uport->clk_mutex); return 0; /* RXSTALE flush not complete - retry */ case CLK_REQ_OFF_RXSTALE_ISSUED: case CLK_REQ_OFF_FLUSH_ISSUED: + spin_unlock_irqrestore(&uport->lock, flags); + mutex_unlock(&msm_uport->clk_mutex); return 0; /* RXSTALE flush not complete - retry */ case CLK_REQ_OFF_RXSTALE_FLUSHED: break; /* continue */ } if (msm_uport->rx.flush != FLUSH_SHUTDOWN) { - if (msm_uport->rx.flush == FLUSH_NONE) + if (msm_uport->rx.flush == FLUSH_NONE) { msm_hs_stop_rx_locked(uport); + msm_uport->rx_discard_flush_issued = true; + } + + spin_unlock_irqrestore(&uport->lock, flags); + if (msm_uport->rx_discard_flush_issued) { + pr_debug("%s(): wainting for flush completion.\n", + __func__); + ret = wait_event_timeout(msm_uport->rx.wait, + msm_uport->rx_discard_flush_issued == false, + RX_FLUSH_COMPLETE_TIMEOUT); + if (!ret) + pr_err("%s(): Flush complete pending.\n", + __func__); + } + + mutex_unlock(&msm_uport->clk_mutex); return 0; /* come back later to really clock off */ } + spin_unlock_irqrestore(&uport->lock, flags); + /* we really want to clock off */ - clk_disable(msm_uport->clk); + clk_disable_unprepare(msm_uport->clk); + if (msm_uport->pclk) + clk_disable_unprepare(msm_uport->pclk); + msm_uport->clk_state = MSM_HS_CLK_OFF; - if (use_low_power_rx_wakeup(msm_uport)) { - msm_uport->rx_wakeup.ignore = 1; - enable_irq(msm_uport->rx_wakeup.irq); + spin_lock_irqsave(&uport->lock, flags); + if (use_low_power_wakeup(msm_uport)) { + msm_uport->wakeup.ignore = 1; + enable_irq(msm_uport->wakeup.irq); } + + spin_unlock_irqrestore(&uport->lock, flags); + mutex_unlock(&msm_uport->clk_mutex); return 1; } -static enum hrtimer_restart msm_hs_clk_off_retry(struct hrtimer *timer) +static void hsuart_clock_off_work(struct work_struct *w) { - unsigned long flags; - int ret = HRTIMER_NORESTART; - struct msm_hs_port *msm_uport = container_of(timer, struct msm_hs_port, - clk_off_timer); + struct 
msm_hs_port *msm_uport = container_of(w, struct msm_hs_port, + clock_off_w); struct uart_port *uport = &msm_uport->uport; - spin_lock_irqsave(&uport->lock, flags); - - if (!msm_hs_check_clock_off_locked(uport)) { - hrtimer_forward_now(timer, msm_uport->clk_off_delay); - ret = HRTIMER_RESTART; + if (!msm_hs_check_clock_off(uport)) { + hrtimer_start(&msm_uport->clk_off_timer, + msm_uport->clk_off_delay, + HRTIMER_MODE_REL); } +} - spin_unlock_irqrestore(&uport->lock, flags); +static enum hrtimer_restart msm_hs_clk_off_retry(struct hrtimer *timer) +{ + struct msm_hs_port *msm_uport = container_of(timer, struct msm_hs_port, + clk_off_timer); - return ret; + queue_work(msm_uport->hsuart_wq, &msm_uport->clock_off_w); + return HRTIMER_NORESTART; } static irqreturn_t msm_hs_isr(int irq, void *dev) { unsigned long flags; unsigned long isr_status; - struct msm_hs_port *msm_uport = dev; + struct msm_hs_port *msm_uport = (struct msm_hs_port *)dev; struct uart_port *uport = &msm_uport->uport; struct circ_buf *tx_buf = &uport->state->xmit; struct msm_hs_tx *tx = &msm_uport->tx; @@ -1188,24 +1621,37 @@ static irqreturn_t msm_hs_isr(int irq, void *dev) spin_lock_irqsave(&uport->lock, flags); + if (msm_uport->is_shutdown) { + spin_unlock_irqrestore(&uport->lock, flags); + return IRQ_HANDLED; + } + isr_status = msm_hs_read(uport, UARTDM_MISR_ADDR); /* Uart RX starting */ if (isr_status & UARTDM_ISR_RXLEV_BMSK) { msm_uport->imr_reg &= ~UARTDM_ISR_RXLEV_BMSK; msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); + /* Complete device write for IMR. Hence mb() requires. */ + mb(); } /* Stale rx interrupt */ if (isr_status & UARTDM_ISR_RXSTALE_BMSK) { msm_hs_write(uport, UARTDM_CR_ADDR, STALE_EVENT_DISABLE); msm_hs_write(uport, UARTDM_CR_ADDR, RESET_STALE_INT); + /* + * Complete device write before calling DMOV API. Hence + * mb() requires here. 
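The hrtimer above no longer performs the clock-off check itself (the check can now sleep on clk_mutex), it just queues clock_off_w, and the work re-arms the timer while the check keeps saying "retry". The resulting bounded retry loop looks like this, with the check replaced by a stub that succeeds on the n-th attempt:

```c
#include <assert.h>

/* Model of the clk_off_timer / clock_off_w retry loop: timer fires,
 * queues the work, the work runs the check and re-arms the timer until
 * the check finally succeeds. The stub succeeds on attempt `succeed_on`. */
static int attempts;

static int check_clock_off_stub(int succeed_on)
{
	return ++attempts >= succeed_on;
}

static int retry_until_clocked_off(int succeed_on, int max_retries)
{
	int i;

	attempts = 0;
	for (i = 0; i < max_retries; i++)   /* timer -> work -> check */
		if (check_clock_off_stub(succeed_on))
			return 1;           /* clocked off */
	return 0;                           /* still busy, gave up */
}
```

Moving the check out of hard-irq context is what allows msm_hs_check_clock_off() to take a mutex and to wait_event_timeout() on the rx flush.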
+ */ + mb(); if (msm_uport->clk_req_off_state == CLK_REQ_OFF_RXSTALE_ISSUED) msm_uport->clk_req_off_state = - CLK_REQ_OFF_FLUSH_ISSUED; + CLK_REQ_OFF_FLUSH_ISSUED; + if (rx->flush == FLUSH_NONE) { rx->flush = FLUSH_DATA_READY; - msm_dmov_stop_cmd(msm_uport->dma_rx_channel, NULL, 1); + msm_dmov_flush(msm_uport->dma_rx_channel, 1); } } /* tx ready interrupt */ @@ -1218,11 +1664,20 @@ static irqreturn_t msm_hs_isr(int irq, void *dev) msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); } - + /* + * Complete both writes before starting new TX. + * Hence mb() requires here. + */ + mb(); /* Complete DMA TX transactions and submit new transactions */ - tx_buf->tail = (tx_buf->tail + tx->tx_count) & ~UART_XMIT_SIZE; - tx->dma_in_flight = 0; + /* Do not update tx_buf.tail if uart_flush_buffer already + called in serial core */ + if (!msm_uport->tty_flush_receive) + tx_buf->tail = (tx_buf->tail + + tx->tx_count) & ~UART_XMIT_SIZE; + else + msm_uport->tty_flush_receive = false; uport->icount.tx += tx->tx_count; if (tx->tx_ready_int_en) @@ -1235,10 +1690,12 @@ static irqreturn_t msm_hs_isr(int irq, void *dev) /* TX FIFO is empty */ msm_uport->imr_reg &= ~UARTDM_ISR_TXLEV_BMSK; msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); - if (!msm_hs_check_clock_off_locked(uport)) - hrtimer_start(&msm_uport->clk_off_timer, - msm_uport->clk_off_delay, - HRTIMER_MODE_REL); + /* + * Complete device write before starting clock_off request. + * Hence mb() requires here. 
+ */ + mb(); + queue_work(msm_uport->hsuart_wq, &msm_uport->clock_off_w); } /* Change in CTS interrupt */ @@ -1250,51 +1707,59 @@ static irqreturn_t msm_hs_isr(int irq, void *dev) return IRQ_HANDLED; } -void msm_hs_request_clock_off_locked(struct uart_port *uport) -{ +/* request to turn off uart clock once pending TX is flushed */ +void msm_hs_request_clock_off(struct uart_port *uport) { + unsigned long flags; struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + spin_lock_irqsave(&uport->lock, flags); if (msm_uport->clk_state == MSM_HS_CLK_ON) { msm_uport->clk_state = MSM_HS_CLK_REQUEST_OFF; msm_uport->clk_req_off_state = CLK_REQ_OFF_START; - if (!use_low_power_rx_wakeup(msm_uport)) - set_rfr_locked(uport, 0); msm_uport->imr_reg |= UARTDM_ISR_TXLEV_BMSK; msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); + /* + * Complete device write before retuning back. + * Hence mb() requires here. + */ + mb(); } -} - -/** - * msm_hs_request_clock_off - request to (i.e. asynchronously) turn off uart - * clock once pending TX is flushed and Rx DMA command is terminated. - * @uport: uart_port structure for the device instance. - * - * This functions puts the device into a partially active low power mode. It - * waits to complete all pending tx transactions, flushes ongoing Rx DMA - * command and terminates UART side Rx transaction, puts UART HW in non DMA - * mode and then clocks off the device. A client calls this when no UART - * data is expected. msm_request_clock_on() must be called before any further - * UART can be sent or received. 
- */ -void msm_hs_request_clock_off(struct uart_port *uport) -{ - unsigned long flags; - - spin_lock_irqsave(&uport->lock, flags); - msm_hs_request_clock_off_locked(uport); spin_unlock_irqrestore(&uport->lock, flags); } +EXPORT_SYMBOL(msm_hs_request_clock_off); -void msm_hs_request_clock_on_locked(struct uart_port *uport) +void msm_hs_request_clock_on(struct uart_port *uport) { struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + unsigned long flags; unsigned int data; + int ret = 0; + + mutex_lock(&msm_uport->clk_mutex); + spin_lock_irqsave(&uport->lock, flags); switch (msm_uport->clk_state) { case MSM_HS_CLK_OFF: - clk_enable(msm_uport->clk); - disable_irq_nosync(msm_uport->rx_wakeup.irq); - /* fall-through */ + disable_irq_nosync(msm_uport->wakeup.irq); + spin_unlock_irqrestore(&uport->lock, flags); + ret = clk_prepare_enable(msm_uport->clk); + if (ret) { + dev_err(uport->dev, "Clock ON Failure" + "For UART CLK Stalling HSUART\n"); + break; + } + + if (msm_uport->pclk) { + ret = clk_prepare_enable(msm_uport->pclk); + if (unlikely(ret)) { + clk_disable_unprepare(msm_uport->clk); + dev_err(uport->dev, "Clock ON Failure" + "For UART Pclk Stalling HSUART\n"); + break; + } + } + spin_lock_irqsave(&uport->lock, flags); + /* else fall-through */ case MSM_HS_CLK_REQUEST_OFF: if (msm_uport->rx.flush == FLUSH_STOP || msm_uport->rx.flush == FLUSH_SHUTDOWN) { @@ -1302,12 +1767,12 @@ void msm_hs_request_clock_on_locked(struct uart_port *uport) data = msm_hs_read(uport, UARTDM_DMEN_ADDR); data |= UARTDM_RX_DM_EN_BMSK; msm_hs_write(uport, UARTDM_DMEN_ADDR, data); + /* Complete above device write. Hence mb() here. 
*/ + mb(); } hrtimer_try_to_cancel(&msm_uport->clk_off_timer); if (msm_uport->rx.flush == FLUSH_SHUTDOWN) msm_hs_start_rx_locked(uport); - if (!use_low_power_rx_wakeup(msm_uport)) - set_rfr_locked(uport, 1); if (msm_uport->rx.flush == FLUSH_STOP) msm_uport->rx.flush = FLUSH_IGNORE; msm_uport->clk_state = MSM_HS_CLK_ON; @@ -1317,39 +1782,27 @@ void msm_hs_request_clock_on_locked(struct uart_port *uport) case MSM_HS_CLK_PORT_OFF: break; } -} -/** - * msm_hs_request_clock_on - Switch the device from partially active low - * power mode to fully active (i.e. clock on) mode. - * @uport: uart_port structure for the device. - * - * This function switches on the input clock, puts UART HW into DMA mode - * and enqueues an Rx DMA command if the device was in partially active - * mode. It has no effect if called with the device in inactive state. - */ -void msm_hs_request_clock_on(struct uart_port *uport) -{ - unsigned long flags; - - spin_lock_irqsave(&uport->lock, flags); - msm_hs_request_clock_on_locked(uport); spin_unlock_irqrestore(&uport->lock, flags); + mutex_unlock(&msm_uport->clk_mutex); } +EXPORT_SYMBOL(msm_hs_request_clock_on); -static irqreturn_t msm_hs_rx_wakeup_isr(int irq, void *dev) +static irqreturn_t msm_hs_wakeup_isr(int irq, void *dev) { unsigned int wakeup = 0; unsigned long flags; - struct msm_hs_port *msm_uport = dev; + struct msm_hs_port *msm_uport = (struct msm_hs_port *)dev; struct uart_port *uport = &msm_uport->uport; + struct tty_struct *tty = NULL; + struct tty_port *port = NULL; spin_lock_irqsave(&uport->lock, flags); - if (msm_uport->clk_state == MSM_HS_CLK_OFF) { - /* ignore the first irq - it is a pending irq that occurred + if (msm_uport->clk_state == MSM_HS_CLK_OFF) { + /* ignore the first irq - it is a pending irq that occured * before enable_irq() */ - if (msm_uport->rx_wakeup.ignore) - msm_uport->rx_wakeup.ignore = 0; + if (msm_uport->wakeup.ignore) + msm_uport->wakeup.ignore = 0; else wakeup = 1; } @@ -1357,23 +1810,28 @@ static 
irqreturn_t msm_hs_rx_wakeup_isr(int irq, void *dev) if (wakeup) { /* the uart was clocked off during an rx, wake up and * optionally inject char into tty rx */ - msm_hs_request_clock_on_locked(uport); - if (msm_uport->rx_wakeup.inject_rx) { - tty_insert_flip_char(&uport->state->port, - msm_uport->rx_wakeup.rx_to_inject, + spin_unlock_irqrestore(&uport->lock, flags); + msm_hs_request_clock_on(uport); + spin_lock_irqsave(&uport->lock, flags); + if (msm_uport->wakeup.inject_rx) { + tty = uport->state->port.tty; + port = tty->port; + tty_insert_flip_char(port, + msm_uport->wakeup.rx_to_inject, TTY_NORMAL); - queue_work(msm_hs_workqueue, &msm_uport->rx.tty_work); } } spin_unlock_irqrestore(&uport->lock, flags); + if (wakeup && msm_uport->wakeup.inject_rx) + tty_flip_buffer_push(port); return IRQ_HANDLED; } static const char *msm_hs_type(struct uart_port *port) { - return (port->type == PORT_MSM) ? "MSM_HS_UART" : NULL; + return ("MSM HS UART"); } /* Called when port is opened */ @@ -1384,9 +1842,13 @@ static int msm_hs_startup(struct uart_port *uport) unsigned long flags; unsigned int data; struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + struct platform_device *pdev = to_platform_device(uport->dev); + const struct msm_serial_hs_platform_data *pdata = + pdev->dev.platform_data; struct circ_buf *tx_buf = &uport->state->xmit; struct msm_hs_tx *tx = &msm_uport->tx; - struct msm_hs_rx *rx = &msm_uport->rx; + + msm_uport->is_shutdown = false; rfr_level = uport->fifosize; if (rfr_level > 16) @@ -1395,15 +1857,19 @@ static int msm_hs_startup(struct uart_port *uport) tx->dma_base = dma_map_single(uport->dev, tx_buf->buf, UART_XMIT_SIZE, DMA_TO_DEVICE); - /* do not let tty layer execute RX in global workqueue, use a - * dedicated workqueue managed by this driver */ - uport->state->port.low_latency = 1; - /* turn on uart clk */ - ret = msm_hs_init_clk_locked(uport); + ret = msm_hs_init_clk(uport); if (unlikely(ret)) { - printk(KERN_ERR "Turning uartclk failed!\n"); - goto 
err_msm_hs_init_clk; + pr_err("Turning ON uartclk error\n"); + return ret; + } + + if (pdata && pdata->config_gpio) { + ret = msm_hs_config_uart_gpios(uport); + if (ret) + goto deinit_uart_clk; + } else { + pr_debug("%s(): UART GPIOs not specified.\n", __func__); } /* Set auto RFR Level */ @@ -1425,9 +1891,14 @@ static int msm_hs_startup(struct uart_port *uport) data = UARTDM_TX_DM_EN_BMSK | UARTDM_RX_DM_EN_BMSK; msm_hs_write(uport, UARTDM_DMEN_ADDR, data); - /* Reset TX */ - msm_hs_write(uport, UARTDM_CR_ADDR, RESET_TX); - msm_hs_write(uport, UARTDM_CR_ADDR, RESET_RX); + /* Reset both RX and TX HW state machine */ + msm_hs_write(uport, UARTDM_CR_ADDR, (RESET_RX | RESET_TX)); + /* + * Rx and Tx reset operation takes few clock cycles, hence as + * safe side adding 10us delay. + */ + udelay(10); + msm_hs_write(uport, UARTDM_CR_ADDR, RESET_ERROR_STATUS); msm_hs_write(uport, UARTDM_CR_ADDR, RESET_BREAK_INT); msm_hs_write(uport, UARTDM_CR_ADDR, RESET_STALE_INT); @@ -1444,7 +1915,6 @@ static int msm_hs_startup(struct uart_port *uport) tx->dma_in_flight = 0; tx->xfer.complete_func = msm_hs_dmov_tx_callback; - tx->xfer.execute_func = NULL; tx->command_ptr->cmd = CMD_LC | CMD_DST_CRCI(msm_uport->dma_tx_crci) | CMD_MODE_BOX; @@ -1457,63 +1927,67 @@ static int msm_hs_startup(struct uart_port *uport) tx->command_ptr->dst_row_addr = msm_uport->uport.mapbase + UARTDM_TF_ADDR; - - /* Turn on Uart Receive */ - rx->xfer.complete_func = msm_hs_dmov_rx_callback; - rx->xfer.execute_func = NULL; - - rx->command_ptr->cmd = CMD_LC | - CMD_SRC_CRCI(msm_uport->dma_rx_crci) | CMD_MODE_BOX; - - rx->command_ptr->src_dst_len = (MSM_UARTDM_BURST_SIZE << 16) - | (MSM_UARTDM_BURST_SIZE); - rx->command_ptr->row_offset = MSM_UARTDM_BURST_SIZE; - rx->command_ptr->src_row_addr = uport->mapbase + UARTDM_RF_ADDR; - - msm_uport->imr_reg |= UARTDM_ISR_RXSTALE_BMSK; /* Enable reading the current CTS, no harm even if CTS is ignored */ msm_uport->imr_reg |= UARTDM_ISR_CURRENT_CTS_BMSK; msm_hs_write(uport, 
UARTDM_TFWR_ADDR, 0); /* TXLEV on empty TX fifo */ + /* + * Complete all device write related configuration before + * queuing RX request. Hence mb() requires here. + */ + mb(); + if (use_low_power_wakeup(msm_uport)) { + ret = irq_set_irq_wake(msm_uport->wakeup.irq, 1); + if (unlikely(ret)) { + pr_err("%s():Err setting wakeup irq\n", __func__); + goto unconfigure_uart_gpio; + } + } ret = request_irq(uport->irq, msm_hs_isr, IRQF_TRIGGER_HIGH, "msm_hs_uart", msm_uport); if (unlikely(ret)) { - printk(KERN_ERR "Request msm_hs_uart IRQ failed!\n"); - goto err_request_irq; - } - if (use_low_power_rx_wakeup(msm_uport)) { - ret = request_irq(msm_uport->rx_wakeup.irq, - msm_hs_rx_wakeup_isr, - IRQF_TRIGGER_FALLING, - "msm_hs_rx_wakeup", msm_uport); + pr_err("%s():Error getting uart irq\n", __func__); + goto free_wake_irq; + } + if (use_low_power_wakeup(msm_uport)) { + + ret = request_threaded_irq(msm_uport->wakeup.irq, NULL, + msm_hs_wakeup_isr, + IRQF_TRIGGER_FALLING, + "msm_hs_wakeup", msm_uport); + if (unlikely(ret)) { - printk(KERN_ERR "Request msm_hs_rx_wakeup IRQ failed!\n"); - free_irq(uport->irq, msm_uport); - goto err_request_irq; + pr_err("%s():Err getting uart wakeup_irq\n", __func__); + goto free_uart_irq; } - disable_irq(msm_uport->rx_wakeup.irq); + disable_irq(msm_uport->wakeup.irq); } spin_lock_irqsave(&uport->lock, flags); - msm_hs_write(uport, UARTDM_RFWR_ADDR, 0); msm_hs_start_rx_locked(uport); spin_unlock_irqrestore(&uport->lock, flags); - ret = pm_runtime_set_active(uport->dev); - if (ret) - dev_err(uport->dev, "set active error:%d\n", ret); - pm_runtime_enable(uport->dev); + pm_runtime_enable(uport->dev); + printk("%s: 10\n", __func__); return 0; -err_request_irq: -err_msm_hs_init_clk: - dma_unmap_single(uport->dev, tx->dma_base, - UART_XMIT_SIZE, DMA_TO_DEVICE); +free_uart_irq: + free_irq(uport->irq, msm_uport); +free_wake_irq: + irq_set_irq_wake(msm_uport->wakeup.irq, 0); +unconfigure_uart_gpio: + if (pdata && pdata->config_gpio) + 
msm_hs_unconfig_uart_gpios(uport); +deinit_uart_clk: + clk_disable_unprepare(msm_uport->clk); + if (msm_uport->pclk) + clk_disable_unprepare(msm_uport->pclk); + return ret; } @@ -1533,7 +2007,7 @@ static int uartdm_init_port(struct uart_port *uport) tx->command_ptr_ptr = kmalloc(sizeof(u32), GFP_KERNEL | __GFP_DMA); if (!tx->command_ptr_ptr) { ret = -ENOMEM; - goto err_tx_command_ptr_ptr; + goto free_tx_command_ptr; } tx->mapped_cmd_ptr = dma_map_single(uport->dev, tx->command_ptr, @@ -1544,20 +2018,26 @@ static int uartdm_init_port(struct uart_port *uport) tx->xfer.cmdptr = DMOV_CMD_ADDR(tx->mapped_cmd_ptr_ptr); init_waitqueue_head(&rx->wait); + init_waitqueue_head(&tx->wait); + + tasklet_init(&rx->tlet, msm_serial_hs_rx_tlet, + (unsigned long) &rx->tlet); + tasklet_init(&tx->tlet, msm_serial_hs_tx_tlet, + (unsigned long) &tx->tlet); rx->pool = dma_pool_create("rx_buffer_pool", uport->dev, UARTDM_RX_BUF_SIZE, 16, 0); if (!rx->pool) { pr_err("%s(): cannot allocate rx_buffer_pool", __func__); ret = -ENOMEM; - goto err_dma_pool_create; + goto exit_tasket_init; } rx->buffer = dma_pool_alloc(rx->pool, GFP_KERNEL, &rx->rbuffer); if (!rx->buffer) { pr_err("%s(): cannot allocate rx->buffer", __func__); ret = -ENOMEM; - goto err_dma_pool_alloc; + goto free_pool; } /* Allocate the command pointer. 
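The renamed error labels in uartdm_init_port() and msm_hs_startup() (free_rx_command_ptr, free_rx_buffer, free_pool, ...) follow the kernel's goto-unwind idiom: each acquisition that fails jumps to the label that releases everything acquired before it, in reverse order. A stub sketch of the idiom, using hypothetical acquire/release helpers that just count balanced pairs:

```c
#include <assert.h>
#include <stddef.h>

/* Goto-unwind sketch: on failure, jump to the label that undoes only the
 * resources acquired so far, in reverse order. acquire()/release() are
 * illustrative stubs counting live resources. */
static int live;

static void *acquire(int fail)
{
	if (fail)
		return NULL;
	live++;
	return &live;
}

static void release(void)
{
	live--;
}

static int init_port(int fail_at)
{
	void *a, *b, *c;

	a = acquire(fail_at == 1);
	if (!a)
		goto out;
	b = acquire(fail_at == 2);
	if (!b)
		goto free_a;
	c = acquire(fail_at == 3);
	if (!c)
		goto free_b;
	return 0;          /* success: caller owns a, b and c */

free_b:
	release();         /* undo b */
free_a:
	release();         /* undo a: fall-through does the reverse order */
out:
	return -1;
}
```

Falling through successive labels is what makes the cleanup order the exact reverse of the acquisition order, with each failure point needing only a single goto.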
Needs to be 64 bit aligned */ @@ -1565,14 +2045,14 @@ static int uartdm_init_port(struct uart_port *uport) if (!rx->command_ptr) { pr_err("%s(): cannot allocate rx->command_ptr", __func__); ret = -ENOMEM; - goto err_rx_command_ptr; + goto free_rx_buffer; } rx->command_ptr_ptr = kmalloc(sizeof(u32), GFP_KERNEL | __GFP_DMA); if (!rx->command_ptr_ptr) { pr_err("%s(): cannot allocate rx->command_ptr_ptr", __func__); ret = -ENOMEM; - goto err_rx_command_ptr_ptr; + goto free_rx_command_ptr; } rx->command_ptr->num_rows = ((UARTDM_RX_BUF_SIZE >> 4) << 16) | @@ -1580,6 +2060,19 @@ static int uartdm_init_port(struct uart_port *uport) rx->command_ptr->dst_row_addr = rx->rbuffer; + /* Set up Uart Receive */ + msm_hs_write(uport, UARTDM_RFWR_ADDR, 0); + + rx->xfer.complete_func = msm_hs_dmov_rx_callback; + + rx->command_ptr->cmd = CMD_LC | + CMD_SRC_CRCI(msm_uport->dma_rx_crci) | CMD_MODE_BOX; + + rx->command_ptr->src_dst_len = (MSM_UARTDM_BURST_SIZE << 16) + | (MSM_UARTDM_BURST_SIZE); + rx->command_ptr->row_offset = MSM_UARTDM_BURST_SIZE; + rx->command_ptr->src_row_addr = uport->mapbase + UARTDM_RF_ADDR; + rx->mapped_cmd_ptr = dma_map_single(uport->dev, rx->command_ptr, sizeof(dmov_box), DMA_TO_DEVICE); @@ -1589,43 +2082,60 @@ static int uartdm_init_port(struct uart_port *uport) sizeof(u32), DMA_TO_DEVICE); rx->xfer.cmdptr = DMOV_CMD_ADDR(rx->cmdptr_dmaaddr); - INIT_WORK(&rx->tty_work, msm_hs_tty_flip_buffer_work); + INIT_DELAYED_WORK(&rx->flip_insert_work, flip_insert_work); return ret; -err_rx_command_ptr_ptr: +free_rx_command_ptr: kfree(rx->command_ptr); -err_rx_command_ptr: + +free_rx_buffer: dma_pool_free(msm_uport->rx.pool, msm_uport->rx.buffer, - msm_uport->rx.rbuffer); -err_dma_pool_alloc: + msm_uport->rx.rbuffer); + +free_pool: dma_pool_destroy(msm_uport->rx.pool); -err_dma_pool_create: + +exit_tasket_init: + tasklet_kill(&msm_uport->tx.tlet); + tasklet_kill(&msm_uport->rx.tlet); dma_unmap_single(uport->dev, msm_uport->tx.mapped_cmd_ptr_ptr, - sizeof(u32), 
DMA_TO_DEVICE); + sizeof(u32), DMA_TO_DEVICE); dma_unmap_single(uport->dev, msm_uport->tx.mapped_cmd_ptr, - sizeof(dmov_box), DMA_TO_DEVICE); + sizeof(dmov_box), DMA_TO_DEVICE); kfree(msm_uport->tx.command_ptr_ptr); -err_tx_command_ptr_ptr: + +free_tx_command_ptr: kfree(msm_uport->tx.command_ptr); return ret; } +static atomic_t msm_uart_next_id = ATOMIC_INIT(0); + static int msm_hs_probe(struct platform_device *pdev) { - int ret; + int ret, line, size; struct uart_port *uport; struct msm_hs_port *msm_uport; struct resource *resource; - const struct msm_serial_hs_platform_data *pdata = - dev_get_platdata(&pdev->dev); + struct msm_serial_hs_platform_data *pdata = pdev->dev.platform_data; - if (pdev->id < 0 || pdev->id >= UARTDM_NR) { + if (pdev->dev.of_node) + line = of_alias_get_id(pdev->dev.of_node, "serial"); + else + line = pdev->id; + + if (line < 0) + line = atomic_inc_return(&msm_uart_next_id) - 1; + + if (unlikely(line < 0 || line >= 2)) { printk(KERN_ERR "Invalid plaform device ID = %d\n", pdev->id); - return -EINVAL; + return -ENXIO; } - msm_uport = &q_uart_port[pdev->id]; + dev_info(&pdev->dev, "msm_serial_hs: detected port #%d\n", line); + + msm_uport = &q_uart_port[line]; uport = &msm_uport->uport; uport->dev = &pdev->dev; @@ -1633,64 +2143,143 @@ static int msm_hs_probe(struct platform_device *pdev) resource = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (unlikely(!resource)) return -ENXIO; + uport->mapbase = resource->start; /* virtual address */ - uport->mapbase = resource->start; - uport->irq = platform_get_irq(pdev, 0); - if (unlikely(uport->irq < 0)) - return -ENXIO; + size = resource_size(resource); + if (!request_mem_region(uport->mapbase, size, "msm_serial_hs")) + return -EBUSY; + + uport->membase = ioremap(uport->mapbase, PAGE_SIZE); + if (unlikely(!uport->membase)) + return -ENOMEM; - if (unlikely(irq_set_irq_wake(uport->irq, 1))) + uport->irq = platform_get_irq(pdev, 0); + if (unlikely((int)uport->irq < 0)) return -ENXIO; - if (pdata == 
NULL || pdata->rx_wakeup_irq < 0) - msm_uport->rx_wakeup.irq = -1; + if (pdata == NULL) + msm_uport->wakeup.irq = -1; else { - msm_uport->rx_wakeup.irq = pdata->rx_wakeup_irq; - msm_uport->rx_wakeup.ignore = 1; - msm_uport->rx_wakeup.inject_rx = pdata->inject_rx_on_wakeup; - msm_uport->rx_wakeup.rx_to_inject = pdata->rx_to_inject; + msm_uport->wakeup.irq = pdata->wakeup_irq; + msm_uport->wakeup.ignore = 1; + msm_uport->wakeup.inject_rx = pdata->inject_rx_on_wakeup; + msm_uport->wakeup.rx_to_inject = pdata->rx_to_inject; - if (unlikely(msm_uport->rx_wakeup.irq < 0)) + if (unlikely(msm_uport->wakeup.irq < 0)) return -ENXIO; - if (unlikely(irq_set_irq_wake(msm_uport->rx_wakeup.irq, 1))) - return -ENXIO; } - if (pdata == NULL) - msm_uport->exit_lpm_cb = NULL; - else - msm_uport->exit_lpm_cb = pdata->exit_lpm_cb; - + /* Identify UART functional mode as 2-wire or 4-wire. */ + if (pdata && pdata->config_gpio) { + switch (pdata->config_gpio) { + case 4: + if (gpio_is_valid(pdata->uart_tx_gpio) + && gpio_is_valid(pdata->uart_rx_gpio) + && gpio_is_valid(pdata->uart_cts_gpio) + && gpio_is_valid(pdata->uart_rfr_gpio)) { + msm_uport->func_mode = UART_FOUR_WIRE; + } else { + pr_err("%s(): Wrong GPIO Number for 4-Wire.\n", + __func__); + return -EINVAL; + } + break; + case 2: + if (gpio_is_valid(pdata->uart_tx_gpio) + && gpio_is_valid(pdata->uart_rx_gpio)) { + msm_uport->func_mode = UART_TWO_WIRE; + } else { + pr_err("%s(): Wrong GPIO Number for 2-Wire.\n", + __func__); + return -EINVAL; + } + break; + default: + pr_err("%s(): Invalid number of GPIOs.\n", __func__); + pdata->config_gpio = 0; + return -EINVAL; + } + } +/* resource = platform_get_resource_byname(pdev, IORESOURCE_DMA, "uartdm_channels"); if (unlikely(!resource)) return -ENXIO; +*/ + msm_uport->dma_tx_channel = 7; + msm_uport->dma_rx_channel = 6; - msm_uport->dma_tx_channel = resource->start; - msm_uport->dma_rx_channel = resource->end; - +/* resource = platform_get_resource_byname(pdev, IORESOURCE_DMA, 
"uartdm_crci"); if (unlikely(!resource)) return -ENXIO; - - msm_uport->dma_tx_crci = resource->start; - msm_uport->dma_rx_crci = resource->end; +*/ + msm_uport->dma_tx_crci = 6; + msm_uport->dma_rx_crci = 11; uport->iotype = UPIO_MEM; - uport->fifosize = UART_FIFOSIZE; + uport->fifosize = 64; uport->ops = &msm_hs_ops; uport->flags = UPF_BOOT_AUTOCONF; - uport->uartclk = UARTCLK; + uport->uartclk = 7372800; msm_uport->imr_reg = 0x0; - msm_uport->clk = clk_get(&pdev->dev, "uartdm_clk"); + + msm_uport->clk = clk_get(&pdev->dev, "core"); if (IS_ERR(msm_uport->clk)) return PTR_ERR(msm_uport->clk); + msm_uport->pclk = clk_get(&pdev->dev, "iface"); + /* + * Some configurations do not require explicit pclk control so + * do not flag error on pclk get failure. + */ + if (IS_ERR(msm_uport->pclk)) + msm_uport->pclk = NULL; + + ret = clk_set_rate(msm_uport->clk, uport->uartclk); + if (ret) { + printk(KERN_WARNING "Error setting clock rate on UART\n"); + return ret; + } + + msm_uport->hsuart_wq = alloc_workqueue("k_hsuart", + WQ_UNBOUND | WQ_MEM_RECLAIM, 1); + if (!msm_uport->hsuart_wq) { + pr_err("%s(): Unable to create workqueue hsuart_wq\n", + __func__); + return -ENOMEM; + } + + INIT_WORK(&msm_uport->clock_off_w, hsuart_clock_off_work); + mutex_init(&msm_uport->clk_mutex); + + clk_prepare_enable(msm_uport->clk); + if (msm_uport->pclk) + clk_prepare_enable(msm_uport->pclk); + ret = uartdm_init_port(uport); - if (unlikely(ret)) + if (unlikely(ret)) { + clk_disable_unprepare(msm_uport->clk); + if (msm_uport->pclk) + clk_disable_unprepare(msm_uport->pclk); return ret; + } + + /* configure the CR Protection to Enable */ + msm_hs_write(uport, UARTDM_CR_ADDR, CR_PROTECTION_EN); + + clk_disable_unprepare(msm_uport->clk); + if (msm_uport->pclk) + clk_disable_unprepare(msm_uport->pclk); + + /* + * Enable Command register protection before going ahead as this hw + * configuration makes sure that issued cmd to CR register gets complete + * before next issued cmd start. 
Hence mb() requires here. + */ + mb(); msm_uport->clk_state = MSM_HS_CLK_PORT_OFF; hrtimer_init(&msm_uport->clk_off_timer, CLOCK_MONOTONIC, @@ -1698,43 +2287,44 @@ static int msm_hs_probe(struct platform_device *pdev) msm_uport->clk_off_timer.function = msm_hs_clk_off_retry; msm_uport->clk_off_delay = ktime_set(0, 1000000); /* 1ms */ - uport->line = pdev->id; + ret = sysfs_create_file(&pdev->dev.kobj, &dev_attr_clock.attr); + if (unlikely(ret)) + return ret; + + msm_serial_debugfs_init(msm_uport, pdev->id); + return uart_add_one_port(&msm_hs_driver, uport); } static int __init msm_serial_hs_init(void) { - int ret, i; + int ret; + int i; /* Init all UARTS as non-configured */ for (i = 0; i < UARTDM_NR; i++) q_uart_port[i].uport.type = PORT_UNKNOWN; - msm_hs_workqueue = create_singlethread_workqueue("msm_serial_hs"); - if (unlikely(!msm_hs_workqueue)) - return -ENOMEM; - ret = uart_register_driver(&msm_hs_driver); if (unlikely(ret)) { - printk(KERN_ERR "%s failed to load\n", __func__); - goto err_uart_register_driver; + printk(KERN_ERR "%s failed to load\n", __FUNCTION__); + return ret; } + debug_base = debugfs_create_dir("msm_serial_hs", NULL); + if (IS_ERR_OR_NULL(debug_base)) + pr_info("msm_serial_hs: Cannot create debugfs dir\n"); ret = platform_driver_register(&msm_serial_hs_platform_driver); if (ret) { - printk(KERN_ERR "%s failed to load\n", __func__); - goto err_platform_driver_register; + printk(KERN_ERR "%s failed to load\n", __FUNCTION__); + debugfs_remove_recursive(debug_base); + uart_unregister_driver(&msm_hs_driver); + return ret; } - return ret; - -err_platform_driver_register: - uart_unregister_driver(&msm_hs_driver); -err_uart_register_driver: - destroy_workqueue(msm_hs_workqueue); + printk(KERN_INFO "msm_serial_hs module loaded\n"); return ret; } -module_init(msm_serial_hs_init); /* * Called by the upper layer when port is closed. 
@@ -1743,56 +2333,88 @@ module_init(msm_serial_hs_init); */ static void msm_hs_shutdown(struct uart_port *uport) { + int ret; + unsigned int data; unsigned long flags; struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport); + struct platform_device *pdev = to_platform_device(uport->dev); + const struct msm_serial_hs_platform_data *pdata = + pdev->dev.platform_data; - BUG_ON(msm_uport->rx.flush < FLUSH_STOP); spin_lock_irqsave(&uport->lock, flags); - clk_enable(msm_uport->clk); + /* Disable all UART interrupts */ + msm_uport->imr_reg = 0; + msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); + msm_uport->is_shutdown = true; + /* disable UART TX interface to DM */ + data = msm_hs_read(uport, UARTDM_DMEN_ADDR); + data &= ~UARTDM_TX_DM_EN_BMSK; + msm_hs_write(uport, UARTDM_DMEN_ADDR, data); + mb(); + + if (msm_uport->tx.dma_in_flight) { + msm_uport->tx.flush = FLUSH_STOP; + /* discard flush */ + msm_dmov_flush(msm_uport->dma_tx_channel, 0); + spin_unlock_irqrestore(&uport->lock, flags); + ret = wait_event_timeout(msm_uport->tx.wait, + msm_uport->tx.flush == FLUSH_SHUTDOWN, 100); + if (!ret) + pr_err("%s():HSUART TX Stalls.\n", __func__); + } else { + spin_unlock_irqrestore(&uport->lock, flags); + } + + tasklet_kill(&msm_uport->tx.tlet); + BUG_ON(msm_uport->rx.flush < FLUSH_STOP); + wait_event(msm_uport->rx.wait, msm_uport->rx.flush == FLUSH_SHUTDOWN); + tasklet_kill(&msm_uport->rx.tlet); + cancel_delayed_work_sync(&msm_uport->rx.flip_insert_work); + flush_workqueue(msm_uport->hsuart_wq); + pm_runtime_disable(uport->dev); /* Disable the transmitter */ msm_hs_write(uport, UARTDM_CR_ADDR, UARTDM_CR_TX_DISABLE_BMSK); /* Disable the receiver */ msm_hs_write(uport, UARTDM_CR_ADDR, UARTDM_CR_RX_DISABLE_BMSK); - pm_runtime_disable(uport->dev); - pm_runtime_set_suspended(uport->dev); - - /* Free the interrupt */ - free_irq(uport->irq, msm_uport); - if (use_low_power_rx_wakeup(msm_uport)) - free_irq(msm_uport->rx_wakeup.irq, msm_uport); - - msm_uport->imr_reg = 0; - 
msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg); - - wait_event(msm_uport->rx.wait, msm_uport->rx.flush == FLUSH_SHUTDOWN); + /* + * Complete all device write before actually disabling uartclk. + * Hence mb() requires here. + */ + mb(); + if (msm_uport->clk_state != MSM_HS_CLK_OFF) { + /* to balance clk_state */ + clk_disable_unprepare(msm_uport->clk); + if (msm_uport->pclk) + clk_disable_unprepare(msm_uport->pclk); + } - clk_disable(msm_uport->clk); /* to balance local clk_enable() */ - if (msm_uport->clk_state != MSM_HS_CLK_OFF) - clk_disable(msm_uport->clk); /* to balance clk_state */ msm_uport->clk_state = MSM_HS_CLK_PORT_OFF; - dma_unmap_single(uport->dev, msm_uport->tx.dma_base, UART_XMIT_SIZE, DMA_TO_DEVICE); - spin_unlock_irqrestore(&uport->lock, flags); + if (use_low_power_wakeup(msm_uport)) + irq_set_irq_wake(msm_uport->wakeup.irq, 0); - if (cancel_work_sync(&msm_uport->rx.tty_work)) - msm_hs_tty_flip_buffer_work(&msm_uport->rx.tty_work); + /* Free the interrupt */ + free_irq(uport->irq, msm_uport); + if (use_low_power_wakeup(msm_uport)) + free_irq(msm_uport->wakeup.irq, msm_uport); + + if (pdata && pdata->config_gpio) + msm_hs_unconfig_uart_gpios(uport); } static void __exit msm_serial_hs_exit(void) { - flush_workqueue(msm_hs_workqueue); - destroy_workqueue(msm_hs_workqueue); + printk(KERN_INFO "msm_serial_hs module removed\n"); + debugfs_remove_recursive(debug_base); platform_driver_unregister(&msm_serial_hs_platform_driver); uart_unregister_driver(&msm_hs_driver); } -module_exit(msm_serial_hs_exit); -#ifdef CONFIG_PM_RUNTIME static int msm_hs_runtime_idle(struct device *dev) { /* @@ -1807,7 +2429,6 @@ static int msm_hs_runtime_resume(struct device *dev) struct platform_device *pdev = container_of(dev, struct platform_device, dev); struct msm_hs_port *msm_uport = &q_uart_port[pdev->id]; - msm_hs_request_clock_on(&msm_uport->uport); return 0; } @@ -1817,15 +2438,9 @@ static int msm_hs_runtime_suspend(struct device *dev) struct platform_device 
*pdev = container_of(dev, struct platform_device, dev); struct msm_hs_port *msm_uport = &q_uart_port[pdev->id]; - msm_hs_request_clock_off(&msm_uport->uport); return 0; } -#else -#define msm_hs_runtime_idle NULL -#define msm_hs_runtime_resume NULL -#define msm_hs_runtime_suspend NULL -#endif static const struct dev_pm_ops msm_hs_dev_pm_ops = { .runtime_suspend = msm_hs_runtime_suspend, @@ -1833,12 +2448,18 @@ static const struct dev_pm_ops msm_hs_dev_pm_ops = { .runtime_idle = msm_hs_runtime_idle, }; +static const struct of_device_id msm_match_table[] = { + { .compatible = "qcom,msm-uart" }, + { .compatible = "qcom,msm-uartdm-hs" }, + {} +}; + static struct platform_driver msm_serial_hs_platform_driver = { - .probe = msm_hs_probe, + .probe = msm_hs_probe, .remove = msm_hs_remove, .driver = { .name = "msm_serial_hs", - .owner = THIS_MODULE, + .of_match_table = msm_match_table, .pm = &msm_hs_dev_pm_ops, }, }; @@ -1863,13 +2484,15 @@ static struct uart_ops msm_hs_ops = { .startup = msm_hs_startup, .shutdown = msm_hs_shutdown, .set_termios = msm_hs_set_termios, - .pm = msm_hs_pm, .type = msm_hs_type, .config_port = msm_hs_config_port, .release_port = msm_hs_release_port, .request_port = msm_hs_request_port, + .flush_buffer = msm_hs_flush_buffer_locked, }; +module_init(msm_serial_hs_init); +module_exit(msm_serial_hs_exit); MODULE_DESCRIPTION("High Speed UART Driver for the MSM chipset"); MODULE_VERSION("1.2"); MODULE_LICENSE("GPL v2"); diff --git a/drivers/tty/serial/msm_serial_hs.h b/drivers/tty/serial/msm_serial_hs.h new file mode 100644 index 000000000000..d4223d1bf0bd --- /dev/null +++ b/drivers/tty/serial/msm_serial_hs.h @@ -0,0 +1,55 @@ +/* + * Copyright (C) 2008 Google, Inc. + * Copyright (C) 2010-2013, The Linux Foundation. All rights reserved. 
+ * Author: Nick Pelly <npelly@google.com> + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#ifndef __ASM_ARCH_MSM_SERIAL_HS_H +#define __ASM_ARCH_MSM_SERIAL_HS_H + +#include<linux/serial_core.h> + +/** + * struct msm_serial_hs_platform_data - platform device data + * for msm hsuart device + * @wakeup_irq : IRQ line to be configured as Wakeup source. + * @inject_rx_on_wakeup : Set 1 if specific character to be inserted on wakeup + * @rx_to_inject : Character to be inserted on wakeup + * @config_gpio : Pass number of GPIOs to configure + (2 for 2-wire UART, 4 for 4-wire UART) + * @userid : User-defined number to be used to enumerate device as tty<userid> + * @uart_tx_gpio: GPIO number for UART Tx Line. + * @uart_rx_gpio: GPIO number for UART Rx Line. + * @uart_cts_gpio: GPIO number for UART CTS Line. + * @uart_rfr_gpio: GPIO number for UART RFR Line. 
+ * @userid: Number to be used as ttyHS<userid> instead of ttyHS<id> +*/ + +struct msm_serial_hs_platform_data { + int wakeup_irq; /* wakeup irq */ + /* bool: inject char into rx tty on wakeup */ + unsigned char inject_rx_on_wakeup; + char rx_to_inject; + unsigned config_gpio; + int uart_tx_gpio; + int uart_rx_gpio; + int uart_cts_gpio; + int uart_rfr_gpio; + int userid; +}; + +unsigned int msm_hs_tx_empty(struct uart_port *uport); +void msm_hs_request_clock_off(struct uart_port *uport); +void msm_hs_request_clock_on(struct uart_port *uport); +void msm_hs_set_mctrl(struct uart_port *uport, + unsigned int mctrl); +#endif diff --git a/drivers/tty/serial/msm_serial_hs_hwreg.h b/drivers/tty/serial/msm_serial_hs_hwreg.h new file mode 100644 index 000000000000..401797fcf30c --- /dev/null +++ b/drivers/tty/serial/msm_serial_hs_hwreg.h @@ -0,0 +1,207 @@ +/* drivers/serial/msm_serial_hs_hwreg.h + * + * Copyright (c) 2007-2009, 2012, The Linux Foundation. All rights reserved. + * + * All source code in this file is licensed under the following license + * except where indicated. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * See the GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, you can find it at http://www.fsf.org + */ + +#ifndef MSM_SERIAL_HS_HWREG_H +#define MSM_SERIAL_HS_HWREG_H + +#define GSBI_CONTROL_ADDR 0x0 +#define GSBI_PROTOCOL_CODE_MASK 0x30 +#define GSBI_PROTOCOL_I2C_UART 0x60 +#define GSBI_PROTOCOL_UART 0x40 +#define GSBI_PROTOCOL_IDLE 0x0 + +#define TCSR_ADM_1_A_CRCI_MUX_SEL 0x78 +#define TCSR_ADM_1_B_CRCI_MUX_SEL 0x7C +#define ADM1_CRCI_GSBI6_RX_SEL 0x800 +#define ADM1_CRCI_GSBI6_TX_SEL 0x400 + +enum msm_hsl_regs { + UARTDM_MR1, + UARTDM_MR2, + UARTDM_IMR, + UARTDM_SR, + UARTDM_CR, + UARTDM_CSR, + UARTDM_IPR, + UARTDM_ISR, + UARTDM_RX_TOTAL_SNAP, + UARTDM_RFWR, + UARTDM_TFWR, + UARTDM_RF, + UARTDM_TF, + UARTDM_MISR, + UARTDM_DMRX, + UARTDM_NCF_TX, + UARTDM_DMEN, + UARTDM_BCR, + UARTDM_TXFS, + UARTDM_RXFS, + UARTDM_LAST, +}; + +#define UARTDM_MR1_ADDR 0x0 +#define UARTDM_MR2_ADDR 0x4 + +/* write only register */ +#define UARTDM_CSR_ADDR 0x8 +#define UARTDM_CSR_115200 0xFF +#define UARTDM_CSR_57600 0xEE +#define UARTDM_CSR_38400 0xDD +#define UARTDM_CSR_28800 0xCC +#define UARTDM_CSR_19200 0xBB +#define UARTDM_CSR_14400 0xAA +#define UARTDM_CSR_9600 0x99 +#define UARTDM_CSR_7200 0x88 +#define UARTDM_CSR_4800 0x77 +#define UARTDM_CSR_3600 0x66 +#define UARTDM_CSR_2400 0x55 +#define UARTDM_CSR_1200 0x44 +#define UARTDM_CSR_600 0x33 +#define UARTDM_CSR_300 0x22 +#define UARTDM_CSR_150 0x11 +#define UARTDM_CSR_75 0x00 + +/* write only register */ +#define UARTDM_TF_ADDR 0x70 +#define UARTDM_TF2_ADDR 0x74 +#define UARTDM_TF3_ADDR 0x78 +#define UARTDM_TF4_ADDR 0x7C + +/* write only register */ +#define UARTDM_CR_ADDR 0x10 +/* write only register */ +#define UARTDM_IMR_ADDR 0x14 + +#define UARTDM_IPR_ADDR 0x18 +#define UARTDM_TFWR_ADDR 0x1c +#define UARTDM_RFWR_ADDR 0x20 +#define UARTDM_HCR_ADDR 0x24 +#define UARTDM_DMRX_ADDR 0x34 +#define UARTDM_IRDA_ADDR 0x38 +#define UARTDM_DMEN_ADDR 0x3c + +/* UART_DM_NO_CHARS_FOR_TX */ 
+#define UARTDM_NCF_TX_ADDR 0x40 + +#define UARTDM_BADR_ADDR 0x44 + +#define UARTDM_SIM_CFG_ADDR 0x80 + +/* Read Only register */ +#define UARTDM_SR_ADDR 0x8 + +/* Read Only register */ +#define UARTDM_RF_ADDR 0x70 +#define UARTDM_RF2_ADDR 0x74 +#define UARTDM_RF3_ADDR 0x78 +#define UARTDM_RF4_ADDR 0x7C + +/* Read Only register */ +#define UARTDM_MISR_ADDR 0x10 + +/* Read Only register */ +#define UARTDM_ISR_ADDR 0x14 +#define UARTDM_RX_TOTAL_SNAP_ADDR 0x38 + +#define UARTDM_TXFS_ADDR 0x4C +#define UARTDM_RXFS_ADDR 0x50 + +/* Register field Mask Mapping */ +#define UARTDM_SR_RX_BREAK_BMSK BIT(6) +#define UARTDM_SR_PAR_FRAME_BMSK BIT(5) +#define UARTDM_SR_OVERRUN_BMSK BIT(4) +#define UARTDM_SR_TXEMT_BMSK BIT(3) +#define UARTDM_SR_TXRDY_BMSK BIT(2) +#define UARTDM_SR_RXRDY_BMSK BIT(0) + +#define UARTDM_CR_TX_DISABLE_BMSK BIT(3) +#define UARTDM_CR_RX_DISABLE_BMSK BIT(1) +#define UARTDM_CR_TX_EN_BMSK BIT(2) +#define UARTDM_CR_RX_EN_BMSK BIT(0) + +/* UARTDM_CR channel_comman bit value (register field is bits 8:4) */ +#define RESET_RX 0x10 +#define RESET_TX 0x20 +#define RESET_ERROR_STATUS 0x30 +#define RESET_BREAK_INT 0x40 +#define START_BREAK 0x50 +#define STOP_BREAK 0x60 +#define RESET_CTS 0x70 +#define RESET_STALE_INT 0x80 +#define RFR_LOW 0xD0 +#define RFR_HIGH 0xE0 +#define CR_PROTECTION_EN 0x100 +#define STALE_EVENT_ENABLE 0x500 +#define STALE_EVENT_DISABLE 0x600 +#define FORCE_STALE_EVENT 0x400 +#define CLEAR_TX_READY 0x300 +#define RESET_TX_ERROR 0x800 +#define RESET_TX_DONE 0x810 + +#define UARTDM_MR1_AUTO_RFR_LEVEL1_BMSK 0xffffff00 +#define UARTDM_MR1_AUTO_RFR_LEVEL0_BMSK 0x3f +#define UARTDM_MR1_CTS_CTL_BMSK 0x40 +#define UARTDM_MR1_RX_RDY_CTL_BMSK 0x80 + +#define UARTDM_MR2_LOOP_MODE_BMSK 0x80 +#define UARTDM_MR2_ERROR_MODE_BMSK 0x40 +#define UARTDM_MR2_BITS_PER_CHAR_BMSK 0x30 +#define UARTDM_MR2_RX_ZERO_CHAR_OFF 0x100 +#define UARTDM_MR2_RX_ERROR_CHAR_OFF 0x200 +#define UARTDM_MR2_RX_BREAK_ZERO_CHAR_OFF 0x100 + +#define UARTDM_MR2_BITS_PER_CHAR_8 (0x3 << 4) 
+ +/* bits per character configuration */ +#define FIVE_BPC (0 << 4) +#define SIX_BPC (1 << 4) +#define SEVEN_BPC (2 << 4) +#define EIGHT_BPC (3 << 4) + +#define UARTDM_MR2_STOP_BIT_LEN_BMSK 0xc +#define STOP_BIT_ONE (1 << 2) +#define STOP_BIT_TWO (3 << 2) + +#define UARTDM_MR2_PARITY_MODE_BMSK 0x3 + +/* Parity configuration */ +#define NO_PARITY 0x0 +#define EVEN_PARITY 0x2 +#define ODD_PARITY 0x1 +#define SPACE_PARITY 0x3 + +#define UARTDM_IPR_STALE_TIMEOUT_MSB_BMSK 0xffffff80 +#define UARTDM_IPR_STALE_LSB_BMSK 0x1f + +/* These can be used for both ISR and IMR register */ +#define UARTDM_ISR_TX_READY_BMSK BIT(7) +#define UARTDM_ISR_CURRENT_CTS_BMSK BIT(6) +#define UARTDM_ISR_DELTA_CTS_BMSK BIT(5) +#define UARTDM_ISR_RXLEV_BMSK BIT(4) +#define UARTDM_ISR_RXSTALE_BMSK BIT(3) +#define UARTDM_ISR_RXBREAK_BMSK BIT(2) +#define UARTDM_ISR_RXHUNT_BMSK BIT(1) +#define UARTDM_ISR_TXLEV_BMSK BIT(0) + +/* Field definitions for UART_DM_DMEN*/ +#define UARTDM_TX_DM_EN_BMSK 0x1 +#define UARTDM_RX_DM_EN_BMSK 0x2 + +#endif /* MSM_SERIAL_HS_HWREG_H */ diff --git a/fixup.S b/fixup.S new file mode 100644 index 000000000000..42d27fc0c949 --- /dev/null +++ b/fixup.S @@ -0,0 +1,84 @@ +/* Copyright (c) 2013, The Linux Foundation. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are + * met: + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and/or other materials provided + * with the distribution. + * * Neither the name of The Linux Foundation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED + * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, + * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE + * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN + * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + */ +/* Fixup the atags for dragonboard (and possibly other targets). */ +/* + * The bootloader on some targets passes an ATAG in that sets the + * first memory region to be 2MB after the actual start of memory. + * With newer upstream kernels, the PHYS_OFFSET must be a multiple of + * a large boundary (currently 128MB). Without devicetree, we work + * around this with an early init hook, and a fixup that adds a + * reservation. These hooks don't run in time to fix it in Device + * tree. + * + * The following code can be prepended to the zImage. It adjusts the + * memory atag back down to the actual start of memory. The + * assumption is that the device tree will describe the necessary + * memory reservation. The zImage is relocatable, so it is easy to + * prepend this code. + */ + + /* Figure out what the broken mem tag would be */ + mov r8, pc + and r8, r8, #0xf8000000 + add r8, r8, #0x00200000 + + /* R2 is where the atags are passed. r5 on are scratch. */ + mov r5, r2 + ldr r7, .tag_mem + +.next: + /* Load the tag, and check. */ + ldr r6, [r5, #4] + cmp r6, #0 + beq .done + + /* Is the a 'mem' tag. */ + cmp r6, r7 + bne .not_mem + + /* Is this memory base what we want? 
*/ + ldr r6, [r5, #12] + cmp r6, r8 + + subeq r6, r6, #0x200000 + streq r6, [r5, #12] + ldreq r6, [r5, #8] + addeq r6, r6, #0x200000 + streq r6, [r5, #8] + +.not_mem: + /* Move r5 to the next tag. */ + ldr r6, [r5, #0] + add r5, r5, r6, asl #2 + b .next + +.tag_mem: + .word 0x54410002 + +.done: diff --git a/fixup.bin b/fixup.bin Binary files differnew file mode 100644 index 000000000000..67f2a17eaaa8 --- /dev/null +++ b/fixup.bin diff --git a/fixup.txt b/fixup.txt new file mode 100644 index 000000000000..07bcb3c52a0d --- /dev/null +++ b/fixup.txt @@ -0,0 +1,17 @@ + +Fixup loader to boot mainline on Qualcomm platforms that needs adjustment of +ATAG MEM. + +Found at [1], built by issuing: + + arm-eabi-as -o fixup.o fixup.S + arm-eabi-objcopy -O binary fixup.o fixup.bin + + +Use by concatenating together with zImage: + + cat fixup.bin arch/arm/boot/zImage > zImage + + +[1] https://www.codeaurora.org/cgit/quic/kernel/skales/tree/atag-fix/fixup.S + diff --git a/include/dt-bindings/clock/qcom,gcc-msm8960.h b/include/dt-bindings/clock/qcom,gcc-msm8960.h index 7d20eedfee98..e02742fc81cc 100644 --- a/include/dt-bindings/clock/qcom,gcc-msm8960.h +++ b/include/dt-bindings/clock/qcom,gcc-msm8960.h @@ -319,5 +319,7 @@ #define CE3_SRC 303 #define CE3_CORE_CLK 304 #define CE3_H_CLK 305 +#define PLL16 306 +#define PLL17 307 #endif diff --git a/include/dt-bindings/mfd/qcom-rpm.h b/include/dt-bindings/mfd/qcom-rpm.h new file mode 100644 index 000000000000..388a6f3d6165 --- /dev/null +++ b/include/dt-bindings/mfd/qcom-rpm.h @@ -0,0 +1,154 @@ +/* + * This header provides constants for the Qualcomm RPM bindings. + */ + +#ifndef _DT_BINDINGS_MFD_QCOM_RPM_H +#define _DT_BINDINGS_MFD_QCOM_RPM_H + +/* + * Constants use to identify individual resources in the RPM. 
+ */ +#define QCOM_RPM_APPS_FABRIC_ARB 1 +#define QCOM_RPM_APPS_FABRIC_CLK 2 +#define QCOM_RPM_APPS_FABRIC_HALT 3 +#define QCOM_RPM_APPS_FABRIC_IOCTL 4 +#define QCOM_RPM_APPS_FABRIC_MODE 5 +#define QCOM_RPM_APPS_L2_CACHE_CTL 6 +#define QCOM_RPM_CFPB_CLK 7 +#define QCOM_RPM_CXO_BUFFERS 8 +#define QCOM_RPM_CXO_CLK 9 +#define QCOM_RPM_DAYTONA_FABRIC_CLK 10 +#define QCOM_RPM_DDR_DMM 11 +#define QCOM_RPM_EBI1_CLK 12 +#define QCOM_RPM_HDMI_SWITCH 13 +#define QCOM_RPM_MMFPB_CLK 14 +#define QCOM_RPM_MM_FABRIC_ARB 15 +#define QCOM_RPM_MM_FABRIC_CLK 16 +#define QCOM_RPM_MM_FABRIC_HALT 17 +#define QCOM_RPM_MM_FABRIC_IOCTL 18 +#define QCOM_RPM_MM_FABRIC_MODE 19 +#define QCOM_RPM_PLL_4 20 +#define QCOM_RPM_PM8058_LDO0 21 +#define QCOM_RPM_PM8058_LDO1 22 +#define QCOM_RPM_PM8058_LDO2 23 +#define QCOM_RPM_PM8058_LDO3 24 +#define QCOM_RPM_PM8058_LDO4 25 +#define QCOM_RPM_PM8058_LDO5 26 +#define QCOM_RPM_PM8058_LDO6 27 +#define QCOM_RPM_PM8058_LDO7 28 +#define QCOM_RPM_PM8058_LDO8 29 +#define QCOM_RPM_PM8058_LDO9 30 +#define QCOM_RPM_PM8058_LDO10 31 +#define QCOM_RPM_PM8058_LDO11 32 +#define QCOM_RPM_PM8058_LDO12 33 +#define QCOM_RPM_PM8058_LDO13 34 +#define QCOM_RPM_PM8058_LDO14 35 +#define QCOM_RPM_PM8058_LDO15 36 +#define QCOM_RPM_PM8058_LDO16 37 +#define QCOM_RPM_PM8058_LDO17 38 +#define QCOM_RPM_PM8058_LDO18 39 +#define QCOM_RPM_PM8058_LDO19 40 +#define QCOM_RPM_PM8058_LDO20 41 +#define QCOM_RPM_PM8058_LDO21 42 +#define QCOM_RPM_PM8058_LDO22 43 +#define QCOM_RPM_PM8058_LDO23 44 +#define QCOM_RPM_PM8058_LDO24 45 +#define QCOM_RPM_PM8058_LDO25 46 +#define QCOM_RPM_PM8058_LVS0 47 +#define QCOM_RPM_PM8058_LVS1 48 +#define QCOM_RPM_PM8058_NCP 49 +#define QCOM_RPM_PM8058_SMPS0 50 +#define QCOM_RPM_PM8058_SMPS1 51 +#define QCOM_RPM_PM8058_SMPS2 52 +#define QCOM_RPM_PM8058_SMPS3 53 +#define QCOM_RPM_PM8058_SMPS4 54 +#define QCOM_RPM_PM8821_LDO1 55 +#define QCOM_RPM_PM8821_SMPS1 56 +#define QCOM_RPM_PM8821_SMPS2 57 +#define QCOM_RPM_PM8901_LDO0 58 +#define QCOM_RPM_PM8901_LDO1 59 
+#define QCOM_RPM_PM8901_LDO2 60 +#define QCOM_RPM_PM8901_LDO3 61 +#define QCOM_RPM_PM8901_LDO4 62 +#define QCOM_RPM_PM8901_LDO5 63 +#define QCOM_RPM_PM8901_LDO6 64 +#define QCOM_RPM_PM8901_LVS0 65 +#define QCOM_RPM_PM8901_LVS1 66 +#define QCOM_RPM_PM8901_LVS2 67 +#define QCOM_RPM_PM8901_LVS3 68 +#define QCOM_RPM_PM8901_MVS 69 +#define QCOM_RPM_PM8901_SMPS0 70 +#define QCOM_RPM_PM8901_SMPS1 71 +#define QCOM_RPM_PM8901_SMPS2 72 +#define QCOM_RPM_PM8901_SMPS3 73 +#define QCOM_RPM_PM8901_SMPS4 74 +#define QCOM_RPM_PM8921_CLK1 75 +#define QCOM_RPM_PM8921_CLK2 76 +#define QCOM_RPM_PM8921_LDO1 77 +#define QCOM_RPM_PM8921_LDO2 78 +#define QCOM_RPM_PM8921_LDO3 79 +#define QCOM_RPM_PM8921_LDO4 80 +#define QCOM_RPM_PM8921_LDO5 81 +#define QCOM_RPM_PM8921_LDO6 82 +#define QCOM_RPM_PM8921_LDO7 83 +#define QCOM_RPM_PM8921_LDO8 84 +#define QCOM_RPM_PM8921_LDO9 85 +#define QCOM_RPM_PM8921_LDO10 86 +#define QCOM_RPM_PM8921_LDO11 87 +#define QCOM_RPM_PM8921_LDO12 88 +#define QCOM_RPM_PM8921_LDO13 89 +#define QCOM_RPM_PM8921_LDO14 90 +#define QCOM_RPM_PM8921_LDO15 91 +#define QCOM_RPM_PM8921_LDO16 92 +#define QCOM_RPM_PM8921_LDO17 93 +#define QCOM_RPM_PM8921_LDO18 94 +#define QCOM_RPM_PM8921_LDO19 95 +#define QCOM_RPM_PM8921_LDO20 96 +#define QCOM_RPM_PM8921_LDO21 97 +#define QCOM_RPM_PM8921_LDO22 98 +#define QCOM_RPM_PM8921_LDO23 99 +#define QCOM_RPM_PM8921_LDO24 100 +#define QCOM_RPM_PM8921_LDO25 101 +#define QCOM_RPM_PM8921_LDO26 102 +#define QCOM_RPM_PM8921_LDO27 103 +#define QCOM_RPM_PM8921_LDO28 104 +#define QCOM_RPM_PM8921_LDO29 105 +#define QCOM_RPM_PM8921_LVS1 106 +#define QCOM_RPM_PM8921_LVS2 107 +#define QCOM_RPM_PM8921_LVS3 108 +#define QCOM_RPM_PM8921_LVS4 109 +#define QCOM_RPM_PM8921_LVS5 110 +#define QCOM_RPM_PM8921_LVS6 111 +#define QCOM_RPM_PM8921_LVS7 112 +#define QCOM_RPM_PM8921_MVS 113 +#define QCOM_RPM_PM8921_NCP 114 +#define QCOM_RPM_PM8921_SMPS1 115 +#define QCOM_RPM_PM8921_SMPS2 116 +#define QCOM_RPM_PM8921_SMPS3 117 +#define QCOM_RPM_PM8921_SMPS4 118 
+#define QCOM_RPM_PM8921_SMPS5 119 +#define QCOM_RPM_PM8921_SMPS6 120 +#define QCOM_RPM_PM8921_SMPS7 121 +#define QCOM_RPM_PM8921_SMPS8 122 +#define QCOM_RPM_PXO_CLK 123 +#define QCOM_RPM_QDSS_CLK 124 +#define QCOM_RPM_SFPB_CLK 125 +#define QCOM_RPM_SMI_CLK 126 +#define QCOM_RPM_SYS_FABRIC_ARB 127 +#define QCOM_RPM_SYS_FABRIC_CLK 128 +#define QCOM_RPM_SYS_FABRIC_HALT 129 +#define QCOM_RPM_SYS_FABRIC_IOCTL 130 +#define QCOM_RPM_SYS_FABRIC_MODE 131 +#define QCOM_RPM_USB_OTG_SWITCH 132 +#define QCOM_RPM_VDDMIN_GPIO 133 + +/* + * Constants used to select force mode for regulators. + */ +#define QCOM_RPM_FORCE_MODE_NONE 0 +#define QCOM_RPM_FORCE_MODE_LPM 1 +#define QCOM_RPM_FORCE_MODE_HPM 2 +#define QCOM_RPM_FORCE_MODE_AUTO 3 +#define QCOM_RPM_FORCE_MODE_BYPASS 4 + +#endif diff --git a/include/dt-bindings/pinctrl/qcom,pmic-gpio.h b/include/dt-bindings/pinctrl/qcom,pmic-gpio.h new file mode 100644 index 000000000000..fa74d7cc960c --- /dev/null +++ b/include/dt-bindings/pinctrl/qcom,pmic-gpio.h @@ -0,0 +1,142 @@ +/* + * This header provides constants for the Qualcomm PMIC GPIO binding. 
+ */ + +#ifndef _DT_BINDINGS_PINCTRL_QCOM_PMIC_GPIO_H +#define _DT_BINDINGS_PINCTRL_QCOM_PMIC_GPIO_H + +#define PMIC_GPIO_PULL_UP_30 0 +#define PMIC_GPIO_PULL_UP_1P5 1 +#define PMIC_GPIO_PULL_UP_31P5 2 +#define PMIC_GPIO_PULL_UP_1P5_30 3 + +#define PMIC_GPIO_STRENGTH_NO 0 +#define PMIC_GPIO_STRENGTH_HIGH 1 +#define PMIC_GPIO_STRENGTH_MED 2 +#define PMIC_GPIO_STRENGTH_LOW 3 + +/* + * Note: PM8018 GPIO3 and GPIO4 are supporting + * only S3 and L2 options (1.8V) + */ +#define PM8018_GPIO_L6 0 +#define PM8018_GPIO_L5 1 +#define PM8018_GPIO_S3 2 +#define PM8018_GPIO_L14 3 +#define PM8018_GPIO_L2 4 +#define PM8018_GPIO_L4 5 +#define PM8018_GPIO_VDD 6 + +/* + * Note: PM8038 GPIO7 and GPIO8 are supporting + * only L11 and L4 options (1.8V) + */ +#define PM8038_GPIO_VPH 0 +#define PM8038_GPIO_BB 1 +#define PM8038_GPIO_L11 2 +#define PM8038_GPIO_L15 3 +#define PM8038_GPIO_L4 4 +#define PM8038_GPIO_L3 5 +#define PM8038_GPIO_L17 6 + +#define PM8058_GPIO_VPH 0 +#define PM8058_GPIO_BB 1 +#define PM8058_GPIO_S3 2 +#define PM8058_GPIO_L3 3 +#define PM8058_GPIO_L7 4 +#define PM8058_GPIO_L6 5 +#define PM8058_GPIO_L5 6 +#define PM8058_GPIO_L2 7 + +#define PM8917_GPIO_VPH 0 +#define PM8917_GPIO_S4 2 +#define PM8917_GPIO_L15 3 +#define PM8917_GPIO_L4 4 +#define PM8917_GPIO_L3 5 +#define PM8917_GPIO_L17 6 + +#define PM8921_GPIO_VPH 0 +#define PM8921_GPIO_BB 1 +#define PM8921_GPIO_S4 2 +#define PM8921_GPIO_L15 3 +#define PM8921_GPIO_L4 4 +#define PM8921_GPIO_L3 5 +#define PM8921_GPIO_L17 6 + +/* + * Note: PM8941 gpios from 15 to 18 are supporting + * only S3 and L6 options (1.8V) + */ +#define PM8941_GPIO_VPH 0 +#define PM8941_GPIO_L1 1 +#define PM8941_GPIO_S3 2 +#define PM8941_GPIO_L6 3 + +/* + * Note: PMA8084 gpios from 15 to 18 are supporting + * only S4 and L6 options (1.8V) + */ +#define PMA8084_GPIO_VPH 0 +#define PMA8084_GPIO_L1 1 +#define PMA8084_GPIO_S4 2 +#define PMA8084_GPIO_L6 3 + +/* To be used with "function" */ +#define PMIC_GPIO_FUNC_NORMAL "normal" +#define 
PMIC_GPIO_FUNC_PAIRED "paired" +#define PMIC_GPIO_FUNC_FUNC1 "func1" +#define PMIC_GPIO_FUNC_FUNC2 "func2" +#define PMIC_GPIO_FUNC_DTEST1 "dtest1" +#define PMIC_GPIO_FUNC_DTEST2 "dtest2" +#define PMIC_GPIO_FUNC_DTEST3 "dtest3" +#define PMIC_GPIO_FUNC_DTEST4 "dtest4" + +#define PM8038_GPIO1_2_LPG_DRV PMIC_GPIO_FUNC_FUNC1 +#define PM8038_GPIO3_5V_BOOST_EN PMIC_GPIO_FUNC_FUNC1 +#define PM8038_GPIO4_SSBI_ALT_CLK PMIC_GPIO_FUNC_FUNC1 +#define PM8038_GPIO5_6_EXT_REG_EN PMIC_GPIO_FUNC_FUNC1 +#define PM8038_GPIO10_11_EXT_REG_EN PMIC_GPIO_FUNC_FUNC1 +#define PM8038_GPIO6_7_CLK PMIC_GPIO_FUNC_FUNC1 +#define PM8038_GPIO9_BAT_ALRM_OUT PMIC_GPIO_FUNC_FUNC1 +#define PM8038_GPIO6_12_KYPD_DRV PMIC_GPIO_FUNC_FUNC2 + +#define PM8058_GPIO7_8_MP3_CLK PMIC_GPIO_FUNC_FUNC1 +#define PM8058_GPIO7_8_BCLK_19P2MHZ PMIC_GPIO_FUNC_FUNC2 +#define PM8058_GPIO9_26_KYPD_DRV PMIC_GPIO_FUNC_FUNC1 +#define PM8058_GPIO21_23_UART_TX PMIC_GPIO_FUNC_FUNC2 +#define PM8058_GPIO24_26_LPG_DRV PMIC_GPIO_FUNC_FUNC2 +#define PM8058_GPIO33_BCLK_19P2MHZ PMIC_GPIO_FUNC_FUNC1 +#define PM8058_GPIO34_35_MP3_CLK PMIC_GPIO_FUNC_FUNC1 +#define PM8058_GPIO36_BCLK_19P2MHZ PMIC_GPIO_FUNC_FUNC1 +#define PM8058_GPIO37_UPL_OUT PMIC_GPIO_FUNC_FUNC1 +#define PM8058_GPIO37_UART_M_RX PMIC_GPIO_FUNC_FUNC2 +#define PM8058_GPIO38_XO_SLEEP_CLK PMIC_GPIO_FUNC_FUNC1 +#define PM8058_GPIO38_39_CLK_32KHZ PMIC_GPIO_FUNC_FUNC2 +#define PM8058_GPIO39_MP3_CLK PMIC_GPIO_FUNC_FUNC1 +#define PM8058_GPIO40_EXT_BB_EN PMIC_GPIO_FUNC_FUNC1 + +#define PM8917_GPIO9_18_KEYP_DRV PMIC_GPIO_FUNC_FUNC1 +#define PM8917_GPIO20_BAT_ALRM_OUT PMIC_GPIO_FUNC_FUNC1 +#define PM8917_GPIO21_23_UART_TX PMIC_GPIO_FUNC_FUNC2 +#define PM8917_GPIO25_26_EXT_REG_EN PMIC_GPIO_FUNC_FUNC1 +#define PM8917_GPIO37_38_XO_SLEEP_CLK PMIC_GPIO_FUNC_FUNC1 +#define PM8917_GPIO37_38_MP3_CLK PMIC_GPIO_FUNC_FUNC2 + +#define PM8941_GPIO9_14_KYPD_DRV PMIC_GPIO_FUNC_FUNC1 +#define PM8941_GPIO15_18_DIV_CLK PMIC_GPIO_FUNC_FUNC1 +#define PM8941_GPIO15_18_SLEEP_CLK PMIC_GPIO_FUNC_FUNC2 +#define 
PM8941_GPIO23_26_KYPD_DRV PMIC_GPIO_FUNC_FUNC1 +#define PM8941_GPIO23_26_LPG_DRV_HI PMIC_GPIO_FUNC_FUNC2 +#define PM8941_GPIO31_BAT_ALRM_OUT PMIC_GPIO_FUNC_FUNC1 +#define PM8941_GPIO33_36_LPG_DRV_3D PMIC_GPIO_FUNC_FUNC1 +#define PM8941_GPIO33_36_LPG_DRV_HI PMIC_GPIO_FUNC_FUNC2 + +#define PMA8084_GPIO4_5_LPG_DRV PMIC_GPIO_FUNC_FUNC1 +#define PMA8084_GPIO7_10_LPG_DRV PMIC_GPIO_FUNC_FUNC1 +#define PMA8084_GPIO5_14_KEYP_DRV PMIC_GPIO_FUNC_FUNC2 +#define PMA8084_GPIO19_21_KEYP_DRV PMIC_GPIO_FUNC_FUNC2 +#define PMA8084_GPIO15_18_DIV_CLK PMIC_GPIO_FUNC_FUNC1 +#define PMA8084_GPIO15_18_SLEEP_CLK PMIC_GPIO_FUNC_FUNC2 +#define PMA8084_GPIO22_BAT_ALRM_OUT PMIC_GPIO_FUNC_FUNC1 + +#endif diff --git a/include/dt-bindings/pinctrl/qcom,pmic-mpp.h b/include/dt-bindings/pinctrl/qcom,pmic-mpp.h new file mode 100644 index 000000000000..d2c7dabe3223 --- /dev/null +++ b/include/dt-bindings/pinctrl/qcom,pmic-mpp.h @@ -0,0 +1,44 @@ +/* + * This header provides constants for the Qualcomm PMIC's + * Multi-Purpose Pin binding. + */ + +#ifndef _DT_BINDINGS_PINCTRL_QCOM_PMIC_MPP_H +#define _DT_BINDINGS_PINCTRL_QCOM_PMIC_MPP_H + +/* power-source */ +#define PM8841_MPP_VPH 0 +#define PM8841_MPP_S3 2 + +#define PM8941_MPP_VPH 0 +#define PM8941_MPP_L1 1 +#define PM8941_MPP_S3 2 +#define PM8941_MPP_L6 3 + +#define PMA8084_MPP_VPH 0 +#define PMA8084_MPP_L1 1 +#define PMA8084_MPP_S4 2 +#define PMA8084_MPP_L6 3 + +/* + * Analog Input - Set the source for analog input. 
+ * To be used with "qcom,amux-route" property + */ +#define PMIC_MPP_AMUX_ROUTE_CH5 0 +#define PMIC_MPP_AMUX_ROUTE_CH6 1 +#define PMIC_MPP_AMUX_ROUTE_CH7 2 +#define PMIC_MPP_AMUX_ROUTE_CH8 3 +#define PMIC_MPP_AMUX_ROUTE_ABUS1 4 +#define PMIC_MPP_AMUX_ROUTE_ABUS2 5 +#define PMIC_MPP_AMUX_ROUTE_ABUS3 6 +#define PMIC_MPP_AMUX_ROUTE_ABUS4 7 + +/* To be used with "function" */ +#define PMIC_MPP_FUNC_NORMAL "normal" +#define PMIC_MPP_FUNC_PAIRED "paired" +#define PMIC_MPP_FUNC_DTEST1 "dtest1" +#define PMIC_MPP_FUNC_DTEST2 "dtest2" +#define PMIC_MPP_FUNC_DTEST3 "dtest3" +#define PMIC_MPP_FUNC_DTEST4 "dtest4" + +#endif diff --git a/include/linux/clk-private.h b/include/linux/clk-private.h index 228a98724b1e..f6acc91347f9 100644 --- a/include/linux/clk-private.h +++ b/include/linux/clk-private.h @@ -48,8 +48,10 @@ struct clk { struct clk **parents; u8 num_parents; u8 new_parent_index; + u8 safe_parent_index; unsigned long rate; unsigned long new_rate; + struct clk *safe_parent; struct clk *new_parent; struct clk *new_child; unsigned long flags; diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h index 2839c639f092..37e03df0d39c 100644 --- a/include/linux/clk-provider.h +++ b/include/linux/clk-provider.h @@ -179,6 +179,7 @@ struct clk_ops { struct clk **best_parent_clk); int (*set_parent)(struct clk_hw *hw, u8 index); u8 (*get_parent)(struct clk_hw *hw); + struct clk *(*get_safe_parent)(struct clk_hw *hw); int (*set_rate)(struct clk_hw *hw, unsigned long rate, unsigned long parent_rate); int (*set_rate_and_parent)(struct clk_hw *hw, @@ -352,6 +353,17 @@ struct clk_divider { #define CLK_DIVIDER_READ_ONLY BIT(5) extern const struct clk_ops clk_divider_ops; + +unsigned long divider_recalc_rate(struct clk_hw *hw, unsigned long parent_rate, + unsigned int val, const struct clk_div_table *table, + unsigned long flags); +long divider_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *prate, const struct clk_div_table *table, + u8 width, 
unsigned long flags); +int divider_get_val(unsigned long rate, unsigned long parent_rate, + const struct clk_div_table *table, u8 width, + unsigned long flags); + struct clk *clk_register_divider(struct device *dev, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, @@ -382,11 +394,13 @@ struct clk *clk_register_divider_table(struct device *dev, const char *name, * register, and mask of mux bits are in higher 16-bit of this register. * While setting the mux bits, higher 16-bit should also be updated to * indicate changing mux bits. + * CLK_MUX_ROUND_CLOSEST - Use the parent rate that is closest to the desired + * frequency. */ struct clk_mux { struct clk_hw hw; void __iomem *reg; - u32 *table; + unsigned int *table; u32 mask; u8 shift; u8 flags; @@ -396,11 +410,17 @@ struct clk_mux { #define CLK_MUX_INDEX_ONE BIT(0) #define CLK_MUX_INDEX_BIT BIT(1) #define CLK_MUX_HIWORD_MASK BIT(2) -#define CLK_MUX_READ_ONLY BIT(3) /* mux setting cannot be changed */ +#define CLK_MUX_READ_ONLY BIT(3) /* mux can't be changed */ +#define CLK_MUX_ROUND_CLOSEST BIT(4) extern const struct clk_ops clk_mux_ops; extern const struct clk_ops clk_mux_ro_ops; +unsigned int clk_mux_get_parent(struct clk_hw *hw, unsigned int val, + unsigned int *table, unsigned long flags); +unsigned int clk_mux_reindex(u8 index, unsigned int *table, + unsigned long flags); + struct clk *clk_register_mux(struct device *dev, const char *name, const char **parent_names, u8 num_parents, unsigned long flags, void __iomem *reg, u8 shift, u8 width, @@ -409,7 +429,9 @@ struct clk *clk_register_mux(struct device *dev, const char *name, struct clk *clk_register_mux_table(struct device *dev, const char *name, const char **parent_names, u8 num_parents, unsigned long flags, void __iomem *reg, u8 shift, u32 mask, - u8 clk_mux_flags, u32 *table, spinlock_t *lock); + u8 clk_mux_flags, unsigned int *table, spinlock_t *lock); + +void clk_unregister_mux(struct clk *clk); void 
of_fixed_factor_clk_setup(struct device_node *node); @@ -554,7 +576,9 @@ struct clk *__clk_lookup(const char *name); long __clk_mux_determine_rate(struct clk_hw *hw, unsigned long rate, unsigned long *best_parent_rate, struct clk **best_parent_p); - +long __clk_mux_determine_rate_closest(struct clk_hw *hw, unsigned long rate, + unsigned long *best_parent_rate, + struct clk **best_parent_p); /* * FIXME clock api without lock protection */ diff --git a/include/linux/memblock.h b/include/linux/memblock.h index e8cc45307f8f..96b7073ad452 100644 --- a/include/linux/memblock.h +++ b/include/linux/memblock.h @@ -285,6 +285,7 @@ phys_addr_t memblock_end_of_DRAM(void); void memblock_enforce_memory_limit(phys_addr_t memory_limit); int memblock_is_memory(phys_addr_t addr); int memblock_is_region_memory(phys_addr_t base, phys_addr_t size); +int memblock_overlaps_memory(phys_addr_t base, phys_addr_t size); int memblock_is_reserved(phys_addr_t addr); int memblock_is_region_reserved(phys_addr_t base, phys_addr_t size); diff --git a/include/linux/mfd/pm8921-core.h b/include/linux/mfd/pm8921-core.h new file mode 100644 index 000000000000..b31d785761f4 --- /dev/null +++ b/include/linux/mfd/pm8921-core.h @@ -0,0 +1,31 @@ +/* + * Copyright (c) 2014, Sony Mobile Communications AB + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#ifndef __MFD_PM8921_CORE_H +#define __MFD_PM8921_CORE_H + +#include <linux/err.h> + +#if IS_ENABLED(CONFIG_MFD_PM8921_CORE) + +int pm8xxx_read_irq_status(int irq); + +#else +static inline int pm8xxx_read_irq_status(int irq) +{ + return -ENOSYS; +} + +#endif + +#endif diff --git a/include/linux/mfd/qcom_rpm.h b/include/linux/mfd/qcom_rpm.h new file mode 100644 index 000000000000..a7af2edf0f64 --- /dev/null +++ b/include/linux/mfd/qcom_rpm.h @@ -0,0 +1,14 @@ +#ifndef __QCOM_RPM_H__ +#define __QCOM_RPM_H__ + +#include <linux/types.h> + +struct qcom_rpm; + +#define QCOM_RPM_ACTIVE_STATE 0 +#define QCOM_RPM_SLEEP_STATE 1 + +int qcom_rpm_write(struct qcom_rpm *rpm, int state, int resource, u32 *buf, size_t count); +int qcom_rpm_read(struct qcom_rpm *rpm, int resource, u32 *buf, size_t count); + +#endif diff --git a/include/linux/msm_audio.h b/include/linux/msm_audio.h new file mode 100644 index 000000000000..04d4e5b56b21 --- /dev/null +++ b/include/linux/msm_audio.h @@ -0,0 +1,367 @@ +/* include/linux/msm_audio.h + * + * Copyright (C) 2008 Google, Inc. + * Copyright (c) 2012 The Linux Foundation. All rights reserved. + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ + +#ifndef __LINUX_MSM_AUDIO_H +#define __LINUX_MSM_AUDIO_H + +#include <linux/types.h> +#include <linux/ioctl.h> + +/* PCM Audio */ + +#define AUDIO_IOCTL_MAGIC 'a' + +#define AUDIO_START _IOW(AUDIO_IOCTL_MAGIC, 0, unsigned) +#define AUDIO_STOP _IOW(AUDIO_IOCTL_MAGIC, 1, unsigned) +#define AUDIO_FLUSH _IOW(AUDIO_IOCTL_MAGIC, 2, unsigned) +#define AUDIO_GET_CONFIG _IOR(AUDIO_IOCTL_MAGIC, 3, unsigned) +#define AUDIO_SET_CONFIG _IOW(AUDIO_IOCTL_MAGIC, 4, unsigned) +#define AUDIO_GET_STATS _IOR(AUDIO_IOCTL_MAGIC, 5, unsigned) +#define AUDIO_ENABLE_AUDPP _IOW(AUDIO_IOCTL_MAGIC, 6, unsigned) +#define AUDIO_SET_ADRC _IOW(AUDIO_IOCTL_MAGIC, 7, unsigned) +#define AUDIO_SET_EQ _IOW(AUDIO_IOCTL_MAGIC, 8, unsigned) +#define AUDIO_SET_RX_IIR _IOW(AUDIO_IOCTL_MAGIC, 9, unsigned) +#define AUDIO_SET_VOLUME _IOW(AUDIO_IOCTL_MAGIC, 10, unsigned) +#define AUDIO_PAUSE _IOW(AUDIO_IOCTL_MAGIC, 11, unsigned) +#define AUDIO_PLAY_DTMF _IOW(AUDIO_IOCTL_MAGIC, 12, unsigned) +#define AUDIO_GET_EVENT _IOR(AUDIO_IOCTL_MAGIC, 13, unsigned) +#define AUDIO_ABORT_GET_EVENT _IOW(AUDIO_IOCTL_MAGIC, 14, unsigned) +#define AUDIO_REGISTER_PMEM _IOW(AUDIO_IOCTL_MAGIC, 15, unsigned) +#define AUDIO_DEREGISTER_PMEM _IOW(AUDIO_IOCTL_MAGIC, 16, unsigned) +#define AUDIO_ASYNC_WRITE _IOW(AUDIO_IOCTL_MAGIC, 17, unsigned) +#define AUDIO_ASYNC_READ _IOW(AUDIO_IOCTL_MAGIC, 18, unsigned) +#define AUDIO_SET_INCALL _IOW(AUDIO_IOCTL_MAGIC, 19, struct msm_voicerec_mode) +#define AUDIO_GET_NUM_SND_DEVICE _IOR(AUDIO_IOCTL_MAGIC, 20, unsigned) +#define AUDIO_GET_SND_DEVICES _IOWR(AUDIO_IOCTL_MAGIC, 21, \ + struct msm_snd_device_list) +#define AUDIO_ENABLE_SND_DEVICE _IOW(AUDIO_IOCTL_MAGIC, 22, unsigned) +#define AUDIO_DISABLE_SND_DEVICE _IOW(AUDIO_IOCTL_MAGIC, 23, unsigned) +#define AUDIO_ROUTE_STREAM _IOW(AUDIO_IOCTL_MAGIC, 24, \ + struct msm_audio_route_config) +#define AUDIO_GET_PCM_CONFIG _IOR(AUDIO_IOCTL_MAGIC, 30, unsigned) +#define AUDIO_SET_PCM_CONFIG _IOW(AUDIO_IOCTL_MAGIC, 31, unsigned) +#define 
AUDIO_SWITCH_DEVICE _IOW(AUDIO_IOCTL_MAGIC, 32, unsigned) +#define AUDIO_SET_MUTE _IOW(AUDIO_IOCTL_MAGIC, 33, unsigned) +#define AUDIO_UPDATE_ACDB _IOW(AUDIO_IOCTL_MAGIC, 34, unsigned) +#define AUDIO_START_VOICE _IOW(AUDIO_IOCTL_MAGIC, 35, unsigned) +#define AUDIO_STOP_VOICE _IOW(AUDIO_IOCTL_MAGIC, 36, unsigned) +#define AUDIO_REINIT_ACDB _IOW(AUDIO_IOCTL_MAGIC, 39, unsigned) +#define AUDIO_OUTPORT_FLUSH _IOW(AUDIO_IOCTL_MAGIC, 40, unsigned short) +#define AUDIO_SET_ERR_THRESHOLD_VALUE _IOW(AUDIO_IOCTL_MAGIC, 41, \ + unsigned short) +#define AUDIO_GET_BITSTREAM_ERROR_INFO _IOR(AUDIO_IOCTL_MAGIC, 42, \ + struct msm_audio_bitstream_error_info) + +#define AUDIO_SET_SRS_TRUMEDIA_PARAM _IOW(AUDIO_IOCTL_MAGIC, 43, unsigned) + +/* Qualcomm extensions */ +#define AUDIO_SET_STREAM_CONFIG _IOW(AUDIO_IOCTL_MAGIC, 80, \ + struct msm_audio_stream_config) +#define AUDIO_GET_STREAM_CONFIG _IOR(AUDIO_IOCTL_MAGIC, 81, \ + struct msm_audio_stream_config) +#define AUDIO_GET_SESSION_ID _IOR(AUDIO_IOCTL_MAGIC, 82, unsigned short) +#define AUDIO_GET_STREAM_INFO _IOR(AUDIO_IOCTL_MAGIC, 83, \ + struct msm_audio_bitstream_info) +#define AUDIO_SET_PAN _IOW(AUDIO_IOCTL_MAGIC, 84, unsigned) +#define AUDIO_SET_QCONCERT_PLUS _IOW(AUDIO_IOCTL_MAGIC, 85, unsigned) +#define AUDIO_SET_MBADRC _IOW(AUDIO_IOCTL_MAGIC, 86, unsigned) +#define AUDIO_SET_VOLUME_PATH _IOW(AUDIO_IOCTL_MAGIC, 87, \ + struct msm_vol_info) +#define AUDIO_SET_MAX_VOL_ALL _IOW(AUDIO_IOCTL_MAGIC, 88, unsigned) +#define AUDIO_ENABLE_AUDPRE _IOW(AUDIO_IOCTL_MAGIC, 89, unsigned) +#define AUDIO_SET_AGC _IOW(AUDIO_IOCTL_MAGIC, 90, unsigned) +#define AUDIO_SET_NS _IOW(AUDIO_IOCTL_MAGIC, 91, unsigned) +#define AUDIO_SET_TX_IIR _IOW(AUDIO_IOCTL_MAGIC, 92, unsigned) +#define AUDIO_GET_BUF_CFG _IOW(AUDIO_IOCTL_MAGIC, 93, \ + struct msm_audio_buf_cfg) +#define AUDIO_SET_BUF_CFG _IOW(AUDIO_IOCTL_MAGIC, 94, \ + struct msm_audio_buf_cfg) +#define AUDIO_SET_ACDB_BLK _IOW(AUDIO_IOCTL_MAGIC, 95, \ + struct msm_acdb_cmd_device) +#define 
AUDIO_GET_ACDB_BLK _IOW(AUDIO_IOCTL_MAGIC, 96, \ + struct msm_acdb_cmd_device) + +#define AUDIO_REGISTER_ION _IOW(AUDIO_IOCTL_MAGIC, 97, unsigned) +#define AUDIO_DEREGISTER_ION _IOW(AUDIO_IOCTL_MAGIC, 98, unsigned) + +#define AUDIO_MAX_COMMON_IOCTL_NUM 100 + + +#define HANDSET_MIC 0x01 +#define HANDSET_SPKR 0x02 +#define HEADSET_MIC 0x03 +#define HEADSET_SPKR_MONO 0x04 +#define HEADSET_SPKR_STEREO 0x05 +#define SPKR_PHONE_MIC 0x06 +#define SPKR_PHONE_MONO 0x07 +#define SPKR_PHONE_STEREO 0x08 +#define BT_SCO_MIC 0x09 +#define BT_SCO_SPKR 0x0A +#define BT_A2DP_SPKR 0x0B +#define TTY_HEADSET_MIC 0x0C +#define TTY_HEADSET_SPKR 0x0D + +/* Default devices are not supported in a */ +/* device switching context. Only supported */ +/* for stream devices. */ +/* DO NOT USE */ +#define DEFAULT_TX 0x0E +#define DEFAULT_RX 0x0F + +#define BT_A2DP_TX 0x10 + +#define HEADSET_MONO_PLUS_SPKR_MONO_RX 0x11 +#define HEADSET_MONO_PLUS_SPKR_STEREO_RX 0x12 +#define HEADSET_STEREO_PLUS_SPKR_MONO_RX 0x13 +#define HEADSET_STEREO_PLUS_SPKR_STEREO_RX 0x14 + +#define I2S_RX 0x20 +#define I2S_TX 0x21 + +#define ADRC_ENABLE 0x0001 +#define EQ_ENABLE 0x0002 +#define IIR_ENABLE 0x0004 +#define QCONCERT_PLUS_ENABLE 0x0008 +#define MBADRC_ENABLE 0x0010 +#define SRS_ENABLE 0x0020 +#define SRS_DISABLE 0x0040 + +#define AGC_ENABLE 0x0001 +#define NS_ENABLE 0x0002 +#define TX_IIR_ENABLE 0x0004 +#define FLUENCE_ENABLE 0x0008 + +#define VOC_REC_UPLINK 0x00 +#define VOC_REC_DOWNLINK 0x01 +#define VOC_REC_BOTH 0x02 + +struct msm_audio_config { + uint32_t buffer_size; + uint32_t buffer_count; + uint32_t channel_count; + uint32_t sample_rate; + uint32_t type; + uint32_t meta_field; + uint32_t bits; + uint32_t unused[3]; +}; + +struct msm_audio_stream_config { + uint32_t buffer_size; + uint32_t buffer_count; +}; + +struct msm_audio_buf_cfg{ + uint32_t meta_info_enable; + uint32_t frames_per_buf; +}; + +struct msm_audio_stats { + uint32_t byte_count; + uint32_t sample_count; + uint32_t unused[2]; +}; + +struct 
msm_audio_ion_info {
+ int fd;
+ void *vaddr;
+};
+
+struct msm_audio_pmem_info {
+ int fd;
+ void *vaddr;
+};
+
+struct msm_audio_aio_buf {
+ void *buf_addr;
+ uint32_t buf_len;
+ uint32_t data_len;
+ void *private_data;
+ unsigned short mfield_sz; /* only used when the data has a meta field */
+};
+
+/* Audio routing */
+
+#define SND_IOCTL_MAGIC 's'
+
+#define SND_MUTE_UNMUTED 0
+#define SND_MUTE_MUTED 1
+
+struct msm_mute_info {
+ uint32_t mute;
+ uint32_t path;
+};
+
+struct msm_vol_info {
+ uint32_t vol;
+ uint32_t path;
+};
+
+struct msm_voicerec_mode {
+ uint32_t rec_mode;
+};
+
+struct msm_snd_device_config {
+ uint32_t device;
+ uint32_t ear_mute;
+ uint32_t mic_mute;
+};
+
+#define SND_SET_DEVICE _IOW(SND_IOCTL_MAGIC, 2, struct msm_device_config *)
+
+#define SND_METHOD_VOICE 0
+
+struct msm_snd_volume_config {
+ uint32_t device;
+ uint32_t method;
+ uint32_t volume;
+};
+
+#define SND_SET_VOLUME _IOW(SND_IOCTL_MAGIC, 3, struct msm_snd_volume_config *)
+
+/* Returns the number of SND endpoints supported. */
+
+#define SND_GET_NUM_ENDPOINTS _IOR(SND_IOCTL_MAGIC, 4, unsigned *)
+
+struct msm_snd_endpoint {
+ int id; /* input and output */
+ char name[64]; /* output only */
+};
+
+/* Takes an index between 0 and one less than the number returned by
+ * SND_GET_NUM_ENDPOINTS, and returns the SND index and name of a
+ * SND endpoint. On input, the .id field contains the number of the
+ * endpoint, and on exit it contains the SND index, while .name contains
+ * the description of the endpoint.
+ */
+
+#define SND_GET_ENDPOINT _IOWR(SND_IOCTL_MAGIC, 5, struct msm_snd_endpoint *)
+
+
+#define SND_AVC_CTL _IOW(SND_IOCTL_MAGIC, 6, unsigned *)
+#define SND_AGC_CTL _IOW(SND_IOCTL_MAGIC, 7, unsigned *)
+
+struct msm_audio_pcm_config {
+ uint32_t pcm_feedback; /* 0 - disable, > 0 - enable */
+ uint32_t buffer_count; /* Number of buffers to allocate */
+ uint32_t buffer_size; /* Size of buffer for capturing
+ PCM samples */
+};
+
+#define AUDIO_EVENT_SUSPEND 0
+#define AUDIO_EVENT_RESUME 1
+#define AUDIO_EVENT_WRITE_DONE 2
+#define AUDIO_EVENT_READ_DONE 3
+#define AUDIO_EVENT_STREAM_INFO 4
+#define AUDIO_EVENT_BITSTREAM_ERROR_INFO 5
+
+#define AUDIO_CODEC_TYPE_MP3 0
+#define AUDIO_CODEC_TYPE_AAC 1
+
+struct msm_audio_bitstream_info {
+ uint32_t codec_type;
+ uint32_t chan_info;
+ uint32_t sample_rate;
+ uint32_t bit_stream_info;
+ uint32_t bit_rate;
+ uint32_t unused[3];
+};
+
+struct msm_audio_bitstream_error_info {
+ uint32_t dec_id;
+ uint32_t err_msg_indicator;
+ uint32_t err_type;
+};
+
+union msm_audio_event_payload {
+ struct msm_audio_aio_buf aio_buf;
+ struct msm_audio_bitstream_info stream_info;
+ struct msm_audio_bitstream_error_info error_info;
+ int reserved;
+};
+
+struct msm_audio_event {
+ int event_type;
+ int timeout_ms;
+ union msm_audio_event_payload event_payload;
+};
+
+#define MSM_SNDDEV_CAP_RX 0x1
+#define MSM_SNDDEV_CAP_TX 0x2
+#define MSM_SNDDEV_CAP_VOICE 0x4
+
+struct msm_snd_device_info {
+ uint32_t dev_id;
+ uint32_t dev_cap; /* bitmask describing device capabilities */
+ char dev_name[64];
+};
+
+struct msm_snd_device_list {
+ uint32_t num_dev; /* Number of device info entries to retrieve */
+ struct msm_snd_device_info *list;
+};
+
+struct msm_dtmf_config {
+ uint16_t path;
+ uint16_t dtmf_hi;
+ uint16_t dtmf_low;
+ uint16_t duration;
+ uint16_t tx_gain;
+ uint16_t rx_gain;
+ uint16_t mixing;
+};
+
+#define AUDIO_ROUTE_STREAM_VOICE_RX 0
+#define AUDIO_ROUTE_STREAM_VOICE_TX 1
+#define AUDIO_ROUTE_STREAM_PLAYBACK 2
+#define
AUDIO_ROUTE_STREAM_REC 3
+
+struct msm_audio_route_config {
+ uint32_t stream_type;
+ uint32_t stream_id;
+ uint32_t dev_id;
+};
+
+#define AUDIO_MAX_EQ_BANDS 12
+
+struct msm_audio_eq_band {
+ uint16_t band_idx; /* The band index, 0 .. 11 */
+ uint32_t filter_type; /* Filter band type */
+ uint32_t center_freq_hz; /* Filter band center frequency */
+ uint32_t filter_gain; /* Filter band initial gain (dB) */
+ /* Range is +12 dB to -12 dB with 1dB increments. */
+ uint32_t q_factor;
+} __attribute__ ((packed));
+
+struct msm_audio_eq_stream_config {
+ uint32_t enable; /* Number of consecutive bands specified */
+ uint32_t num_bands;
+ struct msm_audio_eq_band eq_bands[AUDIO_MAX_EQ_BANDS];
+} __attribute__ ((packed));
+
+struct msm_acdb_cmd_device {
+ uint32_t command_id;
+ uint32_t device_id;
+ uint32_t network_id;
+ uint32_t sample_rate_id; /* Actual sample rate value */
+ uint32_t interface_id; /* See interface IDs above */
+ uint32_t algorithm_block_id; /* See enumerations above */
+ uint32_t total_bytes; /* Length in bytes used by buffer */
+ uint32_t *phys_buf; /* Physical Address of data */
+};
+
+
+#endif
diff --git a/include/linux/msm_audio_acdb.h b/include/linux/msm_audio_acdb.h
new file mode 100644
index 000000000000..e7f06b53ac84
--- /dev/null
+++ b/include/linux/msm_audio_acdb.h
@@ -0,0 +1,81 @@
+#ifndef __MSM_AUDIO_ACDB_H
+#define __MSM_AUDIO_ACDB_H
+
+#include <linux/msm_audio.h>
+
+#define AUDIO_SET_VOCPROC_CAL _IOW(AUDIO_IOCTL_MAGIC, \
+ (AUDIO_MAX_COMMON_IOCTL_NUM+0), unsigned)
+#define AUDIO_SET_VOCPROC_STREAM_CAL _IOW(AUDIO_IOCTL_MAGIC, \
+ (AUDIO_MAX_COMMON_IOCTL_NUM+1), unsigned)
+#define AUDIO_SET_VOCPROC_VOL_CAL _IOW(AUDIO_IOCTL_MAGIC, \
+ (AUDIO_MAX_COMMON_IOCTL_NUM+2), unsigned)
+#define AUDIO_SET_AUDPROC_RX_CAL _IOW(AUDIO_IOCTL_MAGIC, \
+ (AUDIO_MAX_COMMON_IOCTL_NUM+3), unsigned)
+#define AUDIO_SET_AUDPROC_RX_STREAM_CAL _IOW(AUDIO_IOCTL_MAGIC, \
+ (AUDIO_MAX_COMMON_IOCTL_NUM+4), unsigned)
+#define AUDIO_SET_AUDPROC_RX_VOL_CAL
_IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+5), unsigned) +#define AUDIO_SET_AUDPROC_TX_CAL _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+6), unsigned) +#define AUDIO_SET_AUDPROC_TX_STREAM_CAL _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+7), unsigned) +#define AUDIO_SET_AUDPROC_TX_VOL_CAL _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+8), unsigned) +#define AUDIO_SET_SIDETONE_CAL _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+9), unsigned) +#define AUDIO_SET_ANC_CAL _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+10), unsigned) +#define AUDIO_SET_VOICE_RX_TOPOLOGY _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+11), unsigned) +#define AUDIO_SET_VOICE_TX_TOPOLOGY _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+12), unsigned) +#define AUDIO_SET_ADM_RX_TOPOLOGY _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+13), unsigned) +#define AUDIO_SET_ADM_TX_TOPOLOGY _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+14), unsigned) +#define AUDIO_SET_ASM_TOPOLOGY _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+15), unsigned) +#define AUDIO_SET_AFE_TX_CAL _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+16), unsigned) +#define AUDIO_SET_AFE_RX_CAL _IOW(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_COMMON_IOCTL_NUM+17), unsigned) + + +#define AUDIO_MAX_ACDB_IOCTL (AUDIO_MAX_COMMON_IOCTL_NUM+30) + +/* ACDB structures */ +struct cal_block { + uint32_t cal_size; /* Size of Cal Data */ + uint32_t cal_offset; /* offset pointer to Cal Data */ +}; + +struct sidetone_cal { + uint16_t enable; + uint16_t gain; +}; + +/* For Real-Time Audio Calibration */ +#define AUDIO_GET_RTAC_ADM_INFO _IOR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+1), unsigned) +#define AUDIO_GET_RTAC_VOICE_INFO _IOR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+2), unsigned) +#define AUDIO_GET_RTAC_ADM_CAL _IOWR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+3), unsigned) +#define AUDIO_SET_RTAC_ADM_CAL _IOWR(AUDIO_IOCTL_MAGIC, \ 
+ (AUDIO_MAX_ACDB_IOCTL+4), unsigned) +#define AUDIO_GET_RTAC_ASM_CAL _IOWR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+5), unsigned) +#define AUDIO_SET_RTAC_ASM_CAL _IOWR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+6), unsigned) +#define AUDIO_GET_RTAC_CVS_CAL _IOWR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+7), unsigned) +#define AUDIO_SET_RTAC_CVS_CAL _IOWR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+8), unsigned) +#define AUDIO_GET_RTAC_CVP_CAL _IOWR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+9), unsigned) +#define AUDIO_SET_RTAC_CVP_CAL _IOWR(AUDIO_IOCTL_MAGIC, \ + (AUDIO_MAX_ACDB_IOCTL+10), unsigned) + +#define AUDIO_MAX_RTAC_IOCTL (AUDIO_MAX_ACDB_IOCTL+20) + +#endif /* __MSM_AUDIO_ACDB_H */ diff --git a/include/linux/msm_smd.h b/include/linux/msm_smd.h new file mode 100644 index 000000000000..935c1d1e0f61 --- /dev/null +++ b/include/linux/msm_smd.h @@ -0,0 +1,475 @@ +/* linux/include/asm-arm/arch-msm/msm_smd.h + * + * Copyright (C) 2007 Google, Inc. + * Copyright (c) 2009-2012, The Linux Foundation. All rights reserved. + * Author: Brian Swetland <swetland@google.com> + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ + +#ifndef __ASM_ARCH_MSM_SMD_H +#define __ASM_ARCH_MSM_SMD_H + +#include <linux/io.h> +#include <linux/msm_smsm.h> + +typedef struct smd_channel smd_channel_t; + +#define SMD_MAX_CH_NAME_LEN 20 /* includes null char at end */ + +#define SMD_EVENT_DATA 1 +#define SMD_EVENT_OPEN 2 +#define SMD_EVENT_CLOSE 3 +#define SMD_EVENT_STATUS 4 +#define SMD_EVENT_REOPEN_READY 5 + +/* + * SMD Processor ID's. 
+ *
+ * For all processors that have both SMSM and SMD clients,
+ * the SMSM Processor ID and the SMD Processor ID will
+ * be the same. In cases where a processor only supports
+ * SMD, the entry will only exist in this enum.
+ */
+enum {
+ SMD_APPS = SMSM_APPS,
+ SMD_MODEM = SMSM_MODEM,
+ SMD_Q6 = SMSM_Q6,
+ SMD_WCNSS = SMSM_WCNSS,
+ SMD_DSPS = SMSM_DSPS,
+ SMD_MODEM_Q6_FW,
+ SMD_RPM,
+ NUM_SMD_SUBSYSTEMS,
+};
+
+enum {
+ SMD_APPS_MODEM = 0,
+ SMD_APPS_QDSP,
+ SMD_MODEM_QDSP,
+ SMD_APPS_DSPS,
+ SMD_MODEM_DSPS,
+ SMD_QDSP_DSPS,
+ SMD_APPS_WCNSS,
+ SMD_MODEM_WCNSS,
+ SMD_QDSP_WCNSS,
+ SMD_DSPS_WCNSS,
+ SMD_APPS_Q6FW,
+ SMD_MODEM_Q6FW,
+ SMD_QDSP_Q6FW,
+ SMD_DSPS_Q6FW,
+ SMD_WCNSS_Q6FW,
+ SMD_APPS_RPM,
+ SMD_MODEM_RPM,
+ SMD_QDSP_RPM,
+ SMD_WCNSS_RPM,
+ SMD_NUM_TYPE,
+ SMD_LOOPBACK_TYPE = 100,
+
+};
+
+/*
+ * SMD IRQ Configuration
+ *
+ * Used to initialize IRQ configurations from platform data
+ *
+ * @irq_name: irq_name to query platform data
+ * @irq_id: initialized to -1 in platform data, stores actual irq id on
+ * successful registration
+ * @out_base: if not null then settings used for outgoing interrupt,
+ * initialized from platform data
+ */
+
+struct smd_irq_config {
+ /* incoming interrupt config */
+ const char *irq_name;
+ unsigned long flags;
+ int irq_id;
+ const char *device_name;
+ const void *dev_id;
+
+ /* outgoing interrupt config */
+ uint32_t out_bit_pos;
+ void __iomem *out_base;
+ uint32_t out_offset;
+};
+
+/*
+ * SMD subsystem configurations
+ *
+ * SMD subsystem configurations for platform data. This contains the
+ * M2A and A2M interrupt configurations for both SMD and SMSM per
+ * subsystem.
+ * + * @subsys_name: name of subsystem passed to PIL + * @irq_config_id: unique id for each subsystem + * @edge: maps to actual remote subsystem edge + * + */ +struct smd_subsystem_config { + unsigned irq_config_id; + const char *subsys_name; + int edge; + + struct smd_irq_config smd_int; + struct smd_irq_config smsm_int; + +}; + +/* + * Subsystem Restart Configuration + * + * @disable_smsm_reset_handshake + */ +struct smd_subsystem_restart_config { + int disable_smsm_reset_handshake; +}; + +/* + * Shared Memory Regions + * + * the array of these regions is expected to be in ascending order by phys_addr + * + * @phys_addr: physical base address of the region + * @size: size of the region in bytes + */ +struct smd_smem_regions { + void *phys_addr; + unsigned size; +}; + +struct smd_platform { + uint32_t num_ss_configs; + struct smd_subsystem_config *smd_ss_configs; + struct smd_subsystem_restart_config *smd_ssr_config; + uint32_t num_smem_areas; + struct smd_smem_regions *smd_smem_areas; +}; + +#ifdef CONFIG_QCOM_SMD +/* warning: notify() may be called before open returns */ +int smd_open(const char *name, smd_channel_t **ch, void *priv, + void (*notify)(void *priv, unsigned event)); + +int smd_close(smd_channel_t *ch); + +/* passing a null pointer for data reads and discards */ +int smd_read(smd_channel_t *ch, void *data, int len); +int smd_read_from_cb(smd_channel_t *ch, void *data, int len); +/* Same as smd_read() but takes a data buffer from userspace + * The function might sleep. Only safe to call from user context + */ +int smd_read_user_buffer(smd_channel_t *ch, void *data, int len); + +/* Write to stream channels may do a partial write and return +** the length actually written. +** Write to packet channels will never do a partial write -- +** it will return the requested length written or an error. +*/ +int smd_write(smd_channel_t *ch, const void *data, int len); +/* Same as smd_write() but takes a data buffer from userspace + * The function might sleep. 
Only safe to call from user context + */ +int smd_write_user_buffer(smd_channel_t *ch, const void *data, int len); + +int smd_write_avail(smd_channel_t *ch); +int smd_read_avail(smd_channel_t *ch); + +/* Returns the total size of the current packet being read. +** Returns 0 if no packets available or a stream channel. +*/ +int smd_cur_packet_size(smd_channel_t *ch); + + +#if 0 +/* these are interruptable waits which will block you until the specified +** number of bytes are readable or writable. +*/ +int smd_wait_until_readable(smd_channel_t *ch, int bytes); +int smd_wait_until_writable(smd_channel_t *ch, int bytes); +#endif + +/* these are used to get and set the IF sigs of a channel. + * DTR and RTS can be set; DSR, CTS, CD and RI can be read. + */ +int smd_tiocmget(smd_channel_t *ch); +int smd_tiocmset(smd_channel_t *ch, unsigned int set, unsigned int clear); +int +smd_tiocmset_from_cb(smd_channel_t *ch, unsigned int set, unsigned int clear); +int smd_named_open_on_edge(const char *name, uint32_t edge, smd_channel_t **_ch, + void *priv, void (*notify)(void *, unsigned)); + +/* Tells the other end of the smd channel that this end wants to recieve + * interrupts when the written data is read. Read interrupts should only + * enabled when there is no space left in the buffer to write to, thus the + * interrupt acts as notification that space may be avaliable. If the + * other side does not support enabling/disabling interrupts on demand, + * then this function has no effect if called. + */ +void smd_enable_read_intr(smd_channel_t *ch); + +/* Tells the other end of the smd channel that this end does not want + * interrupts when written data is read. The interrupts should be + * disabled by default. If the other side does not support enabling/ + * disabling interrupts on demand, then this function has no effect if + * called. 
+ */ +void smd_disable_read_intr(smd_channel_t *ch); + +/** + * Enable/disable receive interrupts for the remote processor used by a + * particular channel. + * @ch: open channel handle to use for the edge + * @mask: 1 = mask interrupts; 0 = unmask interrupts + * @returns: 0 for success; < 0 for failure + * + * Note that this enables/disables all interrupts from the remote subsystem for + * all channels. As such, it should be used with care and only for specific + * use cases such as power-collapse sequencing. + */ +int smd_mask_receive_interrupt(smd_channel_t *ch, bool mask); + +/* Starts a packet transaction. The size of the packet may exceed the total + * size of the smd ring buffer. + * + * @ch: channel to write the packet to + * @len: total length of the packet + * + * Returns: + * 0 - success + * -ENODEV - invalid smd channel + * -EACCES - non-packet channel specified + * -EINVAL - invalid length + * -EBUSY - transaction already in progress + * -EAGAIN - no enough memory in ring buffer to start transaction + * -EPERM - unable to sucessfully start transaction due to write error + */ +int smd_write_start(smd_channel_t *ch, int len); + +/* Writes a segment of the packet for a packet transaction. + * + * @ch: channel to write packet to + * @data: buffer of data to write + * @len: length of data buffer + * @user_buf: (0) - buffer from kernelspace (1) - buffer from userspace + * + * Returns: + * number of bytes written + * -ENODEV - invalid smd channel + * -EINVAL - invalid length + * -ENOEXEC - transaction not started + */ +int smd_write_segment(smd_channel_t *ch, void *data, int len, int user_buf); + +/* Completes a packet transaction. Do not call from interrupt context. 
+ * + * @ch: channel to complete transaction on + * + * Returns: + * 0 - success + * -ENODEV - invalid smd channel + * -E2BIG - some amount of packet is not yet written + */ +int smd_write_end(smd_channel_t *ch); + +/* + * Returns a pointer to the subsystem name or NULL if no + * subsystem name is available. + * + * @type - Edge definition + */ +const char *smd_edge_to_subsystem(uint32_t type); + +/* + * Returns a pointer to the subsystem name given the + * remote processor ID. + * + * @pid Remote processor ID + * @returns Pointer to subsystem name or NULL if not found + */ +const char *smd_pid_to_subsystem(uint32_t pid); + +/* + * Checks to see if a new packet has arrived on the channel. Only to be + * called with interrupts disabled. + * + * @ch: channel to check if a packet has arrived + * + * Returns: + * 0 - packet not available + * 1 - packet available + * -EINVAL - NULL parameter or non-packet based channel provided + */ +int smd_is_pkt_avail(smd_channel_t *ch); + +/** + * smd_module_init_notifier_register() - Register a smd module + * init notifier block + * @nb: Notifier block to be registered + * + * In order to mark the dependency on SMD Driver module initialization + * register a notifier using this API. Once the smd module_init is + * done, notification will be passed to the registered module. + */ +int smd_module_init_notifier_register(struct notifier_block *nb); + +/** + * smd_module_init_notifier_unregister() - Unregister a smd module + * init notifier block + * @nb: Notifier block to be unregistered + */ +int smd_module_init_notifier_unregister(struct notifier_block *nb); + +/* + * SMD initialization function that registers an SMD platform driver. + * + * returns success on successful driver registration.
+ */ +int __init msm_smd_init(void); + +#else + +static inline int smd_open(const char *name, smd_channel_t **ch, void *priv, + void (*notify)(void *priv, unsigned event)) +{ + return -ENODEV; +} + +static inline int smd_close(smd_channel_t *ch) +{ + return -ENODEV; +} + +static inline int smd_read(smd_channel_t *ch, void *data, int len) +{ + return -ENODEV; +} + +static inline int smd_read_from_cb(smd_channel_t *ch, void *data, int len) +{ + return -ENODEV; +} + +static inline int smd_read_user_buffer(smd_channel_t *ch, void *data, int len) +{ + return -ENODEV; +} + +static inline int smd_write(smd_channel_t *ch, const void *data, int len) +{ + return -ENODEV; +} + +static inline int +smd_write_user_buffer(smd_channel_t *ch, const void *data, int len) +{ + return -ENODEV; +} + +static inline int smd_write_avail(smd_channel_t *ch) +{ + return -ENODEV; +} + +static inline int smd_read_avail(smd_channel_t *ch) +{ + return -ENODEV; +} + +static inline int smd_cur_packet_size(smd_channel_t *ch) +{ + return -ENODEV; +} + +static inline int smd_tiocmget(smd_channel_t *ch) +{ + return -ENODEV; +} + +static inline int +smd_tiocmset(smd_channel_t *ch, unsigned int set, unsigned int clear) +{ + return -ENODEV; +} + +static inline int +smd_tiocmset_from_cb(smd_channel_t *ch, unsigned int set, unsigned int clear) +{ + return -ENODEV; +} + +static inline int +smd_named_open_on_edge(const char *name, uint32_t edge, smd_channel_t **_ch, + void *priv, void (*notify)(void *, unsigned)) +{ + return -ENODEV; +} + +static inline void smd_enable_read_intr(smd_channel_t *ch) +{ +} + +static inline void smd_disable_read_intr(smd_channel_t *ch) +{ +} + +static inline int smd_mask_receive_interrupt(smd_channel_t *ch, bool mask) +{ + return -ENODEV; +} + +static inline int smd_write_start(smd_channel_t *ch, int len) +{ + return -ENODEV; +} + +static inline int +smd_write_segment(smd_channel_t *ch, void *data, int len, int user_buf) +{ + return -ENODEV; +} + +static inline int 
smd_write_end(smd_channel_t *ch) +{ + return -ENODEV; +} + +static inline const char *smd_edge_to_subsystem(uint32_t type) +{ + return NULL; +} + +static inline const char *smd_pid_to_subsystem(uint32_t pid) +{ + return NULL; +} + +static inline int smd_is_pkt_avail(smd_channel_t *ch) +{ + return -ENODEV; +} + +static inline int smd_module_init_notifier_register(struct notifier_block *nb) +{ + return -ENODEV; +} + +static inline int smd_module_init_notifier_unregister(struct notifier_block *nb) +{ + return -ENODEV; +} + +static inline int __init msm_smd_init(void) +{ + return 0; +} +#endif + +#endif diff --git a/include/linux/msm_smsm.h b/include/linux/msm_smsm.h new file mode 100644 index 000000000000..68adad3b942d --- /dev/null +++ b/include/linux/msm_smsm.h @@ -0,0 +1,259 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#ifndef _ARCH_ARM_MACH_MSM_SMSM_H_ +#define _ARCH_ARM_MACH_MSM_SMSM_H_ + +#include <linux/notifier.h> + +#define CONFIG_MSM_N_WAY_SMSM + +#if defined(CONFIG_MSM_N_WAY_SMSM) +enum { + SMSM_APPS_STATE, + SMSM_MODEM_STATE, + SMSM_Q6_STATE, + SMSM_APPS_DEM, + SMSM_WCNSS_STATE = SMSM_APPS_DEM, + SMSM_MODEM_DEM, + SMSM_DSPS_STATE = SMSM_MODEM_DEM, + SMSM_Q6_DEM, + SMSM_POWER_MASTER_DEM, + SMSM_TIME_MASTER_DEM, +}; +extern uint32_t SMSM_NUM_ENTRIES; +#else +enum { + SMSM_APPS_STATE = 1, + SMSM_MODEM_STATE = 3, + SMSM_NUM_ENTRIES, +}; +#endif + +enum { + SMSM_APPS, + SMSM_MODEM, + SMSM_Q6, + SMSM_WCNSS, + SMSM_DSPS, +}; +extern uint32_t SMSM_NUM_HOSTS; + +#define SMSM_INIT 0x00000001 +#define SMSM_OSENTERED 0x00000002 +#define SMSM_SMDWAIT 0x00000004 +#define SMSM_SMDINIT 0x00000008 +#define SMSM_RPCWAIT 0x00000010 +#define SMSM_RPCINIT 0x00000020 +#define SMSM_RESET 0x00000040 +#define SMSM_RSA 0x00000080 +#define SMSM_RUN 0x00000100 +#define SMSM_PWRC 0x00000200 +#define SMSM_TIMEWAIT 0x00000400 +#define SMSM_TIMEINIT 0x00000800 +#define SMSM_PROC_AWAKE 0x00001000 +#define SMSM_WFPI 0x00002000 +#define SMSM_SLEEP 0x00004000 +#define SMSM_SLEEPEXIT 0x00008000 +#define SMSM_OEMSBL_RELEASE 0x00010000 +#define SMSM_APPS_REBOOT 0x00020000 +#define SMSM_SYSTEM_POWER_DOWN 0x00040000 +#define SMSM_SYSTEM_REBOOT 0x00080000 +#define SMSM_SYSTEM_DOWNLOAD 0x00100000 +#define SMSM_PWRC_SUSPEND 0x00200000 +#define SMSM_APPS_SHUTDOWN 0x00400000 +#define SMSM_SMD_LOOPBACK 0x00800000 +#define SMSM_RUN_QUIET 0x01000000 +#define SMSM_MODEM_WAIT 0x02000000 +#define SMSM_MODEM_BREAK 0x04000000 +#define SMSM_MODEM_CONTINUE 0x08000000 +#define SMSM_SYSTEM_REBOOT_USR 0x20000000 +#define SMSM_SYSTEM_PWRDWN_USR 0x40000000 +#define SMSM_UNKNOWN 0x80000000 + +#define SMSM_WKUP_REASON_RPC 0x00000001 +#define SMSM_WKUP_REASON_INT 0x00000002 +#define SMSM_WKUP_REASON_GPIO 0x00000004 +#define SMSM_WKUP_REASON_TIMER 0x00000008 +#define SMSM_WKUP_REASON_ALARM 0x00000010 +#define 
SMSM_WKUP_REASON_RESET 0x00000020 +#define SMSM_A2_FORCE_SHUTDOWN 0x00002000 +#define SMSM_A2_RESET_BAM 0x00004000 + +#define SMSM_VENDOR 0x00020000 + +#define SMSM_A2_POWER_CONTROL 0x00000002 +#define SMSM_A2_POWER_CONTROL_ACK 0x00000800 + +#define SMSM_WLAN_TX_RINGS_EMPTY 0x00000200 +#define SMSM_WLAN_TX_ENABLE 0x00000400 + +#define SMSM_SUBSYS2AP_STATUS 0x00008000 + +#ifdef CONFIG_QCOM_SMD +void *smem_alloc(unsigned id, unsigned size); +#else +static inline void *smem_alloc(unsigned id, unsigned size) +{ + return NULL; +} +#endif +void *smem_alloc2(unsigned id, unsigned size_in); +void *smem_get_entry(unsigned id, unsigned *size); +int smsm_change_state(uint32_t smsm_entry, + uint32_t clear_mask, uint32_t set_mask); + +/* + * Changes the global interrupt mask. The set and clear masks are re-applied + * every time the global interrupt mask is updated for callback registration + * and de-registration. + * + * The clear mask is applied first, so if a bit is set to 1 in both the clear + * mask and the set mask, the result will be that the interrupt is set.
+ * + * @smsm_entry SMSM entry to change + * @clear_mask 1 = clear bit, 0 = no-op + * @set_mask 1 = set bit, 0 = no-op + * + * @returns 0 for success, < 0 for error + */ +int smsm_change_intr_mask(uint32_t smsm_entry, + uint32_t clear_mask, uint32_t set_mask); +int smsm_get_intr_mask(uint32_t smsm_entry, uint32_t *intr_mask); +uint32_t smsm_get_state(uint32_t smsm_entry); +int smsm_state_cb_register(uint32_t smsm_entry, uint32_t mask, + void (*notify)(void *, uint32_t old_state, uint32_t new_state), + void *data); +int smsm_state_cb_deregister(uint32_t smsm_entry, uint32_t mask, + void (*notify)(void *, uint32_t, uint32_t), void *data); +void smsm_print_sleep_info(uint32_t sleep_delay, uint32_t sleep_limit, + uint32_t irq_mask, uint32_t wakeup_reason, uint32_t pending_irqs); +void smsm_reset_modem(unsigned mode); +void smsm_reset_modem_cont(void); +void smd_sleep_exit(void); + +#define SMEM_NUM_SMD_STREAM_CHANNELS 64 +#define SMEM_NUM_SMD_BLOCK_CHANNELS 64 + +enum { + /* fixed items */ + SMEM_PROC_COMM = 0, + SMEM_HEAP_INFO, + SMEM_ALLOCATION_TABLE, + SMEM_VERSION_INFO, + SMEM_HW_RESET_DETECT, + SMEM_AARM_WARM_BOOT, + SMEM_DIAG_ERR_MESSAGE, + SMEM_SPINLOCK_ARRAY, + SMEM_MEMORY_BARRIER_LOCATION, + SMEM_FIXED_ITEM_LAST = SMEM_MEMORY_BARRIER_LOCATION, + + /* dynamic items */ + SMEM_AARM_PARTITION_TABLE, + SMEM_AARM_BAD_BLOCK_TABLE, + SMEM_RESERVE_BAD_BLOCKS, + SMEM_WM_UUID, + SMEM_CHANNEL_ALLOC_TBL, + SMEM_SMD_BASE_ID, + SMEM_SMEM_LOG_IDX = SMEM_SMD_BASE_ID + SMEM_NUM_SMD_STREAM_CHANNELS, + SMEM_SMEM_LOG_EVENTS, + SMEM_SMEM_STATIC_LOG_IDX, + SMEM_SMEM_STATIC_LOG_EVENTS, + SMEM_SMEM_SLOW_CLOCK_SYNC, + SMEM_SMEM_SLOW_CLOCK_VALUE, + SMEM_BIO_LED_BUF, + SMEM_SMSM_SHARED_STATE, + SMEM_SMSM_INT_INFO, + SMEM_SMSM_SLEEP_DELAY, + SMEM_SMSM_LIMIT_SLEEP, + SMEM_SLEEP_POWER_COLLAPSE_DISABLED, + SMEM_KEYPAD_KEYS_PRESSED, + SMEM_KEYPAD_STATE_UPDATED, + SMEM_KEYPAD_STATE_IDX, + SMEM_GPIO_INT, + SMEM_MDDI_LCD_IDX, + SMEM_MDDI_HOST_DRIVER_STATE, + SMEM_MDDI_LCD_DISP_STATE, + 
SMEM_LCD_CUR_PANEL, + SMEM_MARM_BOOT_SEGMENT_INFO, + SMEM_AARM_BOOT_SEGMENT_INFO, + SMEM_SLEEP_STATIC, + SMEM_SCORPION_FREQUENCY, + SMEM_SMD_PROFILES, + SMEM_TSSC_BUSY, + SMEM_HS_SUSPEND_FILTER_INFO, + SMEM_BATT_INFO, + SMEM_APPS_BOOT_MODE, + SMEM_VERSION_FIRST, + SMEM_VERSION_SMD = SMEM_VERSION_FIRST, + SMEM_VERSION_LAST = SMEM_VERSION_FIRST + 24, + SMEM_OSS_RRCASN1_BUF1, + SMEM_OSS_RRCASN1_BUF2, + SMEM_ID_VENDOR0, + SMEM_ID_VENDOR1, + SMEM_ID_VENDOR2, + SMEM_HW_SW_BUILD_ID, + SMEM_SMD_BLOCK_PORT_BASE_ID, + SMEM_SMD_BLOCK_PORT_PROC0_HEAP = SMEM_SMD_BLOCK_PORT_BASE_ID + + SMEM_NUM_SMD_BLOCK_CHANNELS, + SMEM_SMD_BLOCK_PORT_PROC1_HEAP = SMEM_SMD_BLOCK_PORT_PROC0_HEAP + + SMEM_NUM_SMD_BLOCK_CHANNELS, + SMEM_I2C_MUTEX = SMEM_SMD_BLOCK_PORT_PROC1_HEAP + + SMEM_NUM_SMD_BLOCK_CHANNELS, + SMEM_SCLK_CONVERSION, + SMEM_SMD_SMSM_INTR_MUX, + SMEM_SMSM_CPU_INTR_MASK, + SMEM_APPS_DEM_SLAVE_DATA, + SMEM_QDSP6_DEM_SLAVE_DATA, + SMEM_CLKREGIM_BSP, + SMEM_CLKREGIM_SOURCES, + SMEM_SMD_FIFO_BASE_ID, + SMEM_USABLE_RAM_PARTITION_TABLE = SMEM_SMD_FIFO_BASE_ID + + SMEM_NUM_SMD_STREAM_CHANNELS, + SMEM_POWER_ON_STATUS_INFO, + SMEM_DAL_AREA, + SMEM_SMEM_LOG_POWER_IDX, + SMEM_SMEM_LOG_POWER_WRAP, + SMEM_SMEM_LOG_POWER_EVENTS, + SMEM_ERR_CRASH_LOG, + SMEM_ERR_F3_TRACE_LOG, + SMEM_SMD_BRIDGE_ALLOC_TABLE, + SMEM_SMDLITE_TABLE, + SMEM_SD_IMG_UPGRADE_STATUS, + SMEM_SEFS_INFO, + SMEM_RESET_LOG, + SMEM_RESET_LOG_SYMBOLS, + SMEM_MODEM_SW_BUILD_ID, + SMEM_SMEM_LOG_MPROC_WRAP, + SMEM_BOOT_INFO_FOR_APPS, + SMEM_SMSM_SIZE_INFO, + SMEM_SMD_LOOPBACK_REGISTER, + SMEM_SSR_REASON_MSS0, + SMEM_SSR_REASON_WCNSS0, + SMEM_SSR_REASON_LPASS0, + SMEM_SSR_REASON_DSPS0, + SMEM_SSR_REASON_VCODEC0, + SMEM_MEM_LAST = SMEM_SSR_REASON_VCODEC0, + SMEM_NUM_ITEMS, +}; + +enum { + SMEM_APPS_Q6_SMSM = 3, + SMEM_Q6_APPS_SMSM = 5, + SMSM_NUM_INTR_MUX = 8, +}; + +int smsm_check_for_modem_crash(void); +void *smem_find(unsigned id, unsigned size); +void *smem_get_entry(unsigned id, unsigned *size); + +#endif diff --git 
a/include/linux/peripheral-loader.h b/include/linux/peripheral-loader.h new file mode 100644 index 000000000000..bb2557810646 --- /dev/null +++ b/include/linux/peripheral-loader.h @@ -0,0 +1,26 @@ +/* Copyright (c) 2010-2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ +#ifndef __MACH_PERIPHERAL_LOADER_H +#define __MACH_PERIPHERAL_LOADER_H +#ifdef CONFIG_MSM_PIL +extern void *pil_get(const char *name); +extern void pil_put(void *peripheral_handle); +extern void pil_force_shutdown(const char *name); +extern int pil_force_boot(const char *name); +#else +static inline void *pil_get(const char *name) { return NULL; } +static inline void pil_put(void *peripheral_handle) { } +static inline void pil_force_shutdown(const char *name) { } +static inline int pil_force_boot(const char *name) { return -ENOSYS; } +#endif + +#endif diff --git a/include/linux/remote_spinlock.h b/include/linux/remote_spinlock.h new file mode 100644 index 000000000000..8c78f08a79ad --- /dev/null +++ b/include/linux/remote_spinlock.h @@ -0,0 +1,288 @@ +/* Copyright (c) 2008-2009, 2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU General Public License for more details. + * + */ +#ifndef __LINUX_REMOTE_SPINLOCK_H +#define __LINUX_REMOTE_SPINLOCK_H + +#include <linux/io.h> +#include <linux/types.h> +#include <linux/spinlock.h> +#include <linux/mutex.h> + +//#include <asm/remote_spinlock.h> + + +/* Remote spinlock definitions. */ + +struct dek_spinlock { + volatile uint8_t self_lock; + volatile uint8_t other_lock; + volatile uint8_t next_yield; + uint8_t pad; +}; + +typedef union { + volatile uint32_t lock; + struct dek_spinlock dek; +} raw_remote_spinlock_t; + +typedef raw_remote_spinlock_t *_remote_spinlock_t; + +#define remote_spinlock_id_t const char * +#define SMEM_SPINLOCK_PID_APPS 1 + +#define DEK_LOCK_REQUEST 1 +#define DEK_LOCK_YIELD (!DEK_LOCK_REQUEST) +#define DEK_YIELD_TURN_SELF 0 +static inline void __raw_remote_dek_spin_lock(raw_remote_spinlock_t *lock) +{ + lock->dek.self_lock = DEK_LOCK_REQUEST; + + while (lock->dek.other_lock) { + + if (lock->dek.next_yield == DEK_YIELD_TURN_SELF) + lock->dek.self_lock = DEK_LOCK_YIELD; + + while (lock->dek.other_lock) + ; + + lock->dek.self_lock = DEK_LOCK_REQUEST; + } + lock->dek.next_yield = DEK_YIELD_TURN_SELF; + + smp_mb(); +} + +static inline int __raw_remote_dek_spin_trylock(raw_remote_spinlock_t *lock) +{ + lock->dek.self_lock = DEK_LOCK_REQUEST; + + if (lock->dek.other_lock) { + lock->dek.self_lock = DEK_LOCK_YIELD; + return 0; + } + + lock->dek.next_yield = DEK_YIELD_TURN_SELF; + + smp_mb(); + return 1; +} + +static inline void __raw_remote_dek_spin_unlock(raw_remote_spinlock_t *lock) +{ + smp_mb(); + + lock->dek.self_lock = DEK_LOCK_YIELD; +} + +static inline int __raw_remote_dek_spin_release(raw_remote_spinlock_t *lock, + uint32_t pid) +{ + return -EINVAL; +} + +static inline void __raw_remote_sfpb_spin_lock(raw_remote_spinlock_t *lock) +{ + do { + writel_relaxed(SMEM_SPINLOCK_PID_APPS, lock); + smp_mb(); + } while (readl_relaxed(lock) != SMEM_SPINLOCK_PID_APPS); +} + +static inline int 
__raw_remote_sfpb_spin_trylock(raw_remote_spinlock_t *lock) +{ + return 1; +} + +static inline void __raw_remote_sfpb_spin_unlock(raw_remote_spinlock_t *lock) +{ + writel_relaxed(0, lock); + smp_mb(); +} + +/** + * Release spinlock if it is owned by @pid. + * + * This is only to be used for situations where the processor owning + * the spinlock has crashed and the spinlock must be released. + * + * @lock - lock structure + * @pid - processor ID of processor to release + */ +static inline int __raw_remote_gen_spin_release(raw_remote_spinlock_t *lock, + uint32_t pid) +{ + int ret = 1; + + if (readl_relaxed(&lock->lock) == pid) { + writel_relaxed(0, &lock->lock); + wmb(); + ret = 0; + } + return ret; +} +#define CONFIG_MSM_REMOTE_SPINLOCK_SFPB +#if defined(CONFIG_QCOM_SMD) || defined(CONFIG_MSM_REMOTE_SPINLOCK_SFPB) +int _remote_spin_lock_init(remote_spinlock_id_t, _remote_spinlock_t *lock); +void _remote_spin_release_all(uint32_t pid); +#else +static inline +int _remote_spin_lock_init(remote_spinlock_id_t id, _remote_spinlock_t *lock) +{ + return -EINVAL; +} +static inline void _remote_spin_release_all(uint32_t pid) {} +#endif + +/* Use SFPB Hardware Mutex Registers */ +#define _remote_spin_lock(lock) __raw_remote_sfpb_spin_lock(*lock) +#define _remote_spin_unlock(lock) __raw_remote_sfpb_spin_unlock(*lock) +#define _remote_spin_trylock(lock) __raw_remote_sfpb_spin_trylock(*lock) +#define _remote_spin_release(lock, pid) __raw_remote_gen_spin_release(*lock,\ + pid) + +/* Remote mutex definitions. 
*/ + +typedef struct { + _remote_spinlock_t r_spinlock; + uint32_t delay_us; +} _remote_mutex_t; + +struct remote_mutex_id { + remote_spinlock_id_t r_spinlock_id; + uint32_t delay_us; +}; + +#ifdef CONFIG_QCOM_SMD +int _remote_mutex_init(struct remote_mutex_id *id, _remote_mutex_t *lock); +void _remote_mutex_lock(_remote_mutex_t *lock); +void _remote_mutex_unlock(_remote_mutex_t *lock); +int _remote_mutex_trylock(_remote_mutex_t *lock); +#else +static inline +int _remote_mutex_init(struct remote_mutex_id *id, _remote_mutex_t *lock) +{ + return -EINVAL; +} +static inline void _remote_mutex_lock(_remote_mutex_t *lock) {} +static inline void _remote_mutex_unlock(_remote_mutex_t *lock) {} +static inline int _remote_mutex_trylock(_remote_mutex_t *lock) +{ + return 0; +} +#endif + + + + + +/* Grabbing a local spin lock before going for a remote lock has several + * advantages: + * 1. Get calls to preempt enable/disable and IRQ save/restore for free. + * 2. For UP kernel, there is no overhead. + * 3. Reduces the possibility of executing the remote spin lock code. This is + * especially useful when the remote CPUs' mutual exclusion instructions + * don't work with the local CPUs' instructions. In such cases, one has to + * use software based mutex algorithms (e.g. Lamport's bakery algorithm) + * which could get expensive when the no. of contending CPUs is high. + * 4. In the case of software based mutex algorithm the execution time will be + * smaller since the no. of contending CPUs is reduced by having just one + * contender for all the local CPUs. + * 5. Get most of the spin lock debug features for free. + * 6. The code will continue to work "gracefully" even when the remote spin + * lock code is stubbed out for debug purposes or when there is no remote + * CPU in some board/machine types.
+ */ +typedef struct { + spinlock_t local; + _remote_spinlock_t remote; +} remote_spinlock_t; + +#define remote_spin_lock_init(lock, id) \ + ({ \ + spin_lock_init(&((lock)->local)); \ + _remote_spin_lock_init(id, &((lock)->remote)); \ + }) +#define remote_spin_lock(lock) \ + do { \ + spin_lock(&((lock)->local)); \ + _remote_spin_lock(&((lock)->remote)); \ + } while (0) +#define remote_spin_unlock(lock) \ + do { \ + _remote_spin_unlock(&((lock)->remote)); \ + spin_unlock(&((lock)->local)); \ + } while (0) +#define remote_spin_lock_irqsave(lock, flags) \ + do { \ + spin_lock_irqsave(&((lock)->local), flags); \ + _remote_spin_lock(&((lock)->remote)); \ + } while (0) +#define remote_spin_unlock_irqrestore(lock, flags) \ + do { \ + _remote_spin_unlock(&((lock)->remote)); \ + spin_unlock_irqrestore(&((lock)->local), flags); \ + } while (0) +#define remote_spin_trylock(lock) \ + ({ \ + spin_trylock(&((lock)->local)) \ + ? _remote_spin_trylock(&((lock)->remote)) \ + ? 1 \ + : ({ spin_unlock(&((lock)->local)); 0; }) \ + : 0; \ + }) +#define remote_spin_trylock_irqsave(lock, flags) \ + ({ \ + spin_trylock_irqsave(&((lock)->local), flags) \ + ? _remote_spin_trylock(&((lock)->remote)) \ + ? 1 \ + : ({ spin_unlock_irqrestore(&((lock)->local), flags); \ + 0; }) \ + : 0; \ + }) + +#define remote_spin_release(lock, pid) \ + _remote_spin_release(&((lock)->remote), pid) + +#define remote_spin_release_all(pid) \ + _remote_spin_release_all(pid) + +typedef struct { + struct mutex local; + _remote_mutex_t remote; +} remote_mutex_t; + +#define remote_mutex_init(lock, id) \ + ({ \ + mutex_init(&((lock)->local)); \ + _remote_mutex_init(id, &((lock)->remote)); \ + }) +#define remote_mutex_lock(lock) \ + do { \ + mutex_lock(&((lock)->local)); \ + _remote_mutex_lock(&((lock)->remote)); \ + } while (0) +#define remote_mutex_trylock(lock) \ + ({ \ + mutex_trylock(&((lock)->local)) \ + ? _remote_mutex_trylock(&((lock)->remote)) \ + ? 
1 \ + : ({mutex_unlock(&((lock)->local)); 0; }) \ + : 0; \ + }) +#define remote_mutex_unlock(lock) \ + do { \ + _remote_mutex_unlock(&((lock)->remote)); \ + mutex_unlock(&((lock)->local)); \ + } while (0) + +#endif diff --git a/include/linux/subsystem_notif.h b/include/linux/subsystem_notif.h new file mode 100644 index 000000000000..7f817af80bf8 --- /dev/null +++ b/include/linux/subsystem_notif.h @@ -0,0 +1,81 @@ +/* Copyright (c) 2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * + * Subsystem restart notifier API header + * + */ + +#ifndef _SUBSYS_NOTIFIER_H +#define _SUBSYS_NOTIFIER_H + +#include <linux/notifier.h> + +enum subsys_notif_type { + SUBSYS_BEFORE_SHUTDOWN, + SUBSYS_AFTER_SHUTDOWN, + SUBSYS_BEFORE_POWERUP, + SUBSYS_AFTER_POWERUP, + SUBSYS_NOTIF_TYPE_COUNT +}; + +#define CONFIG_MSM_SUBSYSTEM_RESTART +#if defined(CONFIG_MSM_SUBSYSTEM_RESTART) +/* Use the subsys_notif_register_notifier API to register for notifications for + * a particular subsystem. This API will return a handle that can be used to + * un-reg for notifications using the subsys_notif_unregister_notifier API by + * passing in that handle as an argument. + * + * On receiving a notification, the second (unsigned long) argument of the + * notifier callback will contain the notification type, and the third (void *) + * argument will contain the handle that was returned by + * subsys_notif_register_notifier. 
+ */ +void *subsys_notif_register_notifier( + const char *subsys_name, struct notifier_block *nb); +int subsys_notif_unregister_notifier(void *subsys_handle, + struct notifier_block *nb); + +/* Use the subsys_notif_add_subsys API to initialize the notifier chains for + * a particular subsystem. This API will return a handle that can be used to + * queue notifications using the subsys_notif_queue_notification API by passing + * in that handle as an argument. + */ +void *subsys_notif_add_subsys(const char *); +int subsys_notif_queue_notification(void *subsys_handle, + enum subsys_notif_type notif_type); +#else + +static inline void *subsys_notif_register_notifier( + const char *subsys_name, struct notifier_block *nb) +{ + return NULL; +} + +static inline int subsys_notif_unregister_notifier(void *subsys_handle, + struct notifier_block *nb) +{ + return 0; +} + +static inline void *subsys_notif_add_subsys(const char *subsys_name) +{ + return NULL; +} + +static inline int subsys_notif_queue_notification(void *subsys_handle, + enum subsys_notif_type notif_type) +{ + return 0; +} +#endif /* CONFIG_MSM_SUBSYSTEM_RESTART */ + +#endif diff --git a/include/linux/subsystem_restart.h b/include/linux/subsystem_restart.h new file mode 100644 index 000000000000..582d7dc85338 --- /dev/null +++ b/include/linux/subsystem_restart.h @@ -0,0 +1,75 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details.
+ * + */ + +#ifndef __SUBSYS_RESTART_H +#define __SUBSYS_RESTART_H + +#include <linux/spinlock.h> + +#define SUBSYS_NAME_MAX_LENGTH 40 + +struct subsys_device; + +enum { + RESET_SOC = 1, + RESET_SUBSYS_COUPLED, + RESET_SUBSYS_INDEPENDENT, + RESET_LEVEL_MAX +}; + +struct subsys_desc { + const char *name; + + int (*shutdown)(const struct subsys_desc *desc); + int (*powerup)(const struct subsys_desc *desc); + void (*crash_shutdown)(const struct subsys_desc *desc); + int (*ramdump)(int, const struct subsys_desc *desc); +}; + +#if defined(CONFIG_MSM_SUBSYSTEM_RESTART) + +extern int get_restart_level(void); +extern int subsystem_restart_dev(struct subsys_device *dev); +extern int subsystem_restart(const char *name); + +extern struct subsys_device *subsys_register(struct subsys_desc *desc); +extern void subsys_unregister(struct subsys_device *dev); + +#else + +static inline int get_restart_level(void) +{ + return 0; +} + +static inline int subsystem_restart_dev(struct subsys_device *dev) +{ + return 0; +} + +static inline int subsystem_restart(const char *name) +{ + return 0; +} + +static inline +struct subsys_device *subsys_register(struct subsys_desc *desc) +{ + return NULL; +} + +static inline void subsys_unregister(struct subsys_device *dev) { } + +#endif /* CONFIG_MSM_SUBSYSTEM_RESTART */ + +#endif diff --git a/include/soc/qcom/msm_dma.h b/include/soc/qcom/msm_dma.h new file mode 100644 index 000000000000..a5ae944e1bcf --- /dev/null +++ b/include/soc/qcom/msm_dma.h @@ -0,0 +1,350 @@ +/* linux/include/asm-arm/arch-msm/dma.h + * + * Copyright (C) 2007 Google, Inc. + * Copyright (c) 2008-2012, The Linux Foundation. All rights reserved. + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ + +#ifndef __ASM_ARCH_MSM_DMA_H +#define __ASM_ARCH_MSM_DMA_H + +#include <linux/list.h> + +struct msm_dmov_errdata { + uint32_t flush[6]; +}; + +struct msm_dmov_cmd { + struct list_head list; + unsigned int cmdptr; + void (*complete_func)(struct msm_dmov_cmd *cmd, + unsigned int result, + struct msm_dmov_errdata *err); + void (*exec_func)(struct msm_dmov_cmd *cmd); + struct work_struct work; + unsigned id; /* For internal use */ + void *user; /* Pointer for caller's reference */ + u8 toflush; +}; + +struct msm_dmov_pdata { + int sd; + size_t sd_size; +}; + +void msm_dmov_enqueue_cmd(unsigned id, struct msm_dmov_cmd *cmd); +void msm_dmov_enqueue_cmd_ext(unsigned id, struct msm_dmov_cmd *cmd); +void msm_dmov_flush(unsigned int id, int graceful); +int msm_dmov_exec_cmd(unsigned id, unsigned int cmdptr); + +#define DMOV_CRCIS_PER_CONF 10 + +#define DMOV_ADDR(off, ch) ((off) + ((ch) << 2)) + +#define DMOV_CMD_PTR(ch) DMOV_ADDR(0x000, ch) +#define DMOV_CMD_LIST (0 << 29) /* does not work */ +#define DMOV_CMD_PTR_LIST (1 << 29) /* works */ +#define DMOV_CMD_INPUT_CFG (2 << 29) /* untested */ +#define DMOV_CMD_OUTPUT_CFG (3 << 29) /* untested */ +#define DMOV_CMD_ADDR(addr) ((addr) >> 3) + +#define DMOV_RSLT(ch) DMOV_ADDR(0x040, ch) +#define DMOV_RSLT_VALID (1 << 31) /* 0 == host has emptied result fifo */ +#define DMOV_RSLT_ERROR (1 << 3) +#define DMOV_RSLT_FLUSH (1 << 2) +#define DMOV_RSLT_DONE (1 << 1) /* top pointer done */ +#define DMOV_RSLT_USER (1 << 0) /* command with FR force result */ + +#define DMOV_FLUSH0(ch) DMOV_ADDR(0x080, ch) +#define DMOV_FLUSH1(ch) DMOV_ADDR(0x0C0, ch) +#define DMOV_FLUSH2(ch) DMOV_ADDR(0x100, ch) +#define DMOV_FLUSH3(ch) DMOV_ADDR(0x140, ch) +#define DMOV_FLUSH4(ch) 
DMOV_ADDR(0x180, ch) +#define DMOV_FLUSH5(ch) DMOV_ADDR(0x1C0, ch) +#define DMOV_FLUSH_TYPE (1 << 31) + +#define DMOV_STATUS(ch) DMOV_ADDR(0x200, ch) +#define DMOV_STATUS_RSLT_COUNT(n) (((n) >> 29)) +#define DMOV_STATUS_CMD_COUNT(n) (((n) >> 27) & 3) +#define DMOV_STATUS_RSLT_VALID (1 << 1) +#define DMOV_STATUS_CMD_PTR_RDY (1 << 0) + +#define DMOV_CONF(ch) DMOV_ADDR(0x240, ch) +#define DMOV_CONF_SD(sd) (((sd & 4) << 11) | ((sd & 3) << 4)) +#define DMOV_CONF_IRQ_EN (1 << 6) +#define DMOV_CONF_FORCE_RSLT_EN (1 << 7) +#define DMOV_CONF_SHADOW_EN (1 << 12) +#define DMOV_CONF_MPU_DISABLE (1 << 11) +#define DMOV_CONF_PRIORITY(n) (n << 0) + +#define DMOV_DBG_ERR(ci) DMOV_ADDR(0x280, ci) + +#define DMOV_RSLT_CONF(ch) DMOV_ADDR(0x300, ch) +#define DMOV_RSLT_CONF_FORCE_TOP_PTR_RSLT (1 << 2) +#define DMOV_RSLT_CONF_FORCE_FLUSH_RSLT (1 << 1) +#define DMOV_RSLT_CONF_IRQ_EN (1 << 0) + +#define DMOV_ISR DMOV_ADDR(0x380, 0) + +#define DMOV_CI_CONF(ci) DMOV_ADDR(0x390, ci) +#define DMOV_CI_CONF_RANGE_END(n) ((n) << 24) +#define DMOV_CI_CONF_RANGE_START(n) ((n) << 16) +#define DMOV_CI_CONF_MAX_BURST(n) ((n) << 0) + +#define DMOV_CI_DBG_ERR(ci) DMOV_ADDR(0x3B0, ci) + +#define DMOV_CRCI_CONF0 DMOV_ADDR(0x3D0, 0) +#define DMOV_CRCI_CONF1 DMOV_ADDR(0x3D4, 0) +#define DMOV_CRCI_CONF0_SD(crci, sd) (sd << (crci*3)) +#define DMOV_CRCI_CONF1_SD(crci, sd) (sd << ((crci-DMOV_CRCIS_PER_CONF)*3)) + +#define DMOV_CRCI_CTL(crci) DMOV_ADDR(0x400, crci) +#define DMOV_CRCI_CTL_BLK_SZ(n) ((n) << 0) +#define DMOV_CRCI_CTL_RST (1 << 17) +#define DMOV_CRCI_MUX (1 << 18) + +/* channel assignments */ + +/* + * Format of CRCI numbers: crci number + (muxsel << 4) + */ + +#if defined(CONFIG_ARCH_MSM8X60) +#define DMOV_GP_CHAN 15 + +#define DMOV_NAND_CHAN 17 +#define DMOV_NAND_CHAN_MODEM 26 +#define DMOV_NAND_CHAN_Q6 27 +#define DMOV_NAND_CRCI_CMD 15 +#define DMOV_NAND_CRCI_DATA 3 + +#define DMOV_CE_IN_CHAN 2 +#define DMOV_CE_IN_CRCI 4 + +#define DMOV_CE_OUT_CHAN 3 +#define DMOV_CE_OUT_CRCI 5 + +#define 
DMOV_CE_HASH_CRCI 15 + +#define DMOV_SDC1_CHAN 18 +#define DMOV_SDC1_CRCI 1 + +#define DMOV_SDC2_CHAN 19 +#define DMOV_SDC2_CRCI 4 + +#define DMOV_SDC3_CHAN 20 +#define DMOV_SDC3_CRCI 2 + +#define DMOV_SDC4_CHAN 21 +#define DMOV_SDC4_CRCI 5 + +#define DMOV_SDC5_CHAN 21 +#define DMOV_SDC5_CRCI 14 + +#define DMOV_TSIF_CHAN 4 +#define DMOV_TSIF_CRCI 6 + +#define DMOV_HSUART1_TX_CHAN 22 +#define DMOV_HSUART1_TX_CRCI 8 + +#define DMOV_HSUART1_RX_CHAN 23 +#define DMOV_HSUART1_RX_CRCI 9 + +#define DMOV_HSUART2_TX_CHAN 8 +#define DMOV_HSUART2_TX_CRCI 13 + +#define DMOV_HSUART2_RX_CHAN 8 +#define DMOV_HSUART2_RX_CRCI 14 + +#elif defined(CONFIG_ARCH_MSM8960) +#define DMOV_GP_CHAN 9 + +#define DMOV_CE_IN_CHAN 0 +#define DMOV_CE_IN_CRCI 2 + +#define DMOV_CE_OUT_CHAN 1 +#define DMOV_CE_OUT_CRCI 3 + +#define DMOV_TSIF_CHAN 2 +#define DMOV_TSIF_CRCI 11 + +#define DMOV_HSUART_GSBI6_TX_CHAN 7 +#define DMOV_HSUART_GSBI6_TX_CRCI 6 + +#define DMOV_HSUART_GSBI6_RX_CHAN 8 +#define DMOV_HSUART_GSBI6_RX_CRCI 11 + +#define DMOV_HSUART_GSBI8_TX_CHAN 7 +#define DMOV_HSUART_GSBI8_TX_CRCI 10 + +#define DMOV_HSUART_GSBI8_RX_CHAN 8 +#define DMOV_HSUART_GSBI8_RX_CRCI 9 + +#define DMOV_HSUART_GSBI9_TX_CHAN 4 +#define DMOV_HSUART_GSBI9_TX_CRCI 13 + +#define DMOV_HSUART_GSBI9_RX_CHAN 3 +#define DMOV_HSUART_GSBI9_RX_CRCI 12 + +#elif defined(CONFIG_ARCH_MSM9615) + +#define DMOV_GP_CHAN 4 + +#define DMOV_CE_IN_CHAN 0 +#define DMOV_CE_IN_CRCI 12 + +#define DMOV_CE_OUT_CHAN 1 +#define DMOV_CE_OUT_CRCI 13 + +#define DMOV_NAND_CHAN 3 +#define DMOV_NAND_CRCI_CMD 15 +#define DMOV_NAND_CRCI_DATA 3 + +#elif defined(CONFIG_ARCH_FSM9XXX) +/* defined in dma-fsm9xxx.h */ + +#else +#define DMOV_GP_CHAN 4 + +#define DMOV_CE_IN_CHAN 5 +#define DMOV_CE_IN_CRCI 1 + +#define DMOV_CE_OUT_CHAN 6 +#define DMOV_CE_OUT_CRCI 2 + +#define DMOV_CE_HASH_CRCI 3 + +#define DMOV_NAND_CHAN 7 +#define DMOV_NAND_CRCI_CMD 5 +#define DMOV_NAND_CRCI_DATA 4 + +#define DMOV_SDC1_CHAN 8 +#define DMOV_SDC1_CRCI 6 + +#define DMOV_SDC2_CHAN 8 
+#define DMOV_SDC2_CRCI 7 + +#define DMOV_SDC3_CHAN 8 +#define DMOV_SDC3_CRCI 12 + +#define DMOV_SDC4_CHAN 8 +#define DMOV_SDC4_CRCI 13 + +#define DMOV_TSIF_CHAN 10 +#define DMOV_TSIF_CRCI 10 + +#define DMOV_USB_CHAN 11 + +#define DMOV_HSUART1_TX_CHAN 4 +#define DMOV_HSUART1_TX_CRCI 8 + +#define DMOV_HSUART1_RX_CHAN 9 +#define DMOV_HSUART1_RX_CRCI 9 + +#define DMOV_HSUART2_TX_CHAN 4 +#define DMOV_HSUART2_TX_CRCI 14 + +#define DMOV_HSUART2_RX_CHAN 11 +#define DMOV_HSUART2_RX_CRCI 15 +#endif + +/* channels for APQ8064 */ +#define DMOV8064_CE_IN_CHAN 0 +#define DMOV8064_CE_IN_CRCI 14 + +#define DMOV8064_CE_OUT_CHAN 1 +#define DMOV8064_CE_OUT_CRCI 15 + +#define DMOV8064_TSIF_CHAN 2 +#define DMOV8064_TSIF_CRCI 1 + +/* channels for APQ8064 SGLTE*/ +#define DMOV_APQ8064_HSUART_GSBI4_TX_CHAN 11 +#define DMOV_APQ8064_HSUART_GSBI4_TX_CRCI 8 + +#define DMOV_APQ8064_HSUART_GSBI4_RX_CHAN 10 +#define DMOV_APQ8064_HSUART_GSBI4_RX_CRCI 7 + +/* channels for MPQ8064 */ +#define DMOV_MPQ8064_HSUART_GSBI6_TX_CHAN 7 +#define DMOV_MPQ8064_HSUART_GSBI6_TX_CRCI 6 + +#define DMOV_MPQ8064_HSUART_GSBI6_RX_CHAN 6 +#define DMOV_MPQ8064_HSUART_GSBI6_RX_CRCI 11 + +/* no client rate control ifc (eg, ram) */ +#define DMOV_NONE_CRCI 0 + + +/* If the CMD_PTR register has CMD_PTR_LIST selected, the data mover + * is going to walk a list of 32bit pointers as described below. Each + * pointer points to a *array* of dmov_s, etc structs. The last pointer + * in the list is marked with CMD_PTR_LP. The last struct in each array + * is marked with CMD_LC (see below). + */ +#define CMD_PTR_ADDR(addr) ((addr) >> 3) +#define CMD_PTR_LP (1 << 31) /* last pointer */ +#define CMD_PTR_PT (3 << 29) /* ? 
*/ + +/* Single Item Mode */ +typedef struct { + unsigned cmd; + unsigned src; + unsigned dst; + unsigned len; +} dmov_s; + +/* Scatter/Gather Mode */ +typedef struct { + unsigned cmd; + unsigned src_dscr; + unsigned dst_dscr; + unsigned _reserved; +} dmov_sg; + +/* Box mode */ +typedef struct { + uint32_t cmd; + uint32_t src_row_addr; + uint32_t dst_row_addr; + uint32_t src_dst_len; + uint32_t num_rows; + uint32_t row_offset; +} dmov_box; + +/* bits for the cmd field of the above structures */ + +#define CMD_LC (1 << 31) /* last command */ +#define CMD_FR (1 << 22) /* force result -- does not work? */ +#define CMD_OCU (1 << 21) /* other channel unblock */ +#define CMD_OCB (1 << 20) /* other channel block */ +#define CMD_TCB (1 << 19) /* ? */ +#define CMD_DAH (1 << 18) /* destination address hold -- does not work?*/ +#define CMD_SAH (1 << 17) /* source address hold -- does not work? */ + +#define CMD_MODE_SINGLE (0 << 0) /* dmov_s structure used */ +#define CMD_MODE_SG (1 << 0) /* untested */ +#define CMD_MODE_IND_SG (2 << 0) /* untested */ +#define CMD_MODE_BOX (3 << 0) /* untested */ + +#define CMD_DST_SWAP_BYTES (1 << 14) /* exchange each byte n with byte n+1 */ +#define CMD_DST_SWAP_SHORTS (1 << 15) /* exchange each short n with short n+1 */ +#define CMD_DST_SWAP_WORDS (1 << 16) /* exchange each word n with word n+1 */ + +#define CMD_SRC_SWAP_BYTES (1 << 11) /* exchange each byte n with byte n+1 */ +#define CMD_SRC_SWAP_SHORTS (1 << 12) /* exchange each short n with short n+1 */ +#define CMD_SRC_SWAP_WORDS (1 << 13) /* exchange each word n with word n+1 */ + +#define CMD_DST_CRCI(n) (((n) & 15) << 7) +#define CMD_SRC_CRCI(n) (((n) & 15) << 3) + +#endif diff --git a/include/soc/qcom/pm.h b/include/soc/qcom/pm.h new file mode 100644 index 000000000000..d9a56d7c0206 --- /dev/null +++ b/include/soc/qcom/pm.h @@ -0,0 +1,31 @@ +/* + * Copyright (c) 2009-2014, The Linux Foundation. All rights reserved. 
+ * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ + +#ifndef __QCOM_PM_H +#define __QCOM_PM_H + +enum pm_sleep_mode { + PM_SLEEP_MODE_STBY, + PM_SLEEP_MODE_RET, + PM_SLEEP_MODE_SPC, + PM_SLEEP_MODE_PC, + PM_SLEEP_MODE_NR, +}; + +struct qcom_cpu_pm_ops { + int (*standby)(void *data); + int (*spc)(void *data); +}; + +#endif /* __QCOM_PM_H */ diff --git a/arch/arm/mach-qcom/scm-boot.h b/include/soc/qcom/scm-boot.h index 6aabb2428176..529f55ae07f7 100644 --- a/arch/arm/mach-qcom/scm-boot.h +++ b/include/soc/qcom/scm-boot.h @@ -16,9 +16,8 @@ #define SCM_FLAG_COLDBOOT_CPU1 0x01 #define SCM_FLAG_COLDBOOT_CPU2 0x08 #define SCM_FLAG_COLDBOOT_CPU3 0x20 -#define SCM_FLAG_WARMBOOT_CPU0 0x04 -#define SCM_FLAG_WARMBOOT_CPU1 0x02 int scm_set_boot_addr(phys_addr_t addr, int flags); +int scm_set_warm_boot_addr(void *entry, int cpu); #endif diff --git a/arch/arm/mach-qcom/scm.h b/include/soc/qcom/scm.h index 00b31ea58f29..e486266c2009 100644 --- a/arch/arm/mach-qcom/scm.h +++ b/include/soc/qcom/scm.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2010, Code Aurora Forum. All rights reserved. +/* Copyright (c) 2010-2014, The Linux Foundation. All rights reserved. 
* * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 and @@ -14,12 +14,26 @@ #define SCM_SVC_BOOT 0x1 #define SCM_SVC_PIL 0x2 +#define SCM_SVC_UTIL 0x3 +#define SCM_SVC_TZ 0x4 +#define SCM_SVC_IO 0x5 +#define SCM_SVC_INFO 0x6 +#define SCM_SVC_SSD 0x7 +#define SCM_SVC_FUSE 0x8 +#define SCM_SVC_PWR 0x9 +#define SCM_SVC_CP 0xC +#define SCM_SVC_DCVS 0xD +#define SCM_SVC_TZSCHEDULER 0xFC extern int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len, void *resp_buf, size_t resp_len); +extern s32 scm_call_atomic1(u32 svc, u32 cmd, u32 arg1); +extern s32 scm_call_atomic2(u32 svc, u32 cmd, u32 arg1, u32 arg2); + #define SCM_VERSION(major, minor) (((major) << 16) | ((minor) & 0xFF)) extern u32 scm_get_version(void); +extern int scm_is_call_available(u32 svc_id, u32 cmd_id); #endif diff --git a/include/sound/apr_audio.h b/include/sound/apr_audio.h new file mode 100644 index 000000000000..d3c8a4f23dae --- /dev/null +++ b/include/sound/apr_audio.h @@ -0,0 +1,1675 @@ +/* + * + * Copyright (c) 2010-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ + +#ifndef _APR_AUDIO_H_ +#define _APR_AUDIO_H_ + +/* ASM opcodes without APR payloads*/ +#include <sound/qdsp6v2/apr.h> + +/* + * Audio Front End (AFE) + */ + +/* Port ID. Update afe_get_port_index when a new port is added here. 
*/ +#define PRIMARY_I2S_RX 0 /* index = 0 */ +#define PRIMARY_I2S_TX 1 /* index = 1 */ +#define PCM_RX 2 /* index = 2 */ +#define PCM_TX 3 /* index = 3 */ +#define SECONDARY_I2S_RX 4 /* index = 4 */ +#define SECONDARY_I2S_TX 5 /* index = 5 */ +#define MI2S_RX 6 /* index = 6 */ +#define MI2S_TX 7 /* index = 7 */ +#define HDMI_RX 8 /* index = 8 */ +#define RSVD_2 9 /* index = 9 */ +#define RSVD_3 10 /* index = 10 */ +#define DIGI_MIC_TX 11 /* index = 11 */ +#define VOICE_RECORD_RX 0x8003 /* index = 12 */ +#define VOICE_RECORD_TX 0x8004 /* index = 13 */ +#define VOICE_PLAYBACK_TX 0x8005 /* index = 14 */ + +/* Slimbus Multi channel port id pool */ +#define SLIMBUS_0_RX 0x4000 /* index = 15 */ +#define SLIMBUS_0_TX 0x4001 /* index = 16 */ +#define SLIMBUS_1_RX 0x4002 /* index = 17 */ +#define SLIMBUS_1_TX 0x4003 /* index = 18 */ +#define SLIMBUS_2_RX 0x4004 +#define SLIMBUS_2_TX 0x4005 +#define SLIMBUS_3_RX 0x4006 +#define SLIMBUS_3_TX 0x4007 +#define SLIMBUS_4_RX 0x4008 +#define SLIMBUS_4_TX 0x4009 /* index = 24 */ + +#define INT_BT_SCO_RX 0x3000 /* index = 25 */ +#define INT_BT_SCO_TX 0x3001 /* index = 26 */ +#define INT_BT_A2DP_RX 0x3002 /* index = 27 */ +#define INT_FM_RX 0x3004 /* index = 28 */ +#define INT_FM_TX 0x3005 /* index = 29 */ +#define RT_PROXY_PORT_001_RX 0x2000 /* index = 30 */ +#define RT_PROXY_PORT_001_TX 0x2001 /* index = 31 */ +#define SECONDARY_PCM_RX 12 /* index = 32 */ +#define SECONDARY_PCM_TX 13 /* index = 33 */ + + +#define AFE_PORT_INVALID 0xFFFF +#define SLIMBUS_EXTPROC_RX AFE_PORT_INVALID + +#define AFE_PORT_CMD_START 0x000100ca + +#define AFE_EVENT_RTPORT_START 0 +#define AFE_EVENT_RTPORT_STOP 1 +#define AFE_EVENT_RTPORT_LOW_WM 2 +#define AFE_EVENT_RTPORT_HI_WM 3 + +struct afe_port_start_command { + struct apr_hdr hdr; + u16 port_id; + u16 gain; /* Q13 */ + u32 sample_rate; /* 8 , 16, 48khz */ +} __attribute__ ((packed)); + +#define AFE_PORT_CMD_STOP 0x000100cb +struct afe_port_stop_command { + struct apr_hdr hdr; + u16 port_id; + u16 
reserved; +} __attribute__ ((packed)); + +#define AFE_PORT_CMD_APPLY_GAIN 0x000100cc +struct afe_port_gain_command { + struct apr_hdr hdr; + u16 port_id; + u16 gain;/* Q13 */ +} __attribute__ ((packed)); + +#define AFE_PORT_CMD_SIDETONE_CTL 0x000100cd +struct afe_port_sidetone_command { + struct apr_hdr hdr; + u16 rx_port_id; /* Primary i2s tx = 1 */ + /* PCM tx = 3 */ + /* Secondary i2s tx = 5 */ + /* Mi2s tx = 7 */ + /* Digital mic tx = 11 */ + u16 tx_port_id; /* Primary i2s rx = 0 */ + /* PCM rx = 2 */ + /* Secondary i2s rx = 4 */ + /* Mi2S rx = 6 */ + /* HDMI rx = 8 */ + u16 gain; /* Q13 */ + u16 enable; /* 1 = enable, 0 = disable */ +} __attribute__ ((packed)); + +#define AFE_PORT_CMD_LOOPBACK 0x000100ce +struct afe_loopback_command { + struct apr_hdr hdr; + u16 tx_port_id; /* Primary i2s rx = 0 */ + /* PCM rx = 2 */ + /* Secondary i2s rx = 4 */ + /* Mi2S rx = 6 */ + /* HDMI rx = 8 */ + u16 rx_port_id; /* Primary i2s tx = 1 */ + /* PCM tx = 3 */ + /* Secondary i2s tx = 5 */ + /* Mi2s tx = 7 */ + /* Digital mic tx = 11 */ + u16 mode; /* Default -1, DSP will convert + the tx to rx format */ + u16 enable; /* 1 = enable, 0 = disable */ +} __attribute__ ((packed)); + +#define AFE_PSEUDOPORT_CMD_START 0x000100cf +struct afe_pseudoport_start_command { + struct apr_hdr hdr; + u16 port_id; /* Pseudo Port 1 = 0x8000 */ + /* Pseudo Port 2 = 0x8001 */ + /* Pseudo Port 3 = 0x8002 */ + u16 timing; /* FTRT = 0 , AVTimer = 1 */ +} __attribute__ ((packed)); + +#define AFE_PSEUDOPORT_CMD_STOP 0x000100d0 +struct afe_pseudoport_stop_command { + struct apr_hdr hdr; + u16 port_id; /* Pseudo Port 1 = 0x8000 */ + /* Pseudo Port 2 = 0x8001 */ + /* Pseudo Port 3 = 0x8002 */ + u16 reserved; +} __attribute__ ((packed)); + +#define AFE_CMD_GET_ACTIVE_PORTS 0x000100d1 + + +#define AFE_CMD_GET_ACTIVE_HANDLES_FOR_PORT 0x000100d2 +struct afe_get_active_handles_command { + struct apr_hdr hdr; + u16 port_id; + u16 reserved; +} __attribute__ ((packed)); + +#define AFE_PCM_CFG_MODE_PCM 0x0 
+#define AFE_PCM_CFG_MODE_AUX 0x1 +#define AFE_PCM_CFG_SYNC_EXT 0x0 +#define AFE_PCM_CFG_SYNC_INT 0x1 +#define AFE_PCM_CFG_FRM_8BPF 0x0 +#define AFE_PCM_CFG_FRM_16BPF 0x1 +#define AFE_PCM_CFG_FRM_32BPF 0x2 +#define AFE_PCM_CFG_FRM_64BPF 0x3 +#define AFE_PCM_CFG_FRM_128BPF 0x4 +#define AFE_PCM_CFG_FRM_256BPF 0x5 +#define AFE_PCM_CFG_QUANT_ALAW_NOPAD 0x0 +#define AFE_PCM_CFG_QUANT_MULAW_NOPAD 0x1 +#define AFE_PCM_CFG_QUANT_LINEAR_NOPAD 0x2 +#define AFE_PCM_CFG_QUANT_ALAW_PAD 0x3 +#define AFE_PCM_CFG_QUANT_MULAW_PAD 0x4 +#define AFE_PCM_CFG_QUANT_LINEAR_PAD 0x5 +#define AFE_PCM_CFG_CDATAOE_MASTER 0x0 +#define AFE_PCM_CFG_CDATAOE_SHARE 0x1 + +struct afe_port_pcm_cfg { + u16 mode; /* PCM (short sync) = 0, AUXPCM (long sync) = 1 */ + u16 sync; /* external = 0 , internal = 1 */ + u16 frame; /* 8 bpf = 0 */ + /* 16 bpf = 1 */ + /* 32 bpf = 2 */ + /* 64 bpf = 3 */ + /* 128 bpf = 4 */ + /* 256 bpf = 5 */ + u16 quant; + u16 slot; /* Slot for PCM stream , 0 - 31 */ + u16 data; /* 0, PCM block is the only master */ + /* 1, PCM block shares the data out signal */ + /* with another master */ + u16 reserved; +} __attribute__ ((packed)); + +enum { + AFE_I2S_SD0 = 1, + AFE_I2S_SD1, + AFE_I2S_SD2, + AFE_I2S_SD3, + AFE_I2S_QUAD01, + AFE_I2S_QUAD23, + AFE_I2S_6CHS, + AFE_I2S_8CHS, +}; + +#define AFE_MI2S_MONO 0 +#define AFE_MI2S_STEREO 3 +#define AFE_MI2S_4CHANNELS 4 +#define AFE_MI2S_6CHANNELS 6 +#define AFE_MI2S_8CHANNELS 8 + +struct afe_port_mi2s_cfg { + u16 bitwidth; /* 16,24,32 */ + u16 line; /* Called ChannelMode in documentation */ + /* i2s_sd0 = 1 */ + /* i2s_sd1 = 2 */ + /* i2s_sd2 = 3 */ + /* i2s_sd3 = 4 */ + /* i2s_quad01 = 5 */ + /* i2s_quad23 = 6 */ + /* i2s_6chs = 7 */ + /* i2s_8chs = 8 */ + u16 channel; /* Called MonoStereo in documentation */ + /* i2s mono = 0 */ + /* i2s mono right = 1 */ + /* i2s mono left = 2 */ + /* i2s stereo = 3 */ + u16 ws; /* 0, word select signal from external source */ + /* 1, word select signal from internal source */ + u16 format; /* don't 
touch this field if it is not for */ + /* AFE_PORT_CMD_I2S_CONFIG opcode */ +} __attribute__ ((packed)); + +struct afe_port_hdmi_cfg { + u16 bitwidth; /* 16,24,32 */ + u16 channel_mode; /* HDMI Stereo = 0 */ + /* HDMI_3Point1 (4-ch) = 1 */ + /* HDMI_5Point1 (6-ch) = 2 */ + /* HDMI_6Point1 (8-ch) = 3 */ + u16 data_type; /* HDMI_Linear = 0 */ + /* HDMI_non_Linear = 1 */ +} __attribute__ ((packed)); + + +struct afe_port_hdmi_multi_ch_cfg { + u16 data_type; /* HDMI_Linear = 0 */ + /* HDMI_non_Linear = 1 */ + u16 channel_allocation; /* The default is 0 (Stereo) */ + u16 reserved; /* must be set to 0 */ +} __packed; + + +/* Slimbus Device Ids */ +#define AFE_SLIMBUS_DEVICE_1 0x0 +#define AFE_SLIMBUS_DEVICE_2 0x1 +#define AFE_PORT_MAX_AUDIO_CHAN_CNT 16 + +struct afe_port_slimbus_cfg { + u16 slimbus_dev_id; /* SLIMBUS Device id.*/ + + u16 slave_dev_pgd_la; /* Slave ported generic device + * logical address. + */ + u16 slave_dev_intfdev_la; /* Slave interface device logical + * address. + */ + u16 bit_width; /** bit width of the samples, 16, 24.*/ + + u16 data_format; /** data format.*/ + + u16 num_channels; /** Number of channels.*/ + + /** Slave port mapping for respective channels.*/ + u16 slave_port_mapping[AFE_PORT_MAX_AUDIO_CHAN_CNT]; + + u16 reserved; +} __packed; + +struct afe_port_slimbus_sch_cfg { + u16 slimbus_dev_id; /* SLIMBUS Device id.*/ + u16 bit_width; /** bit width of the samples, 16, 24.*/ + u16 data_format; /** data format.*/ + u16 num_channels; /** Number of channels.*/ + u16 reserved; + /** Slave channel mapping for respective channels.*/ + u8 slave_ch_mapping[8]; +} __packed; + +struct afe_port_rtproxy_cfg { + u16 bitwidth; /* 16,24,32 */ + u16 interleaved; /* interleaved = 1 */ + /* Noninterleaved = 0 */ + u16 frame_sz; /* 5ms buffers = 160bytes */ + u16 jitter; /* 10ms of jitter = 320 */ + u16 lw_mark; /* Low watermark in bytes for triggering event*/ + u16 hw_mark; /* High watermark bytes for triggering event*/ + u16 rsvd; + int num_ch; /* 1 to 8 */ 
+} __packed; + +#define AFE_PORT_AUDIO_IF_CONFIG 0x000100d3 +#define AFE_PORT_AUDIO_SLIM_SCH_CONFIG 0x000100e4 +#define AFE_PORT_MULTI_CHAN_HDMI_AUDIO_IF_CONFIG 0x000100D9 +#define AFE_PORT_CMD_I2S_CONFIG 0x000100E7 + +union afe_port_config { + struct afe_port_pcm_cfg pcm; + struct afe_port_mi2s_cfg mi2s; + struct afe_port_hdmi_cfg hdmi; + struct afe_port_hdmi_multi_ch_cfg hdmi_multi_ch; + struct afe_port_slimbus_cfg slimbus; + struct afe_port_slimbus_sch_cfg slim_sch; + struct afe_port_rtproxy_cfg rtproxy; +} __attribute__((packed)); + +struct afe_audioif_config_command { + struct apr_hdr hdr; + u16 port_id; + union afe_port_config port; +} __attribute__ ((packed)); + +#define AFE_TEST_CODEC_LOOPBACK_CTL 0x000100d5 +struct afe_codec_loopback_command { + u16 port_inf; /* Primary i2s = 0 */ + /* PCM = 2 */ + /* Secondary i2s = 4 */ + /* Mi2s = 6 */ + u16 enable; /* 0, disable. 1, enable */ +} __attribute__ ((packed)); + + +#define AFE_PARAM_ID_SIDETONE_GAIN 0x00010300 +struct afe_param_sidetone_gain { + u16 gain; + u16 reserved; +} __attribute__ ((packed)); + +#define AFE_PARAM_ID_SAMPLING_RATE 0x00010301 +struct afe_param_sampling_rate { + u32 sampling_rate; +} __attribute__ ((packed)); + + +#define AFE_PARAM_ID_CHANNELS 0x00010302 +struct afe_param_channels { + u16 channels; + u16 reserved; +} __attribute__ ((packed)); + + +#define AFE_PARAM_ID_LOOPBACK_GAIN 0x00010303 +struct afe_param_loopback_gain { + u16 gain; + u16 reserved; +} __attribute__ ((packed)); + +/* Parameter ID used to configure and enable/disable the loopback path. The + * difference with respect to the existing API, AFE_PORT_CMD_LOOPBACK, is that + * it allows Rx port to be configured as source port in loopback path. Port-id + * in AFE_PORT_CMD_SET_PARAM cmd is the source port which can be Tx or Rx port. + * In addition, we can configure the type of routing mode to handle different + * use cases. 
+*/ +enum { + /* Regular loopback from source to destination port */ + LB_MODE_DEFAULT = 1, + /* Sidetone feed from Tx source to Rx destination port */ + LB_MODE_SIDETONE, + /* Echo canceller reference, voice + audio + DTMF */ + LB_MODE_EC_REF_VOICE_AUDIO, + /* Echo canceller reference, voice alone */ + LB_MODE_EC_REF_VOICE +}; + +#define AFE_PARAM_ID_LOOPBACK_CONFIG 0x0001020B +#define AFE_API_VERSION_LOOPBACK_CONFIG 0x1 +struct afe_param_loopback_cfg { + /* Minor version used for tracking the version of the configuration + * interface. + */ + uint32_t loopback_cfg_minor_version; + + /* Destination Port Id. */ + uint16_t dst_port_id; + + /* Specifies data path type from src to dest port. Supported values: + * LB_MODE_DEFAULT + * LB_MODE_SIDETONE + * LB_MODE_EC_REF_VOICE_AUDIO + * LB_MODE_EC_REF_VOICE + */ + uint16_t routing_mode; + + /* Specifies whether to enable (1) or disable (0) an AFE loopback. */ + uint16_t enable; + + /* Reserved for 32-bit alignment. This field must be set to 0. */ + uint16_t reserved; +} __packed; + +#define AFE_MODULE_ID_PORT_INFO 0x00010200 +/* Module ID for the loopback-related parameters. 
*/ +#define AFE_MODULE_LOOPBACK 0x00010205 +struct afe_param_payload_base { + u32 module_id; + u32 param_id; + u16 param_size; + u16 reserved; +} __packed; + +struct afe_param_payload { + struct afe_param_payload_base base; + union { + struct afe_param_sidetone_gain sidetone_gain; + struct afe_param_sampling_rate sampling_rate; + struct afe_param_channels channels; + struct afe_param_loopback_gain loopback_gain; + struct afe_param_loopback_cfg loopback_cfg; + } __attribute__((packed)) param; +} __attribute__ ((packed)); + +#define AFE_PORT_CMD_SET_PARAM 0x000100dc + +struct afe_port_cmd_set_param { + struct apr_hdr hdr; + u16 port_id; + u16 payload_size; + u32 payload_address; + struct afe_param_payload payload; +} __attribute__ ((packed)); + +struct afe_port_cmd_set_param_no_payload { + struct apr_hdr hdr; + u16 port_id; + u16 payload_size; + u32 payload_address; +} __packed; + +#define AFE_EVENT_GET_ACTIVE_PORTS 0x00010100 +struct afe_get_active_ports_rsp { + u16 num_ports; + u16 port_id; +} __attribute__ ((packed)); + + +#define AFE_EVENT_GET_ACTIVE_HANDLES 0x00010102 +struct afe_get_active_handles_rsp { + u16 port_id; + u16 num_handles; + u16 mode; /* 0, voice rx */ + /* 1, voice tx */ + /* 2, audio rx */ + /* 3, audio tx */ + u16 handle; +} __attribute__ ((packed)); + +#define AFE_SERVICE_CMD_MEMORY_MAP 0x000100DE +struct afe_cmd_memory_map { + struct apr_hdr hdr; + u32 phy_addr; + u32 mem_sz; + u16 mem_id; + u16 rsvd; +} __packed; + +#define AFE_SERVICE_CMD_MEMORY_UNMAP 0x000100DF +struct afe_cmd_memory_unmap { + struct apr_hdr hdr; + u32 phy_addr; +} __packed; + +#define AFE_SERVICE_CMD_REG_RTPORT 0x000100E0 +struct afe_cmd_reg_rtport { + struct apr_hdr hdr; + u16 port_id; + u16 rsvd; +} __packed; + +#define AFE_SERVICE_CMD_UNREG_RTPORT 0x000100E1 +struct afe_cmd_unreg_rtport { + struct apr_hdr hdr; + u16 port_id; + u16 rsvd; +} __packed; + +#define AFE_SERVICE_CMD_RTPORT_WR 0x000100E2 +struct afe_cmd_rtport_wr { + struct apr_hdr hdr; + u16 port_id; + u16 
rsvd; + u32 buf_addr; + u32 bytes_avail; +} __packed; + +#define AFE_SERVICE_CMD_RTPORT_RD 0x000100E3 +struct afe_cmd_rtport_rd { + struct apr_hdr hdr; + u16 port_id; + u16 rsvd; + u32 buf_addr; + u32 bytes_avail; +} __packed; + +#define AFE_EVENT_RT_PROXY_PORT_STATUS 0x00010105 + +#define ADM_MAX_COPPS 5 + +#define ADM_SERVICE_CMD_GET_COPP_HANDLES 0x00010300 +struct adm_get_copp_handles_command { + struct apr_hdr hdr; +} __attribute__ ((packed)); + +#define ADM_CMD_MATRIX_MAP_ROUTINGS 0x00010301 +struct adm_routings_session { + u16 id; + u16 num_copps; + u16 copp_id[ADM_MAX_COPPS+1]; /*Padding if numCopps is odd */ +} __packed; + +struct adm_routings_command { + struct apr_hdr hdr; + u32 path; /* 0 = Rx, 1 Tx */ + u32 num_sessions; + struct adm_routings_session session[8]; +} __attribute__ ((packed)); + + +#define ADM_CMD_MATRIX_RAMP_GAINS 0x00010302 +struct adm_ramp_gain { + struct apr_hdr hdr; + u16 session_id; + u16 copp_id; + u16 initial_gain; + u16 gain_increment; + u16 ramp_duration; + u16 reserved; +} __attribute__ ((packed)); + +struct adm_ramp_gains_command { + struct apr_hdr hdr; + u32 id; + u32 num_gains; + struct adm_ramp_gain gains[ADM_MAX_COPPS]; +} __attribute__ ((packed)); + + +#define ADM_CMD_COPP_OPEN 0x00010304 +struct adm_copp_open_command { + struct apr_hdr hdr; + u16 flags; + u16 mode; /* 1-RX, 2-Live TX, 3-Non Live TX */ + u16 endpoint_id1; + u16 endpoint_id2; + u32 topology_id; + u16 channel_config; + u16 reserved; + u32 rate; +} __attribute__ ((packed)); + +#define ADM_CMD_COPP_CLOSE 0x00010305 + +#define ADM_CMD_MULTI_CHANNEL_COPP_OPEN 0x00010310 +#define ADM_CMD_MULTI_CHANNEL_COPP_OPEN_V3 0x00010333 +struct adm_multi_ch_copp_open_command { + struct apr_hdr hdr; + u16 flags; + u16 mode; /* 1-RX, 2-Live TX, 3-Non Live TX */ + u16 endpoint_id1; + u16 endpoint_id2; + u32 topology_id; + u16 channel_config; + u16 reserved; + u32 rate; + u8 dev_channel_mapping[8]; +} __packed; +#define ADM_CMD_MEMORY_MAP 0x00010C30 +struct adm_cmd_memory_map{ + 
struct apr_hdr hdr; + u32 buf_add; + u32 buf_size; + u16 mempool_id; + u16 reserved; +} __attribute__((packed)); + +#define ADM_CMD_MEMORY_UNMAP 0x00010C31 +struct adm_cmd_memory_unmap{ + struct apr_hdr hdr; + u32 buf_add; +} __attribute__((packed)); + +#define ADM_CMD_MEMORY_MAP_REGIONS 0x00010C47 +struct adm_memory_map_regions{ + u32 phys; + u32 buf_size; +} __attribute__((packed)); + +struct adm_cmd_memory_map_regions{ + struct apr_hdr hdr; + u16 mempool_id; + u16 nregions; +} __attribute__((packed)); + +#define ADM_CMD_MEMORY_UNMAP_REGIONS 0x00010C48 +struct adm_memory_unmap_regions{ + u32 phys; +} __attribute__((packed)); + +struct adm_cmd_memory_unmap_regions{ + struct apr_hdr hdr; + u16 nregions; + u16 reserved; +} __attribute__((packed)); + +#define DEFAULT_COPP_TOPOLOGY 0x00010be3 +#define DEFAULT_POPP_TOPOLOGY 0x00010be4 +#define VPM_TX_SM_ECNS_COPP_TOPOLOGY 0x00010F71 +#define VPM_TX_DM_FLUENCE_COPP_TOPOLOGY 0x00010F72 +#define VPM_TX_QMIC_FLUENCE_COPP_TOPOLOGY 0x00010F75 + +#define LOWLATENCY_POPP_TOPOLOGY 0x00010C68 +#define LOWLATENCY_COPP_TOPOLOGY 0x00010312 +#define PCM_BITS_PER_SAMPLE 16 + +#define ASM_OPEN_WRITE_PERF_MODE_BIT (1<<28) +#define ASM_OPEN_READ_PERF_MODE_BIT (1<<29) +#define ADM_MULTI_CH_COPP_OPEN_PERF_MODE_BIT (1<<13) + +/* SRS TRUMEDIA GUIDS */ +/* topology */ +#define SRS_TRUMEDIA_TOPOLOGY_ID 0x00010D90 +/* module */ +#define SRS_TRUMEDIA_MODULE_ID 0x10005010 +/* parameters */ +#define SRS_TRUMEDIA_PARAMS 0x10005011 +#define SRS_TRUMEDIA_PARAMS_WOWHD 0x10005012 +#define SRS_TRUMEDIA_PARAMS_CSHP 0x10005013 +#define SRS_TRUMEDIA_PARAMS_HPF 0x10005014 +#define SRS_TRUMEDIA_PARAMS_PEQ 0x10005015 +#define SRS_TRUMEDIA_PARAMS_HL 0x10005016 + +#define ASM_MAX_EQ_BANDS 12 + +struct asm_eq_band { + u32 band_idx; /* The band index, 0 .. 
11 */ + u32 filter_type; /* Filter band type */ + u32 center_freq_hz; /* Filter band center frequency */ + u32 filter_gain; /* Filter band initial gain (dB) */ + /* Range is +12 dB to -12 dB with 1dB increments. */ + u32 q_factor; +} __attribute__ ((packed)); + +struct asm_equalizer_params { + u32 enable; + u32 num_bands; + struct asm_eq_band eq_bands[ASM_MAX_EQ_BANDS]; +} __attribute__ ((packed)); + +struct asm_master_gain_params { + u16 master_gain; + u16 padding; +} __attribute__ ((packed)); + +struct asm_lrchannel_gain_params { + u16 left_gain; + u16 right_gain; +} __attribute__ ((packed)); + +struct asm_mute_params { + u32 muteflag; +} __attribute__ ((packed)); + +struct asm_softvolume_params { + u32 period; + u32 step; + u32 rampingcurve; +} __attribute__ ((packed)); + +struct asm_softpause_params { + u32 enable; + u32 period; + u32 step; + u32 rampingcurve; +} __packed; + +struct asm_pp_param_data_hdr { + u32 module_id; + u32 param_id; + u16 param_size; + u16 reserved; +} __attribute__ ((packed)); + +struct asm_pp_params_command { + struct apr_hdr hdr; + u32 *payload; + u32 payload_size; + struct asm_pp_param_data_hdr params; +} __attribute__ ((packed)); + +#define EQUALIZER_MODULE_ID 0x00010c27 +#define EQUALIZER_PARAM_ID 0x00010c28 + +#define VOLUME_CONTROL_MODULE_ID 0x00010bfe +#define MASTER_GAIN_PARAM_ID 0x00010bff +#define L_R_CHANNEL_GAIN_PARAM_ID 0x00010c00 +#define MUTE_CONFIG_PARAM_ID 0x00010c01 +#define SOFT_PAUSE_PARAM_ID 0x00010D6A +#define SOFT_VOLUME_PARAM_ID 0x00010C29 + +#define IIR_FILTER_ENABLE_PARAM_ID 0x00010c03 +#define IIR_FILTER_PREGAIN_PARAM_ID 0x00010c04 +#define IIR_FILTER_CONFIG_PARAM_ID 0x00010c05 + +#define MBADRC_MODULE_ID 0x00010c06 +#define MBADRC_ENABLE_PARAM_ID 0x00010c07 +#define MBADRC_CONFIG_PARAM_ID 0x00010c08 + + +#define ADM_CMD_SET_PARAMS 0x00010306 +#define ADM_CMD_GET_PARAMS 0x0001030B +#define ADM_CMDRSP_GET_PARAMS 0x0001030C +struct adm_set_params_command { + struct apr_hdr hdr; + u32 payload; + u32 payload_size; 
+} __attribute__ ((packed)); + + +#define ADM_CMD_TAP_COPP_PCM 0x00010307 +struct adm_tap_copp_pcm_command { + struct apr_hdr hdr; +} __attribute__ ((packed)); + + +/* QDSP6 to Client messages +*/ +#define ADM_SERVICE_CMDRSP_GET_COPP_HANDLES 0x00010308 +struct adm_get_copp_handles_respond { + struct apr_hdr hdr; + u32 handles; + u32 copp_id; +} __attribute__ ((packed)); + +#define ADM_CMDRSP_COPP_OPEN 0x0001030A +struct adm_copp_open_respond { + u32 status; + u16 copp_id; + u16 reserved; +} __attribute__ ((packed)); + +#define ADM_CMDRSP_MULTI_CHANNEL_COPP_OPEN 0x00010311 +#define ADM_CMDRSP_MULTI_CHANNEL_COPP_OPEN_V3 0x00010334 + + +#define ASM_STREAM_PRIORITY_NORMAL 0 +#define ASM_STREAM_PRIORITY_LOW 1 +#define ASM_STREAM_PRIORITY_HIGH 2 +#define ASM_STREAM_PRIORITY_RESERVED 3 + +#define ASM_END_POINT_DEVICE_MATRIX 0 +#define ASM_END_POINT_STREAM 1 + +#define AAC_ENC_MODE_AAC_LC 0x02 +#define AAC_ENC_MODE_AAC_P 0x05 +#define AAC_ENC_MODE_EAAC_P 0x1D + +#define ASM_STREAM_CMD_CLOSE 0x00010BCD +#define ASM_STREAM_CMD_FLUSH 0x00010BCE +#define ASM_STREAM_CMD_SET_PP_PARAMS 0x00010BCF +#define ASM_STREAM_CMD_GET_PP_PARAMS 0x00010BD0 +#define ASM_STREAM_CMDRSP_GET_PP_PARAMS 0x00010BD1 +#define ASM_SESSION_CMD_PAUSE 0x00010BD3 +#define ASM_SESSION_CMD_GET_SESSION_TIME 0x00010BD4 +#define ASM_DATA_CMD_EOS 0x00010BDB +#define ASM_DATA_EVENT_EOS 0x00010BDD + +#define ASM_SERVICE_CMD_GET_STREAM_HANDLES 0x00010C0B +#define ASM_STREAM_CMD_FLUSH_READBUFS 0x00010C09 + +#define ASM_SESSION_EVENT_RX_UNDERFLOW 0x00010C17 +#define ASM_SESSION_EVENT_TX_OVERFLOW 0x00010C18 +#define ASM_SERVICE_CMD_GET_WALLCLOCK_TIME 0x00010C19 +#define ASM_DATA_CMDRSP_EOS 0x00010C1C + +/* ASM Data structures */ + +/* common declarations */ +struct asm_pcm_cfg { + u16 ch_cfg; + u16 bits_per_sample; + u32 sample_rate; + u16 is_signed; + u16 interleaved; +}; + +#define PCM_CHANNEL_NULL 0 + +/* Front left channel. */ +#define PCM_CHANNEL_FL 1 + +/* Front right channel. 
*/ +#define PCM_CHANNEL_FR 2 + +/* Front center channel. */ +#define PCM_CHANNEL_FC 3 + +/* Left surround channel.*/ +#define PCM_CHANNEL_LS 4 + +/* Right surround channel.*/ +#define PCM_CHANNEL_RS 5 + +/* Low frequency effect channel. */ +#define PCM_CHANNEL_LFE 6 + +/* Center surround channel; Rear center channel. */ +#define PCM_CHANNEL_CS 7 + +/* Left back channel; Rear left channel. */ +#define PCM_CHANNEL_LB 8 + +/* Right back channel; Rear right channel. */ +#define PCM_CHANNEL_RB 9 + +/* Top surround channel. */ +#define PCM_CHANNEL_TS 10 + +/* Center vertical height channel.*/ +#define PCM_CHANNEL_CVH 11 + +/* Mono surround channel.*/ +#define PCM_CHANNEL_MS 12 + +/* Front left of center. */ +#define PCM_CHANNEL_FLC 13 + +/* Front right of center. */ +#define PCM_CHANNEL_FRC 14 + +/* Rear left of center. */ +#define PCM_CHANNEL_RLC 15 + +/* Rear right of center. */ +#define PCM_CHANNEL_RRC 16 + +#define PCM_FORMAT_MAX_NUM_CHANNEL 8 + +/* Maximum number of channels supported + * in ASM_ENCDEC_DEC_CHAN_MAP command + */ +#define MAX_CHAN_MAP_CHANNELS 16 +/* + * Multiple-channel PCM decoder format block structure used in the + * #ASM_STREAM_CMD_OPEN_WRITE command. + * The data must be in little-endian format. + */ +struct asm_multi_channel_pcm_fmt_blk { + + u16 num_channels; /* + * Number of channels. + * Supported values:1 to 8 + */ + + u16 bits_per_sample; /* + * Number of bits per sample per channel. + * Supported values: 16, 24 When used for + * playback, the client must send 24-bit + * samples packed in 32-bit words. The + * 24-bit samples must be placed in the most + * significant 24 bits of the 32-bit word. When + * used for recording, the aDSP sends 24-bit + * samples packed in 32-bit words. The 24-bit + * samples are placed in the most significant + * 24 bits of the 32-bit word. + */ + + u32 sample_rate; /* + * Number of samples per second + * (in Hertz). 
Supported values: + * 2000 to 48000 + */ + + u16 is_signed; /* + * Flag that indicates the samples + * are signed (1). + */ + + u16 is_interleaved; /* + * Flag that indicates whether the channels are + * de-interleaved (0) or interleaved (1). + * Interleaved format means corresponding + * samples from the left and right channels are + * interleaved within the buffer. + * De-interleaved format means samples from + * each channel are contiguous in the buffer. + * The samples from one channel immediately + * follow those of the previous channel. + */ + + u8 channel_mapping[8]; /* + * Supported values: + * PCM_CHANNEL_NULL, PCM_CHANNEL_FL, + * PCM_CHANNEL_FR, PCM_CHANNEL_FC, + * PCM_CHANNEL_LS, PCM_CHANNEL_RS, + * PCM_CHANNEL_LFE, PCM_CHANNEL_CS, + * PCM_CHANNEL_LB, PCM_CHANNEL_RB, + * PCM_CHANNEL_TS, PCM_CHANNEL_CVH, + * PCM_CHANNEL_MS, PCM_CHANNEL_FLC, + * PCM_CHANNEL_FRC, PCM_CHANNEL_RLC, + * PCM_CHANNEL_RRC. + * Channel[i] mapping describes channel I. Each + * element i of the array describes channel I + * inside the buffer where I < num_channels. + * An unused channel is set to zero. 
+ */ +}; + +struct asm_adpcm_cfg { + u16 ch_cfg; + u16 bits_per_sample; + u32 sample_rate; + u32 block_size; +}; + +struct asm_yadpcm_cfg { + u16 ch_cfg; + u16 bits_per_sample; + u32 sample_rate; +}; + +struct asm_midi_cfg { + u32 nMode; +}; + +struct asm_wma_cfg { + u16 format_tag; + u16 ch_cfg; + u32 sample_rate; + u32 avg_bytes_per_sec; + u16 block_align; + u16 valid_bits_per_sample; + u32 ch_mask; + u16 encode_opt; + u16 adv_encode_opt; + u32 adv_encode_opt2; + u32 drc_peak_ref; + u32 drc_peak_target; + u32 drc_ave_ref; + u32 drc_ave_target; +}; + +struct asm_wmapro_cfg { + u16 format_tag; + u16 ch_cfg; + u32 sample_rate; + u32 avg_bytes_per_sec; + u16 block_align; + u16 valid_bits_per_sample; + u32 ch_mask; + u16 encode_opt; + u16 adv_encode_opt; + u32 adv_encode_opt2; + u32 drc_peak_ref; + u32 drc_peak_target; + u32 drc_ave_ref; + u32 drc_ave_target; +}; + +struct asm_aac_cfg { + u16 format; + u16 aot; + u16 ep_config; + u16 section_data_resilience; + u16 scalefactor_data_resilience; + u16 spectral_data_resilience; + u16 ch_cfg; + u16 reserved; + u32 sample_rate; +}; + +struct asm_amrwbplus_cfg { + u32 size_bytes; + u32 version; + u32 num_channels; + u32 amr_band_mode; + u32 amr_dtx_mode; + u32 amr_frame_fmt; + u32 amr_lsf_idx; +}; + +struct asm_flac_cfg { + u16 stream_info_present; + u16 min_blk_size; + u16 max_blk_size; + u16 ch_cfg; + u16 sample_size; + u16 sample_rate; + u16 md5_sum; + u32 ext_sample_rate; + u32 min_frame_size; + u32 max_frame_size; +}; + +struct asm_vorbis_cfg { + u32 ch_cfg; + u32 bit_rate; + u32 min_bit_rate; + u32 max_bit_rate; + u16 bit_depth_pcm_sample; + u16 bit_stream_format; +}; + +struct asm_aac_read_cfg { + u32 bitrate; + u32 enc_mode; + u16 format; + u16 ch_cfg; + u32 sample_rate; +}; + +struct asm_amrnb_read_cfg { + u16 mode; + u16 dtx_mode; +}; + +struct asm_amrwb_read_cfg { + u16 mode; + u16 dtx_mode; +}; + +struct asm_evrc_read_cfg { + u16 max_rate; + u16 min_rate; + u16 rate_modulation_cmd; + u16 reserved; +}; + +struct 
asm_qcelp13_read_cfg { + u16 max_rate; + u16 min_rate; + u16 reduced_rate_level; + u16 rate_modulation_cmd; +}; + +struct asm_sbc_read_cfg { + u32 subband; + u32 block_len; + u32 ch_mode; + u32 alloc_method; + u32 bit_rate; + u32 sample_rate; +}; + +struct asm_sbc_bitrate { + u32 bitrate; +}; + +struct asm_immed_decode { + u32 mode; +}; + +struct asm_sbr_ps { + u32 enable; +}; + +struct asm_dual_mono { + u16 sce_left; + u16 sce_right; +}; + +struct asm_dec_chan_map { + u32 num_channels; /* Number of decoder output + * channels. A value of 0 + * indicates native channel + * mapping, which is valid + * only for NT mode. This + * means the output of the + * decoder is to be preserved + * as is. + */ + + u8 channel_mapping[MAX_CHAN_MAP_CHANNELS];/* Channel array of size + * num_channels. It can grow + * till MAX_CHAN_MAP_CHANNELS. + * Channel[i] mapping + * describes channel I inside + * the decoder output buffer. + * Valid channel mapping + * values are to be present at + * the beginning of the array. + * All remaining elements of + * the array are to be filled + * with PCM_CHANNEL_NULL. 
+ */ +}; + +struct asm_encode_cfg_blk { + u32 frames_per_buf; + u32 format_id; + u32 cfg_size; + union { + struct asm_pcm_cfg pcm; + struct asm_aac_read_cfg aac; + struct asm_amrnb_read_cfg amrnb; + struct asm_evrc_read_cfg evrc; + struct asm_qcelp13_read_cfg qcelp13; + struct asm_sbc_read_cfg sbc; + struct asm_amrwb_read_cfg amrwb; + struct asm_multi_channel_pcm_fmt_blk mpcm; + } __attribute__((packed)) cfg; +}; + +struct asm_frame_meta_info { + u32 offset_to_frame; + u32 frame_size; + u32 encoded_pcm_samples; + u32 msw_ts; + u32 lsw_ts; + u32 nflags; +}; + +/* Stream level commands */ +#define ASM_STREAM_CMD_OPEN_READ 0x00010BCB +#define ASM_STREAM_CMD_OPEN_READ_V2_1 0x00010DB2 +struct asm_stream_cmd_open_read { + struct apr_hdr hdr; + u32 uMode; + u32 src_endpoint; + u32 pre_proc_top; + u32 format; +} __attribute__((packed)); + +struct asm_stream_cmd_open_read_v2_1 { + struct apr_hdr hdr; + u32 uMode; + u32 src_endpoint; + u32 pre_proc_top; + u32 format; + u16 bits_per_sample; + u16 reserved; +} __packed; + +/* Supported formats */ +#define LINEAR_PCM 0x00010BE5 +#define DTMF 0x00010BE6 +#define ADPCM 0x00010BE7 +#define YADPCM 0x00010BE8 +#define MP3 0x00010BE9 +#define MPEG4_AAC 0x00010BEA +#define AMRNB_FS 0x00010BEB +#define AMRWB_FS 0x00010BEC +#define V13K_FS 0x00010BED +#define EVRC_FS 0x00010BEE +#define EVRCB_FS 0x00010BEF +#define EVRCWB_FS 0x00010BF0 +#define MIDI 0x00010BF1 +#define SBC 0x00010BF2 +#define WMA_V10PRO 0x00010BF3 +#define WMA_V9 0x00010BF4 +#define AMR_WB_PLUS 0x00010BF5 +#define AC3_DECODER 0x00010BF6 +#define EAC3_DECODER 0x00010C3C +#define DTS 0x00010D88 +#define DTS_LBR 0x00010DBB +#define ATRAC 0x00010D89 +#define MAT 0x00010D8A +#define G711_ALAW_FS 0x00010BF7 +#define G711_MLAW_FS 0x00010BF8 +#define G711_PCM_FS 0x00010BF9 +#define MPEG4_MULTI_AAC 0x00010D86 +#define US_POINT_EPOS_FORMAT 0x00012310 +#define US_RAW_FORMAT 0x0001127C +#define MULTI_CHANNEL_PCM 0x00010C66 + +#define ASM_ENCDEC_SBCRATE 0x00010C13 +#define 
ASM_ENCDEC_IMMDIATE_DECODE 0x00010C14
+#define ASM_ENCDEC_CFG_BLK 0x00010C2C
+
+#define ASM_STREAM_CMD_OPEN_READ_COMPRESSED 0x00010D95
+struct asm_stream_cmd_open_read_compressed {
+	struct apr_hdr hdr;
+	u32 uMode;
+	u32 frame_per_buf;
+} __packed;
+
+#define ASM_STREAM_CMD_OPEN_WRITE 0x00010BCA
+#define ASM_STREAM_CMD_OPEN_WRITE_V2_1 0x00010DB1
+struct asm_stream_cmd_open_write {
+	struct apr_hdr hdr;
+	u32 uMode;
+	u16 sink_endpoint;
+	u16 stream_handle;
+	u32 post_proc_top;
+	u32 format;
+} __attribute__((packed));
+
+#define IEC_61937_MASK 0x00000001
+#define IEC_60958_MASK 0x00000002
+
+#define ASM_STREAM_CMD_OPEN_WRITE_COMPRESSED 0x00010D84
+struct asm_stream_cmd_open_write_compressed {
+	struct apr_hdr hdr;
+	u32 flags;
+	u32 format;
+} __packed;
+
+#define ASM_STREAM_CMD_OPEN_READWRITE 0x00010BCC
+
+struct asm_stream_cmd_open_read_write {
+	struct apr_hdr hdr;
+	u32 uMode;
+	u32 post_proc_top;
+	u32 write_format;
+	u32 read_format;
+} __attribute__((packed));
+
+#define ASM_STREAM_CMD_OPEN_LOOPBACK 0x00010D6E
+struct asm_stream_cmd_open_loopback {
+	struct apr_hdr hdr;
+	u32 mode_flags;
+/* Mode flags.
+ * Bits 0-31: reserved; the client should set these bits to 0.
+ */
+	u16 src_endpointype;
+	/* Endpoint type. 0 = Tx Matrix */
+	u16 sink_endpointype;
+	/* Endpoint type. 0 = Rx Matrix */
+	u32 postprocopo_id;
+/* Postprocessor topology ID. Specifies the topology of
+ * postprocessing algorithms.
+ */
+} __packed;
+
+#define ADM_CMD_CONNECT_AFE_PORT 0x00010320
+#define ADM_CMD_DISCONNECT_AFE_PORT 0x00010321
+
+struct adm_cmd_connect_afe_port {
+	struct apr_hdr hdr;
+	u8 mode;	/* whether the interface is RX or TX */
+	u8 session_id;	/* ASM session ID */
+	u16 afe_port_id;
+} __packed;
+
+#define ADM_CMD_CONNECT_AFE_PORT_V2 0x00010332
+
+struct adm_cmd_connect_afe_port_v2 {
+	struct apr_hdr hdr;
+	u8 mode;	/* whether the interface is RX or TX */
+	u8 session_id;	/* ASM session ID */
+	u16 afe_port_id;
+	u32 num_channels;
+	u32 sampling_rate;
+} __packed;
+
+#define ASM_STREAM_CMD_SET_ENCDEC_PARAM 0x00010C10
+#define ASM_STREAM_CMD_GET_ENCDEC_PARAM 0x00010C11
+#define ASM_ENCDEC_CFG_BLK_ID 0x00010C2C
+#define ASM_ENABLE_SBR_PS 0x00010C63
+#define ASM_CONFIGURE_DUAL_MONO 0x00010C64
+struct asm_stream_cmd_encdec_cfg_blk {
+	struct apr_hdr hdr;
+	u32 param_id;
+	u32 param_size;
+	struct asm_encode_cfg_blk enc_blk;
+} __attribute__((packed));
+
+struct asm_stream_cmd_encdec_sbc_bitrate {
+	struct apr_hdr hdr;
+	u32 param_id;
+	struct asm_sbc_bitrate sbc_bitrate;
+} __attribute__((packed));
+
+struct asm_stream_cmd_encdec_immed_decode {
+	struct apr_hdr hdr;
+	u32 param_id;
+	u32 param_size;
+	struct asm_immed_decode dec;
+} __attribute__((packed));
+
+struct asm_stream_cmd_encdec_sbr {
+	struct apr_hdr hdr;
+	u32 param_id;
+	u32 param_size;
+	struct asm_sbr_ps sbr_ps;
+} __attribute__((packed));
+
+struct asm_stream_cmd_encdec_dualmono {
+	struct apr_hdr hdr;
+	u32 param_id;
+	u32 param_size;
+	struct asm_dual_mono channel_map;
+} __packed;
+
+#define ASM_PARAM_ID_AAC_STEREO_MIX_COEFF_SELECTION_FLAG 0x00010DD8
+
+/* Structure for the AAC decoder stereo coefficient setting.
*/
+
+struct asm_aac_stereo_mix_coeff_selection_param {
+	struct apr_hdr hdr;
+	u32 param_id;
+	u32 param_size;
+	u32 aac_stereo_mix_coeff_flag;
+} __packed;
+
+#define ASM_ENCDEC_DEC_CHAN_MAP 0x00010D82
+struct asm_stream_cmd_encdec_channelmap {
+	struct apr_hdr hdr;
+	u32 param_id;
+	u32 param_size;
+	struct asm_dec_chan_map chan_map;
+} __packed;
+
+#define ASM_STREAM_CMD_ADJUST_SAMPLES 0x00010C0A
+struct asm_stream_cmd_adjust_samples {
+	struct apr_hdr hdr;
+	u16 nsamples;
+	u16 reserved;
+} __attribute__((packed));
+
+#define ASM_STREAM_CMD_TAP_POPP_PCM 0x00010BF9
+struct asm_stream_cmd_tap_popp_pcm {
+	struct apr_hdr hdr;
+	u16 enable;
+	u16 reserved;
+	u32 module_id;
+} __attribute__((packed));
+
+/* Session Level commands */
+#define ASM_SESSION_CMD_MEMORY_MAP 0x00010C32
+struct asm_stream_cmd_memory_map {
+	struct apr_hdr hdr;
+	u32 buf_add;
+	u32 buf_size;
+	u16 mempool_id;
+	u16 reserved;
+} __attribute__((packed));
+
+#define ASM_SESSION_CMD_MEMORY_UNMAP 0x00010C33
+struct asm_stream_cmd_memory_unmap {
+	struct apr_hdr hdr;
+	u32 buf_add;
+} __attribute__((packed));
+
+#define ASM_SESSION_CMD_MEMORY_MAP_REGIONS 0x00010C45
+struct asm_memory_map_regions {
+	u32 phys;
+	u32 buf_size;
+} __attribute__((packed));
+
+struct asm_stream_cmd_memory_map_regions {
+	struct apr_hdr hdr;
+	u16 mempool_id;
+	u16 nregions;
+} __attribute__((packed));
+
+#define ASM_SESSION_CMD_MEMORY_UNMAP_REGIONS 0x00010C46
+struct asm_memory_unmap_regions {
+	u32 phys;
+} __attribute__((packed));
+
+struct asm_stream_cmd_memory_unmap_regions {
+	struct apr_hdr hdr;
+	u16 nregions;
+	u16 reserved;
+} __attribute__((packed));
+
+#define ASM_SESSION_CMD_RUN 0x00010BD2
+struct asm_stream_cmd_run {
+	struct apr_hdr hdr;
+	u32 flags;
+	u32 msw_ts;
+	u32 lsw_ts;
+} __attribute__((packed));
+
+/* Session level events */
+#define ASM_SESSION_CMD_REGISTER_FOR_RX_UNDERFLOW_EVENTS 0x00010BD5
+struct asm_stream_cmd_reg_rx_underflow_event {
+	struct apr_hdr hdr;
+	u16 enable;
+	u16 reserved;
+}
__attribute__((packed)); + +#define ASM_SESSION_CMD_REGISTER_FOR_TX_OVERFLOW_EVENTS 0x00010BD6 +struct asm_stream_cmd_reg_tx_overflow_event{ + struct apr_hdr hdr; + u16 enable; + u16 reserved; +} __attribute__((packed)); + +/* Data Path commands */ +#define ASM_DATA_CMD_WRITE 0x00010BD9 +struct asm_stream_cmd_write{ + struct apr_hdr hdr; + u32 buf_add; + u32 avail_bytes; + u32 uid; + u32 msw_ts; + u32 lsw_ts; + u32 uflags; +} __attribute__((packed)); + +#define ASM_DATA_CMD_READ 0x00010BDA +struct asm_stream_cmd_read{ + struct apr_hdr hdr; + u32 buf_add; + u32 buf_size; + u32 uid; +} __attribute__((packed)); + +#define ASM_DATA_CMD_READ_COMPRESSED 0x00010DBF +struct asm_stream_cmd_read_compressed { + struct apr_hdr hdr; + u32 buf_add; + u32 buf_size; + u32 uid; +} __packed; + +#define ASM_DATA_CMD_MEDIA_FORMAT_UPDATE 0x00010BDC +#define ASM_DATA_EVENT_ENC_SR_CM_NOTIFY 0x00010BDE +struct asm_stream_media_format_update{ + struct apr_hdr hdr; + u32 format; + u32 cfg_size; + union { + struct asm_pcm_cfg pcm_cfg; + struct asm_adpcm_cfg adpcm_cfg; + struct asm_yadpcm_cfg yadpcm_cfg; + struct asm_midi_cfg midi_cfg; + struct asm_wma_cfg wma_cfg; + struct asm_wmapro_cfg wmapro_cfg; + struct asm_aac_cfg aac_cfg; + struct asm_flac_cfg flac_cfg; + struct asm_vorbis_cfg vorbis_cfg; + struct asm_multi_channel_pcm_fmt_blk multi_ch_pcm_cfg; + struct asm_amrwbplus_cfg amrwbplus_cfg; + } __attribute__((packed)) write_cfg; +} __attribute__((packed)); + + +/* Command Responses */ +#define ASM_STREAM_CMDRSP_GET_ENCDEC_PARAM 0x00010C12 +struct asm_stream_cmdrsp_get_readwrite_param{ + struct apr_hdr hdr; + u32 status; + u32 param_id; + u16 param_size; + u16 padding; + union { + struct asm_sbc_bitrate sbc_bitrate; + struct asm_immed_decode aac_dec; + } __attribute__((packed)) read_write_cfg; +} __attribute__((packed)); + + +#define ASM_SESSION_CMDRSP_GET_SESSION_TIME 0x00010BD8 +struct asm_stream_cmdrsp_get_session_time{ + struct apr_hdr hdr; + u32 status; + u32 msw_ts; + u32 lsw_ts; +} 
__attribute__((packed)); + +#define ASM_DATA_EVENT_WRITE_DONE 0x00010BDF +struct asm_data_event_write_done{ + u32 buf_add; + u32 status; +} __attribute__((packed)); + +#define ASM_DATA_EVENT_READ_DONE 0x00010BE0 +struct asm_data_event_read_done{ + u32 status; + u32 buffer_add; + u32 enc_frame_size; + u32 offset; + u32 msw_ts; + u32 lsw_ts; + u32 flags; + u32 num_frames; + u32 id; +} __attribute__((packed)); + +#define ASM_DATA_EVENT_READ_COMPRESSED_DONE 0x00010DC0 +struct asm_data_event_read_compressed_done { + u32 status; + u32 buffer_add; + u32 enc_frame_size; + u32 offset; + u32 msw_ts; + u32 lsw_ts; + u32 flags; + u32 num_frames; + u32 id; +} __packed; + +#define ASM_DATA_EVENT_SR_CM_CHANGE_NOTIFY 0x00010C65 +struct asm_data_event_sr_cm_change_notify { + u32 sample_rate; + u16 no_of_channels; + u16 reserved; + u8 channel_map[8]; +} __packed; + +/* service level events */ + +#define ASM_SERVICE_CMDRSP_GET_STREAM_HANDLES 0x00010C1B +struct asm_svc_cmdrsp_get_strm_handles{ + struct apr_hdr hdr; + u32 num_handles; + u32 stream_handles; +} __attribute__((packed)); + + +#define ASM_SERVICE_CMDRSP_GET_WALLCLOCK_TIME 0x00010C1A +struct asm_svc_cmdrsp_get_wallclock_time{ + struct apr_hdr hdr; + u32 status; + u32 msw_ts; + u32 lsw_ts; +} __attribute__((packed)); + +/* + * Error code +*/ +#define ADSP_EOK 0x00000000 /* Success / completed / no errors. */ +#define ADSP_EFAILED 0x00000001 /* General failure. */ +#define ADSP_EBADPARAM 0x00000002 /* Bad operation parameter(s). */ +#define ADSP_EUNSUPPORTED 0x00000003 /* Unsupported routine/operation. */ +#define ADSP_EVERSION 0x00000004 /* Unsupported version. */ +#define ADSP_EUNEXPECTED 0x00000005 /* Unexpected problem encountered. */ +#define ADSP_EPANIC 0x00000006 /* Unhandled problem occurred. */ +#define ADSP_ENORESOURCE 0x00000007 /* Unable to allocate resource(s). */ +#define ADSP_EHANDLE 0x00000008 /* Invalid handle. */ +#define ADSP_EALREADY 0x00000009 /* Operation is already processed. 
*/ +#define ADSP_ENOTREADY 0x0000000A /* Operation not ready to be processed*/ +#define ADSP_EPENDING 0x0000000B /* Operation is pending completion*/ +#define ADSP_EBUSY 0x0000000C /* Operation could not be accepted or + processed. */ +#define ADSP_EABORTED 0x0000000D /* Operation aborted due to an error. */ +#define ADSP_EPREEMPTED 0x0000000E /* Operation preempted by higher priority*/ +#define ADSP_ECONTINUE 0x0000000F /* Operation requests intervention + to complete. */ +#define ADSP_EIMMEDIATE 0x00000010 /* Operation requests immediate + intervention to complete. */ +#define ADSP_ENOTIMPL 0x00000011 /* Operation is not implemented. */ +#define ADSP_ENEEDMORE 0x00000012 /* Operation needs more data or resources*/ + +/* SRS TRUMEDIA start */ +#define SRS_ID_GLOBAL 0x00000001 +#define SRS_ID_WOWHD 0x00000002 +#define SRS_ID_CSHP 0x00000003 +#define SRS_ID_HPF 0x00000004 +#define SRS_ID_PEQ 0x00000005 +#define SRS_ID_HL 0x00000006 + +#define SRS_CMD_UPLOAD 0x7FFF0000 +#define SRS_PARAM_INDEX_MASK 0x80000000 +#define SRS_PARAM_OFFSET_MASK 0x3FFF0000 +#define SRS_PARAM_VALUE_MASK 0x0000FFFF + +struct srs_trumedia_params_GLOBAL { + uint8_t v1; + uint8_t v2; + uint8_t v3; + uint8_t v4; + uint8_t v5; + uint8_t v6; + uint8_t v7; + uint8_t v8; +} __packed; + +struct srs_trumedia_params_WOWHD { + uint32_t v1; + uint16_t v2; + uint16_t v3; + uint16_t v4; + uint16_t v5; + uint16_t v6; + uint16_t v7; + uint16_t v8; + uint16_t v____A1; + uint32_t v9; + uint16_t v10; + uint16_t v11; + uint32_t v12[16]; +} __packed; + +struct srs_trumedia_params_CSHP { + uint32_t v1; + uint16_t v2; + uint16_t v3; + uint16_t v4; + uint16_t v5; + uint16_t v6; + uint16_t v____A1; + uint32_t v7; + uint16_t v8; + uint16_t v9; + uint32_t v10[16]; +} __packed; + +struct srs_trumedia_params_HPF { + uint32_t v1; + uint32_t v2[26]; +} __packed; + +struct srs_trumedia_params_PEQ { + uint32_t v1; + uint16_t v2; + uint16_t v3; + uint16_t v4; + uint16_t v____A1; + uint32_t v5[26]; + uint32_t v6[26]; +} 
__packed; + +struct srs_trumedia_params_HL { + uint16_t v1; + uint16_t v2; + uint16_t v3; + uint16_t v____A1; + int32_t v4; + uint32_t v5; + uint16_t v6; + uint16_t v____A2; + uint32_t v7; +} __packed; + +struct srs_trumedia_params { + struct srs_trumedia_params_GLOBAL global; + struct srs_trumedia_params_WOWHD wowhd; + struct srs_trumedia_params_CSHP cshp; + struct srs_trumedia_params_HPF hpf; + struct srs_trumedia_params_PEQ peq; + struct srs_trumedia_params_HL hl; +} __packed; +int srs_trumedia_open(int port_id, int srs_tech_id, void *srs_params); +/* SRS TruMedia end */ + +/* SRS Studio Sound 3D start */ +#define SRS_ID_SS3D_GLOBAL 0x00000001 +#define SRS_ID_SS3D_CTRL 0x00000002 +#define SRS_ID_SS3D_FILTER 0x00000003 + +struct srs_SS3D_params_GLOBAL { + uint8_t v1; + uint8_t v2; + uint8_t v3; + uint8_t v4; + uint8_t v5; + uint8_t v6; + uint8_t v7; + uint8_t v8; +} __packed; + +struct srs_SS3D_ctrl_params { + uint8_t v[236]; +} __packed; + +struct srs_SS3D_filter_params { + uint8_t v[28 + 2752]; +} __packed; + +struct srs_SS3D_params { + struct srs_SS3D_params_GLOBAL global; + struct srs_SS3D_ctrl_params ss3d; + struct srs_SS3D_filter_params ss3d_f; +} __packed; + +int srs_ss3d_open(int port_id, int srs_tech_id, void *srs_params); +/* SRS Studio Sound 3D end */ +#endif /*_APR_AUDIO_H_*/ diff --git a/include/sound/msm-dai-q6.h b/include/sound/msm-dai-q6.h new file mode 100644 index 000000000000..a39d3dc08d00 --- /dev/null +++ b/include/sound/msm-dai-q6.h @@ -0,0 +1,45 @@ +/* Copyright (c) 2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU General Public License for more details. + */ + +#ifndef __MSM_DAI_Q6_PDATA_H__ + +#define __MSM_DAI_Q6_PDATA_H__ + +#define MSM_MI2S_SD0 (1 << 0) +#define MSM_MI2S_SD1 (1 << 1) +#define MSM_MI2S_SD2 (1 << 2) +#define MSM_MI2S_SD3 (1 << 3) +#define MSM_MI2S_CAP_RX 0 +#define MSM_MI2S_CAP_TX 1 + +struct msm_dai_auxpcm_config { + u16 mode; + u16 sync; + u16 frame; + u16 quant; + u16 slot; + u16 data; + int pcm_clk_rate; +}; + +struct msm_mi2s_pdata { + u16 rx_sd_lines; + u16 tx_sd_lines; +}; + +struct msm_dai_auxpcm_pdata { + const char *clk; + struct msm_dai_auxpcm_config mode_8k; + struct msm_dai_auxpcm_config mode_16k; +}; + +#endif diff --git a/include/sound/msm_hdmi_audio.h b/include/sound/msm_hdmi_audio.h new file mode 100644 index 000000000000..8ada49f82234 --- /dev/null +++ b/include/sound/msm_hdmi_audio.h @@ -0,0 +1,54 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */
+
+#ifndef __MSM_HDMI_AUDIO_H
+#define __MSM_HDMI_AUDIO_H
+
+/* Supported HDMI Audio channels */
+#define MSM_HDMI_AUDIO_CHANNEL_2	0
+#define MSM_HDMI_AUDIO_CHANNEL_4	1
+#define MSM_HDMI_AUDIO_CHANNEL_6	2
+#define MSM_HDMI_AUDIO_CHANNEL_8	3
+
+#define TRUE 1
+#define FALSE 0
+
+enum hdmi_supported_sample_rates {
+	HDMI_SAMPLE_RATE_32KHZ,
+	HDMI_SAMPLE_RATE_44_1KHZ,
+	HDMI_SAMPLE_RATE_48KHZ,
+	HDMI_SAMPLE_RATE_88_2KHZ,
+	HDMI_SAMPLE_RATE_96KHZ,
+	HDMI_SAMPLE_RATE_176_4KHZ,
+	HDMI_SAMPLE_RATE_192KHZ
+};
+
+int hdmi_audio_enable(bool on, u32 fifo_water_mark);
+int hdmi_audio_packet_enable(bool on);
+int hdmi_msm_audio_get_sample_rate(void);
+
+#if defined(CONFIG_FB_MSM_HDMI_MSM_PANEL) || defined(CONFIG_DRM_MSM)
+int hdmi_msm_audio_info_setup(bool enabled, u32 num_of_channels,
+	u32 channel_allocation, u32 level_shift, bool down_mix);
+void hdmi_msm_audio_sample_rate_reset(int rate);
+#else
+static inline int hdmi_msm_audio_info_setup(bool enabled,
+	u32 num_of_channels, u32 channel_allocation, u32 level_shift,
+	bool down_mix)
+{
+	return 0;
+}
+static inline void hdmi_msm_audio_sample_rate_reset(int rate)
+{
+}
+#endif
+#endif /* __MSM_HDMI_AUDIO_H */
diff --git a/include/sound/q6adm-v2.h b/include/sound/q6adm-v2.h
new file mode 100644
index 000000000000..a800f9ac87a3
--- /dev/null
+++ b/include/sound/q6adm-v2.h
@@ -0,0 +1,50 @@
+/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */ +#ifndef __Q6_ADM_V2_H__ +#define __Q6_ADM_V2_H__ + + +#define ADM_PATH_PLAYBACK 0x1 +#define ADM_PATH_LIVE_REC 0x2 +#define ADM_PATH_NONLIVE_REC 0x3 +#include <sound/q6audio-v2.h> + +#define Q6_AFE_MAX_PORTS 32 + +/* multiple copp per stream. */ +struct route_payload { + unsigned int copp_ids[Q6_AFE_MAX_PORTS]; + unsigned short num_copps; + unsigned int session_id; +}; + +int adm_open(int port, int path, int rate, int mode, int topology); + +int adm_multi_ch_copp_open(int port, int path, int rate, int mode, + int topology); + +int adm_memory_map_regions(int port_id, uint32_t *buf_add, uint32_t mempool_id, + uint32_t *bufsz, uint32_t bufcnt); + +int adm_memory_unmap_regions(int port_id, uint32_t *buf_add, uint32_t *bufsz, + uint32_t bufcnt); + +int adm_close(int port); + +int adm_matrix_map(int session_id, int path, int num_copps, + unsigned int *port_id, int copp_id); + +int adm_connect_afe_port(int mode, int session_id, int port_id); + +int adm_get_copp_id(int port_id); + +#endif /* __Q6_ADM_V2_H__ */ diff --git a/include/sound/q6adm.h b/include/sound/q6adm.h new file mode 100644 index 000000000000..cb3273f5119b --- /dev/null +++ b/include/sound/q6adm.h @@ -0,0 +1,50 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ +#ifndef __Q6_ADM_H__ +#define __Q6_ADM_H__ +#include <sound/q6afe.h> + +#define ADM_PATH_PLAYBACK 0x1 +#define ADM_PATH_LIVE_REC 0x2 +#define ADM_PATH_NONLIVE_REC 0x3 + +/* multiple copp per stream. 
*/ +struct route_payload { + unsigned int copp_ids[AFE_MAX_PORTS]; + unsigned short num_copps; + unsigned int session_id; +}; + +int adm_open(int port, int path, int rate, int mode, int topology); + +int adm_multi_ch_copp_open(int port, int path, int rate, int mode, + int topology, int perfmode); + +int adm_memory_map_regions(uint32_t *buf_add, uint32_t mempool_id, + uint32_t *bufsz, uint32_t bufcnt); + +int adm_memory_unmap_regions(uint32_t *buf_add, uint32_t *bufsz, + uint32_t bufcnt); + +int adm_close(int port); + +int adm_matrix_map(int session_id, int path, int num_copps, + unsigned int *port_id, int copp_id); + +int adm_connect_afe_port(int mode, int session_id, int port_id); +int adm_disconnect_afe_port(int mode, int session_id, int port_id); + +void adm_ec_ref_rx_id(int port_id); + +int adm_get_copp_id(int port_id); + +#endif /* __Q6_ADM_H__ */ diff --git a/include/sound/q6afe-v2.h b/include/sound/q6afe-v2.h new file mode 100644 index 000000000000..79ef7ddc4b44 --- /dev/null +++ b/include/sound/q6afe-v2.h @@ -0,0 +1,108 @@ +/* Copyright (c) 2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#ifndef __Q6AFE_V2_H__ +#define __Q6AFE_V2_H__ +#include <sound/apr_audio-v2.h> + +#define MSM_AFE_MONO 0 +#define MSM_AFE_MONO_RIGHT 1 +#define MSM_AFE_MONO_LEFT 2 +#define MSM_AFE_STEREO 3 +#define MSM_AFE_4CHANNELS 4 +#define MSM_AFE_6CHANNELS 6 +#define MSM_AFE_8CHANNELS 8 + +#define MSM_AFE_I2S_FORMAT_LPCM 0 +#define MSM_AFE_I2S_FORMAT_COMPR 1 +#define MSM_AFE_I2S_FORMAT_IEC60958_LPCM 2 +#define MSM_AFE_I2S_FORMAT_IEC60958_COMPR 3 + +#define MSM_AFE_PORT_TYPE_RX 0 +#define MSM_AFE_PORT_TYPE_TX 1 + +#define RT_PROXY_DAI_001_RX 0xE0 +#define RT_PROXY_DAI_001_TX 0xF0 +#define RT_PROXY_DAI_002_RX 0xF1 +#define RT_PROXY_DAI_002_TX 0xE1 +#define VIRTUAL_ID_TO_PORTID(val) ((val & 0xF) | 0x2000) + +enum { + IDX_PRIMARY_I2S_RX = 0, + IDX_PRIMARY_I2S_TX = 1, + IDX_PCM_RX = 2, + IDX_PCM_TX = 3, + IDX_SECONDARY_I2S_RX = 4, + IDX_SECONDARY_I2S_TX = 5, + IDX_MI2S_RX = 6, + IDX_MI2S_TX = 7, + IDX_HDMI_RX = 8, + IDX_RSVD_2 = 9, + IDX_RSVD_3 = 10, + IDX_DIGI_MIC_TX = 11, + IDX_VOICE_RECORD_RX = 12, + IDX_VOICE_RECORD_TX = 13, + IDX_VOICE_PLAYBACK_TX = 14, + IDX_SLIMBUS_0_RX = 15, + IDX_SLIMBUS_0_TX = 16, + IDX_SLIMBUS_1_RX = 17, + IDX_SLIMBUS_1_TX = 18, + IDX_SLIMBUS_2_RX = 19, + IDX_SLIMBUS_2_TX = 20, + IDX_SLIMBUS_3_RX = 21, + IDX_SLIMBUS_3_TX = 22, + IDX_SLIMBUS_4_RX = 23, + IDX_SLIMBUS_4_TX = 24, + IDX_INT_BT_SCO_RX = 25, + IDX_INT_BT_SCO_TX = 26, + IDX_INT_BT_A2DP_RX = 27, + IDX_INT_FM_RX = 28, + IDX_INT_FM_TX = 29, + IDX_RT_PROXY_PORT_001_RX = 30, + IDX_RT_PROXY_PORT_001_TX = 31, + AFE_MAX_PORTS +}; + +int afe_open(u16 port_id, union afe_port_config *afe_config, int rate); +int afe_close(int port_id); +int afe_loopback(u16 enable, u16 rx_port, u16 tx_port); +int afe_sidetone(u16 tx_port_id, u16 rx_port_id, u16 enable, uint16_t gain); +int afe_loopback_gain(u16 port_id, u16 volume); +int afe_validate_port(u16 port_id); +int afe_get_port_index(u16 port_id); +int afe_start_pseudo_port(u16 port_id); +int afe_stop_pseudo_port(u16 port_id); +int afe_cmd_memory_map(u32 
dma_addr_p, u32 dma_buf_sz); +int afe_cmd_memory_map_nowait(int port_id, u32 dma_addr_p, u32 dma_buf_sz); +int afe_cmd_memory_unmap(u32 dma_addr_p); +int afe_cmd_memory_unmap_nowait(u32 dma_addr_p); + +int afe_register_get_events(u16 port_id, + void (*cb) (uint32_t opcode, + uint32_t token, uint32_t *payload, void *priv), + void *private_data); +int afe_unregister_get_events(u16 port_id); +int afe_rt_proxy_port_write(u32 buf_addr_p, u32 mem_map_handle, int bytes); +int afe_rt_proxy_port_read(u32 buf_addr_p, u32 mem_map_handle, int bytes); +int afe_port_start(u16 port_id, union afe_port_config *afe_config, + u32 rate); +int afe_port_stop_nowait(int port_id); +int afe_apply_gain(u16 port_id, u16 gain); +int afe_q6_interface_prepare(void); +int afe_get_port_type(u16 port_id); +/* if port_id is virtual, convert to physical.. + * if port_id is already physical, return physical + */ +int afe_convert_virtual_to_portid(u16 port_id); + +int afe_pseudo_port_start_nowait(u16 port_id); +int afe_pseudo_port_stop_nowait(u16 port_id); +#endif /* __Q6AFE_V2_H__ */ diff --git a/include/sound/q6afe.h b/include/sound/q6afe.h new file mode 100644 index 000000000000..1b7a79093c1b --- /dev/null +++ b/include/sound/q6afe.h @@ -0,0 +1,110 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#ifndef __Q6AFE_H__ +#define __Q6AFE_H__ +#include <sound/apr_audio.h> + +#define MSM_AFE_MONO 0 +#define MSM_AFE_MONO_RIGHT 1 +#define MSM_AFE_MONO_LEFT 2 +#define MSM_AFE_STEREO 3 +#define MSM_AFE_4CHANNELS 4 +#define MSM_AFE_6CHANNELS 6 +#define MSM_AFE_8CHANNELS 8 + +#define MSM_AFE_I2S_FORMAT_LPCM 0 +#define MSM_AFE_I2S_FORMAT_COMPR 1 +#define MSM_AFE_I2S_FORMAT_IEC60958_LPCM 2 +#define MSM_AFE_I2S_FORMAT_IEC60958_COMPR 3 + +#define MSM_AFE_PORT_TYPE_RX 0 +#define MSM_AFE_PORT_TYPE_TX 1 + +#define RT_PROXY_DAI_001_RX 0xE0 +#define RT_PROXY_DAI_001_TX 0xF0 +#define RT_PROXY_DAI_002_RX 0xF1 +#define RT_PROXY_DAI_002_TX 0xE1 +#define VIRTUAL_ID_TO_PORTID(val) ((val & 0xF) | 0x2000) + +enum { + IDX_PRIMARY_I2S_RX = 0, + IDX_PRIMARY_I2S_TX = 1, + IDX_PCM_RX = 2, + IDX_PCM_TX = 3, + IDX_SECONDARY_I2S_RX = 4, + IDX_SECONDARY_I2S_TX = 5, + IDX_MI2S_RX = 6, + IDX_MI2S_TX = 7, + IDX_HDMI_RX = 8, + IDX_RSVD_2 = 9, + IDX_RSVD_3 = 10, + IDX_DIGI_MIC_TX = 11, + IDX_VOICE_RECORD_RX = 12, + IDX_VOICE_RECORD_TX = 13, + IDX_VOICE_PLAYBACK_TX = 14, + IDX_SLIMBUS_0_RX = 15, + IDX_SLIMBUS_0_TX = 16, + IDX_SLIMBUS_1_RX = 17, + IDX_SLIMBUS_1_TX = 18, + IDX_SLIMBUS_2_RX = 19, + IDX_SLIMBUS_2_TX = 20, + IDX_SLIMBUS_3_RX = 21, + IDX_SLIMBUS_3_TX = 22, + IDX_SLIMBUS_4_RX = 23, + IDX_SLIMBUS_4_TX = 24, + IDX_INT_BT_SCO_RX = 25, + IDX_INT_BT_SCO_TX = 26, + IDX_INT_BT_A2DP_RX = 27, + IDX_INT_FM_RX = 28, + IDX_INT_FM_TX = 29, + IDX_RT_PROXY_PORT_001_RX = 30, + IDX_RT_PROXY_PORT_001_TX = 31, + IDX_SECONDARY_PCM_RX = 32, + IDX_SECONDARY_PCM_TX = 33, + AFE_MAX_PORTS +}; + +int afe_open(u16 port_id, union afe_port_config *afe_config, int rate); +int afe_close(int port_id); +int afe_loopback(u16 enable, u16 rx_port, u16 tx_port); +int afe_loopback_cfg(u16 enable, u16 dst_port, u16 src_port, u16 mode); +int afe_sidetone(u16 tx_port_id, u16 rx_port_id, u16 enable, uint16_t gain); +int afe_loopback_gain(u16 port_id, u16 volume); +int afe_validate_port(u16 port_id); +int afe_get_port_index(u16 
port_id); +int afe_start_pseudo_port(u16 port_id); +int afe_stop_pseudo_port(u16 port_id); +int afe_cmd_memory_map(u32 dma_addr_p, u32 dma_buf_sz); +int afe_cmd_memory_map_nowait(u32 dma_addr_p, u32 dma_buf_sz); +int afe_cmd_memory_unmap(u32 dma_addr_p); +int afe_cmd_memory_unmap_nowait(u32 dma_addr_p); + +int afe_register_get_events(u16 port_id, + void (*cb) (uint32_t opcode, + uint32_t token, uint32_t *payload, void *priv), + void *private_data); +int afe_unregister_get_events(u16 port_id); +int afe_rt_proxy_port_write(u32 buf_addr_p, int bytes); +int afe_rt_proxy_port_read(u32 buf_addr_p, int bytes); +int afe_port_start(u16 port_id, union afe_port_config *afe_config, u32 rate); +int afe_port_stop_nowait(int port_id); +int afe_apply_gain(u16 port_id, u16 gain); +int afe_q6_interface_prepare(void); +int afe_get_port_type(u16 port_id); +/* if port_id is virtual, convert to physical.. + * if port_id is already physical, return physical + */ +int afe_convert_virtual_to_portid(u16 port_id); + +int afe_pseudo_port_start_nowait(u16 port_id); +int afe_pseudo_port_stop_nowait(u16 port_id); +#endif /* __Q6AFE_H__ */ diff --git a/include/sound/q6asm-v2.h b/include/sound/q6asm-v2.h new file mode 100644 index 000000000000..e5b9d6bbe03c --- /dev/null +++ b/include/sound/q6asm-v2.h @@ -0,0 +1,311 @@ +/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#ifndef __Q6_ASM_V2_H__ +#define __Q6_ASM_V2_H__ + +#include <mach/qdsp6v2/apr.h> +#include <mach/msm_subsystem_map.h> +#include <sound/apr_audio-v2.h> +#include <linux/list.h> +#include <linux/msm_ion.h> + +#define IN 0x000 +#define OUT 0x001 +#define CH_MODE_MONO 0x001 +#define CH_MODE_STEREO 0x002 + +#define FORMAT_LINEAR_PCM 0x0000 +#define FORMAT_DTMF 0x0001 +#define FORMAT_ADPCM 0x0002 +#define FORMAT_YADPCM 0x0003 +#define FORMAT_MP3 0x0004 +#define FORMAT_MPEG4_AAC 0x0005 +#define FORMAT_AMRNB 0x0006 +#define FORMAT_AMRWB 0x0007 +#define FORMAT_V13K 0x0008 +#define FORMAT_EVRC 0x0009 +#define FORMAT_EVRCB 0x000a +#define FORMAT_EVRCWB 0x000b +#define FORMAT_MIDI 0x000c +#define FORMAT_SBC 0x000d +#define FORMAT_WMA_V10PRO 0x000e +#define FORMAT_WMA_V9 0x000f +#define FORMAT_AMR_WB_PLUS 0x0010 +#define FORMAT_MPEG4_MULTI_AAC 0x0011 +#define FORMAT_MULTI_CHANNEL_LINEAR_PCM 0x0012 + +#define ENCDEC_SBCBITRATE 0x0001 +#define ENCDEC_IMMEDIATE_DECODE 0x0002 +#define ENCDEC_CFG_BLK 0x0003 + +#define CMD_PAUSE 0x0001 +#define CMD_FLUSH 0x0002 +#define CMD_EOS 0x0003 +#define CMD_CLOSE 0x0004 +#define CMD_OUT_FLUSH 0x0005 + +/* bit 0:1 represents priority of stream */ +#define STREAM_PRIORITY_NORMAL 0x0000 +#define STREAM_PRIORITY_LOW 0x0001 +#define STREAM_PRIORITY_HIGH 0x0002 + +/* bit 4 represents META enable of encoded data buffer */ +#define BUFFER_META_ENABLE 0x0010 + +/* Enable Sample_Rate/Channel_Mode notification event from Decoder */ +#define SR_CM_NOTIFY_ENABLE 0x0004 + +#define SYNC_IO_MODE 0x0001 +#define ASYNC_IO_MODE 0x0002 +#define NT_MODE 0x0400 + + +#define NO_TIMESTAMP 0xFF00 +#define SET_TIMESTAMP 0x0000 + +#define SOFT_PAUSE_ENABLE 1 +#define SOFT_PAUSE_DISABLE 0 + +#define SESSION_MAX 0x08 + +#define SOFT_PAUSE_PERIOD 30 /* ramp up/down for 30ms */ +#define SOFT_PAUSE_STEP 2000 /* Step value 2ms or 2000us */ +enum { + SOFT_PAUSE_CURVE_LINEAR = 0, + SOFT_PAUSE_CURVE_EXP, + SOFT_PAUSE_CURVE_LOG, +}; + +#define SOFT_VOLUME_PERIOD 30 /* ramp 
up/down for 30ms */ +#define SOFT_VOLUME_STEP 2000 /* Step value 2ms or 2000us */ +enum { + SOFT_VOLUME_CURVE_LINEAR = 0, + SOFT_VOLUME_CURVE_EXP, + SOFT_VOLUME_CURVE_LOG, +}; + +typedef void (*app_cb)(uint32_t opcode, uint32_t token, + uint32_t *payload, void *priv); + +struct audio_buffer { + dma_addr_t phys; + void *data; + uint32_t used; + uint32_t size;/* size of buffer */ + uint32_t actual_size; /* actual number of bytes read by DSP */ + struct ion_handle *handle; + struct ion_client *client; +}; + +struct audio_aio_write_param { + unsigned long paddr; + uint32_t len; + uint32_t uid; + uint32_t lsw_ts; + uint32_t msw_ts; + uint32_t flags; +}; + +struct audio_aio_read_param { + unsigned long paddr; + uint32_t len; + uint32_t uid; +}; + +struct audio_port_data { + struct audio_buffer *buf; + uint32_t max_buf_cnt; + uint32_t dsp_buf; + uint32_t cpu_buf; + struct list_head mem_map_handle; + uint32_t tmp_hdl; + /* read or write locks */ + struct mutex lock; + spinlock_t dsp_lock; +}; + +struct audio_client { + int session; + app_cb cb; + atomic_t cmd_state; + /* Relative or absolute TS */ + uint32_t time_flag; + void *priv; + uint32_t io_mode; + uint64_t time_stamp; + struct apr_svc *apr; + struct apr_svc *mmap_apr; + struct mutex cmd_lock; + /* idx:1 out port, 0: in port*/ + struct audio_port_data port[2]; + wait_queue_head_t cmd_wait; +}; + +void q6asm_audio_client_free(struct audio_client *ac); + +struct audio_client *q6asm_audio_client_alloc(app_cb cb, void *priv); + +struct audio_client *q6asm_get_audio_client(int session_id); + +int q6asm_audio_client_buf_alloc(unsigned int dir/* 1:Out,0:In */, + struct audio_client *ac, + unsigned int bufsz, + unsigned int bufcnt); +int q6asm_audio_client_buf_alloc_contiguous(unsigned int dir + /* 1:Out,0:In */, + struct audio_client *ac, + unsigned int bufsz, + unsigned int bufcnt); + +int q6asm_audio_client_buf_free_contiguous(unsigned int dir, + struct audio_client *ac); + +int q6asm_open_read(struct audio_client *ac, 
uint32_t format + /*, uint16_t bits_per_sample*/); + +int q6asm_open_write(struct audio_client *ac, uint32_t format + /*, uint16_t bits_per_sample*/); + +int q6asm_open_read_write(struct audio_client *ac, + uint32_t rd_format, + uint32_t wr_format); + +int q6asm_write(struct audio_client *ac, uint32_t len, uint32_t msw_ts, + uint32_t lsw_ts, uint32_t flags); +int q6asm_write_nolock(struct audio_client *ac, uint32_t len, uint32_t msw_ts, + uint32_t lsw_ts, uint32_t flags); + +int q6asm_async_write(struct audio_client *ac, + struct audio_aio_write_param *param); + +int q6asm_async_read(struct audio_client *ac, + struct audio_aio_read_param *param); + +int q6asm_read(struct audio_client *ac); +int q6asm_read_nolock(struct audio_client *ac); + +int q6asm_memory_map(struct audio_client *ac, uint32_t buf_add, + int dir, uint32_t bufsz, uint32_t bufcnt); + +int q6asm_memory_unmap(struct audio_client *ac, uint32_t buf_add, + int dir); + +int q6asm_run(struct audio_client *ac, uint32_t flags, + uint32_t msw_ts, uint32_t lsw_ts); + +int q6asm_run_nowait(struct audio_client *ac, uint32_t flags, + uint32_t msw_ts, uint32_t lsw_ts); + +int q6asm_reg_tx_overflow(struct audio_client *ac, uint16_t enable); + +int q6asm_cmd(struct audio_client *ac, int cmd); + +int q6asm_cmd_nowait(struct audio_client *ac, int cmd); + +void *q6asm_is_cpu_buf_avail(int dir, struct audio_client *ac, + uint32_t *size, uint32_t *idx); + +void *q6asm_is_cpu_buf_avail_nolock(int dir, struct audio_client *ac, + uint32_t *size, uint32_t *idx); + +int q6asm_is_dsp_buf_avail(int dir, struct audio_client *ac); + +/* File format specific configurations to be added below */ + +int q6asm_enc_cfg_blk_aac(struct audio_client *ac, + uint32_t frames_per_buf, + uint32_t sample_rate, uint32_t channels, + uint32_t bit_rate, + uint32_t mode, uint32_t format); + +int q6asm_enc_cfg_blk_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_set_encdec_chan_map(struct audio_client *ac, + uint32_t 
num_channels); + +int q6asm_enc_cfg_blk_pcm_native(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_enable_sbrps(struct audio_client *ac, + uint32_t sbr_ps); + +int q6asm_cfg_dual_mono_aac(struct audio_client *ac, + uint16_t sce_left, uint16_t sce_right); + +int q6asm_cfg_aac_sel_mix_coef(struct audio_client *ac, uint32_t mix_coeff); + +int q6asm_enc_cfg_blk_qcelp(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t min_rate, uint16_t max_rate, + uint16_t reduced_rate_level, uint16_t rate_modulation_cmd); + +int q6asm_enc_cfg_blk_evrc(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t min_rate, uint16_t max_rate, + uint16_t rate_modulation_cmd); + +int q6asm_enc_cfg_blk_amrnb(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t band_mode, uint16_t dtx_enable); + +int q6asm_enc_cfg_blk_amrwb(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t band_mode, uint16_t dtx_enable); + +int q6asm_media_format_block_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_media_format_block_multi_ch_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_media_format_block_aac(struct audio_client *ac, + struct asm_aac_cfg *cfg); + +int q6asm_media_format_block_multi_aac(struct audio_client *ac, + struct asm_aac_cfg *cfg); + +int q6asm_media_format_block_wma(struct audio_client *ac, + void *cfg); + +int q6asm_media_format_block_wmapro(struct audio_client *ac, + void *cfg); + +/* PP specific */ +int q6asm_equalizer(struct audio_client *ac, void *eq); + +/* Send Volume Command */ +int q6asm_set_volume(struct audio_client *ac, int volume); + +/* Set SoftPause Params */ +int q6asm_set_softpause(struct audio_client *ac, + struct asm_softpause_params *param); + +/* Set Softvolume Params */ +int q6asm_set_softvolume(struct audio_client *ac, + struct asm_softvolume_params *param); + +/* Send left-right channel gain */ +int q6asm_set_lrgain(struct audio_client *ac, int left_gain, 
int right_gain); + +/* Enable Mute/unmute flag */ +int q6asm_set_mute(struct audio_client *ac, int muteflag); + +int q6asm_get_session_time(struct audio_client *ac, uint64_t *tstamp); + +/* Client can set the IO mode to either AIO/SIO mode */ +int q6asm_set_io_mode(struct audio_client *ac, uint32_t mode); + +/* Get Service ID for APR communication */ +int q6asm_get_apr_service_id(int session_id); + +/* Common format block without any payload +*/ +int q6asm_media_format_block(struct audio_client *ac, uint32_t format); + +#endif /* __Q6_ASM_V2_H__ */ diff --git a/include/sound/q6asm.h b/include/sound/q6asm.h new file mode 100644 index 000000000000..139b501a3568 --- /dev/null +++ b/include/sound/q6asm.h @@ -0,0 +1,334 @@ +/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
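The `audio_port_data` bookkeeping declared earlier in this header (a `dsp_buf` cursor and a `cpu_buf` cursor over `max_buf_cnt` buffers) is a two-cursor ring: the CPU fills and hands off buffers at one index while the DSP retires them at the other, both wrapping modulo the buffer count. A minimal user-space sketch of that idea (hypothetical stand-in types, not the driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the driver's per-port bookkeeping:
 * cpu_buf is the next buffer the CPU fills, dsp_buf the next one
 * the DSP consumes; both wrap modulo max_buf_cnt. */
struct port_ring {
	uint32_t max_buf_cnt;
	uint32_t dsp_buf;
	uint32_t cpu_buf;
};

/* CPU hands one buffer to the DSP and advances its cursor. */
static uint32_t ring_cpu_advance(struct port_ring *p)
{
	uint32_t idx = p->cpu_buf;

	p->cpu_buf = (p->cpu_buf + 1) % p->max_buf_cnt;
	return idx;
}

/* A DSP-done event retires one buffer and advances the DSP cursor. */
static uint32_t ring_dsp_advance(struct port_ring *p)
{
	uint32_t idx = p->dsp_buf;

	p->dsp_buf = (p->dsp_buf + 1) % p->max_buf_cnt;
	return idx;
}
```

In the real driver the cursor updates happen under `dsp_lock`, since the DSP-done path runs in interrupt context while the CPU side runs in process context.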
+ */ +#ifndef __Q6_ASM_H__ +#define __Q6_ASM_H__ + +#include <sound/qdsp6v2/apr.h> +#include <sound/apr_audio.h> + +#define IN 0x000 +#define OUT 0x001 +#define CH_MODE_MONO 0x001 +#define CH_MODE_STEREO 0x002 + +#define FORMAT_LINEAR_PCM 0x0000 +#define FORMAT_DTMF 0x0001 +#define FORMAT_ADPCM 0x0002 +#define FORMAT_YADPCM 0x0003 +#define FORMAT_MP3 0x0004 +#define FORMAT_MPEG4_AAC 0x0005 +#define FORMAT_AMRNB 0x0006 +#define FORMAT_AMRWB 0x0007 +#define FORMAT_V13K 0x0008 +#define FORMAT_EVRC 0x0009 +#define FORMAT_EVRCB 0x000a +#define FORMAT_EVRCWB 0x000b +#define FORMAT_MIDI 0x000c +#define FORMAT_SBC 0x000d +#define FORMAT_WMA_V10PRO 0x000e +#define FORMAT_WMA_V9 0x000f +#define FORMAT_AMR_WB_PLUS 0x0010 +#define FORMAT_MPEG4_MULTI_AAC 0x0011 +#define FORMAT_MULTI_CHANNEL_LINEAR_PCM 0x0012 +#define FORMAT_AC3 0x0013 +#define FORMAT_DTS 0x0014 +#define FORMAT_EAC3 0x0015 +#define FORMAT_ATRAC 0x0016 +#define FORMAT_MAT 0x0017 +#define FORMAT_AAC 0x0018 +#define FORMAT_DTS_LBR 0x0019 + +#define ENCDEC_SBCBITRATE 0x0001 +#define ENCDEC_IMMEDIATE_DECODE 0x0002 +#define ENCDEC_CFG_BLK 0x0003 + +#define CMD_PAUSE 0x0001 +#define CMD_FLUSH 0x0002 +#define CMD_EOS 0x0003 +#define CMD_CLOSE 0x0004 +#define CMD_OUT_FLUSH 0x0005 + +/* bit 0:1 represents priority of stream */ +#define STREAM_PRIORITY_NORMAL 0x0000 +#define STREAM_PRIORITY_LOW 0x0001 +#define STREAM_PRIORITY_HIGH 0x0002 + +/* bit 4 represents META enable of encoded data buffer */ +#define BUFFER_META_ENABLE 0x0010 + +/* Enable Sample_Rate/Channel_Mode notification event from Decoder */ +#define SR_CM_NOTIFY_ENABLE 0x0004 + +#define TUN_WRITE_IO_MODE 0x0008 /* tunnel read write mode */ +#define TUN_READ_IO_MODE 0x0004 /* tunnel read write mode */ +#define ASYNC_IO_MODE 0x0002 +#define SYNC_IO_MODE 0x0001 +#define NO_TIMESTAMP 0xFF00 +#define SET_TIMESTAMP 0x0000 + +#define SOFT_PAUSE_ENABLE 1 +#define SOFT_PAUSE_DISABLE 0 + +#define SESSION_MAX 0x08 + +#define SOFT_PAUSE_PERIOD 30 /* ramp up/down for 30ms 
*/ +#define SOFT_PAUSE_STEP_LINEAR 0 /* Step value 0ms or 0us */ +#define SOFT_PAUSE_STEP 0 /* Step value 0ms or 0us */ +enum { + SOFT_PAUSE_CURVE_LINEAR = 0, + SOFT_PAUSE_CURVE_EXP, + SOFT_PAUSE_CURVE_LOG, +}; + +#define SOFT_VOLUME_PERIOD 30 /* ramp up/down for 30ms */ +#define SOFT_VOLUME_STEP_LINEAR 0 /* Step value 0ms or 0us */ +#define SOFT_VOLUME_STEP 0 /* Step value 0ms or 0us */ +enum { + SOFT_VOLUME_CURVE_LINEAR = 0, + SOFT_VOLUME_CURVE_EXP, + SOFT_VOLUME_CURVE_LOG, +}; + +typedef void (*app_cb)(uint32_t opcode, uint32_t token, + uint32_t *payload, void *priv); + +struct audio_buffer { + dma_addr_t phys; + void *data; + uint32_t used; + uint32_t size;/* size of buffer */ + uint32_t actual_size; /* actual number of bytes read by DSP */ + void *mem_buffer; +}; + +struct audio_aio_write_param { + unsigned long paddr; + uint32_t uid; + uint32_t len; + uint32_t msw_ts; + uint32_t lsw_ts; + uint32_t flags; +}; + +struct audio_aio_read_param { + unsigned long paddr; + uint32_t len; + uint32_t uid; +}; + +struct audio_port_data { + struct audio_buffer *buf; + uint32_t max_buf_cnt; + uint32_t dsp_buf; + uint32_t cpu_buf; + /* read or write locks */ + struct mutex lock; + spinlock_t dsp_lock; +}; + +struct audio_client { + int session; + /* idx:1 out port, 0: in port*/ + struct audio_port_data port[2]; + + struct apr_svc *apr; + struct mutex cmd_lock; + + atomic_t cmd_state; + atomic_t cmd_close_state; + atomic_t time_flag; + atomic_t nowait_cmd_cnt; + wait_queue_head_t cmd_wait; + wait_queue_head_t time_wait; + + app_cb cb; + void *priv; + uint32_t io_mode; + uint64_t time_stamp; + atomic_t cmd_response; + bool perf_mode; +}; + +void q6asm_audio_client_free(struct audio_client *ac); + +struct audio_client *q6asm_audio_client_alloc(app_cb cb, void *priv); + +struct audio_client *q6asm_get_audio_client(int session_id); + +int q6asm_audio_client_buf_alloc(unsigned int dir/* 1:Out,0:In */, + struct audio_client *ac, + unsigned int bufsz, + unsigned int bufcnt); +int 
q6asm_audio_client_buf_alloc_contiguous(unsigned int dir + /* 1:Out,0:In */, + struct audio_client *ac, + unsigned int bufsz, + unsigned int bufcnt); + +int q6asm_audio_client_buf_free_contiguous(unsigned int dir, + struct audio_client *ac); + +int q6asm_open_read(struct audio_client *ac, uint32_t format); +int q6asm_open_read_v2_1(struct audio_client *ac, uint32_t format); + +int q6asm_open_read_compressed(struct audio_client *ac, + uint32_t frames_per_buffer, uint32_t meta_data_mode); + +int q6asm_open_write(struct audio_client *ac, uint32_t format); + +int q6asm_open_write_compressed(struct audio_client *ac, uint32_t format); + +int q6asm_open_read_write(struct audio_client *ac, + uint32_t rd_format, + uint32_t wr_format); + +int q6asm_open_loopack(struct audio_client *ac); + +int q6asm_write(struct audio_client *ac, uint32_t len, uint32_t msw_ts, + uint32_t lsw_ts, uint32_t flags); +int q6asm_write_nolock(struct audio_client *ac, uint32_t len, uint32_t msw_ts, + uint32_t lsw_ts, uint32_t flags); + +int q6asm_async_write(struct audio_client *ac, + struct audio_aio_write_param *param); + +int q6asm_async_read(struct audio_client *ac, + struct audio_aio_read_param *param); + +int q6asm_async_read_compressed(struct audio_client *ac, + struct audio_aio_read_param *param); + +int q6asm_read(struct audio_client *ac); +int q6asm_read_nolock(struct audio_client *ac); + +int q6asm_memory_map(struct audio_client *ac, uint32_t buf_add, + int dir, uint32_t bufsz, uint32_t bufcnt); + +int q6asm_memory_unmap(struct audio_client *ac, uint32_t buf_add, + int dir); + +int q6asm_run(struct audio_client *ac, uint32_t flags, + uint32_t msw_ts, uint32_t lsw_ts); + +int q6asm_run_nowait(struct audio_client *ac, uint32_t flags, + uint32_t msw_ts, uint32_t lsw_ts); + +int q6asm_reg_tx_overflow(struct audio_client *ac, uint16_t enable); + +int q6asm_cmd(struct audio_client *ac, int cmd); + +int q6asm_cmd_nowait(struct audio_client *ac, int cmd); + +void *q6asm_is_cpu_buf_avail(int dir, 
struct audio_client *ac, + uint32_t *size, uint32_t *idx); + +void *q6asm_is_cpu_buf_avail_nolock(int dir, struct audio_client *ac, + uint32_t *size, uint32_t *idx); + +int q6asm_is_dsp_buf_avail(int dir, struct audio_client *ac); + +/* File format specific configurations to be added below */ + +int q6asm_enc_cfg_blk_aac(struct audio_client *ac, + uint32_t frames_per_buf, + uint32_t sample_rate, uint32_t channels, + uint32_t bit_rate, + uint32_t mode, uint32_t format); + +int q6asm_enc_cfg_blk_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_enc_cfg_blk_pcm_native(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_enc_cfg_blk_multi_ch_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_enable_sbrps(struct audio_client *ac, + uint32_t sbr_ps); + +int q6asm_cfg_dual_mono_aac(struct audio_client *ac, + uint16_t sce_left, uint16_t sce_right); + +int q6asm_cfg_aac_sel_mix_coef(struct audio_client *ac, uint32_t mix_coeff); + +int q6asm_set_encdec_chan_map(struct audio_client *ac, + uint32_t num_channels); + +int q6asm_enc_cfg_blk_qcelp(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t min_rate, uint16_t max_rate, + uint16_t reduced_rate_level, uint16_t rate_modulation_cmd); + +int q6asm_enc_cfg_blk_evrc(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t min_rate, uint16_t max_rate, + uint16_t rate_modulation_cmd); + +int q6asm_enc_cfg_blk_amrnb(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t band_mode, uint16_t dtx_enable); + +int q6asm_enc_cfg_blk_amrwb(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t band_mode, uint16_t dtx_enable); + +int q6asm_media_format_block_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_media_format_block_multi_ch_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels); + +int q6asm_media_format_block_aac(struct audio_client *ac, + struct asm_aac_cfg *cfg); + +int 
q6asm_media_format_block_amrwbplus(struct audio_client *ac, + struct asm_amrwbplus_cfg *cfg); + +int q6asm_media_format_block_multi_aac(struct audio_client *ac, + struct asm_aac_cfg *cfg); + +int q6asm_media_format_block_wma(struct audio_client *ac, + void *cfg); + +int q6asm_media_format_block_wmapro(struct audio_client *ac, + void *cfg); + +/* PP specific */ +int q6asm_equalizer(struct audio_client *ac, void *eq); + +/* Send Volume Command */ +int q6asm_set_volume(struct audio_client *ac, int volume); + +/* Set SoftPause Params */ +int q6asm_set_softpause(struct audio_client *ac, + struct asm_softpause_params *param); + +/* Set Softvolume Params */ +int q6asm_set_softvolume(struct audio_client *ac, + struct asm_softvolume_params *param); + +/* Send left-right channel gain */ +int q6asm_set_lrgain(struct audio_client *ac, int left_gain, int right_gain); + +/* Enable Mute/unmute flag */ +int q6asm_set_mute(struct audio_client *ac, int muteflag); + +int q6asm_get_session_time(struct audio_client *ac, uint64_t *tstamp); + +/* Client can set the IO mode to either AIO/SIO mode */ +int q6asm_set_io_mode(struct audio_client *ac, uint32_t mode); + +/* Get Service ID for APR communication */ +int q6asm_get_apr_service_id(int session_id); + +/* Common format block without any payload +*/ +int q6asm_media_format_block(struct audio_client *ac, uint32_t format); + +#endif /* __Q6_ASM_H__ */ diff --git a/include/sound/qdsp6v2/apr.h b/include/sound/qdsp6v2/apr.h new file mode 100644 index 000000000000..f3024cba7e67 --- /dev/null +++ b/include/sound/qdsp6v2/apr.h @@ -0,0 +1,168 @@ +/* Copyright (c) 2010-2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. 
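The q6asm declarations above imply a session lifecycle: allocate a client, open a stream for a format, send the media-format block, issue RUN, stream buffers, then EOS and CLOSE. The real functions are kernel-internal and talk to the DSP over APR, so the sketch below only records the expected call ordering with hypothetical `demo_`-prefixed stubs (the comments name the q6asm calls each stub stands in for):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stubs that record the call order; the real q6asm_*
 * functions live in the kernel and talk to the DSP over APR. */
static char trace[128];

static void step(const char *s) { strcat(trace, s); strcat(trace, ";"); }

static void demo_client_alloc(void) { step("alloc"); }      /* q6asm_audio_client_alloc() */
static void demo_open_write(void)   { step("open_write"); } /* q6asm_open_write() */
static void demo_format_block(void) { step("format"); }     /* q6asm_media_format_block_pcm() */
static void demo_run(void)          { step("run"); }        /* q6asm_run() */
static void demo_write(void)        { step("write"); }      /* q6asm_write(), once per buffer */
static void demo_eos(void)          { step("eos"); }        /* q6asm_cmd(ac, CMD_EOS) */
static void demo_close(void)        { step("close"); }      /* q6asm_cmd(ac, CMD_CLOSE) */

/* Playback lifecycle in the order the header suggests. */
static const char *demo_playback(void)
{
	trace[0] = '\0';
	demo_client_alloc();
	demo_open_write();
	demo_format_block();
	demo_run();
	demo_write();
	demo_eos();
	demo_close();
	return trace;
}
```

Capture paths mirror this with `q6asm_open_read()` plus an encoder config block (`q6asm_enc_cfg_blk_*`) in place of the media-format block.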
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#ifndef __APR_H_ +#define __APR_H_ + +#include <linux/mutex.h> + +enum apr_subsys_state { + APR_SUBSYS_DOWN, + APR_SUBSYS_UP, + APR_SUBSYS_LOADED, +}; + +struct apr_q6 { + void *pil; + atomic_t q6_state; + atomic_t modem_state; + struct mutex lock; +}; + +struct apr_hdr { + uint16_t hdr_field; + uint16_t pkt_size; + uint8_t src_svc; + uint8_t src_domain; + uint16_t src_port; + uint8_t dest_svc; + uint8_t dest_domain; + uint16_t dest_port; + uint32_t token; + uint32_t opcode; +}; + +#define APR_HDR_LEN(hdr_len) ((hdr_len)/4) +#define APR_PKT_SIZE(hdr_len, payload_len) ((hdr_len) + (payload_len)) +#define APR_HDR_FIELD(msg_type, hdr_len, ver)\ + (((msg_type & 0x3) << 8) | ((hdr_len & 0xF) << 4) | (ver & 0xF)) + +#define APR_HDR_SIZE sizeof(struct apr_hdr) + +/* Version */ +#define APR_PKT_VER 0x0 + +/* Command and Response Types */ +#define APR_MSG_TYPE_EVENT 0x0 +#define APR_MSG_TYPE_CMD_RSP 0x1 +#define APR_MSG_TYPE_SEQ_CMD 0x2 +#define APR_MSG_TYPE_NSEQ_CMD 0x3 +#define APR_MSG_TYPE_MAX 0x04 + +/* APR Basic Response Message */ +#define APR_BASIC_RSP_RESULT 0x000110E8 +#define APR_RSP_ACCEPTED 0x000100BE + +/* Domain IDs */ +#define APR_DOMAIN_SIM 0x1 +#define APR_DOMAIN_PC 0x2 +#define APR_DOMAIN_MODEM 0x3 +#define APR_DOMAIN_ADSP 0x4 +#define APR_DOMAIN_APPS 0x5 +#define APR_DOMAIN_MAX 0x6 + +/* ADSP service IDs */ +#define APR_SVC_TEST_CLIENT 0x2 +#define APR_SVC_ADSP_CORE 0x3 +#define APR_SVC_AFE 0x4 +#define APR_SVC_VSM 0x5 +#define APR_SVC_VPM 0x6 +#define APR_SVC_ASM 0x7 +#define APR_SVC_ADM 0x8 +#define APR_SVC_ADSP_MVM 0x09 +#define APR_SVC_ADSP_CVS 0x0A +#define APR_SVC_ADSP_CVP 0x0B +#define APR_SVC_USM 0x0C +#define APR_SVC_MAX 0x0D + +/* Modem Service IDs */ +#define APR_SVC_MVS 0x3 +#define 
APR_SVC_MVM 0x4 +#define APR_SVC_CVS 0x5 +#define APR_SVC_CVP 0x6 +#define APR_SVC_SRD 0x7 + +/* APR Port IDs */ +#define APR_MAX_PORTS 0x40 + +#define APR_NAME_MAX 0x40 + +#define RESET_EVENTS 0xFFFFFFFF + +#define LPASS_RESTART_EVENT 0x1000 +#define LPASS_RESTART_READY 0x1001 + +struct apr_client_data { + uint16_t reset_event; + uint16_t reset_proc; + uint16_t payload_size; + uint16_t hdr_len; + uint16_t msg_type; + uint16_t src; + uint16_t dest_svc; + uint16_t src_port; + uint16_t dest_port; + uint32_t token; + uint32_t opcode; + void *payload; +}; + +typedef int32_t (*apr_fn)(struct apr_client_data *data, void *priv); + +struct apr_svc { + uint16_t id; + uint16_t dest_id; + uint16_t client_id; + uint8_t rvd; + uint8_t port_cnt; + uint8_t svc_cnt; + uint8_t need_reset; + apr_fn port_fn[APR_MAX_PORTS]; + void *port_priv[APR_MAX_PORTS]; + apr_fn fn; + void *priv; + struct mutex m_lock; + spinlock_t w_lock; +}; + +struct apr_client { + uint8_t id; + uint8_t svc_cnt; + uint8_t rvd; + struct mutex m_lock; + struct apr_svc_ch_dev *handle; + struct apr_svc svc[APR_SVC_MAX]; +}; + +int apr_load_adsp_image(void); +struct apr_client *apr_get_client(int dest_id, int client_id); +int apr_wait_for_device_up(int dest_id); +int apr_get_svc(const char *svc_name, int dest_id, int *client_id, + int *svc_idx, int *svc_id); +void apr_cb_func(void *buf, int len, void *priv); +struct apr_svc *apr_register(char *dest, char *svc_name, apr_fn svc_fn, + uint32_t src_port, void *priv); +inline int apr_fill_hdr(void *handle, uint32_t *buf, uint16_t src_port, + uint16_t msg_type, uint16_t dest_port, + uint32_t token, uint32_t opcode, uint16_t len); + +int apr_send_pkt(void *handle, uint32_t *buf); +int apr_deregister(void *handle); +void change_q6_state(int state); +void q6audio_dsp_not_responding(void); +void apr_reset(void *handle); +enum apr_subsys_state apr_get_modem_state(void); +void apr_set_modem_state(enum apr_subsys_state state); +enum apr_subsys_state apr_get_q6_state(void); +int 
apr_set_q6_state(enum apr_subsys_state state); +void apr_set_subsys_state(void); +#endif diff --git a/include/sound/qdsp6v2/apr_tal.h b/include/sound/qdsp6v2/apr_tal.h new file mode 100644 index 000000000000..dd6442462e3a --- /dev/null +++ b/include/sound/qdsp6v2/apr_tal.h @@ -0,0 +1,55 @@ +/* Copyright (c) 2010-2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#ifndef __APR_TAL_H_ +#define __APR_TAL_H_ + +#include <linux/kernel.h> +#include <linux/kthread.h> +#include <linux/uaccess.h> + +/* APR Client IDs */ +#define APR_CLIENT_AUDIO 0x0 +#define APR_CLIENT_VOICE 0x1 +#define APR_CLIENT_MAX 0x2 + +#define APR_DL_SMD 0 +#define APR_DL_MAX 1 + +#define APR_DEST_MODEM 0 +#define APR_DEST_QDSP6 1 +#define APR_DEST_MAX 2 + +#define APR_MAX_BUF 8192 + +#define APR_OPEN_TIMEOUT_MS 5000 + +typedef void (*apr_svc_cb_fn)(void *buf, int len, void *priv); +struct apr_svc_ch_dev *apr_tal_open(uint32_t svc, uint32_t dest, + uint32_t dl, apr_svc_cb_fn func, void *priv); +int apr_tal_write(struct apr_svc_ch_dev *apr_ch, void *data, int len); +int apr_tal_close(struct apr_svc_ch_dev *apr_ch); +struct apr_svc_ch_dev { + struct smd_channel *ch; + spinlock_t lock; + spinlock_t w_lock; + struct mutex m_lock; + apr_svc_cb_fn func; + char data[APR_MAX_BUF]; + wait_queue_head_t wait; + void *priv; + uint32_t smd_state; + wait_queue_head_t dest; + uint32_t dest_state; +}; + +#endif diff --git a/include/sound/qdsp6v2/apr_us.h b/include/sound/qdsp6v2/apr_us.h new file mode 100644 index 000000000000..3145d2621406 --- 
/dev/null +++ b/include/sound/qdsp6v2/apr_us.h @@ -0,0 +1,154 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#ifndef __APR_US_H__ +#define __APR_US_H__ + +#include <mach/qdsp6v2/apr.h> + +/* ======================================================================= */ +/* Session Level commands */ +#define USM_SESSION_CMD_MEMORY_MAP 0x00012304 +struct usm_stream_cmd_memory_map { + struct apr_hdr hdr; + u32 buf_add; + u32 buf_size; + u16 mempool_id; + u16 reserved; +} __packed; + +#define USM_SESSION_CMD_MEMORY_UNMAP 0x00012305 +struct usm_stream_cmd_memory_unmap { + struct apr_hdr hdr; + u32 buf_add; +} __packed; + +#define USM_SESSION_CMD_RUN 0x00012306 +struct usm_stream_cmd_run { + struct apr_hdr hdr; + u32 flags; + u32 msw_ts; + u32 lsw_ts; +} __packed; + +/* Stream level commands */ +#define USM_STREAM_CMD_OPEN_READ 0x00012309 +struct usm_stream_cmd_open_read { + struct apr_hdr hdr; + u32 uMode; + u32 src_endpoint; + u32 pre_proc_top; + u32 format; +} __packed; + +#define USM_STREAM_CMD_OPEN_WRITE 0x00011271 +struct usm_stream_cmd_open_write { + struct apr_hdr hdr; + u32 format; +} __packed; + + +#define USM_STREAM_CMD_CLOSE 0x0001230A + +/* Encoder configuration definitions */ +#define USM_STREAM_CMD_SET_ENC_PARAM 0x0001230B +/* Decoder configuration definitions */ +#define USM_DATA_CMD_MEDIA_FORMAT_UPDATE 0x00011272 + +/* Encoder/decoder configuration block */ +#define USM_PARAM_ID_ENCDEC_ENC_CFG_BLK 0x0001230D + +/* Parameter structures used in 
USM_STREAM_CMD_SET_ENCDEC_PARAM command */ +/* common declarations */ +struct usm_cfg_common { + u16 ch_cfg; + u16 bits_per_sample; + u32 sample_rate; + u32 dev_id; + u32 data_map; +} __packed; + +/* Max number of static located transparent data (bytes) */ +#define USM_MAX_CFG_DATA_SIZE 20 +struct usm_encode_cfg_blk { + u32 frames_per_buf; + u32 format_id; + /* <cfg_size> = sizeof(usm_cfg_common)+|transp_data| */ + u32 cfg_size; + struct usm_cfg_common cfg_common; + /* Transparent configuration data for specific encoder */ + u8 transp_data[USM_MAX_CFG_DATA_SIZE]; +} __packed; + +struct usm_stream_cmd_encdec_cfg_blk { + struct apr_hdr hdr; + u32 param_id; + u32 param_size; + struct usm_encode_cfg_blk enc_blk; +} __packed; + +struct us_encdec_cfg { + u32 format_id; + struct usm_cfg_common cfg_common; + u16 params_size; + u8 *params; +} __packed; + +struct usm_stream_media_format_update { + struct apr_hdr hdr; + u32 format_id; + /* <cfg_size> = sizeof(usm_cfg_common)+|transp_data| */ + u32 cfg_size; + struct usm_cfg_common cfg_common; + /* Transparent configuration data for specific encoder */ + u8 transp_data[USM_MAX_CFG_DATA_SIZE]; +} __packed; + + +#define USM_DATA_CMD_READ 0x0001230E +struct usm_stream_cmd_read { + struct apr_hdr hdr; + u32 buf_add; + u32 buf_size; + u32 uid; + u32 counter; +} __packed; + +#define USM_DATA_EVENT_READ_DONE 0x0001230F + +#define USM_DATA_CMD_WRITE 0x00011273 +struct usm_stream_cmd_write { + struct apr_hdr hdr; + u32 buf_add; + u32 buf_size; + u32 uid; + u32 msw_ts; + u32 lsw_ts; + u32 flags; +} __packed; + +#define USM_DATA_EVENT_WRITE_DONE 0x00011274 + +/* Start/stop US signal detection */ +#define USM_SESSION_CMD_SIGNAL_DETECT_MODE 0x00012719 + +struct usm_session_cmd_detect_info { + struct apr_hdr hdr; + u32 detect_mode; + u32 skip_interval; + u32 algorithm_cfg_size; +} __packed; + +/* US signal detection result */ +#define USM_SESSION_EVENT_SIGNAL_DETECT_RESULT 0x00012720 + +#endif /* __APR_US_H__ */ diff --git 
a/include/sound/qdsp6v2/audio_acdb.h b/include/sound/qdsp6v2/audio_acdb.h new file mode 100644 index 000000000000..9af653c76e6c --- /dev/null +++ b/include/sound/qdsp6v2/audio_acdb.h @@ -0,0 +1,61 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#ifndef _AUDIO_ACDB_H +#define _AUDIO_ACDB_H + +#include <linux/msm_audio_acdb.h> +#include <sound/q6adm.h> +enum { + RX_CAL, + TX_CAL, + MAX_AUDPROC_TYPES +}; + +struct acdb_cal_block { + uint32_t cal_size; + uint32_t cal_kvaddr; + uint32_t cal_paddr; +}; + +struct acdb_atomic_cal_block { + atomic_t cal_size; + atomic_t cal_kvaddr; + atomic_t cal_paddr; +}; + +struct acdb_cal_data { + uint32_t num_cal_blocks; + struct acdb_atomic_cal_block *cal_blocks; +}; + +uint32_t get_voice_rx_topology(void); +uint32_t get_voice_tx_topology(void); +uint32_t get_adm_rx_topology(void); +uint32_t get_adm_tx_topology(void); +uint32_t get_asm_topology(void); +void get_all_voice_cal(struct acdb_cal_block *cal_block); +void get_all_cvp_cal(struct acdb_cal_block *cal_block); +void get_all_vocproc_cal(struct acdb_cal_block *cal_block); +void get_all_vocstrm_cal(struct acdb_cal_block *cal_block); +void get_all_vocvol_cal(struct acdb_cal_block *cal_block); +void get_anc_cal(struct acdb_cal_block *cal_block); +void get_afe_cal(int32_t path, struct acdb_cal_block *cal_block); +void get_audproc_cal(int32_t path, struct acdb_cal_block *cal_block); +void get_audstrm_cal(int32_t path, struct acdb_cal_block *cal_block); +void get_audvol_cal(int32_t path, struct 
acdb_cal_block *cal_block); +void get_vocproc_cal(struct acdb_cal_data *cal_data); +void get_vocstrm_cal(struct acdb_cal_data *cal_data); +void get_vocvol_cal(struct acdb_cal_data *cal_data); +void get_sidetone_cal(struct sidetone_cal *cal_data); + +#endif diff --git a/include/sound/qdsp6v2/audio_def.h b/include/sound/qdsp6v2/audio_def.h new file mode 100644 index 000000000000..35a4d5c2f794 --- /dev/null +++ b/include/sound/qdsp6v2/audio_def.h @@ -0,0 +1,35 @@ +/* Copyright (c) 2009,2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#ifndef _MACH_QDSP5_V2_AUDIO_DEF_H +#define _MACH_QDSP5_V2_AUDIO_DEF_H + +/* Define sound device capability */ +#define SNDDEV_CAP_RX 0x1 /* RX direction */ +#define SNDDEV_CAP_TX 0x2 /* TX direction */ +#define SNDDEV_CAP_VOICE 0x4 /* Support voice call */ +#define SNDDEV_CAP_PLAYBACK 0x8 /* Support playback */ +#define SNDDEV_CAP_FM 0x10 /* Support FM radio */ +#define SNDDEV_CAP_TTY 0x20 /* Support TTY */ +#define SNDDEV_CAP_ANC 0x40 /* Support ANC */ +#define SNDDEV_CAP_LB 0x80 /* Loopback */ +#define VOC_NB_INDEX 0 +#define VOC_WB_INDEX 1 +#define VOC_RX_VOL_ARRAY_NUM 2 + +/* Device volume types. In the current design only one of these is supported. 
*/ +#define SNDDEV_DEV_VOL_DIGITAL 0x1 /* Codec Digital volume control */ +#define SNDDEV_DEV_VOL_ANALOG 0x2 /* Codec Analog volume control */ + +#define SIDE_TONE_MASK 0x01 + +#endif /* _MACH_QDSP5_V2_AUDIO_DEF_H */ diff --git a/include/sound/qdsp6v2/audio_dev_ctl.h b/include/sound/qdsp6v2/audio_dev_ctl.h new file mode 100644 index 000000000000..403dc6eccdaf --- /dev/null +++ b/include/sound/qdsp6v2/audio_dev_ctl.h @@ -0,0 +1,221 @@ +/* Copyright (c) 2009-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#ifndef __MACH_QDSP6_V2_SNDDEV_H +#define __MACH_QDSP6_V2_SNDDEV_H +#include <sound/qdsp6v2/audio_def.h> +#include <sound/q6afe.h> + +#define AUDIO_DEV_CTL_MAX_DEV 64 +#define DIR_TX 2 +#define DIR_RX 1 + +#define DEVICE_IGNORE 0xffff +#define COPP_IGNORE 0xffffffff +#define SESSION_IGNORE 0x0UL + +/* 8 concurrent sessions with Q6 possible, session:0 + reserved in DSP */ +#define MAX_SESSIONS 0x09 + +/* This represents Maximum bit needed for representing sessions + per clients, MAX_BIT_PER_CLIENT >= MAX_SESSIONS */ +#define MAX_BIT_PER_CLIENT 16 + +#define VOICE_STATE_INVALID 0x0 +#define VOICE_STATE_INCALL 0x1 +#define VOICE_STATE_OFFCALL 0x2 +#define ONE_TO_MANY 1 +#define MANY_TO_ONE 2 + +struct msm_snddev_info { + const char *name; + u32 capability; + u32 copp_id; + u32 acdb_id; + u32 dev_volume; + struct msm_snddev_ops { + int (*open)(struct msm_snddev_info *); + int (*close)(struct msm_snddev_info *); + int (*set_freq)(struct msm_snddev_info *, u32); + int (*enable_sidetone)(struct 
msm_snddev_info *, u32, uint16_t); + int (*set_device_volume)(struct msm_snddev_info *, u32); + int (*enable_anc)(struct msm_snddev_info *, u32); + } dev_ops; + u8 opened; + void *private_data; + bool state; + u32 sample_rate; + u32 channel_mode; + u32 set_sample_rate; + u64 sessions; + int usage_count; + s32 max_voc_rx_vol[VOC_RX_VOL_ARRAY_NUM]; /* [0] is for NB, [1] for WB */ + s32 min_voc_rx_vol[VOC_RX_VOL_ARRAY_NUM]; +}; + +struct msm_volume { + int volume; /* Volume parameter, in % scale */ + int pan; +}; + +extern struct msm_volume msm_vol_ctl; + +void msm_snddev_register(struct msm_snddev_info *); +void msm_snddev_unregister(struct msm_snddev_info *); +int msm_snddev_devcount(void); +int msm_snddev_query(int dev_id); +unsigned short msm_snddev_route_dec(int popp_id); +unsigned short msm_snddev_route_enc(int enc_id); + +int msm_snddev_set_dec(int popp_id, int copp_id, int set, + int rate, int channel_mode); +int msm_snddev_set_enc(int popp_id, int copp_id, int set, + int rate, int channel_mode); + +int msm_snddev_is_set(int popp_id, int copp_id); + +int msm_get_voc_route(u32 *rx_id, u32 *tx_id); +int msm_set_voc_route(struct msm_snddev_info *dev_info, int stream_type, + int dev_id); +int msm_snddev_enable_sidetone(u32 dev_id, u32 enable, uint16_t gain); + +int msm_set_copp_id(int session_id, int copp_id); + +int msm_clear_copp_id(int session_id, int copp_id); + +int msm_clear_session_id(int session_id); + +int msm_reset_all_device(void); + +int reset_device(void); + +int msm_clear_all_session(void); + +struct msm_snddev_info *audio_dev_ctrl_find_dev(u32 dev_id); + +void msm_release_voc_thread(void); + +int snddev_voice_set_volume(int vol, int path); + +struct auddev_evt_voc_devinfo { + u32 dev_type; /* Rx or Tx */ + u32 acdb_dev_id; /* acdb id of device */ + u32 dev_sample; /* Sample rate of device */ + s32 max_rx_vol[VOC_RX_VOL_ARRAY_NUM]; /* unit is mb (millibel), + [0] is for NB, other for WB */ + s32 min_rx_vol[VOC_RX_VOL_ARRAY_NUM]; /* unit is mb */ + u32
dev_id; /* registered device id */ + u32 dev_port_id; +}; + +struct auddev_evt_audcal_info { + u32 dev_id; + u32 acdb_id; + u32 sample_rate; + u32 dev_type; + u32 sessions; +}; + +union msm_vol_mute { + int vol; + bool mute; +}; + +struct auddev_evt_voc_mute_info { + u32 dev_type; + u32 acdb_dev_id; + u32 voice_session_id; + union msm_vol_mute dev_vm_val; +}; + +struct auddev_evt_freq_info { + u32 dev_type; + u32 acdb_dev_id; + u32 sample_rate; +}; + +union auddev_evt_data { + struct auddev_evt_voc_devinfo voc_devinfo; + struct auddev_evt_voc_mute_info voc_vm_info; + struct auddev_evt_freq_info freq_info; + u32 routing_id; + s32 session_vol; + s32 voice_state; + struct auddev_evt_audcal_info audcal_info; + u32 voice_session_id; +}; + +struct message_header { + uint32_t id; + uint32_t data_len; +}; + +#define AUDDEV_EVT_DEV_CHG_VOICE 0x01 /* device change event */ +#define AUDDEV_EVT_DEV_RDY 0x02 /* device ready event */ +#define AUDDEV_EVT_DEV_RLS 0x04 /* device released event */ +#define AUDDEV_EVT_REL_PENDING 0x08 /* device release pending */ +#define AUDDEV_EVT_DEVICE_VOL_MUTE_CHG 0x10 /* device volume/mute changed */ +#define AUDDEV_EVT_START_VOICE 0x20 /* voice call start */ +#define AUDDEV_EVT_END_VOICE 0x40 /* voice call end */ +#define AUDDEV_EVT_STREAM_VOL_CHG 0x80 /* stream volume changed */ +#define AUDDEV_EVT_FREQ_CHG 0x100 /* Change in freq */ +#define AUDDEV_EVT_VOICE_STATE_CHG 0x200 /* Change in voice state */ + +#define AUDDEV_CLNT_VOC 0x1 /* Vocoder clients */ +#define AUDDEV_CLNT_DEC 0x2 /* Decoder clients */ +#define AUDDEV_CLNT_ENC 0x3 /* Encoder clients */ +#define AUDDEV_CLNT_AUDIOCAL 0x4 /* AudioCalibration client */ + +#define AUDIO_DEV_CTL_MAX_LISTNER 20 /* Max Listeners Supported */ + +struct msm_snd_evt_listner { + uint32_t evt_id; + uint32_t clnt_type; + uint32_t clnt_id; + void *private_data; + void (*auddev_evt_listener)(u32 evt_id, + union auddev_evt_data *evt_payload, + void *private_data); + struct msm_snd_evt_listner *cb_next; + struct
msm_snd_evt_listner *cb_prev; +}; + +struct event_listner { + struct msm_snd_evt_listner *cb; + u32 num_listner; + int state; /* Call state */ /* TODO remove this if not req*/ +}; + +extern struct event_listner event; +int auddev_register_evt_listner(u32 evt_id, u32 clnt_type, u32 clnt_id, + void (*listner)(u32 evt_id, + union auddev_evt_data *evt_payload, + void *private_data), + void *private_data); +int auddev_unregister_evt_listner(u32 clnt_type, u32 clnt_id); +void mixer_post_event(u32 evt_id, u32 dev_id); +void broadcast_event(u32 evt_id, u32 dev_id, u64 session_id); +int auddev_cfg_tx_copp_topology(int session_id, int cfg); +int msm_snddev_request_freq(int *freq, u32 session_id, + u32 capability, u32 clnt_type); +int msm_snddev_withdraw_freq(u32 session_id, + u32 capability, u32 clnt_type); +int msm_device_is_voice(int dev_id); +int msm_get_voc_freq(int *tx_freq, int *rx_freq); +int msm_snddev_get_enc_freq(int session_id); +int msm_set_voice_vol(int dir, s32 volume, u32 session_id); +int msm_set_voice_mute(int dir, int mute, u32 session_id); +int msm_get_voice_state(void); +int msm_enable_incall_recording(int popp_id, int rec_mode, int rate, + int channel_mode); +int msm_disable_incall_recording(uint32_t popp_id, uint32_t rec_mode); +#endif diff --git a/include/sound/qdsp6v2/dsp_debug.h b/include/sound/qdsp6v2/dsp_debug.h new file mode 100644 index 000000000000..bc1cd9ec8743 --- /dev/null +++ b/include/sound/qdsp6v2/dsp_debug.h @@ -0,0 +1,22 @@ +/* Copyright (c) 2010, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU General Public License for more details. + * + */ +#ifndef __DSP_DEBUG_H_ +#define __DSP_DEBUG_H_ + +typedef int (*dsp_state_cb)(int state); +int dsp_debug_register(dsp_state_cb ptr); + +#define DSP_STATE_CRASHED 0x0 +#define DSP_STATE_CRASH_DUMP_DONE 0x1 + +#endif diff --git a/include/sound/qdsp6v2/q6voice.h b/include/sound/qdsp6v2/q6voice.h new file mode 100644 index 000000000000..7165998e2efa --- /dev/null +++ b/include/sound/qdsp6v2/q6voice.h @@ -0,0 +1,778 @@ +/* Copyright (c) 2010-2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#ifndef __QDSP6VOICE_H__ +#define __QDSP6VOICE_H__ + +#include <mach/qdsp6v2/apr.h> + +/* Device Event */ +#define DEV_CHANGE_READY 0x1 + +#define VOICE_CALL_START 0x1 +#define VOICE_CALL_END 0 + +#define VOICE_DEV_ENABLED 0x1 +#define VOICE_DEV_DISABLED 0 + +#define MAX_VOC_PKT_SIZE 642 + +#define SESSION_NAME_LEN 20 + +struct voice_header { + uint32_t id; + uint32_t data_len; +}; + +struct voice_init { + struct voice_header hdr; + void *cb_handle; +}; + + +/* Device information payload structure */ + +struct device_data { + uint32_t dev_acdb_id; + uint32_t volume; /* in percentage */ + uint32_t mute; + uint32_t sample; + uint32_t enabled; + uint32_t dev_id; + uint32_t dev_port_id; +}; + +enum { + VOC_INIT = 0, + VOC_RUN, + VOC_CHANGE, + VOC_RELEASE, +}; + +/* TO MVM commands */ +#define VSS_IMVM_CMD_CREATE_PASSIVE_CONTROL_SESSION 0x000110FF +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. 
*/ + +#define VSS_IMVM_CMD_CREATE_FULL_CONTROL_SESSION 0x000110FE +/* Create a new full control MVM session. */ + +#define APRV2_IBASIC_CMD_DESTROY_SESSION 0x0001003C +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_IMVM_CMD_ATTACH_STREAM 0x0001123C +/* Attach a stream to the MVM. */ + +#define VSS_IMVM_CMD_DETACH_STREAM 0x0001123D +/* Detach a stream from the MVM. */ + +#define VSS_IMVM_CMD_ATTACH_VOCPROC 0x0001123E +/* Attach a vocproc to the MVM. The MVM will symmetrically connect this vocproc + * to all the streams currently attached to it. + */ + +#define VSS_IMVM_CMD_DETACH_VOCPROC 0x0001123F +/* Detach a vocproc from the MVM. The MVM will symmetrically disconnect this + * vocproc from all the streams to which it is currently attached. + */ + +#define VSS_IMVM_CMD_START_VOICE 0x00011190 +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_IMVM_CMD_STOP_VOICE 0x00011192 +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_ISTREAM_CMD_ATTACH_VOCPROC 0x000110F8 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_ISTREAM_CMD_DETACH_VOCPROC 0x000110F9 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. */ + + +#define VSS_ISTREAM_CMD_SET_TTY_MODE 0x00011196 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_ICOMMON_CMD_SET_NETWORK 0x0001119C +/* Set the network type. */ + +#define VSS_ICOMMON_CMD_SET_VOICE_TIMING 0x000111E0 +/* Set the voice timing parameters. */ + +struct vss_imvm_cmd_create_control_session_t { + char name[SESSION_NAME_LEN]; + /* + * A variable-sized stream name. + * + * The stream name size is the payload size minus the size of the other + * fields. + */ +} __packed; + +struct vss_istream_cmd_set_tty_mode_t { + uint32_t mode; + /**< + * TTY mode. 
+ * + * 0 : TTY disabled + * 1 : HCO + * 2 : VCO + * 3 : FULL + */ +} __attribute__((packed)); + +struct vss_istream_cmd_attach_vocproc_t { + uint16_t handle; + /**< Handle of vocproc being attached. */ +} __attribute__((packed)); + +struct vss_istream_cmd_detach_vocproc_t { + uint16_t handle; + /**< Handle of vocproc being detached. */ +} __attribute__((packed)); + +struct vss_imvm_cmd_attach_stream_t { + uint16_t handle; + /* The stream handle to attach. */ +} __attribute__((packed)); + +struct vss_imvm_cmd_detach_stream_t { + uint16_t handle; + /* The stream handle to detach. */ +} __attribute__((packed)); + +struct vss_icommon_cmd_set_network_t { + uint32_t network_id; + /* Network ID. (Refer to VSS_NETWORK_ID_XXX). */ +} __attribute__((packed)); + +struct vss_icommon_cmd_set_voice_timing_t { + uint16_t mode; + /* + * The vocoder frame synchronization mode. + * + * 0 : No frame sync. + * 1 : Hard VFR (20ms Vocoder Frame Reference interrupt). + */ + uint16_t enc_offset; + /* + * The offset in microseconds from the VFR to deliver a Tx vocoder + * packet. The offset should be less than 20000us. + */ + uint16_t dec_req_offset; + /* + * The offset in microseconds from the VFR to request for an Rx vocoder + * packet. The offset should be less than 20000us. + */ + uint16_t dec_offset; + /* + * The offset in microseconds from the VFR to indicate the deadline to + * receive an Rx vocoder packet. The offset should be less than 20000us. + * Rx vocoder packets received after this deadline are not guaranteed to + * be processed. 
+ */ +} __attribute__((packed)); + +struct mvm_attach_vocproc_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_attach_vocproc_t mvm_attach_cvp_handle; +} __attribute__((packed)); + +struct mvm_detach_vocproc_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_detach_vocproc_t mvm_detach_cvp_handle; +} __attribute__((packed)); + +struct mvm_create_ctl_session_cmd { + struct apr_hdr hdr; + struct vss_imvm_cmd_create_control_session_t mvm_session; +} __packed; + +struct mvm_set_tty_mode_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_set_tty_mode_t tty_mode; +} __attribute__((packed)); + +struct mvm_attach_stream_cmd { + struct apr_hdr hdr; + struct vss_imvm_cmd_attach_stream_t attach_stream; +} __attribute__((packed)); + +struct mvm_detach_stream_cmd { + struct apr_hdr hdr; + struct vss_imvm_cmd_detach_stream_t detach_stream; +} __attribute__((packed)); + +struct mvm_set_network_cmd { + struct apr_hdr hdr; + struct vss_icommon_cmd_set_network_t network; +} __attribute__((packed)); + +struct mvm_set_voice_timing_cmd { + struct apr_hdr hdr; + struct vss_icommon_cmd_set_voice_timing_t timing; +} __attribute__((packed)); + +/* TO CVS commands */ +#define VSS_ISTREAM_CMD_CREATE_PASSIVE_CONTROL_SESSION 0x00011140 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_ISTREAM_CMD_CREATE_FULL_CONTROL_SESSION 0x000110F7 +/* Create a new full control stream session. */ + +#define APRV2_IBASIC_CMD_DESTROY_SESSION 0x0001003C + +#define VSS_ISTREAM_CMD_CACHE_CALIBRATION_DATA 0x000110FB + +#define VSS_ISTREAM_CMD_SET_MUTE 0x00011022 + +#define VSS_ISTREAM_CMD_SET_MEDIA_TYPE 0x00011186 +/* Set media type on the stream. */ + +#define VSS_ISTREAM_EVT_SEND_ENC_BUFFER 0x00011015 +/* Event sent by the stream to its client to provide an encoded packet. */ + +#define VSS_ISTREAM_EVT_REQUEST_DEC_BUFFER 0x00011017 +/* Event sent by the stream to its client requesting for a decoder packet. + * The client should respond with a VSS_ISTREAM_EVT_SEND_DEC_BUFFER event. 
+ */ + +#define VSS_ISTREAM_EVT_SEND_DEC_BUFFER 0x00011016 +/* Event sent by the client to the stream in response to a + * VSS_ISTREAM_EVT_REQUEST_DEC_BUFFER event, providing a decoder packet. + */ + +#define VSS_ISTREAM_CMD_VOC_AMR_SET_ENC_RATE 0x0001113E +/* Set AMR encoder rate. */ + +#define VSS_ISTREAM_CMD_VOC_AMRWB_SET_ENC_RATE 0x0001113F +/* Set AMR-WB encoder rate. */ + +#define VSS_ISTREAM_CMD_CDMA_SET_ENC_MINMAX_RATE 0x00011019 +/* Set encoder minimum and maximum rate. */ + +#define VSS_ISTREAM_CMD_SET_ENC_DTX_MODE 0x0001101D +/* Set encoder DTX mode. */ + +#define VSS_ISTREAM_CMD_START_RECORD 0x00011236 +/* Start in-call conversation recording. */ + +#define VSS_ISTREAM_CMD_STOP_RECORD 0x00011237 +/* Stop in-call conversation recording. */ + +#define VSS_ISTREAM_CMD_START_PLAYBACK 0x00011238 +/* Start in-call music delivery on the Tx voice path. */ + +#define VSS_ISTREAM_CMD_STOP_PLAYBACK 0x00011239 +/* Stop the in-call music delivery on the Tx voice path. */ + +struct vss_istream_cmd_create_passive_control_session_t { + char name[SESSION_NAME_LEN]; + /**< + * A variable-sized stream name. + * + * The stream name size is the payload size minus the size of the other + * fields. + */ +} __attribute__((packed)); + +struct vss_istream_cmd_set_mute_t { + uint16_t direction; + /**< + * 0 : TX only + * 1 : RX only + * 2 : TX and Rx + */ + uint16_t mute_flag; + /**< + * Mute, un-mute. + * + * 0 : Silence disable + * 1 : Silence enable + * 2 : CNG enable. Applicable to TX only. If set on RX behavior + * will be the same as 1 + */ +} __attribute__((packed)); + +struct vss_istream_cmd_create_full_control_session_t { + uint16_t direction; + /* + * Stream direction. + * + * 0 : TX only + * 1 : RX only + * 2 : TX and RX + * 3 : TX and RX loopback + */ + uint32_t enc_media_type; + /* Tx vocoder type. (Refer to VSS_MEDIA_ID_XXX). */ + uint32_t dec_media_type; + /* Rx vocoder type. (Refer to VSS_MEDIA_ID_XXX). */ + uint32_t network_id; + /* Network ID. 
(Refer to VSS_NETWORK_ID_XXX). */ + char name[SESSION_NAME_LEN]; + /* + * A variable-sized stream name. + * + * The stream name size is the payload size minus the size of the other + * fields. + */ +} __attribute__((packed)); + +struct vss_istream_cmd_set_media_type_t { + uint32_t rx_media_id; + /* Set the Rx vocoder type. (Refer to VSS_MEDIA_ID_XXX). */ + uint32_t tx_media_id; + /* Set the Tx vocoder type. (Refer to VSS_MEDIA_ID_XXX). */ +} __attribute__((packed)); + +struct vss_istream_evt_send_enc_buffer_t { + uint32_t media_id; + /* Media ID of the packet. */ + uint8_t packet_data[MAX_VOC_PKT_SIZE]; + /* Packet data buffer. */ +} __attribute__((packed)); + +struct vss_istream_evt_send_dec_buffer_t { + uint32_t media_id; + /* Media ID of the packet. */ + uint8_t packet_data[MAX_VOC_PKT_SIZE]; + /* Packet data. */ +} __attribute__((packed)); + +struct vss_istream_cmd_voc_amr_set_enc_rate_t { + uint32_t mode; + /* Set the AMR encoder rate. + * + * 0x00000000 : 4.75 kbps + * 0x00000001 : 5.15 kbps + * 0x00000002 : 5.90 kbps + * 0x00000003 : 6.70 kbps + * 0x00000004 : 7.40 kbps + * 0x00000005 : 7.95 kbps + * 0x00000006 : 10.2 kbps + * 0x00000007 : 12.2 kbps + */ +} __attribute__((packed)); + +struct vss_istream_cmd_voc_amrwb_set_enc_rate_t { + uint32_t mode; + /* Set the AMR-WB encoder rate. + * + * 0x00000000 : 6.60 kbps + * 0x00000001 : 8.85 kbps + * 0x00000002 : 12.65 kbps + * 0x00000003 : 14.25 kbps + * 0x00000004 : 15.85 kbps + * 0x00000005 : 18.25 kbps + * 0x00000006 : 19.85 kbps + * 0x00000007 : 23.05 kbps + * 0x00000008 : 23.85 kbps + */ +} __attribute__((packed)); + +struct vss_istream_cmd_cdma_set_enc_minmax_rate_t { + uint16_t min_rate; + /* Set the lower bound encoder rate. + * + * 0x0000 : Blank frame + * 0x0001 : Eighth rate + * 0x0002 : Quarter rate + * 0x0003 : Half rate + * 0x0004 : Full rate + */ + uint16_t max_rate; + /* Set the upper bound encoder rate. 
+ * + * 0x0000 : Blank frame + * 0x0001 : Eighth rate + * 0x0002 : Quarter rate + * 0x0003 : Half rate + * 0x0004 : Full rate + */ +} __attribute__((packed)); + +struct vss_istream_cmd_set_enc_dtx_mode_t { + uint32_t enable; + /* Toggle DTX on or off. + * + * 0 : Disables DTX + * 1 : Enables DTX + */ +} __attribute__((packed)); + +#define VSS_TAP_POINT_NONE 0x00010F78 +/* Indicates no tapping for specified path. */ + +#define VSS_TAP_POINT_STREAM_END 0x00010F79 +/* Indicates that specified path should be tapped at the end of the stream. */ + +struct vss_istream_cmd_start_record_t { + uint32_t rx_tap_point; + /* Tap point to use on the Rx path. Supported values are: + * VSS_TAP_POINT_NONE : Do not record Rx path. + * VSS_TAP_POINT_STREAM_END : Rx tap point is at the end of the stream. + */ + uint32_t tx_tap_point; + /* Tap point to use on the Tx path. Supported values are: + * VSS_TAP_POINT_NONE : Do not record tx path. + * VSS_TAP_POINT_STREAM_END : Tx tap point is at the end of the stream. 
+ */ +} __attribute__((packed)); + +struct cvs_create_passive_ctl_session_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_create_passive_control_session_t cvs_session; +} __attribute__((packed)); + +struct cvs_create_full_ctl_session_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_create_full_control_session_t cvs_session; +} __attribute__((packed)); + +struct cvs_destroy_session_cmd { + struct apr_hdr hdr; +} __attribute__((packed)); + +struct cvs_cache_calibration_data_cmd { + struct apr_hdr hdr; +} __attribute__ ((packed)); + +struct cvs_set_mute_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_set_mute_t cvs_set_mute; +} __attribute__((packed)); + +struct cvs_set_media_type_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_set_media_type_t media_type; +} __attribute__((packed)); + +struct cvs_send_dec_buf_cmd { + struct apr_hdr hdr; + struct vss_istream_evt_send_dec_buffer_t dec_buf; +} __attribute__((packed)); + +struct cvs_set_amr_enc_rate_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_voc_amr_set_enc_rate_t amr_rate; +} __attribute__((packed)); + +struct cvs_set_amrwb_enc_rate_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_voc_amrwb_set_enc_rate_t amrwb_rate; +} __attribute__((packed)); + +struct cvs_set_cdma_enc_minmax_rate_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_cdma_set_enc_minmax_rate_t cdma_rate; +} __attribute__((packed)); + +struct cvs_set_enc_dtx_mode_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_set_enc_dtx_mode_t dtx_mode; +} __attribute__((packed)); + +struct cvs_start_record_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_start_record_t rec_mode; +} __attribute__((packed)); + +/* TO CVP commands */ + +#define VSS_IVOCPROC_CMD_CREATE_FULL_CONTROL_SESSION 0x000100C3 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. 
*/ + +#define APRV2_IBASIC_CMD_DESTROY_SESSION 0x0001003C + +#define VSS_IVOCPROC_CMD_SET_DEVICE 0x000100C4 + +#define VSS_IVOCPROC_CMD_CACHE_CALIBRATION_DATA 0x000110E3 + +#define VSS_IVOCPROC_CMD_CACHE_VOLUME_CALIBRATION_TABLE 0x000110E4 + +#define VSS_IVOCPROC_CMD_SET_VP3_DATA 0x000110EB + +#define VSS_IVOCPROC_CMD_SET_RX_VOLUME_INDEX 0x000110EE + +#define VSS_IVOCPROC_CMD_ENABLE 0x000100C6 +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_IVOCPROC_CMD_DISABLE 0x000110E1 +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_IVOCPROC_TOPOLOGY_ID_NONE 0x00010F70 +#define VSS_IVOCPROC_TOPOLOGY_ID_TX_SM_ECNS 0x00010F71 +#define VSS_IVOCPROC_TOPOLOGY_ID_TX_DM_FLUENCE 0x00010F72 + +#define VSS_IVOCPROC_TOPOLOGY_ID_RX_DEFAULT 0x00010F77 + +/* Network IDs */ +#define VSS_NETWORK_ID_DEFAULT 0x00010037 +#define VSS_NETWORK_ID_VOIP_NB 0x00011240 +#define VSS_NETWORK_ID_VOIP_WB 0x00011241 +#define VSS_NETWORK_ID_VOIP_WV 0x00011242 + +/* Media types */ +#define VSS_MEDIA_ID_13K_MODEM 0x00010FC1 +/* QCELP vocoder modem format */ +#define VSS_MEDIA_ID_EVRC_MODEM 0x00010FC2 +/* 80-VF690-47 CDMA enhanced variable rate vocoder modem format. */ +#define VSS_MEDIA_ID_4GV_NB_MODEM 0x00010FC3 +/* 4GV Narrowband modem format */ +#define VSS_MEDIA_ID_4GV_WB_MODEM 0x00010FC4 +/* 4GV Wideband modem format */ +#define VSS_MEDIA_ID_AMR_NB_MODEM 0x00010FC6 +/* 80-VF690-47 UMTS AMR-NB vocoder modem format. */ +#define VSS_MEDIA_ID_AMR_WB_MODEM 0x00010FC7 +/* 80-VF690-47 UMTS AMR-WB vocoder modem format. */ +#define VSS_MEDIA_ID_EFR_MODEM 0x00010FC8 +/* EFR modem format */ +#define VSS_MEDIA_ID_FR_MODEM 0x00010FC9 +/* FR modem format */ +#define VSS_MEDIA_ID_HR_MODEM 0x00010FCA +/* HR modem format */ +#define VSS_MEDIA_ID_PCM_NB 0x00010FCB +/* Linear PCM (16-bit, little-endian). */ +#define VSS_MEDIA_ID_PCM_WB 0x00010FCC +/* Linear wideband PCM vocoder modem format (16 bits, little endian).
*/ +#define VSS_MEDIA_ID_G711_ALAW 0x00010FCD +/* G.711 a-law (contains two 10ms vocoder frames). */ +#define VSS_MEDIA_ID_G711_MULAW 0x00010FCE +/* G.711 mu-law (contains two 10ms vocoder frames). */ +#define VSS_MEDIA_ID_G729 0x00010FD0 +/* G.729AB (contains two 10ms vocoder frames). */ + +#define VOICE_CMD_SET_PARAM 0x00011006 +#define VOICE_CMD_GET_PARAM 0x00011007 +#define VOICE_EVT_GET_PARAM_ACK 0x00011008 + +struct vss_ivocproc_cmd_create_full_control_session_t { + uint16_t direction; + /* + * Stream direction. + * 0 : TX only + * 1 : RX only + * 2 : TX and RX + */ + uint32_t tx_port_id; + /* + * TX device port ID which vocproc will connect to. If not supplying a + * port ID set to VSS_IVOCPROC_PORT_ID_NONE. + */ + uint32_t tx_topology_id; + /* + * Tx leg topology ID. If not supplying a topology ID set to + * VSS_IVOCPROC_TOPOLOGY_ID_NONE. + */ + uint32_t rx_port_id; + /* + * RX device port ID which vocproc will connect to. If not supplying a + * port ID set to VSS_IVOCPROC_PORT_ID_NONE. + */ + uint32_t rx_topology_id; + /* + * Rx leg topology ID. If not supplying a topology ID set to + * VSS_IVOCPROC_TOPOLOGY_ID_NONE. + */ + int32_t network_id; + /* + * Network ID. (Refer to VSS_NETWORK_ID_XXX). If not supplying a network + * ID set to VSS_NETWORK_ID_DEFAULT. + */ +} __attribute__((packed)); + +struct vss_ivocproc_cmd_set_device_t { + uint32_t tx_port_id; + /**< + * TX device port ID which vocproc will connect to. + * VSS_IVOCPROC_PORT_ID_NONE means vocproc will not connect to any port. + */ + uint32_t tx_topology_id; + /**< + * TX leg topology ID. + * VSS_IVOCPROC_TOPOLOGY_ID_NONE means vocproc does not contain any + * pre/post-processing blocks and is pass-through. + */ + int32_t rx_port_id; + /**< + * RX device port ID which vocproc will connect to. + * VSS_IVOCPROC_PORT_ID_NONE means vocproc will not connect to any port. + */ + uint32_t rx_topology_id; + /**< + * RX leg topology ID.
+ * VSS_IVOCPROC_TOPOLOGY_ID_NONE means vocproc does not contain any + * pre/post-processing blocks and is pass-through. + */ +} __attribute__((packed)); + +struct vss_ivocproc_cmd_set_volume_index_t { + uint16_t vol_index; + /**< + * Volume index utilized by the vocproc to index into the volume table + * provided in VSS_IVOCPROC_CMD_CACHE_VOLUME_CALIBRATION_TABLE and set + * volume on the VDSP. + */ +} __attribute__((packed)); + +struct cvp_create_full_ctl_session_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_create_full_control_session_t cvp_session; +} __attribute__ ((packed)); + +struct cvp_command { + struct apr_hdr hdr; +} __attribute__((packed)); + +struct cvp_set_device_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_set_device_t cvp_set_device; +} __attribute__ ((packed)); + +struct cvp_cache_calibration_data_cmd { + struct apr_hdr hdr; +} __attribute__((packed)); + +struct cvp_cache_volume_calibration_table_cmd { + struct apr_hdr hdr; +} __attribute__((packed)); + +struct cvp_set_vp3_data_cmd { + struct apr_hdr hdr; +} __attribute__((packed)); + +struct cvp_set_rx_volume_index_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_set_volume_index_t cvp_set_vol_idx; +} __attribute__((packed)); + +/* CB for up-link packets. */ +typedef void (*ul_cb_fn)(uint8_t *voc_pkt, + uint32_t pkt_len, + void *private_data); + +/* CB for down-link packets. 
*/ +typedef void (*dl_cb_fn)(uint8_t *voc_pkt, + uint32_t *pkt_len, + void *private_data); + +struct q_min_max_rate { + uint32_t min_rate; + uint32_t max_rate; +}; + +struct mvs_driver_info { + uint32_t media_type; + uint32_t rate; + uint32_t network_type; + uint32_t dtx_mode; + struct q_min_max_rate q_min_max_rate; + ul_cb_fn ul_cb; + dl_cb_fn dl_cb; + void *private_data; +}; + +struct incall_rec_info { + uint32_t pending; + uint32_t rec_mode; +}; + +struct incall_music_info { + uint32_t pending; + uint32_t playing; +}; + +struct voice_data { + int voc_state;/*INIT, CHANGE, RELEASE, RUN */ + + wait_queue_head_t mvm_wait; + wait_queue_head_t cvs_wait; + wait_queue_head_t cvp_wait; + + /* cache the values related to Rx and Tx */ + struct device_data dev_rx; + struct device_data dev_tx; + + /* call status */ + int v_call_status; /* Start or End */ + + u32 mvm_state; + u32 cvs_state; + u32 cvp_state; + + /* Handle to MVM */ + u16 mvm_handle; + /* Handle to CVS */ + u16 cvs_handle; + /* Handle to CVP */ + u16 cvp_handle; + + struct mutex lock; + + struct incall_rec_info rec_info; + + struct incall_music_info music_info; + + u16 session_id; +}; + +#define MAX_VOC_SESSIONS 2 +#define SESSION_ID_BASE 0xFFF0 + +struct common_data { + uint32_t voc_path; + uint32_t adsp_version; + uint32_t device_events; + + /* These default values are for all devices */ + uint32_t default_mute_val; + uint32_t default_vol_val; + uint32_t default_sample_val; + + /* APR to MVM in the modem */ + void *apr_mvm; + /* APR to CVS in the modem */ + void *apr_cvs; + /* APR to CVP in the modem */ + void *apr_cvp; + + /* APR to MVM in the Q6 */ + void *apr_q6_mvm; + /* APR to CVS in the Q6 */ + void *apr_q6_cvs; + /* APR to CVP in the Q6 */ + void *apr_q6_cvp; + + struct mutex common_lock; + + struct mvs_driver_info mvs_info; + + struct voice_data voice[MAX_VOC_SESSIONS]; +}; + +int voice_set_voc_path_full(uint32_t set); + +void voice_register_mvs_cb(ul_cb_fn ul_cb, + dl_cb_fn dl_cb, + void 
*private_data); + +void voice_config_vocoder(uint32_t media_type, + uint32_t rate, + uint32_t network_type, + uint32_t dtx_mode, + struct q_min_max_rate q_min_max_rate); + +int voice_start_record(uint32_t rec_mode, uint32_t set); + +int voice_start_playback(uint32_t set); + +u16 voice_get_session_id(const char *name); +#endif diff --git a/include/sound/qdsp6v2/rtac.h b/include/sound/qdsp6v2/rtac.h new file mode 100644 index 000000000000..07be428f14d3 --- /dev/null +++ b/include/sound/qdsp6v2/rtac.h @@ -0,0 +1,39 @@ +/* Copyright (c) 2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ + +#ifndef __RTAC_H__ +#define __RTAC_H__ + +/* Voice Modes */ +#define RTAC_CVP 0 +#define RTAC_CVS 1 +#define RTAC_VOICE_MODES 2 + +void rtac_add_adm_device(u32 port_id, u32 copp_id, u32 path_id, u32 popp_id); +void rtac_remove_adm_device(u32 port_id); +void rtac_remove_popp_from_adm_devices(u32 popp_id); +void rtac_add_voice(u32 cvs_handle, u32 cvp_handle, u32 rx_afe_port, + u32 tx_afe_port, u32 session_id); +void rtac_remove_voice(u32 cvs_handle); +void rtac_set_adm_handle(void *handle); +bool rtac_make_adm_callback(uint32_t *payload, u32 payload_size); +void rtac_copy_adm_payload_to_user(void *payload, u32 payload_size); +void rtac_set_asm_handle(u32 session_id, void *handle); +bool rtac_make_asm_callback(u32 session_id, uint32_t *payload, + u32 payload_size); +void rtac_copy_asm_payload_to_user(void *payload, u32 payload_size); +void rtac_set_voice_handle(u32 mode, void *handle); +bool rtac_make_voice_callback(u32 mode, uint32_t *payload, u32 payload_size); +void rtac_copy_voice_payload_to_user(void *payload, u32 payload_size); + +#endif diff --git a/include/sound/qdsp6v2/usf.h b/include/sound/qdsp6v2/usf.h new file mode 100644 index 000000000000..ff399297e797 --- /dev/null +++ b/include/sound/qdsp6v2/usf.h @@ -0,0 +1,274 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ + +#ifndef __USF_H__ +#define __USF_H__ + +#include <linux/types.h> +#include <linux/ioctl.h> + +#define USF_IOCTL_MAGIC 'U' + +#define US_SET_TX_INFO _IOW(USF_IOCTL_MAGIC, 0, \ + struct us_tx_info_type) +#define US_START_TX _IO(USF_IOCTL_MAGIC, 1) +#define US_GET_TX_UPDATE _IOWR(USF_IOCTL_MAGIC, 2, \ + struct us_tx_update_info_type) +#define US_SET_RX_INFO _IOW(USF_IOCTL_MAGIC, 3, \ + struct us_rx_info_type) +#define US_SET_RX_UPDATE _IOWR(USF_IOCTL_MAGIC, 4, \ + struct us_rx_update_info_type) +#define US_START_RX _IO(USF_IOCTL_MAGIC, 5) + +#define US_STOP_TX _IO(USF_IOCTL_MAGIC, 6) +#define US_STOP_RX _IO(USF_IOCTL_MAGIC, 7) + +#define US_SET_DETECTION _IOWR(USF_IOCTL_MAGIC, 8, \ + struct us_detect_info_type) + +#define US_GET_VERSION _IOWR(USF_IOCTL_MAGIC, 9, \ + struct us_version_info_type) + +/* Special timeout values */ +#define USF_NO_WAIT_TIMEOUT 0x00000000 +/* Infinitive */ +#define USF_INFINITIVE_TIMEOUT 0xffffffff +/* Default value, used by the driver */ +#define USF_DEFAULT_TIMEOUT 0xfffffffe + +/* US detection place (HW|FW) */ +enum us_detect_place_enum { +/* US is detected in HW */ + US_DETECT_HW, +/* US is detected in FW */ + US_DETECT_FW +}; + +/* US detection mode */ +enum us_detect_mode_enum { +/* US detection is disabled */ + US_DETECT_DISABLED_MODE, +/* US detection is enabled in continue mode */ + US_DETECT_CONTINUE_MODE, +/* US detection is enabled in one shot mode */ + US_DETECT_SHOT_MODE +}; + +/* Encoder (TX), decoder (RX) supported US data formats */ +#define USF_POINT_EPOS_FORMAT 0 +#define USF_RAW_FORMAT 1 + +/* Indexes of event types, produced by the calculators */ +#define USF_TSC_EVENT_IND 0 +#define USF_TSC_PTR_EVENT_IND 1 +#define USF_MOUSE_EVENT_IND 2 +#define USF_KEYBOARD_EVENT_IND 3 +#define USF_MAX_EVENT_IND 4 + +/* Types of events, produced by the calculators */ +#define USF_NO_EVENT 0 +#define USF_TSC_EVENT (1 << USF_TSC_EVENT_IND) +#define USF_TSC_PTR_EVENT (1 << USF_TSC_PTR_EVENT_IND) +#define USF_MOUSE_EVENT (1 << 
USF_MOUSE_EVENT_IND) +#define USF_KEYBOARD_EVENT (1 << USF_KEYBOARD_EVENT_IND) +#define USF_ALL_EVENTS (USF_TSC_EVENT |\ + USF_TSC_PTR_EVENT |\ + USF_MOUSE_EVENT |\ + USF_KEYBOARD_EVENT) + +/* min, max array dimension */ +#define MIN_MAX_DIM 2 + +/* coordinates (x,y,z) array dimension */ +#define COORDINATES_DIM 3 + +/* tilts (x,y) array dimension */ +#define TILTS_DIM 2 + +/* Max size of the client name */ +#define USF_MAX_CLIENT_NAME_SIZE 20 +/* Info structure common for TX and RX */ +struct us_xx_info_type { +/* Input: general info */ +/* Name of the client - event calculator */ + const char *client_name; +/* Selected device identification, accepted in the kernel's CAD */ + uint32_t dev_id; +/* 0 - point_epos type; (e.g. 1 - gr_mmrd) */ + uint32_t stream_format; +/* Required sample rate in Hz */ + uint32_t sample_rate; +/* Size of a buffer (bytes) for US data transfer between the module and USF */ + uint32_t buf_size; +/* Number of the buffers for the US data transfer */ + uint16_t buf_num; +/* Number of the microphones (TX) or speakers(RX) */ + uint16_t port_cnt; +/* Microphones(TX) or speakers(RX) indexes in their enumeration */ + uint8_t port_id[4]; +/* Bits per sample 16 or 32 */ + uint16_t bits_per_sample; +/* Input: Transparent info for encoder in the LPASS */ +/* Parameters data size in bytes */ + uint16_t params_data_size; +/* Pointer to the parameters */ + uint8_t *params_data; +}; + +/* Input events sources */ +enum us_input_event_src_type { + US_INPUT_SRC_PEN, + US_INPUT_SRC_FINGER, + US_INPUT_SRC_UNDEF +}; + +struct us_input_info_type { + /* Touch screen dimensions: min & max;for input module */ + int tsc_x_dim[MIN_MAX_DIM]; + int tsc_y_dim[MIN_MAX_DIM]; + int tsc_z_dim[MIN_MAX_DIM]; + /* Touch screen tilt dimensions: min & max;for input module */ + int tsc_x_tilt[MIN_MAX_DIM]; + int tsc_y_tilt[MIN_MAX_DIM]; + /* Touch screen pressure limits: min & max; for input module */ + int tsc_pressure[MIN_MAX_DIM]; + /* Bitmap of types of events (USF_X_EVENT), 
produced by calculator */ + uint16_t event_types; + /* Input event source */ + enum us_input_event_src_type event_src; + /* Bitmap of types of events from devs, conflicting with USF */ + uint16_t conflicting_event_types; +}; + +struct us_tx_info_type { + /* Common info */ + struct us_xx_info_type us_xx_info; + /* Info specific for TX*/ + struct us_input_info_type input_info; +}; + +struct us_rx_info_type { + /* Common info */ + struct us_xx_info_type us_xx_info; + /* Info specific for RX*/ +}; + +struct point_event_type { +/* Pen coordinates (x, y, z) in units, defined by <coordinates_type> */ + int coordinates[COORDINATES_DIM]; + /* {x;y} in transparent units */ + int inclinations[TILTS_DIM]; +/* [0-1023] (10bits); 0 - pen up */ + uint32_t pressure; +}; + +/* Mouse buttons, supported by USF */ +#define USF_BUTTON_LEFT_MASK 1 +#define USF_BUTTON_MIDDLE_MASK 2 +#define USF_BUTTON_RIGHT_MASK 4 +struct mouse_event_type { +/* The mouse relative movement (dX, dY, dZ) */ + int rels[COORDINATES_DIM]; +/* Bitmap of mouse buttons states: 1 - down, 0 - up; */ + uint16_t buttons_states; +}; + +struct key_event_type { +/* Calculated MS key- see input.h. */ + uint32_t key; +/* Keyboard's key state: 1 - down, 0 - up; */ + uint8_t key_state; +}; + +struct usf_event_type { +/* Event sequence number */ + uint32_t seq_num; +/* Event generation system time */ + uint32_t timestamp; +/* Destination input event type index (e.g. 
touch screen, mouse, key) */ + uint16_t event_type_ind; + union { + struct point_event_type point_event; + struct mouse_event_type mouse_event; + struct key_event_type key_event; + } event_data; +}; + +struct us_tx_update_info_type { +/* Input general: */ +/* Number of calculated events */ + uint16_t event_counter; +/* Calculated events or NULL */ + struct usf_event_type *event; +/* Pointer (read index) to the end of available region */ +/* in the shared US data memory */ + uint32_t free_region; +/* Time (sec) to wait for data or special values: */ +/* USF_NO_WAIT_TIMEOUT, USF_INFINITIVE_TIMEOUT, USF_DEFAULT_TIMEOUT */ + uint32_t timeout; +/* Events (from conflicting devs) to be disabled/enabled */ + uint16_t event_filters; + +/* Input transparent data: */ +/* Parameters size */ + uint16_t params_data_size; +/* Pointer to the parameters */ + uint8_t *params_data; +/* Output parameters: */ +/* Pointer (write index) to the end of ready US data region */ +/* in the shared memory */ + uint32_t ready_region; +}; + +struct us_rx_update_info_type { +/* Input general: */ +/* Pointer (write index) to the end of ready US data region */ +/* in the shared memory */ + uint32_t ready_region; +/* Input transparent data: */ +/* Parameters size */ + uint16_t params_data_size; +/* Pointer to the parameters */ + uint8_t *params_data; +/* Output parameters: */ +/* Pointer (read index) to the end of available region */ +/* in the shared US data memory */ + uint32_t free_region; +}; + +struct us_detect_info_type { +/* US detection place (HW|FW) */ +/* NA in the Active and OFF states */ + enum us_detect_place_enum us_detector; +/* US detection mode */ + enum us_detect_mode_enum us_detect_mode; +/* US data dropped during this time (msec) */ + uint32_t skip_time; +/* Transparent data size */ + uint16_t params_data_size; +/* Pointer to the transparent data */ + uint8_t *params_data; +/* Time (sec) to wait for US presence event */ + uint32_t detect_timeout; +/* Out parameter: US presence */
+ bool is_us; +}; + +struct us_version_info_type { +/* Size of memory for the version string */ + uint16_t buf_size; +/* Pointer to the memory for the version string */ + char *pbuf; +}; + +#endif /* __USF_H__ */ diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 0664c31f010c..0e099ccef9c5 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -196,6 +196,14 @@ struct drm_msm_wait_fence { struct drm_msm_timespec timeout; /* in */ }; +/* + * User space can get the last fence processed by the GPU + */ +struct drm_msm_get_last_fence { + uint32_t fence; /* out */ + uint32_t pad; +}; + #define DRM_MSM_GET_PARAM 0x00 /* placeholder: #define DRM_MSM_SET_PARAM 0x01 @@ -206,7 +214,8 @@ struct drm_msm_wait_fence { #define DRM_MSM_GEM_CPU_FINI 0x05 #define DRM_MSM_GEM_SUBMIT 0x06 #define DRM_MSM_WAIT_FENCE 0x07 -#define DRM_MSM_NUM_IOCTLS 0x08 +#define DRM_MSM_GET_LAST_FENCE 0x08 +#define DRM_MSM_NUM_IOCTLS 0x09 #define DRM_IOCTL_MSM_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GET_PARAM, struct drm_msm_param) #define DRM_IOCTL_MSM_GEM_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GEM_NEW, struct drm_msm_gem_new) @@ -215,5 +224,6 @@ struct drm_msm_wait_fence { #define DRM_IOCTL_MSM_GEM_CPU_FINI DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_GEM_CPU_FINI, struct drm_msm_gem_cpu_fini) #define DRM_IOCTL_MSM_GEM_SUBMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GEM_SUBMIT, struct drm_msm_gem_submit) #define DRM_IOCTL_MSM_WAIT_FENCE DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_WAIT_FENCE, struct drm_msm_wait_fence) +#define DRM_IOCTL_MSM_GET_LAST_FENCE DRM_IOR (DRM_COMMAND_BASE + DRM_MSM_GET_LAST_FENCE, struct drm_msm_get_last_fence) #endif /* __MSM_DRM_H__ */ diff --git a/mm/memblock.c b/mm/memblock.c index 6ecb0d937fb5..d3b42d164749 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -1440,6 +1440,12 @@ int __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t size memblock.memory.regions[idx].size) >= end; } +int __init_memblock
memblock_overlaps_memory(phys_addr_t base, phys_addr_t size) +{ + memblock_cap_size(base, &size); + return memblock_overlaps_region(&memblock.memory, base, size) >= 0; +} + /** * memblock_is_region_reserved - check if a region intersects reserved memory * @base: base of region to check diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 3541c12c47c9..f851734f3c46 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4933,6 +4933,8 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat, static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat) { + unsigned long __maybe_unused offset = 0; + /* Skip empty nodes */ if (!pgdat->node_spanned_pages) return; @@ -4949,6 +4951,7 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat) * for the buddy allocator to function correctly. */ start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1); + offset = pgdat->node_start_pfn - start; end = pgdat_end_pfn(pgdat); end = ALIGN(end, MAX_ORDER_NR_PAGES); size = (end - start) * sizeof(struct page); @@ -4956,7 +4959,7 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat) if (!map) map = memblock_virt_alloc_node_nopanic(size, pgdat->node_id); - pgdat->node_mem_map = map + (pgdat->node_start_pfn - start); + pgdat->node_mem_map = map + offset; } #ifndef CONFIG_NEED_MULTIPLE_NODES /* @@ -4964,10 +4967,13 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat) */ if (pgdat == NODE_DATA(0)) { mem_map = NODE_DATA(0)->node_mem_map; -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP - if (page_to_pfn(mem_map) != pgdat->node_start_pfn) - mem_map -= (pgdat->node_start_pfn - ARCH_PFN_OFFSET); -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ +#if defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) || defined(CONFIG_FLATMEM) + if (page_to_pfn(mem_map) != pgdat->node_start_pfn) { + if (IS_ENABLED(CONFIG_HAVE_MEMBLOCK_NODE_MAP)) + offset = pgdat->node_start_pfn - ARCH_PFN_OFFSET; + mem_map -= offset; + } +#endif /* 
CONFIG_HAVE_MEMBLOCK_NODE_MAP || CONFIG_FLATMEM */ } #endif #endif /* CONFIG_FLAT_NODE_MEM_MAP */ diff --git a/sound/soc/Kconfig b/sound/soc/Kconfig index 7d5d6444a837..3bd2131ab36d 100644 --- a/sound/soc/Kconfig +++ b/sound/soc/Kconfig @@ -55,6 +55,7 @@ source "sound/soc/spear/Kconfig" source "sound/soc/tegra/Kconfig" source "sound/soc/txx9/Kconfig" source "sound/soc/ux500/Kconfig" +source "sound/soc/msm/Kconfig" # Supported codecs source "sound/soc/codecs/Kconfig" diff --git a/sound/soc/Makefile b/sound/soc/Makefile index d88edfced8c4..a3739f578f41 100644 --- a/sound/soc/Makefile +++ b/sound/soc/Makefile @@ -1,3 +1,4 @@ +ccflags-$(CONFIG_SND_SOC) := -DDEBUG snd-soc-core-objs := soc-core.o soc-dapm.o soc-jack.o soc-cache.o soc-utils.o snd-soc-core-objs += soc-pcm.o soc-compress.o soc-io.o soc-devres.o @@ -19,6 +20,7 @@ obj-$(CONFIG_SND_SOC) += dwc/ obj-$(CONFIG_SND_SOC) += fsl/ obj-$(CONFIG_SND_SOC) += jz4740/ obj-$(CONFIG_SND_SOC) += intel/ +obj-$(CONFIG_SND_SOC) += msm/ obj-$(CONFIG_SND_SOC) += mxs/ obj-$(CONFIG_SND_SOC) += nuc900/ obj-$(CONFIG_SND_SOC) += omap/ diff --git a/sound/soc/msm/Kconfig b/sound/soc/msm/Kconfig new file mode 100644 index 000000000000..f4fe1cdb57c5 --- /dev/null +++ b/sound/soc/msm/Kconfig @@ -0,0 +1,72 @@ +menu "MSM SoC Audio support" + +config SND_MSM_DAI_SOC + tristate + +config SND_MSM_SOC + tristate "SoC Audio for the MSM series chips" + depends on ARCH_MSM7X27 + select SND_MSM_DAI_SOC + select SND_MSM_SOC_MSM7K + default n + help + To add support for the ALSA PCM driver for MSM boards. + +#8660 Variants +config SND_SOC_MSM8X60_PCM + tristate + +config SND_SOC_MSM8X60_DAI + tristate + +config SND_SOC_MSM_QDSP6_HDMI_AUDIO + tristate "SoC QDSP6 HDMI Audio DAI driver" + depends on FB_MSM_HDMI_MSM_PANEL || DRM_MSM + default n + help + To support HDMI Audio on MSM8960 over QDSP6. + +config SND_SOC_MSM_QDSP6_INTF + bool "SoC Q6 audio driver for MSM8960" + default n + help + To add support for SoC audio on MSM8960.
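[Editor's note, not part of the patch: as an illustration of how the Kconfig entries above would be consumed, a board defconfig wanting the QDSP6 HDMI audio path might enable something like the following. The selection below is a hypothetical example; `SND_SOC_MSM_QDSP6_HDMI_AUDIO` additionally requires `FB_MSM_HDMI_MSM_PANEL` or `DRM_MSM`, per its `depends on` line.]

```
# Hypothetical defconfig fragment (illustrative only)
CONFIG_SND_SOC=y
CONFIG_SND_SOC_MSM_QDSP6_INTF=y
CONFIG_SND_SOC_MSM_QDSP6_HDMI_AUDIO=y
```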
+ +config SND_SOC_MSM_QDSP6V2_INTF + bool "SoC Q6 audio driver for MSM8974" + help + To add support for SoC audio on MSM8974. + This will enable all the platform specific + interactions towards DSP. It includes asm, + adm and afe interfaces on the DSP. + + +config SND_SOC_QDSP6 + tristate "SoC ALSA audio driver for QDSP6" + select SND_SOC_MSM_QDSP6_INTF + default n + help + To add support for MSM QDSP6 SoC Audio. + +config SND_SOC_QDSP6V2 + tristate "SoC ALSA audio driver for QDSP6V2" + select SND_SOC_MSM_QDSP6V2_INTF + help + To add support for MSM QDSP6V2 SoC Audio. + This will enable sound soc platform specific + audio drivers. This includes q6asm, q6adm, + q6afe interfaces to DSP using apr. + +config SND_SOC_MSM8960 + tristate "SoC Machine driver for MSM8960 and APQ8064 boards" + depends on ARCH_MSM8960 || ARCH_APQ8064 + select SND_SOC_VOICE + select SND_SOC_QDSP6 + select SND_SOC_MSM_STUB + select SND_SOC_MSM_HOSTLESS_PCM + select SND_SOC_MSM_QDSP6_HDMI_AUDIO if FB_MSM_HDMI_MSM_PANEL + default n + help + To add support for SoC audio on MSM8960 and APQ8064 boards. + +endmenu diff --git a/sound/soc/msm/Makefile b/sound/soc/msm/Makefile new file mode 100644 index 000000000000..a35f4020ba3d --- /dev/null +++ b/sound/soc/msm/Makefile @@ -0,0 +1,8 @@ +# for MSM 8960 sound card driver +obj-$(CONFIG_SND_SOC_MSM_QDSP6_INTF) += qdsp6/ +snd-soc-qdsp6-objs := msm-pcm-q6.o msm-pcm-routing.o msm-dai-fe.o +obj-$(CONFIG_SND_SOC_MSM_QDSP6_HDMI_AUDIO) += msm-dai-q6-hdmi.o +obj-$(CONFIG_SND_SOC_QDSP6) += snd-soc-qdsp6.o + +snd-soc-msm8960-objs := apq8064.o +obj-$(CONFIG_SND_SOC_MSM8960) += snd-soc-msm8960.o diff --git a/sound/soc/msm/apq8064.c b/sound/soc/msm/apq8064.c new file mode 100644 index 000000000000..5b916fd8595c --- /dev/null +++ b/sound/soc/msm/apq8064.c @@ -0,0 +1,170 @@ + +#include <linux/clk.h> +#include <linux/platform_device.h> +#include <linux/module.h> +#include <linux/of_device.h> + +#include <linux/pm_runtime.h> +#include <sound/core.h> +#include <sound/soc.h>
+#include <sound/soc-dapm.h> +#include <sound/pcm.h> +#include "msm-pcm-routing.h" + +static int msm_hdmi_rx_ch = 2; +static int hdmi_rate_variable; + +static const char *hdmi_rx_ch_text[] = {"Two", "Three", "Four", "Five", + "Six", "Seven", "Eight"}; +static const char * const hdmi_rate[] = {"Default", "Variable"}; + +static const struct soc_enum msm_enum[] = { + SOC_ENUM_SINGLE_EXT(7, hdmi_rx_ch_text), + SOC_ENUM_SINGLE_EXT(2, hdmi_rate), +}; + +static int msm_hdmi_rx_ch_get(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + pr_debug("%s: msm_hdmi_rx_ch = %d\n", __func__, + msm_hdmi_rx_ch); + ucontrol->value.integer.value[0] = msm_hdmi_rx_ch - 2; + return 0; +} + +static int msm_hdmi_rx_ch_put(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + msm_hdmi_rx_ch = ucontrol->value.integer.value[0] + 2; + + pr_debug("%s: msm_hdmi_rx_ch = %d\n", __func__, + msm_hdmi_rx_ch); + return 1; +} + +static int msm_hdmi_rate_put(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + hdmi_rate_variable = ucontrol->value.integer.value[0]; + pr_debug("%s: hdmi_rate_variable = %d\n", __func__, hdmi_rate_variable); + return 0; +} + +static int msm_hdmi_rate_get(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + ucontrol->value.integer.value[0] = hdmi_rate_variable; + return 0; +} + +static const struct snd_kcontrol_new tabla_msm_controls[] = { + SOC_ENUM_EXT("HDMI_RX Channels", msm_enum[0], + msm_hdmi_rx_ch_get, msm_hdmi_rx_ch_put), + SOC_ENUM_EXT("HDMI RX Rate", msm_enum[1], + msm_hdmi_rate_get, + msm_hdmi_rate_put), +}; + +int msm_hdmi_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd, + struct snd_pcm_hw_params *params) +{ + struct snd_interval *rate = hw_param_interval(params, + SNDRV_PCM_HW_PARAM_RATE); + + struct snd_interval *channels = hw_param_interval(params, + SNDRV_PCM_HW_PARAM_CHANNELS); + + pr_debug("%s channels->min %u channels->max %u ()\n", __func__, + channels->min, 
channels->max); + + if (!hdmi_rate_variable) + rate->min = rate->max = 48000; + channels->min = channels->max = msm_hdmi_rx_ch; + if (channels->max < 2) + channels->min = channels->max = 2; + + return 0; +} + +/* Digital audio interface glue - connects codec <---> CPU */ +static struct snd_soc_dai_link msm_dai[] = { + /* FrontEnd DAI Links */ + { + .name = "MultiMedia1", + .stream_name = "MultiMedia1", + .cpu_dai_name = "soc:dai_fe", //device name?? + .platform_name = "soc:msm_pcm", + .dynamic = 1, + .dpcm_playback = 1, + .trigger = {SND_SOC_DPCM_TRIGGER_POST, SND_SOC_DPCM_TRIGGER_POST}, + .codec_dai_name = "snd-soc-dummy-dai", + .codec_name = "snd-soc-dummy", + .ignore_suspend = 1, + .ignore_pmdown_time = 1, /* this dainlink has playback support */ + .be_id = MSM_FRONTEND_DAI_MULTIMEDIA1, + .be_hw_params_fixup = msm_hdmi_be_hw_params_fixup, + + }, + /* HDMI BACK END DAI Link */ + { + .name = LPASS_BE_HDMI, + .stream_name = "HDMI", + .cpu_dai_name = "soc:dai_hdmi", + .platform_name = "soc:msm_pcm_routing", + .codec_name = "soc:codec_hdmi", + .codec_dai_name = "hdmi-hifi", + .no_pcm = 1, + .dpcm_playback = 1, + .be_id = MSM_BACKEND_DAI_HDMI_RX, + .be_hw_params_fixup = msm_hdmi_be_hw_params_fixup, + + }, +}; + +static struct snd_soc_card snd_soc_card_msm = { + .name = "apq8064-tabla-snd-card", + .dai_link = msm_dai, + .num_links = ARRAY_SIZE(msm_dai), + .owner = THIS_MODULE, + .controls = tabla_msm_controls, + .num_controls = ARRAY_SIZE(tabla_msm_controls), +}; + +static int msm_snd_apq8064_probe(struct platform_device *pdev) +{ + int ret; + + snd_soc_card_msm.dev = &pdev->dev; + ret = snd_soc_register_card(&snd_soc_card_msm); + if (ret) + dev_err(&pdev->dev, "Error: snd_soc_register_card failed (%d)!\n", ret); + + return ret; + +} + +static int msm_snd_apq8064_remove(struct platform_device *pdev) +{ + return 0; +} + +static const struct of_device_id msm_snd_apq8064_dt_match[] = { + {.compatible = "qcom,snd-apq8064"}, + {} +}; + +static struct platform_driver 
msm_snd_apq8064_driver = { + .probe = msm_snd_apq8064_probe, + .remove = msm_snd_apq8064_remove, + .driver = { + .name = "msm-snd-apq8064", + .owner = THIS_MODULE, + .of_match_table = msm_snd_apq8064_dt_match, + }, +}; +module_platform_driver(msm_snd_apq8064_driver); +/* Module information */ + +MODULE_DESCRIPTION("ALSA SoC msm"); +MODULE_LICENSE("GPL v2"); + diff --git a/sound/soc/msm/msm-dai-fe.c b/sound/soc/msm/msm-dai-fe.c new file mode 100644 index 000000000000..2c04fd50655c --- /dev/null +++ b/sound/soc/msm/msm-dai-fe.c @@ -0,0 +1,100 @@ +/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/init.h> +#include <linux/module.h> +#include <linux/device.h> +#include <linux/platform_device.h> +#include <linux/of_device.h> +#include <sound/core.h> +#include <sound/pcm.h> +#include <sound/soc.h> + +/* Conventional and unconventional sample rate supported */ +static unsigned int supported_sample_rates[] = { + 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000 +}; + +static struct snd_pcm_hw_constraint_list constraints_sample_rates = { + .count = ARRAY_SIZE(supported_sample_rates), + .list = supported_sample_rates, + .mask = 0, +}; + +static int multimedia_startup(struct snd_pcm_substream *substream, + struct snd_soc_dai *dai) +{ + snd_pcm_hw_constraint_list(substream->runtime, 0, + SNDRV_PCM_HW_PARAM_RATE, + &constraints_sample_rates); + + return 0; +} + +static struct snd_soc_dai_ops msm_fe_Multimedia_dai_ops = { + .startup = multimedia_startup, +}; + +static struct snd_soc_dai_driver msm_fe_dais[] = { + { + .playback = { + .stream_name = "MultiMedia1 Playback", + .rates = (SNDRV_PCM_RATE_8000_48000| + SNDRV_PCM_RATE_KNOT), + .formats = SNDRV_PCM_FMTBIT_S16_LE, + .channels_min = 1, + .channels_max = 6, + .rate_min = 8000, + .rate_max = 48000, + }, + .ops = &msm_fe_Multimedia_dai_ops, + .name = "MultiMedia1", + }, +}; + +static const struct snd_soc_component_driver msm_dai_fe_component = { + .name = "msm-dai-fe", +}; + +static int msm_fe_dai_dev_probe(struct platform_device *pdev) +{ + int ret; + + ret = snd_soc_register_component(&pdev->dev, &msm_dai_fe_component, msm_fe_dais, 1); + + return ret; +} + +static int msm_fe_dai_dev_remove(struct platform_device *pdev) +{ + snd_soc_unregister_component(&pdev->dev); + return 0; +} + +static const struct of_device_id msm_dai_fe_dt_match[] = { + {.compatible = "qcom,msm-dai-fe"}, + {} +}; + +static struct platform_driver msm_fe_dai_driver = { + .probe = msm_fe_dai_dev_probe, + .remove = msm_fe_dai_dev_remove, + .driver = { + .name = "msm-dai-fe", + .owner = THIS_MODULE, + 
.of_match_table = msm_dai_fe_dt_match, + }, +}; +module_platform_driver(msm_fe_dai_driver); +/* Module information */ +MODULE_DESCRIPTION("MSM Frontend DAI driver"); +MODULE_LICENSE("GPL v2"); diff --git a/sound/soc/msm/msm-dai-q6-hdmi.c b/sound/soc/msm/msm-dai-q6-hdmi.c new file mode 100644 index 000000000000..5b69274c2950 --- /dev/null +++ b/sound/soc/msm/msm-dai-q6-hdmi.c @@ -0,0 +1,332 @@ +/* Copyright (c) 2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <linux/init.h> +#include <linux/module.h> +#include <linux/device.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> +#include <linux/bitops.h> +#include <linux/slab.h> +#include <sound/core.h> +#include <sound/pcm.h> +#include <sound/soc.h> +#include <sound/apr_audio.h> +#include <sound/q6afe.h> +#include <sound/q6adm.h> +#include <sound/msm-dai-q6.h> +#include <sound/msm_hdmi_audio.h> + + +enum { + STATUS_PORT_STARTED, /* track if AFE port has started */ + STATUS_MAX +}; + +struct msm_dai_q6_hdmi_dai_data { + DECLARE_BITMAP(status_mask, STATUS_MAX); + u32 rate; + u32 channels; + union afe_port_config port_config; +}; + +static int msm_dai_q6_hdmi_format_put(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + + struct msm_dai_q6_hdmi_dai_data *dai_data = kcontrol->private_data; + int value = ucontrol->value.integer.value[0]; + dai_data->port_config.hdmi_multi_ch.data_type = value; + pr_debug("%s: value = %d\n", __func__, value); + return 0; +} + +static int msm_dai_q6_hdmi_format_get(struct 
snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + + struct msm_dai_q6_hdmi_dai_data *dai_data = kcontrol->private_data; + ucontrol->value.integer.value[0] = + dai_data->port_config.hdmi_multi_ch.data_type; + return 0; +} + + +/* HDMI format field for AFE_PORT_MULTI_CHAN_HDMI_AUDIO_IF_CONFIG command + * 0: linear PCM + * 1: non-linear PCM + */ +static const char *hdmi_format[] = { + "LPCM", + "Compr" +}; + +static const struct soc_enum hdmi_config_enum[] = { + SOC_ENUM_SINGLE_EXT(2, hdmi_format), +}; + +static const struct snd_kcontrol_new hdmi_config_controls[] = { + SOC_ENUM_EXT("HDMI RX Format", hdmi_config_enum[0], + msm_dai_q6_hdmi_format_get, + msm_dai_q6_hdmi_format_put), +}; +/* Current implementation assumes hw_param is called once + * This may not be the case but what to do when ADM and AFE + * port are already opened and parameter changes + */ +static int msm_dai_q6_hdmi_hw_params(struct snd_pcm_substream *substream, + struct snd_pcm_hw_params *params, + struct snd_soc_dai *dai) +{ + struct msm_dai_q6_hdmi_dai_data *dai_data = dev_get_drvdata(dai->dev); + u32 channel_allocation = 0; + u32 level_shift = 0; /* 0dB */ + bool down_mix = FALSE; + int sample_rate = 48000; + + printk("ADEBUG-DAI_HDMI::: %s\n", __func__); + dai_data->channels = params_channels(params); + dai_data->rate = params_rate(params); + dai_data->port_config.hdmi_multi_ch.reserved = 0; + + dai_data->channels = 2; + dai_data->rate = 48000; + switch (dai_data->rate) { + case 48000: + sample_rate = HDMI_SAMPLE_RATE_48KHZ; + break; + case 44100: + sample_rate = HDMI_SAMPLE_RATE_44_1KHZ; + break; + case 32000: + sample_rate = HDMI_SAMPLE_RATE_32KHZ; + break; + } + hdmi_msm_audio_sample_rate_reset(sample_rate); + + switch (dai_data->channels) { + case 2: + channel_allocation = 0; + hdmi_msm_audio_info_setup(1, MSM_HDMI_AUDIO_CHANNEL_2, + channel_allocation, level_shift, down_mix); + dai_data->port_config.hdmi_multi_ch.channel_allocation = + channel_allocation; + break; + case 
6: + channel_allocation = 0x0B; + hdmi_msm_audio_info_setup(1, MSM_HDMI_AUDIO_CHANNEL_6, + channel_allocation, level_shift, down_mix); + dai_data->port_config.hdmi_multi_ch.channel_allocation = + channel_allocation; + break; + case 8: + channel_allocation = 0x1F; + hdmi_msm_audio_info_setup(1, MSM_HDMI_AUDIO_CHANNEL_8, + channel_allocation, level_shift, down_mix); + dai_data->port_config.hdmi_multi_ch.channel_allocation = + channel_allocation; + break; + default: + dev_err(dai->dev, "invalid Channels = %u\n", + dai_data->channels); + return -EINVAL; + } + dev_info(dai->dev, "%s() num_ch = %u rate =%u" + " channel_allocation = %u data type = %d\n", __func__, + dai_data->channels, + dai_data->rate, + dai_data->port_config.hdmi_multi_ch.channel_allocation, + dai_data->port_config.hdmi_multi_ch.data_type); + + return 0; +} + + +static void msm_dai_q6_hdmi_shutdown(struct snd_pcm_substream *substream, + struct snd_soc_dai *dai) +{ + struct msm_dai_q6_hdmi_dai_data *dai_data = dev_get_drvdata(dai->dev); + int rc = 0; + + printk("ADEBUG-DAI_HDMI::: %s\n", __func__); + if (!test_bit(STATUS_PORT_STARTED, dai_data->status_mask)) { + pr_info("%s: afe port not started. 
dai_data->status_mask" + " = %ld\n", __func__, *dai_data->status_mask); + return; + } + + rc = afe_close(HDMI_RX); /* can block */ + + if (IS_ERR_VALUE(rc)) + dev_err(dai->dev, "fail to close AFE port\n"); + + pr_debug("%s: dai_data->status_mask = %ld\n", __func__, + *dai_data->status_mask); + + clear_bit(STATUS_PORT_STARTED, dai_data->status_mask); +} + +static union afe_port_config hdmi_port_config; + +static int msm_dai_q6_hdmi_prepare(struct snd_pcm_substream *substream, + struct snd_soc_dai *dai) +{ + struct msm_dai_q6_hdmi_dai_data *dai_data = dev_get_drvdata(dai->dev); + int rc = 0; + dai_data->rate = 48000; + dai_data->port_config.hdmi_multi_ch.channel_allocation = 0; + dai_data->port_config.hdmi_multi_ch.reserved = 0; + dai_data->port_config.hdmi_multi_ch.data_type = 0; + + hdmi_port_config.hdmi_multi_ch.channel_allocation = 0, + hdmi_port_config.hdmi_multi_ch.reserved = 0, +//HDMI_RX = 8; + printk("ADEBUG-DAI_HDMI::: %s HDMI_RX %d \n", __func__, dai->id); + printk("ADEBUG-DAI_HDMI::: %s dai id %d hdmi rate %d config ch al %d reserved %d data type %d \n", __func__, HDMI_RX, dai_data->rate, + dai_data->port_config.hdmi_multi_ch.channel_allocation, + dai_data->port_config.hdmi_multi_ch.reserved, + dai_data->port_config.hdmi_multi_ch.data_type); + if (!test_bit(STATUS_PORT_STARTED, dai_data->status_mask)) { + rc = afe_port_start(HDMI_RX, &dai_data->port_config, + dai_data->rate); + if (IS_ERR_VALUE(rc)) + dev_err(dai->dev, "fail to open AFE port %x\n", + HDMI_RX); + else + set_bit(STATUS_PORT_STARTED, + dai_data->status_mask); + } + + return rc; +} + +static int msm_dai_q6_hdmi_dai_probe(struct snd_soc_dai *dai) +{ + struct msm_dai_q6_hdmi_dai_data *dai_data; + const struct snd_kcontrol_new *kcontrol; + int rc = 0; + + printk("ADEBUG-DAI_HDMI::: %s\n", __func__); + dai_data = kzalloc(sizeof(struct msm_dai_q6_hdmi_dai_data), + GFP_KERNEL); + + if (!dai_data) { + dev_err(dai->dev, "DAI-%d: fail to allocate dai data\n", + HDMI_RX); + rc = -ENOMEM; + } else + 
dev_set_drvdata(dai->dev, dai_data); + + kcontrol = &hdmi_config_controls[0]; + + rc = snd_ctl_add(dai->card->snd_card, + snd_ctl_new1(kcontrol, dai_data)); + return rc; +} + +static int msm_dai_q6_hdmi_dai_remove(struct snd_soc_dai *dai) +{ + struct msm_dai_q6_hdmi_dai_data *dai_data; + int rc; + printk("ADEBUG-DAI_HDMI::: %s\n", __func__); + + dai_data = dev_get_drvdata(dai->dev); + + /* If AFE port is still up, close it */ + if (test_bit(STATUS_PORT_STARTED, dai_data->status_mask)) { + rc = afe_close(HDMI_RX); /* can block */ + + if (IS_ERR_VALUE(rc)) + dev_err(dai->dev, "fail to close AFE port\n"); + + clear_bit(STATUS_PORT_STARTED, dai_data->status_mask); + } + kfree(dai_data); + snd_soc_unregister_component(dai->dev); + + return 0; +} + +static struct snd_soc_dai_ops msm_dai_q6_hdmi_ops = { + .prepare = msm_dai_q6_hdmi_prepare, + .hw_params = msm_dai_q6_hdmi_hw_params, + .shutdown = msm_dai_q6_hdmi_shutdown, +}; + +static struct snd_soc_dai_driver msm_dai_q6_hdmi_hdmi_rx_dai = { + .playback = { + .stream_name = "HDMI Playback", + .rates = SNDRV_PCM_RATE_48000, + .formats = SNDRV_PCM_FMTBIT_S16_LE, + .channels_min = 2, + .channels_max = 6, + .rate_max = 48000, + .rate_min = 48000, + }, + .ops = &msm_dai_q6_hdmi_ops, + .probe = msm_dai_q6_hdmi_dai_probe, + .remove = msm_dai_q6_hdmi_dai_remove, + .id = HDMI_RX, + .name = "HDMI", +}; + + +static const struct snd_soc_component_driver msm_hdmi_q6_component = { + .name = "msm-dai-q6-hdmi", +}; +/* To do: change to register DAIs as batch */ +static int msm_dai_q6_hdmi_dev_probe(struct platform_device *pdev) +{ + int rc = 0; + + dev_dbg(&pdev->dev, "dev name %s dev-id %d\n", + dev_name(&pdev->dev), pdev->id); + + printk("ADEBUG-DAI_HDMI: %s \n", __func__); +// switch (pdev->id) { +// case HDMI_RX: + rc = snd_soc_register_component(&pdev->dev, &msm_hdmi_q6_component, + &msm_dai_q6_hdmi_hdmi_rx_dai, 1); +// break; +// default: +// dev_err(&pdev->dev, "invalid device ID %d\n", pdev->id); +// rc = -ENODEV; +// break; +// 
} + printk("ADEBUG-DAI_HDMI: %s ret %d \n", __func__, rc); + return rc; +} + +static int msm_dai_q6_hdmi_dev_remove(struct platform_device *pdev) +{ + snd_soc_unregister_component(&pdev->dev); + return 0; +} + +static const struct of_device_id msm_dai_hdmi_dt_match[] = { + {.compatible = "qcom,msm-dai-q6-hdmi"}, + {} +}; +static struct platform_driver msm_dai_q6_hdmi_driver = { + .probe = msm_dai_q6_hdmi_dev_probe, + .remove = msm_dai_q6_hdmi_dev_remove, + .driver = { + .name = "msm-dai-q6-hdmi", + .owner = THIS_MODULE, + .of_match_table = msm_dai_hdmi_dt_match, + }, +}; + +module_platform_driver(msm_dai_q6_hdmi_driver); +/* Module information */ +MODULE_DESCRIPTION("MSM DSP HDMI DAI driver"); +MODULE_LICENSE("GPL v2"); diff --git a/sound/soc/msm/msm-pcm-afe.h b/sound/soc/msm/msm-pcm-afe.h new file mode 100644 index 000000000000..674c7b5828cb --- /dev/null +++ b/sound/soc/msm/msm-pcm-afe.h @@ -0,0 +1,46 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#ifndef _MSM_PCM_AFE_H +#define _MSM_PCM_AFE_H +#include <sound/apr_audio.h> +#include <sound/q6afe.h> + + +struct pcm_afe_info { + unsigned long dma_addr; + struct snd_pcm_substream *substream; + unsigned int pcm_irq_pos; /* IRQ position */ + struct mutex lock; + spinlock_t dsp_lock; + uint32_t samp_rate; + uint32_t channel_mode; + uint8_t start; + uint32_t dsp_cnt; + uint32_t buf_phys; + int32_t mmap_flag; + int prepared; + struct hrtimer hrt; + int poll_time; + struct audio_client *audio_client; +}; + + +#define MSM_EXT(xname, fp_info, fp_get, fp_put, addr) \ + {.iface = SNDRV_CTL_ELEM_IFACE_MIXER, \ + .access = SNDRV_CTL_ELEM_ACCESS_READWRITE, \ + .name = xname, \ + .info = fp_info,\ + .get = fp_get, .put = fp_put, \ + .private_value = addr, \ + } + +#endif /*_MSM_PCM_AFE_H*/ diff --git a/sound/soc/msm/msm-pcm-q6.c b/sound/soc/msm/msm-pcm-q6.c new file mode 100644 index 000000000000..b3a289507f7c --- /dev/null +++ b/sound/soc/msm/msm-pcm-q6.c @@ -0,0 +1,850 @@ +/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/init.h> +#include <linux/err.h> +#include <linux/module.h> +#include <linux/of_device.h> +#include <linux/moduleparam.h> +#include <linux/time.h> +#include <linux/wait.h> +#include <linux/platform_device.h> +#include <linux/slab.h> +#include <sound/core.h> +#include <sound/soc.h> +#include <sound/soc-dapm.h> +#include <sound/pcm.h> +#include <sound/initval.h> +#include <sound/control.h> +#include <asm/dma.h> +#include <linux/dma-mapping.h> + +#include <sound/q6afe.h> +#include <sound/q6adm.h> +#include <sound/msm-dai-q6.h> +#include <sound/msm_hdmi_audio.h> +#include "msm-pcm-q6.h" +#include "msm-pcm-routing.h" + +static struct audio_locks the_locks; + +extern int q6_hdmi_prepare(struct snd_pcm_substream *substream); +struct snd_msm { + struct snd_card *card; + struct snd_pcm *pcm; +}; + +#define PLAYBACK_NUM_PERIODS 8 +#define PLAYBACK_PERIOD_SIZE 2048 +#define CAPTURE_MIN_NUM_PERIODS 2 +#define CAPTURE_MAX_NUM_PERIODS 16 +#define CAPTURE_MAX_PERIOD_SIZE 4096 +#define CAPTURE_MIN_PERIOD_SIZE 320 + +static struct snd_pcm_hardware msm_pcm_hardware_capture = { + .info = (SNDRV_PCM_INFO_MMAP | + SNDRV_PCM_INFO_BLOCK_TRANSFER | + SNDRV_PCM_INFO_MMAP_VALID | + SNDRV_PCM_INFO_INTERLEAVED | + SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME), + .formats = SNDRV_PCM_FMTBIT_S16_LE, + .rates = SNDRV_PCM_RATE_8000_48000, + .rate_min = 8000, + .rate_max = 48000, + .channels_min = 1, + .channels_max = 4, + .buffer_bytes_max = CAPTURE_MAX_NUM_PERIODS * + CAPTURE_MAX_PERIOD_SIZE, + .period_bytes_min = CAPTURE_MIN_PERIOD_SIZE, + .period_bytes_max = CAPTURE_MAX_PERIOD_SIZE, + .periods_min = CAPTURE_MIN_NUM_PERIODS, + .periods_max = CAPTURE_MAX_NUM_PERIODS, + .fifo_size = 0, +}; + +static struct snd_pcm_hardware msm_pcm_hardware_playback = { + .info = (SNDRV_PCM_INFO_MMAP | + SNDRV_PCM_INFO_BLOCK_TRANSFER | + SNDRV_PCM_INFO_MMAP_VALID | + SNDRV_PCM_INFO_INTERLEAVED | + SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME), + .formats = SNDRV_PCM_FMTBIT_S16_LE, + .rates = 
SNDRV_PCM_RATE_8000_48000 | SNDRV_PCM_RATE_KNOT, + .rate_min = 8000, + .rate_max = 48000, + .channels_min = 1, + .channels_max = 2, + .buffer_bytes_max = PLAYBACK_NUM_PERIODS * PLAYBACK_PERIOD_SIZE, + .period_bytes_min = PLAYBACK_PERIOD_SIZE, + .period_bytes_max = PLAYBACK_PERIOD_SIZE, + .periods_min = PLAYBACK_NUM_PERIODS, + .periods_max = PLAYBACK_NUM_PERIODS, + .fifo_size = 0, +}; + +/* Conventional and unconventional sample rate supported */ +static unsigned int supported_sample_rates[] = { + 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000 +}; + +static uint32_t in_frame_info[CAPTURE_MAX_NUM_PERIODS][2]; + +static struct snd_pcm_hw_constraint_list constraints_sample_rates = { + .count = ARRAY_SIZE(supported_sample_rates), + .list = supported_sample_rates, + .mask = 0, +}; + +static void msm_pcm_route_event_handler(enum msm_pcm_routing_event event, + void *priv_data) +{ + struct msm_audio *prtd = priv_data; + + BUG_ON(!prtd); + + pr_debug("%s: event %x\n", __func__, event); + + switch (event) { + case MSM_PCM_RT_EVT_BUF_RECFG: + q6asm_cmd(prtd->audio_client, CMD_PAUSE); + q6asm_cmd(prtd->audio_client, CMD_FLUSH); + q6asm_run(prtd->audio_client, 0, 0, 0); + default: + break; + } +} + +static void event_handler(uint32_t opcode, + uint32_t token, uint32_t *payload, void *priv) +{ + struct msm_audio *prtd = priv; + struct snd_pcm_substream *substream = prtd->substream; + uint32_t *ptrmem = (uint32_t *)payload; + int i = 0; + uint32_t idx = 0; + uint32_t size = 0; + + pr_debug("%s\n", __func__); + switch (opcode) { + case ASM_DATA_EVENT_WRITE_DONE: { + pr_debug("ASM_DATA_EVENT_WRITE_DONE\n"); + pr_debug("Buffer Consumed = 0x%08x\n", *ptrmem); + prtd->pcm_irq_pos += prtd->pcm_count; + if (atomic_read(&prtd->start)) + snd_pcm_period_elapsed(substream); + atomic_inc(&prtd->out_count); + wake_up(&the_locks.write_wait); + if (!atomic_read(&prtd->start)) + break; + if (!prtd->mmap_flag) + break; + if (q6asm_is_cpu_buf_avail_nolock(IN, + prtd->audio_client, + 
&size, &idx)) { + pr_debug("%s:writing %d bytes of buffer to dsp 2\n", + __func__, prtd->pcm_count); + q6asm_write_nolock(prtd->audio_client, + prtd->pcm_count, 0, 0, NO_TIMESTAMP); + } + break; + } + case ASM_DATA_CMDRSP_EOS: + pr_debug("ASM_DATA_CMDRSP_EOS\n"); + prtd->cmd_ack = 1; + wake_up(&the_locks.eos_wait); + break; + case ASM_DATA_EVENT_READ_DONE: { + pr_debug("ASM_DATA_EVENT_READ_DONE\n"); + pr_debug("token = 0x%08x\n", token); + for (i = 0; i < 8; i++, ++ptrmem) + pr_debug("cmd[%d]=0x%08x\n", i, *ptrmem); + in_frame_info[token][0] = payload[2]; + in_frame_info[token][1] = payload[3]; + + /* assume data size = 0 during flushing */ + if (in_frame_info[token][0]) { + prtd->pcm_irq_pos += in_frame_info[token][0]; + pr_debug("pcm_irq_pos=%d\n", prtd->pcm_irq_pos); + if (atomic_read(&prtd->start)) + snd_pcm_period_elapsed(substream); + if (atomic_read(&prtd->in_count) <= prtd->periods) + atomic_inc(&prtd->in_count); + wake_up(&the_locks.read_wait); + if (prtd->mmap_flag && + q6asm_is_cpu_buf_avail_nolock(OUT, + prtd->audio_client, + &size, &idx)) + q6asm_read_nolock(prtd->audio_client); + } else { + pr_debug("%s: reclaim flushed buf in_count %x\n", + __func__, atomic_read(&prtd->in_count)); + atomic_inc(&prtd->in_count); + if (atomic_read(&prtd->in_count) == prtd->periods) { + pr_info("%s: reclaimed all bufs\n", __func__); + if (atomic_read(&prtd->start)) + snd_pcm_period_elapsed(substream); + wake_up(&the_locks.read_wait); + } + } + + break; + } + case APR_BASIC_RSP_RESULT: { + switch (payload[0]) { + case ASM_SESSION_CMD_RUN: + if (substream->stream + != SNDRV_PCM_STREAM_PLAYBACK) { + atomic_set(&prtd->start, 1); + break; + } + if (prtd->mmap_flag) { + pr_debug("%s:writing %d bytes" + " of buffer to dsp\n", + __func__, + prtd->pcm_count); + q6asm_write_nolock(prtd->audio_client, + prtd->pcm_count, + 0, 0, NO_TIMESTAMP); + } else { + while (atomic_read(&prtd->out_needed)) { + pr_debug("%s:writing %d bytes" + " of buffer to dsp\n", + __func__, + 
prtd->pcm_count); + q6asm_write_nolock(prtd->audio_client, + prtd->pcm_count, + 0, 0, NO_TIMESTAMP); + atomic_dec(&prtd->out_needed); + wake_up(&the_locks.write_wait); + }; + } + atomic_set(&prtd->start, 1); + break; + default: + break; + } + } + break; + default: + pr_debug("Not Supported Event opcode[0x%x]\n", opcode); + break; + } +} + +static int msm_pcm_playback_prepare(struct snd_pcm_substream *substream) +{ + struct snd_pcm_runtime *runtime = substream->runtime; + struct msm_audio *prtd = runtime->private_data; + int ret; + + pr_debug("%s\n", __func__); + prtd->pcm_size = snd_pcm_lib_buffer_bytes(substream); + prtd->pcm_count = snd_pcm_lib_period_bytes(substream); + prtd->pcm_irq_pos = 0; + /* rate and channels are sent to audio driver */ + prtd->samp_rate = runtime->rate; + prtd->channel_mode = runtime->channels; + if (prtd->enabled) + return 0; + + ret = q6asm_media_format_block_pcm(prtd->audio_client, runtime->rate, + runtime->channels); + if (ret < 0) + pr_info("%s: CMD Format block failed\n", __func__); + + atomic_set(&prtd->out_count, runtime->periods); + + prtd->enabled = 1; + prtd->cmd_ack = 0; + + return 0; +} + +static int msm_pcm_capture_prepare(struct snd_pcm_substream *substream) +{ + struct snd_pcm_runtime *runtime = substream->runtime; + struct msm_audio *prtd = runtime->private_data; + int ret = 0; + int i = 0; + pr_debug("%s\n", __func__); + prtd->pcm_size = snd_pcm_lib_buffer_bytes(substream); + prtd->pcm_count = snd_pcm_lib_period_bytes(substream); + prtd->pcm_irq_pos = 0; + + /* rate and channels are sent to audio driver */ + prtd->samp_rate = runtime->rate; + prtd->channel_mode = runtime->channels; + + if (prtd->enabled) + return 0; + + pr_debug("Samp_rate = %d\n", prtd->samp_rate); + pr_debug("Channel = %d\n", prtd->channel_mode); + if (prtd->channel_mode > 2) { + ret = q6asm_enc_cfg_blk_multi_ch_pcm(prtd->audio_client, + prtd->samp_rate, prtd->channel_mode); + } else { + ret = q6asm_enc_cfg_blk_pcm(prtd->audio_client, + 
prtd->samp_rate, prtd->channel_mode); + } + + if (ret < 0) + pr_err("%s: cmd cfg pcm block failed\n", __func__); + + for (i = 0; i < runtime->periods; i++) + q6asm_read(prtd->audio_client); + prtd->periods = runtime->periods; + + prtd->enabled = 1; + + return ret; +} + +static int msm_pcm_trigger(struct snd_pcm_substream *substream, int cmd) +{ + int ret = 0; + struct snd_pcm_runtime *runtime = substream->runtime; + struct msm_audio *prtd = runtime->private_data; + + switch (cmd) { + case SNDRV_PCM_TRIGGER_START: + case SNDRV_PCM_TRIGGER_RESUME: + case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: + pr_debug("%s: Trigger start\n", __func__); + q6asm_run_nowait(prtd->audio_client, 0, 0, 0); + break; + case SNDRV_PCM_TRIGGER_STOP: + pr_debug("SNDRV_PCM_TRIGGER_STOP\n"); + atomic_set(&prtd->start, 0); + if (substream->stream != SNDRV_PCM_STREAM_PLAYBACK) + break; + prtd->cmd_ack = 0; + q6asm_cmd_nowait(prtd->audio_client, CMD_EOS); + break; + case SNDRV_PCM_TRIGGER_SUSPEND: + case SNDRV_PCM_TRIGGER_PAUSE_PUSH: + pr_debug("SNDRV_PCM_TRIGGER_PAUSE\n"); + q6asm_cmd_nowait(prtd->audio_client, CMD_PAUSE); + atomic_set(&prtd->start, 0); + break; + default: + ret = -EINVAL; + break; + } + + return ret; +} + +static int msm_pcm_open(struct snd_pcm_substream *substream) +{ + struct snd_pcm_runtime *runtime = substream->runtime; + struct snd_soc_pcm_runtime *soc_prtd = substream->private_data; + struct msm_audio *prtd; + int ret = 0; + + pr_debug("%s\n", __func__); + prtd = kzalloc(sizeof(struct msm_audio), GFP_KERNEL); + if (prtd == NULL) { + pr_err("Failed to allocate memory for msm_audio\n"); + return -ENOMEM; + } + prtd->substream = substream; + prtd->audio_client = q6asm_audio_client_alloc( + (app_cb)event_handler, prtd); + if (!prtd->audio_client) { + pr_err("%s: could not allocate audio client\n", __func__); + kfree(prtd); + return -ENOMEM; + } + prtd->audio_client->perf_mode = false; + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + runtime->hw = msm_pcm_hardware_playback; + 
snd_soc_set_runtime_hwparams(substream, &msm_pcm_hardware_playback); + if (snd_pcm_hw_constraint_integer(runtime, + SNDRV_PCM_HW_PARAM_PERIODS) < 0) + pr_err("%s: failed to set hw periods constraint\n", __func__); + + if (snd_pcm_hw_constraint_integer(runtime, + SNDRV_PCM_HW_PARAM_PERIOD_SIZE) < 0) + pr_err("%s: failed to set hw period size constraint\n", __func__); + + ret = q6asm_open_write(prtd->audio_client, FORMAT_LINEAR_PCM); + if (ret < 0) { + pr_err("%s: pcm out open failed\n", __func__); + q6asm_audio_client_free(prtd->audio_client); + kfree(prtd); + return -ENOMEM; + } + + pr_debug("%s: session ID %d\n", __func__, + prtd->audio_client->session); + prtd->session_id = prtd->audio_client->session; + msm_pcm_routing_reg_phy_stream(soc_prtd->dai_link->be_id, + prtd->audio_client->perf_mode, + prtd->session_id, substream->stream); + prtd->cmd_ack = 1; + + } + /* Capture path */ + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) { + runtime->hw = msm_pcm_hardware_capture; + } + + ret = snd_pcm_hw_constraint_list(runtime, 0, + SNDRV_PCM_HW_PARAM_RATE, + &constraints_sample_rates); + if (ret < 0) + pr_err("snd_pcm_hw_constraint_list failed\n"); + /* Ensure that buffer size is a multiple of period size */ + ret = snd_pcm_hw_constraint_integer(runtime, + SNDRV_PCM_HW_PARAM_PERIODS); + if (ret < 0) + pr_err("snd_pcm_hw_constraint_integer failed\n"); + + if (snd_pcm_hw_constraint_step(substream->runtime, 0, + SNDRV_PCM_HW_PARAM_PERIOD_BYTES, + 2048) < 0) + pr_err("snd_pcm_hw_constraint_step period bytes failed\n"); + + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) { + ret = snd_pcm_hw_constraint_minmax(runtime, + SNDRV_PCM_HW_PARAM_BUFFER_BYTES, + CAPTURE_MIN_NUM_PERIODS * CAPTURE_MIN_PERIOD_SIZE, + CAPTURE_MAX_NUM_PERIODS * CAPTURE_MAX_PERIOD_SIZE); + if (ret < 0) { + pr_err("constraint for buffer bytes min max ret = %d\n", + ret); + } + } + + prtd->dsp_cnt = 0; + runtime->private_data = prtd; + + return 0; +} + +static int msm_pcm_playback_copy(struct 
snd_pcm_substream *substream, int a, + snd_pcm_uframes_t hwoff, void __user *buf, snd_pcm_uframes_t frames) +{ + int ret = 0; + int fbytes = 0; + int xfer = 0; + char *bufptr = NULL; + void *data = NULL; + uint32_t idx = 0; + uint32_t size = 0; + + struct snd_pcm_runtime *runtime = substream->runtime; + struct msm_audio *prtd = runtime->private_data; + + fbytes = frames_to_bytes(runtime, frames); + pr_debug("%s: prtd->out_count = %d\n", + __func__, atomic_read(&prtd->out_count)); + ret = wait_event_timeout(the_locks.write_wait, + (atomic_read(&prtd->out_count)), 5 * HZ); + if (ret < 0) { + pr_err("%s: wait_event_timeout failed\n", __func__); + goto fail; + } + + if (!atomic_read(&prtd->out_count)) { + pr_err("%s: pcm stopped out_count 0\n", __func__); + return 0; + } + + data = q6asm_is_cpu_buf_avail(IN, prtd->audio_client, &size, &idx); + bufptr = data; + if (bufptr) { + xfer = fbytes; + pr_debug("%s:fbytes =%d: xfer=%d size=%d bufptr=%p buf=%p\n", + __func__, fbytes, xfer, size, bufptr, buf); + if (copy_from_user(bufptr, buf, xfer)) { + ret = -EFAULT; + goto fail; + } + buf += xfer; + fbytes -= xfer; + pr_debug("%s:fbytes = %d: xfer=%d\n", __func__, fbytes, xfer); + if (atomic_read(&prtd->start)) { + pr_debug("%s:writing %d bytes of buffer to dsp\n", + __func__, xfer); + ret = q6asm_write(prtd->audio_client, xfer, + 0, 0, NO_TIMESTAMP); + if (ret < 0) { + ret = -EFAULT; + goto fail; + } + } else + atomic_inc(&prtd->out_needed); + atomic_dec(&prtd->out_count); + } +fail: + return ret; +} + +static int msm_pcm_playback_close(struct snd_pcm_substream *substream) +{ + struct snd_pcm_runtime *runtime = substream->runtime; + struct snd_soc_pcm_runtime *soc_prtd = substream->private_data; + struct msm_audio *prtd = runtime->private_data; + int dir = 0; + int ret = 0; + + pr_debug("%s\n", __func__); + + dir = IN; + ret = wait_event_timeout(the_locks.eos_wait, + prtd->cmd_ack, 5 * HZ); + if (ret < 0) + pr_err("%s: CMD_EOS failed\n", __func__); + 
q6asm_cmd(prtd->audio_client, CMD_CLOSE); + q6asm_audio_client_buf_free_contiguous(dir, + prtd->audio_client); + + msm_pcm_routing_dereg_phy_stream(soc_prtd->dai_link->be_id, + SNDRV_PCM_STREAM_PLAYBACK); + q6asm_audio_client_free(prtd->audio_client); + kfree(prtd); + return 0; +} + +static int msm_pcm_capture_copy(struct snd_pcm_substream *substream, + int channel, snd_pcm_uframes_t hwoff, void __user *buf, + snd_pcm_uframes_t frames) +{ + int ret = 0; + int fbytes = 0; + int xfer; + char *bufptr; + void *data = NULL; + static uint32_t idx; + static uint32_t size; + uint32_t offset = 0; + struct snd_pcm_runtime *runtime = substream->runtime; + struct msm_audio *prtd = substream->runtime->private_data; + + + pr_debug("%s\n", __func__); + fbytes = frames_to_bytes(runtime, frames); + + pr_debug("appl_ptr %d\n", (int)runtime->control->appl_ptr); + pr_debug("hw_ptr %d\n", (int)runtime->status->hw_ptr); + pr_debug("avail_min %d\n", (int)runtime->control->avail_min); + + ret = wait_event_timeout(the_locks.read_wait, + (atomic_read(&prtd->in_count)), 5 * HZ); + if (ret < 0) { + pr_debug("%s: wait_event_timeout failed\n", __func__); + goto fail; + } + if (!atomic_read(&prtd->in_count)) { + pr_debug("%s: pcm stopped in_count 0\n", __func__); + return 0; + } + pr_debug("Checking if valid buffer is available...%08x\n", + (unsigned int) data); + data = q6asm_is_cpu_buf_avail(OUT, prtd->audio_client, &size, &idx); + bufptr = data; + pr_debug("Size = %d\n", size); + pr_debug("fbytes = %d\n", fbytes); + pr_debug("idx = %d\n", idx); + if (bufptr) { + xfer = fbytes; + if (xfer > size) + xfer = size; + offset = in_frame_info[idx][1]; + pr_debug("Offset value = %d\n", offset); + if (copy_to_user(buf, bufptr+offset, xfer)) { + pr_err("Failed to copy buf to user\n"); + ret = -EFAULT; + goto fail; + } + fbytes -= xfer; + size -= xfer; + in_frame_info[idx][1] += xfer; + pr_debug("%s:fbytes = %d: size=%d: xfer=%d\n", + __func__, fbytes, size, xfer); + pr_debug(" Sending next buffer to 
dsp\n"); + memset(&in_frame_info[idx], 0, + sizeof(uint32_t) * 2); + atomic_dec(&prtd->in_count); + ret = q6asm_read(prtd->audio_client); + if (ret < 0) { + pr_err("q6asm read failed\n"); + ret = -EFAULT; + goto fail; + } + } else + pr_err("No valid buffer\n"); + + pr_debug("Returning from capture_copy... %d\n", ret); +fail: + return ret; +} + +static int msm_pcm_capture_close(struct snd_pcm_substream *substream) +{ + struct snd_pcm_runtime *runtime = substream->runtime; + struct snd_soc_pcm_runtime *soc_prtd = substream->private_data; + struct msm_audio *prtd = runtime->private_data; + int dir = OUT; + + pr_debug("%s\n", __func__); + if (prtd->audio_client) { + q6asm_cmd(prtd->audio_client, CMD_CLOSE); + q6asm_audio_client_buf_free_contiguous(dir, + prtd->audio_client); + q6asm_audio_client_free(prtd->audio_client); + } + + msm_pcm_routing_dereg_phy_stream(soc_prtd->dai_link->be_id, + SNDRV_PCM_STREAM_CAPTURE); + kfree(prtd); + + return 0; +} + +static int msm_pcm_copy(struct snd_pcm_substream *substream, int a, + snd_pcm_uframes_t hwoff, void __user *buf, snd_pcm_uframes_t frames) +{ + int ret = 0; + + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) + ret = msm_pcm_playback_copy(substream, a, hwoff, buf, frames); + else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) + ret = msm_pcm_capture_copy(substream, a, hwoff, buf, frames); + return ret; +} + +static int msm_pcm_close(struct snd_pcm_substream *substream) +{ + int ret = 0; + + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) + ret = msm_pcm_playback_close(substream); + else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) + ret = msm_pcm_capture_close(substream); + return ret; +} + +static int msm_pcm_prepare(struct snd_pcm_substream *substream) +{ + int ret = 0; + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) + ret = msm_pcm_playback_prepare(substream); + else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) + ret = msm_pcm_capture_prepare(substream); + 
return ret; +} + +static snd_pcm_uframes_t msm_pcm_pointer(struct snd_pcm_substream *substream) +{ + + struct snd_pcm_runtime *runtime = substream->runtime; + struct msm_audio *prtd = runtime->private_data; + + if (prtd->pcm_irq_pos >= prtd->pcm_size) + prtd->pcm_irq_pos = 0; + + pr_debug("pcm_irq_pos = %d\n", prtd->pcm_irq_pos); + return bytes_to_frames(runtime, (prtd->pcm_irq_pos)); +} + +static int msm_pcm_mmap(struct snd_pcm_substream *substream, + struct vm_area_struct *vma) +{ + int result = 0; + struct snd_pcm_runtime *runtime = substream->runtime; + struct msm_audio *prtd = runtime->private_data; + + pr_debug("%s\n", __func__); + prtd->mmap_flag = 1; + + if (runtime->dma_addr && runtime->dma_bytes) { + vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); + result = remap_pfn_range(vma, vma->vm_start, + runtime->dma_addr >> PAGE_SHIFT, + runtime->dma_bytes, + vma->vm_page_prot); + } else { + pr_err("Physical address or size of buf is NULL"); + return -EINVAL; + } + + return result; +} + +static int msm_pcm_hw_params(struct snd_pcm_substream *substream, + struct snd_pcm_hw_params *params) +{ + struct snd_pcm_runtime *runtime = substream->runtime; + struct msm_audio *prtd = runtime->private_data; + struct snd_dma_buffer *dma_buf = &substream->dma_buffer; + struct snd_soc_pcm_runtime *soc_prtd = substream->private_data; + struct audio_buffer *buf; + int dir, ret; + int format = FORMAT_LINEAR_PCM; + struct msm_pcm_routing_evt event; + + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) + dir = IN; + else + dir = OUT; + + /*capture path*/ + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) { + if (params_channels(params) > 2) + format = FORMAT_MULTI_CHANNEL_LINEAR_PCM; + pr_debug("%s format = :0x%x\n", __func__, format); + + ret = q6asm_open_read(prtd->audio_client, format); + if (ret < 0) { + pr_err("%s: q6asm_open_read failed\n", __func__); + q6asm_audio_client_free(prtd->audio_client); + prtd->audio_client = NULL; + return -ENOMEM; + } + + pr_debug("%s: 
session ID %d\n", __func__, + prtd->audio_client->session); + prtd->session_id = prtd->audio_client->session; + event.event_func = msm_pcm_route_event_handler; + event.priv_data = (void *) prtd; + msm_pcm_routing_reg_phy_stream_v2(soc_prtd->dai_link->be_id, + prtd->audio_client->perf_mode, + prtd->session_id, + substream->stream, event); + } + + if (dir == OUT) { + ret = q6asm_audio_client_buf_alloc_contiguous(dir, + prtd->audio_client, + (params_buffer_bytes(params) / params_periods(params)), + params_periods(params)); + pr_debug("buff bytes = %d, period size = %d,\ + period count = %d\n", params_buffer_bytes(params), + params_periods(params), + params_buffer_bytes(params) / params_periods(params)); + } else { + ret = q6asm_audio_client_buf_alloc_contiguous(dir, + prtd->audio_client, + runtime->hw.period_bytes_min, + runtime->hw.periods_max); + pr_debug("buff bytes = %d, period size = %d,\ + period count = %d\n", params_buffer_bytes(params), + params_periods(params), + params_buffer_bytes(params) / params_periods(params)); + } + + if (ret < 0) { + pr_err("Audio Start: Buffer Allocation failed \ + rc = %d\n", ret); + return -ENOMEM; + } + buf = prtd->audio_client->port[dir].buf; + if (buf == NULL || buf[0].data == NULL) + return -ENOMEM; + + pr_debug("%s:buf = %p\n", __func__, buf); + dma_buf->dev.type = SNDRV_DMA_TYPE_DEV; + dma_buf->dev.dev = substream->pcm->card->dev; + dma_buf->private_data = NULL; + dma_buf->area = buf[0].data; + dma_buf->addr = buf[0].phys; + dma_buf->bytes = runtime->hw.buffer_bytes_max; + if (dir == IN) + dma_buf->bytes = runtime->hw.buffer_bytes_max; + else + dma_buf->bytes = params_buffer_bytes(params); + + if (!dma_buf->area) + return -ENOMEM; + + snd_pcm_set_runtime_buffer(substream, &substream->dma_buffer); + return 0; +} + +static struct snd_pcm_ops msm_pcm_ops = { + .open = msm_pcm_open, + .copy = msm_pcm_copy, + .hw_params = msm_pcm_hw_params, + .close = msm_pcm_close, + .ioctl = snd_pcm_lib_ioctl, + .prepare = msm_pcm_prepare, + 
.trigger = msm_pcm_trigger, + .pointer = msm_pcm_pointer, + .mmap = msm_pcm_mmap, +}; + +static int msm_asoc_pcm_new(struct snd_soc_pcm_runtime *rtd) +{ + struct snd_card *card = rtd->card->snd_card; + + if (!card->dev->coherent_dma_mask) + card->dev->coherent_dma_mask = DMA_BIT_MASK(32); + return 0; +} + +static struct snd_soc_platform_driver msm_soc_platform = { + .ops = &msm_pcm_ops, + .pcm_new = msm_asoc_pcm_new, +}; + +static int msm_pcm_probe(struct platform_device *pdev) +{ + int ret; + pr_info("%s: dev name %s\n", __func__, dev_name(&pdev->dev)); + init_waitqueue_head(&the_locks.enable_wait); + init_waitqueue_head(&the_locks.eos_wait); + init_waitqueue_head(&the_locks.write_wait); + init_waitqueue_head(&the_locks.read_wait); + ret = snd_soc_register_platform(&pdev->dev, + &msm_soc_platform); + return ret; +} + +static int msm_pcm_remove(struct platform_device *pdev) +{ + snd_soc_unregister_platform(&pdev->dev); + return 0; +} + +static const struct of_device_id msm_pcm_dt_match[] = { + {.compatible = "qcom,msm-pcm-dsp"}, + {} +}; + +static struct platform_driver msm_pcm_driver = { + .driver = { + .name = "msm-pcm-dsp", + .owner = THIS_MODULE, + .of_match_table = msm_pcm_dt_match, + }, + .probe = msm_pcm_probe, + .remove = msm_pcm_remove, +}; + +static int __init msm_soc_platform_init(void) +{ + return platform_driver_register(&msm_pcm_driver); +} +module_init(msm_soc_platform_init); + +static void __exit msm_soc_platform_exit(void) +{ + platform_driver_unregister(&msm_pcm_driver); +} +module_exit(msm_soc_platform_exit); + +MODULE_DESCRIPTION("PCM module platform driver"); +MODULE_LICENSE("GPL v2"); diff --git a/sound/soc/msm/msm-pcm-q6.h b/sound/soc/msm/msm-pcm-q6.h new file mode 100644 index 000000000000..0522f549e818 --- /dev/null +++ b/sound/soc/msm/msm-pcm-q6.h @@ -0,0 +1,96 @@ +/* + * Copyright (C) 2008 Google, Inc. + * Copyright (C) 2008 HTC Corporation + * Copyright (c) 2008-2009,2011 The Linux Foundation. All rights reserved. 
+ * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * See the GNU General Public License for more details. + * You should have received a copy of the GNU General Public License + * along with this program; if not, you can find it at http://www.fsf.org. + */ + +#ifndef _MSM_PCM_H +#define _MSM_PCM_H +#include <sound/apr_audio.h> +#include <sound/q6asm.h> + + +/* Support unconventional sample rates 12000, 24000 as well */ +#define USE_RATE \ + (SNDRV_PCM_RATE_8000_48000 | SNDRV_PCM_RATE_KNOT) + +extern int copy_count; + +struct buffer { + void *data; + unsigned size; + unsigned used; + unsigned addr; +}; + +struct buffer_rec { + void *data; + unsigned int size; + unsigned int read; + unsigned int addr; +}; + +struct audio_locks { + spinlock_t event_lock; + wait_queue_head_t read_wait; + wait_queue_head_t write_wait; + wait_queue_head_t eos_wait; + wait_queue_head_t enable_wait; + wait_queue_head_t flush_wait; +}; + +struct msm_audio { + struct snd_pcm_substream *substream; + unsigned int pcm_size; + unsigned int pcm_count; + unsigned int pcm_irq_pos; /* IRQ position */ + uint16_t source; /* Encoding source bit mask */ + + struct audio_client *audio_client; + + uint16_t session_id; + + uint32_t samp_rate; + uint32_t channel_mode; + uint32_t dsp_cnt; + + int abort; /* set when error, like sample rate mismatch */ + + int enabled; + int close_ack; + int cmd_ack; + atomic_t start; + atomic_t stop; + atomic_t out_count; + atomic_t in_count; + atomic_t out_needed; + atomic_t eos; + int out_head; + int periods; + int mmap_flag; + atomic_t pending_buffer; + int cmd_interrupt; + bool meta_data_mode; +}; + +struct 
output_meta_data_st { + uint32_t meta_data_length; + uint32_t frame_size; + uint32_t timestamp_lsw; + uint32_t timestamp_msw; + uint32_t reserved[12]; +}; + +#endif /*_MSM_PCM_H*/ diff --git a/sound/soc/msm/msm-pcm-routing.c b/sound/soc/msm/msm-pcm-routing.c new file mode 100644 index 000000000000..cb0cc7885fc4 --- /dev/null +++ b/sound/soc/msm/msm-pcm-routing.c @@ -0,0 +1,897 @@ +/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + + +#include <linux/init.h> +#include <linux/err.h> +#include <linux/module.h> +#include <linux/of_device.h> +#include <linux/moduleparam.h> +#include <linux/platform_device.h> +#include <linux/bitops.h> +#include <linux/mutex.h> +#include <sound/core.h> +#include <sound/soc.h> +#include <sound/soc-dapm.h> +#include <sound/pcm.h> +#include <sound/initval.h> +#include <sound/control.h> +#include <sound/q6adm.h> +#include <sound/q6asm.h> +#include <sound/q6afe.h> +#include <sound/tlv.h> +#include "msm-pcm-routing.h" +#include "qdsp6/q6voice.h" + +struct msm_pcm_routing_bdai_data { + u16 port_id; /* AFE port ID */ + u8 active; /* track if this backend is enabled */ + unsigned long fe_sessions; /* Front-end sessions */ + unsigned long port_sessions; /* track Tx BE ports -> Rx BE */ + unsigned int sample_rate; + unsigned int channel; + bool perf_mode; +}; + +struct msm_pcm_routing_fdai_data { + u16 be_srate; /* track prior backend sample rate for flushing purpose */ + int strm_id; /* ASM stream ID */ + struct msm_pcm_routing_evt event_info; +}; + +#define 
INVALID_SESSION -1 +#define SESSION_TYPE_RX 0 +#define SESSION_TYPE_TX 1 + +static DEFINE_MUTEX(routing_lock); + +static int srs_alsa_ctrl_ever_called; + +#define INT_RX_VOL_MAX_STEPS 0x2000 +#define INT_RX_VOL_GAIN 0x2000 +#define INT_RX_LR_VOL_MAX_STEPS 0x20002000 + +static const DECLARE_TLV_DB_LINEAR(fm_rx_vol_gain, 0, + INT_RX_VOL_MAX_STEPS); + +static const DECLARE_TLV_DB_LINEAR(lpa_rx_vol_gain, 0, + INT_RX_LR_VOL_MAX_STEPS); + +static const DECLARE_TLV_DB_LINEAR(multimedia2_rx_vol_gain, 0, + INT_RX_VOL_MAX_STEPS); + +static const DECLARE_TLV_DB_LINEAR(multimedia5_rx_vol_gain, 0, + INT_RX_VOL_MAX_STEPS); + +static const DECLARE_TLV_DB_LINEAR(compressed_rx_vol_gain, 0, + INT_RX_LR_VOL_MAX_STEPS); + +/* Equal to Frontend after last of the MULTIMEDIA SESSIONS */ +#define MAX_EQ_SESSIONS MSM_FRONTEND_DAI_CS_VOICE + +enum { + EQ_BAND1 = 0, + EQ_BAND2, + EQ_BAND3, + EQ_BAND4, + EQ_BAND5, + EQ_BAND6, + EQ_BAND7, + EQ_BAND8, + EQ_BAND9, + EQ_BAND10, + EQ_BAND11, + EQ_BAND12, + EQ_BAND_MAX, +}; + +struct msm_audio_eq_band { + uint16_t band_idx; /* The band index, 0 .. 11 */ + uint32_t filter_type; /* Filter band type */ + uint32_t center_freq_hz; /* Filter band center frequency */ + uint32_t filter_gain; /* Filter band initial gain (dB) */ + /* Range is +12 dB to -12 dB with 1dB increments. 
*/ + uint32_t q_factor; +} __packed; + +struct msm_audio_eq_stream_config { + uint32_t enable; /* Number of consecutive bands specified */ + uint32_t num_bands; + struct msm_audio_eq_band eq_bands[EQ_BAND_MAX]; +} __packed; + +struct msm_audio_eq_stream_config eq_data[MAX_EQ_SESSIONS]; + +static void msm_send_eq_values(int eq_idx); +/* This array is indexed by back-end DAI ID defined in msm-pcm-routing.h + * If new back-end is defined, add new back-end DAI ID at the end of enum + */ + +union srs_trumedia_params_u { + struct srs_trumedia_params srs_params; + unsigned short int raw_params[1]; +}; +static union srs_trumedia_params_u msm_srs_trumedia_params[2]; +static int srs_port_id = -1; + +static void srs_send_params(int port_id, unsigned int techs, + int param_block_idx) +{ + /* only send commands to dsp if srs alsa ctrl was used + at least one time */ + if (!srs_alsa_ctrl_ever_called) + return; + + pr_debug("SRS %s: called, port_id = %d, techs flags = %u," + " paramblockidx %d\n", __func__, port_id, techs, + param_block_idx); + /* force all if techs is set to 1 */ + if (techs == 1) + techs = 0xFFFFFFFF; + + if (techs & (1 << SRS_ID_WOWHD)) + srs_trumedia_open(port_id, SRS_ID_WOWHD, + (void *)&msm_srs_trumedia_params[param_block_idx].srs_params.wowhd); + if (techs & (1 << SRS_ID_CSHP)) + srs_trumedia_open(port_id, SRS_ID_CSHP, + (void *)&msm_srs_trumedia_params[param_block_idx].srs_params.cshp); + if (techs & (1 << SRS_ID_HPF)) + srs_trumedia_open(port_id, SRS_ID_HPF, + (void *)&msm_srs_trumedia_params[param_block_idx].srs_params.hpf); + if (techs & (1 << SRS_ID_PEQ)) + srs_trumedia_open(port_id, SRS_ID_PEQ, + (void *)&msm_srs_trumedia_params[param_block_idx].srs_params.peq); + if (techs & (1 << SRS_ID_HL)) + srs_trumedia_open(port_id, SRS_ID_HL, + (void *)&msm_srs_trumedia_params[param_block_idx].srs_params.hl); + if (techs & (1 << SRS_ID_GLOBAL)) + srs_trumedia_open(port_id, SRS_ID_GLOBAL, + (void *)&msm_srs_trumedia_params[param_block_idx].srs_params.global); +} + 
+static struct msm_pcm_routing_bdai_data msm_bedais[MSM_BACKEND_DAI_MAX] = { + { PRIMARY_I2S_RX, 0, 0, 0, 0, 0}, + { PRIMARY_I2S_TX, 0, 0, 0, 0, 0}, + { SLIMBUS_0_RX, 0, 0, 0, 0, 0}, + { SLIMBUS_0_TX, 0, 0, 0, 0, 0}, + { HDMI_RX, 0, 0, 0, 0, 0}, + { INT_BT_SCO_RX, 0, 0, 0, 0, 0}, + { INT_BT_SCO_TX, 0, 0, 0, 0, 0}, + { INT_FM_RX, 0, 0, 0, 0, 0}, + { INT_FM_TX, 0, 0, 0, 0, 0}, + { RT_PROXY_PORT_001_RX, 0, 0, 0, 0, 0}, + { RT_PROXY_PORT_001_TX, 0, 0, 0, 0, 0}, + { PCM_RX, 0, 0, 0, 0, 0}, + { PCM_TX, 0, 0, 0, 0, 0}, + { VOICE_PLAYBACK_TX, 0, 0, 0, 0, 0}, + { VOICE_RECORD_RX, 0, 0, 0, 0, 0}, + { VOICE_RECORD_TX, 0, 0, 0, 0, 0}, + { MI2S_RX, 0, 0, 0, 0, 0}, + { MI2S_TX, 0, 0, 0, 0}, + { SECONDARY_I2S_RX, 0, 0, 0, 0, 0}, + { SLIMBUS_1_RX, 0, 0, 0, 0, 0}, + { SLIMBUS_1_TX, 0, 0, 0, 0, 0}, + { SLIMBUS_4_RX, 0, 0, 0, 0, 0}, + { SLIMBUS_4_TX, 0, 0, 0, 0, 0}, + { SLIMBUS_3_RX, 0, 0, 0, 0, 0}, + { SLIMBUS_3_TX, 0, 0, 0, 0, 0}, + { SLIMBUS_EXTPROC_RX, 0, 0, 0, 0, 0}, + { SLIMBUS_EXTPROC_RX, 0, 0, 0, 0, 0}, + { SLIMBUS_EXTPROC_RX, 0, 0, 0, 0, 0}, + { SECONDARY_PCM_RX, 0, 0, 0, 0, 0}, + { SECONDARY_PCM_TX, 0, 0, 0, 0, 0}, +}; + +/* Track ASM playback & capture sessions of DAI */ +static struct msm_pcm_routing_fdai_data + fe_dai_map[MSM_FRONTEND_DAI_MM_SIZE][2] = { + /* MULTIMEDIA1 */ + {{0, INVALID_SESSION, {NULL, NULL} }, + {0, INVALID_SESSION, {NULL, NULL} } }, + /* MULTIMEDIA2 */ + {{0, INVALID_SESSION, {NULL, NULL} }, + {0, INVALID_SESSION, {NULL, NULL} } }, + /* MULTIMEDIA3 */ + {{0, INVALID_SESSION, {NULL, NULL} }, + {0, INVALID_SESSION, {NULL, NULL} } }, + /* MULTIMEDIA4 */ + {{0, INVALID_SESSION, {NULL, NULL} }, + {0, INVALID_SESSION, {NULL, NULL} } }, + /* MULTIMEDIA5 */ + {{0, INVALID_SESSION, {NULL, NULL} }, + {0, INVALID_SESSION, {NULL, NULL} } }, + /* MULTIMEDIA6 */ + {{0, INVALID_SESSION, {NULL, NULL} }, + {0, INVALID_SESSION, {NULL, NULL} } }, + /* MULTIMEDIA7*/ + {{0, INVALID_SESSION, {NULL, NULL} }, + {0, INVALID_SESSION, {NULL, NULL} } }, + /* MULTIMEDIA8 */ + 
{{0, INVALID_SESSION, {NULL, NULL} }, + {0, INVALID_SESSION, {NULL, NULL} } }, +}; + +static uint8_t is_be_dai_extproc(int be_dai) +{ + if (be_dai == MSM_BACKEND_DAI_EXTPROC_RX || + be_dai == MSM_BACKEND_DAI_EXTPROC_TX || + be_dai == MSM_BACKEND_DAI_EXTPROC_EC_TX) + return 1; + else + return 0; +} + +static void msm_pcm_routing_build_matrix(int fedai_id, int dspst_id, + int path_type) +{ + int i, port_type; + struct route_payload payload; + + payload.num_copps = 0; + port_type = (path_type == ADM_PATH_PLAYBACK ? + MSM_AFE_PORT_TYPE_RX : MSM_AFE_PORT_TYPE_TX); + + for (i = 0; i < MSM_BACKEND_DAI_MAX; i++) { + if (!is_be_dai_extproc(i) && + (afe_get_port_type(msm_bedais[i].port_id) == port_type) && + (msm_bedais[i].active) && + (test_bit(fedai_id, &msm_bedais[i].fe_sessions))) + payload.copp_ids[payload.num_copps++] = + msm_bedais[i].port_id; + } + + if (payload.num_copps) + adm_matrix_map(dspst_id, path_type, + payload.num_copps, payload.copp_ids, 0); +} + +void msm_pcm_routing_reg_psthr_stream(int fedai_id, int dspst_id, + int stream_type, int enable) +{ + int i, session_type, path_type, port_type; + u32 mode = 0; + + if (fedai_id > MSM_FRONTEND_DAI_MM_MAX_ID) { + /* bad ID assigned in machine driver */ + pr_err("%s: bad MM ID\n", __func__); + return; + } + + if (stream_type == SNDRV_PCM_STREAM_PLAYBACK) { + session_type = SESSION_TYPE_RX; + path_type = ADM_PATH_PLAYBACK; + port_type = MSM_AFE_PORT_TYPE_RX; + } else { + session_type = SESSION_TYPE_TX; + path_type = ADM_PATH_LIVE_REC; + port_type = MSM_AFE_PORT_TYPE_TX; + } + + mutex_lock(&routing_lock); + + fe_dai_map[fedai_id][session_type].strm_id = dspst_id; + for (i = 0; i < MSM_BACKEND_DAI_MAX; i++) { + if (!is_be_dai_extproc(i) && + (afe_get_port_type(msm_bedais[i].port_id) == port_type) && + (msm_bedais[i].active) && + (test_bit(fedai_id, &msm_bedais[i].fe_sessions))) { + mode = afe_get_port_type(msm_bedais[i].port_id); + if (enable) + adm_connect_afe_port(mode, dspst_id, + msm_bedais[i].port_id); + else + 
adm_disconnect_afe_port(mode, dspst_id, + msm_bedais[i].port_id); + + break; + } + } + mutex_unlock(&routing_lock); +} + +void msm_pcm_routing_reg_phy_stream(int fedai_id, bool perf_mode, int dspst_id, + int stream_type) +{ + int i, session_type, path_type, port_type; + struct route_payload payload; + u32 channels; + + if (fedai_id > MSM_FRONTEND_DAI_MM_MAX_ID) { + /* bad ID assigned in machine driver */ + pr_err("%s: bad MM ID %d\n", __func__, fedai_id); + return; + } + + if (stream_type == SNDRV_PCM_STREAM_PLAYBACK) { + session_type = SESSION_TYPE_RX; + path_type = ADM_PATH_PLAYBACK; + port_type = MSM_AFE_PORT_TYPE_RX; + } else { + session_type = SESSION_TYPE_TX; + path_type = ADM_PATH_LIVE_REC; + port_type = MSM_AFE_PORT_TYPE_TX; + } + + mutex_lock(&routing_lock); + + payload.num_copps = 0; /* only RX needs to use payload */ + fe_dai_map[fedai_id][session_type].strm_id = dspst_id; + /* re-enable EQ if active */ + if (eq_data[fedai_id].enable) + msm_send_eq_values(fedai_id); + for (i = 0; i < MSM_BACKEND_DAI_MAX; i++) { + if (test_bit(fedai_id, &msm_bedais[i].fe_sessions)) + msm_bedais[i].perf_mode = perf_mode; + if (!is_be_dai_extproc(i) && + (afe_get_port_type(msm_bedais[i].port_id) == port_type) && + (msm_bedais[i].active) && + (test_bit(fedai_id, &msm_bedais[i].fe_sessions))) { + + channels = msm_bedais[i].channel; + + if ((stream_type == SNDRV_PCM_STREAM_PLAYBACK) && + ((channels == 1) || (channels == 2)) && + msm_bedais[i].perf_mode) { + pr_debug("%s configure COPP to lowlatency mode", + __func__); + adm_multi_ch_copp_open(msm_bedais[i].port_id, + path_type, + msm_bedais[i].sample_rate, + msm_bedais[i].channel, + DEFAULT_COPP_TOPOLOGY, msm_bedais[i].perf_mode); + } else if ((stream_type == SNDRV_PCM_STREAM_PLAYBACK) && + (channels > 2)) + adm_multi_ch_copp_open(msm_bedais[i].port_id, + path_type, + msm_bedais[i].sample_rate, + msm_bedais[i].channel, + DEFAULT_COPP_TOPOLOGY, msm_bedais[i].perf_mode); + else + adm_open(msm_bedais[i].port_id, + path_type, + 
msm_bedais[i].sample_rate, + msm_bedais[i].channel, + DEFAULT_COPP_TOPOLOGY); + + payload.copp_ids[payload.num_copps++] = + msm_bedais[i].port_id; + srs_port_id = msm_bedais[i].port_id; + srs_send_params(srs_port_id, 1, 0); + } + } + if (payload.num_copps) + adm_matrix_map(dspst_id, path_type, + payload.num_copps, payload.copp_ids, 0); + + mutex_unlock(&routing_lock); +} +void msm_pcm_routing_reg_phy_stream_v2(int fedai_id, bool perf_mode, + int dspst_id, int stream_type, + struct msm_pcm_routing_evt event_info) +{ + msm_pcm_routing_reg_phy_stream(fedai_id, perf_mode, dspst_id, + stream_type); + + if (stream_type == SNDRV_PCM_STREAM_PLAYBACK) + fe_dai_map[fedai_id][SESSION_TYPE_RX].event_info = event_info; + else + fe_dai_map[fedai_id][SESSION_TYPE_TX].event_info = event_info; + +} + +void msm_pcm_routing_dereg_phy_stream(int fedai_id, int stream_type) +{ + int i, port_type, session_type; + + if (fedai_id > MSM_FRONTEND_DAI_MM_MAX_ID) { + /* bad ID assigned in machine driver */ + pr_err("%s: bad MM ID\n", __func__); + return; + } + + if (stream_type == SNDRV_PCM_STREAM_PLAYBACK) { + port_type = MSM_AFE_PORT_TYPE_RX; + session_type = SESSION_TYPE_RX; + } else { + port_type = MSM_AFE_PORT_TYPE_TX; + session_type = SESSION_TYPE_TX; + } + + mutex_lock(&routing_lock); + + for (i = 0; i < MSM_BACKEND_DAI_MAX; i++) { + if (!is_be_dai_extproc(i) && + (afe_get_port_type(msm_bedais[i].port_id) == port_type) && + (msm_bedais[i].active) && + (test_bit(fedai_id, &msm_bedais[i].fe_sessions))) + adm_close(msm_bedais[i].port_id); + } + + fe_dai_map[fedai_id][session_type].strm_id = INVALID_SESSION; + fe_dai_map[fedai_id][session_type].be_srate = 0; + mutex_unlock(&routing_lock); +} + +/* Check if FE/BE route is set */ +static bool msm_pcm_routing_route_is_set(u16 be_id, u16 fe_id) +{ + bool rc = false; + + if (fe_id > MSM_FRONTEND_DAI_MM_MAX_ID) { + /* recheck FE ID in the mixer control defined in this file */ + pr_err("%s: bad MM ID\n", __func__); + return rc; + } + + if 
(test_bit(fe_id, &msm_bedais[be_id].fe_sessions)) + rc = true; + + return rc; +} + +static void msm_pcm_routing_process_audio(u16 reg, u16 val, int set) +{ + int session_type, path_type; + u32 channels; + struct msm_pcm_routing_fdai_data *fdai; + + + if (val > MSM_FRONTEND_DAI_MM_MAX_ID) { + /* recheck FE ID in the mixer control defined in this file */ + pr_err("%s: bad MM ID\n", __func__); + return; + } + + if (afe_get_port_type(msm_bedais[reg].port_id) == + MSM_AFE_PORT_TYPE_RX) { + session_type = SESSION_TYPE_RX; + path_type = ADM_PATH_PLAYBACK; + } else { + session_type = SESSION_TYPE_TX; + path_type = ADM_PATH_LIVE_REC; + } + + mutex_lock(&routing_lock); + + if (set) { + + set_bit(val, &msm_bedais[reg].fe_sessions); + fdai = &fe_dai_map[val][session_type]; + if (msm_bedais[reg].active && fdai->strm_id != + INVALID_SESSION) { + channels = msm_bedais[reg].channel; + + if (session_type == SESSION_TYPE_TX && fdai->be_srate && + (fdai->be_srate != msm_bedais[reg].sample_rate)) { + pr_debug("%s: flush strm %d due diff BE rates\n", + __func__, fdai->strm_id); + + if (fdai->event_info.event_func) + fdai->event_info.event_func( + MSM_PCM_RT_EVT_BUF_RECFG, + fdai->event_info.priv_data); + fdai->be_srate = 0; /* might not need it */ + } + + if ((session_type == SESSION_TYPE_RX) && + ((channels == 1) || (channels == 2)) + && msm_bedais[reg].perf_mode) { + adm_multi_ch_copp_open(msm_bedais[reg].port_id, + path_type, + msm_bedais[reg].sample_rate, + channels, + DEFAULT_COPP_TOPOLOGY, + msm_bedais[reg].perf_mode); + pr_debug("%s:configure COPP to lowlatency mode", + __func__); + } else if ((session_type == SESSION_TYPE_RX) + && (channels > 2)) + adm_multi_ch_copp_open(msm_bedais[reg].port_id, + path_type, + msm_bedais[reg].sample_rate, + channels, + DEFAULT_COPP_TOPOLOGY, + msm_bedais[reg].perf_mode); + else + adm_open(msm_bedais[reg].port_id, + path_type, + msm_bedais[reg].sample_rate, channels, + DEFAULT_COPP_TOPOLOGY); + + if (session_type == SESSION_TYPE_RX && + 
fdai->event_info.event_func) + fdai->event_info.event_func( + MSM_PCM_RT_EVT_DEVSWITCH, + fdai->event_info.priv_data); + + msm_pcm_routing_build_matrix(val, + fdai->strm_id, path_type); + srs_port_id = msm_bedais[reg].port_id; + srs_send_params(srs_port_id, 1, 0); + } + } else { + clear_bit(val, &msm_bedais[reg].fe_sessions); + fdai = &fe_dai_map[val][session_type]; + if (msm_bedais[reg].active && fdai->strm_id != + INVALID_SESSION) { + fdai->be_srate = msm_bedais[reg].sample_rate; + adm_close(msm_bedais[reg].port_id); + msm_pcm_routing_build_matrix(val, + fdai->strm_id, path_type); + } + } + + mutex_unlock(&routing_lock); +} + +static int msm_routing_get_audio_mixer(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + struct soc_mixer_control *mc = + (struct soc_mixer_control *)kcontrol->private_value; + + if (test_bit(mc->shift, &msm_bedais[mc->reg].fe_sessions)) + ucontrol->value.integer.value[0] = 1; + else + ucontrol->value.integer.value[0] = 0; + + pr_debug("%s: reg %x shift %x val %ld\n", __func__, mc->reg, mc->shift, + ucontrol->value.integer.value[0]); + + return 0; +} + +static int msm_routing_put_audio_mixer(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + + struct snd_soc_dapm_context *dapm; + + struct soc_mixer_control *mc = + (struct soc_mixer_control *)kcontrol->private_value; + struct snd_soc_codec *codec = snd_soc_dapm_kcontrol_codec(kcontrol); + unsigned int mask = (1 << fls(mc->max)) - 1; + unsigned short val = (ucontrol->value.integer.value[0] & mask); + struct snd_soc_dapm_update update; + struct snd_soc_card *card; + + dapm = &codec->dapm; + card = dapm->card; + update.kcontrol = kcontrol; + update.reg = mc->reg; + update.mask = mask; + update.val = val; + + + if (ucontrol->value.integer.value[0] && + msm_pcm_routing_route_is_set(mc->reg, mc->shift) == false) { + msm_pcm_routing_process_audio(mc->reg, mc->shift, 1); + snd_soc_dapm_mixer_update_power(&codec->dapm, kcontrol, 1, &update); + } else 
if (!ucontrol->value.integer.value[0] && + msm_pcm_routing_route_is_set(mc->reg, mc->shift) == true) { + msm_pcm_routing_process_audio(mc->reg, mc->shift, 0); + snd_soc_dapm_mixer_update_power(&codec->dapm, kcontrol, 0, &update); + } + + return 0; +} + +static void msm_send_eq_values(int eq_idx) +{ + int result; + struct audio_client *ac = q6asm_get_audio_client( + fe_dai_map[eq_idx][SESSION_TYPE_RX].strm_id); + + if (ac == NULL) { + pr_err("%s: Could not get audio client for session: %d\n", + __func__, fe_dai_map[eq_idx][SESSION_TYPE_RX].strm_id); + goto done; + } + + result = q6asm_equalizer(ac, &eq_data[eq_idx]); + + if (result < 0) + pr_err("%s: Call to ASM equalizer failed, returned = %d\n", + __func__, result); +done: + return; +} + +static const struct snd_kcontrol_new hdmi_mixer_controls[] = { + SOC_SINGLE_EXT("MultiMedia1", MSM_BACKEND_DAI_HDMI_RX, + MSM_FRONTEND_DAI_MULTIMEDIA1, 1, 0, msm_routing_get_audio_mixer, + msm_routing_put_audio_mixer), + SOC_SINGLE_EXT("MultiMedia2", MSM_BACKEND_DAI_HDMI_RX, + MSM_FRONTEND_DAI_MULTIMEDIA2, 1, 0, msm_routing_get_audio_mixer, + msm_routing_put_audio_mixer), + SOC_SINGLE_EXT("MultiMedia3", MSM_BACKEND_DAI_HDMI_RX, + MSM_FRONTEND_DAI_MULTIMEDIA3, 1, 0, msm_routing_get_audio_mixer, + msm_routing_put_audio_mixer), + SOC_SINGLE_EXT("MultiMedia4", MSM_BACKEND_DAI_HDMI_RX, + MSM_FRONTEND_DAI_MULTIMEDIA4, 1, 0, msm_routing_get_audio_mixer, + msm_routing_put_audio_mixer), + SOC_SINGLE_EXT("MultiMedia5", MSM_BACKEND_DAI_HDMI_RX, + MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer, + msm_routing_put_audio_mixer), + SOC_SINGLE_EXT("MultiMedia6", MSM_BACKEND_DAI_HDMI_RX, + MSM_FRONTEND_DAI_MULTIMEDIA6, 1, 0, msm_routing_get_audio_mixer, + msm_routing_put_audio_mixer), + SOC_SINGLE_EXT("MultiMedia7", MSM_BACKEND_DAI_HDMI_RX, + MSM_FRONTEND_DAI_MULTIMEDIA7, 1, 0, msm_routing_get_audio_mixer, + msm_routing_put_audio_mixer), + SOC_SINGLE_EXT("MultiMedia8", MSM_BACKEND_DAI_HDMI_RX, + MSM_FRONTEND_DAI_MULTIMEDIA8, 1, 0, 
msm_routing_get_audio_mixer, + msm_routing_put_audio_mixer), +}; + +static const struct snd_soc_dapm_widget msm_qdsp6_widgets[] = { + /* Frontend AIF */ + /* Widget name equals to Front-End DAI name<Need confirmation>, + * Stream name must contains substring of front-end dai name + */ + SND_SOC_DAPM_AIF_IN("MM_DL1", "MultiMedia1 Playback", 0, 0, 0, 0), + SND_SOC_DAPM_AIF_IN("MM_DL2", "MultiMedia2 Playback", 0, 0, 0, 0), + SND_SOC_DAPM_AIF_IN("MM_DL3", "MultiMedia3 Playback", 0, 0, 0, 0), + SND_SOC_DAPM_AIF_IN("MM_DL4", "MultiMedia4 Playback", 0, 0, 0, 0), + SND_SOC_DAPM_AIF_IN("MM_DL5", "MultiMedia5 Playback", 0, 0, 0, 0), + SND_SOC_DAPM_AIF_IN("MM_DL6", "MultiMedia6 Playback", 0, 0, 0, 0), + SND_SOC_DAPM_AIF_IN("MM_DL7", "MultiMedia7 Playback", 0, 0, 0, 0), + SND_SOC_DAPM_AIF_IN("MM_DL8", "MultiMedia8 Playback", 0, 0, 0, 0), + /* Backend AIF */ + /* Stream name equals to backend dai link stream name + */ + SND_SOC_DAPM_AIF_OUT("HDMI", "HDMI Playback", 0, 0, 0 , 0), + /* incall */ + /* Mixer definitions */ + SND_SOC_DAPM_MIXER("HDMI Mixer", SND_SOC_NOPM, 0, 0, + hdmi_mixer_controls, ARRAY_SIZE(hdmi_mixer_controls)), + SND_SOC_DAPM_OUTPUT("BE_OUT"), +}; + +static const struct snd_soc_dapm_route intercon[] = { + {"HDMI Mixer", "MultiMedia1", "MM_DL1"}, + {"HDMI Mixer", "MultiMedia2", "MM_DL2"}, + {"HDMI Mixer", "MultiMedia3", "MM_DL3"}, + {"HDMI Mixer", "MultiMedia4", "MM_DL4"}, + {"HDMI Mixer", "MultiMedia5", "MM_DL5"}, + {"HDMI Mixer", "MultiMedia6", "MM_DL6"}, + {"HDMI Mixer", "MultiMedia7", "MM_DL7"}, + {"HDMI Mixer", "MultiMedia8", "MM_DL8"}, + {"HDMI", NULL, "HDMI Mixer"}, + {"BE_OUT", NULL, "HDMI Playback"}, + {"HDMI Playback", NULL, "MultiMedia1 Playback"}, +}; + +int msm_pcm_routing_hw_params(struct snd_pcm_substream *substream, + struct snd_pcm_hw_params *params) +{ + struct snd_soc_pcm_runtime *rtd = substream->private_data; + unsigned int be_id = rtd->dai_link->be_id; + + if (be_id >= MSM_BACKEND_DAI_MAX) { + pr_err("%s: unexpected be_id %d\n", __func__, 
be_id); + return -EINVAL; + } + + mutex_lock(&routing_lock); + msm_bedais[be_id].sample_rate = params_rate(params); + msm_bedais[be_id].channel = params_channels(params); + mutex_unlock(&routing_lock); + return 0; +} + +static int msm_pcm_routing_close(struct snd_pcm_substream *substream) +{ + struct snd_soc_pcm_runtime *rtd = substream->private_data; + unsigned int be_id = rtd->dai_link->be_id; + int i, session_type; + struct msm_pcm_routing_bdai_data *bedai; + + if (be_id >= MSM_BACKEND_DAI_MAX) { + pr_err("%s: unexpected be_id %d\n", __func__, be_id); + return -EINVAL; + } + + bedai = &msm_bedais[be_id]; + session_type = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK ? + 0 : 1); + + mutex_lock(&routing_lock); + + for_each_set_bit(i, &bedai->fe_sessions, MSM_FRONTEND_DAI_MM_SIZE) { + if (fe_dai_map[i][session_type].strm_id != INVALID_SESSION) { + fe_dai_map[i][session_type].be_srate = + bedai->sample_rate; + adm_close(bedai->port_id); + srs_port_id = -1; + } + } + + bedai->active = 0; + bedai->sample_rate = 0; + bedai->channel = 0; + bedai->perf_mode = false; + mutex_unlock(&routing_lock); + + return 0; +} + +int msm_pcm_routing_prepare(struct snd_pcm_substream *substream) +{ + struct snd_soc_pcm_runtime *rtd = substream->private_data; + unsigned int be_id = rtd->dai_link->be_id; + int i, path_type, session_type; + struct msm_pcm_routing_bdai_data *bedai; + u32 channels; + bool playback, capture; + struct msm_pcm_routing_fdai_data *fdai; + + if (be_id >= MSM_BACKEND_DAI_MAX) { + pr_err("%s: unexpected be_id %d\n", __func__, be_id); + return -EINVAL; + } + + bedai = &msm_bedais[be_id]; + + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + path_type = ADM_PATH_PLAYBACK; + session_type = SESSION_TYPE_RX; + } else { + path_type = ADM_PATH_LIVE_REC; + session_type = SESSION_TYPE_TX; + } + + mutex_lock(&routing_lock); + + if (bedai->active == 1) + goto done; /* Ignore prepare if back-end already active */ + + /* AFE port is not active at this point. 
However, still + * go ahead setting active flag under the notion that + * QDSP6 is able to handle ADM starting before AFE port + * is started. + */ + bedai->active = 1; + playback = substream->stream == SNDRV_PCM_STREAM_PLAYBACK; + capture = substream->stream == SNDRV_PCM_STREAM_CAPTURE; + + for_each_set_bit(i, &bedai->fe_sessions, MSM_FRONTEND_DAI_MM_SIZE) { + fdai = &fe_dai_map[i][session_type]; + if (fdai->strm_id != INVALID_SESSION) { + if (session_type == SESSION_TYPE_TX && fdai->be_srate && + (fdai->be_srate != bedai->sample_rate)) { + pr_debug("%s: flush strm %d due diff BE rates\n", + __func__, + fdai->strm_id); + + if (fdai->event_info.event_func) + fdai->event_info.event_func( + MSM_PCM_RT_EVT_BUF_RECFG, + fdai->event_info.priv_data); + fdai->be_srate = 0; /* might not need it */ + } + + channels = bedai->channel; + if ((playback || capture) + && ((channels == 2) || (channels == 1)) && + bedai->perf_mode) { + adm_multi_ch_copp_open(bedai->port_id, + path_type, + bedai->sample_rate, + channels, + DEFAULT_COPP_TOPOLOGY, bedai->perf_mode); + pr_debug("%s:configure COPP to lowlatency mode", + __func__); + } else if ((playback || capture) + && (channels > 2)) + adm_multi_ch_copp_open(bedai->port_id, + path_type, + bedai->sample_rate, + channels, + DEFAULT_COPP_TOPOLOGY, bedai->perf_mode); + else + adm_open(bedai->port_id, + path_type, + bedai->sample_rate, + channels, + DEFAULT_COPP_TOPOLOGY); + + msm_pcm_routing_build_matrix(i, + fdai->strm_id, path_type); + srs_port_id = bedai->port_id; + srs_send_params(srs_port_id, 1, 0); + } + } + +done: + mutex_unlock(&routing_lock); + + return 0; +} + +static struct snd_pcm_ops msm_routing_pcm_ops = { + .hw_params = msm_pcm_routing_hw_params, + .close = msm_pcm_routing_close, + .prepare = msm_pcm_routing_prepare, +}; + +/* Not used but frame seems to require it */ +static int msm_routing_probe(struct snd_soc_platform *platform) +{ + return 0; +} + +static int msm_asoc_routing_pcm_new(struct snd_soc_pcm_runtime *rtd) +{ 
+ return 0; +} + +static struct snd_soc_platform_driver msm_soc_routing_platform = { + .ops = &msm_routing_pcm_ops, + .probe = msm_routing_probe, + .pcm_new = msm_asoc_routing_pcm_new, + .component_driver = { + .name = "msm-qdsp6-routing-dai", + .dapm_widgets = msm_qdsp6_widgets, + .num_dapm_widgets = ARRAY_SIZE(msm_qdsp6_widgets), + .dapm_routes = intercon, + .num_dapm_routes = ARRAY_SIZE(intercon), + }, +}; + +static int msm_routing_pcm_probe(struct platform_device *pdev) +{ + int ret; + dev_dbg(&pdev->dev, "dev name %s\n", dev_name(&pdev->dev)); + ret = snd_soc_register_platform(&pdev->dev, + &msm_soc_routing_platform); + return ret; +} + +static int msm_routing_pcm_remove(struct platform_device *pdev) +{ + snd_soc_unregister_platform(&pdev->dev); + return 0; +} + +static const struct of_device_id msm_routing_dt_match[] = { + {.compatible = "qcom,msm-pcm-routing"}, + {} +}; + +static struct platform_driver msm_routing_pcm_driver = { + .driver = { + .name = "msm-pcm-routing", + .owner = THIS_MODULE, + .of_match_table = msm_routing_dt_match, + }, + .probe = msm_routing_pcm_probe, + .remove = (msm_routing_pcm_remove), +}; + +int msm_routing_check_backend_enabled(int fedai_id) +{ + int i; + + if (fedai_id >= MSM_FRONTEND_DAI_MM_MAX_ID) { + /* bad ID assigned in machine driver */ + pr_err("%s: bad MM ID\n", __func__); + return 0; + } + for (i = 0; i < MSM_BACKEND_DAI_MAX; i++) { + if (test_bit(fedai_id, &msm_bedais[i].fe_sessions)) + return msm_bedais[i].active; + } + return 0; +} +module_platform_driver(msm_routing_pcm_driver); +MODULE_DESCRIPTION("MSM routing platform driver"); +MODULE_LICENSE("GPL v2"); diff --git a/sound/soc/msm/msm-pcm-routing.h b/sound/soc/msm/msm-pcm-routing.h new file mode 100644 index 000000000000..ad63c120aad8 --- /dev/null +++ b/sound/soc/msm/msm-pcm-routing.h @@ -0,0 +1,145 @@ +/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved. 
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ +#ifndef _MSM_PCM_ROUTING_H +#define _MSM_PCM_ROUTING_H +#include <sound/apr_audio.h> + +#define LPASS_BE_PRI_I2S_RX "PRIMARY_I2S_RX" +#define LPASS_BE_PRI_I2S_TX "PRIMARY_I2S_TX" +#define LPASS_BE_SLIMBUS_0_RX "SLIMBUS_0_RX" +#define LPASS_BE_SLIMBUS_0_TX "SLIMBUS_0_TX" +#define LPASS_BE_HDMI "HDMI" +#define LPASS_BE_INT_BT_SCO_RX "INT_BT_SCO_RX" +#define LPASS_BE_INT_BT_SCO_TX "INT_BT_SCO_TX" +#define LPASS_BE_INT_FM_RX "INT_FM_RX" +#define LPASS_BE_INT_FM_TX "INT_FM_TX" +#define LPASS_BE_AFE_PCM_RX "RT_PROXY_DAI_001_RX" +#define LPASS_BE_AFE_PCM_TX "RT_PROXY_DAI_002_TX" +#define LPASS_BE_AUXPCM_RX "AUX_PCM_RX" +#define LPASS_BE_AUXPCM_TX "AUX_PCM_TX" +#define LPASS_BE_SEC_AUXPCM_RX "SEC_AUX_PCM_RX" +#define LPASS_BE_SEC_AUXPCM_TX "SEC_AUX_PCM_TX" +#define LPASS_BE_VOICE_PLAYBACK_TX "VOICE_PLAYBACK_TX" +#define LPASS_BE_INCALL_RECORD_RX "INCALL_RECORD_TX" +#define LPASS_BE_INCALL_RECORD_TX "INCALL_RECORD_RX" +#define LPASS_BE_SEC_I2S_RX "SECONDARY_I2S_RX" + +#define LPASS_BE_MI2S_RX "(Backend) MI2S_RX" +#define LPASS_BE_MI2S_TX "(Backend) MI2S_TX" +#define LPASS_BE_STUB_RX "(Backend) STUB_RX" +#define LPASS_BE_STUB_TX "(Backend) STUB_TX" +#define LPASS_BE_SLIMBUS_1_RX "(Backend) SLIMBUS_1_RX" +#define LPASS_BE_SLIMBUS_1_TX "(Backend) SLIMBUS_1_TX" +#define LPASS_BE_STUB_1_TX "(Backend) STUB_1_TX" +#define LPASS_BE_SLIMBUS_3_RX "(Backend) SLIMBUS_3_RX" +#define LPASS_BE_SLIMBUS_3_TX "(Backend) SLIMBUS_3_TX" +#define LPASS_BE_SLIMBUS_4_RX "(Backend) SLIMBUS_4_RX" +#define LPASS_BE_SLIMBUS_4_TX 
"(Backend) SLIMBUS_4_TX" + +/* For multimedia front-ends, asm session is allocated dynamically. + * Hence, asm session/multimedia front-end mapping has to be maintained. + * Due to this reason, additional multimedia front-end must be placed before + * non-multimedia front-ends. + */ + +enum { + MSM_FRONTEND_DAI_MULTIMEDIA1 = 0, + MSM_FRONTEND_DAI_MULTIMEDIA2, + MSM_FRONTEND_DAI_MULTIMEDIA3, + MSM_FRONTEND_DAI_MULTIMEDIA4, + MSM_FRONTEND_DAI_MULTIMEDIA5, + MSM_FRONTEND_DAI_MULTIMEDIA6, + MSM_FRONTEND_DAI_MULTIMEDIA7, + MSM_FRONTEND_DAI_MULTIMEDIA8, + MSM_FRONTEND_DAI_CS_VOICE, + MSM_FRONTEND_DAI_VOIP, + MSM_FRONTEND_DAI_AFE_RX, + MSM_FRONTEND_DAI_AFE_TX, + MSM_FRONTEND_DAI_VOICE_STUB, + MSM_FRONTEND_DAI_VOLTE, + MSM_FRONTEND_DAI_VOICE2, + MSM_FRONTEND_DAI_VOLTE_STUB, + MSM_FRONTEND_DAI_VOICE2_STUB, + MSM_FRONTEND_DAI_MAX, +}; + +#define MSM_FRONTEND_DAI_MM_SIZE (MSM_FRONTEND_DAI_MULTIMEDIA8 + 1) +#define MSM_FRONTEND_DAI_MM_MAX_ID MSM_FRONTEND_DAI_MULTIMEDIA8 + +enum { + MSM_BACKEND_DAI_PRI_I2S_RX = 0, + MSM_BACKEND_DAI_PRI_I2S_TX, + MSM_BACKEND_DAI_SLIMBUS_0_RX, + MSM_BACKEND_DAI_SLIMBUS_0_TX, + MSM_BACKEND_DAI_HDMI_RX, + MSM_BACKEND_DAI_INT_BT_SCO_RX, + MSM_BACKEND_DAI_INT_BT_SCO_TX, + MSM_BACKEND_DAI_INT_FM_RX, + MSM_BACKEND_DAI_INT_FM_TX, + MSM_BACKEND_DAI_AFE_PCM_RX, + MSM_BACKEND_DAI_AFE_PCM_TX, + MSM_BACKEND_DAI_AUXPCM_RX, + MSM_BACKEND_DAI_AUXPCM_TX, + MSM_BACKEND_DAI_VOICE_PLAYBACK_TX, + MSM_BACKEND_DAI_INCALL_RECORD_RX, + MSM_BACKEND_DAI_INCALL_RECORD_TX, + MSM_BACKEND_DAI_MI2S_RX, + MSM_BACKEND_DAI_MI2S_TX, + MSM_BACKEND_DAI_SEC_I2S_RX, + MSM_BACKEND_DAI_SLIMBUS_1_RX, + MSM_BACKEND_DAI_SLIMBUS_1_TX, + MSM_BACKEND_DAI_SLIMBUS_4_RX, + MSM_BACKEND_DAI_SLIMBUS_4_TX, + MSM_BACKEND_DAI_SLIMBUS_3_RX, + MSM_BACKEND_DAI_SLIMBUS_3_TX, + MSM_BACKEND_DAI_EXTPROC_RX, + MSM_BACKEND_DAI_EXTPROC_TX, + MSM_BACKEND_DAI_EXTPROC_EC_TX, + MSM_BACKEND_DAI_SEC_AUXPCM_RX, + MSM_BACKEND_DAI_SEC_AUXPCM_TX, + MSM_BACKEND_DAI_MAX, +}; + +enum msm_pcm_routing_event { + 
MSM_PCM_RT_EVT_BUF_RECFG, + MSM_PCM_RT_EVT_DEVSWITCH, + MSM_PCM_RT_EVT_MAX, +}; +/* dai_id: front-end ID, + * dspst_id: DSP audio stream ID + * stream_type: playback or capture + */ +void msm_pcm_routing_reg_phy_stream(int fedai_id, bool perf_mode, + int dspst_id, int stream_type); +void msm_pcm_routing_reg_psthr_stream(int fedai_id, int dspst_id, + int stream_type, int enable); + +struct msm_pcm_routing_evt { + void (*event_func)(enum msm_pcm_routing_event, void *); + void *priv_data; +}; + +void msm_pcm_routing_reg_phy_stream_v2(int fedai_id, bool perf_mode, + int dspst_id, int stream_type, + struct msm_pcm_routing_evt event_info); + +void msm_pcm_routing_dereg_phy_stream(int fedai_id, int stream_type); + +int lpa_set_volume(unsigned volume); + +int msm_routing_check_backend_enabled(int fedai_id); + +int multi_ch_pcm_set_volume(unsigned volume); + +int compressed_set_volume(unsigned volume); + +#endif /*_MSM_PCM_H*/ diff --git a/sound/soc/msm/qdsp6/Makefile b/sound/soc/msm/qdsp6/Makefile new file mode 100644 index 000000000000..669be94d4137 --- /dev/null +++ b/sound/soc/msm/qdsp6/Makefile @@ -0,0 +1,3 @@ +obj-y := q6asm.o q6adm.o q6afe.o +obj-$(CONFIG_SND_SOC_VOICE) += q6voice.o +obj-$(CONFIG_SND_SOC_QDSP6) += core/ diff --git a/sound/soc/msm/qdsp6/core/Makefile b/sound/soc/msm/qdsp6/core/Makefile new file mode 100644 index 000000000000..06089d9bced9 --- /dev/null +++ b/sound/soc/msm/qdsp6/core/Makefile @@ -0,0 +1,3 @@ +obj-y += apr.o apr_v1.o apr_tal.o q6core.o dsp_debug.o +obj-y += audio_acdb.o +obj-y += rtac.o diff --git a/sound/soc/msm/qdsp6/core/apr.c b/sound/soc/msm/qdsp6/core/apr.c new file mode 100644 index 000000000000..a400e72f5a9a --- /dev/null +++ b/sound/soc/msm/qdsp6/core/apr.c @@ -0,0 +1,707 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. 
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/types.h> +#include <linux/uaccess.h> +#include <linux/spinlock.h> +#include <linux/list.h> +#include <linux/sched.h> +#include <linux/wait.h> +#include <linux/errno.h> +#include <linux/fs.h> +#include <linux/delay.h> +#include <linux/debugfs.h> +#include <linux/platform_device.h> +#include <linux/sysfs.h> +#include <linux/device.h> +#include <linux/slab.h> +#include <asm/mach-types.h> +#include <linux/peripheral-loader.h> +#include <linux/msm_smd.h> +#include <sound/qdsp6v2/apr.h> +#include <sound/qdsp6v2/apr_tal.h> +#include <sound/qdsp6v2/dsp_debug.h> +#include <linux/subsystem_notif.h> +#include <linux/subsystem_restart.h> + +static struct apr_q6 q6; +static struct apr_client client[APR_DEST_MAX][APR_CLIENT_MAX]; + +static wait_queue_head_t dsp_wait; +static wait_queue_head_t modem_wait; +/* Subsystem restart: QDSP6 data, functions */ +static struct workqueue_struct *apr_reset_workqueue; +static void apr_reset_deregister(struct work_struct *work); +struct apr_reset_work { + void *handle; + struct work_struct work; +}; + +struct apr_svc_table { + char name[64]; + int idx; + int id; + int client_id; +}; + +static const struct apr_svc_table svc_tbl_audio[] = { + { + .name = "AFE", + .idx = 0, + .id = APR_SVC_AFE, + .client_id = APR_CLIENT_AUDIO, + }, + { + .name = "ASM", + .idx = 1, + .id = APR_SVC_ASM, + .client_id = APR_CLIENT_AUDIO, + }, + { + .name = "ADM", + .idx = 2, + .id = APR_SVC_ADM, + .client_id = APR_CLIENT_AUDIO, + }, 
+ { + .name = "CORE", + .idx = 3, + .id = APR_SVC_ADSP_CORE, + .client_id = APR_CLIENT_AUDIO, + }, + { + .name = "TEST", + .idx = 4, + .id = APR_SVC_TEST_CLIENT, + .client_id = APR_CLIENT_AUDIO, + }, + { + .name = "MVM", + .idx = 5, + .id = APR_SVC_ADSP_MVM, + .client_id = APR_CLIENT_AUDIO, + }, + { + .name = "CVS", + .idx = 6, + .id = APR_SVC_ADSP_CVS, + .client_id = APR_CLIENT_AUDIO, + }, + { + .name = "CVP", + .idx = 7, + .id = APR_SVC_ADSP_CVP, + .client_id = APR_CLIENT_AUDIO, + }, + { + .name = "USM", + .idx = 8, + .id = APR_SVC_USM, + .client_id = APR_CLIENT_AUDIO, + }, +}; + +static struct apr_svc_table svc_tbl_voice[] = { + { + .name = "VSM", + .idx = 0, + .id = APR_SVC_VSM, + .client_id = APR_CLIENT_VOICE, + }, + { + .name = "VPM", + .idx = 1, + .id = APR_SVC_VPM, + .client_id = APR_CLIENT_VOICE, + }, + { + .name = "MVS", + .idx = 2, + .id = APR_SVC_MVS, + .client_id = APR_CLIENT_VOICE, + }, + { + .name = "MVM", + .idx = 3, + .id = APR_SVC_MVM, + .client_id = APR_CLIENT_VOICE, + }, + { + .name = "CVS", + .idx = 4, + .id = APR_SVC_CVS, + .client_id = APR_CLIENT_VOICE, + }, + { + .name = "CVP", + .idx = 5, + .id = APR_SVC_CVP, + .client_id = APR_CLIENT_VOICE, + }, + { + .name = "SRD", + .idx = 6, + .id = APR_SVC_SRD, + .client_id = APR_CLIENT_VOICE, + }, + { + .name = "TEST", + .idx = 7, + .id = APR_SVC_TEST_CLIENT, + .client_id = APR_CLIENT_VOICE, + }, +}; + +enum apr_subsys_state apr_get_modem_state(void) +{ + return atomic_read(&q6.modem_state); +} + +void apr_set_modem_state(enum apr_subsys_state state) +{ + atomic_set(&q6.modem_state, state); +} + +enum apr_subsys_state apr_cmpxchg_modem_state(enum apr_subsys_state prev, + enum apr_subsys_state new) +{ + return atomic_cmpxchg(&q6.modem_state, prev, new); +} + +enum apr_subsys_state apr_get_q6_state(void) +{ + return atomic_read(&q6.q6_state); +} +EXPORT_SYMBOL_GPL(apr_get_q6_state); + +int apr_set_q6_state(enum apr_subsys_state state) +{ + pr_debug("%s: setting adsp state %d\n", __func__, state); + if 
(state < APR_SUBSYS_DOWN || state > APR_SUBSYS_LOADED) + return -EINVAL; + atomic_set(&q6.q6_state, state); + return 0; +} +EXPORT_SYMBOL_GPL(apr_set_q6_state); + +enum apr_subsys_state apr_cmpxchg_q6_state(enum apr_subsys_state prev, + enum apr_subsys_state new) +{ + return atomic_cmpxchg(&q6.q6_state, prev, new); +} + +int apr_wait_for_device_up(int dest_id) +{ + int rc = -1; + if (dest_id == APR_DEST_MODEM) + rc = wait_event_interruptible_timeout(modem_wait, + (apr_get_modem_state() == APR_SUBSYS_UP), + (1 * HZ)); + else if (dest_id == APR_DEST_QDSP6) + rc = wait_event_interruptible_timeout(dsp_wait, + (apr_get_q6_state() == APR_SUBSYS_UP), + (1 * HZ)); + else + pr_err("%s: unknown dest_id %d\n", __func__, dest_id); + /* returns left time */ + return rc; +} + +int apr_load_adsp_image(void) +{ + int rc = 0; + mutex_lock(&q6.lock); + if (apr_get_q6_state() == APR_SUBSYS_UP) { + q6.pil = pil_get("q6"); + if (IS_ERR(q6.pil)) { + rc = PTR_ERR(q6.pil); + pr_err("APR: Unable to load q6 image, error:%d\n", rc); + } else { + apr_set_q6_state(APR_SUBSYS_LOADED); + pr_debug("APR: Image is loaded, started\n"); + } + } else + pr_debug("APR: cannot load state %d\n", apr_get_q6_state()); + mutex_unlock(&q6.lock); + return rc; +} + +struct apr_client *apr_get_client(int dest_id, int client_id) +{ + return &client[dest_id][client_id]; +} + +int apr_send_pkt(void *handle, uint32_t *buf) +{ + struct apr_svc *svc = handle; + struct apr_client *clnt; + struct apr_hdr *hdr; + uint16_t dest_id; + uint16_t client_id; + uint16_t w_len; + unsigned long flags; + + if (!handle || !buf) { + pr_err("APR: Wrong parameters\n"); + return -EINVAL; + } + if (svc->need_reset) { + pr_err("apr: send_pkt service need reset\n"); + return -ENETRESET; + } + + if ((svc->dest_id == APR_DEST_QDSP6) && + (apr_get_q6_state() != APR_SUBSYS_LOADED)) { + pr_err("%s: Still dsp is not Up\n", __func__); + return -ENETRESET; + } else if ((svc->dest_id == APR_DEST_MODEM) && + (apr_get_modem_state() == 
APR_SUBSYS_DOWN)) { + pr_err("apr: Still Modem is not Up\n"); + return -ENETRESET; + } + + + spin_lock_irqsave(&svc->w_lock, flags); + dest_id = svc->dest_id; + client_id = svc->client_id; + clnt = &client[dest_id][client_id]; + + if (!client[dest_id][client_id].handle) { + pr_err("APR: Still service is not yet opened\n"); + spin_unlock_irqrestore(&svc->w_lock, flags); + return -EINVAL; + } + hdr = (struct apr_hdr *)buf; + + hdr->src_domain = APR_DOMAIN_APPS; + hdr->src_svc = svc->id; + if (dest_id == APR_DEST_MODEM) + hdr->dest_domain = APR_DOMAIN_MODEM; + else if (dest_id == APR_DEST_QDSP6) + hdr->dest_domain = APR_DOMAIN_ADSP; + + hdr->dest_svc = svc->id; + + w_len = apr_tal_write(clnt->handle, buf, hdr->pkt_size); + if (w_len != hdr->pkt_size) + pr_err("Unable to write APR pkt successfully: %d\n", w_len); + spin_unlock_irqrestore(&svc->w_lock, flags); + + return w_len; +} + +void apr_cb_func(void *buf, int len, void *priv) +{ + struct apr_client_data data; + struct apr_client *apr_client; + struct apr_svc *c_svc; + struct apr_hdr *hdr; + uint16_t hdr_size; + uint16_t msg_type; + uint16_t ver; + uint16_t src; + uint16_t svc; + uint16_t clnt; + int i; + int temp_port = 0; + uint32_t *ptr; + + pr_debug("APR2: len = %d\n", len); + ptr = buf; + pr_debug("\n*****************\n"); + for (i = 0; i < len/4; i++) + pr_debug("%x ", ptr[i]); + pr_debug("\n"); + pr_debug("\n*****************\n"); + + if (!buf || len <= APR_HDR_SIZE) { + pr_err("APR: Improper apr pkt received:%p %d\n", buf, len); + return; + } + hdr = buf; + + ver = hdr->hdr_field; + ver = (ver & 0x000F); + if (ver > APR_PKT_VER + 1) { + pr_err("APR: Wrong version: %d\n", ver); + return; + } + + hdr_size = hdr->hdr_field; + hdr_size = ((hdr_size & 0x00F0) >> 0x4) * 4; + if (hdr_size < APR_HDR_SIZE) { + pr_err("APR: Wrong hdr size:%d\n", hdr_size); + return; + } + + if (hdr->pkt_size < APR_HDR_SIZE) { + pr_err("APR: Wrong packet size\n"); + return; + } + msg_type = hdr->hdr_field; + msg_type = (msg_type >> 
0x08) & 0x0003; + if (msg_type >= APR_MSG_TYPE_MAX && msg_type != APR_BASIC_RSP_RESULT) { + pr_err("APR: Wrong message type: %d\n", msg_type); + return; + } + + if (hdr->src_domain >= APR_DOMAIN_MAX || + hdr->dest_domain >= APR_DOMAIN_MAX || + hdr->src_svc >= APR_SVC_MAX || + hdr->dest_svc >= APR_SVC_MAX) { + pr_err("APR: Wrong APR header\n"); + return; + } + + svc = hdr->dest_svc; + if (hdr->src_domain == APR_DOMAIN_MODEM) { + src = APR_DEST_MODEM; + if (svc == APR_SVC_MVS || svc == APR_SVC_MVM || + svc == APR_SVC_CVS || svc == APR_SVC_CVP || + svc == APR_SVC_TEST_CLIENT) + clnt = APR_CLIENT_VOICE; + else { + pr_err("APR: Wrong svc :%d\n", svc); + return; + } + } else if (hdr->src_domain == APR_DOMAIN_ADSP) { + src = APR_DEST_QDSP6; + if (svc == APR_SVC_AFE || svc == APR_SVC_ASM || + svc == APR_SVC_VSM || svc == APR_SVC_VPM || + svc == APR_SVC_ADM || svc == APR_SVC_ADSP_CORE || + svc == APR_SVC_USM || + svc == APR_SVC_TEST_CLIENT || svc == APR_SVC_ADSP_MVM || + svc == APR_SVC_ADSP_CVS || svc == APR_SVC_ADSP_CVP) + clnt = APR_CLIENT_AUDIO; + else { + pr_err("APR: Wrong svc :%d\n", svc); + return; + } + } else { + pr_err("APR: Pkt from wrong source: %d\n", hdr->src_domain); + return; + } + + pr_debug("src =%d clnt = %d\n", src, clnt); + apr_client = &client[src][clnt]; + for (i = 0; i < APR_SVC_MAX; i++) + if (apr_client->svc[i].id == svc) { + pr_debug("%d\n", apr_client->svc[i].id); + c_svc = &apr_client->svc[i]; + break; + } + + if (i == APR_SVC_MAX) { + pr_err("APR: service is not registered\n"); + return; + } + pr_debug("svc_idx = %d\n", i); + pr_debug("%x %x %x %p %p\n", c_svc->id, c_svc->dest_id, + c_svc->client_id, c_svc->fn, c_svc->priv); + data.payload_size = hdr->pkt_size - hdr_size; + data.opcode = hdr->opcode; + data.src = src; + data.src_port = hdr->src_port; + data.dest_port = hdr->dest_port; + data.token = hdr->token; + data.msg_type = msg_type; + if (data.payload_size > 0) + data.payload = (char *)hdr + hdr_size; + + temp_port = ((data.src_port >> 8) 
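`apr_cb_func()` above decodes three values out of the single 16-bit `hdr_field`. The bit layout below is inferred purely from the masks and shifts in the diff (`& 0x000F`, `(& 0x00F0) >> 4`, `(>> 8) & 0x0003`), not from an authoritative APR spec:

```c
#include <stdint.h>

/* hdr_field layout as used by apr_cb_func() (inferred, not normative):
 *   bits [3:0]  packet version
 *   bits [7:4]  header size, in 32-bit words
 *   bits [9:8]  message type
 */
static uint16_t apr_hdr_version(uint16_t hdr_field)
{
	return hdr_field & 0x000F;
}

static uint16_t apr_hdr_size_bytes(uint16_t hdr_field)
{
	/* Stored in words; the driver multiplies by 4 to get bytes. */
	return ((hdr_field & 0x00F0) >> 4) * 4;
}

static uint16_t apr_hdr_msg_type(uint16_t hdr_field)
{
	return (hdr_field >> 8) & 0x0003;
}
```

With `hdr_field = 0x0251`, for example, the version is 1, the header size is 5 words (20 bytes, exactly `APR_HDR_SIZE` for a minimal header), and the message type is 2.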
* 8) + (data.src_port & 0xFF); + pr_debug("port = %d t_port = %d\n", data.src_port, temp_port); + if (c_svc->port_cnt && c_svc->port_fn[temp_port]) + c_svc->port_fn[temp_port](&data, c_svc->port_priv[temp_port]); + else if (c_svc->fn) + c_svc->fn(&data, c_svc->priv); + else + pr_err("APR: Rxed a packet for NULL callback\n"); +} + +int apr_get_svc(const char *svc_name, int dest_id, int *client_id, + int *svc_idx, int *svc_id) +{ + int i; + int size; + struct apr_svc_table *tbl; + int ret = 0; + + if (dest_id == APR_DEST_QDSP6) { + tbl = (struct apr_svc_table *)&svc_tbl_audio; + size = ARRAY_SIZE(svc_tbl_audio); + } else { + tbl = (struct apr_svc_table *)&svc_tbl_voice; + size = ARRAY_SIZE(svc_tbl_voice); + } + + for (i = 0; i < size; i++) { + if (!strncmp(svc_name, tbl[i].name, strlen(tbl[i].name))) { + *client_id = tbl[i].client_id; + *svc_idx = tbl[i].idx; + *svc_id = tbl[i].id; + break; + } + } + + pr_debug("%s: svc_name = %s c_id = %d dest_id = %d\n", + __func__, svc_name, *client_id, dest_id); + if (i == size) { + pr_err("%s: APR: Wrong svc name %s\n", __func__, svc_name); + ret = -EINVAL; + } + + return ret; +} + +static void apr_reset_deregister(struct work_struct *work) +{ + struct apr_svc *handle = NULL; + struct apr_reset_work *apr_reset = + container_of(work, struct apr_reset_work, work); + + handle = apr_reset->handle; + pr_debug("%s:handle[%p]\n", __func__, handle); + apr_deregister(handle); + kfree(apr_reset); +} + +int apr_deregister(void *handle) +{ + struct apr_svc *svc = handle; + struct apr_client *clnt; + uint16_t dest_id; + uint16_t client_id; + + if (!handle) + return -EINVAL; + + mutex_lock(&svc->m_lock); + dest_id = svc->dest_id; + client_id = svc->client_id; + clnt = &client[dest_id][client_id]; + + if (svc->port_cnt > 0 || svc->svc_cnt > 0) { + if (svc->port_cnt) + svc->port_cnt--; + else if (svc->svc_cnt) + svc->svc_cnt--; + if (!svc->port_cnt && !svc->svc_cnt) { + client[dest_id][client_id].svc_cnt--; + svc->need_reset = 0x0; + } + } else 
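Both `apr_cb_func()` and `apr_register()` (in apr_v1.c, later in this diff) flatten a two-part port number into a single index into the per-service `port_fn[]`/`port_priv[]` tables with the same expression. A sketch of that mapping, with `APR_MAX_PORTS` as an assumed illustrative bound (the real value comes from the apr headers):

```c
#include <stdint.h>

#define APR_MAX_PORTS 32 /* assumed bound for illustration */

/* Flatten src_port into a callback-table slot, as the driver does:
 * the high byte selects a group of 8 slots, the low byte an offset. */
static int apr_port_index(uint16_t src_port)
{
	return ((src_port >> 8) * 8) + (src_port & 0xFF);
}
```

Note the driver bounds-checks the result (`temp_port >= APR_MAX_PORTS`) only on the register path, which is why a low-byte value above 7 can alias a slot in the next group; the formula is reproduced as-is here.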
if (client[dest_id][client_id].svc_cnt > 0) { + client[dest_id][client_id].svc_cnt--; + if (!client[dest_id][client_id].svc_cnt) { + svc->need_reset = 0x0; + pr_debug("%s: service is reset %p\n", __func__, svc); + } + } + + if (!svc->port_cnt && !svc->svc_cnt) { + svc->priv = NULL; + svc->id = 0; + svc->fn = NULL; + svc->dest_id = 0; + svc->client_id = 0; + svc->need_reset = 0x0; + } + if (client[dest_id][client_id].handle && + !client[dest_id][client_id].svc_cnt) { + apr_tal_close(client[dest_id][client_id].handle); + client[dest_id][client_id].handle = NULL; + } + mutex_unlock(&svc->m_lock); + + return 0; +} + +void apr_reset(void *handle) +{ + struct apr_reset_work *apr_reset_worker = NULL; + + if (!handle) + return; + pr_debug("%s: handle[%p]\n", __func__, handle); + + if (apr_reset_workqueue == NULL) { + pr_err("%s: apr_reset_workqueue is NULL\n", __func__); + return; + } + + apr_reset_worker = kzalloc(sizeof(struct apr_reset_work), + GFP_ATOMIC); + + if (apr_reset_worker == NULL) { + pr_err("%s: mem failure\n", __func__); + return; + } + + apr_reset_worker->handle = handle; + INIT_WORK(&apr_reset_worker->work, apr_reset_deregister); + queue_work(apr_reset_workqueue, &apr_reset_worker->work); +} + +static int adsp_state(int state) +{ + pr_info("dsp state = %d\n", state); + return 0; +} + +/* Dispatch the Reset events to Modem and audio clients */ +void dispatch_event(unsigned long code, unsigned short proc) +{ + struct apr_client *apr_client; + struct apr_client_data data; + struct apr_svc *svc; + uint16_t clnt; + int i, j; + + data.opcode = RESET_EVENTS; + data.reset_event = code; + data.reset_proc = proc; + + clnt = APR_CLIENT_AUDIO; + apr_client = &client[proc][clnt]; + for (i = 0; i < APR_SVC_MAX; i++) { + mutex_lock(&apr_client->svc[i].m_lock); + if (apr_client->svc[i].fn) { + apr_client->svc[i].need_reset = 0x1; + apr_client->svc[i].fn(&data, apr_client->svc[i].priv); + } + if (apr_client->svc[i].port_cnt) { + svc = &(apr_client->svc[i]); + 
svc->need_reset = 0x1; + for (j = 0; j < APR_MAX_PORTS; j++) + if (svc->port_fn[j]) + svc->port_fn[j](&data, + svc->port_priv[j]); + } + mutex_unlock(&apr_client->svc[i].m_lock); + } + + clnt = APR_CLIENT_VOICE; + apr_client = &client[proc][clnt]; + for (i = 0; i < APR_SVC_MAX; i++) { + mutex_lock(&apr_client->svc[i].m_lock); + if (apr_client->svc[i].fn) { + apr_client->svc[i].need_reset = 0x1; + apr_client->svc[i].fn(&data, apr_client->svc[i].priv); + } + if (apr_client->svc[i].port_cnt) { + svc = &(apr_client->svc[i]); + svc->need_reset = 0x1; + for (j = 0; j < APR_MAX_PORTS; j++) + if (svc->port_fn[j]) + svc->port_fn[j](&data, + svc->port_priv[j]); + } + mutex_unlock(&apr_client->svc[i].m_lock); + } +} + +#if 0 +static int modem_notifier_cb(struct notifier_block *this, unsigned long code, + void *_cmd) +{ + switch (code) { + case SUBSYS_BEFORE_SHUTDOWN: + pr_debug("M-Notify: Shutdown started\n"); + apr_set_modem_state(APR_SUBSYS_DOWN); + dispatch_event(code, APR_DEST_MODEM); + break; + case SUBSYS_AFTER_SHUTDOWN: + pr_debug("M-Notify: Shutdown Completed\n"); + break; + case SUBSYS_BEFORE_POWERUP: + pr_debug("M-notify: Bootup started\n"); + break; + case SUBSYS_AFTER_POWERUP: + if (apr_cmpxchg_modem_state(APR_SUBSYS_DOWN, APR_SUBSYS_UP) == + APR_SUBSYS_DOWN) + wake_up(&modem_wait); + pr_debug("M-Notify: Bootup Completed\n"); + break; + default: + pr_err("M-Notify: General: %lu\n", code); + break; + } + return NOTIFY_DONE; +} +static struct notifier_block mnb = { + .notifier_call = modem_notifier_cb, +}; +static int lpass_notifier_cb(struct notifier_block *this, unsigned long code, + void *_cmd) +{ + switch (code) { + case SUBSYS_BEFORE_SHUTDOWN: + pr_debug("L-Notify: Shutdown started\n"); + apr_set_q6_state(APR_SUBSYS_DOWN); + dispatch_event(code, APR_DEST_QDSP6); + break; + case SUBSYS_AFTER_SHUTDOWN: + pr_debug("L-Notify: Shutdown Completed\n"); + break; + case SUBSYS_BEFORE_POWERUP: + pr_debug("L-notify: Bootup started\n"); + break; + case 
SUBSYS_AFTER_POWERUP: + if (apr_cmpxchg_q6_state(APR_SUBSYS_DOWN, APR_SUBSYS_UP) == + APR_SUBSYS_DOWN) + wake_up(&dsp_wait); + pr_debug("L-Notify: Bootup Completed\n"); + break; + default: + pr_err("L-Notify: Generel: %lu\n", code); + break; + } + return NOTIFY_DONE; +} +static struct notifier_block lnb = { + .notifier_call = lpass_notifier_cb, +}; +#endif + +static int __init apr_init(void) +{ + int i, j, k; + + for (i = 0; i < APR_DEST_MAX; i++) + for (j = 0; j < APR_CLIENT_MAX; j++) { + mutex_init(&client[i][j].m_lock); + for (k = 0; k < APR_SVC_MAX; k++) { + mutex_init(&client[i][j].svc[k].m_lock); + spin_lock_init(&client[i][j].svc[k].w_lock); + } + } + apr_set_subsys_state(); + mutex_init(&q6.lock); + dsp_debug_register(adsp_state); + apr_reset_workqueue = create_singlethread_workqueue("apr_driver"); + if (!apr_reset_workqueue) + return -ENOMEM; + return 0; +} +device_initcall(apr_init); + +static int __init apr_late_init(void) +{ + int ret = 0; + init_waitqueue_head(&dsp_wait); + init_waitqueue_head(&modem_wait); + //subsys_notif_register_notifier("modem", &mnb); +// subsys_notif_register_notifier("lpass", &lnb); + return ret; +} +late_initcall(apr_late_init); diff --git a/sound/soc/msm/qdsp6/core/apr_tal.c b/sound/soc/msm/qdsp6/core/apr_tal.c new file mode 100644 index 000000000000..f35bab7aacd5 --- /dev/null +++ b/sound/soc/msm/qdsp6/core/apr_tal.c @@ -0,0 +1,281 @@ +/* Copyright (c) 2010-2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/types.h> +#include <linux/uaccess.h> +#include <linux/spinlock.h> +#include <linux/mutex.h> +#include <linux/list.h> +#include <linux/sched.h> +#include <linux/wait.h> +#include <linux/errno.h> +#include <linux/fs.h> +#include <linux/debugfs.h> +#include <linux/platform_device.h> +#include <linux/delay.h> +#include <linux/clk.h> +#include <linux/msm_smd.h> +#include <sound/qdsp6v2/apr_tal.h> + +static char *svc_names[APR_DEST_MAX][APR_CLIENT_MAX] = { + { + "apr_audio_svc", + "apr_voice_svc", + }, + { + "apr_audio_svc", + "apr_voice_svc", + }, +}; + +struct apr_svc_ch_dev apr_svc_ch[APR_DL_MAX][APR_DEST_MAX][APR_CLIENT_MAX]; + +int __apr_tal_write(struct apr_svc_ch_dev *apr_ch, void *data, int len) +{ + int w_len; + unsigned long flags; + + + spin_lock_irqsave(&apr_ch->w_lock, flags); + if (smd_write_avail(apr_ch->ch) < len) { + spin_unlock_irqrestore(&apr_ch->w_lock, flags); + return -EAGAIN; + } + + w_len = smd_write(apr_ch->ch, data, len); + spin_unlock_irqrestore(&apr_ch->w_lock, flags); + pr_debug("apr_tal:w_len = %d\n", w_len); + + if (w_len != len) { + pr_err("apr_tal: Error in write\n"); + return -ENETRESET; + } + return w_len; +} + +int apr_tal_write(struct apr_svc_ch_dev *apr_ch, void *data, int len) +{ + int rc = 0, retries = 0; + + if (!apr_ch->ch) + return -EINVAL; + + do { + if (rc == -EAGAIN) + udelay(50); + + rc = __apr_tal_write(apr_ch, data, len); + } while (rc == -EAGAIN && retries++ < 300); + + if (rc == -EAGAIN) + pr_err("apr_tal: TIMEOUT for write\n"); + + return rc; +} + +static void apr_tal_notify(void *priv, unsigned event) +{ + struct apr_svc_ch_dev *apr_ch = priv; + int len, r_len, sz; + int pkt_cnt = 0; + unsigned long flags; + + pr_debug("event = %d\n", event); + switch (event) { + case SMD_EVENT_DATA: + pkt_cnt = 0; + spin_lock_irqsave(&apr_ch->lock, flags); +check_pending: + len = smd_read_avail(apr_ch->ch); + if (len < 0) { + pr_err("apr_tal: Invalid Read 
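`apr_tal_write()` below wraps the non-blocking `__apr_tal_write()` in a bounded retry loop: `-EAGAIN` (SMD FIFO full) is retried up to 300 times, with a 50 µs delay between attempts in the driver. A user-space sketch of that pattern, using a function pointer and a mock channel in place of SMD (the delay is omitted):

```c
#include <errno.h>
#include <stddef.h>

/* Bounded-retry write, after apr_tal_write(): keep retrying while the
 * underlying write reports -EAGAIN, up to 300 retries. If it is still
 * -EAGAIN afterwards, the caller treats it as a timeout. */
static int retry_write(int (*write_fn)(void *ctx, const void *data, int len),
		       void *ctx, const void *data, int len)
{
	int rc = 0, retries = 0;

	do {
		rc = write_fn(ctx, data, len);
	} while (rc == -EAGAIN && retries++ < 300);

	return rc;
}

/* Mock channel: reports "FIFO full" for the first calls_left attempts. */
static int calls_left;

static int mock_write(void *ctx, const void *data, int len)
{
	(void)ctx;
	(void)data;
	if (calls_left-- > 0)
		return -EAGAIN;
	return len; /* whole packet written */
}
```

The bounded loop keeps a transient full FIFO from failing a packet outright, while still guaranteeing the caller returns in roughly 300 × 50 µs ≈ 15 ms worst case instead of blocking forever.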
Event :%d\n", len); + spin_unlock_irqrestore(&apr_ch->lock, flags); + return; + } + sz = smd_cur_packet_size(apr_ch->ch); + if (sz < 0) { + pr_debug("pkt size is zero\n"); + spin_unlock_irqrestore(&apr_ch->lock, flags); + return; + } + if (!len && !sz && !pkt_cnt) + goto check_write_avail; + if (!len) { + pr_debug("len = %d pkt_cnt = %d\n", len, pkt_cnt); + spin_unlock_irqrestore(&apr_ch->lock, flags); + return; + } + r_len = smd_read_from_cb(apr_ch->ch, apr_ch->data, len); + if (len != r_len) { + pr_err("apr_tal: Invalid Read\n"); + spin_unlock_irqrestore(&apr_ch->lock, flags); + return; + } + pkt_cnt++; + pr_debug("%d %d %d\n", len, sz, pkt_cnt); + if (apr_ch->func) + apr_ch->func(apr_ch->data, r_len, apr_ch->priv); + goto check_pending; +check_write_avail: + if (smd_write_avail(apr_ch->ch)) + wake_up(&apr_ch->wait); + spin_unlock_irqrestore(&apr_ch->lock, flags); + break; + case SMD_EVENT_OPEN: + pr_info("apr_tal: SMD_EVENT_OPEN\n"); + apr_ch->smd_state = 1; + wake_up(&apr_ch->wait); + break; + case SMD_EVENT_CLOSE: + pr_info("apr_tal: SMD_EVENT_CLOSE\n"); + break; + } +} + +struct apr_svc_ch_dev *apr_tal_open(uint32_t svc, uint32_t dest, + uint32_t dl, apr_svc_cb_fn func, void *priv) +{ + int rc; + + pr_err("apr_tal:open\n"); + if ((svc >= APR_CLIENT_MAX) || (dest >= APR_DEST_MAX) || + (dl >= APR_DL_MAX)) { + pr_err("apr_tal: Invalid params\n"); + return NULL; + } + + if (apr_svc_ch[dl][dest][svc].ch) { + pr_err("apr_tal: This channel alreday openend\n"); + return NULL; + } + + mutex_lock(&apr_svc_ch[dl][dest][svc].m_lock); + if (!apr_svc_ch[dl][dest][svc].dest_state) { + rc = wait_event_timeout(apr_svc_ch[dl][dest][svc].dest, + apr_svc_ch[dl][dest][svc].dest_state, + msecs_to_jiffies(APR_OPEN_TIMEOUT_MS)); + if (rc == 0) { + pr_err("apr_tal:open timeout\n"); + mutex_unlock(&apr_svc_ch[dl][dest][svc].m_lock); + return NULL; + } + pr_debug("apr_tal:Wakeup done\n"); + apr_svc_ch[dl][dest][svc].dest_state = 0; + } + rc = 
smd_named_open_on_edge(svc_names[dest][svc], dest, + &apr_svc_ch[dl][dest][svc].ch, + &apr_svc_ch[dl][dest][svc], + apr_tal_notify); + if (rc < 0) { + pr_err("apr_tal: smd_open failed %s\n", + svc_names[dest][svc]); + mutex_unlock(&apr_svc_ch[dl][dest][svc].m_lock); + return NULL; + } + rc = wait_event_timeout(apr_svc_ch[dl][dest][svc].wait, + (apr_svc_ch[dl][dest][svc].smd_state == 1), 5 * HZ); + if (rc == 0) { + pr_err("apr_tal:TIMEOUT for OPEN event\n"); + mutex_unlock(&apr_svc_ch[dl][dest][svc].m_lock); + apr_tal_close(&apr_svc_ch[dl][dest][svc]); + return NULL; + } + if (!apr_svc_ch[dl][dest][svc].dest_state) { + apr_svc_ch[dl][dest][svc].dest_state = 1; + pr_debug("apr_tal:Waiting for apr svc init\n"); + msleep(200); + pr_debug("apr_tal:apr svc init done\n"); + } + apr_svc_ch[dl][dest][svc].smd_state = 0; + + apr_svc_ch[dl][dest][svc].func = func; + apr_svc_ch[dl][dest][svc].priv = priv; + mutex_unlock(&apr_svc_ch[dl][dest][svc].m_lock); + + return &apr_svc_ch[dl][dest][svc]; +} + +int apr_tal_close(struct apr_svc_ch_dev *apr_ch) +{ + int r; + + if (!apr_ch->ch) + return -EINVAL; + + mutex_lock(&apr_ch->m_lock); + r = smd_close(apr_ch->ch); + apr_ch->ch = NULL; + apr_ch->func = NULL; + apr_ch->priv = NULL; + mutex_unlock(&apr_ch->m_lock); + return r; +} + +static int apr_smd_probe(struct platform_device *pdev) +{ + int dest; + int clnt; + + if (pdev->id == APR_DEST_MODEM) { + pr_info("apr_tal:Modem Is Up\n"); + dest = APR_DEST_MODEM; + clnt = APR_CLIENT_VOICE; + apr_svc_ch[APR_DL_SMD][dest][clnt].dest_state = 1; + wake_up(&apr_svc_ch[APR_DL_SMD][dest][clnt].dest); + } else if (pdev->id == APR_DEST_QDSP6) { + pr_info("apr_tal:Q6 Is Up\n"); + dest = APR_DEST_QDSP6; + clnt = APR_CLIENT_AUDIO; + apr_svc_ch[APR_DL_SMD][dest][clnt].dest_state = 1; + wake_up(&apr_svc_ch[APR_DL_SMD][dest][clnt].dest); + } else + pr_err("apr_tal:Invalid Dest Id: %d\n", pdev->id); + + return 0; +} + +static struct platform_driver apr_q6_driver = { + .probe = apr_smd_probe, + .driver = 
{ + .name = "apr_audio_svc", + .owner = THIS_MODULE, + }, +}; + +static struct platform_driver apr_modem_driver = { + .probe = apr_smd_probe, + .driver = { + .name = "apr_voice_svc", + .owner = THIS_MODULE, + }, +}; + +static int __init apr_tal_init(void) +{ + int i, j, k; + + for (i = 0; i < APR_DL_MAX; i++) + for (j = 0; j < APR_DEST_MAX; j++) + for (k = 0; k < APR_CLIENT_MAX; k++) { + init_waitqueue_head(&apr_svc_ch[i][j][k].wait); + init_waitqueue_head(&apr_svc_ch[i][j][k].dest); + spin_lock_init(&apr_svc_ch[i][j][k].lock); + spin_lock_init(&apr_svc_ch[i][j][k].w_lock); + mutex_init(&apr_svc_ch[i][j][k].m_lock); + } + platform_driver_register(&apr_q6_driver); + platform_driver_register(&apr_modem_driver); + return 0; +} +device_initcall(apr_tal_init); diff --git a/sound/soc/msm/qdsp6/core/apr_v1.c b/sound/soc/msm/qdsp6/core/apr_v1.c new file mode 100644 index 000000000000..1dd859f5ad08 --- /dev/null +++ b/sound/soc/msm/qdsp6/core/apr_v1.c @@ -0,0 +1,133 @@ +/* + * Copyright (c) 2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/types.h> +#include <linux/uaccess.h> +#include <linux/err.h> +#include <sound/qdsp6v2/apr.h> +#include <sound/qdsp6v2/apr_tal.h> +#include <sound/qdsp6v2/dsp_debug.h> +#include <linux/peripheral-loader.h> + +struct apr_svc *apr_register(char *dest, char *svc_name, apr_fn svc_fn, + uint32_t src_port, void *priv) +{ + struct apr_client *client; + int client_id = 0; + int svc_idx = 0; + int svc_id = 0; + int dest_id = 0; + int temp_port = 0; + struct apr_svc *svc = NULL; + int rc = 0; + + if (!dest || !svc_name || !svc_fn) + return NULL; + + if (!strncmp(dest, "ADSP", 4)) + dest_id = APR_DEST_QDSP6; + else if (!strncmp(dest, "MODEM", 5)) { + dest_id = APR_DEST_MODEM; + } else { + pr_err("APR: wrong destination\n"); + goto done; + } + + if (dest_id == APR_DEST_QDSP6 && + apr_get_q6_state() == APR_SUBSYS_DOWN) { + pr_info("%s: Wait for Lpass to bootup\n", __func__); + rc = apr_wait_for_device_up(dest_id); + if (rc == 0) { + pr_err("%s: DSP is not Up\n", __func__); + return NULL; + } + pr_info("%s: Lpass Up\n", __func__); + } else if (dest_id == APR_DEST_MODEM && + (apr_get_modem_state() == APR_SUBSYS_DOWN)) { + pr_info("%s: Wait for modem to bootup\n", __func__); + rc = apr_wait_for_device_up(dest_id); + if (rc == 0) { + pr_err("%s: Modem is not Up\n", __func__); + return NULL; + } + pr_info("%s: modem Up\n", __func__); + } + + if (apr_get_svc(svc_name, dest_id, &client_id, &svc_idx, &svc_id)) { + pr_err("%s: apr_get_svc failed\n", __func__); + goto done; + } + + /* APRv1 loads ADSP image automatically */ + apr_load_adsp_image(); + + client = apr_get_client(dest_id, client_id); + mutex_lock(&client->m_lock); + if (!client->handle) { + client->handle = apr_tal_open(client_id, dest_id, APR_DL_SMD, + apr_cb_func, NULL); + if (!client->handle) { + svc = NULL; + pr_err("APR: Unable to open handle\n"); + mutex_unlock(&client->m_lock); + goto done; + } + } + mutex_unlock(&client->m_lock); + svc = 
&client->svc[svc_idx]; + mutex_lock(&svc->m_lock); + client->id = client_id; + if (svc->need_reset) { + mutex_unlock(&svc->m_lock); + pr_err("APR: Service needs reset\n"); + goto done; + } + svc->priv = priv; + svc->id = svc_id; + svc->dest_id = dest_id; + svc->client_id = client_id; + if (src_port != 0xFFFFFFFF) { + temp_port = ((src_port >> 8) * 8) + (src_port & 0xFF); + pr_debug("port = %d t_port = %d\n", src_port, temp_port); + if (temp_port >= APR_MAX_PORTS || temp_port < 0) { + pr_err("APR: temp_port out of bounds\n"); + mutex_unlock(&svc->m_lock); + return NULL; + } + if (!svc->port_cnt && !svc->svc_cnt) + client->svc_cnt++; + svc->port_cnt++; + svc->port_fn[temp_port] = svc_fn; + svc->port_priv[temp_port] = priv; + } else { + if (!svc->fn) { + if (!svc->port_cnt && !svc->svc_cnt) + client->svc_cnt++; + svc->fn = svc_fn; + if (svc->port_cnt) + svc->svc_cnt++; + } + } + + mutex_unlock(&svc->m_lock); +done: + return svc; +} + +void apr_set_subsys_state(void) +{ + apr_set_q6_state(APR_SUBSYS_UP); + apr_set_modem_state(APR_SUBSYS_UP); +} diff --git a/sound/soc/msm/qdsp6/core/audio_acdb.c b/sound/soc/msm/qdsp6/core/audio_acdb.c new file mode 100644 index 000000000000..941e1b85addc --- /dev/null +++ b/sound/soc/msm/qdsp6/core/audio_acdb.c @@ -0,0 +1,925 @@ +/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ +#include <linux/fs.h> +#include <linux/module.h> +#include <linux/miscdevice.h> +#include <linux/mutex.h> +#include <linux/uaccess.h> +#include <linux/mm.h> +#include <sound/qdsp6v2/audio_acdb.h> + + +#define MAX_NETWORKS 15 + +struct sidetone_atomic_cal { + atomic_t enable; + atomic_t gain; +}; + + +struct acdb_data { + struct mutex acdb_mutex; + + /* ANC Cal */ + struct acdb_atomic_cal_block anc_cal; + + /* AudProc Cal */ + atomic_t asm_topology; + atomic_t adm_topology[MAX_AUDPROC_TYPES]; + struct acdb_atomic_cal_block audproc_cal[MAX_AUDPROC_TYPES]; + struct acdb_atomic_cal_block audstrm_cal[MAX_AUDPROC_TYPES]; + struct acdb_atomic_cal_block audvol_cal[MAX_AUDPROC_TYPES]; + + /* VocProc Cal */ + atomic_t voice_rx_topology; + atomic_t voice_tx_topology; + struct acdb_atomic_cal_block vocproc_cal[MAX_NETWORKS]; + struct acdb_atomic_cal_block vocstrm_cal[MAX_NETWORKS]; + struct acdb_atomic_cal_block vocvol_cal[MAX_NETWORKS]; + /* size of cal block tables above*/ + atomic_t vocproc_cal_size; + atomic_t vocstrm_cal_size; + atomic_t vocvol_cal_size; + /* Total size of cal data for all networks */ + atomic_t vocproc_total_cal_size; + atomic_t vocstrm_total_cal_size; + atomic_t vocvol_total_cal_size; + + /* AFE cal */ + struct acdb_atomic_cal_block afe_cal[MAX_AUDPROC_TYPES]; + + /* Sidetone Cal */ + struct sidetone_atomic_cal sidetone_cal; + + + /* Allocation information */ +// struct ion_client *ion_client; +// struct ion_handle *ion_handle; + atomic_t map_handle; + atomic64_t paddr; + atomic64_t kvaddr; + atomic64_t mem_len; +}; + +static struct acdb_data acdb_data; +static atomic_t usage_count; + +uint32_t get_voice_rx_topology(void) +{ + return atomic_read(&acdb_data.voice_rx_topology); +} + +void store_voice_rx_topology(uint32_t topology) +{ + atomic_set(&acdb_data.voice_rx_topology, topology); +} + +uint32_t get_voice_tx_topology(void) +{ + return atomic_read(&acdb_data.voice_tx_topology); +} + +void store_voice_tx_topology(uint32_t topology) +{ + 
atomic_set(&acdb_data.voice_tx_topology, topology); +} + +uint32_t get_adm_rx_topology(void) +{ + return atomic_read(&acdb_data.adm_topology[RX_CAL]); +} + +void store_adm_rx_topology(uint32_t topology) +{ + atomic_set(&acdb_data.adm_topology[RX_CAL], topology); +} + +uint32_t get_adm_tx_topology(void) +{ + return atomic_read(&acdb_data.adm_topology[TX_CAL]); +} + +void store_adm_tx_topology(uint32_t topology) +{ + atomic_set(&acdb_data.adm_topology[TX_CAL], topology); +} + +uint32_t get_asm_topology(void) +{ + return atomic_read(&acdb_data.asm_topology); +} + +void store_asm_topology(uint32_t topology) +{ + atomic_set(&acdb_data.asm_topology, topology); +} + +void get_all_voice_cal(struct acdb_cal_block *cal_block) +{ + cal_block->cal_kvaddr = + atomic_read(&acdb_data.vocproc_cal[0].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.vocproc_cal[0].cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.vocproc_total_cal_size) + + atomic_read(&acdb_data.vocstrm_total_cal_size) + + atomic_read(&acdb_data.vocvol_total_cal_size); +} + +void get_all_cvp_cal(struct acdb_cal_block *cal_block) +{ + cal_block->cal_kvaddr = + atomic_read(&acdb_data.vocproc_cal[0].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.vocproc_cal[0].cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.vocproc_total_cal_size) + + atomic_read(&acdb_data.vocvol_total_cal_size); +} + +void get_all_vocproc_cal(struct acdb_cal_block *cal_block) +{ + cal_block->cal_kvaddr = + atomic_read(&acdb_data.vocproc_cal[0].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.vocproc_cal[0].cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.vocproc_total_cal_size); +} + +void get_all_vocstrm_cal(struct acdb_cal_block *cal_block) +{ + cal_block->cal_kvaddr = + atomic_read(&acdb_data.vocstrm_cal[0].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.vocstrm_cal[0].cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.vocstrm_total_cal_size); 
+} + +void get_all_vocvol_cal(struct acdb_cal_block *cal_block) +{ + cal_block->cal_kvaddr = + atomic_read(&acdb_data.vocvol_cal[0].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.vocvol_cal[0].cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.vocvol_total_cal_size); +} + +void get_anc_cal(struct acdb_cal_block *cal_block) +{ + pr_debug("%s\n", __func__); + + if (cal_block == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + + cal_block->cal_kvaddr = + atomic_read(&acdb_data.anc_cal.cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.anc_cal.cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.anc_cal.cal_size); +done: + return; +} + +void store_anc_cal(struct cal_block *cal_block) +{ + pr_debug("%s,\n", __func__); + + if (cal_block->cal_offset > atomic64_read(&acdb_data.mem_len)) { + pr_err("%s: offset %d is > mem_len %ld\n", + __func__, cal_block->cal_offset, + (long)atomic64_read(&acdb_data.mem_len)); + goto done; + } + + atomic_set(&acdb_data.anc_cal.cal_kvaddr, + cal_block->cal_offset + atomic64_read(&acdb_data.kvaddr)); + atomic_set(&acdb_data.anc_cal.cal_paddr, + cal_block->cal_offset + atomic64_read(&acdb_data.paddr)); + atomic_set(&acdb_data.anc_cal.cal_size, + cal_block->cal_size); +done: + return; +} + +void store_afe_cal(int32_t path, struct cal_block *cal_block) +{ + pr_debug("%s, path = %d\n", __func__, path); + + if (cal_block->cal_offset > atomic64_read(&acdb_data.mem_len)) { + pr_err("%s: offset %d is > mem_len %ld\n", + __func__, cal_block->cal_offset, + (long)atomic64_read(&acdb_data.mem_len)); + goto done; + } + if ((path >= MAX_AUDPROC_TYPES) || (path < 0)) { + pr_err("ACDB=> Bad path sent to %s, path: %d\n", + __func__, path); + goto done; + } + + atomic_set(&acdb_data.afe_cal[path].cal_kvaddr, + cal_block->cal_offset + atomic64_read(&acdb_data.kvaddr)); + atomic_set(&acdb_data.afe_cal[path].cal_paddr, + cal_block->cal_offset + atomic64_read(&acdb_data.paddr)); + 
atomic_set(&acdb_data.afe_cal[path].cal_size, + cal_block->cal_size); +done: + return; +} + +void get_afe_cal(int32_t path, struct acdb_cal_block *cal_block) +{ + pr_debug("%s, path = %d\n", __func__, path); + + if (cal_block == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + if ((path >= MAX_AUDPROC_TYPES) || (path < 0)) { + pr_err("ACDB=> Bad path sent to %s, path: %d\n", + __func__, path); + goto done; + } + + cal_block->cal_kvaddr = + atomic_read(&acdb_data.afe_cal[path].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.afe_cal[path].cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.afe_cal[path].cal_size); +done: + return; +} + +void store_audproc_cal(int32_t path, struct cal_block *cal_block) +{ + pr_debug("%s, path = %d\n", __func__, path); + + if (cal_block->cal_offset > atomic64_read(&acdb_data.mem_len)) { + pr_err("%s: offset %d is > mem_len %ld\n", + __func__, cal_block->cal_offset, + (long)atomic64_read(&acdb_data.mem_len)); + goto done; + } + if (path >= MAX_AUDPROC_TYPES) { + pr_err("ACDB=> Bad path sent to %s, path: %d\n", + __func__, path); + goto done; + } + + atomic_set(&acdb_data.audproc_cal[path].cal_kvaddr, + cal_block->cal_offset + atomic64_read(&acdb_data.kvaddr)); + atomic_set(&acdb_data.audproc_cal[path].cal_paddr, + cal_block->cal_offset + atomic64_read(&acdb_data.paddr)); + atomic_set(&acdb_data.audproc_cal[path].cal_size, + cal_block->cal_size); +done: + return; +} + +void get_audproc_cal(int32_t path, struct acdb_cal_block *cal_block) +{ + pr_debug("%s, path = %d\n", __func__, path); + + if (cal_block == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + if (path >= MAX_AUDPROC_TYPES) { + pr_err("ACDB=> Bad path sent to %s, path: %d\n", + __func__, path); + goto done; + } + + cal_block->cal_kvaddr = + atomic_read(&acdb_data.audproc_cal[path].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.audproc_cal[path].cal_paddr); + 
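Each `store_*_cal()` routine in audio_acdb.c follows the same shape: validate the calibration offset against the mapped region length and the path index against the table size, then translate the offset into both a kernel virtual and a physical address. A condensed sketch of that pattern (plain fields instead of the driver's `atomic_t`/`atomic64_t`; `MAX_AUDPROC_TYPES = 3` is an assumed value for illustration, and the real functions log rather than return an error):

```c
#include <stdint.h>

#define MAX_AUDPROC_TYPES 3 /* assumed table size for illustration */

struct cal_slot {
	uint64_t kvaddr;
	uint64_t paddr;
	uint32_t size;
};

struct cal_slot cal_tbl[MAX_AUDPROC_TYPES];

/* store_*_cal() pattern: bounds-check, then base + offset translation.
 * Returns 0 on success, -1 if validation fails. */
static int store_cal(struct cal_slot *tbl, int path,
		     uint32_t offset, uint32_t size,
		     uint64_t kvaddr_base, uint64_t paddr_base,
		     uint64_t mem_len)
{
	if (offset > mem_len)
		return -1; /* offset beyond the shared-memory region */
	if (path < 0 || path >= MAX_AUDPROC_TYPES)
		return -1; /* bad path index */

	tbl[path].kvaddr = kvaddr_base + offset;
	tbl[path].paddr = paddr_base + offset;
	tbl[path].size = size;
	return 0;
}
```

The matching `get_*_cal()` helpers are then trivial reads of the three stored fields; keeping kvaddr and paddr derived from the same offset ensures CPU-side and DSP-side views of a calibration block always refer to the same bytes.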
cal_block->cal_size = + atomic_read(&acdb_data.audproc_cal[path].cal_size); +done: + return; +} + +void store_audstrm_cal(int32_t path, struct cal_block *cal_block) +{ + pr_debug("%s, path = %d\n", __func__, path); + + if (cal_block->cal_offset > atomic64_read(&acdb_data.mem_len)) { + pr_err("%s: offset %d is > mem_len %ld\n", + __func__, cal_block->cal_offset, + (long)atomic64_read(&acdb_data.mem_len)); + goto done; + } + if (path >= MAX_AUDPROC_TYPES) { + pr_err("ACDB=> Bad path sent to %s, path: %d\n", + __func__, path); + goto done; + } + + atomic_set(&acdb_data.audstrm_cal[path].cal_kvaddr, + cal_block->cal_offset + atomic64_read(&acdb_data.kvaddr)); + atomic_set(&acdb_data.audstrm_cal[path].cal_paddr, + cal_block->cal_offset + atomic64_read(&acdb_data.paddr)); + atomic_set(&acdb_data.audstrm_cal[path].cal_size, + cal_block->cal_size); +done: + return; +} + +void get_audstrm_cal(int32_t path, struct acdb_cal_block *cal_block) +{ + pr_debug("%s, path = %d\n", __func__, path); + + if (cal_block == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + if (path >= MAX_AUDPROC_TYPES) { + pr_err("ACDB=> Bad path sent to %s, path: %d\n", + __func__, path); + goto done; + } + + cal_block->cal_kvaddr = + atomic_read(&acdb_data.audstrm_cal[path].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.audstrm_cal[path].cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.audstrm_cal[path].cal_size); +done: + return; +} + +void store_audvol_cal(int32_t path, struct cal_block *cal_block) +{ + pr_debug("%s, path = %d\n", __func__, path); + + if (cal_block->cal_offset > atomic64_read(&acdb_data.mem_len)) { + pr_err("%s: offset %d is > mem_len %ld\n", + __func__, cal_block->cal_offset, + (long)atomic64_read(&acdb_data.mem_len)); + goto done; + } + if (path >= MAX_AUDPROC_TYPES) { + pr_err("ACDB=> Bad path sent to %s, path: %d\n", + __func__, path); + goto done; + } + + atomic_set(&acdb_data.audvol_cal[path].cal_kvaddr, + 
cal_block->cal_offset + atomic64_read(&acdb_data.kvaddr)); + atomic_set(&acdb_data.audvol_cal[path].cal_paddr, + cal_block->cal_offset + atomic64_read(&acdb_data.paddr)); + atomic_set(&acdb_data.audvol_cal[path].cal_size, + cal_block->cal_size); +done: + return; +} + +void get_audvol_cal(int32_t path, struct acdb_cal_block *cal_block) +{ + pr_debug("%s, path = %d\n", __func__, path); + + if (cal_block == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + if (path >= MAX_AUDPROC_TYPES || path < 0) { + pr_err("ACDB=> Bad path sent to %s, path: %d\n", + __func__, path); + goto done; + } + + cal_block->cal_kvaddr = + atomic_read(&acdb_data.audvol_cal[path].cal_kvaddr); + cal_block->cal_paddr = + atomic_read(&acdb_data.audvol_cal[path].cal_paddr); + cal_block->cal_size = + atomic_read(&acdb_data.audvol_cal[path].cal_size); +done: + return; +} + + +void store_vocproc_cal(int32_t len, struct cal_block *cal_blocks) +{ + int i; + pr_debug("%s\n", __func__); + + if (len > MAX_NETWORKS) { + pr_err("%s: Calibration sent for %d networks, only %d are " + "supported!\n", __func__, len, MAX_NETWORKS); + goto done; + } + + atomic_set(&acdb_data.vocproc_total_cal_size, 0); + for (i = 0; i < len; i++) { + if (cal_blocks[i].cal_offset > + atomic64_read(&acdb_data.mem_len)) { + pr_err("%s: offset %d is > mem_len %ld\n", + __func__, cal_blocks[i].cal_offset, + (long)atomic64_read(&acdb_data.mem_len)); + atomic_set(&acdb_data.vocproc_cal[i].cal_size, 0); + } else { + atomic_add(cal_blocks[i].cal_size, + &acdb_data.vocproc_total_cal_size); + atomic_set(&acdb_data.vocproc_cal[i].cal_size, + cal_blocks[i].cal_size); + atomic_set(&acdb_data.vocproc_cal[i].cal_paddr, + cal_blocks[i].cal_offset + + atomic64_read(&acdb_data.paddr)); + atomic_set(&acdb_data.vocproc_cal[i].cal_kvaddr, + cal_blocks[i].cal_offset + + atomic64_read(&acdb_data.kvaddr)); + } + } + atomic_set(&acdb_data.vocproc_cal_size, len); +done: + return; +} + +void get_vocproc_cal(struct 
acdb_cal_data *cal_data) +{ + pr_debug("%s\n", __func__); + + if (cal_data == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + + cal_data->num_cal_blocks = atomic_read(&acdb_data.vocproc_cal_size); + cal_data->cal_blocks = &acdb_data.vocproc_cal[0]; +done: + return; +} + +void store_vocstrm_cal(int32_t len, struct cal_block *cal_blocks) +{ + int i; + pr_debug("%s\n", __func__); + + if (len > MAX_NETWORKS) { + pr_err("%s: Calibration sent for %d networks, only %d are " + "supported!\n", __func__, len, MAX_NETWORKS); + goto done; + } + + atomic_set(&acdb_data.vocstrm_total_cal_size, 0); + for (i = 0; i < len; i++) { + if (cal_blocks[i].cal_offset > + atomic64_read(&acdb_data.mem_len)) { + pr_err("%s: offset %d is > mem_len %ld\n", + __func__, cal_blocks[i].cal_offset, + (long)atomic64_read(&acdb_data.mem_len)); + atomic_set(&acdb_data.vocstrm_cal[i].cal_size, 0); + } else { + atomic_add(cal_blocks[i].cal_size, + &acdb_data.vocstrm_total_cal_size); + atomic_set(&acdb_data.vocstrm_cal[i].cal_size, + cal_blocks[i].cal_size); + atomic_set(&acdb_data.vocstrm_cal[i].cal_paddr, + cal_blocks[i].cal_offset + + atomic64_read(&acdb_data.paddr)); + atomic_set(&acdb_data.vocstrm_cal[i].cal_kvaddr, + cal_blocks[i].cal_offset + + atomic64_read(&acdb_data.kvaddr)); + } + } + atomic_set(&acdb_data.vocstrm_cal_size, len); +done: + return; +} + +void get_vocstrm_cal(struct acdb_cal_data *cal_data) +{ + pr_debug("%s\n", __func__); + + if (cal_data == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + + cal_data->num_cal_blocks = atomic_read(&acdb_data.vocstrm_cal_size); + cal_data->cal_blocks = &acdb_data.vocstrm_cal[0]; +done: + return; +} + +void store_vocvol_cal(int32_t len, struct cal_block *cal_blocks) +{ + int i; + pr_debug("%s\n", __func__); + + if (len > MAX_NETWORKS) { + pr_err("%s: Calibration sent for %d networks, only %d are " + "supported!\n", __func__, len, MAX_NETWORKS); + goto done; + } + + 
atomic_set(&acdb_data.vocvol_total_cal_size, 0); + for (i = 0; i < len; i++) { + if (cal_blocks[i].cal_offset > + atomic64_read(&acdb_data.mem_len)) { + pr_err("%s: offset %d is > mem_len %ld\n", + __func__, cal_blocks[i].cal_offset, + (long)atomic64_read(&acdb_data.mem_len)); + atomic_set(&acdb_data.vocvol_cal[i].cal_size, 0); + } else { + atomic_add(cal_blocks[i].cal_size, + &acdb_data.vocvol_total_cal_size); + atomic_set(&acdb_data.vocvol_cal[i].cal_size, + cal_blocks[i].cal_size); + atomic_set(&acdb_data.vocvol_cal[i].cal_paddr, + cal_blocks[i].cal_offset + + atomic64_read(&acdb_data.paddr)); + atomic_set(&acdb_data.vocvol_cal[i].cal_kvaddr, + cal_blocks[i].cal_offset + + atomic64_read(&acdb_data.kvaddr)); + } + } + atomic_set(&acdb_data.vocvol_cal_size, len); +done: + return; +} + +void get_vocvol_cal(struct acdb_cal_data *cal_data) +{ + pr_debug("%s\n", __func__); + + if (cal_data == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + + cal_data->num_cal_blocks = atomic_read(&acdb_data.vocvol_cal_size); + cal_data->cal_blocks = &acdb_data.vocvol_cal[0]; +done: + return; +} + +void store_sidetone_cal(struct sidetone_cal *cal_data) +{ + pr_debug("%s\n", __func__); + + atomic_set(&acdb_data.sidetone_cal.enable, cal_data->enable); + atomic_set(&acdb_data.sidetone_cal.gain, cal_data->gain); +} + + +void get_sidetone_cal(struct sidetone_cal *cal_data) +{ + pr_debug("%s\n", __func__); + + if (cal_data == NULL) { + pr_err("ACDB=> NULL pointer sent to %s\n", __func__); + goto done; + } + + cal_data->enable = atomic_read(&acdb_data.sidetone_cal.enable); + cal_data->gain = atomic_read(&acdb_data.sidetone_cal.gain); +done: + return; +} + +static int acdb_open(struct inode *inode, struct file *f) +{ + s32 result = 0; + pr_debug("%s\n", __func__); + + if (atomic64_read(&acdb_data.mem_len)) { + pr_debug("%s: ACDB opened but memory allocated, " + "using existing allocation!\n", + __func__); + } + + atomic_inc(&usage_count); + return result; +} 
+ +#if 0 +static int deregister_memory(void) +{ + if (atomic64_read(&acdb_data.mem_len)) { + mutex_lock(&acdb_data.acdb_mutex); + atomic_set(&acdb_data.vocstrm_total_cal_size, 0); + atomic_set(&acdb_data.vocproc_total_cal_size, 0); + atomic_set(&acdb_data.vocvol_total_cal_size, 0); + atomic64_set(&acdb_data.mem_len, 0); + ion_unmap_kernel(acdb_data.ion_client, acdb_data.ion_handle); + ion_free(acdb_data.ion_client, acdb_data.ion_handle); + ion_client_destroy(acdb_data.ion_client); + mutex_unlock(&acdb_data.acdb_mutex); + } + return 0; +} +#endif + +#if 0 +static int register_memory(void) +{ + int result; + unsigned long paddr; + void *kvptr; + unsigned long kvaddr; + unsigned long mem_len; + + mutex_lock(&acdb_data.acdb_mutex); + + kvptr = dma_alloc_writecombine(NULL, mem_len, + &paddr, GFP_KERNEL); + if (WARN_ON(IS_ERR_OR_NULL(kvptr))) { + pr_err("%s: allocation failed\n", __func__); + goto err; + } + + result = 0; + kvaddr = (unsigned long)kvptr; + atomic64_set(&acdb_data.paddr, paddr); + atomic64_set(&acdb_data.kvaddr, kvaddr); + atomic64_set(&acdb_data.mem_len, mem_len); + mutex_unlock(&acdb_data.acdb_mutex); + + pr_debug("%s done! 
paddr = 0x%lx, " + "kvaddr = 0x%lx, len = 0x%lx\n", + __func__, + (long)atomic64_read(&acdb_data.paddr), + (long)atomic64_read(&acdb_data.kvaddr), + (long)atomic64_read(&acdb_data.mem_len)); + + return result; + + atomic64_set(&acdb_data.mem_len, 0); + mutex_unlock(&acdb_data.acdb_mutex); + return result; + return 0; +} +#endif +static long acdb_ioctl(struct file *f, + unsigned int cmd, unsigned long arg) +{ + int32_t result = 0; + int32_t size; + int32_t map_fd; + uint32_t topology; + struct cal_block data[MAX_NETWORKS]; + pr_debug("%s\n", __func__); + + switch (cmd) { + + case AUDIO_REGISTER_PMEM: + pr_debug("AUDIO_REGISTER_PMEM\n"); + if (atomic64_read(&acdb_data.mem_len)) { + //deregister_memory(); + pr_debug("Remove the existing memory\n"); + } + + if (copy_from_user(&map_fd, (void *)arg, sizeof(map_fd))) { + pr_err("%s: fail to copy memory handle!\n", __func__); + result = -EFAULT; + } else { + atomic_set(&acdb_data.map_handle, map_fd); + //result = register_memory(); + } + goto done; + + case AUDIO_DEREGISTER_PMEM: + pr_debug("AUDIO_DEREGISTER_PMEM\n"); + //deregister_memory(); + goto done; + case AUDIO_SET_VOICE_RX_TOPOLOGY: + if (copy_from_user(&topology, (void *)arg, + sizeof(topology))) { + pr_err("%s: fail to copy topology!\n", __func__); + result = -EFAULT; + } + store_voice_rx_topology(topology); + goto done; + case AUDIO_SET_VOICE_TX_TOPOLOGY: + if (copy_from_user(&topology, (void *)arg, + sizeof(topology))) { + pr_err("%s: fail to copy topology!\n", __func__); + result = -EFAULT; + } + store_voice_tx_topology(topology); + goto done; + case AUDIO_SET_ADM_RX_TOPOLOGY: + if (copy_from_user(&topology, (void *)arg, + sizeof(topology))) { + pr_err("%s: fail to copy topology!\n", __func__); + result = -EFAULT; + } + store_adm_rx_topology(topology); + goto done; + case AUDIO_SET_ADM_TX_TOPOLOGY: + if (copy_from_user(&topology, (void *)arg, + sizeof(topology))) { + pr_err("%s: fail to copy topology!\n", __func__); + result = -EFAULT; + } + 
store_adm_tx_topology(topology); + goto done; + case AUDIO_SET_ASM_TOPOLOGY: + if (copy_from_user(&topology, (void *)arg, + sizeof(topology))) { + pr_err("%s: fail to copy topology!\n", __func__); + result = -EFAULT; + } + store_asm_topology(topology); + goto done; + } + + if (copy_from_user(&size, (void *) arg, sizeof(size))) { + + result = -EFAULT; + goto done; + } + + if (size <= 0) { + pr_err("%s: Invalid size sent to driver: %d\n", + __func__, size); + result = -EFAULT; + goto done; + } + + if (copy_from_user(data, (void *)(arg + sizeof(size)), size)) { + + pr_err("%s: fail to copy table size %d\n", __func__, size); + result = -EFAULT; + goto done; + } + + if (data == NULL) { + pr_err("%s: NULL pointer sent to driver!\n", __func__); + result = -EFAULT; + goto done; + } + + switch (cmd) { + case AUDIO_SET_AUDPROC_TX_CAL: + if (size > sizeof(struct cal_block)) + pr_err("%s: More Audproc Cal than expected, " + "size received: %d\n", __func__, size); + store_audproc_cal(TX_CAL, data); + break; + case AUDIO_SET_AUDPROC_RX_CAL: + if (size > sizeof(struct cal_block)) + pr_err("%s: More Audproc Cal than expected, " + "size received: %d\n", __func__, size); + store_audproc_cal(RX_CAL, data); + break; + case AUDIO_SET_AUDPROC_TX_STREAM_CAL: + if (size > sizeof(struct cal_block)) + pr_err("%s: More Audproc Cal than expected, " + "size received: %d\n", __func__, size); + store_audstrm_cal(TX_CAL, data); + break; + case AUDIO_SET_AUDPROC_RX_STREAM_CAL: + if (size > sizeof(struct cal_block)) + pr_err("%s: More Audproc Cal than expected, " + "size received: %d\n", __func__, size); + store_audstrm_cal(RX_CAL, data); + break; + case AUDIO_SET_AUDPROC_TX_VOL_CAL: + if (size > sizeof(struct cal_block)) + pr_err("%s: More Audproc Cal than expected, " + "size received: %d\n", __func__, size); + store_audvol_cal(TX_CAL, data); + break; + case AUDIO_SET_AUDPROC_RX_VOL_CAL: + if (size > sizeof(struct cal_block)) + pr_err("%s: More Audproc Cal than expected, " + "size received: %d\n", 
__func__, size); + store_audvol_cal(RX_CAL, data); + break; + case AUDIO_SET_AFE_TX_CAL: + if (size > sizeof(struct cal_block)) + pr_err("%s: More AFE Cal than expected, " + "size received: %d\n", __func__, size); + store_afe_cal(TX_CAL, data); + break; + case AUDIO_SET_AFE_RX_CAL: + if (size > sizeof(struct cal_block)) + pr_err("%s: More AFE Cal than expected, " + "size received: %d\n", __func__, size); + store_afe_cal(RX_CAL, data); + break; + case AUDIO_SET_VOCPROC_CAL: + store_vocproc_cal(size / sizeof(struct cal_block), data); + break; + case AUDIO_SET_VOCPROC_STREAM_CAL: + store_vocstrm_cal(size / sizeof(struct cal_block), data); + break; + case AUDIO_SET_VOCPROC_VOL_CAL: + store_vocvol_cal(size / sizeof(struct cal_block), data); + break; + case AUDIO_SET_SIDETONE_CAL: + if (size > sizeof(struct sidetone_cal)) + pr_err("%s: More sidetone cal than expected, " + "size received: %d\n", __func__, size); + store_sidetone_cal((struct sidetone_cal *)data); + break; + case AUDIO_SET_ANC_CAL: + store_anc_cal(data); + break; + default: + pr_err("ACDB=> ACDB ioctl not found!\n"); + } + +done: + return result; +} + +static int acdb_mmap(struct file *file, struct vm_area_struct *vma) +{ + int result = 0; + int size = vma->vm_end - vma->vm_start; + + pr_debug("%s\n", __func__); + + if (atomic64_read(&acdb_data.mem_len)) { + if (size <= atomic64_read(&acdb_data.mem_len)) { + vma->vm_page_prot = pgprot_noncached( + vma->vm_page_prot); + result = remap_pfn_range(vma, + vma->vm_start, + atomic64_read(&acdb_data.paddr) >> PAGE_SHIFT, + size, + vma->vm_page_prot); + } else { + pr_err("%s: Not enough memory!\n", __func__); + result = -ENOMEM; + } + } else { + pr_err("%s: memory is not allocated, yet!\n", __func__); + result = -ENODEV; + } + + return result; +} + +static int acdb_release(struct inode *inode, struct file *f) +{ + s32 result = 0; + + atomic_dec(&usage_count); + + pr_debug("%s: ref count %d!\n", __func__, + atomic_read(&usage_count)); + + 
if (atomic_read(&usage_count) >= 1) + result = -EBUSY; +// else +// result = deregister_memory(); + + return result; +} + +static const struct file_operations acdb_fops = { + .owner = THIS_MODULE, + .open = acdb_open, + .release = acdb_release, + .unlocked_ioctl = acdb_ioctl, + .mmap = acdb_mmap, +}; + +struct miscdevice acdb_misc = { + .minor = MISC_DYNAMIC_MINOR, + .name = "msm_acdb", + .fops = &acdb_fops, +}; + +static int __init acdb_init(void) +{ + memset(&acdb_data, 0, sizeof(acdb_data)); + mutex_init(&acdb_data.acdb_mutex); + atomic_set(&usage_count, 0); + return misc_register(&acdb_misc); +} + +static void __exit acdb_exit(void) +{ +} + +module_init(acdb_init); +module_exit(acdb_exit); + +MODULE_DESCRIPTION("MSM 8x60 Audio ACDB driver"); +MODULE_LICENSE("GPL v2"); diff --git a/sound/soc/msm/qdsp6/core/dsp_debug.c b/sound/soc/msm/qdsp6/core/dsp_debug.c new file mode 100644 index 000000000000..f08aff947737 --- /dev/null +++ b/sound/soc/msm/qdsp6/core/dsp_debug.c @@ -0,0 +1,261 @@ +/* arch/arm/mach-msm/qdsp6/dsp_dump.c + * + * Copyright (C) 2009 Google, Inc. + * Author: Brian Swetland <swetland@google.com> + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ + +#include <linux/io.h> +#include <linux/fs.h> +#include <linux/module.h> +#include <linux/miscdevice.h> +#include <linux/uaccess.h> +#include <linux/sched.h> +#include <linux/wait.h> +#include <linux/delay.h> +#include <linux/platform_device.h> +#include <asm/atomic.h> + +//#include <mach/proc_comm.h> +//#include <mach/debug_mm.h> +#include <sound/qdsp6v2/dsp_debug.h> + +static wait_queue_head_t dsp_wait; +static int dsp_has_crashed; +static int dsp_wait_count; + +static atomic_t dsp_crash_count = ATOMIC_INIT(0); +dsp_state_cb cb_ptr; + +void q6audio_dsp_not_responding(void) +{ + int i; + + if (cb_ptr) + cb_ptr(DSP_STATE_CRASHED); + if (atomic_add_return(1, &dsp_crash_count) != 1) { + pr_err("q6audio_dsp_not_responding() \ + - parking additional crasher...\n"); + for (i = 0; i < 600; i++) + msleep(1000); + } + if (dsp_wait_count) { + dsp_has_crashed = 1; + wake_up(&dsp_wait); + + while (dsp_has_crashed != 2) + wait_event(dsp_wait, dsp_has_crashed == 2); + } else { + pr_err("q6audio_dsp_not_responding() - no waiter?\n"); + } + if (cb_ptr) + cb_ptr(DSP_STATE_CRASH_DUMP_DONE); +} + +static int dsp_open(struct inode *inode, struct file *file) +{ + return 0; +} + +#define DSP_NMI_ADDR 0x28800010 + +static ssize_t dsp_write(struct file *file, const char __user *buf, + size_t count, loff_t *pos) +{ + char cmd[32]; + void __iomem *ptr; + void *mem_buffer; + + if (count >= sizeof(cmd)) + return -EINVAL; + if (copy_from_user(cmd, buf, count)) + return -EFAULT; + cmd[count] = 0; + + if ((count > 1) && (cmd[count-1] == '\n')) + cmd[count-1] = 0; + + if (!strcmp(cmd, "wait-for-crash")) { + while (!dsp_has_crashed) { + int res; + dsp_wait_count++; + res = wait_event_interruptible(dsp_wait, + dsp_has_crashed); + if (res < 0) { + dsp_wait_count--; + return res; + } + } + /* assert DSP NMI */ + mem_buffer = ioremap(DSP_NMI_ADDR, 0x16); + if (IS_ERR((void *)mem_buffer)) { + pr_err("%s:map_buffer failed, error = %ld\n", __func__, + PTR_ERR((void *)mem_buffer)); + return 
-ENOMEM; + } + ptr = mem_buffer; + if (!ptr) { + pr_err("Unable to map DSP NMI\n"); + return -EFAULT; + } + writel(0x1, (void *)ptr); + iounmap(mem_buffer); + } else if (!strcmp(cmd, "boom")) { + q6audio_dsp_not_responding(); + } else if (!strcmp(cmd, "continue-crash")) { + dsp_has_crashed = 2; + wake_up(&dsp_wait); + } else { + pr_err("[%s:%s] unknown dsp_debug command: %s\n", __FILE__, + __func__, cmd); + } + + return count; +} + +static unsigned copy_ok_count; +static uint32_t dsp_ram_size; +static uint32_t dsp_ram_base; + +static ssize_t dsp_read(struct file *file, char __user *buf, + size_t count, loff_t *pos) +{ + size_t actual = 0; + size_t mapsize = PAGE_SIZE; + unsigned addr; + void __iomem *ptr; + void *mem_buffer; + + if ((dsp_ram_base == 0) || (dsp_ram_size == 0)) { + pr_err("[%s:%s] Memory Invalid or not initialized, Base = 0x%x," + " size = 0x%x\n", __FILE__, + __func__, dsp_ram_base, dsp_ram_size); + return -EINVAL; + } + + if (*pos >= dsp_ram_size) + return 0; + + if (*pos & (PAGE_SIZE - 1)) + return -EINVAL; + + addr = (*pos + dsp_ram_base); + + /* don't blow up if we're unaligned */ + if (addr & (PAGE_SIZE - 1)) + mapsize *= 2; + + while (count >= PAGE_SIZE) { + mem_buffer = ioremap(addr, mapsize); + if (IS_ERR((void *)mem_buffer)) { + pr_err("%s:map_buffer failed, error = %ld\n", + __func__, PTR_ERR((void *)mem_buffer)); + return -ENOMEM; + } + ptr = mem_buffer; + if (!ptr) { + pr_err("[%s:%s] map error @ %x\n", __FILE__, + __func__, addr); + return -EFAULT; + } + if (copy_to_user(buf, ptr, PAGE_SIZE)) { + iounmap(mem_buffer); + pr_err("[%s:%s] copy error @ %p\n", __FILE__, + __func__, buf); + return -EFAULT; + } + copy_ok_count += PAGE_SIZE; + iounmap(mem_buffer); + addr += PAGE_SIZE; + buf += PAGE_SIZE; + actual += PAGE_SIZE; + count -= PAGE_SIZE; + } + + *pos += actual; + return actual; +} + +static int dsp_release(struct inode *inode, struct file *file) +{ + return 0; +} + +int dsp_debug_register(dsp_state_cb ptr) +{ + if (ptr == NULL) + 
return -EINVAL; + cb_ptr = ptr; + + return 0; +} + +static int dspcrashd_probe(struct platform_device *pdev) +{ + int rc = 0; + struct resource *res; + int *pdata; + + pdata = pdev->dev.platform_data; + res = platform_get_resource_byname(pdev, IORESOURCE_DMA, + "msm_dspcrashd"); + if (!res) { + pr_err("%s: failed to get resources for dspcrashd\n", __func__); + return -ENODEV; + } + + dsp_ram_base = res->start; + dsp_ram_size = res->end - res->start; + pr_info("%s: Platform driver values: Base = 0x%x, Size = 0x%x," + "pdata = 0x%x\n", __func__, + dsp_ram_base, dsp_ram_size, *pdata); + return rc; +} + +static const struct file_operations dsp_fops = { + .owner = THIS_MODULE, + .open = dsp_open, + .read = dsp_read, + .write = dsp_write, + .release = dsp_release, +}; + +static struct miscdevice dsp_misc = { + .minor = MISC_DYNAMIC_MINOR, + .name = "dsp_debug", + .fops = &dsp_fops, +}; + +static struct platform_driver dspcrashd_driver = { + .probe = dspcrashd_probe, + .driver = { .name = "msm_dspcrashd"} +}; + +static int __init dsp_init(void) +{ + int rc = 0; + init_waitqueue_head(&dsp_wait); + rc = platform_driver_register(&dspcrashd_driver); + if (IS_ERR_VALUE(rc)) { + pr_err("%s: platform_driver_register for dspcrashd failed\n", + __func__); + } + return misc_register(&dsp_misc); +} + +static int __exit dsp_exit(void) +{ + platform_driver_unregister(&dspcrashd_driver); + return 0; +} + +device_initcall(dsp_init); diff --git a/sound/soc/msm/qdsp6/core/q6audio_common.h b/sound/soc/msm/qdsp6/core/q6audio_common.h new file mode 100644 index 000000000000..68efafd65036 --- /dev/null +++ b/sound/soc/msm/qdsp6/core/q6audio_common.h @@ -0,0 +1,35 @@ +/* Copyright (c) 2012, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * +*/ + +/* For Decoders */ +#ifndef __Q6_AUDIO_COMMON_H__ +#define __Q6_AUDIO_COMMON_H__ + +#include <sound/apr_audio.h> +#include <sound/q6asm.h> + +void q6_audio_cb(uint32_t opcode, uint32_t token, + uint32_t *payload, void *priv); + +void audio_aio_cb(uint32_t opcode, uint32_t token, + uint32_t *payload, void *audio); + + +/* For Encoders */ +void q6asm_in_cb(uint32_t opcode, uint32_t token, + uint32_t *payload, void *priv); + +void audio_in_get_dsp_frames(void *audio, + uint32_t token, uint32_t *payload); + +#endif /*__Q6_AUDIO_COMMON_H__*/ diff --git a/sound/soc/msm/qdsp6/core/q6core.c b/sound/soc/msm/qdsp6/core/q6core.c new file mode 100644 index 000000000000..be57ef2fec70 --- /dev/null +++ b/sound/soc/msm/qdsp6/core/q6core.c @@ -0,0 +1,407 @@ +/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/string.h> +#include <linux/types.h> +#include <linux/uaccess.h> +#include <linux/spinlock.h> +#include <linux/mutex.h> +#include <linux/list.h> +#include <linux/sched.h> +#include <linux/wait.h> +#include <linux/errno.h> +#include <linux/fs.h> +#include <linux/debugfs.h> +#include <linux/delay.h> +#include <linux/msm_smd.h> +#include <sound/qdsp6v2/apr.h> +#include "q6core.h" + +#define TIMEOUT_MS 1000 + +static struct apr_svc *apr_handle_q; +static struct apr_svc *apr_handle_m; +static struct apr_svc *core_handle_q; + +static int32_t query_adsp_ver; +static wait_queue_head_t adsp_version_wait; +static uint32_t adsp_version; + +static wait_queue_head_t bus_bw_req_wait; +static u32 bus_bw_resp_received; + +static struct dentry *dentry; +static char l_buf[4096]; + +static int32_t aprv2_core_fn_q(struct apr_client_data *data, void *priv) +{ + struct adsp_get_version *payload; + uint32_t *payload1; + struct adsp_service_info *svc_info; + int i; + + pr_info("core msg: payload len = %u, apr resp opcode = 0x%X\n", + data->payload_size, data->opcode); + + switch (data->opcode) { + + case APR_BASIC_RSP_RESULT:{ + + if (data->payload_size == 0) { + pr_err("%s: APR_BASIC_RSP_RESULT No Payload ", + __func__); + return 0; + } + + payload1 = data->payload; + + switch (payload1[0]) { + + case ADSP_CMD_SET_POWER_COLLAPSE_STATE: + pr_info("Cmd = ADSP_CMD_SET_POWER_COLLAPSE_STATE" + " status[0x%x]\n", payload1[1]); + break; + case ADSP_CMD_REMOTE_BUS_BW_REQUEST: + pr_info("%s: cmd = ADSP_CMD_REMOTE_BUS_BW_REQUEST" + " status = 0x%x\n", __func__, payload1[1]); + + bus_bw_resp_received = 1; + wake_up(&bus_bw_req_wait); + break; + default: + pr_err("Invalid cmd rsp[0x%x][0x%x]\n", + payload1[0], payload1[1]); + break; + } + break; + } + case ADSP_GET_VERSION_RSP:{ + if (data->payload_size) { + payload = data->payload; + if (query_adsp_ver == 1) { + query_adsp_ver = 0; + adsp_version = payload->build_id; + 
wake_up(&adsp_version_wait); + } + svc_info = (struct adsp_service_info *) + ((char *)payload + sizeof(struct adsp_get_version)); + pr_info("----------------------------------------\n"); + pr_info("Build id = %x\n", payload->build_id); + pr_info("Number of services= %x\n", payload->svc_cnt); + pr_info("----------------------------------------\n"); + for (i = 0; i < payload->svc_cnt; i++) { + pr_info("svc-id[%d]\tver[%x.%x]\n", + svc_info[i].svc_id, + (svc_info[i].svc_ver & 0xFFFF0000) + >> 16, + (svc_info[i].svc_ver & 0xFFFF)); + } + pr_info("-----------------------------------------\n"); + } else + pr_info("zero payload for ADSP_GET_VERSION_RSP\n"); + break; + } + case RESET_EVENTS:{ + pr_debug("Reset event received in Core service"); + apr_reset(core_handle_q); + core_handle_q = NULL; + break; + } + + default: + pr_err("Message id from adsp core svc: %d\n", data->opcode); + break; + } + + return 0; +} + +static int32_t aprv2_debug_fn_q(struct apr_client_data *data, void *priv) +{ + pr_debug("Q6_Payload Length = %d\n", data->payload_size); + if (memcmp(data->payload, l_buf + 20, data->payload_size)) + pr_info("FAIL: %d\n", data->payload_size); + else + pr_info("SUCCESS: %d\n", data->payload_size); + return 0; +} + +static int32_t aprv2_debug_fn_m(struct apr_client_data *data, void *priv) +{ + pr_info("M_Payload Length = %d\n", data->payload_size); + return 0; +} + +static ssize_t apr_debug_open(struct inode *inode, struct file *file) +{ + file->private_data = inode->i_private; + pr_debug("apr debugfs opened\n"); + return 0; +} + +void core_open(void) +{ + if (core_handle_q == NULL) { + core_handle_q = apr_register("ADSP", "CORE", + aprv2_core_fn_q, 0xFFFFFFFF, NULL); + } + pr_info("Open_q %p\n", core_handle_q); + if (core_handle_q == NULL) { + pr_err("%s: Unable to register CORE\n", __func__); + } +} + +int core_req_bus_bandwith(u16 bus_id, u32 ab_bps, u32 ib_bps) +{ + struct adsp_cmd_remote_bus_bw_request bus_bw_req; + int ret; + + pr_debug("%s: bus_id %u ab_bps 
%u ib_bps %u\n", + __func__, bus_id, ab_bps, ib_bps); + + core_open(); + if (core_handle_q == NULL) { + pr_info("%s: apr registration for CORE failed\n", __func__); + return -ENODEV; + } + + bus_bw_req.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + bus_bw_req.hdr.pkt_size = sizeof(struct adsp_cmd_remote_bus_bw_request); + + bus_bw_req.hdr.src_port = 0; + bus_bw_req.hdr.dest_port = 0; + bus_bw_req.hdr.token = 0; + bus_bw_req.hdr.opcode = ADSP_CMD_REMOTE_BUS_BW_REQUEST; + + bus_bw_req.bus_identifier = bus_id; + bus_bw_req.reserved = 0; + bus_bw_req.ab_bps = ab_bps; + bus_bw_req.ib_bps = ib_bps; + + bus_bw_resp_received = 0; + ret = apr_send_pkt(core_handle_q, (uint32_t *) &bus_bw_req); + if (ret < 0) { + pr_err("%s: CORE bus bw request failed\n", __func__); + goto fail_cmd; + } + + ret = wait_event_timeout(bus_bw_req_wait, (bus_bw_resp_received == 1), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -ETIME; + goto fail_cmd; + } + + return 0; + +fail_cmd: + return ret; +} + +uint32_t core_get_adsp_version(void) +{ + struct apr_hdr *hdr; + int32_t rc = 0, ret = 0; + core_open(); + if (core_handle_q) { + hdr = (struct apr_hdr *)l_buf; + hdr->hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + hdr->pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, 0); + hdr->src_port = 0; + hdr->dest_port = 0; + hdr->token = 0; + hdr->opcode = ADSP_GET_VERSION; + + apr_send_pkt(core_handle_q, (uint32_t *)l_buf); + query_adsp_ver = 1; + pr_info("Write_q\n"); + ret = wait_event_timeout(adsp_version_wait, + (query_adsp_ver == 0), + msecs_to_jiffies(TIMEOUT_MS)); + rc = adsp_version; + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + rc = -ENODEV; + } + } else + pr_info("apr registration failed\n"); + return rc; +} +EXPORT_SYMBOL(core_get_adsp_version); + +static ssize_t apr_debug_write(struct file *file, const char __user *buf, + size_t count, 
loff_t *ppos) +{ + int len; + static int t_len; + + len = count > 63 ? 63 : count; + if (copy_from_user(l_buf + 20, buf, len)) { + pr_info("Unable to copy data from user space\n"); + return -EFAULT; + } + l_buf[len + 20] = 0; + if (l_buf[len + 20 - 1] == '\n') { + l_buf[len + 20 - 1] = 0; + len--; + } + if (!strncmp(l_buf + 20, "open_q", 64)) { + apr_handle_q = apr_register("ADSP", "TEST", aprv2_debug_fn_q, + 0xFFFFFFFF, NULL); + pr_info("Open_q %p\n", apr_handle_q); + } else if (!strncmp(l_buf + 20, "open_m", 64)) { + apr_handle_m = apr_register("MODEM", "TEST", aprv2_debug_fn_m, + 0xFFFFFFFF, NULL); + pr_info("Open_m %p\n", apr_handle_m); + } else if (!strncmp(l_buf + 20, "write_q", 64)) { + struct apr_hdr *hdr; + + t_len++; + t_len = t_len % 450; + if (!(t_len % 99)) + msleep(2000); + hdr = (struct apr_hdr *)l_buf; + hdr->hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT, + APR_HDR_LEN(20), APR_PKT_VER); + hdr->pkt_size = APR_PKT_SIZE(20, t_len); + hdr->src_port = 0; + hdr->dest_port = 0; + hdr->token = 0; + hdr->opcode = 0x12345678; + memset(l_buf + 20, 9, 4060); + + apr_send_pkt(apr_handle_q, (uint32_t *)l_buf); + pr_debug("Write_q\n"); + } else if (!strncmp(l_buf + 20, "write_m", 64)) { + struct apr_hdr *hdr; + + hdr = (struct apr_hdr *)l_buf; + hdr->hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT, + APR_HDR_LEN(20), APR_PKT_VER); + hdr->pkt_size = APR_PKT_SIZE(20, 8); + hdr->src_port = 0; + hdr->dest_port = 0; + hdr->token = 0; + hdr->opcode = 0x12345678; + memset(l_buf + 30, 9, 4060); + + apr_send_pkt(apr_handle_m, (uint32_t *)l_buf); + pr_info("Write_m\n"); + } else if (!strncmp(l_buf + 20, "write_q4", 64)) { + struct apr_hdr *hdr; + + hdr = (struct apr_hdr *)l_buf; + hdr->hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT, + APR_HDR_LEN(20), APR_PKT_VER); + hdr->pkt_size = APR_PKT_SIZE(20, 4076); + hdr->src_port = 0; + hdr->dest_port = 0; + hdr->token = 0; + hdr->opcode = 0x12345678; + memset(l_buf + 30, 9, 4060); + + apr_send_pkt(apr_handle_q, (uint32_t *)l_buf); + 
pr_info("Write_q\n"); + } else if (!strncmp(l_buf + 20, "write_m4", 64)) { + struct apr_hdr *hdr; + + hdr = (struct apr_hdr *)l_buf; + hdr->hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT, + APR_HDR_LEN(20), APR_PKT_VER); + hdr->pkt_size = APR_PKT_SIZE(20, 4076); + hdr->src_port = 0; + hdr->dest_port = 0; + hdr->token = 0; + hdr->opcode = 0x12345678; + memset(l_buf + 30, 9, 4060); + + apr_send_pkt(apr_handle_m, (uint32_t *)l_buf); + pr_info("Write_m\n"); + } else if (!strncmp(l_buf + 20, "close", 64)) { + if (apr_handle_q) + apr_deregister(apr_handle_q); + } else if (!strncmp(l_buf + 20, "loaded", 64)) { + apr_set_q6_state(APR_SUBSYS_LOADED); + } else if (!strncmp(l_buf + 20, "boom", 64)) { + q6audio_dsp_not_responding(); + } else if (!strncmp(l_buf + 20, "dsp_ver", 64)) { + core_get_adsp_version(); + } else if (!strncmp(l_buf + 20, "en_pwr_col", 64)) { + struct adsp_power_collapse pc; + + core_open(); + if (core_handle_q) { + pc.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + pc.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(uint32_t)); + pc.hdr.src_port = 0; + pc.hdr.dest_port = 0; + pc.hdr.token = 0; + pc.hdr.opcode = ADSP_CMD_SET_POWER_COLLAPSE_STATE; + pc.power_collapse = 0x00000000; + apr_send_pkt(core_handle_q, (uint32_t *)&pc); + pr_info("Write_q :enable power collapse\n"); + } + } else if (!strncmp(l_buf + 20, "dis_pwr_col", 64)) { + struct adsp_power_collapse pc; + + core_open(); + if (core_handle_q) { + pc.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + pc.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(uint32_t)); + pc.hdr.src_port = 0; + pc.hdr.dest_port = 0; + pc.hdr.token = 0; + pc.hdr.opcode = ADSP_CMD_SET_POWER_COLLAPSE_STATE; + pc.power_collapse = 0x00000001; + apr_send_pkt(core_handle_q, (uint32_t *)&pc); + pr_info("Write_q:disable power collapse\n"); + } + } else + pr_info("Unknown Command\n"); + + return count; +} + +static const struct 
file_operations apr_debug_fops = { + .write = apr_debug_write, + .open = apr_debug_open, +}; + +static int __init core_init(void) +{ + init_waitqueue_head(&bus_bw_req_wait); + bus_bw_resp_received = 0; + + query_adsp_ver = 0; + init_waitqueue_head(&adsp_version_wait); + adsp_version = 0; + + core_handle_q = NULL; + +#ifdef CONFIG_DEBUG_FS + dentry = debugfs_create_file("apr", S_IFREG | S_IRUGO | S_IWUSR + | S_IWGRP, NULL, (void *) NULL, &apr_debug_fops); +#endif /* CONFIG_DEBUG_FS */ + + return 0; +} + +device_initcall(core_init); diff --git a/sound/soc/msm/qdsp6/core/q6core.h b/sound/soc/msm/qdsp6/core/q6core.h new file mode 100644 index 000000000000..97a81cbe75f1 --- /dev/null +++ b/sound/soc/msm/qdsp6/core/q6core.h @@ -0,0 +1,52 @@ +/* Copyright (c) 2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#ifndef __Q6CORE_H__ +#define __Q6CORE_H__ +#include <sound/qdsp6v2/apr.h> + + +#define ADSP_CMD_REMOTE_BUS_BW_REQUEST 0x0001115D +#define AUDIO_IF_BUS_ID 1 + +struct adsp_cmd_remote_bus_bw_request { + struct apr_hdr hdr; + u16 bus_identifier; + u16 reserved; + u32 ab_bps; + u32 ib_bps; +} __packed; + +#define ADSP_GET_VERSION 0x00011152 +#define ADSP_GET_VERSION_RSP 0x00011153 + +struct adsp_get_version { + uint32_t build_id; + uint32_t svc_cnt; +}; + +struct adsp_service_info { + uint32_t svc_id; + uint32_t svc_ver; +}; + +#define ADSP_CMD_SET_POWER_COLLAPSE_STATE 0x0001115C +struct adsp_power_collapse { + struct apr_hdr hdr; + uint32_t power_collapse; +}; + +int core_req_bus_bandwith(u16 bus_id, u32 ab_bps, u32 ib_bps); + +uint32_t core_get_adsp_version(void); + +#endif /* __Q6CORE_H__ */ diff --git a/sound/soc/msm/qdsp6/core/rtac.c b/sound/soc/msm/qdsp6/core/rtac.c new file mode 100644 index 000000000000..4453aa52a8a0 --- /dev/null +++ b/sound/soc/msm/qdsp6/core/rtac.c @@ -0,0 +1,1047 @@ +/* Copyright (c) 2011, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ + +#include <linux/fs.h> +#include <linux/module.h> +#include <linux/miscdevice.h> +#include <linux/slab.h> +#include <linux/uaccess.h> +#include <linux/mutex.h> +#include <linux/sched.h> +#include <linux/msm_audio_acdb.h> +#include <asm/atomic.h> +#include <sound/qdsp6v2/audio_acdb.h> +#include <sound/qdsp6v2/rtac.h> +#include "q6audio_common.h" +#include <sound/q6afe.h> + +#define CONFIG_RTAC +#ifndef CONFIG_RTAC + +void rtac_add_adm_device(u32 port_id, u32 copp_id, u32 path_id, u32 popp_id) {} +void rtac_remove_adm_device(u32 port_id) {} +void rtac_remove_popp_from_adm_devices(u32 popp_id) {} +void rtac_set_adm_handle(void *handle) {} +bool rtac_make_adm_callback(uint32_t *payload, u32 payload_size) + {return false; } +void rtac_set_asm_handle(u32 session_id, void *handle) {} +bool rtac_make_asm_callback(u32 session_id, uint32_t *payload, + u32 payload_size) {return false; } +void rtac_add_voice(u32 cvs_handle, u32 cvp_handle, u32 rx_afe_port, + u32 tx_afe_port, u32 session_id) {} +void rtac_remove_voice(u32 cvs_handle) {} +void rtac_set_voice_handle(u32 mode, void *handle) {} +bool rtac_make_voice_callback(u32 mode, uint32_t *payload, + u32 payload_size) {return false; } + +#else + +#define VOICE_CMD_SET_PARAM 0x00011006 +#define VOICE_CMD_GET_PARAM 0x00011007 +#define VOICE_EVT_GET_PARAM_ACK 0x00011008 + +/* Max size of payload (buf size - apr header) */ +#define MAX_PAYLOAD_SIZE 4076 +#define RTAC_MAX_ACTIVE_DEVICES 4 +#define RTAC_MAX_ACTIVE_VOICE_COMBOS 2 +#define RTAC_MAX_ACTIVE_POPP 8 +#define RTAC_BUF_SIZE 4096 + +#define TIMEOUT_MS 1000 + +/* APR data */ +struct rtac_apr_data { + void *apr_handle; + atomic_t cmd_state; + wait_queue_head_t cmd_wait; +}; + +static struct rtac_apr_data rtac_adm_apr_data; +static struct rtac_apr_data rtac_asm_apr_data[SESSION_MAX+1]; +static struct rtac_apr_data rtac_voice_apr_data[RTAC_VOICE_MODES]; + + +/* ADM info & APR */ +struct rtac_adm_data { + uint32_t topology_id; + uint32_t afe_port; + uint32_t copp; + 
uint32_t num_of_popp; + uint32_t popp[RTAC_MAX_ACTIVE_POPP]; +}; + +struct rtac_adm { + uint32_t num_of_dev; + struct rtac_adm_data device[RTAC_MAX_ACTIVE_DEVICES]; +}; +static struct rtac_adm rtac_adm_data; +static u32 rtac_adm_payload_size; +static u32 rtac_adm_user_buf_size; +static u8 *rtac_adm_buffer; + + +/* ASM APR */ +static u32 rtac_asm_payload_size; +static u32 rtac_asm_user_buf_size; +static u8 *rtac_asm_buffer; + + +/* Voice info & APR */ +struct rtac_voice_data { + uint32_t tx_topology_id; + uint32_t rx_topology_id; + uint32_t tx_afe_port; + uint32_t rx_afe_port; + uint16_t cvs_handle; + uint16_t cvp_handle; +}; + +struct rtac_voice { + uint32_t num_of_voice_combos; + struct rtac_voice_data voice[RTAC_MAX_ACTIVE_VOICE_COMBOS]; +}; + +static struct rtac_voice rtac_voice_data; +static u32 rtac_voice_payload_size; +static u32 rtac_voice_user_buf_size; +static u8 *rtac_voice_buffer; +static u32 voice_session_id[RTAC_MAX_ACTIVE_VOICE_COMBOS]; + + +struct mutex rtac_adm_mutex; +struct mutex rtac_adm_apr_mutex; +struct mutex rtac_asm_apr_mutex; +struct mutex rtac_voice_mutex; +struct mutex rtac_voice_apr_mutex; + +static int rtac_open(struct inode *inode, struct file *f) +{ + pr_debug("%s\n", __func__); + return 0; +} + +static int rtac_release(struct inode *inode, struct file *f) +{ + pr_debug("%s\n", __func__); + return 0; +} + +/* ADM Info */ +void add_popp(u32 dev_idx, u32 port_id, u32 popp_id) +{ + u32 i = 0; + + for (; i < rtac_adm_data.device[dev_idx].num_of_popp; i++) + if (rtac_adm_data.device[dev_idx].popp[i] == popp_id) + goto done; + + if (rtac_adm_data.device[dev_idx].num_of_popp == + RTAC_MAX_ACTIVE_POPP) { + pr_err("%s, Max POPP!\n", __func__); + goto done; + } + rtac_adm_data.device[dev_idx].popp[ + rtac_adm_data.device[dev_idx].num_of_popp++] = popp_id; +done: + return; +} + +void rtac_add_adm_device(u32 port_id, u32 copp_id, u32 path_id, u32 popp_id) +{ + u32 i = 0; + pr_debug("%s: port_id = %d, popp_id = %d\n", __func__, port_id, + 
popp_id); + + mutex_lock(&rtac_adm_mutex); + if (rtac_adm_data.num_of_dev == RTAC_MAX_ACTIVE_DEVICES) { + pr_err("%s, Can't add anymore RTAC devices!\n", __func__); + goto done; + } + + /* Check if device already added */ + if (rtac_adm_data.num_of_dev != 0) { + for (; i < rtac_adm_data.num_of_dev; i++) { + if (rtac_adm_data.device[i].afe_port == port_id) { + add_popp(i, port_id, popp_id); + goto done; + } + if (rtac_adm_data.device[i].num_of_popp == + RTAC_MAX_ACTIVE_POPP) { + pr_err("%s, Max POPP!\n", __func__); + goto done; + } + } + } + + /* Add device */ + rtac_adm_data.num_of_dev++; + + if (path_id == ADM_PATH_PLAYBACK) + rtac_adm_data.device[i].topology_id = + get_adm_rx_topology(); + else + rtac_adm_data.device[i].topology_id = + get_adm_tx_topology(); + rtac_adm_data.device[i].afe_port = port_id; + rtac_adm_data.device[i].copp = copp_id; + rtac_adm_data.device[i].popp[ + rtac_adm_data.device[i].num_of_popp++] = popp_id; +done: + mutex_unlock(&rtac_adm_mutex); + return; +} + +static void shift_adm_devices(u32 dev_idx) +{ + for (; dev_idx < rtac_adm_data.num_of_dev; dev_idx++) { + memcpy(&rtac_adm_data.device[dev_idx], + &rtac_adm_data.device[dev_idx + 1], + sizeof(rtac_adm_data.device[dev_idx])); + memset(&rtac_adm_data.device[dev_idx + 1], 0, + sizeof(rtac_adm_data.device[dev_idx])); + } +} + +static void shift_popp(u32 copp_idx, u32 popp_idx) +{ + for (; popp_idx < rtac_adm_data.device[copp_idx].num_of_popp; + popp_idx++) { + memcpy(&rtac_adm_data.device[copp_idx].popp[popp_idx], + &rtac_adm_data.device[copp_idx].popp[popp_idx + 1], + sizeof(uint32_t)); + memset(&rtac_adm_data.device[copp_idx].popp[popp_idx + 1], 0, + sizeof(uint32_t)); + } +} + +void rtac_remove_adm_device(u32 port_id) +{ + s32 i; + pr_debug("%s: port_id = %d\n", __func__, port_id); + + mutex_lock(&rtac_adm_mutex); + /* look for device */ + for (i = 0; i < rtac_adm_data.num_of_dev; i++) { + if (rtac_adm_data.device[i].afe_port == port_id) { + memset(&rtac_adm_data.device[i], 0, + 
sizeof(rtac_adm_data.device[i])); + rtac_adm_data.num_of_dev--; + + if (rtac_adm_data.num_of_dev >= 1) { + shift_adm_devices(i); + break; + } + } + } + + mutex_unlock(&rtac_adm_mutex); + return; +} + +void rtac_remove_popp_from_adm_devices(u32 popp_id) +{ + s32 i, j; + pr_debug("%s: popp_id = %d\n", __func__, popp_id); + + mutex_lock(&rtac_adm_mutex); + + for (i = 0; i < rtac_adm_data.num_of_dev; i++) { + for (j = 0; j < rtac_adm_data.device[i].num_of_popp; j++) { + if (rtac_adm_data.device[i].popp[j] == popp_id) { + rtac_adm_data.device[i].popp[j] = 0; + rtac_adm_data.device[i].num_of_popp--; + shift_popp(i, j); + } + } + } + + mutex_unlock(&rtac_adm_mutex); +} + +/* Voice Info */ +static void set_rtac_voice_data(int idx, u32 cvs_handle, u32 cvp_handle, + u32 rx_afe_port, u32 tx_afe_port, + u32 session_id) +{ + rtac_voice_data.voice[idx].tx_topology_id = get_voice_tx_topology(); + rtac_voice_data.voice[idx].rx_topology_id = get_voice_rx_topology(); + rtac_voice_data.voice[idx].tx_afe_port = tx_afe_port; + rtac_voice_data.voice[idx].rx_afe_port = rx_afe_port; + rtac_voice_data.voice[idx].cvs_handle = cvs_handle; + rtac_voice_data.voice[idx].cvp_handle = cvp_handle; + + /* Store session ID for voice RTAC */ + voice_session_id[idx] = session_id; +} + +void rtac_add_voice(u32 cvs_handle, u32 cvp_handle, u32 rx_afe_port, + u32 tx_afe_port, u32 session_id) +{ + u32 i = 0; + pr_debug("%s\n", __func__); + mutex_lock(&rtac_voice_mutex); + + if (rtac_voice_data.num_of_voice_combos == + RTAC_MAX_ACTIVE_VOICE_COMBOS) { + pr_err("%s, Can't add anymore RTAC devices!\n", __func__); + goto done; + } + + /* Check if device already added */ + if (rtac_voice_data.num_of_voice_combos != 0) { + for (; i < rtac_voice_data.num_of_voice_combos; i++) { + if (rtac_voice_data.voice[i].cvs_handle == + cvs_handle) { + set_rtac_voice_data(i, cvs_handle, cvp_handle, + rx_afe_port, tx_afe_port, + session_id); + goto done; + } + } + } + + /* Add device */ + rtac_voice_data.num_of_voice_combos++; 
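The device and voice tables above share one bookkeeping pattern: a fixed-capacity array plus a count, where removal compacts the array by sliding every later entry down one slot (`shift_adm_devices()`, `shift_popp()`, `shift_voice_devices()`). A hedged userspace sketch of that pattern, using a single `memmove()` where the driver loops with per-element `memcpy()` (names here are hypothetical):

```c
#include <assert.h>
#include <string.h>

#define MAX_ENTRIES 4  /* like RTAC_MAX_ACTIVE_DEVICES */

static unsigned table[MAX_ENTRIES];
static unsigned num_entries;

static int table_add(unsigned id)
{
    if (num_entries == MAX_ENTRIES)
        return -1;                 /* table full, like "Max POPP!" */
    table[num_entries++] = id;
    return 0;
}

/* Remove by value and compact, like shift_adm_devices(): every
 * entry after the hole slides down one slot, and the vacated
 * tail slot is cleared. */
static void table_remove(unsigned id)
{
    unsigned i;

    for (i = 0; i < num_entries; i++) {
        if (table[i] != id)
            continue;
        memmove(&table[i], &table[i + 1],
                (num_entries - i - 1) * sizeof(table[0]));
        num_entries--;
        table[num_entries] = 0;    /* clear the vacated slot */
        return;
    }
}
```

Compaction keeps lookups a simple linear scan over `num_entries` live slots, which is why the driver can index `device[i]` without checking for holes.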
+ set_rtac_voice_data(i, cvs_handle, cvp_handle, + rx_afe_port, tx_afe_port, + session_id); +done: + mutex_unlock(&rtac_voice_mutex); + return; +} + +static void shift_voice_devices(u32 idx) +{ + for (; idx < rtac_voice_data.num_of_voice_combos - 1; idx++) { + memcpy(&rtac_voice_data.voice[idx], + &rtac_voice_data.voice[idx + 1], + sizeof(rtac_voice_data.voice[idx])); + voice_session_id[idx] = voice_session_id[idx + 1]; + } +} + +void rtac_remove_voice(u32 cvs_handle) +{ + u32 i = 0; + pr_debug("%s\n", __func__); + + mutex_lock(&rtac_voice_mutex); + /* look for device */ + for (i = 0; i < rtac_voice_data.num_of_voice_combos; i++) { + if (rtac_voice_data.voice[i].cvs_handle == cvs_handle) { + shift_voice_devices(i); + rtac_voice_data.num_of_voice_combos--; + memset(&rtac_voice_data.voice[ + rtac_voice_data.num_of_voice_combos], 0, + sizeof(rtac_voice_data.voice + [rtac_voice_data.num_of_voice_combos])); + voice_session_id[rtac_voice_data.num_of_voice_combos] + = 0; + break; + } + } + mutex_unlock(&rtac_voice_mutex); + return; +} + +static int get_voice_index_cvs(u32 cvs_handle) +{ + u32 i; + + for (i = 0; i < rtac_voice_data.num_of_voice_combos; i++) { + if (rtac_voice_data.voice[i].cvs_handle == cvs_handle) + return i; + } + + pr_err("%s: No voice index for CVS handle %d found returning 0\n", + __func__, cvs_handle); + return 0; +} + +static int get_voice_index_cvp(u32 cvp_handle) +{ + u32 i; + + for (i = 0; i < rtac_voice_data.num_of_voice_combos; i++) { + if (rtac_voice_data.voice[i].cvp_handle == cvp_handle) + return i; + } + + pr_err("%s: No voice index for CVP handle %d found returning 0\n", + __func__, cvp_handle); + return 0; +} + +static int get_voice_index(u32 mode, u32 handle) +{ + if (mode == RTAC_CVP) + return get_voice_index_cvp(handle); + if (mode == RTAC_CVS) + return get_voice_index_cvs(handle); + + pr_err("%s: Invalid mode %d, returning 0\n", + __func__, mode); + return 0; +} + + +/* ADM APR */ +void rtac_set_adm_handle(void *handle) +{ + 
pr_debug("%s: handle = %d\n", __func__, (unsigned int)handle); + + mutex_lock(&rtac_adm_apr_mutex); + rtac_adm_apr_data.apr_handle = handle; + mutex_unlock(&rtac_adm_apr_mutex); +} + +bool rtac_make_adm_callback(uint32_t *payload, u32 payload_size) +{ + pr_debug("%s:cmd_state = %d\n", __func__, + atomic_read(&rtac_adm_apr_data.cmd_state)); + if (atomic_read(&rtac_adm_apr_data.cmd_state) != 1) + return false; + + /* Offset data for in-band payload */ + rtac_copy_adm_payload_to_user(payload, payload_size); + atomic_set(&rtac_adm_apr_data.cmd_state, 0); + wake_up(&rtac_adm_apr_data.cmd_wait); + return true; +} + +void rtac_copy_adm_payload_to_user(void *payload, u32 payload_size) +{ + pr_debug("%s\n", __func__); + rtac_adm_payload_size = payload_size; + + memcpy(rtac_adm_buffer, &payload_size, sizeof(u32)); + if (payload_size != 0) { + if (payload_size > rtac_adm_user_buf_size) { + pr_err("%s: Buffer set not big enough for returned data, buf size = %d,ret data = %d\n", + __func__, rtac_adm_user_buf_size, payload_size); + goto done; + } + memcpy(rtac_adm_buffer + sizeof(u32), payload, payload_size); + } +done: + return; +} + +u32 send_adm_apr(void *buf, u32 opcode) +{ + s32 result; + u32 count = 0; + u32 bytes_returned = 0; + u32 port_index = 0; + u32 copp_id; + u32 payload_size; + struct apr_hdr adm_params; + pr_debug("%s\n", __func__); + + if (copy_from_user(&count, (void *)buf, sizeof(count))) { + pr_err("%s: Copy to user failed! 
buf = 0x%x\n", + __func__, (unsigned int)buf); + result = -EFAULT; + goto done; + } + + if (count <= 0) { + pr_err("%s: Invalid buffer size = %d\n", __func__, count); + goto done; + } + + if (copy_from_user(&payload_size, buf + sizeof(u32), sizeof(u32))) { + pr_err("%s: Could not copy payload size from user buffer\n", + __func__); + goto done; + } + + + if (payload_size > MAX_PAYLOAD_SIZE) { + pr_err("%s: Invalid payload size = %d\n", + __func__, payload_size); + goto done; + } + + if (copy_from_user(&copp_id, buf + 2 * sizeof(u32), sizeof(u32))) { + pr_err("%s: Could not copy port id from user buffer\n", + __func__); + goto done; + } + + for (port_index = 0; port_index < AFE_MAX_PORTS; port_index++) { + if (adm_get_copp_id(port_index) == copp_id) + break; + } + if (port_index >= AFE_MAX_PORTS) { + pr_err("%s: Could not find port index for copp = %d\n", + __func__, copp_id); + goto done; + } + + mutex_lock(&rtac_adm_apr_mutex); + if (rtac_adm_apr_data.apr_handle == NULL) { + pr_err("%s: APR not initialized\n", __func__); + goto err; + } + + /* Set globals for copy of returned payload */ + rtac_adm_user_buf_size = count; + /* Copy buffer to in-band payload */ + if (copy_from_user(rtac_adm_buffer + sizeof(adm_params), + buf + 3 * sizeof(u32), payload_size)) { + pr_err("%s: Could not copy payload from user buffer\n", + __func__); + goto err; + } + + /* Pack header */ + adm_params.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(20), APR_PKT_VER); + adm_params.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + payload_size); + adm_params.src_svc = APR_SVC_ADM; + adm_params.src_domain = APR_DOMAIN_APPS; + adm_params.src_port = copp_id; + adm_params.dest_svc = APR_SVC_ADM; + adm_params.dest_domain = APR_DOMAIN_ADSP; + adm_params.dest_port = copp_id; + adm_params.token = copp_id; + adm_params.opcode = opcode; + + memcpy(rtac_adm_buffer, &adm_params, sizeof(adm_params)); + atomic_set(&rtac_adm_apr_data.cmd_state, 1); + + pr_debug("%s: Sending RTAC command size = %d\n", + 
__func__, adm_params.pkt_size); + + result = apr_send_pkt(rtac_adm_apr_data.apr_handle, + (uint32_t *)rtac_adm_buffer); + if (result < 0) { + pr_err("%s: Set params failed port = %d, copp = %d\n", + __func__, port_index, copp_id); + goto err; + } + /* Wait for the callback */ + result = wait_event_timeout(rtac_adm_apr_data.cmd_wait, + (atomic_read(&rtac_adm_apr_data.cmd_state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + mutex_unlock(&rtac_adm_apr_mutex); + if (!result) { + pr_err("%s: Set params timed out port = %d, copp = %d\n", + __func__, port_index, copp_id); + goto done; + } + + if (rtac_adm_payload_size != 0) { + if (copy_to_user(buf, rtac_adm_buffer, + rtac_adm_payload_size + sizeof(u32))) { + pr_err("%s: Could not copy buffer to user, size = %d\n", + __func__, payload_size); + goto done; + } + } + + /* Return data written for SET & data read for GET */ + if (opcode == ADM_CMD_GET_PARAMS) + bytes_returned = rtac_adm_payload_size; + else + bytes_returned = payload_size; +done: + return bytes_returned; +err: + mutex_unlock(&rtac_adm_apr_mutex); + return bytes_returned; +} + + +/* ASM APR */ +void rtac_set_asm_handle(u32 session_id, void *handle) +{ + pr_debug("%s\n", __func__); + + mutex_lock(&rtac_asm_apr_mutex); + rtac_asm_apr_data[session_id].apr_handle = handle; + mutex_unlock(&rtac_asm_apr_mutex); +} + +bool rtac_make_asm_callback(u32 session_id, uint32_t *payload, + u32 payload_size) +{ + if (atomic_read(&rtac_asm_apr_data[session_id].cmd_state) != 1) + return false; + + pr_debug("%s\n", __func__); + /* Offset data for in-band payload */ + rtac_copy_asm_payload_to_user(payload, payload_size); + atomic_set(&rtac_asm_apr_data[session_id].cmd_state, 0); + wake_up(&rtac_asm_apr_data[session_id].cmd_wait); + return true; +} + +void rtac_copy_asm_payload_to_user(void *payload, u32 payload_size) +{ + pr_debug("%s\n", __func__); + rtac_asm_payload_size = payload_size; + + memcpy(rtac_asm_buffer, &payload_size, sizeof(u32)); + if (payload_size) { + if (payload_size 
> rtac_asm_user_buf_size) { + pr_err("%s: Buffer set not big enough for returned data, buf size = %d, ret data = %d\n", + __func__, rtac_asm_user_buf_size, payload_size); + goto done; + } + memcpy(rtac_asm_buffer + sizeof(u32), payload, payload_size); + } +done: + return; +} + +u32 send_rtac_asm_apr(void *buf, u32 opcode) +{ + s32 result; + u32 count = 0; + u32 bytes_returned = 0; + u32 session_id = 0; + u32 payload_size; + struct apr_hdr asm_params; + pr_debug("%s\n", __func__); + + if (copy_from_user(&count, (void *)buf, sizeof(count))) { + pr_err("%s: Copy to user failed! buf = 0x%x\n", + __func__, (unsigned int)buf); + result = -EFAULT; + goto done; + } + + if (count <= 0) { + pr_err("%s: Invalid buffer size = %d\n", __func__, count); + goto done; + } + + if (copy_from_user(&payload_size, buf + sizeof(u32), sizeof(u32))) { + pr_err("%s: Could not copy payload size from user buffer\n", + __func__); + goto done; + } + + if (payload_size > MAX_PAYLOAD_SIZE) { + pr_err("%s: Invalid payload size = %d\n", + __func__, payload_size); + goto done; + } + + if (copy_from_user(&session_id, buf + 2 * sizeof(u32), sizeof(u32))) { + pr_err("%s: Could not copy session id from user buffer\n", + __func__); + goto done; + } + if (session_id > (SESSION_MAX + 1)) { + pr_err("%s: Invalid Session = %d\n", __func__, session_id); + goto done; + } + + mutex_lock(&rtac_asm_apr_mutex); + if (session_id < SESSION_MAX+1) { + if (rtac_asm_apr_data[session_id].apr_handle == NULL) { + pr_err("%s: APR not initialized\n", __func__); + goto err; + } + } + + /* Set globals for copy of returned payload */ + rtac_asm_user_buf_size = count; + + /* Copy buffer to in-band payload */ + if (copy_from_user(rtac_asm_buffer + sizeof(asm_params), + buf + 3 * sizeof(u32), payload_size)) { + pr_err("%s: Could not copy payload from user buffer\n", + __func__); + goto err; + } + + /* Pack header */ + asm_params.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(20), APR_PKT_VER); + asm_params.pkt_size 
= APR_PKT_SIZE(APR_HDR_SIZE, + payload_size); + asm_params.src_svc = q6asm_get_apr_service_id(session_id); + asm_params.src_domain = APR_DOMAIN_APPS; + asm_params.src_port = (session_id << 8) | 0x0001; + asm_params.dest_svc = APR_SVC_ASM; + asm_params.dest_domain = APR_DOMAIN_ADSP; + asm_params.dest_port = (session_id << 8) | 0x0001; + asm_params.token = session_id; + asm_params.opcode = opcode; + + memcpy(rtac_asm_buffer, &asm_params, sizeof(asm_params)); + if (session_id < SESSION_MAX+1) + atomic_set(&rtac_asm_apr_data[session_id].cmd_state, 1); + + pr_debug("%s: Sending RTAC command size = %d, session_id=%d\n", + __func__, asm_params.pkt_size, session_id); + + result = apr_send_pkt(rtac_asm_apr_data[session_id].apr_handle, + (uint32_t *)rtac_asm_buffer); + if (result < 0) { + pr_err("%s: Set params failed session = %d\n", + __func__, session_id); + goto err; + } + + /* Wait for the callback */ + result = wait_event_timeout(rtac_asm_apr_data[session_id].cmd_wait, + (atomic_read(&rtac_asm_apr_data[session_id].cmd_state) == 0), + 5 * HZ); + mutex_unlock(&rtac_asm_apr_mutex); + if (!result) { + pr_err("%s: Set params timed out session = %d\n", + __func__, session_id); + goto done; + } + + if (rtac_asm_payload_size != 0) { + if (copy_to_user(buf, rtac_asm_buffer, + rtac_asm_payload_size + sizeof(u32))) { + pr_err("%s: Could not copy buffer to user,size = %d\n", + __func__, payload_size); + goto done; + } + } + + /* Return data written for SET & data read for GET */ + if (opcode == ASM_STREAM_CMD_GET_PP_PARAMS) + bytes_returned = rtac_asm_payload_size; + else + bytes_returned = payload_size; +done: + return bytes_returned; +err: + mutex_unlock(&rtac_asm_apr_mutex); + return bytes_returned; +} + + +/* Voice APR */ +void rtac_set_voice_handle(u32 mode, void *handle) +{ + pr_debug("%s\n", __func__); + mutex_lock(&rtac_voice_apr_mutex); + rtac_voice_apr_data[mode].apr_handle = handle; + mutex_unlock(&rtac_voice_apr_mutex); +} + +bool rtac_make_voice_callback(u32 mode, 
uint32_t *payload, u32 payload_size) +{ + if ((atomic_read(&rtac_voice_apr_data[mode].cmd_state) != 1) || + (mode >= RTAC_VOICE_MODES)) + return false; + + pr_debug("%s\n", __func__); + /* Offset data for in-band payload */ + rtac_copy_voice_payload_to_user(payload, payload_size); + atomic_set(&rtac_voice_apr_data[mode].cmd_state, 0); + wake_up(&rtac_voice_apr_data[mode].cmd_wait); + return true; +} + +void rtac_copy_voice_payload_to_user(void *payload, u32 payload_size) +{ + pr_debug("%s\n", __func__); + rtac_voice_payload_size = payload_size; + + memcpy(rtac_voice_buffer, &payload_size, sizeof(u32)); + if (payload_size) { + if (payload_size > rtac_voice_user_buf_size) { + pr_err("%s: Buffer set not big enough for returned data, buf size = %d, ret data = %d\n", + __func__, rtac_voice_user_buf_size, payload_size); + goto done; + } + memcpy(rtac_voice_buffer + sizeof(u32), payload, payload_size); + } +done: + return; +} + +u32 send_voice_apr(u32 mode, void *buf, u32 opcode) +{ + s32 result; + u32 count = 0; + u32 bytes_returned = 0; + u32 payload_size; + u32 dest_port; + struct apr_hdr voice_params; + pr_debug("%s\n", __func__); + + if (copy_from_user(&count, (void *)buf, sizeof(count))) { + pr_err("%s: Copy to user failed! 
buf = 0x%x\n", + __func__, (unsigned int)buf); + result = -EFAULT; + goto done; + } + + if (count <= 0) { + pr_err("%s: Invalid buffer size = %d\n", __func__, count); + goto done; + } + + if (copy_from_user(&payload_size, buf + sizeof(payload_size), + sizeof(payload_size))) { + pr_err("%s: Could not copy payload size from user buffer\n", + __func__); + goto done; + } + + if (payload_size > MAX_PAYLOAD_SIZE) { + pr_err("%s: Invalid payload size = %d\n", + __func__, payload_size); + goto done; + } + + if (copy_from_user(&dest_port, buf + 2 * sizeof(dest_port), + sizeof(dest_port))) { + pr_err("%s: Could not copy port id from user buffer\n", + __func__); + goto done; + } + + if ((mode != RTAC_CVP) && (mode != RTAC_CVS)) { + pr_err("%s: Invalid Mode for APR, mode = %d\n", + __func__, mode); + goto done; + } + + mutex_lock(&rtac_voice_apr_mutex); + if (rtac_voice_apr_data[mode].apr_handle == NULL) { + pr_err("%s: APR not initialized\n", __func__); + goto err; + } + + /* Set globals for copy of returned payload */ + rtac_voice_user_buf_size = count; + + /* Copy buffer to in-band payload */ + if (copy_from_user(rtac_voice_buffer + sizeof(voice_params), + buf + 3 * sizeof(u32), payload_size)) { + pr_err("%s: Could not copy payload from user buffer\n", + __func__); + goto err; + } + + /* Pack header */ + voice_params.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(20), APR_PKT_VER); + voice_params.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + payload_size); + voice_params.src_svc = 0; + voice_params.src_domain = APR_DOMAIN_APPS; + voice_params.src_port = voice_session_id[ + get_voice_index(mode, dest_port)]; + voice_params.dest_svc = 0; + voice_params.dest_domain = APR_DOMAIN_MODEM; + voice_params.dest_port = (u16)dest_port; + voice_params.token = 0; + voice_params.opcode = opcode; + + memcpy(rtac_voice_buffer, &voice_params, sizeof(voice_params)); + atomic_set(&rtac_voice_apr_data[mode].cmd_state, 1); + + pr_debug("%s: Sending RTAC command size = %d, opcode = 
%x\n", + __func__, voice_params.pkt_size, opcode); + + result = apr_send_pkt(rtac_voice_apr_data[mode].apr_handle, + (uint32_t *)rtac_voice_buffer); + if (result < 0) { + pr_err("%s: apr_send_pkt failed opcode = %x\n", + __func__, opcode); + goto err; + } + /* Wait for the callback */ + result = wait_event_timeout(rtac_voice_apr_data[mode].cmd_wait, + (atomic_read(&rtac_voice_apr_data[mode].cmd_state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + mutex_unlock(&rtac_voice_apr_mutex); + if (!result) { + pr_err("%s: apr_send_pkt timed out opcode = %x\n", + __func__, opcode); + goto done; + } + + if (rtac_voice_payload_size != 0) { + if (copy_to_user(buf, rtac_voice_buffer, + rtac_voice_payload_size + sizeof(u32))) { + pr_err("%s: Could not copy buffer to user,size = %d\n", + __func__, payload_size); + goto done; + } + } + + /* Return data written for SET & data read for GET */ + if (opcode == VOICE_CMD_GET_PARAM) + bytes_returned = rtac_voice_payload_size; + else + bytes_returned = payload_size; +done: + return bytes_returned; +err: + mutex_unlock(&rtac_voice_apr_mutex); + return bytes_returned; +} + + + +static long rtac_ioctl(struct file *f, + unsigned int cmd, unsigned long arg) +{ + s32 result = 0; + pr_debug("%s\n", __func__); + + if (arg == 0) { + pr_err("%s: No data sent to driver!\n", __func__); + result = -EFAULT; + goto done; + } + + switch (cmd) { + case AUDIO_GET_RTAC_ADM_INFO: + if (copy_to_user((void *)arg, &rtac_adm_data, + sizeof(rtac_adm_data))) + pr_err("%s: Could not copy to userspace!\n", __func__); + else + result = sizeof(rtac_adm_data); + break; + case AUDIO_GET_RTAC_VOICE_INFO: + if (copy_to_user((void *)arg, &rtac_voice_data, + sizeof(rtac_voice_data))) + pr_err("%s: Could not copy to userspace!\n", __func__); + else + result = sizeof(rtac_voice_data); + break; + case AUDIO_GET_RTAC_ADM_CAL: + result = send_adm_apr((void *)arg, ADM_CMD_GET_PARAMS); + break; + case AUDIO_SET_RTAC_ADM_CAL: + result = send_adm_apr((void *)arg, ADM_CMD_SET_PARAMS); + 
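`send_adm_apr()`, `send_rtac_asm_apr()`, and `send_voice_apr()` all parse the same user-buffer layout: three u32 words (total count, payload size, destination id) followed by the in-band payload, which is then repacked behind an `apr_hdr` before transmission. A userspace sketch of that offset arithmetic, with a deliberately reduced stand-in header (the real `struct apr_hdr` carries src/dest service, domain, port, and token fields):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Reduced stand-in for struct apr_hdr; hypothetical fields. */
struct hdr {
    uint32_t opcode;
    uint32_t pkt_size;
};

/* User layout: [count][payload_size][dest_id][payload...]
 * Kernel layout built here: [hdr][payload...] */
static size_t pack_cmd(uint8_t *out, const uint8_t *user, uint32_t opcode)
{
    uint32_t payload_size;
    struct hdr h;

    /* Word 1 of the user buffer is the in-band payload size. */
    memcpy(&payload_size, user + sizeof(uint32_t), sizeof(uint32_t));

    h.opcode = opcode;
    h.pkt_size = (uint32_t)(sizeof(h) + payload_size);

    memcpy(out, &h, sizeof(h));                      /* header first */
    memcpy(out + sizeof(h), user + 3 * sizeof(uint32_t),
           payload_size);                            /* then payload */
    return sizeof(h) + payload_size;
}
```

The driver validates `payload_size` against `MAX_PAYLOAD_SIZE` before this copy, which is what keeps the in-band repack within the fixed `RTAC_BUF_SIZE` buffer.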
break; + case AUDIO_GET_RTAC_ASM_CAL: + result = send_rtac_asm_apr((void *)arg, + ASM_STREAM_CMD_GET_PP_PARAMS); + break; + case AUDIO_SET_RTAC_ASM_CAL: + result = send_rtac_asm_apr((void *)arg, + ASM_STREAM_CMD_SET_PP_PARAMS); + break; + case AUDIO_GET_RTAC_CVS_CAL: + result = send_voice_apr(RTAC_CVS, (void *)arg, + VOICE_CMD_GET_PARAM); + break; + case AUDIO_SET_RTAC_CVS_CAL: + result = send_voice_apr(RTAC_CVS, (void *)arg, + VOICE_CMD_SET_PARAM); + break; + case AUDIO_GET_RTAC_CVP_CAL: + result = send_voice_apr(RTAC_CVP, (void *)arg, + VOICE_CMD_GET_PARAM); + break; + case AUDIO_SET_RTAC_CVP_CAL: + result = send_voice_apr(RTAC_CVP, (void *)arg, + VOICE_CMD_SET_PARAM); + break; + default: + pr_err("%s: Invalid IOCTL, command = %d!\n", + __func__, cmd); + } +done: + return result; +} + + +static const struct file_operations rtac_fops = { + .owner = THIS_MODULE, + .open = rtac_open, + .release = rtac_release, + .unlocked_ioctl = rtac_ioctl, +}; + +struct miscdevice rtac_misc = { + .minor = MISC_DYNAMIC_MINOR, + .name = "msm_rtac", + .fops = &rtac_fops, +}; + +static int __init rtac_init(void) +{ + int i = 0; + pr_debug("%s\n", __func__); + + /* ADM */ + memset(&rtac_adm_data, 0, sizeof(rtac_adm_data)); + rtac_adm_apr_data.apr_handle = NULL; + atomic_set(&rtac_adm_apr_data.cmd_state, 0); + init_waitqueue_head(&rtac_adm_apr_data.cmd_wait); + mutex_init(&rtac_adm_mutex); + mutex_init(&rtac_adm_apr_mutex); + + rtac_adm_buffer = kzalloc(RTAC_BUF_SIZE, GFP_KERNEL); + if (rtac_adm_buffer == NULL) { + pr_err("%s: Could not allocate payload of size = %d\n", + __func__, RTAC_BUF_SIZE); + goto nomem; + } + + /* ASM */ + for (i = 0; i < SESSION_MAX+1; i++) { + rtac_asm_apr_data[i].apr_handle = NULL; + atomic_set(&rtac_asm_apr_data[i].cmd_state, 0); + init_waitqueue_head(&rtac_asm_apr_data[i].cmd_wait); + } + mutex_init(&rtac_asm_apr_mutex); + + rtac_asm_buffer = kzalloc(RTAC_BUF_SIZE, GFP_KERNEL); + if (rtac_asm_buffer == NULL) { + pr_err("%s: Could not allocate payload of 
size = %d\n", + __func__, RTAC_BUF_SIZE); + kzfree(rtac_adm_buffer); + goto nomem; + } + + /* Voice */ + memset(&rtac_voice_data, 0, sizeof(rtac_voice_data)); + for (i = 0; i < RTAC_VOICE_MODES; i++) { + rtac_voice_apr_data[i].apr_handle = NULL; + atomic_set(&rtac_voice_apr_data[i].cmd_state, 0); + init_waitqueue_head(&rtac_voice_apr_data[i].cmd_wait); + } + mutex_init(&rtac_voice_mutex); + mutex_init(&rtac_voice_apr_mutex); + + rtac_voice_buffer = kzalloc(RTAC_BUF_SIZE, GFP_KERNEL); + if (rtac_voice_buffer == NULL) { + pr_err("%s: Could not allocate payload of size = %d\n", + __func__, RTAC_BUF_SIZE); + kzfree(rtac_adm_buffer); + kzfree(rtac_asm_buffer); + goto nomem; + } + + return misc_register(&rtac_misc); +nomem: + return -ENOMEM; +} + +module_init(rtac_init); + +MODULE_DESCRIPTION("MSM 8x60 Real-Time Audio Calibration driver"); +MODULE_LICENSE("GPL v2"); + +#endif diff --git a/sound/soc/msm/qdsp6/q6adm.c b/sound/soc/msm/qdsp6/q6adm.c new file mode 100644 index 000000000000..b565411a5f7c --- /dev/null +++ b/sound/soc/msm/qdsp6/q6adm.c @@ -0,0 +1,1241 @@ +/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
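`rtac_init()` above allocates three fixed-size buffers and, on any failure, frees the ones already allocated before returning `-ENOMEM`. A portable sketch of that setup-with-rollback, written in the goto-unwind style common in kernel code (the driver instead frees inline before a shared `nomem` label; names here are hypothetical and `-ENOMEM` is modelled as `-1`):

```c
#include <assert.h>
#include <stdlib.h>

/* Three-buffer setup mirroring rtac_init(): on any allocation
 * failure, release what was already allocated and report failure. */
static void *buf_a, *buf_b, *buf_c;

static int buffers_init(size_t sz)
{
    buf_a = calloc(1, sz);         /* like kzalloc(RTAC_BUF_SIZE) */
    if (!buf_a)
        goto nomem;

    buf_b = calloc(1, sz);
    if (!buf_b)
        goto free_a;

    buf_c = calloc(1, sz);
    if (!buf_c)
        goto free_b;

    return 0;

free_b:                            /* unwind in reverse order */
    free(buf_b);
free_a:
    free(buf_a);
nomem:
    return -1;
}
```

Unwinding in reverse allocation order means each label frees exactly one resource, so adding a fourth buffer only adds one label rather than touching every error path.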
+ */ + +#include <linux/slab.h> +#include <linux/wait.h> +#include <linux/sched.h> +#include <linux/jiffies.h> +#include <linux/uaccess.h> +#include <linux/atomic.h> +#include <linux/err.h> + +#include <sound/qdsp6v2/audio_dev_ctl.h> +#include <sound/qdsp6v2/audio_acdb.h> +#include <sound/qdsp6v2/rtac.h> + +#include <sound/apr_audio.h> +#include <sound/q6afe.h> + +#define TIMEOUT_MS 1000 +#define AUDIO_RX 0x0 +#define AUDIO_TX 0x1 + +#define ASM_MAX_SESSION 0x8 /* To do: define in a header */ +#define RESET_COPP_ID 99 +#define INVALID_COPP_ID 0xFF + +struct adm_ctl { + void *apr; + atomic_t copp_id[AFE_MAX_PORTS]; + atomic_t copp_cnt[AFE_MAX_PORTS]; + atomic_t copp_stat[AFE_MAX_PORTS]; + wait_queue_head_t wait; + int ec_ref_rx; +}; + +static struct acdb_cal_block mem_addr_audproc[MAX_AUDPROC_TYPES]; +static struct acdb_cal_block mem_addr_audvol[MAX_AUDPROC_TYPES]; + +static struct adm_ctl this_adm; + + +int srs_trumedia_open(int port_id, int srs_tech_id, void *srs_params) +{ + struct asm_pp_params_command *open = NULL; + int ret = 0, sz = 0; + int index; + + pr_debug("SRS - %s", __func__); + + index = afe_get_port_index(port_id); + + if (IS_ERR_VALUE(index)) { + pr_err("%s: invald port id\n", __func__); + return index; + } + + switch (srs_tech_id) { + case SRS_ID_GLOBAL: { + struct srs_trumedia_params_GLOBAL *glb_params = NULL; + sz = sizeof(struct asm_pp_params_command) + + sizeof(struct srs_trumedia_params_GLOBAL); + open = kzalloc(sz, GFP_KERNEL); + open->payload_size = sizeof(struct srs_trumedia_params_GLOBAL) + + sizeof(struct asm_pp_param_data_hdr); + open->params.param_id = SRS_TRUMEDIA_PARAMS; + open->params.param_size = + sizeof(struct srs_trumedia_params_GLOBAL); + glb_params = (struct srs_trumedia_params_GLOBAL *)((u8 *)open + + sizeof(struct asm_pp_params_command)); + memcpy(glb_params, srs_params, + sizeof(struct srs_trumedia_params_GLOBAL)); + pr_debug("SRS - %s: Global params - 1 = %x, 2 = %x, 3 = %x," + " 4 = %x, 5 = %x, 6 = %x, 7 = %x, 8 = %x\n", + 
__func__, (int)glb_params->v1, + (int)glb_params->v2, (int)glb_params->v3, + (int)glb_params->v4, (int)glb_params->v5, + (int)glb_params->v6, (int)glb_params->v7, + (int)glb_params->v8); + break; + } + case SRS_ID_WOWHD: { + struct srs_trumedia_params_WOWHD *whd_params = NULL; + sz = sizeof(struct asm_pp_params_command) + + sizeof(struct srs_trumedia_params_WOWHD); + open = kzalloc(sz, GFP_KERNEL); + open->payload_size = sizeof(struct srs_trumedia_params_WOWHD) + + sizeof(struct asm_pp_param_data_hdr); + open->params.param_id = SRS_TRUMEDIA_PARAMS_WOWHD; + open->params.param_size = + sizeof(struct srs_trumedia_params_WOWHD); + whd_params = (struct srs_trumedia_params_WOWHD *)((u8 *)open + + sizeof(struct asm_pp_params_command)); + memcpy(whd_params, srs_params, + sizeof(struct srs_trumedia_params_WOWHD)); + pr_debug("SRS - %s: WOWHD params - 1 = %x, 2 = %x, 3 = %x," + " 4 = %x, 5 = %x, 6 = %x, 7 = %x, 8 = %x, 9 = %x," + " 10 = %x, 11 = %x\n", __func__, (int)whd_params->v1, + (int)whd_params->v2, (int)whd_params->v3, + (int)whd_params->v4, (int)whd_params->v5, + (int)whd_params->v6, (int)whd_params->v7, + (int)whd_params->v8, (int)whd_params->v9, + (int)whd_params->v10, (int)whd_params->v11); + break; + } + case SRS_ID_CSHP: { + struct srs_trumedia_params_CSHP *chp_params = NULL; + sz = sizeof(struct asm_pp_params_command) + + sizeof(struct srs_trumedia_params_CSHP); + open = kzalloc(sz, GFP_KERNEL); + open->payload_size = sizeof(struct srs_trumedia_params_CSHP) + + sizeof(struct asm_pp_param_data_hdr); + open->params.param_id = SRS_TRUMEDIA_PARAMS_CSHP; + open->params.param_size = + sizeof(struct srs_trumedia_params_CSHP); + chp_params = (struct srs_trumedia_params_CSHP *)((u8 *)open + + sizeof(struct asm_pp_params_command)); + memcpy(chp_params, srs_params, + sizeof(struct srs_trumedia_params_CSHP)); + pr_debug("SRS - %s: CSHP params - 1 = %x, 2 = %x, 3 = %x," + " 4 = %x, 5 = %x, 6 = %x, 7 = %x, 8 = %x," + " 9 = %x\n", __func__, (int)chp_params->v1, + 
(int)chp_params->v2, (int)chp_params->v3, + (int)chp_params->v4, (int)chp_params->v5, + (int)chp_params->v6, (int)chp_params->v7, + (int)chp_params->v8, (int)chp_params->v9); + break; + } + case SRS_ID_HPF: { + struct srs_trumedia_params_HPF *hpf_params = NULL; + sz = sizeof(struct asm_pp_params_command) + + sizeof(struct srs_trumedia_params_HPF); + open = kzalloc(sz, GFP_KERNEL); + open->payload_size = sizeof(struct srs_trumedia_params_HPF) + + sizeof(struct asm_pp_param_data_hdr); + open->params.param_id = SRS_TRUMEDIA_PARAMS_HPF; + open->params.param_size = + sizeof(struct srs_trumedia_params_HPF); + hpf_params = (struct srs_trumedia_params_HPF *)((u8 *)open + + sizeof(struct asm_pp_params_command)); + memcpy(hpf_params, srs_params, + sizeof(struct srs_trumedia_params_HPF)); + pr_debug("SRS - %s: HPF params - 1 = %x\n", __func__, + (int)hpf_params->v1); + break; + } + case SRS_ID_PEQ: { + struct srs_trumedia_params_PEQ *peq_params = NULL; + sz = sizeof(struct asm_pp_params_command) + + sizeof(struct srs_trumedia_params_PEQ); + open = kzalloc(sz, GFP_KERNEL); + open->payload_size = sizeof(struct srs_trumedia_params_PEQ) + + sizeof(struct asm_pp_param_data_hdr); + open->params.param_id = SRS_TRUMEDIA_PARAMS_PEQ; + open->params.param_size = + sizeof(struct srs_trumedia_params_PEQ); + peq_params = (struct srs_trumedia_params_PEQ *)((u8 *)open + + sizeof(struct asm_pp_params_command)); + memcpy(peq_params, srs_params, + sizeof(struct srs_trumedia_params_PEQ)); + pr_debug("SRS - %s: PEQ params - 1 = %x 2 = %x, 3 = %x," + " 4 = %x\n", __func__, (int)peq_params->v1, + (int)peq_params->v2, (int)peq_params->v3, + (int)peq_params->v4); + break; + } + case SRS_ID_HL: { + struct srs_trumedia_params_HL *hl_params = NULL; + sz = sizeof(struct asm_pp_params_command) + + sizeof(struct srs_trumedia_params_HL); + open = kzalloc(sz, GFP_KERNEL); + open->payload_size = sizeof(struct srs_trumedia_params_HL) + + sizeof(struct asm_pp_param_data_hdr); + open->params.param_id = 
SRS_TRUMEDIA_PARAMS_HL; + open->params.param_size = sizeof(struct srs_trumedia_params_HL); + hl_params = (struct srs_trumedia_params_HL *)((u8 *)open + + sizeof(struct asm_pp_params_command)); + memcpy(hl_params, srs_params, + sizeof(struct srs_trumedia_params_HL)); + pr_debug("SRS - %s: HL params - 1 = %x, 2 = %x, 3 = %x, 4 = %x," + " 5 = %x, 6 = %x, 7 = %x\n", __func__, + (int)hl_params->v1, (int)hl_params->v2, + (int)hl_params->v3, (int)hl_params->v4, + (int)hl_params->v5, (int)hl_params->v6, + (int)hl_params->v7); + break; + } + default: + ret = -EINVAL; + goto fail_cmd; + } + + open->payload = NULL; + open->params.module_id = SRS_TRUMEDIA_MODULE_ID; + open->params.reserved = 0; + open->hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + open->hdr.pkt_size = sz; + open->hdr.src_svc = APR_SVC_ADM; + open->hdr.src_domain = APR_DOMAIN_APPS; + open->hdr.src_port = port_id; + open->hdr.dest_svc = APR_SVC_ADM; + open->hdr.dest_domain = APR_DOMAIN_ADSP; + open->hdr.dest_port = atomic_read(&this_adm.copp_id[index]); + open->hdr.token = port_id; + open->hdr.opcode = ADM_CMD_SET_PARAMS; + pr_debug("SRS - %s: Command was sent now check Q6 - port id = %d," + " size %d, module id %x, param id %x.\n", __func__, + open->hdr.dest_port, open->payload_size, + open->params.module_id, open->params.param_id); + + atomic_set(&this_adm.copp_stat[index], 0); + ret = apr_send_pkt(this_adm.apr, (uint32_t *)open); + if (ret < 0) { + pr_err("SRS - %s: ADM enable for port %d failed\n", __func__, + port_id); + ret = -EINVAL; + goto fail_cmd; + } + /* Wait for the callback with copp id */ + ret = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[index]), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("SRS - %s: ADM open failed for port %d\n", __func__, + port_id); + ret = -EINVAL; + goto fail_cmd; + } + ret = 0; + +fail_cmd: + kfree(open); + return ret; +} + +static int32_t adm_callback(struct apr_client_data *data, void *priv) +{ + uint32_t *payload; + int i, index; + payload = data->payload; + + if (data->opcode == RESET_EVENTS)
{ + pr_debug("adm_callback: Reset event is received: %d %d apr[%p]\n", + data->reset_event, data->reset_proc, + this_adm.apr); + if (this_adm.apr) { + apr_reset(this_adm.apr); + for (i = 0; i < AFE_MAX_PORTS; i++) { + atomic_set(&this_adm.copp_id[i], + RESET_COPP_ID); + atomic_set(&this_adm.copp_cnt[i], 0); + atomic_set(&this_adm.copp_stat[i], 0); + } + this_adm.apr = NULL; + } + pr_debug("Resetting calibration blocks"); + for (i = 0; i < MAX_AUDPROC_TYPES; i++) { + /* Device calibration */ + mem_addr_audproc[i].cal_size = 0; + mem_addr_audproc[i].cal_kvaddr = 0; + mem_addr_audproc[i].cal_paddr = 0; + + /* Volume calibration */ + mem_addr_audvol[i].cal_size = 0; + mem_addr_audvol[i].cal_kvaddr = 0; + mem_addr_audvol[i].cal_paddr = 0; + } + return 0; + } + + pr_debug("%s: code = 0x%x %x %x size = %d\n", __func__, + data->opcode, payload[0], payload[1], + data->payload_size); + + if (data->payload_size) { + index = afe_get_port_index(data->token); + pr_debug("%s: Port ID %d, index %d\n", __func__, + data->token, index); + if (index < 0 || index >= AFE_MAX_PORTS) { + pr_err("%s: invalid port idx %d token %d\n", + __func__, index, data->token); + return 0; + } + if (data->opcode == APR_BASIC_RSP_RESULT) { + pr_debug("APR_BASIC_RSP_RESULT id %x\n", payload[0]); + switch (payload[0]) { + case ADM_CMD_SET_PARAMS: + if (rtac_make_adm_callback(payload, + data->payload_size)) + break; + case ADM_CMD_COPP_CLOSE: + case ADM_CMD_MEMORY_MAP: + case ADM_CMD_MEMORY_UNMAP: + case ADM_CMD_MEMORY_MAP_REGIONS: + case ADM_CMD_MEMORY_UNMAP_REGIONS: + case ADM_CMD_MATRIX_MAP_ROUTINGS: + case ADM_CMD_CONNECT_AFE_PORT: + case ADM_CMD_DISCONNECT_AFE_PORT: + atomic_set(&this_adm.copp_stat[index], 1); + wake_up(&this_adm.wait); + break; + default: + pr_err("%s: Unknown Cmd: 0x%x\n", __func__, + payload[0]); + break; + } + return 0; + } + + switch (data->opcode) { + case ADM_CMDRSP_COPP_OPEN: + case ADM_CMDRSP_MULTI_CHANNEL_COPP_OPEN: + case ADM_CMDRSP_MULTI_CHANNEL_COPP_OPEN_V3: { + struct 
adm_copp_open_respond *open = data->payload; + if (open->copp_id == INVALID_COPP_ID) { + pr_err("%s: invalid coppid rxed %d\n", + __func__, open->copp_id); + atomic_set(&this_adm.copp_stat[index], 1); + wake_up(&this_adm.wait); + break; + } + atomic_set(&this_adm.copp_id[index], open->copp_id); + atomic_set(&this_adm.copp_stat[index], 1); + pr_debug("%s: coppid rxed=%d\n", __func__, + open->copp_id); + wake_up(&this_adm.wait); + } + break; + case ADM_CMDRSP_GET_PARAMS: + pr_debug("%s: ADM_CMDRSP_GET_PARAMS\n", __func__); + rtac_make_adm_callback(payload, + data->payload_size); + break; + default: + pr_err("%s: Unknown cmd:0x%x\n", __func__, + data->opcode); + break; + } + } + return 0; +} + +static int send_adm_cal_block(int port_id, struct acdb_cal_block *aud_cal) +{ + s32 result = 0; + struct adm_set_params_command adm_params; + int index = afe_get_port_index(port_id); + if (index < 0 || index >= AFE_MAX_PORTS) { + pr_err("%s: invalid port idx %d portid %d\n", + __func__, index, port_id); + return 0; + } + + pr_debug("%s: Port id %d, index %d\n", __func__, port_id, index); + + if (!aud_cal || aud_cal->cal_size == 0) { + pr_debug("%s: No ADM cal to send for port_id = %d!\n", + __func__, port_id); + result = -EINVAL; + goto done; + } + + adm_params.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(20), APR_PKT_VER); + adm_params.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(adm_params)); + adm_params.hdr.src_svc = APR_SVC_ADM; + adm_params.hdr.src_domain = APR_DOMAIN_APPS; + adm_params.hdr.src_port = port_id; + adm_params.hdr.dest_svc = APR_SVC_ADM; + adm_params.hdr.dest_domain = APR_DOMAIN_ADSP; + adm_params.hdr.dest_port = atomic_read(&this_adm.copp_id[index]); + adm_params.hdr.token = port_id; + adm_params.hdr.opcode = ADM_CMD_SET_PARAMS; + adm_params.payload = aud_cal->cal_paddr; + adm_params.payload_size = aud_cal->cal_size; + + atomic_set(&this_adm.copp_stat[index], 0); + pr_debug("%s: Sending SET_PARAMS payload = 0x%x, size = %d\n", + 
__func__, adm_params.payload, adm_params.payload_size); + result = apr_send_pkt(this_adm.apr, (uint32_t *)&adm_params); + if (result < 0) { + pr_err("%s: Set params failed port = %d payload = 0x%x\n", + __func__, port_id, aud_cal->cal_paddr); + result = -EINVAL; + goto done; + } + /* Wait for the callback */ + result = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[index]), + msecs_to_jiffies(TIMEOUT_MS)); + if (!result) { + pr_err("%s: Set params timed out port = %d, payload = 0x%x\n", + __func__, port_id, aud_cal->cal_paddr); + result = -EINVAL; + goto done; + } + + result = 0; +done: + return result; +} + +static void send_adm_cal(int port_id, int path) +{ + int result = 0; + s32 acdb_path; + struct acdb_cal_block aud_cal; + + pr_debug("%s\n", __func__); + + /* Maps audio_dev_ctrl path definition to ACDB definition */ + acdb_path = path - 1; + + pr_debug("%s: Sending audproc cal\n", __func__); + get_audproc_cal(acdb_path, &aud_cal); + + /* map & cache buffers used */ + if (((mem_addr_audproc[acdb_path].cal_paddr != aud_cal.cal_paddr) && + (aud_cal.cal_size > 0)) || + (aud_cal.cal_size > mem_addr_audproc[acdb_path].cal_size)) { + + if (mem_addr_audproc[acdb_path].cal_paddr != 0) + adm_memory_unmap_regions( + &mem_addr_audproc[acdb_path].cal_paddr, + &mem_addr_audproc[acdb_path].cal_size, 1); + + result = adm_memory_map_regions(&aud_cal.cal_paddr, 0, + &aud_cal.cal_size, 1); + if (result < 0) + pr_err("ADM audproc mmap did not work! 
path = %d, " + "addr = 0x%x, size = %d\n", acdb_path, + aud_cal.cal_paddr, aud_cal.cal_size); + else + mem_addr_audproc[acdb_path] = aud_cal; + } + + if (!send_adm_cal_block(port_id, &aud_cal)) + pr_debug("%s: Audproc cal sent for port id: %d, path %d\n", + __func__, port_id, acdb_path); + else + pr_debug("%s: Audproc cal not sent for port id: %d, path %d\n", + __func__, port_id, acdb_path); + + pr_debug("%s: Sending audvol cal\n", __func__); + get_audvol_cal(acdb_path, &aud_cal); + + /* map & cache buffers used */ + if (((mem_addr_audvol[acdb_path].cal_paddr != aud_cal.cal_paddr) && + (aud_cal.cal_size > 0)) || + (aud_cal.cal_size > mem_addr_audvol[acdb_path].cal_size)) { + if (mem_addr_audvol[acdb_path].cal_paddr != 0) + adm_memory_unmap_regions( + &mem_addr_audvol[acdb_path].cal_paddr, + &mem_addr_audvol[acdb_path].cal_size, 1); + + result = adm_memory_map_regions(&aud_cal.cal_paddr, 0, + &aud_cal.cal_size, 1); + if (result < 0) + pr_err("ADM audvol mmap did not work! path = %d, " + "addr = 0x%x, size = %d\n", acdb_path, + aud_cal.cal_paddr, aud_cal.cal_size); + else + mem_addr_audvol[acdb_path] = aud_cal; + } + + if (!send_adm_cal_block(port_id, &aud_cal)) + pr_debug("%s: Audvol cal sent for port id: %d, path %d\n", + __func__, port_id, acdb_path); + else + pr_debug("%s: Audvol cal not sent for port id: %d, path %d\n", + __func__, port_id, acdb_path); +} + +int adm_connect_afe_port(int mode, int session_id, int port_id) +{ + struct adm_cmd_connect_afe_port cmd; + int ret = 0; + int index; + + pr_debug("%s: port %d session id:%d mode:%d\n", __func__, + port_id, session_id, mode); + + port_id = afe_convert_virtual_to_portid(port_id); + + if (afe_validate_port(port_id) < 0) { + pr_err("%s port idi[%d] is invalid\n", __func__, port_id); + return -ENODEV; + } + if (this_adm.apr == NULL) { + this_adm.apr = apr_register("ADSP", "ADM", adm_callback, + 0xFFFFFFFF, &this_adm); + if (this_adm.apr == NULL) { + pr_err("%s: Unable to register ADM\n", __func__); + ret = 
-ENODEV; + return ret; + } + rtac_set_adm_handle(this_adm.apr); + } + index = afe_get_port_index(port_id); + pr_debug("%s: Port ID %d, index %d\n", __func__, port_id, index); + + cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cmd.hdr.pkt_size = sizeof(cmd); + cmd.hdr.src_svc = APR_SVC_ADM; + cmd.hdr.src_domain = APR_DOMAIN_APPS; + cmd.hdr.src_port = port_id; + cmd.hdr.dest_svc = APR_SVC_ADM; + cmd.hdr.dest_domain = APR_DOMAIN_ADSP; + cmd.hdr.dest_port = port_id; + cmd.hdr.token = port_id; + cmd.hdr.opcode = ADM_CMD_CONNECT_AFE_PORT; + + cmd.mode = mode; + cmd.session_id = session_id; + cmd.afe_port_id = port_id; + + atomic_set(&this_adm.copp_stat[index], 0); + ret = apr_send_pkt(this_adm.apr, (uint32_t *)&cmd); + if (ret < 0) { + pr_err("%s:ADM enable for port %d failed\n", + __func__, port_id); + ret = -EINVAL; + goto fail_cmd; + } + /* Wait for the callback with copp id */ + ret = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[index]), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s ADM connect AFE failed for port %d\n", __func__, + port_id); + ret = -EINVAL; + goto fail_cmd; + } + atomic_inc(&this_adm.copp_cnt[index]); + return 0; + +fail_cmd: + + return ret; +} + +int adm_disconnect_afe_port(int mode, int session_id, int port_id) +{ + struct adm_cmd_connect_afe_port cmd; + int ret = 0; + int index; + + pr_debug("%s: port %d session id:%d mode:%d\n", __func__, + port_id, session_id, mode); + + port_id = afe_convert_virtual_to_portid(port_id); + + if (afe_validate_port(port_id) < 0) { + pr_err("%s port idi[%d] is invalid\n", __func__, port_id); + return -ENODEV; + } + if (this_adm.apr == NULL) { + this_adm.apr = apr_register("ADSP", "ADM", adm_callback, + 0xFFFFFFFF, &this_adm); + if (this_adm.apr == NULL) { + pr_err("%s: Unable to register ADM\n", __func__); + ret = -ENODEV; + return ret; + } + rtac_set_adm_handle(this_adm.apr); + } + index = afe_get_port_index(port_id); + 
pr_debug("%s: Port ID %d, index %d\n", __func__, port_id, index); + + cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cmd.hdr.pkt_size = sizeof(cmd); + cmd.hdr.src_svc = APR_SVC_ADM; + cmd.hdr.src_domain = APR_DOMAIN_APPS; + cmd.hdr.src_port = port_id; + cmd.hdr.dest_svc = APR_SVC_ADM; + cmd.hdr.dest_domain = APR_DOMAIN_ADSP; + cmd.hdr.dest_port = port_id; + cmd.hdr.token = port_id; + cmd.hdr.opcode = ADM_CMD_DISCONNECT_AFE_PORT; + + cmd.mode = mode; + cmd.session_id = session_id; + cmd.afe_port_id = port_id; + + atomic_set(&this_adm.copp_stat[index], 0); + ret = apr_send_pkt(this_adm.apr, (uint32_t *)&cmd); + if (ret < 0) { + pr_err("%s:ADM enable for port %d failed\n", + __func__, port_id); + ret = -EINVAL; + goto fail_cmd; + } + /* Wait for the callback with copp id */ + ret = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[index]), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s ADM connect AFE failed for port %d\n", __func__, + port_id); + ret = -EINVAL; + goto fail_cmd; + } + atomic_dec(&this_adm.copp_cnt[index]); + return 0; + +fail_cmd: + + return ret; +} + +int adm_open(int port_id, int path, int rate, int channel_mode, int topology) +{ + struct adm_copp_open_command open; + int ret = 0; + int index; + + pr_debug("%s: port %d path:%d rate:%d mode:%d\n", __func__, + port_id, path, rate, channel_mode); + + port_id = afe_convert_virtual_to_portid(port_id); + + if (afe_validate_port(port_id) < 0) { + pr_err("%s port idi[%d] is invalid\n", __func__, port_id); + return -ENODEV; + } + + index = afe_get_port_index(port_id); + pr_debug("%s: Port ID %d, index %d\n", __func__, port_id, index); + + if (this_adm.apr == NULL) { + this_adm.apr = apr_register("ADSP", "ADM", adm_callback, + 0xFFFFFFFF, &this_adm); + if (this_adm.apr == NULL) { + pr_err("%s: Unable to register ADM\n", __func__); + ret = -ENODEV; + return ret; + } + rtac_set_adm_handle(this_adm.apr); + } + + + /* Create a 
COPP if port id are not enabled */ + if (atomic_read(&this_adm.copp_cnt[index]) == 0) { + + open.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + open.hdr.pkt_size = sizeof(open); + open.hdr.src_svc = APR_SVC_ADM; + open.hdr.src_domain = APR_DOMAIN_APPS; + open.hdr.src_port = port_id; + open.hdr.dest_svc = APR_SVC_ADM; + open.hdr.dest_domain = APR_DOMAIN_ADSP; + open.hdr.dest_port = port_id; + open.hdr.token = port_id; + open.hdr.opcode = ADM_CMD_COPP_OPEN; + + open.mode = path; + open.endpoint_id1 = port_id; + + if (this_adm.ec_ref_rx == 0) { + open.endpoint_id2 = 0xFFFF; + } else if (this_adm.ec_ref_rx && (path != 1)) { + open.endpoint_id2 = this_adm.ec_ref_rx; + this_adm.ec_ref_rx = 0; + } + + pr_debug("%s open.endpoint_id1:%d open.endpoint_id2:%d", + __func__, open.endpoint_id1, open.endpoint_id2); + /* convert path to acdb path */ + if (path == ADM_PATH_PLAYBACK) + open.topology_id = get_adm_rx_topology(); + else { + open.topology_id = get_adm_tx_topology(); + if ((open.topology_id == + VPM_TX_SM_ECNS_COPP_TOPOLOGY) || + (open.topology_id == + VPM_TX_DM_FLUENCE_COPP_TOPOLOGY)) + rate = 16000; + } + + if ((open.topology_id == 0) || (port_id == VOICE_RECORD_RX) || (port_id == VOICE_RECORD_TX)) + open.topology_id = topology; + + open.channel_config = channel_mode & 0x00FF; + open.rate = rate; + + pr_debug("%s: channel_config=%d port_id=%d rate=%d" + "topology_id=0x%X\n", __func__, open.channel_config,\ + open.endpoint_id1, open.rate,\ + open.topology_id); + + atomic_set(&this_adm.copp_stat[index], 0); + + ret = apr_send_pkt(this_adm.apr, (uint32_t *)&open); + if (ret < 0) { + pr_err("%s:ADM enable for port %d failed\n", + __func__, port_id); + ret = -EINVAL; + goto fail_cmd; + } + /* Wait for the callback with copp id */ + ret = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[index]), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s ADM open failed for port %d\n", __func__, + port_id); 
+ ret = -EINVAL; + goto fail_cmd; + } + } + atomic_inc(&this_adm.copp_cnt[index]); + return 0; + +fail_cmd: + + return ret; +} + + +int adm_multi_ch_copp_open(int port_id, int path, int rate, int channel_mode, + int topology, int perfmode) +{ + struct adm_multi_ch_copp_open_command open; + int ret = 0; + int index; + + pr_debug("%s: port %d path:%d rate:%d channel :%d\n", __func__, + port_id, path, rate, channel_mode); + + port_id = afe_convert_virtual_to_portid(port_id); + + if (afe_validate_port(port_id) < 0) { + pr_err("%s port idi[%d] is invalid\n", __func__, port_id); + return -ENODEV; + } + + index = afe_get_port_index(port_id); + pr_debug("%s: Port ID %d, index %d\n", __func__, port_id, index); + + if (this_adm.apr == NULL) { + this_adm.apr = apr_register("ADSP", "ADM", adm_callback, + 0xFFFFFFFF, &this_adm); + if (this_adm.apr == NULL) { + pr_err("%s: Unable to register ADM\n", __func__); + ret = -ENODEV; + return ret; + } + rtac_set_adm_handle(this_adm.apr); + } + + /* Create a COPP if port id are not enabled */ + if (atomic_read(&this_adm.copp_cnt[index]) == 0) { + + open.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + + open.hdr.pkt_size = + sizeof(struct adm_multi_ch_copp_open_command); + + if (perfmode) { + pr_debug("%s Performance mode", __func__); + open.hdr.opcode = ADM_CMD_MULTI_CHANNEL_COPP_OPEN_V3; + open.flags = ADM_MULTI_CH_COPP_OPEN_PERF_MODE_BIT; + open.reserved = PCM_BITS_PER_SAMPLE; + } else { + open.hdr.opcode = ADM_CMD_MULTI_CHANNEL_COPP_OPEN; + open.reserved = 0; + } + + memset(open.dev_channel_mapping, 0, 8); + + if (channel_mode == 1) { + open.dev_channel_mapping[0] = PCM_CHANNEL_FC; + } else if (channel_mode == 2) { + open.dev_channel_mapping[0] = PCM_CHANNEL_FL; + open.dev_channel_mapping[1] = PCM_CHANNEL_FR; + } else if (channel_mode == 4) { + open.dev_channel_mapping[0] = PCM_CHANNEL_FL; + open.dev_channel_mapping[1] = PCM_CHANNEL_FR; + open.dev_channel_mapping[2] = PCM_CHANNEL_RB; + 
open.dev_channel_mapping[3] = PCM_CHANNEL_LB; + } else if (channel_mode == 6) { + open.dev_channel_mapping[0] = PCM_CHANNEL_FL; + open.dev_channel_mapping[1] = PCM_CHANNEL_FR; + open.dev_channel_mapping[2] = PCM_CHANNEL_LFE; + open.dev_channel_mapping[3] = PCM_CHANNEL_FC; + open.dev_channel_mapping[4] = PCM_CHANNEL_LB; + open.dev_channel_mapping[5] = PCM_CHANNEL_RB; + } else if (channel_mode == 8) { + open.dev_channel_mapping[0] = PCM_CHANNEL_FL; + open.dev_channel_mapping[1] = PCM_CHANNEL_FR; + open.dev_channel_mapping[2] = PCM_CHANNEL_LFE; + open.dev_channel_mapping[3] = PCM_CHANNEL_FC; + open.dev_channel_mapping[4] = PCM_CHANNEL_LB; + open.dev_channel_mapping[5] = PCM_CHANNEL_RB; + open.dev_channel_mapping[6] = PCM_CHANNEL_FLC; + open.dev_channel_mapping[7] = PCM_CHANNEL_FRC; + } else { + pr_err("%s invalid num_chan %d\n", __func__, + channel_mode); + return -EINVAL; + } + open.hdr.src_svc = APR_SVC_ADM; + open.hdr.src_domain = APR_DOMAIN_APPS; + open.hdr.src_port = port_id; + open.hdr.dest_svc = APR_SVC_ADM; + open.hdr.dest_domain = APR_DOMAIN_ADSP; + open.hdr.dest_port = port_id; + open.hdr.token = port_id; + + open.mode = path; + open.endpoint_id1 = port_id; + + if (this_adm.ec_ref_rx == 0) { + open.endpoint_id2 = 0xFFFF; + } else if (this_adm.ec_ref_rx && (path != 1)) { + open.endpoint_id2 = this_adm.ec_ref_rx; + this_adm.ec_ref_rx = 0; + } + + pr_debug("%s open.endpoint_id1:%d open.endpoint_id2:%d", + __func__, open.endpoint_id1, open.endpoint_id2); + /* convert path to acdb path */ + if (path == ADM_PATH_PLAYBACK) + open.topology_id = get_adm_rx_topology(); + else { + open.topology_id = get_adm_tx_topology(); + if ((open.topology_id == + VPM_TX_SM_ECNS_COPP_TOPOLOGY) || + (open.topology_id == + VPM_TX_DM_FLUENCE_COPP_TOPOLOGY)) + rate = 16000; + } + + if ((open.topology_id == 0) || (port_id == VOICE_RECORD_RX) || (port_id == VOICE_RECORD_TX)) + open.topology_id = topology; + + open.channel_config = channel_mode & 0x00FF; + open.rate = rate; + + 
pr_debug("%s: channel_config=%d port_id=%d rate=%d" + " topology_id=0x%X\n", __func__, open.channel_config, + open.endpoint_id1, open.rate, + open.topology_id); + + atomic_set(&this_adm.copp_stat[index], 0); + + ret = apr_send_pkt(this_adm.apr, (uint32_t *)&open); + if (ret < 0) { + pr_err("%s:ADM enable for port %d failed\n", + __func__, port_id); + ret = -EINVAL; + goto fail_cmd; + } + /* Wait for the callback with copp id */ + ret = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[index]), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s ADM open failed for port %d\n", __func__, + port_id); + ret = -EINVAL; + goto fail_cmd; + } + } + atomic_inc(&this_adm.copp_cnt[index]); + return 0; + +fail_cmd: + + return ret; +} + +int adm_matrix_map(int session_id, int path, int num_copps, + unsigned int *port_id, int copp_id) +{ + struct adm_routings_command route; + int ret = 0, i = 0; + /* Assumes port_ids have already been validated during adm_open */ + int index = afe_get_port_index(copp_id); + int copp_cnt; + + if (index < 0 || index >= AFE_MAX_PORTS) { + pr_err("%s: invalid port idx %d token %d\n", + __func__, index, copp_id); + return 0; + } + + pr_debug("%s: session 0x%x path:%d num_copps:%d port_id[0]:%d\n", + __func__, session_id, path, num_copps, port_id[0]); + + route.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + route.hdr.pkt_size = sizeof(route); + route.hdr.src_svc = 0; + route.hdr.src_domain = APR_DOMAIN_APPS; + route.hdr.src_port = copp_id; + route.hdr.dest_svc = APR_SVC_ADM; + route.hdr.dest_domain = APR_DOMAIN_ADSP; + route.hdr.dest_port = atomic_read(&this_adm.copp_id[index]); + route.hdr.token = copp_id; + route.hdr.opcode = ADM_CMD_MATRIX_MAP_ROUTINGS; + route.num_sessions = 1; + route.session[0].id = session_id; + + if (num_copps < ADM_MAX_COPPS) { + copp_cnt = num_copps; + } else { + copp_cnt = ADM_MAX_COPPS; + /* print out warning for now as playback/capture to/from + * 
COPPs more than maximum allowed is extremely unlikely + */ + pr_warn("%s: max out routable COPPs\n", __func__); + } + + route.session[0].num_copps = copp_cnt; + for (i = 0; i < copp_cnt; i++) { + int tmp; + port_id[i] = afe_convert_virtual_to_portid(port_id[i]); + + tmp = afe_get_port_index(port_id[i]); + + pr_debug("%s: port_id[%d]: %d, index: %d\n", __func__, i, + port_id[i], tmp); + + if (tmp >= 0 && tmp < AFE_MAX_PORTS) + route.session[0].copp_id[i] = + atomic_read(&this_adm.copp_id[tmp]); + } + + if (copp_cnt % 2) + route.session[0].copp_id[i] = 0; + + switch (path) { + case 0x1: + route.path = AUDIO_RX; + break; + case 0x2: + case 0x3: + route.path = AUDIO_TX; + break; + default: + pr_err("%s: Wrong path set[%d]\n", __func__, path); + break; + } + atomic_set(&this_adm.copp_stat[index], 0); + + ret = apr_send_pkt(this_adm.apr, (uint32_t *)&route); + if (ret < 0) { + pr_err("%s: ADM routing for port %d failed\n", + __func__, port_id[0]); + ret = -EINVAL; + goto fail_cmd; + } + ret = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[index]), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: ADM cmd Route failed for port %d\n", + __func__, port_id[0]); + ret = -EINVAL; + goto fail_cmd; + } + + for (i = 0; i < num_copps; i++) + send_adm_cal(port_id[i], path); + + for (i = 0; i < num_copps; i++) { + int tmp; + tmp = afe_get_port_index(port_id[i]); + if (tmp >= 0 && tmp < AFE_MAX_PORTS) + rtac_add_adm_device(port_id[i], + atomic_read(&this_adm.copp_id[tmp]), + path, session_id); + else + pr_debug("%s: Invalid port index %d", + __func__, tmp); + } + return 0; + +fail_cmd: + + return ret; +} + +int adm_memory_map_regions(uint32_t *buf_add, uint32_t mempool_id, + uint32_t *bufsz, uint32_t bufcnt) +{ + struct adm_cmd_memory_map_regions *mmap_regions = NULL; + struct adm_memory_map_regions *mregions = NULL; + void *mmap_region_cmd = NULL; + void *payload = NULL; + int ret = 0; + int i = 0; + int cmd_size = 0; + + pr_debug("%s\n", __func__); + 
if (this_adm.apr == NULL) { + this_adm.apr = apr_register("ADSP", "ADM", adm_callback, + 0xFFFFFFFF, &this_adm); + if (this_adm.apr == NULL) { + pr_err("%s: Unable to register ADM\n", __func__); + ret = -ENODEV; + return ret; + } + rtac_set_adm_handle(this_adm.apr); + } + + cmd_size = sizeof(struct adm_cmd_memory_map_regions) + + sizeof(struct adm_memory_map_regions) * bufcnt; + + mmap_region_cmd = kzalloc(cmd_size, GFP_KERNEL); + if (!mmap_region_cmd) { + pr_err("%s: allocate mmap_region_cmd failed\n", __func__); + return -ENOMEM; + } + mmap_regions = (struct adm_cmd_memory_map_regions *)mmap_region_cmd; + mmap_regions->hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mmap_regions->hdr.pkt_size = cmd_size; + mmap_regions->hdr.src_port = 0; + mmap_regions->hdr.dest_port = 0; + mmap_regions->hdr.token = 0; + mmap_regions->hdr.opcode = ADM_CMD_MEMORY_MAP_REGIONS; + mmap_regions->mempool_id = mempool_id & 0x00ff; + mmap_regions->nregions = bufcnt & 0x00ff; + pr_debug("%s: map_regions->nregions = %d\n", __func__, + mmap_regions->nregions); + payload = ((u8 *) mmap_region_cmd + + sizeof(struct adm_cmd_memory_map_regions)); + mregions = (struct adm_memory_map_regions *)payload; + + for (i = 0; i < bufcnt; i++) { + mregions->phys = buf_add[i]; + mregions->buf_size = bufsz[i]; + ++mregions; + } + + atomic_set(&this_adm.copp_stat[0], 0); + ret = apr_send_pkt(this_adm.apr, (uint32_t *) mmap_region_cmd); + if (ret < 0) { + pr_err("%s: mmap_regions op[0x%x]rc[%d]\n", __func__, + mmap_regions->hdr.opcode, ret); + ret = -EINVAL; + goto fail_cmd; + } + + ret = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[0]), 5 * HZ); + if (!ret) { + pr_err("%s: timeout. 
waited for memory_map\n", __func__); + ret = -EINVAL; + goto fail_cmd; + } +fail_cmd: + kfree(mmap_region_cmd); + return ret; +} + +int adm_memory_unmap_regions(uint32_t *buf_add, uint32_t *bufsz, + uint32_t bufcnt) +{ + struct adm_cmd_memory_unmap_regions *unmap_regions = NULL; + struct adm_memory_unmap_regions *mregions = NULL; + void *unmap_region_cmd = NULL; + void *payload = NULL; + int ret = 0; + int i = 0; + int cmd_size = 0; + + pr_debug("%s\n", __func__); + + if (this_adm.apr == NULL) { + pr_err("%s APR handle NULL\n", __func__); + return -EINVAL; + } + + cmd_size = sizeof(struct adm_cmd_memory_unmap_regions) + + sizeof(struct adm_memory_unmap_regions) * bufcnt; + + unmap_region_cmd = kzalloc(cmd_size, GFP_KERNEL); + if (!unmap_region_cmd) { + pr_err("%s: allocate unmap_region_cmd failed\n", __func__); + return -ENOMEM; + } + unmap_regions = (struct adm_cmd_memory_unmap_regions *) + unmap_region_cmd; + unmap_regions->hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + unmap_regions->hdr.pkt_size = cmd_size; + unmap_regions->hdr.src_port = 0; + unmap_regions->hdr.dest_port = 0; + unmap_regions->hdr.token = 0; + unmap_regions->hdr.opcode = ADM_CMD_MEMORY_UNMAP_REGIONS; + unmap_regions->nregions = bufcnt & 0x00ff; + unmap_regions->reserved = 0; + pr_debug("%s: unmap_regions->nregions = %d\n", __func__, + unmap_regions->nregions); + payload = ((u8 *) unmap_region_cmd + + sizeof(struct adm_cmd_memory_unmap_regions)); + mregions = (struct adm_memory_unmap_regions *)payload; + + for (i = 0; i < bufcnt; i++) { + mregions->phys = buf_add[i]; + ++mregions; + } + atomic_set(&this_adm.copp_stat[0], 0); + ret = apr_send_pkt(this_adm.apr, (uint32_t *) unmap_region_cmd); + if (ret < 0) { + pr_err("%s: mmap_regions op[0x%x]rc[%d]\n", __func__, + unmap_regions->hdr.opcode, ret); + ret = -EINVAL; + goto fail_cmd; + } + + ret = wait_event_timeout(this_adm.wait, + atomic_read(&this_adm.copp_stat[0]), 5 * HZ); + if (!ret) { + 
pr_err("%s: timeout. waited for memory_unmap\n", __func__); + ret = -EINVAL; + goto fail_cmd; + } +fail_cmd: + kfree(unmap_region_cmd); + return ret; +} + +int adm_get_copp_id(int port_index) +{ + pr_debug("%s\n", __func__); + + if (port_index < 0) { + pr_err("%s: invalid port_id = %d\n", __func__, port_index); + return -EINVAL; + } + + return atomic_read(&this_adm.copp_id[port_index]); +} + +void adm_ec_ref_rx_id(int port_id) +{ + this_adm.ec_ref_rx = port_id; + pr_debug("%s ec_ref_rx:%d", __func__, this_adm.ec_ref_rx); +} + +int adm_close(int port_id) +{ + struct apr_hdr close; + + int ret = 0; + int index = 0; + + port_id = afe_convert_virtual_to_portid(port_id); + + index = afe_get_port_index(port_id); + if (afe_validate_port(port_id) < 0) + return -EINVAL; + + pr_debug("%s port_id=%d index %d\n", __func__, port_id, index); + + if (!(atomic_read(&this_adm.copp_cnt[index]))) { + pr_err("%s: copp count for port[%d]is 0\n", __func__, port_id); + + goto fail_cmd; + } + atomic_dec(&this_adm.copp_cnt[index]); + if (!(atomic_read(&this_adm.copp_cnt[index]))) { + + close.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + close.pkt_size = sizeof(close); + close.src_svc = APR_SVC_ADM; + close.src_domain = APR_DOMAIN_APPS; + close.src_port = port_id; + close.dest_svc = APR_SVC_ADM; + close.dest_domain = APR_DOMAIN_ADSP; + close.dest_port = atomic_read(&this_adm.copp_id[index]); + close.token = port_id; + close.opcode = ADM_CMD_COPP_CLOSE; + + atomic_set(&this_adm.copp_id[index], RESET_COPP_ID); + atomic_set(&this_adm.copp_stat[index], 0); + + + pr_debug("%s:coppid %d portid=%d index=%d coppcnt=%d\n", + __func__, + atomic_read(&this_adm.copp_id[index]), + port_id, index, + atomic_read(&this_adm.copp_cnt[index])); + + ret = apr_send_pkt(this_adm.apr, (uint32_t *)&close); + if (ret < 0) { + pr_err("%s ADM close failed\n", __func__); + ret = -EINVAL; + goto fail_cmd; + } + + ret = wait_event_timeout(this_adm.wait, + 
atomic_read(&this_adm.copp_stat[index]), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: ADM cmd Route failed for port %d\n", + __func__, port_id); + ret = -EINVAL; + goto fail_cmd; + } + + rtac_remove_adm_device(port_id); + } + +fail_cmd: + return ret; +} + +static int __init adm_init(void) +{ + int i = 0; + init_waitqueue_head(&this_adm.wait); + this_adm.apr = NULL; + + for (i = 0; i < AFE_MAX_PORTS; i++) { + atomic_set(&this_adm.copp_id[i], RESET_COPP_ID); + atomic_set(&this_adm.copp_cnt[i], 0); + atomic_set(&this_adm.copp_stat[i], 0); + } + return 0; +} + +device_initcall(adm_init); diff --git a/sound/soc/msm/qdsp6/q6afe.c b/sound/soc/msm/qdsp6/q6afe.c new file mode 100644 index 000000000000..5935ff3c90b7 --- /dev/null +++ b/sound/soc/msm/qdsp6/q6afe.c @@ -0,0 +1,1826 @@ +/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include <linux/debugfs.h> +#include <linux/kernel.h> +#include <linux/kthread.h> +#include <linux/uaccess.h> +#include <linux/wait.h> +#include <linux/jiffies.h> +#include <linux/sched.h> +#include <sound/qdsp6v2/audio_acdb.h> +#include <sound/apr_audio.h> +#include <sound/q6afe.h> + +struct afe_ctl { + void *apr; + atomic_t state; + atomic_t status; + wait_queue_head_t wait; + struct task_struct *task; + void (*tx_cb) (uint32_t opcode, + uint32_t token, uint32_t *payload, void *priv); + void (*rx_cb) (uint32_t opcode, + uint32_t token, uint32_t *payload, void *priv); + void *tx_private_data; + void *rx_private_data; +}; + +static struct afe_ctl this_afe; + +static struct acdb_cal_block afe_cal_addr[MAX_AUDPROC_TYPES]; +static int pcm_afe_instance[2]; +static int proxy_afe_instance[2]; +bool afe_close_done[2] = {true, true}; + +#define TIMEOUT_MS 1000 +#define Q6AFE_MAX_VOLUME 0x3FFF + +#define SIZEOF_CFG_CMD(y) \ + (sizeof(struct apr_hdr) + sizeof(u16) + (sizeof(struct y))) + +static int32_t afe_callback(struct apr_client_data *data, void *priv) +{ + if (data->opcode == RESET_EVENTS) { + pr_debug("q6afe: reset event = %d %d apr[%p]\n", + data->reset_event, data->reset_proc, this_afe.apr); + if (this_afe.apr) { + apr_reset(this_afe.apr); + atomic_set(&this_afe.state, 0); + this_afe.apr = NULL; + } + /* send info to user */ + pr_debug("task_name = %s pid = %d\n", + this_afe.task->comm, this_afe.task->pid); + send_sig(SIGUSR1, this_afe.task, 0); + return 0; + } + if (data->payload_size) { + uint32_t *payload; + uint16_t port_id = 0; + payload = data->payload; + pr_debug("%s:opcode = 0x%x cmd = 0x%x status = 0x%x\n", + __func__, data->opcode, + payload[0], payload[1]); + /* payload[1] contains the error status for response */ + if (payload[1] != 0) { + atomic_set(&this_afe.status, -1); + pr_err("%s: cmd = 0x%x returned error = 0x%x\n", + __func__, payload[0], payload[1]); + } + if (data->opcode == APR_BASIC_RSP_RESULT) { + switch (payload[0]) { + case 
AFE_PORT_AUDIO_IF_CONFIG: + case AFE_PORT_CMD_I2S_CONFIG: + case AFE_PORT_MULTI_CHAN_HDMI_AUDIO_IF_CONFIG: + case AFE_PORT_AUDIO_SLIM_SCH_CONFIG: + case AFE_PORT_CMD_STOP: + case AFE_PORT_CMD_START: + case AFE_PORT_CMD_LOOPBACK: + case AFE_PORT_CMD_SIDETONE_CTL: + case AFE_PORT_CMD_SET_PARAM: + case AFE_PSEUDOPORT_CMD_START: + case AFE_PSEUDOPORT_CMD_STOP: + case AFE_PORT_CMD_APPLY_GAIN: + case AFE_SERVICE_CMD_MEMORY_MAP: + case AFE_SERVICE_CMD_MEMORY_UNMAP: + case AFE_SERVICE_CMD_UNREG_RTPORT: + atomic_set(&this_afe.state, 0); + wake_up(&this_afe.wait); + break; + case AFE_SERVICE_CMD_REG_RTPORT: + break; + case AFE_SERVICE_CMD_RTPORT_WR: + port_id = RT_PROXY_PORT_001_TX; + break; + case AFE_SERVICE_CMD_RTPORT_RD: + port_id = RT_PROXY_PORT_001_RX; + break; + default: + pr_err("Unknown cmd 0x%x\n", + payload[0]); + break; + } + } else if (data->opcode == AFE_EVENT_RT_PROXY_PORT_STATUS) { + port_id = (uint16_t)(0x0000FFFF & payload[0]); + } + pr_debug("%s:port_id = %x\n", __func__, port_id); + switch (port_id) { + case RT_PROXY_PORT_001_TX: { + if (this_afe.tx_cb) { + this_afe.tx_cb(data->opcode, data->token, + data->payload, + this_afe.tx_private_data); + } + break; + } + case RT_PROXY_PORT_001_RX: { + if (this_afe.rx_cb) { + this_afe.rx_cb(data->opcode, data->token, + data->payload, + this_afe.rx_private_data); + } + break; + } + default: + break; + } + } + return 0; +} + +int afe_get_port_type(u16 port_id) +{ + int ret; + + switch (port_id) { + case PRIMARY_I2S_RX: + case PCM_RX: + case SECONDARY_PCM_RX: + case SECONDARY_I2S_RX: + case MI2S_RX: + case HDMI_RX: + case SLIMBUS_0_RX: + case SLIMBUS_1_RX: + case SLIMBUS_2_RX: + case SLIMBUS_3_RX: + case INT_BT_SCO_RX: + case INT_BT_A2DP_RX: + case INT_FM_RX: + case VOICE_PLAYBACK_TX: + case RT_PROXY_PORT_001_RX: + case SLIMBUS_4_RX: + ret = MSM_AFE_PORT_TYPE_RX; + break; + + case PRIMARY_I2S_TX: + case PCM_TX: + case SECONDARY_PCM_TX: + case SECONDARY_I2S_TX: + case MI2S_TX: + case DIGI_MIC_TX: + case 
VOICE_RECORD_TX: + case SLIMBUS_0_TX: + case SLIMBUS_1_TX: + case SLIMBUS_2_TX: + case SLIMBUS_3_TX: + case INT_FM_TX: + case VOICE_RECORD_RX: + case INT_BT_SCO_TX: + case RT_PROXY_PORT_001_TX: + case SLIMBUS_4_TX: + ret = MSM_AFE_PORT_TYPE_TX; + break; + + default: + pr_err("%s: invalid port id %d\n", __func__, port_id); + ret = -EINVAL; + } + + return ret; +} + +int afe_validate_port(u16 port_id) +{ + int ret; + + switch (port_id) { + case PRIMARY_I2S_RX: + case PRIMARY_I2S_TX: + case PCM_RX: + case PCM_TX: + case SECONDARY_PCM_RX: + case SECONDARY_PCM_TX: + case SECONDARY_I2S_RX: + case SECONDARY_I2S_TX: + case MI2S_RX: + case MI2S_TX: + case HDMI_RX: + case RSVD_2: + case RSVD_3: + case DIGI_MIC_TX: + case VOICE_RECORD_RX: + case VOICE_RECORD_TX: + case VOICE_PLAYBACK_TX: + case SLIMBUS_0_RX: + case SLIMBUS_0_TX: + case SLIMBUS_1_RX: + case SLIMBUS_1_TX: + case SLIMBUS_2_RX: + case SLIMBUS_2_TX: + case SLIMBUS_3_RX: + case SLIMBUS_3_TX: + case INT_BT_SCO_RX: + case INT_BT_SCO_TX: + case INT_BT_A2DP_RX: + case INT_FM_RX: + case INT_FM_TX: + case RT_PROXY_PORT_001_RX: + case RT_PROXY_PORT_001_TX: + case SLIMBUS_4_RX: + case SLIMBUS_4_TX: + { + ret = 0; + break; + } + + default: + ret = -EINVAL; + } + + return ret; +} + +int afe_convert_virtual_to_portid(u16 port_id) +{ + int ret; + + /* if port_id is virtual, convert to physical.. 
+ * if port_id is already physical, return physical + */ + if (afe_validate_port(port_id) < 0) { + if (port_id == RT_PROXY_DAI_001_RX || + port_id == RT_PROXY_DAI_001_TX || + port_id == RT_PROXY_DAI_002_RX || + port_id == RT_PROXY_DAI_002_TX) + ret = VIRTUAL_ID_TO_PORTID(port_id); + else + ret = -EINVAL; + } else + ret = port_id; + + return ret; +} + +int afe_get_port_index(u16 port_id) +{ + switch (port_id) { + case PRIMARY_I2S_RX: return IDX_PRIMARY_I2S_RX; + case PRIMARY_I2S_TX: return IDX_PRIMARY_I2S_TX; + case PCM_RX: return IDX_PCM_RX; + case PCM_TX: return IDX_PCM_TX; + case SECONDARY_PCM_RX: return IDX_SECONDARY_PCM_RX; + case SECONDARY_PCM_TX: return IDX_SECONDARY_PCM_TX; + case SECONDARY_I2S_RX: return IDX_SECONDARY_I2S_RX; + case SECONDARY_I2S_TX: return IDX_SECONDARY_I2S_TX; + case MI2S_RX: return IDX_MI2S_RX; + case MI2S_TX: return IDX_MI2S_TX; + case HDMI_RX: return IDX_HDMI_RX; + case RSVD_2: return IDX_RSVD_2; + case RSVD_3: return IDX_RSVD_3; + case DIGI_MIC_TX: return IDX_DIGI_MIC_TX; + case VOICE_RECORD_RX: return IDX_VOICE_RECORD_RX; + case VOICE_RECORD_TX: return IDX_VOICE_RECORD_TX; + case VOICE_PLAYBACK_TX: return IDX_VOICE_PLAYBACK_TX; + case SLIMBUS_0_RX: return IDX_SLIMBUS_0_RX; + case SLIMBUS_0_TX: return IDX_SLIMBUS_0_TX; + case SLIMBUS_1_RX: return IDX_SLIMBUS_1_RX; + case SLIMBUS_1_TX: return IDX_SLIMBUS_1_TX; + case SLIMBUS_2_RX: return IDX_SLIMBUS_2_RX; + case SLIMBUS_2_TX: return IDX_SLIMBUS_2_TX; + case SLIMBUS_3_RX: return IDX_SLIMBUS_3_RX; + case SLIMBUS_3_TX: return IDX_SLIMBUS_3_TX; + case INT_BT_SCO_RX: return IDX_INT_BT_SCO_RX; + case INT_BT_SCO_TX: return IDX_INT_BT_SCO_TX; + case INT_BT_A2DP_RX: return IDX_INT_BT_A2DP_RX; + case INT_FM_RX: return IDX_INT_FM_RX; + case INT_FM_TX: return IDX_INT_FM_TX; + case RT_PROXY_PORT_001_RX: return IDX_RT_PROXY_PORT_001_RX; + case RT_PROXY_PORT_001_TX: return IDX_RT_PROXY_PORT_001_TX; + case SLIMBUS_4_RX: return IDX_SLIMBUS_4_RX; + case SLIMBUS_4_TX: return IDX_SLIMBUS_4_TX; + + 
default: return -EINVAL; + } +} + +int afe_sizeof_cfg_cmd(u16 port_id) +{ + int ret_size; + switch (port_id) { + case PRIMARY_I2S_RX: + case PRIMARY_I2S_TX: + case SECONDARY_I2S_RX: + case SECONDARY_I2S_TX: + case MI2S_RX: + case MI2S_TX: + ret_size = SIZEOF_CFG_CMD(afe_port_mi2s_cfg); + break; + case HDMI_RX: + ret_size = SIZEOF_CFG_CMD(afe_port_hdmi_multi_ch_cfg); + break; + case SLIMBUS_0_RX: + case SLIMBUS_0_TX: + case SLIMBUS_1_RX: + case SLIMBUS_1_TX: + case SLIMBUS_2_RX: + case SLIMBUS_2_TX: + case SLIMBUS_3_RX: + case SLIMBUS_3_TX: + case SLIMBUS_4_RX: + case SLIMBUS_4_TX: + ret_size = SIZEOF_CFG_CMD(afe_port_slimbus_sch_cfg); + break; + case RT_PROXY_PORT_001_RX: + case RT_PROXY_PORT_001_TX: + ret_size = SIZEOF_CFG_CMD(afe_port_rtproxy_cfg); + break; + case PCM_RX: + case PCM_TX: + case SECONDARY_PCM_RX: + case SECONDARY_PCM_TX: + default: + ret_size = SIZEOF_CFG_CMD(afe_port_pcm_cfg); + break; + } + return ret_size; +} + +int afe_q6_interface_prepare(void) +{ + int ret = 0; + + pr_debug("%s:", __func__); + + if (this_afe.apr == NULL) { + this_afe.apr = apr_register("ADSP", "AFE", afe_callback, + 0xFFFFFFFF, &this_afe); + pr_debug("%s: Register AFE\n", __func__); + if (this_afe.apr == NULL) { + pr_err("%s: Unable to register AFE\n", __func__); + ret = -ENODEV; + } + } + return ret; +} + +static void afe_send_cal_block(int32_t path, u16 port_id) +{ + int result = 0; + struct acdb_cal_block cal_block; + struct afe_port_cmd_set_param_no_payload afe_cal; + pr_debug("%s: path %d\n", __func__, path); + + get_afe_cal(path, &cal_block); + if (cal_block.cal_size <= 0) { + pr_debug("%s: No AFE cal to send!\n", __func__); + goto done; + } + + if ((afe_cal_addr[path].cal_paddr != cal_block.cal_paddr) || + (cal_block.cal_size > afe_cal_addr[path].cal_size)) { + if (afe_cal_addr[path].cal_paddr != 0) + afe_cmd_memory_unmap( + afe_cal_addr[path].cal_paddr); + + afe_cmd_memory_map(cal_block.cal_paddr, cal_block.cal_size); + afe_cal_addr[path].cal_paddr = 
cal_block.cal_paddr; + afe_cal_addr[path].cal_size = cal_block.cal_size; + } + + afe_cal.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + afe_cal.hdr.pkt_size = sizeof(afe_cal); + afe_cal.hdr.src_port = 0; + afe_cal.hdr.dest_port = 0; + afe_cal.hdr.token = 0; + afe_cal.hdr.opcode = AFE_PORT_CMD_SET_PARAM; + afe_cal.port_id = port_id; + afe_cal.payload_size = cal_block.cal_size; + afe_cal.payload_address = cal_block.cal_paddr; + + pr_debug("%s: AFE cal sent for device port = %d, path = %d, " + "cal size = %d, cal addr = 0x%x\n", __func__, + port_id, path, cal_block.cal_size, cal_block.cal_paddr); + + atomic_set(&this_afe.state, 1); + result = apr_send_pkt(this_afe.apr, (uint32_t *) &afe_cal); + if (result < 0) { + pr_err("%s: AFE cal for port %d failed\n", + __func__, port_id); + } + + result = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!result) { + pr_err("%s: wait_event timeout SET AFE CAL\n", __func__); + goto done; + } + + pr_debug("%s: AFE cal sent for path %d device!\n", __func__, path); +done: + return; +} + +void afe_send_cal(u16 port_id) +{ + pr_debug("%s\n", __func__); + + if (afe_get_port_type(port_id) == MSM_AFE_PORT_TYPE_TX) + afe_send_cal_block(TX_CAL, port_id); + else if (afe_get_port_type(port_id) == MSM_AFE_PORT_TYPE_RX) + afe_send_cal_block(RX_CAL, port_id); +} + +/* This function sends multi-channel HDMI configuration command and AFE + * calibration which is only supported by QDSP6 on 8960 and onward. 
+ */ +int afe_port_start(u16 port_id, union afe_port_config *afe_config, + u32 rate) +{ + struct afe_port_start_command start; + struct afe_audioif_config_command config; + int ret; + + if (!afe_config) { + pr_err("%s: Error, no configuration data\n", __func__); + ret = -EINVAL; + return ret; + } + pr_debug("%s: %d %d\n", __func__, port_id, rate); + + if ((port_id == RT_PROXY_DAI_001_RX) || + (port_id == RT_PROXY_DAI_002_TX)) { + pr_debug("%s: before incrementing pcm_afe_instance %d"\ + " port_id %d\n", __func__, + pcm_afe_instance[port_id & 0x1], port_id); + port_id = VIRTUAL_ID_TO_PORTID(port_id); + pcm_afe_instance[port_id & 0x1]++; + return 0; + } + if ((port_id == RT_PROXY_DAI_002_RX) || + (port_id == RT_PROXY_DAI_001_TX)) { + pr_debug("%s: before incrementing proxy_afe_instance %d"\ + " port_id %d\n", __func__, + proxy_afe_instance[port_id & 0x1], port_id); + if (!afe_close_done[port_id & 0x1]) { + /*close pcm dai corresponding to the proxy dai*/ + afe_close(port_id - 0x10); + pcm_afe_instance[port_id & 0x1]++; + pr_debug("%s: reconfigure afe port again\n", __func__); + } + proxy_afe_instance[port_id & 0x1]++; + afe_close_done[port_id & 0x1] = false; + port_id = VIRTUAL_ID_TO_PORTID(port_id); + } + + ret = afe_q6_interface_prepare(); + if (IS_ERR_VALUE(ret)) + return ret; + + if (port_id == HDMI_RX) { + config.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + config.hdr.pkt_size = afe_sizeof_cfg_cmd(port_id); + config.hdr.src_port = 0; + config.hdr.dest_port = 0; + config.hdr.token = 0; + config.hdr.opcode = AFE_PORT_MULTI_CHAN_HDMI_AUDIO_IF_CONFIG; + } else { + + config.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + config.hdr.pkt_size = afe_sizeof_cfg_cmd(port_id); + config.hdr.src_port = 0; + config.hdr.dest_port = 0; + config.hdr.token = 0; + switch (port_id) { + case SLIMBUS_0_RX: + case SLIMBUS_0_TX: + case SLIMBUS_1_RX: + case SLIMBUS_1_TX: + case 
SLIMBUS_2_RX:
+		case SLIMBUS_2_TX:
+		case SLIMBUS_3_RX:
+		case SLIMBUS_3_TX:
+		case SLIMBUS_4_RX:
+		case SLIMBUS_4_TX:
+			config.hdr.opcode = AFE_PORT_AUDIO_SLIM_SCH_CONFIG;
+			break;
+		case MI2S_TX:
+		case MI2S_RX:
+		case SECONDARY_I2S_RX:
+		case SECONDARY_I2S_TX:
+		case PRIMARY_I2S_RX:
+		case PRIMARY_I2S_TX:
+			/* The AFE_PORT_CMD_I2S_CONFIG command is not supported
+			 * in LPASS EL 1.0, so we have to distinguish which AFE
+			 * command to use, AFE_PORT_CMD_I2S_CONFIG or
+			 * AFE_PORT_AUDIO_IF_CONFIG. If the format is L-PCM,
+			 * AFE_PORT_AUDIO_IF_CONFIG is used for backward
+			 * compatibility.
+			 */
+			pr_debug("%s: afe_config->mi2s.format = %d\n", __func__,
+				afe_config->mi2s.format);
+			if (afe_config->mi2s.format == MSM_AFE_I2S_FORMAT_LPCM)
+				config.hdr.opcode = AFE_PORT_AUDIO_IF_CONFIG;
+			else
+				config.hdr.opcode = AFE_PORT_CMD_I2S_CONFIG;
+			break;
+		default:
+			config.hdr.opcode = AFE_PORT_AUDIO_IF_CONFIG;
+			break;
+		}
+	}
+
+	if (afe_validate_port(port_id) < 0) {
+		pr_err("%s: Failed : Invalid Port id = %d\n", __func__,
+			port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+	config.port_id = port_id;
+	config.port = *afe_config;
+
+	atomic_set(&this_afe.state, 1);
+	atomic_set(&this_afe.status, 0);
+	ret = apr_send_pkt(this_afe.apr, (uint32_t *) &config);
+	if (ret < 0) {
+		pr_err("%s: AFE enable for port %d failed\n", __func__,
+			port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	ret = wait_event_timeout(this_afe.wait,
+		(atomic_read(&this_afe.state) == 0),
+		msecs_to_jiffies(TIMEOUT_MS));
+	if (!ret) {
+		pr_err("%s: wait_event timeout IF CONFIG\n", __func__);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+	if (atomic_read(&this_afe.status) != 0) {
+		pr_err("%s: config cmd failed\n", __func__);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	/* send AFE cal */
+	afe_send_cal(port_id);
+
+	start.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+		APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+	start.hdr.pkt_size = sizeof(start);
+	start.hdr.src_port = 0;
+	start.hdr.dest_port
= 0; + start.hdr.token = 0; + start.hdr.opcode = AFE_PORT_CMD_START; + start.port_id = port_id; + start.gain = 0x2000; + start.sample_rate = rate; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &start); + + if (IS_ERR_VALUE(ret)) { + pr_err("%s: AFE enable for port %d failed\n", __func__, + port_id); + ret = -EINVAL; + goto fail_cmd; + } + + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout PORT START\n", __func__); + ret = -EINVAL; + goto fail_cmd; + } + + if (this_afe.task != current) + this_afe.task = current; + + pr_debug("task_name = %s pid = %d\n", + this_afe.task->comm, this_afe.task->pid); + return 0; + +fail_cmd: + return ret; +} + +/* This function should be used by 8660 exclusively */ +int afe_open(u16 port_id, union afe_port_config *afe_config, int rate) +{ + struct afe_port_start_command start; + struct afe_audioif_config_command config; + int ret = 0; + + if (!afe_config) { + pr_err("%s: Error, no configuration data\n", __func__); + ret = -EINVAL; + return ret; + } + + pr_debug("%s: %d %d\n", __func__, port_id, rate); + + if ((port_id == RT_PROXY_DAI_001_RX) || + (port_id == RT_PROXY_DAI_002_TX)) + return 0; + if ((port_id == RT_PROXY_DAI_002_RX) || + (port_id == RT_PROXY_DAI_001_TX)) + port_id = VIRTUAL_ID_TO_PORTID(port_id); + + ret = afe_q6_interface_prepare(); + if (ret != 0) + return ret; + + config.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + config.hdr.pkt_size = afe_sizeof_cfg_cmd(port_id); + config.hdr.src_port = 0; + config.hdr.dest_port = 0; + config.hdr.token = 0; + switch (port_id) { + case SLIMBUS_0_RX: + case SLIMBUS_0_TX: + case SLIMBUS_1_RX: + case SLIMBUS_1_TX: + case SLIMBUS_2_RX: + case SLIMBUS_2_TX: + case SLIMBUS_3_RX: + case SLIMBUS_3_TX: + case SLIMBUS_4_RX: + case SLIMBUS_4_TX: + config.hdr.opcode = AFE_PORT_AUDIO_SLIM_SCH_CONFIG; + 
break;
+	case MI2S_TX:
+	case MI2S_RX:
+	case SECONDARY_I2S_RX:
+	case SECONDARY_I2S_TX:
+	case PRIMARY_I2S_RX:
+	case PRIMARY_I2S_TX:
+		/* The AFE_PORT_CMD_I2S_CONFIG command is not supported
+		 * in LPASS EL 1.0, so we have to distinguish which AFE
+		 * command to use, AFE_PORT_CMD_I2S_CONFIG or
+		 * AFE_PORT_AUDIO_IF_CONFIG. If the format is L-PCM,
+		 * AFE_PORT_AUDIO_IF_CONFIG is used for backward
+		 * compatibility.
+		 */
+		pr_debug("%s: afe_config->mi2s.format = %d\n", __func__,
+			afe_config->mi2s.format);
+		if (afe_config->mi2s.format == MSM_AFE_I2S_FORMAT_LPCM)
+			config.hdr.opcode = AFE_PORT_AUDIO_IF_CONFIG;
+		else
+			config.hdr.opcode = AFE_PORT_CMD_I2S_CONFIG;
+		break;
+	default:
+		config.hdr.opcode = AFE_PORT_AUDIO_IF_CONFIG;
+		break;
+	}
+
+	if (afe_validate_port(port_id) < 0) {
+		pr_err("%s: Failed : Invalid Port id = %d\n", __func__,
+			port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	config.port_id = port_id;
+	config.port = *afe_config;
+
+	atomic_set(&this_afe.state, 1);
+	atomic_set(&this_afe.status, 0);
+	ret = apr_send_pkt(this_afe.apr, (uint32_t *) &config);
+	if (ret < 0) {
+		pr_err("%s: AFE enable for port %d failed\n", __func__,
+			port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	ret = wait_event_timeout(this_afe.wait,
+		(atomic_read(&this_afe.state) == 0),
+		msecs_to_jiffies(TIMEOUT_MS));
+	if (!ret) {
+		pr_err("%s: wait_event timeout\n", __func__);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+	if (atomic_read(&this_afe.status) != 0) {
+		pr_err("%s: config cmd failed\n", __func__);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+	start.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+		APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+	start.hdr.pkt_size = sizeof(start);
+	start.hdr.src_port = 0;
+	start.hdr.dest_port = 0;
+	start.hdr.token = 0;
+	start.hdr.opcode = AFE_PORT_CMD_START;
+	start.port_id = port_id;
+	start.gain = 0x2000;
+	start.sample_rate = rate;
+
+	atomic_set(&this_afe.state, 1);
+	ret = apr_send_pkt(this_afe.apr, (uint32_t *) &start);
+ if (ret < 0) { + pr_err("%s: AFE enable for port %d failed\n", __func__, + port_id); + ret = -EINVAL; + goto fail_cmd; + } + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -EINVAL; + goto fail_cmd; + } + + if (this_afe.task != current) + this_afe.task = current; + + pr_debug("task_name = %s pid = %d\n", + this_afe.task->comm, this_afe.task->pid); + return 0; +fail_cmd: + return ret; +} + +int afe_loopback(u16 enable, u16 dst_port, u16 src_port) +{ + struct afe_loopback_command lb_cmd; + int ret = 0; + + ret = afe_q6_interface_prepare(); + if (ret != 0) + return ret; + + if ((afe_get_port_type(dst_port) == MSM_AFE_PORT_TYPE_RX) && + (afe_get_port_type(src_port) == MSM_AFE_PORT_TYPE_RX)) + return afe_loopback_cfg(enable, dst_port, src_port, + LB_MODE_EC_REF_VOICE_AUDIO); + + lb_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(20), APR_PKT_VER); + lb_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(lb_cmd) - APR_HDR_SIZE); + lb_cmd.hdr.src_port = 0; + lb_cmd.hdr.dest_port = 0; + lb_cmd.hdr.token = 0; + lb_cmd.hdr.opcode = AFE_PORT_CMD_LOOPBACK; + lb_cmd.tx_port_id = src_port; + lb_cmd.rx_port_id = dst_port; + lb_cmd.mode = 0xFFFF; + lb_cmd.enable = (enable ? 
1 : 0); + atomic_set(&this_afe.state, 1); + + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &lb_cmd); + if (ret < 0) { + pr_err("%s: AFE loopback failed\n", __func__); + ret = -EINVAL; + goto done; + } + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -EINVAL; + } +done: + return ret; +} + +int afe_loopback_cfg(u16 enable, u16 dst_port, u16 src_port, u16 mode) +{ + struct afe_port_cmd_set_param lp_cfg; + int ret = 0; + + ret = afe_q6_interface_prepare(); + if (ret != 0) + return ret; + + pr_debug("%s: src_port %d, dst_port %d\n", + __func__, src_port, dst_port); + + lp_cfg.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + lp_cfg.hdr.pkt_size = sizeof(lp_cfg); + lp_cfg.hdr.src_port = 0; + lp_cfg.hdr.dest_port = 0; + lp_cfg.hdr.token = 0; + lp_cfg.hdr.opcode = AFE_PORT_CMD_SET_PARAM; + + lp_cfg.port_id = src_port; + lp_cfg.payload_size = sizeof(struct afe_param_payload_base) + + sizeof(struct afe_param_loopback_cfg); + lp_cfg.payload_address = 0; + + lp_cfg.payload.base.module_id = AFE_MODULE_LOOPBACK; + lp_cfg.payload.base.param_id = AFE_PARAM_ID_LOOPBACK_CONFIG; + lp_cfg.payload.base.param_size = sizeof(struct afe_param_loopback_cfg); + lp_cfg.payload.base.reserved = 0; + + lp_cfg.payload.param.loopback_cfg.loopback_cfg_minor_version = + AFE_API_VERSION_LOOPBACK_CONFIG; + lp_cfg.payload.param.loopback_cfg.dst_port_id = dst_port; + lp_cfg.payload.param.loopback_cfg.routing_mode = mode; + lp_cfg.payload.param.loopback_cfg.enable = enable; + lp_cfg.payload.param.loopback_cfg.reserved = 0; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &lp_cfg); + if (ret < 0) { + pr_err("%s: AFE loopback config failed for src_port %d, dst_port %d\n", + __func__, src_port, dst_port); + ret = -EINVAL; + goto fail_cmd; + } + + ret = wait_event_timeout(this_afe.wait, + 
(atomic_read(&this_afe.state) == 0),
+		msecs_to_jiffies(TIMEOUT_MS));
+	if (!ret) {
+		pr_err("%s: wait_event timeout\n", __func__);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+	return 0;
+fail_cmd:
+	return ret;
+}
+
+int afe_loopback_gain(u16 port_id, u16 volume)
+{
+	struct afe_port_cmd_set_param set_param;
+	int ret = 0;
+
+	if (this_afe.apr == NULL) {
+		this_afe.apr = apr_register("ADSP", "AFE", afe_callback,
+			0xFFFFFFFF, &this_afe);
+		pr_debug("%s: Register AFE\n", __func__);
+		if (this_afe.apr == NULL) {
+			pr_err("%s: Unable to register AFE\n", __func__);
+			ret = -ENODEV;
+			return ret;
+		}
+	}
+
+	if (afe_validate_port(port_id) < 0) {
+		pr_err("%s: Failed : Invalid Port id = %d\n", __func__,
+			port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	/* RX port numbers are even; TX port numbers are odd. */
+	if (port_id % 2 == 0) {
+		pr_err("%s: Failed : afe loopback gain only for TX ports."
+			" port_id %d\n", __func__, port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	pr_debug("%s: %d %hX\n", __func__, port_id, volume);
+
+	set_param.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+		APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+	set_param.hdr.pkt_size = sizeof(set_param);
+	set_param.hdr.src_port = 0;
+	set_param.hdr.dest_port = 0;
+	set_param.hdr.token = 0;
+	set_param.hdr.opcode = AFE_PORT_CMD_SET_PARAM;
+
+	set_param.port_id = port_id;
+	set_param.payload_size = sizeof(struct afe_param_payload_base) +
+		sizeof(struct afe_param_loopback_gain);
+	set_param.payload_address = 0;
+
+	set_param.payload.base.module_id = AFE_MODULE_ID_PORT_INFO;
+	set_param.payload.base.param_id = AFE_PARAM_ID_LOOPBACK_GAIN;
+	set_param.payload.base.param_size =
+		sizeof(struct afe_param_loopback_gain);
+	set_param.payload.base.reserved = 0;
+
+	set_param.payload.param.loopback_gain.gain = volume;
+	set_param.payload.param.loopback_gain.reserved = 0;
+
+	atomic_set(&this_afe.state, 1);
+	ret = apr_send_pkt(this_afe.apr, (uint32_t *) &set_param);
+	if (ret < 0) {
+		pr_err("%s: AFE
param set failed for port %d\n",
+			__func__, port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	ret = wait_event_timeout(this_afe.wait,
+		(atomic_read(&this_afe.state) == 0),
+		msecs_to_jiffies(TIMEOUT_MS));
+	if (!ret) {
+		pr_err("%s: wait_event timeout\n", __func__);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+	return 0;
+fail_cmd:
+	return ret;
+}
+
+int afe_apply_gain(u16 port_id, u16 gain)
+{
+	struct afe_port_gain_command set_gain;
+	int ret = 0;
+
+	if (this_afe.apr == NULL) {
+		pr_err("%s: AFE is not opened\n", __func__);
+		ret = -EPERM;
+		goto fail_cmd;
+	}
+
+	if (afe_validate_port(port_id) < 0) {
+		pr_err("%s: Failed : Invalid Port id = %d\n", __func__,
+			port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	/* RX port numbers are even; TX port numbers are odd. */
+	if (port_id % 2 == 0) {
+		pr_err("%s: Failed : afe apply gain only for TX ports."
+			" port_id %d\n", __func__, port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	pr_debug("%s: %d %hX\n", __func__, port_id, gain);
+
+	set_gain.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+		APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+	set_gain.hdr.pkt_size = sizeof(set_gain);
+	set_gain.hdr.src_port = 0;
+	set_gain.hdr.dest_port = 0;
+	set_gain.hdr.token = 0;
+	set_gain.hdr.opcode = AFE_PORT_CMD_APPLY_GAIN;
+
+	set_gain.port_id = port_id;
+	set_gain.gain = gain;
+
+	atomic_set(&this_afe.state, 1);
+	ret = apr_send_pkt(this_afe.apr, (uint32_t *) &set_gain);
+	if (ret < 0) {
+		pr_err("%s: AFE Gain set failed for port %d\n",
+			__func__, port_id);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+
+	ret = wait_event_timeout(this_afe.wait,
+		(atomic_read(&this_afe.state) == 0),
+		msecs_to_jiffies(TIMEOUT_MS));
+	if (!ret) {
+		pr_err("%s: wait_event timeout\n", __func__);
+		ret = -EINVAL;
+		goto fail_cmd;
+	}
+	return 0;
+fail_cmd:
+	return ret;
+}
+
+int afe_pseudo_port_start_nowait(u16 port_id)
+{
+	int ret = 0;
+	struct afe_pseudoport_start_command start;
+
+	pr_debug("%s: port_id=%d\n", __func__, port_id);
+	if
(this_afe.apr == NULL) { + pr_err("%s: AFE APR is not registered\n", __func__); + return -ENODEV; + } + + + start.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + start.hdr.pkt_size = sizeof(start); + start.hdr.src_port = 0; + start.hdr.dest_port = 0; + start.hdr.token = 0; + start.hdr.opcode = AFE_PSEUDOPORT_CMD_START; + start.port_id = port_id; + start.timing = 1; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &start); + if (ret < 0) { + pr_err("%s: AFE enable for port %d failed %d\n", + __func__, port_id, ret); + return -EINVAL; + } + return 0; +} + +int afe_start_pseudo_port(u16 port_id) +{ + int ret = 0; + struct afe_pseudoport_start_command start; + + pr_debug("%s: port_id=%d\n", __func__, port_id); + + ret = afe_q6_interface_prepare(); + if (ret != 0) + return ret; + + start.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + start.hdr.pkt_size = sizeof(start); + start.hdr.src_port = 0; + start.hdr.dest_port = 0; + start.hdr.token = 0; + start.hdr.opcode = AFE_PSEUDOPORT_CMD_START; + start.port_id = port_id; + start.timing = 1; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &start); + if (ret < 0) { + pr_err("%s: AFE enable for port %d failed %d\n", + __func__, port_id, ret); + return -EINVAL; + } + + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + return -EINVAL; + } + + return 0; +} + +int afe_pseudo_port_stop_nowait(u16 port_id) +{ + int ret = 0; + struct afe_pseudoport_stop_command stop; + + pr_debug("%s: port_id=%d\n", __func__, port_id); + + if (this_afe.apr == NULL) { + pr_err("%s: AFE is already closed\n", __func__); + return -EINVAL; + } + + stop.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + stop.hdr.pkt_size 
= sizeof(stop); + stop.hdr.src_port = 0; + stop.hdr.dest_port = 0; + stop.hdr.token = 0; + stop.hdr.opcode = AFE_PSEUDOPORT_CMD_STOP; + stop.port_id = port_id; + stop.reserved = 0; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &stop); + if (ret < 0) { + pr_err("%s: AFE close failed %d\n", __func__, ret); + return -EINVAL; + } + + return 0; + +} + +int afe_stop_pseudo_port(u16 port_id) +{ + int ret = 0; + struct afe_pseudoport_stop_command stop; + + pr_debug("%s: port_id=%d\n", __func__, port_id); + + if (this_afe.apr == NULL) { + pr_err("%s: AFE is already closed\n", __func__); + return -EINVAL; + } + + stop.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + stop.hdr.pkt_size = sizeof(stop); + stop.hdr.src_port = 0; + stop.hdr.dest_port = 0; + stop.hdr.token = 0; + stop.hdr.opcode = AFE_PSEUDOPORT_CMD_STOP; + stop.port_id = port_id; + stop.reserved = 0; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &stop); + if (ret < 0) { + pr_err("%s: AFE close failed %d\n", __func__, ret); + return -EINVAL; + } + + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + return -EINVAL; + } + + return 0; +} + +int afe_cmd_memory_map(u32 dma_addr_p, u32 dma_buf_sz) +{ + int ret = 0; + struct afe_cmd_memory_map mregion; + + pr_debug("%s:\n", __func__); + + if (this_afe.apr == NULL) { + this_afe.apr = apr_register("ADSP", "AFE", afe_callback, + 0xFFFFFFFF, &this_afe); + pr_debug("%s: Register AFE\n", __func__); + if (this_afe.apr == NULL) { + pr_err("%s: Unable to register AFE\n", __func__); + ret = -ENODEV; + return ret; + } + } + + mregion.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + mregion.hdr.pkt_size = sizeof(mregion); + mregion.hdr.src_port = 0; + mregion.hdr.dest_port = 0; + 
mregion.hdr.token = 0; + mregion.hdr.opcode = AFE_SERVICE_CMD_MEMORY_MAP; + mregion.phy_addr = dma_addr_p; + mregion.mem_sz = dma_buf_sz; + mregion.mem_id = 0; + mregion.rsvd = 0; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &mregion); + if (ret < 0) { + pr_err("%s: AFE memory map cmd failed %d\n", + __func__, ret); + ret = -EINVAL; + return ret; + } + + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -EINVAL; + return ret; + } + + return 0; +} + +int afe_cmd_memory_map_nowait(u32 dma_addr_p, u32 dma_buf_sz) +{ + int ret = 0; + struct afe_cmd_memory_map mregion; + + pr_debug("%s:\n", __func__); + + if (this_afe.apr == NULL) { + this_afe.apr = apr_register("ADSP", "AFE", afe_callback, + 0xFFFFFFFF, &this_afe); + pr_debug("%s: Register AFE\n", __func__); + if (this_afe.apr == NULL) { + pr_err("%s: Unable to register AFE\n", __func__); + ret = -ENODEV; + return ret; + } + } + + mregion.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + mregion.hdr.pkt_size = sizeof(mregion); + mregion.hdr.src_port = 0; + mregion.hdr.dest_port = 0; + mregion.hdr.token = 0; + mregion.hdr.opcode = AFE_SERVICE_CMD_MEMORY_MAP; + mregion.phy_addr = dma_addr_p; + mregion.mem_sz = dma_buf_sz; + mregion.mem_id = 0; + mregion.rsvd = 0; + + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &mregion); + if (ret < 0) { + pr_err("%s: AFE memory map cmd failed %d\n", + __func__, ret); + ret = -EINVAL; + } + return 0; +} + +int afe_cmd_memory_unmap(u32 dma_addr_p) +{ + int ret = 0; + struct afe_cmd_memory_unmap mregion; + + pr_debug("%s:\n", __func__); + + if (this_afe.apr == NULL) { + this_afe.apr = apr_register("ADSP", "AFE", afe_callback, + 0xFFFFFFFF, &this_afe); + pr_debug("%s: Register AFE\n", __func__); + if (this_afe.apr == NULL) { + pr_err("%s: Unable to register AFE\n", 
__func__); + ret = -ENODEV; + return ret; + } + } + + mregion.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + mregion.hdr.pkt_size = sizeof(mregion); + mregion.hdr.src_port = 0; + mregion.hdr.dest_port = 0; + mregion.hdr.token = 0; + mregion.hdr.opcode = AFE_SERVICE_CMD_MEMORY_UNMAP; + mregion.phy_addr = dma_addr_p; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &mregion); + if (ret < 0) { + pr_err("%s: AFE memory unmap cmd failed %d\n", + __func__, ret); + ret = -EINVAL; + return ret; + } + + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -EINVAL; + return ret; + } + return 0; +} + +int afe_cmd_memory_unmap_nowait(u32 dma_addr_p) +{ + int ret = 0; + struct afe_cmd_memory_unmap mregion; + + pr_debug("%s:\n", __func__); + + if (this_afe.apr == NULL) { + this_afe.apr = apr_register("ADSP", "AFE", afe_callback, + 0xFFFFFFFF, &this_afe); + pr_debug("%s: Register AFE\n", __func__); + if (this_afe.apr == NULL) { + pr_err("%s: Unable to register AFE\n", __func__); + ret = -ENODEV; + return ret; + } + } + + mregion.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + mregion.hdr.pkt_size = sizeof(mregion); + mregion.hdr.src_port = 0; + mregion.hdr.dest_port = 0; + mregion.hdr.token = 0; + mregion.hdr.opcode = AFE_SERVICE_CMD_MEMORY_UNMAP; + mregion.phy_addr = dma_addr_p; + + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &mregion); + if (ret < 0) { + pr_err("%s: AFE memory unmap cmd failed %d\n", + __func__, ret); + ret = -EINVAL; + } + return 0; +} + +int afe_register_get_events(u16 port_id, + void (*cb) (uint32_t opcode, + uint32_t token, uint32_t *payload, void *priv), + void *private_data) +{ + int ret = 0; + struct afe_cmd_reg_rtport rtproxy; + + pr_debug("%s:\n", __func__); + + if (this_afe.apr == NULL) { + 
this_afe.apr = apr_register("ADSP", "AFE", afe_callback, + 0xFFFFFFFF, &this_afe); + pr_debug("%s: Register AFE\n", __func__); + if (this_afe.apr == NULL) { + pr_err("%s: Unable to register AFE\n", __func__); + ret = -ENODEV; + return ret; + } + } + if ((port_id == RT_PROXY_DAI_002_RX) || + (port_id == RT_PROXY_DAI_001_TX)) + port_id = VIRTUAL_ID_TO_PORTID(port_id); + else + return -EINVAL; + + if (port_id == RT_PROXY_PORT_001_TX) { + this_afe.tx_cb = cb; + this_afe.tx_private_data = private_data; + } else if (port_id == RT_PROXY_PORT_001_RX) { + this_afe.rx_cb = cb; + this_afe.rx_private_data = private_data; + } + + rtproxy.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + rtproxy.hdr.pkt_size = sizeof(rtproxy); + rtproxy.hdr.src_port = 1; + rtproxy.hdr.dest_port = 1; + rtproxy.hdr.token = 0; + rtproxy.hdr.opcode = AFE_SERVICE_CMD_REG_RTPORT; + rtproxy.port_id = port_id; + rtproxy.rsvd = 0; + + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &rtproxy); + if (ret < 0) { + pr_err("%s: AFE reg. 
rtproxy_event failed %d\n", + __func__, ret); + ret = -EINVAL; + return ret; + } + return 0; +} + +int afe_unregister_get_events(u16 port_id) +{ + int ret = 0; + struct afe_cmd_unreg_rtport rtproxy; + + pr_debug("%s:\n", __func__); + + if (this_afe.apr == NULL) { + this_afe.apr = apr_register("ADSP", "AFE", afe_callback, + 0xFFFFFFFF, &this_afe); + pr_debug("%s: Register AFE\n", __func__); + if (this_afe.apr == NULL) { + pr_err("%s: Unable to register AFE\n", __func__); + ret = -ENODEV; + return ret; + } + } + if ((port_id == RT_PROXY_DAI_002_RX) || + (port_id == RT_PROXY_DAI_001_TX)) + port_id = VIRTUAL_ID_TO_PORTID(port_id); + else + return -EINVAL; + + rtproxy.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + rtproxy.hdr.pkt_size = sizeof(rtproxy); + rtproxy.hdr.src_port = 0; + rtproxy.hdr.dest_port = 0; + rtproxy.hdr.token = 0; + rtproxy.hdr.opcode = AFE_SERVICE_CMD_UNREG_RTPORT; + rtproxy.port_id = port_id; + rtproxy.rsvd = 0; + + if (port_id == RT_PROXY_PORT_001_TX) { + this_afe.tx_cb = NULL; + this_afe.tx_private_data = NULL; + } else if (port_id == RT_PROXY_PORT_001_RX) { + this_afe.rx_cb = NULL; + this_afe.rx_private_data = NULL; + } + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &rtproxy); + if (ret < 0) { + pr_err("%s: AFE enable Unreg. 
rtproxy_event failed %d\n", + __func__, ret); + ret = -EINVAL; + return ret; + } + + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -EINVAL; + return ret; + } + return 0; +} + +int afe_rt_proxy_port_write(u32 buf_addr_p, int bytes) +{ + int ret = 0; + struct afe_cmd_rtport_wr afecmd_wr; + + if (this_afe.apr == NULL) { + pr_err("%s:register to AFE is not done\n", __func__); + ret = -ENODEV; + return ret; + } + pr_debug("%s: buf_addr_p = 0x%08x bytes = %d\n", __func__, + buf_addr_p, bytes); + + afecmd_wr.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + afecmd_wr.hdr.pkt_size = sizeof(afecmd_wr); + afecmd_wr.hdr.src_port = 0; + afecmd_wr.hdr.dest_port = 0; + afecmd_wr.hdr.token = 0; + afecmd_wr.hdr.opcode = AFE_SERVICE_CMD_RTPORT_WR; + afecmd_wr.buf_addr = (uint32_t)buf_addr_p; + afecmd_wr.port_id = RT_PROXY_PORT_001_TX; + afecmd_wr.bytes_avail = bytes; + afecmd_wr.rsvd = 0; + + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &afecmd_wr); + if (ret < 0) { + pr_err("%s: AFE rtproxy write to port 0x%x failed %d\n", + __func__, afecmd_wr.port_id, ret); + ret = -EINVAL; + return ret; + } + return 0; + +} + +int afe_rt_proxy_port_read(u32 buf_addr_p, int bytes) +{ + int ret = 0; + struct afe_cmd_rtport_rd afecmd_rd; + + if (this_afe.apr == NULL) { + pr_err("%s: register to AFE is not done\n", __func__); + ret = -ENODEV; + return ret; + } + pr_debug("%s: buf_addr_p = 0x%08x bytes = %d\n", __func__, + buf_addr_p, bytes); + + afecmd_rd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + afecmd_rd.hdr.pkt_size = sizeof(afecmd_rd); + afecmd_rd.hdr.src_port = 0; + afecmd_rd.hdr.dest_port = 0; + afecmd_rd.hdr.token = 0; + afecmd_rd.hdr.opcode = AFE_SERVICE_CMD_RTPORT_RD; + afecmd_rd.buf_addr = (uint32_t)buf_addr_p; + afecmd_rd.port_id = RT_PROXY_PORT_001_RX; 
+	afecmd_rd.bytes_avail = bytes;
+	afecmd_rd.rsvd = 0;
+
+	ret = apr_send_pkt(this_afe.apr, (uint32_t *) &afecmd_rd);
+	if (ret < 0) {
+		pr_err("%s: AFE rtproxy read cmd to port 0x%x failed %d\n",
+			__func__, afecmd_rd.port_id, ret);
+		ret = -EINVAL;
+		return ret;
+	}
+	return 0;
+}
+
+#ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_afelb;
+static struct dentry *debugfs_afelb_gain;
+
+static int afe_debug_open(struct inode *inode, struct file *file)
+{
+	file->private_data = inode->i_private;
+	pr_info("debug intf %s\n", (char *) file->private_data);
+	return 0;
+}
+
+static int afe_get_parameters(char *buf, long int *param1, int num_of_par)
+{
+	char *token;
+	int base, cnt;
+
+	token = strsep(&buf, " ");
+
+	for (cnt = 0; cnt < num_of_par; cnt++) {
+		if (token != NULL) {
+			if ((token[1] == 'x') || (token[1] == 'X'))
+				base = 16;
+			else
+				base = 10;
+
+			if (kstrtoul(token, base, &param1[cnt]) != 0)
+				return -EINVAL;
+
+			token = strsep(&buf, " ");
+		} else
+			return -EINVAL;
+	}
+	return 0;
+}
+#define AFE_LOOPBACK_ON (1)
+#define AFE_LOOPBACK_OFF (0)
+static ssize_t afe_debug_write(struct file *filp,
+	const char __user *ubuf, size_t cnt, loff_t *ppos)
+{
+	char *lb_str = filp->private_data;
+	char lbuf[32];
+	int rc;
+	unsigned long param[5];
+
+	if (cnt > sizeof(lbuf) - 1)
+		return -EINVAL;
+
+	rc = copy_from_user(lbuf, ubuf, cnt);
+	if (rc)
+		return -EFAULT;
+
+	lbuf[cnt] = '\0';
+
+	if (!strcmp(lb_str, "afe_loopback")) {
+		rc = afe_get_parameters(lbuf, param, 3);
+		if (!rc) {
+			pr_info("%s %lu %lu %lu\n", lb_str, param[0], param[1],
+				param[2]);
+
+			if ((param[0] != AFE_LOOPBACK_ON) && (param[0] !=
+				AFE_LOOPBACK_OFF)) {
+				pr_err("%s: Error, parameter 0 incorrect\n",
+					__func__);
+				rc = -EINVAL;
+				goto afe_error;
+			}
+			if ((afe_validate_port(param[1]) < 0) ||
+				(afe_validate_port(param[2])) < 0) {
+				pr_err("%s: Error, invalid afe port\n",
+					__func__);
+			}
+			if (this_afe.apr == NULL) {
+				pr_err("%s: Error, AFE not opened\n", __func__);
+				rc = -EINVAL;
+			} else 
{ + rc = afe_loopback(param[0], param[1], param[2]); + } + } else { + pr_err("%s: Error, invalid parameters\n", __func__); + rc = -EINVAL; + } + + } else if (!strcmp(lb_str, "afe_loopback_gain")) { + rc = afe_get_parameters(lbuf, param, 2); + if (!rc) { + pr_info("%s %lu %lu\n", lb_str, param[0], param[1]); + + if (afe_validate_port(param[0]) < 0) { + pr_err("%s: Error, invalid afe port\n", + __func__); + rc = -EINVAL; + goto afe_error; + } + + if (param[1] > 100) { + pr_err("%s: Error, volume shoud be 0 to 100" + " percentage param = %lu\n", + __func__, param[1]); + rc = -EINVAL; + goto afe_error; + } + + param[1] = (Q6AFE_MAX_VOLUME * param[1]) / 100; + + if (this_afe.apr == NULL) { + pr_err("%s: Error, AFE not opened\n", __func__); + rc = -EINVAL; + } else { + rc = afe_loopback_gain(param[0], param[1]); + } + } else { + pr_err("%s: Error, invalid parameters\n", __func__); + rc = -EINVAL; + } + } + +afe_error: + if (rc == 0) + rc = cnt; + else + pr_err("%s: rc = %d\n", __func__, rc); + + return rc; +} + +static const struct file_operations afe_debug_fops = { + .open = afe_debug_open, + .write = afe_debug_write +}; +#endif +int afe_sidetone(u16 tx_port_id, u16 rx_port_id, u16 enable, uint16_t gain) +{ + struct afe_port_sidetone_command cmd_sidetone; + int ret = 0; + + pr_info("%s: tx_port_id:%d rx_port_id:%d enable:%d gain:%d\n", __func__, + tx_port_id, rx_port_id, enable, gain); + cmd_sidetone.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cmd_sidetone.hdr.pkt_size = sizeof(cmd_sidetone); + cmd_sidetone.hdr.src_port = 0; + cmd_sidetone.hdr.dest_port = 0; + cmd_sidetone.hdr.token = 0; + cmd_sidetone.hdr.opcode = AFE_PORT_CMD_SIDETONE_CTL; + cmd_sidetone.tx_port_id = tx_port_id; + cmd_sidetone.rx_port_id = rx_port_id; + cmd_sidetone.gain = gain; + cmd_sidetone.enable = enable; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &cmd_sidetone); + if (ret < 0) { + pr_err("%s: AFE 
sidetone failed for tx_port:%d rx_port:%d\n", + __func__, tx_port_id, rx_port_id); + ret = -EINVAL; + goto fail_cmd; + } + + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (ret < 0) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -EINVAL; + goto fail_cmd; + } + return 0; +fail_cmd: + return ret; +} + +int afe_port_stop_nowait(int port_id) +{ + struct afe_port_stop_command stop; + int ret = 0; + + if (this_afe.apr == NULL) { + pr_err("AFE is already closed\n"); + ret = -EINVAL; + goto fail_cmd; + } + pr_debug("%s: port_id=%d\n", __func__, port_id); + port_id = afe_convert_virtual_to_portid(port_id); + + stop.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + stop.hdr.pkt_size = sizeof(stop); + stop.hdr.src_port = 0; + stop.hdr.dest_port = 0; + stop.hdr.token = 0; + stop.hdr.opcode = AFE_PORT_CMD_STOP; + stop.port_id = port_id; + stop.reserved = 0; + + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &stop); + + if (ret == -ENETRESET) { + pr_info("%s: Need to reset, calling APR deregister", __func__); + return apr_deregister(this_afe.apr); + } else if (IS_ERR_VALUE(ret)) { + pr_err("%s: AFE close failed\n", __func__); + ret = -EINVAL; + } + +fail_cmd: + return ret; + +} + +int afe_close(int port_id) +{ + struct afe_port_stop_command stop; + int ret = 0; + + if (this_afe.apr == NULL) { + pr_err("AFE is already closed\n"); + ret = -EINVAL; + goto fail_cmd; + } + pr_debug("%s: port_id=%d\n", __func__, port_id); + + if ((port_id == RT_PROXY_DAI_001_RX) || + (port_id == RT_PROXY_DAI_002_TX)) { + pr_debug("%s: before decrementing pcm_afe_instance %d\n", + __func__, pcm_afe_instance[port_id & 0x1]); + port_id = VIRTUAL_ID_TO_PORTID(port_id); + pcm_afe_instance[port_id & 0x1]--; + if (!(pcm_afe_instance[port_id & 0x1] == 0 && + proxy_afe_instance[port_id & 0x1] == 0)) + return 0; + else + afe_close_done[port_id & 0x1] = true; + } + + if ((port_id == 
RT_PROXY_DAI_002_RX) || + (port_id == RT_PROXY_DAI_001_TX)) { + pr_debug("%s: before decrementing proxy_afe_instance %d\n", + __func__, proxy_afe_instance[port_id & 0x1]); + port_id = VIRTUAL_ID_TO_PORTID(port_id); + proxy_afe_instance[port_id & 0x1]--; + if (!(pcm_afe_instance[port_id & 0x1] == 0 && + proxy_afe_instance[port_id & 0x1] == 0)) + return 0; + else + afe_close_done[port_id & 0x1] = true; + } + + port_id = afe_convert_virtual_to_portid(port_id); + + stop.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + stop.hdr.pkt_size = sizeof(stop); + stop.hdr.src_port = 0; + stop.hdr.dest_port = 0; + stop.hdr.token = 0; + stop.hdr.opcode = AFE_PORT_CMD_STOP; + stop.port_id = port_id; + stop.reserved = 0; + + atomic_set(&this_afe.state, 1); + ret = apr_send_pkt(this_afe.apr, (uint32_t *) &stop); + + if (ret == -ENETRESET) { + pr_info("%s: Need to reset, calling APR deregister", __func__); + return apr_deregister(this_afe.apr); + } + + if (ret < 0) { + pr_err("%s: AFE close failed\n", __func__); + ret = -EINVAL; + goto fail_cmd; + } + + ret = wait_event_timeout(this_afe.wait, + (atomic_read(&this_afe.state) == 0), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -EINVAL; + goto fail_cmd; + } +fail_cmd: + return ret; +} + +static int __init afe_init(void) +{ + init_waitqueue_head(&this_afe.wait); + atomic_set(&this_afe.state, 0); + atomic_set(&this_afe.status, 0); + this_afe.apr = NULL; +#ifdef CONFIG_DEBUG_FS + debugfs_afelb = debugfs_create_file("afe_loopback", + 0220, NULL, (void *) "afe_loopback", + &afe_debug_fops); + + debugfs_afelb_gain = debugfs_create_file("afe_loopback_gain", + 0220, NULL, (void *) "afe_loopback_gain", + &afe_debug_fops); + + +#endif + return 0; +} + +static void __exit afe_exit(void) +{ + int i; +#ifdef CONFIG_DEBUG_FS + if (debugfs_afelb) + debugfs_remove(debugfs_afelb); + if (debugfs_afelb_gain) + debugfs_remove(debugfs_afelb_gain); 
+#endif + for (i = 0; i < MAX_AUDPROC_TYPES; i++) { + if (afe_cal_addr[i].cal_paddr != 0) + afe_cmd_memory_unmap_nowait( + afe_cal_addr[i].cal_paddr); + } +} + +device_initcall(afe_init); +__exitcall(afe_exit); diff --git a/sound/soc/msm/qdsp6/q6asm.c b/sound/soc/msm/qdsp6/q6asm.c new file mode 100644 index 000000000000..81cf7c6d198c --- /dev/null +++ b/sound/soc/msm/qdsp6/q6asm.c @@ -0,0 +1,3846 @@ +/* + * Copyright (c) 2010-2013, The Linux Foundation. All rights reserved. + * Author: Brian Swetland <swetland@google.com> + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ +#include <linux/fs.h> +#include <linux/kernel.h> +#include <linux/mutex.h> +#include <linux/wait.h> +#include <linux/miscdevice.h> +#include <linux/uaccess.h> +#include <linux/sched.h> +#include <linux/dma-mapping.h> +#include <linux/miscdevice.h> +#include <linux/delay.h> +#include <linux/spinlock.h> +#include <linux/slab.h> +#include <linux/msm_audio.h> +//#include <linux/android_pmem.h> +//#include <linux/memory_alloc.h> +#include <linux/debugfs.h> +#include <linux/time.h> +#include <linux/atomic.h> +#include <linux/dma-mapping.h> + +#include <asm/ioctls.h> + +//#include <mach/memory.h> +//#include <mach/debug_mm.h> +#include <linux/peripheral-loader.h> +#include <sound/qdsp6v2/audio_acdb.h> +#include <sound/qdsp6v2/rtac.h> + +#include <sound/apr_audio.h> +#include <sound/q6asm.h> + + +#define TRUE 0x01 +#define FALSE 0x00 +#define READDONE_IDX_STATUS 0 +#define READDONE_IDX_BUFFER 1 +#define READDONE_IDX_SIZE 2 +#define READDONE_IDX_OFFSET 3 +#define READDONE_IDX_MSW_TS 4 
+#define READDONE_IDX_LSW_TS 5 +#define READDONE_IDX_FLAGS 6 +#define READDONE_IDX_NUMFRAMES 7 +#define READDONE_IDX_ID 8 +#ifdef CONFIG_DEBUG_FS +#define OUT_BUFFER_SIZE 56 +#define IN_BUFFER_SIZE 24 +#endif +static DEFINE_MUTEX(session_lock); + +/* session id: 0 reserved */ +static struct audio_client *session[SESSION_MAX+1]; +static int32_t q6asm_mmapcallback(struct apr_client_data *data, void *priv); +static int32_t q6asm_callback(struct apr_client_data *data, void *priv); +static void q6asm_add_hdr(struct audio_client *ac, struct apr_hdr *hdr, + uint32_t pkt_size, uint32_t cmd_flg); +static void q6asm_add_hdr_async(struct audio_client *ac, struct apr_hdr *hdr, + uint32_t pkt_size, uint32_t cmd_flg); +static int q6asm_memory_map_regions(struct audio_client *ac, int dir, + uint32_t bufsz, uint32_t bufcnt); +static int q6asm_memory_unmap_regions(struct audio_client *ac, int dir, + uint32_t bufsz, uint32_t bufcnt); + +static void q6asm_reset_buf_state(struct audio_client *ac); + +#ifdef CONFIG_DEBUG_FS +static struct timeval out_cold_tv; +static struct timeval out_warm_tv; +static struct timeval out_cont_tv; +static struct timeval in_cont_tv; +static long out_enable_flag; +static long in_enable_flag; +static struct dentry *out_dentry; +static struct dentry *in_dentry; +static int in_cont_index; +/*This var is used to keep track of first write done for cold output latency */ +static int out_cold_index; +static char *out_buffer; +static char *in_buffer; +static int audio_output_latency_dbgfs_open(struct inode *inode, + struct file *file) +{ + file->private_data = inode->i_private; + return 0; +} +static ssize_t audio_output_latency_dbgfs_read(struct file *file, + char __user *buf, size_t count, loff_t *ppos) +{ + snprintf(out_buffer, OUT_BUFFER_SIZE, "%ld,%ld,%ld,%ld,%ld,%ld,",\ + out_cold_tv.tv_sec, out_cold_tv.tv_usec, out_warm_tv.tv_sec,\ + out_warm_tv.tv_usec, out_cont_tv.tv_sec, out_cont_tv.tv_usec); + return simple_read_from_buffer(buf, OUT_BUFFER_SIZE, ppos, 
+ out_buffer, OUT_BUFFER_SIZE); +} +static ssize_t audio_output_latency_dbgfs_write(struct file *file, + const char __user *buf, size_t count, loff_t *ppos) +{ + char *temp; + + if (count > 2*sizeof(char)) + return -EINVAL; + else + temp = kmalloc(2*sizeof(char), GFP_KERNEL); + + out_cold_index = 0; + + if (temp) { + if (copy_from_user(temp, buf, 2*sizeof(char))) { + kfree(temp); + return -EFAULT; + } + if (!kstrtol(temp, 10, &out_enable_flag)) { + kfree(temp); + return count; + } + kfree(temp); + } + return -EINVAL; +} +static const struct file_operations audio_output_latency_debug_fops = { + .open = audio_output_latency_dbgfs_open, + .read = audio_output_latency_dbgfs_read, + .write = audio_output_latency_dbgfs_write +}; + +static int audio_input_latency_dbgfs_open(struct inode *inode, + struct file *file) +{ + file->private_data = inode->i_private; + return 0; +} +static ssize_t audio_input_latency_dbgfs_read(struct file *file, + char __user *buf, size_t count, loff_t *ppos) +{ + snprintf(in_buffer, IN_BUFFER_SIZE, "%ld,%ld,",\ + in_cont_tv.tv_sec, in_cont_tv.tv_usec); + return simple_read_from_buffer(buf, IN_BUFFER_SIZE, ppos, + in_buffer, IN_BUFFER_SIZE); +} +static ssize_t audio_input_latency_dbgfs_write(struct file *file, + const char __user *buf, size_t count, loff_t *ppos) +{ + char *temp; + + if (count > 2*sizeof(char)) + return -EINVAL; + else + temp = kmalloc(2*sizeof(char), GFP_KERNEL); + if (temp) { + if (copy_from_user(temp, buf, 2*sizeof(char))) { + kfree(temp); + return -EFAULT; + } + if (!kstrtol(temp, 10, &in_enable_flag)) { + kfree(temp); + return count; + } + kfree(temp); + } + return -EINVAL; +} +static const struct file_operations audio_input_latency_debug_fops = { + .open = audio_input_latency_dbgfs_open, + .read = audio_input_latency_dbgfs_read, + .write = audio_input_latency_dbgfs_write +}; +#endif +struct asm_mmap { + atomic_t ref_cnt; + atomic_t cmd_state; + wait_queue_head_t cmd_wait; + void *apr; +}; + +static struct asm_mmap 
this_mmap; + +static int q6asm_session_alloc(struct audio_client *ac) +{ + int n; + mutex_lock(&session_lock); + for (n = 1; n <= SESSION_MAX; n++) { + if (!session[n]) { + session[n] = ac; + mutex_unlock(&session_lock); + return n; + } + } + mutex_unlock(&session_lock); + return -ENOMEM; +} + +static void q6asm_session_free(struct audio_client *ac) +{ + pr_debug("%s: sessionid[%d]\n", __func__, ac->session); + rtac_remove_popp_from_adm_devices(ac->session); + mutex_lock(&session_lock); + session[ac->session] = 0; + mutex_unlock(&session_lock); + ac->session = 0; + ac->perf_mode = false; + return; +} + +int q6asm_audio_client_buf_free(unsigned int dir, + struct audio_client *ac) +{ + struct audio_port_data *port; + int cnt = 0; + int rc = 0; + pr_debug("%s: Session id %d\n", __func__, ac->session); + mutex_lock(&ac->cmd_lock); + if (ac->io_mode & SYNC_IO_MODE) { + port = &ac->port[dir]; + if (!port->buf) { + mutex_unlock(&ac->cmd_lock); + return 0; + } + cnt = port->max_buf_cnt - 1; + + if (cnt >= 0) { + rc = q6asm_memory_unmap_regions(ac, dir, + port->buf[0].size, + port->max_buf_cnt); + if (rc < 0) + pr_err("%s CMD Memory_unmap_regions failed\n", + __func__); + } + + while (cnt >= 0) { + if (port->buf[cnt].data) { + pr_debug("%s:data[%p]phys[%p][%p] cnt[%d] mem_buffer[%p]\n", + __func__, (void *)port->buf[cnt].data, + (void *)port->buf[cnt].phys, + (void *)&port->buf[cnt].phys, cnt, + (void *)port->buf[cnt].mem_buffer); + + dma_free_writecombine(NULL, port->buf[cnt].size, port->buf[cnt].data, port->buf[cnt].phys); + + port->buf[cnt].data = NULL; + port->buf[cnt].phys = 0; + --(port->max_buf_cnt); + } + --cnt; + } + kfree(port->buf); + port->buf = NULL; + } + mutex_unlock(&ac->cmd_lock); + return 0; +} + +int q6asm_audio_client_buf_free_contiguous(unsigned int dir, + struct audio_client *ac) +{ + struct audio_port_data *port; + int cnt = 0; + int rc = 0; + pr_debug("%s: Session id %d\n", __func__, ac->session); + mutex_lock(&ac->cmd_lock); + port = &ac->port[dir]; 
+ if (!port->buf) { + mutex_unlock(&ac->cmd_lock); + return 0; + } + cnt = port->max_buf_cnt - 1; + + if (cnt >= 0) { + rc = q6asm_memory_unmap(ac, port->buf[0].phys, dir); + if (rc < 0) + pr_err("%s CMD Memory_unmap_regions failed\n", + __func__); + } + + if (port->buf[0].data) { + pr_debug("%s:data[%p]phys[%p][%p] mem_buffer[%p]\n", + __func__, + (void *)port->buf[0].data, + (void *)port->buf[0].phys, + (void *)&port->buf[0].phys, + (void *)port->buf[0].mem_buffer); + + + dma_free_writecombine(NULL, port->max_buf_cnt * port->buf[0].size, + port->buf[0].data, port->buf[0].phys); + } + + while (cnt >= 0) { + port->buf[cnt].data = NULL; + port->buf[cnt].phys = 0; + cnt--; + } + port->max_buf_cnt = 0; + kfree(port->buf); + port->buf = NULL; + mutex_unlock(&ac->cmd_lock); + return 0; +} + +void q6asm_audio_client_free(struct audio_client *ac) +{ + int loopcnt; + struct audio_port_data *port; + if (!ac || !ac->session) + return; + pr_debug("%s: Session id %d\n", __func__, ac->session); + if (ac->io_mode & SYNC_IO_MODE) { + for (loopcnt = 0; loopcnt <= OUT; loopcnt++) { + port = &ac->port[loopcnt]; + if (!port->buf) + continue; + pr_debug("%s:loopcnt = %d\n", __func__, loopcnt); + q6asm_audio_client_buf_free(loopcnt, ac); + } + } + + apr_deregister(ac->apr); + q6asm_session_free(ac); + + pr_debug("%s: APR De-Register\n", __func__); + if (atomic_read(&this_mmap.ref_cnt) <= 0) { + pr_err("%s: APR Common Port Already Closed\n", __func__); + goto done; + } + + atomic_dec(&this_mmap.ref_cnt); + if (atomic_read(&this_mmap.ref_cnt) == 0) { + apr_deregister(this_mmap.apr); + pr_debug("%s:APR De-Register common port\n", __func__); + } +done: + kfree(ac); + return; +} + +int q6asm_set_io_mode(struct audio_client *ac, uint32_t mode) +{ + if (ac == NULL) { + pr_err("%s APR handle NULL\n", __func__); + return -EINVAL; + } + + if (mode == ASYNC_IO_MODE) { + ac->io_mode &= ~SYNC_IO_MODE; + ac->io_mode |= ASYNC_IO_MODE; + } else if (mode == SYNC_IO_MODE) { + ac->io_mode &= 
~ASYNC_IO_MODE; + ac->io_mode |= SYNC_IO_MODE; + } else { + pr_err("%s:Not an valid IO Mode:%d\n", __func__, ac->io_mode); + return -EINVAL; + } + + pr_debug("%s:Set Mode to %d\n", __func__, ac->io_mode); + return 0; +} + +struct audio_client *q6asm_audio_client_alloc(app_cb cb, void *priv) +{ + struct audio_client *ac; + int n; + int lcnt = 0; + + ac = kzalloc(sizeof(struct audio_client), GFP_KERNEL); + if (!ac) + return NULL; + n = q6asm_session_alloc(ac); + if (n <= 0) + goto fail_session; + ac->session = n; + ac->cb = cb; + ac->priv = priv; + ac->io_mode = SYNC_IO_MODE; + ac->perf_mode = false; + ac->apr = apr_register("ADSP", "ASM", \ + (apr_fn)q6asm_callback,\ + ((ac->session) << 8 | 0x0001),\ + ac); + + if (ac->apr == NULL) { + pr_err("%s Registration with APR failed\n", __func__); + goto fail; + } + rtac_set_asm_handle(n, ac->apr); + + pr_debug("%s Registering the common port with APR\n", __func__); + if (atomic_read(&this_mmap.ref_cnt) == 0) { + this_mmap.apr = apr_register("ADSP", "ASM", \ + (apr_fn)q6asm_mmapcallback,\ + 0x0FFFFFFFF, &this_mmap); + if (this_mmap.apr == NULL) { + pr_debug("%s Unable to register APR ASM common port\n", + __func__); + goto fail; + } + } + + atomic_inc(&this_mmap.ref_cnt); + init_waitqueue_head(&ac->cmd_wait); + init_waitqueue_head(&ac->time_wait); + atomic_set(&ac->time_flag, 1); + mutex_init(&ac->cmd_lock); + for (lcnt = 0; lcnt <= OUT; lcnt++) { + mutex_init(&ac->port[lcnt].lock); + spin_lock_init(&ac->port[lcnt].dsp_lock); + } + atomic_set(&ac->cmd_state, 0); + atomic_set(&ac->cmd_response, 0); + + pr_debug("%s: session[%d]\n", __func__, ac->session); + + return ac; +fail: + q6asm_audio_client_free(ac); + return NULL; +fail_session: + kfree(ac); + return NULL; +} + +struct audio_client *q6asm_get_audio_client(int session_id) +{ + if ((session_id <= 0) || (session_id > SESSION_MAX)) { + pr_err("%s: invalid session: %d\n", __func__, session_id); + goto err; + } + + if (!session[session_id]) { + pr_err("%s: session not 
active: %d\n", __func__, session_id); + goto err; + } + + return session[session_id]; +err: + return NULL; +} + +int q6asm_audio_client_buf_alloc(unsigned int dir, + struct audio_client *ac, + unsigned int bufsz, + unsigned int bufcnt) +{ + int cnt = 0; + int rc = 0; + struct audio_buffer *buf; + + if (!(ac) || ((dir != IN) && (dir != OUT))) + return -EINVAL; + + pr_debug("%s: session[%d]bufsz[%d]bufcnt[%d]\n", __func__, ac->session, + bufsz, bufcnt); + + if (ac->session <= 0 || ac->session > 8) + goto fail; + + if (ac->io_mode & SYNC_IO_MODE) { + if (ac->port[dir].buf) { + pr_debug("%s: buffer already allocated\n", __func__); + return 0; + } + mutex_lock(&ac->cmd_lock); + buf = kzalloc(((sizeof(struct audio_buffer))*bufcnt), + GFP_KERNEL); + + if (!buf) { + mutex_unlock(&ac->cmd_lock); + goto fail; + } + + ac->port[dir].buf = buf; + + while (cnt < bufcnt) { + if (bufsz > 0) { + if (!buf[cnt].data) { + buf[cnt].size = bufsz; + buf[cnt].data = dma_alloc_writecombine(NULL, bufsz, + &buf[cnt].phys, GFP_KERNEL); + if (WARN_ON(IS_ERR_OR_NULL(buf[0].data))) { + pr_err("%s: allocation failed\n", __func__); + goto fail; + } + + + buf[cnt].used = 1; + buf[cnt].size = bufsz; + buf[cnt].actual_size = bufsz; + pr_debug("%s data[%p]phys[%p][%p]\n", + __func__, + (void *)buf[cnt].data, + (void *)buf[cnt].phys, + (void *)&buf[cnt].phys); + cnt++; + } + } + } + ac->port[dir].max_buf_cnt = cnt; + + mutex_unlock(&ac->cmd_lock); + rc = q6asm_memory_map_regions(ac, dir, bufsz, cnt); + if (rc < 0) { + pr_err("%s:CMD Memory_map_regions failed\n", __func__); + goto fail; + } + } + return 0; +fail: + q6asm_audio_client_buf_free(dir, ac); + return -EINVAL; +} + +int q6asm_audio_client_buf_alloc_contiguous(unsigned int dir, + struct audio_client *ac, + unsigned int bufsz, + unsigned int bufcnt) +{ + int cnt = 0; + int rc = 0; + struct audio_buffer *buf; + if (!(ac) || ((dir != IN) && (dir != OUT))) + return -EINVAL; + + pr_debug("%s: session[%d]bufsz[%d]bufcnt[%d]\n", + __func__, 
ac->session, + bufsz, bufcnt); + + if (ac->session <= 0 || ac->session > 8) + goto fail; + + if (ac->port[dir].buf) { + pr_debug("%s: buffer already allocated\n", __func__); + return 0; + } + mutex_lock(&ac->cmd_lock); + buf = kzalloc(((sizeof(struct audio_buffer))*bufcnt), + GFP_KERNEL); + + if (!buf) { + mutex_unlock(&ac->cmd_lock); + goto fail; + } + + ac->port[dir].buf = buf; + + + buf[0].size = bufsz * bufcnt; + buf[0].data = dma_alloc_writecombine(NULL, bufsz *bufcnt, + &buf[0].phys, GFP_KERNEL); + if (WARN_ON(IS_ERR_OR_NULL(buf[0].data))) { + pr_err("%s: allocation failed\n", __func__); + goto fail; + } + + if (!buf[0].data) { + pr_err("%s:invalid vaddr, iomap failed\n", __func__); + mutex_unlock(&ac->cmd_lock); + goto fail; + } + + buf[0].used = dir ^ 1; + buf[0].size = bufsz; + buf[0].actual_size = bufsz; + cnt = 1; + while (cnt < bufcnt) { + if (bufsz > 0) { + buf[cnt].data = buf[0].data + (cnt * bufsz); + buf[cnt].phys = buf[0].phys + (cnt * bufsz); + if (!buf[cnt].data) { + pr_err("%s Buf alloc failed\n", + __func__); + mutex_unlock(&ac->cmd_lock); + goto fail; + } + buf[cnt].used = dir ^ 1; + buf[cnt].size = bufsz; + buf[cnt].actual_size = bufsz; + pr_debug("%s data[%p]phys[%p][%p]\n", __func__, + (void *)buf[cnt].data, + (void *)buf[cnt].phys, + (void *)&buf[cnt].phys); + } + cnt++; + } + ac->port[dir].max_buf_cnt = cnt; + + pr_debug("%s ac->port[%d].max_buf_cnt[%d]\n", __func__, dir, + ac->port[dir].max_buf_cnt); + mutex_unlock(&ac->cmd_lock); + rc = q6asm_memory_map(ac, buf[0].phys, dir, bufsz, cnt); + if (rc < 0) { + pr_err("%s:CMD Memory_map_regions failed\n", __func__); + goto fail; + } + return 0; +fail: + q6asm_audio_client_buf_free_contiguous(dir, ac); + return -EINVAL; +} + +static int32_t q6asm_mmapcallback(struct apr_client_data *data, void *priv) +{ + uint32_t token; + uint32_t *payload = data->payload; + + if (data->opcode == RESET_EVENTS) { + pr_debug("%s: Reset event is received: %d %d apr[%p]\n", + __func__, + data->reset_event, + 
data->reset_proc, + this_mmap.apr); + apr_reset(this_mmap.apr); + this_mmap.apr = NULL; + atomic_set(&this_mmap.cmd_state, 0); + return 0; + } + + pr_debug("%s:ptr0[0x%x]ptr1[0x%x]opcode[0x%x] token[0x%x]payload_s[%d] src[%d] dest[%d]\n", + __func__, payload[0], payload[1], data->opcode, data->token, + data->payload_size, data->src_port, data->dest_port); + + if (data->opcode == APR_BASIC_RSP_RESULT) { + token = data->token; + switch (payload[0]) { + case ASM_SESSION_CMD_MEMORY_MAP: + case ASM_SESSION_CMD_MEMORY_UNMAP: + case ASM_SESSION_CMD_MEMORY_MAP_REGIONS: + case ASM_SESSION_CMD_MEMORY_UNMAP_REGIONS: + pr_debug("%s:command[0x%x]success [0x%x]\n", + __func__, payload[0], payload[1]); + if (atomic_read(&this_mmap.cmd_state)) { + atomic_set(&this_mmap.cmd_state, 0); + wake_up(&this_mmap.cmd_wait); + } + break; + default: + pr_debug("%s:command[0x%x] not expecting rsp\n", + __func__, payload[0]); + break; + } + } + return 0; +} + +static int32_t is_no_wait_cmd_rsp(uint32_t opcode, uint32_t *cmd_type) +{ + if (opcode == APR_BASIC_RSP_RESULT) { + if (cmd_type != NULL) { + switch (cmd_type[0]) { + case ASM_SESSION_CMD_RUN: + case ASM_SESSION_CMD_PAUSE: + case ASM_DATA_CMD_EOS: + return 1; + default: + break; + } + } else + pr_err("%s: null pointer!", __func__); + } else if (opcode == ASM_DATA_CMDRSP_EOS) + return 1; + + return 0; +} + +static int32_t q6asm_callback(struct apr_client_data *data, void *priv) +{ + int i = 0; + struct audio_client *ac = (struct audio_client *)priv; + uint32_t token; + unsigned long dsp_flags; + uint32_t *payload; + uint32_t wakeup_flag = 1; + + + if ((ac == NULL) || (data == NULL)) { + pr_err("ac or priv NULL\n"); + return -EINVAL; + } + if (ac->session <= 0 || ac->session > 8) { + pr_err("%s:Session ID is invalid, session = %d\n", __func__, + ac->session); + return -EINVAL; + } + + payload = data->payload; + if ((atomic_read(&ac->nowait_cmd_cnt) > 0) && + is_no_wait_cmd_rsp(data->opcode, payload)) { + pr_debug("%s: nowait_cmd_cnt %d\n", 
+ __func__, + atomic_read(&ac->nowait_cmd_cnt)); + atomic_dec(&ac->nowait_cmd_cnt); + wakeup_flag = 0; + } + + if (data->opcode == RESET_EVENTS) { + pr_debug("q6asm_callback: Reset event is received: %d %d apr[%p]\n", + data->reset_event, data->reset_proc, ac->apr); + if (ac->cb) + ac->cb(data->opcode, data->token, + (uint32_t *)data->payload, ac->priv); + apr_reset(ac->apr); + return 0; + } + + pr_debug("%s: session[%d]opcode[0x%x] token[0x%x]payload_s[%d] src[%d] dest[%d]\n", + __func__, + ac->session, data->opcode, + data->token, data->payload_size, data->src_port, + data->dest_port); + + if (data->opcode == APR_BASIC_RSP_RESULT) { + token = data->token; + pr_debug("%s payload[0]:%x", __func__, payload[0]); + switch (payload[0]) { + case ASM_STREAM_CMD_SET_PP_PARAMS: + if (rtac_make_asm_callback(ac->session, payload, + data->payload_size)) + break; + case ASM_SESSION_CMD_PAUSE: + case ASM_DATA_CMD_EOS: + case ASM_STREAM_CMD_CLOSE: + case ASM_STREAM_CMD_FLUSH: + case ASM_SESSION_CMD_RUN: + case ASM_SESSION_CMD_REGISTER_FOR_TX_OVERFLOW_EVENTS: + case ASM_STREAM_CMD_FLUSH_READBUFS: + pr_debug("%s:Payload = [0x%x]\n", __func__, payload[0]); + if (token != ac->session) { + pr_err("%s:Invalid session[%d] rxed expected[%d]", + __func__, token, ac->session); + return -EINVAL; + } + case ASM_STREAM_CMD_OPEN_READ: + case ASM_STREAM_CMD_OPEN_READ_V2_1: + case ASM_STREAM_CMD_OPEN_WRITE: + case ASM_STREAM_CMD_OPEN_WRITE_V2_1: + case ASM_STREAM_CMD_OPEN_READWRITE: + case ASM_STREAM_CMD_OPEN_LOOPBACK: + case ASM_DATA_CMD_MEDIA_FORMAT_UPDATE: + case ASM_STREAM_CMD_SET_ENCDEC_PARAM: + case ASM_STREAM_CMD_OPEN_WRITE_COMPRESSED: + case ASM_STREAM_CMD_OPEN_READ_COMPRESSED: + if (payload[0] == ASM_STREAM_CMD_CLOSE) { + atomic_set(&ac->cmd_close_state, 0); + wake_up(&ac->cmd_wait); + } else if (atomic_read(&ac->cmd_state) && + wakeup_flag) { + atomic_set(&ac->cmd_state, 0); + if (payload[1] == ADSP_EUNSUPPORTED) { + pr_debug("payload[1]:%d unsupported", + payload[1]); + 
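The ADSP_EUNSUPPORTED branch above is easiest to follow in isolation: payload[1] carries the DSP status for a basic response, and a rejected command sets a response flag that the blocked caller (e.g. q6asm_open_write) inspects after being woken. A minimal userspace sketch of that bookkeeping — the ADSP_EUNSUPPORTED value and the plain-int tracker struct are illustrative stand-ins, not the driver's real definitions:

```c
#include <assert.h>
#include <stdint.h>

#define ADSP_EUNSUPPORTED 0x19  /* hypothetical stand-in for the real error code */

struct cmd_tracker {
	int cmd_state;    /* 1 while a blocking command is outstanding */
	int cmd_response; /* 1 if the DSP rejected the command */
};

/* Returns 1 when a sleeping waiter should be woken, 0 otherwise. */
static int handle_basic_rsp(struct cmd_tracker *t, const uint32_t *payload)
{
	if (!t->cmd_state)
		return 0;	/* nobody is waiting on this command */
	t->cmd_state = 0;	/* mark the command complete */
	t->cmd_response = (payload[1] == ADSP_EUNSUPPORTED) ? 1 : 0;
	return 1;
}
```

In the driver the same roles are played by the atomic cmd_state/cmd_response pair together with wake_up() on ac->cmd_wait.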
atomic_set(&ac->cmd_response, 1); + } + else + atomic_set(&ac->cmd_response, 0); + wake_up(&ac->cmd_wait); + } + if (ac->cb) + ac->cb(data->opcode, data->token, + (uint32_t *)data->payload, ac->priv); + break; + default: + pr_debug("%s:command[0x%x] not expecting rsp\n", + __func__, payload[0]); + break; + } + return 0; + } + + switch (data->opcode) { + case ASM_DATA_EVENT_WRITE_DONE:{ + struct audio_port_data *port = &ac->port[IN]; + pr_debug("%s: Rxed opcode[0x%x] status[0x%x] token[%d]", + __func__, payload[0], payload[1], + data->token); + if (ac->io_mode & SYNC_IO_MODE) { + if (port->buf == NULL) { + pr_err("%s: Unexpected Write Done\n", + __func__); + return -EINVAL; + } + spin_lock_irqsave(&port->dsp_lock, dsp_flags); + if (port->buf[data->token].phys != + payload[0]) { + pr_err("Buf expected[%p]rxed[%p]\n",\ + (void *)port->buf[data->token].phys,\ + (void *)payload[0]); + spin_unlock_irqrestore(&port->dsp_lock, + dsp_flags); + return -EINVAL; + } + token = data->token; + port->buf[token].used = 1; + spin_unlock_irqrestore(&port->dsp_lock, dsp_flags); +#ifdef CONFIG_DEBUG_FS + if (out_enable_flag) { + /* For first Write done log the time and reset + out_cold_index*/ + if (out_cold_index != 1) { + do_gettimeofday(&out_cold_tv); + pr_debug("COLD: apr_send_pkt at %ld sec %ld microsec\n", + out_cold_tv.tv_sec, + out_cold_tv.tv_usec); + out_cold_index = 1; + } + pr_debug("out_enable_flag %ld",\ + out_enable_flag); + } +#endif + for (i = 0; i < port->max_buf_cnt; i++) + pr_debug("%d ", port->buf[i].used); + + } + break; + } + case ASM_STREAM_CMDRSP_GET_PP_PARAMS: + rtac_make_asm_callback(ac->session, payload, + data->payload_size); + break; + case ASM_DATA_EVENT_READ_DONE:{ + + struct audio_port_data *port = &ac->port[OUT]; +#ifdef CONFIG_DEBUG_FS + if (in_enable_flag) { + /* when in_cont_index == 7, DSP would be + * writing into the 8th 512 byte buffer and this + * timestamp is tapped here.Once done it then writes + * to 9th 512 byte buffer.These two buffers(8th, 
9th) + * reach the test application in 5th iteration and that + * timestamp is tapped at user level. The difference + * of these two timestamps gives us the time between + * the time at which dsp started filling the sample + * required and when it reached the test application. + * Hence continuous input latency + */ + if (in_cont_index == 7) { + do_gettimeofday(&in_cont_tv); + pr_err("In_CONT:previous read buffer done at %ld sec %ld microsec\n", + in_cont_tv.tv_sec, in_cont_tv.tv_usec); + } + } +#endif + pr_debug("%s:R-D: status=%d buff_add=%x act_size=%d offset=%d\n", + __func__, payload[READDONE_IDX_STATUS], + payload[READDONE_IDX_BUFFER], + payload[READDONE_IDX_SIZE], + payload[READDONE_IDX_OFFSET]); + pr_debug("%s:R-D:msw_ts=%d lsw_ts=%d flags=%d id=%d num=%d\n", + __func__, payload[READDONE_IDX_MSW_TS], + payload[READDONE_IDX_LSW_TS], + payload[READDONE_IDX_FLAGS], + payload[READDONE_IDX_ID], + payload[READDONE_IDX_NUMFRAMES]); +#ifdef CONFIG_DEBUG_FS + if (in_enable_flag) + in_cont_index++; +#endif + if (ac->io_mode & SYNC_IO_MODE) { + if (port->buf == NULL) { + pr_err("%s: Unexpected Write Done\n", __func__); + return -EINVAL; + } + spin_lock_irqsave(&port->dsp_lock, dsp_flags); + token = data->token; + port->buf[token].used = 0; + if (port->buf[token].phys != + payload[READDONE_IDX_BUFFER]) { + pr_err("Buf expected[%p]rxed[%p]\n",\ + (void *)port->buf[token].phys,\ + (void *)payload[READDONE_IDX_BUFFER]); + spin_unlock_irqrestore(&port->dsp_lock, + dsp_flags); + break; + } + port->buf[token].actual_size = + payload[READDONE_IDX_SIZE]; + spin_unlock_irqrestore(&port->dsp_lock, dsp_flags); + } + break; + } + case ASM_DATA_EVENT_EOS: + case ASM_DATA_CMDRSP_EOS: + pr_debug("%s:EOS ACK received: rxed opcode[0x%x]\n", + __func__, data->opcode); + break; + case ASM_STREAM_CMDRSP_GET_ENCDEC_PARAM: + break; + case ASM_SESSION_EVENT_TX_OVERFLOW: + pr_err("ASM_SESSION_EVENT_TX_OVERFLOW\n"); + break; + case ASM_SESSION_CMDRSP_GET_SESSION_TIME: + pr_debug("%s: 
ASM_SESSION_CMDRSP_GET_SESSION_TIME, payload[0] = %d, payload[1] = %d, payload[2] = %d\n", + __func__, + payload[0], payload[1], payload[2]); + ac->time_stamp = (uint64_t)(((uint64_t)payload[1] << 32) | + payload[2]); + if (atomic_read(&ac->time_flag)) { + atomic_set(&ac->time_flag, 0); + wake_up(&ac->time_wait); + } + break; + case ASM_DATA_EVENT_SR_CM_CHANGE_NOTIFY: + case ASM_DATA_EVENT_ENC_SR_CM_NOTIFY: + pr_debug("%s: ASM_DATA_EVENT_SR_CM_CHANGE_NOTIFY, payload[0] = %d, payload[1] = %d, payload[2] = %d, payload[3] = %d\n", + __func__, + payload[0], payload[1], payload[2], + payload[3]); + break; + } + if (ac->cb) + ac->cb(data->opcode, data->token, + data->payload, ac->priv); + + return 0; +} + +void *q6asm_is_cpu_buf_avail(int dir, struct audio_client *ac, uint32_t *size, + uint32_t *index) +{ + void *data; + unsigned char idx; + struct audio_port_data *port; + + if (!ac || ((dir != IN) && (dir != OUT))) + return NULL; + + if (ac->io_mode & SYNC_IO_MODE) { + port = &ac->port[dir]; + + mutex_lock(&port->lock); + idx = port->cpu_buf; + if (port->buf == NULL) { + pr_debug("%s:Buffer pointer null\n", __func__); + mutex_unlock(&port->lock); + return NULL; + } + /* dir 0: used = 0 means buf in use + dir 1: used = 1 means buf in use */ + if (port->buf[idx].used == dir) { + /* To make it more robust, we could loop and get the + next avail buf, its risky though */ + pr_debug("%s:Next buf idx[0x%x] not available,dir[%d]\n", + __func__, idx, dir); + mutex_unlock(&port->lock); + return NULL; + } + *size = port->buf[idx].actual_size; + *index = port->cpu_buf; + data = port->buf[idx].data; + pr_debug("%s:session[%d]index[%d] data[%p]size[%d]\n", + __func__, + ac->session, + port->cpu_buf, + data, *size); + /* By default increase the cpu_buf cnt + user accesses this function,increase cpu + buf(to avoid another api)*/ + port->buf[idx].used = dir; + port->cpu_buf = ((port->cpu_buf + 1) & (port->max_buf_cnt - 1)); + mutex_unlock(&port->lock); + return data; + } + return NULL; 
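The cpu_buf advance at the end of q6asm_is_cpu_buf_avail() above relies on the port's buffer count being a power of two, so the next index wraps with a bit mask instead of a modulo. A standalone sketch of that wrap:

```c
#include <stdint.h>

/* Advance a ring-buffer index; valid only when max_buf_cnt is a power of two,
 * which is what the (max_buf_cnt - 1) mask assumes. */
static uint32_t next_buf(uint32_t cur, uint32_t max_buf_cnt)
{
	return (cur + 1) & (max_buf_cnt - 1);
}
```

For max_buf_cnt == 8 this walks 0,1,…,7 and wraps back to 0, matching the driver's `port->cpu_buf = ((port->cpu_buf + 1) & (port->max_buf_cnt - 1));`.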
+} + +void *q6asm_is_cpu_buf_avail_nolock(int dir, struct audio_client *ac, + uint32_t *size, uint32_t *index) +{ + void *data; + unsigned char idx; + struct audio_port_data *port; + + if (!ac || ((dir != IN) && (dir != OUT))) + return NULL; + + port = &ac->port[dir]; + + idx = port->cpu_buf; + if (port->buf == NULL) { + pr_debug("%s:Buffer pointer null\n", __func__); + return NULL; + } + /* + * dir 0: used = 0 means buf in use + * dir 1: used = 1 means buf in use + */ + if (port->buf[idx].used == dir) { + /* + * To make it more robust, we could loop and get the + * next avail buf, its risky though + */ + pr_debug("%s:Next buf idx[0x%x] not available, dir[%d]\n", + __func__, idx, dir); + return NULL; + } + *size = port->buf[idx].actual_size; + *index = port->cpu_buf; + data = port->buf[idx].data; + pr_debug("%s:session[%d]index[%d] data[%p]size[%d]\n", + __func__, ac->session, port->cpu_buf, + data, *size); + /* + * By default increase the cpu_buf cnt + * user accesses this function,increase cpu + * buf(to avoid another api) + */ + port->buf[idx].used = dir; + port->cpu_buf = ((port->cpu_buf + 1) & (port->max_buf_cnt - 1)); + return data; +} + +int q6asm_is_dsp_buf_avail(int dir, struct audio_client *ac) +{ + int ret = -1; + struct audio_port_data *port; + uint32_t idx; + + if (!ac || (dir != OUT)) + return ret; + + if (ac->io_mode & SYNC_IO_MODE) { + port = &ac->port[dir]; + + mutex_lock(&port->lock); + idx = port->dsp_buf; + + if (port->buf[idx].used == (dir ^ 1)) { + /* To make it more robust, we could loop and get the + next avail buf, its risky though */ + pr_err("Next buf idx[0x%x] not available, dir[%d]\n", + idx, dir); + mutex_unlock(&port->lock); + return ret; + } + pr_debug("%s: session[%d]dsp_buf=%d cpu_buf=%d\n", __func__, + ac->session, port->dsp_buf, port->cpu_buf); + ret = ((port->dsp_buf != port->cpu_buf) ? 
0 : -1); + mutex_unlock(&port->lock); + } + return ret; +} + +static void q6asm_add_hdr(struct audio_client *ac, struct apr_hdr *hdr, + uint32_t pkt_size, uint32_t cmd_flg) +{ + pr_debug("%s:session=%d pkt size=%d cmd_flg=%d\n", __func__, pkt_size, + cmd_flg, ac->session); + mutex_lock(&ac->cmd_lock); + hdr->hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, \ + APR_HDR_LEN(sizeof(struct apr_hdr)),\ + APR_PKT_VER); + hdr->src_svc = ((struct apr_svc *)ac->apr)->id; + hdr->src_domain = APR_DOMAIN_APPS; + hdr->dest_svc = APR_SVC_ASM; + hdr->dest_domain = APR_DOMAIN_ADSP; + hdr->src_port = ((ac->session << 8) & 0xFF00) | 0x01; + hdr->dest_port = ((ac->session << 8) & 0xFF00) | 0x01; + if (cmd_flg) { + hdr->token = ac->session; + atomic_set(&ac->cmd_state, 1); + } + hdr->pkt_size = pkt_size; + mutex_unlock(&ac->cmd_lock); + return; +} + +static void q6asm_add_mmaphdr(struct apr_hdr *hdr, uint32_t pkt_size, + uint32_t cmd_flg) +{ + pr_debug("%s:pkt size=%d cmd_flg=%d\n", __func__, pkt_size, cmd_flg); + hdr->hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, \ + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + hdr->src_port = 0; + hdr->dest_port = 0; + if (cmd_flg) { + hdr->token = 0; + atomic_set(&this_mmap.cmd_state, 1); + } + hdr->pkt_size = pkt_size; + return; +} + +int q6asm_open_read(struct audio_client *ac, + uint32_t format) +{ + int rc = 0x00; + struct asm_stream_cmd_open_read open; +#ifdef CONFIG_DEBUG_FS + in_cont_index = 0; +#endif + if ((ac == NULL) || (ac->apr == NULL)) { + pr_err("%s: APR handle NULL\n", __func__); + return -EINVAL; + } + pr_debug("%s:session[%d]", __func__, ac->session); + + q6asm_add_hdr(ac, &open.hdr, sizeof(open), TRUE); + open.hdr.opcode = ASM_STREAM_CMD_OPEN_READ; + /* Stream prio : High, provide meta info with encoded frames */ + open.src_endpoint = ASM_END_POINT_DEVICE_MATRIX; + + open.pre_proc_top = get_asm_topology(); + if (open.pre_proc_top == 0) + open.pre_proc_top = DEFAULT_POPP_TOPOLOGY; + + switch (format) { + case FORMAT_LINEAR_PCM: + 
open.uMode = STREAM_PRIORITY_HIGH; + open.format = LINEAR_PCM; + break; + case FORMAT_MULTI_CHANNEL_LINEAR_PCM: + open.uMode = STREAM_PRIORITY_HIGH; + open.format = MULTI_CHANNEL_PCM; + break; + case FORMAT_MPEG4_AAC: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = MPEG4_AAC; + break; + case FORMAT_V13K: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = V13K_FS; + break; + case FORMAT_EVRC: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = EVRC_FS; + break; + case FORMAT_AMRNB: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = AMRNB_FS; + break; + case FORMAT_AMRWB: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = AMRWB_FS; + break; + default: + pr_err("Invalid format[%d]\n", format); + goto fail_cmd; + } + rc = apr_send_pkt(ac->apr, (uint32_t *) &open); + if (rc < 0) { + pr_err("open failed op[0x%x]rc[%d]\n", \ + open.hdr.opcode, rc); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout. 
waited for OPEN_WRITE rc[%d]\n", __func__, + rc); + goto fail_cmd; + } + + ac->io_mode |= TUN_READ_IO_MODE; + + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_open_read_v2_1(struct audio_client *ac, + uint32_t format) +{ + int rc = 0x00; + struct asm_stream_cmd_open_read_v2_1 open; +#ifdef CONFIG_DEBUG_FS + in_cont_index = 0; +#endif + if ((ac == NULL) || (ac->apr == NULL)) { + pr_err("%s: APR handle NULL\n", __func__); + return -EINVAL; + } + pr_debug("%s:session[%d]", __func__, ac->session); + + q6asm_add_hdr(ac, &open.hdr, sizeof(open), TRUE); + open.hdr.opcode = ASM_STREAM_CMD_OPEN_READ_V2_1; + open.src_endpoint = ASM_END_POINT_DEVICE_MATRIX; + open.pre_proc_top = get_asm_topology(); + if (open.pre_proc_top == 0) + open.pre_proc_top = DEFAULT_POPP_TOPOLOGY; + + switch (format) { + case FORMAT_LINEAR_PCM: + open.uMode = STREAM_PRIORITY_HIGH; + open.format = LINEAR_PCM; + break; + case FORMAT_MULTI_CHANNEL_LINEAR_PCM: + open.uMode = STREAM_PRIORITY_HIGH; + open.format = MULTI_CHANNEL_PCM; + break; + case FORMAT_MPEG4_AAC: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = MPEG4_AAC; + break; + case FORMAT_V13K: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = V13K_FS; + break; + case FORMAT_EVRC: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = EVRC_FS; + break; + case FORMAT_AMRNB: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = AMRNB_FS; + break; + case FORMAT_AMRWB: + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_HIGH; + open.format = AMRWB_FS; + break; + default: + pr_err("Invalid format[%d]\n", format); + goto fail_cmd; + } + open.uMode = ASM_OPEN_READ_PERF_MODE_BIT; + open.bits_per_sample = PCM_BITS_PER_SAMPLE; + open.reserved = 0; + rc = apr_send_pkt(ac->apr, (uint32_t *) &open); + if (rc < 0) { + pr_err("open failed op[0x%x]rc[%d]\n", \ + open.hdr.opcode, rc); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + 
(atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout. waited for OPEN_WRITE rc[%d]\n", __func__, + rc); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + + +int q6asm_open_read_compressed(struct audio_client *ac, + uint32_t frames_per_buffer, uint32_t meta_data_mode) +{ + int rc = 0x00; + struct asm_stream_cmd_open_read_compressed open; +#ifdef CONFIG_DEBUG_FS + in_cont_index = 0; +#endif + if ((ac == NULL) || (ac->apr == NULL)) { + pr_err("%s: APR handle NULL\n", __func__); + return -EINVAL; + } + pr_debug("%s:session[%d]", __func__, ac->session); + + q6asm_add_hdr(ac, &open.hdr, sizeof(open), TRUE); + open.hdr.opcode = ASM_STREAM_CMD_OPEN_READ_COMPRESSED; + /* hardcoded as following*/ + open.frame_per_buf = frames_per_buffer; + open.uMode = meta_data_mode; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &open); + if (rc < 0) { + pr_err("open failed op[0x%x]rc[%d]\n", open.hdr.opcode, rc); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout. 
waited for OPEN_READ_COMPRESSED rc[%d]\n", + __func__, rc); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_open_write_compressed(struct audio_client *ac, uint32_t format) +{ + int rc = 0x00; + struct asm_stream_cmd_open_write_compressed open; + + if ((ac == NULL) || (ac->apr == NULL)) { + pr_err("%s: APR handle NULL\n", __func__); + return -EINVAL; + } + pr_debug("%s: session[%d] wr_format[0x%x]", __func__, ac->session, + format); + + q6asm_add_hdr(ac, &open.hdr, sizeof(open), TRUE); + + open.hdr.opcode = ASM_STREAM_CMD_OPEN_WRITE_COMPRESSED; + + switch (format) { + case FORMAT_AC3: + open.format = AC3_DECODER; + break; + case FORMAT_EAC3: + open.format = EAC3_DECODER; + break; + case FORMAT_MP3: + open.format = MP3; + break; + case FORMAT_DTS: + open.format = DTS; + break; + case FORMAT_DTS_LBR: + open.format = DTS_LBR; + break; + case FORMAT_AAC: + open.format = MPEG4_AAC; + break; + case FORMAT_ATRAC: + open.format = ATRAC; + break; + case FORMAT_WMA_V10PRO: + open.format = WMA_V10PRO; + break; + case FORMAT_MAT: + open.format = MAT; + break; + default: + pr_err("%s: Invalid format[%d]\n", __func__, format); + goto fail_cmd; + } + /*Below flag indicates the DSP that Compressed audio input + stream is not IEC 61937 or IEC 60958 packetizied*/ + open.flags = 0x00000000; + rc = apr_send_pkt(ac->apr, (uint32_t *) &open); + if (rc < 0) { + pr_err("%s: open failed op[0x%x]rc[%d]\n", \ + __func__, open.hdr.opcode, rc); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout. 
waited for OPEN_WRITE rc[%d]\n", __func__, + rc); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_open_write(struct audio_client *ac, uint32_t format) +{ + int rc = 0x00; + struct asm_stream_cmd_open_write open; + + if ((ac == NULL) || (ac->apr == NULL)) { + pr_err("%s: APR handle NULL\n", __func__); + return -EINVAL; + } + pr_debug("%s: session[%d] wr_format[0x%x]", __func__, ac->session, + format); + + q6asm_add_hdr(ac, &open.hdr, sizeof(open), TRUE); + + if (ac->perf_mode) { + pr_debug("%s In Performance/lowlatency mode", __func__); + open.hdr.opcode = ASM_STREAM_CMD_OPEN_WRITE_V2_1; + open.uMode = ASM_OPEN_WRITE_PERF_MODE_BIT; + /* source endpoint : matrix */ + open.sink_endpoint = ASM_END_POINT_DEVICE_MATRIX; + open.stream_handle = PCM_BITS_PER_SAMPLE; + } else { + open.hdr.opcode = ASM_STREAM_CMD_OPEN_WRITE; + open.uMode = STREAM_PRIORITY_HIGH; + /* source endpoint : matrix */ + open.sink_endpoint = ASM_END_POINT_DEVICE_MATRIX; + open.stream_handle = 0x00; + } + open.post_proc_top = get_asm_topology(); + if (open.post_proc_top == 0) + open.post_proc_top = DEFAULT_POPP_TOPOLOGY; + + switch (format) { + case FORMAT_LINEAR_PCM: + open.format = LINEAR_PCM; + break; + case FORMAT_MULTI_CHANNEL_LINEAR_PCM: + open.format = MULTI_CHANNEL_PCM; + break; + case FORMAT_MPEG4_AAC: + open.format = MPEG4_AAC; + break; + case FORMAT_MPEG4_MULTI_AAC: + open.format = MPEG4_MULTI_AAC; + break; + case FORMAT_WMA_V9: + open.format = WMA_V9; + break; + case FORMAT_WMA_V10PRO: + open.format = WMA_V10PRO; + break; + case FORMAT_MP3: + open.format = MP3; + break; + case FORMAT_DTS: + open.format = DTS; + break; + case FORMAT_DTS_LBR: + open.format = DTS_LBR; + break; + case FORMAT_AMRWB: + open.format = AMRWB_FS; + pr_debug("q6asm_open_write FORMAT_AMRWB"); + break; + case FORMAT_AMR_WB_PLUS: + open.format = AMR_WB_PLUS; + pr_debug("q6asm_open_write FORMAT_AMR_WB_PLUS"); + break; + default: + pr_err("%s: Invalid format[%d]\n", __func__, format); + goto 
fail_cmd; + } + rc = apr_send_pkt(ac->apr, (uint32_t *) &open); + if (rc < 0) { + pr_err("%s: open failed op[0x%x]rc[%d]\n", \ + __func__, open.hdr.opcode, rc); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout. waited for OPEN_WRITE rc[%d]\n", __func__, + rc); + goto fail_cmd; + } + if (atomic_read(&ac->cmd_response)) { + pr_err("%s: format = %x not supported\n", __func__, format); + goto fail_cmd; + } + + ac->io_mode |= TUN_WRITE_IO_MODE; + + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_open_read_write(struct audio_client *ac, + uint32_t rd_format, + uint32_t wr_format) +{ + int rc = 0x00; + struct asm_stream_cmd_open_read_write open; + + if ((ac == NULL) || (ac->apr == NULL)) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s: session[%d]", __func__, ac->session); + pr_debug("wr_format[0x%x]rd_format[0x%x]", + wr_format, rd_format); + + q6asm_add_hdr(ac, &open.hdr, sizeof(open), TRUE); + open.hdr.opcode = ASM_STREAM_CMD_OPEN_READWRITE; + + open.uMode = BUFFER_META_ENABLE | STREAM_PRIORITY_NORMAL; + /* source endpoint : matrix */ + open.post_proc_top = get_asm_topology(); + if (open.post_proc_top == 0) + open.post_proc_top = DEFAULT_POPP_TOPOLOGY; + + switch (wr_format) { + case FORMAT_LINEAR_PCM: + open.write_format = LINEAR_PCM; + break; + case FORMAT_MPEG4_AAC: + open.write_format = MPEG4_AAC; + break; + case FORMAT_MPEG4_MULTI_AAC: + open.write_format = MPEG4_MULTI_AAC; + break; + case FORMAT_WMA_V9: + open.write_format = WMA_V9; + break; + case FORMAT_WMA_V10PRO: + open.write_format = WMA_V10PRO; + break; + case FORMAT_AMRNB: + open.write_format = AMRNB_FS; + break; + case FORMAT_AMRWB: + open.write_format = AMRWB_FS; + break; + case FORMAT_AMR_WB_PLUS: + open.write_format = AMR_WB_PLUS; + break; + case FORMAT_V13K: + open.write_format = V13K_FS; + break; + case FORMAT_EVRC: + open.write_format = EVRC_FS; + break; + case FORMAT_EVRCB: + 
open.write_format = EVRCB_FS; + break; + case FORMAT_EVRCWB: + open.write_format = EVRCWB_FS; + break; + case FORMAT_MP3: + open.write_format = MP3; + break; + default: + pr_err("Invalid format[%d]\n", wr_format); + goto fail_cmd; + } + + switch (rd_format) { + case FORMAT_LINEAR_PCM: + open.read_format = LINEAR_PCM; + break; + case FORMAT_MPEG4_AAC: + open.read_format = MPEG4_AAC; + break; + case FORMAT_V13K: + open.read_format = V13K_FS; + break; + case FORMAT_EVRC: + open.read_format = EVRC_FS; + break; + case FORMAT_AMRNB: + open.read_format = AMRNB_FS; + break; + case FORMAT_AMRWB: + open.read_format = AMRWB_FS; + break; + default: + pr_err("Invalid format[%d]\n", rd_format); + goto fail_cmd; + } + pr_debug("%s:rdformat[0x%x]wrformat[0x%x]\n", __func__, + open.read_format, open.write_format); + + rc = apr_send_pkt(ac->apr, (uint32_t *) &open); + if (rc < 0) { + pr_err("open failed op[0x%x]rc[%d]\n", \ + open.hdr.opcode, rc); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. 
waited for OPEN_WRITE rc[%d]\n", rc); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_open_loopack(struct audio_client *ac) +{ + int rc = 0x00; + struct asm_stream_cmd_open_loopback open; + + if ((ac == NULL) || (ac->apr == NULL)) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s: session[%d]", __func__, ac->session); + + q6asm_add_hdr(ac, &open.hdr, sizeof(open), TRUE); + open.hdr.opcode = ASM_STREAM_CMD_OPEN_LOOPBACK; + + open.mode_flags = 0; + open.src_endpointype = 0; + open.sink_endpointype = 0; + /* source endpoint : matrix */ + open.postprocopo_id = get_asm_topology(); + if (open.postprocopo_id == 0) + open.postprocopo_id = DEFAULT_POPP_TOPOLOGY; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &open); + if (rc < 0) { + pr_err("open failed op[0x%x]rc[%d]\n", \ + open.hdr.opcode, rc); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. waited for OPEN_WRITE rc[%d]\n", rc); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_run(struct audio_client *ac, uint32_t flags, + uint32_t msw_ts, uint32_t lsw_ts) +{ + struct asm_stream_cmd_run run; + int rc; + if (!ac || ac->apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s session[%d]", __func__, ac->session); + q6asm_add_hdr(ac, &run.hdr, sizeof(run), TRUE); + + run.hdr.opcode = ASM_SESSION_CMD_RUN; + run.flags = flags; + run.msw_ts = msw_ts; + run.lsw_ts = lsw_ts; +#ifdef CONFIG_DEBUG_FS + if (out_enable_flag) { + do_gettimeofday(&out_cold_tv); + pr_debug("COLD: apr_send_pkt at %ld sec %ld microsec\n",\ + out_cold_tv.tv_sec, out_cold_tv.tv_usec); + } +#endif + rc = apr_send_pkt(ac->apr, (uint32_t *) &run); + if (rc < 0) { + pr_err("Commmand run failed[%d]", rc); + goto fail_cmd; + } + + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. 
waited for run success rc[%d]", rc); + goto fail_cmd; + } + + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_run_nowait(struct audio_client *ac, uint32_t flags, + uint32_t msw_ts, uint32_t lsw_ts) +{ + struct asm_stream_cmd_run run; + int rc; + if (!ac || ac->apr == NULL) { + pr_err("%s:APR handle NULL\n", __func__); + return -EINVAL; + } + pr_debug("session[%d]", ac->session); + q6asm_add_hdr_async(ac, &run.hdr, sizeof(run), TRUE); + + run.hdr.opcode = ASM_SESSION_CMD_RUN; + run.flags = flags; + run.msw_ts = msw_ts; + run.lsw_ts = lsw_ts; + rc = apr_send_pkt(ac->apr, (uint32_t *) &run); + if (rc < 0) { + pr_err("%s:Commmand run failed[%d]", __func__, rc); + return -EINVAL; + } + atomic_inc(&ac->nowait_cmd_cnt); + return 0; +} + + +int q6asm_enc_cfg_blk_aac(struct audio_client *ac, + uint32_t frames_per_buf, + uint32_t sample_rate, uint32_t channels, + uint32_t bit_rate, uint32_t mode, uint32_t format) +{ + struct asm_stream_cmd_encdec_cfg_blk enc_cfg; + int rc = 0; + + pr_debug("%s:session[%d]frames[%d]SR[%d]ch[%d]bitrate[%d]mode[%d] format[%d]", + __func__, ac->session, frames_per_buf, + sample_rate, channels, bit_rate, mode, format); + + q6asm_add_hdr(ac, &enc_cfg.hdr, sizeof(enc_cfg), TRUE); + + enc_cfg.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + enc_cfg.param_id = ASM_ENCDEC_CFG_BLK_ID; + enc_cfg.param_size = sizeof(struct asm_encode_cfg_blk); + enc_cfg.enc_blk.frames_per_buf = frames_per_buf; + enc_cfg.enc_blk.format_id = MPEG4_AAC; + enc_cfg.enc_blk.cfg_size = sizeof(struct asm_aac_read_cfg); + enc_cfg.enc_blk.cfg.aac.bitrate = bit_rate; + enc_cfg.enc_blk.cfg.aac.enc_mode = mode; + enc_cfg.enc_blk.cfg.aac.format = format; + enc_cfg.enc_blk.cfg.aac.ch_cfg = channels; + enc_cfg.enc_blk.cfg.aac.sample_rate = sample_rate; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &enc_cfg); + if (rc < 0) { + pr_err("Comamnd %d failed\n", ASM_STREAM_CMD_SET_ENCDEC_PARAM); + rc = -EINVAL; + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + 
(atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. waited for FORMAT_UPDATE\n"); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_enc_cfg_blk_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels) +{ + struct asm_stream_cmd_encdec_cfg_blk enc_cfg; + + int rc = 0; + + pr_debug("%s: Session %d, rate = %d, channels = %d\n", __func__, + ac->session, rate, channels); + + q6asm_add_hdr(ac, &enc_cfg.hdr, sizeof(enc_cfg), TRUE); + + enc_cfg.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + enc_cfg.param_id = ASM_ENCDEC_CFG_BLK_ID; + enc_cfg.param_size = sizeof(struct asm_encode_cfg_blk); + enc_cfg.enc_blk.frames_per_buf = 1; + enc_cfg.enc_blk.format_id = LINEAR_PCM; + enc_cfg.enc_blk.cfg_size = sizeof(struct asm_pcm_cfg); + enc_cfg.enc_blk.cfg.pcm.ch_cfg = channels; + enc_cfg.enc_blk.cfg.pcm.bits_per_sample = 16; + enc_cfg.enc_blk.cfg.pcm.sample_rate = rate; + enc_cfg.enc_blk.cfg.pcm.is_signed = 1; + enc_cfg.enc_blk.cfg.pcm.interleaved = 1; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &enc_cfg); + if (rc < 0) { + pr_err("Comamnd open failed\n"); + rc = -EINVAL; + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout opcode[0x%x] ", enc_cfg.hdr.opcode); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_enc_cfg_blk_pcm_native(struct audio_client *ac, + uint32_t rate, uint32_t channels) +{ + struct asm_stream_cmd_encdec_cfg_blk enc_cfg; + + int rc = 0; + + pr_debug("%s: Session %d, rate = %d, channels = %d, setting the rate and channels to 0 for native\n", + __func__, ac->session, rate, channels); + + q6asm_add_hdr(ac, &enc_cfg.hdr, sizeof(enc_cfg), TRUE); + + enc_cfg.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + enc_cfg.param_id = ASM_ENCDEC_CFG_BLK_ID; + enc_cfg.param_size = sizeof(struct asm_encode_cfg_blk); + enc_cfg.enc_blk.frames_per_buf = 1; + enc_cfg.enc_blk.format_id = LINEAR_PCM; + 
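q6asm_enc_cfg_blk_multi_ch_pcm() further down fills a fixed 8-slot speaker map per supported channel count, zeroing unused slots. The long if/else chain can be sketched as a lookup table; the PCM_CHANNEL_* values below are illustrative stand-ins for the driver's constants, chosen only so the table compiles on its own:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for the driver's PCM_CHANNEL_* speaker positions. */
enum { FL = 1, FR = 2, FC = 3, LB = 4, RB = 5, LFE = 6, FLC = 7, FRC = 8 };

/* Fill map[8] with the default speaker layout for a channel count.
 * Returns 0 on success, -1 for an unsupported count (3, 5, 7, or > 8). */
static int default_chan_map(uint8_t map[8], unsigned int channels)
{
	static const uint8_t tbl[][8] = {
		[1] = { FL },
		[2] = { FL, FR },
		[4] = { FL, FR, RB, LB },
		[6] = { FL, FR, LFE, FC, LB, RB },
		[8] = { FL, FR, LFE, FC, LB, RB, FLC, FRC },
	};

	if (channels > 8 || tbl[channels][0] == 0)
		return -1;	/* rows left out above stay all-zero */
	memcpy(map, tbl[channels], 8);
	return 0;
}
```

A table keeps the layouts in one place and makes the "unused slots are zero" invariant explicit, which the repeated channel_mapping[i] = 0 assignments in the driver encode by hand.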
enc_cfg.enc_blk.cfg_size = sizeof(struct asm_pcm_cfg); + enc_cfg.enc_blk.cfg.pcm.ch_cfg = 0;/*channels;*/ + enc_cfg.enc_blk.cfg.pcm.bits_per_sample = 16; + enc_cfg.enc_blk.cfg.pcm.sample_rate = 0;/*rate;*/ + enc_cfg.enc_blk.cfg.pcm.is_signed = 1; + enc_cfg.enc_blk.cfg.pcm.interleaved = 1; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &enc_cfg); + if (rc < 0) { + pr_err("Comamnd open failed\n"); + rc = -EINVAL; + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout opcode[0x%x] ", enc_cfg.hdr.opcode); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_enc_cfg_blk_multi_ch_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels) +{ + struct asm_stream_cmd_encdec_cfg_blk enc_cfg; + + int rc = 0; + + pr_debug("%s: Session %d, rate = %d, channels = %d\n", __func__, + ac->session, rate, channels); + + q6asm_add_hdr(ac, &enc_cfg.hdr, sizeof(enc_cfg), TRUE); + + enc_cfg.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + enc_cfg.param_id = ASM_ENCDEC_CFG_BLK_ID; + enc_cfg.param_size = sizeof(struct asm_encode_cfg_blk); + enc_cfg.enc_blk.frames_per_buf = 1; + enc_cfg.enc_blk.format_id = MULTI_CHANNEL_PCM; + enc_cfg.enc_blk.cfg_size = + sizeof(struct asm_multi_channel_pcm_fmt_blk); + enc_cfg.enc_blk.cfg.mpcm.num_channels = channels; + enc_cfg.enc_blk.cfg.mpcm.bits_per_sample = 16; + enc_cfg.enc_blk.cfg.mpcm.sample_rate = rate; + enc_cfg.enc_blk.cfg.mpcm.is_signed = 1; + enc_cfg.enc_blk.cfg.mpcm.is_interleaved = 1; + if (channels == 1) { + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[0] = PCM_CHANNEL_FL; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[1] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[2] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[3] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[4] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[5] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[6] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[7] = 0; + } else if 
(channels == 2) { + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[0] = PCM_CHANNEL_FL; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[1] = PCM_CHANNEL_FR; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[2] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[3] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[4] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[5] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[6] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[7] = 0; + } else if (channels == 4) { + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[0] = PCM_CHANNEL_FL; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[1] = PCM_CHANNEL_FR; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[2] = PCM_CHANNEL_RB; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[3] = PCM_CHANNEL_LB; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[4] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[5] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[6] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[7] = 0; + } else if (channels == 6) { + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[0] = PCM_CHANNEL_FL; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[1] = PCM_CHANNEL_FR; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[2] = PCM_CHANNEL_LFE; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[3] = PCM_CHANNEL_FC; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[4] = PCM_CHANNEL_LB; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[5] = PCM_CHANNEL_RB; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[6] = 0; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[7] = 0; + } else if (channels == 8) { + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[0] = PCM_CHANNEL_FL; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[1] = PCM_CHANNEL_FR; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[2] = PCM_CHANNEL_LFE; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[3] = PCM_CHANNEL_FC; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[4] = PCM_CHANNEL_LB; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[5] = PCM_CHANNEL_RB; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[6] = PCM_CHANNEL_FLC; + enc_cfg.enc_blk.cfg.mpcm.channel_mapping[7] = 
PCM_CHANNEL_FRC; + } + + rc = apr_send_pkt(ac->apr, (uint32_t *) &enc_cfg); + if (rc < 0) { + pr_err("Comamnd open failed\n"); + rc = -EINVAL; + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout opcode[0x%x] ", enc_cfg.hdr.opcode); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_enable_sbrps(struct audio_client *ac, + uint32_t sbr_ps_enable) +{ + struct asm_stream_cmd_encdec_sbr sbrps; + + int rc = 0; + + pr_debug("%s: Session %d\n", __func__, ac->session); + + q6asm_add_hdr(ac, &sbrps.hdr, sizeof(sbrps), TRUE); + + sbrps.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + sbrps.param_id = ASM_ENABLE_SBR_PS; + sbrps.param_size = sizeof(struct asm_sbr_ps); + sbrps.sbr_ps.enable = sbr_ps_enable; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &sbrps); + if (rc < 0) { + pr_err("Command opcode[0x%x]paramid[0x%x] failed\n", + ASM_STREAM_CMD_SET_ENCDEC_PARAM, + ASM_ENABLE_SBR_PS); + rc = -EINVAL; + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout opcode[0x%x] ", sbrps.hdr.opcode); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_cfg_dual_mono_aac(struct audio_client *ac, + uint16_t sce_left, uint16_t sce_right) +{ + struct asm_stream_cmd_encdec_dualmono dual_mono; + + int rc = 0; + + pr_debug("%s: Session %d, sce_left = %d, sce_right = %d\n", + __func__, ac->session, sce_left, sce_right); + + q6asm_add_hdr(ac, &dual_mono.hdr, sizeof(dual_mono), TRUE); + + dual_mono.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + dual_mono.param_id = ASM_CONFIGURE_DUAL_MONO; + dual_mono.param_size = sizeof(struct asm_dual_mono); + dual_mono.channel_map.sce_left = sce_left; + dual_mono.channel_map.sce_right = sce_right; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &dual_mono); + if (rc < 0) { + pr_err("%s:Command opcode[0x%x]paramid[0x%x] failed\n", + __func__, 
ASM_STREAM_CMD_SET_ENCDEC_PARAM, + ASM_CONFIGURE_DUAL_MONO); + rc = -EINVAL; + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout opcode[0x%x]\n", __func__, + dual_mono.hdr.opcode); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_cfg_aac_sel_mix_coef(struct audio_client *ac, uint32_t mix_coeff) +{ + struct asm_aac_stereo_mix_coeff_selection_param aac_mix_coeff; + int rc = 0; + q6asm_add_hdr(ac, &aac_mix_coeff.hdr, sizeof(aac_mix_coeff), TRUE); + aac_mix_coeff.hdr.opcode = + ASM_STREAM_CMD_SET_ENCDEC_PARAM; + aac_mix_coeff.param_id = + ASM_PARAM_ID_AAC_STEREO_MIX_COEFF_SELECTION_FLAG; + aac_mix_coeff.param_size = + sizeof(struct asm_aac_stereo_mix_coeff_selection_param); + aac_mix_coeff.aac_stereo_mix_coeff_flag = mix_coeff; + pr_debug("%s, mix_coeff = %u", __func__, mix_coeff); + rc = apr_send_pkt(ac->apr, (uint32_t *) &aac_mix_coeff); + if (rc < 0) { + pr_err("%s:Command opcode[0x%x]paramid[0x%x] failed\n", + __func__, ASM_STREAM_CMD_SET_ENCDEC_PARAM, + ASM_PARAM_ID_AAC_STEREO_MIX_COEFF_SELECTION_FLAG); + rc = -EINVAL; + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout opcode[0x%x]\n", __func__, + aac_mix_coeff.hdr.opcode); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_set_encdec_chan_map(struct audio_client *ac, + uint32_t num_channels) +{ + struct asm_stream_cmd_encdec_channelmap chan_map; + u8 *channel_mapping; + + int rc = 0; + + pr_debug("%s: Session %d, num_channels = %d\n", + __func__, ac->session, num_channels); + + q6asm_add_hdr(ac, &chan_map.hdr, sizeof(chan_map), TRUE); + + chan_map.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + chan_map.param_id = ASM_ENCDEC_DEC_CHAN_MAP; + chan_map.param_size = sizeof(struct asm_dec_chan_map); + chan_map.chan_map.num_channels = num_channels; + + channel_mapping = + 
chan_map.chan_map.channel_mapping; + + memset(channel_mapping, PCM_CHANNEL_NULL, MAX_CHAN_MAP_CHANNELS); + if (num_channels == 1) { + channel_mapping[0] = PCM_CHANNEL_FL; + } else if (num_channels == 2) { + channel_mapping[0] = PCM_CHANNEL_FL; + channel_mapping[1] = PCM_CHANNEL_FR; + } else if (num_channels == 4) { + channel_mapping[0] = PCM_CHANNEL_FL; + channel_mapping[1] = PCM_CHANNEL_FR; + channel_mapping[2] = PCM_CHANNEL_LB; + channel_mapping[3] = PCM_CHANNEL_RB; + } else if (num_channels == 6) { + channel_mapping[0] = PCM_CHANNEL_FC; + channel_mapping[1] = PCM_CHANNEL_FL; + channel_mapping[2] = PCM_CHANNEL_FR; + channel_mapping[3] = PCM_CHANNEL_LB; + channel_mapping[4] = PCM_CHANNEL_RB; + channel_mapping[5] = PCM_CHANNEL_LFE; + } else if (num_channels == 8) { + channel_mapping[0] = PCM_CHANNEL_FC; + channel_mapping[1] = PCM_CHANNEL_FL; + channel_mapping[2] = PCM_CHANNEL_FR; + channel_mapping[3] = PCM_CHANNEL_LB; + channel_mapping[4] = PCM_CHANNEL_RB; + channel_mapping[5] = PCM_CHANNEL_LFE; + channel_mapping[6] = PCM_CHANNEL_FLC; + channel_mapping[7] = PCM_CHANNEL_FRC; + } else { + pr_err("%s: ERROR: unsupported num_ch = %u\n", __func__, + num_channels); + rc = -EINVAL; + goto fail_cmd; + } + + rc = apr_send_pkt(ac->apr, (uint32_t *) &chan_map); + if (rc < 0) { + pr_err("%s:Command opcode[0x%x]paramid[0x%x] failed\n", + __func__, ASM_STREAM_CMD_SET_ENCDEC_PARAM, + ASM_ENCDEC_DEC_CHAN_MAP); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout opcode[0x%x]\n", __func__, + chan_map.hdr.opcode); + rc = -ETIMEDOUT; + goto fail_cmd; + } + return 0; +fail_cmd: + return rc; +} + +int q6asm_enc_cfg_blk_qcelp(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t min_rate, uint16_t max_rate, + uint16_t reduced_rate_level, uint16_t rate_modulation_cmd) +{ + struct asm_stream_cmd_encdec_cfg_blk enc_cfg; + int rc = 0; + + 
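The per-channel-count default speaker layouts that q6asm_set_encdec_chan_map builds with its if/else ladder can be factored into a table-driven helper. A hedged userspace sketch follows; the CH_* values and MAX_CH here are local stand-ins for illustration only, not the driver's PCM_CHANNEL_* macros, whose real values come from the ASM headers:

```c
#include <assert.h>
#include <string.h>

/* Stand-in channel labels; the driver's PCM_CHANNEL_* values may differ.
 * CH_NULL (0) marks an unused slot, matching the memset-to-NULL default. */
enum { CH_NULL = 0, CH_FL, CH_FR, CH_FC, CH_LB, CH_RB, CH_LFE, CH_FLC, CH_FRC };
#define MAX_CH 8

/* Fill 'map' with the default layout for num_channels.
 * Returns 0 on success, -1 for an unsupported channel count. */
static int default_chan_map(unsigned num_channels, unsigned char map[MAX_CH])
{
	/* Rows indexed by channel count; unsupported counts stay all-zero. */
	static const unsigned char layouts[MAX_CH + 1][MAX_CH] = {
		[1] = { CH_FL },
		[2] = { CH_FL, CH_FR },
		[4] = { CH_FL, CH_FR, CH_LB, CH_RB },
		[6] = { CH_FC, CH_FL, CH_FR, CH_LB, CH_RB, CH_LFE },
		[8] = { CH_FC, CH_FL, CH_FR, CH_LB, CH_RB, CH_LFE, CH_FLC, CH_FRC },
	};

	if (num_channels == 0 || num_channels > MAX_CH ||
	    layouts[num_channels][0] == CH_NULL)
		return -1;
	memset(map, CH_NULL, MAX_CH);
	memcpy(map, layouts[num_channels], num_channels);
	return 0;
}
```

The table makes the supported counts (1, 2, 4, 6, 8) explicit and leaves exactly one place to audit each layout, instead of eight assignments per branch.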
pr_debug("%s:session[%d]frames[%d]min_rate[0x%4x]max_rate[0x%4x] reduced_rate_level[0x%4x]rate_modulation_cmd[0x%4x]", + __func__, + ac->session, frames_per_buf, min_rate, max_rate, + reduced_rate_level, rate_modulation_cmd); + + q6asm_add_hdr(ac, &enc_cfg.hdr, sizeof(enc_cfg), TRUE); + + enc_cfg.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + + enc_cfg.param_id = ASM_ENCDEC_CFG_BLK_ID; + enc_cfg.param_size = sizeof(struct asm_encode_cfg_blk); + + enc_cfg.enc_blk.frames_per_buf = frames_per_buf; + enc_cfg.enc_blk.format_id = V13K_FS; + enc_cfg.enc_blk.cfg_size = sizeof(struct asm_qcelp13_read_cfg); + enc_cfg.enc_blk.cfg.qcelp13.min_rate = min_rate; + enc_cfg.enc_blk.cfg.qcelp13.max_rate = max_rate; + enc_cfg.enc_blk.cfg.qcelp13.reduced_rate_level = reduced_rate_level; + enc_cfg.enc_blk.cfg.qcelp13.rate_modulation_cmd = rate_modulation_cmd; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &enc_cfg); + if (rc < 0) { + pr_err("Comamnd %d failed\n", ASM_STREAM_CMD_SET_ENCDEC_PARAM); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. 
waited for FORMAT_UPDATE\n"); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_enc_cfg_blk_evrc(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t min_rate, uint16_t max_rate, + uint16_t rate_modulation_cmd) +{ + struct asm_stream_cmd_encdec_cfg_blk enc_cfg; + int rc = 0; + + pr_debug("%s:session[%d]frames[%d]min_rate[0x%4x]max_rate[0x%4x] rate_modulation_cmd[0x%4x]", + __func__, ac->session, + frames_per_buf, min_rate, max_rate, rate_modulation_cmd); + + q6asm_add_hdr(ac, &enc_cfg.hdr, sizeof(enc_cfg), TRUE); + + enc_cfg.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + + enc_cfg.param_id = ASM_ENCDEC_CFG_BLK_ID; + enc_cfg.param_size = sizeof(struct asm_encode_cfg_blk); + + enc_cfg.enc_blk.frames_per_buf = frames_per_buf; + enc_cfg.enc_blk.format_id = EVRC_FS; + enc_cfg.enc_blk.cfg_size = sizeof(struct asm_evrc_read_cfg); + enc_cfg.enc_blk.cfg.evrc.min_rate = min_rate; + enc_cfg.enc_blk.cfg.evrc.max_rate = max_rate; + enc_cfg.enc_blk.cfg.evrc.rate_modulation_cmd = rate_modulation_cmd; + enc_cfg.enc_blk.cfg.evrc.reserved = 0; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &enc_cfg); + if (rc < 0) { + pr_err("Comamnd %d failed\n", ASM_STREAM_CMD_SET_ENCDEC_PARAM); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. 
waited for FORMAT_UPDATE\n"); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_enc_cfg_blk_amrnb(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t band_mode, uint16_t dtx_enable) +{ + struct asm_stream_cmd_encdec_cfg_blk enc_cfg; + int rc = 0; + + pr_debug("%s:session[%d]frames[%d]band_mode[0x%4x]dtx_enable[0x%4x]", + __func__, ac->session, frames_per_buf, band_mode, dtx_enable); + + q6asm_add_hdr(ac, &enc_cfg.hdr, sizeof(enc_cfg), TRUE); + + enc_cfg.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + + enc_cfg.param_id = ASM_ENCDEC_CFG_BLK_ID; + enc_cfg.param_size = sizeof(struct asm_encode_cfg_blk); + + enc_cfg.enc_blk.frames_per_buf = frames_per_buf; + enc_cfg.enc_blk.format_id = AMRNB_FS; + enc_cfg.enc_blk.cfg_size = sizeof(struct asm_amrnb_read_cfg); + enc_cfg.enc_blk.cfg.amrnb.mode = band_mode; + enc_cfg.enc_blk.cfg.amrnb.dtx_mode = dtx_enable; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &enc_cfg); + if (rc < 0) { + pr_err("Comamnd %d failed\n", ASM_STREAM_CMD_SET_ENCDEC_PARAM); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. 
waited for FORMAT_UPDATE\n"); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_enc_cfg_blk_amrwb(struct audio_client *ac, uint32_t frames_per_buf, + uint16_t band_mode, uint16_t dtx_enable) +{ + struct asm_stream_cmd_encdec_cfg_blk enc_cfg; + int rc = 0; + + pr_debug("%s:session[%d]frames[%d]band_mode[0x%4x]dtx_enable[0x%4x]", + __func__, ac->session, frames_per_buf, band_mode, dtx_enable); + + q6asm_add_hdr(ac, &enc_cfg.hdr, sizeof(enc_cfg), TRUE); + + enc_cfg.hdr.opcode = ASM_STREAM_CMD_SET_ENCDEC_PARAM; + + enc_cfg.param_id = ASM_ENCDEC_CFG_BLK_ID; + enc_cfg.param_size = sizeof(struct asm_encode_cfg_blk); + + enc_cfg.enc_blk.frames_per_buf = frames_per_buf; + enc_cfg.enc_blk.format_id = AMRWB_FS; + enc_cfg.enc_blk.cfg_size = sizeof(struct asm_amrwb_read_cfg); + enc_cfg.enc_blk.cfg.amrwb.mode = band_mode; + enc_cfg.enc_blk.cfg.amrwb.dtx_mode = dtx_enable; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &enc_cfg); + if (rc < 0) { + pr_err("Comamnd %d failed\n", ASM_STREAM_CMD_SET_ENCDEC_PARAM); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. 
waited for FORMAT_UPDATE\n"); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_media_format_block_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels) +{ + struct asm_stream_media_format_update fmt; + int rc = 0; + + pr_debug("%s:session[%d]rate[%d]ch[%d]\n", __func__, ac->session, rate, + channels); + + q6asm_add_hdr(ac, &fmt.hdr, sizeof(fmt), TRUE); + + fmt.hdr.opcode = ASM_DATA_CMD_MEDIA_FORMAT_UPDATE; + + fmt.format = LINEAR_PCM; + fmt.cfg_size = sizeof(struct asm_pcm_cfg); + fmt.write_cfg.pcm_cfg.ch_cfg = channels; + fmt.write_cfg.pcm_cfg.bits_per_sample = 16; + fmt.write_cfg.pcm_cfg.sample_rate = rate; + fmt.write_cfg.pcm_cfg.is_signed = 1; + fmt.write_cfg.pcm_cfg.interleaved = 1; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &fmt); + if (rc < 0) { + pr_err("%s:Comamnd open failed\n", __func__); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout. waited for FORMAT_UPDATE\n", __func__); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_media_format_block_multi_ch_pcm(struct audio_client *ac, + uint32_t rate, uint32_t channels) +{ + struct asm_stream_media_format_update fmt; + u8 *channel_mapping; + int rc = 0; + + pr_debug("%s:session[%d]rate[%d]ch[%d]\n", __func__, ac->session, rate, + channels); + + q6asm_add_hdr(ac, &fmt.hdr, sizeof(fmt), TRUE); + + fmt.hdr.opcode = ASM_DATA_CMD_MEDIA_FORMAT_UPDATE; + + fmt.format = MULTI_CHANNEL_PCM; + fmt.cfg_size = sizeof(struct asm_multi_channel_pcm_fmt_blk); + fmt.write_cfg.multi_ch_pcm_cfg.num_channels = channels; + fmt.write_cfg.multi_ch_pcm_cfg.bits_per_sample = 16; + fmt.write_cfg.multi_ch_pcm_cfg.sample_rate = rate; + fmt.write_cfg.multi_ch_pcm_cfg.is_signed = 1; + fmt.write_cfg.multi_ch_pcm_cfg.is_interleaved = 1; + channel_mapping = + fmt.write_cfg.multi_ch_pcm_cfg.channel_mapping; + + memset(channel_mapping, 0, PCM_FORMAT_MAX_NUM_CHANNEL); + + if (channels 
== 1) { + channel_mapping[0] = PCM_CHANNEL_FL; + } else if (channels == 2) { + channel_mapping[0] = PCM_CHANNEL_FL; + channel_mapping[1] = PCM_CHANNEL_FR; + } else if (channels == 4) { + channel_mapping[0] = PCM_CHANNEL_FL; + channel_mapping[1] = PCM_CHANNEL_FR; + channel_mapping[2] = PCM_CHANNEL_LB; + channel_mapping[3] = PCM_CHANNEL_RB; + } else if (channels == 6) { + channel_mapping[0] = PCM_CHANNEL_FL; + channel_mapping[1] = PCM_CHANNEL_FR; + channel_mapping[2] = PCM_CHANNEL_FC; + channel_mapping[3] = PCM_CHANNEL_LFE; + channel_mapping[4] = PCM_CHANNEL_LB; + channel_mapping[5] = PCM_CHANNEL_RB; + } else if (channels == 8) { + channel_mapping[0] = PCM_CHANNEL_FC; + channel_mapping[1] = PCM_CHANNEL_FL; + channel_mapping[2] = PCM_CHANNEL_FR; + channel_mapping[3] = PCM_CHANNEL_LB; + channel_mapping[4] = PCM_CHANNEL_RB; + channel_mapping[5] = PCM_CHANNEL_LFE; + channel_mapping[6] = PCM_CHANNEL_FLC; + channel_mapping[7] = PCM_CHANNEL_FRC; + } else { + pr_err("%s: ERROR: unsupported num_ch = %u\n", __func__, + channels); + return -EINVAL; + } + + rc = apr_send_pkt(ac->apr, (uint32_t *) &fmt); + if (rc < 0) { + pr_err("%s:Command open failed\n", __func__); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout. 
waited for FORMAT_UPDATE\n", __func__); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_media_format_block_aac(struct audio_client *ac, + struct asm_aac_cfg *cfg) +{ + struct asm_stream_media_format_update fmt; + int rc = 0; + + pr_debug("%s:session[%d]rate[%d]ch[%d]\n", __func__, ac->session, + cfg->sample_rate, cfg->ch_cfg); + + q6asm_add_hdr(ac, &fmt.hdr, sizeof(fmt), TRUE); + + fmt.hdr.opcode = ASM_DATA_CMD_MEDIA_FORMAT_UPDATE; + + fmt.format = MPEG4_AAC; + fmt.cfg_size = sizeof(struct asm_aac_cfg); + fmt.write_cfg.aac_cfg.format = cfg->format; + fmt.write_cfg.aac_cfg.aot = cfg->aot; + fmt.write_cfg.aac_cfg.ep_config = cfg->ep_config; + fmt.write_cfg.aac_cfg.section_data_resilience = + cfg->section_data_resilience; + fmt.write_cfg.aac_cfg.scalefactor_data_resilience = + cfg->scalefactor_data_resilience; + fmt.write_cfg.aac_cfg.spectral_data_resilience = + cfg->spectral_data_resilience; + fmt.write_cfg.aac_cfg.ch_cfg = cfg->ch_cfg; + fmt.write_cfg.aac_cfg.sample_rate = cfg->sample_rate; + pr_info("%s:format=%x cfg_size=%d aac-cfg=%x aot=%d ch=%d sr=%d\n", + __func__, fmt.format, fmt.cfg_size, + fmt.write_cfg.aac_cfg.format, + fmt.write_cfg.aac_cfg.aot, + fmt.write_cfg.aac_cfg.ch_cfg, + fmt.write_cfg.aac_cfg.sample_rate); + rc = apr_send_pkt(ac->apr, (uint32_t *) &fmt); + if (rc < 0) { + pr_err("%s:Comamnd open failed\n", __func__); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout. 
waited for FORMAT_UPDATE\n", __func__); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_media_format_block_amrwbplus(struct audio_client *ac, + struct asm_amrwbplus_cfg *cfg) +{ + struct asm_stream_media_format_update fmt; + int rc = 0; + pr_debug("q6asm_media_format_block_amrwbplus"); + + pr_debug("%s:session[%d]band-mode[%d]frame-fmt[%d]ch[%d]\n", + __func__, + ac->session, + cfg->amr_band_mode, + cfg->amr_frame_fmt, + cfg->num_channels); + + q6asm_add_hdr(ac, &fmt.hdr, sizeof(fmt), TRUE); + + fmt.hdr.opcode = ASM_DATA_CMD_MEDIA_FORMAT_UPDATE; + + fmt.format = AMR_WB_PLUS; + fmt.cfg_size = cfg->size_bytes; + + fmt.write_cfg.amrwbplus_cfg.size_bytes = cfg->size_bytes; + fmt.write_cfg.amrwbplus_cfg.version = cfg->version; + fmt.write_cfg.amrwbplus_cfg.num_channels = cfg->num_channels; + fmt.write_cfg.amrwbplus_cfg.amr_band_mode = cfg->amr_band_mode; + fmt.write_cfg.amrwbplus_cfg.amr_dtx_mode = cfg->amr_dtx_mode; + fmt.write_cfg.amrwbplus_cfg.amr_frame_fmt = cfg->amr_frame_fmt; + fmt.write_cfg.amrwbplus_cfg.amr_lsf_idx = cfg->amr_lsf_idx; + + pr_debug("%s: num_channels=%x amr_band_mode=%d amr_frame_fmt=%d\n", + __func__, + cfg->num_channels, + cfg->amr_band_mode, + cfg->amr_frame_fmt); + + rc = apr_send_pkt(ac->apr, (uint32_t *) &fmt); + if (rc < 0) { + pr_err("%s:Comamnd media format update failed..\n", __func__); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout. 
waited for FORMAT_UPDATE\n", __func__); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} +int q6asm_media_format_block_multi_aac(struct audio_client *ac, + struct asm_aac_cfg *cfg) +{ + struct asm_stream_media_format_update fmt; + int rc = 0; + + pr_debug("%s:session[%d]rate[%d]ch[%d]\n", __func__, ac->session, + cfg->sample_rate, cfg->ch_cfg); + + q6asm_add_hdr(ac, &fmt.hdr, sizeof(fmt), TRUE); + + fmt.hdr.opcode = ASM_DATA_CMD_MEDIA_FORMAT_UPDATE; + + fmt.format = MPEG4_MULTI_AAC; + fmt.cfg_size = sizeof(struct asm_aac_cfg); + fmt.write_cfg.aac_cfg.format = cfg->format; + fmt.write_cfg.aac_cfg.aot = cfg->aot; + fmt.write_cfg.aac_cfg.ep_config = cfg->ep_config; + fmt.write_cfg.aac_cfg.section_data_resilience = + cfg->section_data_resilience; + fmt.write_cfg.aac_cfg.scalefactor_data_resilience = + cfg->scalefactor_data_resilience; + fmt.write_cfg.aac_cfg.spectral_data_resilience = + cfg->spectral_data_resilience; + fmt.write_cfg.aac_cfg.ch_cfg = cfg->ch_cfg; + fmt.write_cfg.aac_cfg.sample_rate = cfg->sample_rate; + pr_info("%s:format=%x cfg_size=%d aac-cfg=%x aot=%d ch=%d sr=%d\n", + __func__, fmt.format, fmt.cfg_size, + fmt.write_cfg.aac_cfg.format, + fmt.write_cfg.aac_cfg.aot, + fmt.write_cfg.aac_cfg.ch_cfg, + fmt.write_cfg.aac_cfg.sample_rate); + rc = apr_send_pkt(ac->apr, (uint32_t *) &fmt); + if (rc < 0) { + pr_err("%s:Comamnd open failed\n", __func__); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout. 
waited for FORMAT_UPDATE\n", __func__); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + + + +int q6asm_media_format_block(struct audio_client *ac, uint32_t format) +{ + + struct asm_stream_media_format_update fmt; + int rc = 0; + + pr_debug("%s:session[%d] format[0x%x]\n", __func__, + ac->session, format); + + q6asm_add_hdr(ac, &fmt.hdr, sizeof(fmt), TRUE); + fmt.hdr.opcode = ASM_DATA_CMD_MEDIA_FORMAT_UPDATE; + switch (format) { + case FORMAT_V13K: + fmt.format = V13K_FS; + break; + case FORMAT_EVRC: + fmt.format = EVRC_FS; + break; + case FORMAT_AMRWB: + fmt.format = AMRWB_FS; + break; + case FORMAT_AMR_WB_PLUS: + fmt.format = AMR_WB_PLUS; + break; + case FORMAT_AMRNB: + fmt.format = AMRNB_FS; + break; + case FORMAT_MP3: + fmt.format = MP3; + break; + case FORMAT_DTS: + fmt.format = DTS; + break; + case FORMAT_DTS_LBR: + fmt.format = DTS_LBR; + break; + default: + pr_err("Invalid format[%d]\n", format); + goto fail_cmd; + } + fmt.cfg_size = 0; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &fmt); + if (rc < 0) { + pr_err("%s:Comamnd open failed\n", __func__); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout. 
waited for FORMAT_UPDATE\n", __func__); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_media_format_block_wma(struct audio_client *ac, + void *cfg) +{ + struct asm_stream_media_format_update fmt; + struct asm_wma_cfg *wma_cfg = (struct asm_wma_cfg *)cfg; + int rc = 0; + + pr_debug("session[%d]format_tag[0x%4x] rate[%d] ch[0x%4x] bps[%d], balign[0x%4x], bit_sample[0x%4x], ch_msk[%d], enc_opt[0x%4x]\n", + ac->session, wma_cfg->format_tag, wma_cfg->sample_rate, + wma_cfg->ch_cfg, wma_cfg->avg_bytes_per_sec, + wma_cfg->block_align, wma_cfg->valid_bits_per_sample, + wma_cfg->ch_mask, wma_cfg->encode_opt); + + q6asm_add_hdr(ac, &fmt.hdr, sizeof(fmt), TRUE); + + fmt.hdr.opcode = ASM_DATA_CMD_MEDIA_FORMAT_UPDATE; + + fmt.format = WMA_V9; + fmt.cfg_size = sizeof(struct asm_wma_cfg); + fmt.write_cfg.wma_cfg.format_tag = wma_cfg->format_tag; + fmt.write_cfg.wma_cfg.ch_cfg = wma_cfg->ch_cfg; + fmt.write_cfg.wma_cfg.sample_rate = wma_cfg->sample_rate; + fmt.write_cfg.wma_cfg.avg_bytes_per_sec = wma_cfg->avg_bytes_per_sec; + fmt.write_cfg.wma_cfg.block_align = wma_cfg->block_align; + fmt.write_cfg.wma_cfg.valid_bits_per_sample = + wma_cfg->valid_bits_per_sample; + fmt.write_cfg.wma_cfg.ch_mask = wma_cfg->ch_mask; + fmt.write_cfg.wma_cfg.encode_opt = wma_cfg->encode_opt; + fmt.write_cfg.wma_cfg.adv_encode_opt = 0; + fmt.write_cfg.wma_cfg.adv_encode_opt2 = 0; + fmt.write_cfg.wma_cfg.drc_peak_ref = 0; + fmt.write_cfg.wma_cfg.drc_peak_target = 0; + fmt.write_cfg.wma_cfg.drc_ave_ref = 0; + fmt.write_cfg.wma_cfg.drc_ave_target = 0; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &fmt); + if (rc < 0) { + pr_err("%s:Comamnd open failed\n", __func__); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout. 
waited for FORMAT_UPDATE\n", __func__); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_media_format_block_wmapro(struct audio_client *ac, + void *cfg) +{ + struct asm_stream_media_format_update fmt; + struct asm_wmapro_cfg *wmapro_cfg = (struct asm_wmapro_cfg *)cfg; + int rc = 0; + + pr_debug("session[%d]format_tag[0x%4x] rate[%d] ch[0x%4x] bps[%d], balign[0x%4x], bit_sample[0x%4x], ch_msk[%d], enc_opt[0x%4x], adv_enc_opt[0x%4x], adv_enc_opt2[0x%8x]\n", + ac->session, wmapro_cfg->format_tag, wmapro_cfg->sample_rate, + wmapro_cfg->ch_cfg, wmapro_cfg->avg_bytes_per_sec, + wmapro_cfg->block_align, wmapro_cfg->valid_bits_per_sample, + wmapro_cfg->ch_mask, wmapro_cfg->encode_opt, + wmapro_cfg->adv_encode_opt, wmapro_cfg->adv_encode_opt2); + + q6asm_add_hdr(ac, &fmt.hdr, sizeof(fmt), TRUE); + + fmt.hdr.opcode = ASM_DATA_CMD_MEDIA_FORMAT_UPDATE; + + fmt.format = WMA_V10PRO; + fmt.cfg_size = sizeof(struct asm_wmapro_cfg); + fmt.write_cfg.wmapro_cfg.format_tag = wmapro_cfg->format_tag; + fmt.write_cfg.wmapro_cfg.ch_cfg = wmapro_cfg->ch_cfg; + fmt.write_cfg.wmapro_cfg.sample_rate = wmapro_cfg->sample_rate; + fmt.write_cfg.wmapro_cfg.avg_bytes_per_sec = + wmapro_cfg->avg_bytes_per_sec; + fmt.write_cfg.wmapro_cfg.block_align = wmapro_cfg->block_align; + fmt.write_cfg.wmapro_cfg.valid_bits_per_sample = + wmapro_cfg->valid_bits_per_sample; + fmt.write_cfg.wmapro_cfg.ch_mask = wmapro_cfg->ch_mask; + fmt.write_cfg.wmapro_cfg.encode_opt = wmapro_cfg->encode_opt; + fmt.write_cfg.wmapro_cfg.adv_encode_opt = wmapro_cfg->adv_encode_opt; + fmt.write_cfg.wmapro_cfg.adv_encode_opt2 = wmapro_cfg->adv_encode_opt2; + fmt.write_cfg.wmapro_cfg.drc_peak_ref = 0; + fmt.write_cfg.wmapro_cfg.drc_peak_target = 0; + fmt.write_cfg.wmapro_cfg.drc_ave_ref = 0; + fmt.write_cfg.wmapro_cfg.drc_ave_target = 0; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &fmt); + if (rc < 0) { + pr_err("%s:Comamnd open failed\n", __func__); + goto fail_cmd; + } + rc = 
wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s:timeout. waited for FORMAT_UPDATE\n", __func__); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_memory_map(struct audio_client *ac, uint32_t buf_add, int dir, + uint32_t bufsz, uint32_t bufcnt) +{ + struct asm_stream_cmd_memory_map mem_map; + int rc = 0; + + if (!ac || ac->apr == NULL || this_mmap.apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + + pr_debug("%s: Session[%d]\n", __func__, ac->session); + + mem_map.hdr.opcode = ASM_SESSION_CMD_MEMORY_MAP; + + mem_map.buf_add = buf_add; + mem_map.buf_size = bufsz * bufcnt; + mem_map.mempool_id = 0; /* EBI */ + mem_map.reserved = 0; + + q6asm_add_mmaphdr(&mem_map.hdr, + sizeof(struct asm_stream_cmd_memory_map), TRUE); + + pr_debug("buf add[%x] buf_add_parameter[%x]\n", + mem_map.buf_add, buf_add); + + rc = apr_send_pkt(this_mmap.apr, (uint32_t *) &mem_map); + if (rc < 0) { + pr_err("mem_map op[0x%x]rc[%d]\n", + mem_map.hdr.opcode, rc); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(this_mmap.cmd_wait, + (atomic_read(&this_mmap.cmd_state) == 0), 5 * HZ); + if (!rc) { + pr_err("timeout. 
waited for memory_map\n"); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; +fail_cmd: + return rc; +} + +int q6asm_memory_unmap(struct audio_client *ac, uint32_t buf_add, int dir) +{ + struct asm_stream_cmd_memory_unmap mem_unmap; + int rc = 0; + + if (!ac || ac->apr == NULL || this_mmap.apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s: Session[%d]\n", __func__, ac->session); + + q6asm_add_mmaphdr(&mem_unmap.hdr, + sizeof(struct asm_stream_cmd_memory_unmap), TRUE); + mem_unmap.hdr.opcode = ASM_SESSION_CMD_MEMORY_UNMAP; + mem_unmap.buf_add = buf_add; + + rc = apr_send_pkt(this_mmap.apr, (uint32_t *) &mem_unmap); + if (rc < 0) { + pr_err("mem_unmap op[0x%x]rc[%d]\n", + mem_unmap.hdr.opcode, rc); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(this_mmap.cmd_wait, + (atomic_read(&this_mmap.cmd_state) == 0), 5 * HZ); + if (!rc) { + pr_err("timeout. waited for memory_map\n"); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; +fail_cmd: + return rc; +} + +int q6asm_set_lrgain(struct audio_client *ac, int left_gain, int right_gain) +{ + void *vol_cmd = NULL; + void *payload = NULL; + struct asm_pp_params_command *cmd = NULL; + struct asm_lrchannel_gain_params *lrgain = NULL; + int sz = 0; + int rc = 0; + + sz = sizeof(struct asm_pp_params_command) + + + sizeof(struct asm_lrchannel_gain_params); + vol_cmd = kzalloc(sz, GFP_KERNEL); + if (vol_cmd == NULL) { + pr_err("%s[%d]: Mem alloc failed\n", __func__, ac->session); + rc = -EINVAL; + return rc; + } + cmd = (struct asm_pp_params_command *)vol_cmd; + q6asm_add_hdr_async(ac, &cmd->hdr, sz, TRUE); + cmd->hdr.opcode = ASM_STREAM_CMD_SET_PP_PARAMS; + cmd->payload = NULL; + cmd->payload_size = sizeof(struct asm_pp_param_data_hdr) + + sizeof(struct asm_lrchannel_gain_params); + cmd->params.module_id = VOLUME_CONTROL_MODULE_ID; + cmd->params.param_id = L_R_CHANNEL_GAIN_PARAM_ID; + cmd->params.param_size = sizeof(struct asm_lrchannel_gain_params); + cmd->params.reserved = 0; + + 
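The volume/mute/gain commands above all use the same layout: one kzalloc'd buffer holding a fixed asm_pp_params_command header immediately followed by the parameter struct, located by offsetting past the header. A minimal portable sketch of that pattern follows; the struct names, fields, and opcode below are simplified stand-ins, not the driver's definitions. Note the cast to `unsigned char *` before the offset, since arithmetic on `void *` (as in `vol_cmd + sizeof(...)`) is a GCC extension rather than standard C:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-ins for the driver's command header and parameter blob. */
struct pp_cmd_hdr { uint32_t opcode; uint32_t payload_size; };
struct lr_gain    { uint16_t left_gain; uint16_t right_gain; };

/* Allocate one zeroed buffer holding the header immediately followed by
 * the payload, mirroring the kzalloc(sz) + pointer-offset pattern. */
static struct pp_cmd_hdr *build_lr_gain_cmd(uint16_t left, uint16_t right)
{
	size_t sz = sizeof(struct pp_cmd_hdr) + sizeof(struct lr_gain);
	struct pp_cmd_hdr *cmd = calloc(1, sz);
	struct lr_gain *g;

	if (!cmd)
		return NULL;
	cmd->opcode = 0x1234;	/* placeholder opcode, not a real ASM ID */
	cmd->payload_size = sizeof(struct lr_gain);
	/* Cast to unsigned char * before offsetting: arithmetic on void *
	 * is a GCC extension, not standard C. */
	g = (struct lr_gain *)((unsigned char *)cmd + sizeof(*cmd));
	g->left_gain = left;
	g->right_gain = right;
	return cmd;
}
```

Keeping header and payload in a single allocation means one kfree on every exit path, which is why the driver's fail_cmd labels can free vol_cmd unconditionally.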
payload = (u8 *)(vol_cmd + sizeof(struct asm_pp_params_command)); + lrgain = (struct asm_lrchannel_gain_params *)payload; + + lrgain->left_gain = left_gain; + lrgain->right_gain = right_gain; + rc = apr_send_pkt(ac->apr, (uint32_t *) vol_cmd); + if (rc < 0) { + pr_err("%s: Volume Command failed\n", __func__); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout in sending volume command to apr\n", + __func__); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; +fail_cmd: + kfree(vol_cmd); + return rc; +} + +static int q6asm_memory_map_regions(struct audio_client *ac, int dir, + uint32_t bufsz, uint32_t bufcnt) +{ + struct asm_stream_cmd_memory_map_regions *mmap_regions = NULL; + struct asm_memory_map_regions *mregions = NULL; + struct audio_port_data *port = NULL; + struct audio_buffer *ab = NULL; + void *mmap_region_cmd = NULL; + void *payload = NULL; + int rc = 0; + int i = 0; + int cmd_size = 0; + + if (!ac || ac->apr == NULL || this_mmap.apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s: Session[%d]\n", __func__, ac->session); + + cmd_size = sizeof(struct asm_stream_cmd_memory_map_regions) + + sizeof(struct asm_memory_map_regions) * bufcnt; + + mmap_region_cmd = kzalloc(cmd_size, GFP_KERNEL); + if (mmap_region_cmd == NULL) { + pr_err("%s: Mem alloc failed\n", __func__); + rc = -EINVAL; + return rc; + } + mmap_regions = (struct asm_stream_cmd_memory_map_regions *) + mmap_region_cmd; + q6asm_add_mmaphdr(&mmap_regions->hdr, cmd_size, TRUE); + mmap_regions->hdr.opcode = ASM_SESSION_CMD_MEMORY_MAP_REGIONS; + mmap_regions->mempool_id = 0; + mmap_regions->nregions = bufcnt & 0x00ff; + pr_debug("map_regions->nregions = %d\n", mmap_regions->nregions); + payload = ((u8 *) mmap_region_cmd + + sizeof(struct asm_stream_cmd_memory_map_regions)); + mregions = (struct asm_memory_map_regions *)payload; + + port = &ac->port[dir]; + for (i = 0; i 
< bufcnt; i++) { + ab = &port->buf[i]; + mregions->phys = ab->phys; + mregions->buf_size = ab->size; + ++mregions; + } + + rc = apr_send_pkt(this_mmap.apr, (uint32_t *) mmap_region_cmd); + if (rc < 0) { + pr_err("mmap_regions op[0x%x]rc[%d]\n", + mmap_regions->hdr.opcode, rc); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(this_mmap.cmd_wait, + (atomic_read(&this_mmap.cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. waited for memory_map\n"); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; +fail_cmd: + kfree(mmap_region_cmd); + return rc; +} + +static int q6asm_memory_unmap_regions(struct audio_client *ac, int dir, + uint32_t bufsz, uint32_t bufcnt) +{ + struct asm_stream_cmd_memory_unmap_regions *unmap_regions = NULL; + struct asm_memory_unmap_regions *mregions = NULL; + struct audio_port_data *port = NULL; + struct audio_buffer *ab = NULL; + void *unmap_region_cmd = NULL; + void *payload = NULL; + int rc = 0; + int i = 0; + int cmd_size = 0; + + if (!ac || ac->apr == NULL || this_mmap.apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s: Session[%d]\n", __func__, ac->session); + + cmd_size = sizeof(struct asm_stream_cmd_memory_unmap_regions) + + sizeof(struct asm_memory_unmap_regions) * bufcnt; + + unmap_region_cmd = kzalloc(cmd_size, GFP_KERNEL); + if (unmap_region_cmd == NULL) { + pr_err("%s: Mem alloc failed\n", __func__); + rc = -EINVAL; + return rc; + } + unmap_regions = (struct asm_stream_cmd_memory_unmap_regions *) + unmap_region_cmd; + q6asm_add_mmaphdr(&unmap_regions->hdr, cmd_size, TRUE); + unmap_regions->hdr.opcode = ASM_SESSION_CMD_MEMORY_UNMAP_REGIONS; + unmap_regions->nregions = bufcnt & 0x00ff; + pr_debug("unmap_regions->nregions = %d\n", unmap_regions->nregions); + payload = ((u8 *) unmap_region_cmd + + sizeof(struct asm_stream_cmd_memory_unmap_regions)); + mregions = (struct asm_memory_unmap_regions *)payload; + port = &ac->port[dir]; + for (i = 0; i < bufcnt; i++) { + ab = &port->buf[i]; + 
mregions->phys = ab->phys; + ++mregions; + } + + rc = apr_send_pkt(this_mmap.apr, (uint32_t *) unmap_region_cmd); + if (rc < 0) { + pr_err("mmap_regions op[0x%x]rc[%d]\n", + unmap_regions->hdr.opcode, rc); + goto fail_cmd; + } + + rc = wait_event_timeout(this_mmap.cmd_wait, + (atomic_read(&this_mmap.cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. waited for memory_unmap\n"); + goto fail_cmd; + } + rc = 0; + +fail_cmd: + kfree(unmap_region_cmd); + return rc; +} + +int q6asm_set_mute(struct audio_client *ac, int muteflag) +{ + void *vol_cmd = NULL; + void *payload = NULL; + struct asm_pp_params_command *cmd = NULL; + struct asm_mute_params *mute = NULL; + int sz = 0; + int rc = 0; + + sz = sizeof(struct asm_pp_params_command) + + + sizeof(struct asm_mute_params); + vol_cmd = kzalloc(sz, GFP_KERNEL); + if (vol_cmd == NULL) { + pr_err("%s[%d]: Mem alloc failed\n", __func__, ac->session); + rc = -EINVAL; + return rc; + } + cmd = (struct asm_pp_params_command *)vol_cmd; + q6asm_add_hdr_async(ac, &cmd->hdr, sz, TRUE); + cmd->hdr.opcode = ASM_STREAM_CMD_SET_PP_PARAMS; + cmd->payload = NULL; + cmd->payload_size = sizeof(struct asm_pp_param_data_hdr) + + sizeof(struct asm_mute_params); + cmd->params.module_id = VOLUME_CONTROL_MODULE_ID; + cmd->params.param_id = MUTE_CONFIG_PARAM_ID; + cmd->params.param_size = sizeof(struct asm_mute_params); + cmd->params.reserved = 0; + + payload = (u8 *)(vol_cmd + sizeof(struct asm_pp_params_command)); + mute = (struct asm_mute_params *)payload; + + mute->muteflag = muteflag; + rc = apr_send_pkt(ac->apr, (uint32_t *) vol_cmd); + if (rc < 0) { + pr_err("%s: Mute Command failed\n", __func__); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout in sending mute command to apr\n", + __func__); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; +fail_cmd: + kfree(vol_cmd); + return rc; +} + +int q6asm_set_volume(struct audio_client *ac, 
int volume) +{ + void *vol_cmd = NULL; + void *payload = NULL; + struct asm_pp_params_command *cmd = NULL; + struct asm_master_gain_params *mgain = NULL; + int sz = 0; + int rc = 0; + + sz = sizeof(struct asm_pp_params_command) + + + sizeof(struct asm_master_gain_params); + vol_cmd = kzalloc(sz, GFP_KERNEL); + if (vol_cmd == NULL) { + pr_err("%s[%d]: Mem alloc failed\n", __func__, ac->session); + rc = -EINVAL; + return rc; + } + cmd = (struct asm_pp_params_command *)vol_cmd; + q6asm_add_hdr_async(ac, &cmd->hdr, sz, TRUE); + cmd->hdr.opcode = ASM_STREAM_CMD_SET_PP_PARAMS; + cmd->payload = NULL; + cmd->payload_size = sizeof(struct asm_pp_param_data_hdr) + + sizeof(struct asm_master_gain_params); + cmd->params.module_id = VOLUME_CONTROL_MODULE_ID; + cmd->params.param_id = MASTER_GAIN_PARAM_ID; + cmd->params.param_size = sizeof(struct asm_master_gain_params); + cmd->params.reserved = 0; + + payload = (u8 *)(vol_cmd + sizeof(struct asm_pp_params_command)); + mgain = (struct asm_master_gain_params *)payload; + + mgain->master_gain = volume; + mgain->padding = 0x00; + rc = apr_send_pkt(ac->apr, (uint32_t *) vol_cmd); + if (rc < 0) { + pr_err("%s: Volume Command failed\n", __func__); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout in sending volume command to apr\n", + __func__); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; +fail_cmd: + kfree(vol_cmd); + return rc; +} + +int q6asm_set_softpause(struct audio_client *ac, + struct asm_softpause_params *pause_param) +{ + void *vol_cmd = NULL; + void *payload = NULL; + struct asm_pp_params_command *cmd = NULL; + struct asm_softpause_params *params = NULL; + int sz = 0; + int rc = 0; + + sz = sizeof(struct asm_pp_params_command) + + + sizeof(struct asm_softpause_params); + vol_cmd = kzalloc(sz, GFP_KERNEL); + if (vol_cmd == NULL) { + pr_err("%s[%d]: Mem alloc failed\n", __func__, ac->session); + rc = -EINVAL; + return 
rc; + } + cmd = (struct asm_pp_params_command *)vol_cmd; + q6asm_add_hdr_async(ac, &cmd->hdr, sz, TRUE); + cmd->hdr.opcode = ASM_STREAM_CMD_SET_PP_PARAMS; + cmd->payload = NULL; + cmd->payload_size = sizeof(struct asm_pp_param_data_hdr) + + sizeof(struct asm_softpause_params); + cmd->params.module_id = VOLUME_CONTROL_MODULE_ID; + cmd->params.param_id = SOFT_PAUSE_PARAM_ID; + cmd->params.param_size = sizeof(struct asm_softpause_params); + cmd->params.reserved = 0; + + payload = (u8 *)(vol_cmd + sizeof(struct asm_pp_params_command)); + params = (struct asm_softpause_params *)payload; + + params->enable = pause_param->enable; + params->period = pause_param->period; + params->step = pause_param->step; + params->rampingcurve = pause_param->rampingcurve; + pr_debug("%s: soft Pause Command: enable = %d, period = %d, step = %d, curve = %d\n", + __func__, params->enable, + params->period, params->step, params->rampingcurve); + rc = apr_send_pkt(ac->apr, (uint32_t *) vol_cmd); + if (rc < 0) { + pr_err("%s: Volume Command(soft_pause) failed\n", __func__); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout in sending volume command(soft_pause) to apr\n", + __func__); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; +fail_cmd: + kfree(vol_cmd); + return rc; +} + +int q6asm_set_softvolume(struct audio_client *ac, + struct asm_softvolume_params *softvol_param) +{ + void *vol_cmd = NULL; + void *payload = NULL; + struct asm_pp_params_command *cmd = NULL; + struct asm_softvolume_params *params = NULL; + int sz = 0; + int rc = 0; + + sz = sizeof(struct asm_pp_params_command) + + + sizeof(struct asm_softvolume_params); + vol_cmd = kzalloc(sz, GFP_KERNEL); + if (vol_cmd == NULL) { + pr_err("%s[%d]: Mem alloc failed\n", __func__, ac->session); + rc = -EINVAL; + return rc; + } + cmd = (struct asm_pp_params_command *)vol_cmd; + q6asm_add_hdr_async(ac, &cmd->hdr, sz, TRUE); + 
cmd->hdr.opcode = ASM_STREAM_CMD_SET_PP_PARAMS; + cmd->payload = NULL; + cmd->payload_size = sizeof(struct asm_pp_param_data_hdr) + + sizeof(struct asm_softvolume_params); + cmd->params.module_id = VOLUME_CONTROL_MODULE_ID; + cmd->params.param_id = SOFT_VOLUME_PARAM_ID; + cmd->params.param_size = sizeof(struct asm_softvolume_params); + cmd->params.reserved = 0; + + payload = (u8 *)(vol_cmd + sizeof(struct asm_pp_params_command)); + params = (struct asm_softvolume_params *)payload; + + params->period = softvol_param->period; + params->step = softvol_param->step; + params->rampingcurve = softvol_param->rampingcurve; + pr_debug("%s: soft Volume:opcode = %d,payload_sz =%d,module_id =%d, param_id = %d, param_sz = %d\n", + __func__, + cmd->hdr.opcode, cmd->payload_size, + cmd->params.module_id, cmd->params.param_id, + cmd->params.param_size); + pr_debug("%s: soft Volume Command: period = %d, step = %d, curve = %d\n", + __func__, params->period, + params->step, params->rampingcurve); + rc = apr_send_pkt(ac->apr, (uint32_t *) vol_cmd); + if (rc < 0) { + pr_err("%s: Volume Command(soft_volume) failed\n", __func__); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout in sending volume command(soft_volume) to apr\n", + __func__); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; +fail_cmd: + kfree(vol_cmd); + return rc; +} + +int q6asm_equalizer(struct audio_client *ac, void *eq) +{ + void *eq_cmd = NULL; + void *payload = NULL; + struct asm_pp_params_command *cmd = NULL; + struct asm_equalizer_params *equalizer = NULL; + struct msm_audio_eq_stream_config *eq_params = NULL; + int i = 0; + int sz = 0; + int rc = 0; + + sz = sizeof(struct asm_pp_params_command) + + + sizeof(struct asm_equalizer_params); + eq_cmd = kzalloc(sz, GFP_KERNEL); + if (eq_cmd == NULL) { + pr_err("%s[%d]: Mem alloc failed\n", __func__, ac->session); + rc = -EINVAL; + goto fail_cmd; + } + eq_params = 
(struct msm_audio_eq_stream_config *) eq; + cmd = (struct asm_pp_params_command *)eq_cmd; + q6asm_add_hdr(ac, &cmd->hdr, sz, TRUE); + cmd->hdr.opcode = ASM_STREAM_CMD_SET_PP_PARAMS; + cmd->payload = NULL; + cmd->payload_size = sizeof(struct asm_pp_param_data_hdr) + + sizeof(struct asm_equalizer_params); + cmd->params.module_id = EQUALIZER_MODULE_ID; + cmd->params.param_id = EQUALIZER_PARAM_ID; + cmd->params.param_size = sizeof(struct asm_equalizer_params); + cmd->params.reserved = 0; + payload = (u8 *)(eq_cmd + sizeof(struct asm_pp_params_command)); + equalizer = (struct asm_equalizer_params *)payload; + + equalizer->enable = eq_params->enable; + equalizer->num_bands = eq_params->num_bands; + pr_debug("%s: enable:%d numbands:%d\n", __func__, eq_params->enable, + eq_params->num_bands); + for (i = 0; i < eq_params->num_bands; i++) { + equalizer->eq_bands[i].band_idx = + eq_params->eq_bands[i].band_idx; + equalizer->eq_bands[i].filter_type = + eq_params->eq_bands[i].filter_type; + equalizer->eq_bands[i].center_freq_hz = + eq_params->eq_bands[i].center_freq_hz; + equalizer->eq_bands[i].filter_gain = + eq_params->eq_bands[i].filter_gain; + equalizer->eq_bands[i].q_factor = + eq_params->eq_bands[i].q_factor; + pr_debug("%s: filter_type:%u bandnum:%d\n", __func__, + eq_params->eq_bands[i].filter_type, i); + pr_debug("%s: center_freq_hz:%u bandnum:%d\n", __func__, + eq_params->eq_bands[i].center_freq_hz, i); + pr_debug("%s: filter_gain:%d bandnum:%d\n", __func__, + eq_params->eq_bands[i].filter_gain, i); + pr_debug("%s: q_factor:%d bandnum:%d\n", __func__, + eq_params->eq_bands[i].q_factor, i); + } + rc = apr_send_pkt(ac->apr, (uint32_t *) eq_cmd); + if (rc < 0) { + pr_err("%s: Equalizer Command failed\n", __func__); + rc = -EINVAL; + goto fail_cmd; + } + + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout in sending equalizer command to apr\n", + __func__); + rc = -EINVAL; + goto fail_cmd; + } + rc = 0; 
+fail_cmd: + kfree(eq_cmd); + return rc; +} + +int q6asm_read(struct audio_client *ac) +{ + struct asm_stream_cmd_read read; + struct audio_buffer *ab; + int dsp_buf; + struct audio_port_data *port; + int rc; + if (!ac || ac->apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + if (ac->io_mode & SYNC_IO_MODE) { + port = &ac->port[OUT]; + + q6asm_add_hdr(ac, &read.hdr, sizeof(read), FALSE); + + mutex_lock(&port->lock); + + dsp_buf = port->dsp_buf; + ab = &port->buf[dsp_buf]; + + pr_debug("%s:session[%d]dsp-buf[%d][%p]cpu_buf[%d][%p]\n", + __func__, + ac->session, + dsp_buf, + (void *)port->buf[dsp_buf].data, + port->cpu_buf, + (void *)port->buf[port->cpu_buf].phys); + + read.hdr.opcode = ASM_DATA_CMD_READ; + read.buf_add = ab->phys; + read.buf_size = ab->size; + read.uid = port->dsp_buf; + read.hdr.token = port->dsp_buf; + + port->dsp_buf = (port->dsp_buf + 1) & (port->max_buf_cnt - 1); + mutex_unlock(&port->lock); + pr_debug("%s:buf add[0x%x] token[%d] uid[%d]\n", __func__, + read.buf_add, + read.hdr.token, + read.uid); + rc = apr_send_pkt(ac->apr, (uint32_t *) &read); + if (rc < 0) { + pr_err("read op[0x%x]rc[%d]\n", read.hdr.opcode, rc); + goto fail_cmd; + } + return 0; + } +fail_cmd: + return -EINVAL; +} + +int q6asm_read_nolock(struct audio_client *ac) +{ + struct asm_stream_cmd_read read; + struct audio_buffer *ab; + int dsp_buf; + struct audio_port_data *port; + int rc; + if (!ac || ac->apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + if (ac->io_mode & SYNC_IO_MODE) { + port = &ac->port[OUT]; + + q6asm_add_hdr_async(ac, &read.hdr, sizeof(read), FALSE); + + + dsp_buf = port->dsp_buf; + ab = &port->buf[dsp_buf]; + + pr_debug("%s:session[%d]dsp-buf[%d][%p]cpu_buf[%d][%p]\n", + __func__, + ac->session, + dsp_buf, + (void *)port->buf[dsp_buf].data, + port->cpu_buf, + (void *)port->buf[port->cpu_buf].phys); + + read.hdr.opcode = ASM_DATA_CMD_READ; + read.buf_add = ab->phys; + read.buf_size = ab->size; + read.uid = 
port->dsp_buf; + read.hdr.token = port->dsp_buf; + + port->dsp_buf = (port->dsp_buf + 1) & (port->max_buf_cnt - 1); + pr_info("%s:buf add[0x%x] token[%d] uid[%d]\n", __func__, + read.buf_add, + read.hdr.token, + read.uid); + rc = apr_send_pkt(ac->apr, (uint32_t *) &read); + if (rc < 0) { + pr_err("read op[0x%x]rc[%d]\n", read.hdr.opcode, rc); + goto fail_cmd; + } + return 0; + } +fail_cmd: + return -EINVAL; +} + + +static void q6asm_add_hdr_async(struct audio_client *ac, struct apr_hdr *hdr, + uint32_t pkt_size, uint32_t cmd_flg) +{ + pr_debug("session=%d pkt size=%d cmd_flg=%d\n", ac->session, pkt_size, + cmd_flg); + hdr->hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, \ + APR_HDR_LEN(sizeof(struct apr_hdr)),\ + APR_PKT_VER); + hdr->src_svc = ((struct apr_svc *)ac->apr)->id; + hdr->src_domain = APR_DOMAIN_APPS; + hdr->dest_svc = APR_SVC_ASM; + hdr->dest_domain = APR_DOMAIN_ADSP; + hdr->src_port = ((ac->session << 8) & 0xFF00) | 0x01; + hdr->dest_port = ((ac->session << 8) & 0xFF00) | 0x01; + if (cmd_flg) { + hdr->token = ac->session; + atomic_set(&ac->cmd_state, 1); + } + hdr->pkt_size = pkt_size; + return; +} + +int q6asm_async_write(struct audio_client *ac, + struct audio_aio_write_param *param) +{ + int rc = 0; + struct asm_stream_cmd_write write; + + if (!ac || ac->apr == NULL) { + pr_err("%s: APR handle NULL\n", __func__); + return -EINVAL; + } + + q6asm_add_hdr_async(ac, &write.hdr, sizeof(write), FALSE); + + /* Pass physical address as token for AIO scheme */ + write.hdr.token = param->uid; + write.hdr.opcode = ASM_DATA_CMD_WRITE; + write.buf_add = param->paddr; + write.avail_bytes = param->len; + write.uid = param->uid; + write.msw_ts = param->msw_ts; + write.lsw_ts = param->lsw_ts; + /* Use 0xFF00 for disabling timestamps */ + if (param->flags == 0xFF00) + write.uflags = (0x00000000 | (param->flags & 0x800000FF)); + else + write.uflags = (0x80000000 | param->flags); + + pr_debug("%s: session[%d] bufadd[0x%x]len[0x%x]", __func__, ac->session, + 
write.buf_add, write.avail_bytes); + + rc = apr_send_pkt(ac->apr, (uint32_t *) &write); + if (rc < 0) { + pr_debug("[%s] write op[0x%x]rc[%d]\n", __func__, + write.hdr.opcode, rc); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_async_read(struct audio_client *ac, + struct audio_aio_read_param *param) +{ + int rc = 0; + struct asm_stream_cmd_read read; + + if (!ac || ac->apr == NULL) { + pr_err("%s: APR handle NULL\n", __func__); + return -EINVAL; + } + + q6asm_add_hdr_async(ac, &read.hdr, sizeof(read), FALSE); + + /* Pass physical address as token for AIO scheme */ + read.hdr.token = param->paddr; + read.hdr.opcode = ASM_DATA_CMD_READ; + read.buf_add = param->paddr; + read.buf_size = param->len; + read.uid = param->uid; + + pr_debug("%s: session[%d] bufadd[0x%x]len[0x%x]", __func__, ac->session, + read.buf_add, read.buf_size); + + rc = apr_send_pkt(ac->apr, (uint32_t *) &read); + if (rc < 0) { + pr_debug("[%s] read op[0x%x]rc[%d]\n", __func__, + read.hdr.opcode, rc); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_async_read_compressed(struct audio_client *ac, + struct audio_aio_read_param *param) +{ + int rc = 0; + struct asm_stream_cmd_read read; + + if (!ac || ac->apr == NULL) { + pr_err("%s: APR handle NULL\n", __func__); + return -EINVAL; + } + + q6asm_add_hdr_async(ac, &read.hdr, sizeof(read), FALSE); + + /* Pass physical address as token for AIO scheme */ + read.hdr.token = param->paddr; + read.hdr.opcode = ASM_DATA_CMD_READ_COMPRESSED; + read.buf_add = param->paddr; + read.buf_size = param->len; + read.uid = param->uid; + + pr_debug("%s: session[%d] bufadd[0x%x]len[0x%x]", __func__, ac->session, + read.buf_add, read.buf_size); + + rc = apr_send_pkt(ac->apr, (uint32_t *) &read); + if (rc < 0) { + pr_debug("[%s] read op[0x%x]rc[%d]\n", __func__, + read.hdr.opcode, rc); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_write(struct audio_client *ac, uint32_t len, uint32_t 
msw_ts, + uint32_t lsw_ts, uint32_t flags) +{ + int rc = 0; + struct asm_stream_cmd_write write; + struct audio_port_data *port; + struct audio_buffer *ab; + int dsp_buf = 0; + + if (!ac || ac->apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s: session[%d] len=%d", __func__, ac->session, len); + if (ac->io_mode & SYNC_IO_MODE) { + port = &ac->port[IN]; + + q6asm_add_hdr(ac, &write.hdr, sizeof(write), + FALSE); + mutex_lock(&port->lock); + + dsp_buf = port->dsp_buf; + ab = &port->buf[dsp_buf]; + + write.hdr.token = port->dsp_buf; + write.hdr.opcode = ASM_DATA_CMD_WRITE; + write.buf_add = ab->phys; + write.avail_bytes = len; + write.uid = port->dsp_buf; + write.msw_ts = msw_ts; + write.lsw_ts = lsw_ts; + /* Use 0xFF00 for disabling timestamps */ + if (flags == 0xFF00) + write.uflags = (0x00000000 | (flags & 0x800000FF)); + else + write.uflags = (0x80000000 | flags); + port->dsp_buf = (port->dsp_buf + 1) & (port->max_buf_cnt - 1); + + pr_debug("%s:ab->phys[0x%x]bufadd[0x%x]token[0x%x]buf_id[0x%x]" + , __func__, + ab->phys, + write.buf_add, + write.hdr.token, + write.uid); + mutex_unlock(&port->lock); +#ifdef CONFIG_DEBUG_FS + if (out_enable_flag) { + char zero_pattern[2] = {0x00, 0x00}; + /* If the first two bytes are non-zero and the last two + bytes are zero then it is the warm output pattern */ + if ((strncmp(((char *)ab->data), zero_pattern, 2)) && + (!strncmp(((char *)ab->data + 2), zero_pattern, 2))) { + do_gettimeofday(&out_warm_tv); + pr_debug("WARM:apr_send_pkt at %ld sec %ld microsec\n", + out_warm_tv.tv_sec,\ + out_warm_tv.tv_usec); + pr_debug("Warm Pattern Matched"); + } + /* If the first two bytes are zero and the last two bytes + are non-zero then it is the cont output pattern */ + else if ((!strncmp(((char *)ab->data), zero_pattern, 2)) + && (strncmp(((char *)ab->data + 2), zero_pattern, 2))) { + do_gettimeofday(&out_cont_tv); + pr_debug("CONT:apr_send_pkt at %ld sec %ld microsec\n", + out_cont_tv.tv_sec,\ + out_cont_tv.tv_usec); + pr_debug("Cont 
Pattern Matched"); + } + } +#endif + rc = apr_send_pkt(ac->apr, (uint32_t *) &write); + if (rc < 0) { + pr_err("write op[0x%x]rc[%d]\n", write.hdr.opcode, rc); + goto fail_cmd; + } + pr_debug("%s: WRITE SUCCESS\n", __func__); + return 0; + } +fail_cmd: + return -EINVAL; +} + +int q6asm_write_nolock(struct audio_client *ac, uint32_t len, uint32_t msw_ts, + uint32_t lsw_ts, uint32_t flags) +{ + int rc = 0; + struct asm_stream_cmd_write write; + struct audio_port_data *port; + struct audio_buffer *ab; + int dsp_buf = 0; + + if (!ac || ac->apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s: session[%d] len=%d", __func__, ac->session, len); + if (ac->io_mode & SYNC_IO_MODE) { + port = &ac->port[IN]; + + q6asm_add_hdr_async(ac, &write.hdr, sizeof(write), + FALSE); + + dsp_buf = port->dsp_buf; + ab = &port->buf[dsp_buf]; + + write.hdr.token = port->dsp_buf; + write.hdr.opcode = ASM_DATA_CMD_WRITE; + write.buf_add = ab->phys; + write.avail_bytes = len; + write.uid = port->dsp_buf; + write.msw_ts = msw_ts; + write.lsw_ts = lsw_ts; + /* Use 0xFF00 for disabling timestamps */ + if (flags == 0xFF00) + write.uflags = (0x00000000 | (flags & 0x800000FF)); + else + write.uflags = (0x80000000 | flags); + port->dsp_buf = (port->dsp_buf + 1) & (port->max_buf_cnt - 1); + + pr_debug("%s:ab->phys[0x%x]bufadd[0x%x]token[0x%x]buf_id[0x%x]" + , __func__, + ab->phys, + write.buf_add, + write.hdr.token, + write.uid); + + rc = apr_send_pkt(ac->apr, (uint32_t *) &write); + if (rc < 0) { + pr_err("write op[0x%x]rc[%d]\n", write.hdr.opcode, rc); + goto fail_cmd; + } + pr_debug("%s: WRITE SUCCESS\n", __func__); + return 0; + } +fail_cmd: + return -EINVAL; +} + +int q6asm_get_session_time(struct audio_client *ac, uint64_t *tstamp) +{ + struct apr_hdr hdr; + int rc; + + if (!ac || ac->apr == NULL || tstamp == NULL) { + pr_err("APR handle or tstamp NULL\n"); + return -EINVAL; + } + q6asm_add_hdr(ac, &hdr, sizeof(hdr), FALSE); + hdr.opcode = 
ASM_SESSION_CMD_GET_SESSION_TIME; + atomic_set(&ac->time_flag, 1); + + pr_debug("%s: session[%d]opcode[0x%x]\n", __func__, + ac->session, + hdr.opcode); + rc = apr_send_pkt(ac->apr, (uint32_t *) &hdr); + if (rc < 0) { + pr_err("Command 0x%x failed\n", hdr.opcode); + goto fail_cmd; + } + rc = wait_event_timeout(ac->time_wait, + (atomic_read(&ac->time_flag) == 0), 5*HZ); + if (!rc) { + pr_err("%s: timeout in getting session time from DSP\n", + __func__); + goto fail_cmd; + } + + *tstamp = ac->time_stamp; + return 0; + +fail_cmd: + return -EINVAL; +} + +int q6asm_cmd(struct audio_client *ac, int cmd) +{ + struct apr_hdr hdr; + int rc; + atomic_t *state; + int cnt = 0; + + if (!ac || ac->apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + q6asm_add_hdr(ac, &hdr, sizeof(hdr), TRUE); + switch (cmd) { + case CMD_PAUSE: + pr_debug("%s:CMD_PAUSE\n", __func__); + hdr.opcode = ASM_SESSION_CMD_PAUSE; + state = &ac->cmd_state; + break; + case CMD_FLUSH: + pr_debug("%s:CMD_FLUSH\n", __func__); + hdr.opcode = ASM_STREAM_CMD_FLUSH; + state = &ac->cmd_state; + break; + case CMD_OUT_FLUSH: + pr_debug("%s:CMD_OUT_FLUSH\n", __func__); + hdr.opcode = ASM_STREAM_CMD_FLUSH_READBUFS; + state = &ac->cmd_state; + break; + case CMD_EOS: + pr_debug("%s:CMD_EOS\n", __func__); + hdr.opcode = ASM_DATA_CMD_EOS; + atomic_set(&ac->cmd_state, 0); + state = &ac->cmd_state; + break; + case CMD_CLOSE: + pr_debug("%s:CMD_CLOSE\n", __func__); + hdr.opcode = ASM_STREAM_CMD_CLOSE; + atomic_set(&ac->cmd_close_state, 1); + state = &ac->cmd_close_state; + break; + default: + pr_err("Invalid format[%d]\n", cmd); + goto fail_cmd; + } + pr_debug("%s:session[%d]opcode[0x%x] ", __func__, + ac->session, + hdr.opcode); + rc = apr_send_pkt(ac->apr, (uint32_t *) &hdr); + if (rc < 0) { + pr_err("Command 0x%x failed\n", hdr.opcode); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, (atomic_read(state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. 
waited for response opcode[0x%x]\n", + hdr.opcode); + goto fail_cmd; + } + if (cmd == CMD_FLUSH) + q6asm_reset_buf_state(ac); + if (cmd == CMD_CLOSE) { + /* check if DSP returned all buffers */ + if (ac->port[IN].buf) { + for (cnt = 0; cnt < ac->port[IN].max_buf_cnt; + cnt++) { + if (ac->port[IN].buf[cnt].used == IN) { + pr_debug("Write Buf[%d] not returned\n", + cnt); + } + } + } + if (ac->port[OUT].buf) { + for (cnt = 0; cnt < ac->port[OUT].max_buf_cnt; cnt++) { + if (ac->port[OUT].buf[cnt].used == OUT) { + pr_debug("Read Buf[%d] not returned\n", + cnt); + } + } + } + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_cmd_nowait(struct audio_client *ac, int cmd) +{ + struct apr_hdr hdr; + int rc; + + if (!ac || ac->apr == NULL) { + pr_err("%s:APR handle NULL\n", __func__); + return -EINVAL; + } + q6asm_add_hdr_async(ac, &hdr, sizeof(hdr), TRUE); + switch (cmd) { + case CMD_PAUSE: + pr_debug("%s:CMD_PAUSE\n", __func__); + hdr.opcode = ASM_SESSION_CMD_PAUSE; + break; + case CMD_EOS: + pr_debug("%s:CMD_EOS\n", __func__); + hdr.opcode = ASM_DATA_CMD_EOS; + break; + default: + pr_err("%s:Invalid format[%d]\n", __func__, cmd); + goto fail_cmd; + } + pr_debug("%s:session[%d]opcode[0x%x] ", __func__, + ac->session, + hdr.opcode); + + rc = apr_send_pkt(ac->apr, (uint32_t *) &hdr); + if (rc < 0) { + pr_err("%s:Command 0x%x failed\n", __func__, hdr.opcode); + goto fail_cmd; + } + atomic_inc(&ac->nowait_cmd_cnt); + return 0; +fail_cmd: + return -EINVAL; +} + +static void q6asm_reset_buf_state(struct audio_client *ac) +{ + int cnt = 0; + int loopcnt = 0; + int used; + struct audio_port_data *port = NULL; + + if (ac->io_mode & SYNC_IO_MODE) { + used = (ac->io_mode & TUN_WRITE_IO_MODE ? 
1 : 0); + mutex_lock(&ac->cmd_lock); + for (loopcnt = 0; loopcnt <= OUT; loopcnt++) { + port = &ac->port[loopcnt]; + cnt = port->max_buf_cnt - 1; + port->dsp_buf = 0; + port->cpu_buf = 0; + while (cnt >= 0) { + if (!port->buf) + break; + port->buf[cnt].used = used; + cnt--; + } + } + mutex_unlock(&ac->cmd_lock); + } +} + +int q6asm_reg_tx_overflow(struct audio_client *ac, uint16_t enable) +{ + struct asm_stream_cmd_reg_tx_overflow_event tx_overflow; + int rc; + + if (!ac || ac->apr == NULL) { + pr_err("APR handle NULL\n"); + return -EINVAL; + } + pr_debug("%s:session[%d]enable[%d]\n", __func__, + ac->session, enable); + q6asm_add_hdr(ac, &tx_overflow.hdr, sizeof(tx_overflow), TRUE); + + tx_overflow.hdr.opcode = \ + ASM_SESSION_CMD_REGISTER_FOR_TX_OVERFLOW_EVENTS; + /* tx overflow event: enable */ + tx_overflow.enable = enable; + + rc = apr_send_pkt(ac->apr, (uint32_t *) &tx_overflow); + if (rc < 0) { + pr_err("tx overflow op[0x%x]rc[%d]\n", \ + tx_overflow.hdr.opcode, rc); + goto fail_cmd; + } + rc = wait_event_timeout(ac->cmd_wait, + (atomic_read(&ac->cmd_state) == 0), 5*HZ); + if (!rc) { + pr_err("timeout. 
waited for tx overflow\n"); + goto fail_cmd; + } + return 0; +fail_cmd: + return -EINVAL; +} + +int q6asm_get_apr_service_id(int session_id) +{ + pr_debug("%s\n", __func__); + + if (session_id < 0 || session_id > SESSION_MAX) { + pr_err("%s: invalid session_id = %d\n", __func__, session_id); + return -EINVAL; + } + + return ((struct apr_svc *)session[session_id]->apr)->id; +} + + +static int __init q6asm_init(void) +{ + pr_debug("%s\n", __func__); + init_waitqueue_head(&this_mmap.cmd_wait); + memset(session, 0, sizeof(session)); +#ifdef CONFIG_DEBUG_FS + out_buffer = kmalloc(OUT_BUFFER_SIZE, GFP_KERNEL); + out_dentry = debugfs_create_file("audio_out_latency_measurement_node",\ + 0664,\ + NULL, NULL, &audio_output_latency_debug_fops); + if (IS_ERR(out_dentry)) + pr_err("debugfs_create_file failed\n"); + in_buffer = kmalloc(IN_BUFFER_SIZE, GFP_KERNEL); + in_dentry = debugfs_create_file("audio_in_latency_measurement_node",\ + 0664,\ + NULL, NULL, &audio_input_latency_debug_fops); + if (IS_ERR(in_dentry)) + pr_err("debugfs_create_file failed\n"); +#endif + return 0; +} + +device_initcall(q6asm_init); diff --git a/sound/soc/msm/qdsp6/q6voice.c b/sound/soc/msm/qdsp6/q6voice.c new file mode 100644 index 000000000000..a2a37837a17e --- /dev/null +++ b/sound/soc/msm/qdsp6/q6voice.c @@ -0,0 +1,4258 @@ +/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#include <linux/slab.h> +#include <linux/kthread.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/uaccess.h> +#include <linux/wait.h> +#include <linux/mutex.h> +#include <linux/dma-mapping.h> + +#include <asm/mach-types.h> +#include <sound/qdsp6v2/audio_acdb.h> +#include <sound/qdsp6v2/rtac.h> + +#include "sound/apr_audio.h" +#include "sound/q6afe.h" + +#include "q6voice.h" + +#define TIMEOUT_MS 3000 + +#define CMD_STATUS_SUCCESS 0 +#define CMD_STATUS_FAIL 1 + +#define CAL_BUFFER_SIZE 4096 +#define NUM_CVP_CAL_BLOCKS 75 +#define NUM_CVS_CAL_BLOCKS 15 +#define CVP_CAL_SIZE (NUM_CVP_CAL_BLOCKS * CAL_BUFFER_SIZE) +#define CVS_CAL_SIZE (NUM_CVS_CAL_BLOCKS * CAL_BUFFER_SIZE) +#define VOICE_CAL_BUFFER_SIZE (CVP_CAL_SIZE + CVS_CAL_SIZE) +/* Total cal needed to support concurrent VOIP & VOLTE sessions */ +/* Due to memory map issue on Q6 separate memory has to be used */ +/* for VOIP & VOLTE */ +#define TOTAL_VOICE_CAL_SIZE (NUM_VOICE_CAL_BUFFERS * VOICE_CAL_BUFFER_SIZE) + +static struct common_data common; + +static int voice_send_enable_vocproc_cmd(struct voice_data *v); +static int voice_send_netid_timing_cmd(struct voice_data *v); +static int voice_send_attach_vocproc_cmd(struct voice_data *v); +static int voice_send_set_device_cmd(struct voice_data *v); +static int voice_send_disable_vocproc_cmd(struct voice_data *v); +static int voice_send_vol_index_cmd(struct voice_data *v); +static int voice_send_cvp_map_memory_cmd(struct voice_data *v); +static int voice_send_cvp_unmap_memory_cmd(struct voice_data *v); +static int voice_send_cvs_map_memory_cmd(struct voice_data *v); +static int voice_send_cvs_unmap_memory_cmd(struct voice_data *v); +static int voice_send_cvs_register_cal_cmd(struct voice_data *v); +static int voice_send_cvs_deregister_cal_cmd(struct voice_data *v); +static int voice_send_cvp_register_cal_cmd(struct voice_data *v); +static int voice_send_cvp_deregister_cal_cmd(struct voice_data *v); +static int 
voice_send_cvp_register_vol_cal_table_cmd(struct voice_data *v); +static int voice_send_cvp_deregister_vol_cal_table_cmd(struct voice_data *v); +static int voice_send_set_widevoice_enable_cmd(struct voice_data *v); +static int voice_send_set_pp_enable_cmd(struct voice_data *v, + uint32_t module_id, int enable); +static int voice_cvs_stop_playback(struct voice_data *v); +static int voice_cvs_start_playback(struct voice_data *v); +static int voice_cvs_start_record(struct voice_data *v, uint32_t rec_mode); +static int voice_cvs_stop_record(struct voice_data *v); + +static int32_t qdsp_mvm_callback(struct apr_client_data *data, void *priv); +static int32_t qdsp_cvs_callback(struct apr_client_data *data, void *priv); +static int32_t qdsp_cvp_callback(struct apr_client_data *data, void *priv); +static int voice_send_set_device_cmd_v2(struct voice_data *v); + +static u16 voice_get_mvm_handle(struct voice_data *v) +{ + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return 0; + } + + pr_debug("%s: mvm_handle %d\n", __func__, v->mvm_handle); + + return v->mvm_handle; +} + +static void voice_set_mvm_handle(struct voice_data *v, u16 mvm_handle) +{ + pr_debug("%s: mvm_handle %d\n", __func__, mvm_handle); + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return; + } + + v->mvm_handle = mvm_handle; +} + +static u16 voice_get_cvs_handle(struct voice_data *v) +{ + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return 0; + } + + pr_debug("%s: cvs_handle %d\n", __func__, v->cvs_handle); + + return v->cvs_handle; +} + +static void voice_set_cvs_handle(struct voice_data *v, u16 cvs_handle) +{ + pr_debug("%s: cvs_handle %d\n", __func__, cvs_handle); + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return; + } + + v->cvs_handle = cvs_handle; +} + +static u16 voice_get_cvp_handle(struct voice_data *v) +{ + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return 0; + } + + pr_debug("%s: cvp_handle %d\n", __func__, v->cvp_handle); + + 
return v->cvp_handle; +} + +static void voice_set_cvp_handle(struct voice_data *v, u16 cvp_handle) +{ + pr_debug("%s: cvp_handle %d\n", __func__, cvp_handle); + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return; + } + + v->cvp_handle = cvp_handle; +} + +uint16_t voc_get_session_id(char *name) +{ + u16 session_id = 0; + + if (name != NULL) { + if (!strncmp(name, "Voice session", 13)) + session_id = common.voice[VOC_PATH_PASSIVE].session_id; + else if (!strncmp(name, "VoLTE session", 13)) + session_id = + common.voice[VOC_PATH_VOLTE_PASSIVE].session_id; + else if (!strncmp(name, "Voice2 session", 14)) + session_id = + common.voice[VOC_PATH_VOICE2_PASSIVE].session_id; + else + session_id = common.voice[VOC_PATH_FULL].session_id; + + pr_debug("%s: %s has session id 0x%x\n", __func__, name, + session_id); + } + + return session_id; +} + +static struct voice_data *voice_get_session(u16 session_id) +{ + struct voice_data *v = NULL; + + if ((session_id >= SESSION_ID_BASE) && + (session_id < SESSION_ID_BASE + MAX_VOC_SESSIONS)) { + v = &common.voice[session_id - SESSION_ID_BASE]; + } + + pr_debug("%s: session_id 0x%x session handle 0x%x\n", + __func__, session_id, (unsigned int)v); + + return v; +} + +static bool is_voice_session(u16 session_id) +{ + return (session_id == common.voice[VOC_PATH_PASSIVE].session_id); +} + +static bool is_voip_session(u16 session_id) +{ + return (session_id == common.voice[VOC_PATH_FULL].session_id); +} + +static bool is_volte_session(u16 session_id) +{ + return (session_id == common.voice[VOC_PATH_VOLTE_PASSIVE].session_id); +} + +static bool is_voice2_session(u16 session_id) +{ + return (session_id == common.voice[VOC_PATH_VOICE2_PASSIVE].session_id); +} + +/* Only for memory allocated in the voice driver */ +/* which includes voip & volte */ +static int voice_get_cal_kernel_addr(int16_t session_id, int cal_type, + uint32_t *kvaddr) +{ + int i, result = 0; + pr_debug("%s\n", __func__); + + if (kvaddr == NULL) { + pr_err("%s: 
NULL pointer sent to function\n", __func__); + result = -EINVAL; + goto done; + } else if (is_voip_session(session_id)) { + i = VOIP_CAL; + } else if (is_volte_session(session_id)) { + i = VOLTE_CAL; + } else { + result = -EINVAL; + goto done; + } + + if (common.voice_cal[i].cal_data[cal_type].kvaddr == 0) { + pr_err("%s: NULL pointer for session_id %d, type %d, cal_type %d\n", + __func__, session_id, i, cal_type); + result = -EFAULT; + goto done; + } + + *kvaddr = common.voice_cal[i].cal_data[cal_type].kvaddr; +done: + return result; +} + +/* Only for memory allocated in the voice driver */ +/* which includes voip & volte */ +static int voice_get_cal_phys_addr(int16_t session_id, int cal_type, + uint32_t *paddr) +{ + int i, result = 0; + pr_debug("%s\n", __func__); + + if (paddr == NULL) { + pr_err("%s: NULL pointer sent to function\n", __func__); + result = -EINVAL; + goto done; + } else if (is_voip_session(session_id)) { + i = VOIP_CAL; + } else if (is_volte_session(session_id)) { + i = VOLTE_CAL; + } else { + result = -EINVAL; + goto done; + } + + if (common.voice_cal[i].cal_data[cal_type].paddr == 0) { + pr_err("%s: No addr for session_id %d, type %d, cal_type %d\n", + __func__, session_id, i, cal_type); + result = -EFAULT; + goto done; + } + + *paddr = common.voice_cal[i].cal_data[cal_type].paddr; +done: + return result; +} + +static int voice_apr_register(void) +{ + pr_debug("%s\n", __func__); + + mutex_lock(&common.common_lock); + + /* register callback to APR */ + if (common.apr_q6_mvm == NULL) { + pr_debug("%s: Start to register MVM callback\n", __func__); + + common.apr_q6_mvm = apr_register("ADSP", "MVM", + qdsp_mvm_callback, + 0xFFFFFFFF, &common); + + if (common.apr_q6_mvm == NULL) { + pr_err("%s: Unable to register MVM\n", __func__); + goto err; + } + } + + if (common.apr_q6_cvs == NULL) { + pr_debug("%s: Start to register CVS callback\n", __func__); + + common.apr_q6_cvs = apr_register("ADSP", "CVS", + qdsp_cvs_callback, + 0xFFFFFFFF, &common); + + 
if (common.apr_q6_cvs == NULL) { + pr_err("%s: Unable to register CVS\n", __func__); + goto err; + } + + rtac_set_voice_handle(RTAC_CVS, common.apr_q6_cvs); + } + + if (common.apr_q6_cvp == NULL) { + pr_debug("%s: Start to register CVP callback\n", __func__); + + common.apr_q6_cvp = apr_register("ADSP", "CVP", + qdsp_cvp_callback, + 0xFFFFFFFF, &common); + + if (common.apr_q6_cvp == NULL) { + pr_err("%s: Unable to register CVP\n", __func__); + goto err; + } + + rtac_set_voice_handle(RTAC_CVP, common.apr_q6_cvp); + } + + mutex_unlock(&common.common_lock); + + return 0; + +err: + if (common.apr_q6_cvs != NULL) { + apr_deregister(common.apr_q6_cvs); + common.apr_q6_cvs = NULL; + rtac_set_voice_handle(RTAC_CVS, NULL); + } + if (common.apr_q6_mvm != NULL) { + apr_deregister(common.apr_q6_mvm); + common.apr_q6_mvm = NULL; + } + + mutex_unlock(&common.common_lock); + + return -ENODEV; +} + +static int voice_send_dual_control_cmd(struct voice_data *v) +{ + int ret = 0; + struct mvm_modem_dual_control_session_cmd mvm_voice_ctl_cmd; + void *apr_mvm; + u16 mvm_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + if (!apr_mvm) { + pr_err("%s: apr_mvm is NULL.\n", __func__); + return -EINVAL; + } + pr_debug("%s: VoLTE/Voice2 command to MVM\n", __func__); + if (is_volte_session(v->session_id) || + is_voice2_session(v->session_id)) { + + mvm_handle = voice_get_mvm_handle(v); + mvm_voice_ctl_cmd.hdr.hdr_field = APR_HDR_FIELD( + APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_voice_ctl_cmd.hdr.pkt_size = APR_PKT_SIZE( + APR_HDR_SIZE, + sizeof(mvm_voice_ctl_cmd) - + APR_HDR_SIZE); + pr_debug("%s: send mvm Voice Ctl pkt size = %d\n", + __func__, mvm_voice_ctl_cmd.hdr.pkt_size); + mvm_voice_ctl_cmd.hdr.src_port = v->session_id; + mvm_voice_ctl_cmd.hdr.dest_port = mvm_handle; + mvm_voice_ctl_cmd.hdr.token = 0; + mvm_voice_ctl_cmd.hdr.opcode = + VSS_IMVM_CMD_SET_POLICY_DUAL_CONTROL; + 
mvm_voice_ctl_cmd.voice_ctl.enable_flag = true; + v->mvm_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_voice_ctl_cmd); + if (ret < 0) { + pr_err("%s: Error sending MVM Voice CTL CMD\n", + __func__); + ret = -EINVAL; + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + ret = -EINVAL; + goto fail; + } + } + ret = 0; +fail: + return ret; +} + + +static int voice_create_mvm_cvs_session(struct voice_data *v) +{ + int ret = 0; + struct mvm_create_ctl_session_cmd mvm_session_cmd; + struct cvs_create_passive_ctl_session_cmd cvs_session_cmd; + struct cvs_create_full_ctl_session_cmd cvs_full_ctl_cmd; + struct mvm_attach_stream_cmd attach_stream_cmd; + void *apr_mvm, *apr_cvs, *apr_cvp; + u16 mvm_handle, cvs_handle, cvp_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + apr_cvs = common.apr_q6_cvs; + apr_cvp = common.apr_q6_cvp; + + if (!apr_mvm || !apr_cvs || !apr_cvp) { + pr_err("%s: apr_mvm or apr_cvs or apr_cvp is NULL\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + cvs_handle = voice_get_cvs_handle(v); + cvp_handle = voice_get_cvp_handle(v); + + pr_debug("%s: mvm_hdl=%d, cvs_hdl=%d\n", __func__, + mvm_handle, cvs_handle); + /* send cmd to create mvm session and wait for response */ + + if (!mvm_handle) { + if (is_voice_session(v->session_id) || + is_volte_session(v->session_id) || + is_voice2_session(v->session_id)) { + mvm_session_cmd.hdr.hdr_field = APR_HDR_FIELD( + APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_session_cmd.hdr.pkt_size = APR_PKT_SIZE( + APR_HDR_SIZE, + sizeof(mvm_session_cmd) - + APR_HDR_SIZE); + pr_debug("%s: send mvm create session pkt size = %d\n", + __func__, mvm_session_cmd.hdr.pkt_size); + mvm_session_cmd.hdr.src_port = v->session_id; + 
+			mvm_session_cmd.hdr.dest_port = 0;
+			mvm_session_cmd.hdr.token = 0;
+			mvm_session_cmd.hdr.opcode =
+				VSS_IMVM_CMD_CREATE_PASSIVE_CONTROL_SESSION;
+			if (is_volte_session(v->session_id)) {
+				strlcpy(mvm_session_cmd.mvm_session.name,
+					"default volte voice",
+					sizeof(mvm_session_cmd.mvm_session.name));
+			} else if (is_voice2_session(v->session_id)) {
+				strlcpy(mvm_session_cmd.mvm_session.name,
+					"default modem voice2",
+					sizeof(mvm_session_cmd.mvm_session.name));
+			} else {
+				strlcpy(mvm_session_cmd.mvm_session.name,
+					"default modem voice",
+					sizeof(mvm_session_cmd.mvm_session.name));
+			}
+
+			v->mvm_state = CMD_STATUS_FAIL;
+
+			ret = apr_send_pkt(apr_mvm,
+					(uint32_t *) &mvm_session_cmd);
+			if (ret < 0) {
+				pr_err("%s: Error sending MVM_CONTROL_SESSION\n",
+				       __func__);
+				goto fail;
+			}
+			ret = wait_event_timeout(v->mvm_wait,
+					(v->mvm_state == CMD_STATUS_SUCCESS),
+					msecs_to_jiffies(TIMEOUT_MS));
+			if (!ret) {
+				pr_err("%s: wait_event timeout\n", __func__);
+				goto fail;
+			}
+		} else {
+			pr_debug("%s: creating MVM full ctrl\n", __func__);
+			mvm_session_cmd.hdr.hdr_field =
+					APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+					APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+			mvm_session_cmd.hdr.pkt_size =
+					APR_PKT_SIZE(APR_HDR_SIZE,
+					sizeof(mvm_session_cmd) -
+					APR_HDR_SIZE);
+			mvm_session_cmd.hdr.src_port = v->session_id;
+			mvm_session_cmd.hdr.dest_port = 0;
+			mvm_session_cmd.hdr.token = 0;
+			mvm_session_cmd.hdr.opcode =
+				VSS_IMVM_CMD_CREATE_FULL_CONTROL_SESSION;
+			strlcpy(mvm_session_cmd.mvm_session.name,
+				"default voip",
+				sizeof(mvm_session_cmd.mvm_session.name));
+
+			v->mvm_state = CMD_STATUS_FAIL;
+
+			ret = apr_send_pkt(apr_mvm,
+					(uint32_t *) &mvm_session_cmd);
+			if (ret < 0) {
+				pr_err("Fail in sending MVM_CONTROL_SESSION\n");
+				goto fail;
+			}
+			ret = wait_event_timeout(v->mvm_wait,
+					(v->mvm_state == CMD_STATUS_SUCCESS),
+					msecs_to_jiffies(TIMEOUT_MS));
+			if (!ret) {
+				pr_err("%s: wait_event timeout\n", __func__);
+				goto fail;
+			}
+		}
+		/* Get the created MVM handle.
+	 */
+		mvm_handle = voice_get_mvm_handle(v);
+	}
+	/* send cmd to create cvs session */
+	if (!cvs_handle) {
+		if (is_voice_session(v->session_id) ||
+		    is_volte_session(v->session_id) ||
+		    is_voice2_session(v->session_id)) {
+			pr_debug("%s: creating CVS passive session\n",
+				 __func__);
+
+			cvs_session_cmd.hdr.hdr_field = APR_HDR_FIELD(
+						APR_MSG_TYPE_SEQ_CMD,
+						APR_HDR_LEN(APR_HDR_SIZE),
+						APR_PKT_VER);
+			cvs_session_cmd.hdr.pkt_size =
+					APR_PKT_SIZE(APR_HDR_SIZE,
+					sizeof(cvs_session_cmd) -
+					APR_HDR_SIZE);
+			cvs_session_cmd.hdr.src_port = v->session_id;
+			cvs_session_cmd.hdr.dest_port = 0;
+			cvs_session_cmd.hdr.token = 0;
+			cvs_session_cmd.hdr.opcode =
+				VSS_ISTREAM_CMD_CREATE_PASSIVE_CONTROL_SESSION;
+			if (is_volte_session(v->session_id)) {
+				strlcpy(cvs_session_cmd.cvs_session.name,
+					"default volte voice",
+					sizeof(cvs_session_cmd.cvs_session.name));
+			} else if (is_voice2_session(v->session_id)) {
+				strlcpy(cvs_session_cmd.cvs_session.name,
+					"default modem voice2",
+					sizeof(cvs_session_cmd.cvs_session.name));
+			} else {
+				strlcpy(cvs_session_cmd.cvs_session.name,
+					"default modem voice",
+					sizeof(cvs_session_cmd.cvs_session.name));
+			}
+			v->cvs_state = CMD_STATUS_FAIL;
+
+			ret = apr_send_pkt(apr_cvs,
+					(uint32_t *) &cvs_session_cmd);
+			if (ret < 0) {
+				pr_err("Fail in sending STREAM_CONTROL_SESSION\n");
+				goto fail;
+			}
+			ret = wait_event_timeout(v->cvs_wait,
+					(v->cvs_state == CMD_STATUS_SUCCESS),
+					msecs_to_jiffies(TIMEOUT_MS));
+			if (!ret) {
+				pr_err("%s: wait_event timeout\n", __func__);
+				goto fail;
+			}
+			/* Get the created CVS handle.
*/ + cvs_handle = voice_get_cvs_handle(v); + + } else { + pr_debug("%s: creating CVS full session\n", __func__); + + cvs_full_ctl_cmd.hdr.hdr_field = + APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + + cvs_full_ctl_cmd.hdr.pkt_size = + APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_full_ctl_cmd) - + APR_HDR_SIZE); + + cvs_full_ctl_cmd.hdr.src_port = v->session_id; + cvs_full_ctl_cmd.hdr.dest_port = 0; + cvs_full_ctl_cmd.hdr.token = 0; + cvs_full_ctl_cmd.hdr.opcode = + VSS_ISTREAM_CMD_CREATE_FULL_CONTROL_SESSION; + cvs_full_ctl_cmd.cvs_session.direction = 2; + cvs_full_ctl_cmd.cvs_session.enc_media_type = + common.mvs_info.media_type; + cvs_full_ctl_cmd.cvs_session.dec_media_type = + common.mvs_info.media_type; + cvs_full_ctl_cmd.cvs_session.network_id = + common.mvs_info.network_type; + strlcpy(cvs_full_ctl_cmd.cvs_session.name, + "default q6 voice", + sizeof(cvs_full_ctl_cmd.cvs_session.name)); + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, + (uint32_t *) &cvs_full_ctl_cmd); + + if (ret < 0) { + pr_err("%s: Err %d sending CREATE_FULL_CTRL\n", + __func__, ret); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + /* Get the created CVS handle. */ + cvs_handle = voice_get_cvs_handle(v); + + /* Attach MVM to CVS. 
*/ + pr_debug("%s: Attach MVM to stream\n", __func__); + + attach_stream_cmd.hdr.hdr_field = + APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + attach_stream_cmd.hdr.pkt_size = + APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(attach_stream_cmd) - + APR_HDR_SIZE); + attach_stream_cmd.hdr.src_port = v->session_id; + attach_stream_cmd.hdr.dest_port = mvm_handle; + attach_stream_cmd.hdr.token = 0; + attach_stream_cmd.hdr.opcode = + VSS_IMVM_CMD_ATTACH_STREAM; + attach_stream_cmd.attach_stream.handle = cvs_handle; + + v->mvm_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_mvm, + (uint32_t *) &attach_stream_cmd); + if (ret < 0) { + pr_err("%s: Error %d sending ATTACH_STREAM\n", + __func__, ret); + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + } + } + return 0; + +fail: + return -EINVAL; +} + +static int voice_destroy_mvm_cvs_session(struct voice_data *v) +{ + int ret = 0; + struct mvm_detach_stream_cmd detach_stream; + struct apr_hdr mvm_destroy; + struct apr_hdr cvs_destroy; + void *apr_mvm, *apr_cvs; + u16 mvm_handle, cvs_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + apr_cvs = common.apr_q6_cvs; + + if (!apr_mvm || !apr_cvs) { + pr_err("%s: apr_mvm or apr_cvs is NULL\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + cvs_handle = voice_get_cvs_handle(v); + + /* MVM, CVS sessions are destroyed only for Full control sessions. */ + if (is_voip_session(v->session_id)) { + pr_debug("%s: MVM detach stream\n", __func__); + + /* Detach voice stream. 
*/ + detach_stream.hdr.hdr_field = + APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + detach_stream.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(detach_stream) - APR_HDR_SIZE); + detach_stream.hdr.src_port = v->session_id; + detach_stream.hdr.dest_port = mvm_handle; + detach_stream.hdr.token = 0; + detach_stream.hdr.opcode = VSS_IMVM_CMD_DETACH_STREAM; + detach_stream.detach_stream.handle = cvs_handle; + + v->mvm_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_mvm, (uint32_t *) &detach_stream); + if (ret < 0) { + pr_err("%s: Error %d sending DETACH_STREAM\n", + __func__, ret); + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait event timeout\n", __func__); + goto fail; + } + /* Destroy CVS. */ + pr_debug("%s: CVS destroy session\n", __func__); + + cvs_destroy.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_destroy.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_destroy) - APR_HDR_SIZE); + cvs_destroy.src_port = v->session_id; + cvs_destroy.dest_port = cvs_handle; + cvs_destroy.token = 0; + cvs_destroy.opcode = APRV2_IBASIC_CMD_DESTROY_SESSION; + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_destroy); + if (ret < 0) { + pr_err("%s: Error %d sending CVS DESTROY\n", + __func__, ret); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait event timeout\n", __func__); + + goto fail; + } + cvs_handle = 0; + voice_set_cvs_handle(v, cvs_handle); + + /* Destroy MVM. 
*/ + pr_debug("MVM destroy session\n"); + + mvm_destroy.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_destroy.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_destroy) - APR_HDR_SIZE); + mvm_destroy.src_port = v->session_id; + mvm_destroy.dest_port = mvm_handle; + mvm_destroy.token = 0; + mvm_destroy.opcode = APRV2_IBASIC_CMD_DESTROY_SESSION; + + v->mvm_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_destroy); + if (ret < 0) { + pr_err("%s: Error %d sending MVM DESTROY\n", + __func__, ret); + + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait event timeout\n", __func__); + + goto fail; + } + mvm_handle = 0; + voice_set_mvm_handle(v, mvm_handle); + } + return 0; +fail: + return -EINVAL; +} + +static int voice_send_tty_mode_cmd(struct voice_data *v) +{ + int ret = 0; + struct mvm_set_tty_mode_cmd mvm_tty_mode_cmd; + void *apr_mvm; + u16 mvm_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + + if (!apr_mvm) { + pr_err("%s: apr_mvm is NULL.\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + + if (v->tty_mode) { + /* send tty mode cmd to mvm */ + mvm_tty_mode_cmd.hdr.hdr_field = APR_HDR_FIELD( + APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_tty_mode_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_tty_mode_cmd) - + APR_HDR_SIZE); + pr_debug("%s: pkt size = %d\n", + __func__, mvm_tty_mode_cmd.hdr.pkt_size); + mvm_tty_mode_cmd.hdr.src_port = v->session_id; + mvm_tty_mode_cmd.hdr.dest_port = mvm_handle; + mvm_tty_mode_cmd.hdr.token = 0; + mvm_tty_mode_cmd.hdr.opcode = VSS_ISTREAM_CMD_SET_TTY_MODE; + mvm_tty_mode_cmd.tty_mode.mode = v->tty_mode; + pr_debug("tty mode =%d\n", mvm_tty_mode_cmd.tty_mode.mode); + + v->mvm_state = CMD_STATUS_FAIL; + ret = 
apr_send_pkt(apr_mvm, (uint32_t *) &mvm_tty_mode_cmd); + if (ret < 0) { + pr_err("%s: Error %d sending SET_TTY_MODE\n", + __func__, ret); + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + } + return 0; +fail: + return -EINVAL; +} + +static int voice_set_dtx(struct voice_data *v) +{ + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + struct cvs_set_enc_dtx_mode_cmd cvs_set_dtx; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + cvs_handle = voice_get_cvs_handle(v); + + /* Set DTX */ + cvs_set_dtx.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_set_dtx.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_set_dtx) - APR_HDR_SIZE); + cvs_set_dtx.hdr.src_port = v->session_id; + cvs_set_dtx.hdr.dest_port = cvs_handle; + cvs_set_dtx.hdr.token = 0; + cvs_set_dtx.hdr.opcode = VSS_ISTREAM_CMD_SET_ENC_DTX_MODE; + cvs_set_dtx.dtx_mode.enable = common.mvs_info.dtx_mode; + + pr_debug("%s: Setting DTX %d\n", __func__, common.mvs_info.dtx_mode); + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_set_dtx); + if (ret < 0) { + pr_err("%s: Error %d sending SET_DTX\n", __func__, ret); + return -EINVAL; + } + + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + return -EINVAL; + } + + return 0; +} + +static int voice_config_cvs_vocoder(struct voice_data *v) +{ + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + /* Set media type. 
*/ + struct cvs_set_media_type_cmd cvs_set_media_cmd; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + cvs_handle = voice_get_cvs_handle(v); + + cvs_set_media_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_set_media_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_set_media_cmd) - APR_HDR_SIZE); + cvs_set_media_cmd.hdr.src_port = v->session_id; + cvs_set_media_cmd.hdr.dest_port = cvs_handle; + cvs_set_media_cmd.hdr.token = 0; + cvs_set_media_cmd.hdr.opcode = VSS_ISTREAM_CMD_SET_MEDIA_TYPE; + cvs_set_media_cmd.media_type.tx_media_id = common.mvs_info.media_type; + cvs_set_media_cmd.media_type.rx_media_id = common.mvs_info.media_type; + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_set_media_cmd); + if (ret < 0) { + pr_err("%s: Error %d sending SET_MEDIA_TYPE\n", + __func__, ret); + + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + + goto fail; + } + /* Set encoder properties. 
*/ + switch (common.mvs_info.media_type) { + case VSS_MEDIA_ID_EVRC_MODEM: { + struct cvs_set_cdma_enc_minmax_rate_cmd cvs_set_cdma_rate; + + pr_debug("Setting EVRC min-max rate\n"); + + cvs_set_cdma_rate.hdr.hdr_field = + APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_set_cdma_rate.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_set_cdma_rate) - APR_HDR_SIZE); + cvs_set_cdma_rate.hdr.src_port = v->session_id; + cvs_set_cdma_rate.hdr.dest_port = cvs_handle; + cvs_set_cdma_rate.hdr.token = 0; + cvs_set_cdma_rate.hdr.opcode = + VSS_ISTREAM_CMD_CDMA_SET_ENC_MINMAX_RATE; + cvs_set_cdma_rate.cdma_rate.min_rate = common.mvs_info.rate; + cvs_set_cdma_rate.cdma_rate.max_rate = common.mvs_info.rate; + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_set_cdma_rate); + if (ret < 0) { + pr_err("%s: Error %d sending SET_EVRC_MINMAX_RATE\n", + __func__, ret); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + + goto fail; + } + break; + } + case VSS_MEDIA_ID_AMR_NB_MODEM: { + struct cvs_set_amr_enc_rate_cmd cvs_set_amr_rate; + + pr_debug("Setting AMR rate\n"); + + cvs_set_amr_rate.hdr.hdr_field = + APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_set_amr_rate.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_set_amr_rate) - APR_HDR_SIZE); + cvs_set_amr_rate.hdr.src_port = v->session_id; + cvs_set_amr_rate.hdr.dest_port = cvs_handle; + cvs_set_amr_rate.hdr.token = 0; + cvs_set_amr_rate.hdr.opcode = + VSS_ISTREAM_CMD_VOC_AMR_SET_ENC_RATE; + cvs_set_amr_rate.amr_rate.mode = common.mvs_info.rate; + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_set_amr_rate); + if (ret < 0) { + pr_err("%s: Error %d sending SET_AMR_RATE\n", + __func__, ret); + goto fail; + } + ret = 
wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + ret = voice_set_dtx(v); + if (ret < 0) + goto fail; + + break; + } + case VSS_MEDIA_ID_AMR_WB_MODEM: { + struct cvs_set_amrwb_enc_rate_cmd cvs_set_amrwb_rate; + + pr_debug("Setting AMR WB rate\n"); + + cvs_set_amrwb_rate.hdr.hdr_field = + APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_set_amrwb_rate.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_set_amrwb_rate) - + APR_HDR_SIZE); + cvs_set_amrwb_rate.hdr.src_port = v->session_id; + cvs_set_amrwb_rate.hdr.dest_port = cvs_handle; + cvs_set_amrwb_rate.hdr.token = 0; + cvs_set_amrwb_rate.hdr.opcode = + VSS_ISTREAM_CMD_VOC_AMRWB_SET_ENC_RATE; + cvs_set_amrwb_rate.amrwb_rate.mode = common.mvs_info.rate; + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_set_amrwb_rate); + if (ret < 0) { + pr_err("%s: Error %d sending SET_AMRWB_RATE\n", + __func__, ret); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + ret = voice_set_dtx(v); + if (ret < 0) + goto fail; + + break; + } + case VSS_MEDIA_ID_G729: + case VSS_MEDIA_ID_G711_ALAW: + case VSS_MEDIA_ID_G711_MULAW: { + ret = voice_set_dtx(v); + + break; + } + default: + /* Do nothing. 
+	 */
+		break;
+	}
+	return 0;
+
+fail:
+	return -EINVAL;
+}
+
+static int voice_send_start_voice_cmd(struct voice_data *v)
+{
+	struct apr_hdr mvm_start_voice_cmd;
+	int ret = 0;
+	void *apr_mvm;
+	u16 mvm_handle;
+
+	if (v == NULL) {
+		pr_err("%s: v is NULL\n", __func__);
+		return -EINVAL;
+	}
+	apr_mvm = common.apr_q6_mvm;
+
+	if (!apr_mvm) {
+		pr_err("%s: apr_mvm is NULL.\n", __func__);
+		return -EINVAL;
+	}
+	mvm_handle = voice_get_mvm_handle(v);
+
+	mvm_start_voice_cmd.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+						APR_HDR_LEN(APR_HDR_SIZE),
+						APR_PKT_VER);
+	mvm_start_voice_cmd.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE,
+				sizeof(mvm_start_voice_cmd) - APR_HDR_SIZE);
+	pr_debug("send mvm_start_voice_cmd pkt size = %d\n",
+		 mvm_start_voice_cmd.pkt_size);
+	mvm_start_voice_cmd.src_port = v->session_id;
+	mvm_start_voice_cmd.dest_port = mvm_handle;
+	mvm_start_voice_cmd.token = 0;
+	mvm_start_voice_cmd.opcode = VSS_IMVM_CMD_START_VOICE;
+
+	v->mvm_state = CMD_STATUS_FAIL;
+	ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_start_voice_cmd);
+	if (ret < 0) {
+		pr_err("Fail in sending VSS_IMVM_CMD_START_VOICE\n");
+		goto fail;
+	}
+	ret = wait_event_timeout(v->mvm_wait,
+				 (v->mvm_state == CMD_STATUS_SUCCESS),
+				 msecs_to_jiffies(TIMEOUT_MS));
+	if (!ret) {
+		pr_err("%s: wait_event timeout\n", __func__);
+		goto fail;
+	}
+	return 0;
+fail:
+	return -EINVAL;
+}
+
+static int voice_send_disable_vocproc_cmd(struct voice_data *v)
+{
+	struct apr_hdr cvp_disable_cmd;
+	int ret = 0;
+	void *apr_cvp;
+	u16 cvp_handle;
+
+	if (v == NULL) {
+		pr_err("%s: v is NULL\n", __func__);
+		return -EINVAL;
+	}
+	apr_cvp = common.apr_q6_cvp;
+
+	if (!apr_cvp) {
+		pr_err("%s: apr_cvp is NULL.\n", __func__);
+		return -EINVAL;
+	}
+	cvp_handle = voice_get_cvp_handle(v);
+
+	/* disable vocproc and wait for response */
+	cvp_disable_cmd.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+					APR_HDR_LEN(APR_HDR_SIZE),
+					APR_PKT_VER);
+	cvp_disable_cmd.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE,
sizeof(cvp_disable_cmd) - APR_HDR_SIZE); + pr_debug("cvp_disable_cmd pkt size = %d, cvp_handle=%d\n", + cvp_disable_cmd.pkt_size, cvp_handle); + cvp_disable_cmd.src_port = v->session_id; + cvp_disable_cmd.dest_port = cvp_handle; + cvp_disable_cmd.token = 0; + cvp_disable_cmd.opcode = VSS_IVOCPROC_CMD_DISABLE; + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_disable_cmd); + if (ret < 0) { + pr_err("Fail in sending VSS_IVOCPROC_CMD_DISABLE\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + return 0; +fail: + return -EINVAL; +} + +static int voice_send_set_device_cmd(struct voice_data *v) +{ + struct cvp_set_device_cmd cvp_setdev_cmd; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + cvp_handle = voice_get_cvp_handle(v); + + /* set device and wait for response */ + cvp_setdev_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvp_setdev_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_setdev_cmd) - APR_HDR_SIZE); + pr_debug("send create cvp setdev, pkt size = %d\n", + cvp_setdev_cmd.hdr.pkt_size); + cvp_setdev_cmd.hdr.src_port = v->session_id; + cvp_setdev_cmd.hdr.dest_port = cvp_handle; + cvp_setdev_cmd.hdr.token = 0; + cvp_setdev_cmd.hdr.opcode = VSS_IVOCPROC_CMD_SET_DEVICE; + + /* Use default topology if invalid value in ACDB */ + cvp_setdev_cmd.cvp_set_device.tx_topology_id = + get_voice_tx_topology(); + if (cvp_setdev_cmd.cvp_set_device.tx_topology_id == 0) + cvp_setdev_cmd.cvp_set_device.tx_topology_id = + VSS_IVOCPROC_TOPOLOGY_ID_TX_SM_ECNS; + + cvp_setdev_cmd.cvp_set_device.rx_topology_id = + 
get_voice_rx_topology(); + if (cvp_setdev_cmd.cvp_set_device.rx_topology_id == 0) + cvp_setdev_cmd.cvp_set_device.rx_topology_id = + VSS_IVOCPROC_TOPOLOGY_ID_RX_DEFAULT; + cvp_setdev_cmd.cvp_set_device.tx_port_id = v->dev_tx.port_id; + cvp_setdev_cmd.cvp_set_device.rx_port_id = v->dev_rx.port_id; + pr_debug("topology=%d , tx_port_id=%d, rx_port_id=%d\n", + cvp_setdev_cmd.cvp_set_device.tx_topology_id, + cvp_setdev_cmd.cvp_set_device.tx_port_id, + cvp_setdev_cmd.cvp_set_device.rx_port_id); + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_setdev_cmd); + if (ret < 0) { + pr_err("Fail in sending VOCPROC_FULL_CONTROL_SESSION\n"); + goto fail; + } + pr_debug("wait for cvp create session event\n"); + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + return 0; +fail: + return -EINVAL; +} + +static int voice_send_set_device_cmd_v2(struct voice_data *v) +{ + struct cvp_set_device_cmd_v2 cvp_setdev_cmd_v2; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + + return -EINVAL; + } + cvp_handle = voice_get_cvp_handle(v); + + /* set device and wait for response */ + cvp_setdev_cmd_v2.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvp_setdev_cmd_v2.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_setdev_cmd_v2) - APR_HDR_SIZE); + cvp_setdev_cmd_v2.hdr.src_port = v->session_id; + cvp_setdev_cmd_v2.hdr.dest_port = cvp_handle; + cvp_setdev_cmd_v2.hdr.token = 0; + cvp_setdev_cmd_v2.hdr.opcode = VSS_IVOCPROC_CMD_SET_DEVICE_V2; + + /* Use default topology if invalid value in ACDB */ + cvp_setdev_cmd_v2.cvp_set_device_v2.tx_topology_id = + get_voice_tx_topology(); + if 
(cvp_setdev_cmd_v2.cvp_set_device_v2.tx_topology_id == 0) + cvp_setdev_cmd_v2.cvp_set_device_v2.tx_topology_id = + VSS_IVOCPROC_TOPOLOGY_ID_TX_SM_ECNS; + + cvp_setdev_cmd_v2.cvp_set_device_v2.rx_topology_id = + get_voice_rx_topology(); + if (cvp_setdev_cmd_v2.cvp_set_device_v2.rx_topology_id == 0) + cvp_setdev_cmd_v2.cvp_set_device_v2.rx_topology_id = + VSS_IVOCPROC_TOPOLOGY_ID_RX_DEFAULT; + cvp_setdev_cmd_v2.cvp_set_device_v2.tx_port_id = v->dev_tx.port_id; + cvp_setdev_cmd_v2.cvp_set_device_v2.rx_port_id = v->dev_rx.port_id; + + if (common.ec_ref_ext == true) { + cvp_setdev_cmd_v2.cvp_set_device_v2.vocproc_mode = + VSS_IVOCPROC_VOCPROC_MODE_EC_EXT_MIXING; + cvp_setdev_cmd_v2.cvp_set_device_v2.ec_ref_port_id = + common.ec_port_id; + } else { + cvp_setdev_cmd_v2.cvp_set_device_v2.vocproc_mode = + VSS_IVOCPROC_VOCPROC_MODE_EC_INT_MIXING; + cvp_setdev_cmd_v2.cvp_set_device_v2.ec_ref_port_id = + VSS_IVOCPROC_PORT_ID_NONE; + } + pr_debug("%s:topology=%d , tx_port_id=%d, rx_port_id=%d\n" + "ec_ref_port_id = %x\n", __func__, + cvp_setdev_cmd_v2.cvp_set_device_v2.tx_topology_id, + cvp_setdev_cmd_v2.cvp_set_device_v2.tx_port_id, + cvp_setdev_cmd_v2.cvp_set_device_v2.rx_port_id, + cvp_setdev_cmd_v2.cvp_set_device_v2.ec_ref_port_id); + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_setdev_cmd_v2); + if (ret < 0) { + pr_err("Fail in sending VSS_IVOCPROC_CMD_SET_DEVICE_V2\n"); + goto fail; + } + pr_debug("wait for cvp create session event\n"); + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + return 0; +fail: + return -EINVAL; +} + +static int voice_send_stop_voice_cmd(struct voice_data *v) +{ + struct apr_hdr mvm_stop_voice_cmd; + int ret = 0; + void *apr_mvm; + u16 mvm_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + + if 
(!apr_mvm) { + pr_err("%s: apr_mvm is NULL.\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + + mvm_stop_voice_cmd.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_stop_voice_cmd.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_stop_voice_cmd) - APR_HDR_SIZE); + pr_debug("send mvm_stop_voice_cmd pkt size = %d\n", + mvm_stop_voice_cmd.pkt_size); + mvm_stop_voice_cmd.src_port = v->session_id; + mvm_stop_voice_cmd.dest_port = mvm_handle; + mvm_stop_voice_cmd.token = 0; + mvm_stop_voice_cmd.opcode = VSS_IMVM_CMD_STOP_VOICE; + + v->mvm_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_stop_voice_cmd); + if (ret < 0) { + pr_err("Fail in sending VSS_IMVM_CMD_STOP_VOICE\n"); + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + return 0; +fail: + return -EINVAL; +} + +static int voice_send_cvs_register_cal_cmd(struct voice_data *v) +{ + struct cvs_register_cal_data_cmd cvs_reg_cal_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + uint32_t cal_paddr = 0; + uint32_t cal_buf = 0; + + /* get the cvs cal data */ + get_all_vocstrm_cal(&cal_block); + if (cal_block.cal_size == 0) + goto fail; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + if (is_volte_session(v->session_id) || + is_voip_session(v->session_id)) { + ret = voice_get_cal_phys_addr(v->session_id, CVS_CAL, + &cal_paddr); + if (ret < 0) + return ret; + + ret = voice_get_cal_kernel_addr(v->session_id, CVS_CAL, + &cal_buf); + if (ret < 0) + return ret; + + memcpy((void *)cal_buf, (void *)cal_block.cal_kvaddr, + cal_block.cal_size); + } else { + cal_paddr = 
cal_block.cal_paddr; + } + + cvs_handle = voice_get_cvs_handle(v); + + /* fill in the header */ + cvs_reg_cal_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvs_reg_cal_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_reg_cal_cmd) - APR_HDR_SIZE); + cvs_reg_cal_cmd.hdr.src_port = v->session_id; + cvs_reg_cal_cmd.hdr.dest_port = cvs_handle; + cvs_reg_cal_cmd.hdr.token = 0; + cvs_reg_cal_cmd.hdr.opcode = VSS_ISTREAM_CMD_REGISTER_CALIBRATION_DATA; + + cvs_reg_cal_cmd.cvs_cal_data.phys_addr = cal_paddr; + cvs_reg_cal_cmd.cvs_cal_data.mem_size = cal_block.cal_size; + + v->cvs_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_reg_cal_cmd); + if (ret < 0) { + pr_err("Fail: sending cvs cal,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_cvs_deregister_cal_cmd(struct voice_data *v) +{ + struct cvs_deregister_cal_data_cmd cvs_dereg_cal_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + + get_all_vocstrm_cal(&cal_block); + if (cal_block.cal_size == 0) + return 0; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + cvs_handle = voice_get_cvs_handle(v); + + /* fill in the header */ + cvs_dereg_cal_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvs_dereg_cal_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_dereg_cal_cmd) - APR_HDR_SIZE); + cvs_dereg_cal_cmd.hdr.src_port = v->session_id; + cvs_dereg_cal_cmd.hdr.dest_port = cvs_handle; + cvs_dereg_cal_cmd.hdr.token = 0; + cvs_dereg_cal_cmd.hdr.opcode = + 
VSS_ISTREAM_CMD_DEREGISTER_CALIBRATION_DATA; + + v->cvs_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_dereg_cal_cmd); + if (ret < 0) { + pr_err("Fail: sending cvs cal,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_cvp_map_memory_cmd(struct voice_data *v) +{ + struct vss_map_memory_cmd cvp_map_mem_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + uint32_t cal_paddr = 0; + + /* get all cvp cal data */ + get_all_cvp_cal(&cal_block); + if (cal_block.cal_size == 0) + goto fail; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + + if (is_volte_session(v->session_id) || + is_voip_session(v->session_id)) { + ret = voice_get_cal_phys_addr(v->session_id, CVP_CAL, + &cal_paddr); + if (ret < 0) + return ret; + } else { + cal_paddr = cal_block.cal_paddr; + } + + cvp_handle = voice_get_cvp_handle(v); + + /* fill in the header */ + cvp_map_mem_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvp_map_mem_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_map_mem_cmd) - APR_HDR_SIZE); + cvp_map_mem_cmd.hdr.src_port = v->session_id; + cvp_map_mem_cmd.hdr.dest_port = cvp_handle; + cvp_map_mem_cmd.hdr.token = 0; + cvp_map_mem_cmd.hdr.opcode = VSS_ICOMMON_CMD_MAP_MEMORY; + + pr_debug("%s, phys_addr: 0x%x, mem_size: %d\n", __func__, + cal_paddr, cal_block.cal_size); + cvp_map_mem_cmd.vss_map_mem.phys_addr = cal_paddr; + cvp_map_mem_cmd.vss_map_mem.mem_size = cal_block.cal_size; + cvp_map_mem_cmd.vss_map_mem.mem_pool_id = + VSS_ICOMMON_MAP_MEMORY_SHMEM8_4K_POOL; + + v->cvp_state = 
CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_map_mem_cmd); + if (ret < 0) { + pr_err("Fail: sending cvp cal,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_cvp_unmap_memory_cmd(struct voice_data *v) +{ + struct vss_unmap_memory_cmd cvp_unmap_mem_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + uint32_t cal_paddr = 0; + + get_all_cvp_cal(&cal_block); + if (cal_block.cal_size == 0) + return 0; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + + if (is_volte_session(v->session_id) || + is_voip_session(v->session_id)) { + ret = voice_get_cal_phys_addr(v->session_id, CVP_CAL, + &cal_paddr); + if (ret < 0) + return ret; + } else { + cal_paddr = cal_block.cal_paddr; + } + + cvp_handle = voice_get_cvp_handle(v); + + /* fill in the header */ + cvp_unmap_mem_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvp_unmap_mem_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_unmap_mem_cmd) - APR_HDR_SIZE); + cvp_unmap_mem_cmd.hdr.src_port = v->session_id; + cvp_unmap_mem_cmd.hdr.dest_port = cvp_handle; + cvp_unmap_mem_cmd.hdr.token = 0; + cvp_unmap_mem_cmd.hdr.opcode = VSS_ICOMMON_CMD_UNMAP_MEMORY; + + cvp_unmap_mem_cmd.vss_unmap_mem.phys_addr = cal_paddr; + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_unmap_mem_cmd); + if (ret < 0) { + pr_err("Fail: sending cvp cal,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event 
timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_cvs_map_memory_cmd(struct voice_data *v) +{ + struct vss_map_memory_cmd cvs_map_mem_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + uint32_t cal_paddr = 0; + + /* get all cvs cal data */ + get_all_vocstrm_cal(&cal_block); + if (cal_block.cal_size == 0) + goto fail; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + if (is_volte_session(v->session_id) || + is_voip_session(v->session_id)) { + ret = voice_get_cal_phys_addr(v->session_id, CVS_CAL, + &cal_paddr); + if (ret < 0) + return ret; + } else { + cal_paddr = cal_block.cal_paddr; + } + + cvs_handle = voice_get_cvs_handle(v); + + /* fill in the header */ + cvs_map_mem_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvs_map_mem_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_map_mem_cmd) - APR_HDR_SIZE); + cvs_map_mem_cmd.hdr.src_port = v->session_id; + cvs_map_mem_cmd.hdr.dest_port = cvs_handle; + cvs_map_mem_cmd.hdr.token = 0; + cvs_map_mem_cmd.hdr.opcode = VSS_ICOMMON_CMD_MAP_MEMORY; + + pr_debug("%s, phys_addr: 0x%x, mem_size: %d\n", __func__, + cal_paddr, cal_block.cal_size); + cvs_map_mem_cmd.vss_map_mem.phys_addr = cal_paddr; + cvs_map_mem_cmd.vss_map_mem.mem_size = cal_block.cal_size; + cvs_map_mem_cmd.vss_map_mem.mem_pool_id = + VSS_ICOMMON_MAP_MEMORY_SHMEM8_4K_POOL; + + v->cvs_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_map_mem_cmd); + if (ret < 0) { + pr_err("Fail: sending cvs cal,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + 
return -EINVAL; + +} + +static int voice_send_cvs_unmap_memory_cmd(struct voice_data *v) +{ + struct vss_unmap_memory_cmd cvs_unmap_mem_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + uint32_t cal_paddr = 0; + + get_all_vocstrm_cal(&cal_block); + if (cal_block.cal_size == 0) + return 0; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + if (is_volte_session(v->session_id) || + is_voip_session(v->session_id)) { + ret = voice_get_cal_phys_addr(v->session_id, CVS_CAL, + &cal_paddr); + if (ret < 0) + return ret; + } else { + cal_paddr = cal_block.cal_paddr; + } + + cvs_handle = voice_get_cvs_handle(v); + + /* fill in the header */ + cvs_unmap_mem_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvs_unmap_mem_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_unmap_mem_cmd) - APR_HDR_SIZE); + cvs_unmap_mem_cmd.hdr.src_port = v->session_id; + cvs_unmap_mem_cmd.hdr.dest_port = cvs_handle; + cvs_unmap_mem_cmd.hdr.token = 0; + cvs_unmap_mem_cmd.hdr.opcode = VSS_ICOMMON_CMD_UNMAP_MEMORY; + + cvs_unmap_mem_cmd.vss_unmap_mem.phys_addr = cal_paddr; + + v->cvs_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_unmap_mem_cmd); + if (ret < 0) { + pr_err("Fail: sending cvs cal,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_cvp_register_cal_cmd(struct voice_data *v) +{ + struct cvp_register_cal_data_cmd cvp_reg_cal_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + uint32_t cal_paddr = 0; + uint32_t cal_buf = 0; + + /* get the cvp cal 
data */ + get_all_vocproc_cal(&cal_block); + if (cal_block.cal_size == 0) + goto fail; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + + if (is_volte_session(v->session_id) || + is_voip_session(v->session_id)) { + ret = voice_get_cal_phys_addr(v->session_id, CVP_CAL, + &cal_paddr); + if (ret < 0) + return ret; + + ret = voice_get_cal_kernel_addr(v->session_id, CVP_CAL, + &cal_buf); + if (ret < 0) + return ret; + + memcpy((void *)cal_buf, (void *)cal_block.cal_kvaddr, + cal_block.cal_size); + } else { + cal_paddr = cal_block.cal_paddr; + } + + cvp_handle = voice_get_cvp_handle(v); + + /* fill in the header */ + cvp_reg_cal_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvp_reg_cal_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_reg_cal_cmd) - APR_HDR_SIZE); + cvp_reg_cal_cmd.hdr.src_port = v->session_id; + cvp_reg_cal_cmd.hdr.dest_port = cvp_handle; + cvp_reg_cal_cmd.hdr.token = 0; + cvp_reg_cal_cmd.hdr.opcode = VSS_IVOCPROC_CMD_REGISTER_CALIBRATION_DATA; + + cvp_reg_cal_cmd.cvp_cal_data.phys_addr = cal_paddr; + cvp_reg_cal_cmd.cvp_cal_data.mem_size = cal_block.cal_size; + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_reg_cal_cmd); + if (ret < 0) { + pr_err("Fail: sending cvp cal,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_cvp_deregister_cal_cmd(struct voice_data *v) +{ + struct cvp_deregister_cal_data_cmd cvp_dereg_cal_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + + get_all_vocproc_cal(&cal_block); + if (cal_block.cal_size == 0) + 
return 0; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + cvp_handle = voice_get_cvp_handle(v); + + /* fill in the header */ + cvp_dereg_cal_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvp_dereg_cal_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_dereg_cal_cmd) - APR_HDR_SIZE); + cvp_dereg_cal_cmd.hdr.src_port = v->session_id; + cvp_dereg_cal_cmd.hdr.dest_port = cvp_handle; + cvp_dereg_cal_cmd.hdr.token = 0; + cvp_dereg_cal_cmd.hdr.opcode = + VSS_IVOCPROC_CMD_DEREGISTER_CALIBRATION_DATA; + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_dereg_cal_cmd); + if (ret < 0) { + pr_err("Fail: sending cvp cal,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_cvp_register_vol_cal_table_cmd(struct voice_data *v) +{ + struct cvp_register_vol_cal_table_cmd cvp_reg_cal_tbl_cmd; + struct acdb_cal_block vol_block; + struct acdb_cal_block voc_block; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + uint32_t cal_paddr = 0; + uint32_t cal_buf = 0; + + /* get the cvp vol cal data */ + get_all_vocvol_cal(&vol_block); + get_all_vocproc_cal(&voc_block); + + if (vol_block.cal_size == 0) + goto fail; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + + if (is_volte_session(v->session_id) || + is_voip_session(v->session_id)) { + ret = voice_get_cal_phys_addr(v->session_id, CVP_CAL, + &cal_paddr); + if (ret < 0) + return ret; + + cal_paddr += 
voc_block.cal_size; + ret = voice_get_cal_kernel_addr(v->session_id, CVP_CAL, + &cal_buf); + if (ret < 0) + return ret; + + memcpy((void *)(cal_buf + voc_block.cal_size), + (void *)vol_block.cal_kvaddr, vol_block.cal_size); + } else { + cal_paddr = vol_block.cal_paddr; + } + + cvp_handle = voice_get_cvp_handle(v); + + /* fill in the header */ + cvp_reg_cal_tbl_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvp_reg_cal_tbl_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_reg_cal_tbl_cmd) - APR_HDR_SIZE); + cvp_reg_cal_tbl_cmd.hdr.src_port = v->session_id; + cvp_reg_cal_tbl_cmd.hdr.dest_port = cvp_handle; + cvp_reg_cal_tbl_cmd.hdr.token = 0; + cvp_reg_cal_tbl_cmd.hdr.opcode = + VSS_IVOCPROC_CMD_REGISTER_VOLUME_CAL_TABLE; + + cvp_reg_cal_tbl_cmd.cvp_vol_cal_tbl.phys_addr = cal_paddr; + cvp_reg_cal_tbl_cmd.cvp_vol_cal_tbl.mem_size = vol_block.cal_size; + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_reg_cal_tbl_cmd); + if (ret < 0) { + pr_err("Fail: sending cvp cal table,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_cvp_deregister_vol_cal_table_cmd(struct voice_data *v) +{ + struct cvp_deregister_vol_cal_table_cmd cvp_dereg_cal_tbl_cmd; + struct acdb_cal_block cal_block; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + + get_all_vocvol_cal(&cal_block); + if (cal_block.cal_size == 0) + return 0; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + cvp_handle = voice_get_cvp_handle(v); + + /* fill in the header */ + cvp_dereg_cal_tbl_cmd.hdr.hdr_field = APR_HDR_FIELD( + APR_MSG_TYPE_SEQ_CMD, 
+ APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvp_dereg_cal_tbl_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_dereg_cal_tbl_cmd) - APR_HDR_SIZE); + cvp_dereg_cal_tbl_cmd.hdr.src_port = v->session_id; + cvp_dereg_cal_tbl_cmd.hdr.dest_port = cvp_handle; + cvp_dereg_cal_tbl_cmd.hdr.token = 0; + cvp_dereg_cal_tbl_cmd.hdr.opcode = + VSS_IVOCPROC_CMD_DEREGISTER_VOLUME_CAL_TABLE; + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_dereg_cal_tbl_cmd); + if (ret < 0) { + pr_err("Fail: sending cvp cal table,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; + +} + +static int voice_send_set_widevoice_enable_cmd(struct voice_data *v) +{ + struct mvm_set_widevoice_enable_cmd mvm_set_wv_cmd; + int ret = 0; + void *apr_mvm; + u16 mvm_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + + if (!apr_mvm) { + pr_err("%s: apr_mvm is NULL.\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + + /* fill in the header */ + mvm_set_wv_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + mvm_set_wv_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_set_wv_cmd) - APR_HDR_SIZE); + mvm_set_wv_cmd.hdr.src_port = v->session_id; + mvm_set_wv_cmd.hdr.dest_port = mvm_handle; + mvm_set_wv_cmd.hdr.token = 0; + mvm_set_wv_cmd.hdr.opcode = VSS_IWIDEVOICE_CMD_SET_WIDEVOICE; + + mvm_set_wv_cmd.vss_set_wv.enable = v->wv_enable; + + v->mvm_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_set_wv_cmd); + if (ret < 0) { + pr_err("Fail: sending mvm set widevoice enable,\n"); + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + 
msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; +} + +static int voice_send_set_pp_enable_cmd(struct voice_data *v, + uint32_t module_id, int enable) +{ + struct cvs_set_pp_enable_cmd cvs_set_pp_cmd; + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + cvs_handle = voice_get_cvs_handle(v); + + /* fill in the header */ + cvs_set_pp_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvs_set_pp_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_set_pp_cmd) - APR_HDR_SIZE); + cvs_set_pp_cmd.hdr.src_port = v->session_id; + cvs_set_pp_cmd.hdr.dest_port = cvs_handle; + cvs_set_pp_cmd.hdr.token = 0; + cvs_set_pp_cmd.hdr.opcode = VSS_ICOMMON_CMD_SET_UI_PROPERTY; + + cvs_set_pp_cmd.vss_set_pp.module_id = module_id; + cvs_set_pp_cmd.vss_set_pp.param_id = VOICE_PARAM_MOD_ENABLE; + cvs_set_pp_cmd.vss_set_pp.param_size = MOD_ENABLE_PARAM_LEN; + cvs_set_pp_cmd.vss_set_pp.reserved = 0; + cvs_set_pp_cmd.vss_set_pp.enable = enable; + cvs_set_pp_cmd.vss_set_pp.reserved_field = 0; + pr_debug("voice_send_set_pp_enable_cmd, module_id=%d, enable=%d\n", + module_id, enable); + + v->cvs_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_set_pp_cmd); + if (ret < 0) { + pr_err("Fail: sending cvs set slowtalk enable,\n"); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + return 0; +fail: + return -EINVAL; +} + +static int voice_setup_vocproc(struct voice_data *v) +{ + struct cvp_create_full_ctl_session_cmd cvp_session_cmd; + int ret = 0; + void *apr_cvp; + if (v 
== NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + + /* create cvp session and wait for response */ + cvp_session_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvp_session_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_session_cmd) - APR_HDR_SIZE); + pr_debug(" send create cvp session, pkt size = %d\n", + cvp_session_cmd.hdr.pkt_size); + cvp_session_cmd.hdr.src_port = v->session_id; + cvp_session_cmd.hdr.dest_port = 0; + cvp_session_cmd.hdr.token = 0; + cvp_session_cmd.hdr.opcode = + VSS_IVOCPROC_CMD_CREATE_FULL_CONTROL_SESSION; + + /* Use default topology if invalid value in ACDB */ + cvp_session_cmd.cvp_session.tx_topology_id = + get_voice_tx_topology(); + if (cvp_session_cmd.cvp_session.tx_topology_id == 0) + cvp_session_cmd.cvp_session.tx_topology_id = + VSS_IVOCPROC_TOPOLOGY_ID_TX_SM_ECNS; + + cvp_session_cmd.cvp_session.rx_topology_id = + get_voice_rx_topology(); + if (cvp_session_cmd.cvp_session.rx_topology_id == 0) + cvp_session_cmd.cvp_session.rx_topology_id = + VSS_IVOCPROC_TOPOLOGY_ID_RX_DEFAULT; + + cvp_session_cmd.cvp_session.direction = 2; /*tx and rx*/ + cvp_session_cmd.cvp_session.network_id = VSS_NETWORK_ID_DEFAULT; + cvp_session_cmd.cvp_session.tx_port_id = v->dev_tx.port_id; + cvp_session_cmd.cvp_session.rx_port_id = v->dev_rx.port_id; + + pr_debug("topology=%d net_id=%d, dir=%d tx_port_id=%d, rx_port_id=%d\n", + cvp_session_cmd.cvp_session.tx_topology_id, + cvp_session_cmd.cvp_session.network_id, + cvp_session_cmd.cvp_session.direction, + cvp_session_cmd.cvp_session.tx_port_id, + cvp_session_cmd.cvp_session.rx_port_id); + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_session_cmd); + if (ret < 0) { + pr_err("Fail in sending VOCPROC_FULL_CONTROL_SESSION\n"); + goto fail; + } + ret = 
wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + if (common.ec_ref_ext == true) { + ret = voice_send_set_device_cmd_v2(v); + if (ret < 0) + pr_err("%s: set device V2 failed rc =%x\n", + __func__, ret); + goto fail; + } + /* send cvs cal */ + ret = voice_send_cvs_map_memory_cmd(v); + if (!ret) + voice_send_cvs_register_cal_cmd(v); + + /* send cvp and vol cal */ + ret = voice_send_cvp_map_memory_cmd(v); + if (!ret) { + voice_send_cvp_register_cal_cmd(v); + voice_send_cvp_register_vol_cal_table_cmd(v); + } + + /* enable vocproc */ + ret = voice_send_enable_vocproc_cmd(v); + if (ret < 0) + goto fail; + + /* attach vocproc */ + ret = voice_send_attach_vocproc_cmd(v); + if (ret < 0) + goto fail; + + /* send tty mode if tty device is used */ + voice_send_tty_mode_cmd(v); + + /* enable widevoice if wv_enable is set */ + if (v->wv_enable) + voice_send_set_widevoice_enable_cmd(v); + + /* enable slowtalk if st_enable is set */ + if (v->st_enable) + voice_send_set_pp_enable_cmd(v, MODULE_ID_VOICE_MODULE_ST, + v->st_enable); + voice_send_set_pp_enable_cmd(v, MODULE_ID_VOICE_MODULE_FENS, + v->fens_enable); + + if (is_voip_session(v->session_id)) + voice_send_netid_timing_cmd(v); + + /* Start in-call music delivery if this feature is enabled */ + if (v->music_info.play_enable) + voice_cvs_start_playback(v); + + /* Start in-call recording if this feature is enabled */ + if (v->rec_info.rec_enable) + voice_cvs_start_record(v, v->rec_info.rec_mode); + + rtac_add_voice(voice_get_cvs_handle(v), + voice_get_cvp_handle(v), + v->dev_rx.port_id, v->dev_tx.port_id, + v->session_id); + + return 0; + +fail: + return -EINVAL; +} + +static int voice_send_enable_vocproc_cmd(struct voice_data *v) +{ + int ret = 0; + struct apr_hdr cvp_enable_cmd; + void *apr_cvp; + u16 cvp_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } 
+ apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + cvp_handle = voice_get_cvp_handle(v); + + /* enable vocproc and wait for respose */ + cvp_enable_cmd.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvp_enable_cmd.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_enable_cmd) - APR_HDR_SIZE); + pr_debug("cvp_enable_cmd pkt size = %d, cvp_handle=%d\n", + cvp_enable_cmd.pkt_size, cvp_handle); + cvp_enable_cmd.src_port = v->session_id; + cvp_enable_cmd.dest_port = cvp_handle; + cvp_enable_cmd.token = 0; + cvp_enable_cmd.opcode = VSS_IVOCPROC_CMD_ENABLE; + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_enable_cmd); + if (ret < 0) { + pr_err("Fail in sending VSS_IVOCPROC_CMD_ENABLE\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + return 0; +fail: + return -EINVAL; +} + +static int voice_send_netid_timing_cmd(struct voice_data *v) +{ + int ret = 0; + void *apr_mvm; + u16 mvm_handle; + struct mvm_set_network_cmd mvm_set_network; + struct mvm_set_voice_timing_cmd mvm_set_voice_timing; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + + if (!apr_mvm) { + pr_err("%s: apr_mvm is NULL.\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + + ret = voice_config_cvs_vocoder(v); + if (ret < 0) { + pr_err("%s: Error %d configuring CVS voc", + __func__, ret); + goto fail; + } + /* Set network ID. 
*/ + pr_debug("Setting network ID\n"); + + mvm_set_network.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_set_network.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_set_network) - APR_HDR_SIZE); + mvm_set_network.hdr.src_port = v->session_id; + mvm_set_network.hdr.dest_port = mvm_handle; + mvm_set_network.hdr.token = 0; + mvm_set_network.hdr.opcode = VSS_ICOMMON_CMD_SET_NETWORK; + mvm_set_network.network.network_id = common.mvs_info.network_type; + + v->mvm_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_set_network); + if (ret < 0) { + pr_err("%s: Error %d sending SET_NETWORK\n", __func__, ret); + goto fail; + } + + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + /* Set voice timing. */ + pr_debug("Setting voice timing\n"); + + mvm_set_voice_timing.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_set_voice_timing.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_set_voice_timing) - + APR_HDR_SIZE); + mvm_set_voice_timing.hdr.src_port = v->session_id; + mvm_set_voice_timing.hdr.dest_port = mvm_handle; + mvm_set_voice_timing.hdr.token = 0; + mvm_set_voice_timing.hdr.opcode = VSS_ICOMMON_CMD_SET_VOICE_TIMING; + mvm_set_voice_timing.timing.mode = 0; + mvm_set_voice_timing.timing.enc_offset = 8000; + mvm_set_voice_timing.timing.dec_req_offset = 3300; + mvm_set_voice_timing.timing.dec_offset = 8300; + + v->mvm_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_set_voice_timing); + if (ret < 0) { + pr_err("%s: Error %d sending SET_TIMING\n", __func__, ret); + goto fail; + } + + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto 
fail; + } + + return 0; +fail: + return -EINVAL; +} + +static int voice_send_attach_vocproc_cmd(struct voice_data *v) +{ + int ret = 0; + struct mvm_attach_vocproc_cmd mvm_a_vocproc_cmd; + void *apr_mvm; + u16 mvm_handle, cvp_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + + if (!apr_mvm) { + pr_err("%s: apr_mvm is NULL.\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + cvp_handle = voice_get_cvp_handle(v); + + /* attach vocproc and wait for response */ + mvm_a_vocproc_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_a_vocproc_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_a_vocproc_cmd) - APR_HDR_SIZE); + pr_debug("send mvm_a_vocproc_cmd pkt size = %d\n", + mvm_a_vocproc_cmd.hdr.pkt_size); + mvm_a_vocproc_cmd.hdr.src_port = v->session_id; + mvm_a_vocproc_cmd.hdr.dest_port = mvm_handle; + mvm_a_vocproc_cmd.hdr.token = 0; + mvm_a_vocproc_cmd.hdr.opcode = VSS_IMVM_CMD_ATTACH_VOCPROC; + mvm_a_vocproc_cmd.mvm_attach_cvp_handle.handle = cvp_handle; + + v->mvm_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_a_vocproc_cmd); + if (ret < 0) { + pr_err("Fail in sending VSS_IMVM_CMD_ATTACH_VOCPROC\n"); + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + return 0; +fail: + return -EINVAL; +} + +static int voice_destroy_vocproc(struct voice_data *v) +{ + struct mvm_detach_vocproc_cmd mvm_d_vocproc_cmd; + struct apr_hdr cvp_destroy_session_cmd; + int ret = 0; + void *apr_mvm, *apr_cvp; + u16 mvm_handle, cvp_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + apr_cvp = common.apr_q6_cvp; + + if (!apr_mvm || !apr_cvp) { + pr_err("%s: apr_mvm or apr_cvp is 
NULL.\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + cvp_handle = voice_get_cvp_handle(v); + + /* stop playback or recording */ + v->music_info.force = 1; + voice_cvs_stop_playback(v); + voice_cvs_stop_record(v); + /* send stop voice cmd */ + voice_send_stop_voice_cmd(v); + + /* Clear mute setting */ + v->dev_tx.mute = common.default_mute_val; + + /* detach VOCPROC and wait for response from mvm */ + mvm_d_vocproc_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_d_vocproc_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_d_vocproc_cmd) - APR_HDR_SIZE); + pr_debug("mvm_d_vocproc_cmd pkt size = %d\n", + mvm_d_vocproc_cmd.hdr.pkt_size); + mvm_d_vocproc_cmd.hdr.src_port = v->session_id; + mvm_d_vocproc_cmd.hdr.dest_port = mvm_handle; + mvm_d_vocproc_cmd.hdr.token = 0; + mvm_d_vocproc_cmd.hdr.opcode = VSS_IMVM_CMD_DETACH_VOCPROC; + mvm_d_vocproc_cmd.mvm_detach_cvp_handle.handle = cvp_handle; + + v->mvm_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_d_vocproc_cmd); + if (ret < 0) { + pr_err("Fail in sending VSS_IMVM_CMD_DETACH_VOCPROC\n"); + goto fail; + } + ret = wait_event_timeout(v->mvm_wait, + (v->mvm_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + /* deregister cvp and vol cal */ + voice_send_cvp_deregister_vol_cal_table_cmd(v); + voice_send_cvp_deregister_cal_cmd(v); + voice_send_cvp_unmap_memory_cmd(v); + + /* deregister cvs cal */ + voice_send_cvs_deregister_cal_cmd(v); + voice_send_cvs_unmap_memory_cmd(v); + + /* destrop cvp session */ + cvp_destroy_session_cmd.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvp_destroy_session_cmd.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_destroy_session_cmd) - APR_HDR_SIZE); + pr_debug("cvp_destroy_session_cmd pkt size = %d\n", + 
cvp_destroy_session_cmd.pkt_size); + cvp_destroy_session_cmd.src_port = v->session_id; + cvp_destroy_session_cmd.dest_port = cvp_handle; + cvp_destroy_session_cmd.token = 0; + cvp_destroy_session_cmd.opcode = APRV2_IBASIC_CMD_DESTROY_SESSION; + + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_destroy_session_cmd); + if (ret < 0) { + pr_err("Fail in sending APRV2_IBASIC_CMD_DESTROY_SESSION\n"); + goto fail; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + goto fail; + } + + rtac_remove_voice(voice_get_cvs_handle(v)); + cvp_handle = 0; + voice_set_cvp_handle(v, cvp_handle); + + return 0; + +fail: + return -EINVAL; +} + +static int voice_send_mute_cmd(struct voice_data *v) +{ + struct cvs_set_mute_cmd cvs_mute_cmd; + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + cvs_handle = voice_get_cvs_handle(v); + + /* send mute/unmute to cvs */ + cvs_mute_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_mute_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_mute_cmd) - APR_HDR_SIZE); + cvs_mute_cmd.hdr.src_port = v->session_id; + cvs_mute_cmd.hdr.dest_port = cvs_handle; + cvs_mute_cmd.hdr.token = 0; + cvs_mute_cmd.hdr.opcode = VSS_ISTREAM_CMD_SET_MUTE; + cvs_mute_cmd.cvs_set_mute.direction = 0; /*tx*/ + cvs_mute_cmd.cvs_set_mute.mute_flag = v->dev_tx.mute; + + pr_info(" mute value =%d\n", cvs_mute_cmd.cvs_set_mute.mute_flag); + v->cvs_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_mute_cmd); + if (ret < 0) { + pr_err("Fail: send STREAM SET MUTE\n"); + goto fail; + } + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == 
CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) + pr_err("%s: wait_event timeout\n", __func__); + + return 0; +fail: + return -EINVAL; +} + +static int voice_send_rx_device_mute_cmd(struct voice_data *v) +{ + struct cvp_set_mute_cmd cvp_mute_cmd; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + cvp_handle = voice_get_cvp_handle(v); + + cvp_mute_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvp_mute_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_mute_cmd) - APR_HDR_SIZE); + cvp_mute_cmd.hdr.src_port = v->session_id; + cvp_mute_cmd.hdr.dest_port = cvp_handle; + cvp_mute_cmd.hdr.token = 0; + cvp_mute_cmd.hdr.opcode = VSS_IVOCPROC_CMD_SET_MUTE; + cvp_mute_cmd.cvp_set_mute.direction = 1; + cvp_mute_cmd.cvp_set_mute.mute_flag = v->dev_rx.mute; + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_mute_cmd); + if (ret < 0) { + pr_err("Fail in sending RX device mute cmd\n"); + return -EINVAL; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + return -EINVAL; + } + return 0; +} + +static int voice_send_vol_index_cmd(struct voice_data *v) +{ + struct cvp_set_rx_volume_index_cmd cvp_vol_cmd; + int ret = 0; + void *apr_cvp; + u16 cvp_handle; + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvp = common.apr_q6_cvp; + + if (!apr_cvp) { + pr_err("%s: apr_cvp is NULL.\n", __func__); + return -EINVAL; + } + cvp_handle = voice_get_cvp_handle(v); + + /* send volume index to cvp */ + cvp_vol_cmd.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + 
cvp_vol_cmd.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvp_vol_cmd) - APR_HDR_SIZE); + cvp_vol_cmd.hdr.src_port = v->session_id; + cvp_vol_cmd.hdr.dest_port = cvp_handle; + cvp_vol_cmd.hdr.token = 0; + cvp_vol_cmd.hdr.opcode = VSS_IVOCPROC_CMD_SET_RX_VOLUME_INDEX; + cvp_vol_cmd.cvp_set_vol_idx.vol_index = v->dev_rx.volume; + v->cvp_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_cvp, (uint32_t *) &cvp_vol_cmd); + if (ret < 0) { + pr_err("Fail in sending RX VOL INDEX\n"); + return -EINVAL; + } + ret = wait_event_timeout(v->cvp_wait, + (v->cvp_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + return -EINVAL; + } + return 0; +} + +static int voice_cvs_start_record(struct voice_data *v, uint32_t rec_mode) +{ + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + + struct cvs_start_record_cmd cvs_start_record; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + cvs_handle = voice_get_cvs_handle(v); + + if (!v->rec_info.recording) { + cvs_start_record.hdr.hdr_field = APR_HDR_FIELD( + APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_start_record.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_start_record) - APR_HDR_SIZE); + cvs_start_record.hdr.src_port = v->session_id; + cvs_start_record.hdr.dest_port = cvs_handle; + cvs_start_record.hdr.token = 0; + cvs_start_record.hdr.opcode = VSS_ISTREAM_CMD_START_RECORD; + + if (rec_mode == VOC_REC_UPLINK) { + cvs_start_record.rec_mode.rx_tap_point = + VSS_TAP_POINT_NONE; + cvs_start_record.rec_mode.tx_tap_point = + VSS_TAP_POINT_STREAM_END; + } else if (rec_mode == VOC_REC_DOWNLINK) { + cvs_start_record.rec_mode.rx_tap_point = + VSS_TAP_POINT_STREAM_END; + cvs_start_record.rec_mode.tx_tap_point = + VSS_TAP_POINT_NONE; + } else if (rec_mode == VOC_REC_BOTH) { + 
cvs_start_record.rec_mode.rx_tap_point = + VSS_TAP_POINT_STREAM_END; + cvs_start_record.rec_mode.tx_tap_point = + VSS_TAP_POINT_STREAM_END; + } else { + pr_err("%s: Invalid in-call rec_mode %d\n", __func__, + rec_mode); + + ret = -EINVAL; + goto fail; + } + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_start_record); + if (ret < 0) { + pr_err("%s: Error %d sending START_RECORD\n", __func__, + ret); + + goto fail; + } + + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + + goto fail; + } + v->rec_info.recording = 1; + } else { + pr_debug("%s: Start record already sent\n", __func__); + } + + return 0; + +fail: + return ret; +} + +static int voice_cvs_stop_record(struct voice_data *v) +{ + int ret = 0; + void *apr_cvs; + u16 cvs_handle; + struct apr_hdr cvs_stop_record; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + cvs_handle = voice_get_cvs_handle(v); + + if (v->rec_info.recording) { + cvs_stop_record.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvs_stop_record.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_stop_record) - APR_HDR_SIZE); + cvs_stop_record.src_port = v->session_id; + cvs_stop_record.dest_port = cvs_handle; + cvs_stop_record.token = 0; + cvs_stop_record.opcode = VSS_ISTREAM_CMD_STOP_RECORD; + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_stop_record); + if (ret < 0) { + pr_err("%s: Error %d sending STOP_RECORD\n", + __func__, ret); + + goto fail; + } + + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + + goto fail; + } + 
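The in-call recording setup above maps the requested mode to a pair of stream tap points: uplink-only taps the tx stream end, downlink-only taps rx, and both taps both. A small sketch of that mapping (the helper name `rec_mode_to_tap_points` is invented; the enum constants stand in for the driver's macros):

```c
#include <assert.h>

enum voc_rec_mode { VOC_REC_UPLINK, VOC_REC_DOWNLINK, VOC_REC_BOTH };
enum vss_tap_point { VSS_TAP_POINT_NONE, VSS_TAP_POINT_STREAM_END };

/* Fills the rx/tx tap points for a recording mode, mirroring the
 * branch ladder in voice_cvs_start_record; returns -1 for an
 * invalid mode (the driver NACKs it with -EINVAL). */
static int rec_mode_to_tap_points(int rec_mode, int *rx, int *tx)
{
	switch (rec_mode) {
	case VOC_REC_UPLINK:
		*rx = VSS_TAP_POINT_NONE;
		*tx = VSS_TAP_POINT_STREAM_END;
		return 0;
	case VOC_REC_DOWNLINK:
		*rx = VSS_TAP_POINT_STREAM_END;
		*tx = VSS_TAP_POINT_NONE;
		return 0;
	case VOC_REC_BOTH:
		*rx = VSS_TAP_POINT_STREAM_END;
		*tx = VSS_TAP_POINT_STREAM_END;
		return 0;
	default:
		return -1;
	}
}
```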
v->rec_info.recording = 0; + } else { + pr_debug("%s: Stop record already sent\n", __func__); + } + + return 0; + +fail: + + return ret; +} + +int voc_start_record(uint32_t port_id, uint32_t set) +{ + int ret = 0; + int rec_mode = 0; + u16 cvs_handle; + int i, rec_set = 0; + + for (i = 0; i < MAX_VOC_SESSIONS; i++) { + struct voice_data *v = &common.voice[i]; + pr_debug("%s: i:%d port_id: %d, set: %d\n", + __func__, i, port_id, set); + + mutex_lock(&v->lock); + rec_mode = v->rec_info.rec_mode; + rec_set = set; + if (set) { + if ((v->rec_route_state.ul_flag != 0) && + (v->rec_route_state.dl_flag != 0)) { + pr_debug("%s: i=%d, rec mode already set.\n", + __func__, i); + mutex_unlock(&v->lock); + if (i < MAX_VOC_SESSIONS) + continue; + else + return 0; + } + + if (port_id == VOICE_RECORD_TX) { + if ((v->rec_route_state.ul_flag == 0) + && (v->rec_route_state.dl_flag == 0)) { + rec_mode = VOC_REC_UPLINK; + v->rec_route_state.ul_flag = 1; + } else if ((v->rec_route_state.ul_flag == 0) + && (v->rec_route_state.dl_flag != 0)) { + voice_cvs_stop_record(v); + rec_mode = VOC_REC_BOTH; + v->rec_route_state.ul_flag = 1; + } + } else if (port_id == VOICE_RECORD_RX) { + if ((v->rec_route_state.ul_flag == 0) + && (v->rec_route_state.dl_flag == 0)) { + rec_mode = VOC_REC_DOWNLINK; + v->rec_route_state.dl_flag = 1; + } else if ((v->rec_route_state.ul_flag != 0) + && (v->rec_route_state.dl_flag == 0)) { + voice_cvs_stop_record(v); + rec_mode = VOC_REC_BOTH; + v->rec_route_state.dl_flag = 1; + } + } + rec_set = 1; + } else { + if ((v->rec_route_state.ul_flag == 0) && + (v->rec_route_state.dl_flag == 0)) { + pr_debug("%s: i=%d, rec already stops.\n", + __func__, i); + mutex_unlock(&v->lock); + if (i < MAX_VOC_SESSIONS) + continue; + else + return 0; + } + + if (port_id == VOICE_RECORD_TX) { + if ((v->rec_route_state.ul_flag != 0) + && (v->rec_route_state.dl_flag == 0)) { + v->rec_route_state.ul_flag = 0; + rec_set = 0; + } else if ((v->rec_route_state.ul_flag != 0) + && 
(v->rec_route_state.dl_flag != 0)) { + voice_cvs_stop_record(v); + v->rec_route_state.ul_flag = 0; + rec_mode = VOC_REC_DOWNLINK; + rec_set = 1; + } + } else if (port_id == VOICE_RECORD_RX) { + if ((v->rec_route_state.ul_flag == 0) + && (v->rec_route_state.dl_flag != 0)) { + v->rec_route_state.dl_flag = 0; + rec_set = 0; + } else if ((v->rec_route_state.ul_flag != 0) + && (v->rec_route_state.dl_flag != 0)) { + voice_cvs_stop_record(v); + v->rec_route_state.dl_flag = 0; + rec_mode = VOC_REC_UPLINK; + rec_set = 1; + } + } + } + pr_debug("%s: i=%d, mode =%d, set =%d\n", __func__, + i, rec_mode, rec_set); + cvs_handle = voice_get_cvs_handle(v); + + if (cvs_handle != 0) { + if (rec_set) + ret = voice_cvs_start_record(v, rec_mode); + else + ret = voice_cvs_stop_record(v); + } + + /* Cache the value */ + v->rec_info.rec_enable = rec_set; + v->rec_info.rec_mode = rec_mode; + + mutex_unlock(&v->lock); + } + + return ret; +} + +static int voice_cvs_start_playback(struct voice_data *v) +{ + int ret = 0; + struct apr_hdr cvs_start_playback; + void *apr_cvs; + u16 cvs_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + cvs_handle = voice_get_cvs_handle(v); + + if (!v->music_info.playing && v->music_info.count) { + cvs_start_playback.hdr_field = APR_HDR_FIELD( + APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + cvs_start_playback.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_start_playback) - APR_HDR_SIZE); + cvs_start_playback.src_port = v->session_id; + cvs_start_playback.dest_port = cvs_handle; + cvs_start_playback.token = 0; + cvs_start_playback.opcode = VSS_ISTREAM_CMD_START_PLAYBACK; + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_start_playback); + + if (ret < 0) { + pr_err("%s: Error %d sending START_PLAYBACK\n", + __func__, ret); + + goto fail; + } + 
+ ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + + goto fail; + } + + v->music_info.playing = 1; + } else { + pr_debug("%s: Start playback already sent\n", __func__); + } + + return 0; + +fail: + return ret; +} + +static int voice_cvs_stop_playback(struct voice_data *v) +{ + int ret = 0; + struct apr_hdr cvs_stop_playback; + void *apr_cvs; + u16 cvs_handle; + + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_cvs = common.apr_q6_cvs; + + if (!apr_cvs) { + pr_err("%s: apr_cvs is NULL.\n", __func__); + return -EINVAL; + } + + cvs_handle = voice_get_cvs_handle(v); + + if (v->music_info.playing && ((!v->music_info.count) || + (v->music_info.force))) { + cvs_stop_playback.hdr_field = + APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER); + cvs_stop_playback.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(cvs_stop_playback) - APR_HDR_SIZE); + cvs_stop_playback.src_port = v->session_id; + cvs_stop_playback.dest_port = cvs_handle; + cvs_stop_playback.token = 0; + + cvs_stop_playback.opcode = VSS_ISTREAM_CMD_STOP_PLAYBACK; + + v->cvs_state = CMD_STATUS_FAIL; + + ret = apr_send_pkt(apr_cvs, (uint32_t *) &cvs_stop_playback); + if (ret < 0) { + pr_err("%s: Error %d sending STOP_PLAYBACK\n", + __func__, ret); + + + goto fail; + } + + ret = wait_event_timeout(v->cvs_wait, + (v->cvs_state == CMD_STATUS_SUCCESS), + msecs_to_jiffies(TIMEOUT_MS)); + if (!ret) { + pr_err("%s: wait_event timeout\n", __func__); + + goto fail; + } + + v->music_info.playing = 0; + v->music_info.force = 0; + } else { + pr_debug("%s: Stop playback already sent\n", __func__); + } + + return 0; + +fail: + return ret; +} + +int voc_start_playback(uint32_t set) +{ + int ret = 0; + u16 cvs_handle; + int i; + + + for (i = 0; i < MAX_VOC_SESSIONS; i++) { + struct voice_data *v = &common.voice[i]; + + mutex_lock(&v->lock); + 
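In-call music delivery is reference counted: `voc_start_playback(set)` increments `music_info.count` on each enable and decrements on disable, and CVS playback actually starts only on the first request (`!playing && count`) and stops only when the last request goes away (`playing && (!count || force)`). A hedged user-space model of that gating, with the invented helper `request_playback` standing in for the locked loop body:

```c
#include <assert.h>

/* Simplified stand-in for the driver's per-session music_info. */
struct music_info {
	int playing;	/* CVS playback currently running */
	int count;	/* outstanding enable requests */
	int force;	/* force-stop regardless of count */
};

static int should_start(const struct music_info *m)
{
	return !m->playing && m->count;
}

static int should_stop(const struct music_info *m)
{
	return m->playing && (!m->count || m->force);
}

/* Models one iteration of voc_start_playback() for a session. */
static void request_playback(struct music_info *m, int set)
{
	if (set)
		m->count++;
	else
		m->count--;

	if (should_start(m)) {
		m->playing = 1;		/* voice_cvs_start_playback() */
	} else if (should_stop(m)) {
		m->playing = 0;		/* voice_cvs_stop_playback() */
		m->force = 0;
	}
}
```

This is why `voice_cvs_start_playback` checks `!v->music_info.playing && v->music_info.count` before sending START_PLAYBACK: nested enables must not restart an already-running stream.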
+		v->music_info.play_enable = set;
+		if (set)
+			v->music_info.count++;
+		else
+			v->music_info.count--;
+		pr_debug("%s: music_info count =%d\n", __func__,
+			 v->music_info.count);
+
+		cvs_handle = voice_get_cvs_handle(v);
+		if (cvs_handle != 0) {
+			if (set)
+				ret = voice_cvs_start_playback(v);
+			else
+				ret = voice_cvs_stop_playback(v);
+		}
+
+		mutex_unlock(&v->lock);
+	}
+
+	return ret;
+}
+
+int voc_disable_cvp(uint16_t session_id)
+{
+	struct voice_data *v = voice_get_session(session_id);
+	int ret = 0;
+
+	if (v == NULL) {
+		pr_err("%s: invalid session_id 0x%x\n", __func__, session_id);
+
+		return -EINVAL;
+	}
+
+	mutex_lock(&v->lock);
+
+	if (v->voc_state == VOC_RUN) {
+		if (v->dev_tx.port_id != RT_PROXY_PORT_001_TX &&
+		    v->dev_rx.port_id != RT_PROXY_PORT_001_RX)
+			afe_sidetone(v->dev_tx.port_id, v->dev_rx.port_id,
+				     0, 0);
+
+		rtac_remove_voice(voice_get_cvs_handle(v));
+		/* send cmd to dsp to disable vocproc */
+		ret = voice_send_disable_vocproc_cmd(v);
+		if (ret < 0) {
+			pr_err("%s: disable vocproc failed\n", __func__);
+			goto fail;
+		}
+
+		/* deregister cvp and vol cal */
+		voice_send_cvp_deregister_vol_cal_table_cmd(v);
+		voice_send_cvp_deregister_cal_cmd(v);
+		voice_send_cvp_unmap_memory_cmd(v);
+		if (common.ec_ref_ext == true)
+			voc_set_ext_ec_ref(AFE_PORT_INVALID, false);
+		v->voc_state = VOC_CHANGE;
+	}
+
+fail:
+	mutex_unlock(&v->lock);
+
+	return ret;
+}
+
+int voc_enable_cvp(uint16_t session_id)
+{
+	struct voice_data *v = voice_get_session(session_id);
+	struct sidetone_cal sidetone_cal_data;
+	int ret = 0;
+
+	if (v == NULL) {
+		pr_err("%s: invalid session_id 0x%x\n", __func__, session_id);
+
+		return -EINVAL;
+	}
+
+	mutex_lock(&v->lock);
+
+	if (v->voc_state == VOC_CHANGE) {
+
+		if (common.ec_ref_ext == true) {
+			ret = voice_send_set_device_cmd_v2(v);
+			if (ret < 0) {
+				pr_err("%s: set device V2 failed rc =%x\n",
+				       __func__, ret);
+				goto fail;
+			}
+		} else {
+			ret = voice_send_set_device_cmd(v);
+			if (ret < 0) {
+				pr_err("%s: set device failed rc=%x\n",
+ __func__, ret); + goto fail; + } + } + /* send cvp and vol cal */ + ret = voice_send_cvp_map_memory_cmd(v); + if (!ret) { + voice_send_cvp_register_cal_cmd(v); + voice_send_cvp_register_vol_cal_table_cmd(v); + } + ret = voice_send_enable_vocproc_cmd(v); + if (ret < 0) { + pr_err("%s: enable vocproc failed\n", __func__); + goto fail; + + } + /* send tty mode if tty device is used */ + voice_send_tty_mode_cmd(v); + + /* enable widevoice if wv_enable is set */ + if (v->wv_enable) + voice_send_set_widevoice_enable_cmd(v); + + /* enable slowtalk */ + if (v->st_enable) + voice_send_set_pp_enable_cmd(v, + MODULE_ID_VOICE_MODULE_ST, + v->st_enable); + /* enable FENS */ + if (v->fens_enable) + voice_send_set_pp_enable_cmd(v, + MODULE_ID_VOICE_MODULE_FENS, + v->fens_enable); + + get_sidetone_cal(&sidetone_cal_data); + if (v->dev_tx.port_id != RT_PROXY_PORT_001_TX && + v->dev_rx.port_id != RT_PROXY_PORT_001_RX) { + ret = afe_sidetone(v->dev_tx.port_id, + v->dev_rx.port_id, + sidetone_cal_data.enable, + sidetone_cal_data.gain); + + if (ret < 0) + pr_err("%s: AFE command sidetone failed\n", + __func__); + } + + rtac_add_voice(voice_get_cvs_handle(v), + voice_get_cvp_handle(v), + v->dev_rx.port_id, v->dev_tx.port_id, + v->session_id); + v->voc_state = VOC_RUN; + } + +fail: + mutex_unlock(&v->lock); + + return ret; +} + +int voc_set_tx_mute(uint16_t session_id, uint32_t dir, uint32_t mute) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + v->dev_tx.mute = mute; + + if ((v->voc_state == VOC_RUN) || + (v->voc_state == VOC_CHANGE) || + (v->voc_state == VOC_STANDBY)) + ret = voice_send_mute_cmd(v); + + mutex_unlock(&v->lock); + + return ret; +} + +int voc_set_rx_device_mute(uint16_t session_id, uint32_t mute) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: 
invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + v->dev_rx.mute = mute; + + if (v->voc_state == VOC_RUN) + ret = voice_send_rx_device_mute_cmd(v); + + mutex_unlock(&v->lock); + + return ret; +} + +int voc_get_rx_device_mute(uint16_t session_id) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + ret = v->dev_rx.mute; + + mutex_unlock(&v->lock); + + return ret; +} + +int voc_set_tty_mode(uint16_t session_id, uint8_t tty_mode) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + v->tty_mode = tty_mode; + + mutex_unlock(&v->lock); + + return ret; +} + +uint8_t voc_get_tty_mode(uint16_t session_id) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + ret = v->tty_mode; + + mutex_unlock(&v->lock); + + return ret; +} + +int voc_set_widevoice_enable(uint16_t session_id, uint32_t wv_enable) +{ + struct voice_data *v = voice_get_session(session_id); + u16 mvm_handle; + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + v->wv_enable = wv_enable; + + mvm_handle = voice_get_mvm_handle(v); + + if (mvm_handle != 0) + voice_send_set_widevoice_enable_cmd(v); + + mutex_unlock(&v->lock); + + return ret; +} + +uint32_t voc_get_widevoice_enable(uint16_t session_id) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + 
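The setters above (`voc_set_rx_device_mute`, `voc_set_tty_mode`, `voc_set_widevoice_enable`, …) share one pattern: under the session mutex they always cache the requested value in `voice_data`, but only push it to the DSP when the session is in a state that can accept the command (e.g. `VOC_RUN`); otherwise the cached value is applied when the vocproc is next brought up. A hypothetical model of that cache-then-apply behavior (`struct rx_dev` and `set_rx_mute` are invented names, and `applied_mute` stands in for the value the DSP last saw):

```c
#include <assert.h>

enum voc_state { VOC_INIT, VOC_RUN, VOC_RELEASE };

/* Cached request vs. the value actually pushed to the DSP. */
struct rx_dev {
	int cached_mute;
	int applied_mute;
	enum voc_state state;
};

/* Mirrors voc_set_rx_device_mute: always cache, but only issue the
 * SET_MUTE command (modeled as updating applied_mute) in VOC_RUN. */
static void set_rx_mute(struct rx_dev *d, int mute)
{
	d->cached_mute = mute;
	if (d->state == VOC_RUN)
		d->applied_mute = mute;	/* voice_send_rx_device_mute_cmd() */
}
```

This is why a mute toggled while no call is active still takes effect once the call starts: call setup reads the cached value and sends it as part of bring-up.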
mutex_lock(&v->lock); + + ret = v->wv_enable; + + mutex_unlock(&v->lock); + + return ret; +} + +int voc_set_pp_enable(uint16_t session_id, uint32_t module_id, uint32_t enable) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + if (module_id == MODULE_ID_VOICE_MODULE_ST) + v->st_enable = enable; + else if (module_id == MODULE_ID_VOICE_MODULE_FENS) + v->fens_enable = enable; + + if (v->voc_state == VOC_RUN) { + if (module_id == MODULE_ID_VOICE_MODULE_ST) + ret = voice_send_set_pp_enable_cmd(v, + MODULE_ID_VOICE_MODULE_ST, + enable); + else if (module_id == MODULE_ID_VOICE_MODULE_FENS) + ret = voice_send_set_pp_enable_cmd(v, + MODULE_ID_VOICE_MODULE_FENS, + enable); + } + mutex_unlock(&v->lock); + + return ret; +} + +int voc_get_pp_enable(uint16_t session_id, uint32_t module_id) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + if (module_id == MODULE_ID_VOICE_MODULE_ST) + ret = v->st_enable; + else if (module_id == MODULE_ID_VOICE_MODULE_FENS) + ret = v->fens_enable; + + mutex_unlock(&v->lock); + + return ret; +} + +int voc_set_rx_vol_index(uint16_t session_id, uint32_t dir, uint32_t vol_idx) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + v->dev_rx.volume = vol_idx; + + if ((v->voc_state == VOC_RUN) || + (v->voc_state == VOC_CHANGE) || + (v->voc_state == VOC_STANDBY)) + ret = voice_send_vol_index_cmd(v); + + mutex_unlock(&v->lock); + + return ret; +} + +int voc_set_rxtx_port(uint16_t session_id, uint32_t port_id, uint32_t dev_type) +{ + struct voice_data *v = 
voice_get_session(session_id); + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + pr_debug("%s: port_id=%d, type=%d\n", __func__, port_id, dev_type); + + mutex_lock(&v->lock); + + if (dev_type == DEV_RX) + v->dev_rx.port_id = port_id; + else + v->dev_tx.port_id = port_id; + + mutex_unlock(&v->lock); + + return 0; +} + +int voc_set_route_flag(uint16_t session_id, uint8_t path_dir, uint8_t set) +{ + struct voice_data *v = voice_get_session(session_id); + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + pr_debug("%s: path_dir=%d, set=%d\n", __func__, path_dir, set); + + mutex_lock(&v->lock); + + if (path_dir == RX_PATH) + v->voc_route_state.rx_route_flag = set; + else + v->voc_route_state.tx_route_flag = set; + + mutex_unlock(&v->lock); + + return 0; +} + +uint8_t voc_get_route_flag(uint16_t session_id, uint8_t path_dir) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return 0; + } + + mutex_lock(&v->lock); + + if (path_dir == RX_PATH) + ret = v->voc_route_state.rx_route_flag; + else + ret = v->voc_route_state.tx_route_flag; + + mutex_unlock(&v->lock); + + return ret; +} + +int voc_end_voice_call(uint16_t session_id) +{ + struct voice_data *v = voice_get_session(session_id); + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + if (v->voc_state == VOC_RUN || v->voc_state == VOC_STANDBY) { + if (v->dev_tx.port_id != RT_PROXY_PORT_001_TX && + v->dev_rx.port_id != RT_PROXY_PORT_001_RX) + afe_sidetone(v->dev_tx.port_id, v->dev_rx.port_id, + 0, 0); + ret = voice_destroy_vocproc(v); + if (ret < 0) + pr_err("%s: destroy voice failed\n", __func__); + voice_destroy_mvm_cvs_session(v); + if (common.ec_ref_ext == true) + 
voc_set_ext_ec_ref(AFE_PORT_INVALID, false); + v->voc_state = VOC_RELEASE; + } + mutex_unlock(&v->lock); + return ret; +} + +int voc_resume_voice_call(uint16_t session_id) +{ + struct voice_data *v = voice_get_session(session_id); + struct apr_hdr mvm_start_voice_cmd; + int ret = 0; + void *apr_mvm; + u16 mvm_handle; + + pr_debug("%s:\n", __func__); + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + return -EINVAL; + } + apr_mvm = common.apr_q6_mvm; + + if (!apr_mvm) { + pr_err("%s: apr_mvm is NULL.\n", __func__); + return -EINVAL; + } + mvm_handle = voice_get_mvm_handle(v); + + mvm_start_voice_cmd.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + mvm_start_voice_cmd.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(mvm_start_voice_cmd) - APR_HDR_SIZE); + pr_debug("send mvm_start_voice_cmd pkt size = %d\n", + mvm_start_voice_cmd.pkt_size); + mvm_start_voice_cmd.src_port = v->session_id; + mvm_start_voice_cmd.dest_port = mvm_handle; + mvm_start_voice_cmd.token = 0; + mvm_start_voice_cmd.opcode = VSS_IMVM_CMD_START_VOICE; + + v->mvm_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_mvm, (uint32_t *) &mvm_start_voice_cmd); + if (ret < 0) { + pr_err("Fail in sending VSS_IMVM_CMD_START_VOICE\n"); + goto fail; + } + v->voc_state = VOC_RUN; + return 0; +fail: + return -EINVAL; +} + +int voc_start_voice_call(uint16_t session_id) +{ + struct voice_data *v = voice_get_session(session_id); + struct sidetone_cal sidetone_cal_data; + int ret = 0; + + if (v == NULL) { + pr_err("%s: invalid session_id 0x%x\n", __func__, session_id); + + return -EINVAL; + } + + mutex_lock(&v->lock); + + if ((v->voc_state == VOC_INIT) || + (v->voc_state == VOC_RELEASE)) { + ret = voice_apr_register(); + if (ret < 0) { + pr_err("%s: apr register failed\n", __func__); + goto fail; + } + ret = voice_create_mvm_cvs_session(v); + if (ret < 0) { + pr_err("create mvm and cvs failed\n"); + goto fail; + } + ret = voice_send_dual_control_cmd(v); + if (ret < 
0) {
+			pr_err("Err Dual command failed\n");
+			goto fail;
+		}
+		ret = voice_setup_vocproc(v);
+		if (ret < 0) {
+			pr_err("setup voice failed\n");
+			goto fail;
+		}
+
+		ret = voice_send_vol_index_cmd(v);
+		if (ret < 0)
+			pr_err("voice volume failed\n");
+
+		ret = voice_send_mute_cmd(v);
+		if (ret < 0)
+			pr_err("voice mute failed\n");
+
+		ret = voice_send_start_voice_cmd(v);
+		if (ret < 0) {
+			pr_err("start voice failed\n");
+			goto fail;
+		}
+		get_sidetone_cal(&sidetone_cal_data);
+		if (v->dev_tx.port_id != RT_PROXY_PORT_001_TX &&
+		    v->dev_rx.port_id != RT_PROXY_PORT_001_RX) {
+			ret = afe_sidetone(v->dev_tx.port_id,
+					   v->dev_rx.port_id,
+					   sidetone_cal_data.enable,
+					   sidetone_cal_data.gain);
+			if (ret < 0)
+				pr_err("AFE command sidetone failed\n");
+		}
+
+		v->voc_state = VOC_RUN;
+	} else if (v->voc_state == VOC_STANDBY) {
+		pr_err("Error: PCM Prepare when in Standby\n");
+		ret = -EINVAL;
+		goto fail;
+	}
+fail:
+	mutex_unlock(&v->lock);
+	return ret;
+}
+
+int voc_standby_voice_call(uint16_t session_id)
+{
+	struct voice_data *v = voice_get_session(session_id);
+	struct apr_hdr mvm_standby_voice_cmd;
+	void *apr_mvm;
+	u16 mvm_handle;
+	int ret = 0;
+
+	if (v == NULL) {
+		pr_err("%s: v is NULL\n", __func__);
+		return -EINVAL;
+	}
+	pr_debug("%s: voc state=%d\n", __func__, v->voc_state);
+	if (v->voc_state == VOC_RUN) {
+		apr_mvm = common.apr_q6_mvm;
+		if (!apr_mvm) {
+			pr_err("%s: apr_mvm is NULL.\n", __func__);
+			ret = -EINVAL;
+			goto fail;
+		}
+		mvm_handle = voice_get_mvm_handle(v);
+		mvm_standby_voice_cmd.hdr_field =
+			APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+				      APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+		mvm_standby_voice_cmd.pkt_size =
+			APR_PKT_SIZE(APR_HDR_SIZE,
+				     sizeof(mvm_standby_voice_cmd) - APR_HDR_SIZE);
+		pr_debug("send mvm_standby_voice_cmd pkt size = %d\n",
+			 mvm_standby_voice_cmd.pkt_size);
+		mvm_standby_voice_cmd.src_port = v->session_id;
+		mvm_standby_voice_cmd.dest_port = mvm_handle;
+		mvm_standby_voice_cmd.token = 0;
+		mvm_standby_voice_cmd.opcode =
VSS_IMVM_CMD_STANDBY_VOICE; + v->mvm_state = CMD_STATUS_FAIL; + ret = apr_send_pkt(apr_mvm, + (uint32_t *)&mvm_standby_voice_cmd); + if (ret < 0) { + pr_err("Fail in sending VSS_IMVM_CMD_STANDBY_VOICE\n"); + ret = -EINVAL; + goto fail; + } + v->voc_state = VOC_STANDBY; + } +fail: + return ret; +} + +int voc_set_ext_ec_ref(uint16_t port_id, bool state) +{ + int ret = 0; + + mutex_lock(&common.common_lock); + if (state == true) { + if (port_id == AFE_PORT_INVALID) { + pr_err("%s: Invalid port id", __func__); + ret = -EINVAL; + goto fail; + } + common.ec_port_id = port_id; + common.ec_ref_ext = true; + } else { + common.ec_ref_ext = false; + common.ec_port_id = port_id; + } +fail: + mutex_unlock(&common.common_lock); + return ret; +} + +void voc_register_mvs_cb(ul_cb_fn ul_cb, + dl_cb_fn dl_cb, + void *private_data) +{ + common.mvs_info.ul_cb = ul_cb; + common.mvs_info.dl_cb = dl_cb; + common.mvs_info.private_data = private_data; +} + +void voc_config_vocoder(uint32_t media_type, + uint32_t rate, + uint32_t network_type, + uint32_t dtx_mode) +{ + common.mvs_info.media_type = media_type; + common.mvs_info.rate = rate; + common.mvs_info.network_type = network_type; + common.mvs_info.dtx_mode = dtx_mode; +} + +static int32_t qdsp_mvm_callback(struct apr_client_data *data, void *priv) +{ + uint32_t *ptr = NULL; + struct common_data *c = NULL; + struct voice_data *v = NULL; + int i = 0; + + if ((data == NULL) || (priv == NULL)) { + pr_err("%s: data or priv is NULL\n", __func__); + return -EINVAL; + } + + c = priv; + + pr_debug("%s: session_id 0x%x\n", __func__, data->dest_port); + + v = voice_get_session(data->dest_port); + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + + return -EINVAL; + } + + pr_debug("%s: Payload Length = %d, opcode=%x\n", __func__, + data->payload_size, data->opcode); + + if (data->opcode == RESET_EVENTS) { + pr_debug("%s: Reset event received in Voice service\n", + __func__); + + apr_reset(c->apr_q6_mvm); + c->apr_q6_mvm = NULL; + + /* 
Sub-system restart is applicable to all sessions. */ + for (i = 0; i < MAX_VOC_SESSIONS; i++) + c->voice[i].mvm_handle = 0; + + return 0; + } + + if (data->opcode == APR_BASIC_RSP_RESULT) { + if (data->payload_size) { + ptr = data->payload; + + pr_info("%x %x\n", ptr[0], ptr[1]); + /* ping mvm service ACK */ + switch (ptr[0]) { + case VSS_IMVM_CMD_CREATE_PASSIVE_CONTROL_SESSION: + case VSS_IMVM_CMD_CREATE_FULL_CONTROL_SESSION: + /* Passive session is used for CS call + * Full session is used for VoIP call. */ + pr_debug("%s: cmd = 0x%x\n", __func__, ptr[0]); + if (!ptr[1]) { + pr_debug("%s: MVM handle is %d\n", + __func__, data->src_port); + voice_set_mvm_handle(v, data->src_port); + } else + pr_err("got NACK for sending \ + MVM create session \n"); + v->mvm_state = CMD_STATUS_SUCCESS; + wake_up(&v->mvm_wait); + break; + case VSS_IMVM_CMD_START_VOICE: + case VSS_IMVM_CMD_ATTACH_VOCPROC: + case VSS_IMVM_CMD_STOP_VOICE: + case VSS_IMVM_CMD_DETACH_VOCPROC: + case VSS_ISTREAM_CMD_SET_TTY_MODE: + case APRV2_IBASIC_CMD_DESTROY_SESSION: + case VSS_IMVM_CMD_ATTACH_STREAM: + case VSS_IMVM_CMD_DETACH_STREAM: + case VSS_ICOMMON_CMD_SET_NETWORK: + case VSS_ICOMMON_CMD_SET_VOICE_TIMING: + case VSS_IWIDEVOICE_CMD_SET_WIDEVOICE: + case VSS_IMVM_CMD_SET_POLICY_DUAL_CONTROL: + case VSS_IMVM_CMD_STANDBY_VOICE: + pr_debug("%s: cmd = 0x%x\n", __func__, ptr[0]); + v->mvm_state = CMD_STATUS_SUCCESS; + wake_up(&v->mvm_wait); + break; + default: + pr_debug("%s: not match cmd = 0x%x\n", + __func__, ptr[0]); + break; + } + } + } + + return 0; +} + +static int32_t qdsp_cvs_callback(struct apr_client_data *data, void *priv) +{ + uint32_t *ptr = NULL; + struct common_data *c = NULL; + struct voice_data *v = NULL; + int i = 0; + + if ((data == NULL) || (priv == NULL)) { + pr_err("%s: data or priv is NULL\n", __func__); + return -EINVAL; + } + + c = priv; + + pr_debug("%s: session_id 0x%x\n", __func__, data->dest_port); + + v = voice_get_session(data->dest_port); + if (v == NULL) { + pr_err("%s: 
v is NULL\n", __func__); + + return -EINVAL; + } + + pr_debug("%s: Payload Length = %d, opcode=%x\n", __func__, + data->payload_size, data->opcode); + + if (data->opcode == RESET_EVENTS) { + pr_debug("%s: Reset event received in Voice service\n", + __func__); + + apr_reset(c->apr_q6_cvs); + c->apr_q6_cvs = NULL; + + /* Sub-system restart is applicable to all sessions. */ + for (i = 0; i < MAX_VOC_SESSIONS; i++) + c->voice[i].cvs_handle = 0; + + return 0; + } + + if (data->opcode == APR_BASIC_RSP_RESULT) { + if (data->payload_size) { + ptr = data->payload; + + pr_info("%x %x\n", ptr[0], ptr[1]); + /*response from CVS */ + switch (ptr[0]) { + case VSS_ISTREAM_CMD_CREATE_PASSIVE_CONTROL_SESSION: + case VSS_ISTREAM_CMD_CREATE_FULL_CONTROL_SESSION: + if (!ptr[1]) { + pr_debug("%s: CVS handle is %d\n", + __func__, data->src_port); + voice_set_cvs_handle(v, data->src_port); + } else + pr_err("got NACK for sending \ + CVS create session \n"); + v->cvs_state = CMD_STATUS_SUCCESS; + wake_up(&v->cvs_wait); + break; + case VSS_ISTREAM_CMD_SET_MUTE: + case VSS_ISTREAM_CMD_SET_MEDIA_TYPE: + case VSS_ISTREAM_CMD_VOC_AMR_SET_ENC_RATE: + case VSS_ISTREAM_CMD_VOC_AMRWB_SET_ENC_RATE: + case VSS_ISTREAM_CMD_SET_ENC_DTX_MODE: + case VSS_ISTREAM_CMD_CDMA_SET_ENC_MINMAX_RATE: + case APRV2_IBASIC_CMD_DESTROY_SESSION: + case VSS_ISTREAM_CMD_REGISTER_CALIBRATION_DATA: + case VSS_ISTREAM_CMD_DEREGISTER_CALIBRATION_DATA: + case VSS_ICOMMON_CMD_MAP_MEMORY: + case VSS_ICOMMON_CMD_UNMAP_MEMORY: + case VSS_ICOMMON_CMD_SET_UI_PROPERTY: + case VSS_ISTREAM_CMD_START_PLAYBACK: + case VSS_ISTREAM_CMD_STOP_PLAYBACK: + case VSS_ISTREAM_CMD_START_RECORD: + case VSS_ISTREAM_CMD_STOP_RECORD: + pr_debug("%s: cmd = 0x%x\n", __func__, ptr[0]); + v->cvs_state = CMD_STATUS_SUCCESS; + wake_up(&v->cvs_wait); + break; + case VOICE_CMD_SET_PARAM: + rtac_make_voice_callback(RTAC_CVS, ptr, + data->payload_size); + break; + default: + pr_debug("%s: cmd = 0x%x\n", __func__, ptr[0]); + break; + } + } + } else if 
(data->opcode == VSS_ISTREAM_EVT_SEND_ENC_BUFFER) { + uint32_t *voc_pkt = data->payload; + uint32_t pkt_len = data->payload_size; + + if (voc_pkt != NULL && c->mvs_info.ul_cb != NULL) { + pr_debug("%s: Media type is 0x%x\n", + __func__, voc_pkt[0]); + + /* Remove media ID from payload. */ + voc_pkt++; + pkt_len = pkt_len - 4; + + c->mvs_info.ul_cb((uint8_t *)voc_pkt, + pkt_len, + c->mvs_info.private_data); + } else + pr_err("%s: voc_pkt is 0x%x ul_cb is 0x%x\n", + __func__, (unsigned int)voc_pkt, + (unsigned int) c->mvs_info.ul_cb); + } else if (data->opcode == VSS_ISTREAM_EVT_REQUEST_DEC_BUFFER) { + struct cvs_send_dec_buf_cmd send_dec_buf; + int ret = 0; + uint32_t pkt_len = 0; + + if (c->mvs_info.dl_cb != NULL) { + send_dec_buf.dec_buf.media_id = c->mvs_info.media_type; + + c->mvs_info.dl_cb( + (uint8_t *)&send_dec_buf.dec_buf.packet_data, + &pkt_len, + c->mvs_info.private_data); + + send_dec_buf.hdr.hdr_field = + APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD, + APR_HDR_LEN(APR_HDR_SIZE), + APR_PKT_VER); + send_dec_buf.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE, + sizeof(send_dec_buf.dec_buf.media_id) + pkt_len); + send_dec_buf.hdr.src_port = v->session_id; + send_dec_buf.hdr.dest_port = voice_get_cvs_handle(v); + send_dec_buf.hdr.token = 0; + send_dec_buf.hdr.opcode = + VSS_ISTREAM_EVT_SEND_DEC_BUFFER; + + ret = apr_send_pkt(c->apr_q6_cvs, + (uint32_t *) &send_dec_buf); + if (ret < 0) { + pr_err("%s: Error %d sending DEC_BUF\n", + __func__, ret); + goto fail; + } + } else + pr_debug("%s: dl_cb is NULL\n", __func__); + } else if (data->opcode == VSS_ISTREAM_EVT_SEND_DEC_BUFFER) { + pr_debug("Send dec buf resp\n"); + } else if (data->opcode == VOICE_EVT_GET_PARAM_ACK) { + rtac_make_voice_callback(RTAC_CVS, data->payload, + data->payload_size); + } else + pr_debug("Unknown opcode 0x%x\n", data->opcode); + +fail: + return 0; +} + +static int32_t qdsp_cvp_callback(struct apr_client_data *data, void *priv) +{ + uint32_t *ptr = NULL; + struct common_data *c = NULL; + struct 
voice_data *v = NULL; + int i = 0; + + if ((data == NULL) || (priv == NULL)) { + pr_err("%s: data or priv is NULL\n", __func__); + return -EINVAL; + } + + c = priv; + + v = voice_get_session(data->dest_port); + if (v == NULL) { + pr_err("%s: v is NULL\n", __func__); + + return -EINVAL; + } + + pr_debug("%s: Payload Length = %d, opcode=%x\n", __func__, + data->payload_size, data->opcode); + + if (data->opcode == RESET_EVENTS) { + pr_debug("%s: Reset event received in Voice service\n", + __func__); + + apr_reset(c->apr_q6_cvp); + c->apr_q6_cvp = NULL; + + /* Sub-system restart is applicable to all sessions. */ + for (i = 0; i < MAX_VOC_SESSIONS; i++) + c->voice[i].cvp_handle = 0; + + return 0; + } + + if (data->opcode == APR_BASIC_RSP_RESULT) { + if (data->payload_size) { + ptr = data->payload; + + pr_info("%x %x\n", ptr[0], ptr[1]); + + switch (ptr[0]) { + case VSS_IVOCPROC_CMD_CREATE_FULL_CONTROL_SESSION: + /*response from CVP */ + pr_debug("%s: cmd = 0x%x\n", __func__, ptr[0]); + if (!ptr[1]) { + voice_set_cvp_handle(v, data->src_port); + pr_debug("cvphdl=%d\n", data->src_port); + } else + pr_err("got NACK from CVP create \ + session response\n"); + v->cvp_state = CMD_STATUS_SUCCESS; + wake_up(&v->cvp_wait); + break; + case VSS_IVOCPROC_CMD_SET_DEVICE: + case VSS_IVOCPROC_CMD_SET_DEVICE_V2: + case VSS_IVOCPROC_CMD_SET_RX_VOLUME_INDEX: + case VSS_IVOCPROC_CMD_ENABLE: + case VSS_IVOCPROC_CMD_DISABLE: + case APRV2_IBASIC_CMD_DESTROY_SESSION: + case VSS_IVOCPROC_CMD_REGISTER_VOLUME_CAL_TABLE: + case VSS_IVOCPROC_CMD_DEREGISTER_VOLUME_CAL_TABLE: + case VSS_IVOCPROC_CMD_REGISTER_CALIBRATION_DATA: + case VSS_IVOCPROC_CMD_DEREGISTER_CALIBRATION_DATA: + case VSS_ICOMMON_CMD_MAP_MEMORY: + case VSS_ICOMMON_CMD_UNMAP_MEMORY: + case VSS_IVOCPROC_CMD_SET_MUTE: + v->cvp_state = CMD_STATUS_SUCCESS; + wake_up(&v->cvp_wait); + break; + case VOICE_CMD_SET_PARAM: + rtac_make_voice_callback(RTAC_CVP, ptr, + data->payload_size); + break; + default: + pr_debug("%s: not match cmd = 
0x%x\n", + __func__, ptr[0]); + break; + } + } + } else if (data->opcode == VOICE_EVT_GET_PARAM_ACK) { + rtac_make_voice_callback(RTAC_CVP, data->payload, + data->payload_size); + } + return 0; +} + + +static void voice_allocate_shared_memory(void) +{ + int i, j; + int offset = 0; + dma_addr_t paddr; + void *kvptr; + pr_debug("%s\n", __func__); + + /* FIXME: these resources should be allocated at probe time + * rather than from an initcall. + */ + kvptr = dma_alloc_writecombine(NULL, TOTAL_VOICE_CAL_SIZE, + &paddr, GFP_KERNEL); + if (WARN_ON(IS_ERR_OR_NULL(kvptr))) { + pr_err("%s: allocation failed\n", __func__); + return; + } + + /* Make all phys & buf point to the correct address */ + for (i = 0; i < NUM_VOICE_CAL_BUFFERS; i++) { + for (j = 0; j < NUM_VOICE_CAL_TYPES; j++) { + common.voice_cal[i].cal_data[j].paddr = + (uint32_t)(paddr + offset); + common.voice_cal[i].cal_data[j].kvaddr = + (uint32_t)((uint8_t *)kvptr + offset); + if (j == CVP_CAL) + offset += CVP_CAL_SIZE; + else + offset += CVS_CAL_SIZE; + + pr_debug("%s: kernel addr = 0x%x, phys addr = 0x%x\n", + __func__, + common.voice_cal[i].cal_data[j].kvaddr, + common.voice_cal[i].cal_data[j].paddr); + } + } +} + +static int __init voice_init(void) +{ + int rc = 0, i = 0; + + memset(&common, 0, sizeof(struct common_data)); + + /* Allocate shared memory */ + voice_allocate_shared_memory(); + + /* set default value */ + common.default_mute_val = 0; /* default is un-mute */ + common.default_vol_val = 0; + common.default_sample_val = 8000; + common.ec_ref_ext = false; + /* Initialize MVS info. 
*/ + common.mvs_info.network_type = VSS_NETWORK_ID_DEFAULT; + + mutex_init(&common.common_lock); + + for (i = 0; i < MAX_VOC_SESSIONS; i++) { + common.voice[i].session_id = SESSION_ID_BASE + i; + + /* initialize dev_rx and dev_tx */ + common.voice[i].dev_rx.volume = common.default_vol_val; + common.voice[i].dev_rx.mute = 0; + common.voice[i].dev_tx.mute = common.default_mute_val; + + common.voice[i].dev_tx.port_id = 1; + common.voice[i].dev_rx.port_id = 0; + common.voice[i].sidetone_gain = 0x512; + + common.voice[i].voc_state = VOC_INIT; + + init_waitqueue_head(&common.voice[i].mvm_wait); + init_waitqueue_head(&common.voice[i].cvs_wait); + init_waitqueue_head(&common.voice[i].cvp_wait); + + mutex_init(&common.voice[i].lock); + } + + return rc; +} + +device_initcall(voice_init); diff --git a/sound/soc/msm/qdsp6/q6voice.h b/sound/soc/msm/qdsp6/q6voice.h new file mode 100644 index 000000000000..49062e2aee86 --- /dev/null +++ b/sound/soc/msm/qdsp6/q6voice.h @@ -0,0 +1,1060 @@ +/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 and + * only version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#ifndef __QDSP6VOICE_H__ +#define __QDSP6VOICE_H__ + +#include <sound/qdsp6v2/apr.h> + +#define MAX_VOC_PKT_SIZE 642 +#define SESSION_NAME_LEN 21 + +#define VOC_REC_UPLINK 0x00 +#define VOC_REC_DOWNLINK 0x01 +#define VOC_REC_BOTH 0x02 + +/* Needed for VOIP & VOLTE support */ +/* Due to Q6 memory map issue */ +enum { + VOIP_CAL, + VOLTE_CAL, + NUM_VOICE_CAL_BUFFERS +}; + +enum { + CVP_CAL, + CVS_CAL, + NUM_VOICE_CAL_TYPES +}; + +struct voice_header { + uint32_t id; + uint32_t data_len; +}; + +struct voice_init { + struct voice_header hdr; + void *cb_handle; +}; + +/* Device information payload structure */ + +struct device_data { + uint32_t volume; /* in index */ + uint32_t mute; + uint32_t sample; + uint32_t enabled; + uint32_t dev_id; + uint32_t port_id; +}; + +struct voice_dev_route_state { + u16 rx_route_flag; + u16 tx_route_flag; +}; + +struct voice_rec_route_state { + u16 ul_flag; + u16 dl_flag; +}; + +enum { + VOC_INIT = 0, + VOC_RUN, + VOC_CHANGE, + VOC_RELEASE, + VOC_STANDBY, +}; + +/* Common */ +#define VSS_ICOMMON_CMD_SET_UI_PROPERTY 0x00011103 +/* Set a UI property */ +#define VSS_ICOMMON_CMD_MAP_MEMORY 0x00011025 +#define VSS_ICOMMON_CMD_UNMAP_MEMORY 0x00011026 +/* General shared memory; byte-accessible, 4 kB-aligned. */ +#define VSS_ICOMMON_MAP_MEMORY_SHMEM8_4K_POOL 3 + +struct vss_icommon_cmd_map_memory_t { + uint32_t phys_addr; + /* Physical address of a memory region; must be at least + * 4 kB aligned. + */ + + uint32_t mem_size; + /* Number of bytes in the region; should be a multiple of 32. */ + + uint16_t mem_pool_id; + /* Type of memory being provided. The memory ID implicitly defines + * the characteristics of the memory. The characteristics might include + * alignment type, permissions, etc. + * Memory pool ID. Possible values: + * 3 -- VSS_ICOMMON_MEM_TYPE_SHMEM8_4K_POOL. + */ +} __packed; + +struct vss_icommon_cmd_unmap_memory_t { + uint32_t phys_addr; + /* Physical address of a memory region; must be at least + * 4 kB aligned. 
+ */ +} __packed; + +struct vss_map_memory_cmd { + struct apr_hdr hdr; + struct vss_icommon_cmd_map_memory_t vss_map_mem; +} __packed; + +struct vss_unmap_memory_cmd { + struct apr_hdr hdr; + struct vss_icommon_cmd_unmap_memory_t vss_unmap_mem; +} __packed; + +/* TO MVM commands */ +#define VSS_IMVM_CMD_CREATE_PASSIVE_CONTROL_SESSION 0x000110FF +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_IMVM_CMD_SET_POLICY_DUAL_CONTROL 0x00011327 +/* + * VSS_IMVM_CMD_SET_POLICY_DUAL_CONTROL + * Description: This command is required to let MVM know + * who is in control of the session. + * Payload: Defined by vss_imvm_cmd_set_policy_dual_control_t. + * Result: Wait for APRV2_IBASIC_RSP_RESULT response. + */ + +#define VSS_IMVM_CMD_CREATE_FULL_CONTROL_SESSION 0x000110FE +/* Create a new full control MVM session. */ + +#define APRV2_IBASIC_CMD_DESTROY_SESSION 0x0001003C +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_IMVM_CMD_ATTACH_STREAM 0x0001123C +/* Attach a stream to the MVM. */ + +#define VSS_IMVM_CMD_DETACH_STREAM 0x0001123D +/* Detach a stream from the MVM. */ + +#define VSS_IMVM_CMD_ATTACH_VOCPROC 0x0001123E +/* Attach a vocproc to the MVM. The MVM will symmetrically connect this vocproc + * to all the streams currently attached to it. + */ + +#define VSS_IMVM_CMD_DETACH_VOCPROC 0x0001123F +/* Detach a vocproc from the MVM. The MVM will symmetrically disconnect this + * vocproc from all the streams to which it is currently attached. + */ + +#define VSS_IMVM_CMD_START_VOICE 0x00011190 +/* + * Start Voice call command. + * Wait for APRV2_IBASIC_RSP_RESULT response. + * No payload. + */ + +#define VSS_IMVM_CMD_STANDBY_VOICE 0x00011191 +/* No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_IMVM_CMD_STOP_VOICE 0x00011192 +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_ISTREAM_CMD_ATTACH_VOCPROC 0x000110F8 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. 
*/ + +#define VSS_ISTREAM_CMD_DETACH_VOCPROC 0x000110F9 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. */ + + +#define VSS_ISTREAM_CMD_SET_TTY_MODE 0x00011196 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_ICOMMON_CMD_SET_NETWORK 0x0001119C +/* Set the network type. */ + +#define VSS_ICOMMON_CMD_SET_VOICE_TIMING 0x000111E0 +/* Set the voice timing parameters. */ + +#define VSS_IWIDEVOICE_CMD_SET_WIDEVOICE 0x00011243 +/* Enable/disable WideVoice */ + +enum msm_audio_voc_rate { + VOC_0_RATE, /* Blank frame */ + VOC_8_RATE, /* 1/8 rate */ + VOC_4_RATE, /* 1/4 rate */ + VOC_2_RATE, /* 1/2 rate */ + VOC_1_RATE /* Full rate */ +}; + +struct vss_istream_cmd_set_tty_mode_t { + uint32_t mode; + /**< + * TTY mode. + * + * 0 : TTY disabled + * 1 : HCO + * 2 : VCO + * 3 : FULL + */ +} __packed; + +struct vss_istream_cmd_attach_vocproc_t { + uint16_t handle; + /**< Handle of vocproc being attached. */ +} __packed; + +struct vss_istream_cmd_detach_vocproc_t { + uint16_t handle; + /**< Handle of vocproc being detached. */ +} __packed; + +struct vss_imvm_cmd_attach_stream_t { + uint16_t handle; + /* The stream handle to attach. */ +} __packed; + +struct vss_imvm_cmd_detach_stream_t { + uint16_t handle; + /* The stream handle to detach. */ +} __packed; + +struct vss_icommon_cmd_set_network_t { + uint32_t network_id; + /* Network ID. (Refer to VSS_NETWORK_ID_XXX). */ +} __packed; + +struct vss_icommon_cmd_set_voice_timing_t { + uint16_t mode; + /* + * The vocoder frame synchronization mode. + * + * 0 : No frame sync. + * 1 : Hard VFR (20ms Vocoder Frame Reference interrupt). + */ + uint16_t enc_offset; + /* + * The offset in microseconds from the VFR to deliver a Tx vocoder + * packet. The offset should be less than 20000us. + */ + uint16_t dec_req_offset; + /* + * The offset in microseconds from the VFR to request for an Rx vocoder + * packet. The offset should be less than 20000us. 
+ */ + uint16_t dec_offset; + /* + * The offset in microseconds from the VFR to indicate the deadline to + * receive an Rx vocoder packet. The offset should be less than 20000us. + * Rx vocoder packets received after this deadline are not guaranteed to + * be processed. + */ +} __packed; + +struct vss_imvm_cmd_create_control_session_t { + char name[SESSION_NAME_LEN]; + /* + * A variable-sized stream name. + * + * The stream name size is the payload size minus the size of the other + * fields. + */ +} __packed; + + +struct vss_imvm_cmd_set_policy_dual_control_t { + bool enable_flag; + /* Set to TRUE to enable modem state machine control */ +} __packed; + +struct vss_iwidevoice_cmd_set_widevoice_t { + uint32_t enable; + /* WideVoice enable/disable; possible values: + * - 0 -- WideVoice disabled + * - 1 -- WideVoice enabled + */ +} __packed; + +struct mvm_attach_vocproc_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_attach_vocproc_t mvm_attach_cvp_handle; +} __packed; + +struct mvm_detach_vocproc_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_detach_vocproc_t mvm_detach_cvp_handle; +} __packed; + +struct mvm_create_ctl_session_cmd { + struct apr_hdr hdr; + struct vss_imvm_cmd_create_control_session_t mvm_session; +} __packed; + +struct mvm_modem_dual_control_session_cmd { + struct apr_hdr hdr; + struct vss_imvm_cmd_set_policy_dual_control_t voice_ctl; +} __packed; + +struct mvm_set_tty_mode_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_set_tty_mode_t tty_mode; +} __packed; + +struct mvm_attach_stream_cmd { + struct apr_hdr hdr; + struct vss_imvm_cmd_attach_stream_t attach_stream; +} __packed; + +struct mvm_detach_stream_cmd { + struct apr_hdr hdr; + struct vss_imvm_cmd_detach_stream_t detach_stream; +} __packed; + +struct mvm_set_network_cmd { + struct apr_hdr hdr; + struct vss_icommon_cmd_set_network_t network; +} __packed; + +struct mvm_set_voice_timing_cmd { + struct apr_hdr hdr; + struct vss_icommon_cmd_set_voice_timing_t timing; +} __packed; + 
+struct mvm_set_widevoice_enable_cmd { + struct apr_hdr hdr; + struct vss_iwidevoice_cmd_set_widevoice_t vss_set_wv; +} __packed; + +/* TO CVS commands */ +#define VSS_ISTREAM_CMD_CREATE_PASSIVE_CONTROL_SESSION 0x00011140 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_ISTREAM_CMD_CREATE_FULL_CONTROL_SESSION 0x000110F7 +/* Create a new full control stream session. */ + +#define APRV2_IBASIC_CMD_DESTROY_SESSION 0x0001003C + +#define VSS_ISTREAM_CMD_SET_MUTE 0x00011022 + +#define VSS_ISTREAM_CMD_REGISTER_CALIBRATION_DATA 0x00011279 + +#define VSS_ISTREAM_CMD_DEREGISTER_CALIBRATION_DATA 0x0001127A + +#define VSS_ISTREAM_CMD_SET_MEDIA_TYPE 0x00011186 +/* Set media type on the stream. */ + +#define VSS_ISTREAM_EVT_SEND_ENC_BUFFER 0x00011015 +/* Event sent by the stream to its client to provide an encoded packet. */ + +#define VSS_ISTREAM_EVT_REQUEST_DEC_BUFFER 0x00011017 +/* Event sent by the stream to its client requesting for a decoder packet. + * The client should respond with a VSS_ISTREAM_EVT_SEND_DEC_BUFFER event. + */ + +#define VSS_ISTREAM_EVT_SEND_DEC_BUFFER 0x00011016 +/* Event sent by the client to the stream in response to a + * VSS_ISTREAM_EVT_REQUEST_DEC_BUFFER event, providing a decoder packet. + */ + +#define VSS_ISTREAM_CMD_VOC_AMR_SET_ENC_RATE 0x0001113E +/* Set AMR encoder rate. */ + +#define VSS_ISTREAM_CMD_VOC_AMRWB_SET_ENC_RATE 0x0001113F +/* Set AMR-WB encoder rate. */ + +#define VSS_ISTREAM_CMD_CDMA_SET_ENC_MINMAX_RATE 0x00011019 +/* Set encoder minimum and maximum rate. */ + +#define VSS_ISTREAM_CMD_SET_ENC_DTX_MODE 0x0001101D +/* Set encoder DTX mode. */ + +#define MODULE_ID_VOICE_MODULE_FENS 0x00010EEB +#define MODULE_ID_VOICE_MODULE_ST 0x00010EE3 +#define VOICE_PARAM_MOD_ENABLE 0x00010E00 +#define MOD_ENABLE_PARAM_LEN 4 + +#define VSS_ISTREAM_CMD_START_PLAYBACK 0x00011238 +/* Start in-call music delivery on the Tx voice path. 
*/ + +#define VSS_ISTREAM_CMD_STOP_PLAYBACK 0x00011239 +/* Stop the in-call music delivery on the Tx voice path. */ + +#define VSS_ISTREAM_CMD_START_RECORD 0x00011236 +/* Start in-call conversation recording. */ +#define VSS_ISTREAM_CMD_STOP_RECORD 0x00011237 +/* Stop in-call conversation recording. */ + +#define VSS_TAP_POINT_NONE 0x00010F78 +/* Indicates no tapping for specified path. */ + +#define VSS_TAP_POINT_STREAM_END 0x00010F79 +/* Indicates that specified path should be tapped at the end of the stream. */ + +struct vss_istream_cmd_start_record_t { + uint32_t rx_tap_point; + /* Tap point to use on the Rx path. Supported values are: + * VSS_TAP_POINT_NONE : Do not record Rx path. + * VSS_TAP_POINT_STREAM_END : Rx tap point is at the end of the stream. + */ + uint32_t tx_tap_point; + /* Tap point to use on the Tx path. Supported values are: + * VSS_TAP_POINT_NONE : Do not record tx path. + * VSS_TAP_POINT_STREAM_END : Tx tap point is at the end of the stream. + */ +} __packed; + +struct vss_istream_cmd_create_passive_control_session_t { + char name[SESSION_NAME_LEN]; + /**< + * A variable-sized stream name. + * + * The stream name size is the payload size minus the size of the other + * fields. + */ +} __packed; + +struct vss_istream_cmd_set_mute_t { + uint16_t direction; + /**< + * 0 : TX only + * 1 : RX only + * 2 : TX and Rx + */ + uint16_t mute_flag; + /**< + * Mute, un-mute. + * + * 0 : Silence disable + * 1 : Silence enable + * 2 : CNG enable. Applicable to TX only. If set on RX behavior + * will be the same as 1 + */ +} __packed; + +struct vss_istream_cmd_create_full_control_session_t { + uint16_t direction; + /* + * Stream direction. + * + * 0 : TX only + * 1 : RX only + * 2 : TX and RX + * 3 : TX and RX loopback + */ + uint32_t enc_media_type; + /* Tx vocoder type. (Refer to VSS_MEDIA_ID_XXX). */ + uint32_t dec_media_type; + /* Rx vocoder type. (Refer to VSS_MEDIA_ID_XXX). */ + uint32_t network_id; + /* Network ID. (Refer to VSS_NETWORK_ID_XXX). 
*/ + char name[SESSION_NAME_LEN]; + /* + * A variable-sized stream name. + * + * The stream name size is the payload size minus the size of the other + * fields. + */ +} __packed; + +struct vss_istream_cmd_set_media_type_t { + uint32_t rx_media_id; + /* Set the Rx vocoder type. (Refer to VSS_MEDIA_ID_XXX). */ + uint32_t tx_media_id; + /* Set the Tx vocoder type. (Refer to VSS_MEDIA_ID_XXX). */ +} __packed; + +struct vss_istream_evt_send_enc_buffer_t { + uint32_t media_id; + /* Media ID of the packet. */ + uint8_t packet_data[MAX_VOC_PKT_SIZE]; + /* Packet data buffer. */ +} __packed; + +struct vss_istream_evt_send_dec_buffer_t { + uint32_t media_id; + /* Media ID of the packet. */ + uint8_t packet_data[MAX_VOC_PKT_SIZE]; + /* Packet data. */ +} __packed; + +struct vss_istream_cmd_voc_amr_set_enc_rate_t { + uint32_t mode; + /* Set the AMR encoder rate. + * + * 0x00000000 : 4.75 kbps + * 0x00000001 : 5.15 kbps + * 0x00000002 : 5.90 kbps + * 0x00000003 : 6.70 kbps + * 0x00000004 : 7.40 kbps + * 0x00000005 : 7.95 kbps + * 0x00000006 : 10.2 kbps + * 0x00000007 : 12.2 kbps + */ +} __packed; + +struct vss_istream_cmd_voc_amrwb_set_enc_rate_t { + uint32_t mode; + /* Set the AMR-WB encoder rate. + * + * 0x00000000 : 6.60 kbps + * 0x00000001 : 8.85 kbps + * 0x00000002 : 12.65 kbps + * 0x00000003 : 14.25 kbps + * 0x00000004 : 15.85 kbps + * 0x00000005 : 18.25 kbps + * 0x00000006 : 19.85 kbps + * 0x00000007 : 23.05 kbps + * 0x00000008 : 23.85 kbps + */ +} __packed; + +struct vss_istream_cmd_cdma_set_enc_minmax_rate_t { + uint16_t min_rate; + /* Set the lower bound encoder rate. + * + * 0x0000 : Blank frame + * 0x0001 : Eighth rate + * 0x0002 : Quarter rate + * 0x0003 : Half rate + * 0x0004 : Full rate + */ + uint16_t max_rate; + /* Set the upper bound encoder rate. 
+ * + * 0x0000 : Blank frame + * 0x0001 : Eighth rate + * 0x0002 : Quarter rate + * 0x0003 : Half rate + * 0x0004 : Full rate + */ +} __packed; + +struct vss_istream_cmd_set_enc_dtx_mode_t { + uint32_t enable; + /* Toggle DTX on or off. + * + * 0 : Disables DTX + * 1 : Enables DTX + */ +} __packed; + +struct vss_istream_cmd_register_calibration_data_t { + uint32_t phys_addr; + /* Physical address to be registered with stream. The calibration data + * is stored at this address. + */ + uint32_t mem_size; + /* Size of the calibration data in bytes. */ +}; + +struct vss_icommon_cmd_set_ui_property_enable_t { + uint32_t module_id; + /* Unique ID of the module. */ + uint32_t param_id; + /* Unique ID of the parameter. */ + uint16_t param_size; + /* Size of the parameter in bytes: MOD_ENABLE_PARAM_LEN */ + uint16_t reserved; + /* Reserved; set to 0. */ + uint16_t enable; + uint16_t reserved_field; + /* Reserved; set to 0. */ +}; + +struct cvs_create_passive_ctl_session_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_create_passive_control_session_t cvs_session; +} __packed; + +struct cvs_create_full_ctl_session_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_create_full_control_session_t cvs_session; +} __packed; + +struct cvs_destroy_session_cmd { + struct apr_hdr hdr; +} __packed; + +struct cvs_set_mute_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_set_mute_t cvs_set_mute; +} __packed; + +struct cvs_set_media_type_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_set_media_type_t media_type; +} __packed; + +struct cvs_send_dec_buf_cmd { + struct apr_hdr hdr; + struct vss_istream_evt_send_dec_buffer_t dec_buf; +} __packed; + +struct cvs_set_amr_enc_rate_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_voc_amr_set_enc_rate_t amr_rate; +} __packed; + +struct cvs_set_amrwb_enc_rate_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_voc_amrwb_set_enc_rate_t amrwb_rate; +} __packed; + +struct cvs_set_cdma_enc_minmax_rate_cmd { + struct apr_hdr hdr; + 
struct vss_istream_cmd_cdma_set_enc_minmax_rate_t cdma_rate; +} __packed; + +struct cvs_set_enc_dtx_mode_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_set_enc_dtx_mode_t dtx_mode; +} __packed; + +struct cvs_register_cal_data_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_register_calibration_data_t cvs_cal_data; +} __packed; + +struct cvs_deregister_cal_data_cmd { + struct apr_hdr hdr; +} __packed; + +struct cvs_set_pp_enable_cmd { + struct apr_hdr hdr; + struct vss_icommon_cmd_set_ui_property_enable_t vss_set_pp; +} __packed; +struct cvs_start_record_cmd { + struct apr_hdr hdr; + struct vss_istream_cmd_start_record_t rec_mode; +} __packed; + +/* TO CVP commands */ + +#define VSS_IVOCPROC_CMD_CREATE_FULL_CONTROL_SESSION 0x000100C3 +/**< Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define APRV2_IBASIC_CMD_DESTROY_SESSION 0x0001003C + +#define VSS_IVOCPROC_CMD_SET_DEVICE 0x000100C4 + +#define VSS_IVOCPROC_CMD_SET_DEVICE_V2 0x000112C6 + +#define VSS_IVOCPROC_CMD_SET_VP3_DATA 0x000110EB + +#define VSS_IVOCPROC_CMD_SET_RX_VOLUME_INDEX 0x000110EE + +#define VSS_IVOCPROC_CMD_ENABLE 0x000100C6 +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. */ + +#define VSS_IVOCPROC_CMD_DISABLE 0x000110E1 +/**< No payload. Wait for APRV2_IBASIC_RSP_RESULT response. 
*/ + +#define VSS_IVOCPROC_CMD_REGISTER_CALIBRATION_DATA 0x00011275 +#define VSS_IVOCPROC_CMD_DEREGISTER_CALIBRATION_DATA 0x00011276 + +#define VSS_IVOCPROC_CMD_REGISTER_VOLUME_CAL_TABLE 0x00011277 +#define VSS_IVOCPROC_CMD_DEREGISTER_VOLUME_CAL_TABLE 0x00011278 + +#define VSS_IVOCPROC_TOPOLOGY_ID_NONE 0x00010F70 +#define VSS_IVOCPROC_TOPOLOGY_ID_TX_SM_ECNS 0x00010F71 +#define VSS_IVOCPROC_TOPOLOGY_ID_TX_DM_FLUENCE 0x00010F72 + +#define VSS_IVOCPROC_TOPOLOGY_ID_RX_DEFAULT 0x00010F77 + +/* Network IDs */ +#define VSS_NETWORK_ID_DEFAULT 0x00010037 +#define VSS_NETWORK_ID_VOIP_NB 0x00011240 +#define VSS_NETWORK_ID_VOIP_WB 0x00011241 +#define VSS_NETWORK_ID_VOIP_WV 0x00011242 + +/* Media types */ +#define VSS_MEDIA_ID_EVRC_MODEM 0x00010FC2 +/* 80-VF690-47 CDMA enhanced variable rate vocoder modem format. */ +#define VSS_MEDIA_ID_AMR_NB_MODEM 0x00010FC6 +/* 80-VF690-47 UMTS AMR-NB vocoder modem format. */ +#define VSS_MEDIA_ID_AMR_WB_MODEM 0x00010FC7 +/* 80-VF690-47 UMTS AMR-WB vocoder modem format. */ +#define VSS_MEDIA_ID_PCM_NB 0x00010FCB +#define VSS_MEDIA_ID_PCM_WB 0x00010FCC +/* Linear PCM (16-bit, little-endian). */ +#define VSS_MEDIA_ID_G711_ALAW 0x00010FCD +/* G.711 a-law (contains two 10ms vocoder frames). */ +#define VSS_MEDIA_ID_G711_MULAW 0x00010FCE +/* G.711 mu-law (contains two 10ms vocoder frames). */ +#define VSS_MEDIA_ID_G729 0x00010FD0 +/* G.729AB (contains two 10ms vocoder frames). */ +#define VSS_MEDIA_ID_4GV_NB_MODEM 0x00010FC3 +/* CDMA EVRC-B vocoder modem format */ +#define VSS_MEDIA_ID_4GV_WB_MODEM 0x00010FC4 +/* CDMA EVRC-WB vocoder modem format */ + +#define VSS_IVOCPROC_CMD_SET_MUTE 0x000110EF + +#define VOICE_CMD_SET_PARAM 0x00011006 +#define VOICE_CMD_GET_PARAM 0x00011007 +#define VOICE_EVT_GET_PARAM_ACK 0x00011008 + +/* Default AFE port ID. Applicable to Tx and Rx. */ +#define VSS_IVOCPROC_PORT_ID_NONE 0xFFFF + +struct vss_ivocproc_cmd_create_full_control_session_t { + uint16_t direction; + /* + * Stream direction. 
+ * 0 : TX only + * 1 : RX only + * 2 : TX and RX + */ + uint32_t tx_port_id; + /* + * TX device port ID which vocproc will connect to. If not supplying a + * port ID set to VSS_IVOCPROC_PORT_ID_NONE. + */ + uint32_t tx_topology_id; + /* + * Tx leg topology ID. If not supplying a topology ID set to + * VSS_IVOCPROC_TOPOLOGY_ID_NONE. + */ + uint32_t rx_port_id; + /* + * RX device port ID which vocproc will connect to. If not supplying a + * port ID set to VSS_IVOCPROC_PORT_ID_NONE. + */ + uint32_t rx_topology_id; + /* + * Rx leg topology ID. If not supplying a topology ID set to + * VSS_IVOCPROC_TOPOLOGY_ID_NONE. + */ + int32_t network_id; + /* + * Network ID. (Refer to VSS_NETWORK_ID_XXX). If not supplying a network + * ID set to VSS_NETWORK_ID_DEFAULT. + */ +} __packed; + +struct vss_ivocproc_cmd_set_volume_index_t { + uint16_t vol_index; + /**< + * Volume index utilized by the vocproc to index into the volume table + * provided in VSS_IVOCPROC_CMD_CACHE_VOLUME_CALIBRATION_TABLE and set + * volume on the VDSP. + */ +} __packed; + +struct vss_ivocproc_cmd_set_device_t { + uint32_t tx_port_id; + /**< + * TX device port ID which vocproc will connect to. + * VSS_IVOCPROC_PORT_ID_NONE means vocproc will not connect to any port. + */ + uint32_t tx_topology_id; + /**< + * TX leg topology ID. + * VSS_IVOCPROC_TOPOLOGY_ID_NONE means vocproc does not contain any + * pre/post-processing blocks and is pass-through. + */ + int32_t rx_port_id; + /**< + * RX device port ID which vocproc will connect to. + * VSS_IVOCPROC_PORT_ID_NONE means vocproc will not connect to any port. + */ + uint32_t rx_topology_id; + /**< + * RX leg topology ID. + * VSS_IVOCPROC_TOPOLOGY_ID_NONE means vocproc does not contain any + * pre/post-processing blocks and is pass-through. 
+ */ +} __packed; + +/* Internal EC */ +#define VSS_IVOCPROC_VOCPROC_MODE_EC_INT_MIXING 0x00010F7C + +/* External EC */ +#define VSS_IVOCPROC_VOCPROC_MODE_EC_EXT_MIXING 0x00010F7D + +struct vss_ivocproc_cmd_set_device_v2_t { + uint16_t tx_port_id; + /* Tx device port ID to which the vocproc connects. */ + uint32_t tx_topology_id; + /* Tx path topology ID. */ + uint16_t rx_port_id; + /* Rx device port ID to which the vocproc connects. */ + uint32_t rx_topology_id; + /* Rx path topology ID. */ + uint32_t vocproc_mode; + /* Vocproc mode. The supported values: + * VSS_IVOCPROC_VOCPROC_MODE_EC_INT_MIXING - 0x00010F7C + * VSS_IVOCPROC_VOCPROC_MODE_EC_EXT_MIXING - 0x00010F7D + */ + uint16_t ec_ref_port_id; + /* Port ID to which the vocproc connects for receiving + * echo cancellation reference signal. + */ +} __packed; + +struct vss_ivocproc_cmd_register_calibration_data_t { + uint32_t phys_addr; + /* Physical address to be registered with vocproc. Calibration data + * is stored at this address. + */ + uint32_t mem_size; + /* Size of the calibration data in bytes. */ +} __packed; + +struct vss_ivocproc_cmd_register_volume_cal_table_t { + uint32_t phys_addr; + /* Physical address to be registered with the vocproc. The volume + * calibration table is stored at this location. + */ + + uint32_t mem_size; + /* Size of the volume calibration table in bytes. */ +} __packed; + +struct vss_ivocproc_cmd_set_mute_t { + uint16_t direction; + /* + * 0 : TX only. + * 1 : RX only. + * 2 : TX and RX. + */ + uint16_t mute_flag; + /* + * Mute, un-mute. + * + * 0 : Disable. + * 1 : Enable. 
+ */ +} __packed; + +struct cvp_create_full_ctl_session_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_create_full_control_session_t cvp_session; +} __packed; + +struct cvp_command { + struct apr_hdr hdr; +} __packed; + +struct cvp_set_device_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_set_device_t cvp_set_device; +} __packed; + +struct cvp_set_device_cmd_v2 { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_set_device_v2_t cvp_set_device_v2; +} __packed; + +struct cvp_set_vp3_data_cmd { + struct apr_hdr hdr; +} __packed; + +struct cvp_set_rx_volume_index_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_set_volume_index_t cvp_set_vol_idx; +} __packed; + +struct cvp_register_cal_data_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_register_calibration_data_t cvp_cal_data; +} __packed; + +struct cvp_deregister_cal_data_cmd { + struct apr_hdr hdr; +} __packed; + +struct cvp_register_vol_cal_table_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_register_volume_cal_table_t cvp_vol_cal_tbl; +} __packed; + +struct cvp_deregister_vol_cal_table_cmd { + struct apr_hdr hdr; +} __packed; + +struct cvp_set_mute_cmd { + struct apr_hdr hdr; + struct vss_ivocproc_cmd_set_mute_t cvp_set_mute; +} __packed; + +/* CB for up-link packets. */ +typedef void (*ul_cb_fn)(uint8_t *voc_pkt, + uint32_t pkt_len, + void *private_data); + +/* CB for down-link packets. 
*/ +typedef void (*dl_cb_fn)(uint8_t *voc_pkt, + uint32_t *pkt_len, + void *private_data); + + +struct mvs_driver_info { + uint32_t media_type; + uint32_t rate; + uint32_t network_type; + uint32_t dtx_mode; + ul_cb_fn ul_cb; + dl_cb_fn dl_cb; + void *private_data; +}; + +struct incall_rec_info { + uint32_t rec_enable; + uint32_t rec_mode; + uint32_t recording; +}; + +struct incall_music_info { + uint32_t play_enable; + uint32_t playing; + int count; + int force; +}; + +struct voice_data { + int voc_state;/*INIT, CHANGE, RELEASE, RUN */ + + wait_queue_head_t mvm_wait; + wait_queue_head_t cvs_wait; + wait_queue_head_t cvp_wait; + + /* cache the values related to Rx and Tx */ + struct device_data dev_rx; + struct device_data dev_tx; + + u32 mvm_state; + u32 cvs_state; + u32 cvp_state; + + /* Handle to MVM in the Q6 */ + u16 mvm_handle; + /* Handle to CVS in the Q6 */ + u16 cvs_handle; + /* Handle to CVP in the Q6 */ + u16 cvp_handle; + + struct mutex lock; + + uint16_t sidetone_gain; + uint8_t tty_mode; + /* widevoice enable value */ + uint8_t wv_enable; + /* slowtalk enable value */ + uint32_t st_enable; + /* FENC enable value */ + uint32_t fens_enable; + + struct voice_dev_route_state voc_route_state; + + u16 session_id; + + struct incall_rec_info rec_info; + + struct incall_music_info music_info; + + struct voice_rec_route_state rec_route_state; +}; + +struct cal_mem { + /* Physical Address */ + uint32_t paddr; + /* Kernel Virtual Address */ + uint32_t kvaddr; +}; + +struct cal_data { + struct cal_mem cal_data[NUM_VOICE_CAL_TYPES]; +}; + +#define MAX_VOC_SESSIONS 4 +#define SESSION_ID_BASE 0xFFF0 + +struct common_data { + /* these default values are for all devices */ + uint32_t default_mute_val; + uint32_t default_vol_val; + uint32_t default_sample_val; + bool ec_ref_ext; + uint16_t ec_port_id; + + /* APR to MVM in the Q6 */ + void *apr_q6_mvm; + /* APR to CVS in the Q6 */ + void *apr_q6_cvs; + /* APR to CVP in the Q6 */ + void *apr_q6_cvp; + + struct cal_data 
voice_cal[NUM_VOICE_CAL_BUFFERS]; + + struct mutex common_lock; + + struct mvs_driver_info mvs_info; + + struct voice_data voice[MAX_VOC_SESSIONS]; +}; + +void voc_register_mvs_cb(ul_cb_fn ul_cb, + dl_cb_fn dl_cb, + void *private_data); + +void voc_config_vocoder(uint32_t media_type, + uint32_t rate, + uint32_t network_type, + uint32_t dtx_mode); + +enum { + DEV_RX = 0, + DEV_TX, +}; + +enum { + RX_PATH = 0, + TX_PATH, +}; + +/* called by alsa driver */ +int voc_set_pp_enable(uint16_t session_id, uint32_t module_id, uint32_t enable); +int voc_get_pp_enable(uint16_t session_id, uint32_t module_id); +int voc_set_widevoice_enable(uint16_t session_id, uint32_t wv_enable); +uint32_t voc_get_widevoice_enable(uint16_t session_id); +uint8_t voc_get_tty_mode(uint16_t session_id); +int voc_set_tty_mode(uint16_t session_id, uint8_t tty_mode); +int voc_start_voice_call(uint16_t session_id); +int voc_standby_voice_call(uint16_t session_id); +int voc_resume_voice_call(uint16_t session_id); +int voc_end_voice_call(uint16_t session_id); +int voc_set_rxtx_port(uint16_t session_id, + uint32_t dev_port_id, + uint32_t dev_type); +int voc_set_rx_vol_index(uint16_t session_id, uint32_t dir, uint32_t voc_idx); +int voc_set_tx_mute(uint16_t session_id, uint32_t dir, uint32_t mute); +int voc_set_rx_device_mute(uint16_t session_id, uint32_t mute); +int voc_get_rx_device_mute(uint16_t session_id); +int voc_disable_cvp(uint16_t session_id); +int voc_enable_cvp(uint16_t session_id); +int voc_set_route_flag(uint16_t session_id, uint8_t path_dir, uint8_t set); +uint8_t voc_get_route_flag(uint16_t session_id, uint8_t path_dir); + +#define VOICE_SESSION_NAME "Voice session" +#define VOIP_SESSION_NAME "VoIP session" +#define VOLTE_SESSION_NAME "VoLTE session" +#define VOICE2_SESSION_NAME "Voice2 session" + +#define VOC_PATH_PASSIVE 0 +#define VOC_PATH_FULL 1 +#define VOC_PATH_VOLTE_PASSIVE 2 +#define VOC_PATH_VOICE2_PASSIVE 3 + +uint16_t voc_get_session_id(char *name); + +int 
voc_start_playback(uint32_t set); +int voc_start_record(uint32_t port_id, uint32_t set); +int voc_set_ext_ec_ref(uint16_t port_id, bool state); + +#endif |