Merge tag 'pm+acpi-3.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael...
authorLinus Torvalds <torvalds@linux-foundation.org>
Thu, 9 Oct 2014 20:07:43 +0000 (16:07 -0400)
committerLinus Torvalds <torvalds@linux-foundation.org>
Thu, 9 Oct 2014 20:07:43 +0000 (16:07 -0400)
Pull ACPI and power management updates from Rafael Wysocki:
 "Features-wise, to me the most important this time is a rework of
  wakeup interrupts handling in the core that makes them work
  consistently across all of the available sleep states, including
  suspend-to-idle.  Many thanks to Thomas Gleixner for his help with
  this work.

  Second is an update of the generic PM domains code that has been in
  need of some care for quite a while.  Unused code is being removed, DT
  support is being added and domains are now going to be attached to
  devices in bus type code in analogy with the ACPI PM domain.  The
  majority of work here was done by Ulf Hansson who also has been the
  most active developer this time.

  Apart from this we have a traditional ACPICA update, this time to
  upstream version 20140828 and a few ACPI wakeup interrupts handling
  patches on top of the general rework mentioned above.  There also are
  several cpufreq commits including renaming the cpufreq-cpu0 driver to
  cpufreq-dt, as this is what implements generic DT-based cpufreq
  support, and a new DT-based idle states infrastructure for cpuidle.

  In addition to that, the ACPI LPSS driver is updated, ACPI support for
  Apple machines is improved, a few bugs are fixed and a few cleanups
  are made all over.

  Finally, the Adaptive Voltage Scaling (AVS) subsystem now has a tree
  maintained by Kevin Hilman that will be merged through the PM tree.

  Numbers-wise, the generic PM domains update takes the lead this time
  with 32 non-merge commits, second is cpufreq (15 commits) and the 3rd
  place goes to the wakeup interrupts handling rework (13 commits).

  Specifics:

   - Rework the handling of wakeup IRQs by the IRQ core such that all of
     them will be switched over to "wakeup" mode in suspend_device_irqs()
     and in that mode the first interrupt will abort system suspend in
     progress or wake up the system if already in suspend-to-idle (or
     equivalent) without executing any interrupt handlers.  Among other
     things that eliminates the wakeup-related motivation to use the
     IRQF_NO_SUSPEND interrupt flag with interrupts which don't really
     need it and should not use it (Thomas Gleixner and Rafael Wysocki)

   - Switch over ACPI to handling wakeup interrupts with the help of the
     new mechanism introduced by the above IRQ core rework (Rafael Wysocki)

   - Rework the core generic PM domains code to eliminate code that's
     not used, add DT support and add a generic mechanism by which
     devices can be added to PM domains automatically during enumeration
     (Ulf Hansson, Geert Uytterhoeven and Tomasz Figa).
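
     On the DT side, a consumer device simply points at its provider
     through the new power-domains property (node names and compatible
     strings below are hypothetical, following the generic binding added
     in this series):

     ```
     power: power-controller@10023c00 {
             compatible = "vendor,power-controller";
             reg = <0x10023c00 0x20>;
             #power-domain-cells = <1>;
     };

     serial@12c00000 {
             compatible = "vendor,uart";
             reg = <0x12c00000 0x100>;
             power-domains = <&power 0>;
     };
     ```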

   - Add debugfs-based mechanics for debugging generic PM domains
     (Maciej Matraszek).

   - ACPICA update to upstream version 20140828.  Included are updates
     related to the SRAT and GTDT tables and the _PSx methods are in the
     METHOD_NAME list now (Bob Moore and Hanjun Guo).

   - Add _OSI("Darwin") support to the ACPI core (unfortunately, that
     can't really be done in a straightforward way) to prevent
     Thunderbolt from being turned off on Apple systems after boot (or
     after resume from system suspend) and rework the ACPI Smart Battery
     Subsystem (SBS) driver to work correctly with Apple platforms
     (Matthew Garrett and Andreas Noever).

   - ACPI LPSS (Low-Power Subsystem) driver update cleaning up the code,
     adding support for 133MHz I2C source clock on Intel Baytrail to it
     and making it avoid using UART RTS override with Auto Flow Control
     (Heikki Krogerus).

   - ACPI backlight updates removing the video_set_use_native_backlight
     quirk which is not necessary any more, making the code check the
     list of output devices returned by the _DOD method to avoid
     creating acpi_video interfaces that won't work and adding a quirk
     for Lenovo Ideapad Z570 (Hans de Goede, Aaron Lu and Stepan Bujnak)

   - New Win8 ACPI OSI quirks for some Dell laptops (Edward Lin)

   - Assorted ACPI code cleanups (Fabian Frederick, Rasmus Villemoes,
     Sudip Mukherjee, Yijing Wang, and Zhang Rui)

   - cpufreq core updates and cleanups (Viresh Kumar, Preeti U Murthy,
     Rasmus Villemoes)

   - cpufreq driver updates: cpufreq-cpu0/cpufreq-dt (driver name change
     among other things), ppc-corenet, powernv (Viresh Kumar, Preeti U
     Murthy, Shilpasri G Bhat, Lucas Stach)

   - cpuidle support for DT-based idle states infrastructure, new ARM64
     cpuidle driver, cpuidle core cleanups (Lorenzo Pieralisi, Rasmus
     Villemoes)

   - ARM big.LITTLE cpuidle driver updates: support for DT-based
     initialization and Exynos5800 compatible string (Lorenzo Pieralisi,
     Kevin Hilman)

   - Rework of the test_suspend kernel command line argument and a new
     trace event for console resume (Srinivas Pandruvada, Todd E Brandt)

   - Second attempt to optimize swsusp_free() (hibernation core) to make
     it avoid going through all PFNs which may be way too slow on some
     systems (Joerg Roedel)

   - devfreq updates (Paul Bolle, Punit Agrawal, Örjan Eide).

   - rockchip-io Adaptive Voltage Scaling (AVS) driver and AVS entry
     update in MAINTAINERS (Heiko Stübner, Kevin Hilman)

   - PM core fix related to clock management (Geert Uytterhoeven)

   - PM core's sysfs code cleanup (Johannes Berg)"

* tag 'pm+acpi-3.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (105 commits)
  ACPI / fan: printk replacement
  PM / clk: Fix crash in clocks management code if !CONFIG_PM_RUNTIME
  PM / Domains: Rename cpu_data to cpuidle_data
  cpufreq: cpufreq-dt: fix potential double put of cpu OF node
  cpufreq: cpu0: rename driver and internals to 'cpufreq_dt'
  PM / hibernate: Iterate over set bits instead of PFNs in swsusp_free()
  cpufreq: ppc-corenet: remove duplicate update of cpu_data
  ACPI / sleep: Rework the handling of ACPI GPE wakeup from suspend-to-idle
  PM / sleep: Rename platform suspend/resume functions in suspend.c
  PM / sleep: Export dpm_suspend_late/noirq() and dpm_resume_early/noirq()
  ACPICA: Introduce acpi_enable_all_wakeup_gpes()
  ACPICA: Clear all non-wakeup GPEs in acpi_hw_enable_wakeup_gpe_block()
  ACPI / video: check _DOD list when creating backlight devices
  PM / Domains: Move dev_pm_domain_attach|detach() to pm_domain.h
  cpufreq: Replace strnicmp with strncasecmp
  cpufreq: powernv: Set the cpus to nominal frequency during reboot/kexec
  cpufreq: powernv: Set the pstate of the last hotplugged out cpu in policy->cpus to minimum
  cpufreq: Allow stop CPU callback to be used by all cpufreq drivers
  PM / devfreq: exynos: Enable building exynos PPMU as module
  PM / devfreq: Export helper functions for drivers
  ...

115 files changed:
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt [deleted file]
Documentation/devicetree/bindings/cpufreq/cpufreq-dt.txt [new file with mode: 0644]
Documentation/devicetree/bindings/power/power_domain.txt [new file with mode: 0644]
Documentation/devicetree/bindings/power/rockchip-io-domain.txt [new file with mode: 0644]
Documentation/kernel-parameters.txt
Documentation/power/suspend-and-interrupts.txt [new file with mode: 0644]
MAINTAINERS
arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
arch/arm/mach-exynos/exynos.c
arch/arm/mach-exynos/pm_domains.c
arch/arm/mach-imx/imx27-dt.c
arch/arm/mach-imx/mach-imx51.c
arch/arm/mach-mvebu/pmsu.c
arch/arm/mach-omap2/pm.c
arch/arm/mach-s3c64xx/common.c
arch/arm/mach-s3c64xx/common.h
arch/arm/mach-s3c64xx/mach-anw6410.c
arch/arm/mach-s3c64xx/mach-crag6410.c
arch/arm/mach-s3c64xx/mach-hmt.c
arch/arm/mach-s3c64xx/mach-mini6410.c
arch/arm/mach-s3c64xx/mach-ncp.c
arch/arm/mach-s3c64xx/mach-real6410.c
arch/arm/mach-s3c64xx/mach-smartq5.c
arch/arm/mach-s3c64xx/mach-smartq7.c
arch/arm/mach-s3c64xx/mach-smdk6400.c
arch/arm/mach-s3c64xx/mach-smdk6410.c
arch/arm/mach-s3c64xx/pm.c
arch/arm/mach-shmobile/cpufreq.c
arch/arm/mach-shmobile/pm-r8a7779.c
arch/arm/mach-shmobile/pm-rmobile.c
arch/arm/mach-zynq/common.c
arch/x86/kernel/apic/io_apic.c
drivers/acpi/acpi_lpss.c
drivers/acpi/acpi_pnp.c
drivers/acpi/acpica/evxfgpe.c
drivers/acpi/acpica/hwgpe.c
drivers/acpi/acpica/utresrc.c
drivers/acpi/battery.c
drivers/acpi/blacklist.c
drivers/acpi/device_pm.c
drivers/acpi/fan.c
drivers/acpi/osl.c
drivers/acpi/pci_root.c
drivers/acpi/processor_core.c
drivers/acpi/sbs.c
drivers/acpi/sleep.c
drivers/acpi/utils.c
drivers/acpi/video.c
drivers/acpi/video_detect.c
drivers/amba/bus.c
drivers/base/platform.c
drivers/base/power/clock_ops.c
drivers/base/power/common.c
drivers/base/power/domain.c
drivers/base/power/domain_governor.c
drivers/base/power/main.c
drivers/base/power/sysfs.c
drivers/base/power/wakeup.c
drivers/base/syscore.c
drivers/cpufreq/Kconfig
drivers/cpufreq/Kconfig.arm
drivers/cpufreq/Makefile
drivers/cpufreq/cpufreq-cpu0.c [deleted file]
drivers/cpufreq/cpufreq-dt.c [new file with mode: 0644]
drivers/cpufreq/cpufreq.c
drivers/cpufreq/exynos4210-cpufreq.c
drivers/cpufreq/exynos4x12-cpufreq.c
drivers/cpufreq/exynos5250-cpufreq.c
drivers/cpufreq/highbank-cpufreq.c
drivers/cpufreq/powernv-cpufreq.c
drivers/cpufreq/ppc-corenet-cpufreq.c
drivers/cpufreq/s5pv210-cpufreq.c
drivers/cpuidle/Kconfig
drivers/cpuidle/Kconfig.arm
drivers/cpuidle/Kconfig.arm64 [new file with mode: 0644]
drivers/cpuidle/Makefile
drivers/cpuidle/cpuidle-arm64.c [new file with mode: 0644]
drivers/cpuidle/cpuidle-big_little.c
drivers/cpuidle/dt_idle_states.c [new file with mode: 0644]
drivers/cpuidle/dt_idle_states.h [new file with mode: 0644]
drivers/cpuidle/governor.c
drivers/devfreq/Kconfig
drivers/devfreq/devfreq.c
drivers/devfreq/exynos/exynos_ppmu.c
drivers/i2c/i2c-core.c
drivers/mmc/core/sdio_bus.c
drivers/pci/pcie/pme.c
drivers/platform/x86/fujitsu-laptop.c
drivers/power/avs/Kconfig
drivers/power/avs/Makefile
drivers/power/avs/rockchip-io-domain.c [new file with mode: 0644]
drivers/sh/pm_runtime.c
drivers/spi/spi.c
include/acpi/acnames.h
include/acpi/acpixf.h
include/acpi/actbl1.h
include/acpi/actbl3.h
include/linux/acpi.h
include/linux/cpufreq.h
include/linux/interrupt.h
include/linux/irq.h
include/linux/irqdesc.h
include/linux/pm.h
include/linux/pm_domain.h
include/linux/suspend.h
kernel/irq/chip.c
kernel/irq/internals.h
kernel/irq/manage.c
kernel/irq/pm.c
kernel/power/Kconfig
kernel/power/process.c
kernel/power/snapshot.c
kernel/power/suspend.c
kernel/power/suspend_test.c

index 8b4f7b7fe88b9fc181b41e4a6eb68406dd6094d3..abde1ea8a1198a6744a2bfd3d84e8054a7275f85 100644 (file)
@@ -8,6 +8,8 @@ Required Properties:
     * samsung,exynos4210-pd - for exynos4210 type power domain.
 - reg: physical base address of the controller and length of memory mapped
     region.
+- #power-domain-cells: number of cells in power domain specifier;
+    must be 0.
 
 Optional Properties:
 - clocks: List of clock handles. The parent clocks of the input clocks to the
@@ -29,6 +31,7 @@ Example:
        lcd0: power-domain-lcd0 {
                compatible = "samsung,exynos4210-pd";
                reg = <0x10023C00 0x10>;
+               #power-domain-cells = <0>;
        };
 
        mfc_pd: power-domain@10044060 {
@@ -37,12 +40,8 @@ Example:
                clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>,
                        <&clock CLK_MOUT_USER_ACLK333>;
                clock-names = "oscclk", "pclk0", "clk0";
+               #power-domain-cells = <0>;
        };
 
-Example of the node using power domain:
-
-       node {
-               /* ... */
-               samsung,power-domain = <&lcd0>;
-               /* ... */
-       };
+See Documentation/devicetree/bindings/power/power_domain.txt for description
+of consumer-side bindings.
diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt
deleted file mode 100644 (file)
index 366690c..0000000
+++ /dev/null
@@ -1,64 +0,0 @@
-Generic CPU0 cpufreq driver
-
-It is a generic cpufreq driver for CPU0 frequency management.  It
-supports both uniprocessor (UP) and symmetric multiprocessor (SMP)
-systems which share clock and voltage across all CPUs.
-
-Both required and optional properties listed below must be defined
-under node /cpus/cpu@0.
-
-Required properties:
-- None
-
-Optional properties:
-- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt for
-  details. OPPs *must* be supplied either via DT, i.e. this property, or
-  populated at runtime.
-- clock-latency: Specify the possible maximum transition latency for clock,
-  in unit of nanoseconds.
-- voltage-tolerance: Specify the CPU voltage tolerance in percentage.
-- #cooling-cells:
-- cooling-min-level:
-- cooling-max-level:
-     Please refer to Documentation/devicetree/bindings/thermal/thermal.txt.
-
-Examples:
-
-cpus {
-       #address-cells = <1>;
-       #size-cells = <0>;
-
-       cpu@0 {
-               compatible = "arm,cortex-a9";
-               reg = <0>;
-               next-level-cache = <&L2>;
-               operating-points = <
-                       /* kHz    uV */
-                       792000  1100000
-                       396000  950000
-                       198000  850000
-               >;
-               clock-latency = <61036>; /* two CLK32 periods */
-               #cooling-cells = <2>;
-               cooling-min-level = <0>;
-               cooling-max-level = <2>;
-       };
-
-       cpu@1 {
-               compatible = "arm,cortex-a9";
-               reg = <1>;
-               next-level-cache = <&L2>;
-       };
-
-       cpu@2 {
-               compatible = "arm,cortex-a9";
-               reg = <2>;
-               next-level-cache = <&L2>;
-       };
-
-       cpu@3 {
-               compatible = "arm,cortex-a9";
-               reg = <3>;
-               next-level-cache = <&L2>;
-       };
-};
diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-dt.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-dt.txt
new file mode 100644 (file)
index 0000000..e41c98f
--- /dev/null
@@ -0,0 +1,64 @@
+Generic cpufreq driver
+
+It is a generic DT based cpufreq driver for frequency management.  It supports
+both uniprocessor (UP) and symmetric multiprocessor (SMP) systems which share
+clock and voltage across all CPUs.
+
+Both required and optional properties listed below must be defined
+under node /cpus/cpu@0.
+
+Required properties:
+- None
+
+Optional properties:
+- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt for
+  details. OPPs *must* be supplied either via DT, i.e. this property, or
+  populated at runtime.
+- clock-latency: Specify the possible maximum transition latency for clock,
+  in unit of nanoseconds.
+- voltage-tolerance: Specify the CPU voltage tolerance in percentage.
+- #cooling-cells:
+- cooling-min-level:
+- cooling-max-level:
+     Please refer to Documentation/devicetree/bindings/thermal/thermal.txt.
+
+Examples:
+
+cpus {
+       #address-cells = <1>;
+       #size-cells = <0>;
+
+       cpu@0 {
+               compatible = "arm,cortex-a9";
+               reg = <0>;
+               next-level-cache = <&L2>;
+               operating-points = <
+                       /* kHz    uV */
+                       792000  1100000
+                       396000  950000
+                       198000  850000
+               >;
+               clock-latency = <61036>; /* two CLK32 periods */
+               #cooling-cells = <2>;
+               cooling-min-level = <0>;
+               cooling-max-level = <2>;
+       };
+
+       cpu@1 {
+               compatible = "arm,cortex-a9";
+               reg = <1>;
+               next-level-cache = <&L2>;
+       };
+
+       cpu@2 {
+               compatible = "arm,cortex-a9";
+               reg = <2>;
+               next-level-cache = <&L2>;
+       };
+
+       cpu@3 {
+               compatible = "arm,cortex-a9";
+               reg = <3>;
+               next-level-cache = <&L2>;
+       };
+};
diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
new file mode 100644 (file)
index 0000000..98c1667
--- /dev/null
@@ -0,0 +1,49 @@
+* Generic PM domains
+
+System on chip designs are often divided into multiple PM domains that can be
+used for power gating of selected IP blocks for power saving by reduced leakage
+current.
+
+This device tree binding can be used to bind PM domain consumer devices with
+their PM domains provided by PM domain providers. A PM domain provider can be
+represented by any node in the device tree and can provide one or more PM
+domains. A consumer node can refer to the provider by a phandle and a set of
+phandle arguments (so called PM domain specifiers) of length specified by the
+#power-domain-cells property in the PM domain provider node.
+
+==PM domain providers==
+
+Required properties:
+ - #power-domain-cells : Number of cells in a PM domain specifier;
+   Typically 0 for nodes representing a single PM domain and 1 for nodes
+   providing multiple PM domains (e.g. power controllers), but can be any value
+   as specified by device tree binding documentation of particular provider.
+
+Example:
+
+       power: power-controller@12340000 {
+               compatible = "foo,power-controller";
+               reg = <0x12340000 0x1000>;
+               #power-domain-cells = <1>;
+       };
+
+The node above defines a power controller that is a PM domain provider and
+expects one cell as its phandle argument.
+
+==PM domain consumers==
+
+Required properties:
+ - power-domains : A phandle and PM domain specifier as defined by bindings of
+                   the power controller specified by phandle.
+
+Example:
+
+       leaky-device@12350000 {
+               compatible = "foo,i-leak-current";
+               reg = <0x12350000 0x1000>;
+               power-domains = <&power 0>;
+       };
+
+The node above defines a typical PM domain consumer device, which is located
+inside a PM domain with index 0 of a power controller represented by a node
+with the label "power".
diff --git a/Documentation/devicetree/bindings/power/rockchip-io-domain.txt b/Documentation/devicetree/bindings/power/rockchip-io-domain.txt
new file mode 100644 (file)
index 0000000..6fbf6e7
--- /dev/null
@@ -0,0 +1,83 @@
+Rockchip SRAM for IO Voltage Domains:
+-------------------------------------
+
+IO domain voltages on some Rockchip SoCs are variable but need to be
+kept in sync between the regulators and the SoC using a special
+register.
+
+A specific example using rk3288:
+- If the regulator hooked up to a pin like SDMMC0_VDD is 3.3V then
+  bit 7 of GRF_IO_VSEL needs to be 0.  If the regulator hooked up to
+  that same pin is 1.8V then bit 7 of GRF_IO_VSEL needs to be 1.
+
+Said another way, this driver simply handles keeping bits in the SoC's
+general register file (GRF) in sync with the actual value of a voltage
+hooked up to the pins.
+
+Note that this driver specifically doesn't include:
+- any logic for deciding what voltage we should set regulators to
+- any logic for deciding whether regulators (or internal SoC blocks)
+  should have power or not have power
+
+If there were some other software that had the smarts of making
+decisions about regulators, it would work in conjunction with this
+driver.  When that other software adjusted a regulator's voltage then
+this driver would handle telling the SoC about it.  A good example is
+vqmmc for SD.  In that case the dw_mmc driver simply is told about a
+regulator.  It changes the regulator between 3.3V and 1.8V at the
+right time.  This driver notices the change and makes sure that the
+SoC is on the same page.
+
+
+Required properties:
+- compatible: should be one of:
+  - "rockchip,rk3188-io-voltage-domain" for rk3188
+  - "rockchip,rk3288-io-voltage-domain" for rk3288
+- rockchip,grf: phandle to the syscon managing the "general register files"
+
+
+You specify supplies using the standard regulator bindings by including
+a phandle to the relevant regulator.  All specified supplies must be able
+to report their voltage.  The IO Voltage Domain for any non-specified
+supplies will not be touched.
+
+Possible supplies for rk3188:
+- ap0-supply:    The supply connected to AP0_VCC.
+- ap1-supply:    The supply connected to AP1_VCC.
+- cif-supply:    The supply connected to CIF_VCC.
+- flash-supply:  The supply connected to FLASH_VCC.
+- lcdc0-supply:  The supply connected to LCD0_VCC.
+- lcdc1-supply:  The supply connected to LCD1_VCC.
+- vccio0-supply: The supply connected to VCCIO0.
+- vccio1-supply: The supply connected to VCCIO1.
+                 Sometimes also labeled VCCIO1 and VCCIO2.
+
+Possible supplies for rk3288:
+- audio-supply:  The supply connected to APIO4_VDD.
+- bb-supply:     The supply connected to APIO5_VDD.
+- dvp-supply:    The supply connected to DVPIO_VDD.
+- flash0-supply: The supply connected to FLASH0_VDD.  Typically for eMMC
+- flash1-supply: The supply connected to FLASH1_VDD.  Also known as SDIO1.
+- gpio30-supply: The supply connected to APIO1_VDD.
+- gpio1830-supply: The supply connected to APIO2_VDD.
+- lcdc-supply:   The supply connected to LCDC_VDD.
+- sdcard-supply: The supply connected to SDMMC0_VDD.
+- wifi-supply:   The supply connected to APIO3_VDD.  Also known as SDIO0.
+
+
+Example:
+
+       io-domains {
+               compatible = "rockchip,rk3288-io-voltage-domain";
+               rockchip,grf = <&grf>;
+
+               audio-supply = <&vcc18_codec>;
+               bb-supply = <&vcc33_io>;
+               dvp-supply = <&vcc_18>;
+               flash0-supply = <&vcc18_flashio>;
+               gpio1830-supply = <&vcc33_io>;
+               gpio30-supply = <&vcc33_pmuio>;
+               lcdc-supply = <&vcc33_lcd>;
+               sdcard-supply = <&vccio_sd>;
+               wifi-supply = <&vcc18_wl>;
+       };
index d9a452e8fb9b3bb1f34d89ddaa87cb8dd90b04bd..cc4ab2517abc64feb9c9f65c32940bde044018e8 100644 (file)
@@ -3321,11 +3321,13 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 
        tdfx=           [HW,DRM]
 
-       test_suspend=   [SUSPEND]
+       test_suspend=   [SUSPEND][,N]
                        Specify "mem" (for Suspend-to-RAM) or "standby" (for
-                       standby suspend) as the system sleep state to briefly
-                       enter during system startup.  The system is woken from
-                       this state using a wakeup-capable RTC alarm.
+                       standby suspend) or "freeze" (for suspend type freeze)
+                       as the system sleep state during system startup with
+                       the optional capability to repeat N number of times.
+                       The system is woken from this state using a
+                       wakeup-capable RTC alarm.
 
        thash_entries=  [KNL,NET]
                        Set number of hash buckets for TCP connection
diff --git a/Documentation/power/suspend-and-interrupts.txt b/Documentation/power/suspend-and-interrupts.txt
new file mode 100644 (file)
index 0000000..6966364
--- /dev/null
@@ -0,0 +1,123 @@
+System Suspend and Device Interrupts
+
+Copyright (C) 2014 Intel Corp.
+Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+
+
+Suspending and Resuming Device IRQs
+-----------------------------------
+
+Device interrupt request lines (IRQs) are generally disabled during system
+suspend after the "late" phase of suspending devices (that is, after all of the
+->prepare, ->suspend and ->suspend_late callbacks have been executed for all
+devices).  That is done by suspend_device_irqs().
+
+The rationale for doing so is that after the "late" phase of device suspend
+there is no legitimate reason why any interrupts from suspended devices should
+trigger and if any devices have not been suspended properly yet, it is better to
+block interrupts from them anyway.  Also, in the past we had problems with
+interrupt handlers for shared IRQs that device drivers implementing them were
+not prepared for interrupts triggering after their devices had been suspended.
+In some cases they would attempt to access, for example, memory address spaces
+of suspended devices and cause unpredictable behavior to ensue as a result.
+Unfortunately, such problems are very difficult to debug and the introduction
+of suspend_device_irqs(), along with the "noirq" phase of device suspend and
+resume, was the only practical way to mitigate them.
+
+Device IRQs are re-enabled during system resume, right before the "early" phase
+of resuming devices (that is, before starting to execute ->resume_early
+callbacks for devices).  The function doing that is resume_device_irqs().
+
+
+The IRQF_NO_SUSPEND Flag
+------------------------
+
+There are interrupts that can legitimately trigger during the entire system
+suspend-resume cycle, including the "noirq" phases of suspending and resuming
+devices as well as during the time when nonboot CPUs are taken offline and
+brought back online.  That applies to timer interrupts in the first place,
+but also to IPIs and to some other special-purpose interrupts.
+
+The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when
+requesting a special-purpose interrupt.  It causes suspend_device_irqs() to
+leave the corresponding IRQ enabled so as to allow the interrupt to work all
+the time as expected.
+
+Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one
+user of it.  Thus, if the IRQ is shared, all of the interrupt handlers installed
+for it will be executed as usual after suspend_device_irqs(), even if the
+IRQF_NO_SUSPEND flag was not passed to request_irq() (or equivalent) by some of
+the IRQ's users.  For this reason, using IRQF_NO_SUSPEND and IRQF_SHARED at the
+same time should be avoided.
+
+
+System Wakeup Interrupts, enable_irq_wake() and disable_irq_wake()
+------------------------------------------------------------------
+
+System wakeup interrupts generally need to be configured to wake up the system
+from sleep states, especially if they are used for different purposes (e.g. as
+I/O interrupts) in the working state.
+
+That may involve turning on a special signal handling logic within the platform
+(such as an SoC) so that signals from a given line are routed in a different way
+during system sleep so as to trigger a system wakeup when needed.  For example,
+the platform may include a dedicated interrupt controller used specifically for
+handling system wakeup events.  Then, if a given interrupt line is supposed to
+wake up the system from sleep states, the corresponding input of that interrupt
+controller needs to be enabled to receive signals from the line in question.
+After wakeup, it generally is better to disable that input to prevent the
+dedicated controller from triggering interrupts unnecessarily.
+
+The IRQ subsystem provides two helper functions to be used by device drivers for
+those purposes.  Namely, enable_irq_wake() turns on the platform's logic for
+handling the given IRQ as a system wakeup interrupt line and disable_irq_wake()
+turns that logic off.
+
+Calling enable_irq_wake() causes suspend_device_irqs() to treat the given IRQ
+in a special way.  Namely, the IRQ remains enabled, but on the first interrupt
+it will be disabled, marked as pending and "suspended" so that it will be
+re-enabled by resume_device_irqs() during the subsequent system resume.  Also
+the PM core is notified about the event which causes the system suspend in
+progress to be aborted (that doesn't have to happen immediately, but at one
+of the points where the suspend thread looks for pending wakeup events).
+
+This way every interrupt from a wakeup interrupt source will either cause the
+system suspend currently in progress to be aborted or wake up the system if
+already suspended.  However, after suspend_device_irqs() interrupt handlers are
+not executed for system wakeup IRQs.  They are only executed for IRQF_NO_SUSPEND
+IRQs at that time, but those IRQs should not be configured for system wakeup
+using enable_irq_wake().
+
+
+Interrupts and Suspend-to-Idle
+------------------------------
+
+Suspend-to-idle (also known as the "freeze" sleep state) is a relatively new
+system sleep state that works by idling all of the processors and waiting for
+interrupts right after the "noirq" phase of suspending devices.
+
+Of course, this means that all of the interrupts with the IRQF_NO_SUSPEND flag
+set will bring CPUs out of idle while in that state, but they will not cause the
+IRQ subsystem to trigger a system wakeup.
+
+System wakeup interrupts, in turn, will trigger wakeup from suspend-to-idle in
+analogy with what they do in the full system suspend case.  The only difference
+is that the wakeup from suspend-to-idle is signaled using the usual working
+state interrupt delivery mechanisms and doesn't require the platform to use
+any special interrupt handling logic for it to work.
+
+
+IRQF_NO_SUSPEND and enable_irq_wake()
+-------------------------------------
+
+There are no valid reasons to use both enable_irq_wake() and the IRQF_NO_SUSPEND
+flag on the same IRQ.
+
+First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND
+interrupts (interrupt handlers are invoked after suspend_device_irqs()) are
+directly at odds with the rules for handling system wakeup interrupts (interrupt
+handlers are not invoked after suspend_device_irqs()).
+
+Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and not
+to individual interrupt handlers, so sharing an IRQ between a system wakeup
+interrupt source and an IRQF_NO_SUSPEND interrupt source does not make sense.
index 75b98b4958c86de35faf011d7e707dbca43f53a9..40d4796886c9ba7f5225701286dc4c37970f6b0b 100644 (file)
@@ -8490,11 +8490,11 @@ S:      Maintained
 F:     Documentation/security/Smack.txt
 F:     security/smack/
 
-SMARTREFLEX DRIVERS FOR ADAPTIVE VOLTAGE SCALING (AVS)
+DRIVERS FOR ADAPTIVE VOLTAGE SCALING (AVS)
 M:     Kevin Hilman <khilman@kernel.org>
 M:     Nishanth Menon <nm@ti.com>
 S:     Maintained
-F:     drivers/power/avs/smartreflex.c
+F:     drivers/power/avs/
 F:     include/linux/power/smartreflex.h
 L:     linux-pm@vger.kernel.org
 
index a25c262326dcdcc2e681f6f2e900de8ea59891a2..322fd1519b09fe049a731b662de2408ff37074ff 100644 (file)
@@ -38,6 +38,7 @@
                        compatible = "arm,cortex-a15";
                        reg = <0>;
                        cci-control-port = <&cci_control1>;
+                       cpu-idle-states = <&CLUSTER_SLEEP_BIG>;
                };
 
                cpu1: cpu@1 {
@@ -45,6 +46,7 @@
                        compatible = "arm,cortex-a15";
                        reg = <1>;
                        cci-control-port = <&cci_control1>;
+                       cpu-idle-states = <&CLUSTER_SLEEP_BIG>;
                };
 
                cpu2: cpu@2 {
@@ -52,6 +54,7 @@
                        compatible = "arm,cortex-a7";
                        reg = <0x100>;
                        cci-control-port = <&cci_control2>;
+                       cpu-idle-states = <&CLUSTER_SLEEP_LITTLE>;
                };
 
                cpu3: cpu@3 {
@@ -59,6 +62,7 @@
                        compatible = "arm,cortex-a7";
                        reg = <0x101>;
                        cci-control-port = <&cci_control2>;
+                       cpu-idle-states = <&CLUSTER_SLEEP_LITTLE>;
                };
 
                cpu4: cpu@4 {
                        compatible = "arm,cortex-a7";
                        reg = <0x102>;
                        cci-control-port = <&cci_control2>;
+                       cpu-idle-states = <&CLUSTER_SLEEP_LITTLE>;
+               };
+
+               idle-states {
+                       CLUSTER_SLEEP_BIG: cluster-sleep-big {
+                               compatible = "arm,idle-state";
+                               local-timer-stop;
+                               entry-latency-us = <1000>;
+                               exit-latency-us = <700>;
+                               min-residency-us = <2000>;
+                       };
+
+                       CLUSTER_SLEEP_LITTLE: cluster-sleep-little {
+                               compatible = "arm,idle-state";
+                               local-timer-stop;
+                               entry-latency-us = <1000>;
+                               exit-latency-us = <500>;
+                               min-residency-us = <2500>;
+                       };
                };
        };
 
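The `idle-states` nodes added above give the cpuidle governor per-state latency and break-even figures. As a rough illustration (this is a user-space model, not the kernel's actual governor code — `state_usable` is a hypothetical name), a state is only worth entering when the predicted idle period covers its minimum residency and the wakeup-latency constraint tolerates its exit latency:

```c
#include <assert.h>
#include <stdbool.h>

/* Per-state figures copied from the idle-states nodes above (microseconds). */
struct idle_state {
	unsigned entry_latency_us;
	unsigned exit_latency_us;
	unsigned min_residency_us;
};

static const struct idle_state cluster_sleep_big    = { 1000, 700, 2000 };
static const struct idle_state cluster_sleep_little = { 1000, 500, 2500 };

/*
 * Illustrative governor check (simplified, not the kernel's code): enter a
 * state only if the predicted idle time reaches its break-even residency
 * and the PM QoS latency bound tolerates its exit latency.
 */
static bool state_usable(const struct idle_state *s,
			 unsigned predicted_idle_us,
			 unsigned latency_req_us)
{
	return predicted_idle_us >= s->min_residency_us &&
	       latency_req_us >= s->exit_latency_us;
}
```

With the values above, a 1.5 ms predicted idle period is too short for `CLUSTER_SLEEP_BIG` (2 ms break-even), and a 400 µs latency requirement rules out `CLUSTER_SLEEP_LITTLE` (500 µs exit latency).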
index 6a24e111d6e1819f8b259adc4d1a41306a627651..b89e5f35db841e50b64a5790af79e0c11c1e783f 100644 (file)
@@ -193,7 +193,6 @@ static void __init exynos_init_late(void)
                /* to be supported later */
                return;
 
-       pm_genpd_poweroff_unused();
        exynos_pm_init();
 }
 
index fd76e1b5a471a4f3ffcfd3a2c4539a16b2d26f40..20f267121b3e7876e4ab806ab6c2f655e9467499 100644 (file)
@@ -105,78 +105,6 @@ static int exynos_pd_power_off(struct generic_pm_domain *domain)
        return exynos_pd_power(domain, false);
 }
 
-static void exynos_add_device_to_domain(struct exynos_pm_domain *pd,
-                                        struct device *dev)
-{
-       int ret;
-
-       dev_dbg(dev, "adding to power domain %s\n", pd->pd.name);
-
-       while (1) {
-               ret = pm_genpd_add_device(&pd->pd, dev);
-               if (ret != -EAGAIN)
-                       break;
-               cond_resched();
-       }
-
-       pm_genpd_dev_need_restore(dev, true);
-}
-
-static void exynos_remove_device_from_domain(struct device *dev)
-{
-       struct generic_pm_domain *genpd = dev_to_genpd(dev);
-       int ret;
-
-       dev_dbg(dev, "removing from power domain %s\n", genpd->name);
-
-       while (1) {
-               ret = pm_genpd_remove_device(genpd, dev);
-               if (ret != -EAGAIN)
-                       break;
-               cond_resched();
-       }
-}
-
-static void exynos_read_domain_from_dt(struct device *dev)
-{
-       struct platform_device *pd_pdev;
-       struct exynos_pm_domain *pd;
-       struct device_node *node;
-
-       node = of_parse_phandle(dev->of_node, "samsung,power-domain", 0);
-       if (!node)
-               return;
-       pd_pdev = of_find_device_by_node(node);
-       if (!pd_pdev)
-               return;
-       pd = platform_get_drvdata(pd_pdev);
-       exynos_add_device_to_domain(pd, dev);
-}
-
-static int exynos_pm_notifier_call(struct notifier_block *nb,
-                                   unsigned long event, void *data)
-{
-       struct device *dev = data;
-
-       switch (event) {
-       case BUS_NOTIFY_BIND_DRIVER:
-               if (dev->of_node)
-                       exynos_read_domain_from_dt(dev);
-
-               break;
-
-       case BUS_NOTIFY_UNBOUND_DRIVER:
-               exynos_remove_device_from_domain(dev);
-
-               break;
-       }
-       return NOTIFY_DONE;
-}
-
-static struct notifier_block platform_nb = {
-       .notifier_call = exynos_pm_notifier_call,
-};
-
 static __init int exynos4_pm_init_power_domain(void)
 {
        struct platform_device *pdev;
@@ -202,7 +130,6 @@ static __init int exynos4_pm_init_power_domain(void)
                pd->base = of_iomap(np, 0);
                pd->pd.power_off = exynos_pd_power_off;
                pd->pd.power_on = exynos_pd_power_on;
-               pd->pd.of_node = np;
 
                pd->oscclk = clk_get(dev, "oscclk");
                if (IS_ERR(pd->oscclk))
@@ -228,15 +155,12 @@ static __init int exynos4_pm_init_power_domain(void)
                        clk_put(pd->oscclk);
 
 no_clk:
-               platform_set_drvdata(pdev, pd);
-
                on = __raw_readl(pd->base + 0x4) & INT_LOCAL_PWR_EN;
 
                pm_genpd_init(&pd->pd, NULL, !on);
+               of_genpd_add_provider_simple(np, &pd->pd);
        }
 
-       bus_register_notifier(&platform_bus_type, &platform_nb);
-
        return 0;
 }
 arch_initcall(exynos4_pm_init_power_domain);
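The Exynos change above drops the bus-notifier plumbing and instead registers each domain as a DT provider via `of_genpd_add_provider_simple()`, so bus code can look the domain up at device-attach time. A minimal user-space model of that provider registry (toy types and hypothetical helper names, only to show the lookup shape):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the kernel types involved. */
struct device_node { const char *name; };
struct generic_pm_domain { const char *name; };

#define MAX_PROVIDERS 8
static struct {
	struct device_node *np;
	struct generic_pm_domain *pd;
} providers[MAX_PROVIDERS];
static int nr_providers;

/* Analogous to of_genpd_add_provider_simple(): one domain per DT node. */
static int provider_add(struct device_node *np, struct generic_pm_domain *pd)
{
	if (nr_providers == MAX_PROVIDERS)
		return -1;
	providers[nr_providers].np = np;
	providers[nr_providers].pd = pd;
	nr_providers++;
	return 0;
}

/* What bus code does at attach time instead of a platform-bus notifier. */
static struct generic_pm_domain *provider_lookup(struct device_node *np)
{
	for (int i = 0; i < nr_providers; i++)
		if (providers[i].np == np)
			return providers[i].pd;
	return NULL;
}
```

The design point is that the mapping lives with the DT node, so per-SoC notifier code like the removed `exynos_pm_notifier_call()` becomes unnecessary.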
index 080e66c6a1d02722022f12a7991cf0b0b566f72e..dc8f1a6f45f20a9551045105400a30587d00b1a1 100644 (file)
@@ -20,7 +20,7 @@
 
 static void __init imx27_dt_init(void)
 {
-       struct platform_device_info devinfo = { .name = "cpufreq-cpu0", };
+       struct platform_device_info devinfo = { .name = "cpufreq-dt", };
 
        mxc_arch_reset_init_dt();
 
index c77deb3f08939f50304797ea5ac1c1b17983fef7..2c5fcaf8675b96bfdd4ee391871c1602eba5ae08 100644 (file)
@@ -51,7 +51,7 @@ static void __init imx51_ipu_mipi_setup(void)
 
 static void __init imx51_dt_init(void)
 {
-       struct platform_device_info devinfo = { .name = "cpufreq-cpu0", };
+       struct platform_device_info devinfo = { .name = "cpufreq-dt", };
 
        mxc_arch_reset_init_dt();
        imx51_ipu_mipi_setup();
index 8a70a51533fd4507e02f10cb576667500cfbf575..bbd8664d1bacb2732ec58072d630e70d963904de 100644 (file)
@@ -644,7 +644,7 @@ static int __init armada_xp_pmsu_cpufreq_init(void)
                }
        }
 
-       platform_device_register_simple("cpufreq-generic", -1, NULL, 0);
+       platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
        return 0;
 }
 
index 828aee9ea6a8b4146cff3b90ae1c1e909e0bc217..58920bc8807bce963536296bc7914db421c2b6aa 100644 (file)
@@ -282,7 +282,7 @@ static inline void omap_init_cpufreq(void)
        if (!of_have_populated_dt())
                devinfo.name = "omap-cpufreq";
        else
-               devinfo.name = "cpufreq-cpu0";
+               devinfo.name = "cpufreq-dt";
        platform_device_register_full(&devinfo);
 }
 
index 5c45aae675b63e86f21c44aea48e41bfccddb444..16547f2641a32d343bb8ea44adfdf106ad781f49 100644 (file)
@@ -440,8 +440,3 @@ void s3c64xx_restart(enum reboot_mode mode, const char *cmd)
        /* if all else fails, or mode was for soft, jump to 0 */
        soft_restart(0);
 }
-
-void __init s3c64xx_init_late(void)
-{
-       s3c64xx_pm_late_initcall();
-}
index 7043e7a3a67ed868f9e4465d7afae197619dcdc9..9eb8644129116e9354274283c0cfd4d66e032f9e 100644 (file)
@@ -23,7 +23,6 @@ void s3c64xx_init_irq(u32 vic0, u32 vic1);
 void s3c64xx_init_io(struct map_desc *mach_desc, int size);
 
 void s3c64xx_restart(enum reboot_mode mode, const char *cmd);
-void s3c64xx_init_late(void);
 
 void s3c64xx_clk_init(struct device_node *np, unsigned long xtal_f,
        unsigned long xusbxti_f, bool is_s3c6400, void __iomem *reg_base);
@@ -52,12 +51,6 @@ extern void s3c6410_map_io(void);
 #define s3c6410_init NULL
 #endif
 
-#ifdef CONFIG_PM
-int __init s3c64xx_pm_late_initcall(void);
-#else
-static inline int s3c64xx_pm_late_initcall(void) { return 0; }
-#endif
-
 #ifdef CONFIG_S3C64XX_PL080
 extern struct pl08x_platform_data s3c64xx_dma0_plat_data;
 extern struct pl08x_platform_data s3c64xx_dma1_plat_data;
index 60576dfbea8d42cf152cdd0aced552f10f888d27..6224c67f5061251568995a80d0d278b060c26840 100644 (file)
@@ -233,7 +233,6 @@ MACHINE_START(ANW6410, "A&W6410")
        .init_irq       = s3c6410_init_irq,
        .map_io         = anw6410_map_io,
        .init_machine   = anw6410_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index fe116334afda929958a2671ba224bb06d8fe88be..10b913baab2883562acb916ecef103d96b35b761 100644 (file)
@@ -857,7 +857,6 @@ MACHINE_START(WLF_CRAGG_6410, "Wolfson Cragganmore 6410")
        .init_irq       = s3c6410_init_irq,
        .map_io         = crag6410_map_io,
        .init_machine   = crag6410_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index 19e8feb908fdba504cda3f150202f053dbd21255..e4b087c58ee61ae14fbe69f8204c85d806909666 100644 (file)
@@ -277,7 +277,6 @@ MACHINE_START(HMT, "Airgoo-HMT")
        .init_irq       = s3c6410_init_irq,
        .map_io         = hmt_map_io,
        .init_machine   = hmt_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index 9cbc07602ef388cb78a97e0196899c041e7ce927..ab61af50bfb9c485e2488ca67579777a11ebafda 100644 (file)
@@ -366,7 +366,6 @@ MACHINE_START(MINI6410, "MINI6410")
        .init_irq       = s3c6410_init_irq,
        .map_io         = mini6410_map_io,
        .init_machine   = mini6410_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index 4bae7dc49eeabe9a9602f8efda7bb06e89eb931b..80cb1446f69f8241d20bd3e7f69d72b7a29b63a4 100644 (file)
@@ -103,7 +103,6 @@ MACHINE_START(NCP, "NCP")
        .init_irq       = s3c6410_init_irq,
        .map_io         = ncp_map_io,
        .init_machine   = ncp_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index fbad2af1ef1604a46ffa1a06dc0f387ef40006c0..85fa9598b9801f21833b75dc10d46f2017d12ac6 100644 (file)
@@ -335,7 +335,6 @@ MACHINE_START(REAL6410, "REAL6410")
        .init_irq       = s3c6410_init_irq,
        .map_io         = real6410_map_io,
        .init_machine   = real6410_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index dec4c08e834f4d5026284342568102d2a8e6a1a4..33224ab36fac8c8692ab84a7455c63f520184ada 100644 (file)
@@ -156,7 +156,6 @@ MACHINE_START(SMARTQ5, "SmartQ 5")
        .init_irq       = s3c6410_init_irq,
        .map_io         = smartq_map_io,
        .init_machine   = smartq5_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index 27b322069c7dd4420495eb26b370ffd01c8e84c0..fc7fece22fb0d8e5a12e83aac2c880154ae50d7c 100644 (file)
@@ -172,7 +172,6 @@ MACHINE_START(SMARTQ7, "SmartQ 7")
        .init_irq       = s3c6410_init_irq,
        .map_io         = smartq_map_io,
        .init_machine   = smartq7_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index 91074976834042646cb8a42e6d4ffe44eb1fd59c..6f425126a735eb05e66732097a9b050e76b2234b 100644 (file)
@@ -92,7 +92,6 @@ MACHINE_START(SMDK6400, "SMDK6400")
        .init_irq       = s3c6400_init_irq,
        .map_io         = smdk6400_map_io,
        .init_machine   = smdk6400_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index 1dc86d76b530032a75cdd644a34219909cf8f7c9..661eb662d05159861fb9fffd9b95705b8dbac238 100644 (file)
@@ -705,7 +705,6 @@ MACHINE_START(SMDK6410, "SMDK6410")
        .init_irq       = s3c6410_init_irq,
        .map_io         = smdk6410_map_io,
        .init_machine   = smdk6410_machine_init,
-       .init_late      = s3c64xx_init_late,
        .init_time      = samsung_timer_init,
        .restart        = s3c64xx_restart,
 MACHINE_END
index 6b37694fa3351fc55a66f0a5b0b4377e23ab734a..aaf7bea4032f4824a611f92710335aa717f70a33 100644 (file)
@@ -347,10 +347,3 @@ static __init int s3c64xx_pm_initcall(void)
        return 0;
 }
 arch_initcall(s3c64xx_pm_initcall);
-
-int __init s3c64xx_pm_late_initcall(void)
-{
-       pm_genpd_poweroff_unused();
-
-       return 0;
-}
index 8a24b2be46ae34165a7c613b1d483097ce67a861..57fbff024dcd5dd6ccf23afb94e09d6b0ae47796 100644 (file)
@@ -12,6 +12,6 @@
 
 int __init shmobile_cpufreq_init(void)
 {
-       platform_device_register_simple("cpufreq-cpu0", -1, NULL, 0);
+       platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
        return 0;
 }
index 69f70b7f7fb2ee406bcf0b0d75fd16c7378aabf8..82fe3d7f96624e3d68c3645ad7a1282dd308ceea 100644 (file)
@@ -87,7 +87,6 @@ static void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd)
        genpd->dev_ops.stop = pm_clk_suspend;
        genpd->dev_ops.start = pm_clk_resume;
        genpd->dev_ops.active_wakeup = pd_active_wakeup;
-       genpd->dev_irq_safe = true;
        genpd->power_off = pd_power_down;
        genpd->power_on = pd_power_up;
 
index a88079af7914afe0db35514b558d13e952c36e88..717e6413d29cb998cd067ece5cc715acc7bb1e93 100644 (file)
@@ -110,7 +110,6 @@ static void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
        genpd->dev_ops.stop             = pm_clk_suspend;
        genpd->dev_ops.start            = pm_clk_resume;
        genpd->dev_ops.active_wakeup    = rmobile_pd_active_wakeup;
-       genpd->dev_irq_safe             = true;
        genpd->power_off                = rmobile_pd_power_down;
        genpd->power_on                 = rmobile_pd_power_up;
        __rmobile_pd_power_up(rmobile_pd, false);
index 613c476872eb06c5d6ba76c249ae2df888bf5e92..26f92c28d22b7c2c3e8a40ecaf1b02ca34bb04b5 100644 (file)
@@ -110,7 +110,7 @@ static void __init zynq_init_late(void)
  */
 static void __init zynq_init_machine(void)
 {
-       struct platform_device_info devinfo = { .name = "cpufreq-cpu0", };
+       struct platform_device_info devinfo = { .name = "cpufreq-dt", };
        struct soc_device_attribute *soc_dev_attr;
        struct soc_device *soc_dev;
        struct device *parent = NULL;
index 337ce5a9b15c86bb7e9ea7747749fed1aee0d2d7..1183d545da1e95a56233f39baf8047a38c77fe22 100644 (file)
@@ -2623,6 +2623,7 @@ static struct irq_chip ioapic_chip __read_mostly = {
        .irq_eoi                = ack_apic_level,
        .irq_set_affinity       = native_ioapic_set_affinity,
        .irq_retrigger          = ioapic_retrigger_irq,
+       .flags                  = IRQCHIP_SKIP_SET_WAKE,
 };
 
 static inline void init_IO_APIC_traps(void)
@@ -3173,6 +3174,7 @@ static struct irq_chip msi_chip = {
        .irq_ack                = ack_apic_edge,
        .irq_set_affinity       = msi_set_affinity,
        .irq_retrigger          = ioapic_retrigger_irq,
+       .flags                  = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,
@@ -3271,6 +3273,7 @@ static struct irq_chip dmar_msi_type = {
        .irq_ack                = ack_apic_edge,
        .irq_set_affinity       = dmar_msi_set_affinity,
        .irq_retrigger          = ioapic_retrigger_irq,
+       .flags                  = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int arch_setup_dmar_msi(unsigned int irq)
@@ -3321,6 +3324,7 @@ static struct irq_chip hpet_msi_type = {
        .irq_ack = ack_apic_edge,
        .irq_set_affinity = hpet_msi_set_affinity,
        .irq_retrigger = ioapic_retrigger_irq,
+       .flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int default_setup_hpet_msi(unsigned int irq, unsigned int id)
@@ -3384,6 +3388,7 @@ static struct irq_chip ht_irq_chip = {
        .irq_ack                = ack_apic_edge,
        .irq_set_affinity       = ht_set_affinity,
        .irq_retrigger          = ioapic_retrigger_irq,
+       .flags                  = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int arch_setup_ht_irq(unsigned int irq, struct pci_dev *dev)
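The `IRQCHIP_SKIP_SET_WAKE` additions above matter because these chips provide no `.irq_set_wake` callback: without the flag, `irq_set_irq_wake()` fails with `-ENXIO`. A simplified user-space model of the core's decision (not the actual kernel code path):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define IRQCHIP_SKIP_SET_WAKE 0x1u

struct irq_chip {
	int (*irq_set_wake)(unsigned int irq, bool on);
	unsigned int flags;
};

/*
 * Simplified sketch: use the chip callback when present; treat the flag
 * as "no hardware programming needed"; otherwise fail the way
 * irq_set_irq_wake() does.
 */
static int set_irq_wake(const struct irq_chip *chip, unsigned int irq, bool on)
{
	if (chip->irq_set_wake)
		return chip->irq_set_wake(irq, on);
	if (chip->flags & IRQCHIP_SKIP_SET_WAKE)
		return 0;
	return -ENXIO;
}

/* An ioapic-like chip after the patch: no callback, but flagged. */
static const struct irq_chip patched_chip = { .flags = IRQCHIP_SKIP_SET_WAKE };
static const struct irq_chip legacy_chip  = { .flags = 0 };
```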
index b0ea767c86968ceac331294a6f74694c486f39c2..93d160661f4c94391534a1581e8f4e74edd5e608 100644 (file)
@@ -54,55 +54,58 @@ ACPI_MODULE_NAME("acpi_lpss");
 
 #define LPSS_PRV_REG_COUNT             9
 
-struct lpss_shared_clock {
-       const char *name;
-       unsigned long rate;
-       struct clk *clk;
-};
+/* LPSS Flags */
+#define LPSS_CLK                       BIT(0)
+#define LPSS_CLK_GATE                  BIT(1)
+#define LPSS_CLK_DIVIDER               BIT(2)
+#define LPSS_LTR                       BIT(3)
+#define LPSS_SAVE_CTX                  BIT(4)
 
 struct lpss_private_data;
 
 struct lpss_device_desc {
-       bool clk_required;
-       const char *clkdev_name;
-       bool ltr_required;
+       unsigned int flags;
        unsigned int prv_offset;
        size_t prv_size_override;
-       bool clk_divider;
-       bool clk_gate;
-       bool save_ctx;
-       struct lpss_shared_clock *shared_clock;
        void (*setup)(struct lpss_private_data *pdata);
 };
 
 static struct lpss_device_desc lpss_dma_desc = {
-       .clk_required = true,
-       .clkdev_name = "hclk",
+       .flags = LPSS_CLK,
 };
 
 struct lpss_private_data {
        void __iomem *mmio_base;
        resource_size_t mmio_size;
+       unsigned int fixed_clk_rate;
        struct clk *clk;
        const struct lpss_device_desc *dev_desc;
        u32 prv_reg_ctx[LPSS_PRV_REG_COUNT];
 };
 
+/* UART Component Parameter Register */
+#define LPSS_UART_CPR                  0xF4
+#define LPSS_UART_CPR_AFCE             BIT(4)
+
 static void lpss_uart_setup(struct lpss_private_data *pdata)
 {
        unsigned int offset;
-       u32 reg;
+       u32 val;
 
        offset = pdata->dev_desc->prv_offset + LPSS_TX_INT;
-       reg = readl(pdata->mmio_base + offset);
-       writel(reg | LPSS_TX_INT_MASK, pdata->mmio_base + offset);
-
-       offset = pdata->dev_desc->prv_offset + LPSS_GENERAL;
-       reg = readl(pdata->mmio_base + offset);
-       writel(reg | LPSS_GENERAL_UART_RTS_OVRD, pdata->mmio_base + offset);
+       val = readl(pdata->mmio_base + offset);
+       writel(val | LPSS_TX_INT_MASK, pdata->mmio_base + offset);
+
+       val = readl(pdata->mmio_base + LPSS_UART_CPR);
+       if (!(val & LPSS_UART_CPR_AFCE)) {
+               offset = pdata->dev_desc->prv_offset + LPSS_GENERAL;
+               val = readl(pdata->mmio_base + offset);
+               val |= LPSS_GENERAL_UART_RTS_OVRD;
+               writel(val, pdata->mmio_base + offset);
+       }
 }
 
-static void lpss_i2c_setup(struct lpss_private_data *pdata)
+static void byt_i2c_setup(struct lpss_private_data *pdata)
 {
        unsigned int offset;
        u32 val;
@@ -111,100 +114,56 @@ static void lpss_i2c_setup(struct lpss_private_data *pdata)
        val = readl(pdata->mmio_base + offset);
        val |= LPSS_RESETS_RESET_APB | LPSS_RESETS_RESET_FUNC;
        writel(val, pdata->mmio_base + offset);
-}
 
-static struct lpss_device_desc wpt_dev_desc = {
-       .clk_required = true,
-       .prv_offset = 0x800,
-       .ltr_required = true,
-       .clk_divider = true,
-       .clk_gate = true,
-};
+       if (readl(pdata->mmio_base + pdata->dev_desc->prv_offset))
+               pdata->fixed_clk_rate = 133000000;
+}
 
 static struct lpss_device_desc lpt_dev_desc = {
-       .clk_required = true,
+       .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
        .prv_offset = 0x800,
-       .ltr_required = true,
-       .clk_divider = true,
-       .clk_gate = true,
 };
 
 static struct lpss_device_desc lpt_i2c_dev_desc = {
-       .clk_required = true,
+       .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_LTR,
        .prv_offset = 0x800,
-       .ltr_required = true,
-       .clk_gate = true,
 };
 
 static struct lpss_device_desc lpt_uart_dev_desc = {
-       .clk_required = true,
+       .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
        .prv_offset = 0x800,
-       .ltr_required = true,
-       .clk_divider = true,
-       .clk_gate = true,
        .setup = lpss_uart_setup,
 };
 
 static struct lpss_device_desc lpt_sdio_dev_desc = {
+       .flags = LPSS_LTR,
        .prv_offset = 0x1000,
        .prv_size_override = 0x1018,
-       .ltr_required = true,
-};
-
-static struct lpss_shared_clock pwm_clock = {
-       .name = "pwm_clk",
-       .rate = 25000000,
 };
 
 static struct lpss_device_desc byt_pwm_dev_desc = {
-       .clk_required = true,
-       .save_ctx = true,
-       .shared_clock = &pwm_clock,
+       .flags = LPSS_SAVE_CTX,
 };
 
 static struct lpss_device_desc byt_uart_dev_desc = {
-       .clk_required = true,
+       .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
        .prv_offset = 0x800,
-       .clk_divider = true,
-       .clk_gate = true,
-       .save_ctx = true,
        .setup = lpss_uart_setup,
 };
 
 static struct lpss_device_desc byt_spi_dev_desc = {
-       .clk_required = true,
+       .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
        .prv_offset = 0x400,
-       .clk_divider = true,
-       .clk_gate = true,
-       .save_ctx = true,
 };
 
 static struct lpss_device_desc byt_sdio_dev_desc = {
-       .clk_required = true,
-};
-
-static struct lpss_shared_clock i2c_clock = {
-       .name = "i2c_clk",
-       .rate = 100000000,
+       .flags = LPSS_CLK,
 };
 
 static struct lpss_device_desc byt_i2c_dev_desc = {
-       .clk_required = true,
+       .flags = LPSS_CLK | LPSS_SAVE_CTX,
        .prv_offset = 0x800,
-       .save_ctx = true,
-       .shared_clock = &i2c_clock,
-       .setup = lpss_i2c_setup,
-};
-
-static struct lpss_shared_clock bsw_pwm_clock = {
-       .name = "pwm_clk",
-       .rate = 19200000,
-};
-
-static struct lpss_device_desc bsw_pwm_dev_desc = {
-       .clk_required = true,
-       .save_ctx = true,
-       .shared_clock = &bsw_pwm_clock,
+       .setup = byt_i2c_setup,
 };
 
 #else
@@ -237,7 +196,7 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {
        { "INT33FC", },
 
        /* Braswell LPSS devices */
-       { "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
+       { "80862288", LPSS_ADDR(byt_pwm_dev_desc) },
        { "8086228A", LPSS_ADDR(byt_uart_dev_desc) },
        { "8086228E", LPSS_ADDR(byt_spi_dev_desc) },
        { "808622C1", LPSS_ADDR(byt_i2c_dev_desc) },
@@ -251,7 +210,8 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {
        { "INT3436", LPSS_ADDR(lpt_sdio_dev_desc) },
        { "INT3437", },
 
-       { "INT3438", LPSS_ADDR(wpt_dev_desc) },
+       /* Wildcat Point LPSS devices */
+       { "INT3438", LPSS_ADDR(lpt_dev_desc) },
 
        { }
 };
@@ -276,7 +236,6 @@ static int register_device_clock(struct acpi_device *adev,
                                 struct lpss_private_data *pdata)
 {
        const struct lpss_device_desc *dev_desc = pdata->dev_desc;
-       struct lpss_shared_clock *shared_clock = dev_desc->shared_clock;
        const char *devname = dev_name(&adev->dev);
        struct clk *clk = ERR_PTR(-ENODEV);
        struct lpss_clk_data *clk_data;
@@ -289,12 +248,7 @@ static int register_device_clock(struct acpi_device *adev,
        clk_data = platform_get_drvdata(lpss_clk_dev);
        if (!clk_data)
                return -ENODEV;
-
-       if (dev_desc->clkdev_name) {
-               clk_register_clkdev(clk_data->clk, dev_desc->clkdev_name,
-                                   devname);
-               return 0;
-       }
+       clk = clk_data->clk;
 
        if (!pdata->mmio_base
            || pdata->mmio_size < dev_desc->prv_offset + LPSS_CLK_SIZE)
@@ -303,24 +257,19 @@ static int register_device_clock(struct acpi_device *adev,
        parent = clk_data->name;
        prv_base = pdata->mmio_base + dev_desc->prv_offset;
 
-       if (shared_clock) {
-               clk = shared_clock->clk;
-               if (!clk) {
-                       clk = clk_register_fixed_rate(NULL, shared_clock->name,
-                                                     "lpss_clk", 0,
-                                                     shared_clock->rate);
-                       shared_clock->clk = clk;
-               }
-               parent = shared_clock->name;
+       if (pdata->fixed_clk_rate) {
+               clk = clk_register_fixed_rate(NULL, devname, parent, 0,
+                                             pdata->fixed_clk_rate);
+               goto out;
        }
 
-       if (dev_desc->clk_gate) {
+       if (dev_desc->flags & LPSS_CLK_GATE) {
                clk = clk_register_gate(NULL, devname, parent, 0,
                                        prv_base, 0, 0, NULL);
                parent = devname;
        }
 
-       if (dev_desc->clk_divider) {
+       if (dev_desc->flags & LPSS_CLK_DIVIDER) {
                /* Prevent division by zero */
                if (!readl(prv_base))
                        writel(LPSS_CLK_DIVIDER_DEF_MASK, prv_base);
@@ -344,7 +293,7 @@ static int register_device_clock(struct acpi_device *adev,
                kfree(parent);
                kfree(clk_name);
        }
-
+out:
        if (IS_ERR(clk))
                return PTR_ERR(clk);
 
@@ -392,7 +341,10 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
 
        pdata->dev_desc = dev_desc;
 
-       if (dev_desc->clk_required) {
+       if (dev_desc->setup)
+               dev_desc->setup(pdata);
+
+       if (dev_desc->flags & LPSS_CLK) {
                ret = register_device_clock(adev, pdata);
                if (ret) {
                        /* Skip the device, but continue the namespace scan. */
@@ -413,9 +365,6 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
                goto err_out;
        }
 
-       if (dev_desc->setup)
-               dev_desc->setup(pdata);
-
        adev->driver_data = pdata;
        pdev = acpi_create_platform_device(adev);
        if (!IS_ERR_OR_NULL(pdev)) {
@@ -692,19 +641,19 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
 
        switch (action) {
        case BUS_NOTIFY_BOUND_DRIVER:
-               if (pdata->dev_desc->save_ctx)
+               if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
                        pdev->dev.pm_domain = &acpi_lpss_pm_domain;
                break;
        case BUS_NOTIFY_UNBOUND_DRIVER:
-               if (pdata->dev_desc->save_ctx)
+               if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
                        pdev->dev.pm_domain = NULL;
                break;
        case BUS_NOTIFY_ADD_DEVICE:
-               if (pdata->dev_desc->ltr_required)
+               if (pdata->dev_desc->flags & LPSS_LTR)
                        return sysfs_create_group(&pdev->dev.kobj,
                                                  &lpss_attr_group);
        case BUS_NOTIFY_DEL_DEVICE:
-               if (pdata->dev_desc->ltr_required)
+               if (pdata->dev_desc->flags & LPSS_LTR)
                        sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
        default:
                break;
@@ -721,7 +670,7 @@ static void acpi_lpss_bind(struct device *dev)
 {
        struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
 
-       if (!pdata || !pdata->mmio_base || !pdata->dev_desc->ltr_required)
+       if (!pdata || !pdata->mmio_base || !(pdata->dev_desc->flags & LPSS_LTR))
                return;
 
        if (pdata->mmio_size >= pdata->dev_desc->prv_offset + LPSS_LTR_SIZE)
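The LPSS rework above replaces five separate `bool` fields in `struct lpss_device_desc` with a single `flags` word, so each former boolean query becomes a mask test. A self-contained sketch of the pattern (helper names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

#define BIT(n)			(1u << (n))
#define LPSS_CLK		BIT(0)
#define LPSS_CLK_GATE		BIT(1)
#define LPSS_CLK_DIVIDER	BIT(2)
#define LPSS_LTR		BIT(3)
#define LPSS_SAVE_CTX		BIT(4)

struct lpss_device_desc {
	unsigned int flags;
	unsigned int prv_offset;
};

/* lpt_dev_desc expressed with the new flags word, as in the patch. */
static const struct lpss_device_desc lpt_dev_desc = {
	.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR,
	.prv_offset = 0x800,
};

/* The old ".ltr_required = true" style checks become mask tests. */
static bool ltr_required(const struct lpss_device_desc *d)
{
	return d->flags & LPSS_LTR;
}

static bool save_ctx(const struct lpss_device_desc *d)
{
	return d->flags & LPSS_SAVE_CTX;
}
```

One flags word keeps the descriptor initializers to a single line and lets new capabilities be added without growing the struct.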
index 1f8b20496f32698a76bf7958ded72ae9961e34d4..b193f842599902445015a219687310cdf3bfe9c9 100644 (file)
@@ -130,10 +130,6 @@ static const struct acpi_device_id acpi_pnp_device_ids[] = {
        {"PNP0401"},            /* ECP Printer Port */
        /* apple-gmux */
        {"APP000B"},
-       /* fujitsu-laptop.c */
-       {"FUJ02bf"},
-       {"FUJ02B1"},
-       {"FUJ02E3"},
        /* system */
        {"PNP0c02"},            /* General ID for reserving resources */
        {"PNP0c01"},            /* memory controller */
index 0cf159cc6e6d79085fa03ef15dcc91d9e53ec409..56710a03c9b0785229ac56470d82cd449f0478d8 100644 (file)
@@ -596,6 +596,38 @@ acpi_status acpi_enable_all_runtime_gpes(void)
 
 ACPI_EXPORT_SYMBOL(acpi_enable_all_runtime_gpes)
 
+/******************************************************************************
+ *
+ * FUNCTION:    acpi_enable_all_wakeup_gpes
+ *
+ * PARAMETERS:  None
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Enable all "wakeup" GPEs and disable all of the other GPEs, in
+ *              all GPE blocks.
+ *
+ ******************************************************************************/
+
+acpi_status acpi_enable_all_wakeup_gpes(void)
+{
+       acpi_status status;
+
+       ACPI_FUNCTION_TRACE(acpi_enable_all_wakeup_gpes);
+
+       status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS);
+       if (ACPI_FAILURE(status)) {
+               return_ACPI_STATUS(status);
+       }
+
+       status = acpi_hw_enable_all_wakeup_gpes();
+       (void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
+
+       return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_enable_all_wakeup_gpes)
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_install_gpe_block
index 2e6caabba07a1852b766164231da39527a5e5f44..ea62d40fd161c75c9a598ef1e317ecb3c81a4ede 100644 (file)
@@ -396,11 +396,11 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
        /* Examine each GPE Register within the block */
 
        for (i = 0; i < gpe_block->register_count; i++) {
-               if (!gpe_block->register_info[i].enable_for_wake) {
-                       continue;
-               }
 
-               /* Enable all "wake" GPEs in this register */
+               /*
+                * Enable all "wake" GPEs in this register and disable the
+                * remaining ones.
+                */
 
                status =
                    acpi_hw_write(gpe_block->register_info[i].enable_for_wake,
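The hwgpe.c change above removes the early `continue` for registers whose wake mask is zero: writing `enable_for_wake` to every register now also disables all non-wake GPEs as a side effect. A tiny model of that write loop (simulated registers, illustrative names):

```c
#include <assert.h>
#include <stdint.h>

#define NREGS 3

/* Simulated GPE enable registers and per-register wake masks. */
static uint8_t hw_enable[NREGS];
static const uint8_t enable_for_wake[NREGS] = { 0x01, 0x00, 0x80 };

/*
 * Sketch of the patched behavior: every register is written, even when
 * its wake mask is zero, so previously-enabled runtime GPEs are cleared.
 */
static void enable_wakeup_gpes(void)
{
	for (int i = 0; i < NREGS; i++)
		hw_enable[i] = enable_for_wake[i];
}
```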
index 14cb6c0c8be2b67df5a2693681c2028245b2363f..5cd017c7ac0ea58af1449742e53686f51bd2c8e9 100644 (file)
@@ -87,7 +87,9 @@ const char *acpi_gbl_io_decode[] = {
 
 const char *acpi_gbl_ll_decode[] = {
        "ActiveHigh",
-       "ActiveLow"
+       "ActiveLow",
+       "ActiveBoth",
+       "Reserved"
 };
 
 const char *acpi_gbl_max_decode[] = {
index 5fdfe65fe165d0a95b72e258a6c4584682f7019a..8ec8a89a20ab9d734b6fca390d60f3501dc591a3 100644 (file)
@@ -695,7 +695,7 @@ static void acpi_battery_quirks(struct acpi_battery *battery)
        if (battery->power_unit && dmi_name_in_vendors("LENOVO")) {
                const char *s;
                s = dmi_get_system_info(DMI_PRODUCT_VERSION);
-               if (s && !strnicmp(s, "ThinkPad", 8)) {
+               if (s && !strncasecmp(s, "ThinkPad", 8)) {
                        dmi_walk(find_battery, battery);
                        if (test_bit(ACPI_BATTERY_QUIRK_THINKPAD_MAH,
                                     &battery->flags) &&
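The battery hunk above swaps `strnicmp()` — a legacy kernel alias — for the standard `strncasecmp()`, with identical semantics for this prefix check. A stand-in for the quirk test (helper name is hypothetical; `strncasecmp()` is POSIX, declared in `<strings.h>`):

```c
#include <assert.h>
#include <stddef.h>
#include <strings.h>	/* strncasecmp() */

/* Case-insensitive "starts with ThinkPad" check, as in the quirk above. */
static int product_is_thinkpad(const char *product_version)
{
	return product_version &&
	       !strncasecmp(product_version, "ThinkPad", 8);
}
```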
index 36eb42e3b0bb80688a52d4748d19d762d777aec9..ed122e17636e32298129a8e28429ef86cb27a4fa 100644 (file)
@@ -247,8 +247,8 @@ static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
        },
 
        /*
-        * These machines will power on immediately after shutdown when
-        * reporting the Windows 2012 OSI.
+        * The wireless hotkey does not work on those machines when
+        * returning true for _OSI("Windows 2012")
         */
        {
        .callback = dmi_disable_osi_win8,
@@ -258,6 +258,38 @@ static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
                    DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7737"),
                },
        },
+       {
+       .callback = dmi_disable_osi_win8,
+       .ident = "Dell Inspiron 7537",
+       .matches = {
+                   DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+                   DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7537"),
+               },
+       },
+       {
+       .callback = dmi_disable_osi_win8,
+       .ident = "Dell Inspiron 5437",
+       .matches = {
+                   DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+                   DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5437"),
+               },
+       },
+       {
+       .callback = dmi_disable_osi_win8,
+       .ident = "Dell Inspiron 3437",
+       .matches = {
+                   DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+                   DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 3437"),
+               },
+       },
+       {
+       .callback = dmi_disable_osi_win8,
+       .ident = "Dell Vostro 3446",
+       .matches = {
+                   DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+                   DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3446"),
+               },
+       },
 
        /*
         * BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
index 67075f800e34cb69b3a28d852b9bc15b00ffa702..bea6896be1229b12d5983bb7a738db2c256fd3d1 100644 (file)
@@ -1040,6 +1040,40 @@ static struct dev_pm_domain acpi_general_pm_domain = {
        },
 };
 
+/**
+ * acpi_dev_pm_detach - Remove ACPI power management from the device.
+ * @dev: Device to take care of.
+ * @power_off: Whether or not to try to remove power from the device.
+ *
+ * Remove the device from the general ACPI PM domain and remove its wakeup
+ * notifier.  If @power_off is set, additionally remove power from the device if
+ * possible.
+ *
+ * Callers must ensure proper synchronization of this function with power
+ * management callbacks.
+ */
+static void acpi_dev_pm_detach(struct device *dev, bool power_off)
+{
+       struct acpi_device *adev = ACPI_COMPANION(dev);
+
+       if (adev && dev->pm_domain == &acpi_general_pm_domain) {
+               dev->pm_domain = NULL;
+               acpi_remove_pm_notifier(adev);
+               if (power_off) {
+                       /*
+                        * If the device's PM QoS resume latency limit or flags
+                        * have been exposed to user space, they have to be
+                        * hidden at this point, so that they don't affect the
+                        * choice of the low-power state to put the device into.
+                        */
+                       dev_pm_qos_hide_latency_limit(dev);
+                       dev_pm_qos_hide_flags(dev);
+                       acpi_device_wakeup(adev, ACPI_STATE_S0, false);
+                       acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
+               }
+       }
+}
+
 /**
  * acpi_dev_pm_attach - Prepare device for ACPI power management.
  * @dev: Device to prepare.
@@ -1072,42 +1106,9 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
                acpi_dev_pm_full_power(adev);
                acpi_device_wakeup(adev, ACPI_STATE_S0, false);
        }
+
+       dev->pm_domain->detach = acpi_dev_pm_detach;
        return 0;
 }
 EXPORT_SYMBOL_GPL(acpi_dev_pm_attach);
-
-/**
- * acpi_dev_pm_detach - Remove ACPI power management from the device.
- * @dev: Device to take care of.
- * @power_off: Whether or not to try to remove power from the device.
- *
- * Remove the device from the general ACPI PM domain and remove its wakeup
- * notifier.  If @power_off is set, additionally remove power from the device if
- * possible.
- *
- * Callers must ensure proper synchronization of this function with power
- * management callbacks.
- */
-void acpi_dev_pm_detach(struct device *dev, bool power_off)
-{
-       struct acpi_device *adev = ACPI_COMPANION(dev);
-
-       if (adev && dev->pm_domain == &acpi_general_pm_domain) {
-               dev->pm_domain = NULL;
-               acpi_remove_pm_notifier(adev);
-               if (power_off) {
-                       /*
-                        * If the device's PM QoS resume latency limit or flags
-                        * have been exposed to user space, they have to be
-                        * hidden at this point, so that they don't affect the
-                        * choice of the low-power state to put the device into.
-                        */
-                       dev_pm_qos_hide_latency_limit(dev);
-                       dev_pm_qos_hide_flags(dev);
-                       acpi_device_wakeup(adev, ACPI_STATE_S0, false);
-                       acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
-               }
-       }
-}
-EXPORT_SYMBOL_GPL(acpi_dev_pm_detach);
 #endif /* CONFIG_PM */
index 8acf53e6296605a013a6359299c3e9cf29bcbf67..5328b1090e08681dc4905585470fcd43b2b514f1 100644 (file)
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/types.h>
-#include <asm/uaccess.h>
+#include <linux/uaccess.h>
 #include <linux/thermal.h>
 #include <linux/acpi.h>
 
-#define PREFIX "ACPI: "
-
 #define ACPI_FAN_CLASS                 "fan"
 #define ACPI_FAN_FILE_STATE            "state"
 
@@ -127,8 +125,9 @@ static const struct thermal_cooling_device_ops fan_cooling_ops = {
 };
 
 /* --------------------------------------------------------------------------
-                                 Driver Interface
-   -------------------------------------------------------------------------- */
+ *                               Driver Interface
+ * --------------------------------------------------------------------------
+ */
 
 static int acpi_fan_add(struct acpi_device *device)
 {
@@ -143,7 +142,7 @@ static int acpi_fan_add(struct acpi_device *device)
 
        result = acpi_bus_update_power(device->handle, NULL);
        if (result) {
-               printk(KERN_ERR PREFIX "Setting initial power state\n");
+               dev_err(&device->dev, "Setting initial power state\n");
                goto end;
        }
 
@@ -168,10 +167,9 @@ static int acpi_fan_add(struct acpi_device *device)
                                   &device->dev.kobj,
                                   "device");
        if (result)
-               dev_err(&device->dev, "Failed to create sysfs link "
-                       "'device'\n");
+               dev_err(&device->dev, "Failed to create sysfs link 'device'\n");
 
-       printk(KERN_INFO PREFIX "%s [%s] (%s)\n",
+       dev_info(&device->dev, "ACPI: %s [%s] (%s)\n",
               acpi_device_name(device), acpi_device_bid(device),
               !device->power.state ? "on" : "off");
 
@@ -217,7 +215,7 @@ static int acpi_fan_resume(struct device *dev)
 
        result = acpi_bus_update_power(to_acpi_device(dev)->handle, NULL);
        if (result)
-               printk(KERN_ERR PREFIX "Error updating fan power state\n");
+               dev_err(dev, "Error updating fan power state\n");
 
        return result;
 }
index 3abe9b223ba717a644ecfd0a4edf401004f24807..9964f70be98de5f26df212a415bcf472aabd0c95 100644 (file)
@@ -152,6 +152,16 @@ static u32 acpi_osi_handler(acpi_string interface, u32 supported)
                        osi_linux.dmi ? " via DMI" : "");
        }
 
+       if (!strcmp("Darwin", interface)) {
+               /*
+                * Apple firmware will behave poorly if it receives positive
+                * answers to "Darwin" and any other OS. Respond positively
+                * to Darwin and then disable all other vendor strings.
+                */
+               acpi_update_interfaces(ACPI_DISABLE_ALL_VENDOR_STRINGS);
+               supported = ACPI_UINT32_MAX;
+       }
+
        return supported;
 }
 
@@ -825,7 +835,7 @@ acpi_os_install_interrupt_handler(u32 gsi, acpi_osd_handler handler,
 
        acpi_irq_handler = handler;
        acpi_irq_context = context;
-       if (request_irq(irq, acpi_irq, IRQF_SHARED | IRQF_NO_SUSPEND, "acpi", acpi_irq)) {
+       if (request_irq(irq, acpi_irq, IRQF_SHARED, "acpi", acpi_irq)) {
                printk(KERN_ERR PREFIX "SCI (IRQ%d) allocation failed\n", irq);
                acpi_irq_handler = NULL;
                return AE_NOT_ACQUIRED;
index e6ae603ed1a18594b2d4766371d748d5a0cfedc3..cd4de7e038ea5e0460a896232a99c89554436b52 100644 (file)
@@ -35,6 +35,7 @@
 #include <linux/pci-aspm.h>
 #include <linux/acpi.h>
 #include <linux/slab.h>
+#include <linux/dmi.h>
 #include <acpi/apei.h> /* for acpi_hest_init() */
 
 #include "internal.h"
@@ -429,6 +430,19 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
        struct acpi_device *device = root->device;
        acpi_handle handle = device->handle;
 
+       /*
+        * Apple always returns failure from _OSC calls once _OSI("Darwin")
+        * has been answered successfully. We know the feature set supported
+        * by the platform, so avoid calling _OSC at all.
+        */
+
+       if (dmi_match(DMI_SYS_VENDOR, "Apple Inc.")) {
+               root->osc_control_set = ~OSC_PCI_EXPRESS_PME_CONTROL;
+               decode_osc_control(root, "OS assumes control of",
+                                  root->osc_control_set);
+               return;
+       }
+
        /*
         * All supported architectures that use ACPI have support for
         * PCI domains, so we indicate this in _OSC support capabilities.
index e32321ce9a5cfbc4a80e7101b2784506ff057fc6..ef58f46c844287e4f64ff44a16ff63b1dd884537 100644 (file)
@@ -16,7 +16,7 @@ static int map_lapic_id(struct acpi_subtable_header *entry,
                 u32 acpi_id, int *apic_id)
 {
        struct acpi_madt_local_apic *lapic =
-               (struct acpi_madt_local_apic *)entry;
+               container_of(entry, struct acpi_madt_local_apic, header);
 
        if (!(lapic->lapic_flags & ACPI_MADT_ENABLED))
                return -ENODEV;
@@ -32,7 +32,7 @@ static int map_x2apic_id(struct acpi_subtable_header *entry,
                         int device_declaration, u32 acpi_id, int *apic_id)
 {
        struct acpi_madt_local_x2apic *apic =
-               (struct acpi_madt_local_x2apic *)entry;
+               container_of(entry, struct acpi_madt_local_x2apic, header);
 
        if (!(apic->lapic_flags & ACPI_MADT_ENABLED))
                return -ENODEV;
@@ -49,7 +49,7 @@ static int map_lsapic_id(struct acpi_subtable_header *entry,
                int device_declaration, u32 acpi_id, int *apic_id)
 {
        struct acpi_madt_local_sapic *lsapic =
-               (struct acpi_madt_local_sapic *)entry;
+               container_of(entry, struct acpi_madt_local_sapic, header);
 
        if (!(lsapic->lapic_flags & ACPI_MADT_ENABLED))
                return -ENODEV;
index 366ca40a6f703433efc1016a658eaac9e28c5c48..a7a3edd28beb8d5f881ef891075e0f650e0482de 100644 (file)
@@ -35,6 +35,7 @@
 #include <linux/jiffies.h>
 #include <linux/delay.h>
 #include <linux/power_supply.h>
+#include <linux/dmi.h>
 
 #include "sbshc.h"
 #include "battery.h"
@@ -61,6 +62,8 @@ static unsigned int cache_time = 1000;
 module_param(cache_time, uint, 0644);
 MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
 
+static bool sbs_manager_broken;
+
 #define MAX_SBS_BAT                    4
 #define ACPI_SBS_BLOCK_MAX             32
 
@@ -109,6 +112,7 @@ struct acpi_sbs {
        u8 batteries_supported:4;
        u8 manager_present:1;
        u8 charger_present:1;
+       u8 charger_exists:1;
 };
 
 #define to_acpi_sbs(x) container_of(x, struct acpi_sbs, charger)
@@ -429,9 +433,19 @@ static int acpi_ac_get_present(struct acpi_sbs *sbs)
 
        result = acpi_smbus_read(sbs->hc, SMBUS_READ_WORD, ACPI_SBS_CHARGER,
                                 0x13, (u8 *) & status);
-       if (!result)
-               sbs->charger_present = (status >> 15) & 0x1;
-       return result;
+
+       if (result)
+               return result;
+
+       /*
+        * The spec requires that bit 4 always be 1. If it's not set, assume
+        * that the implementation doesn't support an SBS charger.
+        */
+       if (!((status >> 4) & 0x1))
+               return -ENODEV;
+
+       sbs->charger_present = (status >> 15) & 0x1;
+       return 0;
 }
 
 static ssize_t acpi_battery_alarm_show(struct device *dev,
@@ -483,16 +497,21 @@ static int acpi_battery_read(struct acpi_battery *battery)
                                  ACPI_SBS_MANAGER, 0x01, (u8 *)&state, 2);
        } else if (battery->id == 0)
                battery->present = 1;
+
        if (result || !battery->present)
                return result;
 
        if (saved_present != battery->present) {
                battery->update_time = 0;
                result = acpi_battery_get_info(battery);
-               if (result)
+               if (result) {
+                       battery->present = 0;
                        return result;
+               }
        }
        result = acpi_battery_get_state(battery);
+       if (result)
+               battery->present = 0;
        return result;
 }
 
@@ -524,6 +543,7 @@ static int acpi_battery_add(struct acpi_sbs *sbs, int id)
        result = power_supply_register(&sbs->device->dev, &battery->bat);
        if (result)
                goto end;
+
        result = device_create_file(battery->bat.dev, &alarm_attr);
        if (result)
                goto end;
@@ -554,6 +574,7 @@ static int acpi_charger_add(struct acpi_sbs *sbs)
        if (result)
                goto end;
 
+       sbs->charger_exists = 1;
        sbs->charger.name = "sbs-charger";
        sbs->charger.type = POWER_SUPPLY_TYPE_MAINS;
        sbs->charger.properties = sbs_ac_props;
@@ -580,9 +601,12 @@ static void acpi_sbs_callback(void *context)
        struct acpi_battery *bat;
        u8 saved_charger_state = sbs->charger_present;
        u8 saved_battery_state;
-       acpi_ac_get_present(sbs);
-       if (sbs->charger_present != saved_charger_state)
-               kobject_uevent(&sbs->charger.dev->kobj, KOBJ_CHANGE);
+
+       if (sbs->charger_exists) {
+               acpi_ac_get_present(sbs);
+               if (sbs->charger_present != saved_charger_state)
+                       kobject_uevent(&sbs->charger.dev->kobj, KOBJ_CHANGE);
+       }
 
        if (sbs->manager_present) {
                for (id = 0; id < MAX_SBS_BAT; ++id) {
@@ -598,12 +622,31 @@ static void acpi_sbs_callback(void *context)
        }
 }
 
+static int disable_sbs_manager(const struct dmi_system_id *d)
+{
+       sbs_manager_broken = true;
+       return 0;
+}
+
+static struct dmi_system_id acpi_sbs_dmi_table[] = {
+       {
+               .callback = disable_sbs_manager,
+               .ident = "Apple",
+               .matches = {
+                       DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc.")
+               },
+       },
+       { },
+};
+
 static int acpi_sbs_add(struct acpi_device *device)
 {
        struct acpi_sbs *sbs;
        int result = 0;
        int id;
 
+       dmi_check_system(acpi_sbs_dmi_table);
+
        sbs = kzalloc(sizeof(struct acpi_sbs), GFP_KERNEL);
        if (!sbs) {
                result = -ENOMEM;
@@ -619,17 +662,24 @@ static int acpi_sbs_add(struct acpi_device *device)
        device->driver_data = sbs;
 
        result = acpi_charger_add(sbs);
-       if (result)
+       if (result && result != -ENODEV)
                goto end;
 
-       result = acpi_manager_get_info(sbs);
-       if (!result) {
-               sbs->manager_present = 1;
-               for (id = 0; id < MAX_SBS_BAT; ++id)
-                       if ((sbs->batteries_supported & (1 << id)))
-                               acpi_battery_add(sbs, id);
-       } else
+       result = 0;
+
+       if (!sbs_manager_broken) {
+               result = acpi_manager_get_info(sbs);
+               if (!result) {
+                       sbs->manager_present = 1;
+                       for (id = 0; id < MAX_SBS_BAT; ++id)
+                               if ((sbs->batteries_supported & (1 << id)))
+                                       acpi_battery_add(sbs, id);
+               }
+       }
+
+       if (!sbs->manager_present)
                acpi_battery_add(sbs, 0);
+
        acpi_smbus_register_callback(sbs->hc, acpi_sbs_callback, sbs);
       end:
        if (result)
index 54da4a3fe65e65d4b6334d93b82244f95c245b1f..05a31b573fc327b2a843c314294eff18d6b288ba 100644 (file)
@@ -14,6 +14,7 @@
 #include <linux/irq.h>
 #include <linux/dmi.h>
 #include <linux/device.h>
+#include <linux/interrupt.h>
 #include <linux/suspend.h>
 #include <linux/reboot.h>
 #include <linux/acpi.h>
@@ -626,6 +627,19 @@ static int acpi_freeze_begin(void)
        return 0;
 }
 
+static int acpi_freeze_prepare(void)
+{
+       acpi_enable_all_wakeup_gpes();
+       enable_irq_wake(acpi_gbl_FADT.sci_interrupt);
+       return 0;
+}
+
+static void acpi_freeze_restore(void)
+{
+       disable_irq_wake(acpi_gbl_FADT.sci_interrupt);
+       acpi_enable_all_runtime_gpes();
+}
+
 static void acpi_freeze_end(void)
 {
        acpi_scan_lock_release();
@@ -633,6 +647,8 @@ static void acpi_freeze_end(void)
 
 static const struct platform_freeze_ops acpi_freeze_ops = {
        .begin = acpi_freeze_begin,
+       .prepare = acpi_freeze_prepare,
+       .restore = acpi_freeze_restore,
        .end = acpi_freeze_end,
 };
 
index 07c8c5a5ee95cfec11ae94114a22398cd99273f6..834f35c4bf8d50061e1cae9f0b581cdad9467369 100644 (file)
@@ -661,7 +661,6 @@ EXPORT_SYMBOL(acpi_evaluate_dsm);
  * @uuid: UUID of requested functions, should be 16 bytes at least
  * @rev: revision number of requested functions
  * @funcs: bitmap of requested functions
- * @exclude: excluding special value, used to support i915 and nouveau
  *
  * Evaluate device's _DSM method to check whether it supports requested
  * functions. Currently only support 64 functions at maximum, should be
index 8e7e18567ae626fefd9039943662b84a627c8a58..807a88a0f394f8a639cbc3f6e2b78073078986cc 100644 (file)
@@ -411,12 +411,6 @@ static int __init video_set_bqc_offset(const struct dmi_system_id *d)
        return 0;
 }
 
-static int __init video_set_use_native_backlight(const struct dmi_system_id *d)
-{
-       use_native_backlight_dmi = true;
-       return 0;
-}
-
 static int __init video_disable_native_backlight(const struct dmi_system_id *d)
 {
        use_native_backlight_dmi = false;
@@ -467,265 +461,6 @@ static struct dmi_system_id video_dmi_table[] __initdata = {
                DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 7720"),
                },
        },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ThinkPad X230",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X230"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ThinkPad T430 and T430s",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T430"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ThinkPad T430",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "2349D15"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ThinkPad T431s",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "20AACTO1WW"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ThinkPad Edge E530",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "3259A2G"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ThinkPad Edge E530",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "3259CTO"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ThinkPad Edge E530",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "3259HJG"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ThinkPad W530",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-       .ident = "ThinkPad X1 Carbon",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X1 Carbon"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Lenovo Yoga 13",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo IdeaPad Yoga 13"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Lenovo Yoga 2 11",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Yoga 2 11"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "Thinkpad Helix",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad Helix"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Dell Inspiron 7520",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
-               DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7520"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Acer Aspire 5733Z",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5733Z"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Acer Aspire 5742G",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5742G"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Acer Aspire V5-171",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "V5-171"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Acer Aspire V5-431",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-431"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Acer Aspire V5-471G",
-        .matches = {
-               DMI_MATCH(DMI_BOARD_VENDOR, "Acer"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-471G"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Acer TravelMate B113",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate B113"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Acer Aspire V5-572G",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Acer Aspire"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "V5-572G/Dazzle_CX"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "Acer Aspire V5-573G",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Acer Aspire"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "V5-573G/Dazzle_HW"),
-               },
-       },
-       {
-        .callback = video_set_use_native_backlight,
-        .ident = "ASUS Zenbook Prime UX31A",
-        .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
-               DMI_MATCH(DMI_PRODUCT_NAME, "UX31A"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP ProBook 4340s",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "HP ProBook 4340s"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP ProBook 4540s",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_VERSION, "HP ProBook 4540s"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP ProBook 2013 models",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook "),
-               DMI_MATCH(DMI_PRODUCT_NAME, " G1"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP EliteBook 2013 models",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook "),
-               DMI_MATCH(DMI_PRODUCT_NAME, " G1"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP EliteBook 2014 models",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook "),
-               DMI_MATCH(DMI_PRODUCT_NAME, " G2"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP ZBook 14",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 14"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP ZBook 15",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 15"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP ZBook 17",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 17"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP EliteBook 8470p",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8470p"),
-               },
-       },
-       {
-       .callback = video_set_use_native_backlight,
-       .ident = "HP EliteBook 8780w",
-       .matches = {
-               DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
-               DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8780w"),
-               },
-       },
 
        /*
         * These models have a working acpi_video backlight control, and using
@@ -1419,6 +1154,23 @@ acpi_video_device_bind(struct acpi_video_bus *video,
        }
 }
 
+static bool acpi_video_device_in_dod(struct acpi_video_device *device)
+{
+       struct acpi_video_bus *video = device->video;
+       int i;
+
+       /* If we have a broken _DOD, no need to test */
+       if (!video->attached_count)
+               return true;
+
+       for (i = 0; i < video->attached_count; i++) {
+               if (video->attached_array[i].bind_info == device)
+                       return true;
+       }
+
+       return false;
+}
+
 /*
  *  Arg:
  *     video   : video bus device
@@ -1858,6 +1610,15 @@ static void acpi_video_dev_register_backlight(struct acpi_video_device *device)
        static int count;
        char *name;
 
+       /*
+        * Do not create backlight device for video output
+        * device that is not in the enumerated list.
+        */
+       if (!acpi_video_device_in_dod(device)) {
+               dev_dbg(&device->dev->dev, "not in _DOD list, ignore\n");
+               return;
+       }
+
        result = acpi_video_init_brightness(device);
        if (result)
                return;
index c42feb2bacd0eb783bc94f0e10187d08ea907eea..27c43499977a2a6236e1c3675bf2bb42a314e42f 100644 (file)
@@ -174,6 +174,14 @@ static struct dmi_system_id video_detect_dmi_table[] = {
                DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5737"),
                },
        },
+       {
+       .callback = video_detect_force_vendor,
+       .ident = "Lenovo IdeaPad Z570",
+       .matches = {
+               DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+               DMI_MATCH(DMI_PRODUCT_VERSION, "Ideapad Z570"),
+               },
+       },
        { },
 };
 
index 3cf61a127ee54623f5dc4e99e5e5a0ad46868797..47bbdc1b5be327ddf913f920d8b78cfc4f253a0c 100644 (file)
@@ -15,6 +15,7 @@
 #include <linux/io.h>
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
 #include <linux/amba/bus.h>
 #include <linux/sizes.h>
 
@@ -182,9 +183,15 @@ static int amba_probe(struct device *dev)
        int ret;
 
        do {
+               ret = dev_pm_domain_attach(dev, true);
+               if (ret == -EPROBE_DEFER)
+                       break;
+
                ret = amba_get_enable_pclk(pcdev);
-               if (ret)
+               if (ret) {
+                       dev_pm_domain_detach(dev, true);
                        break;
+               }
 
                pm_runtime_get_noresume(dev);
                pm_runtime_set_active(dev);
@@ -199,6 +206,7 @@ static int amba_probe(struct device *dev)
                pm_runtime_put_noidle(dev);
 
                amba_put_disable_pclk(pcdev);
+               dev_pm_domain_detach(dev, true);
        } while (0);
 
        return ret;
@@ -220,6 +228,7 @@ static int amba_remove(struct device *dev)
        pm_runtime_put_noidle(dev);
 
        amba_put_disable_pclk(pcdev);
+       dev_pm_domain_detach(dev, true);
 
        return ret;
 }
index ab4f4ce02722d0520e53f30baaeefd00ada8aea6..b2afc29403f9e887554d441a0c36629ccb49bce6 100644 (file)
@@ -21,6 +21,7 @@
 #include <linux/err.h>
 #include <linux/slab.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
 #include <linux/idr.h>
 #include <linux/acpi.h>
 #include <linux/clk/clk-conf.h>
@@ -506,11 +507,12 @@ static int platform_drv_probe(struct device *_dev)
        if (ret < 0)
                return ret;
 
-       acpi_dev_pm_attach(_dev, true);
-
-       ret = drv->probe(dev);
-       if (ret)
-               acpi_dev_pm_detach(_dev, true);
+       ret = dev_pm_domain_attach(_dev, true);
+       if (ret != -EPROBE_DEFER) {
+               ret = drv->probe(dev);
+               if (ret)
+                       dev_pm_domain_detach(_dev, true);
+       }
 
        if (drv->prevent_deferred_probe && ret == -EPROBE_DEFER) {
                dev_warn(_dev, "probe deferral not supported\n");
@@ -532,7 +534,7 @@ static int platform_drv_remove(struct device *_dev)
        int ret;
 
        ret = drv->remove(dev);
-       acpi_dev_pm_detach(_dev, true);
+       dev_pm_domain_detach(_dev, true);
 
        return ret;
 }
@@ -543,7 +545,7 @@ static void platform_drv_shutdown(struct device *_dev)
        struct platform_device *dev = to_platform_device(_dev);
 
        drv->shutdown(dev);
-       acpi_dev_pm_detach(_dev, true);
+       dev_pm_domain_detach(_dev, true);
 }
 
 /**
index b99e6c06ee678ecb5bcc6e206d3954976832eb38..78369305e0698109a42b172af8b925c05209701d 100644 (file)
@@ -368,8 +368,13 @@ int pm_clk_suspend(struct device *dev)
 
        spin_lock_irqsave(&psd->lock, flags);
 
-       list_for_each_entry_reverse(ce, &psd->clock_list, node)
-               clk_disable(ce->clk);
+       list_for_each_entry_reverse(ce, &psd->clock_list, node) {
+               if (ce->status < PCE_STATUS_ERROR) {
+                       if (ce->status == PCE_STATUS_ENABLED)
+                               clk_disable(ce->clk);
+                       ce->status = PCE_STATUS_ACQUIRED;
+               }
+       }
 
        spin_unlock_irqrestore(&psd->lock, flags);
 
@@ -385,6 +390,7 @@ int pm_clk_resume(struct device *dev)
        struct pm_subsys_data *psd = dev_to_psd(dev);
        struct pm_clock_entry *ce;
        unsigned long flags;
+       int ret;
 
        dev_dbg(dev, "%s()\n", __func__);
 
@@ -394,8 +400,13 @@ int pm_clk_resume(struct device *dev)
 
        spin_lock_irqsave(&psd->lock, flags);
 
-       list_for_each_entry(ce, &psd->clock_list, node)
-               __pm_clk_enable(dev, ce->clk);
+       list_for_each_entry(ce, &psd->clock_list, node) {
+               if (ce->status < PCE_STATUS_ERROR) {
+                       ret = __pm_clk_enable(dev, ce->clk);
+                       if (!ret)
+                               ce->status = PCE_STATUS_ENABLED;
+               }
+       }
 
        spin_unlock_irqrestore(&psd->lock, flags);
 
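The suspend/resume hunks above introduce per-clock status tracking so a clock is only disabled when it was actually enabled, and only marked enabled when the enable call succeeded. A minimal user-space sketch of that state machine (the `clk_*` calls are stubs; the `PCE_STATUS_*` ordering mirrors the kernel's enum):

```c
#include <assert.h>

/* Same ordering as the kernel's enum pce_status. */
enum pce_status {
	PCE_STATUS_NONE = 0,
	PCE_STATUS_ACQUIRED,
	PCE_STATUS_ENABLED,
	PCE_STATUS_ERROR,
};

struct pm_clock_entry {
	enum pce_status status;
	int enable_fails;	/* stub knob: make __pm_clk_enable() fail */
};

static void clk_disable(struct pm_clock_entry *ce) { (void)ce; }

static int __pm_clk_enable(struct pm_clock_entry *ce)
{
	return ce->enable_fails ? -1 : 0;
}

/* Mirrors the pm_clk_suspend() hunk: disable only clocks that are enabled. */
static void sketch_clk_suspend(struct pm_clock_entry *ce)
{
	if (ce->status < PCE_STATUS_ERROR) {
		if (ce->status == PCE_STATUS_ENABLED)
			clk_disable(ce);
		ce->status = PCE_STATUS_ACQUIRED;
	}
}

/* Mirrors the pm_clk_resume() hunk: mark ENABLED only on success. */
static void sketch_clk_resume(struct pm_clock_entry *ce)
{
	if (ce->status < PCE_STATUS_ERROR) {
		if (!__pm_clk_enable(ce))
			ce->status = PCE_STATUS_ENABLED;
	}
}
```

Entries in the `PCE_STATUS_ERROR` state are skipped entirely by both paths, and a failed enable leaves the entry in `PCE_STATUS_ACQUIRED` rather than lying about it being enabled.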
index df2e5eeaeb05570757ec7872c2022484c1824c87..b0f138806bbc4cb67bc782c075a9228acc500f12 100644 (file)
@@ -11,6 +11,8 @@
 #include <linux/export.h>
 #include <linux/slab.h>
 #include <linux/pm_clock.h>
+#include <linux/acpi.h>
+#include <linux/pm_domain.h>
 
 /**
  * dev_pm_get_subsys_data - Create or refcount power.subsys_data for device.
@@ -82,3 +84,53 @@ int dev_pm_put_subsys_data(struct device *dev)
        return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_put_subsys_data);
+
+/**
+ * dev_pm_domain_attach - Attach a device to its PM domain.
+ * @dev: Device to attach.
+ * @power_on: Used to indicate whether we should power on the device.
+ *
+ * The @dev may only be attached to a single PM domain. By iterating through
+ * the available alternatives we try to find a valid PM domain for the device.
+ * When attachment succeeds, the ->detach() callback in struct dev_pm_domain
+ * is expected to be assigned by the corresponding attach function.
+ *
+ * This function should typically be invoked from subsystem-level code during
+ * the probe phase, especially for subsystems that hold devices requiring
+ * power management through PM domains.
+ *
+ * Callers must ensure proper synchronization of this function with power
+ * management callbacks.
+ *
+ * Returns 0 when the device is successfully attached to a PM domain, or a
+ * negative error code otherwise.
+ */
+int dev_pm_domain_attach(struct device *dev, bool power_on)
+{
+       int ret;
+
+       ret = acpi_dev_pm_attach(dev, power_on);
+       if (ret)
+               ret = genpd_dev_pm_attach(dev);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_domain_attach);
+
+/**
+ * dev_pm_domain_detach - Detach a device from its PM domain.
+ * @dev: Device to detach.
+ * @power_off: Used to indicate whether we should power off the device.
+ *
+ * This function reverses the actions from dev_pm_domain_attach() and thus
+ * tries to detach the @dev from its PM domain. Typically it should be invoked
+ * from subsystem level code during the remove phase.
+ *
+ * Callers must ensure proper synchronization of this function with power
+ * management callbacks.
+ */
+void dev_pm_domain_detach(struct device *dev, bool power_off)
+{
+       if (dev->pm_domain && dev->pm_domain->detach)
+               dev->pm_domain->detach(dev, power_off);
+}
+EXPORT_SYMBOL_GPL(dev_pm_domain_detach);
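The new `dev_pm_domain_attach()` helper tries the ACPI PM domain first and falls back to the generic (DT-based) PM domain only when ACPI attachment fails, which is why the platform bus hunks above can replace `acpi_dev_pm_detach()` with the domain-agnostic `dev_pm_domain_detach()`. A user-space sketch of that fallback ordering, with both attach paths and `struct device` reduced to stubs (none of these stand-ins are kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Stub stand-in, not the kernel's struct device. */
struct device {
	int has_acpi_companion;
	int has_of_domain;
	const char *attached_to;	/* bookkeeping for the sketch only */
};

static int acpi_dev_pm_attach_stub(struct device *dev)
{
	if (!dev->has_acpi_companion)
		return -19;	/* stand-in for -ENODEV */
	dev->attached_to = "acpi";
	return 0;
}

static int genpd_dev_pm_attach_stub(struct device *dev)
{
	if (!dev->has_of_domain)
		return -19;
	dev->attached_to = "genpd";
	return 0;
}

/* Same shape as dev_pm_domain_attach(): ACPI first, then genpd fallback. */
static int sketch_domain_attach(struct device *dev)
{
	int ret = acpi_dev_pm_attach_stub(dev);

	if (ret)
		ret = genpd_dev_pm_attach_stub(dev);
	return ret;
}
```

Whichever path succeeds is expected to install a `->detach()` callback, so the single `dev_pm_domain_detach()` call in remove/shutdown works for either domain type.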
index eee55c1e5fde49310779c9fc4e15aa8ce5415114..40bc2f4072cc28ea4138ae36b3b08cb96f2ed158 100644 (file)
@@ -8,6 +8,7 @@
 
 #include <linux/kernel.h>
 #include <linux/io.h>
+#include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
 #include <linux/pm_domain.h>
 #include <linux/pm_qos.h>
        __routine = genpd->dev_ops.callback;                    \
        if (__routine) {                                        \
                __ret = __routine(dev);                         \
-       } else {                                                \
-               __routine = dev_gpd_data(dev)->ops.callback;    \
-               if (__routine)                                  \
-                       __ret = __routine(dev);                 \
        }                                                       \
        __ret;                                                  \
 })
@@ -70,8 +67,6 @@ static struct generic_pm_domain *pm_genpd_lookup_name(const char *domain_name)
        return genpd;
 }
 
-#ifdef CONFIG_PM
-
 struct generic_pm_domain *dev_to_genpd(struct device *dev)
 {
        if (IS_ERR_OR_NULL(dev->pm_domain))
@@ -147,13 +142,13 @@ static void genpd_recalc_cpu_exit_latency(struct generic_pm_domain *genpd)
 {
        s64 usecs64;
 
-       if (!genpd->cpu_data)
+       if (!genpd->cpuidle_data)
                return;
 
        usecs64 = genpd->power_on_latency_ns;
        do_div(usecs64, NSEC_PER_USEC);
-       usecs64 += genpd->cpu_data->saved_exit_latency;
-       genpd->cpu_data->idle_state->exit_latency = usecs64;
+       usecs64 += genpd->cpuidle_data->saved_exit_latency;
+       genpd->cpuidle_data->idle_state->exit_latency = usecs64;
 }
 
 /**
@@ -193,9 +188,9 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
                return 0;
        }
 
-       if (genpd->cpu_data) {
+       if (genpd->cpuidle_data) {
                cpuidle_pause_and_lock();
-               genpd->cpu_data->idle_state->disabled = true;
+               genpd->cpuidle_data->idle_state->disabled = true;
                cpuidle_resume_and_unlock();
                goto out;
        }
@@ -285,8 +280,6 @@ int pm_genpd_name_poweron(const char *domain_name)
        return genpd ? pm_genpd_poweron(genpd) : -EINVAL;
 }
 
-#endif /* CONFIG_PM */
-
 #ifdef CONFIG_PM_RUNTIME
 
 static int genpd_start_dev_no_timing(struct generic_pm_domain *genpd,
@@ -430,7 +423,7 @@ static bool genpd_abort_poweroff(struct generic_pm_domain *genpd)
  * Queue up the execution of pm_genpd_poweroff() unless it's already been done
  * before.
  */
-void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
+static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
 {
        queue_work(pm_wq, &genpd->power_off_work);
 }
@@ -520,17 +513,17 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
                }
        }
 
-       if (genpd->cpu_data) {
+       if (genpd->cpuidle_data) {
                /*
-                * If cpu_data is set, cpuidle should turn the domain off when
-                * the CPU in it is idle.  In that case we don't decrement the
-                * subdomain counts of the master domains, so that power is not
-                * removed from the current domain prematurely as a result of
-                * cutting off the masters' power.
+                * If cpuidle_data is set, cpuidle should turn the domain off
+                * when the CPU in it is idle.  In that case we don't decrement
+                * the subdomain counts of the master domains, so that power is
+                * not removed from the current domain prematurely as a result
+                * of cutting off the masters' power.
                 */
                genpd->status = GPD_STATE_POWER_OFF;
                cpuidle_pause_and_lock();
-               genpd->cpu_data->idle_state->disabled = false;
+               genpd->cpuidle_data->idle_state->disabled = false;
                cpuidle_resume_and_unlock();
                goto out;
        }
@@ -619,8 +612,6 @@ static int pm_genpd_runtime_suspend(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       might_sleep_if(!genpd->dev_irq_safe);
-
        stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL;
        if (stop_ok && !stop_ok(dev))
                return -EBUSY;
@@ -665,8 +656,6 @@ static int pm_genpd_runtime_resume(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       might_sleep_if(!genpd->dev_irq_safe);
-
        /* If power.irq_safe, the PM domain is never powered off. */
        if (dev->power.irq_safe)
                return genpd_start_dev_no_timing(genpd, dev);
@@ -733,6 +722,13 @@ void pm_genpd_poweroff_unused(void)
        mutex_unlock(&gpd_list_lock);
 }
 
+static int __init genpd_poweroff_unused(void)
+{
+       pm_genpd_poweroff_unused();
+       return 0;
+}
+late_initcall(genpd_poweroff_unused);
+
 #else
 
 static inline int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
@@ -741,6 +737,9 @@ static inline int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
        return NOTIFY_DONE;
 }
 
+static inline void
+genpd_queue_power_off_work(struct generic_pm_domain *genpd) {}
+
 static inline void genpd_power_off_work_fn(struct work_struct *work) {}
 
 #define pm_genpd_runtime_suspend       NULL
@@ -774,46 +773,6 @@ static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd,
        return GENPD_DEV_CALLBACK(genpd, bool, active_wakeup, dev);
 }
 
-static int genpd_suspend_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, suspend, dev);
-}
-
-static int genpd_suspend_late(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, suspend_late, dev);
-}
-
-static int genpd_resume_early(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, resume_early, dev);
-}
-
-static int genpd_resume_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, resume, dev);
-}
-
-static int genpd_freeze_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, freeze, dev);
-}
-
-static int genpd_freeze_late(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, freeze_late, dev);
-}
-
-static int genpd_thaw_early(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, thaw_early, dev);
-}
-
-static int genpd_thaw_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, thaw, dev);
-}
-
 /**
  * pm_genpd_sync_poweroff - Synchronously power off a PM domain and its masters.
  * @genpd: PM domain to power off, if possible.
@@ -995,7 +954,7 @@ static int pm_genpd_suspend(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       return genpd->suspend_power_off ? 0 : genpd_suspend_dev(genpd, dev);
+       return genpd->suspend_power_off ? 0 : pm_generic_suspend(dev);
 }
 
 /**
@@ -1016,7 +975,7 @@ static int pm_genpd_suspend_late(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       return genpd->suspend_power_off ? 0 : genpd_suspend_late(genpd, dev);
+       return genpd->suspend_power_off ? 0 : pm_generic_suspend_late(dev);
 }
 
 /**
@@ -1103,7 +1062,7 @@ static int pm_genpd_resume_early(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       return genpd->suspend_power_off ? 0 : genpd_resume_early(genpd, dev);
+       return genpd->suspend_power_off ? 0 : pm_generic_resume_early(dev);
 }
 
 /**
@@ -1124,7 +1083,7 @@ static int pm_genpd_resume(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       return genpd->suspend_power_off ? 0 : genpd_resume_dev(genpd, dev);
+       return genpd->suspend_power_off ? 0 : pm_generic_resume(dev);
 }
 
 /**
@@ -1145,7 +1104,7 @@ static int pm_genpd_freeze(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       return genpd->suspend_power_off ? 0 : genpd_freeze_dev(genpd, dev);
+       return genpd->suspend_power_off ? 0 : pm_generic_freeze(dev);
 }
 
 /**
@@ -1167,7 +1126,7 @@ static int pm_genpd_freeze_late(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       return genpd->suspend_power_off ? 0 : genpd_freeze_late(genpd, dev);
+       return genpd->suspend_power_off ? 0 : pm_generic_freeze_late(dev);
 }
 
 /**
@@ -1231,7 +1190,7 @@ static int pm_genpd_thaw_early(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       return genpd->suspend_power_off ? 0 : genpd_thaw_early(genpd, dev);
+       return genpd->suspend_power_off ? 0 : pm_generic_thaw_early(dev);
 }
 
 /**
@@ -1252,7 +1211,7 @@ static int pm_genpd_thaw(struct device *dev)
        if (IS_ERR(genpd))
                return -EINVAL;
 
-       return genpd->suspend_power_off ? 0 : genpd_thaw_dev(genpd, dev);
+       return genpd->suspend_power_off ? 0 : pm_generic_thaw(dev);
 }
 
 /**
@@ -1344,13 +1303,13 @@ static void pm_genpd_complete(struct device *dev)
 }
 
 /**
- * pm_genpd_syscore_switch - Switch power during system core suspend or resume.
+ * genpd_syscore_switch - Switch power during system core suspend or resume.
  * @dev: Device that normally is marked as "always on" to switch power for.
  *
  * This routine may only be called during the system core (syscore) suspend or
  * resume phase for devices whose "always on" flags are set.
  */
-void pm_genpd_syscore_switch(struct device *dev, bool suspend)
+static void genpd_syscore_switch(struct device *dev, bool suspend)
 {
        struct generic_pm_domain *genpd;
 
@@ -1366,7 +1325,18 @@ void pm_genpd_syscore_switch(struct device *dev, bool suspend)
                genpd->suspended_count--;
        }
 }
-EXPORT_SYMBOL_GPL(pm_genpd_syscore_switch);
+
+void pm_genpd_syscore_poweroff(struct device *dev)
+{
+       genpd_syscore_switch(dev, true);
+}
+EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweroff);
+
+void pm_genpd_syscore_poweron(struct device *dev)
+{
+       genpd_syscore_switch(dev, false);
+}
+EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
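The hunk above replaces the exported bool-flag API `pm_genpd_syscore_switch(dev, suspend)` with two named entry points that wrap a now-static helper. A minimal sketch of that shape, with the domain power handling and `suspended_count` bookkeeping stubbed out:

```c
#include <assert.h>

/* Stub stand-in, not the kernel's struct generic_pm_domain. */
struct genpd_stub {
	unsigned int suspended_count;
	int powered_on;
};

/* The single static helper both exported wrappers dispatch into. */
static void syscore_switch_sketch(struct genpd_stub *genpd, int suspend)
{
	if (suspend) {
		genpd->suspended_count++;
		genpd->powered_on = 0;	/* pm_genpd_sync_poweroff() stand-in */
	} else {
		genpd->powered_on = 1;	/* pm_genpd_sync_poweron() stand-in */
		genpd->suspended_count--;
	}
}

static void syscore_poweroff_sketch(struct genpd_stub *genpd)
{
	syscore_switch_sketch(genpd, 1);
}

static void syscore_poweron_sketch(struct genpd_stub *genpd)
{
	syscore_switch_sketch(genpd, 0);
}
```

Named wrappers make call sites self-documenting, where a bare `true`/`false` argument at the caller would not be.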
 
 #else
 
@@ -1466,6 +1436,9 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 
        spin_unlock_irq(&dev->power.lock);
 
+       if (genpd->attach_dev)
+               genpd->attach_dev(dev);
+
        mutex_lock(&gpd_data->lock);
        gpd_data->base.dev = dev;
        list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
@@ -1483,39 +1456,6 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
        return ret;
 }
 
-/**
- * __pm_genpd_of_add_device - Add a device to an I/O PM domain.
- * @genpd_node: Device tree node pointer representing a PM domain to which the
- *   the device is added to.
- * @dev: Device to be added.
- * @td: Set of PM QoS timing parameters to attach to the device.
- */
-int __pm_genpd_of_add_device(struct device_node *genpd_node, struct device *dev,
-                            struct gpd_timing_data *td)
-{
-       struct generic_pm_domain *genpd = NULL, *gpd;
-
-       dev_dbg(dev, "%s()\n", __func__);
-
-       if (IS_ERR_OR_NULL(genpd_node) || IS_ERR_OR_NULL(dev))
-               return -EINVAL;
-
-       mutex_lock(&gpd_list_lock);
-       list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
-               if (gpd->of_node == genpd_node) {
-                       genpd = gpd;
-                       break;
-               }
-       }
-       mutex_unlock(&gpd_list_lock);
-
-       if (!genpd)
-               return -EINVAL;
-
-       return __pm_genpd_add_device(genpd, dev, td);
-}
-
-
 /**
  * __pm_genpd_name_add_device - Find I/O PM domain and add a device to it.
  * @domain_name: Name of the PM domain to add the device to.
@@ -1558,6 +1498,9 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
        genpd->device_count--;
        genpd->max_off_time_changed = true;
 
+       if (genpd->detach_dev)
+               genpd->detach_dev(dev);
+
        spin_lock_irq(&dev->power.lock);
 
        dev->pm_domain = NULL;
@@ -1743,112 +1686,6 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
        return ret;
 }
 
-/**
- * pm_genpd_add_callbacks - Add PM domain callbacks to a given device.
- * @dev: Device to add the callbacks to.
- * @ops: Set of callbacks to add.
- * @td: Timing data to add to the device along with the callbacks (optional).
- *
- * Every call to this routine should be balanced with a call to
- * __pm_genpd_remove_callbacks() and they must not be nested.
- */
-int pm_genpd_add_callbacks(struct device *dev, struct gpd_dev_ops *ops,
-                          struct gpd_timing_data *td)
-{
-       struct generic_pm_domain_data *gpd_data_new, *gpd_data = NULL;
-       int ret = 0;
-
-       if (!(dev && ops))
-               return -EINVAL;
-
-       gpd_data_new = __pm_genpd_alloc_dev_data(dev);
-       if (!gpd_data_new)
-               return -ENOMEM;
-
-       pm_runtime_disable(dev);
-       device_pm_lock();
-
-       ret = dev_pm_get_subsys_data(dev);
-       if (ret)
-               goto out;
-
-       spin_lock_irq(&dev->power.lock);
-
-       if (dev->power.subsys_data->domain_data) {
-               gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
-       } else {
-               gpd_data = gpd_data_new;
-               dev->power.subsys_data->domain_data = &gpd_data->base;
-       }
-       gpd_data->refcount++;
-       gpd_data->ops = *ops;
-       if (td)
-               gpd_data->td = *td;
-
-       spin_unlock_irq(&dev->power.lock);
-
- out:
-       device_pm_unlock();
-       pm_runtime_enable(dev);
-
-       if (gpd_data != gpd_data_new)
-               __pm_genpd_free_dev_data(dev, gpd_data_new);
-
-       return ret;
-}
-EXPORT_SYMBOL_GPL(pm_genpd_add_callbacks);
-
-/**
- * __pm_genpd_remove_callbacks - Remove PM domain callbacks from a given device.
- * @dev: Device to remove the callbacks from.
- * @clear_td: If set, clear the device's timing data too.
- *
- * This routine can only be called after pm_genpd_add_callbacks().
- */
-int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td)
-{
-       struct generic_pm_domain_data *gpd_data = NULL;
-       bool remove = false;
-       int ret = 0;
-
-       if (!(dev && dev->power.subsys_data))
-               return -EINVAL;
-
-       pm_runtime_disable(dev);
-       device_pm_lock();
-
-       spin_lock_irq(&dev->power.lock);
-
-       if (dev->power.subsys_data->domain_data) {
-               gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
-               gpd_data->ops = (struct gpd_dev_ops){ NULL };
-               if (clear_td)
-                       gpd_data->td = (struct gpd_timing_data){ 0 };
-
-               if (--gpd_data->refcount == 0) {
-                       dev->power.subsys_data->domain_data = NULL;
-                       remove = true;
-               }
-       } else {
-               ret = -EINVAL;
-       }
-
-       spin_unlock_irq(&dev->power.lock);
-
-       device_pm_unlock();
-       pm_runtime_enable(dev);
-
-       if (ret)
-               return ret;
-
-       dev_pm_put_subsys_data(dev);
-       if (remove)
-               __pm_genpd_free_dev_data(dev, gpd_data);
-
-       return 0;
-}
-EXPORT_SYMBOL_GPL(__pm_genpd_remove_callbacks);
-
 /**
  * pm_genpd_attach_cpuidle - Connect the given PM domain with cpuidle.
  * @genpd: PM domain to be connected with cpuidle.
@@ -1861,7 +1698,7 @@ EXPORT_SYMBOL_GPL(__pm_genpd_remove_callbacks);
 int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
 {
        struct cpuidle_driver *cpuidle_drv;
-       struct gpd_cpu_data *cpu_data;
+       struct gpd_cpuidle_data *cpuidle_data;
        struct cpuidle_state *idle_state;
        int ret = 0;
 
@@ -1870,12 +1707,12 @@ int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
 
        genpd_acquire_lock(genpd);
 
-       if (genpd->cpu_data) {
+       if (genpd->cpuidle_data) {
                ret = -EEXIST;
                goto out;
        }
-       cpu_data = kzalloc(sizeof(*cpu_data), GFP_KERNEL);
-       if (!cpu_data) {
+       cpuidle_data = kzalloc(sizeof(*cpuidle_data), GFP_KERNEL);
+       if (!cpuidle_data) {
                ret = -ENOMEM;
                goto out;
        }
@@ -1893,9 +1730,9 @@ int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
                ret = -EAGAIN;
                goto err;
        }
-       cpu_data->idle_state = idle_state;
-       cpu_data->saved_exit_latency = idle_state->exit_latency;
-       genpd->cpu_data = cpu_data;
+       cpuidle_data->idle_state = idle_state;
+       cpuidle_data->saved_exit_latency = idle_state->exit_latency;
+       genpd->cpuidle_data = cpuidle_data;
        genpd_recalc_cpu_exit_latency(genpd);
 
  out:
@@ -1906,7 +1743,7 @@ int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
        cpuidle_driver_unref();
 
  err_drv:
-       kfree(cpu_data);
+       kfree(cpuidle_data);
        goto out;
 }
 
@@ -1929,7 +1766,7 @@ int pm_genpd_name_attach_cpuidle(const char *name, int state)
  */
 int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
 {
-       struct gpd_cpu_data *cpu_data;
+       struct gpd_cpuidle_data *cpuidle_data;
        struct cpuidle_state *idle_state;
        int ret = 0;
 
@@ -1938,20 +1775,20 @@ int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
 
        genpd_acquire_lock(genpd);
 
-       cpu_data = genpd->cpu_data;
-       if (!cpu_data) {
+       cpuidle_data = genpd->cpuidle_data;
+       if (!cpuidle_data) {
                ret = -ENODEV;
                goto out;
        }
-       idle_state = cpu_data->idle_state;
+       idle_state = cpuidle_data->idle_state;
        if (!idle_state->disabled) {
                ret = -EAGAIN;
                goto out;
        }
-       idle_state->exit_latency = cpu_data->saved_exit_latency;
+       idle_state->exit_latency = cpuidle_data->saved_exit_latency;
        cpuidle_driver_unref();
-       genpd->cpu_data = NULL;
-       kfree(cpu_data);
+       genpd->cpuidle_data = NULL;
+       kfree(cpuidle_data);
 
  out:
        genpd_release_lock(genpd);
@@ -1970,17 +1807,13 @@ int pm_genpd_name_detach_cpuidle(const char *name)
 /* Default device callbacks for generic PM domains. */
 
 /**
- * pm_genpd_default_save_state - Default "save device state" for PM domians.
+ * pm_genpd_default_save_state - Default "save device state" for PM domains.
  * @dev: Device to handle.
  */
 static int pm_genpd_default_save_state(struct device *dev)
 {
        int (*cb)(struct device *__dev);
 
-       cb = dev_gpd_data(dev)->ops.save_state;
-       if (cb)
-               return cb(dev);
-
        if (dev->type && dev->type->pm)
                cb = dev->type->pm->runtime_suspend;
        else if (dev->class && dev->class->pm)
@@ -1997,17 +1830,13 @@ static int pm_genpd_default_save_state(struct device *dev)
 }
 
 /**
- * pm_genpd_default_restore_state - Default PM domians "restore device state".
+ * pm_genpd_default_restore_state - Default PM domains "restore device state".
  * @dev: Device to handle.
  */
 static int pm_genpd_default_restore_state(struct device *dev)
 {
        int (*cb)(struct device *__dev);
 
-       cb = dev_gpd_data(dev)->ops.restore_state;
-       if (cb)
-               return cb(dev);
-
        if (dev->type && dev->type->pm)
                cb = dev->type->pm->runtime_resume;
        else if (dev->class && dev->class->pm)
@@ -2023,109 +1852,6 @@ static int pm_genpd_default_restore_state(struct device *dev)
        return cb ? cb(dev) : 0;
 }
 
-#ifdef CONFIG_PM_SLEEP
-
-/**
- * pm_genpd_default_suspend - Default "device suspend" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_suspend(struct device *dev)
-{
-       int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.suspend;
-
-       return cb ? cb(dev) : pm_generic_suspend(dev);
-}
-
-/**
- * pm_genpd_default_suspend_late - Default "late device suspend" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_suspend_late(struct device *dev)
-{
-       int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.suspend_late;
-
-       return cb ? cb(dev) : pm_generic_suspend_late(dev);
-}
-
-/**
- * pm_genpd_default_resume_early - Default "early device resume" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_resume_early(struct device *dev)
-{
-       int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.resume_early;
-
-       return cb ? cb(dev) : pm_generic_resume_early(dev);
-}
-
-/**
- * pm_genpd_default_resume - Default "device resume" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_resume(struct device *dev)
-{
-       int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.resume;
-
-       return cb ? cb(dev) : pm_generic_resume(dev);
-}
-
-/**
- * pm_genpd_default_freeze - Default "device freeze" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_freeze(struct device *dev)
-{
-       int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.freeze;
-
-       return cb ? cb(dev) : pm_generic_freeze(dev);
-}
-
-/**
- * pm_genpd_default_freeze_late - Default "late device freeze" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_freeze_late(struct device *dev)
-{
-       int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.freeze_late;
-
-       return cb ? cb(dev) : pm_generic_freeze_late(dev);
-}
-
-/**
- * pm_genpd_default_thaw_early - Default "early device thaw" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_thaw_early(struct device *dev)
-{
-       int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.thaw_early;
-
-       return cb ? cb(dev) : pm_generic_thaw_early(dev);
-}
-
-/**
- * pm_genpd_default_thaw - Default "device thaw" for PM domians.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_thaw(struct device *dev)
-{
-       int (*cb)(struct device *__dev) = dev_gpd_data(dev)->ops.thaw;
-
-       return cb ? cb(dev) : pm_generic_thaw(dev);
-}
-
-#else /* !CONFIG_PM_SLEEP */
-
-#define pm_genpd_default_suspend       NULL
-#define pm_genpd_default_suspend_late  NULL
-#define pm_genpd_default_resume_early  NULL
-#define pm_genpd_default_resume                NULL
-#define pm_genpd_default_freeze                NULL
-#define pm_genpd_default_freeze_late   NULL
-#define pm_genpd_default_thaw_early    NULL
-#define pm_genpd_default_thaw          NULL
-
-#endif /* !CONFIG_PM_SLEEP */
-
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -2177,15 +1903,452 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
        genpd->domain.ops.complete = pm_genpd_complete;
        genpd->dev_ops.save_state = pm_genpd_default_save_state;
        genpd->dev_ops.restore_state = pm_genpd_default_restore_state;
-       genpd->dev_ops.suspend = pm_genpd_default_suspend;
-       genpd->dev_ops.suspend_late = pm_genpd_default_suspend_late;
-       genpd->dev_ops.resume_early = pm_genpd_default_resume_early;
-       genpd->dev_ops.resume = pm_genpd_default_resume;
-       genpd->dev_ops.freeze = pm_genpd_default_freeze;
-       genpd->dev_ops.freeze_late = pm_genpd_default_freeze_late;
-       genpd->dev_ops.thaw_early = pm_genpd_default_thaw_early;
-       genpd->dev_ops.thaw = pm_genpd_default_thaw;
        mutex_lock(&gpd_list_lock);
        list_add(&genpd->gpd_list_node, &gpd_list);
        mutex_unlock(&gpd_list_lock);
 }
+
+#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
+/*
+ * Device Tree based PM domain providers.
+ *
+ * The code below implements generic device tree based PM domain providers that
+ * bind device tree nodes with generic PM domains registered in the system.
+ *
+ * Any driver that registers generic PM domains and needs to support binding of
+ * devices to these domains is supposed to register a PM domain provider, which
+ * maps a PM domain specifier retrieved from the device tree to a PM domain.
+ *
+ * Two simple mapping functions have been provided for convenience:
+ *  - __of_genpd_xlate_simple() for 1:1 device tree node to PM domain mapping.
+ *  - __of_genpd_xlate_onecell() for mapping of multiple PM domains per node by
+ *    index.
+ */
+
+/**
+ * struct of_genpd_provider - PM domain provider registration structure
+ * @link: Entry in global list of PM domain providers
+ * @node: Pointer to device tree node of PM domain provider
+ * @xlate: Provider-specific xlate callback mapping a set of specifier cells
+ *         into a PM domain.
+ * @data: context pointer to be passed into @xlate callback
+ */
+struct of_genpd_provider {
+       struct list_head link;
+       struct device_node *node;
+       genpd_xlate_t xlate;
+       void *data;
+};
+
+/* List of registered PM domain providers. */
+static LIST_HEAD(of_genpd_providers);
+/* Mutex to protect the list above. */
+static DEFINE_MUTEX(of_genpd_mutex);
+
+/**
+ * __of_genpd_xlate_simple() - Xlate function for direct node-domain mapping
+ * @genpdspec: OF phandle args to map into a PM domain
+ * @data: xlate function private data - pointer to struct generic_pm_domain
+ *
+ * This is a generic xlate function that can be used to model PM domains that
+ * have their own device tree nodes. The private data of the xlate function
+ * must be a valid pointer to struct generic_pm_domain.
+ */
+struct generic_pm_domain *__of_genpd_xlate_simple(
+                                       struct of_phandle_args *genpdspec,
+                                       void *data)
+{
+       if (genpdspec->args_count != 0)
+               return ERR_PTR(-EINVAL);
+       return data;
+}
+EXPORT_SYMBOL_GPL(__of_genpd_xlate_simple);
+
+/**
+ * __of_genpd_xlate_onecell() - Xlate function using a single index.
+ * @genpdspec: OF phandle args to map into a PM domain
+ * @data: xlate function private data - pointer to struct genpd_onecell_data
+ *
+ * This is a generic xlate function that can be used to model simple PM domain
+ * controllers that have one device tree node and provide multiple PM domains.
+ * A single cell is used as an index into an array of PM domains specified in
+ * the genpd_onecell_data struct when registering the provider.
+ */
+struct generic_pm_domain *__of_genpd_xlate_onecell(
+                                       struct of_phandle_args *genpdspec,
+                                       void *data)
+{
+       struct genpd_onecell_data *genpd_data = data;
+       unsigned int idx = genpdspec->args[0];
+
+       if (genpdspec->args_count != 1)
+               return ERR_PTR(-EINVAL);
+
+       if (idx >= genpd_data->num_domains) {
+               pr_err("%s: invalid domain index %u\n", __func__, idx);
+               return ERR_PTR(-EINVAL);
+       }
+
+       if (!genpd_data->domains[idx])
+               return ERR_PTR(-ENOENT);
+
+       return genpd_data->domains[idx];
+}
+EXPORT_SYMBOL_GPL(__of_genpd_xlate_onecell);
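The onecell xlate above maps a single specifier cell to an index into the provider's domain array, rejecting bad argument counts, out-of-range indices, and empty slots. A user-space sketch of that mapping, with the OF types replaced by minimal stand-ins and `NULL` standing in for the kernel's `ERR_PTR()` returns:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct of_phandle_args. */
struct phandle_args_stub {
	int args_count;
	unsigned int args[1];
};

/* Stand-in for struct genpd_onecell_data. */
struct onecell_data_stub {
	void **domains;
	unsigned int num_domains;
};

/* Mirrors __of_genpd_xlate_onecell(); NULL covers both -EINVAL and -ENOENT. */
static void *xlate_onecell_sketch(const struct phandle_args_stub *spec,
				  const struct onecell_data_stub *data)
{
	unsigned int idx = spec->args[0];

	if (spec->args_count != 1)
		return NULL;
	if (idx >= data->num_domains)
		return NULL;
	return data->domains[idx];	/* may be NULL: an unpopulated slot */
}
```

A provider with several domains registers once with this xlate and a `genpd_onecell_data`, instead of registering one provider node per domain as `__of_genpd_xlate_simple()` would require.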
+
+/**
+ * __of_genpd_add_provider() - Register a PM domain provider for a node
+ * @np: Device node pointer associated with the PM domain provider.
+ * @xlate: Callback for decoding PM domain from phandle arguments.
+ * @data: Context pointer for @xlate callback.
+ */
+int __of_genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
+                       void *data)
+{
+       struct of_genpd_provider *cp;
+
+       cp = kzalloc(sizeof(*cp), GFP_KERNEL);
+       if (!cp)
+               return -ENOMEM;
+
+       cp->node = of_node_get(np);
+       cp->data = data;
+       cp->xlate = xlate;
+
+       mutex_lock(&of_genpd_mutex);
+       list_add(&cp->link, &of_genpd_providers);
+       mutex_unlock(&of_genpd_mutex);
+       pr_debug("Added domain provider from %s\n", np->full_name);
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(__of_genpd_add_provider);
+
+/**
+ * of_genpd_del_provider() - Remove a previously registered PM domain provider
+ * @np: Device node pointer associated with the PM domain provider
+ */
+void of_genpd_del_provider(struct device_node *np)
+{
+       struct of_genpd_provider *cp;
+
+       mutex_lock(&of_genpd_mutex);
+       list_for_each_entry(cp, &of_genpd_providers, link) {
+               if (cp->node == np) {
+                       list_del(&cp->link);
+                       of_node_put(cp->node);
+                       kfree(cp);
+                       break;
+               }
+       }
+       mutex_unlock(&of_genpd_mutex);
+}
+EXPORT_SYMBOL_GPL(of_genpd_del_provider);
+
+/**
+ * of_genpd_get_from_provider() - Look-up PM domain
+ * @genpdspec: OF phandle args to use for look-up
+ *
+ * Looks for a PM domain provider under the node specified by @genpdspec and if
+ * found, uses xlate function of the provider to map phandle args to a PM
+ * domain.
+ *
+ * Returns a valid pointer to struct generic_pm_domain on success or ERR_PTR()
+ * on failure.
+ */
+static struct generic_pm_domain *of_genpd_get_from_provider(
+                                       struct of_phandle_args *genpdspec)
+{
+       struct generic_pm_domain *genpd = ERR_PTR(-ENOENT);
+       struct of_genpd_provider *provider;
+
+       mutex_lock(&of_genpd_mutex);
+
+       /* Check if we have such a provider in our array */
+       list_for_each_entry(provider, &of_genpd_providers, link) {
+               if (provider->node == genpdspec->np)
+                       genpd = provider->xlate(genpdspec, provider->data);
+               if (!IS_ERR(genpd))
+                       break;
+       }
+
+       mutex_unlock(&of_genpd_mutex);
+
+       return genpd;
+}
+
+/**
+ * genpd_dev_pm_detach - Detach a device from its PM domain.
+ * @dev: Device to detach.
+ * @power_off: Currently not used
+ *
+ * Try to locate the generic PM domain that the device was previously
+ * attached to.  If one is found, the device is detached from it.
+ */
+static void genpd_dev_pm_detach(struct device *dev, bool power_off)
+{
+       struct generic_pm_domain *pd = NULL, *gpd;
+       int ret = 0;
+
+       if (!dev->pm_domain)
+               return;
+
+       mutex_lock(&gpd_list_lock);
+       list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
+               if (&gpd->domain == dev->pm_domain) {
+                       pd = gpd;
+                       break;
+               }
+       }
+       mutex_unlock(&gpd_list_lock);
+
+       if (!pd)
+               return;
+
+       dev_dbg(dev, "removing from PM domain %s\n", pd->name);
+
+       while (1) {
+               ret = pm_genpd_remove_device(pd, dev);
+               if (ret != -EAGAIN)
+                       break;
+               cond_resched();
+       }
+
+       if (ret < 0) {
+               dev_err(dev, "failed to remove from PM domain %s: %d\n",
+                       pd->name, ret);
+               return;
+       }
+
+       /* Check if PM domain can be powered off after removing this device. */
+       genpd_queue_power_off_work(pd);
+}
+
+/**
+ * genpd_dev_pm_attach - Attach a device to its PM domain using DT.
+ * @dev: Device to attach.
+ *
+ * Parse the device's OF node to find a PM domain specifier.  If one is
+ * found, the device is attached to the retrieved PM domain.
+ *
+ * Both generic and legacy Samsung-specific DT bindings are supported to keep
+ * backwards compatibility with existing DTBs.
+ *
+ * Returns 0 on a successful attach to the PM domain, or a negative error code.
+ */
+int genpd_dev_pm_attach(struct device *dev)
+{
+       struct of_phandle_args pd_args;
+       struct generic_pm_domain *pd;
+       int ret;
+
+       if (!dev->of_node)
+               return -ENODEV;
+
+       if (dev->pm_domain)
+               return -EEXIST;
+
+       ret = of_parse_phandle_with_args(dev->of_node, "power-domains",
+                                       "#power-domain-cells", 0, &pd_args);
+       if (ret < 0) {
+               if (ret != -ENOENT)
+                       return ret;
+
+               /*
+                * Try legacy Samsung-specific bindings
+                * (for backwards compatibility of DT ABI)
+                */
+               pd_args.args_count = 0;
+               pd_args.np = of_parse_phandle(dev->of_node,
+                                               "samsung,power-domain", 0);
+               if (!pd_args.np)
+                       return -ENOENT;
+       }
+
+       pd = of_genpd_get_from_provider(&pd_args);
+       if (IS_ERR(pd)) {
+               dev_dbg(dev, "%s() failed to find PM domain: %ld\n",
+                       __func__, PTR_ERR(pd));
+               of_node_put(dev->of_node);
+               return PTR_ERR(pd);
+       }
+
+       dev_dbg(dev, "adding to PM domain %s\n", pd->name);
+
+       while (1) {
+               ret = pm_genpd_add_device(pd, dev);
+               if (ret != -EAGAIN)
+                       break;
+               cond_resched();
+       }
+
+       if (ret < 0) {
+               dev_err(dev, "failed to add to PM domain %s: %d\n",
+                       pd->name, ret);
+               of_node_put(dev->of_node);
+               return ret;
+       }
+
+       dev->pm_domain->detach = genpd_dev_pm_detach;
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(genpd_dev_pm_attach);
+#endif
+
+
+/***        debugfs support        ***/
+
+#ifdef CONFIG_PM_ADVANCED_DEBUG
+#include <linux/pm.h>
+#include <linux/device.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/init.h>
+#include <linux/kobject.h>
+static struct dentry *pm_genpd_debugfs_dir;
+
+/*
+ * TODO: This function is a slightly modified version of rtpm_status_show
+ * from sysfs.c, but dependencies between PM_GENERIC_DOMAINS and PM_RUNTIME
+ * are too loose to generalize it.
+ */
+#ifdef CONFIG_PM_RUNTIME
+static void rtpm_status_str(struct seq_file *s, struct device *dev)
+{
+       static const char * const status_lookup[] = {
+               [RPM_ACTIVE] = "active",
+               [RPM_RESUMING] = "resuming",
+               [RPM_SUSPENDED] = "suspended",
+               [RPM_SUSPENDING] = "suspending"
+       };
+       const char *p = "";
+
+       if (dev->power.runtime_error)
+               p = "error";
+       else if (dev->power.disable_depth)
+               p = "unsupported";
+       else if (dev->power.runtime_status < ARRAY_SIZE(status_lookup))
+               p = status_lookup[dev->power.runtime_status];
+       else
+               WARN_ON(1);
+
+       seq_puts(s, p);
+}
+#else
+static void rtpm_status_str(struct seq_file *s, struct device *dev)
+{
+       seq_puts(s, "active");
+}
+#endif
+
+static int pm_genpd_summary_one(struct seq_file *s,
+               struct generic_pm_domain *gpd)
+{
+       static const char * const status_lookup[] = {
+               [GPD_STATE_ACTIVE] = "on",
+               [GPD_STATE_WAIT_MASTER] = "wait-master",
+               [GPD_STATE_BUSY] = "busy",
+               [GPD_STATE_REPEAT] = "off-in-progress",
+               [GPD_STATE_POWER_OFF] = "off"
+       };
+       struct pm_domain_data *pm_data;
+       const char *kobj_path;
+       struct gpd_link *link;
+       int ret;
+
+       ret = mutex_lock_interruptible(&gpd->lock);
+       if (ret)
+               return -ERESTARTSYS;
+
+       if (WARN_ON(gpd->status >= ARRAY_SIZE(status_lookup)))
+               goto exit;
+       seq_printf(s, "%-30s  %-15s  ", gpd->name, status_lookup[gpd->status]);
+
+       /*
+        * Modifications on the list require holding locks on both
+        * master and slave, so we are safe.
+        * Also gpd->name is immutable.
+        */
+       list_for_each_entry(link, &gpd->master_links, master_node) {
+               seq_printf(s, "%s", link->slave->name);
+               if (!list_is_last(&link->master_node, &gpd->master_links))
+                       seq_puts(s, ", ");
+       }
+
+       list_for_each_entry(pm_data, &gpd->dev_list, list_node) {
+               kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
+               if (kobj_path == NULL)
+                       continue;
+
+               seq_printf(s, "\n    %-50s  ", kobj_path);
+               rtpm_status_str(s, pm_data->dev);
+               kfree(kobj_path);
+       }
+
+       seq_puts(s, "\n");
+exit:
+       mutex_unlock(&gpd->lock);
+
+       return 0;
+}
+
+static int pm_genpd_summary_show(struct seq_file *s, void *data)
+{
+       struct generic_pm_domain *gpd;
+       int ret = 0;
+
+       seq_puts(s, "    domain                      status         slaves\n");
+       seq_puts(s, "           /device                                      runtime status\n");
+       seq_puts(s, "----------------------------------------------------------------------\n");
+
+       ret = mutex_lock_interruptible(&gpd_list_lock);
+       if (ret)
+               return -ERESTARTSYS;
+
+       list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
+               ret = pm_genpd_summary_one(s, gpd);
+               if (ret)
+                       break;
+       }
+       mutex_unlock(&gpd_list_lock);
+
+       return ret;
+}
+
+static int pm_genpd_summary_open(struct inode *inode, struct file *file)
+{
+       return single_open(file, pm_genpd_summary_show, NULL);
+}
+
+static const struct file_operations pm_genpd_summary_fops = {
+       .open = pm_genpd_summary_open,
+       .read = seq_read,
+       .llseek = seq_lseek,
+       .release = single_release,
+};
+
+static int __init pm_genpd_debug_init(void)
+{
+       struct dentry *d;
+
+       pm_genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
+
+       if (!pm_genpd_debugfs_dir)
+               return -ENOMEM;
+
+       d = debugfs_create_file("pm_genpd_summary", S_IRUGO,
+                       pm_genpd_debugfs_dir, NULL, &pm_genpd_summary_fops);
+       if (!d)
+               return -ENOMEM;
+
+       return 0;
+}
+late_initcall(pm_genpd_debug_init);
+
+static void __exit pm_genpd_debug_exit(void)
+{
+       debugfs_remove_recursive(pm_genpd_debugfs_dir);
+}
+__exitcall(pm_genpd_debug_exit);
+#endif /* CONFIG_PM_ADVANCED_DEBUG */
index a089e3bcdfbc5d7ee4aa6850b90f34e9fd2a9efb..d88a62e104d4e03b55cf778b025e544072d7c701 100644 (file)
@@ -42,7 +42,7 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
  * default_stop_ok - Default PM domain governor routine for stopping devices.
  * @dev: Device to check.
  */
-bool default_stop_ok(struct device *dev)
+static bool default_stop_ok(struct device *dev)
 {
        struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
        unsigned long flags;
@@ -229,10 +229,7 @@ static bool always_on_power_down_ok(struct dev_pm_domain *domain)
 
 #else /* !CONFIG_PM_RUNTIME */
 
-bool default_stop_ok(struct device *dev)
-{
-       return false;
-}
+static inline bool default_stop_ok(struct device *dev) { return false; }
 
 #define default_power_down_ok  NULL
 #define always_on_power_down_ok        NULL
index b67d9aef9fe431d58ad2da42e7a7328c87208aa4..44973196d3fd76d1b3a4b91bc728cf6cf87c2653 100644 (file)
@@ -540,7 +540,7 @@ static void async_resume_noirq(void *data, async_cookie_t cookie)
  * Call the "noirq" resume handlers for all devices in dpm_noirq_list and
  * enable device drivers to receive interrupts.
  */
-static void dpm_resume_noirq(pm_message_t state)
+void dpm_resume_noirq(pm_message_t state)
 {
        struct device *dev;
        ktime_t starttime = ktime_get();
@@ -662,7 +662,7 @@ static void async_resume_early(void *data, async_cookie_t cookie)
  * dpm_resume_early - Execute "early resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
  */
-static void dpm_resume_early(pm_message_t state)
+void dpm_resume_early(pm_message_t state)
 {
        struct device *dev;
        ktime_t starttime = ktime_get();
@@ -1093,7 +1093,7 @@ static int device_suspend_noirq(struct device *dev)
  * Prevent device drivers from receiving interrupts and call the "noirq" suspend
  * handlers for all non-sysdev devices.
  */
-static int dpm_suspend_noirq(pm_message_t state)
+int dpm_suspend_noirq(pm_message_t state)
 {
        ktime_t starttime = ktime_get();
        int error = 0;
@@ -1232,7 +1232,7 @@ static int device_suspend_late(struct device *dev)
  * dpm_suspend_late - Execute "late suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
  */
-static int dpm_suspend_late(pm_message_t state)
+int dpm_suspend_late(pm_message_t state)
 {
        ktime_t starttime = ktime_get();
        int error = 0;
index 95b181d1ca6df76d1b3a355e6a7bffca3aa26854..a9d26ed11bf479f5e4a32c9ddaac3f51287dedaa 100644 (file)
@@ -92,9 +92,6 @@
  *     wakeup_count - Report the number of wakeup events related to the device
  */
 
-static const char enabled[] = "enabled";
-static const char disabled[] = "disabled";
-
 const char power_group_name[] = "power";
 EXPORT_SYMBOL_GPL(power_group_name);
 
@@ -336,11 +333,14 @@ static DEVICE_ATTR(pm_qos_remote_wakeup, 0644,
 #endif /* CONFIG_PM_RUNTIME */
 
 #ifdef CONFIG_PM_SLEEP
+static const char _enabled[] = "enabled";
+static const char _disabled[] = "disabled";
+
 static ssize_t
 wake_show(struct device * dev, struct device_attribute *attr, char * buf)
 {
        return sprintf(buf, "%s\n", device_can_wakeup(dev)
-               ? (device_may_wakeup(dev) ? enabled : disabled)
+               ? (device_may_wakeup(dev) ? _enabled : _disabled)
                : "");
 }
 
@@ -357,11 +357,11 @@ wake_store(struct device * dev, struct device_attribute *attr,
        cp = memchr(buf, '\n', n);
        if (cp)
                len = cp - buf;
-       if (len == sizeof enabled - 1
-                       && strncmp(buf, enabled, sizeof enabled - 1) == 0)
+       if (len == sizeof _enabled - 1
+                       && strncmp(buf, _enabled, sizeof _enabled - 1) == 0)
                device_set_wakeup_enable(dev, 1);
-       else if (len == sizeof disabled - 1
-                       && strncmp(buf, disabled, sizeof disabled - 1) == 0)
+       else if (len == sizeof _disabled - 1
+                       && strncmp(buf, _disabled, sizeof _disabled - 1) == 0)
                device_set_wakeup_enable(dev, 0);
        else
                return -EINVAL;
@@ -570,7 +570,8 @@ static ssize_t async_show(struct device *dev, struct device_attribute *attr,
                          char *buf)
 {
        return sprintf(buf, "%s\n",
-                       device_async_suspend_enabled(dev) ? enabled : disabled);
+                       device_async_suspend_enabled(dev) ?
+                               _enabled : _disabled);
 }
 
 static ssize_t async_store(struct device *dev, struct device_attribute *attr,
@@ -582,9 +583,10 @@ static ssize_t async_store(struct device *dev, struct device_attribute *attr,
        cp = memchr(buf, '\n', n);
        if (cp)
                len = cp - buf;
-       if (len == sizeof enabled - 1 && strncmp(buf, enabled, len) == 0)
+       if (len == sizeof _enabled - 1 && strncmp(buf, _enabled, len) == 0)
                device_enable_async_suspend(dev);
-       else if (len == sizeof disabled - 1 && strncmp(buf, disabled, len) == 0)
+       else if (len == sizeof _disabled - 1 &&
+                strncmp(buf, _disabled, len) == 0)
                device_disable_async_suspend(dev);
        else
                return -EINVAL;
index eb1bd2ecad8bf9e3854660d4c7a643f760415d01..c2744b30d5d92e9dde512e492cf9fdf44f21b5ef 100644 (file)
@@ -24,6 +24,9 @@
  */
 bool events_check_enabled __read_mostly;
 
+/* If set and the system is suspending, terminate the suspend. */
+static bool pm_abort_suspend __read_mostly;
+
 /*
  * Combined counters of registered wakeup events and wakeup events in progress.
  * They need to be modified together atomically, so it's better to use one
@@ -719,7 +722,18 @@ bool pm_wakeup_pending(void)
                pm_print_active_wakeup_sources();
        }
 
-       return ret;
+       return ret || pm_abort_suspend;
+}
+
+void pm_system_wakeup(void)
+{
+       pm_abort_suspend = true;
+       freeze_wake();
+}
+
+void pm_wakeup_clear(void)
+{
+       pm_abort_suspend = false;
 }
 
 /**
index dbb8350ea8dc232d713a10c9a4179e221dbcbb45..8d98a329f6ea63a2daf179bb3f15e5307c6a0d13 100644 (file)
@@ -9,7 +9,7 @@
 #include <linux/syscore_ops.h>
 #include <linux/mutex.h>
 #include <linux/module.h>
-#include <linux/interrupt.h>
+#include <linux/suspend.h>
 #include <trace/events/power.h>
 
 static LIST_HEAD(syscore_ops_list);
@@ -54,9 +54,8 @@ int syscore_suspend(void)
        pr_debug("Checking wakeup interrupts\n");
 
        /* Return error code if there are any wakeup interrupts pending. */
-       ret = check_wakeup_irqs();
-       if (ret)
-               return ret;
+       if (pm_wakeup_pending())
+               return -EBUSY;
 
        WARN_ONCE(!irqs_disabled(),
                "Interrupts enabled before system core suspend.\n");
index ffe350f86bca570e879177c63395efff5de6587d..3489f8f5fadabee1b8db494c7b5dbd996bd70e33 100644 (file)
@@ -183,14 +183,14 @@ config CPU_FREQ_GOV_CONSERVATIVE
 
          If in doubt, say N.
 
-config GENERIC_CPUFREQ_CPU0
-       tristate "Generic CPU0 cpufreq driver"
+config CPUFREQ_DT
+       tristate "Generic DT based cpufreq driver"
        depends on HAVE_CLK && OF
-       # if CPU_THERMAL is on and THERMAL=m, CPU0 cannot be =y:
+       # if CPU_THERMAL is on and THERMAL=m, CPUFREQ_DT cannot be =y:
        depends on !CPU_THERMAL || THERMAL
        select PM_OPP
        help
-         This adds a generic cpufreq driver for CPU0 frequency management.
+         This adds a generic DT based cpufreq driver for frequency management.
          It supports both uniprocessor (UP) and symmetric multiprocessor (SMP)
          systems which share clock and voltage across all CPUs.
 
index 28c666c8014969925697abbf1746f74705c8c809..83a75dc84761a3c4e9a4e385c66d19ab17706b31 100644 (file)
@@ -92,7 +92,7 @@ config ARM_EXYNOS_CPU_FREQ_BOOST_SW
 
 config ARM_HIGHBANK_CPUFREQ
        tristate "Calxeda Highbank-based"
-       depends on ARCH_HIGHBANK && GENERIC_CPUFREQ_CPU0 && REGULATOR
+       depends on ARCH_HIGHBANK && CPUFREQ_DT && REGULATOR
        default m
        help
          This adds the CPUFreq driver for Calxeda Highbank SoC
index db6d9a2fea4d534f135af08880f229d511633c91..40c53dc1937ec6a4fc22e61f497939ba2ada54bd 100644 (file)
@@ -13,7 +13,7 @@ obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND)   += cpufreq_ondemand.o
 obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE)        += cpufreq_conservative.o
 obj-$(CONFIG_CPU_FREQ_GOV_COMMON)              += cpufreq_governor.o
 
-obj-$(CONFIG_GENERIC_CPUFREQ_CPU0)     += cpufreq-cpu0.o
+obj-$(CONFIG_CPUFREQ_DT)               += cpufreq-dt.o
 
 ##################################################################################
 # x86 drivers.
diff --git a/drivers/cpufreq/cpufreq-cpu0.c b/drivers/cpufreq/cpufreq-cpu0.c
deleted file mode 100644 (file)
index 0d2172b..0000000
+++ /dev/null
@@ -1,248 +0,0 @@
-/*
- * Copyright (C) 2012 Freescale Semiconductor, Inc.
- *
- * The OPP code in function cpu0_set_target() is reused from
- * drivers/cpufreq/omap-cpufreq.c
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#define pr_fmt(fmt)    KBUILD_MODNAME ": " fmt
-
-#include <linux/clk.h>
-#include <linux/cpu.h>
-#include <linux/cpu_cooling.h>
-#include <linux/cpufreq.h>
-#include <linux/cpumask.h>
-#include <linux/err.h>
-#include <linux/module.h>
-#include <linux/of.h>
-#include <linux/pm_opp.h>
-#include <linux/platform_device.h>
-#include <linux/regulator/consumer.h>
-#include <linux/slab.h>
-#include <linux/thermal.h>
-
-static unsigned int transition_latency;
-static unsigned int voltage_tolerance; /* in percentage */
-
-static struct device *cpu_dev;
-static struct clk *cpu_clk;
-static struct regulator *cpu_reg;
-static struct cpufreq_frequency_table *freq_table;
-static struct thermal_cooling_device *cdev;
-
-static int cpu0_set_target(struct cpufreq_policy *policy, unsigned int index)
-{
-       struct dev_pm_opp *opp;
-       unsigned long volt = 0, volt_old = 0, tol = 0;
-       unsigned int old_freq, new_freq;
-       long freq_Hz, freq_exact;
-       int ret;
-
-       freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000);
-       if (freq_Hz <= 0)
-               freq_Hz = freq_table[index].frequency * 1000;
-
-       freq_exact = freq_Hz;
-       new_freq = freq_Hz / 1000;
-       old_freq = clk_get_rate(cpu_clk) / 1000;
-
-       if (!IS_ERR(cpu_reg)) {
-               rcu_read_lock();
-               opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_Hz);
-               if (IS_ERR(opp)) {
-                       rcu_read_unlock();
-                       pr_err("failed to find OPP for %ld\n", freq_Hz);
-                       return PTR_ERR(opp);
-               }
-               volt = dev_pm_opp_get_voltage(opp);
-               rcu_read_unlock();
-               tol = volt * voltage_tolerance / 100;
-               volt_old = regulator_get_voltage(cpu_reg);
-       }
-
-       pr_debug("%u MHz, %ld mV --> %u MHz, %ld mV\n",
-                old_freq / 1000, volt_old ? volt_old / 1000 : -1,
-                new_freq / 1000, volt ? volt / 1000 : -1);
-
-       /* scaling up?  scale voltage before frequency */
-       if (!IS_ERR(cpu_reg) && new_freq > old_freq) {
-               ret = regulator_set_voltage_tol(cpu_reg, volt, tol);
-               if (ret) {
-                       pr_err("failed to scale voltage up: %d\n", ret);
-                       return ret;
-               }
-       }
-
-       ret = clk_set_rate(cpu_clk, freq_exact);
-       if (ret) {
-               pr_err("failed to set clock rate: %d\n", ret);
-               if (!IS_ERR(cpu_reg))
-                       regulator_set_voltage_tol(cpu_reg, volt_old, tol);
-               return ret;
-       }
-
-       /* scaling down?  scale voltage after frequency */
-       if (!IS_ERR(cpu_reg) && new_freq < old_freq) {
-               ret = regulator_set_voltage_tol(cpu_reg, volt, tol);
-               if (ret) {
-                       pr_err("failed to scale voltage down: %d\n", ret);
-                       clk_set_rate(cpu_clk, old_freq * 1000);
-               }
-       }
-
-       return ret;
-}
-
-static int cpu0_cpufreq_init(struct cpufreq_policy *policy)
-{
-       policy->clk = cpu_clk;
-       return cpufreq_generic_init(policy, freq_table, transition_latency);
-}
-
-static struct cpufreq_driver cpu0_cpufreq_driver = {
-       .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
-       .verify = cpufreq_generic_frequency_table_verify,
-       .target_index = cpu0_set_target,
-       .get = cpufreq_generic_get,
-       .init = cpu0_cpufreq_init,
-       .name = "generic_cpu0",
-       .attr = cpufreq_generic_attr,
-};
-
-static int cpu0_cpufreq_probe(struct platform_device *pdev)
-{
-       struct device_node *np;
-       int ret;
-
-       cpu_dev = get_cpu_device(0);
-       if (!cpu_dev) {
-               pr_err("failed to get cpu0 device\n");
-               return -ENODEV;
-       }
-
-       np = of_node_get(cpu_dev->of_node);
-       if (!np) {
-               pr_err("failed to find cpu0 node\n");
-               return -ENOENT;
-       }
-
-       cpu_reg = regulator_get_optional(cpu_dev, "cpu0");
-       if (IS_ERR(cpu_reg)) {
-               /*
-                * If cpu0 regulator supply node is present, but regulator is
-                * not yet registered, we should try defering probe.
-                */
-               if (PTR_ERR(cpu_reg) == -EPROBE_DEFER) {
-                       dev_dbg(cpu_dev, "cpu0 regulator not ready, retry\n");
-                       ret = -EPROBE_DEFER;
-                       goto out_put_node;
-               }
-               pr_warn("failed to get cpu0 regulator: %ld\n",
-                       PTR_ERR(cpu_reg));
-       }
-
-       cpu_clk = clk_get(cpu_dev, NULL);
-       if (IS_ERR(cpu_clk)) {
-               ret = PTR_ERR(cpu_clk);
-               pr_err("failed to get cpu0 clock: %d\n", ret);
-               goto out_put_reg;
-       }
-
-       /* OPPs might be populated at runtime, don't check for error here */
-       of_init_opp_table(cpu_dev);
-
-       ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
-       if (ret) {
-               pr_err("failed to init cpufreq table: %d\n", ret);
-               goto out_put_clk;
-       }
-
-       of_property_read_u32(np, "voltage-tolerance", &voltage_tolerance);
-
-       if (of_property_read_u32(np, "clock-latency", &transition_latency))
-               transition_latency = CPUFREQ_ETERNAL;
-
-       if (!IS_ERR(cpu_reg)) {
-               struct dev_pm_opp *opp;
-               unsigned long min_uV, max_uV;
-               int i;
-
-               /*
-                * OPP is maintained in order of increasing frequency, and
-                * freq_table initialised from OPP is therefore sorted in the
-                * same order.
-                */
-               for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
-                       ;
-               rcu_read_lock();
-               opp = dev_pm_opp_find_freq_exact(cpu_dev,
-                               freq_table[0].frequency * 1000, true);
-               min_uV = dev_pm_opp_get_voltage(opp);
-               opp = dev_pm_opp_find_freq_exact(cpu_dev,
-                               freq_table[i-1].frequency * 1000, true);
-               max_uV = dev_pm_opp_get_voltage(opp);
-               rcu_read_unlock();
-               ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV);
-               if (ret > 0)
-                       transition_latency += ret * 1000;
-       }
-
-       ret = cpufreq_register_driver(&cpu0_cpufreq_driver);
-       if (ret) {
-               pr_err("failed register driver: %d\n", ret);
-               goto out_free_table;
-       }
-
-       /*
-        * For now, just loading the cooling device;
-        * thermal DT code takes care of matching them.
-        */
-       if (of_find_property(np, "#cooling-cells", NULL)) {
-               cdev = of_cpufreq_cooling_register(np, cpu_present_mask);
-               if (IS_ERR(cdev))
-                       pr_err("running cpufreq without cooling device: %ld\n",
-                              PTR_ERR(cdev));
-       }
-
-       of_node_put(np);
-       return 0;
-
-out_free_table:
-       dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
-out_put_clk:
-       if (!IS_ERR(cpu_clk))
-               clk_put(cpu_clk);
-out_put_reg:
-       if (!IS_ERR(cpu_reg))
-               regulator_put(cpu_reg);
-out_put_node:
-       of_node_put(np);
-       return ret;
-}
-
-static int cpu0_cpufreq_remove(struct platform_device *pdev)
-{
-       cpufreq_cooling_unregister(cdev);
-       cpufreq_unregister_driver(&cpu0_cpufreq_driver);
-       dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
-
-       return 0;
-}
-
-static struct platform_driver cpu0_cpufreq_platdrv = {
-       .driver = {
-               .name   = "cpufreq-cpu0",
-               .owner  = THIS_MODULE,
-       },
-       .probe          = cpu0_cpufreq_probe,
-       .remove         = cpu0_cpufreq_remove,
-};
-module_platform_driver(cpu0_cpufreq_platdrv);
-
-MODULE_AUTHOR("Shawn Guo <shawn.guo@linaro.org>");
-MODULE_DESCRIPTION("Generic CPU0 cpufreq driver");
-MODULE_LICENSE("GPL");
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
new file mode 100644 (file)
index 0000000..6bbb8b9
--- /dev/null
@@ -0,0 +1,364 @@
+/*
+ * Copyright (C) 2012 Freescale Semiconductor, Inc.
+ *
+ * Copyright (C) 2014 Linaro.
+ * Viresh Kumar <viresh.kumar@linaro.org>
+ *
+ * The OPP code in function set_target() is reused from
+ * drivers/cpufreq/omap-cpufreq.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt)    KBUILD_MODNAME ": " fmt
+
+#include <linux/clk.h>
+#include <linux/cpu.h>
+#include <linux/cpu_cooling.h>
+#include <linux/cpufreq.h>
+#include <linux/cpumask.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/pm_opp.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/consumer.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+
+struct private_data {
+       struct device *cpu_dev;
+       struct regulator *cpu_reg;
+       struct thermal_cooling_device *cdev;
+       unsigned int voltage_tolerance; /* in percentage */
+};
+
+static int set_target(struct cpufreq_policy *policy, unsigned int index)
+{
+       struct dev_pm_opp *opp;
+       struct cpufreq_frequency_table *freq_table = policy->freq_table;
+       struct clk *cpu_clk = policy->clk;
+       struct private_data *priv = policy->driver_data;
+       struct device *cpu_dev = priv->cpu_dev;
+       struct regulator *cpu_reg = priv->cpu_reg;
+       unsigned long volt = 0, volt_old = 0, tol = 0;
+       unsigned int old_freq, new_freq;
+       long freq_Hz, freq_exact;
+       int ret;
+
+       freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000);
+       if (freq_Hz <= 0)
+               freq_Hz = freq_table[index].frequency * 1000;
+
+       freq_exact = freq_Hz;
+       new_freq = freq_Hz / 1000;
+       old_freq = clk_get_rate(cpu_clk) / 1000;
+
+       if (!IS_ERR(cpu_reg)) {
+               rcu_read_lock();
+               opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_Hz);
+               if (IS_ERR(opp)) {
+                       rcu_read_unlock();
+                       dev_err(cpu_dev, "failed to find OPP for %ld\n",
+                               freq_Hz);
+                       return PTR_ERR(opp);
+               }
+               volt = dev_pm_opp_get_voltage(opp);
+               rcu_read_unlock();
+               tol = volt * priv->voltage_tolerance / 100;
+               volt_old = regulator_get_voltage(cpu_reg);
+       }
+
+       dev_dbg(cpu_dev, "%u MHz, %ld mV --> %u MHz, %ld mV\n",
+               old_freq / 1000, volt_old ? volt_old / 1000 : -1,
+               new_freq / 1000, volt ? volt / 1000 : -1);
+
+       /* scaling up?  scale voltage before frequency */
+       if (!IS_ERR(cpu_reg) && new_freq > old_freq) {
+               ret = regulator_set_voltage_tol(cpu_reg, volt, tol);
+               if (ret) {
+                       dev_err(cpu_dev, "failed to scale voltage up: %d\n",
+                               ret);
+                       return ret;
+               }
+       }
+
+       ret = clk_set_rate(cpu_clk, freq_exact);
+       if (ret) {
+               dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);
+               if (!IS_ERR(cpu_reg))
+                       regulator_set_voltage_tol(cpu_reg, volt_old, tol);
+               return ret;
+       }
+
+       /* scaling down?  scale voltage after frequency */
+       if (!IS_ERR(cpu_reg) && new_freq < old_freq) {
+               ret = regulator_set_voltage_tol(cpu_reg, volt, tol);
+               if (ret) {
+                       dev_err(cpu_dev, "failed to scale voltage down: %d\n",
+                               ret);
+                       clk_set_rate(cpu_clk, old_freq * 1000);
+               }
+       }
+
+       return ret;
+}
+
+static int allocate_resources(int cpu, struct device **cdev,
+                             struct regulator **creg, struct clk **cclk)
+{
+       struct device *cpu_dev;
+       struct regulator *cpu_reg;
+       struct clk *cpu_clk;
+       int ret = 0;
+       char *reg_cpu0 = "cpu0", *reg_cpu = "cpu", *reg;
+
+       cpu_dev = get_cpu_device(cpu);
+       if (!cpu_dev) {
+               pr_err("failed to get cpu%d device\n", cpu);
+               return -ENODEV;
+       }
+
+       /* Try "cpu0" for older DTs */
+       if (!cpu)
+               reg = reg_cpu0;
+       else
+               reg = reg_cpu;
+
+try_again:
+       cpu_reg = regulator_get_optional(cpu_dev, reg);
+       if (IS_ERR(cpu_reg)) {
+               /*
+                * If cpu's regulator supply node is present, but regulator is
+                * not yet registered, we should try deferring probe.
+                */
+               if (PTR_ERR(cpu_reg) == -EPROBE_DEFER) {
+                       dev_dbg(cpu_dev, "cpu%d regulator not ready, retry\n",
+                               cpu);
+                       return -EPROBE_DEFER;
+               }
+
+               /* Try with "cpu-supply" */
+               if (reg == reg_cpu0) {
+                       reg = reg_cpu;
+                       goto try_again;
+               }
+
+               dev_warn(cpu_dev, "failed to get cpu%d regulator: %ld\n",
+                        cpu, PTR_ERR(cpu_reg));
+       }
+
+       cpu_clk = clk_get(cpu_dev, NULL);
+       if (IS_ERR(cpu_clk)) {
+               /* put regulator */
+               if (!IS_ERR(cpu_reg))
+                       regulator_put(cpu_reg);
+
+               ret = PTR_ERR(cpu_clk);
+
+               /*
+                * If cpu's clk node is present, but clock is not yet
+                * registered, we should try deferring probe.
+                */
+               if (ret == -EPROBE_DEFER)
+                       dev_dbg(cpu_dev, "cpu%d clock not ready, retry\n", cpu);
+               else
+                       dev_err(cpu_dev, "failed to get cpu%d clock: %d\n", cpu,
+                               ret);
+       } else {
+               *cdev = cpu_dev;
+               *creg = cpu_reg;
+               *cclk = cpu_clk;
+       }
+
+       return ret;
+}
+
+static int cpufreq_init(struct cpufreq_policy *policy)
+{
+       struct cpufreq_frequency_table *freq_table;
+       struct thermal_cooling_device *cdev;
+       struct device_node *np;
+       struct private_data *priv;
+       struct device *cpu_dev;
+       struct regulator *cpu_reg;
+       struct clk *cpu_clk;
+       unsigned int transition_latency;
+       int ret;
+
+       ret = allocate_resources(policy->cpu, &cpu_dev, &cpu_reg, &cpu_clk);
+       if (ret) {
+               pr_err("%s: Failed to allocate resources: %d\n", __func__, ret);
+               return ret;
+       }
+
+       np = of_node_get(cpu_dev->of_node);
+       if (!np) {
+               dev_err(cpu_dev, "failed to find cpu%d node\n", policy->cpu);
+               ret = -ENOENT;
+               goto out_put_reg_clk;
+       }
+
+       /* OPPs might be populated at runtime, don't check for error here */
+       of_init_opp_table(cpu_dev);
+
+       ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+       if (ret) {
+               dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+               goto out_put_node;
+       }
+
+       priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+       if (!priv) {
+               ret = -ENOMEM;
+               goto out_free_table;
+       }
+
+       of_property_read_u32(np, "voltage-tolerance", &priv->voltage_tolerance);
+
+       if (of_property_read_u32(np, "clock-latency", &transition_latency))
+               transition_latency = CPUFREQ_ETERNAL;
+
+       if (!IS_ERR(cpu_reg)) {
+               struct dev_pm_opp *opp;
+               unsigned long min_uV, max_uV;
+               int i;
+
+               /*
+                * OPP is maintained in order of increasing frequency, and
+                * freq_table initialised from OPP is therefore sorted in the
+                * same order.
+                */
+               for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
+                       ;
+               rcu_read_lock();
+               opp = dev_pm_opp_find_freq_exact(cpu_dev,
+                               freq_table[0].frequency * 1000, true);
+               min_uV = dev_pm_opp_get_voltage(opp);
+               opp = dev_pm_opp_find_freq_exact(cpu_dev,
+                               freq_table[i-1].frequency * 1000, true);
+               max_uV = dev_pm_opp_get_voltage(opp);
+               rcu_read_unlock();
+               ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV);
+               if (ret > 0)
+                       transition_latency += ret * 1000;
+       }
+
+       /*
+        * For now, just loading the cooling device;
+        * thermal DT code takes care of matching them.
+        */
+       if (of_find_property(np, "#cooling-cells", NULL)) {
+               cdev = of_cpufreq_cooling_register(np, cpu_present_mask);
+               if (IS_ERR(cdev))
+                       dev_err(cpu_dev,
+                               "running cpufreq without cooling device: %ld\n",
+                               PTR_ERR(cdev));
+               else
+                       priv->cdev = cdev;
+       }
+
+       priv->cpu_dev = cpu_dev;
+       priv->cpu_reg = cpu_reg;
+       policy->driver_data = priv;
+
+       policy->clk = cpu_clk;
+       ret = cpufreq_generic_init(policy, freq_table, transition_latency);
+       if (ret)
+               goto out_cooling_unregister;
+
+       of_node_put(np);
+
+       return 0;
+
+out_cooling_unregister:
+       cpufreq_cooling_unregister(priv->cdev);
+       kfree(priv);
+out_free_table:
+       dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+out_put_node:
+       of_node_put(np);
+out_put_reg_clk:
+       clk_put(cpu_clk);
+       if (!IS_ERR(cpu_reg))
+               regulator_put(cpu_reg);
+
+       return ret;
+}
+
+static int cpufreq_exit(struct cpufreq_policy *policy)
+{
+       struct private_data *priv = policy->driver_data;
+
+       cpufreq_cooling_unregister(priv->cdev);
+       dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+       clk_put(policy->clk);
+       if (!IS_ERR(priv->cpu_reg))
+               regulator_put(priv->cpu_reg);
+       kfree(priv);
+
+       return 0;
+}
+
+static struct cpufreq_driver dt_cpufreq_driver = {
+       .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
+       .verify = cpufreq_generic_frequency_table_verify,
+       .target_index = set_target,
+       .get = cpufreq_generic_get,
+       .init = cpufreq_init,
+       .exit = cpufreq_exit,
+       .name = "cpufreq-dt",
+       .attr = cpufreq_generic_attr,
+};
+
+static int dt_cpufreq_probe(struct platform_device *pdev)
+{
+       struct device *cpu_dev;
+       struct regulator *cpu_reg;
+       struct clk *cpu_clk;
+       int ret;
+
+       /*
+        * All per-cluster (CPUs sharing clock/voltages) initialization is done
+        * from ->init(). In probe(), we just need to make sure that clk and
+        * regulators are available. Else defer probe and retry.
+        *
+        * FIXME: Is checking this only for CPU0 sufficient?
+        */
+       ret = allocate_resources(0, &cpu_dev, &cpu_reg, &cpu_clk);
+       if (ret)
+               return ret;
+
+       clk_put(cpu_clk);
+       if (!IS_ERR(cpu_reg))
+               regulator_put(cpu_reg);
+
+       ret = cpufreq_register_driver(&dt_cpufreq_driver);
+       if (ret)
+               dev_err(cpu_dev, "failed to register driver: %d\n", ret);
+
+       return ret;
+}
+
+static int dt_cpufreq_remove(struct platform_device *pdev)
+{
+       cpufreq_unregister_driver(&dt_cpufreq_driver);
+       return 0;
+}
+
+static struct platform_driver dt_cpufreq_platdrv = {
+       .driver = {
+               .name   = "cpufreq-dt",
+               .owner  = THIS_MODULE,
+       },
+       .probe          = dt_cpufreq_probe,
+       .remove         = dt_cpufreq_remove,
+};
+module_platform_driver(dt_cpufreq_platdrv);
+
+MODULE_AUTHOR("Viresh Kumar <viresh.kumar@linaro.org>");
+MODULE_AUTHOR("Shawn Guo <shawn.guo@linaro.org>");
+MODULE_DESCRIPTION("Generic cpufreq driver");
+MODULE_LICENSE("GPL");
index 61190f6b48299ae7f89dbf6933b5e3d2333d84c1..24bf76fba141197eda0c2d4ec24ecbf4311860d1 100644 (file)
@@ -437,7 +437,7 @@ static struct cpufreq_governor *__find_governor(const char *str_governor)
        struct cpufreq_governor *t;
 
        list_for_each_entry(t, &cpufreq_governor_list, governor_list)
-               if (!strnicmp(str_governor, t->name, CPUFREQ_NAME_LEN))
+               if (!strncasecmp(str_governor, t->name, CPUFREQ_NAME_LEN))
                        return t;
 
        return NULL;
@@ -455,10 +455,10 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy,
                goto out;
 
        if (cpufreq_driver->setpolicy) {
-               if (!strnicmp(str_governor, "performance", CPUFREQ_NAME_LEN)) {
+               if (!strncasecmp(str_governor, "performance", CPUFREQ_NAME_LEN)) {
                        *policy = CPUFREQ_POLICY_PERFORMANCE;
                        err = 0;
-               } else if (!strnicmp(str_governor, "powersave",
+               } else if (!strncasecmp(str_governor, "powersave",
                                                CPUFREQ_NAME_LEN)) {
                        *policy = CPUFREQ_POLICY_POWERSAVE;
                        err = 0;
@@ -1382,7 +1382,7 @@ static int __cpufreq_remove_dev_prepare(struct device *dev,
                if (!cpufreq_suspended)
                        pr_debug("%s: policy Kobject moved to cpu: %d from: %d\n",
                                 __func__, new_cpu, cpu);
-       } else if (cpufreq_driver->stop_cpu && cpufreq_driver->setpolicy) {
+       } else if (cpufreq_driver->stop_cpu) {
                cpufreq_driver->stop_cpu(policy);
        }
 
index 61a54310a1b9df6923a1fbfbb3ab6e1872c914eb..843ec824fd91051db1af8751d155018261d9043c 100644 (file)
@@ -127,7 +127,7 @@ int exynos4210_cpufreq_init(struct exynos_dvfs_info *info)
         * dependencies on platform headers. It is necessary to enable
         * Exynos multi-platform support and will be removed together with
         * this whole driver as soon as Exynos gets migrated to use
-        * cpufreq-cpu0 driver.
+        * cpufreq-dt driver.
         */
        np = of_find_compatible_node(NULL, NULL, "samsung,exynos4210-clock");
        if (!np) {
index 351a2074cfea784c8a522b3fa6080a67c59e7180..9e78a850e29f4dc967fceb422f22cdc6a571355c 100644 (file)
@@ -174,7 +174,7 @@ int exynos4x12_cpufreq_init(struct exynos_dvfs_info *info)
         * dependencies on platform headers. It is necessary to enable
         * Exynos multi-platform support and will be removed together with
         * this whole driver as soon as Exynos gets migrated to use
-        * cpufreq-cpu0 driver.
+        * cpufreq-dt driver.
         */
        np = of_find_compatible_node(NULL, NULL, "samsung,exynos4412-clock");
        if (!np) {
index c91ce69dc63101d3a1f866b906acaf780c20070a..3eafdc7ba7877f4eedc2484fa7208c32f5cf49eb 100644 (file)
@@ -153,7 +153,7 @@ int exynos5250_cpufreq_init(struct exynos_dvfs_info *info)
         * dependencies on platform headers. It is necessary to enable
         * Exynos multi-platform support and will be removed together with
         * this whole driver as soon as Exynos gets migrated to use
-        * cpufreq-cpu0 driver.
+        * cpufreq-dt driver.
         */
        np = of_find_compatible_node(NULL, NULL, "samsung,exynos5250-clock");
        if (!np) {
index bf8902a0866dd4d767cdab4ba45d84ca397033fe..ec399ad2f059379891a4d384e5b24dc3b9c3ab59 100644 (file)
@@ -6,7 +6,7 @@
  * published by the Free Software Foundation.
  *
  * This driver provides the clk notifier callbacks that are used when
- * the cpufreq-cpu0 driver changes to frequency to alert the highbank
+ * the cpufreq-dt driver changes frequency to alert the highbank
  * EnergyCore Management Engine (ECME) about the need to change
  * voltage. The ECME interfaces with the actual voltage regulators.
  */
@@ -60,7 +60,7 @@ static struct notifier_block hb_cpufreq_clk_nb = {
 
 static int hb_cpufreq_driver_init(void)
 {
-       struct platform_device_info devinfo = { .name = "cpufreq-cpu0", };
+       struct platform_device_info devinfo = { .name = "cpufreq-dt", };
        struct device *cpu_dev;
        struct clk *cpu_clk;
        struct device_node *np;
@@ -95,7 +95,7 @@ static int hb_cpufreq_driver_init(void)
                goto out_put_node;
        }
 
-       /* Instantiate cpufreq-cpu0 */
+       /* Instantiate cpufreq-dt */
        platform_device_register_full(&devinfo);
 
 out_put_node:
index 379c0837f5a97d510f12077271f802bb832ace8b..2dfd4fdb5a52bd8d82fe2603acf550732803798b 100644 (file)
@@ -26,6 +26,7 @@
 #include <linux/cpufreq.h>
 #include <linux/smp.h>
 #include <linux/of.h>
+#include <linux/reboot.h>
 
 #include <asm/cputhreads.h>
 #include <asm/firmware.h>
@@ -35,6 +36,7 @@
 #define POWERNV_MAX_PSTATES    256
 
 static struct cpufreq_frequency_table powernv_freqs[POWERNV_MAX_PSTATES+1];
+static bool rebooting;
 
 /*
  * Note: The set of pstates consists of contiguous integers, the
@@ -283,6 +285,15 @@ static void set_pstate(void *freq_data)
        set_pmspr(SPRN_PMCR, val);
 }
 
+/*
+ * get_nominal_index: Returns the index corresponding to the nominal
+ * pstate in the cpufreq table
+ */
+static inline unsigned int get_nominal_index(void)
+{
+       return powernv_pstate_info.max - powernv_pstate_info.nominal;
+}
+
 /*
  * powernv_cpufreq_target_index: Sets the frequency corresponding to
  * the cpufreq table entry indexed by new_index on the cpus in the
@@ -293,6 +304,9 @@ static int powernv_cpufreq_target_index(struct cpufreq_policy *policy,
 {
        struct powernv_smp_call_data freq_data;
 
+       if (unlikely(rebooting) && new_index != get_nominal_index())
+               return 0;
+
        freq_data.pstate_id = powernv_freqs[new_index].driver_data;
 
        /*
@@ -317,6 +331,33 @@ static int powernv_cpufreq_cpu_init(struct cpufreq_policy *policy)
        return cpufreq_table_validate_and_show(policy, powernv_freqs);
 }
 
+static int powernv_cpufreq_reboot_notifier(struct notifier_block *nb,
+                               unsigned long action, void *unused)
+{
+       int cpu;
+       struct cpufreq_policy cpu_policy;
+
+       rebooting = true;
+       for_each_online_cpu(cpu) {
+               cpufreq_get_policy(&cpu_policy, cpu);
+               powernv_cpufreq_target_index(&cpu_policy, get_nominal_index());
+       }
+
+       return NOTIFY_DONE;
+}
+
+static struct notifier_block powernv_cpufreq_reboot_nb = {
+       .notifier_call = powernv_cpufreq_reboot_notifier,
+};
+
+static void powernv_cpufreq_stop_cpu(struct cpufreq_policy *policy)
+{
+       struct powernv_smp_call_data freq_data;
+
+       freq_data.pstate_id = powernv_pstate_info.min;
+       smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1);
+}
+
 static struct cpufreq_driver powernv_cpufreq_driver = {
        .name           = "powernv-cpufreq",
        .flags          = CPUFREQ_CONST_LOOPS,
@@ -324,6 +365,7 @@ static struct cpufreq_driver powernv_cpufreq_driver = {
        .verify         = cpufreq_generic_frequency_table_verify,
        .target_index   = powernv_cpufreq_target_index,
        .get            = powernv_cpufreq_get,
+       .stop_cpu       = powernv_cpufreq_stop_cpu,
        .attr           = powernv_cpu_freq_attr,
 };
 
@@ -342,12 +384,14 @@ static int __init powernv_cpufreq_init(void)
                return rc;
        }
 
+       register_reboot_notifier(&powernv_cpufreq_reboot_nb);
        return cpufreq_register_driver(&powernv_cpufreq_driver);
 }
 module_init(powernv_cpufreq_init);
 
 static void __exit powernv_cpufreq_exit(void)
 {
+       unregister_reboot_notifier(&powernv_cpufreq_reboot_nb);
        cpufreq_unregister_driver(&powernv_cpufreq_driver);
 }
 module_exit(powernv_cpufreq_exit);
index 3607070797af307f959d0f413a2497acd262ba95..bee5df7794d33d1078116c8ac2f3618075230c8c 100644 (file)
@@ -199,7 +199,6 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
        }
 
        data->table = table;
-       per_cpu(cpu_data, cpu) = data;
 
        /* update ->cpus if we have cluster, no harm if not */
        cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
index 3f9791f07b8ea05f745a2bcbce6b738379443087..567caa6313fffa447f19992d7af5df940c4d0068 100644 (file)
@@ -597,7 +597,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
         * and dependencies on platform headers. It is necessary to enable
         * S5PV210 multi-platform support and will be removed together with
         * this whole driver as soon as S5PV210 gets migrated to use
-        * cpufreq-cpu0 driver.
+        * cpufreq-dt driver.
         */
        np = of_find_compatible_node(NULL, NULL, "samsung,s5pv210-clock");
        if (!np) {
index 32748c36c477099cf00344b3410dd8c2331b9404..c5029c1209b4c0bbb6fe52b577345138b788de13 100644 (file)
@@ -25,11 +25,19 @@ config CPU_IDLE_GOV_MENU
        bool "Menu governor (for tickless system)"
        default y
 
+config DT_IDLE_STATES
+       bool
+
 menu "ARM CPU Idle Drivers"
 depends on ARM
 source "drivers/cpuidle/Kconfig.arm"
 endmenu
 
+menu "ARM64 CPU Idle Drivers"
+depends on ARM64
+source "drivers/cpuidle/Kconfig.arm64"
+endmenu
+
 menu "MIPS CPU Idle Drivers"
 depends on MIPS
 source "drivers/cpuidle/Kconfig.mips"
index 58bcd0d166ec3a4f42d00fb24d086512bae11710..8c16ab20fb15594cc409860ebb83a423d2dde2d6 100644 (file)
@@ -7,6 +7,7 @@ config ARM_BIG_LITTLE_CPUIDLE
        depends on MCPM
        select ARM_CPU_SUSPEND
        select CPU_IDLE_MULTIPLE_DRIVERS
+       select DT_IDLE_STATES
        help
          Select this option to enable CPU idle driver for big.LITTLE based
          ARM systems. Driver manages CPUs coordination through MCPM and
diff --git a/drivers/cpuidle/Kconfig.arm64 b/drivers/cpuidle/Kconfig.arm64
new file mode 100644 (file)
index 0000000..d0a08ed
--- /dev/null
@@ -0,0 +1,14 @@
+#
+# ARM64 CPU Idle drivers
+#
+
+config ARM64_CPUIDLE
+       bool "Generic ARM64 CPU idle Driver"
+       select ARM64_CPU_SUSPEND
+       select DT_IDLE_STATES
+       help
+         Select this to enable generic cpuidle driver for ARM64.
+         It provides a generic idle driver whose idle states are configured
+         at run-time through DT nodes. The CPUidle suspend backend is
+         initialized by calling the CPU operations init idle hook
+         provided by architecture code.
index 11edb31c55e9862aa2e21e0df073f2d7dd49b721..4d177b916f75224325e0c37991f82d622e8e6392 100644 (file)
@@ -4,6 +4,7 @@
 
 obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
 obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
+obj-$(CONFIG_DT_IDLE_STATES)             += dt_idle_states.o
 
 ##################################################################################
 # ARM SoC drivers
@@ -21,6 +22,10 @@ obj-$(CONFIG_ARM_EXYNOS_CPUIDLE)        += cpuidle-exynos.o
 # MIPS drivers
 obj-$(CONFIG_MIPS_CPS_CPUIDLE)         += cpuidle-cps.o
 
+###############################################################################
+# ARM64 drivers
+obj-$(CONFIG_ARM64_CPUIDLE)            += cpuidle-arm64.o
+
 ###############################################################################
 # POWERPC drivers
 obj-$(CONFIG_PSERIES_CPUIDLE)          += cpuidle-pseries.o
diff --git a/drivers/cpuidle/cpuidle-arm64.c b/drivers/cpuidle/cpuidle-arm64.c
new file mode 100644 (file)
index 0000000..50997ea
--- /dev/null
@@ -0,0 +1,133 @@
+/*
+ * ARM64 generic CPU idle driver.
+ *
+ * Copyright (C) 2014 ARM Ltd.
+ * Author: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) "CPUidle arm64: " fmt
+
+#include <linux/cpuidle.h>
+#include <linux/cpumask.h>
+#include <linux/cpu_pm.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+
+#include <asm/cpuidle.h>
+#include <asm/suspend.h>
+
+#include "dt_idle_states.h"
+
+/*
+ * arm64_enter_idle_state - Programs CPU to enter the specified state
+ *
+ * dev: cpuidle device
+ * drv: cpuidle driver
+ * idx: state index
+ *
+ * Called from the CPUidle framework to program the device to the
+ * specified target state selected by the governor.
+ */
+static int arm64_enter_idle_state(struct cpuidle_device *dev,
+                                 struct cpuidle_driver *drv, int idx)
+{
+       int ret;
+
+       if (!idx) {
+               cpu_do_idle();
+               return idx;
+       }
+
+       ret = cpu_pm_enter();
+       if (!ret) {
+               /*
+                * Pass idle state index to cpu_suspend which in turn will
+                * call the CPU ops suspend protocol with idle index as a
+                * parameter.
+                */
+               ret = cpu_suspend(idx);
+
+               cpu_pm_exit();
+       }
+
+       return ret ? -1 : idx;
+}
+
+static struct cpuidle_driver arm64_idle_driver = {
+       .name = "arm64_idle",
+       .owner = THIS_MODULE,
+       /*
+        * State at index 0 is standby wfi and considered standard
+        * on all ARM platforms. If in some platforms simple wfi
+        * can't be used as "state 0", DT bindings must be implemented
+        * to work around this issue and allow installing a special
+        * handler for idle state index 0.
+        */
+       .states[0] = {
+               .enter                  = arm64_enter_idle_state,
+               .exit_latency           = 1,
+               .target_residency       = 1,
+               .power_usage            = UINT_MAX,
+               .flags                  = CPUIDLE_FLAG_TIME_VALID,
+               .name                   = "WFI",
+               .desc                   = "ARM64 WFI",
+       }
+};
+
+static const struct of_device_id arm64_idle_state_match[] __initconst = {
+       { .compatible = "arm,idle-state",
+         .data = arm64_enter_idle_state },
+       { },
+};
+
+/*
+ * arm64_idle_init
+ *
+ * Registers the arm64 specific cpuidle driver with the cpuidle
+ * framework. It relies on core code to parse the idle states
+ * and initialize them using driver data structures accordingly.
+ */
+static int __init arm64_idle_init(void)
+{
+       int cpu, ret;
+       struct cpuidle_driver *drv = &arm64_idle_driver;
+
+       /*
+        * Initialize idle states data, starting at index 1.
+        * This driver is DT only, if no DT idle states are detected (ret == 0)
+        * let the driver initialization fail accordingly since there is no
+        * reason to initialize the idle driver if only wfi is supported.
+        */
+       ret = dt_init_idle_driver(drv, arm64_idle_state_match, 1);
+       if (ret <= 0) {
+               if (ret)
+                       pr_err("failed to initialize idle states\n");
+               return ret ? : -ENODEV;
+       }
+
+       /*
+        * Call arch CPU operations in order to initialize
+        * idle states suspend back-end specific data
+        */
+       for_each_possible_cpu(cpu) {
+               ret = cpu_init_idle(cpu);
+               if (ret) {
+                       pr_err("CPU %d failed to init idle CPU ops\n", cpu);
+                       return ret;
+               }
+       }
+
+       ret = cpuidle_register(drv, NULL);
+       if (ret) {
+               pr_err("failed to register cpuidle driver\n");
+               return ret;
+       }
+
+       return 0;
+}
+device_initcall(arm64_idle_init);
index ef94c3b81f18048c6feee67368d2fd24da416424..fbc00a1d3c486a5e9d98f31ebecaf5149b2f3d5c 100644 (file)
@@ -24,6 +24,8 @@
 #include <asm/smp_plat.h>
 #include <asm/suspend.h>
 
+#include "dt_idle_states.h"
+
 static int bl_enter_powerdown(struct cpuidle_device *dev,
                              struct cpuidle_driver *drv, int idx);
 
@@ -73,6 +75,12 @@ static struct cpuidle_driver bl_idle_little_driver = {
        .state_count = 2,
 };
 
+static const struct of_device_id bl_idle_state_match[] __initconst = {
+       { .compatible = "arm,idle-state",
+         .data = bl_enter_powerdown },
+       { },
+};
+
 static struct cpuidle_driver bl_idle_big_driver = {
        .name = "big_idle",
        .owner = THIS_MODULE,
@@ -159,6 +167,7 @@ static int __init bl_idle_driver_init(struct cpuidle_driver *drv, int part_id)
 static const struct of_device_id compatible_machine_match[] = {
        { .compatible = "arm,vexpress,v2p-ca15_a7" },
        { .compatible = "samsung,exynos5420" },
+       { .compatible = "samsung,exynos5800" },
        {},
 };
 
@@ -190,6 +199,17 @@ static int __init bl_idle_init(void)
        if (ret)
                goto out_uninit_little;
 
+       /* Start at index 1, index 0 standard WFI */
+       ret = dt_init_idle_driver(&bl_idle_big_driver, bl_idle_state_match, 1);
+       if (ret < 0)
+               goto out_uninit_big;
+
+       /* Start at index 1, index 0 standard WFI */
+       ret = dt_init_idle_driver(&bl_idle_little_driver,
+                                 bl_idle_state_match, 1);
+       if (ret < 0)
+               goto out_uninit_big;
+
        ret = cpuidle_register(&bl_idle_little_driver, NULL);
        if (ret)
                goto out_uninit_big;
diff --git a/drivers/cpuidle/dt_idle_states.c b/drivers/cpuidle/dt_idle_states.c
new file mode 100644 (file)
index 0000000..52f4d11
--- /dev/null
@@ -0,0 +1,213 @@
+/*
+ * DT idle states parsing code.
+ *
+ * Copyright (C) 2014 ARM Ltd.
+ * Author: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) "DT idle-states: " fmt
+
+#include <linux/cpuidle.h>
+#include <linux/cpumask.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+#include "dt_idle_states.h"
+
+static int init_state_node(struct cpuidle_state *idle_state,
+                          const struct of_device_id *matches,
+                          struct device_node *state_node)
+{
+       int err;
+       const struct of_device_id *match_id;
+
+       match_id = of_match_node(matches, state_node);
+       if (!match_id)
+               return -ENODEV;
+       /*
+        * CPUidle drivers are expected to initialize the const void *data
+        * pointer of the passed in struct of_device_id array to the idle
+        * state enter function.
+        */
+       idle_state->enter = match_id->data;
+
+       err = of_property_read_u32(state_node, "wakeup-latency-us",
+                                  &idle_state->exit_latency);
+       if (err) {
+               u32 entry_latency, exit_latency;
+
+               err = of_property_read_u32(state_node, "entry-latency-us",
+                                          &entry_latency);
+               if (err) {
+                       pr_debug(" * %s missing entry-latency-us property\n",
+                                state_node->full_name);
+                       return -EINVAL;
+               }
+
+               err = of_property_read_u32(state_node, "exit-latency-us",
+                                          &exit_latency);
+               if (err) {
+                       pr_debug(" * %s missing exit-latency-us property\n",
+                                state_node->full_name);
+                       return -EINVAL;
+               }
+               /*
+                * If wakeup-latency-us is missing, default to entry+exit
+                * latencies as defined in idle states bindings
+                */
+               idle_state->exit_latency = entry_latency + exit_latency;
+       }
+
+       err = of_property_read_u32(state_node, "min-residency-us",
+                                  &idle_state->target_residency);
+       if (err) {
+               pr_debug(" * %s missing min-residency-us property\n",
+                            state_node->full_name);
+               return -EINVAL;
+       }
+
+       idle_state->flags = CPUIDLE_FLAG_TIME_VALID;
+       if (of_property_read_bool(state_node, "local-timer-stop"))
+               idle_state->flags |= CPUIDLE_FLAG_TIMER_STOP;
+       /*
+        * TODO:
+        *      replace with kstrdup and pointer assignment when name
+        *      and desc become string pointers
+        */
+       strncpy(idle_state->name, state_node->name, CPUIDLE_NAME_LEN - 1);
+       strncpy(idle_state->desc, state_node->name, CPUIDLE_DESC_LEN - 1);
+       return 0;
+}
+
+/*
+ * Check that the idle state is uniform across all CPUs in the CPUidle driver
+ * cpumask
+ */
+static bool idle_state_valid(struct device_node *state_node, unsigned int idx,
+                            const cpumask_t *cpumask)
+{
+       int cpu;
+       struct device_node *cpu_node, *curr_state_node;
+       bool valid = true;
+
+       /*
+        * Compare idle state phandles for index idx on all CPUs in the
+        * CPUidle driver cpumask. Start from next logical cpu following
+        * cpumask_first(cpumask) since that's the CPU state_node was
+        * retrieved from. If a mismatch is found bail out straight
+        * away since we certainly hit a firmware misconfiguration.
+        */
+       for (cpu = cpumask_next(cpumask_first(cpumask), cpumask);
+            cpu < nr_cpu_ids; cpu = cpumask_next(cpu, cpumask)) {
+               cpu_node = of_cpu_device_node_get(cpu);
+               curr_state_node = of_parse_phandle(cpu_node, "cpu-idle-states",
+                                                  idx);
+               if (state_node != curr_state_node)
+                       valid = false;
+
+               of_node_put(curr_state_node);
+               of_node_put(cpu_node);
+               if (!valid)
+                       break;
+       }
+
+       return valid;
+}
+
+/**
+ * dt_init_idle_driver() - Parse the DT idle states and initialize the
+ *                        idle driver states array
+ * @drv:         Pointer to CPU idle driver to be initialized
+ * @matches:     Array of of_device_id match structures to search in for
+ *               compatible idle state nodes. The data pointer for each valid
+ *               struct of_device_id entry in the matches array must point to
+ *               a function with the following signature, that corresponds to
+ *               the CPUidle state enter function signature:
+ *
+ *               int (*)(struct cpuidle_device *dev,
+ *                       struct cpuidle_driver *drv,
+ *                       int index);
+ *
+ * @start_idx:    First idle state index to be initialized
+ *
+ * If DT idle states are detected and are valid the state count and states
+ * array entries in the cpuidle driver are initialized accordingly starting
+ * from index start_idx.
+ *
+ * Return: number of valid DT idle states parsed, <0 on failure
+ */
+int dt_init_idle_driver(struct cpuidle_driver *drv,
+                       const struct of_device_id *matches,
+                       unsigned int start_idx)
+{
+       struct cpuidle_state *idle_state;
+       struct device_node *state_node, *cpu_node;
+       int i, err = 0;
+       const cpumask_t *cpumask;
+       unsigned int state_idx = start_idx;
+
+       if (state_idx >= CPUIDLE_STATE_MAX)
+               return -EINVAL;
+       /*
+        * We get the idle states for the first logical cpu in the
+        * driver mask (or cpu_possible_mask if the driver cpumask is not set)
+        * and we check through idle_state_valid() if they are uniform
+        * across CPUs, otherwise we hit a firmware misconfiguration.
+        */
+       cpumask = drv->cpumask ? : cpu_possible_mask;
+       cpu_node = of_cpu_device_node_get(cpumask_first(cpumask));
+
+       for (i = 0; ; i++) {
+               state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i);
+               if (!state_node)
+                       break;
+
+               if (!idle_state_valid(state_node, i, cpumask)) {
+                       pr_warn("%s idle state not valid, bailing out\n",
+                               state_node->full_name);
+                       err = -EINVAL;
+                       break;
+               }
+
+               if (state_idx == CPUIDLE_STATE_MAX) {
+                       pr_warn("State index reached static CPU idle driver states array size\n");
+                       break;
+               }
+
+               idle_state = &drv->states[state_idx++];
+               err = init_state_node(idle_state, matches, state_node);
+               if (err) {
+                       pr_err("Parsing idle state node %s failed with err %d\n",
+                              state_node->full_name, err);
+                       err = -EINVAL;
+                       break;
+               }
+               of_node_put(state_node);
+       }
+
+       of_node_put(state_node);
+       of_node_put(cpu_node);
+       if (err)
+               return err;
+       /*
+        * Update the driver state count only if some valid DT idle states
+        * were detected
+        */
+       if (i)
+               drv->state_count = state_idx;
+
+       /*
+        * Return the number of present and valid DT idle states, which can
+        * also be 0 on platforms with missing DT idle states or legacy DT
+        * configuration predating the DT idle states bindings.
+        */
+       return i;
+}
+EXPORT_SYMBOL_GPL(dt_init_idle_driver);
diff --git a/drivers/cpuidle/dt_idle_states.h b/drivers/cpuidle/dt_idle_states.h
new file mode 100644 (file)
index 0000000..4818134
--- /dev/null
@@ -0,0 +1,7 @@
+#ifndef __DT_IDLE_STATES
+#define __DT_IDLE_STATES
+
+int dt_init_idle_driver(struct cpuidle_driver *drv,
+                       const struct of_device_id *matches,
+                       unsigned int start_idx);
+#endif
index ca89412f512243a64a05d765708f0e02a848a6be..fb9f511cca23724b4da463714bd49a8e53fba588 100644 (file)
@@ -28,7 +28,7 @@ static struct cpuidle_governor * __cpuidle_find_governor(const char *str)
        struct cpuidle_governor *gov;
 
        list_for_each_entry(gov, &cpuidle_governors, governor_list)
-               if (!strnicmp(str, gov->name, CPUIDLE_NAME_LEN))
+               if (!strncasecmp(str, gov->name, CPUIDLE_NAME_LEN))
                        return gov;
 
        return NULL;
index 3dced0a9eae3038feae328815aee028e8ee86b36..faf4e70c42e0467f072cde73d6cc972ff27509ae 100644 (file)
@@ -78,9 +78,8 @@ config ARM_EXYNOS4_BUS_DEVFREQ
          This does not yet operate with optimal voltages.
 
 config ARM_EXYNOS5_BUS_DEVFREQ
-       bool "ARM Exynos5250 Bus DEVFREQ Driver"
+       tristate "ARM Exynos5250 Bus DEVFREQ Driver"
        depends on SOC_EXYNOS5250
-       select ARCH_HAS_OPP
        select DEVFREQ_GOV_SIMPLE_ONDEMAND
        select PM_OPP
        help
index 9f90369dd6bdd208832b9da0c7bb4ccadfe0dadf..30b538d8cc90a5cb5e6baa88172c5711aa804b93 100644 (file)
@@ -1119,6 +1119,7 @@ struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
 
        return opp;
 }
+EXPORT_SYMBOL(devfreq_recommended_opp);
 
 /**
  * devfreq_register_opp_notifier() - Helper function to get devfreq notified
@@ -1142,6 +1143,7 @@ int devfreq_register_opp_notifier(struct device *dev, struct devfreq *devfreq)
 
        return ret;
 }
+EXPORT_SYMBOL(devfreq_register_opp_notifier);
 
 /**
  * devfreq_unregister_opp_notifier() - Helper function to stop getting devfreq
@@ -1168,6 +1170,7 @@ int devfreq_unregister_opp_notifier(struct device *dev, struct devfreq *devfreq)
 
        return ret;
 }
+EXPORT_SYMBOL(devfreq_unregister_opp_notifier);
 
 static void devm_devfreq_opp_release(struct device *dev, void *res)
 {
index 75fcc5140ffb47267ea327f9f2c2796d9ea451a2..97b75e513d29123c9901e2d06195f6ae40dc1596 100644 (file)
@@ -73,6 +73,7 @@ void busfreq_mon_reset(struct busfreq_ppmu_data *ppmu_data)
                exynos_ppmu_start(ppmu_base);
        }
 }
+EXPORT_SYMBOL(busfreq_mon_reset);
 
 void exynos_read_ppmu(struct busfreq_ppmu_data *ppmu_data)
 {
@@ -97,6 +98,7 @@ void exynos_read_ppmu(struct busfreq_ppmu_data *ppmu_data)
 
        busfreq_mon_reset(ppmu_data);
 }
+EXPORT_SYMBOL(exynos_read_ppmu);
 
 int exynos_get_busier_ppmu(struct busfreq_ppmu_data *ppmu_data)
 {
@@ -114,3 +116,4 @@ int exynos_get_busier_ppmu(struct busfreq_ppmu_data *ppmu_data)
 
        return busy;
 }
+EXPORT_SYMBOL(exynos_get_busier_ppmu);
index ccfbbab82a157da532fb039f7032d0de38680d0f..2f90ac6a7f794ad8e79577a2c5e59ed853bf413f 100644 (file)
@@ -50,6 +50,7 @@
 #include <linux/irqflags.h>
 #include <linux/rwsem.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
 #include <linux/acpi.h>
 #include <linux/jump_label.h>
 #include <asm/uaccess.h>
@@ -643,10 +644,13 @@ static int i2c_device_probe(struct device *dev)
        if (status < 0)
                return status;
 
-       acpi_dev_pm_attach(&client->dev, true);
-       status = driver->probe(client, i2c_match_id(driver->id_table, client));
-       if (status)
-               acpi_dev_pm_detach(&client->dev, true);
+       status = dev_pm_domain_attach(&client->dev, true);
+       if (status != -EPROBE_DEFER) {
+               status = driver->probe(client, i2c_match_id(driver->id_table,
+                                       client));
+               if (status)
+                       dev_pm_domain_detach(&client->dev, true);
+       }
 
        return status;
 }
@@ -666,7 +670,7 @@ static int i2c_device_remove(struct device *dev)
                status = driver->remove(client);
        }
 
-       acpi_dev_pm_detach(&client->dev, true);
+       dev_pm_domain_detach(&client->dev, true);
        return status;
 }
 
index 4fa8fef9147f75ec4e9ff974957093bb28f0ec6a..65cf7a7e05eaf23ef73bf7ea8889b35c0f367597 100644 (file)
@@ -16,6 +16,7 @@
 #include <linux/export.h>
 #include <linux/slab.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
 #include <linux/acpi.h>
 
 #include <linux/mmc/card.h>
@@ -315,7 +316,7 @@ int sdio_add_func(struct sdio_func *func)
        ret = device_add(&func->dev);
        if (ret == 0) {
                sdio_func_set_present(func);
-               acpi_dev_pm_attach(&func->dev, false);
+               dev_pm_domain_attach(&func->dev, false);
        }
 
        return ret;
@@ -332,7 +333,7 @@ void sdio_remove_func(struct sdio_func *func)
        if (!sdio_func_present(func))
                return;
 
-       acpi_dev_pm_detach(&func->dev, false);
+       dev_pm_domain_detach(&func->dev, false);
        device_del(&func->dev);
        put_device(&func->dev);
 }
index 82e06a86cd77b38ae99316e624b1cf72f26c7dd2..a9f9c46e50221d75eefc73457425876199f79a0e 100644 (file)
@@ -41,11 +41,17 @@ static int __init pcie_pme_setup(char *str)
 }
 __setup("pcie_pme=", pcie_pme_setup);
 
+enum pme_suspend_level {
+       PME_SUSPEND_NONE = 0,
+       PME_SUSPEND_WAKEUP,
+       PME_SUSPEND_NOIRQ,
+};
+
 struct pcie_pme_service_data {
        spinlock_t lock;
        struct pcie_device *srv;
        struct work_struct work;
-       bool noirq; /* Don't enable the PME interrupt used by this service. */
+       enum pme_suspend_level suspend_level;
 };
 
 /**
@@ -223,7 +229,7 @@ static void pcie_pme_work_fn(struct work_struct *work)
        spin_lock_irq(&data->lock);
 
        for (;;) {
-               if (data->noirq)
+               if (data->suspend_level != PME_SUSPEND_NONE)
                        break;
 
                pcie_capability_read_dword(port, PCI_EXP_RTSTA, &rtsta);
@@ -250,7 +256,7 @@ static void pcie_pme_work_fn(struct work_struct *work)
                spin_lock_irq(&data->lock);
        }
 
-       if (!data->noirq)
+       if (data->suspend_level == PME_SUSPEND_NONE)
                pcie_pme_interrupt_enable(port, true);
 
        spin_unlock_irq(&data->lock);
@@ -367,6 +373,21 @@ static int pcie_pme_probe(struct pcie_device *srv)
        return ret;
 }
 
+static bool pcie_pme_check_wakeup(struct pci_bus *bus)
+{
+       struct pci_dev *dev;
+
+       if (!bus)
+               return false;
+
+       list_for_each_entry(dev, &bus->devices, bus_list)
+               if (device_may_wakeup(&dev->dev)
+                   || pcie_pme_check_wakeup(dev->subordinate))
+                       return true;
+
+       return false;
+}
+
 /**
  * pcie_pme_suspend - Suspend PCIe PME service device.
  * @srv: PCIe service device to suspend.
@@ -375,11 +396,26 @@ static int pcie_pme_suspend(struct pcie_device *srv)
 {
        struct pcie_pme_service_data *data = get_service_data(srv);
        struct pci_dev *port = srv->port;
+       bool wakeup;
 
+       if (device_may_wakeup(&port->dev)) {
+               wakeup = true;
+       } else {
+               down_read(&pci_bus_sem);
+               wakeup = pcie_pme_check_wakeup(port->subordinate);
+               up_read(&pci_bus_sem);
+       }
        spin_lock_irq(&data->lock);
-       pcie_pme_interrupt_enable(port, false);
-       pcie_clear_root_pme_status(port);
-       data->noirq = true;
+       if (wakeup) {
+               enable_irq_wake(srv->irq);
+               data->suspend_level = PME_SUSPEND_WAKEUP;
+       } else {
+               struct pci_dev *port = srv->port;
+
+               pcie_pme_interrupt_enable(port, false);
+               pcie_clear_root_pme_status(port);
+               data->suspend_level = PME_SUSPEND_NOIRQ;
+       }
        spin_unlock_irq(&data->lock);
 
        synchronize_irq(srv->irq);
@@ -394,12 +430,17 @@ static int pcie_pme_suspend(struct pcie_device *srv)
 static int pcie_pme_resume(struct pcie_device *srv)
 {
        struct pcie_pme_service_data *data = get_service_data(srv);
-       struct pci_dev *port = srv->port;
 
        spin_lock_irq(&data->lock);
-       data->noirq = false;
-       pcie_clear_root_pme_status(port);
-       pcie_pme_interrupt_enable(port, true);
+       if (data->suspend_level == PME_SUSPEND_NOIRQ) {
+               struct pci_dev *port = srv->port;
+
+               pcie_clear_root_pme_status(port);
+               pcie_pme_interrupt_enable(port, true);
+       } else {
+               disable_irq_wake(srv->irq);
+       }
+       data->suspend_level = PME_SUSPEND_NONE;
        spin_unlock_irq(&data->lock);
 
        return 0;
index 87aa28c4280fe815fa488006c814e026c1c99350..2655d4a988f36ae1ac18510b850d7b7b70a741ba 100644 (file)
@@ -1050,6 +1050,13 @@ static struct acpi_driver acpi_fujitsu_hotkey_driver = {
                },
 };
 
+static const struct acpi_device_id fujitsu_ids[] __used = {
+       {ACPI_FUJITSU_HID, 0},
+       {ACPI_FUJITSU_HOTKEY_HID, 0},
+       {"", 0}
+};
+MODULE_DEVICE_TABLE(acpi, fujitsu_ids);
+
 static int __init fujitsu_init(void)
 {
        int ret, result, max_brightness;
@@ -1208,12 +1215,3 @@ MODULE_LICENSE("GPL");
 MODULE_ALIAS("dmi:*:svnFUJITSUSIEMENS:*:pvr:rvnFUJITSU:rnFJNB1D3:*:cvrS6410:*");
 MODULE_ALIAS("dmi:*:svnFUJITSUSIEMENS:*:pvr:rvnFUJITSU:rnFJNB1E6:*:cvrS6420:*");
 MODULE_ALIAS("dmi:*:svnFUJITSU:*:pvr:rvnFUJITSU:rnFJNB19C:*:cvrS7020:*");
-
-static struct pnp_device_id pnp_ids[] __used = {
-       {.id = "FUJ02bf"},
-       {.id = "FUJ02B1"},
-       {.id = "FUJ02E3"},
-       {.id = ""}
-};
-
-MODULE_DEVICE_TABLE(pnp, pnp_ids);
index 2a1008b61121ae6cfa49f5a757037ade4d14ed25..7f3d389bd601e7d616c8d528b64a248706c3131a 100644 (file)
@@ -10,3 +10,11 @@ menuconfig POWER_AVS
          AVS is also called SmartReflex on OMAP devices.
 
          Say Y here to enable Adaptive Voltage Scaling class support.
+
+config ROCKCHIP_IODOMAIN
+        tristate "Rockchip IO domain support"
+        depends on ARCH_ROCKCHIP && OF
+        help
+          Say y here to enable support for IO domains on Rockchip SoCs. It
+          is necessary for the IO domain setting of the SoC to match the
+          voltage supplied by the regulators.
index 0843386a6c1951e6fe5b44f59d9a43992409a922..ba4c7bc6922533dcc15627ecaff584a0e3d122e9 100644 (file)
@@ -1 +1,2 @@
 obj-$(CONFIG_POWER_AVS_OMAP)           += smartreflex.o
+obj-$(CONFIG_ROCKCHIP_IODOMAIN)                += rockchip-io-domain.o
diff --git a/drivers/power/avs/rockchip-io-domain.c b/drivers/power/avs/rockchip-io-domain.c
new file mode 100644 (file)
index 0000000..3ae35d0
--- /dev/null
@@ -0,0 +1,351 @@
+/*
+ * Rockchip IO Voltage Domain driver
+ *
+ * Copyright 2014 MundoReader S.L.
+ * Copyright 2014 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/err.h>
+#include <linux/mfd/syscon.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <linux/regulator/consumer.h>
+
+#define MAX_SUPPLIES           16
+
+/*
+ * The max voltage for 1.8V and 3.3V come from the Rockchip datasheet under
+ * "Recommended Operating Conditions" for "Digital GPIO".  When the typical
+ * is 3.3V the max is 3.6V.  When the typical is 1.8V the max is 1.98V.
+ *
+ * They are used like this:
+ * - If the voltage on a rail is above the "1.8" voltage (1.98V) we'll tell the
+ *   SoC we're at 3.3.
+ * - If the voltage on a rail is above the "3.3" voltage (3.6V) we'll consider
+ *   that to be an error.
+ */
+#define MAX_VOLTAGE_1_8                1980000
+#define MAX_VOLTAGE_3_3                3600000
+
+#define RK3288_SOC_CON2                        0x24c
+#define RK3288_SOC_CON2_FLASH0         BIT(7)
+#define RK3288_SOC_FLASH_SUPPLY_NUM    2
+
+struct rockchip_iodomain;
+
+/**
+ * @supplies: voltage settings matching the register bits.
+ */
+struct rockchip_iodomain_soc_data {
+       int grf_offset;
+       const char *supply_names[MAX_SUPPLIES];
+       void (*init)(struct rockchip_iodomain *iod);
+};
+
+struct rockchip_iodomain_supply {
+       struct rockchip_iodomain *iod;
+       struct regulator *reg;
+       struct notifier_block nb;
+       int idx;
+};
+
+struct rockchip_iodomain {
+       struct device *dev;
+       struct regmap *grf;
+       struct rockchip_iodomain_soc_data *soc_data;
+       struct rockchip_iodomain_supply supplies[MAX_SUPPLIES];
+};
+
+static int rockchip_iodomain_write(struct rockchip_iodomain_supply *supply,
+                                  int uV)
+{
+       struct rockchip_iodomain *iod = supply->iod;
+       u32 val;
+       int ret;
+
+       /* set value bit */
+       val = (uV > MAX_VOLTAGE_1_8) ? 0 : 1;
+       val <<= supply->idx;
+
+       /* apply hiword-mask */
+       val |= (BIT(supply->idx) << 16);
+
+       ret = regmap_write(iod->grf, iod->soc_data->grf_offset, val);
+       if (ret)
+               dev_err(iod->dev, "Couldn't write to GRF\n");
+
+       return ret;
+}
+
+static int rockchip_iodomain_notify(struct notifier_block *nb,
+                                   unsigned long event,
+                                   void *data)
+{
+       struct rockchip_iodomain_supply *supply =
+                       container_of(nb, struct rockchip_iodomain_supply, nb);
+       int uV;
+       int ret;
+
+       /*
+        * According to Rockchip it's important to keep the SoC IO domain
+        * higher than (or equal to) the external voltage.  That means we need
+        * to change it before external voltage changes happen in the case
+        * of an increase.
+        *
+        * Note that in the "pre" change we pick the max possible voltage that
+        * the regulator might end up at (the client requests a range and we
+        * don't know for certain the exact voltage).  Right now we rely on the
+        * slop in MAX_VOLTAGE_1_8 and MAX_VOLTAGE_3_3 to save us if clients
+        * request something like a max of 3.6V when they really want 3.3V.
+        * We could attempt to come up with better rules if this fails.
+        */
+       if (event & REGULATOR_EVENT_PRE_VOLTAGE_CHANGE) {
+               struct pre_voltage_change_data *pvc_data = data;
+
+               uV = max_t(unsigned long, pvc_data->old_uV, pvc_data->max_uV);
+       } else if (event & (REGULATOR_EVENT_VOLTAGE_CHANGE |
+                           REGULATOR_EVENT_ABORT_VOLTAGE_CHANGE)) {
+               uV = (unsigned long)data;
+       } else {
+               return NOTIFY_OK;
+       }
+
+       dev_dbg(supply->iod->dev, "Setting to %d\n", uV);
+
+       if (uV > MAX_VOLTAGE_3_3) {
+               dev_err(supply->iod->dev, "Voltage too high: %d\n", uV);
+
+               if (event == REGULATOR_EVENT_PRE_VOLTAGE_CHANGE)
+                       return NOTIFY_BAD;
+       }
+
+       ret = rockchip_iodomain_write(supply, uV);
+       if (ret && event == REGULATOR_EVENT_PRE_VOLTAGE_CHANGE)
+               return NOTIFY_BAD;
+
+       dev_info(supply->iod->dev, "Setting to %d done\n", uV);
+       return NOTIFY_OK;
+}
+
+static void rk3288_iodomain_init(struct rockchip_iodomain *iod)
+{
+       int ret;
+       u32 val;
+
+       /* if no flash supply we should leave things alone */
+       if (!iod->supplies[RK3288_SOC_FLASH_SUPPLY_NUM].reg)
+               return;
+
+       /*
+        * set flash0 iodomain to also use this framework
+        * instead of a special gpio.
+        */
+       val = RK3288_SOC_CON2_FLASH0 | (RK3288_SOC_CON2_FLASH0 << 16);
+       ret = regmap_write(iod->grf, RK3288_SOC_CON2, val);
+       if (ret < 0)
+               dev_warn(iod->dev, "couldn't update flash0 ctrl\n");
+}
+
+/*
+ * On the rk3188 the io-domains are handled by a shared register, with the
+ * lower 8 bits still containing drive-strength settings.
+ */
+static const struct rockchip_iodomain_soc_data soc_data_rk3188 = {
+       .grf_offset = 0x104,
+       .supply_names = {
+               NULL,
+               NULL,
+               NULL,
+               NULL,
+               NULL,
+               NULL,
+               NULL,
+               NULL,
+               "ap0",
+               "ap1",
+               "cif",
+               "flash",
+               "vccio0",
+               "vccio1",
+               "lcdc0",
+               "lcdc1",
+       },
+};
+
+static const struct rockchip_iodomain_soc_data soc_data_rk3288 = {
+       .grf_offset = 0x380,
+       .supply_names = {
+               "lcdc",         /* LCDC_VDD */
+               "dvp",          /* DVPIO_VDD */
+               "flash0",       /* FLASH0_VDD (emmc) */
+               "flash1",       /* FLASH1_VDD (sdio1) */
+               "wifi",         /* APIO3_VDD  (sdio0) */
+               "bb",           /* APIO5_VDD */
+               "audio",        /* APIO4_VDD */
+               "sdcard",       /* SDMMC0_VDD (sdmmc) */
+               "gpio30",       /* APIO1_VDD */
+               "gpio1830",     /* APIO2_VDD */
+       },
+       .init = rk3288_iodomain_init,
+};
+
+static const struct of_device_id rockchip_iodomain_match[] = {
+       {
+               .compatible = "rockchip,rk3188-io-voltage-domain",
+               .data = (void *)&soc_data_rk3188
+       },
+       {
+               .compatible = "rockchip,rk3288-io-voltage-domain",
+               .data = (void *)&soc_data_rk3288
+       },
+       { /* sentinel */ },
+};
+
+static int rockchip_iodomain_probe(struct platform_device *pdev)
+{
+       struct device_node *np = pdev->dev.of_node;
+       const struct of_device_id *match;
+       struct rockchip_iodomain *iod;
+       int i, ret = 0;
+
+       if (!np)
+               return -ENODEV;
+
+       iod = devm_kzalloc(&pdev->dev, sizeof(*iod), GFP_KERNEL);
+       if (!iod)
+               return -ENOMEM;
+
+       iod->dev = &pdev->dev;
+       platform_set_drvdata(pdev, iod);
+
+       match = of_match_node(rockchip_iodomain_match, np);
+       iod->soc_data = (struct rockchip_iodomain_soc_data *)match->data;
+
+       iod->grf = syscon_regmap_lookup_by_phandle(np, "rockchip,grf");
+       if (IS_ERR(iod->grf)) {
+               dev_err(&pdev->dev, "couldn't find grf regmap\n");
+               return PTR_ERR(iod->grf);
+       }
+
+       for (i = 0; i < MAX_SUPPLIES; i++) {
+               const char *supply_name = iod->soc_data->supply_names[i];
+               struct rockchip_iodomain_supply *supply = &iod->supplies[i];
+               struct regulator *reg;
+               int uV;
+
+               if (!supply_name)
+                       continue;
+
+               reg = devm_regulator_get_optional(iod->dev, supply_name);
+               if (IS_ERR(reg)) {
+                       ret = PTR_ERR(reg);
+
+                       /* If a supply wasn't specified, that's OK */
+                       if (ret == -ENODEV)
+                               continue;
+                       else if (ret != -EPROBE_DEFER)
+                               dev_err(iod->dev, "couldn't get regulator %s\n",
+                                       supply_name);
+                       goto unreg_notify;
+               }
+
+               /* set initial correct value */
+               uV = regulator_get_voltage(reg);
+
+               /* must be a regulator we can get the voltage of */
+               if (uV < 0) {
+                       dev_err(iod->dev, "Can't determine voltage: %s\n",
+                               supply_name);
+                       goto unreg_notify;
+               }
+
+               if (uV > MAX_VOLTAGE_3_3) {
+                       dev_crit(iod->dev,
+                                "%d uV is too high. May damage SoC!\n",
+                                uV);
+                       ret = -EINVAL;
+                       goto unreg_notify;
+               }
+
+               /* setup our supply */
+               supply->idx = i;
+               supply->iod = iod;
+               supply->reg = reg;
+               supply->nb.notifier_call = rockchip_iodomain_notify;
+
+               ret = rockchip_iodomain_write(supply, uV);
+               if (ret) {
+                       supply->reg = NULL;
+                       goto unreg_notify;
+               }
+
+               /* register regulator notifier */
+               ret = regulator_register_notifier(reg, &supply->nb);
+               if (ret) {
+                       dev_err(&pdev->dev,
+                               "regulator notifier request failed\n");
+                       supply->reg = NULL;
+                       goto unreg_notify;
+               }
+       }
+
+       if (iod->soc_data->init)
+               iod->soc_data->init(iod);
+
+       return 0;
+
+unreg_notify:
+       for (i = MAX_SUPPLIES - 1; i >= 0; i--) {
+               struct rockchip_iodomain_supply *io_supply = &iod->supplies[i];
+
+               if (io_supply->reg)
+                       regulator_unregister_notifier(io_supply->reg,
+                                                     &io_supply->nb);
+       }
+
+       return ret;
+}
+
+static int rockchip_iodomain_remove(struct platform_device *pdev)
+{
+       struct rockchip_iodomain *iod = platform_get_drvdata(pdev);
+       int i;
+
+       for (i = MAX_SUPPLIES - 1; i >= 0; i--) {
+               struct rockchip_iodomain_supply *io_supply = &iod->supplies[i];
+
+               if (io_supply->reg)
+                       regulator_unregister_notifier(io_supply->reg,
+                                                     &io_supply->nb);
+       }
+
+       return 0;
+}
+
+static struct platform_driver rockchip_iodomain_driver = {
+       .probe   = rockchip_iodomain_probe,
+       .remove  = rockchip_iodomain_remove,
+       .driver  = {
+               .name  = "rockchip-iodomain",
+               .of_match_table = rockchip_iodomain_match,
+       },
+};
+
+module_platform_driver(rockchip_iodomain_driver);
+
+MODULE_DESCRIPTION("Rockchip IO-domain driver");
+MODULE_AUTHOR("Heiko Stuebner <heiko@sntech.de>");
+MODULE_AUTHOR("Doug Anderson <dianders@chromium.org>");
+MODULE_LICENSE("GPL v2");
index 72f63817a1a0e474e13b56b783c6586e6ffc4712..fe2c2d595f599b2f21ea446a824f9f0a7d238282 100644 (file)
@@ -75,8 +75,6 @@ static struct pm_clk_notifier_block platform_bus_notifier = {
        .con_ids = { NULL, },
 };
 
-static bool default_pm_on;
-
 static int __init sh_pm_runtime_init(void)
 {
        if (IS_ENABLED(CONFIG_ARCH_SHMOBILE_MULTI)) {
@@ -96,16 +94,7 @@ static int __init sh_pm_runtime_init(void)
                        return 0;
        }
 
-       default_pm_on = true;
        pm_clk_add_notifier(&platform_bus_type, &platform_bus_notifier);
        return 0;
 }
 core_initcall(sh_pm_runtime_init);
-
-static int __init sh_pm_runtime_late_init(void)
-{
-       if (default_pm_on)
-               pm_genpd_poweroff_unused();
-       return 0;
-}
-late_initcall(sh_pm_runtime_late_init);
index e19512ffc40e5ef34fefeffd70568f2fbd959f59..ebcb33df2eb22facb58cebc10277c3ab12925a42 100644 (file)
@@ -35,6 +35,7 @@
 #include <linux/spi/spi.h>
 #include <linux/of_gpio.h>
 #include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
 #include <linux/export.h>
 #include <linux/sched/rt.h>
 #include <linux/delay.h>
@@ -264,10 +265,12 @@ static int spi_drv_probe(struct device *dev)
        if (ret)
                return ret;
 
-       acpi_dev_pm_attach(dev, true);
-       ret = sdrv->probe(to_spi_device(dev));
-       if (ret)
-               acpi_dev_pm_detach(dev, true);
+       ret = dev_pm_domain_attach(dev, true);
+       if (ret != -EPROBE_DEFER) {
+               ret = sdrv->probe(to_spi_device(dev));
+               if (ret)
+                       dev_pm_domain_detach(dev, true);
+       }
 
        return ret;
 }
@@ -278,7 +281,7 @@ static int spi_drv_remove(struct device *dev)
        int ret;
 
        ret = sdrv->remove(to_spi_device(dev));
-       acpi_dev_pm_detach(dev, true);
+       dev_pm_domain_detach(dev, true);
 
        return ret;
 }
index c728113374f55b1f02607a2a84c0062198f415a3..f97804bdf1ff93a8d1923bc4fe75b6abc36e097d 100644 (file)
 #define METHOD_NAME__PRS        "_PRS"
 #define METHOD_NAME__PRT        "_PRT"
 #define METHOD_NAME__PRW        "_PRW"
+#define METHOD_NAME__PS0        "_PS0"
+#define METHOD_NAME__PS1        "_PS1"
+#define METHOD_NAME__PS2        "_PS2"
+#define METHOD_NAME__PS3        "_PS3"
 #define METHOD_NAME__REG        "_REG"
 #define METHOD_NAME__SB_        "_SB_"
 #define METHOD_NAME__SEG        "_SEG"
index b7c89d47efbefc18b55502bf89e6480e4d11d4b4..9fc1d71c82bc13faec7d409ebdc280f947544774 100644 (file)
@@ -46,7 +46,7 @@
 
 /* Current ACPICA subsystem version in YYYYMMDD format */
 
-#define ACPI_CA_VERSION                 0x20140724
+#define ACPI_CA_VERSION                 0x20140828
 
 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
@@ -692,6 +692,7 @@ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
                                                     *event_status))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
+ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void))
 
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
                                acpi_get_gpe_device(u32 gpe_index,
index 7626bfeac2cb88a8c09ee06f5a81758d3357ace8..29e79370641da878dbf0875ad87950f6a0f4c83f 100644 (file)
@@ -952,7 +952,8 @@ enum acpi_srat_type {
        ACPI_SRAT_TYPE_CPU_AFFINITY = 0,
        ACPI_SRAT_TYPE_MEMORY_AFFINITY = 1,
        ACPI_SRAT_TYPE_X2APIC_CPU_AFFINITY = 2,
-       ACPI_SRAT_TYPE_RESERVED = 3     /* 3 and greater are reserved */
+       ACPI_SRAT_TYPE_GICC_AFFINITY = 3,
+       ACPI_SRAT_TYPE_RESERVED = 4     /* 4 and greater are reserved */
 };
 
 /*
@@ -968,7 +969,7 @@ struct acpi_srat_cpu_affinity {
        u32 flags;
        u8 local_sapic_eid;
        u8 proximity_domain_hi[3];
-       u32 reserved;           /* Reserved, must be zero */
+       u32 clock_domain;
 };
 
 /* Flags */
@@ -1010,6 +1011,20 @@ struct acpi_srat_x2apic_cpu_affinity {
 
 #define ACPI_SRAT_CPU_ENABLED       (1)        /* 00: Use affinity structure */
 
+/* 3: GICC Affinity (ACPI 5.1) */
+
+struct acpi_srat_gicc_affinity {
+       struct acpi_subtable_header header;
+       u32 proximity_domain;
+       u32 acpi_processor_uid;
+       u32 flags;
+       u32 clock_domain;
+};
+
+/* Flags for struct acpi_srat_gicc_affinity */
+
+#define ACPI_SRAT_GICC_ENABLED     (1) /* 00: Use affinity structure */
+
 /* Reset to default packing */
 
 #pragma pack()
index 787bcc81446381245279063545c59c08b9b9e46d..5480cb2236bf33356cb3b0c8811de2de280897af 100644 (file)
@@ -310,10 +310,15 @@ struct acpi_gtdt_timer_entry {
        u32 common_flags;
 };
 
+/* Flag Definitions: timer_flags and virtual_timer_flags above */
+
+#define ACPI_GTDT_GT_IRQ_MODE               (1)
+#define ACPI_GTDT_GT_IRQ_POLARITY           (1<<1)
+
 /* Flag Definitions: common_flags above */
 
-#define ACPI_GTDT_GT_IS_SECURE_TIMER    (1)
-#define ACPI_GTDT_GT_ALWAYS_ON          (1<<1)
+#define ACPI_GTDT_GT_IS_SECURE_TIMER        (1)
+#define ACPI_GTDT_GT_ALWAYS_ON              (1<<1)
 
 /* 1: SBSA Generic Watchdog Structure */
 
index 807cbc46d73e0eda897c3e1a3aa2df7ae9519c81..b7926bb9b4442f90d2d7f32d8c11a07370bfe8c9 100644 (file)
@@ -587,7 +587,6 @@ static inline int acpi_subsys_freeze(struct device *dev) { return 0; }
 #if defined(CONFIG_ACPI) && defined(CONFIG_PM)
 struct acpi_device *acpi_dev_pm_get_node(struct device *dev);
 int acpi_dev_pm_attach(struct device *dev, bool power_on);
-void acpi_dev_pm_detach(struct device *dev, bool power_off);
 #else
 static inline struct acpi_device *acpi_dev_pm_get_node(struct device *dev)
 {
@@ -597,7 +596,6 @@ static inline int acpi_dev_pm_attach(struct device *dev, bool power_on)
 {
        return -ENODEV;
 }
-static inline void acpi_dev_pm_detach(struct device *dev, bool power_off) {}
 #endif
 
 #ifdef CONFIG_ACPI
index 7d1955afa62c7c319b4467bd98139f270ed004e4..138336b6bb0437f4cc0972f7c4ce34a59bf9b2ec 100644 (file)
@@ -112,6 +112,9 @@ struct cpufreq_policy {
        spinlock_t              transition_lock;
        wait_queue_head_t       transition_wait;
        struct task_struct      *transition_task; /* Task which is doing the transition */
+
+       /* For cpufreq driver's internal use */
+       void                    *driver_data;
 };
 
 /* Only for ACPI */
index 698ad053d064aef74793449f4b4b55018994a908..69517a24bc50678e4f2d69d931b9586cb76a9b69 100644 (file)
@@ -193,11 +193,6 @@ extern void irq_wake_thread(unsigned int irq, void *dev_id);
 /* The following three functions are for the core kernel use only. */
 extern void suspend_device_irqs(void);
 extern void resume_device_irqs(void);
-#ifdef CONFIG_PM_SLEEP
-extern int check_wakeup_irqs(void);
-#else
-static inline int check_wakeup_irqs(void) { return 0; }
-#endif
 
 /**
  * struct irq_affinity_notify - context for notification of IRQ affinity changes
index 62af59242ddc33faf11b0b872cb09499af3b42b5..03f48d936f6690a17f196f4e8707f6aece369a56 100644 (file)
@@ -173,6 +173,7 @@ struct irq_data {
  * IRQD_IRQ_DISABLED           - Disabled state of the interrupt
  * IRQD_IRQ_MASKED             - Masked state of the interrupt
  * IRQD_IRQ_INPROGRESS         - In progress state of the interrupt
+ * IRQD_WAKEUP_ARMED           - Wakeup mode armed
  */
 enum {
        IRQD_TRIGGER_MASK               = 0xf,
@@ -186,6 +187,7 @@ enum {
        IRQD_IRQ_DISABLED               = (1 << 16),
        IRQD_IRQ_MASKED                 = (1 << 17),
        IRQD_IRQ_INPROGRESS             = (1 << 18),
+       IRQD_WAKEUP_ARMED               = (1 << 19),
 };
 
 static inline bool irqd_is_setaffinity_pending(struct irq_data *d)
@@ -257,6 +259,12 @@ static inline bool irqd_irq_inprogress(struct irq_data *d)
        return d->state_use_accessors & IRQD_IRQ_INPROGRESS;
 }
 
+static inline bool irqd_is_wakeup_armed(struct irq_data *d)
+{
+       return d->state_use_accessors & IRQD_WAKEUP_ARMED;
+}
+
+
 /*
  * Functions for chained handlers which can be enabled/disabled by the
  * standard disable_irq/enable_irq calls. Must be called with
index ff24667cd86cfdeba1bfdc1689faba0598e6d246..faf433af425e41e2da532939af63ec258f8fd619 100644 (file)
@@ -38,6 +38,11 @@ struct pt_regs;
  * @threads_oneshot:   bitfield to handle shared oneshot threads
  * @threads_active:    number of irqaction threads currently running
  * @wait_for_threads:  wait queue for sync_irq to wait for threaded handlers
+ * @nr_actions:                number of installed actions on this descriptor
+ * @no_suspend_depth:  number of irqactions on a irq descriptor with
+ *                     IRQF_NO_SUSPEND set
+ * @force_resume_depth:        number of irqactions on a irq descriptor with
+ *                     IRQF_FORCE_RESUME set
  * @dir:               /proc/irq/ procfs entry
  * @name:              flow handler name for /proc/interrupts output
  */
@@ -70,6 +75,11 @@ struct irq_desc {
        unsigned long           threads_oneshot;
        atomic_t                threads_active;
        wait_queue_head_t       wait_for_threads;
+#ifdef CONFIG_PM_SLEEP
+       unsigned int            nr_actions;
+       unsigned int            no_suspend_depth;
+       unsigned int            force_resume_depth;
+#endif
 #ifdef CONFIG_PROC_FS
        struct proc_dir_entry   *dir;
 #endif
index 72c0fe098a27871aa1ea44ea0076d45ab696ea2e..383fd68aaee15a9e345b43d0260276e0526144ce 100644 (file)
@@ -619,6 +619,7 @@ extern int dev_pm_put_subsys_data(struct device *dev);
  */
 struct dev_pm_domain {
        struct dev_pm_ops       ops;
+       void (*detach)(struct device *dev, bool power_off);
 };
 
 /*
@@ -679,12 +680,16 @@ struct dev_pm_domain {
 extern void device_pm_lock(void);
 extern void dpm_resume_start(pm_message_t state);
 extern void dpm_resume_end(pm_message_t state);
+extern void dpm_resume_noirq(pm_message_t state);
+extern void dpm_resume_early(pm_message_t state);
 extern void dpm_resume(pm_message_t state);
 extern void dpm_complete(pm_message_t state);
 
 extern void device_pm_unlock(void);
 extern int dpm_suspend_end(pm_message_t state);
 extern int dpm_suspend_start(pm_message_t state);
+extern int dpm_suspend_noirq(pm_message_t state);
+extern int dpm_suspend_late(pm_message_t state);
 extern int dpm_suspend(pm_message_t state);
 extern int dpm_prepare(pm_message_t state);
 
index ebc4c76ffb737bae3d9be1a6325752745254eaed..73e938b7e9374c68ac00fd99c65247eac9241fd4 100644 (file)
@@ -35,18 +35,10 @@ struct gpd_dev_ops {
        int (*stop)(struct device *dev);
        int (*save_state)(struct device *dev);
        int (*restore_state)(struct device *dev);
-       int (*suspend)(struct device *dev);
-       int (*suspend_late)(struct device *dev);
-       int (*resume_early)(struct device *dev);
-       int (*resume)(struct device *dev);
-       int (*freeze)(struct device *dev);
-       int (*freeze_late)(struct device *dev);
-       int (*thaw_early)(struct device *dev);
-       int (*thaw)(struct device *dev);
        bool (*active_wakeup)(struct device *dev);
 };
 
-struct gpd_cpu_data {
+struct gpd_cpuidle_data {
        unsigned int saved_exit_latency;
        struct cpuidle_state *idle_state;
 };
@@ -71,7 +63,6 @@ struct generic_pm_domain {
        unsigned int suspended_count;   /* System suspend device counter */
        unsigned int prepared_count;    /* Suspend counter of prepared devices */
        bool suspend_power_off; /* Power status before system suspend */
-       bool dev_irq_safe;      /* Device callbacks are IRQ-safe */
        int (*power_off)(struct generic_pm_domain *domain);
        s64 power_off_latency_ns;
        int (*power_on)(struct generic_pm_domain *domain);
@@ -80,8 +71,9 @@ struct generic_pm_domain {
        s64 max_off_time_ns;    /* Maximum allowed "suspended" time. */
        bool max_off_time_changed;
        bool cached_power_down_ok;
-       struct device_node *of_node; /* Node in device tree */
-       struct gpd_cpu_data *cpu_data;
+       struct gpd_cpuidle_data *cpuidle_data;
+       void (*attach_dev)(struct device *dev);
+       void (*detach_dev)(struct device *dev);
 };
 
 static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)
@@ -108,7 +100,6 @@ struct gpd_timing_data {
 
 struct generic_pm_domain_data {
        struct pm_domain_data base;
-       struct gpd_dev_ops ops;
        struct gpd_timing_data td;
        struct notifier_block nb;
        struct mutex lock;
@@ -127,17 +118,11 @@ static inline struct generic_pm_domain_data *dev_gpd_data(struct device *dev)
        return to_gpd_data(dev->power.subsys_data->domain_data);
 }
 
-extern struct dev_power_governor simple_qos_governor;
-
 extern struct generic_pm_domain *dev_to_genpd(struct device *dev);
 extern int __pm_genpd_add_device(struct generic_pm_domain *genpd,
                                 struct device *dev,
                                 struct gpd_timing_data *td);
 
-extern int __pm_genpd_of_add_device(struct device_node *genpd_node,
-                                   struct device *dev,
-                                   struct gpd_timing_data *td);
-
 extern int __pm_genpd_name_add_device(const char *domain_name,
                                      struct device *dev,
                                      struct gpd_timing_data *td);
@@ -151,10 +136,6 @@ extern int pm_genpd_add_subdomain_names(const char *master_name,
                                        const char *subdomain_name);
 extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
                                     struct generic_pm_domain *target);
-extern int pm_genpd_add_callbacks(struct device *dev,
-                                 struct gpd_dev_ops *ops,
-                                 struct gpd_timing_data *td);
-extern int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td);
 extern int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state);
 extern int pm_genpd_name_attach_cpuidle(const char *name, int state);
 extern int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd);
@@ -165,8 +146,7 @@ extern void pm_genpd_init(struct generic_pm_domain *genpd,
 extern int pm_genpd_poweron(struct generic_pm_domain *genpd);
 extern int pm_genpd_name_poweron(const char *domain_name);
 
-extern bool default_stop_ok(struct device *dev);
-
+extern struct dev_power_governor simple_qos_governor;
 extern struct dev_power_governor pm_domain_always_on_gov;
 #else
 
@@ -184,12 +164,6 @@ static inline int __pm_genpd_add_device(struct generic_pm_domain *genpd,
 {
        return -ENOSYS;
 }
-static inline int __pm_genpd_of_add_device(struct device_node *genpd_node,
-                                          struct device *dev,
-                                          struct gpd_timing_data *td)
-{
-       return -ENOSYS;
-}
 static inline int __pm_genpd_name_add_device(const char *domain_name,
                                             struct device *dev,
                                             struct gpd_timing_data *td)
@@ -217,16 +191,6 @@ static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 {
        return -ENOSYS;
 }
-static inline int pm_genpd_add_callbacks(struct device *dev,
-                                        struct gpd_dev_ops *ops,
-                                        struct gpd_timing_data *td)
-{
-       return -ENOSYS;
-}
-static inline int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td)
-{
-       return -ENOSYS;
-}
 static inline int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int st)
 {
        return -ENOSYS;
@@ -255,10 +219,6 @@ static inline int pm_genpd_name_poweron(const char *domain_name)
 {
        return -ENOSYS;
 }
-static inline bool default_stop_ok(struct device *dev)
-{
-       return false;
-}
 #define simple_qos_governor NULL
 #define pm_domain_always_on_gov NULL
 #endif
@@ -269,45 +229,87 @@ static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
        return __pm_genpd_add_device(genpd, dev, NULL);
 }
 
-static inline int pm_genpd_of_add_device(struct device_node *genpd_node,
-                                        struct device *dev)
-{
-       return __pm_genpd_of_add_device(genpd_node, dev, NULL);
-}
-
 static inline int pm_genpd_name_add_device(const char *domain_name,
                                           struct device *dev)
 {
        return __pm_genpd_name_add_device(domain_name, dev, NULL);
 }
 
-static inline int pm_genpd_remove_callbacks(struct device *dev)
-{
-       return __pm_genpd_remove_callbacks(dev, true);
-}
-
 #ifdef CONFIG_PM_GENERIC_DOMAINS_RUNTIME
-extern void genpd_queue_power_off_work(struct generic_pm_domain *genpd);
 extern void pm_genpd_poweroff_unused(void);
 #else
-static inline void genpd_queue_power_off_work(struct generic_pm_domain *gpd) {}
 static inline void pm_genpd_poweroff_unused(void) {}
 #endif
 
 #ifdef CONFIG_PM_GENERIC_DOMAINS_SLEEP
-extern void pm_genpd_syscore_switch(struct device *dev, bool suspend);
+extern void pm_genpd_syscore_poweroff(struct device *dev);
+extern void pm_genpd_syscore_poweron(struct device *dev);
 #else
-static inline void pm_genpd_syscore_switch(struct device *dev, bool suspend) {}
+static inline void pm_genpd_syscore_poweroff(struct device *dev) {}
+static inline void pm_genpd_syscore_poweron(struct device *dev) {}
 #endif
 
-static inline void pm_genpd_syscore_poweroff(struct device *dev)
+/* OF PM domain providers */
+struct of_device_id;
+
+struct genpd_onecell_data {
+       struct generic_pm_domain **domains;
+       unsigned int num_domains;
+};
+
+typedef struct generic_pm_domain *(*genpd_xlate_t)(struct of_phandle_args *args,
+                                               void *data);
+
+#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
+int __of_genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
+                       void *data);
+void of_genpd_del_provider(struct device_node *np);
+
+struct generic_pm_domain *__of_genpd_xlate_simple(
+                                       struct of_phandle_args *genpdspec,
+                                       void *data);
+struct generic_pm_domain *__of_genpd_xlate_onecell(
+                                       struct of_phandle_args *genpdspec,
+                                       void *data);
+
+int genpd_dev_pm_attach(struct device *dev);
+#else /* !CONFIG_PM_GENERIC_DOMAINS_OF */
+static inline int __of_genpd_add_provider(struct device_node *np,
+                                       genpd_xlate_t xlate, void *data)
+{
+       return 0;
+}
+static inline void of_genpd_del_provider(struct device_node *np) {}
+
+#define __of_genpd_xlate_simple                NULL
+#define __of_genpd_xlate_onecell       NULL
+
+static inline int genpd_dev_pm_attach(struct device *dev)
+{
+       return -ENODEV;
+}
+#endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
+
+static inline int of_genpd_add_provider_simple(struct device_node *np,
+                                       struct generic_pm_domain *genpd)
+{
+       return __of_genpd_add_provider(np, __of_genpd_xlate_simple, genpd);
+}
+static inline int of_genpd_add_provider_onecell(struct device_node *np,
+                                       struct genpd_onecell_data *data)
 {
-       pm_genpd_syscore_switch(dev, true);
+       return __of_genpd_add_provider(np, __of_genpd_xlate_onecell, data);
 }
 
-static inline void pm_genpd_syscore_poweron(struct device *dev)
+#ifdef CONFIG_PM
+extern int dev_pm_domain_attach(struct device *dev, bool power_on);
+extern void dev_pm_domain_detach(struct device *dev, bool power_off);
+#else
+static inline int dev_pm_domain_attach(struct device *dev, bool power_on)
 {
-       pm_genpd_syscore_switch(dev, false);
+       return -ENODEV;
 }
+static inline void dev_pm_domain_detach(struct device *dev, bool power_off) {}
+#endif
 
 #endif /* _LINUX_PM_DOMAIN_H */
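The new OF provider interface above resolves a device's power-domain phandle through an xlate callback registered per provider node. The following userspace sketch (not kernel code; all `sim_*` names are illustrative stand-ins, not the real API) models the onecell xlate convention, where the first cell of the phandle spec indexes into the provider's domain array, in the style of `__of_genpd_xlate_onecell()`:

```c
#include <assert.h>
#include <stddef.h>

struct sim_domain { const char *name; };

/* Mirrors struct genpd_onecell_data: an array of domains plus a count. */
struct sim_onecell_data {
	struct sim_domain **domains;
	unsigned int num_domains;
};

/* The first phandle-spec cell selects a domain; out-of-range
 * specs resolve to nothing (the kernel returns ERR_PTR instead). */
static struct sim_domain *sim_xlate_onecell(unsigned int cell0, void *data)
{
	struct sim_onecell_data *d = data;

	if (cell0 >= d->num_domains)
		return NULL;
	return d->domains[cell0];
}
```

A provider with a single domain would use the "simple" xlate, which ignores the spec cells and always returns the one domain passed as `data`.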
index 519064e0c94302fd39ced27b6aab51b55f904f60..3388c1b6f7d8d3b981eee4d886fdcdf9e0f30297 100644 (file)
@@ -189,6 +189,8 @@ struct platform_suspend_ops {
 
 struct platform_freeze_ops {
        int (*begin)(void);
+       int (*prepare)(void);
+       void (*restore)(void);
        void (*end)(void);
 };
 
@@ -371,6 +373,8 @@ extern int unregister_pm_notifier(struct notifier_block *nb);
 extern bool events_check_enabled;
 
 extern bool pm_wakeup_pending(void);
+extern void pm_system_wakeup(void);
+extern void pm_wakeup_clear(void);
 extern bool pm_get_wakeup_count(unsigned int *count, bool block);
 extern bool pm_save_wakeup_count(unsigned int count);
 extern void pm_wakep_autosleep_enabled(bool set);
@@ -418,6 +422,8 @@ static inline int unregister_pm_notifier(struct notifier_block *nb)
 #define pm_notifier(fn, pri)   do { (void)(fn); } while (0)
 
 static inline bool pm_wakeup_pending(void) { return false; }
+static inline void pm_system_wakeup(void) {}
+static inline void pm_wakeup_clear(void) {}
 
 static inline void lock_system_sleep(void) {}
 static inline void unlock_system_sleep(void) {}
index 6223fab9a9d22b7bedd5e6b1f23ccd8a0347d6d1..8fb52e9bddc1deb1334eab08d1b2cf75233c299e 100644 (file)
@@ -342,6 +342,31 @@ static bool irq_check_poll(struct irq_desc *desc)
        return irq_wait_for_poll(desc);
 }
 
+static bool irq_may_run(struct irq_desc *desc)
+{
+       unsigned int mask = IRQD_IRQ_INPROGRESS | IRQD_WAKEUP_ARMED;
+
+       /*
+        * If the interrupt is not in progress and is not an armed
+        * wakeup interrupt, proceed.
+        */
+       if (!irqd_has_set(&desc->irq_data, mask))
+               return true;
+
+       /*
+        * If the interrupt is an armed wakeup source, mark it pending
+        * and suspended, disable it and notify the pm core about the
+        * event.
+        */
+       if (irq_pm_check_wakeup(desc))
+               return false;
+
+       /*
+        * Handle a potential concurrent poll on a different core.
+        */
+       return irq_check_poll(desc);
+}
+
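The three-way decision in `irq_may_run()` can be sketched in isolation. This is a userspace model (plain bit flags instead of `struct irq_data`; the `SIM_*` names and verdict enum are illustrative, not the kernel's): an unremarkable interrupt runs, an armed wakeup source aborts into the PM-notification path, and anything else defers to the poll check.

```c
/* Flag bits standing in for IRQD_IRQ_INPROGRESS / IRQD_WAKEUP_ARMED. */
#define SIM_IRQ_INPROGRESS	(1u << 0)
#define SIM_WAKEUP_ARMED	(1u << 1)

enum sim_verdict { SIM_RUN, SIM_ABORT_WAKEUP, SIM_POLL };

/* What the flow handler should do for a line in the given state. */
static enum sim_verdict sim_irq_may_run(unsigned int state)
{
	/* Neither in progress nor an armed wakeup: handle it now. */
	if (!(state & (SIM_IRQ_INPROGRESS | SIM_WAKEUP_ARMED)))
		return SIM_RUN;

	/* Armed wakeup source: mark pending/suspended, notify PM core. */
	if (state & SIM_WAKEUP_ARMED)
		return SIM_ABORT_WAKEUP;

	/* Otherwise another CPU may be polling this line. */
	return SIM_POLL;
}
```

In the real handlers the `SIM_ABORT_WAKEUP` path is `irq_pm_check_wakeup()` returning true, and `SIM_POLL` is the fallthrough into `irq_check_poll()`.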
 /**
  *     handle_simple_irq - Simple and software-decoded IRQs.
  *     @irq:   the interrupt number
@@ -359,9 +384,8 @@ handle_simple_irq(unsigned int irq, struct irq_desc *desc)
 {
        raw_spin_lock(&desc->lock);
 
-       if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-               if (!irq_check_poll(desc))
-                       goto out_unlock;
+       if (!irq_may_run(desc))
+               goto out_unlock;
 
        desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
        kstat_incr_irqs_this_cpu(irq, desc);
@@ -412,9 +436,8 @@ handle_level_irq(unsigned int irq, struct irq_desc *desc)
        raw_spin_lock(&desc->lock);
        mask_ack_irq(desc);
 
-       if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-               if (!irq_check_poll(desc))
-                       goto out_unlock;
+       if (!irq_may_run(desc))
+               goto out_unlock;
 
        desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
        kstat_incr_irqs_this_cpu(irq, desc);
@@ -485,9 +508,8 @@ handle_fasteoi_irq(unsigned int irq, struct irq_desc *desc)
 
        raw_spin_lock(&desc->lock);
 
-       if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-               if (!irq_check_poll(desc))
-                       goto out;
+       if (!irq_may_run(desc))
+               goto out;
 
        desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
        kstat_incr_irqs_this_cpu(irq, desc);
@@ -541,19 +563,23 @@ handle_edge_irq(unsigned int irq, struct irq_desc *desc)
        raw_spin_lock(&desc->lock);
 
        desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
+
+       if (!irq_may_run(desc)) {
+               desc->istate |= IRQS_PENDING;
+               mask_ack_irq(desc);
+               goto out_unlock;
+       }
+
        /*
-        * If we're currently running this IRQ, or its disabled,
-        * we shouldn't process the IRQ. Mark it pending, handle
-        * the necessary masking and go out
+        * If it's disabled or no action is available, mask it and get
+        * out of here.
         */
-       if (unlikely(irqd_irq_disabled(&desc->irq_data) ||
-                    irqd_irq_inprogress(&desc->irq_data) || !desc->action)) {
-               if (!irq_check_poll(desc)) {
-                       desc->istate |= IRQS_PENDING;
-                       mask_ack_irq(desc);
-                       goto out_unlock;
-               }
+       if (irqd_irq_disabled(&desc->irq_data) || !desc->action) {
+               desc->istate |= IRQS_PENDING;
+               mask_ack_irq(desc);
+               goto out_unlock;
        }
+
        kstat_incr_irqs_this_cpu(irq, desc);
 
        /* Start handling the irq */
@@ -602,18 +628,21 @@ void handle_edge_eoi_irq(unsigned int irq, struct irq_desc *desc)
        raw_spin_lock(&desc->lock);
 
        desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
+
+       if (!irq_may_run(desc)) {
+               desc->istate |= IRQS_PENDING;
+               goto out_eoi;
+       }
+
        /*
-        * If we're currently running this IRQ, or its disabled,
-        * we shouldn't process the IRQ. Mark it pending, handle
-        * the necessary masking and go out
+        * If it's disabled or no action is available, mask it and get
+        * out of here.
         */
-       if (unlikely(irqd_irq_disabled(&desc->irq_data) ||
-                    irqd_irq_inprogress(&desc->irq_data) || !desc->action)) {
-               if (!irq_check_poll(desc)) {
-                       desc->istate |= IRQS_PENDING;
-                       goto out_eoi;
-               }
+       if (irqd_irq_disabled(&desc->irq_data) || !desc->action) {
+               desc->istate |= IRQS_PENDING;
+               goto out_eoi;
        }
+
        kstat_incr_irqs_this_cpu(irq, desc);
 
        do {
index 099ea2e0eb8833676b3d234f2740487873bac093..4332d766619d1c700c600ec0678bc6c3ca47a6fa 100644 (file)
@@ -63,8 +63,8 @@ enum {
 
 extern int __irq_set_trigger(struct irq_desc *desc, unsigned int irq,
                unsigned long flags);
-extern void __disable_irq(struct irq_desc *desc, unsigned int irq, bool susp);
-extern void __enable_irq(struct irq_desc *desc, unsigned int irq, bool resume);
+extern void __disable_irq(struct irq_desc *desc, unsigned int irq);
+extern void __enable_irq(struct irq_desc *desc, unsigned int irq);
 
 extern int irq_startup(struct irq_desc *desc, bool resend);
 extern void irq_shutdown(struct irq_desc *desc);
@@ -194,3 +194,15 @@ static inline void kstat_incr_irqs_this_cpu(unsigned int irq, struct irq_desc *d
        __this_cpu_inc(*desc->kstat_irqs);
        __this_cpu_inc(kstat.irqs_sum);
 }
+
+#ifdef CONFIG_PM_SLEEP
+bool irq_pm_check_wakeup(struct irq_desc *desc);
+void irq_pm_install_action(struct irq_desc *desc, struct irqaction *action);
+void irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action);
+#else
+static inline bool irq_pm_check_wakeup(struct irq_desc *desc) { return false; }
+static inline void
+irq_pm_install_action(struct irq_desc *desc, struct irqaction *action) { }
+static inline void
+irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action) { }
+#endif
index 3dc6a61bf06a447acb70f67fc3942fc2286e32d2..0a9104b4608b8dc374769a7435b594f27a0b97ae 100644 (file)
@@ -382,14 +382,8 @@ setup_affinity(unsigned int irq, struct irq_desc *desc, struct cpumask *mask)
 }
 #endif
 
-void __disable_irq(struct irq_desc *desc, unsigned int irq, bool suspend)
+void __disable_irq(struct irq_desc *desc, unsigned int irq)
 {
-       if (suspend) {
-               if (!desc->action || (desc->action->flags & IRQF_NO_SUSPEND))
-                       return;
-               desc->istate |= IRQS_SUSPENDED;
-       }
-
        if (!desc->depth++)
                irq_disable(desc);
 }
@@ -401,7 +395,7 @@ static int __disable_irq_nosync(unsigned int irq)
 
        if (!desc)
                return -EINVAL;
-       __disable_irq(desc, irq, false);
+       __disable_irq(desc, irq);
        irq_put_desc_busunlock(desc, flags);
        return 0;
 }
@@ -442,20 +436,8 @@ void disable_irq(unsigned int irq)
 }
 EXPORT_SYMBOL(disable_irq);
 
-void __enable_irq(struct irq_desc *desc, unsigned int irq, bool resume)
+void __enable_irq(struct irq_desc *desc, unsigned int irq)
 {
-       if (resume) {
-               if (!(desc->istate & IRQS_SUSPENDED)) {
-                       if (!desc->action)
-                               return;
-                       if (!(desc->action->flags & IRQF_FORCE_RESUME))
-                               return;
-                       /* Pretend that it got disabled ! */
-                       desc->depth++;
-               }
-               desc->istate &= ~IRQS_SUSPENDED;
-       }
-
        switch (desc->depth) {
        case 0:
  err_out:
@@ -497,7 +479,7 @@ void enable_irq(unsigned int irq)
                 KERN_ERR "enable_irq before setup/request_irq: irq %u\n", irq))
                goto out;
 
-       __enable_irq(desc, irq, false);
+       __enable_irq(desc, irq);
 out:
        irq_put_desc_busunlock(desc, flags);
 }
@@ -1218,6 +1200,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
        new->irq = irq;
        *old_ptr = new;
 
+       irq_pm_install_action(desc, new);
+
        /* Reset broken irq detection when installing new handler */
        desc->irq_count = 0;
        desc->irqs_unhandled = 0;
@@ -1228,7 +1212,7 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
         */
        if (shared && (desc->istate & IRQS_SPURIOUS_DISABLED)) {
                desc->istate &= ~IRQS_SPURIOUS_DISABLED;
-               __enable_irq(desc, irq, false);
+               __enable_irq(desc, irq);
        }
 
        raw_spin_unlock_irqrestore(&desc->lock, flags);
@@ -1336,6 +1320,8 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
        /* Found it - now remove it from the list of entries: */
        *action_ptr = action->next;
 
+       irq_pm_remove_action(desc, action);
+
        /* If this was the last handler, shut down the IRQ line: */
        if (!desc->action) {
                irq_shutdown(desc);
index abcd6ca86cb76b56e5979613a1964c0db743b5d0..3ca5325927045572edfa7d3eebd79f01b2a4c29a 100644 (file)
 #include <linux/irq.h>
 #include <linux/module.h>
 #include <linux/interrupt.h>
+#include <linux/suspend.h>
 #include <linux/syscore_ops.h>
 
 #include "internals.h"
 
+bool irq_pm_check_wakeup(struct irq_desc *desc)
+{
+       if (irqd_is_wakeup_armed(&desc->irq_data)) {
+               irqd_clear(&desc->irq_data, IRQD_WAKEUP_ARMED);
+               desc->istate |= IRQS_SUSPENDED | IRQS_PENDING;
+               desc->depth++;
+               irq_disable(desc);
+               pm_system_wakeup();
+               return true;
+       }
+       return false;
+}
+
+/*
+ * Called from __setup_irq() with desc->lock held after @action has
+ * been installed in the action chain.
+ */
+void irq_pm_install_action(struct irq_desc *desc, struct irqaction *action)
+{
+       desc->nr_actions++;
+
+       if (action->flags & IRQF_FORCE_RESUME)
+               desc->force_resume_depth++;
+
+       WARN_ON_ONCE(desc->force_resume_depth &&
+                    desc->force_resume_depth != desc->nr_actions);
+
+       if (action->flags & IRQF_NO_SUSPEND)
+               desc->no_suspend_depth++;
+
+       WARN_ON_ONCE(desc->no_suspend_depth &&
+                    desc->no_suspend_depth != desc->nr_actions);
+}
+
+/*
+ * Called from __free_irq() with desc->lock held after @action has
+ * been removed from the action chain.
+ */
+void irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action)
+{
+       desc->nr_actions--;
+
+       if (action->flags & IRQF_FORCE_RESUME)
+               desc->force_resume_depth--;
+
+       if (action->flags & IRQF_NO_SUSPEND)
+               desc->no_suspend_depth--;
+}
+
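The install/remove hooks above enforce an invariant on shared lines: if any action sets IRQF_NO_SUSPEND (or IRQF_FORCE_RESUME), every action on that descriptor must, so the depth counter equals `nr_actions`. A userspace sketch of that bookkeeping (illustrative `sim_*` names; the kernel WARNs where this returns false):

```c
#include <stdbool.h>

#define SIM_IRQF_NO_SUSPEND	(1u << 0)
#define SIM_IRQF_FORCE_RESUME	(1u << 1)

struct sim_desc {
	unsigned int nr_actions;
	unsigned int no_suspend_depth;
	unsigned int force_resume_depth;
};

/* Install one action; returns false if the suspend-related flags
 * are inconsistent across the shared line's actions. */
static bool sim_install(struct sim_desc *d, unsigned int flags)
{
	d->nr_actions++;

	if (flags & SIM_IRQF_NO_SUSPEND)
		d->no_suspend_depth++;
	if (flags & SIM_IRQF_FORCE_RESUME)
		d->force_resume_depth++;

	/* Nonzero depth must cover every installed action. */
	if (d->no_suspend_depth && d->no_suspend_depth != d->nr_actions)
		return false;
	if (d->force_resume_depth && d->force_resume_depth != d->nr_actions)
		return false;
	return true;
}
```

This is why `suspend_device_irq()` can test `desc->no_suspend_depth` alone: a nonzero counter implies every handler on the line opted out of suspend.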
+static bool suspend_device_irq(struct irq_desc *desc, int irq)
+{
+       if (!desc->action || desc->no_suspend_depth)
+               return false;
+
+       if (irqd_is_wakeup_set(&desc->irq_data)) {
+               irqd_set(&desc->irq_data, IRQD_WAKEUP_ARMED);
+               /*
+                * We return true here to force the caller to issue
+                * synchronize_irq(). We need to make sure that the
+                * IRQD_WAKEUP_ARMED is visible before we return from
+                * suspend_device_irqs().
+                */
+               return true;
+       }
+
+       desc->istate |= IRQS_SUSPENDED;
+       __disable_irq(desc, irq);
+
+       /*
+        * Hardware which has no wakeup source configuration facility
+        * requires that the non wakeup interrupts are masked at the
+        * chip level. The chip implementation indicates that with
+        * IRQCHIP_MASK_ON_SUSPEND.
+        */
+       if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
+               mask_irq(desc);
+       return true;
+}
+
 /**
  * suspend_device_irqs - disable all currently enabled interrupt lines
  *
- * During system-wide suspend or hibernation device drivers need to be prevented
- * from receiving interrupts and this function is provided for this purpose.
- * It marks all interrupt lines in use, except for the timer ones, as disabled
- * and sets the IRQS_SUSPENDED flag for each of them.
+ * During system-wide suspend or hibernation device drivers need to be
+ * prevented from receiving interrupts and this function is provided
+ * for this purpose.
+ *
+ * So we disable all interrupts and mark them IRQS_SUSPENDED except
+ * for those which are unused, those which are marked as not
+ * suspendable via an interrupt request with the flag IRQF_NO_SUSPEND
+ * set and those which are marked as active wakeup sources.
+ *
+ * The active wakeup sources are handled by the flow handler entry
+ * code which checks for the IRQD_WAKEUP_ARMED flag, suspends the
+ * interrupt and notifies the pm core about the wakeup.
  */
 void suspend_device_irqs(void)
 {
@@ -28,18 +116,36 @@ void suspend_device_irqs(void)
 
        for_each_irq_desc(irq, desc) {
                unsigned long flags;
+               bool sync;
 
                raw_spin_lock_irqsave(&desc->lock, flags);
-               __disable_irq(desc, irq, true);
+               sync = suspend_device_irq(desc, irq);
                raw_spin_unlock_irqrestore(&desc->lock, flags);
-       }
 
-       for_each_irq_desc(irq, desc)
-               if (desc->istate & IRQS_SUSPENDED)
+               if (sync)
                        synchronize_irq(irq);
+       }
 }
 EXPORT_SYMBOL_GPL(suspend_device_irqs);
 
+static void resume_irq(struct irq_desc *desc, int irq)
+{
+       irqd_clear(&desc->irq_data, IRQD_WAKEUP_ARMED);
+
+       if (desc->istate & IRQS_SUSPENDED)
+               goto resume;
+
+       /* Force resume the interrupt? */
+       if (!desc->force_resume_depth)
+               return;
+
+       /* Pretend that it got disabled ! */
+       desc->depth++;
+resume:
+       desc->istate &= ~IRQS_SUSPENDED;
+       __enable_irq(desc, irq);
+}
+
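The depth accounting in `resume_irq()` is easy to misread, so here is a userspace model (hedged: `sim_*` fields are illustrative, and `__enable_irq()` is collapsed into a simple decrement). A suspended line is re-enabled; a non-suspended line is only touched when IRQF_FORCE_RESUME is in play, in which case depth is bumped first so the enable path balances out:

```c
#include <stdbool.h>

struct sim_rdesc {
	unsigned int depth;		/* disable nesting count */
	bool suspended;			/* IRQS_SUSPENDED analogue */
	unsigned int force_resume_depth;
};

static void sim_resume_irq(struct sim_rdesc *d)
{
	if (!d->suspended) {
		/* Not suspended and no forced resume: leave it alone. */
		if (!d->force_resume_depth)
			return;
		/* Pretend that it got disabled, as the kernel comment says. */
		d->depth++;
	}
	d->suspended = false;
	/* Stand-in for __enable_irq(): unwind one disable level. */
	if (d->depth)
		d->depth--;
}
```

The net effect of the force-resume branch is a no-op on depth, but it routes the line through the enable machinery so a lazily-disabled interrupt is actually unmasked again.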
 static void resume_irqs(bool want_early)
 {
        struct irq_desc *desc;
@@ -54,7 +160,7 @@ static void resume_irqs(bool want_early)
                        continue;
 
                raw_spin_lock_irqsave(&desc->lock, flags);
-               __enable_irq(desc, irq, true);
+               resume_irq(desc, irq);
                raw_spin_unlock_irqrestore(&desc->lock, flags);
        }
 }
@@ -93,38 +199,3 @@ void resume_device_irqs(void)
        resume_irqs(false);
 }
 EXPORT_SYMBOL_GPL(resume_device_irqs);
-
-/**
- * check_wakeup_irqs - check if any wake-up interrupts are pending
- */
-int check_wakeup_irqs(void)
-{
-       struct irq_desc *desc;
-       int irq;
-
-       for_each_irq_desc(irq, desc) {
-               /*
-                * Only interrupts which are marked as wakeup source
-                * and have not been disabled before the suspend check
-                * can abort suspend.
-                */
-               if (irqd_is_wakeup_set(&desc->irq_data)) {
-                       if (desc->depth == 1 && desc->istate & IRQS_PENDING)
-                               return -EBUSY;
-                       continue;
-               }
-               /*
-                * Check the non wakeup interrupts whether they need
-                * to be masked before finally going into suspend
-                * state. That's for hardware which has no wakeup
-                * source configuration facility. The chip
-                * implementation indicates that with
-                * IRQCHIP_MASK_ON_SUSPEND.
-                */
-               if (desc->istate & IRQS_SUSPENDED &&
-                   irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
-                       mask_irq(desc);
-       }
-
-       return 0;
-}
index e4e4121fa327d72e15f121697a493561ee6611b2..bbef57f5bdfdbc1764f015e2cafe3ec38ce79f0b 100644 (file)
@@ -302,6 +302,10 @@ config PM_GENERIC_DOMAINS_RUNTIME
        def_bool y
        depends on PM_RUNTIME && PM_GENERIC_DOMAINS
 
+config PM_GENERIC_DOMAINS_OF
+       def_bool y
+       depends on PM_GENERIC_DOMAINS && OF
+
 config CPU_PM
        bool
        depends on SUSPEND || CPU_IDLE
index 4ee194eb524b3663dd39dfa7c22eb9565321853b..7b323221b9ee9ad015556cf965ab8f211a1ff8c8 100644 (file)
@@ -129,6 +129,7 @@ int freeze_processes(void)
        if (!pm_freezing)
                atomic_inc(&system_freezing_cnt);
 
+       pm_wakeup_clear();
        printk("Freezing user space processes ... ");
        pm_freezing = true;
        error = try_to_freeze_tasks(true);
index f1604d8cf489a2e0cc689ab82ce6c3adfb82f693..791a61892bb536d5ce9296a1ceae1c1c1d0e7384 100644 (file)
@@ -725,6 +725,14 @@ static void memory_bm_clear_bit(struct memory_bitmap *bm, unsigned long pfn)
        clear_bit(bit, addr);
 }
 
+static void memory_bm_clear_current(struct memory_bitmap *bm)
+{
+       int bit;
+
+       bit = max(bm->cur.node_bit - 1, 0);
+       clear_bit(bit, bm->cur.node->data);
+}
+
 static int memory_bm_test_bit(struct memory_bitmap *bm, unsigned long pfn)
 {
        void *addr;
@@ -1333,23 +1341,39 @@ static struct memory_bitmap copy_bm;
 
 void swsusp_free(void)
 {
-       struct zone *zone;
-       unsigned long pfn, max_zone_pfn;
+       unsigned long fb_pfn, fr_pfn;
 
-       for_each_populated_zone(zone) {
-               max_zone_pfn = zone_end_pfn(zone);
-               for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
-                       if (pfn_valid(pfn)) {
-                               struct page *page = pfn_to_page(pfn);
-
-                               if (swsusp_page_is_forbidden(page) &&
-                                   swsusp_page_is_free(page)) {
-                                       swsusp_unset_page_forbidden(page);
-                                       swsusp_unset_page_free(page);
-                                       __free_page(page);
-                               }
-                       }
+       if (!forbidden_pages_map || !free_pages_map)
+               goto out;
+
+       memory_bm_position_reset(forbidden_pages_map);
+       memory_bm_position_reset(free_pages_map);
+
+loop:
+       fr_pfn = memory_bm_next_pfn(free_pages_map);
+       fb_pfn = memory_bm_next_pfn(forbidden_pages_map);
+
+       /*
+        * Find the next bit set in both bitmaps. This is guaranteed to
+        * terminate when fb_pfn == fr_pfn == BM_END_OF_MAP.
+        */
+       do {
+               if (fb_pfn < fr_pfn)
+                       fb_pfn = memory_bm_next_pfn(forbidden_pages_map);
+               if (fr_pfn < fb_pfn)
+                       fr_pfn = memory_bm_next_pfn(free_pages_map);
+       } while (fb_pfn != fr_pfn);
+
+       if (fr_pfn != BM_END_OF_MAP && pfn_valid(fr_pfn)) {
+               struct page *page = pfn_to_page(fr_pfn);
+
+               memory_bm_clear_current(forbidden_pages_map);
+               memory_bm_clear_current(free_pages_map);
+               __free_page(page);
+               goto loop;
        }
+
+out:
        nr_copy_pages = 0;
        nr_meta_pages = 0;
        restore_pblist = NULL;
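The rewritten `swsusp_free()` replaces the per-zone pfn scan with a merge-style walk of two bitmaps treated as sorted pfn streams, freeing only pfns set in both. A userspace sketch of the advance-the-smaller loop (arrays stand in for the memory bitmaps, `SIM_END` for BM_END_OF_MAP; all names are illustrative):

```c
#include <stddef.h>

#define SIM_END (~0ul)

struct sim_stream { const unsigned long *pfns; size_t len, pos; };

static unsigned long sim_next(struct sim_stream *s)
{
	return s->pos < s->len ? s->pfns[s->pos++] : SIM_END;
}

/* Count pfns present in both streams, mirroring the loop in
 * swsusp_free(): advance whichever side is behind until the two
 * cursors meet; both hitting SIM_END guarantees termination. */
static int sim_count_common(struct sim_stream *forbidden,
			    struct sim_stream *free_map)
{
	unsigned long fb = sim_next(forbidden);
	unsigned long fr = sim_next(free_map);
	int n = 0;

	for (;;) {
		while (fb != fr) {
			if (fb < fr)
				fb = sim_next(forbidden);
			else
				fr = sim_next(free_map);
		}
		if (fr == SIM_END)
			break;
		n++;	/* the kernel frees the page here */
		fb = sim_next(forbidden);
		fr = sim_next(free_map);
	}
	return n;
}
```

Compared with the old code, which touched every valid pfn in every populated zone, this walk is proportional to the number of set bits, which is why the change pays off on large-memory machines.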
index 18c62195660f6c6c458d74346ee7979ec4388db4..4ca9a33ff62020e63d15219ce9f097611ebf6507 100644 (file)
@@ -145,18 +145,30 @@ static int platform_suspend_prepare(suspend_state_t state)
 }
 
 static int platform_suspend_prepare_late(suspend_state_t state)
+{
+       return state == PM_SUSPEND_FREEZE && freeze_ops->prepare ?
+               freeze_ops->prepare() : 0;
+}
+
+static int platform_suspend_prepare_noirq(suspend_state_t state)
 {
        return state != PM_SUSPEND_FREEZE && suspend_ops->prepare_late ?
                suspend_ops->prepare_late() : 0;
 }
 
-static void platform_suspend_wake(suspend_state_t state)
+static void platform_resume_noirq(suspend_state_t state)
 {
        if (state != PM_SUSPEND_FREEZE && suspend_ops->wake)
                suspend_ops->wake();
 }
 
-static void platform_suspend_finish(suspend_state_t state)
+static void platform_resume_early(suspend_state_t state)
+{
+       if (state == PM_SUSPEND_FREEZE && freeze_ops->restore)
+               freeze_ops->restore();
+}
+
+static void platform_resume_finish(suspend_state_t state)
 {
        if (state != PM_SUSPEND_FREEZE && suspend_ops->finish)
                suspend_ops->finish();
@@ -172,7 +184,7 @@ static int platform_suspend_begin(suspend_state_t state)
                return 0;
 }
 
-static void platform_suspend_end(suspend_state_t state)
+static void platform_resume_end(suspend_state_t state)
 {
        if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->end)
                freeze_ops->end();
@@ -180,7 +192,7 @@ static void platform_suspend_end(suspend_state_t state)
                suspend_ops->end();
 }
 
-static void platform_suspend_recover(suspend_state_t state)
+static void platform_recover(suspend_state_t state)
 {
        if (state != PM_SUSPEND_FREEZE && suspend_ops->recover)
                suspend_ops->recover();
@@ -265,12 +277,21 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
        if (error)
                goto Platform_finish;
 
-       error = dpm_suspend_end(PMSG_SUSPEND);
+       error = dpm_suspend_late(PMSG_SUSPEND);
        if (error) {
-               printk(KERN_ERR "PM: Some devices failed to power down\n");
+               printk(KERN_ERR "PM: late suspend of devices failed\n");
                goto Platform_finish;
        }
        error = platform_suspend_prepare_late(state);
+       if (error)
+               goto Devices_early_resume;
+
+       error = dpm_suspend_noirq(PMSG_SUSPEND);
+       if (error) {
+               printk(KERN_ERR "PM: noirq suspend of devices failed\n");
+               goto Platform_early_resume;
+       }
+       error = platform_suspend_prepare_noirq(state);
        if (error)
                goto Platform_wake;
 
@@ -318,11 +339,17 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
        enable_nonboot_cpus();
 
  Platform_wake:
-       platform_suspend_wake(state);
-       dpm_resume_start(PMSG_RESUME);
+       platform_resume_noirq(state);
+       dpm_resume_noirq(PMSG_RESUME);
+
+ Platform_early_resume:
+       platform_resume_early(state);
+
+ Devices_early_resume:
+       dpm_resume_early(PMSG_RESUME);
 
  Platform_finish:
-       platform_suspend_finish(state);
+       platform_resume_finish(state);
        return error;
 }
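The suspend.c hunk above splits the old single `dpm_suspend_end()` call into separate late and noirq phases, and adds matching unwind labels so that each phase that succeeded is undone by its resume counterpart in reverse order on failure. A compilable sketch of just that goto-unwind shape, with a trace string replacing the real device PM calls (`phase()`, `fail_at_noirq`, and the trace itself are made-up names for illustration):

```c
#include <string.h>

static char trace[256];
static int fail_at_noirq;   /* inject a failure in the noirq phase */

/* Record the phase name; optionally report failure, as dpm_* would. */
static int phase(const char *name, int fail)
{
	strcat(trace, name);
	strcat(trace, " ");
	return fail ? -1 : 0;
}

/*
 * Mirrors the reworked suspend_enter() control flow: a failure in the
 * noirq phase skips resume_noirq and falls through the remaining
 * resume steps, so suspend and resume phases stay strictly paired.
 */
static int enter(void)
{
	int error;

	error = phase("suspend_late", 0);
	if (error)
		goto Platform_finish;

	error = phase("suspend_noirq", fail_at_noirq);
	if (error)
		goto Platform_early_resume;

	phase("enter", 0);

	phase("resume_noirq", 0);
 Platform_early_resume:
	phase("resume_early", 0);
 Platform_finish:
	phase("resume_finish", 0);
	return error;
}
```

This is why the hunk renames `platform_suspend_wake()` and friends to `platform_resume_*()`: every label on the unwind path now names the resume phase it begins, not the suspend phase it abandons.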
 
@@ -361,14 +388,16 @@ int suspend_devices_and_enter(suspend_state_t state)
        suspend_test_start();
        dpm_resume_end(PMSG_RESUME);
        suspend_test_finish("resume devices");
+       trace_suspend_resume(TPS("resume_console"), state, true);
        resume_console();
+       trace_suspend_resume(TPS("resume_console"), state, false);
 
  Close:
-       platform_suspend_end(state);
+       platform_resume_end(state);
        return error;
 
  Recover_platform:
-       platform_suspend_recover(state);
+       platform_recover(state);
        goto Resume_devices;
 }
 
index bd91bc177c93a65ca757c70467cd6e235af6a67c..084452e34a125ff24da375e6dce25c1224b46310 100644 (file)
@@ -22,6 +22,8 @@
 #define TEST_SUSPEND_SECONDS   10
 
 static unsigned long suspend_test_start_time;
+static u32 test_repeat_count_max = 1;
+static u32 test_repeat_count_current;
 
 void suspend_test_start(void)
 {
@@ -74,6 +76,7 @@ static void __init test_wakealarm(struct rtc_device *rtc, suspend_state_t state)
        int                     status;
 
        /* this may fail if the RTC hasn't been initialized */
+repeat:
        status = rtc_read_time(rtc, &alm.time);
        if (status < 0) {
                printk(err_readtime, dev_name(&rtc->dev), status);
@@ -100,10 +103,21 @@ static void __init test_wakealarm(struct rtc_device *rtc, suspend_state_t state)
        if (state == PM_SUSPEND_STANDBY) {
                printk(info_test, pm_states[state]);
                status = pm_suspend(state);
+               if (status < 0)
+                       state = PM_SUSPEND_FREEZE;
        }
+       if (state == PM_SUSPEND_FREEZE) {
+               printk(info_test, pm_states[state]);
+               status = pm_suspend(state);
+       }
+
        if (status < 0)
                printk(err_suspend, status);
 
+       test_repeat_count_current++;
+       if (test_repeat_count_current < test_repeat_count_max)
+               goto repeat;
+
        /* Some platforms can't detect that the alarm triggered the
         * wakeup, or (accordingly) disable it afterwards.
         * It's supposed to give oneshot behavior; cope.
@@ -137,16 +151,28 @@ static char warn_bad_state[] __initdata =
 static int __init setup_test_suspend(char *value)
 {
        int i;
+       char *repeat;
+       char *suspend_type;
 
-       /* "=mem" ==> "mem" */
+       /* example : "=mem[,N]" ==> "mem[,N]" */
        value++;
+       suspend_type = strsep(&value, ",");
+       if (!suspend_type)
+               return 0;
+
+       repeat = strsep(&value, ",");
+       if (repeat) {
+               if (kstrtou32(repeat, 0, &test_repeat_count_max))
+                       return 0;
+       }
+
        for (i = 0; pm_labels[i]; i++)
-               if (!strcmp(pm_labels[i], value)) {
+               if (!strcmp(pm_labels[i], suspend_type)) {
                        test_state_label = pm_labels[i];
                        return 0;
                }
 
-       printk(warn_bad_state, value);
+       printk(warn_bad_state, suspend_type);
        return 0;
 }
 __setup("test_suspend", setup_test_suspend);
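The suspend_test.c hunk extends the boot parameter from `test_suspend=mem` to `test_suspend=mem[,N]`, using `strsep()` to split the state label from an optional repeat count. A userspace sketch of that parsing, with `strtoul()` standing in for the kernel's `kstrtou32()` (the function name, label table, and `repeat_max` are hypothetical, not kernel symbols):

```c
#define _DEFAULT_SOURCE   /* for strsep() on glibc */
#include <stdlib.h>
#include <string.h>

static unsigned int repeat_max = 1;

/*
 * Parse "label[,N]": return the matched state label, or NULL for an
 * unknown label, and set repeat_max if a count was given.  The buffer
 * is modified in place, just as strsep() does in the kernel version.
 */
static const char *parse_test_suspend(char *value)
{
	static const char *labels[] = { "mem", "standby", "freeze", NULL };
	char *type = strsep(&value, ",");
	char *repeat = strsep(&value, ",");
	int i;

	if (!type)
		return NULL;
	if (repeat)
		repeat_max = (unsigned int)strtoul(repeat, NULL, 0);

	for (i = 0; labels[i]; i++)
		if (!strcmp(labels[i], type))
			return labels[i];
	return NULL;   /* unknown state label */
}
```

Note that, as in the patch, a bare `test_suspend=mem` leaves the repeat count at its default of one, and a bad count simply falls back rather than failing the boot.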