Mike Mason [Tue, 8 Jul 2008 16:04:35 +0000 (02:04 +1000)]
powerpc/eeh: PERR/SERR bit settings during EEH device recovery
The following patch restores the PERR and SERR bits in the PCI
command register during an EEH device recovery. We have found
at least one case (an Agilent test card) where the PERR/SERR
bits are set to 1 by firmware at boot time, but are not restored
to 1 during EEH recovery. The patch fixes the Agilent card
problem. It has been tested on several other EEH-enabled cards
with no regressions.
Signed-off-by: Mike Mason <mmlnx@us.ibm.com>
Acked-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Michael Neuling [Tue, 8 Jul 2008 08:53:03 +0000 (18:53 +1000)]
powerpc: remove unused variable in emulate_fp_pair
regs is not used in emulate_fp_pair, so remove it.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Michael Neuling [Tue, 8 Jul 2008 08:43:41 +0000 (18:43 +1000)]
powerpc: fix swapcontext backwards compat. with VSX ucontext changes
When the ucontext was changed to add the VSX context, this broke backwards
compatibility of swapcontext. swapcontext only compares the ucontext size
passed in from the user to the new kernel ucontext size.
This adds a check against the old ucontext size (with VMX but without
VSX). It also adds some sanity checks for ucontexts without VSX, but
where VSX is used according to the MSR. This fixes both 32- and 64-bit
processes on 64-bit kernels.
Kudos to Paulus for noticing.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Paul Gortmaker [Mon, 7 Jul 2008 22:42:09 +0000 (08:42 +1000)]
powerpc/ibmebus: more meaningful variable name
Choose a more meaningful name for better System.map readability,
autopsy value, etc.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Grant Erickson [Mon, 7 Jul 2008 22:03:11 +0000 (08:03 +1000)]
ibm_newemac: Parameterize EMAC Multicast Match Handling
Various instances of the EMAC core vary in: 1) the number of address
match slots, 2) the width of the registers for handling address match slots,
3) the number of registers for handling address match slots and 4) the base
offset for those registers.
As the driver stands today, it assumes that all EMACs have 4 IAHT and
GAHT 32-bit registers, starting at offset 0x30 from the register base,
with only 16-bits of each used for a total of 64 match slots.
The 405EX(r) and 460EX now use the EMAC4SYNC core rather than the EMAC4
core. This core has 8 IAHT and GAHT registers, starting at offset 0x80
from the register base, with ALL 32-bits of each used for a total of
256 match slots.
This adds a new compatible device tree entry "emac4sync" and a new,
related feature flag "EMAC_FTR_EMAC4SYNC" along with a series of macros
and inlines which supply the appropriate parameterized value based on
the presence or absence of the EMAC4SYNC feature.
The code has further been reworked where appropriate to use those macros
and inlines.
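As a rough illustration (the helper name below follows the description above but is not quoted verbatim from the driver), such an inline might look like:
	static inline int emac_xaht_slots(struct emac_instance *dev)
	{
		/* EMAC4SYNC: 8 registers, all 32 bits used -> 256 match slots;
		 * older cores: 4 registers, 16 bits used   ->  64 match slots */
		if (emac_has_feature(dev, EMAC_FTR_EMAC4SYNC))
			return 8 * 32;
		return 4 * 16;
	}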
In addition, the register size passed to ioremap is now taken from the
device tree:
c4 for EMAC4SYNC cores
74 for EMAC4 cores
70 for EMAC cores
rather than sizeof (emac_regs).
Finally, the device trees have been updated with the appropriate compatible
entries and resource sizes.
This has been tested on an AMCC Haleakala board: 1) inbound
ICMP requests to 'haleakala.local' via MDNS from both Mac OS X 10.4.11
and Ubuntu 8.04 systems, as well as 2) outbound ICMP requests from
'haleakala.local' to those same systems in the '.local' domain via MDNS,
now work.
Signed-off-by: Grant Erickson <gerickson@nuovations.com>
Acked-by: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Dave Kleikamp [Mon, 7 Jul 2008 14:28:55 +0000 (00:28 +1000)]
powerpc/mm: Don't clear _PAGE_COHERENT when _PAGE_SAO is set
The current low level hash code on LPAR configurations clears
_PAGE_COHERENT (M) when either _PAGE_GUARDED (G) or _PAGE_NO_CACHE (I)
is set. This conflicts with _PAGE_SAO, which has the M, I and W bits set at
once (normally invalid combo) to indicate the new SAO attribute.
This changes the code to allow that case.
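A minimal sketch of the intent (the actual change is in the low-level hash flag conversion, and the exact code differs):
	/* Illustrative: only clear M for guarded/no-cache pages when the
	 * WIMG combination is not the special SAO encoding. */
	if ((flags & (_PAGE_GUARDED | _PAGE_NO_CACHE)) &&
	    (flags & _PAGE_SAO) != _PAGE_SAO)
		flags &= ~_PAGE_COHERENT;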
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Dave Kleikamp [Mon, 7 Jul 2008 14:28:54 +0000 (00:28 +1000)]
powerpc/mm: Add Strong Access Ordering support
Allow an application to enable Strong Access Ordering on specific pages of
memory on Power 7 hardware. Currently, Power has a weaker memory model than
x86. Implementing a stronger memory model allows an emulator to more
efficiently translate x86 code into power code, resulting in faster code
execution.
On Power 7 hardware, storing 0b1110 in the WIMG bits of the hpte enables
strong access ordering mode for the memory page. This patchset allows a
user to specify which pages are thus enabled by passing a new protection
bit through mmap() and mprotect(). I have defined PROT_SAO to be 0x10.
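Hypothetical userspace usage, assuming PROT_SAO is made visible to applications:
	#include <sys/mman.h>

	#ifndef PROT_SAO
	#define PROT_SAO 0x10	/* value chosen by this patchset */
	#endif

	int main(void)
	{
		/* map 64kB of anonymous memory with strong access ordering */
		void *buf = mmap(NULL, 65536, PROT_READ | PROT_WRITE | PROT_SAO,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		return buf == MAP_FAILED;
	}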
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Dave Kleikamp [Mon, 7 Jul 2008 14:28:53 +0000 (00:28 +1000)]
powerpc/mm: Add SAO Feature bit to the cputable
Add the CPU feature bit for the new Strong Access Ordering
facility of Power7.
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Dave Kleikamp [Mon, 7 Jul 2008 14:28:52 +0000 (00:28 +1000)]
powerpc/mm: Define flags for Strong Access Ordering
This patch defines:
- PROT_SAO, which is passed into mmap() and mprotect() in the prot field
- VM_SAO in vma->vm_flags, and
- _PAGE_SAO, the combination of WIMG bits in the pte that enables strong
access ordering for the page.
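Sketched out (PROT_SAO's value is given in the series, _PAGE_SAO is the W, I and M combination described earlier; the VM_SAO bit value below is illustrative only):
	#define PROT_SAO	0x10
	#define VM_SAO		0x20000000	/* illustrative vm_flags bit */
	#define _PAGE_SAO	(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)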
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Dave Kleikamp [Mon, 7 Jul 2008 14:28:51 +0000 (00:28 +1000)]
mm: Allow architectures to define additional protection bits
This patch allows architectures to define functions to deal with
additional protection bits for mmap() and mprotect().
arch_calc_vm_prot_bits() maps additional protection bits to vm_flags
arch_vm_get_page_prot() maps additional vm_flags to the vma's vm_page_prot
arch_validate_prot() checks for valid values of the protection bits
Note: vm_get_page_prot() is now pretty ugly, but the generated code
should be identical for architectures that don't define additional
protection bits.
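A sketch of how powerpc could wire these hooks up for the SAO case (simplified; the real definitions may also check CPU feature bits):
	#define arch_calc_vm_prot_bits(prot) \
		(((prot) & PROT_SAO) ? VM_SAO : 0)

	#define arch_vm_get_page_prot(vm_flags) \
		__pgprot(((vm_flags) & VM_SAO) ? _PAGE_SAO : 0)

	static inline int arch_validate_prot(unsigned long prot)
	{
		/* reject any bits beyond the standard ones plus PROT_SAO */
		if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
			return 0;
		return 1;
	}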
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Srinivasa Ds [Mon, 7 Jul 2008 14:22:27 +0000 (00:22 +1000)]
powerpc: Implement task_pt_regs() accessor
The task_pt_regs() macro allows access to the pt_regs of a given task.
This macro is not currently defined for the powerpc architecture, but
we need it for some upcoming utrace additions.
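On powerpc the saved user registers are already reachable via the thread struct, so the accessor can be as simple as (a sketch):
	#define task_pt_regs(tsk)	((tsk)->thread.regs)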
Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Benjamin Herrenschmidt [Mon, 7 Jul 2008 03:44:31 +0000 (13:44 +1000)]
powerpc: Use new printk extension %pS to print symbols on oops
This changes the oops and backtrace code to use the new %pS
printk extension to print out symbols rather than manually
calling print_symbol.
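For example, the instruction pointer can now be printed together with its symbol in one step, roughly:
	printk("NIP [%016lx] %pS\n", regs->nip, (void *)regs->nip);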
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Mark Nelson [Fri, 4 Jul 2008 19:05:45 +0000 (05:05 +1000)]
powerpc: move device_to_mask() to dma-mapping.h
Move device_to_mask() to dma-mapping.h because we need to use it from
outside dma_64.c in a later patch.
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Mark Nelson [Fri, 4 Jul 2008 19:05:44 +0000 (05:05 +1000)]
powerpc/cell: cell_dma_dev_setup_iommu() return the iommu table
Make cell_dma_dev_setup_iommu() return a pointer to the struct iommu_table
(or NULL if no table can be found) rather than putting this pointer into
dev->archdata.dma_data (let the caller do that), and rename this function
to cell_get_iommu_table() to reflect this change.
This will allow us to get the iommu table for a device that doesn't have
the table in the archdata.
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Mark Nelson [Fri, 4 Jul 2008 19:05:42 +0000 (05:05 +1000)]
powerpc/dma: implement new dma_*map*_attrs() interfaces
Update powerpc to use the new dma_*map*_attrs() interfaces. In doing so
update struct dma_mapping_ops to accept a struct dma_attrs and propagate
these changes through to all users of the code (generic IOMMU and the
64bit DMA code, and the iseries and ps3 platform code).
The old dma_*map_*() interfaces are reimplemented as calls to the
corresponding new interfaces.
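The compatibility wrappers follow a simple pattern, e.g. (a sketch):
	static inline dma_addr_t dma_map_single(struct device *dev, void *ptr,
						size_t size,
						enum dma_data_direction dir)
	{
		/* old interface: no attributes, so pass NULL through */
		return dma_map_single_attrs(dev, ptr, size, dir, NULL);
	}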
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Mark Nelson [Fri, 4 Jul 2008 19:05:41 +0000 (05:05 +1000)]
powerpc/dma: Add struct iommu_table argument to iommu_map_sg()
Make iommu_map_sg take a struct iommu_table. It did so before commit
740c3ce66700640a6e6136ff679b067e92125794 (iommu sg merging: ppc: make
iommu respect the segment size limits).
This stops the function from looking in archdata.dma_data for the iommu
table because in the future it will be called with a device that has
no table there.
This also has the nice side effect of making iommu_map_sg() match the
other map functions.
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Maxim Shchetynin [Fri, 4 Jul 2008 19:05:39 +0000 (05:05 +1000)]
powerpc/spufs: add atomic busy_spus counter to struct cbe_spu_info
As the nr_active counter also includes spus waiting for syscalls to return,
we need a separate counter that only counts spus that are currently running
on the spu side. This counter will be used by a cpufreq governor that targets
a frequency dependent on the number of running spus.
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Geoff Levand [Tue, 1 Jul 2008 18:17:05 +0000 (04:17 +1000)]
powerpc/ps3: Quiet system bus match output
Reduce the output verbosity of ps3_system_bus_match().
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Jeremy Kerr [Thu, 3 Jul 2008 01:42:20 +0000 (11:42 +1000)]
powerpc/spufs: only add ".ctx" file with "debug" mount option
Currently, the .ctx debug file in spu context directories is always
present.
We'd prefer to prevent users from relying on this file, so add a
"debug" mount option to spufs. The .ctx file will only be added to
the context directories when this option is present.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Jeremy Kerr [Mon, 30 Jun 2008 04:38:37 +0000 (14:38 +1000)]
powerpc/spufs: add sizes for context files
Populate the size member of a few context files. Leave out files that
have different semantics with read vs mmap, or contain a
variable-length hex string.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Jeremy Kerr [Mon, 30 Jun 2008 02:17:28 +0000 (12:17 +1000)]
powerpc/spufs: allow spufs files to specify sizes
Currently, spufs never specifies the i_size for the files in context
directories, so stat() always reports 0-byte files.
This change adds allows the spufs_dir_(nosched_)contents arrays to
specify a file size. This allows stat() to report correct file sizes,
and makes SEEK_END work.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Jeremy Kerr [Tue, 1 Jul 2008 00:22:50 +0000 (10:22 +1000)]
powerpc/spufs: avoid magic numbers for mapping sizes
Use a set of #defines for the size of context mappings, instead of
magic numbers.
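For illustration only (the names and values below are hypothetical, not the ones in the patch), the idea is simply:
	/* hypothetical examples of named mapping sizes */
	#define SPUFS_MFC_MAP_SIZE	0x1000
	#define SPUFS_CNTL_MAP_SIZE	0x1000
	#define SPUFS_SIGNAL_MAP_SIZE	PAGE_SIZE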
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Luke Browning [Fri, 6 Jun 2008 03:26:54 +0000 (11:26 +0800)]
powerpc/spufs: don't extend time slice if context is not in spu_run
An spu context shouldn't get an extra tick if the time slice code
couldn't find something else to run. This means contexts that are not
within spu_run (ie, SPU_SCHED_SPU_RUN is cleared) will not receive
extra ticks while we have no other contexts waiting.
Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Luke Browning [Mon, 16 Jun 2008 01:36:43 +0000 (11:36 +1000)]
powerpc/spufs: provide context debug file
Add a .ctx file to spufs that shows spu context information that is used
in scheduling. This info can be used for debugging spufs scheduler
issues, and to distinguish between application and spufs problems, as it
shows a lot of state such as priorities and dispatch counts.
This file contains internal spufs state and is subject to change at any
time, and therefore no applications should depend on it. The file is
intended for the use of spufs kernel developers.
Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Nathan Fontenot [Thu, 3 Jul 2008 03:25:08 +0000 (13:25 +1000)]
powerpc/pseries: Update numa association of hotplug memory add for drconf memory
Update the association of a memory section with a numa node that
occurs during hotplug add of a memory section. This adds a check in
the hot_add_scn_to_nid() routine for the
ibm,dynamic-reconfiguration-memory node in the device tree. If
present, the new hot_add_drconf_scn_to_nid() routine is invoked, which
can properly parse the ibm,dynamic-reconfiguration-memory node of the
device tree and make the proper numa node associations.
This also introduces the valid_hot_add_scn() routine as a helper
function for code that is common to the hot_add_scn_to_nid() and
hot_add_drconf_scn_to_nid() routines.
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Nathan Fontenot [Thu, 3 Jul 2008 03:35:54 +0000 (13:35 +1000)]
powerpc/pseries: Split code into helper routines for drconf memory
This splits off several pieces of code that parse the
ibm,dynamic-reconfiguration-memory node of the device tree into separate
helper routines. This is in preparation for the next commit that will
use these helper routines. There are no functional changes in this patch.
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Nathan Fontenot [Thu, 3 Jul 2008 03:22:39 +0000 (13:22 +1000)]
powerpc/pseries: Update the device tree correctly for drconf memory add/remove
This updates the device tree manipulation routines so that memory
add/remove of lmbs represented under the
ibm,dynamic-reconfiguration-memory node of the device tree invokes the
hotplug notifier chain.
This change is needed because of the change in the way memory is
represented under the ibm,dynamic-reconfiguration-memory node. All lmbs
are described in the ibm,dynamic-memory property instead of having a
separate node for each lmb as in previous device tree layouts. This
requires the update_node() routine to check for updates to the
ibm,dynamic-memory property and invoke the hotplug notifier chain.
This also updates the pseries hotplug notifier to be able to gather information
for lmbs represented under the ibm,dynamic-reconfiguration-memory node and
have the lmbs added/removed.
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Nathan Fontenot [Thu, 3 Jul 2008 03:20:58 +0000 (13:20 +1000)]
powerpc/pseries: Use base address to derive starting page frame number
Use the base address of the lmb to derive the starting page frame number
instead of trying to extract it from the drc index of the lmb. The drc
index should not be used for this as it will, and did, break.
Until this point, on systems that have had memory represented in the device
tree with a node for each lmb, the drc index would (luckily) closely
track the base address of the lmb. For example, an lmb with a drc index
of 8000000a would have a base address of a0000000. This correlation
allowed the current code to derive the starting page frame number from
the drc index.
Device tree layouts where lmbs are represented under the
ibm,dynamic-reconfiguration-memory node in the ibm,dynamic-memory
property do not have this correlation between the drc index and base
address of the lmb.
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Nathan Fontenot [Thu, 3 Jul 2008 03:19:24 +0000 (13:19 +1000)]
powerpc/pseries: Allow phandle to be specified in formats other than decimal
Allow the phandle passed to the /proc/ppc64/ofdt file to be specified
in formats other than decimal. This allows us to easily specify phandle
values in hex that would otherwise appear as negative integers.
This is an issue on systems where the value of
/proc/device-tree/ibm,dynamic-reconfiguration-memory.ibm,phandle is
fffffff9. Having to pass this to the ofdt file as a string results in
a large negative number, and simple_strtoul() does not handle negative
numbers.
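The essence of the change is to let simple_strtoul() auto-detect the base (a sketch):
	/* was: phandle = simple_strtoul(buf, NULL, 10);
	 * base 0 accepts decimal, 0x-prefixed hex and 0-prefixed octal */
	phandle = simple_strtoul(buf, NULL, 0);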
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 2 Jul 2008 12:51:37 +0000 (22:51 +1000)]
powerpc: Remove old dump_task_* functions
Since Roland's ptrace cleanup starting with commit
f65255e8d51ecbc6c9eef20d39e0377d19b658ca ("[POWERPC] Use user_regset
accessors for FP regs"), the dump_task_* functions are no longer being
used.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 2 Jul 2008 04:06:37 +0000 (14:06 +1000)]
powerpc: Clean up copy_to/from_user for vsx and fpr
This merges and cleans up some of the ugly copy to/from user code
which is required for the new fpr and vsx layout in the thread_struct.
It also fixes some hard-coded buffer sizes and removes a redundant
fpr_flush_to_thread.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Kumar Gala [Tue, 1 Jul 2008 15:16:40 +0000 (01:16 +1000)]
powerpc: Fixup lwsync at runtime
To allow for a single kernel image on e500 v1/v2/mc we need to fixup lwsync
at runtime. On e500v1/v2, lwsync causes an illegal instruction exception,
so we need to patch up the code. We default to 'sync' since that is always
safe, and if the CPU is capable we replace 'sync' with 'lwsync'.
We introduce CPU_FTR_LWSYNC as a way to determine at runtime if this is
needed. This flag could be moved elsewhere since we don't really use it
for the normal CPU_FTR purpose.
Finally we only store the relative offset in the fixup section to keep it
as small as possible rather than using a full fixup_entry.
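A sketch of the runtime patching loop (helper and constant names are illustrative; the real fixup table stores relative offsets as described above):
	void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
	{
		long *fcur, *start = fixup_start, *end = fixup_end;

		if (!(value & CPU_FTR_LWSYNC))
			return;			/* leave the safe 'sync' in place */

		for (fcur = start; fcur < end; fcur++) {
			/* each entry is an offset relative to its own location */
			unsigned int *insn = (unsigned int *)((long)fcur + *fcur);
			patch_instruction(insn, 0x7c2004ac);	/* lwsync */
		}
	}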
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Kumar Gala [Tue, 1 Jul 2008 15:16:16 +0000 (01:16 +1000)]
powerpc: Fix building of feature-fixup tests on ppc32
We need to use PPC_LCMPI otherwise we get compile errors like:
arch/powerpc/lib/feature-fixups-test.S: Assembler messages:
arch/powerpc/lib/feature-fixups-test.S:142: Error: Unrecognized opcode: `cmpdi'
arch/powerpc/lib/feature-fixups-test.S:149: Error: Unrecognized opcode: `cmpdi'
arch/powerpc/lib/feature-fixups-test.S:164: Error: Unrecognized opcode: `cmpdi'
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Tue, 1 Jul 2008 07:00:39 +0000 (17:00 +1000)]
powerpc: Fix compile warning in init_thread
Currently we get this warning:
arch/powerpc/kernel/init_task.c:33: warning: missing braces around initializer
arch/powerpc/kernel/init_task.c:33: warning: (near initialization for 'init_task.thread.fpr[0]')
This fixes it.
Noticed by Stephen Rothwell.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Tony Breeds [Tue, 1 Jul 2008 01:30:06 +0000 (11:30 +1000)]
powerpc: Fix building of arch/powerpc/mm/mem.o when MEMORY_HOTPLUG=y and SPARSEMEM=n
Currently the kernel fails to build with the above config options with:
CC arch/powerpc/mm/mem.o
arch/powerpc/mm/mem.c: In function 'arch_add_memory':
arch/powerpc/mm/mem.c:130: error: implicit declaration of function 'create_section_mapping'
This explicitly includes asm/sparsemem.h in arch/powerpc/mm/mem.c and
moves the guards in include/asm-powerpc/sparsemem.h to protect the
SPARSEMEM specific portions only.
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Tue, 1 Jul 2008 04:01:39 +0000 (14:01 +1000)]
powerpc: Update for VSX core file and ptrace
This correctly hooks the VSX dump into Roland McGrath's core file
infrastructure. It adds the VSX dump information as an additional elf
note in the core file (after talking more to the tool chain/gdb guys).
This also ensures the formats are consistent between signals, ptrace
and core files.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Tue, 1 Jul 2008 04:01:39 +0000 (14:01 +1000)]
powerpc: Fix compile error for CONFIG_VSX
Fix compile error when CONFIG_VSX is enabled.
arch/powerpc/kernel/signal_64.c: In function 'restore_sigcontext':
arch/powerpc/kernel/signal_64.c:241: error: 'i' undeclared (first use in this function)
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Eric B Munson [Mon, 30 Jun 2008 16:12:13 +0000 (02:12 +1000)]
powerpc: Keep 3 high personality bytes across exec
Currently, when a 32-bit process is exec'd on a 64-bit powerpc host, the
value in the top three bytes of the personality is clobbered. This patch
adds a check in the SET_PERSONALITY macro that will carry all the
values in the top three bytes across the exec.
These three bytes currently carry flags to disable address randomisation,
limit the address space, force zeroing of an mmapped page, etc. Should an
application set any of these bits, they will be maintained and honoured in
a homogeneous environment but discarded and ignored in a heterogeneous
environment. So if an application requires all mmapped pages to be initialised
to zero and a wrapper is used to set up the personality and exec the target,
these flags will remain set in an all-32-bit or all-64-bit environment, but they
will be lost in the exec on a mixed 32/64-bit environment. Losing these bits
means that the same application would behave differently in different
environments. Tested on a POWER5+ machine with 64bit kernel and a mixed
64/32 bit user space.
Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Bart Van Assche [Sat, 28 Jun 2008 06:51:35 +0000 (16:51 +1000)]
powerpc: Make sure that include/asm-powerpc/spinlock.h does not trigger compilation warnings
When compiling kernel modules for ppc that include <linux/spinlock.h>,
gcc prints a warning message every time it encounters a function
declaration where the inline keyword appears after the return type.
This makes sure that the order of the inline keyword and the return
type is as gcc expects it. Additionally, the __inline__ keyword is
replaced by inline, as checkpatch expects.
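As a contrived example (demo_lock_helper() is a made-up name), the change is of this form:
	/* before: gcc warns that 'inline' is not at the beginning of the declaration */
	static unsigned long __inline__ demo_lock_helper(void);

	/* after */
	static inline unsigned long demo_lock_helper(void);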
Signed-off-by: Bart Van Assche <bart.vanassche@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Stephen Rothwell [Fri, 27 Jun 2008 06:18:27 +0000 (16:18 +1000)]
powerpc: Explicitly copy elements of pt_regs
Gcc 4.3 produced this warning:
arch/powerpc/kernel/signal_64.c: In function 'restore_sigcontext':
arch/powerpc/kernel/signal_64.c:161: warning: array subscript is above array bounds
This is caused by us copying to aliases of elements of the pt_regs
structure. Make those explicit.
This adds one extra __get_user and unrolls a loop.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Bernhard Walle [Thu, 26 Jun 2008 17:02:15 +0000 (03:02 +1000)]
powerpc: Remove experimental status of kdump on 64-bit powerpc
This removes the experimental status of kdump on PPC64. kdump has been
available on PPC64 for more than a year now and has proven to be stable.
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Andy Whitcroft [Thu, 26 Jun 2008 09:55:58 +0000 (19:55 +1000)]
powerpc: Add 64 bit version of huge_ptep_set_wrprotect
The implementation of huge_ptep_set_wrprotect() directly calls
ptep_set_wrprotect() to mark a hugepte write protected. However, this
call is not appropriate on ppc64 kernels as it is a small-page-only
implementation. This can lead to the hash not being flushed correctly
when a mapping is being converted to COW, allowing processes to continue
using the original copy.
Currently huge_ptep_set_wrprotect() unconditionally calls
ptep_set_wrprotect(). This is fine on ppc32 kernels as this call is
generic. On 64 bit this is implemented as:
pte_update(mm, addr, ptep, _PAGE_RW, 0);
On ppc64 this last parameter is the page size and is passed directly on
to hpte_need_flush():
hpte_need_flush(mm, addr, ptep, old, huge);
And this directly affects the page size we pass to flush_hash_page():
flush_hash_page(vaddr, rpte, psize, ssize, 0);
As this changes the way the hash is calculated we will flush the wrong
pages, potentially leaving live hashes to the original page.
Move the definition of huge_ptep_set_wrprotect() to the 32/64 bit specific
headers.
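A sketch of what the 64-bit variant then looks like, with the huge flag passed through (simplified):
	static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
						   unsigned long addr, pte_t *ptep)
	{
		if ((pte_val(*ptep) & _PAGE_RW) == 0)
			return;
		/* the final argument tells hpte_need_flush() this is a huge page */
		pte_update(mm, addr, ptep, _PAGE_RW, 1);
	}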
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Andrew Lewis [Thu, 26 Jun 2008 09:29:05 +0000 (19:29 +1000)]
powerpc: Prevent memory corruption due to cache invalidation of unaligned DMA buffer
On PowerPC processors with non-coherent cache architectures the DMA
subsystem calls invalidate_dcache_range() before performing a DMA read
operation. If the address and length of the DMA buffer are not aligned
to a cache-line boundary this can result in memory outside of the DMA
buffer being invalidated in the cache. If this memory has an
uncommitted store then the data will be lost and a subsequent read of
that address will result in an old value being returned from main memory.
Only when the DMA buffer starts on a cache-line boundary and is an exact
multiple of the cache-line size can invalidate_dcache_range() be called,
otherwise flush_dcache_range() must be called. flush_dcache_range()
will first flush uncommitted writes, and then invalidate the cache.
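A sketch of the check described above (vaddr and size stand for the buffer being mapped; the exact call site is the non-coherent DMA mapping path):
	unsigned long start = (unsigned long)vaddr;
	unsigned long end = start + size;

	if ((start | end) & (L1_CACHE_BYTES - 1))
		/* unaligned buffer: flush pending stores, then invalidate */
		flush_dcache_range(start, end);
	else
		invalidate_dcache_range(start, end);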
Signed-off-by: Andrew Lewis <andrew-lewis at netspace.net.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Kumar Gala [Thu, 26 Jun 2008 08:58:11 +0000 (18:58 +1000)]
powerpc/bootwrapper: Pad .dtb by default
Since most bootloaders or wrappers tend to update or add some information
to the .dtb they are handed, they need some working space to do that in.
By default add 1K of padding via a default setting of DTS_FLAGS.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 25 Jun 2008 04:07:18 +0000 (14:07 +1000)]
powerpc: Add CONFIG_VSX config option
Add the CONFIG_VSX config build option. It must be compiled with POWER4, FPU and ALTIVEC.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 25 Jun 2008 04:07:18 +0000 (14:07 +1000)]
powerpc: Add VSX context save/restore, ptrace and signal support
This patch extends the floating point save and restore code to use the
VSX load/stores when VSX is available. This will make FP context
save/restore marginally slower on FP only code, when VSX is available,
as it has to load/store 128bits rather than just 64bits.
Mixing FP, VMX and VSX code will get constant architected state.
The signals interface is extended to enable access to VSR 0-31
doubleword 1 after discussions with tool chain maintainers. Backward
compatibility is maintained.
The ptrace interface is also extended to allow access to VSR 0-31 full
registers.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 25 Jun 2008 04:07:18 +0000 (14:07 +1000)]
powerpc: Add VSX assembler code macros
This adds the macros for the VSX load/store instructions as most
binutils are not going to support this for a while.
Also add VSX register save/restore macros and vsr[0-63] register definitions.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 25 Jun 2008 04:07:18 +0000 (14:07 +1000)]
powerpc: Add VSX CPU feature
Add a VSX CPU feature. Also add code to detect if VSX is available
from the device tree.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 25 Jun 2008 04:07:18 +0000 (14:07 +1000)]
powerpc: Introduce VSX thread_struct and CONFIG_VSX
The layout of the new VSR registers and how they overlap on top of the
legacy FPR and VR registers is:
VSR doubleword 0 VSR doubleword 1
----------------------------------------------------------------
VSR[0] | FPR[0] | |
----------------------------------------------------------------
VSR[1] | FPR[1] | |
----------------------------------------------------------------
| ... | |
| ... | |
----------------------------------------------------------------
VSR[30] | FPR[30] | |
----------------------------------------------------------------
VSR[31] | FPR[31] | |
----------------------------------------------------------------
VSR[32] | VR[0] |
----------------------------------------------------------------
VSR[33] | VR[1] |
----------------------------------------------------------------
| ... |
| ... |
----------------------------------------------------------------
VSR[62] | VR[30] |
----------------------------------------------------------------
VSR[63] | VR[31] |
----------------------------------------------------------------
VSX has 64 128bit registers. The first 32 regs overlap with the FP
registers and hence extend them with an additional 64 bits. The
second 32 regs overlap with the VMX registers.
This commit introduces the thread_struct changes required to reflect
this register layout. Ptrace and signals code is updated so that the
floating point registers are correctly accessed from the thread_struct
when CONFIG_VSX is enabled.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 25 Jun 2008 04:07:18 +0000 (14:07 +1000)]
powerpc: Make load_up_fpu and load_up_altivec callable
Make load_up_fpu and load_up_altivec callable so they can be reused by
the VSX code.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 25 Jun 2008 04:07:18 +0000 (14:07 +1000)]
powerpc: Move altivec_unavailable
Move the altivec_unavailable code to make room at 0xf40, where the
vsx_unavailable exception will be.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Thu, 26 Jun 2008 07:07:48 +0000 (17:07 +1000)]
powerpc: Add macros to access floating point registers in thread_struct.
We are going to change where the floating point registers are stored
in the thread_struct, so in preparation add some macros to access the
floating point registers. Update all code to use these new macros.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Neuling [Wed, 25 Jun 2008 04:07:17 +0000 (14:07 +1000)]
powerpc: Fix MSR setting in 32 bit signal code
If we set the SPE MSR bit in save_user_regs we can blow away the VEC
bit. This doesn't matter in reality as they are in fact the same bit
but it looks bad.
Also, when we add VSX in a later patch, we need to be able to set two
separate MSR bits here.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Tony Breeds [Tue, 24 Jun 2008 04:20:29 +0000 (14:20 +1000)]
powerpc: Change the default link address for pSeries zImage kernels
Currently we set the start of the .text section to be 4MB for pSeries.
In situations where the zImage is > 8MB we'll fail to boot (due to
overlapping with OF). Move .text in a zImage from 4MB to 64MB
(well past OF).
We still will not be able to load a large zImage unless we also move OF;
to that end, add a note to the zImage ELF to move OF to 32MB. If this
is the very first kernel booted then we'll need to move OF manually by
setting real-base.
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:33:05 +0000 (11:33 +1000)]
powerpc: Use an alternative feature section in entry_64.S
Use an alternative feature section in _switch. There are three cases
handled here, either we don't have an SLB, in which case we jump over the
entire code section, or if we do we either do or don't have 1TB segments.
Boot tested on Power3, Power5 and Power5+.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:33:03 +0000 (11:33 +1000)]
powerpc: Add self-tests of the feature fixup code
This commit adds tests of the feature fixup code, they are run during
boot if CONFIG_FTR_FIXUP_SELFTEST=y. Some of the tests manually invoke
the patching routines to check their behaviour, and others use the
macros and so are patched during the normal patching done during boot.
Because we have two sets of macros with different names, we use a macro
to generate the test of the macros, very niiiice.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:33:02 +0000 (11:33 +1000)]
powerpc: Add logic to patch alternative feature sections
This commit adds the logic to patch alternative sections. This is fairly
straightforward, except for branches. Relative branches that jump from
inside the else section to outside of it need to be translated as they're
moved, otherwise they will jump to the wrong location.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:54 +0000 (11:32 +1000)]
powerpc: Introduce infrastructure for feature sections with alternatives
The current feature section logic only supports nop'ing out code, this means
if you want to choose at runtime between instruction sequences, one or both
cases will have to execute the nop'ed out contents of the other section, eg:
BEGIN_FTR_SECTION
or 1,1,1
END_FTR_SECTION_IFSET(FOO)
BEGIN_FTR_SECTION
or 2,2,2
END_FTR_SECTION_IFCLR(FOO)
and the resulting code will be either,
or 1,1,1
nop
or,
nop
or 2,2,2
For small code segments this is fine, but for larger code blocks and in
performance critical code segments, it would be nice to avoid the nops.
This commit starts to implement logic to allow the following:
BEGIN_FTR_SECTION
or 1,1,1
FTR_SECTION_ELSE
or 2,2,2
ALT_FTR_SECTION_END_IFSET(FOO)
and the resulting code will be:
or 1,1,1
or,
or 2,2,2
We achieve this by extending the existing FTR macros. The current feature
section semantic just becomes a special case, ie. if the else case is empty
we nop out the default case.
The key limitation is that the size of the else case must be less than or
equal to the size of the default case. If the else case is smaller the
remainder of the section is nop'ed.
We let the linker put the else case code in with the rest of the text,
so that relative branches from the else case are more likely to link;
this has the disadvantage that we can't free the unused else cases.
This commit introduces the required macro and linker script changes, but
does not enable the patching of the alternative sections.
We also need to update two hand-made section entries in reg.h and timex.h.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:48 +0000 (11:32 +1000)]
powerpc: Consolidate feature fixup macros for 64/32 bit
Currently we have three versions of MAKE_FTR_SECTION_ENTRY(), the macro that
generates a feature section entry. There is a 64-bit version, a 32-bit version
and a version for 32-bit code built with a 64-bit kernel.
Rather than triplicating (?) the MAKE_FTR_SECTION_ENTRY() logic, we can
move the 64bit/32bit differences into separate macros, and then only have
one version of MAKE_FTR_SECTION_ENTRY().
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:39 +0000 (11:32 +1000)]
powerpc: Consolidate CPU and firmware feature fixup macros
The CPU and firmware feature fixup macros are currently spread across
three files, firmware.h, cputable.h and asm-compat.h. Consolidate them
into their own file, feature-fixups.h.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:36 +0000 (11:32 +1000)]
powerpc: Split out do_feature_fixups() from cputable.c
The logic to patch CPU feature sections lives in cputable.c, but these days
it's used for CPU features as well as firmware features. Move it into
its own file for neatness and as preparation for some additions.
While we're moving the code, we pull the loop body logic into a separate
routine, and remove a comment which doesn't apply anymore.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:35 +0000 (11:32 +1000)]
powerpc: Add PPC_NOP_INSTR, a hash define for the preferred nop instruction
A bunch of code has hard-coded the value for a "nop" instruction; it
would be nice to have a #define for it.
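The preferred nop on powerpc is 'ori 0,0,0', so the define is simply:
	#define PPC_NOP_INSTR	0x60000000	/* ori 0,0,0 */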
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:32 +0000 (11:32 +1000)]
powerpc: Add tests of the code patching routines
Add tests of the existing code patching routines, as well as the new
routines added in the last commit. The self-tests are run late in boot
when CONFIG_CODE_PATCHING_SELFTEST=y, which depends on DEBUG_KERNEL=y.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:29 +0000 (11:32 +1000)]
powerpc: Add new code patching routines
This commit adds some new routines for patching code, which will be used
in a following commit.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:28 +0000 (11:32 +1000)]
powerpc: Add ppc_function_entry() which gets the entry point for a function
Because function pointers point to different things on 32-bit vs 64-bit,
add a macro that deals with dereferencing the OPD on 64-bit. The
soon-to-be-merged ftrace wants this, as well as other code I am working on.
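A sketch of the helper (on 64-bit a function pointer points at an OPD entry whose first word is the real entry point):
	static inline unsigned long ppc_function_entry(void *func)
	{
	#ifdef CONFIG_PPC64
		/* dereference the function descriptor to get the text address */
		return ((unsigned long *)func)[0];
	#else
		return (unsigned long)func;
	#endif
	}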
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:24 +0000 (11:32 +1000)]
powerpc: Make create_branch() return errors if the branch target is too large
If you pass a target value to create_branch() which is more than 32MB - 4,
or -32MB, away from the branch site, then it's impossible to create an
immediate branch. The current code doesn't check, which will lead to us
creating a branch to somewhere else - which is bad.
For code that cares to check we return 0, which is easy to check for, and
for code that doesn't at least we'll be creating an illegal instruction,
rather than a branch to some random address.
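A sketch of the resulting check, assuming the usual I-form branch encoding (26-bit signed, word-aligned offset):
	unsigned int create_branch(const unsigned int *addr,
				   unsigned long target, int flags)
	{
		long offset = target;

		if (!(flags & BRANCH_ABSOLUTE))
			offset -= (unsigned long)addr;

		/* out of range (or misaligned): return 0, easy to check and
		 * also an illegal instruction if it ever gets patched in */
		if (offset < -0x2000000 || offset > 0x1FFFFFC || (offset & 0x3))
			return 0;

		/* 'b' opcode, preserving the AA and LK bits from flags */
		return 0x48000000 | (flags & 0x3) | (offset & 0x03FFFFFC);
	}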
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:22 +0000 (11:32 +1000)]
powerpc: Allow create_branch() to return errors
Currently create_branch() creates a branch instruction for you, and
patches it into the call site. In some circumstances it would be nice
to be able to create the instruction and patch it later, and also some
code might want to check for errors in the branch creation before
doing the patching. A future commit will change create_branch() to
check for errors.
For callers that don't care, replace create_branch() with
patch_branch(), which just creates the branch and patches it directly.
While we're touching all the callers, change to using unsigned int *,
as this seems to match usage better. That allows (and requires) us to
remove the volatile in the definition of vector in powermac/smp.c and
mpc86xx_smp.c; that's correct because now that we're passing vector as
an unsigned int * the compiler knows that its value might change
across the patch_branch() call.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Jon Loeliger <jdl@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Michael Ellerman [Tue, 24 Jun 2008 01:32:21 +0000 (11:32 +1000)]
powerpc: Move code patching code into arch/powerpc/lib/code-patching.c
We currently have a few routines for patching code in asm/system.h, because
they didn't fit anywhere else. I'd like to clean them up a little and add
some more, so first move them into a dedicated C file - they don't need to
be inlined.
While we're moving the code, drop create_function_call(); its intended
caller never got merged and will be replaced in future with something
different.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Julia Lawall [Mon, 23 Jun 2008 18:14:52 +0000 (04:14 +1000)]
drivers/macintosh/smu.c: Improve error handling
This makes two changes:
* As noted by Akinobu Mita in patch
b1fceac2b9e04d278316b2faddf276015fc06e3b, alloc_bootmem never returns NULL
and always returns a zeroed region of memory. Thus the error checking code
and memset after the call to alloc_bootmem are not necessary.
* The old error handling code consisted of setting a global variable to
NULL and returning an error code, which could cause previously allocated
resources never to be freed. The patch adds calls to appropriate resource
deallocation functions.
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Adrian Bunk [Mon, 23 Jun 2008 17:48:28 +0000 (03:48 +1000)]
powerpc: asm/elf.h: Reduce userspace header
This makes asm/elf.h export less non-userspace stuff to userspace.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Adrian Bunk [Mon, 23 Jun 2008 17:48:21 +0000 (03:48 +1000)]
powerpc: Don't export asm/asm-compat.h to userspace
asm/asm-compat.h doesn't seem to be intended for userspace usage.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Adrian Bunk [Mon, 23 Jun 2008 17:46:57 +0000 (03:46 +1000)]
drivers/macintosh: Various cleanups
This contains the following cleanups:
- make the following needlessly global code static:
- adb.c: adb_controller
- adb.c: adb_init()
- adbhid.c: adb_to_linux_keycodes[] (also make it const)
- via-pmu68k.c: backlight_level
- via-pmu68k.c: backlight_enabled
- remove the following unused code:
- via-pmu68k.c: sleep_notifier_list
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Kumar Gala [Fri, 20 Jun 2008 16:31:01 +0000 (02:31 +1000)]
powerpc: Move common module code into its own file
Refactor common code between ppc32 and ppc64 module handling into a
shared file.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Dave Kleikamp [Wed, 18 Jun 2008 22:32:56 +0000 (08:32 +1000)]
powerpc: hash_huge_page() should get the WIMG bits from the lpte
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Joel Schopp [Wed, 18 Jun 2008 20:23:23 +0000 (06:23 +1000)]
powerpc: Tell firmware we support architecture V2.06
Add the bits to the architecture-vec so that ibm,client-architecture
lets the firmware know we support the 2.06 architecture.
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Joel Schopp [Wed, 18 Jun 2008 20:18:21 +0000 (06:18 +1000)]
powerpc: Add cputable entry for Power7 architected mode
Add an entry for Power7 architected mode and add "(raw)" to Power7 raw
mode to distinguish it more clearly.
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Paul Mackerras [Wed, 18 Jun 2008 05:29:12 +0000 (15:29 +1000)]
powerpc: Only demote individual slices rather than whole process
At present, if we have a kernel with a 64kB page size, and some
process maps something that has to be mapped with 4kB pages (such as a
cache-inhibited mapping on POWER5+, or the eHCA infiniband queue-pair
pages), we change the process to use 4kB pages everywhere. This hurts
the performance of HPC programs that access eHCA from userspace.
With this patch, the kernel will only demote the slice(s) containing
the eHCA or cache-inhibited mappings, leaving the remaining slices
able to use 64kB hardware pages.
This also changes the slice_get_unmapped_area code so that it is
willing to place a 64k-page mapping into (or across) a 4k-page slice
if there is no better alternative, i.e. if the program specified
MAP_FIXED or if there is not sufficient space available in slices that
are either empty or already have 64k-page mappings in them.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Michael Neuling [Wed, 18 Jun 2008 00:47:26 +0000 (10:47 +1000)]
powerpc: Add cputable entry for POWER7
Add a cputable entry for the POWER7 processor.
Also tell firmware that we know about POWER7.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Scott Wood [Tue, 17 Jun 2008 16:59:59 +0000 (02:59 +1000)]
powerpc: Fix copy-and-paste error in clrsetbits_le16
This was pointed out by Detlev Zundel when this code was being
added to U-boot.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Becky Bruce [Fri, 13 Jun 2008 23:41:43 +0000 (09:41 +1000)]
powerpc: Get rid of bitfields in ppc_bat struct
While working on the 36-bit physical support, I noticed that there
was exactly one line of code that actually referenced the bitfields.
So I got rid of them and redefined ppc_bat as a struct of 2 u32's:
batu and batl. I also got rid of the previous union that held the
bitfield structs and a word representation of the batu/l values.
This seems like a nicer solution than adding in a bunch of
new bitfields to support extended bat addressing that would never
get used, and just leaving the struct as-is would have been
incomplete in the face of large physical addressing.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Becky Bruce [Fri, 13 Jun 2008 23:41:42 +0000 (09:41 +1000)]
powerpc: Change BAT code to use phys_addr_t
Currently, the physical address is an unsigned long, but it should
be phys_addr_t in set_bat, [v/p]_mapped_by_bat. Also, create a
macro that can convert a large physical address into the correct
format for programming the BAT registers.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Becky Bruce [Fri, 13 Jun 2008 23:12:44 +0000 (09:12 +1000)]
powerpc: Silly spelling fix in pgtable-ppc32
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Arnd Bergmann [Thu, 12 Jun 2008 09:34:39 +0000 (19:34 +1000)]
powerpc: Increase CRASH_HANDLER_MAX
There are now two potential callers of machine_crash_shutdown,
so increase the limit accordingly.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Arnd Bergmann [Thu, 12 Jun 2008 09:14:36 +0000 (19:14 +1000)]
powerpc/cell: Disable ptcal in case of crash kdump
We need to disable ptcal before starting a new kernel after a crash,
in order to avoid overwriting data in the kdump kernel.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Arnd Bergmann [Thu, 12 Jun 2008 09:14:35 +0000 (19:14 +1000)]
powerpc/pseries: Call pseries_kexec_setup only on pseries
The pseries_kexec_setup function overwrites some ppc_md
pointers, so make sure it only gets called when running on
the right architecture.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Arnd Bergmann [Thu, 12 Jun 2008 09:14:34 +0000 (19:14 +1000)]
powerpc: Provide dummy crash_shutdown_register
When kexec is disabled, the crash_shutdown_{un,}register
functions are not available in the kernel.
This provides dummy inline functions for those so that
the callers don't have to worry about it.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Benjamin Herrenschmidt [Wed, 11 Jun 2008 05:37:10 +0000 (15:37 +1000)]
powerpc: Free a PTE bit on ppc64 with 64K pages
This frees a PTE bit when using 64K pages on ppc64. This is done
by getting rid of the separate _PAGE_HASHPTE bit. Instead, we just test
if any of the 16 sub-page bits is set. For non-combo pages (ie. real
64K pages), we set SUB0 and the location encoding in that field.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Anton Vorontsov [Tue, 10 Jun 2008 22:31:00 +0000 (08:31 +1000)]
powerpc: Implement OF PCI address accessors stubs for CONFIG_PCI=n
To avoid "#ifdef CONFIG_PCI" in the drivers we should provide stubs in
place of OF PCI address accessors.
Without these stubs the build breaks for drivers not strictly requiring PCI,
for example CONFIG_FB_OF=y without CONFIG_PCI:
LD .tmp_vmlinux1
drivers/built-in.o: In function `offb_map_reg':
offb.c:(.text+0x6e7c): undefined reference to `of_get_pci_address'
OF PCI IRQ accessors require pci_dev argument, so drivers using PCI
IRQs should depend on CONFIG_PCI anyway.
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Daniel Walker [Mon, 9 Jun 2008 23:26:09 +0000 (09:26 +1000)]
macintosh/media bay: Convert semaphore to mutex
Signed-off-by: Daniel Walker <dwalker@mvista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Daniel Walker [Mon, 9 Jun 2008 23:26:08 +0000 (09:26 +1000)]
macintosh/therm_windtunnel: Convert semaphore to mutex
Signed-off-by: Daniel Walker <dwalker@mvista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Nick Piggin [Mon, 9 Jun 2008 23:26:08 +0000 (09:26 +1000)]
spufs: Convert nopfn to fault
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Segher Boessenkool [Sat, 7 Jun 2008 23:42:52 +0000 (09:42 +1000)]
powerpc: Get rid of CROSS32{AS,LD,OBJCOPY}
CROSS32AS and CROSS32LD are never used (instead, CROSS32CC is used
with proper command line options).
CROSS32OBJCOPY isn't used anymore either, since the "wrapper" stuff
was added.
Remove these unused variables.
Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Stephen Rothwell [Fri, 23 May 2008 06:31:08 +0000 (16:31 +1000)]
pcmcia: Use linux/of_{device,platform}.h instead of asm
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Vitaly Bordug <vitb@kernel.crashing.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Stephen Rothwell [Fri, 23 May 2008 06:28:54 +0000 (16:28 +1000)]
drivers/net: Use linux/of_{device,platform}.h instead of asm
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Stephen Rothwell [Fri, 23 May 2008 06:27:02 +0000 (16:27 +1000)]
macintosh: Use linux/of_{device,platform}.h instead of asm
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Stephen Rothwell [Fri, 23 May 2008 06:25:50 +0000 (16:25 +1000)]
hwmon: Use linux/of_platform.h instead of asm
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Stelian Pop <stelian@popies.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Stephen Rothwell [Fri, 23 May 2008 06:20:05 +0000 (16:20 +1000)]
pasemi-rng: Use linux/of_platform.h instead of asm
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Stephen Rothwell [Fri, 23 May 2008 05:12:23 +0000 (15:12 +1000)]
viotape: Use unlocked_ioctl
This pushes the BKL down into the driver. Based on a patch by Alan Cox.
We need to do it this way for now as the inode parameter of viotap_ioctl
is used internally as a flag. We should do a further cleanup patch.
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Nick Piggin [Wed, 21 May 2008 14:12:31 +0000 (00:12 +1000)]
powerpc: Optimise smp_wmb on 64-bit processors
For 64-bit processors, lwsync is the recommended method of store/store
ordering on caching-enabled memory. For those subarchs which have
lwsync, use it rather than eieio for smp_wmb.
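On a 64-bit subarch with lwsync, smp_wmb() then boils down to (a sketch, ignoring how the mnemonic is spelled for older assemblers):
	#define smp_wmb()	__asm__ __volatile__ ("lwsync" : : : "memory")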
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Paul Mackerras [Mon, 30 Jun 2008 00:16:50 +0000 (10:16 +1000)]
Merge branch 'linux-2.6'