firefly-linux-kernel-4.4.55.git
19 years ago[PATCH] mm: ptd_alloc inline and out
Hugh Dickins [Sun, 30 Oct 2005 01:16:22 +0000 (18:16 -0700)]
[PATCH] mm: ptd_alloc inline and out

It seems odd to me that, whereas pud_alloc and pmd_alloc test inline, only
calling out-of-line __pud_alloc __pmd_alloc if allocation needed,
pte_alloc_map and pte_alloc_kernel are entirely out-of-line.  Though it does
add a little to kernel size, change them to macros testing inline, calling
__pte_alloc or __pte_alloc_kernel to allocate out-of-line.  Mark none of them
as fastcalls, leave that to CONFIG_REGPARM or not.

It also seems more natural for the out-of-line functions to leave the offset
calculation and map to the inline, which has to do it anyway for the common
case.  At least mremap move wants __pte_alloc without _map.

Macros rather than inline functions, certainly to avoid the header file issues
which arise from CONFIG_HIGHPTE needing kmap_types.h, but also in case any
architectures I haven't built would have other such problems.
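
For illustration, such a macro takes roughly this shape (a sketch, not necessarily the exact mainline text; pmd_present and pte_offset_map are the usual helpers assumed here): the inline test only calls out of line when the pmd is still empty, and keeps the offset calculation and map for itself.

    #define pte_alloc_map(mm, pmd, address)                         \
            ((unlikely(!pmd_present(*(pmd))) &&                     \
              __pte_alloc(mm, pmd, address)) ?                      \
                    NULL : pte_offset_map(pmd, address))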

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: init_mm without ptlock
Hugh Dickins [Sun, 30 Oct 2005 01:16:21 +0000 (18:16 -0700)]
[PATCH] mm: init_mm without ptlock

First step in pushing down the page_table_lock.  init_mm.page_table_lock has
been used throughout the architectures (usually for ioremap): not to serialize
kernel address space allocation (that's usually vmlist_lock), but because
pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.

Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
and drop it when allocating a new one, to check lest a racing task already
did.  Similarly no page_table_lock in vmalloc's map_vm_area.

Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
user mms, which are converted only by a later patch, for now they have to lock
differently according to whether or not it's init_mm.

If sources get muddled, there's a danger that an arch source taking
init_mm.page_table_lock will be mixed with common source also taking it (or
neither take it).  So break the rules and make another change, which should
break the build for such a mismatch: remove the redundant mm arg from
pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
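
For illustration, the kernel-side allocator ends up looking roughly like this (a sketch assuming the usual pte_alloc_one_kernel / pmd_populate_kernel / pte_free_kernel helpers, not the exact diff of this patch): it takes and drops init_mm.page_table_lock itself, checking whether a racing task already populated the pmd.

    int __pte_alloc_kernel(pmd_t *pmd, unsigned long address)
    {
            pte_t *new = pte_alloc_one_kernel(&init_mm, address);
            if (!new)
                    return -ENOMEM;

            spin_lock(&init_mm.page_table_lock);
            if (pmd_present(*pmd))          /* a racing task already did it */
                    pte_free_kernel(new);
            else
                    pmd_populate_kernel(&init_mm, pmd, new);
            spin_unlock(&init_mm.page_table_lock);
            return 0;
    }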

Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
took page_table_lock for no good reason.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: ia64 use expand_upwards
Hugh Dickins [Sun, 30 Oct 2005 01:16:20 +0000 (18:16 -0700)]
[PATCH] mm: ia64 use expand_upwards

ia64 has expand_backing_store function for growing its Register Backing Store
vma upwards.  But more complete code for this purpose is found in the
CONFIG_STACK_GROWSUP part of mm/mmap.c.  Uglify its #ifdefs further to provide
expand_upwards for ia64 as well as expand_stack for parisc.

The Register Backing Store vma should be marked VM_ACCOUNT.  Implement the
intention of growing it only a page at a time, instead of passing an address
outside of the vma to handle_mm_fault, with unknown consequences.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: mm_struct hiwaters moved
Hugh Dickins [Sun, 30 Oct 2005 01:16:19 +0000 (18:16 -0700)]
[PATCH] mm: mm_struct hiwaters moved

Slight and timid rearrangement of mm_struct: hiwater_rss and hiwater_vm were
tacked on the end, but it seems better to keep them near _file_rss, _anon_rss
and total_vm, in the same cacheline on those arches verified.

There are likely to be more profitable rearrangements, but less obvious (is it
good or bad that saved_auxv[AT_VECTOR_SIZE] isolates cpu_vm_mask and context
from many others?), needing serious instrumentation.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: update_hiwaters just in time
Hugh Dickins [Sun, 30 Oct 2005 01:16:18 +0000 (18:16 -0700)]
[PATCH] mm: update_hiwaters just in time

update_mem_hiwater has attracted various criticisms, in particular from those
concerned with mm scalability.  Originally it was called whenever rss or
total_vm got raised.  Then many of those callsites were replaced by a timer
tick call from account_system_time.  Now Frank van Maarseveen reports that to
be found inadequate.  How about this?  Works for Frank.

Replace update_mem_hiwater, a poor combination of two unrelated ops, by macros
update_hiwater_rss and update_hiwater_vm.  Don't attempt to keep
mm->hiwater_rss up to date at timer tick, nor every time we raise rss (usually
by 1): those are hot paths.  Do the opposite, update only when about to lower
rss (usually by many), or just before final accounting in do_exit.  Handle
mm->hiwater_vm in the same way, though it's much less of an issue.  Demand
that whoever collects these hiwater statistics do the work of taking the
maximum with rss or total_vm.
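
For illustration, the two macros amount to roughly this (a sketch, assuming a get_mm_rss helper for the rss total):

    #define update_hiwater_rss(mm)  do {                    \
            unsigned long _rss = get_mm_rss(mm);            \
            if ((mm)->hiwater_rss < _rss)                   \
                    (mm)->hiwater_rss = _rss;               \
    } while (0)

    #define update_hiwater_vm(mm)   do {                    \
            if ((mm)->hiwater_vm < (mm)->total_vm)          \
                    (mm)->hiwater_vm = (mm)->total_vm;      \
    } while (0)

A collector then takes the maximum itself, e.g. max(hiwater_rss, get_mm_rss(mm)) and max(hiwater_vm, total_vm), as the VmHWM and VmPeak lines described below do.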

And there has been no collector of these hiwater statistics in the tree.  The
new convention needs an example, so match Frank's usage by adding a VmPeak
line above VmSize to /proc/<pid>/status, and also a VmHWM line above VmRSS
(High-Water-Mark or High-Water-Memory).

There was a particular anomaly during mremap move, that hiwater_vm might be
captured too high.  A fleeting such anomaly remains, but it's quickly
corrected now, whereas before it would stick.

What locking?  None: if the app is racy then these statistics will be racy,
it's not worth any overhead to make them exact.  But whenever it suits,
hiwater_vm is updated under exclusive mmap_sem, and hiwater_rss under
page_table_lock (for now) or with preemption disabled (later on): without
going to any trouble, minimize the time between reading current values and
updating, to minimize those occasions when a racing thread bumps a count up
and back down in between.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: zap_pte out of line
Hugh Dickins [Sun, 30 Oct 2005 01:16:17 +0000 (18:16 -0700)]
[PATCH] mm: zap_pte out of line

There used to be just one call to zap_pte, but it shouldn't be inline now
that there are two.  Check for the common case pte_none before calling, and move
its rss accounting up into install_page or install_file_pte - which helps the
next patch.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: do_mremap current mm
Hugh Dickins [Sun, 30 Oct 2005 01:16:16 +0000 (18:16 -0700)]
[PATCH] mm: do_mremap current mm

Cleanup: relieve do_mremap from its surfeit of current->mms.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: do_swap_page race major
Hugh Dickins [Sun, 30 Oct 2005 01:16:15 +0000 (18:16 -0700)]
[PATCH] mm: do_swap_page race major

Small adjustment: do_swap_page should report its !pte_same race as a major
fault if it had to read into swap cache, because whatever raced with it will
have found the page already in cache and reported a minor fault.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: zap_pte_range dec rss
Hugh Dickins [Sun, 30 Oct 2005 01:16:14 +0000 (18:16 -0700)]
[PATCH] mm: zap_pte_range dec rss

Small adjustment: zap_pte_range now decrements its rss counts from 0, then
finally adds them, avoiding negations - we don't have or need a sub_mm_rss.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: copy_one_pte inc rss
Hugh Dickins [Sun, 30 Oct 2005 01:16:13 +0000 (18:16 -0700)]
[PATCH] mm: copy_one_pte inc rss

Small adjustment, following Nick's suggestion: it's more straightforward for
copy_pte_range to let copy_one_pte do the rss incrementation, than use an
index it passed back.  Saves a #define, and 16 bytes of .text.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] core remove PageReserved
Nick Piggin [Sun, 30 Oct 2005 01:16:12 +0000 (18:16 -0700)]
[PATCH] core remove PageReserved

Remove PageReserved() calls from core code by tightening VM_RESERVED
handling in mm/ to cover PageReserved functionality.

PageReserved special casing is removed from get_page and put_page.

All setting and clearing of PageReserved is retained, and it is now flagged
in the page_alloc checks to help ensure we don't introduce any refcount
based freeing of Reserved pages.

MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
deprecated.  We never completely handled it correctly anyway, and it can be
reintroduced in future if required (Hugh has a proof of concept).

Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
be trivially removed.

Last real user of PageReserved is swsusp, which uses PageReserved to
determine whether a struct page points to valid memory or not.  This still
needs to be addressed (a generic page_is_ram() should work).

A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
thus mapcounted and counts towards shared rss).  These writes to the struct
page could cause excessive cacheline bouncing on big systems.  There are a
number of ways this could be addressed if it is an issue.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Refcount bug fix for filemap_xip.c

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: m68k kill stram swap
Hugh Dickins [Sun, 30 Oct 2005 01:16:10 +0000 (18:16 -0700)]
[PATCH] mm: m68k kill stram swap

Please, please now delete the Atari CONFIG_STRAM_SWAP code.  It may be
excellent and ingenious code, but its reference to swap_vfsmnt betrays that it
hasn't been built since 2.5.1 (four years old come December), it's delving
deep into matters which are the preserve of core mm code, its only purpose is
to give the more conscientious mm guys an anxiety attack from time to time;
yet we keep on breaking it more and more.

If you want to use RAM for swap, then if the MTD driver does not already
provide just what you need, I'm sure David could be persuaded to add the
extra.  But you'd also like to be able to allocate extents of that swap for
other use: we can give you a core interface for that if you need.  But unbuilt
for four years suggests to me that there's no need at all.

I cannot swear the patch below won't break your build, but believe so.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: sh64 hugetlbpage.c
Hugh Dickins [Sun, 30 Oct 2005 01:16:09 +0000 (18:16 -0700)]
[PATCH] mm: sh64 hugetlbpage.c

The sh64 hugetlbpage.c seems to be erroneous, left over from a bygone age,
clashing with the common hugetlb.c.  Replace it by a copy of the sh
hugetlbpage.c.  Except delete the mk_pte_huge macro, which neither uses.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: dup_mmap down new mmap_sem
Hugh Dickins [Sun, 30 Oct 2005 01:16:08 +0000 (18:16 -0700)]
[PATCH] mm: dup_mmap down new mmap_sem

One anomaly remains from when Andrea rationalized the responsibilities of
mmap_sem and page_table_lock: in dup_mmap we add vmas to the child holding its
page_table_lock, but not the mmap_sem which normally guards the vma list and
rbtree.  Which could be an issue for unuse_mm: though since it just walks down
the list (today with page_table_lock, tomorrow not), it's probably okay.  Will
need a memory barrier?  Oh, keep it simple, Nick and I agreed, no harm in
taking child's mmap_sem here.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: dup_mmap use oldmm more
Hugh Dickins [Sun, 30 Oct 2005 01:16:06 +0000 (18:16 -0700)]
[PATCH] mm: dup_mmap use oldmm more

Use the parent's oldmm throughout dup_mmap, instead of perversely going back
to current->mm.  (Can you hear the sigh of relief from those mpnts?  Usually I
squash them, but not today.)

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: batch updating mm_counters
Hugh Dickins [Sun, 30 Oct 2005 01:16:05 +0000 (18:16 -0700)]
[PATCH] mm: batch updating mm_counters

tlb_finish_mmu used to batch zap_pte_range's update of mm rss, which may be
worthwhile if the mm is contended, and would reduce atomic operations if the
counts were atomic.  Let zap_pte_range now batch its updates to file_rss and
anon_rss, per page-table in case we drop the lock outside; and copy_pte_range
batch them too.
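
For illustration, the batching amounts to accumulating into local counts and applying them once per page table through a small helper of roughly this shape (a sketch; the name add_mm_rss and the add_mm_counter macro are assumptions here):

    static void add_mm_rss(struct mm_struct *mm, int file_rss, int anon_rss)
    {
            if (file_rss)
                    add_mm_counter(mm, file_rss, file_rss);
            if (anon_rss)
                    add_mm_counter(mm, anon_rss, anon_rss);
    }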

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: rss = file_rss + anon_rss
Hugh Dickins [Sun, 30 Oct 2005 01:16:05 +0000 (18:16 -0700)]
[PATCH] mm: rss = file_rss + anon_rss

I was lazy when we added anon_rss, and chose to change as few places as
possible.  So currently each anonymous page has to be counted twice, in rss
and in anon_rss.  Which won't be so good if those are atomic counts in some
configurations.

Change that around: keep file_rss and anon_rss separately, and add them
together (with get_mm_rss macro) when the total is needed - reading two
atomics is much cheaper than updating two atomics.  And update anon_rss
upfront, typically in memory.c, not tucked away in page_add_anon_rmap.
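
For illustration, the total then reads roughly like this (sketch, assuming the get_mm_counter accessor):

    #define get_mm_rss(mm)                                          \
            (get_mm_counter(mm, file_rss) + get_mm_counter(mm, anon_rss))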

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: mm_init set_mm_counters
Hugh Dickins [Sun, 30 Oct 2005 01:16:04 +0000 (18:16 -0700)]
[PATCH] mm: mm_init set_mm_counters

How is anon_rss initialized?  In dup_mmap, and by mm_alloc's memset; but
that's not so good if an mm_counter_t is a special type.  And how is rss
initialized?  By set_mm_counter, all over the place.  Come on, we just need to
initialize them both at once by set_mm_counter in mm_init (which follows the
memcpy when forking).
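
For illustration, mm_init then does roughly this after the memcpy (sketch):

    set_mm_counter(mm, rss, 0);
    set_mm_counter(mm, anon_rss, 0);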

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: tlb_finish_mmu forget rss
Hugh Dickins [Sun, 30 Oct 2005 01:16:03 +0000 (18:16 -0700)]
[PATCH] mm: tlb_finish_mmu forget rss

zap_pte_range has been counting the pages it frees in tlb->freed, then
tlb_finish_mmu has used that to update the mm's rss.  That got stranger when I
added anon_rss, yet updated it by a different route; and stranger when rss and
anon_rss became mm_counters with special access macros.  And it would no
longer be viable if we're relying on page_table_lock to stabilize the
mm_counter, but calling tlb_finish_mmu outside that lock.

Remove the mmu_gather's freed field, let tlb_finish_mmu stick to its own
business, just decrement the rss mm_counter in zap_pte_range (yes, there was
some point to batching the update, and a subsequent patch restores that).  And
forget the anal paranoia of first reading the counter to avoid going negative
- if rss does go negative, just fix that bug.

Remove the mmu_gather's flushes and avoided_flushes from arm and arm26: no use
was being made of them.  But arm26 alone was actually using the freed, in the
way some others use need_flush: give it a need_flush.  arm26 seems to prefer
spaces to tabs here: respect that.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: tlb_is_full_mm was obscure
Hugh Dickins [Sun, 30 Oct 2005 01:16:02 +0000 (18:16 -0700)]
[PATCH] mm: tlb_is_full_mm was obscure

tlb_is_full_mm?  What does that mean?  The TLB is full?  No, it means that the
mm's last user has gone and the whole mm is being torn down.  And it's an
inline function because sparc64 uses a different (slightly better)
"tlb_frozen" name for the flag others call "fullmm".

And now the ptep_get_and_clear_full macro used in zap_pte_range refers
directly to tlb->fullmm, which would be wrong for sparc64.  Rather than
correct that, I'd prefer to scrap tlb_is_full_mm altogether, and change
sparc64 to just use the same poor name as everyone else - is that okay?

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: tlb_gather_mmu get_cpu_var
Hugh Dickins [Sun, 30 Oct 2005 01:16:01 +0000 (18:16 -0700)]
[PATCH] mm: tlb_gather_mmu get_cpu_var

tlb_gather_mmu dates from before kernel preemption was allowed, and uses
smp_processor_id or __get_cpu_var to find its per-cpu mmu_gather.  That works
because it's currently only called after getting page_table_lock, which is not
dropped until after the matching tlb_finish_mmu.  But don't rely on that, it
will soon change: now disable preemption internally by proper get_cpu_var in
tlb_gather_mmu, put_cpu_var in tlb_finish_mmu.
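
For illustration, the generic version takes roughly this shape (a simplified sketch, not the exact asm-generic code; mmu_gathers and tlb_flush_mmu are the usual per-cpu variable and flush helper assumed here):

    static inline struct mmu_gather *
    tlb_gather_mmu(struct mm_struct *mm, unsigned int full_mm_flush)
    {
            /*
             * get_cpu_var disables preemption, so the per-cpu gather
             * stays ours even if page_table_lock is not held throughout.
             */
            struct mmu_gather *tlb = &get_cpu_var(mmu_gathers);

            tlb->mm = mm;
            tlb->fullmm = full_mm_flush;
            return tlb;
    }

    static inline void
    tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
    {
            tlb_flush_mmu(tlb, start, end);
            /* matching put_cpu_var re-enables preemption */
            put_cpu_var(mmu_gathers);
    }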

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: move_page_tables by extents
Hugh Dickins [Sun, 30 Oct 2005 01:16:00 +0000 (18:16 -0700)]
[PATCH] mm: move_page_tables by extents

Speeding up mremap's moving of ptes has never been a priority, but the locking
will get more complicated shortly, and is already too baroque.

Scrap the current one-by-one moving, do an extent at a time: curtailed by end
of src and dst pmds (have to use PMD_SIZE: the way pmd_addr_end gets elided
doesn't match this usage), and by latency considerations.
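
For illustration, the extent computation looks roughly like this (a sketch; the helpers get_old_pmd, alloc_new_pmd and move_ptes, and the LATENCY_LIMIT cap, are names assumed here for illustration):

    for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
            cond_resched();
            /* curtail the extent at the end of the source pmd ... */
            next = (old_addr + PMD_SIZE) & PMD_MASK;
            if (next - 1 > old_end)
                    next = old_end;
            extent = next - old_addr;
            old_pmd = get_old_pmd(vma->vm_mm, old_addr);
            if (!old_pmd)
                    continue;
            new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
            if (!new_pmd)
                    break;
            /* ... at the end of the destination pmd, and at a latency cap */
            next = (new_addr + PMD_SIZE) & PMD_MASK;
            if (extent > next - new_addr)
                    extent = next - new_addr;
            if (extent > LATENCY_LIMIT)
                    extent = LATENCY_LIMIT;
            move_ptes(vma, old_pmd, old_addr, old_addr + extent,
                      new_vma, new_pmd, new_addr);
    }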

One nice property of the old method is lost: it never allocated a page table
unless absolutely necessary, so you could free empty page tables by mremapping
to and fro.  Whereas this way, it allocates a dst table wherever there was a
src table.  I keep diving in to reinstate the old behaviour, then come out
preferring not to clutter how it now is.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: page fault handlers tidyup
Hugh Dickins [Sun, 30 Oct 2005 01:15:59 +0000 (18:15 -0700)]
[PATCH] mm: page fault handlers tidyup

Impose a little more consistency on the page fault handlers do_wp_page,
do_swap_page, do_anonymous_page, do_no_page, do_file_page: why not pass their
arguments in the same order, called the same names?

break_cow is all very well, but what it did was inlined elsewhere: easier to
compare if it's brought back into do_wp_page.

do_file_page's fallback to do_no_page dates from a time when we were testing
pte_file by using it wherever possible: currently it's peculiar to nonlinear
vmas, so just check that.  BUG_ON if not?  Better not, it's probably page
table corruption, so just show the pte: hmm, there's a pte_ERROR macro, let's
use that for do_wp_page's invalid pfn too.

Hah!  Someone in the ppc64 world noticed pte_ERROR was unused so removed it:
restored (and say "pud" not "pmd" in its pud_ERROR).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: exit_mmap need not reset
Hugh Dickins [Sun, 30 Oct 2005 01:15:58 +0000 (18:15 -0700)]
[PATCH] mm: exit_mmap need not reset

exit_mmap resets various mm_struct fields, but the mm is well on its way out,
and none of those fields matter by this point.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: unlink_file_vma, remove_vma
Hugh Dickins [Sun, 30 Oct 2005 01:15:57 +0000 (18:15 -0700)]
[PATCH] mm: unlink_file_vma, remove_vma

Divide remove_vm_struct into two parts: first anon_vma_unlink plus
unlink_file_vma, to unlink the vma from the list and tree by which rmap or
vmtruncate might find it; then remove_vma to close, fput and free.

The intention here is to do the anon_vma_unlink and unlink_file_vma earlier,
in free_pgtables before freeing any page tables: so we can be sure that any
page tables traversed by rmap and vmtruncate are stable (and other, ordinary
cases are stabilized by holding mmap_sem).

This will be crucial to traversing pgd,pud,pmd without page_table_lock.  But
testing the split-out patch showed that lifting the page_table_lock is
symbiotically necessary to make this change - the lock ordering is wrong to
move those unlinks into free_pgtables while it's under ptlock.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: remove_vma_list consolidation
Hugh Dickins [Sun, 30 Oct 2005 01:15:56 +0000 (18:15 -0700)]
[PATCH] mm: remove_vma_list consolidation

unmap_vma doesn't amount to much, let's put it inside unmap_vma_list.  Except
it doesn't unmap anything, unmap_region just did the unmapping: rename it to
remove_vma_list.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: vm_stat_account unshackled
Hugh Dickins [Sun, 30 Oct 2005 01:15:56 +0000 (18:15 -0700)]
[PATCH] mm: vm_stat_account unshackled

The original vm_stat_account has fallen into disuse, with only one user, and
only one user of vm_stat_unaccount.  It's easier to keep track if we convert
them all to __vm_stat_account, then free it from its __shackles.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: anon is already wrprotected
Hugh Dickins [Sun, 30 Oct 2005 01:15:55 +0000 (18:15 -0700)]
[PATCH] mm: anon is already wrprotected

do_anonymous_page's pte_wrprotect causes some confusion: in such a case,
vm_page_prot must already be forcing COW, so must omit write permission, and
so the pte_wrprotect is redundant.  Replace it by a comment to that effect,
and reword the comment on unuse_pte which also caused confusion.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: zap_pte_range dont dirty anon
Hugh Dickins [Sun, 30 Oct 2005 01:15:54 +0000 (18:15 -0700)]
[PATCH] mm: zap_pte_range dont dirty anon

zap_pte_range already avoids wasting time on mark_page_accessed for anon pages:
it can also skip anon set_page_dirty - the page only needs to be marked dirty
if shared with another mm, but that will say pte_dirty too.
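
For illustration, the zap_pte_range hunk becomes roughly this (sketch; PageAnon, pte_dirty and pte_young are the usual tests assumed here):

    if (!PageAnon(page)) {
            if (pte_dirty(ptent))
                    set_page_dirty(page);
            if (pte_young(ptent))
                    mark_page_accessed(page);
    }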

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: msync_pte_range progress
Hugh Dickins [Sun, 30 Oct 2005 01:15:53 +0000 (18:15 -0700)]
[PATCH] mm: msync_pte_range progress

Use latency breaking in msync_pte_range like that in copy_pte_range, instead
of the ugly CONFIG_PREEMPT filemap_msync alternatives.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: copy_pte_range progress fix
Hugh Dickins [Sun, 30 Oct 2005 01:15:53 +0000 (18:15 -0700)]
[PATCH] mm: copy_pte_range progress fix

My latency breaking in copy_pte_range didn't work as intended: instead of
checking at regularish intervals, after the first interval it checked every
time around the loop, too impatient to be preempted.  Fix that.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] slab: add additional debugging to detect slabs from the wrong node
Christoph Lameter [Sun, 30 Oct 2005 01:15:52 +0000 (18:15 -0700)]
[PATCH] slab: add additional debugging to detect slabs from the wrong node

This patch adds some stack dumps if the slab logic is processing slab
blocks from the wrong node.  This is necessary in order to detect
situations as encountered by Petr.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] shrink_list(): skip anon pages if not may_swap
Lee Schermerhorn [Sun, 30 Oct 2005 01:15:51 +0000 (18:15 -0700)]
[PATCH] shrink_list(): skip anon pages if not may_swap

Martin Hicks' page cache reclaim patch added the 'may_swap' flag to the
scan_control struct; and modified shrink_list() not to add anon pages to
the swap cache if may_swap is not asserted.

Ref:  http://marc.theaimsgroup.com/?l=linux-mm&m=111461480725322&w=4

However, further down, if the page is mapped, shrink_list() calls
try_to_unmap() which will call try_to_unmap_one() via try_to_unmap_anon().
try_to_unmap_one() will BUG_ON() an anon page that is NOT in the swap
cache.  Martin says he never encountered this path in his testing, but
agrees that it might happen.

This patch modifies shrink_list() to skip anon pages that are not already
in the swap cache when !may_swap, rather than just not adding them to the
cache.
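
For illustration, the check becomes roughly this (a sketch; sc is the scan_control, and add_to_swap plus the keep_locked/activate_locked labels are assumed from the surrounding shrink_list code):

    if (PageAnon(page) && !PageSwapCache(page)) {
            if (!sc->may_swap)
                    goto keep_locked;
            if (!add_to_swap(page))
                    goto activate_locked;
    }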

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm/msync.c cleanup
OGAWA Hirofumi [Sun, 30 Oct 2005 01:15:50 +0000 (18:15 -0700)]
[PATCH] mm/msync.c cleanup

This is not actually a problem, but sync_page_range() is used as an exported
function for filesystems.

The msync_xxx naming is more readable, at least to me.

Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] Remove near all BUGs in mm/mempolicy.c
Andi Kleen [Sun, 30 Oct 2005 01:15:49 +0000 (18:15 -0700)]
[PATCH] Remove near all BUGs in mm/mempolicy.c

Most of them can never be triggered and were only for development.

Signed-off-by: "Andi Kleen" <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] Convert mempolicies to nodemask_t
Andi Kleen [Sun, 30 Oct 2005 01:15:48 +0000 (18:15 -0700)]
[PATCH] Convert mempolicies to nodemask_t

The NUMA policy code predated nodemask_t so it used open coded bitmaps.
Convert everything to nodemask_t.  Big patch, but shouldn't have any actual
behaviour changes (except I removed one unnecessary check against
node_online_map and one unnecessary BUG_ON)

Signed-off-by: "Andi Kleen" <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: set per-cpu-pages lower threshold to zero
Seth, Rohit [Sun, 30 Oct 2005 01:15:48 +0000 (18:15 -0700)]
[PATCH] mm: set per-cpu-pages lower threshold to zero

Set the low water mark for hot pages in pcp to zero.

(akpm: for the life of me I cannot remember why we created pcp->low.  Neither
can Martin and the changelog is silent.  Maybe it was just a brainfart, but I
have this feeling that there was a reason.  If not, we should remove the
fields completely.  We'll see.)

Signed-off-by: Rohit Seth <rohit.seth@intel.com>
Cc: <linux-mm@kvack.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] mm: page_alloc: increase size of per-cpu-pages
Seth, Rohit [Sun, 30 Oct 2005 01:15:47 +0000 (18:15 -0700)]
[PATCH] mm: page_alloc: increase size of per-cpu-pages

Increase the page allocator's per-cpu magazines from 1/4MB to 1/2MB.

Over 100+ runs for a workload, the difference in mean is about 2%.  The best
results for both are almost the same, though the maximum variation in results
with 1/2MB is only 2.2%, whereas with 1/4MB it is 12%.

Signed-off-by: Rohit Seth <rohit.seth@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] swaptoken tuning
Rik Van Riel [Sun, 30 Oct 2005 01:15:46 +0000 (18:15 -0700)]
[PATCH] swaptoken tuning

It turns out that the original swap token implementation, by Song Jiang, only
enforced the swap token while the task holding the token is handling a page
fault.  This patch approximates that, without adding an additional flag to the
mm_struct, by checking whether the mm->mmap_sem is held for reading, like the
page fault code does.

This patch has the effect of automatically, and gradually, disabling the
enforcement of the swap token when there is little or no paging going on, and
"turning up" the intensity of the swap token code the more the task holding
the token is thrashing.
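
For illustration of the idea only (not the actual diff; where the check sits, and the has_swap_token / SWAP_FAIL names, are assumptions here): the token holder's pages are left alone only while its mmap_sem is read-held, i.e. while it looks like it is handling a fault.

    /* sketch of the idea: enforce the token only while its holder
     * appears to be handling a fault (mmap_sem held for reading) */
    if (has_swap_token(mm) && mm != current->mm &&
        sem_is_read_locked(&mm->mmap_sem))
            return SWAP_FAIL;       /* leave this mm's page mapped for now */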

Thanks to Song Jiang for pointing out this aspect of the token based thrashing
control concept.

The new code shows a slight degradation over the old swap token code, but
still a big win over running without the swap token.

2.6.12+ swap token disabled

$ for i in `seq 10` ; do /usr/bin/time ./qsbench -n 30000000 -p 3 ; done
101.74user 23.13system 8:26.91elapsed 24%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (38597major+430315minor)pagefaults 0swaps
101.98user 24.91system 8:03.06elapsed 26%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (33939major+430457minor)pagefaults 0swaps
101.93user 22.12system 7:34.90elapsed 27%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (33166major+421267minor)pagefaults 0swaps
101.82user 22.38system 8:31.40elapsed 24%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (39338major+433262minor)pagefaults 0swaps

2.6.12+ swap token enabled, timeout 300 seconds

$ for i in `seq 4` ; do /usr/bin/time ./qsbench -n 30000000 -p 3 ; done
102.58user 16.08system 3:41.44elapsed 53%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (19707major+285786minor)pagefaults 0swaps
102.07user 19.56system 4:00.64elapsed 50%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (19012major+299259minor)pagefaults 0swaps
102.64user 18.25system 4:07.31elapsed 48%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (21990major+304831minor)pagefaults 0swaps
101.39user 19.41system 5:15.81elapsed 38%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (24850major+323321minor)pagefaults 0swaps

2.6.12+ with new swap token code, timeout 300 seconds

$ for i in `seq 4` ; do /usr/bin/time ./qsbench -n 30000000 -p 3 ; done
101.87user 24.66system 5:53.20elapsed 35%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (26848major+363497minor)pagefaults 0swaps
102.83user 19.95system 4:17.25elapsed 47%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (19946major+305722minor)pagefaults 0swaps
102.09user 19.46system 5:12.57elapsed 38%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (25461major+334994minor)pagefaults 0swaps
101.67user 20.61system 4:52.97elapsed 41%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (22190major+329508minor)pagefaults 0swaps

Signed-off-by: Rik Van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] add sem_is_read/write_locked()
Rik Van Riel [Sun, 30 Oct 2005 01:15:44 +0000 (18:15 -0700)]
[PATCH] add sem_is_read/write_locked()

Add sem_is_read/write_locked functions to the read/write semaphores, along the
same lines of the *_is_locked spinlock functions.  The swap token tuning patch
uses sem_is_read_locked; sem_is_write_locked is added for completeness.
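
For illustration, with the generic spinlock-based rwsem, whose ->activity field is positive while read-held and -1 while write-held (an assumption about that variant), the helpers reduce to roughly this (sketch):

    #define sem_is_read_locked(sem)         ((sem)->activity > 0)
    #define sem_is_write_locked(sem)        ((sem)->activity == -1)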

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] fix alpha breakage
Ivan Kokshaysky [Sun, 30 Oct 2005 01:15:43 +0000 (18:15 -0700)]
[PATCH] fix alpha breakage

barrier.h uses barrier() in the non-SMP case, but doesn't include compiler.h.

Cc: Al Viro <viro@ftp.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] TIMERS: add missing compensation for HZ == 250
YOSHIFUJI Hideaki [Sun, 30 Oct 2005 01:15:42 +0000 (18:15 -0700)]
[PATCH] TIMERS: add missing compensation for HZ == 250

Add missing compensation for (HZ == 250) != (1 << SHIFT_HZ) in
second_overflow().

Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years ago[PATCH] vmalloc_node
Christoph Lameter [Sun, 30 Oct 2005 01:15:41 +0000 (18:15 -0700)]
[PATCH] vmalloc_node

This patch adds

vmalloc_node(size, node) -> Allocate necessary memory on the specified node

and

get_vm_area_node(size, flags, node)

and the other functions that it depends on.
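
For illustration, the prototypes roughly as described above, plus a hypothetical caller (alloc_node_table is made up here) that keeps a per-node table on its home node:

    void *vmalloc_node(unsigned long size, int node);
    struct vm_struct *get_vm_area_node(unsigned long size,
                                       unsigned long flags, int node);

    /* hypothetical caller */
    static void *alloc_node_table(int node, unsigned long entries)
    {
            return vmalloc_node(entries * sizeof(long), node);
    }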

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
19 years agoMerge master.kernel.org:/home/rmk/linux-2.6-arm
Linus Torvalds [Sat, 29 Oct 2005 21:02:16 +0000 (14:02 -0700)]
Merge master.kernel.org:/home/rmk/linux-2.6-arm

19 years ago[ARM] 3061/1: cleanup the XIP link address mess
Nicolas Pitre [Sat, 29 Oct 2005 20:44:56 +0000 (21:44 +0100)]
[ARM] 3061/1: cleanup the XIP link address mess

Patch from Nicolas Pitre

Since vmlinux.lds.S is preprocessed, we can use the defines already
present in asm/memory.h (allowed by patch #3060) for the XIP kernel link
address instead of relying on a duplicated Makefile hardcoded value, and
also get rid of its dependency on awk to handle it at the same time.

While at it let's clean XIP stuff even further and make things clearer
in head.S with a nice code reduction.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
19 years ago[ARM] 3060/1: allow constants found in asm/memory.h to be used in asm code
Nicolas Pitre [Sat, 29 Oct 2005 20:44:55 +0000 (21:44 +0100)]
[ARM] 3060/1: allow constants found in asm/memory.h to be used in asm code

Patch from Nicolas Pitre

This patch allows for assorted type of cleanups by letting assembly code
use the same set of defines for constant values and avoid duplicated
definitions that might not always be in sync, or that might simply be
confusing due to the different names for the same thing.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
19 years agoMerge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus
Linus Torvalds [Sat, 29 Oct 2005 19:19:15 +0000 (12:19 -0700)]
Merge branch 'upstream' of git://ftp.linux-mips.org/upstream-linus

19 years agoUpdate MIPS defconfig files.
Ralf Baechle [Sat, 29 Oct 2005 18:32:54 +0000 (19:32 +0100)]
Update MIPS defconfig files.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years ago prom_free_prom_memory() returns unsigned long
Arthur Othieno [Fri, 28 Oct 2005 04:42:56 +0000 (00:42 -0400)]
prom_free_prom_memory() returns unsigned long

Some boards declare prom_free_prom_memory as a void function but the
caller free_initmem() expects a return value.

Fix those up and return 0 instead, just like everyone else does.
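
For illustration, the per-board fix amounts to roughly this (sketch):

    unsigned long __init prom_free_prom_memory(void)
    {
            /* nothing to free on this board */
            return 0;
    }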

Signed-off-by: Arthur Othieno <a.othieno@bluewin.ch>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoGet rid of SINGLE_ONLY_FPU. Linux does not support half FPU other than
Ralf Baechle [Sun, 23 Oct 2005 14:05:47 +0000 (15:05 +0100)]
Get rid of SINGLE_ONLY_FPU.  Linux does not support half FPU other than
by emulation of a full FPU.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoFix all the get_user / put_user related sparse warnings.
Ralf Baechle [Sun, 23 Oct 2005 12:58:21 +0000 (13:58 +0100)]
Fix all the get_user / put_user related sparse warnings.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDelete unused ieee754_cname[] and declaration.
Ralf Baechle [Sun, 23 Oct 2005 12:48:12 +0000 (13:48 +0100)]
Delete unused ieee754_cname[] and declaration.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoInclude for prototypes.
Ralf Baechle [Sun, 23 Oct 2005 12:46:25 +0000 (13:46 +0100)]
Include for prototypes.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoProtect against multiple inclusion.
Ralf Baechle [Sun, 23 Oct 2005 12:44:31 +0000 (13:44 +0100)]
Protect against multiple inclusion.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoRemove useless casts of kmalloc return values.
Ralf Baechle [Fri, 21 Oct 2005 21:26:07 +0000 (22:26 +0100)]
Remove useless casts of kmalloc return values.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoHack to resolve longstanding prefetch issue
Ralf Baechle [Thu, 20 Oct 2005 21:55:26 +0000 (22:55 +0100)]
Hack to resolve longstanding prefetch issue

Prefetching may be fatal on some systems if we're prefetching beyond the
end of memory.  It's also a seriously bad idea on non-dma-coherent systems.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoMore foolproofing of the CPU configuration.
Ralf Baechle [Thu, 20 Oct 2005 21:33:09 +0000 (22:33 +0100)]
More foolproofing of the CPU configuration.

Limit the number of cpu type options in the cpu menu to just those
types that are actually available for the selected platform.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agopci-expmem-hack
Andrew Isaacson [Thu, 20 Oct 2005 06:59:46 +0000 (23:59 -0700)]
pci-expmem-hack

CFE 1.2.5 and earlier fails to turn on the ExpMemEn bit in the
PCIFeatureControl register, which means that DMA does not work
beyond physical address 01_0000_0000, ergo to DRAM beyond 1GB.

With ExpMemEn turned on, 01_0000_0000-0f_ffff_ffff is mapped,
so DMA works for up to 61 GB of DRAM.

Will be fixed in CFE 1.2.6 (yet to be released).

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoBCM1480 HT support
Andrew Isaacson [Thu, 20 Oct 2005 06:59:11 +0000 (23:59 -0700)]
BCM1480 HT support

PCI support code for PLX 7250 PCI-X tunnel on BCM91480B BigSur board.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoSupport for the BCM1480 on-chip PCI-X bridge.
Andrew Isaacson [Thu, 20 Oct 2005 06:58:49 +0000 (23:58 -0700)]
Support for the BCM1480 on-chip PCI-X bridge.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoSB1 cache exception handling.
Andrew Isaacson [Thu, 20 Oct 2005 06:57:40 +0000 (23:57 -0700)]
SB1 cache exception handling.

Expand SB1 cache error handling by adding SB1_CEX_ALWAYS_FATAL and
SB1_CEX_STALL, allowing configurable behavior on cache errors.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoSupport for BigSur board.
Andrew Isaacson [Thu, 20 Oct 2005 06:57:11 +0000 (23:57 -0700)]
Support for BigSur board.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoAdd support for BCM1480 family of chips.
Andrew Isaacson [Thu, 20 Oct 2005 06:56:38 +0000 (23:56 -0700)]
Add support for BCM1480 family of chips.

 - Kconfig and Makefile changes
 - arch/mips/sibyte/bcm1480/
 - changes to sibyte common code to support 1480

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoAdd support for SB1A CPU.
Andrew Isaacson [Thu, 20 Oct 2005 06:56:20 +0000 (23:56 -0700)]
Add support for SB1A CPU.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoSibyte header cleanup
Andrew Isaacson [Thu, 20 Oct 2005 06:55:57 +0000 (23:55 -0700)]
Sibyte header cleanup

Update sibyte headers to match Broadcom internal copies:
 - comment cleanup and updates
 - fix LittleSur part number to match the board silkscreen

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoBCM1480 headers
Andrew Isaacson [Thu, 20 Oct 2005 06:55:11 +0000 (23:55 -0700)]
BCM1480 headers

Add header files for BCM1480/1280/1455/1255 family of chips, and
update sb1250 headers which are shared by BCM1480 family.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

19 years agoSibyte fixes
Andrew Isaacson [Thu, 20 Oct 2005 06:54:43 +0000 (23:54 -0700)]
Sibyte fixes

Fix typo in cpu_probe_sibyte.

Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoMake UL what should be UL.
Ralf Baechle [Wed, 19 Oct 2005 13:45:09 +0000 (14:45 +0100)]
Make UL what should be UL.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoFix zero length sys_cacheflush
Atsushi Nemoto [Wed, 19 Oct 2005 10:57:14 +0000 (19:57 +0900)]
Fix zero length sys_cacheflush

Cacheflush(0, 0, 0) was crashing the system.  This is because
flush_icache_range(start, end) tries to flush the whole address space
(0 - ~0UL) if both start and end are zero.
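
For illustration, the fix amounts to bailing out early; a sketch of sys_cacheflush, not necessarily the exact code (access_ok/VERIFY_WRITE are the usual checks assumed here):

    asmlinkage int sys_cacheflush(unsigned long addr, unsigned long bytes,
            unsigned int cache)
    {
            if (bytes == 0)
                    return 0;       /* avoid flush_icache_range(0, ~0UL) */
            if (!access_ok(VERIFY_WRITE, (void __user *) addr, bytes))
                    return -EFAULT;
            flush_icache_range(addr, addr + bytes);
            return 0;
    }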

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoGet 64-bit right in the kgdb stub.
Ralf Baechle [Tue, 18 Oct 2005 12:25:29 +0000 (13:25 +0100)]
Get 64-bit right in the kgdb stub.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoSys_lookup_dcookie arguments occupy 4 argument slots.
Ralf Baechle [Tue, 18 Oct 2005 11:48:31 +0000 (12:48 +0100)]
Sys_lookup_dcookie arguments occupy 4 argument slots.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoFPU emulator garbage collection.
Ralf Baechle [Tue, 18 Oct 2005 09:26:46 +0000 (10:26 +0100)]
FPU emulator garbage collection.

The first argument of fpu_emulator_cop1Handler() was unused.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDon't print file name and line in die and die_if_kernel.
Ralf Baechle [Thu, 13 Oct 2005 16:07:54 +0000 (17:07 +0100)]
Don't print file name and line in die and die_if_kernel.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoRename page argument of flush_cache_page to something more descriptive.
Ralf Baechle [Tue, 11 Oct 2005 23:02:34 +0000 (00:02 +0100)]
Rename page argument of flush_cache_page to something more descriptive.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDefine EOWNERDEAD and ENOTRECOVERABLE.
Ralf Baechle [Sun, 9 Oct 2005 17:56:01 +0000 (18:56 +0100)]
Define EOWNERDEAD and ENOTRECOVERABLE.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoSliceup Kconfig; it's grown too large.
Ralf Baechle [Sat, 29 Oct 2005 18:32:41 +0000 (19:32 +0100)]
Slice up Kconfig; it's grown too large.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoMore configcheck fixes.
Ralf Baechle [Sat, 29 Oct 2005 18:32:40 +0000 (19:32 +0100)]
More configcheck fixes.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years ago2.6.14-rc1 updates for MIPS compat types.
Ralf Baechle [Sat, 29 Oct 2005 18:32:40 +0000 (19:32 +0100)]
2.6.14-rc1 updates for MIPS compat types.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoComplete the fcntl.h cleanup.
Ralf Baechle [Sat, 29 Oct 2005 18:32:40 +0000 (19:32 +0100)]
Complete the fcntl.h cleanup.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoCleanup Sibyte Kconfig a bit further.
Ralf Baechle [Sat, 29 Oct 2005 18:32:39 +0000 (19:32 +0100)]
Cleanup Sibyte Kconfig a bit further.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDate: Fri Jan 14 03:03:23 2005 +0000
Ralf Baechle [Fri, 14 Jan 2005 03:03:23 +0000 (03:03 +0000)]
Date:   Fri Jan 14 03:03:23 2005 +0000

Locking cleanups.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoFix weirdness in <asm/bug.h>
Ralf Baechle [Sat, 29 Oct 2005 18:32:38 +0000 (19:32 +0100)]
Fix weirdness in <asm/bug.h>

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoFix wrong comment.
Ralf Baechle [Sat, 29 Oct 2005 18:32:38 +0000 (19:32 +0100)]
Fix wrong comment.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoFixup a few lose ends in explicit support for MIPS R1/R2.
Ralf Baechle [Fri, 7 Oct 2005 15:58:15 +0000 (16:58 +0100)]
Fixup a few loose ends in explicit support for MIPS R1/R2.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDocument the meaning of the CPU_MIPS32, CPU_MIPS64, CPU_MIPSR1 and
Ralf Baechle [Fri, 7 Oct 2005 11:06:12 +0000 (12:06 +0100)]
Document the meaning of the CPU_MIPS32, CPU_MIPS64, CPU_MIPSR1 and
CPU_MIPSR2.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoProtect manipulation of c0_status against preemption and multithreading.
Ralf Baechle [Thu, 6 Oct 2005 16:39:32 +0000 (17:39 +0100)]
Protect manipulation of c0_status against preemption and multithreading.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDetect 4KSD and treat it like 4KSc.
Ralf Baechle [Tue, 4 Oct 2005 14:01:26 +0000 (15:01 +0100)]
Detect 4KSD and treat it like 4KSc.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoWe're no longer hosted on oss for ages ...
Ralf Baechle [Tue, 4 Oct 2005 12:30:10 +0000 (13:30 +0100)]
We haven't been hosted on oss for ages ...

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoConvert the remaining SPIN_LOCK_UNLOCKED instances to DEFINE_SPINLOCK.
Ralf Baechle [Mon, 3 Oct 2005 12:41:19 +0000 (13:41 +0100)]
Convert the remaining SPIN_LOCK_UNLOCKED instances to DEFINE_SPINLOCK.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDefine and initialize kdb_lock using DEFINE_SPINLOCK.
Ralf Baechle [Mon, 3 Oct 2005 12:40:26 +0000 (13:40 +0100)]
Define and initialize kdb_lock using DEFINE_SPINLOCK.
Convert kgdb_cpulock into a raw_spinlock_t.

SPIN_LOCK_UNLOCKED is deprecated and its replacement DEFINE_SPINLOCK is
not suitable for arrays of spinlocks.
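
For illustration, the single-lock part of the conversion is just this (sketch):

    /* before: deprecated static initializer */
    static spinlock_t kdb_lock = SPIN_LOCK_UNLOCKED;

    /* after */
    static DEFINE_SPINLOCK(kdb_lock);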

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoMake kgdb_wait static.
Ralf Baechle [Mon, 3 Oct 2005 12:30:57 +0000 (13:30 +0100)]
Make kgdb_wait static.

Nothing outside gdb-stub.c uses kgdb_wait, so change its definition to
static.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDon't copy SB1 cache error handler to uncached memory.
Ralf Baechle [Sat, 1 Oct 2005 19:22:39 +0000 (20:22 +0100)]
Don't copy SB1 cache error handler to uncached memory.

This may have made sense on a paranoid day with pass 1 BCM1250 processors
that were throwing cache error exceptions left and right for no good
reason.  On modern silicon that hardly makes sense and the code has
become just an obscurity ...

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoProvide 64-bit address space definitions for the Sibyte SB1 CPU core.
Ralf Baechle [Sat, 1 Oct 2005 16:34:35 +0000 (17:34 +0100)]
Provide 64-bit address space definitions for the Sibyte SB1 CPU core.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoNo need to explicitly call __read_64bit_c0_split; __read_64bit_c0_register
Ralf Baechle [Sat, 1 Oct 2005 12:14:58 +0000 (13:14 +0100)]
No need to explicitly call __read_64bit_c0_split; __read_64bit_c0_register
will do that itself iff needed.  Fix format string.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoFix stale comment in c-sb1.c.
Andrew Isaacson [Wed, 22 Jun 2005 23:02:03 +0000 (16:02 -0700)]
Fix stale comment in c-sb1.c.

Signed-Off-By: Andrew Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoCleanup the mess in cpu_cache_init.
Ralf Baechle [Sat, 1 Oct 2005 12:06:32 +0000 (13:06 +0100)]
Cleanup the mess in cpu_cache_init.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoUse cpumask_t rather than hand-rolled bitmask code in sb1250_set_affinity.
Andrew Isaacson [Wed, 22 Jun 2005 23:01:09 +0000 (16:01 -0700)]
Use cpumask_t rather than hand-rolled bitmask code in sb1250_set_affinity.

Signed-Off-By: Andrew Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoUse R4000 TLB routines for SB1 also.
Ralf Baechle [Sat, 1 Oct 2005 10:14:17 +0000 (11:14 +0100)]
Use R4000 TLB routines for SB1 also.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoFix build error caused by missmatching duplicate declaration.
Ralf Baechle [Sat, 1 Oct 2005 09:17:54 +0000 (10:17 +0100)]
Fix build error caused by a mismatching duplicate declaration.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
19 years agoDon't call memset to clean irq_desc; these data fields have already
Ralf Baechle [Fri, 30 Sep 2005 23:03:42 +0000 (00:03 +0100)]
Don't call memset to clean irq_desc; these data fields have already
been initialized statically in kernel/irq/handle.c.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>